Recommended Paper: "Gender Biases in LLM-Generated Reference Letters"

- Critically examines gender biases in reference letters generated by LLMs like GPT.
- The authors found that LLMs often use different descriptive terms based on gender, for example describing female candidates as "warm" while calling male candidates "role models".
- Uses social science-inspired evaluation methods to track bias propagation across language style and lexical content.

Resources: Read the Full Paper (PDF) | Watch the Presentation (243.mp4) (Direct Video Link) | Other Related Papers (Index 243)
Related Paper: "How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models" – a study comparing pretrained multilingual models against monolingual ones.
This 2023 paper by Wan et al. investigates how large language models (LLMs) may perpetuate social biases when writing recommendation letters. It is highly regarded for its systematic approach to examining language style and lexical content.
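The lexical-content side of such an analysis can be illustrated with a toy sketch. This is not the authors' code: the example letters, the descriptor word lists, and the helper name `descriptor_counts` are all invented here for demonstration, whereas the paper analyzes LLM-generated letters at scale with social-science-derived lexica.

```python
from collections import Counter
import re

# Hypothetical letters standing in for LLM output (illustrative only).
letters = {
    "female": "She is a warm and caring colleague who communicates well.",
    "male": "He is a brilliant leader and a true role model for the team.",
}

# Toy descriptor lexica, loosely following the communal/agentic distinction
# from social psychology; these are not the paper's actual word lists.
communal = {"warm", "caring", "kind", "communicates"}
agentic = {"brilliant", "leader", "role", "model", "confident"}

def descriptor_counts(text):
    """Count communal and agentic descriptors in one letter."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(tokens)
    return (
        sum(counts[w] for w in communal),
        sum(counts[w] for w in agentic),
    )

for gender, letter in letters.items():
    comm, agen = descriptor_counts(letter)
    print(f"{gender}: communal={comm}, agentic={agen}")
```

A skew in these counts across genders, on otherwise comparable prompts, is the kind of lexical-content signal the paper's evaluation is designed to surface.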