Exploring Algorithmic Bias in LLMs
A hands-on, research-focused course exploring how large language models (LLMs) can inherit unfair biases—and what you can do to detect and mitigate these issues. Participants will replicate real bias-auditing methods from studies like the “Silicon Ceiling,” run simple statistical tests to confirm if differences are random or meaningful, and build an interactive mini tool that reveals whether LLM responses change when certain variables (like names) are tweaked. By the end, you’ll understand why bias emerges, learn how to measure it, and deploy a small-scale “bias auditing” web app.
July 7 - 21 | July 28 - August 17
Approximately 7 -10 hours per week commitment, with live sessions and independent project work
.jpg)
Course
Weekly
Schedule
Tuesday
Live Lab
11:00am - 12:30pm
Wednesday
Office Hours
11:00am - 1:00pm
Friday
Demo Day
11:00am - 12:30pm
KEY HIGHLIGHTS
Students will design prompt variations (e.g., different résumés or user attributes) to reveal subtle biases and record results for comparison.
Practical Bias
Detection
At least half of each live session is devoted to building and testing code—whether collecting LLM outputs, analyzing them with simple stats, or refining a mini bias-audit app.
Hands-On Coding Emphasis
Through step-by-step guidance, you’ll create an AI bias-auditing tool—a small web application that compares AI responses across different demographics or prompts.
Research Design and Implementation
Use LTDScience to annotate real research (like The Silicon Ceiling), then switch to LTDCoding to implement what you learned and create web applications. This “Look → Think → Do” flow keeps reading short and coding central.
Look-Think-Do (LTD) Tools Integration
Set aside short discussion time for examining the broader social consequences of biased AI systems, ensuring you see not just the “how” but also the “why it matters.”
Ethical & Policy Reflections
No heavy math required. Learn to compute averages, standard deviations, and simple p-values so you can judge if an observed difference is real or likely random.
Statistics
Topics you'll cover
Understand how large language models can learn and reflect societal biases
Use our custom AI tools to support you as you read and replicate technical research
Run simple calculations (averages, standard deviations) to see if differences are random or consistent
Implement a research idea into a usable web application
Try different strategies for reducing the bias observed in LLMs
Reflect on policy-level solutions for safer AI use
Deal with unexpected outputs from API calls
Deploy a small but functional “bias-auditing” prototype—ready to share or expand
Course Delivery Method
3 Weekly Live Sessions
-
Live Labs: Collaborate in real time with instructors and classmates to build or improve mini-projects.
-
Demo Sessions: Share your creations, gather input, and learn from other students’ approaches.
-
Office Hours: Get one-on-one or small-group troubleshooting and feedback on your code.
Flexible Asynchronous Learning
Complete readings, quizzes, and coding exercises at your own pace.
Community Discussion Forum
Exchange ideas, celebrate milestones, and request peer support for tough coding challenges.
This course is for:
Anyone concerned about AI Ethics seeking practical ways to test and analyse bias in large language models.
Learning Outcomes
Conduct Systematic Bias Detection Research
Design and implement experiments to identify patterns of bias in large language models using methodologies from published studies.
Apply Data Analysis to Evaluate AI Fairness
Use statistical methods to determine whether observed differences in AI responses are statistically significant or random variations.
Develop Interactive Bias Visualization Tools
Build web applications that demonstrate how LLM outputs change when variables like demographics or names are modified.
Formulate Bias Mitigation Approaches
Explore and implement techniques to reduce unfair patterns in AI responses while considering broader ethical implications.
Why take this course?
AI bias affects hiring, healthcare, and everyday decision-making. Learn to detect and analyse bias with a hands-on, research-driven approach.
-
Heavily Hands-On – Each session dedicates at least half the time to coding labs.
-
Step-by-Step Guidance – Read real research in ltdScience, apply it in code, and get instructor support in live labs.
-
Future-Ready Skills – Build a concrete understanding of bias detection and its impact across industries.