Q1 Fall 2024 | Classes held on Zoom | Tuesdays at 2 PM PT
Q2 Winter 2025 | Check-ins held on Zoom | TBD
Instructor for 2024-25: Emily Ramond (Course Point of Contact)
Additional Instructors: Greg Thein, Ryan Cummings, Stephanie Chavez
Previously Developed by Nandita Rahman, Meira Gilbert, Aritra Nath, Emma Harvey, Jeffry Liu, David Danks and Rasmus Nielsen
The explosive proliferation of research on ethical AI (Artificial Intelligence) and AI trustworthiness during the last decade reflects a rising societal awareness of, and desire to address, the potential and actual harmful impacts of AI. We introduce students to the socio-technical risks of AI fairness and explainability by examining several high-profile algorithmic discrimination debates that have sparked seminal research in this field. We will discuss the range of alternative fairness definitions, the perspectives they represent, the metrics used to examine AI fairness, and the papers that describe the interesting yet mathematically irreconcilable relationships among them. Concepts around AI fairness will be translated into real-world applications through classroom debates from the perspectives of different disciplines and stakeholders, and by replicating the analyses from a relevant use case. Students will conduct an independent fairness and explainability analysis on a mock model using industry-recognized toolkits, with an emphasis on understanding how issues that originate in code and math ultimately affect real human beings and society as a whole.
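To give a concrete flavor of the toolkit-based analysis described above, the sketch below computes one common group-fairness metric, demographic parity difference, on mock predictions. The choice of the fairlearn package, and all data and variable names, are illustrative assumptions rather than part of the course materials.

```python
# Illustrative sketch only: the fairlearn package and the mock data below are
# assumptions for demonstration, not a required part of the course.
import numpy as np
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, demographic_parity_difference

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)        # mock ground-truth labels
y_pred = rng.integers(0, 2, size=1000)        # mock model predictions
group = rng.choice(["A", "B"], size=1000)     # mock sensitive attribute

# Demographic parity difference: gap in selection rates between groups
# (0.0 means the model selects both groups at the same rate).
dpd = demographic_parity_difference(y_true, y_pred, sensitive_features=group)
print(f"Demographic parity difference: {dpd:.3f}")

# Per-group accuracy, showing how an aggregate score can hide group-level disparities.
mf = MetricFrame(metrics=accuracy_score, y_true=y_true, y_pred=y_pred,
                 sensitive_features=group)
print(mf.by_group)
```

Computing several such metrics on the same predictions is one concrete way to see why the alternative fairness definitions discussed in the readings cannot, in general, all be satisfied at once.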
In this domain, project proposals will be focused on the following areas:
About: Emily completed her undergraduate studies at HDSI in 2022, where she was an active member of Marshall College. Her capstone project centered on causal inference. Post-graduation, Emily joined Deloitte as a Business Technology Analyst, where she has worked on data analytics, machine learning, and engineering tasks for a wide array of clients. Beyond her academic and professional pursuits, Emily loves crocheting, traveling, snowboarding, and fostering cats. Drawing inspiration from her coursework at Marshall College, Emily is passionate about ethical artificial intelligence and committed to prioritizing fairness, transparency, and accountability. She is driven by her interest in leveraging the power of data science for the betterment of the world.
About: Greg completed his undergraduate studies at HDSI in 2021, where he was an active member of the ERC community. His capstone project centered on Alzheimer’s gene analysis. After graduating, Greg joined Deloitte as a Business Technology Analyst, where he works on data management, analytics, and dashboarding tasks for a variety of clients. In his free time, Greg loves to travel, explore new restaurants and bakeries, play sports, and work out (tennis, swimming, and snowboarding). As the AI space grows and evolves, Greg is passionate about ensuring that products and models are built with ethical considerations in mind, allowing for greater data-driven and technological integration within society.
Mentoring Style: The capstone program is based on active participation from all students. The mentors will provide overall guidance, and a high level of student independence is required. Highlights:
As one of the largest professional services organizations in the United States, Deloitte provides a vast array of information security services across 2,800 engagements in major commercial industries and 15 cabinet-level federal agencies. Our Trustworthy AI team has helped many of our clients work through the burgeoning regulatory landscape and growing awareness around ethical and trustworthy AI. For this course, the Deloitte team will consist of Emily Ramond, Ryan Cummings, Stephanie Chavez and Greg Thein. We’re excited to work with UCSD in developing this course, and we look forward to discussing these exciting topics with you.
Please send any questions to the Responsible AI Discord server:
Students are responsible for completing the readings in full prior to the start of each week’s session in order to facilitate productive class discussion. All readings will be freely available and linked on the course website. Participation in the weekly discussion section is mandatory. Each week, you are responsible for completing the reading/task assigned in the schedule. Come to section prepared to ask questions about and discuss the results of these tasks.
Each student is responsible for preparing one five-minute in-class brief on one of the academic papers assigned as readings.
Please see the Capstone Program Syllabus for a detailed description of the assignment weights and rubric.
Week | Date | Topic
---|---|---
2 | 10/06/23 | Introduction to Trustworthy AI
3 | 10/13/23 | A Multi-Stakeholder Perspective on Ethical AI
4 | 10/20/23 | Replication Project Part 0: Introduction
5 | 10/27/23 | Replication Project Part 1: EDA, Running Data Science Teams
6 | 11/03/23 | AI Regulations
7 | 11/07/23 | Fairness Metrics / Veterans Day
8 | 11/17/23 | Bias Mitigation
9 | No class | Thanksgiving Holiday - OH on Discord
10 | 12/01/23 | Capstone Planning and Techno-Solutionism
11 | 12/08/23 | Last Week of Class - Q1 Presentations