
Responsible AI

DSC180-A10, Data science capstone: Responsible AI

Q1 Fall 2024 Classes held on Zoom Tuesdays at 2PM PT
Q2 Winter 2025 Check-ins held on Zoom TBD

Instructor for 2024-25: Emily Ramond (course point of contact)
Additional Instructors: Greg Thein, Ryan Cummings, Stephanie Chavez

Previously Developed by Nandita Rahman, Meira Gilbert, Aritra Nath, Emma Harvey, Jeffry Liu, David Danks and Rasmus Nielsen



Introduction

The explosive growth of research on ethical and trustworthy AI (Artificial Intelligence) over the last decade reflects rising societal awareness of, and a desire to address, the potential and actual harms of AI. We introduce students to the socio-technical risks surrounding AI fairness and explainability by examining several high-profile algorithmic discrimination debates that have sparked seminal research in this field. We will discuss the range of competing fairness definitions, the perspectives they represent, the metrics used to measure them, and the papers that establish the mathematically irreconcilable relationships among them. Concepts around AI fairness will be translated into real-world applications through classroom debates from the perspectives of different disciplines and stakeholders, and by replicating the analyses from a relevant use-case example. Students will conduct an independent fairness and explainability analysis on a mock model using industry-recognized toolkits, with an emphasis on understanding how issues that originate in code and math ultimately affect real human beings and society as a whole.
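
To make the tension between fairness definitions concrete, the short sketch below computes three common group-fairness metrics (statistical parity, equal opportunity, and predictive parity) for a hypothetical binary classifier. The confusion-matrix counts are invented for illustration; the point is simply that closing the gap on one metric does not, in general, close it on the others.

```python
# Illustrative only: the per-group confusion-matrix counts are invented.

def rates(tp, fp, fn, tn):
    """Selection rate, true positive rate, and positive predictive value."""
    selected = tp + fp
    total = tp + fp + fn + tn
    return {
        "selection_rate": selected / total,         # P(Y_hat = 1)
        "tpr": tp / (tp + fn),                      # P(Y_hat = 1 | Y = 1)
        "ppv": tp / selected if selected else 0.0,  # P(Y = 1 | Y_hat = 1)
    }

# Two demographic groups scored by the same hypothetical classifier.
group_a = rates(tp=40, fp=10, fn=20, tn=130)
group_b = rates(tp=25, fp=15, fn=10, tn=150)

print("statistical parity difference:", group_a["selection_rate"] - group_b["selection_rate"])
print("equal opportunity difference: ", group_a["tpr"] - group_b["tpr"])
print("predictive parity difference: ", group_a["ppv"] - group_b["ppv"])
```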

In this domain, project proposals will be focused on the following areas:

  • What does AI trustworthiness and ethics mean for different stakeholders?
  • What are the ways to think about whether or not a model is fair?
  • What are the risks of ‘black box’ algorithms, and how do we mitigate them? How is AI explainability related to fairness? (A brief explainability sketch follows this list.)
  • How do inherent fairness problems in AI models affect human beings?
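
As a starting point for the ‘black box’ question above, the sketch below runs one simple, model-agnostic explainability probe: permutation importance, which measures how much shuffling each feature degrades a trained model's accuracy. The model and data are synthetic and purely illustrative; richer attribution toolkits (e.g., SHAP or AI Explainability 360) follow the same audit-the-model-from-outside idea.

```python
# Illustrative sketch: probing a "black box" classifier with permutation
# importance (scikit-learn). The synthetic outcome depends mainly on
# features 0 and 1, so those should surface as the most important.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.2 * rng.normal(size=600) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record the drop in test accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for i, (mean, std) in enumerate(zip(result.importances_mean, result.importances_std)):
    print(f"feature {i}: importance {mean:.3f} +/- {std:.3f}")
```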

Instructors: Emily Ramond and Greg Thein

About: Emily completed her undergraduate studies at HDSI in 2022, where she was an active member of Marshall College. Her capstone project centered on causal inference. After graduating, Emily joined Deloitte as a Business Technology Analyst, where she has worked across data analytics, machine learning, and engineering for a wide array of clients. Beyond academic and professional pursuits, Emily loves crocheting, travel, snowboarding, and fostering cats. Drawing inspiration from her coursework at Marshall College, Emily is passionate about ethical artificial intelligence and committed to prioritizing fairness, transparency, and accountability. She is driven by her interest in leveraging the power of data science for the betterment of the world.

About: Greg completed his undergraduate studies at HDSI in 2021, where he was an active member of the ERC community. His capstone project centered on Alzheimer’s gene analysis. After graduating, Greg joined Deloitte as a Business Technology Analyst, where he engages in diverse tasks encompassing data management, analytics, and dashboarding for various clients. In his free time, Greg loves to travel, explore new restaurants and bakeries, and play sports and work out (tennis, swimming, and snowboarding). As the AI space grows and evolves, Greg is passionate about ensuring that products and models are built with ethical considerations in mind, allowing for greater data-driven and technological integration within society.

Mentoring Style: The capstone program is built on active participation from all students. The mentors will provide overall guidance, and a high level of student independence is required. Highlights:

  • Understand the implications of the impossibility theorem for organizations employing AI
  • Develop ethical AI models considering data-specific issues and fairness metrics
  • Explore pre, in, and postprocessing techniques for mitigating fairness issues
  • Analyze the impact of non-technical considerations on the ethical impacts of AI
  • Investigate ethical considerations across different industries and AI techniques
  • Examine the perspectives of stakeholders and the implications of false classifications
  • Utilize the AI Fairness 360 toolkit and Medical Expenditure (MEPS) data for practical projects (see the toolkit sketch after this list)
  • Gain insights into data science project management and collaboration within AI teams
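
The AI Fairness 360 item above gets a minimal sketch below: wrap a labeled dataset, measure a group-fairness metric, then apply the Reweighing pre-processing mitigation. The tiny DataFrame and its column names are invented for illustration; in the actual project the same steps would run against the Medical Expenditure (MEPS) data, which the toolkit can load after a separate data download.

```python
# Minimal sketch of an AI Fairness 360 workflow: wrap a labeled dataset,
# measure a group-fairness metric, then apply the Reweighing pre-processing
# mitigation. The tiny DataFrame below is synthetic; in the course project
# the same steps would run on the Medical Expenditure (MEPS) data instead.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy data: 'group' is the protected attribute (1 = privileged group),
# 'label' is the favorable/unfavorable outcome.
df = pd.DataFrame({
    "group":   [1, 1, 1, 1, 0, 0, 0, 0],
    "feature": [3.0, 2.5, 1.0, 4.0, 2.0, 1.5, 0.5, 3.5],
    "label":   [1, 1, 0, 1, 0, 1, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["group"],
    favorable_label=1.0,
    unfavorable_label=0.0,
)

privileged = [{"group": 1}]
unprivileged = [{"group": 0}]

metric = BinaryLabelDatasetMetric(
    dataset, privileged_groups=privileged, unprivileged_groups=unprivileged
)
print("disparate impact before mitigation:", metric.disparate_impact())

# Reweighing assigns instance weights so that the protected attribute and
# the label become statistically independent in the weighted data.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
reweighted = rw.fit_transform(dataset)

metric_rw = BinaryLabelDatasetMetric(
    reweighted, privileged_groups=privileged, unprivileged_groups=unprivileged
)
print("disparate impact after reweighing:", metric_rw.disparate_impact())
```

Reweighing only adjusts instance weights, which is why it counts as a pre-processing technique; in-processing and post-processing mitigations instead intervene during model training or on model outputs, respectively.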

Industry Partner: Deloitte, Trustworthy AI Team

As one of the largest professional services organizations in the United States, Deloitte provides a vast array of information security services across 2,800 engagements in major commercial industries and 15 cabinet-level federal agencies. Our Trustworthy AI team has helped many of our clients work through the burgeoning regulatory landscape and growing awareness around ethical and trustworthy AI. For this course, the Deloitte team will consist of Emily Ramond, Ryan Cummings, Stephanie Chavez and Greg Thein. We’re excited to work with UCSD in developing this course, and we look forward to discussing these exciting topics with you.

Course Resources

Office Hours

  • Deloitte will hold office hours: Mondays 4:00-4:30pm and Fridays 12:30-1:00pm Pacific Time - Zoom Link

Course Communications

Please send any questions to the Responsible AI Discord server:

  • Discord server: hm2hndFgTf
  • Primary course contact: Emily Ramond (eramond@deloitte.com)
  • For private or personal questions, you can reach out to Emily privately via email or Discord

Course Expectations and Assignments

Quarter One Project (65%)

  • Introduce students to the area in which they will do their project by replicating a known result.
  • Students will complete coding tasks related to the replication project and are also responsible for a final writeup.
  • Create written material and code that serve as a foundation for the Quarter 2 project.
  • Full details of the requirements for the Q1 project can be found in the Capstone Program Syllabus

Quarter Two Project Proposal (15%)

  • Students will develop a project proposal for Q2 based on their learnings and interests from the course readings and the replication project
  • Full details of the requirements for the project proposal can be found in the Capstone Program Syllabus

Participation Credit

Students are responsible for completing the readings in full before the start of each week’s session to facilitate productive class discussion. All readings will be freely available and linked on the course website. Participation in the weekly discussion section is mandatory. Each week, you are responsible for completing the reading or task assigned in the schedule; come to section prepared to ask questions about and discuss the results of these tasks.

Weekly Participation Questions: Writing Prompt (5%)

  • By default, participation questions are due 24 hours before class (the Monday prior, 2PM PT). Please submit these to Gradescope.

Overall Class Participation: In-class Brief (5%)

Each student is responsible for preparing one five-minute in-class brief on one of the academic papers assigned as readings.

  • Following the first session, students will have the opportunity to sign up to present on one of the course readings. Students are responsible for creating a PowerPoint presentation summarizing the reading, including its background, methodology, argument/key contributions, and their thoughts on the implications/impact of the article.
  • Reading presentations do not need to be submitted and you will not be graded on the slides themselves. Presentations should take five minutes.

Grading

Please see the Capstone Program Syllabus for a detailed description of the assignment weights and rubric.


Schedule

Week   Date       Topic
2      10/06/23   Introduction to Trustworthy AI
3      10/13/23   A Multi-Stakeholder Perspective on Ethical AI
4      10/20/23   Replication Project Part 0: Introduction
5      10/27/23   Replication Project Part 1: EDA, Running Data Science Teams
6      11/03/23   AI Regulations
7      11/07/23   Fairness Metrics / Veterans Day
8      11/17/23   Bias Mitigation
9      No class   Thanksgiving Holiday - OH on Discord
10     12/01/23   Capstone Planning and Techno-Solutionism
11     12/08/23   Last Week of Class - Q1 Presentations