
Fairness in Visual Recognition

Computer vision models trained on unprecedented amounts of data hold promise for making impartial, well-informed decisions in a variety of applications. However, more and more historical societal biases are making their way into these seemingly innocuous systems. We focus our attention on bias in the form of inappropriate correlations between visual protected attributes (age, gender expression, skin color, …) and the predictions of visual recognition models, as well as any unintended discrepancy in error rates of vision systems across different social, demographic, or cultural groups. In this talk, we’ll dive deeper into both the technical causes of and potential solutions to bias in computer vision. I’ll highlight our recent work addressing bias in visual datasets (FAT* 2020 http://image-net.org/filtering-and-balancing/; ECCV 2020 https://github.com/princetonvisualai/revise-tool), in visual models (CVPR 2020 https://arxiv.org/abs/1911.11834; under review https://arxiv.org/abs/2012.01469), as well as in the makeup of AI leadership (http://ai-4-all.org).

Bio: Dr. Olga Russakovsky is an Assistant Professor in the Computer Science Department at Princeton University. Her research is in computer vision, closely integrated with the fields of machine learning, human-computer interaction and fairness, accountability and transparency. She has been awarded the AnitaB.org’s Emerging Leader Abie Award in honor of Denice Denton in 2020, the CRA-WP Anita Borg Early Career Award in 2020, the MIT Technology Review’s 35-under-35 Innovator award in 2017, the PAMI Everingham Prize in 2016 and Foreign Policy Magazine’s 100 Leading Global Thinkers award in 2015. In addition to her research, she co-founded and continues to serve on the Board of Directors of the AI4ALL foundation dedicated to increasing diversity and inclusion in Artificial Intelligence (AI). She completed her PhD at Stanford University in 2015 and her postdoctoral fellowship at Carnegie Mellon University in 2017.

Part of the CDAC Winter 2021 Distinguished Speaker Series:

Bias Correction: Solutions for Socially Responsible Data Science

Security, privacy, and bias in the context of machine learning are often treated as binary issues, where an algorithm is either biased or fair, ethical or unjust. In reality, applying technology involves tradeoffs that can open up new privacy and security risks. Researchers are developing innovative tools that navigate these tradeoffs by applying advances in machine learning to societal issues without exacerbating bias or endangering privacy and security. The CDAC Winter 2021 Distinguished Speaker Series will host interdisciplinary researchers and thinkers exploring methods and applications that protect user privacy, prevent malicious use, and avoid deepening societal inequities — while diving into the human values and decisions that underpin these approaches.

Speakers

Olga Russakovsky

Assistant Professor in the Computer Science Department, Princeton University

Registration

Register
Add To Calendar 01/25/2021 03:00 PM 01/25/2021 04:00 PM CDAC Distinguished Speaker Series: Olga Russakovsky (Princeton) Zoom/YouTube false