Algorithm Audits & Bias

Auditing Automated Hiring Practices
Atcheson & Karahalios | CHI 2022

Presents techniques for auditing disability discrimination in algorithmic hiring processes.

Debiased Large Language Models Still Associate Muslims with Uniquely Violent Acts
Hemmatian & Varshney | arXiv 2022

Shows that religious bias persists in large language models (LLMs): even after debiasing, the models associate Muslims with uniquely violent acts.

Auditing Algorithms: Understanding Algorithmic Systems from the Outside In
Metaxa et al. | Foundations and Trends in Human–Computer Interaction 2021

Discusses how to ethically audit algorithmic systems from the outside.

Evaluation of crowdsourced mortality prediction models as a framework for assessing AI in medicine
Berquist & Schaffter et al. | medRxiv 2022

Examines racial and gender bias in medical AI, using crowdsourced mortality prediction models as an assessment framework.

Auditing Race and Gender Discrimination in Online Housing Markets
Asplund et al. | ICWSM 2020

An audit technique that detected race and gender discrimination on online housing sites and in online housing advertisements.

Mind Your Inflections! Improving NLP for Non-Standard English with Base-Inflection Encoding
Tan et al. | EMNLP 2020

An encoding technique motivated by the inability of NLP systems to handle World Englishes such as African American Vernacular English.

Quantifying Voter Biases in Online Platforms: An Instrumental Variable Approach
Dev et al. | CSCW 2019

Investigates which audit techniques overestimate and which underestimate voter biases (reputation, position, and social influence) on online platforms.

Pretrained AI Models: Performativity, Mobility, and Change
Varshney et al. | arXiv 2019

Discusses harms from pretrained models and suggestions for governing them.

Interacting with Imperfect AI Systems

Inform the Uninformed: Improving Online Informed Consent Reading with an AI-Powered Chatbot
Xiao et al. | CHI 2023

Evaluates the use of a chatbot in assisting humans during the informed consent process.

Attitudes Surrounding an Imperfect AI Autograder
Hsu & Li et al. | CHI 2021

Examines how students perceive and respond to an imperfect AI autograder.

“We Just Use What They Give Us”: Understanding Passenger User Perspectives in Smart Homes
Koshy et al. | CHI 2021

Examines how power dynamics in the home (the pilot of smart algorithmic devices vs. the passenger user) and lack of control impact perceptions and use.

Awareness, Navigation, and Use of Feed Control Settings Online
Hsu et al. | CHI 2020

An analysis showing that people are unaware of existing feed control settings on social media sites, cannot accomplish their goals with the controls that exist, and misinterpret the messaging in control descriptions. These outcomes, including those affecting safety, are more pronounced with machine-learning-backed settings.

A Slow Algorithm Improves Users’ Assessments of the Algorithm’s Accuracy
Park et al. | CSCW 2019

Slowing down the delivery of algorithmic outputs improves users’ assessments of the algorithm’s accuracy, even when the algorithm is flawed.

User Attitudes towards Algorithmic Opacity and Transparency in Online Reviewing Platforms
Eslami et al. | CHI 2019

With transparency, users begin “writing” for the algorithm instead of for themselves.

Safety in the Face of Unknown Unknowns: Algorithm Fusion in Data-Driven Engineering Systems
Kshetry & Varshney | ICASSP 2019

AI safety for cyber-physical systems with limited training data, such as wastewater treatment plants.

The Illusion of Control: Placebo Effects of Control Settings
Vaccaro et al. | CHI 2018

An analysis showing that people attribute similar power to flawed and functioning control settings.

Communicating Algorithmic Process to a General Audience

Communicating Algorithmic Process in Online Behavioral Advertising
Eslami et al. | CHI 2018

A workflow that educated people about advertising’s influence on their social media experience.

Contesting Algorithmic & Data Processes

Contestability For Content Moderation
Vaccaro et al. | CSCW 2021

The creation of processes that allow people to contest algorithmic decisions, specifically for marginalized populations.

“At the End of the Day Facebook Does What It Wants”: How Users Experience Contesting Algorithmic Content Moderation
Vaccaro et al. | CSCW 2020

Applies appeals processes featuring different fairness techniques to study how users experience contesting algorithmic content moderation.

No: Critical Refusal as Feminist Data Practice
Garcia et al. | CSCW 2020

Approaches to mitigate harmful data practices that result in structural inequities compounded by the intersections of one’s gender, race, ethnicity, class, sexuality, ability, and citizenship.

Community Events to Educate around AI Harms

Just Infrastructures Speaker Series
Karahalios, Chan & Gupta | 2021–2022

A year-long speaker series (12 speakers) to interrogate AI and data harms so that we can create Just Infrastructures. Topics included media manipulation, online community algorithm governance, the law and algorithms, fair allocation of voting boxes, surveillance capitalism, and more.

Digital Contact Tracing Systems and Vulnerable Populations
Visweswaran & Karahalios | 2021

A joint Rice University–University of Illinois community workshop that included public health directors, local community organizers, and researchers from Champaign County, IL and Harris County, TX to discuss inequities in health access, treatment, and outcomes around COVID-19 algorithmic processes (including COVID-19 tracking apps). Public health departments reported AI needs different from what was offered by existing (sometimes called predatory) apps.

Contestability in Algorithmic Systems
Vaccaro, Karahalios, Mulligan, Kluttz & Hirsch | CSCW 2019

An international workshop that attracted government employees, researchers, and practitioners around methods to contest & appeal algorithmic decisions. Domains included parking, access to health treatment, health insurance, disaster relief drones, and more.

Designing for Values, Interactivity, Contestability, & Ethics in Systems (DeVICES)
Hoogs, Mulligan, Getoor, Howard & Karahalios | DARPA ISAT 2019

A workshop that brought together researchers, ethicists, and practitioners to create ethical AI workflows. One case study centered around automated disaster relief.

The Algorithm and the User: How Can HCI Use Lay Understandings of Algorithmic Systems?
DeVito, Hancock, French, Antin, Karahalios & Tong | CHI 2018

A workshop to share approaches around communicating algorithmic processes to everyday users.