Artificial Intelligence (AI) vs. Difference
Guest article by Jutta Treviranus

This article is written by Jutta Treviranus. Jutta is an AI ethics expert. She discusses how artificial intelligence (AI) can make existing social inequalities worse. AI can cause more unfairness in housing, education, and healthcare. AI favors the average or typical. This can lead to less support for people who are disabled. Jutta explains how AI’s focus on statistics can ignore the needs of individuals who don’t fit the norm. She also notes that AI can be very helpful to disabled people. It can help make things accessible. The author is working on ways to make AI more inclusive and supportive of differences. This is a hard task requiring broad community support.

A graphic representing people’s needs plotted on a data scatterplot. The graph looks like a starburst. The graph is divided into concentric circles like a bullseye pattern. The middle or average needs are labelled “highly accurate,” the area further from the middle is labelled “inaccurate” and the area at the edge is labelled “wrong.”

[Editor’s Note: This article is reprinted with the author’s permission from the July-August 2024 edition of the “We Count Recount,” the bi-monthly newsletter of We Count. We Count is a project of the Inclusive Design Research Centre (IDRC) at OCAD University in Toronto.

IDRC is “an open global community working together to proactively ensure that emerging systems and practices are designed inclusively.” We Count was created to address bias, discrimination and barriers to participation and employment for persons with disabilities within the field of data science and data-driven systems.

The author of this piece is Jutta Treviranus, the Director and Founder of the Inclusive Design Research Centre and the Principal Researcher of We Count. Jutta has been recognized by Women in AI with the 2022 AI for Good – DEI AI Leader of the Year award and by Women in AI Ethics as one of 100 Brilliant Women in AI Ethics™ – 2024. She is also chair of the Canadian Government’s Accessible and Equitable Artificial Intelligence standards committee.]

Jutta’s August 2024 message to the We Count Community

I know I have focused on AI to the exclusion of other important issues in my recent messages. My reason is that the current design of AI is poised to make every other harm even worse for people who are already vulnerable and every other benefit better for people who are already doing well.

In that sense AI is like a magnifying mirror of our current society, exaggerating existing patterns of disparity. If we proceed with all the hyped applications of AI, inequities in housing, employment, education, wealth, healthcare, political platform priorities, news coverage, probationary policies, government budgeting, research allocations and many other domains will be amplified, accelerated and automated.

In stark terms, current AI systems are propagating a very seductive form of digital eugenics, eliminating difference and promoting “normality.” While there is a growing distrust of AI by many consumers, people are not aware of all the critical decisions we have already turned over to AI. When the world is unpredictable and chaotic, and changes happen faster than we can process them, people yearn for stability, “normality” and predictability. AI satisfies this yearning.

At its most basic, AI is a high-powered statistical reasoning machine, efficient at finding and reproducing the typical, popular, normative, predictable or statistically average. The image above demonstrates the resulting pattern.

In the image, people’s needs are plotted on a data scatterplot. The graph looks like a starburst. It shows three concentric circles arranged in a bullseye pattern, with people’s needs appearing as dots throughout all the circles. The average population needs are clustered tightly in the middle circle (the bullseye).

Any conclusions drawn through the statistical analysis that is the basis of AI hold only for the average or mean of a data set. That’s why the bullseye circle full of dots is labelled “highly accurate.” But in a population data set, the needs of people with disabilities are always at the margins: not bunched together, but scattered further and further apart as you leave the middle. This is true even when disabled people are included in the data. The further you are from that middle bullseye, the more inaccurate, and eventually outright wrong, the results become.
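
To see this in miniature, consider the small sketch below. The data, the model and the numbers are all invented for illustration; nothing here comes from our research. A model trained on a population that clusters around a mean is accurate in the dense middle and drifts toward wrong at the sparse edges, even though the edge cases are present in the training data.

```python
# A minimal synthetic sketch of the bullseye pattern (invented data):
# the model is most accurate where the population clusters, and
# increasingly wrong at the sparse margins, even though people at the
# margins are included in the training data.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)

# Most simulated "needs" sit near the mean; a minority scatter far out.
x = rng.normal(0.0, 1.0, size=5_000)
true_need = np.sin(2 * x) + 0.5 * x                # needs vary across the range
y = true_need + rng.normal(0.0, 0.1, size=x.size)  # noisy observations

# Nearest-neighbour regression leans on nearby data points, so its
# accuracy tracks how densely each region is represented.
model = KNeighborsRegressor(n_neighbors=50).fit(x.reshape(-1, 1), y)
error = np.abs(model.predict(x.reshape(-1, 1)) - true_need)

# Average error by distance from the statistical middle (the bullseye).
bands = [(0, 1, "middle (bullseye)"), (1, 2, "further out"), (2, np.inf, "edge")]
for lo, hi, label in bands:
    in_band = (np.abs(x) >= lo) & (np.abs(x) < hi)
    print(f"{label:>17}: mean absolute error = {error[in_band].mean():.3f}")
```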

Lately I’ve been looking at the effect on education. Think about how seductive AI’s promise is for a teenager with learning differences who just wants to fit in. ChatGPT can make you sound and look “normal.” What happens to the class’s comfort with and understanding of difference when there are tools to fix the quirks (for those who can afford and access them)?

My distress is intensified by the emerging AI Ethics industry and by the policies intended to provide protection against the harms of AI. The focus has been on the lack of fair representation of disabled people in the data used to train AI. This is a valid concern, but it also feeds AI’s voracious hunger for data.

I’m distressed because none of the protections address the impact of statistical reasoning on disabled people. Even if we have full and fair proportional representation of disabled people in the training data, the machine will still side with the statistically average. Disabled people, by virtue of their difference (from the average and from each other), have no statistical power. In impact and risk-benefit assessments that promise to protect us, the harms to disabled people will be dismissed as anecdotal and insignificant when weighed against the benefits to the majority.
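
A toy calculation, with invented numbers, makes the arithmetic of this dismissal plain: an aggregate benefit score can look excellent while the minority with different needs is failed almost completely.

```python
# A toy risk-benefit calculation with invented numbers: when a system
# serves only the statistically typical need, the aggregate benefit
# score looks excellent while the minority with different needs is
# failed almost invisibly.
import numpy as np

n_majority, n_minority = 9_500, 500

# 0 = the typical need, 1 = a different need.
needs = np.concatenate([np.zeros(n_majority), np.ones(n_minority)])

# A "side with the average" system serves the typical need to everyone.
decisions = np.zeros_like(needs)
served = decisions == needs

print(f"Overall benefit score: {served.mean():.1%}")                    # 95.0%
print(f"Served within the minority: {served[n_majority:].mean():.1%}")  # 0.0%
```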

I do not want to denigrate the enormous opportunities of AI.

AI is life-changing when it translates visual information, speech, sound and movement into accessible forms.

AI is most seductive to people who need it the most. We need to differentiate among the various forms of AI. We need to understand that this is not a simple case of being for or against AI.

To prevent the harms while supporting the benefits requires a fundamental rethinking of the existing patterns that AI is automating. Should all decisions be based on majority rules? Are statistical findings really the best and only way to arrive at truth and scientific evidence? Should what we know about the majority be automatically applied to the minority? Should we propagate our past onto our future more efficiently?

In our projects at the IDRC we are looking at how we can invert AI to favour, serve and optimize difference, the novel or unique. How can we push the machine to support people whose needs are far from the average, when current AI is designed to optimize the success patterns of the majority? This is a large ask in today’s political and social climate.
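
Purely as an illustration of this inversion, and not a description of our actual methods: if we estimate how rare each sample is and re-weight the data accordingly, the machine’s optimization target itself changes. The sketch below uses standard tooling and invented data.

```python
# An illustration of the idea only, not the IDRC's actual method:
# estimate how rare each sample is, then up-weight rarity so the model
# can no longer sacrifice the margins to fit the middle.
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=3_000).reshape(-1, 1)
y = (x ** 2).ravel() + rng.normal(0.0, 0.1, size=x.shape[0])  # synthetic needs

# Estimate the local data density around each person; rare samples
# (low density) get large weights.
log_density = KernelDensity(bandwidth=0.3).fit(x).score_samples(x)
rarity = np.exp(-log_density)

majority_fit = LinearRegression().fit(x, y)
inverted_fit = LinearRegression().fit(x, y, sample_weight=rarity)

edge = np.array([[3.0]])  # someone far from the statistical middle
print("majority-optimized prediction at the edge:", majority_fit.predict(edge))
print("rarity-weighted prediction at the edge:   ", inverted_fit.predict(edge))
print("actual need at the edge:                  ", 3.0 ** 2)
```

The point of the sketch is that “optimize for the majority” is a design choice, not a law of nature; change the weights and the machine serves a different population (and, in this crude version, serves the middle worse, which is why designing for both is such a large ask).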

Accessibility Standards Canada is supporting us in developing a standard that will address statistical discrimination. To have any impact on AI’s trajectory, we need a supportive and informed community. I hope you’ll forgive my current obsession with this challenge; we can’t make progress without you.

[To follow the important work of the We Count Project, subscribe to their bi-monthly newsletter here.]