Robert Riener, does artificial intelligence boost inclusion?
Artificial intelligence simplifies many areas of our lives. But does it also make the world more inclusive? Robert Riener outlines the requirements for this to succeed.
Artificial intelligence (AI) can open up enormous opportunities for people with disabilities. It is already making their day-to-day lives easier and breaking down barriers: automatic speech recognition systems, for example, make television broadcasts and online lectures more accessible for deaf people. Text generators, in turn, can translate complex content into Plain English. This opens up knowledge previously reserved for experts to a wider group of people.
AI is also opening up new perspectives in medicine and rehabilitation. Continuous monitoring of health data could detect or even prevent complications in paraplegics, such as pressure ulcers or circulatory problems, at an early stage. And when combined with robotics and exoskeletons, AI can help people with disabilities achieve greater mobility and self-determination.
As a general rule, if we think about technology in an inclusive way from the start, everyone benefits. This phenomenon is known as the ‘curb-cut effect’: a ramp, originally intended for wheelchair users, also helps parents with prams or travellers with suitcases. The same is true of AI-based navigation tools and autonomous vehicles, which benefit people both with and without disabilities.
The expert
Robert Riener is Professor of Sensory-Motor Systems in the Department of Health Sciences and Technology at ETH Zurich. He has worked for many years at the intersection of modern assistive technologies, rehabilitation and inclusion. In summer 2025, as part of a fellowship at the Thomas Mann House in Los Angeles, he held discussions with experts in the fields of AI and inclusion.
But AI is not a miracle cure; it also carries risks. AI is only as fair as the data it was trained on. And these data are rarely neutral. Many applications reflect social prejudices because they are based on information drawn primarily from healthy, white, male people. People with disabilities simply do not appear in them. This leads to erroneous, often discriminatory results, for example in automatic facial recognition or in the analysis of application forms.
Moreover, when AI generates images or texts about people with disabilities, it often reproduces stereotypical representations. These clichés are not only a technical problem due to flawed training data, but also an expression of a widespread social attitude, where disability is often seen as a deficit rather than part of human diversity. AI reflects these patterns of thinking and reinforces them when we adopt them uncritically. This means it influences how we talk about disability, which role models become visible and who is actually perceived as a ‘full’ part of society.
So what must we do? The solution is obvious, but challenging: AI must be developed in an inclusive way. People with disabilities must not merely be a target group; they must also be active contributors, as designers and developers, as test subjects and researchers. This is the only way to create systems that reflect true diversity.
At the same time, we need to diversify our sources of data, check algorithms for distortions and make transparency a key value so that the decisions of AI become comprehensible. AI must not be judged solely on efficiency; it must also promote equity.
If we succeed in this, AI can become a tool for emancipation: it can break down barriers, expand participation and enable new forms of togetherness. But if we leave its development uncritically to the big technology companies, it will exacerbate existing inequalities. So what matters is not what the technology can do, but what we expect of it. AI is not a neutral player. It follows the values we embed in it. Whether it leads to more inclusion or to new forms of exclusion depends on our attitude and whether we are prepared to take responsibility.
From Zukunftsblog to Perspectives
The Zukunftsblog has been redesigned and is now called Perspectives. The ETH Zurich author platform showcases ETH experts who address socially relevant questions and provide context for current topics.