Ideas | Does AI Systematically Discriminate?
Algorithmic Bias and the Future of Inclusivity in the Workplace
Recent advances in artificial intelligence (AI) and automation have opened new frontiers, revolutionizing how we communicate, commute, and conduct business. Critics, meanwhile, have pointed out AI's shortcomings, including the many jobs lost as human labor is replaced with technological alternatives. During the current global pandemic, however, AI and automation have empowered sectors across the globe, including healthcare, judicial systems, hiring, and voting. While these technological advances create significant opportunities, they do not come without challenges. For better or worse, algorithms have the power to shape our world and transform the workplace, which puts the onus on employers and regulators to implement policies that drive diversity and inclusion.
What is Algorithmic Bias?
Debates around the terms and definitions used to discuss algorithms in relation to bias, fairness, and accountability tend to be convoluted. Nicholas Diakopoulos, a well-known communication scholar, defines an algorithm as “a series of steps undertaken in order to solve a particular problem or accomplish a defined outcome.” Algorithms underpin branches of AI such as machine learning (ML), neural networks, and deep learning. In simple terms, algorithmic bias occurs when a computer system’s outputs systematically produce unfair results for certain groups of people.
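To make “unfair results” concrete, consider a decision system whose favorable outcomes are compared across demographic groups. The minimal sketch below, written in Python with entirely hypothetical data, computes each group’s selection rate and the gap between them, one simple way bias in outcomes is quantified.

```python
# A minimal sketch (hypothetical data) of how biased outcomes can be
# measured: group a system's decisions by a protected attribute and
# compare selection rates. Equal rates suggest demographic parity;
# a large gap signals potentially unfair outcomes.

def selection_rate(decisions):
    """Fraction of favorable decisions (1 = hire/approve, 0 = reject)."""
    return sum(decisions) / len(decisions)

# Hypothetical model outputs for two demographic groups.
outcomes_by_group = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],   # 75% selected
    "group_b": [0, 1, 0, 0, 1, 0, 0, 0],   # 25% selected
}

rates = {g: selection_rate(d) for g, d in outcomes_by_group.items()}
for group, rate in rates.items():
    print(f"{group}: selection rate = {rate:.0%}")

# Demographic-parity gap: difference between the highest and lowest
# selection rates across groups. Zero would mean parity.
gap = max(rates.values()) - min(rates.values())
print(f"demographic-parity gap = {gap:.0%}")
```

A gap of zero is rarely the goal in practice, since groups can differ on legitimate grounds, but a large, unexplained gap is exactly the kind of result the term “algorithmic bias” describes.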
The idea that technology and quantified data are always neutral has been widely accepted. That assumption, however, should not blind us to the limits of AI, especially when it is promoted as a way to solve all human problems. This is also part of a larger discussion on intuitive versus analytical decision-making. Technochauvinism, a term coined by Meredith Broussard, describes the belief that technology is always the solution. In her book, Artificial Unintelligence: How Computers Misunderstand the World, Broussard elaborates on technochauvinism and provides a thorough account of the limits of AI.
“No statistical prediction can or will ever be 100 percent accurate -- because human beings are not and never will be statistics. Culture matters, social issues matter, and they matter just as much as solving mathematical and engineering problems.”
― Meredith Broussard, Artificial Unintelligence
Among the growing body of work on embedded bias in algorithms are books such as Weapons of Math Destruction, Automating Inequality, Technically Wrong, and Algorithms of Oppression. A key point of agreement in this literature is that technology is not always neutral. Through various research methods and case studies, these authors conclude that technologies are built by people whose cultural and political biases can be reflected in the products they design and produce.
AI at Work
Despite growing concern about the excessive hype around AI, it does make significant and practical contributions to improving people’s lives. Human and automated decisions are not mutually exclusive; context matters, and some decisions will still require human judgment.
Although AI presents many opportunities and can be a force for good, this is not always the case. For instance, several businesses use AI-powered recruiting tools to make hiring decisions. While this can be efficient, the history of bias in algorithmic decisions means it can also lead to discriminatory outcomes. Numerous studies have shown that even algorithms designed to be neutral must still be audited to avoid discriminatory, and potentially illegal, decisions. Cases of algorithmic bias will continue to emerge if we keep overlooking their potential harms. This is evident in recent examples such as Amazon’s recruiting tool, which was found to favor male job candidates over female applicants, and the A-level grading controversy in the UK, where an automated grading algorithm generated bias against students from specific socioeconomic backgrounds. Events like these suggest that AI cannot always solve systemic problems.
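What might such an audit look like? One common screen in US employment-selection guidance is the “four-fifths rule”: if a group’s selection rate falls below 80% of the most-selected group’s rate, the tool may have disparate impact and warrants closer review. The sketch below is illustrative only; the numbers are hypothetical and not drawn from the Amazon or A-level cases.

```python
# A simplified disparate-impact screen (the "four-fifths rule"),
# using hypothetical counts from an automated hiring tool.

def audit_four_fifths(selected_counts, applicant_counts):
    """Flag groups whose selection rate falls below 80% of the
    highest group's selection rate."""
    rates = {g: selected_counts[g] / applicant_counts[g]
             for g in applicant_counts}
    best = max(rates.values())
    return {g: (rate / best, rate / best < 0.8)
            for g, rate in rates.items()}

# Hypothetical screening results.
selected = {"men": 48, "women": 22}
applicants = {"men": 100, "women": 100}

for group, (ratio, flagged) in audit_four_fifths(selected, applicants).items():
    status = "REVIEW: possible disparate impact" if flagged else "ok"
    print(f"{group}: impact ratio = {ratio:.2f} -> {status}")
```

A failed screen does not prove discrimination on its own, but it is the kind of routine check the studies above argue should accompany any supposedly neutral tool.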
Bias in Facial Recognition Technology
Another case of controversial algorithms is the development of facial recognition systems. Joy Buolamwini, an MIT-trained researcher, found that AI has a bias problem. Her work on facial recognition technology exposed error rates that varied sharply by race and gender, and it contributed to major tech companies such as Microsoft, Amazon, and IBM pausing or restricting their facial recognition offerings. Buolamwini’s research made a significant impact and emphasized the importance of building inclusive systems. Training data sets, she argued, must represent a diverse population.
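The methodological point behind her findings is simple: a single overall accuracy number can hide large gaps between subgroups. The sketch below, with entirely hypothetical predictions, shows the kind of disaggregated evaluation her work popularized, reporting error rates per demographic subgroup instead of one aggregate score.

```python
# Disaggregated evaluation: report error rates per subgroup rather
# than a single overall accuracy. All data here is hypothetical.
from collections import defaultdict

def error_rates_by_subgroup(records):
    """records: iterable of (subgroup, true_label, predicted_label)."""
    errors, totals = defaultdict(int), defaultdict(int)
    for subgroup, truth, pred in records:
        totals[subgroup] += 1
        if truth != pred:
            errors[subgroup] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical classifier output on a face-analysis benchmark.
results = [
    ("lighter-skinned men", "male", "male"),
    ("lighter-skinned men", "male", "male"),
    ("darker-skinned women", "female", "male"),    # misclassified
    ("darker-skinned women", "female", "female"),
]

for subgroup, rate in error_rates_by_subgroup(results).items():
    print(f"{subgroup}: error rate = {rate:.0%}")
```

In this toy example the overall error rate is 25%, yet one subgroup sees no errors while another sees 50%, precisely the kind of disparity an aggregate metric conceals.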
Despite the significant critical scholarship on algorithmic bias, some researchers argue that not all bias is inherently bad. When AI systems favor women or people of color, they are, in fact, providing opportunities for historically disadvantaged communities. For instance, one study of automated underwriting tools in US mortgage lending found that they approved borrowers from underserved populations at higher rates.
While automated decisions can improve on human decisions, we may not yet be able to rely on them alone. How much do you trust AI to make a decision? Do you trust it at all? AI can indeed decide who gets a job, a loan, a school admission, and much more, but it cannot define who we are.
Pier25 Reads:
Artificial Unintelligence: How Computers Misunderstand the World
By Meredith Broussard