22 Aug 2023

Is AI an Inclusive Decision-Making Tool?

A white paper from Diversifying Group


Data is an essential part of decision-making in modern businesses. However, data-driven decision-making has mostly relied on human processing power to extract insights from data. This approach can be inefficient and prone to cognitive biases that affect decisions. To overcome these limitations, companies are increasingly turning to Artificial Intelligence (AI) to automate decision-making. Yet AI has faced criticism, and slower-than-expected adoption, because of biases within its systems and, consequently, within its results. 

In this academic white paper, we will explore the histories of AI and decision-making, and provide an equality impact assessment of AI, to answer the question: Is AI an Inclusive Decision-Making Tool? 

Academic White Paper Sections:

  1. The History of AI, in discussion format 
  2. Equality Impact Assessment of AI - a detailed exploration of AI and its impact on different characteristics 
  3. Conclusions to the question, “Is AI an Inclusive Decision-Making Tool?” 
  4. Equality Impact Assessment Summary: Table for Quick Reference 

  1. History of AI and the Emergence of ChatGPT 

The history of AI dates back to the 1950s, with the development of early computer systems that were capable of performing basic tasks, such as solving mathematical equations. In the decades that followed, AI continued to evolve, with researchers exploring new ways to create intelligent machines that could perform increasingly complex tasks. 

The 1990s saw the emergence of machine learning in the field of AI. Machine learning algorithms allowed computers to learn from data and improve their performance over time, paving the way for the development of more advanced AI systems.  

In the early 2000s, further developments produced deep learning algorithms, which allowed machines to learn from vast amounts of data, enabling them to recognise patterns and make predictions with a high degree of accuracy. 

One of the most significant developments in the field of AI in recent years has been the emergence of advanced natural language models, such as ChatGPT by OpenAI. ChatGPT is a pre-trained language model that can generate coherent and compelling text in a variety of languages. Its systems are based on deep learning algorithms and have been trained on massive amounts of text data from the internet. This training has enabled ChatGPT to learn the nuances of language and to generate text that is both grammatically correct and semantically meaningful. ChatGPT has become one of the most advanced and popular language models today, with applications in a wide range of fields. For example, it has been used to generate news articles, write poetry, create music, and influence decision-making.  

In the future, researchers are likely to focus on developing more advanced algorithms that can handle more complex tasks and require less data to train. They may also explore new approaches to AI, such as quantum computing, which could enable machines to perform even more complex tasks. 

What’s quantum computing? Imagine you have a super-powered computer that can do some kinds of calculations much faster than regular computers. Instead of using regular "on" and "off" switches like normal computers, this super-powered computer uses tiny particles that can be in multiple states at once, like a mix of "on" and "off" at the same time. This lets the super-powered computer work on many things at once, sort of like a multitasking master. It's as if the computer can explore lots of different paths all at the same time to find the best answer really quickly. 

This special ability could help solve tough problems that regular computers would take forever to crack, like finding super-secret codes, designing new medicines, or figuring out the best way to do things. But building and using these super-powered computers is still a big challenge that scientists are working on. 
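To make the idea of being "on and off at the same time" a little more concrete, here is a minimal Python sketch (an illustration only, not how real quantum hardware is programmed): a single qubit is represented as a two-element state vector, and an equal superposition gives a 50/50 chance of measuring 0 or 1.

import numpy as np

# A single qubit as two amplitudes: one for "off" (0) and one for "on" (1).
# Equal amplitudes mean an equal mix of the two states.
state = np.array([1 / np.sqrt(2), 1 / np.sqrt(2)])

# Measurement probabilities are the squared magnitudes of the amplitudes.
probabilities = np.abs(state) ** 2            # -> [0.5, 0.5]

# Simulating 1,000 measurements gives roughly half 0s and half 1s.
outcomes = np.random.choice([0, 1], size=1000, p=probabilities)
print(probabilities, np.bincount(outcomes))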

AI and Decision Making 

Human processing power has long been at the centre of business decision-making. Relying solely on human intuition, however, can be inefficient, capricious, and fallible. Our brains are wired with many cognitive biases that impair our judgment in predictable ways, influencing our decisions so that they depart from rational objectivity and limiting what any organisation can achieve. 

To overcome the limitations of human processing power, companies have evolved towards a "data-driven" approach to decision-making. Data can improve decisions, but it requires the right processor to get the most from it, and that processor is usually assumed to be human. The term "data-driven" implies that data is curated by — and summarised for — people to process. However, summarised data can obscure many of the insights, relationships, and patterns contained in the original (big) data set. Data reduction is necessary to accommodate the throughput of human processors. For as much as we are adept at digesting our surroundings and effortlessly processing vast amounts of ambient information, we are remarkably limited when it comes to processing structured data spanning millions or billions of records. For humans to understand such volumes of data and draw actionable insights, it would take so long that the information's relevance would have expired. 

To fully leverage the value contained in data, companies need to bring AI into their workflows and, sometimes, get us humans out of the way. We need to evolve from data-driven to AI-driven workflows. Distinguishing between "data-driven" and "AI-driven" is not just semantics: the former emphasises the data asset, the latter the processing ability. Data holds the insights that can enable better decisions; processing is the way to extract those insights and act on them. Humans and AI are both processors, with very different abilities. AI can be trained to find the segments in a population that best explain variance at a fine-grained level, even when they are unintuitive to human perception. AI has no problem dealing with thousands or even millions of groupings, and it is comfortable working with nonlinear relationships, be they exponential, power laws, geometric series, binomial distributions, or otherwise. 
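As a rough illustration of the "fine-grained segments" point above, the short Python sketch below (synthetic data and arbitrary parameter choices, not drawn from any source cited in this paper) clusters a large dataset into a thousand segments, a scale of grouping that a human analyst could not reason about by hand.

import numpy as np
from sklearn.cluster import MiniBatchKMeans

# Synthetic "customer" records: 100,000 rows, 10 behavioural features.
rng = np.random.default_rng(0)
records = rng.normal(size=(100_000, 10))

# Ask for 1,000 fine-grained segments - unintuitive for humans, routine for AI.
model = MiniBatchKMeans(n_clusters=1_000, batch_size=10_000, random_state=0)
segments = model.fit_predict(records)

print(f"{len(set(segments))} segments found across {len(records):,} records")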

Benefits and Challenges of AI 

AI-driven decision-making workflows better leverage the information contained in the data and are more consistent and objective in their decisions. They can better determine which ad creative is most effective, the optimal inventory levels to set, or which financial investments to make. While humans are removed from this workflow, it's important to note that mere automation is not the goal of an AI-driven workflow. The value of AI is in making better decisions than humans alone can, creating step-change improvements in business outcomes rather than just incremental benefits. 

However, there are challenges in using AI. Despite its many successes, AI still faces several obstacles that must be overcome to achieve its full potential. One of the main challenges is the need for more data to train AI systems: as they become more complex, they require more data to learn from, which can be difficult to obtain. Another is the need for more advanced algorithms that can handle more complex tasks. Underrepresentation is part of a larger concern about the data sets provided to AI models. As noted above, as AI becomes more complex it needs more data to keep learning; if the information provided is not equal or representative, the answers and information the AI model shares will, in turn, poorly represent the world we live in. And while deep learning has been a significant breakthrough in the field of AI, it is still limited in its ability to handle certain types of data and tasks. 

These challenges have led to criticisms that AI is biased, and to incidents where AI has not been fit for purpose to carry out tasks or influence decisions equitably. This raises the question of whether AI is an inclusive tool. 

  

  2. Equality Impact Assessment of AI 

To explore whether AI is an inclusive tool, Diversifying Group has completed an equality impact assessment of AI, looking at how diverse communities are represented in AI data and how AI impacts those communities. The following section outlines the methodology taken to complete this assessment.  

Methodology 

To understand how biases impact diverse communities and decision-making, it is important to first establish a framework and limitations.  

Diverse Communities 

This paper is produced in the United Kingdom, and therefore the protected characteristics explored are those of the UK Equality Act 2010. However, there is limited knowledge in relation to diversity and AI for certain protected characteristics listed in the Act. These have therefore been omitted.  

In addition to this, Diversifying Group works globally and has included characteristics that go beyond the Equality Act 2010. The following characteristics are therefore included in this paper's exploration of biases within AI and their impact on diverse communities: 

  • Age 
  • Disability 
  • Ethnicity 
  • Gender 
  • Language 
  • Religion and Belief 
  • Sexual Identity 
  • Transgender 
  • Political Belief 

Research 

As with AI itself, a foundation of knowledge must first be acquired in order to build further knowledge. This paper has therefore conducted a review of academic papers, blog posts, web articles and journal sources to understand how biases within AI occur and what impact they have. 

Limitations 

It is important to acknowledge the limitations of this paper so that its context and interpretation are understood. The sources used in the research for this paper were all written in English. Although the papers, journals and articles were drawn from around the world, restricting sources to English may exclude the experiences of different communities. This paper also largely focuses on publicly accessible AI models, such as ChatGPT and Bard, but may bring in closed AI models as examples. 

Framework: Equality Impact Assessment of AI  

This paper establishes an understanding of bias in AI, its use and its impact on diverse communities through an equality impact assessment of AI for each of the above characteristics, using the following questions.  

  • What is the representation of that characteristic in the data set?   

  • What are the harms and risks of AI to that community? 

  • What opportunities can AI provide to support the community? 

Age 

Representation 

The data that AI learns from is limited in its representation of different ages: the average age within the datasets studied is 43.6, creating an age bias that affects reliability (Kamikubo, Wang, Marte et al., 2022). The source data significantly excludes older people through systematic and social exclusion. Lower digital literacy among older people, limited internet access in those communities, and physical accessibility barriers all make older people less likely to participate in digital communities. With fewer older people online, they are under-represented in the data, biasing it towards younger online communities. 

In addition to this, prejudices and stereotypes held against older people online lead to their exclusion from the online communities that contribute to AI source data. Stereotypes of older people as technically inadequate are further reinforced by the make-up of AI's creators, who are predominantly young men (Stypinska, 2022). 

Although there is already a great deal of discourse on ageism and AI, that discourse excludes older people from the conversation itself, contributing to their future exclusion when issues of representation are addressed. 

Risks and Harms 

One impact of the lack of age diversity within the data is that the needs of certain age groups are not reflected: datasets only represent the needs and desires of people who have internet access and are part of digital communities. Within healthcare, AI has been used to support decision-making; however, poor representation in the data has led to flawed development and implementation of AI tools that fail to meet older people's requirements (Moreno, 2023). This highlights the risks of decision-making that excludes older communities. 

The lack of representation in both the data and the creators of AI can reinforce stereotypes of older users. Studies have shown that the homogenous community of developers has led to ageism being embedded into algorithms (Stypinska, 2022). For instance, the results of AI systems such as ChatGPT are more likely to be negative towards older people, as the data encodes the ageist stereotypes that creators hold. One study found that sentences with the word 'young' were 66% more likely to be encoded with positive language such as 'courageous', whereas sentences with the word 'old' were encoded with negative language such as 'stubborn' (Chu, Nyrup, Leslie et al., 2022). This can harm social attitudes toward ageing, perpetuating discrimination. 

Within recruitment, AI has been used to support selection and candidate screening. However, the data used to train the AI can contain ageist biases from the language of historic material, such as past job descriptions and person specifications, used as a screening tool. This can exclude certain age groups at the screening stage of the recruitment process and simply replicate current demographics. An example occurred in 2017, when job boards excluded people by not allowing candidates to input graduation dates before 1980 (IBID). Representation can therefore be a risk to age diversity within business. 
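The hypothetical screening rule below (invented purely for illustration; it is not the actual system referenced above) shows how a seemingly neutral requirement, such as only accepting graduation years from 1980 onwards, acts as a proxy for age and silently filters out older candidates.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    graduation_year: int

def passes_screening(candidate: Candidate) -> bool:
    # A seemingly neutral rule: the form only accepts graduation years
    # from 1980 onwards - in effect, an age cut-off in disguise.
    return candidate.graduation_year >= 1980

candidates = [Candidate("A", 1975), Candidate("B", 1992), Candidate("C", 2015)]
shortlist = [c.name for c in candidates if passes_screening(c)]
print(shortlist)   # ['B', 'C'] - candidate "A" is silently excluded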

The combination of a lack of representation and a lack of diversity among AI creators affects future developments of AI technology. Developers from similar backgrounds, who may hold age biases, are unlikely to take minoritised communities into consideration, further excluding older communities. To mitigate biases, developers use user data; however, that data is itself biased due to the lack of representation in user communities. As a result, older communities are excluded and deterred from technology even when it is designed to support them (IBID), risking their continued exclusion. 

Opportunities 

Although much of the literature on ageism and AI focuses on the disadvantages and risks of its use, there are advantages too, where new developments can help remove age biases. 

AI presents opportunities to identify and remove ageist language from job descriptions, which has been a barrier in recruitment. This allows the tech industry, and industries more widely, to diversify, creating a more representative workforce. In the long run, AI companies may achieve greater diversity in their development teams, making technology fit for purpose and age-friendly. 

Although accessibility has been a significant barrier to the representation of age in data, there is an argument that this will improve: the population is ageing, and future older generations will have grown up in the digital age. Greater digital literacy across the population should therefore increase the representation of older age groups in the data (IBID). 

Age is a complex, multi-layered construct that intersects with factors such as biology, socioeconomic status, race, and gender. AI cannot capture or understand the intersectionality of these human traits (Stypinska, 2022). Therefore, wherever AI is used to combat ageism, its outputs should be audited and reviewed through a human lens. 

Disability 

Representation 

Representation of disability can be described as a data desert: there is only a small amount of data representing disabilities within datasets. This is due to multiple reasons. 

People with disabilities can often be treated as outlying or anomalous data within a dataset, due to their unique characteristics. The AI learns to disregard and filter out this information, presenting results and functions catered towards the majority. The uniqueness of different disabilities, and how they present as intersectional data, can also be a barrier: people with disabilities cannot be correctly identified, further skewing results and leaving their data stored as 'invisible data' (Tyson, 2022). 

Representation of people with disabilities in the data is also affected by accessibility issues and exclusion from internet communities, for instance where assistive technology is lacking, contributing to gaps in historical data and a continuing lack of representation. Tech companies recognise this barrier and attempt to address the gaps through in-person data collection. However, in-person collection has its own issues, including accessibility barriers in getting participants into labs and the cost of financing the process (Park, Bragg, Kamar et al., 2021). 

There are also issues within the design process, where there is a lack of representation in the design team and people with disabilities are an afterthought (Tyson, 2022). This causes the technology to be inadequate in representing people with disabilities and providing support for disabled communities.  

This paper looks at openly accessible AI with large datasets; however, an issue noted in closed AI systems should also be acknowledged. Because disability datasets are small and the characteristics unique, there are identification risks in the use of disability data. Disability data may therefore be omitted to overcome privacy issues, further affecting representation in the data. 

Risks and Harms 

Where data on people with disabilities is treated as anomalous or as outliers, through lack of representation, it may be excluded, posing barriers. For instance, some programs cannot recognise the speech patterns or accents of people with disabilities, leaving them unable to access services. This risks further disabling and excluding individuals. 

The lack of representation in AI design can shape views of disability, which fall into the medical model of disability, where disability is seen as resulting from an individual's condition. Technology is therefore designed to 'fix' a disability or an individual. For instance, AI applications are often designed for the people who support a person with disabilities rather than for the person themselves (Newman-Griffis, Rauchberg, Alhabri et al., 2022). This can be harmful to disabled people, leaving them isolated and further excluded. The data collected from these users then informs future technology, continuing to exclude disabled people from the narrative. 

The focus of the data on the medical aspects of disability can affect the effectiveness of AI and of programs intended to remove biases. For instance, in recruitment, AI screening tools do not recognise the social barriers of having a disability, such as being unable to gain certain experiences, therefore excluding disabled candidates from the process (Henneborn, 2021). This poses a risk to business and society by widening gaps in equality. 

Opportunities 

Although there are negative impacts arising from the lack of representation of disabled people in datasets, AI technology provides opportunities to support disabled communities, particularly those who are neurodivergent. Applications such as ChatGPT and Bard can support communication by screening ideas, proofreading, and writing texts concisely in a consistent tone. The AI can also do the reverse, summarising texts into key points in a preferred format. This particularly supports people who are dyslexic and/or autistic. AI can also support neurodivergent communities by prioritising tasks, preventing information overload, and helping to manage mental health. 

Society is moving away from the medical model of disability and towards the social model, which holds that people are disabled by the barriers society creates, not by their condition or impairment. This shift can influence AI design, creating technology that focuses on empowering individuals and removing societal barriers for people with disabilities. 

Ethnicity 

Representation 

Representation of race and ethnicity within AI is linked to the historical systematic oppression of ethnic minorities within Western nations.  

The digital divide affects the representation of race and ethnicity in the data, as Western nations have more internet access than non-Western nations. For instance, in 2022, 93.4% of North America and 89.7% of Europe had access to the internet, compared to 64.2% of Africa and 67.4% of Asia (Internet World Stats, 2022). Within nations there is a further digital divide, where more white people have internet access than minoritised ethnic communities. For example, in America, as a result of socio-economic inequity, 40% of Black households do not have access to high-speed fixed broadband, compared to 28% of white households (McKinsey & Company, 2023). This divide causes a misrepresentation of ethnicity within the data: people are missing from the source material, producing an overrepresentation of Western and white points of view and embedding racial biases. 

Many AI developers, such as OpenAI and Google (the creator of Bard), are based in the US and operate in a tech industry dominated mainly by white men (Daley, 2023). This reflects the digital divide and the historic, systematic oppression of ethnic minorities within the tech industry and its hiring practices, where ethnic minorities are not seen as technically literate, causing an ethnic diversity gap. The resulting lack of representation within the teams that develop AI affects the outputs and purposes of the AI. 

Risks and Harms 

The data AI is trained on comes from around the world; however, access data highlights an overrepresentation of people in the West and of white people. This results in an ethnic bias that replicates historical social attitudes and behaviours and can further repress minority ethnic communities. For example, when ChatGPT was asked to suggest professional female hairstyles for a job interview, it presented results catering to white women, with the advice to 'avoid adding too much volume or texture' (New Thinking, 2023). This is harmful to society, as white Western norms are imposed, continually building structural barriers to equality. 

Because the data is historical, it can reflect historical processes, maintaining the status quo and perpetuating inequalities even when the AI is designed to remove bias. For instance, in recruitment, many organisations and recruiters use AI to screen CVs; in the US, the company ZipRecruiter screens three-quarters of all submitted CVs (Milmo, 2022). Biases have arisen in these AI systems, which replicate the current staff demographic and exclude minoritised communities, resulting in inequality and harmful impacts on people's lives (IBID). 

The lack of representation in the data and in development teams has produced AI results that reinforce racial stereotypes. For instance, a crime-prediction AI system in the US disproportionately targeted Black and Latin communities (Verma, 2022). In another case, AI testing found that the system was 9% more likely to identify Black men as criminals compared to white men (IBID). This can harm society and minoritised communities, as stereotypes create barriers. 

AI companies recognise that this representation gap is causing biased results and have been working on guardrails to prevent racism and bias. Companies and AI researchers acknowledge that human intervention is needed to correct biases and progress AI, through filtration and the setting of guardrails. OpenAI has faced criticism over this, as it outsourced these tasks to workers in Kenya and South America (Perrigo, 2023). This puts ethics into question when minoritised communities are used to analyse and filter racist data that is harmful and traumatic. 

Opportunities 

Although the literature on race, ethnicity and AI is largely negative, there are opportunities for AI to help close awarding gaps relating to racial diversity in education and the workplace. 

Natural language models such as ChatGPT and Bard can help close knowledge gaps about how to complete a task or action, such as writing an academic paper or knowing where to look for sources. Within higher education there is an attainment gap, known as the BAME attainment gap, where people of Black, Asian and minority ethnic backgrounds are 13% less likely to do as well as their white peers (Universities UK, 2019). Multiple factors contribute to this, including a lack of role models within their community, a lack of information passed down through their community, and a lack of belonging from being a minority in their classes (IBID). As a result, people from a minority ethnic background have little access to peer advice and can be unsure of where to gain advice and understanding. AI can close the knowledge gap usually bridged by role models and peers, for example by identifying literature, recommending books to explore topics further, and advising how to write an academic paper (Fido & Wallace, 2023). This can help close the awarding and attainment gap within higher education, as minority ethnic students are empowered with the knowledge needed to succeed in their studies. 

Within employment, AI can help close knowledge gaps around text-based professional conduct, such as how to write a successful application, professional emails, policies and board papers (IBID). This can help close diversity gaps among employees and support individual development and progression.

Gender 

Representation 

The training data for AI technology is skewed by an overrepresentation of men, a result of the digital gender divide in which fewer women have access to the internet. For instance, the gender gap in mobile internet access stands at around 300 million (GSMA, 2023), and in low- and middle-income countries 20% fewer women than men own a smartphone (IBID). This creates a gender bias towards men, marginalising women within the AI. AI systems are also informed by data from their users; as fewer women have internet access, the data collected continues to be male-biased. 

The bias in AI data is also influenced by the lack of representation within the AI development workforce. AI and technology face a gender diversity gap: women with more than 10 years of AI experience represent 12% of the workforce, and women with 0-2 years of AI experience represent 20% (European Institute for Gender Equality, 2021). This is due to biased hiring practices within the tech industry, workplace harassment of women in STEM, and working hours that do not accommodate the social pressures of gender-normative caring roles. AI platforms are therefore catered towards men, whose decisions dominate the development stages. 

Risks and Harms 

The lack of representation of women in the data, and among the users whose feedback shapes it, can reinforce sexist stereotypes and ideas about women, which is harmful to society. In 2013, UN Women created a campaign highlighting the gender stereotypes and sexism that appeared in Google's suggested searches, using keywords such as 'Women should' and 'Women cannot' (UN Women, 2013). Suggested searches generated by Google's systems included 'Women cannot drive' and 'Women should be in the kitchen' (IBID). A more recent example comes from ChatGPT-4, where users asked it to write a story about a boy and a girl choosing careers. It wrote a story in which the boy was interested in technical hobbies and became a doctor, whereas the girl was interested in creative and emotional hobbies and became a teacher (Equality Now, 2023). This harms gender equality by reinforcing sexism and ideas of toxic masculinity. 

The underrepresentation of women can negatively affect employment, because AI systems replicate historical data and environments shaped by sexist ideas and socio-economic inequality, in which women were excluded from the workforce. The most famous example is Amazon's recruitment AI for screening CVs. The AI was trained on CVs submitted to Amazon over a 10-year period and learned that male candidates were preferable, penalising CVs from women or CVs containing the word 'women's' (Dastin, 2018). This highlights how representation can replicate existing environments, prevent gender equality, harm women and widen gender pay gaps. 
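The following sketch is a deliberately simplified, synthetic illustration of the mechanism described above (it is not Amazon's system, and the data is invented): when historical hiring decisions are biased, a model trained on them learns to penalise the proxy feature, here a flag for CVs that mention a women's society.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Synthetic CVs: a skill score plus a flag for mentioning a women's society.
skill = rng.normal(size=n)
mentions_womens_society = rng.integers(0, 2, size=n)

# Biased historical labels: equally skilled candidates with the flag were
# hired less often in the past.
hired = (skill - 0.8 * mentions_womens_society + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, mentions_womens_society])
model = LogisticRegression().fit(X, hired)

# The coefficient on the flag comes out negative: the model has absorbed the
# historical bias and would penalise such CVs in future screening.
print(model.coef_)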

The lack of representation in the data and in the tech sector can make AI technology itself incompatible with serving women, marginalising them further. For example, a lack of data on, and testing with, women means systems struggle to understand female voices or recognise women's faces (European Institute for Gender Equality, 2021), which may disengage users and perpetuate the data gaps. Within healthcare, the lack of data on women, particularly in healthcare research, can lead to poor health and wellbeing: one liver-disease AI tool showed gender bias, being twice as likely to miss liver disease in women (UCL, 2022). 

The deep-rooted biases within society also manifest in the design of AI, negatively affecting women's employment opportunities. Due to historical structures of patriarchy, women have been confined to supportive business roles such as administration, assistance, and secretarial work, and these roles are the most likely to be affected or replaced by AI. A study by Revelio Labs showed that 71% of such supportive roles are held by women (Revelio Labs, 2023). The adoption of AI could therefore significantly harm gender equality by limiting women's employment opportunities and economic freedoms. 

Opportunities 

Although there are many examples of how AI can hurt gender equality, there are also opportunities for AI to support it. Natural language models and text-screening AI have become tools for removing bias in hiring, stripping gender-biased language from job adverts, job descriptions and interviews. This can support gender equality by widening the candidate pool and creating a hiring process that works for women. In turn, this can benefit the tech sector and AI itself, as female representation increases among development teams and decision-makers. 

With more women hired, the pay gap has the potential to shrink, although this depends on the level at which women are hired and on an organisation's pay policies. AI can support pay-gap analysis by highlighting patterns and helping manage talent, supporting companies to make data-backed decisions towards pay equity (gapsquare, 2023).
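As a minimal sketch of the kind of pattern-highlighting described above (the column names and figures are invented for illustration), the snippet below uses pandas to surface the median pay gap by level.

import pandas as pd

# Invented example data - the columns and salaries are assumptions.
df = pd.DataFrame({
    "gender": ["F", "M", "F", "M", "F", "M"],
    "level":  ["junior", "junior", "mid", "mid", "senior", "senior"],
    "salary": [30000, 31000, 41000, 45000, 60000, 68000],
})

# Median salary by level and gender highlights where the gaps sit.
by_level = df.pivot_table(index="level", columns="gender",
                          values="salary", aggfunc="median")
by_level["gap_%"] = (by_level["M"] - by_level["F"]) / by_level["M"] * 100
print(by_level.round(1))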

The AI and wider technology sectors are aware of how gender is represented in their systems and of the potential risks and harms. Companies are therefore designing synthetic data and scenarios to correct gender biases and prevent future gender imbalances (Vinnova, 2023). These have been used in the medical field to widen the range of cases in which AI can be applied. Although this has potential positive impact, the design process must itself be inclusive to further mitigate bias. 

Language 

Representation  

The training data for AI is mainly in English. For instance, around 3% of ChatGPT's training data is Wikipedia articles, written only in English (Cooper, 2021). Sources of knowledge such as academic papers, which form parts of AI datasets, tend to be published in English because of pressures in academia to appear professional (Dave, 2023). This biases the data, and the AI, towards English speakers. 

AI companies are mainly based in the US, where a low percentage of Americans speak another language (IBID). This makes the development of AI US-centric and biased towards English. 

Risks and Harms 

The representation of language within the data, and the location of AI companies, have been argued to pose a threat to language diversity. Natural language models like ChatGPT can influence how things are written, standardising language and tone. As AI companies work in English and within a US context, language is standardised to US English or Queen's English (Bjork, 2023). This can erode other varieties of English and have a harmful cultural impact, as language reflects cultural history. 

AI is increasingly used professionally for writing tasks. The dominance of English, and of particular varieties of it, may create a norm of what it means to be professional, excluding non-English speakers from professional circles. This has been argued to be neo-colonial, replicating colonial Britain's use of English to influence and impose power (Dave, 2023). 

The predominance of English in AI training data can also fuel misinformation. Natural language models have multilingual capabilities to translate texts, which can be particularly helpful for non-native speakers, and AI can take information from webpages in different languages and translate it into English to form answers. However, these capabilities have been criticised for inaccuracies, which can harm decision-making based on incorrect information (Lancet Digital Health, 2023). 

The predominance of US thinking has steered language capabilities towards English speakers, while other languages have not been developed to a similar level. For instance, when non-English speakers using ChatGPT have experienced issues, there has been a lack of support or no response at all (Dave, 2023). Multilingual users of ChatGPT have also highlighted that answers in English are written in a professional style, whereas responses in, and translations into, other languages read in an elementary tone. This may harm people by causing them to be dismissed as unprofessional or to miss out on opportunities due to gaps in knowledge.

AI is built on predominant languages such as English, Chinese, and Arabic. Natural language models such as ChatGPT are argued to be the next step in information searching, potentially replacing search engines such as Google. As these tools serve dominant languages, there is a risk of knowledge inequality, creating further socioeconomic disparity between nations and communities (Band, Chayawijaya, Lee et al., 2023). 

Opportunities  

AI is dominated by major languages, with the associated risks described above. However, AI also has the potential to preserve lesser-known languages. For instance, Masakhane, an organisation that seeks to develop natural language processing for African languages, is building AI support for over 2,000 African languages (Masakhane.io, 2023). This can help preserve languages and provide opportunities for minoritised communities in Africa. 

Religion and Belief 

Representation  

In terms of religion and belief, AI represents a wide variety of religions, as the data is influenced by their foundational texts. The disparities that arise concern the interpretations and sects within those religions. For instance, within Islam, ChatGPT is biased towards the majority sect, which is Sunni (Shahid, 2023). Representation within the data will therefore be skewed towards the majority within each religion. 

Although many religions derive from foundational texts, some, such as Buddhism, are less text-based and more focused on practice. Data on practice-focused religions is therefore scarce or omitted from the dataset, causing biases towards text-based religions and gaps in AI's knowledge (Bhuiyan, 2023). 

Many AI systems are developed in the US with US-based teams. In the US, 70% of Americans identify with a Christian religion and 5% identify with a non-Christian religion (PRRO.org, 2021). This may therefore shape how religion is represented within the AI and how it is developed. 

Risks and Harms 

As the foundations of many religions are text-based, AI results can be pulled directly from the texts. However, there have been errors in accuracy, known as hallucinations. AI hallucinations occur when AI like ChatGPT makes up false information or facts not based on real data or events. The AI looks for correlations in text, predicting the next words so that the passage makes human sense; in doing so, it prioritises conversational flow over accuracy (Bhuiyan, 2023). This poses a risk of spreading misinformation and contributing to disputes within religions, through answers based on hallucinations and texts stripped of context.
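The toy next-word predictor below (a gross simplification, included only to illustrate why fluent text is not the same as accurate text) always picks the continuation it has seen most often, so it produces sentences that read naturally whether or not they are true.

from collections import Counter, defaultdict

# Toy training text: frequency, not truth, drives the predictions.
text = ("the model predicts the next word . "
        "the model predicts the answer . "
        "the model predicts the next word .").split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    bigrams[prev][nxt] += 1

def continue_from(word, length=6):
    out = [word]
    for _ in range(length):
        follows = bigrams.get(out[-1])
        if not follows:
            break
        # Pick the most frequent continuation - fluent, but never fact-checked.
        out.append(follows.most_common(1)[0][0])
    return " ".join(out)

print(continue_from("the"))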

AI developers recognise that there are ethical concerns and have created guardrails to prevent discrimination and hate. However, the lack of representation within development teams has exposed blind spots that have caused offence. In January 2023, ChatGPT was accused of being anti-Hindu for allowing jokes about Hindu deities while blocking jokes about other religious figures (Sharma, 2023). This could harm society and minoritised religious communities if AI fuels hate towards particular religions. 

The overrepresentation of majority sects, combined with the make-up of AI development teams, can make outputs inadequate and unclear. For instance, answers to questions on Islam default to Sunni thought and practice, which not all sects follow, giving an inaccurate picture of Islam. Similarly, when users ask ChatGPT about Abu Bakr, who is revered by Sunni Muslims but whose succession is not accepted in Shia tradition, ChatGPT provides lengthy results; yet when asked about the Shia Muslims' first Imam, it defaults to responses that it is a "matter of controversy" (Shahid, 2023). This highlights the impact of misinformation and how AI poses a risk of biased virtue signalling to society. 

US- and Euro-centric development also prevents AI from being truly neutral. AI training data draws on forums and other social texts from across the internet; much of the criticism of Islam in the West, for instance, derives from anti-Muslim and Islamophobic sentiment (IBID). AI results are therefore biased and unable to give neutral responses, harming decisions about, and ideas of, minoritised religious communities. 

Although forums and social texts are used in AI training, answers on religion are based on text. Religion is a human-centred subject, shaped by context and by scholars' interpretations, which have formed different customs within each religion. ChatGPT and other AI treat religion as textual fact rather than interpretation and opinion (Bhuiyan, 2023). This can be unhelpful to religion and can cause harmful social impacts, for example around religious stances on LGBTQ+ identities. 

Opportunities 

Within religious communities there is fear and caution around the use of AI. However, religious leaders have identified a potential opportunity for ChatGPT to provide tailored support to congregation members, such as counselling and advice on modern society (Damocles, 2023). This could help members become more engaged in their religious communities and feel a greater sense of belonging. 

Sexual Identity 

Representation  

The representation of LGBTQ+ communities within data is hard to measure. Historical social stigma and discrimination have led to a fear of declaring oneself LGBTQ+ and to exclusion from online environments. 

LGBTQ+ communities have often experienced exclusion and censorship in online environments, justified in terms of "protecting the youth" or "deviancy" (Degeest, 2022). Censorship practices include posts being removed or content being shadow-banned, where it is hidden to near invisibility. This creates a heteronormative bias within systems, as LGBTQ+ identities in AI training data have been reduced or eliminated. 

The demographics of the LGBTQ+ users who contribute data, or of the teams that develop AI, can be difficult to measure, as people have anxieties about the consequences of being outed (IBID). Many therefore disengage, choosing to remain anonymous or not to take part in LGBTQ+ communities or online content. Representation within the AI is affected as a result, and it does not accurately reflect the experiences and thoughts of LGBTQ+ communities. 

A true representation of the LGBTQ+ experience, and of the identities within LGBTQ+ communities, cannot be accurately captured. Identifying as LGBTQ+ is a rejection of binaries and of traditions about how people should be or what they should do, and each person's experience is unique to the individual (Wareham, 2021). AI learns through patterns and correlations, and therefore cannot learn the uniqueness of these communities, whose identities are complex and cannot be standardised. 

Risks and Harms 

AI training data not only lacks representation of LGBTQ+ communities but is also filled with anti-LGBTQ+ sentiment. Systems can filter abuse out of comments, but the filtration itself can be anti-LGBTQ+ and censor these communities further, because the data is skewed in what it classifies as abuse. For instance, terms such as 'dyke' and 'queer' were historically used as abusive terms for lesbians and the wider LGBTQ+ community, and that history is reflected in the data. These terms have since been reclaimed by their respective communities; however, because hate is over-represented relative to positive representation, the AI learns to censor the terms and ignore content containing them, limiting information and ongoing conversation within the LGBTQ+ community (IBID). This is harmful, as it continually excludes LGBTQ+ identities from AI and data. 
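The naive keyword filter below (a hypothetical illustration, not any platform's actual moderation system) shows the over-blocking mechanism described above: because reclaimed terms appear so often in abusive historical data, a crude filter ends up flagging supportive posts that use them.

# Hypothetical blocklist learned from historical, abuse-heavy data.
flagged_terms = {"queer", "dyke"}

def is_blocked(post):
    # A crude keyword filter cannot tell reclaimed, affirming uses from
    # abusive ones - it blocks both.
    return any(term in post.lower() for term in flagged_terms)

posts = [
    "Proud to volunteer at my local queer book club",
    "Weekend hiking photos",
]
print([is_blocked(p) for p in posts])   # [True, False] - the supportive post is censored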

The inability to reflect LGBTQ+ communities can hurt the quality of AI outputs for those communities. This is a particular issue in healthcare, where LGBTQ+ people face more barriers to care and where the social stigma attached to their identity can affect their health. For instance, the stigmatisation of LGBTQ+ identities can cause health and mental health issues that are complex and differ from normative healthcare (Degeest, 2022). The lack of representation can therefore harm the LGBTQ+ community and leave its needs inadequately supported. 

Opportunities  

AI developers recognise that there are gaps in representation and that their AI can be weaponised against LGBTQ+ communities to spread hate, and have therefore put guardrails in place to limit harm. Although it can censor people within the community, the filtration system has prevented harmful ideologies from spreading. OpenAI hired a team of diverse individuals to 'break' the AI and find its blind spots (The Financial Times, 2023). Collaboratively, they have been able to create systems that help prevent harm towards LGBTQ+ people and support a society that is inclusive of LGBTQ+ experiences, through neutral information on the LGBTQ+ community. 

Transgender 

Representation 

The representation of transgender communities is difficult to identify, as the transgender community has traditionally been excluded from data. In demographic data, for instance, transgender identities have historically been excluded because information on gender is collected as a binary. Data on transgender identities is therefore not readily available and cannot be truly represented in AI, creating a cisgender bias (Perera, 2022). 

Like the wider LGBTQ+ community, transgender communities have been censored online, with posts removed, flagged as harmful or made nearly invisible (Williamson, 2023). They have been further excluded and marginalised from the narrative where the media, such as the news, has censored stories on transgender issues and events (Szego, 2023). This affects the representation of transgender communities in AI training data, as they have been censored from the media that contributes to it. A further consequence is that the data contains a disproportionate amount of negative content and hate towards transgender communities: 82% of discussion items on transgender topics featured abuse (Brand Watch, 2019). These combined factors result in a bias towards cisgender identities. 

The representation of transgender people among AI developers is again difficult to identify, as very little data exists, owing to anxieties about coming out in the tech industry as well as past data practices being binary (Lynn, 2021). The purpose and outputs of AI may therefore not represent or support transgender identities. 

The representation of gender, and the experience of transgender identities, is too complex for AI, leaving it insufficiently able to process gender-non-conforming data or to provide outputs that support transgender people. Gender is a deeply personal experience, which can differ from how gender is expressed and performed (Gayta Science, 2021), so it cannot be captured by the patterns that AI learns and replicates. 

Risks and Harms 

The lack of representation in AI, and the overrepresentation of anti-transgender content, leaves AI biased towards cisgender identities and carrying negative undertones towards transgender topics. For example, in natural language models and media, content associated with transgender topics was tagged as negative or 'toxic' (IBID). When asked "what is a woman?", ChatGPT responds with a medical definition that disregards psychological and social models (Farmer, 2023). This can further harm society by perpetuating abuse and anti-trans rhetoric, putting transgender communities at risk. 

The combination of a lack of representation in development teams and AI's inability to fully comprehend the complexity of gender results in AI that cannot accommodate or serve transgender communities. For instance, there are numerous examples of AI misgendering people through binary 'Auto Gender Recognition' technology (Feeney, 2022). AI also struggles to recognise faces that are in transition or whose features are changing. This contributes to discrimination against people who do not express gender within cisgender norms, resulting in technology failures and financial losses where jobs and discounts depend on facial recognition (Gayta Science, 2021). 

Opportunities 

Although AI can perpetuate gender norms and contribute to discrimination, it can also have a positive impact on transgender people through gender euphoria. Image AI, and AI that can alter voices, has supported transgender people in presenting how they want to be seen and heard, giving them empowerment and euphoria (Maxwell Keller, 2022). 

Political Belief 

Representation  

Several factors influence political belief, including age, gender, religion, family, ethnicity and region. If representation across these categories is skewed, political biases arise within the data (Meyers, 2023). From the representation explored above, in both the data and among AI developers, it can be argued that the representation of political belief is skewed towards the Western, liberal ideals of younger populations. 

Representation of political views is also influenced by content moderation and filtration systems. One study found that content with right-wing views was more likely to be removed, because it contained swearing or material deemed harmful (Jiang, Roberson & Wilson, 2020). As a result, there is more left-wing content in the training data. 

Another factor in the political bias of AI is the intervention of global bodies, such as the United Nations, which create safeguarding principles to protect the public (Meyers, 2023). These bodies are dominated by Western powers, through positions on leading committees or connections stemming from historical colonialism, and are widely considered embodiments of liberal internationalism. Their safeguarding procedures therefore impose an inherent political bias towards liberal ideals in the AI. 

Risks and Harms 

The skewed representation of the various characteristics above produces left-leaning data and responses. One example of this influence is younger generations being overrepresented in the training data and being more likely to hold liberal views (IBID). This has skewed responses in favour of certain political parties and figures: for instance, AI has shown support for Joe Biden in the US at greater than 99 per cent, which is mismatched with public opinion polls (IBID). Users have also found that ChatGPT refuses to provide responses on certain political leaders on the right-wing spectrum, leading right-wing politicians to accuse AI of being 'woke' (Robertson, 2023). This poses risks through the spread of misinformation and the influencing of political views from a biased standpoint. 

The Western ideals that dominate the training data and development teams are embedded into AI. Although AI is used globally, it reflects the Western world. The imposition of Western ideals on a global audience could therefore be considered neo-colonial, as it does not accurately consider the views of local people (Miller, 2022). This could have harmful social impacts and perpetuate social and global inequality. 

Opportunities 

Although AI has its political biases, it can support political equality by providing equal access to knowledge on political topics and by summarising the positions of political parties. This supports informed decision-making, improves political engagement and gives the public more agency (Zekos, 2022).
 

3. Conclusions 

The use of AI in decision-making is becoming increasingly popular as companies look to overcome the limitations of human processing power. AI can extract insights from data that would otherwise be obscured when data is summarised for human processing, and it can make more consistent and objective decisions. The goal of an AI-driven workflow is not mere automation, but making better decisions than humans alone can. As companies continue to adopt AI-driven workflows, we can expect to see significant improvements in business outcomes. 

However, due to issues of representation, historic inequality, and the location of AI companies, AI is biased. Its outputs are catered towards the majority in the Western world and can create and perpetuate inequality, even when its purpose is to neutralise and debias systems. The impact on decision-making can therefore be biased and exclude certain communities. 

Although there are significant drawbacks to AI, there are instances where it can support humans and have positive impacts on minoritised communities. What makes us human, however, is a complexity that is constantly developing, and AI systems cannot (currently) understand it. AI therefore requires humans, working in collaboration with diverse communities, to spot and amend its flaws. 

It is important to understand the biases within AI in order to use it as a supportive tool, to know the limitations of these systems, and to develop ways to mitigate harm. AI companies and the wider tech industry acknowledge the gaps in representation and the limitations of their systems, and use diverse teams to test AI and remove the harms it could cause. 
 

Takeaways 

This white paper has analysed the characteristics above using three initial questions to understand AI, D&I, and their impact on decision-making and society. These initial questions can act as the foundation for an equality impact assessment of AI. However, to fully understand and assess the impact of AI on D&I and business practices, further questions need to be asked: how can we fill the representation gap, and how can we reduce the risks of harm? 

Questions for an impact assessment of AI:  

  • What is the representation of that characteristic in the data set?   

  • What are the harms and risks of AI to that community? 

  • What opportunities can AI provide to support the community? 

  • How can we fill the representation gap? Or how can we reduce the risks of harm? 

 

4. Equality Impact Assessment Summary: Table for Quick Reference 

The table below is a summary of the assessment above:

 

Written by Yani King and Dylan Francis at Diversifying Group
