12 Dec 2023
Diversifying Group’s Playbook for Using AI in the Workplace
A white paper from Diversifying Group
Diversifying Group’s playbook for using AI in the workplace explores the integration of Artificial Intelligence (AI) into tools and applications that support users and businesses with day-to-day tasks, analysing its potential uses, benefits, risks and risk mitigations. The analysis is conducted through a diversity and inclusion lens and therefore focuses on risks around bias and sensitive data. The paper also sets out protective diversity practices that businesses can implement when using AI.
The playbook will explore AI in the workplace in the following sections.
AI in Email Writing
AI in email writing has the potential to revolutionise the way we compose and manage emails, offering numerous benefits to users. AI can support all businesses and staff in composing emails, providing suggestions on tone and spotting grammatical errors. The AI’s ability to analyse and highlight patterns in systems can further support email management, by categorising and prioritising emails.
The integration of AI in email writing brings about a multitude of benefits that significantly enhance the email experience for users. One of the primary advantages is the time savings it offers. AI streamlines email composition through content suggestions and grammar checks, reducing composition time and reviewing. AI algorithms can also support the management of emails through categorisation and prioritisation of emails based on their content, urgency, and relevance, streamlining email organisation, and increasing efficiency. This can positively impact businesses as AI allows users to redirect their focus towards more high-value tasks, increasing overall productivity.
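The categorisation and prioritisation described above can be illustrated with a deliberately simple sketch. Real AI inboxes use learned models; the keyword lists, weights and example emails below are purely illustrative assumptions, not a production approach.

```python
# Minimal sketch of rule-based email triage. The keyword lists and
# scoring weights are illustrative assumptions, not a trained model.
URGENT_WORDS = {"urgent", "asap", "deadline", "overdue"}
LOW_PRIORITY_WORDS = {"newsletter", "unsubscribe", "promotion"}

def priority_score(subject: str, body: str) -> int:
    """Score an email: higher means it should surface sooner."""
    text = f"{subject} {body}".lower()
    score = 0
    score += 2 * sum(word in text for word in URGENT_WORDS)
    score -= sum(word in text for word in LOW_PRIORITY_WORDS)
    return score

def triage(emails: list[dict]) -> list[dict]:
    """Sort emails from highest to lowest priority."""
    return sorted(emails,
                  key=lambda e: priority_score(e["subject"], e["body"]),
                  reverse=True)

inbox = [
    {"subject": "Weekly newsletter", "body": "Click to unsubscribe."},
    {"subject": "URGENT: invoice overdue", "body": "Please pay ASAP."},
]
print(triage(inbox)[0]["subject"])  # the overdue invoice surfaces first
```

A learned system replaces the hand-written keyword scores with patterns inferred from each user's behaviour, but the triage step itself, scoring then ordering, is the same.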
AI-generated suggestions and corrections can play a vital role in enhancing communication. Users can benefit from AI's ability to offer real-time, relevant suggestions, leading to more effective and professional email exchanges. AI ensures consistency in language and tone across emails. This can support businesses in promoting cohesive brand identity and professionalism in all communications.
AI email writing also lightens the workload for users by handling routine tasks, such as auto-completion and grammar correction. This automation reduces the burden on individuals and teams, freeing up valuable time and mental energy for other essential responsibilities. This can positively impact the business by streamlining time management, with a longer-term impact on innovation as more time is spent on creative, high-value tasks.
Another advantage of AI in email writing is its potential to contribute to users' language skills. Through frequent AI feedback and suggestions, users can gradually improve their language proficiency, leading to better-written emails and enhanced communication overall. This can support overall brand perception and ensures that communications can be understood by a wide audience.
The language support AI provides can particularly support people who have communication and language disabilities to write clearly and concisely. The AI can support disabled users in detecting and rectifying grammar, spelling, and syntax errors, leading to improved email quality. In addition to this, AI can analyse texts to offer suggestions on improving engagement and clarity. This can positively impact disabled users by building confidence in their writing and removing barriers in their work.
The integration of AI in email writing comes with several inherent risks and challenges that demand careful consideration. One significant concern is the risk of biased data. AI models can inadvertently learn from biased datasets, leading to the perpetuation of stereotypes and discriminatory language in email writing. This can harm businesses, as communications could exclude audiences and lead to poor brand perception. For instance, testing of AI copywriting showed that in some instances initial AI responses contained harmful ideas and language (Wired, 2021). Another example is Microsoft’s Twitter chatbot, Tay. Designed to learn casual conversation, it became racist and sexist in less than 24 hours because it learned from data taken from Twitter.
Privacy concerns are another critical issue when using AI in email writing. As AI algorithms process and analyse email content, sensitive information may be exposed, raising potential privacy risks for users. Furthermore, the deployment of AI systems handling emails presents security challenges. Cyberattacks targeting AI algorithms can lead to data breaches and unauthorised access to sensitive information. This can be harmful as users could be identified, breaching organisational policies and data laws, as well as posing monetary risks to businesses.
Lastly, there is a risk of overreliance on AI in email writing. Users might become overly dependent on AI-generated suggestions and corrections, leading to a decline in their essential language and communication skills. This could have a negative effect on businesses that become over-reliant on the technology: should the tools fail or become unavailable, the loss of self-reliance will lead to errors and inefficiencies.
Mitigation of Risks:
Effectively addressing the biased data risk associated with AI in email writing is of paramount importance to ensure fair and unbiased communication. One approach is to use diverse and representative training data during the development of AI models. By incorporating a wide range of sources and perspectives, potential biases can be minimised, leading to more equitable AI-generated content. This process entails careful curation and vetting of training datasets to avoid reinforcing existing stereotypes or discriminatory language.
Mitigating privacy and security concerns related to AI in email writing is crucial to instil user confidence and safeguard sensitive information. A fundamental measure for mitigation is the implementation of robust encryption techniques. By encrypting sensitive email data both during transmission and storage, organisations can significantly reduce the risk of unauthorised access or data breaches. These initiatives should also be supported by policies that are compliant with data and privacy regulations, to ensure protective practices are followed. This includes practices on what types of data are collected and minimising data collection to what is necessary.
To combat overreliance on AI in email writing, a multifaceted approach is essential to strike a balance between technology assistance and human skills. Organisations should invest in training and education initiatives to empower users to maintain their language and communication skills. By providing resources, workshops, and skill-building programs, users can develop a deeper understanding of effective communication and ensure they remain adept at composing thoughtful and personalised emails, even with AI assistance. Organisations should also actively communicate the limitations of AI and its role as a tool rather than a substitute for human judgment. Encouraging users to critically review AI-generated content and apply their discretion when accepting suggestions ensures that they maintain their agency in email communication. This can ensure more meaningful and authentic email interactions.
AI in Copywriting
Copywriting is a crucial aspect of modern marketing and communication strategies. With advancements in AI technology, businesses are increasingly turning to AI-driven solutions to enhance their copywriting processes. AI can generate content at scale and speed and has the potential to improve creativity and personalisation.
AI-powered copywriting tools offer a range of advantages that enhance businesses' marketing endeavours. First and foremost, these tools significantly boost efficiency by generating high-quality content at a rapid pace, saving valuable time and resources. A business that can re-allocate this effort to new, innovative campaigns gains an advantage over its competitors.
AI can analyse vast amounts of data rapidly, understanding data trends and patterns that inform practices. The AI can analyse customer data to facilitate personalised content creation, resulting in improved customer engagement and higher conversion rates. Additionally, AI-driven A/B testing empowers companies to fine-tune their messages and optimise communication strategies for better outcomes.
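The A/B testing mentioned above rests on a standard statistical comparison. As a minimal sketch, the two-proportion z-test below compares click-through counts for two email variants; the conversion figures are invented for illustration.

```python
import math

# Sketch of the statistics behind A/B-testing two pieces of copy:
# a two-proportion z-test on click-through counts. All figures are
# invented for illustration.
def two_proportion_z(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Variant A: 120 clicks from 2,400 sends; variant B: 165 from 2,400
z, p = two_proportion_z(conv_a=120, n_a=2400, conv_b=165, n_b=2400)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p suggests B genuinely outperforms A
```

AI platforms automate the variant generation and the traffic split, but the underlying decision of whether one message measurably outperforms another comes down to a test of this kind.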
AI’s training data is derived from different languages. Using the multilingual capabilities of AI can open doors to global expansion, allowing businesses to connect with diverse audiences in their preferred languages. This can support further business development as well as innovation.
Despite the undeniable benefits, AI copywriting does present some challenges that require careful consideration. One of the primary concerns is the potential for bias in language. If AI models are trained on biased datasets, the generated content may inadvertently reinforce stereotypes or discriminate against certain groups, leading to negative consequences for brands and customers alike. AI may also exclude communities whose languages are less well supported by the models, leading to miscommunication or misinformation. This can harm brand perception and trust.
Data security is another crucial aspect to address, as AI-driven platforms often rely on user data. Inadequate measures to protect this data can lead to breaches, compromising customer privacy. In terms of diversity, this may lead to users with protected characteristics being identified. Although data is initially anonymised, multiple datasets can be combined to identify individuals. This creates privacy risks and can harm trust in the organisation.
Relying on AI for copywriting may also carry risks of plagiarism. Using AI-generated copy may result in unintentional plagiarism, as the content can closely resemble copyrighted material. Language models such as ChatGPT draw on source material that may be copyrighted and is difficult to trace back to its owner. This risk can expose businesses to legal liabilities and extra costs.
Mitigation of Risks:
To minimise bias in language, companies must ensure that AI models are trained on high-quality and diverse datasets, representing a wide range of perspectives. Although AI can create copy for content, using the ideas and input of diverse communities can form inclusive training data where all communities are included and reflected. This can be further supported with regular audits that can identify and rectify any biases present in the content generation process.
Protecting user data requires robust anonymisation techniques that prevent any linkage between sensitive information and individual users. These protections should be communicated organisationally and set out in data policies. Keeping a ‘near misses’ log and carrying out risk assessments can further inform practices and continue to protect data.
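One common anonymisation technique is pseudonymisation: replacing direct identifiers with stable, non-reversible tokens before data enters an AI pipeline. The sketch below uses keyed hashing from the Python standard library; the record fields are illustrative assumptions, and real deployments would manage the key in a dedicated secrets store.

```python
import hashlib
import hmac
import secrets

# Sketch of pseudonymisation before data reaches an AI pipeline:
# keyed hashing replaces direct identifiers with stable tokens.
# The key must be stored separately from the dataset (e.g. a key vault);
# generating it inline here is for illustration only.
SECRET_KEY = secrets.token_bytes(32)

def pseudonymise(identifier: str) -> str:
    """Replace an identifier (e.g. an email address) with a keyed hash."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"email": "jane@example.com", "engagement_score": 0.82}
safe_record = {
    "user_token": pseudonymise(record["email"]),
    "engagement_score": record["engagement_score"],
}
# The token is stable (same input, same token) so analytics still work,
# but it cannot be reversed without the key, reducing the risk of
# re-identification when datasets are combined.
```

Keyed hashing alone does not make combined datasets safe, as the paper notes, linkage attacks remain possible, so it should sit alongside data-minimisation policies rather than replace them.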
Employing advanced plagiarism detection tools is essential to verify that AI-generated content does not infringe on existing copyrights. Moreover, incorporating human oversight into the AI copywriting process can help align content with brand values and ethical guidelines.
AI in Prioritisation and Time Management
AI has been supporting workers and businesses by prioritising and coordinating tasks and projects that shape the scheduling of workflows and daily work calendars. Users can ask AI for an hour-by-hour schedule of the tasks that need to be completed on that day and manage long-term projects. The schedule can then be adjusted to the priority of the task and time taken, depending on the data the user has fed the AI.
Using AI to support prioritisation and time management can streamline work and create efficiencies. For instance, AI can be used to schedule staff onto projects, streamline elements of projects and analyse the time taken to meet demands. This can be particularly impactful in project-based and modular work involving multiple stages and teams. As AI is based on historical data, it can predict the time staff need to complete a task, supporting better planning, and can prioritise work based on past interactions. AI can also remove certain biases from scheduling, such as favouritism, creating fairer schedules that are geared towards efficiency.
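The hour-by-hour scheduling described here can be sketched as a simple greedy algorithm: take tasks in priority order and fit them into the working day. The task names, durations and priorities below are invented, and real assistants use far richer models of effort and context.

```python
from datetime import datetime, timedelta

# Sketch of the greedy scheduling an AI assistant might apply:
# highest-priority tasks first, fitted into the working day.
# Tasks are (name, duration_hours, priority); all values illustrative.
def build_schedule(tasks, day_start="09:00", day_end="17:00"):
    start = datetime.strptime(day_start, "%H:%M")
    end = datetime.strptime(day_end, "%H:%M")
    schedule = []
    cursor = start
    for name, hours, priority in sorted(tasks, key=lambda t: -t[2]):
        finish = cursor + timedelta(hours=hours)
        if finish > end:
            continue  # task does not fit today; defer it
        schedule.append((cursor.strftime("%H:%M"),
                         finish.strftime("%H:%M"), name))
        cursor = finish
    return schedule

tasks = [("Write report", 3, 2), ("Team stand-up", 0.5, 3), ("Inbox triage", 1, 1)]
for slot in build_schedule(tasks):
    print(slot)  # e.g. ('09:00', '09:30', 'Team stand-up')
```

The weakness the paper goes on to describe is visible even here: the algorithm only sees durations and priority numbers, not the human context behind them, which is why AI-generated schedules need human review.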
AI used as a scheduling tool can support neurodivergent people, who can often face challenges with time management and prioritisation. AI can support them in creating work schedules and reminders for their specific needs and time requirements. The tools can also be integrated into current work systems, such as Outlook, to create reminders and alerts. This can help people focus and manage their workflows and daily lives effectively, leading to greater independence and wellbeing.
The schedules created by AI can incorporate time for personal aspects of life as well as work. For instance, they can include time for breaks, picking up children from school and spending time with others. This can be supportive for carers and parents, as well as those with long-term health conditions, supporting overall work-life balance. A potential impact is improved gender equality through the retention of female staff, who traditionally take on caring roles, as their responsibilities are managed and supported. Staff with disabilities can likewise be retained as their needs are accommodated.
There may be risks to security in AI being incorporated into business systems and programmes. AI, such as ChatGPT, retains data to train its systems. Therefore, private company practices and projects may be exposed, through AI training data and its results. In addition to this, there may also be a risk to the private personal data of staff who use it to support their needs, which can lead to them being identified through reverse engineering. These risks can therefore have negative financial and reputational impacts.
AI recognises patterns based on historical data and can lack the context of the situation or the human elements that impact scheduling. For instance, project managers make strategic decisions because they have the context of the full project, an understanding of its human nuances and knowledge of staff and their abilities. A risk, therefore, is that AI-generated schedules are ill-matched to the context of the work and people's situations, exerting pressure that could be detrimental to wellbeing.
The lack of human context may pose a risk to diversity. People who are neurodivergent use AI as a tool to schedule their work and keep them on track. However, AI is biased towards neurotypical people. Therefore, AI that supports neurodivergent communities may not consider their needs. The scheduling may have a negative impact where prioritisation and time management issues are exacerbated, risking wellbeing.
Mitigation of Risks:
A policy on AI, data and IT systems can mitigate risks by providing an understanding and guidance on how AI can be used, and what data can be inputted into AI. For example, guidance on avoiding the use of personal data when using AI. This can manage data and avoid data leaks that could damage an organisation.
Data audits can further support risk mitigation and policies by understanding what types of data are being used with AI. An audit may also be able to highlight areas of development in line with current trends. This can help manage AI and its risks, creating further protective practices.
AI can benefit prioritisation and time management by supporting schedules. However, it should not be relied on completely, as its lack of human context may negatively impact staff wellbeing and cause project delays. Schedules should therefore be reviewed and edited for human and individual contexts.
AI in Data Entry
Data entry is a critical component of data management, often involving repetitive and time-consuming tasks. AI technology has the potential to revolutionise this process, offering automation, accuracy, and increased efficiency.
AI-driven data entry solutions offer a wide range of advantages that revolutionise the way organisations handle their data. First and foremost, the introduction of AI can enhance accuracy by reducing human errors. Through automation, AI systems achieve a level of precision that surpasses traditional manual methods, leading to more reliable datasets. This improvement in data quality enables businesses to make well-informed decisions, driving growth and optimising operations.
AI in data entry can boost efficiency by automating repetitive tasks. The speed at which data can be processed and recorded far exceeds what humans can achieve manually. As a result, employees are freed from data entry, allowing them to focus on more value-added tasks that require human creativity and problem-solving. This increased efficiency translates to higher productivity levels and more efficient use of resources within the organisation.
AI can handle vast amounts of data, making it an invaluable tool for organisations dealing with large volumes of data or experiencing data spikes during specific periods. Whether businesses face constant data growth or periodic fluctuations, AI can accommodate these requirements, providing a reliable and scalable data entry solution that further supports business efficiency and accuracy.
The capabilities of AI extend beyond automation. Machine learning algorithms empower AI systems with the ability to learn continuously. As these algorithms process more data, they refine their accuracy and performance over time. This iterative learning process ensures that the AI remains relevant and effective in handling data entry challenges in the long run. The AI's continuous learning capability enables it to adapt to changing data patterns and further enhances the precision and efficiency of data entry tasks.
While AI brings remarkable benefits to data entry, it also introduces certain risks that demand attention. One of the primary concerns is the potential for bias in the data. If AI models are trained on biased datasets, they may inadvertently perpetuate existing biases in the data entry process. This can lead to skewed results and reinforce unfair decision-making. For instance, Amazon’s AI hiring system was famously shut down because the program replicated biases in the data, skewing the outcomes by discriminating against women. Overall, this can potentially affect various aspects of an organisation's operations, including hiring practices, customer interactions, and resource allocation.
Another critical risk associated with AI data entry is data security. AI-driven data entry systems require access to sensitive information, making them attractive targets for cyberattacks and data breaches. The compromise of such systems could expose confidential data, leading to severe consequences such as financial losses, legal liabilities, and damage to the organisation's reputation. Ensuring robust data security measures becomes imperative to safeguard against potential threats and maintain the integrity of sensitive information.
Additionally, the increased reliance on AI for data entry raises privacy concerns. Storing and managing vast amounts of data through AI raises the risk of unauthorised access and potential violations of user privacy and data protection laws. Organisations must prioritise privacy-focused policies and adhere to data protection regulations to safeguard user data and maintain compliance with applicable laws.
Mitigation of Risks:
Mitigating risks in AI data entry is essential for responsible and secure data management. Organisations can adopt effective strategies to address potential challenges and ensure the integrity of their data-driven operations.
A key concern is the potential for bias in AI data entry. To minimise bias, organisations must use diverse and representative training data that accurately reflects the demographics and characteristics of the target population. Conducting regular bias audits enables businesses to proactively identify and rectify biases in AI-generated data, promoting fairness and inclusivity in decision-making.
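A bias audit of the kind described above often starts with a simple comparison of outcome rates across groups. The sketch below applies the "four-fifths" guideline, a common heuristic from employment selection analysis rather than a legal threshold; the group names and counts are invented.

```python
# Sketch of a simple bias audit on AI-assisted decisions: compare
# selection rates across groups against the "four-fifths" guideline
# (a widely used heuristic, not a legal test). Counts are invented.
def selection_rates(outcomes):
    """outcomes: {group: (selected, total)} -> {group: rate}"""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_check(outcomes):
    rates = selection_rates(outcomes)
    best = max(rates.values())
    # Flag any group whose selection rate falls below 80% of the highest
    return {g: rate / best >= 0.8 for g, rate in rates.items()}

audit = four_fifths_check({"group_a": (45, 100), "group_b": (28, 100)})
print(audit)  # group_b fails the 80% guideline (0.28 / 0.45 ≈ 0.62)
```

A failed check does not prove discrimination, but it flags where AI-generated outcomes warrant closer human investigation, which is the purpose of a regular audit.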
Data security is paramount in AI data entry. Implementing robust security measures, including encryption, access controls, and authentication protocols, helps safeguard sensitive information from cyber threats and unauthorised access. Compliance with data protection laws and the implementation of privacy-focused policies protect user data and foster trust with customers and stakeholders.
Human oversight plays a crucial role in error detection and correction. Incorporating human reviewers in the AI data entry process ensures the accuracy and reliability of the final data entries. Human oversight also helps the AI system operate within ethical guidelines and maintain high data quality.
AI in Generating Ideas and Researching
Large language models, such as ChatGPT, can be used to research topics, contributing to idea generation and brainstorming. Users can provide AI with starting prompts for a core concept or idea, where users can continue to explore results and refine the answers to inspire creativity.
AI can make research more effective: large language models such as ChatGPT provide a summary of results in a narrative, rather than requiring users to search manually through a list of links. This can speed up the research phases of projects, so more time can be dedicated to ideas and implementation, positively impacting business efficiency.
The ability to summarise can also support the understanding of different concepts and ideas. This can support people who are neurodivergent by providing them with a clear understanding, explained in formats that are accessible to them. This can lead to greater inclusion, where their views contribute to decisions.
As AI is informed by data from across the internet, there are ideas and practices from different business sectors. Therefore, AI can support brainstorming by collecting ideas from multiple sectors, positively impacting organisations to create innovative business practices. AI is a pattern recognition tool and can therefore analyse data and highlight trends. The AI can further support brainstorming where decisions and ideas are data-led. This can impact business practices and activities where they are innovative and engaging towards their audiences and key stakeholders.
Using AI in research should come with some cautions. Large language models can draw on large amounts of data and summarise results into a narrative; however, some results may include “hallucinations”, where answers have been made up by the AI. This is because the AI will prioritise outputs that make narrative sense over accurate results. This can lead to inaccuracies and poses risks to decisions based on false results.
Although AI research can be useful in having a summary of topics, there are issues with its transparency and intellectual property. ChatGPT derives data from multiple sources and the intellectual property of others, and the result has little clarity on the sources or citations. The use may result in issues with copyright and intellectual property, which can have financial and reputational implications. Therefore, the results need to be used carefully to avoid copyright infringement or damages to intellectual property rights.
AI in research has risks associated with its biases. The data behind ChatGPT, for instance, is drawn from users who have internet access and are represented in the spaces the data comes from. It therefore under-represents multiple minoritised communities, such as older people, women, people with disabilities and people from minoritised ethnic backgrounds. In addition, much of the data that shapes AI comes from sources written in English, which biases results towards Western ideas and practices. This poses a risk to decision-making and research, as unrepresentative results can make business practices and activities exclusionary.
AI supports innovation by highlighting trends; however, this function can be limited. AI models have knowledge cut-off dates. For instance, GPT-4’s knowledge cut-off date is September 2021. The AI will therefore lack any knowledge of current events, data, and trends. This can impact innovation, as ideas are based on prior years rather than current engagement activities.
Mitigation of Risks:
Hallucinations and copyright infringement pose significant risks to organisations. In research, therefore, AI use should be limited to summarising and understanding topics, providing a foundation that informs further independent research using original material for citation.
Representation in the data bias may harm brainstorming where communities may be unintentionally excluded. It is therefore important that diversity is included in brainstorming conversations, and AI is not heavily relied on. If brainstorming results in new practices and policies, it is important to complete an equality impact assessment to ensure there are no adverse effects on diversity and inclusion.
Overall Risks to Diversity
In all areas where AI is used in the workplace, there are two common diversity risks. These are AI job losses and data privacy.
Studies have highlighted that 40% of working hours could be replaced with AI, putting approximately 300 million jobs at risk (CNN, 2023). Many of the jobs at risk are language-based, administrative and analytical roles, particularly clerical and secretarial work (World Economic Forum, 2023). These roles tend to be filled by minority ethnic people and women. For instance, the McGregor-Smith Review found an over-representation of people from Black and Minority Ethnic backgrounds in clerical and secretarial roles (McGregor-Smith Review, 2017). A study by the European Union found that in the third quarter of 2021, 66% of clerical support workers were women (Eurostat, 2023). The risk of AI-related job losses therefore greatly impacts diverse communities, as minority communities are potentially over-represented among those at risk of losing their jobs. This threatens overall diversity as well as creating inequality.
AI can be a powerful tool in data analytics. However, organisations need to be cautious about what data is inputted into public AI systems, such as ChatGPT. Data inputted into these systems may be retained by the provider to further train the AI and may therefore resurface in results, posing a risk of exposing private trade secrets, intellectual property, and sensitive personal data. As well as these business risks, there is a particular risk to diversity if sensitive data is retained and used in AI results, as individuals could be identified through reverse engineering (World Economic Forum, 2022). This potentially risks the safety of individuals and creates legal implications for the organisation.
Practices to Protect Diversity When Using AI
The development and evolution of technology and search engines are heading towards greater use and integration of AI. Although there are hesitancies due to its risks and biases, organisations have opportunities to understand the potential of AI and create practices and procedures that manage AI and its risks, while protecting diversity. The following are practices organisations can adopt to manage AI and protect diversity:
Talk About the Use of AI
Talking about AI openly and transparently can mitigate data risks and remove anxieties associated with AI usage and potential job losses. Without discussion, staff may use AI in their work, potentially risking data breaches and becoming overly reliant on AI. Having open discussions can share best practices and raise awareness of the limitations of AI.
In addition to this, AI has been a popular topic in the media, with stories of job losses and potential impact. Having conversations on how AI is used and how it can be managed can support staff wellbeing, by addressing concerns.
These are questions that can help shape a discussion on AI:
How can AI support your role/team?
What are the benefits?
What are the risks?
Are there any risks to inclusion or are there barriers to inclusion?
How can we mitigate those risks?
Ensure There are Protective Policies
Having policies can provide an understanding of AI, its uses and how risks can be mitigated. Policies can also provide reassurance to people that they can grow with the technology.
The following are policies that can support AI in the workplace, whilst protecting staff and diversity:
An AI policy can provide an understanding of how AI can be used, raise awareness of the risks of using AI, outline AI biases, and know how to mitigate the risks.
A people policy can support AI in the workplace by ensuring that jobs and diversity are protected. Protection can be included in HR policies related to reskilling and the development of people.
Data policies that include sections of AI can support managing the risks associated with AI and protecting against data exposure.
Complete AI Audits and Assessments
As part of AI management, regular audits and assessments of AI use should be carried out. Regular audits can ensure that policies are applied, identify any emerging risks, and keep the organisation agile to new developments.
Assessments through a diversity and inclusion lens can also ensure that diversity is included at the start, ensuring that AI practices avoid biases and there are no implications for diversity. A review of an assessment should look at where the AI has been applied, the benefits, and risks to each protected characteristic, and what actions can be put in place to ensure inclusion is protected.
The integration of AI in various domains, including email writing and data entry, holds immense promise for enhancing productivity, efficiency, and user experiences. By leveraging AI-generated content and automated processes, businesses and individuals can achieve greater communication effectiveness and data management capabilities. The benefits of AI in these areas range from time savings and improved accuracy, to personalised communication and data analysis at scale.
However, it is essential to acknowledge and address the potential risks associated with AI adoption. Bias in language and data, as well as privacy and security concerns, require proactive measures to ensure responsible AI usage. By carefully curating diverse and representative training data, conducting bias audits, prioritising user privacy and data security, and having open conversations around the uses of AI in business, organisations can mitigate these risks and foster a more inclusive and secure AI environment.
As AI technology continues to advance, ongoing efforts to address challenges and responsibly implement AI solutions will be critical in realising its full potential. By embracing AI as a valuable tool that complements human capabilities, we can optimise our workflows, enhance communication, and elevate our data management practices while upholding ethical principles and user trust.
In adopting AI, businesses need to be aware of the potential diversity implications AI has for the shape of the workforce. It is therefore important that businesses understand these impacts and build practices that protect diversity, avoiding the risk of underrepresented communities being excluded from the workplace. This paper has set out practical steps to ensure these risks are avoided.
In conclusion, this AI playbook has outlined the vast potential of integrating AI into various aspects of our daily lives, including email writing, data entry and research. While AI offers numerous benefits, it also comes with inherent risks, such as bias and data security vulnerabilities. By adopting responsible AI practices, businesses and individuals can harness AI's advantages while ensuring fairness, privacy, and security. The effective integration of AI in these domains can lead to a more efficient, inclusive, and secure communication and data management landscape, shaping the future of our interconnected world. Through protective diversity practices, a balance of AI and human capabilities can be met, and inclusive practices are built from the beginning.
Written by Yani King and Dylan Francis from Diversifying Group