Written by Nicola Gater | 3rd November 2023

Artificial Intelligence (AI) is fast becoming part of our everyday lives. In the workplace we’re using it in different ways, from spreadsheets to social media applications, and recently there has been a massive shift in the tools available to us.

One such tool, ChatGPT – an AI language model in the form of a chatbot that can generate responses that sound just like us – is rising in popularity. Employees can use it to write emails, draft content and even produce CVs and cover letters.

So why are two thirds of UK organisations considering bans on AI applications? While the potential benefits of AI cannot be ignored, the risks around security, privacy and discrimination are making businesses hesitate.

Considerations for the use of AI in the workplace:

Legal and regulatory considerations

When it comes to laws and regulations, there is currently no legislation specifically governing the use of AI in the UK – including in the workplace.

While the EU is introducing tough restrictions on the use of AI – Italy, for example, temporarily banned ChatGPT over privacy concerns – the UK is instead considering issuing guidance on its use, according to a recent Government white paper.

In the absence of specific legislation, it’s still important to understand how existing legal risks and obligations may affect your use of AI.

This includes discrimination in recruitment arising from bias in algorithms. You may have seen in the headlines that Amazon had to scrap an AI recruiting tool: trained on applications from a typically male-dominated tech industry, it taught itself that male candidates were preferable to female ones. Over time, as AI continues to learn, bias can creep in, so if you are going to use it, don't rely on it alone. There should always be an element of human oversight, with algorithms tested regularly. AI should be used to support human decisions, not replace them!
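
If you do want to test regularly, one simple starting point is to compare shortlisting rates between groups. The Python sketch below is purely illustrative – the data is invented, and the "four-fifths" threshold (borrowed from US hiring guidance) is just one possible benchmark, not a legal standard in the UK:

```python
# Illustrative sketch: checking a screening tool's outcomes for adverse impact.
# All data here is invented; in practice you would use real shortlisting decisions.

def selection_rate(decisions):
    """Proportion of applicants in a group who were shortlisted."""
    return sum(decisions) / len(decisions) if decisions else 0.0

# 1 = shortlisted by the AI tool, 0 = rejected (hypothetical numbers)
outcomes_by_group = {
    "men":   [1, 1, 0, 1, 1, 0, 1, 1],
    "women": [1, 0, 0, 1, 0, 0, 1, 0],
}

rates = {group: selection_rate(d) for group, d in outcomes_by_group.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    # "Four-fifths" rule of thumb: flag any group selected at under 80%
    # of the rate of the most-selected group for human review.
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} -> {flag}")
```

A check like this won't prove a tool is fair, but running it regularly makes it much harder for drift like Amazon's to go unnoticed.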

Generative AI such as ChatGPT uses the data it is given to identify patterns and create new data or content. If your employees are uploading company information to AI tools in order to create content, don't lose sight of the fact that they could be sharing confidential and commercially sensitive data with others outside your organisation who use the same tool. If you upload personal employee data to generate annual pay review letters, or product design and pricing information to generate strategic plans, you are sharing this data with the world. Make sure you know how your employees are using AI tools and ensure that use is compliant with GDPR and your own privacy and confidentiality policies.

Data protection legislation will also come into play if you are using AI technology to monitor employees at work, for example through tracking software or remotely monitored webcams.

Introducing an AI policy

To regulate the use of AI apps, you can set out clear parameters through an AI policy explaining when and how employees may use this technology.

The majority of business leaders (68%) agree, and think employees should not use AI without a manager's permission, according to technology authority Tech.co.

Publishing an AI workplace policy lets employees know exactly how they are allowed to use AI tools. It should cover what is and isn't acceptable use; your organisation's stance on data protection and privacy law; the disciplinary action that may be taken if the policy is breached; and clear guidance on how to safeguard intellectual property.

You can include stipulations, such as allowing employees to use ChatGPT to create some types of document but not others.

It will also be important to know what content was produced by AI versus an employee. You could clarify expectations in an AI policy about when employees should disclose their use of AI. For example, are employees required to let managers know if they use AI for any part of their job duties?

You may also want to cover other best practices, such as what information is and isn't OK to put into AI tools, detail any training you offer, or signpost employees to resources where they can educate themselves on the use of AI.

Because AI is continually evolving, it's important to stay up to date on new technologies and any legislation that may emerge. For that reason, you may want to include a "subject to change" clause in your policy and make regular updates as things develop. Keep the conversation around AI alive with frequent team discussions, so your people are always up to date on what is and isn't acceptable.

AI in recruitment

As we mention above, AI does have its risks in recruitment. However, it can also streamline the recruitment process, from creating job descriptions and ads to screening candidates and scheduling interviews.

AI uses algorithms and keywords to sort through CVs and shortlist candidates, which can help speed up the recruitment process.
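
At its simplest, keyword screening works something like the sketch below. The keywords, weights and threshold are invented for the example, and real ATS software is considerably more sophisticated:

```python
# Illustrative sketch of keyword-based CV screening (not a real ATS).

KEYWORDS = {"python": 3, "sql": 2, "teamwork": 1, "agile": 1}  # hypothetical weights
SHORTLIST_THRESHOLD = 4  # hypothetical cut-off score

def score_cv(cv_text: str) -> int:
    """Score a CV by summing the weights of the keywords it mentions."""
    text = cv_text.lower()
    return sum(weight for keyword, weight in KEYWORDS.items() if keyword in text)

# Invented CV snippets for demonstration
cvs = {
    "candidate_a": "Experienced in Python and SQL, enjoys teamwork.",
    "candidate_b": "Strong background in marketing and design.",
}

for name, text in cvs.items():
    score = score_cv(text)
    status = "shortlist" if score >= SHORTLIST_THRESHOLD else "reject"
    print(f"{name}: score {score} -> {status}")
```

Even this toy example shows how a strong candidate could be rejected simply for not using the "right" words – another reason human oversight matters.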

Chatbots and Applicant Tracking Systems (ATS) can also gather data to generate personalised feedback reports for candidates, highlighting their strengths and weaknesses.

Ultimately, AI improves efficiency through the automation of tedious tasks – freeing recruitment teams up to focus more on strategic, big-picture goals.

It has also been suggested that AI can reduce bias during the screening process, as it removes personal judgement when screening applicants. This can help make recruitment more objective and transparent and supports a more diverse workforce.

Despite its benefits, we see AI as a tool to enhance recruitment, not take over the process entirely. While AI can work smartly, it lacks the human input and skills needed to create a robust recruitment process, deliver a positive candidate experience and ensure you have the right fit for the role and your company.

Learn more about AI in the recruitment process in our blog here.

If you have any questions or would like HR Consulting support on developing an AI policy, please get in touch with our team at [email protected].