Written by Nicola Gater | 28th August 2024

Generative AI is rising fast in popularity. This technology can answer your questions, write copy and generate new images, video, audio and even code from your prompt or request.

An AI tool you may already be aware of is ChatGPT, a chatbot that can generate human-sounding responses to questions. Within two months of its launch in late 2022, it became the fastest-growing consumer application in history. So it’s no surprise that organisations today are exploring the potential benefits generative AI can deliver, including improved employee productivity, and the impact it can have on the future of work.

Employees might be using ChatGPT to write emails and reports, draft blog content and social posts, or design graphics. Many other generative AI tools work in much the same way, enabling new ways of working and cutting out the repetitive tasks that consume so much of employees’ time, freeing them to focus on projects that really add value to your organisation.

While generative AI is capable of carrying out some tasks for us, it is by no means a replacement for an employee or their role. The use of generative AI in the workplace also raises legal considerations, and it’s important to ensure your employees are using AI sensibly and not putting your business, or others, at risk.

Here are our suggestions on how to manage the use of generative AI in your workplace:

Have an AI policy

Publishing a carefully considered AI policy lets employees know exactly how they are allowed to use generative AI tools. It should include:

- what is and isn’t acceptable use
- your business’s stance on data protection and an explanation of relevant privacy laws
- the disciplinary action that may be taken if the policy is violated
- clear guidance on how to safeguard intellectual property.

You can include stipulations, such as allowing employees to use ChatGPT to create some types of documents but not others.

You could clarify expectations in an AI policy about when employees should disclose their use of AI. For example, are employees required to let managers know if they use AI to write a report?

Because AI is constantly changing, you may want to include a ‘subject to change’ clause in your policy and update it regularly. Keep the conversation around AI going, sharing real-life examples of how it is being used, so your teams are always up to date on what is and isn’t acceptable.

Use multiple communication channels such as company-wide emails, intranet platforms, team meetings and workshops to enhance the reach and understanding of your policy.

Train employees on how to use AI

To support your AI policy, you will want to deliver training that familiarises employees with generative AI software, in particular the tools you allow them to use (e.g. ChatGPT), and its impact on their roles and the business. Training sessions could cover topics such as the capabilities and limitations of AI, as well as how to use it responsibly.

You will also want to give employees the knowledge and skills to carefully review any content generated by AI and to spot bias, which can creep in through the machine-learning process. OpenAI, the company behind ChatGPT, has acknowledged that the model may agree with a user’s opinion over the course of an interaction, swaying its answers away from fact. There are also concerns that tools like ChatGPT can exhibit political, racial and gender bias. When researchers at the London School of Economics asked ChatGPT to write performance reviews for employees in the same role, it gave more favourable reviews to male employees. If you decide to use ChatGPT for tasks like this, having a real person monitor and review its output is essential.

Review your business’s approach to data protection

Businesses have certain legal obligations when it comes to controlling and processing the personal data of their employees, clients and other stakeholders, and they will also want to carefully protect their own intellectual property and confidential commercial data. It is easy for employees who are not aware of how AI works to input names, addresses, telephone numbers and other sensitive information into a generative AI tool. For example, managers may do this to generate appraisal feedback or annual pay review letters, and business development teams might upload pricing and product information when generating proposals or presentations.

Make sure you know how your employees are using generative AI tools: information uploaded to a public tool may be used to train the underlying model and could surface in responses to other users outside your organisation.

Carry out a Data Protection Impact Assessment (DPIA) to assess privacy risks, identify appropriate safeguards and ensure compliance with UK GDPR when processing personal data using AI tools. A DPIA must document the nature, scope, context and purpose of your processing of any personal data, and clearly outline how and why AI will be used to process it.

The rise of generative AI shows no signs of slowing down, so it’s never been more important for employers to take its use seriously and ensure their people are working with it responsibly.

If you have any questions or would like support creating an AI policy, please get in touch with our team at [email protected]