The recent popularity of ChatGPT has many businesses considering using generative AI tools to improve the way their business operates. However, if you intend to implement AI solutions in your business, it's important to be aware of the risks involved.
What is generative AI?
ChatGPT is an example of generative AI – an artificial intelligence technology that uses algorithms to generate new content, including audio, code, images, text, simulations, 3D objects, and videos, based on patterns learned from existing data.
More specifically, it's a type of generative AI called a large language model (LLM) that's designed to understand and generate human-like language using a text interface.
There are a number of ways to use generative AI in your business, including to:
- improve productivity and efficiency through automating routine tasks;
- create marketing assets;
- write search engine optimisation copy
- provide enhanced data insights; and
- communicate with customers.
You should be aware though that your business's use of generative AI raises a number of legal issues.
Data privacy and protection
Data privacy
By using generative AI tools, there's a risk that you'll inadvertently make personal data publicly available.
Any information you input into an online generative AI tool is transmitted to its provider. The provider can then use this information to generate future outputs that could be disclosed to the public.
See the ICO's guidance on AI and data protection and its blog article 'Generative AI: eight questions that developers and users need to ask' for more on data privacy risks.
Data security
AI models are susceptible to adversarial attacks, where malicious actors exploit vulnerabilities to manipulate the model's behaviour. These attacks can lead to compromised decision-making processes, financial losses, or reputational damage.
Additionally, the integration of AI within business processes can create new avenues for insider threats.
The National Cyber Security Centre has advice on the security of ChatGPT and LLMs here, and tips on assessing AI tools for cyber security here.
Intellectual property rights infringement
Lack of transparency about the origin of materials used for training generative AI models raises concerns about intellectual property rights, in particular copyright.
Copyright protected information may be used to train generative AI models, which may amount to infringing the copyright of the rights owner. This information could in turn end up being reproduced verbatim (or almost verbatim) in replies to a user-prompt, without any credit or reference to the source or the author.
Generative AI models also currently can't properly list and credit the materials they reproduce, making it difficult to obtain the necessary authorisation from the rights owners, while AI model owners often waive any responsibility.
Error and bias
AI is only as good as the data it's trained on.
If this data is old, incomplete and inaccurate, AI tools will produce inaccurate or out-of-date results.
This can lead to 'hallucinations', in which a tool confidently asserts that a falsehood is real.
Similarly, training data that contains bias will result in tools that propagate bias and discriminatory practices.
You should always critically assess any response produced by a generative AI model for potential biases and factually inaccurate information.
You should also establish protocols for the regular review of datasets used to train AI models to ensure they remain up-to-date, accurate and to remove bias.
Regulatory requirements
Existing regulations continue to apply; this includes the UK GDPR, as well as sector-specific regulations, e.g. for financial services and transport and product safety regulations.
The Government has also published a policy paper outlining its intention to develop a framework for regulation to be implemented by existing regulators.