Man v Machine: Protecting your business in an AI world

The rapid development of artificial intelligence (AI) and machine learning applications is seeing exciting new technologies being introduced to improve productivity across a wide variety of sectors.

For example, AI is used effectively in cybersecurity to detect malicious activity, and it helps doctors and healthcare professionals with image analysis systems in cancer screening programmes.

However, its introduction to modern-day life is also bringing with it some worrying problems.

In May 2022 the Information Commissioner's Office (ICO) fined New York-based facial recognition company Clearview AI Inc £7.5 million for collecting images of people from social media platforms to add to its database. The ICO also ordered the company to delete any data it held about UK residents. Clearview had sold its software services to many businesses and private users.

AI tools are often sold as a service for companies to use in their business. However, if the AI software acts unlawfully, for example by breaching data protection, discrimination or advertising laws, the organisation using the system could find itself facing legal proceedings from individuals, as well as investigation or fines from regulatory authorities.

Over the next few years, we expect to see a rapid increase in the number of businesses using AI and machine learning applications. Some of these will undoubtedly bring positive benefits to businesses and consumers alike.

The question, then, is how a smaller business can protect itself from these risks while still taking advantage of the benefits offered by AI business processes.

Carefully review the software specification

Understanding the limitations of the AI system, and where its knowledge and ability to learn begin and end, is vital. Having oversight of the controls in place to prevent the system from learning bias and discrimination should be a priority, to ensure continued compliance with data protection, employment and other relevant laws. Consider overseeing and managing the system's performance and output in a similar way to how you would oversee and manage a new member of staff.

Due diligence

Conduct due diligence on the system itself and on the underlying data used to train it, and find out whether the AI system's learning has been supervised or unsupervised.

Operating unlawfully

An AI system that publishes untrue statements which cause, or are likely to cause, serious harm to the reputation of an individual could expose the owner or operator of the system to a claim for defamation. Similarly, an AI system that breaches anti-discrimination or data protection laws could expose the system operator to regulatory fines and claims from individuals. Having contingency plans in place to take the system down quickly and prevent further damage is paramount.

Insurance plans

Will your supplier accept liability if the system operates unlawfully, and does its insurance cover this? Ensure that you, or your provider of the AI system, has appropriate insurance to protect your business from some of the legal risks associated with AI systems that cause harm or damage.

A robust legal framework will help build trust in the use of AI systems; however, this may take time to develop. In the meantime, businesses should take proactive steps to reduce the legal risks.

How can Moore Barlow help?

Dorothy Agnew is a commercial partner at Moore Barlow who specialises in information technology, telecommunications, intellectual property, data protection and privacy. In particular, she advises on IT and non-IT outsourcing agreements, the development and commercialisation of new technologies, eCommerce, and the protection and exploitation of intellectual property rights.

To find out more about Moore Barlow’s commercial law offering please visit the website.