The rapid development of artificial intelligence (AI) and machine learning applications is seeing exciting new technologies being introduced to the market across a wide variety of sectors.
However, it is also bringing some worrying problems. What are some of the legal risks associated with AI and machine learning and what should you do to protect your business from these risks?
Alan Turing, considered by many to be one of the founding fathers of AI, published a paper in 1950 which discussed the possibility of machines that think and learn. Since then, the massive increase in computer processing power, the availability of large amounts of data and the reduction in the cost of computing equipment and storage have enabled the growth of AI and machine learning.
In its Industrial Strategy White Paper, the Government defined artificial intelligence as “Technologies with the ability to perform tasks that would otherwise require human intelligence, such as visual perception, speech recognition, and language translation”. The paper also defined machine learning as “a type of AI that allows computers to learn rapidly from large datasets without being explicitly programmed”.
True AI is one that does not rely on pre-defined behavioural algorithms to reach decisions and meanings but can learn on its own, improving and enhancing its capabilities and knowledge from past knowledge and decisions. Whilst true AI is still in its infancy, we are seeing advancement towards it in the form of machine learning software that uses complex behavioural algorithms and vast datasets to improve its skills and adapt itself to our requirements.
Examples of this type of AI can be seen in:
- Home personal assistants like Apple’s Siri, which uses machine learning to improve its ability to predict and understand questions and requests.
- Apple’s HomePod, which uses deep learning models and online learning algorithms to enhance and decipher speech and remove echo and background noise.
- The use of AI algorithms to improve diagnostic accuracy in breast cancer detection and reduce the number of false negatives.
However, despite the benefits of AI, some worrying problems have also arisen:
- Amazon ceased use of an internal AI tool it had created in 2014 to sort through job applications, after it discovered that the tool had taught itself to prefer male candidates over female candidates and penalised CVs that included the word “women’s”.
- In autumn 2016, computer scientist Professor Vicente Ordóñez noticed a pattern of gender bias in some guesses made by image recognition software he was building. In investigating the cause of the bias, he and his team tested two large collections of photos used to train image recognition software, and discovered that these research collections displayed notable gender bias in their depiction of activities such as cooking and sport.
- Soon after Microsoft launched its AI chatbot Tay on social media, it began tweeting wildly inappropriate words and images. Peter Lee, Corporate VP, Microsoft Healthcare, wrote: “We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay.” He explained that “a coordinated attack by a subset of people exploited a vulnerability in Tay”. Microsoft deactivated Tay within 24 hours of its launch.
It is clear that a regulatory framework is needed for successful and safe implementation of AI systems. Governments and collaborations of businesses and experts around the world are recognising the benefits of AI technology and the challenges associated with its use and are taking steps to research and recommend policies and laws on the use of AI technology.
Examples of these include:
- The UK government advisory body, the Centre for Data Ethics and Innovation, which has been tasked with investigating and advising on how we can maximise the benefits of AI and other data-enabled technologies.
- The Joint Research Centre (the European Commission’s science and knowledge service), in collaboration with the European Institute of Innovation & Technology, which is seeking to identify legal and regulatory challenges that using AI technology may bring for start-ups and research projects.
- The European Commission’s High-Level Expert Group on Artificial Intelligence, which is tasked with advising the Commission on how to address AI challenges and opportunities through policy development and legislation.
Who is responsible for an AI system that causes damage or harm?
In legal terms, AI systems do not have legal personality in their own right; rather, the business or individual that owns the AI system, or supplies the products and services it produces, will be legally responsible for it. It is these businesses or individuals – not the AI itself – that will be responsible for any wrongdoing or harm committed by the AI system or caused by its output.
What are some of the legal risks arising from using AI systems?
- Inadequate or incomplete training data can cause mistakes in results, and inherent biases in the datasets used can lead to biased or discriminatory outcomes.
- A business that uses an AI system to provide information and advice to its customers could be liable to those customers for loss or damage they suffer if the system gives misleading or inaccurate advice.
- An AI system that publishes untrue statements that cause, or are likely to cause, serious harm to the reputation of an individual could expose the owner or operator of the system to a claim in defamation.
- A business that uses an AI system to filter a shortlist of candidates could be liable for discrimination if the system deselects or disregards candidates on the grounds of their race, gender or age, just as it would be if a human had made the selection.
What steps can businesses take to reduce these legal risks?
When developing or acquiring an AI technology, or when contracting with an organisation to use their AI system, consider taking the following action:
1. Due diligence: Conduct due diligence on the system itself and the underlying data used to train it; find out whether the AI system’s learning has been supervised or unsupervised.
2. Testing: Test AI systems thoroughly for bias and discrimination.
3. Specification: Carefully review the specifications of the AI system. Understand the limitations of the system (where its knowledge and its ability to learn begin and end). Understand what controls are in place to prevent the system from learning bias and discrimination, and to ensure continued compliance with data protection, employment and other relevant laws.
4. Legal compliance: Seek contractual assurances that the AI system does not and will not operate in a way that could cause your organisation to break current laws (including laws regarding discrimination and data protection).
5. Insurance: Ensure that you or your provider of the AI system has appropriate insurance to protect your business from some of the legal risks associated with AI systems that cause harm or damage.
6. Support: Consider obtaining support and maintenance for the AI to ensure that it continues to operate properly and within agreed parameters.
7. Take-down: If you discover that your AI system is operating unlawfully or causing harm, make sure that you can take your AI system offline quickly to prevent further damage.
8. Oversight: When implementing and using AI tools that provide advice to, or interact with, your customers, consider overseeing and managing their performance and output in much the same way as you would a new member of staff.
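To make the testing step more concrete, the short Python sketch below illustrates one common statistical check for discrimination in a selection tool: comparing selection rates across groups and flagging a disparate-impact ratio below 0.8 (the “four-fifths rule” used in employment discrimination analysis). The group labels and figures are hypothetical, and a real bias audit would be far more extensive.

```python
# Illustrative sketch only: a "four-fifths rule" disparate-impact check
# on hypothetical shortlisting outcomes from an AI selection tool.

def selection_rate(selected, total):
    """Fraction of applicants in a group who were shortlisted."""
    return selected / total

def disparate_impact_ratio(rate_a, rate_b):
    """Ratio of the lower selection rate to the higher one.
    Values below 0.8 are commonly treated as a red flag."""
    lower, higher = sorted([rate_a, rate_b])
    return lower / higher

# Hypothetical outcomes, broken down by a protected characteristic
outcomes = {
    "group_a": {"selected": 45, "total": 100},
    "group_b": {"selected": 30, "total": 100},
}

rate_a = selection_rate(**outcomes["group_a"])  # 0.45
rate_b = selection_rate(**outcomes["group_b"])  # 0.30
ratio = disparate_impact_ratio(rate_a, rate_b)  # 0.30 / 0.45 ≈ 0.67

print(f"Selection rates: {rate_a:.2f} vs {rate_b:.2f}")
print(f"Disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Warning: possible adverse impact - investigate further.")
```

A check like this is a starting point, not a defence in itself: it detects unequal outcomes but says nothing about their cause, so any red flag should trigger the deeper due diligence and specification reviews described above.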
Over the next few years, we expect to see a rapid increase in the number of businesses using AI and machine learning applications. Some of these will undoubtedly bring positive benefits to businesses and consumers alike.
There are also some worrying problems with this type of technology. A robust legal framework will help build trust in the use of AI systems; however, this may take time to develop. In the meantime, businesses should take proactive steps to reduce the legal risks.