
“Governance is the most important concept for organisations developing AI” – Simmons & Simmons

CNME Editor Mark Forker spoke to Olivia Darlington, Of Counsel and Head of Insurance – Middle East & Africa at Simmons & Simmons, to learn what measures enterprises can take to ensure they avoid being discriminatory when deploying AI models, what data risks are associated with AI – and the critical importance of governance when using AI.

Many people remain sceptical of AI, and we have seen cases, for example in recruitment and policing, in which it has perpetuated bias against underrepresented groups and minorities. From a legal perspective, what do companies and organisations need to do to ensure that they are ethical when deploying AI technology – and how can they avoid being party to promoting unconscious bias?

AI, in some form, has been used since the 1950s. We are seeing an exponential increase in its use today for two reasons: the prevalence of ‘big data’ (in digital form) and advanced computer processing power capable of reviewing and analysing that data.  

The essence of advanced machine learning (a common type of AI) is the ability of computers to review and learn from vast amounts of data, and to make decisions based on that learning.

Whilst computers have conventionally been automated systems (following rules set by humans), AI is about computers (or ‘models’) acting autonomously. This unique feature is one of the main reasons why AI presents risks.  

One of those risks is discrimination, which is often associated with bias. However, bias in AI refers to a skewed outcome or decision by the model.

That may not necessarily result in discrimination, i.e. decision-making which unfairly or unlawfully impacts an individual or group on the basis of a characteristic such as gender or ethnicity. But bias can result in discrimination, and that’s why it’s a key issue with AI.

We have seen bias in AI models cause discrimination in high-profile cases, including in the UK and US. 

Discrimination is typically unlawful (rather than merely unethical or unfair). So, organisations using AI need to take steps to avoid it. For example: 

  • They should take technical steps to ensure that the AI models themselves do not produce discriminatory or other unfair outcomes. These steps range from ensuring that the data on which their models are trained is representative and not causative of any discrimination, to rigorous testing of the AI model prior to deployment. 
  • Organisations should also have the right governance in place to reduce the risk of discrimination from AI; for example, putting in place policies and procedures for their technical teams to follow to reduce or eliminate bias, ensuring that legal and compliance teams feed into these policies and procedures (e.g. by defining fairness metrics and advising on the legal position), and having controls and checks in place to monitor for discrimination and take remedial steps if it arises (a simple illustration of such a fairness check follows this list). 
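By way of illustration, one common family of fairness metrics compares the rate of favourable outcomes across groups (sometimes called demographic parity). The sketch below is a minimal, hypothetical Python example of such a check on logged model decisions; the column names, data and threshold are invented for illustration and are not a recommended or legally endorsed test.

```python
# A minimal sketch of a demographic-parity check, assuming a binary decision
# and a single protected attribute. Column names and the 0.2 threshold are
# hypothetical, chosen only for illustration.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, outcome: str, group: str) -> float:
    """Difference between the highest and lowest favourable-outcome rates
    across groups; a large gap flags decisions that may warrant review."""
    rates = df.groupby(group)[outcome].mean()
    return float(rates.max() - rates.min())

# Example: decisions logged during pre-deployment testing of the model.
decisions = pd.DataFrame({
    "gender":   ["F", "F", "M", "M", "M", "F"],
    "approved": [0,    1,   1,   1,   1,   0],
})

gap = demographic_parity_gap(decisions, outcome="approved", group="gender")
if gap > 0.2:  # illustrative tolerance, not a legal standard
    print(f"Review needed: favourable-outcome rates differ by {gap:.0%} across groups")
```

Demographic parity is only one of several possible fairness definitions; which metric is appropriate is itself a legal and policy judgement that the governance framework described above should capture.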

We know that data is the new oil – and that data fuels the current global economy. However, we also know that there are a lot of data risks with AI technology. Can you outline to our readers what those risks are – and, again, what best practices would you advocate when using AI to manage and process your data?  

I would highlight three data risks when it comes to AI: 

  1. The first obvious (but under-appreciated) risk is that the data is simply not good enough or fit for purpose to enable the AI system to work as it is intended to do (for example, it may not be sufficiently accurate). Many AI projects fail because the right data is not available and so the model cannot be properly trained. 
  2. Data privacy is a significant risk with AI. Many AI models will be trained on personal data and will process personal data once deployed. Both of these aspects attract the application of data privacy law; for example, the GDPR, which creates various legal risks. Aside from the ‘core’ obligations around processing of personal data, there may be other obligations in data privacy legislation which apply specifically in an AI context. For example, where a data controller uses a solely automated decision-making system (including AI), it is obliged under the GDPR to provide “meaningful information about the logic involved”. This has been construed as an ‘explainability’ obligation i.e. a requirement to explain how the AI operates and makes decisions. In 2021, we saw the first legal actions taken against companies (for example, Uber and Deliveroo) on the basis of an alleged lack of transparency in their algorithmic decision-making systems. 
  3. As explained above, discrimination resulting from bias in an AI model is also a data risk. This is principally because bias can emanate from the data used to train the AI model. For example, if the data is not sufficiently representative, then it will cause the AI model to take skewed decisions, which could discriminate against certain individuals or groups (a simple representativeness check is sketched below).  
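To make the representativeness point concrete, a technical team might compare the share of each group in the training data against a reference population and flag large gaps before training begins. The sketch below is a hypothetical illustration: the group labels, reference proportions and tolerance are invented, and what counts as a suitable reference population is itself a judgement for the organisation's governance process.

```python
# A minimal sketch of a training-data representativeness check.
# Group labels, reference shares and the 5% tolerance are hypothetical.
from collections import Counter

def representation_gaps(records: list[str], reference: dict[str, float]) -> dict[str, float]:
    """For each group, the surplus (positive) or shortfall (negative) of its
    share in the training data relative to a reference population."""
    counts = Counter(records)
    total = sum(counts.values())
    return {group: counts.get(group, 0) / total - share for group, share in reference.items()}

training_groups = ["A"] * 80 + ["B"] * 15 + ["C"] * 5       # groups present in the dataset
reference_population = {"A": 0.50, "B": 0.30, "C": 0.20}    # shares the data should roughly reflect

for group, gap in representation_gaps(training_groups, reference_population).items():
    if abs(gap) > 0.05:  # illustrative tolerance
        print(f"Group {group} is {'over' if gap > 0 else 'under'}-represented by {abs(gap):.0%}")
```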

Fundamentally, organisations need to pay careful attention to the data aspects of AI. If third-party datasets are used, or if third parties are otherwise involved, then due diligence is required to understand the data (its provenance, how it was collected, any processing or editing of that data, etc.).

Keeping a written record of steps taken with regard to data is also vital, particularly so that any issues identified later can be traced back to any problems with the data.  

Many high-profile technologists believe that AI needs to be regulated, or subject to stricter governance, but opponents of that stance believe that will only serve to stifle innovation and creativity. What is your view on the role of governance in AI – and do you believe that better governance would mitigate data risks in AI?  

It’s important, first, to dispel the misconception that AI is not currently regulated – it is. Whilst there may not be much AI-specific regulation currently in force, existing regulation can still apply to AI.

For example, equality legislation is likely to apply to discriminatory decisions taken by an AI model and the GDPR (as noted above) applies to automated decision-making. Sector-specific regulation – for example, rules about how financial institutions deal with consumers – is also likely to apply to AI.   

That said, there are persuasive arguments for adopting regulation which specifically targets AI use (the EU’s forthcoming AI Act being a good example).

For example, the general regulation noted above may not easily apply to AI (e.g. is an AI model a “product” for the purposes of product liability law?) and some of that regulation is intended to deal with the impact of AI having gone wrong (e.g. equality legislation), whereas AI-specific regulation may be more effective at preventing the harm in the first place.  

I support AI-specific regulation, principally because we’re dealing with a novel form of technology which has the ability to act autonomously and cause harm. This is in circumstances in which humans may have a limited understanding of how the technology even operates.

With AI adoption increasing exponentially, I think regulation plays a vital part in ensuring that AI is developed and deployed safely. That said, it’s important to strike the right balance between protecting consumers and encouraging innovation. If we over-regulate, then we risk missing out on the opportunities that AI can generate.  

Governance, in my view, is the most important concept for organisations developing or using AI. Governance encompasses regulatory compliance, but it goes further than that.

It’s about organisations having the right structures, policies and processes in place to ensure that they’re in control of the development or deployment of their AI (including the data aspects around AI noted above) and that, for example, if things go wrong, they can act in the right way to mitigate any harm.  

Can you provide our readers with an overview of your role and the Simmons & Simmons AI Group?  

I am a member of the AI Group at Simmons & Simmons, which is led by Partner Minesh Tanna and comprises over 50 lawyers and non-lawyers from across our business, including data scientists from our Wavelength offering.

I provide particular input from an insurance angle as Head of Insurance – Middle East & Africa.   

Our AI Group’s work includes producing know-how on AI legal, regulatory and ethical issues (for example, our “stay smart” bulletin), leading ground-breaking projects (for example, we recently advised on the world’s first AI Explainability Statement to receive input from a regulator) and contributing to policy developments.  

We also increasingly advise clients on the legal risks relating to AI – whether from forthcoming regulation or in a transactional context.

This ranges from developers of AI (for example, we have advised the world’s leading developers of facial recognition technology) to users of AI (including financial institutions and healthcare clients).  
