
Understanding the mechanics behind AI regulation and human equity

Almost every discussion about technology is incomplete without a mention of artificial intelligence. Its use cases, real-life applications and investments seem to be growing by the day, proving to technologists, futurists and industry leaders that it is the next big thing. In parallel, discussion about AI ethics, governance and regulation is gathering the same momentum. At the SAS Innovate on Tour event in Dubai, we sat down with Reggie Townsend, the Director of SAS’ Data Ethics Practice, to dive into the true meaning of Responsible AI – a concept emerging as the next frontier of AI as its capabilities reach into every facet of business and society, and a key driver of scrutiny into the fitness of data and the biases within it.

What does the responsible use of AI mean to the workforce?

AI and responsible AI are different. For example, with electricity, people don’t say they love responsible electricity. AI is simply helping businesses make decisions at scale and enabling humans to act on the insights presented. Humans need to determine the levels of responsibility based on the context. So, the responsible use of image analytics in a surveillance context is different from the responsible use of image analytics in a medical context. Responsibility ultimately rests with the user, and it will depend on the use case or the context. We at SAS like to wrestle with these kinds of predicaments and help a customer define the areas where we believe our technology is best applied. We can try to define some intended uses for our technologies, and then monitor from the periphery for use cases that fall outside of those boundaries.

It is not possible to anticipate how people and businesses are going to use our tools, any more than we can anticipate how a person is going to use a hammer. But we are strongly positioned to define the intended use for our hammer.

So, what kind of impact does that have on the workforce, on the people involved in AI development, deployment and use? There is a need for an increased level of awareness of impact, potential harm and exploitation, so that organisations using AI can act in the most responsible way. I anticipate upskilling and reskilling of the workforce, in the sense that we’ll all need to have a general understanding of how AI works.

Responsible innovation is a concept that we see organisations struggling with, not just in AI but at every step. Could you explain its prevalence, from development to deployment, and what this means for SAS?

It’s important to recognise that SAS has been around for 46 years. Culturally, responsible innovation refers to the need to create a consistent vernacular, a common ethos. This translates into actual developers being guided, or thinking through, their specific development task. As an example, it’s one thing to say that we’re going to create a technology simply because we can. It’s quite another to consider the possible vulnerabilities that this particular technology might exploit, and how we prevent that. So, there is another layer of thought with respect to development that needs to occur at this point. My team at SAS works directly with product management and engineering to be reflective.

First of all, let’s set a course in terms of the technology that we know we want to develop. We need to be reflective and assess who we are developing this technology for. Oftentimes, in a B2B scenario, we provide our technology to banks or healthcare systems. How might they use the technology? Can we work with them to anticipate some of their clients’ vulnerabilities and ensure that we don’t end up causing harm further down the value chain?

To sum up, it’s a secondary and tertiary level of consideration more than anything, and we’re in the process of formalising that as we consider development.

How is SAS preparing for the new laws and regulations governing the use of AI?

SAS diligently keeps track of emerging regulatory activity across the globe, and we attempt to harmonise the aspects that are consistent and common. We have been witnessing changes in guidance within regulations. Take the European Union as an example: it has provided guidance on what its proposed regulations will be, but they are not yet hard-set. Similarly, in parts of Asia, countries are providing guidance for their proposed regulations.

Many countries are of the opinion that regulations should not be set proactively, given where the technology stands at present, because they are very hard to recall later. In line with this viewpoint, we’ll watch where the guidance is going and assess where the commonalities exist. A lot of commonalities exist around principles: fairness, accountability, explainability and transparency. For example, we evaluate how AI regulations are evolving and being discussed in the UAE and the rest of the world. As long as some consistency exists there, we take the commonality and map it to our technology solutions. The modifications based on these common insights determine how we address matters of fairness, explainability and so on. If there are differences, we will address those accordingly.

SAS believes that if we can harmonise the commonalities found within AI regulations and build them into our platform, we’ll be in a stronger position to help our customers when guidance evolves or changes. If SAS has these regulatory aspects covered, to the best of our ability, our customers can be more confident in meeting regulatory expectations.

What is SAS doing to reduce bias in AI and promote human well-being, agency, and equity?

A straight answer to that is that we are integrating bias detection and bias mitigation features into our software. This makes it easier for data scientists to identify sensitive variables, which are primarily determined by the users, when they assess actual data sets.
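To make the idea concrete, here is a minimal, hypothetical Python sketch of the kind of check a bias-detection feature performs: measuring whether positive predictions are distributed evenly across the groups defined by a user-designated sensitive variable. The function name, data and threshold are illustrative assumptions, not SAS’ actual interface.

```python
# Hypothetical sketch of a bias-detection check over a sensitive variable.
# This is NOT SAS' actual API; it only illustrates the general technique.
import pandas as pd

def demographic_parity_difference(df: pd.DataFrame,
                                  sensitive_col: str,
                                  prediction_col: str) -> float:
    """Largest gap in positive-prediction rates across groups
    defined by the user-designated sensitive variable."""
    rates = df.groupby(sensitive_col)[prediction_col].mean()
    return float(rates.max() - rates.min())

# Example: flag a model whose approval rate differs sharply by group.
df = pd.DataFrame({
    "gender":   ["F", "F", "M", "M", "M", "F"],
    "approved": [1,   0,   1,   1,   1,   0],
})
gap = demographic_parity_difference(df, "gender", "approved")
if gap > 0.2:  # acceptable threshold is context-dependent, set by the user
    print(f"Potential bias detected: parity gap = {gap:.2f}")
```

As the example suggests, the tool can surface the disparity, but deciding which variables are sensitive and what gap is acceptable remains a judgment for the user in their context.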

As a platform provider, the best that we can do is put the tools in the hands of the people who are going to do the work, and then work with them and advise them on how best to use the tools. A good example is a car. An individual needs to get a licence to drive a car, but at some point, someone has to teach him or her how to drive it. So, the car manufacturer can only do so much: manufacture the car. At some point, when you hand the car off to the buyer, levels of responsibility and liability start to shift.

We want to do our part, and we want to try to anticipate the responsibilities of those we are handing off to, but at some point, there is a hand-off. We can only address it very tactically in the platform; then, through education, training, workshops and so on, we can advise people on how best to use it.
