Josefin Rosen, Trustworthy AI Specialist in the SAS Data Ethics Practice, tells Anita Joseph that while AI enhances productivity and contributes to a seamless experience, it is critical to ensure this is done ethically and responsibly.
What are AI innovations’ most significant benefits and risks today?
AI innovations offer numerous benefits, enhancing efficiency and providing personalized experiences. They also play a crucial role in addressing complex challenges such as the climate crisis. However, alongside these advantages, AI poses risks and challenges that demand our attention.
Bias is a significant concern in AI development. Human biases, including unconscious prejudices, can seep into AI systems through historical data. For instance, when training AI for automated decision-making based on data from past human decisions, these biases can become ingrained. If not carefully managed, automated decisions can perpetuate and even amplify these biases, leading to discriminatory outcomes.
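One common way to surface this kind of bias is to compare a model's favorable-decision rates across demographic groups. The sketch below is purely illustrative (the group names, data, and metric choice are assumptions, not a description of SAS Viya's bias-detection features):

```python
# Hypothetical sketch: demographic parity difference, a common fairness
# metric for automated decisions. Groups and data are illustrative.

def positive_rate(decisions):
    """Fraction of decisions that are favorable (True)."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(decisions_by_group):
    """Largest gap in favorable-decision rate between any two groups.

    A value near 0 suggests similar treatment; larger values flag
    potential bias inherited from historical training data.
    """
    rates = [positive_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Example: loan approvals recorded for two demographic groups.
decisions = {
    "group_a": [True, True, True, False],    # 75% approved
    "group_b": [True, False, False, False],  # 25% approved
}
gap = demographic_parity_difference(decisions)
print(f"demographic parity difference: {gap:.2f}")  # 0.50
```

A gap this large would prompt a closer look at the training data and the decision logic before the model is allowed to make automated decisions.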
Model decay is another potential problem. Models are, in a sense, like milk: they expire. When AI models are no longer aligned with current conditions, they start to decay and can drive inaccurate decisions. Continuous monitoring, management and, when necessary, retraining or replacement of these models are essential to prevent erroneous predictions that could harm businesses and individuals.
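One widely used way to monitor for decay is to watch whether the distribution of model inputs or scores in production drifts away from what the model saw at training time. The sketch below uses the population stability index (PSI) as an example drift metric; the bin edges, data, and 0.2 alert threshold are illustrative assumptions, not SAS specifics:

```python
import math

# Hypothetical sketch of model-decay monitoring via the population
# stability index (PSI). Bins, data, and threshold are illustrative.

def psi(expected, actual, bins):
    """Population stability index between two score distributions."""
    def proportions(values):
        counts = [0] * (len(bins) + 1)
        for v in values:
            i = sum(v > b for b in bins)  # index of the bin v falls into
            counts[i] += 1
        # small epsilon avoids log(0) for empty bins
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Scores seen at training time vs. scores seen in production today.
training_scores = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
current_scores  = [0.5, 0.6, 0.6, 0.7, 0.8, 0.8, 0.9, 0.9]

drift = psi(training_scores, current_scores, bins=[0.33, 0.66])
if drift > 0.2:  # a conventional "significant shift" threshold
    print(f"PSI {drift:.2f}: consider retraining or replacing the model")
```

In a production setting a check like this would run on a schedule, with alerts feeding the monitoring and retraining process described above.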
Privacy is a third concern that arises due to the substantial data requirements of AI. While data is essential for AI to function effectively, it raises the risk of invading people’s privacy and jeopardizing their personal information. Safeguarding individuals’ privacy rights while harnessing the power of AI is paramount.
To mitigate these challenges and ensure the safe and fair deployment of AI, a robust AI governance framework is essential. By embedding trustworthiness into our platform, SAS Viya, we empower our customers to innovate responsibly. This approach facilitates the development of compliant and responsible AI, fostering a balance between technological advancement and ethical considerations. By prioritizing transparency, accountability, and fairness, we can harness the full potential of AI while minimizing its associated risks.
What is the SAS strategy regarding technology and business development?
Given our mission to provide knowledge in the moments that matter, our immediate strategy is to transition our current installed base and new customers to SAS Viya for a modern AI experience that is productive, performant, and trustworthy. That’s reflected in our recent App Factory and Workbench announcements, as well as recent partner announcements to enhance our partner ecosystem. We believe when customers take advantage of SAS Viya, they will be in the greatest position ever to receive knowledge in the moments that matter to them.
How is SAS supporting responsible innovation?
Responsible innovation includes, but goes beyond, important topics like trustworthy AI and bias to consider the entire innovation process. A responsible innovation approach injects equity and fairness at every step, from idea to development to deployment.
SAS believes humans should be at the center of innovation and is committed to creating technology that uplifts people, not holds them down. As a responsible innovator, SAS seeks to build trust in technology. Analytic-driven decisions must be repeatable, governed, transparent, and interpretable.
Asking both “could we” and “should we” when evaluating models can reduce bias in data and algorithms, improve the value and quality of data, and build confidence in the fairness of models. At SAS, as responsible innovators, we believe we have a duty of care. When formalizing our responsible innovation commitment, we focused on our principles: human-centricity, inclusivity, accountability, robustness, transparency, and privacy and security. We have put a lot of effort into ensuring that the values guided by these principles are reflected in our employees, in the way we operate, and in our software platform.
For organizations to achieve their responsible AI goals, they need to be able to trust their AI platform. SAS Viya includes trustworthy AI capabilities such as information privacy, bias detection and mitigation, explainability, decision auditability and model monitoring, governance, and accountability.
Since bias can take many forms throughout the AI process, those capabilities help organizations identify potential bias risks during data management and modeling, increasing confidence in an organization’s responsible AI efforts.
Why should organizations commit to responsible innovation?
Responsible innovation is not only a moral imperative and a compliance matter; it is also a wise business strategy that safeguards reputation, trust, sustainability, and financial prosperity. It’s a crucial component of a forward-thinking organizational approach.
How important is it to regulate AI development? Will such legislation slow down innovation and technology advancements?
Legislation can benefit companies using AI because it sets a framework that can reduce risk. There can still be ample room for innovation with well-thought-through, outcome-based, clear but flexible rules. Such guidelines may lower the risk of unintended harm to society and customers, as well as to a company’s reputation, brand, and bottom line. We need to remember, though, that it takes more than regulation to ensure trustworthy AI. We need a comprehensive approach where people, processes, and technology work together to embed trustworthy AI principles throughout the AI lifecycle – from the initial questions asked to the decisions that come out at the end. Responsible innovation is carried out by responsible innovators and depends on human choices and decisions. We have a duty of care to ensure that AI reflects the best of our values and does not repeat the injustices of the past.