
IBM wants to help organisations detect AI bias

IBM has unveiled new technology aimed at giving businesses greater transparency into AI, enabling them to more fully harness its power.

The software service, which automatically detects bias and explains how AI makes decisions – as the decisions are being made – runs on the IBM Cloud, and helps organisations manage AI systems from a wide variety of industry players. IBM Services will also work with businesses to help them harness the new software service.

In addition, IBM Research will release into the open source community an AI bias detection and mitigation toolkit, providing tools and education to encourage global collaboration on addressing bias in AI.

“IBM led the industry in establishing trust and transparency principles for the development of new AI technologies,” said Beth Smith, general manager of Watson AI at IBM. “It’s time to translate principles into practice. We are giving new transparency and control to the businesses who use AI and face the most potential risk from any flawed decision making.”

These developments come on the back of new research by IBM’s Institute for Business Value, which reveals that while 82 percent of enterprises are considering AI deployments, 60 percent fear liability issues and 63 percent lack the in-house talent to confidently manage the technology.

IBM’s new trust and transparency capabilities on the IBM Cloud work with models built in a wide variety of machine learning frameworks and AI build environments, such as Watson, TensorFlow, SparkML, AWS SageMaker, and AzureML. This means organisations can take advantage of these new controls for most of the popular AI frameworks used by enterprises.

The software service can also be programmed to monitor the unique decision factors of any business workflow, enabling it to be customized to the specific organisational use.

The fully automated software service explains decision-making and detects bias in AI models at runtime – as decisions are being made – capturing potentially unfair outcomes as they occur. Importantly, it also automatically recommends data to add to the model to help mitigate any bias it has detected.
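IBM has not published the technical details of how these runtime checks work, but the underlying idea can be illustrated. The hypothetical Python sketch below (none of these names come from IBM's service) logs each scored decision with its protected-group membership, then flags when the disparate impact ratio over a sliding window of recent decisions drops below the commonly used four-fifths threshold:

    # Hypothetical sketch of runtime bias monitoring; this is not IBM's API.
    # Each scored decision is logged with its protected-group membership and
    # outcome, and fairness is re-checked over a sliding window of decisions.
    from collections import deque

    WINDOW_SIZE = 1000   # how many recent decisions to monitor
    THRESHOLD = 0.8      # "four-fifths rule": ratios below this suggest bias

    window = deque(maxlen=WINDOW_SIZE)

    def record_decision(privileged: bool, favorable: bool) -> None:
        """Log one decision and warn if the recent window looks biased."""
        window.append((privileged, favorable))
        priv = [f for p, f in window if p]
        unpriv = [f for p, f in window if not p]
        if not priv or not unpriv:
            return  # need observations from both groups first
        priv_rate = sum(priv) / len(priv)
        if priv_rate == 0:
            return  # ratio is undefined when the privileged rate is zero
        ratio = (sum(unpriv) / len(unpriv)) / priv_rate
        if ratio < THRESHOLD:
            print(f"Possible bias: disparate impact ratio {ratio:.2f}")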

In addition, IBM Research is making available to the open source community the AI Fairness 360 toolkit – a library of novel algorithms, code, and tutorials that gives academics, researchers, and data scientists the tools and knowledge to integrate bias detection as they build and deploy machine learning models. While other open-source resources have focused solely on checking for bias in training data, AI Fairness 360 helps check for and mitigate bias in the AI models themselves, and it invites the global open source community to work together to advance the science and make it easier to address bias in AI.
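The toolkit is distributed as an open-source Python package (aif360). The following is a minimal sketch of the workflow it supports, measuring bias in a dataset and then applying one of the library's mitigation algorithms; the toy data and column names are invented for illustration:

    # Minimal sketch using IBM's open-source aif360 library; the toy data
    # below is invented for illustration (pip install aif360).
    import pandas as pd
    from aif360.datasets import BinaryLabelDataset
    from aif360.metrics import BinaryLabelDatasetMetric
    from aif360.algorithms.preprocessing import Reweighing

    # Toy training data: 'group' is the protected attribute (1 = privileged),
    # 'label' is the favorable outcome the model is trained to predict.
    df = pd.DataFrame({
        "group": [1, 1, 1, 1, 0, 0, 0, 0],
        "label": [1, 1, 1, 0, 1, 0, 0, 0],
    })
    ds = BinaryLabelDataset(df=df, label_names=["label"],
                            protected_attribute_names=["group"])

    privileged = [{"group": 1}]
    unprivileged = [{"group": 0}]

    # Measure bias in the data before training a model.
    metric = BinaryLabelDatasetMetric(ds, unprivileged_groups=unprivileged,
                                      privileged_groups=privileged)
    print("Disparate impact:", metric.disparate_impact())          # 1.0 is fair
    print("Statistical parity difference:",
          metric.statistical_parity_difference())                  # 0.0 is fair

    # One of the toolkit's mitigation algorithms: reweigh training instances
    # so that favorable outcomes are independent of the protected attribute.
    rw = Reweighing(unprivileged_groups=unprivileged,
                    privileged_groups=privileged)
    ds_fair = rw.fit_transform(ds)
    metric_fair = BinaryLabelDatasetMetric(ds_fair,
                                           unprivileged_groups=unprivileged,
                                           privileged_groups=privileged)
    print("Disparate impact after reweighing:", metric_fair.disparate_impact())

Reweighing is a pre-processing approach that adjusts instance weights rather than the data itself; the toolkit also ships in-processing and post-processing algorithms for cases where the training data cannot be touched.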
