IBM is launching cloud software designed to identify any bias in artificial intelligence deployments while also recommending fixes.
While artificial intelligence (AI) may never run amok like the gun-totin' "Westworld" character Dolores, executives are leery of entrusting business decisions to data models and algorithms they don't understand.
IBM is seeking to take the uncertainty out of AI with its "Fairness 360 Kit," a toolkit of algorithms, code and tutorials that helps academics, researchers and data scientists build bias detection into the machine-learning models they create and deploy.
IBM is putting its Fairness 360 Kit, which was developed by IBM Research, into open source. While other open source projects have focused solely on checking for bias in training data, IBM said the Fairness 360 Kit checks for and mitigates bias in AI models.
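To make the idea of "checking a model for bias" concrete, here is a minimal, hand-rolled sketch of one common fairness metric, disparate impact, which compares favorable-outcome rates between an unprivileged and a privileged group. This is an illustrative stand-in, not the Fairness 360 Kit's actual API; the data, group labels and 0.8 threshold are assumptions for the example.

```python
# Illustrative sketch of the kind of check a bias-detection toolkit performs.
# Disparate impact = favorable-outcome rate of the unprivileged group divided
# by that of the privileged group; values well below 1.0 suggest bias.

def disparate_impact(outcomes, groups, unprivileged, privileged):
    """Ratio of favorable-outcome rates: unprivileged / privileged."""
    def rate(group):
        selected = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(selected) / len(selected)
    return rate(unprivileged) / rate(privileged)

# Toy loan-approval outcomes (1 = approved) for two applicant groups.
outcomes = [1, 0, 0, 1, 0, 1, 1, 1, 1, 0]
groups   = ["B", "B", "B", "B", "B", "A", "A", "A", "A", "A"]

di = disparate_impact(outcomes, groups, unprivileged="B", privileged="A")
print(f"disparate impact: {di:.2f}")  # prints "disparate impact: 0.50"
# A common rule of thumb flags ratios below 0.8 for review.
```

A real toolkit would pair metrics like this with mitigation algorithms that reweight or transform the training data, which is the "checks for and mitigates" distinction IBM draws above.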
IBM said its software works with a model built on machine-learning frameworks such as Watson, TensorFlow, Spark ML, AWS SageMaker and Azure ML.
"IBM led the industry in establishing Trust and Transparency principles for the development of new AI technologies," said Beth Smith, general manager of Watson AI at IBM, in a prepared statement. "It's time to translate principles into practice. We are giving new transparency and control to the businesses who use AI and face the most potential risk from any flawed decision making."
While Google, Amazon and IBM have been among the leaders in developing and deploying AI products and services, the companies using them don't always know whether their AI solutions are achieving their goals. On that note, IBM said its software service, which runs on IBM Cloud, automatically detects bias and explains how the AI reaches its decisions as they are made, and helps organizations manage AI systems from a wide variety of industry players.
According to new research by IBM's Institute for Business Value, 82% of the businesses surveyed are considering AI deployments, but 60% fear liability issues, while 63% lack the in-house talent to confidently manage the technology.
The software service can be programmed to monitor the unique decision factors of any business workflow. The software explains decision-making and detects bias in AI models at runtime, capturing potentially biased outcomes as they occur. It also automatically recommends data to add to the model to help mitigate any bias it has detected.
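The runtime pattern described above can be sketched roughly as follows. This is a hypothetical illustration of the general approach, not IBM's implementation: the class name, the streaming decision log and the 0.8 alert threshold are all assumptions made for the example.

```python
# Hypothetical sketch of runtime bias monitoring: record each decision as it
# is made, compare favorable-outcome rates across groups, and raise an alert
# (with a data-rebalancing suggestion) when the ratio dips below a threshold.
from collections import defaultdict

class BiasMonitor:
    def __init__(self, threshold=0.8):
        self.threshold = threshold                  # four-fifths rule of thumb
        self.counts = defaultdict(lambda: [0, 0])   # group -> [favorable, total]

    def record(self, group, favorable):
        """Log one runtime decision for the given group."""
        fav, total = self.counts[group]
        self.counts[group] = [fav + int(favorable), total + 1]

    def check(self, unprivileged, privileged):
        """Return (ratio, alert) based on decisions seen so far."""
        rate = lambda g: self.counts[g][0] / self.counts[g][1]
        ratio = rate(unprivileged) / rate(privileged)
        return ratio, ratio < self.threshold

monitor = BiasMonitor()
# Simulated stream of (group, decision) pairs produced by a deployed model.
stream = [("A", 1), ("A", 1), ("A", 1), ("A", 0),
          ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
for group, decision in stream:
    monitor.record(group, decision)

ratio, alert = monitor.check(unprivileged="B", privileged="A")
if alert:
    print(f"ratio {ratio:.2f}: consider adding favorable-outcome "
          f"training examples for group B")
```

The "recommend data" step here is just a printed suggestion; in a production service it would drive the kind of automated retraining guidance the article describes.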
On the human front, IBM also has a consulting services team to help companies design business processes and human-AI interfaces to further minimize the impact of bias in their decision-making.