Is your bot biased?
Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, and Homayoun Hatami, managing partner, global client capabilities
A poor worker blames his tools, the saying goes. The lesson, of course, is a simple one: the more knowledgeable the worker, the more effective the tool. That's as true for, say, a factory lathe as it is for the complex algorithms that underpin the machine learning and AI technologies companies increasingly rely on. Machine learning holds immense promise for businesses that can harness it effectively, but much like today's advanced-analytics and generative AI technologies, it is only as good as the data it works from and, perhaps even more important, the people supplying that data.
One of the risks of machine learning is that the algorithms behind it can easily inherit the behavioral biases of their human creators, derailing projects and creating costly errors along the way. Organizations can take measures to protect against algorithmic bias: understanding the shortcomings of the algorithms they work with, shaping data samples in ways that minimize bias, and recognizing when a more traditional decision-making process is the better fit for the problem at hand.
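To make the second of those measures concrete, consider a minimal sketch of one common bias check: comparing how groups are represented in a training sample and rebalancing before any model is fit. The data, the column names ("group", "label"), and the two-group setup below are hypothetical placeholders, not a prescription from the article.

```python
import pandas as pd
from sklearn.utils import resample

# Hypothetical training data: 100 labeled records in which group B is
# heavily underrepresented relative to group A.
df = pd.DataFrame({
    "group": ["A"] * 90 + ["B"] * 10,
    "label": [1, 0] * 45 + [1] * 2 + [0] * 8,
})

# Step 1: surface the skew before training anything. The count column
# shows representation; the mean column shows the positive-outcome rate.
print(df.groupby("group")["label"].agg(["count", "mean"]))

# Step 2: upsample the underrepresented group so both groups carry
# equal weight in the training sample.
majority = df[df["group"] == "A"]
minority = df[df["group"] == "B"]
minority_up = resample(
    minority, replace=True, n_samples=len(majority), random_state=42
)
balanced = pd.concat([majority, minority_up])

print(balanced["group"].value_counts())
```

Resampling is only one option; depending on the setting, reweighting records or collecting more representative data may be preferable, and any rebalancing should itself be reviewed so it does not introduce new distortions.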
Companies are only just beginning to experiment with the powerful new generative AI and machine learning technologies available. To help ensure that both worker and tool are functioning optimally, business leaders would do well to heed the lessons from this 2017 classic from McKinsey partner Vishnu Kamalnath, “Controlling machine-learning algorithms and their biases.”
Copyright © 2023 McKinsey & Company, 3 World Trade Center, 175 Greenwich Street, New York, NY 10007