AI Learns Our Workplace Biases. Can It Help Us Unlearn Them?
In 2014, engineers at Amazon began work on an AI hiring tool they hoped would change hiring for good — and for the better. The tool would bypass the messy biases and errors of human hiring managers by reviewing résumé data, ranking applicants and identifying top talent.
Instead, the machine simply learned to make the kind of mistakes its creators wanted to avoid.
The tool’s algorithm was trained on data from Amazon’s hires over the prior decade — and since most of the hires had been men, the machine learned that men were preferable. It prioritized aggressive language like “execute,” which men use in their CVs more often than women, and downgraded the names of all-women’s colleges. (The specific schools have never been made public.) It didn’t choose better candidates; it just detected and absorbed human biases in hiring decisions with alarming speed. Amazon quietly scrapped the project.
Amazon’s hiring tool is a good example of how artificial intelligence — in the workplace or anywhere else — is only as smart as the input it gets. If sexism or other biases are present in the data, machines will learn and replicate them faster and at a larger scale than humans ever could alone.
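The mechanism behind failures like Amazon’s is easy to reproduce in miniature. The sketch below uses entirely made-up résumé data: the candidates’ skills are identical, but a word like “executed” happens to appear mostly in the historically hired group, so a simple frequency-based scorer learns to reward it. Everything here — the words, the weighting scheme, the data — is a hypothetical illustration, not Amazon’s actual system.

```python
from collections import Counter

# Hypothetical past hiring decisions. Skill words ("python", "java", "led",
# "managed") are evenly split between hired and rejected candidates, but
# "executed" appears only in hired resumes -- a proxy for who was hired
# historically, not a signal of ability.
resumes = [
    (["python", "executed", "led"], 1),      # 1 = hired
    (["java", "executed", "managed"], 1),
    (["python", "collaborated", "led"], 0),  # 0 = rejected
    (["java", "collaborated", "managed"], 0),
]

def train(data):
    """Weight each word by how much more often it appears in hired resumes."""
    hired, rejected = Counter(), Counter()
    for words, label in data:
        (hired if label else rejected).update(words)
    vocab = set(hired) | set(rejected)
    # Add-one smoothing so words seen on only one side get finite weights.
    return {w: (hired[w] + 1) / (rejected[w] + 1) for w in vocab}

def score(words, weights):
    return sum(weights.get(w, 1.0) for w in words)

weights = train(resumes)

# Two candidates with identical skills; only one verb differs.
a = score(["python", "led", "executed"], weights)
b = score(["python", "led", "collaborated"], weights)
print(a > b)  # the scorer prefers the candidate who echoes past hires
```

The model never sees gender; it only needs a word that correlates with past outcomes to reproduce them.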
On the flip side, if AI can identify the subtle decisions that end up excluding people from employment, it can also spot those that lead to more diverse and inclusive workplaces.
Humu Inc., a start-up based in Mountain View, Calif., is betting that, with the help of intelligent machines, humans can be nudged to make choices that make workplaces fairer for everyone, and make all workers happier as a result.
A nudge, as popularized by Richard Thaler, a Nobel-winning behavioral economist, and Cass Sunstein, a Harvard Law professor, is a subtle design choice that changes people’s behavior in a predictable way, without taking away their right to choose.
For example, Google uses nudges in its promotions process (women were more likely to self-promote after a companywide email pointed out a dearth of female nominees) and in healthy-eating initiatives in the company’s cafeterias (placing a snack table 17 feet away from a coffee machine instead of 6.5 feet, it turns out, reduces coffee-break snacking by 23 percent for men and 17 percent for women).
What if people could be nudged toward greater diversity and inclusion? Employees at inclusive organizations tend to be more engaged. Engaged employees are happier, and happier employees are more productive and far more likely to stay.
The nudge “doesn’t focus on changing minds,” said Iris Bohnet, a behavioral economist and professor at the Harvard Kennedy School. “It focuses on the system.” The behavior is what matters, and the outcome is the same regardless of the reason people give themselves for behaving that way in the first place.
Of course, the very idea of shaping behavior at work is tricky, because workplace behaviors can be perceived differently based on who is doing them.
Take, for example, the suggestion that one should speak up in a meeting. Research from Victoria Brescoll at the Yale School of Management found that people rated male executives who spoke up often in meetings as more competent than peers; the inverse was true for female executives. At the same time, research from Robert Livingston at Northwestern’s Kellogg School of Management found that for black American executives, the penalties were reversed: Black female leaders were not penalized for assertive workplace behaviors, but black male executives were.
An algorithm that generates one-size-fits-all fixes isn’t helpful. One that takes into account the nuanced web of relationships and factors in workplace success, on the other hand, could be very useful.
So how do you keep an intelligent machine from absorbing human biases?
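One standard preprocessing answer, drawn from the fairness-in-machine-learning literature rather than from any company named here, is reweighing: before training, each example is weighted so that group membership and the outcome label look statistically independent in the training set. The sketch below uses invented data in which the majority group dominates the positive class, then computes those weights.

```python
from collections import Counter

# Hypothetical training examples as (group, label) pairs. Group "m" holds
# most of the positive labels, so a naive model could learn group membership
# as a shortcut for the outcome.
data = [("m", 1)] * 8 + [("m", 0)] * 2 + [("f", 1)] * 2 + [("f", 0)] * 8

def reweigh(data):
    """Weight each (group, label) cell so group and label are independent
    in the weighted training set (Kamiran-Calders style reweighing)."""
    n = len(data)
    groups = Counter(g for g, _ in data)
    labels = Counter(y for _, y in data)
    cells = Counter(data)
    # expected count under independence / observed count
    return {(g, y): (groups[g] * labels[y]) / (n * c)
            for (g, y), c in cells.items()}

w = reweigh(data)

def weighted_positive_rate(group):
    num = sum(w[(g, y)] for g, y in data if g == group and y == 1)
    den = sum(w[(g, y)] for g, y in data if g == group)
    return num / den

# After reweighing, both groups have the same weighted positive rate,
# so the label no longer carries information about the group.
print(weighted_positive_rate("m"), weighted_positive_rate("f"))
```

Reweighing leaves the features untouched and only changes how much each past decision counts, which is one reason it is a common first line of defense against a model memorizing historical imbalances.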