Ventured

Tech, Business, and Real Estate News

How A.I. Could Fix Workplace Harassment

Source: OZY, Nick Fouriezos

Human societies carry centuries of discriminatory beliefs and practices. Robots, we thought, would not be so encumbered. It is now clear, though, that artificial intelligence has biases of its own. Experts have accused crime-prediction tools like COMPAS and PredPol of racial bias, saying they cause criminal justice officials to unfairly target people of color and majority-minority neighborhoods. Facial recognition software from IBM and Microsoft accurately identified the gender of white men but failed miserably with dark-skinned women. And last year Amazon had to scrap an internal AI recruiting tool after finding it was biased against women.

So who is going to solve this problem? More robots. But this time, better ones.

A number of AI-driven technologies are emerging to address workplace harassment and discrimination. Spot, launched in 2018, uses an AI chatbot that lets employees report incidents while taking human error out of the equation, and operates in countries including the United States, United Kingdom, Japan and India. Botler AI, created in 2017, is a conversational AI system trained on U.S. and Canadian criminal code that helps people who feel violated understand whether a crime was committed. And Callisto, a web-based tool launched in 2018, helps sexual assault victims report what happened to them and identify serial offenders at tech companies and on college campuses.

The biggest difference between these new automated diversity bots and the old bigoted guard is that they are focused almost solely on the reporting side of abuse: rather than sussing out instances of bias, they make it easier for people to report them. Still, the need is real: 79 percent of respondents to a recent survey by Spot said they had witnessed harassment or discrimination in the workplace in the last five years, while 77 percent never reported the incident to human resources, instead telling colleagues, friends or family.

That game of telephone obviously doesn’t serve victims well. But it also doesn’t serve companies, leading to toxic cultures with “open secrets” that never get dealt with. “It’s not just a reactive tool for when things go wrong, but also a way for these companies to proactively address the issue and get ahead of it,” says Jessica Collier, CEO of Spot.

Each AI company takes on a different task. Unlike anonymous reporting hotlines, Spot uses natural language processing to guide victims through reporting the incident. After the victim signs into the Spot messaging platform, the bot sets clear expectations, helping them understand the process, asking one question at a time and allowing answers to be edited. It then asks the victim to recall what happened, with follow-up questions that, unlike those in an in-person interview, can't be influenced by human biases.
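Spot's actual system is proprietary, but the pattern the article describes, a scripted interviewer that asks one neutral question at a time and lets the reporter revise any answer before submitting, is easy to sketch. The questions and commands below are invented for illustration and are not Spot's.

```python
# A minimal sketch of a structured-interview reporting bot, in the spirit
# of tools like Spot. Everything here (questions, commands, storage) is
# purely illustrative; Spot's real NLP-driven system is proprietary.

QUESTIONS = [
    "In your own words, what happened?",
    "When did the incident take place?",
    "Where did it take place?",
    "Was anyone else present?",
]


def run_interview() -> dict[str, str]:
    """Ask one neutral, pre-scripted question at a time.

    Because the follow-ups are fixed rather than chosen by a human
    interviewer, they can't steer the account toward the
    interviewer's own hunches.
    """
    answers: dict[str, str] = {}
    for question in QUESTIONS:
        answers[question] = input(f"{question}\n> ").strip()

    # Let the reporter review and revise any answer before submitting.
    while True:
        for i, question in enumerate(QUESTIONS, start=1):
            print(f"{i}. {question}\n   {answers[question]}")
        choice = input("Enter a question number to edit, or 'submit': ").strip()
        if choice.lower() == "submit":
            return answers
        if choice.isdigit() and 1 <= int(choice) <= len(QUESTIONS):
            question = QUESTIONS[int(choice) - 1]
            answers[question] = input(f"{question}\n> ").strip()


if __name__ == "__main__":
    report = run_interview()
    print(f"Recorded {len(report)} answers. A real tool would now produce "
          "a time-stamped record for the reporter to keep or submit.")
```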

That's important because, contrary to what people might think, studies show that having a human ask the questions actually makes for less objective data. Interviewers are likely to focus their follow-up questions on things they find interesting rather than on what the employee has chosen to highlight, and at times they introduce assumptions into an already difficult process. As Collier puts it: “If I’m thinking a specific team has shown patterns of discriminatory or harassing behavior, I might follow up by asking, ‘Was this someone on the sales team?’ or ‘Did this happen at the product team happy hour?’”

The automated process also lets victims go more in-depth with their answers, and feel more comfortable doing so. DaVita, a dialysis care company serving 1.7 million patients across 10 countries, has seen a 60 percent increase in cases where employees re-engaged since adopting Spot, compared with its previous hotline. “We want to build a community where bullying and harassment don’t exist, but we’re not naive enough to think that this will never darken our door,” said Ellie Macdonald, an HR professional at Monzo, an online-only bank based in London. For Monzo, adding AI was preemptive, heading off discrimination “before we have had to deal with any of this behavior.”

Collier acknowledges the bad rap AI gets in the workplace, particularly when it comes to diversity. “We need HR people. We just need to take them out of the part of the process where humans are the biggest liability,” she says. “I’m not personally interested in the conversations about sweeping replacement of a job function by computer overlords. But we can do really important, targeted work, with AI.”

With the right data, AI can help bosses see patterns that human professionals can't: heat maps that show where certain problems are arising and under what circumstances. For a few years now, the thorn in AI's side has been its unwitting reinforcement of discrimination. Now the tables are turning, and AI companies are removing the splinter from their own eye before setting out to fix the world's problems.
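As a rough illustration of the aggregation such heat maps rest on, here is a minimal sketch that tallies incident reports by team and category. The records and labels are invented for the example and are not drawn from any real tool's data.

```python
# Illustrative only: count hypothetical incident reports by (team, category)
# and print a simple text grid, the raw material of a "heat map."
from collections import Counter

# Hypothetical (team, category) pairs extracted from anonymized reports.
reports = [
    ("sales", "harassment"),
    ("sales", "harassment"),
    ("sales", "discrimination"),
    ("product", "bullying"),
    ("engineering", "discrimination"),
]

counts = Counter(reports)
teams = sorted({team for team, _ in reports})
categories = sorted({cat for _, cat in reports})

# Rows are teams, columns are categories; each cell is a report count.
print("team".ljust(12) + "".join(c.ljust(16) for c in categories))
for team in teams:
    row = team.ljust(12)
    row += "".join(str(counts[(team, cat)]).ljust(16) for cat in categories)
    print(row)
```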

https://www.ozy.com/fast-forward/ai-has-its-biases-now-it-might-also-fix-discrimination-harassment