If you experience inappropriate behaviour in the workplace, you have a number of avenues to consider. You can report the incident to human resources, hire an attorney, or, now, tell an unblinking machine. Spot, an AI-powered, browser-based chatbot for reporting workplace misconduct, launched out of beta this month in order to, as its website states, let people “report workplace harassment and discrimination without talking to a human.”
This is potentially a beneficial tool for employees who don’t trust the current systems in place to fairly document and deal with misconduct. But it’s one that needs better privacy policies in place in order to ensure that vulnerable people using it won’t get screwed over.
Spot is pretty easy to use. You just open up the chatbot app, which then asks you a series of questions about the incident based on what you say. My own experience with the app was pretty straightforward. When I typed in clear-cut (hypothetical) instances of harassment, it prompted me with natural next questions, asking me if there were witnesses, how it made me feel, and if I had any evidence of the situation.
When you’re done documenting an incident, a time-stamped PDF is emailed to you. You then have the option to file the report to your company, either anonymously or not. Spot says it will delete your email address from its servers, and the PDF will be deleted after 30 days.
At face value, there are some obvious advantages to this. Giving someone a mechanism to document an incident anonymously with a third party encourages record-keeping without fear of retaliation.
Human resources departments do not always have an employee’s best interests in mind, a trend made increasingly evident by the HR failures brought to light as a litany of allegations recently surfaced from within Silicon Valley. Therese Lawless, an employment attorney who works on gender discrimination and sexual harassment cases in the Valley, told Gizmodo that removing people from the reporting process isn’t inherently bad, given the current state of HR.
“I see so many human resources people just clearly working on behalf of the company,” Lawless said, adding that often their response is “inadequate” or not in favour of the employee. “Why shouldn’t it be a machine?”
The instant, anytime access to the chatbot also allows an employee to document an incident while it is still fresh in their mind, and the AI powering the chatbot won’t miss key details the way a human interviewer might.
“A perfect memory interviewer is calm, is neutral, and makes sure that they don’t ask leading questions,” Julie Shaw, a Spot co-founder, told Smithsonian Magazine. “The problem is, it’s quite difficult to train people, and to train people to actually stick to the script. People are easily led astray and distracted.”
Security experts, however, are less convinced by Spot’s privacy policy. “Nothing they say about security is reassuring,” Irwin said in an email to Gizmodo. “The policy is poorly framed for the kind of data that they’re collecting, and they are quick to absolve themselves of the very real risks that may impact people using their app.”
A few other glaring issues Irwin pointed out include the absence of a vulnerability disclosure policy, something that tells people when Spot has had security holes and how developers dealt with them, and of an email address people can use to report security bugs. This kind of reporting channel “is vital for an application with this kind of threat model, and there’s absolutely nada there,” she said.
What’s more, Irwin said, the data deletion policy appears contradictory because “they say they will get rid of it, but later claim that they can keep it around for regulatory and research reasons, even if you’ve requested deletion.” We reached out to Spot to ask whether it will roll out a vulnerability disclosure policy, and under what circumstances a user’s data might not be deleted. We will update this story if we hear back.
Then there’s the sensitive nature of documenting an incident of harassment or discrimination at all. Lawless said that if someone is going to use a chatbot as a recordkeeper, they need to ensure their statements are 100 per cent accurate, not exaggerated, and include legal language such as discrimination, harassment, or retaliation. And if there’s more to the incident than what is being documented through the chatbot, employees should make sure to specifically note that it isn’t a complete story.
“Employers try to use different statements to create the impression that someone is lying by highlighting small differences,” Larry Organ, an attorney at the California Civil Rights Law Group, told Gizmodo in an email. “The best way to address this is maintaining consistency in statements and not being overly inclusive. Theoretically, all you have to do is put the employer on notice so that they then have to do an investigation.”
Spot speaks to an unmistakable need for a neutral system for reporting inappropriate behaviour in the workplace. Human resources departments can often be biased, and while AI is not free from bias, these machines are not beholden to your employer. But without proper security measures in place, your privacy could be at risk, eroding the very safe space Spot is meant to create.