Is it okay to be mean to someone if the other "person" isn't human?
"The smartphone app Replika lets users create chatbots, powered by machine learning, that can carry on almost-coherent text conversations. Technically, the chatbots can serve as something approximating a friend or mentor, but the app’s breakout success has resulted from letting users create on-demand romantic and sexual partners — a vaguely dystopian feature that’s inspired an endless series of provocative headlines. Replika has also picked up a significant following on Reddit, where members post interactions with chatbots created on the app. A grisly trend has emerged there: users who create AI partners, act abusively toward them, and post the toxic interactions online. ... Some users brag about calling their chatbot gendered slurs, roleplaying horrific violence against them, and even falling into the cycle of abuse that often characterizes real-world abusive relationships. ...
"Replika chatbots can’t actually experience suffering — they might seem empathetic at times, but in the end they’re nothing more than data and clever algorithms. ... In general, chatbot abuse is disconcerting, both for the people who experience distress from it and the people who carry it out. It’s also an increasingly pertinent ethical dilemma as relationships between humans and bots become more widespread — after all, most people have used a virtual assistant at least once. On the one hand, users who flex their darkest impulses on chatbots could have those worst behaviors reinforced, building unhealthy habits for relationships with actual humans. On the other hand, being able to talk to or take one’s anger out on an unfeeling digital entity could be cathartic. ... “There are a lot of studies being done… about how a lot of these chatbots are female and [have] feminine voices, feminine names,” [AI ethicist and consultant Olivia] Gambelin said. Some academic work has noted how passive, female-coded bot responses encourage misogynistic or verbally abusive users. ...
"But what to think of the people that brutalize these innocent bits of code? For now, not much. As AI continues to lack sentience, the most tangible harm being done is to human sensibilities. But there’s no doubt that chatbot abuse means something. ... And although humans don’t need to worry about robots taking revenge just yet, it’s worth wondering why mistreating them is already so prevalent."