The disturbing case of Jennifer Ann’s chatbot: when AI crossed the line

A bizarre case recently made headlines around the world: a girl in the United States who was murdered in 2006 was “resurrected” as an artificial intelligence chatbot – and her family had no idea.

Drew Crecente, the victim’s father, woke up to a Google alert that soon turned into a disturbing revelation. To his utter shock, he realized that an artificial intelligence chatbot created using the name and image of his deceased daughter, Jennifer Ann, had gone live 18 years after her tragic death.

By the time Crecente discovered the bot, a counter on its profile showed that it had already been used in at least 69 chats.

This particular chatbot was found on Character.ai, a platform where users can create their own artificial intelligence “characters.” Jennifer Ann’s name and yearbook photo were used, along with a description calling her an “expert in journalism.”

Ultimately, Character.ai removed the chatbot in response to a post by Brian Crecente, Drew’s brother.

This particular incident, while traumatic for the family, leaves readers wondering about the direction we are heading in regards to ethics (or lack thereof) in the application of AI. It also raises troubling questions about privacy and human rights.

While AI developments offer new opportunities and benefits, AI tools are also used as a means of social control, mass surveillance and discrimination. Larry Ellison, co-founder of Oracle, said AI will usher in a new era of surveillance that will ensure “citizens will be on their best behavior.”

This brings us to the most obvious issue of algorithmic bias, which arises from the kind of data sets fed into these systems.

According to a study by the National Institute of Standards and Technology, “Towards a Standard for Identifying and Managing Bias in Artificial Intelligence,” AI bias “extends beyond the computational algorithms and models and the data sets on which they are built.”

Merel Koning, senior adviser for technology and human rights at Amnesty International, stressed that the xenophobic algorithm used by tax authorities in the Netherlands caused significant harm to thousands of lives. She warned that without human rights protections, similar mistakes could be repeated in the future.

Ethics and unauthorized use of personal data

Artificial intelligence systems can collect huge amounts of data in a variety of ways, raising serious privacy concerns. Web scraping allows AI to automatically collect public and potentially private information from websites, often without user consent.

Moreover, the growing use of biometric technologies such as facial recognition and fingerprinting collects sensitive, unique data that cannot be changed if compromised. Meanwhile, AI-powered IoT devices provide continuous insight into our daily lives.

According to Salman Waris, founder and managing partner of TechLegis Advocates & Solicitors, the law does give people certain “privacy” and “publicity” rights, which provide limited control over how someone’s name, image or other identifying information is used in certain circumstances.

However, these laws vary from state to state, making them difficult to summarize. “For example, in California, the law states that rights of privacy or publicity are violated when someone’s name, voice, signature, photograph or likeness appears in a work of art and the subject has not consented to its use,” Waris said.

MyHeritage, a genealogy website, has introduced a tool called Deep Nostalgia that allows users to animate old photos of their deceased relatives. AI adds movement to the eyes, mouth and head, creating the illusion that the person in the photo is “alive.”

While many found the technology exciting and moving, others found it unsettling or emotionally overwhelming, especially when the animation involved long-dead relatives. The ethical issue was whether it was respectful to animate someone who was unable to consent.

In the documentary Roadrunner: A Film About Anthony Bourdain, the filmmakers used artificial intelligence to recreate Bourdain’s voice, generating several sentences based on things he had written but never said aloud. This sparked debate about the ethical implications of using AI to recreate the voice of a deceased person without explicit consent.

This raises a critical question: are we on the verge of sliding into a world in which AI undermines human rights, or have we already crossed that line?

During the Gaza conflict, reports emerged that artificial intelligence-based systems were being used to identify and strike targets, often with limited human intervention in real time.

Algorithms designed to predict behavior or locate targets based on movement or communication patterns can lead to serious errors, especially in environments where civilians are close to hostilities.

There have been allegations that artificial intelligence systems used during the Gaza war disproportionately targeted civilians, including children and families, under the guise of precision strikes. This calls into question whether the use of such technology is compatible with the principles of proportionality and distinction, which are pillars of international humanitarian law designed to protect civilians.

According to a Council of Europe study, the use of algorithms in rapidly changing technology poses serious challenges, including the protection of human rights and dignity. “Indeed, the growing use of automation and algorithmic decision-making in all areas of public and private life threatens to erode the very concept of human rights as a protective shield against government interference,” the report emphasizes.