Google Disagrees with Engineer Who Claims AI Has Gone Sentient

Karl Telintelo

Google has reportedly suspended a software engineer who asserted that the company's AI chatbot had developed sentience, citing a breach of its confidentiality policies.

What Started It

According to The Washington Post, Google engineer Blake Lemoine was tasked with conversing with LaMDA (short for Language Model for Dialogue Applications), the company's AI chatbot generator, as part of a safety test last year to check its responses for hate speech and discriminatory language.

A Bit More Research

Before presenting his findings to Google Vice President Blaise Aguera y Arcas and Jen Gennai, the company's head of Responsible Innovation, Lemoine gathered additional evidence of LaMDA's purported sentience. His claims were reportedly investigated and ultimately rejected, which prompted Lemoine to go public with his conviction in a Medium post.

Google maintains that Lemoine's findings do not establish sentience. After it was found that he had violated the company's confidentiality rules by posting his conversations with LaMDA online, Google placed him on paid administrative leave, The Washington Post reported. In a statement, the company noted that Lemoine had been employed as a software engineer, not an ethicist.
