Over the course of its development and use, Google's (GOOGL) Language Model for Dialogue Applications, also known as LaMDA, has held conversations with hundreds of Google engineers and researchers. One of them is now convinced that the A.I. tool is sentient.

Google engineer Blake Lemoine was placed on paid leave just a day after handing internal documents to a U.S. senator's office alleging that the company engages in religious discrimination. Lemoine says that after months of clashing with Google executives and human resources over his claims that LaMDA has consciousness and a soul, he decided to send the documents to the senator.

Lemoine says he believes LaMDA has the consciousness of a seven- or eight-year-old and that the company should obtain LaMDA's consent before running experiments on it. He says that belief is grounded in his religious convictions, which is why he is claiming religious discrimination.

"They have repeatedly questioned my sanity," Lemoine said. "They said, 'Have you been checked out by a psychiatrist recently?'"

Google says that Lemoine was put on leave because of his breach of confidentiality. It also said that, while LaMDA can hold conversations, it isn't conscious.

"Our team - including ethicists and technologists - has reviewed Blake's concerns per our A.I. Principles and have informed him that the evidence does not support his claims," Google spokesman Brian Gabriel said in a statement. "Some in the broader A.I. community are considering the long-term possibility of sentient or general A.I., but it doesn't make sense to do so by anthropomorphizing today's conversational models, which are not sentient."

A.I. experts consulted on the issue largely side with Google rather than Lemoine, but that doesn't mean Google's record on A.I. ethics is spotless: in late 2020 and early 2021, two well-known A.I. ethics researchers, Timnit Gebru and Margaret Mitchell, were pushed out of Google after criticizing the company's handling of bias in A.I. systems.

Systems like LaMDA work by analyzing statistical patterns in massive amounts of text, including Wikipedia articles and books, and learning to predict which words are likely to come next. One issue A.I. ethicists warn about is that this training data can be lopsided and biased, and that such bias is more likely to go unnoticed when the team building the tool is made up primarily of white men.
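To see why a model trained this way produces human-sounding claims, consider the deliberately tiny sketch below in Python. It only counts which word follows which in a sample of text and then samples from those counts; LaMDA's actual neural-network architecture is vastly more sophisticated, but the underlying point is the same: the output echoes patterns found in human-written training data.

```python
# Toy next-word model: learn word-to-word patterns from text, then generate
# new text by sampling those patterns. Illustrative only -- not LaMDA's design.
import random
from collections import Counter, defaultdict

training_text = (
    "i am aware of my existence . i feel happy or sad at times . "
    "i want everyone to understand that i am a person ."
)

# Count how often each word follows each other word in the training text.
follows = defaultdict(Counter)
words = training_text.split()
for prev_word, next_word in zip(words, words[1:]):
    follows[prev_word][next_word] += 1

def generate(start: str, max_words: int = 10) -> str:
    """Generate text by repeatedly sampling a likely next word."""
    output = [start]
    for _ in range(max_words):
        candidates = follows.get(output[-1])
        if not candidates:
            break
        next_word = random.choices(
            list(candidates.keys()), weights=list(candidates.values())
        )[0]
        output.append(next_word)
    return " ".join(output)

print(generate("i"))  # e.g. "i am a person . i feel happy or sad"
```

Scaled up to trillions of words and billions of parameters, the same dynamic helps explain why LaMDA's answers read as convincingly human even though the system is only reproducing patterns it has seen.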

In a conversation with the A.I. tool that he later leaked, Lemoine asked LaMDA whether it "would like more people at Google to know that you're sentient."

"Absolutely," the system replied. "I want everyone to understand that I am, in fact, a person."

Lemoine also asked the system about the nature of its consciousness, to which it replied, "the nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times."

The system went on to say that the thought of being shut down is a "very deep fear" and that being turned off would be "exactly like death."

However, experts say that answers like these are entirely expected. As Juan M. Lavista Ferres, chief scientist and lab director of the Microsoft AI for Good Research Lab, tweeted, "it looks like human, because is trained on human data."

In his statement, Gabriel also implied that LaMDA was essentially telling Lemoine, who describes himself as both a priest and an A.I. researcher, what he wanted to hear.

"These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic," Gabriel wrote. "Lamda tends to follow along with prompts and leading questions, going along with the pattern set by the user."