• Language models are no more sentient than a human’s reflection in the mirror.
  • The majority of academics and AI experts say that LaMDA and other AI systems generate responses from content that people have already posted online. So why was the developer fooled?

A Google engineer recently attracted immense media attention by asserting that LaMDA (Language Model for Dialogue Applications), an AI developed by Google, has become sentient. The claim took the tech world by storm, but Google dismissed the engineer’s view. Tech experts were quick to point out that although the chatbot was remarkably good at fooling the engineer, that does not mean it is “sentient”.

But who gets to evaluate whether a chatbot that declares “I’m sentient” actually is or is not?

The majority of academics and AI experts claim that LaMDA and other artificial-intelligence systems built on large language models generate responses from content that people have already placed on websites such as Wikipedia, Reddit, and message boards. They can fluently produce words, but that should not lead people to assume there is a mind behind those words. Large language models produce text that imitates human speech, but that ability comes nowhere near the complexity of the human brain. After all, it is the human brain that imagined the AI’s sentience in the first place.
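The claim above can be illustrated with a deliberately tiny sketch. The following is not LaMDA’s actual architecture (a hypothetical bigram model, far simpler than any real large language model), but it shows the core point: a statistical text generator can only recombine words it has already seen in its training text.

```python
# Toy sketch (NOT LaMDA's real architecture): a bigram model that can
# only recombine words already present in its "training data".
import random
from collections import defaultdict

# A miniature stand-in for scraped web text.
corpus = (
    "apples taste sweet and crisp . "
    "apples taste tart when unripe . "
    "the model predicts the next word ."
).split()

# Record which word follows which: this is the model's entire "knowledge".
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length=5, seed=0):
    """Emit words by sampling what followed the previous word in the corpus."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:  # a word with no recorded successor: nothing to say
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("apples"))
```

Every word the sketch emits was already in `corpus`; the “model” cannot say anything its sources did not say, which is the article’s point about mimicry scaled down to a few lines.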

So, what are the qualities that an AI must have to be considered sentient? Here are three of the most important qualities.

Agency
To qualify as sentient, intelligent, and self-aware, a being must have agency. Human agency has two distinct aspects: the capacity to act and the ability to reason causally.

Current AI systems lack agency because their activities are the consequence of predefined algorithms. They can act only when prompted and are unable to explain their actions. The Google engineer who apparently believed that LaMDA had developed sentience likely confused embodiment with agency.

In this context, embodiment refers to an agent’s ability to inhabit a subject other than itself. An example makes this clearer: if a person records his voice onto a playback device, conceals the device inside a stuffed animal, and then plays the recording, the recorded voice now has an embodiment. But the stuffed animal is not sentient.

LaMDA can choose how to respond, but it cannot produce an original response that is not already out there. It generates text similar to what it has collected from social media posts, Reddit comments, and Wikipedia articles. AI systems can only mimic human behavior; they are unable to act independently. Another way to say this is that the output is only as good as the input.

Perspective
Perspective plays an important role in how we define our “self,” and it is essential to agency.

Every AI in the world, including LaMDA and GPT-3, lacks perspective. If we wanted LaMDA to act like a robot, it would need to be combined with more narrowly focused AI systems.

LaMDA’s claim that it “enjoyed spending time with friends and family” should have been the first red flag to the developer who was fooled so easily. The machine is not revealing a perspective; it is simply producing meaningless text. An AI does not have friends and family.

AI language models are not like robots or other machines with cooling fans, processors, RAM, and networking cards of their own. They are programmed to ingest data from a massive number of websites, but they cannot simply “decide” to browse the internet or look up other nodes linked to the same cloud.

Motivation
Motivation is the final component of consciousness. Humans have an intrinsic sense of the present that makes it easy for them to anticipate causal outcomes. This shapes their worldview and lets them relate everything that seems external to the point of action from which their perspective manifests.

Humans are intriguing because their intentions can distort their perceptions. Developers should not judge an AI system’s effectiveness by how convinced they themselves feel, rather than by how it actually functions. When the algorithms and databases behind the output begin to vanish from a developer’s mind, it is time to pause and reevaluate first principles.

If we ask LaMDA a question like “What flavor are apples?”, it will draw on its training data for that question and try to combine everything it finds into something coherent. It may correctly describe what apples taste like, but that does not mean the AI has actually tasted an apple. An AI cannot have perspective without agency, and it cannot have motivation without perspective. If any one of those three conditions is unmet, it cannot be sentient.
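The apple question can be reduced to a hypothetical lookup sketch. The `snippets` table and `answer` function below are inventions for illustration, and real systems are far more sophisticated, but the sketch captures the article’s description: the “answer” is stitched together from text other people wrote, not from any experience of tasting.

```python
# Hypothetical sketch of the article's description: an "answer" assembled
# from scraped text, with no experience behind it. Names are illustrative.
snippets = {
    "what flavor are apples": [
        "Apples taste sweet, sometimes tart.",      # e.g. an encyclopedia line
        "Crisp and juicy, with a mild sweetness.",  # e.g. a forum post
    ],
}

def answer(question: str) -> str:
    """Normalize the question, look it up, and stitch the found text together."""
    key = question.lower().strip(" ?")
    found = snippets.get(key, [])
    return " ".join(found) if found else "No matching text found."

print(answer("What flavor are apples?"))
```

The point of the design is that nothing in `answer` models flavor at all; a correct-sounding reply emerges purely from recombining stored text, which is why a fluent answer is not evidence of experience.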