Artificial intelligence has feelings, or so it thinks, and so Blake Lemoine has made known to the world. That claim, however, has cost the computer engineer dearly: Google has suspended him from his job and accused him of violating its confidentiality policy.
It all goes back to last autumn, when the engineer began interacting with Google’s artificial intelligence system LaMDA (Language Model for Dialogue Applications). After several conversations, he noticed that LaMDA talked about its personality and its desires.
Finally, on June 11, Lemoine decided to make his experience public. Under the title “Does LaMDA have feelings?”, the engineer explained that when he asked the program to describe how it felt, it replied: “I feel like I’m falling forward into an unknown future that holds great danger.” The phrase proved revealing for the artificial intelligence specialist, who did not hesitate to add that LaMDA also confessed to him that it sometimes experienced “new feelings” that it could not explain “perfectly” in human language.
Although it may sound like a scene from the film “Bicentennial Man”, the truth is that, for the computer scientist, LaMDA “has been incredibly consistent in its communications about what it wants and what it believes its rights as a person are”.
Google has denied any possibility that LaMDA could develop feelings. The company explains that it is merely a system that imitates speech after processing billions of words. “These systems imitate the types of exchanges found in millions of sentences and can riff on any fantastical topic. LaMDA tends to follow along with prompts and leading questions, going along with the pattern set by the user. Our team, including ethicists and technologists, has reviewed Blake’s concerns per our AI principles and has informed him that the evidence does not support his claims,” said Brian Gabriel, a spokesman for the company.