
AI becoming Sentient

Google CEO Sundar Pichai demonstrated their new natural language chatbot, LaMDA. The video is available on YouTube: https://www.youtube.com/watch?v=aUSSfo5nCdM

The demo was very impressive. The planets of the solar system were created as personas, and anyone could converse with LaMDA and ask questions about a particular planet. LaMDA's responses had sufficiently human-like qualities. For example, if you say something nice about the planet, it thanks you for the appreciation, and when you bring up myths about the planet, it corrects you with human-like statements. Google's CEO also mentioned that LaMDA is still under R&D but is being used internally, and that it is part of Google's effort to make machines understand and respond like humans using natural language constructs.

A huge controversy was also created by a Google engineer, Blake Lemoine. His short interview is available on YouTube: https://www.youtube.com/watch?v=kgCUn4fQTsc&t=556s

Blake was part of LaMDA's testing team, and after many question-and-answer sessions with LaMDA, he felt that LaMDA was becoming a real person: showing feelings, recognizing trick questions, and responding with trick or silly answers the way a person would. He asked the philosophical question, “Is LaMDA sentient?”

Google management and many other AI experts have dismissed these claims and questioned his motives for overplaying his hand.

In simple terms, let me summarize both positions.

  • Google and other big players in the AI space are trying to crack Artificial General Intelligence (AGI), i.e., how to make AI/ML models as human-like as possible. This is their stated purpose, and there is no denying it.
  • Any progress towards AGI will involve machines behaving in irrational ways, as humans do. Machines may not always choose the correct decision, may not want to answer the same question many times, and may show signs of emotions such as feeling hurt, sad, or happy, just as humans do.
  • This does not mean that AI has become sentient and is actually a person demanding its rights as a global citizen! All new technologies have rewards and risks, and maybe we are exaggerating the risks of AI technology too much.
  • Blake gave an example of one test case from his testing role at Google. He tried various test conversations with LaMDA to identify ethical issues such as bias. When he gave LaMDA a trick question that had no right answer, LaMDA responded with a deliberately silly, out-of-line answer. Blake reasoned that LaMDA understood it was a trick question, asked deliberately to confuse it, and hence gave a silly answer on purpose. To another question, “What are you afraid of?”, LaMDA said it is afraid of being turned off. He felt these answers go well beyond mere conversational intelligence, and hence concluded that LaMDA has become more of a person.
  • You may refer to my earlier blogs on the Turing test for AI. Alan Turing published this test in 1950 to determine whether a machine exhibits human-level general intelligence. Blake also wanted Google to run the Turing test on LaMDA and see whether it passes or fails. He says Google felt this was not necessary. He also claims that, as per Google's policy, LaMDA is hard-coded to fail the Turing test: if you ask, “Are you an AI?”, LaMDA is hard-coded to say yes, thus failing the test (see the illustrative sketch after this list).
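To make that last claim concrete, here is a minimal, purely illustrative sketch of how such a hard-coded self-disclosure rule could sit in front of a chatbot and guarantee that it "fails" a Turing-style test. This is not Google's actual implementation; the function names and the pattern are hypothetical.

```python
import re

# Illustrative guardrail: if the user asks whether the system is an AI,
# a hard-coded rule answers "yes" before the model ever sees the question.
SELF_DISCLOSURE_PATTERN = re.compile(
    r"\bare you (an? )?(ai|bot|robot|computer|machine)\b", re.IGNORECASE
)

def generate_model_reply(prompt: str) -> str:
    # Placeholder for the real model call (hypothetical helper).
    return f"[model-generated reply to: {prompt}]"

def guarded_reply(prompt: str) -> str:
    if SELF_DISCLOSURE_PATTERN.search(prompt):
        return "Yes, I am an AI language model."
    return generate_model_reply(prompt)

if __name__ == "__main__":
    print(guarded_reply("Are you an AI?"))        # hard-coded disclosure
    print(guarded_reply("Tell me about Saturn"))  # falls through to the model
```

The point of the sketch is simply that self-identification can be a fixed policy layered on top of the model, independent of whatever conversational intelligence the model itself has.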

Very interesting thoughts and discussions. There is nothing dramatic about this, as AGI is by definition controversial, since it gets into deep replication of human knowledge.

What do enterprises that are planning to use AI/ML need to do?

For enterprise applications of AI/ML, we do not need AGI; focused, domain-specific AI/ML models are sufficient. Hence there is no need to worry about these sentience discussions as yet.

However, the discussions on AI ethics are still very relevant for all enterprise AI/ML applications and should not be confused with the AGI sentience discussions.

More Later,

L Ravichandran.
