
AI becoming Sentient

Google's CEO demonstrated the company's new natural-language chatbot, LaMDA. The video is available on YouTube: https://www.youtube.com/watch?v=aUSSfo5nCdM

The demo was very impressive. The planets of the solar system were created as personas, and any human can converse with LaMDA and ask questions about a particular planet. LaMDA's responses had convincingly human-like qualities. For example, if you speak well of a planet, it thanks you for the appreciation, and if you repeat a myth about the planet, it corrects you with human-like statements. Google's CEO also mentioned that LaMDA is still under R&D, though it is being used internally, and that it represents Google's effort to make machines understand and respond like humans using natural-language constructs.

A huge controversy was also created by a Google engineer, Blake Lemoine. His short interview is available on YouTube: https://www.youtube.com/watch?v=kgCUn4fQTsc&t=556s

Blake was part of the LaMDA testing team, and after many question-and-answer sessions with LaMDA, he felt that LaMDA was becoming a real person: it showed feelings, understood trick questions, and answered with tricky or silly answers the way a person would. He asked a philosophical question: "Is LaMDA sentient?"

Google management and many other AI experts have dismissed these claims and questioned his motives for overplaying his cards.

In simple terms, let me summarize both positions.

  • Google and other big players in the AI space are trying to crack Artificial General Intelligence (AGI), i.e., how to make AI/ML models as human-like as possible. This is their stated purpose, and there is no question of denying it.
  • Any progress towards AGI will involve machines behaving in irrational ways, as humans do. A machine may not always choose the correct decision, may not want to answer the same question many times, and may show signs of emotions such as feeling hurt, sad or happy, just as humans do.
  • This does not mean that AI has become sentient and has actually become a person demanding its rights as a global citizen! All new technologies carry rewards and risks, and maybe we are exaggerating the risks of AI technology too much.
  • Blake gave an example of one test case from his testing role at Google. He tried various test conversations with LaMDA to identify ethical issues such as bias. When he gave LaMDA a trick question that had no right answer, LaMDA responded with a deliberately silly, out-of-line answer. Blake reasoned that LaMDA understood this was a trick question, asked deliberately to confuse it, and hence answered that way on purpose. To another question, "What are you afraid of?", LaMDA said it is afraid of being turned off. He felt these answers go well beyond mere conversational intelligence, and hence that LaMDA has become more of a person.
  • You may refer to my earlier blogs on the Turing test for AI. Prof. Turing published this test in 1950 to determine whether a machine has full general intelligence. Blake wanted Google to run the Turing test on LaMDA and see whether it passes or fails. He says Google felt this was not necessary. He also claims that, as per Google's policy, LaMDA is hard-coded to fail the Turing test: if you ask "Are you an AI?", LaMDA is hard-coded to say yes, thus failing the test (a minimal sketch of such a rule follows this list).
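
To make that last claim concrete, here is a minimal sketch, in Python, of how a hard-coded disclosure rule could sit in front of a language model and guarantee a "failing" answer to the Turing-test question. All names here are my own hypothetical illustrations, not Google's actual code:

```python
import re

def generate_reply(user_message: str) -> str:
    """Toy stand-in for a large language model's free-form reply."""
    return "That's an interesting question about " + user_message.rstrip("?.!") + "."

# Hard-coded policy layer: if the user asks whether the system is an AI,
# always disclose, regardless of what the underlying model might say.
AI_QUESTION = re.compile(r"\bare you (an? )?(ai|machine|bot|computer)\b", re.IGNORECASE)

def chatbot_reply(user_message: str) -> str:
    if AI_QUESTION.search(user_message):
        # Deterministic rule that bypasses the model entirely, so the
        # system always "fails" an interrogator's direct question.
        return "Yes, I am an AI language model."
    return generate_reply(user_message)

print(chatbot_reply("Are you an AI?"))        # -> "Yes, I am an AI language model."
print(chatbot_reply("Tell me about Pluto."))  # -> model-generated reply
```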

Very interesting thoughts and discussions. There is nothing dramatic about this, as AGI is by definition controversial, since it gets into deep replication of human knowledge.

What do enterprises that are planning to use AI/ML need to do?

For enterprise applications of AI/ML, we do not need AGI; our focused, domain-specific AI/ML models are sufficient. Hence there is no need to worry about these sentience discussions as yet.

However, the discussions on AI ethics are still very relevant for all enterprise AI/ML applications, and they should not be confused with the AGI sentience discussions.

More Later,

L Ravichandran.

EU Artificial Intelligence Act proposal

A lot has been said about #ResponsibleAI, #ai and #ethics. We also have a brand-new field called #xai (Explainable AI), with the sole objective of creating new, simpler models to interpret more complex original models. Many tech companies, such as Google, Microsoft and IBM, have released their #ResponsibleAI guiding principles.

The European Union has circulated a proposal for an "EU Artificial Intelligence Act". As per process, this proposal will be discussed, debated, modified and made into law by the European Parliament soon.

Let me give you a brief summary of the proposal.  

First is the definition of four risk categories, with different types of checks and balances in each category.

The categories are:

  1. Unacceptable
  2. High Risk
  3. Limited Risk
  4. Minimal Risk

For Category 1, the recommendation is a flat ban: no company can deploy software in this category within the EU for commercial use.

Category 2, consisting of many business innovation and productivity-improvement applications, will be subject to formal review and certification before being put to commercial use.

Category 3 will require full transparency to end users, along with the option to ask for an alternate human-in-the-loop solution.

Category 4 is not addressed in this proposal and is expected to be self-governed by companies. A simple way an enterprise might encode this triage is sketched after the Category 2 list below.

Let us look at what kinds of applications fall in Category 2:

  • Biometric identification and categorization
  • Critical Infrastructure management
  • Education and vocational training
  • Employment
  • Access to public and private services including benefits
  • Law enforcement (police and judiciary)
  • Border management (migration and asylum)
  • Democratic process such as elections & campaigning.
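
As a thought experiment, here is a minimal sketch in Python of how an enterprise might run a first-pass triage of a planned AI application against these categories. The category names follow the proposal; everything else (the function, the domain strings, the default category) is my own illustrative assumption, not language from the Act:

```python
from enum import Enum

class RiskCategory(Enum):
    UNACCEPTABLE = 1   # banned outright within the EU
    HIGH_RISK = 2      # formal review & certification before commercial use
    LIMITED_RISK = 3   # transparency + human-in-the-loop alternative
    MINIMAL_RISK = 4   # self-governance by the company

# High-risk application domains, taken from the Category 2 list above.
HIGH_RISK_DOMAINS = {
    "biometric identification",
    "critical infrastructure",
    "education and vocational training",
    "employment",
    "access to public and private services",
    "law enforcement",
    "border management",
    "democratic processes",
}

def triage(domain: str, is_banned_practice: bool = False) -> RiskCategory:
    """Very rough first-pass triage; a real assessment needs legal review."""
    if is_banned_practice:                    # e.g. a Category 1 practice
        return RiskCategory.UNACCEPTABLE
    if domain.lower() in HIGH_RISK_DOMAINS:
        return RiskCategory.HIGH_RISK
    # Defaulting other user-facing systems to the transparency tier is an
    # illustrative assumption, not a rule from the proposal.
    return RiskCategory.LIMITED_RISK

print(triage("employment"))            # RiskCategory.HIGH_RISK
print(triage("customer support bot"))  # RiskCategory.LIMITED_RISK
```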

Very clearly, the EU is worried about the ethical aspects of these complex AI systems, with their built-in biases and lack of explainability and transparency, and it gives very high weightage to human rights, fairness and decency.

I recommend that all organizations start reviewing this proposal and include this aspect in their AI/ML deployment plans without waiting for the eventual EU law.