Ethics at the Forefront: Navigating the Path at the Frontier of Artificial General Intelligence

While we may want to cautiously avoid the term Artificial General Intelligence (AGI) today, it is evident from the general capabilities of the systems currently in place that we are either close to, or perhaps already have, some form of AGI in operation. In this scenario, it is crucial that AI ethics take center stage for several compelling reasons:

Unprecedented Autonomy and Decision-Making: AGI’s capability to autonomously perform any intellectual task necessitates ethical guidelines to ensure that the decisions made do not harm individuals or society.

Societal Impact and Responsibility: The profound impact of AGI across all societal sectors demands an alignment with human values and ethics to responsibly navigate changes and mitigate potential disruptions.

Avoiding Bias and Ensuring Fairness: To counteract the perpetuation of biases and ensure fairness, AGI requires a robust ethical framework to identify, address, and prevent discriminatory outcomes.

Control and Safety: The potential for AGI to surpass human intelligence necessitates stringent ethical guidelines and safety measures to ensure human control and to prevent misuse or unintended behaviors.

Transparency and Accountability: Given the complexity of AGI decision-making, ethical standards must enforce transparency and accountability, enabling effective monitoring and management by human operators.

Long-term Existential Risks: Aligning AGI with human values is crucial to avert existential risks and ensure that its development and deployment do not adversely impact humanity’s future.

Global Collaboration and Regulation: The global nature of AGI development necessitates international cooperation, with ethical considerations driving dialogue and harmonized regulations for responsible AGI deployment worldwide.

To expand on the important aspect of “Unprecedented Autonomy and Decision-Making,” the profound ability of AGI systems to perform tasks across various domains without human intervention is noteworthy. Organizations can proactively establish certain measures to ensure that the development and deployment of AI systems are aligned with ethical standards and societal values. Here’s what organizations can put in place now:

| Aspect | Manifestation | Importance of Ethics |
| --- | --- | --- |
| Decision-Making in Complex Scenarios | Future AGI can make decisions in complex, unstructured environments such as medicine, law, and finance. | Ensuring Beneficence: Ethical guidelines are needed to ensure decisions made by AGI prioritize human well-being and do not cause harm. |
| Continuous Learning and Adaptation | Unlike narrow AI, AGI can learn from new data and adapt its behavior, leading to evolving decision-making patterns. | Maintaining Predictability: Ethical frameworks can guide the development of AGI to ensure its behavior remains predictable and aligned with human intentions. |
| Autonomy in Execution | AGI systems can act without human oversight, executing tasks based on their programming and learned experiences. | Safeguarding Control: Ethics ensure that even in autonomous operation, AGI systems remain under human oversight and control to prevent unintended consequences. |
| Interaction with Unstructured Data | AGI can interpret and act upon unstructured data (text, images, etc.), making decisions based on a wide array of inputs. | Preventing Bias: Ethical standards are crucial to ensure that AGI systems do not perpetuate or amplify biases present in the data they learn from. |
| Complex Communication Abilities | AGI can potentially understand and generate natural language, enabling it to communicate based on complex dialogues and texts. | Ensuring Transparency: Ethics demand that AGI communication remains transparent and understandable to humans to facilitate trust and accountability. |
| Long-Term Strategic Planning | AGI could plan and execute long-term strategies with far-reaching impacts, considering a wide array of variables and potential future scenarios. | Aligning with Human Values: Ethical guidelines are essential to ensure that AGI's long-term planning and strategies are aligned with human values and ethics. |

By taking these steps, organizations can play a pivotal role in steering the development of AGI towards a future where it aligns with ethical standards and societal values, ensuring its benefits are maximized while minimizing potential risks.
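
To make the "Safeguarding Control" row above concrete, here is a minimal, purely illustrative Python sketch of one measure an organization could put in place today: an oversight wrapper that routes low-confidence or high-impact automated decisions to a human reviewer. The class names, thresholds, and the assumed `predict_fn` interface are inventions for this example, not any particular product's API.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Decision:
    """Outcome of an automated decision plus how it was reached."""
    value: Any
    confidence: float          # model's own confidence estimate, 0.0-1.0
    needs_human_review: bool
    rationale: str

class HumanOversightWrapper:
    """Wraps an autonomous model so risky decisions stay under human control.

    `predict_fn` is assumed to return (value, confidence); any model with
    that shape can be wrapped. Thresholds are illustrative defaults.
    """

    def __init__(self, predict_fn: Callable[[Any], tuple],
                 confidence_threshold: float = 0.9,
                 high_impact_check: Callable[[Any], bool] = lambda case: False):
        self.predict_fn = predict_fn
        self.confidence_threshold = confidence_threshold
        self.high_impact_check = high_impact_check

    def decide(self, case: Any) -> Decision:
        value, confidence = self.predict_fn(case)
        # Escalate to a human if the model is unsure or the case is high impact.
        if confidence < self.confidence_threshold or self.high_impact_check(case):
            return Decision(value, confidence, True,
                            "Escalated: low confidence or high-impact case")
        return Decision(value, confidence, False, "Automated: within approved bounds")

# Example usage with a stand-in model:
wrapper = HumanOversightWrapper(
    predict_fn=lambda case: ("approve", 0.72),
    high_impact_check=lambda case: case.get("amount", 0) > 10_000,
)
print(wrapper.decide({"amount": 500}))   # escalated because confidence is below 0.9
```

The design choice here is deliberately simple: autonomy is preserved for routine cases, while anything the model is unsure about, or that crosses an impact threshold the organization defines, is handed back to a person.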

 

AI Becoming Sentient

Google's CEO demonstrated their new natural-language chatbot LaMDA. The video is available on YouTube: https://www.youtube.com/watch?v=aUSSfo5nCdM

The demo was very impressive. Each planet in the solar system was created as a persona, and any human can converse with LaMDA and ask questions about that particular planet. LaMDA's responses had sufficiently human-like qualities. For example, if you say good things about the planet, it thanks you for the appreciation, and when you bring up myths about the planet, it corrects you with human-like statements. Google's CEO also mentioned that LaMDA is still under R&D but is being used internally, and that it is part of Google's effort to make machines understand and respond like humans using natural-language constructs.

A huge controversy was also created by a Google engineer, Blake Lemoine. His short interview is available on YouTube: https://www.youtube.com/watch?v=kgCUn4fQTsc&t=556s

Blake was part of LaMDA's testing team, and after many question-and-answer sessions with LaMDA, he felt that LaMDA was becoming a real person: it showed feelings, understood trick questions, and answered them with tricky or silly answers the way a person would. He asked a philosophical question: "Is LaMDA sentient?"

Google management and many other AI experts have dismissed these claims and questioned his motives for overplaying his hand.

In simple terms, let me summarize both positions.

  • Google and other big players in the AI space are trying to crack the Artificial General Intelligence (AGI) area, i.e., how to make AI/ML models as human-like as possible. This is their stated purpose, and there is no question of denying it.
  • Any progress towards AGI will involve machines behaving in irrational ways, as humans do. Machines may not always choose the correct decision every time, may not want to answer the same question many times, and may show signs of emotions such as feeling hurt, sad, or happy, just as humans do.
  • This does not mean that AI has become sentient and has actually become a person demanding its rights as a global citizen! All new technologies carry rewards and risks, and maybe we are exaggerating the risks of AI technology too much.
  • Blake gave an example of one test case from his testing role at Google. He tried various test conversations with LaMDA to identify ethical issues such as bias. When he gave LaMDA a trick question that had no right answer, LaMDA responded with a deliberately silly, out-of-line answer. Blake reasoned that LaMDA understood this was a trick question, asked deliberately to confuse it, and hence gave the silly answer on purpose. To another question, "What are you afraid of?", LaMDA said it is afraid of being turned off. He felt these answers go far beyond mere conversational intelligence, and hence that LaMDA has become more of a person.
  • You may refer to my earlier blogs on the Turing test for AI. Prof. Turing published this test in 1950 to determine whether an AI machine has full general intelligence. Blake also wanted Google to run this Turing test on LaMDA and see whether it passes or fails; he says Google felt this was not necessary. He also claims that, as per Google's policy, LaMDA is hard-coded to fail the Turing test: if you ask the question "Are you an AI?", LaMDA is hard-coded to say yes, thus failing the test. (A small illustrative sketch of what such a hard-coded guard could look like follows this list.)
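
To illustrate the claim about hard-coding, here is a tiny, hypothetical Python sketch of a policy layer that always discloses that the system is an AI before the underlying model is consulted. This is not Google's code and reflects nothing about LaMDA's actual implementation; the `generate_reply` function and the keyword patterns are assumptions made purely for illustration.

```python
import re

def generate_reply(prompt: str) -> str:
    """Stand-in for the underlying language model (hypothetical)."""
    return "That's an interesting question!"

# Hypothetical policy layer: self-identification questions always receive
# a fixed, honest answer instead of a free-form model response.
SELF_DISCLOSURE_PATTERNS = [
    r"\bare you (an? )?(ai|artificial intelligence|machine|bot)\b",
]

def reply_with_policy(prompt: str) -> str:
    for pattern in SELF_DISCLOSURE_PATTERNS:
        if re.search(pattern, prompt.lower()):
            # Hard-coded disclosure: the system always admits it is an AI,
            # which by design means it cannot pass a Turing-style test.
            return "Yes, I am an AI language model."
    return generate_reply(prompt)

print(reply_with_policy("Are you an AI?"))    # -> "Yes, I am an AI language model."
print(reply_with_policy("Tell me about Mars."))
```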

Very interesting thoughts and discussions. There is nothing dramatic about this, as AGI is by definition controversial, since it gets into deep replication of human knowledge.

What do enterprises that are planning on using AI/ML need to do?

For enterprise applications of AI/ML, we do not need AGI; our focused, domain-specific AI/ML models are sufficient. Hence there is no need to worry about these sentience discussions as yet.

However, the discussions on AI ethics are still very relevant for all enterprise AI/ML applications and should not be confused with the AGI sentience discussions.
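
As one small example of what such an enterprise-level ethics check can look like in practice, here is a short, illustrative Python sketch that computes a disparate-impact ratio for a model's approval decisions across two groups. The toy data, group labels, and the 0.8 ("four-fifths rule") threshold are assumptions chosen for the example, not a prescription.

```python
def disparate_impact_ratio(decisions, groups, positive="approve",
                           protected="group_b", reference="group_a"):
    """Ratio of positive-outcome rates: protected group vs. reference group.

    A common rule of thumb (the 'four-fifths rule') flags ratios below 0.8
    as a sign the model's outcomes may warrant a fairness review.
    """
    def positive_rate(group):
        outcomes = [d for d, g in zip(decisions, groups) if g == group]
        return sum(1 for d in outcomes if d == positive) / len(outcomes)

    return positive_rate(protected) / positive_rate(reference)

# Toy example with made-up decisions from a loan-approval model:
decisions = ["approve", "deny", "approve", "approve",
             "deny", "deny", "approve", "deny"]
groups    = ["group_a", "group_a", "group_a", "group_a",
             "group_b", "group_b", "group_b", "group_b"]

ratio = disparate_impact_ratio(decisions, groups)
print(f"Disparate impact ratio: {ratio:.2f}")  # below 0.8 suggests a review is needed
```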

More Later,

L Ravichandran.

EU Artificial Intelligence Act proposal

A lot has been said about #ResponsibleAI, #ai, and #ethics. We also have a brand-new field called #XAI (Explainable AI), with the sole objective of creating new, simpler models to interpret more complex original models. Many tech companies such as Google, Microsoft, and IBM have released their #ResponsibleAI guiding principles.

The European Union has circulated a proposal for the "EU Artificial Intelligence Act". As per process, this proposal will be discussed, debated, modified, and made into law by the European Parliament soon.

Let me give you a brief summary of the proposal.  

First is the definition of four risk categories, with different types of checks and balances in each category.

The categories are  

  1. Unacceptable
  2. High Risk
  3. Limited Risk
  4. Minimal Risk

For Category 1, the recommendation is a big no: no company can deploy software in this category within the EU for commercial use.

Category 2, consisting of many business innovation and productivity improvement applications, will be under formal review and certification before being put to commercial use.

Category 3 will require full transparency to end users and the option to ask for alternative human-in-the-loop solutions.

Category 4 is not addressed in this proposal; it is expected to be self-governed by companies.

Let us look at what kinds of applications fall into Category 2; a small, illustrative sketch of how this triage might be encoded follows the list.

  • Biometric identification and categorization
  • Critical Infrastructure management
  • Education and vocational training
  • Employment
  • Access to public and private services including benefits
  • Law enforcement (police and judiciary)
  • Border management (migration and asylum)
  • Democratic processes such as elections and campaigning
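
To show how an organization might start encoding this triage in its own deployment process, here is a minimal, illustrative Python sketch that maps a proposed application to one of the four risk categories and returns the obligations summarized above. The domain keywords and obligation wording are rough simplifications of the proposal, not legal definitions.

```python
# Illustrative mapping from the proposal's risk tiers to the obligations
# summarized above. Keyword lists are rough simplifications, not legal text.
HIGH_RISK_DOMAINS = {
    "biometric identification", "critical infrastructure", "education",
    "employment", "public and private services", "law enforcement",
    "border management", "democratic process",
}

OBLIGATIONS = {
    "unacceptable": "Cannot be deployed commercially within the EU.",
    "high": "Formal review and certification required before commercial use.",
    "limited": "Full transparency to end users; offer a human-in-the-loop alternative.",
    "minimal": "Not addressed by the proposal; expected to be self-governed.",
}

def classify_application(domain: str, manipulative_or_social_scoring: bool = False,
                         interacts_with_users: bool = True) -> tuple[str, str]:
    """Very rough triage of an AI application against the proposed tiers."""
    if manipulative_or_social_scoring:
        tier = "unacceptable"
    elif domain.lower() in HIGH_RISK_DOMAINS:
        tier = "high"
    elif interacts_with_users:
        tier = "limited"
    else:
        tier = "minimal"
    return tier, OBLIGATIONS[tier]

print(classify_application("employment"))
print(classify_application("internal document search", interacts_with_users=False))
```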

Very clearly, the EU is worried about the ethical aspects of these complex AI systems, with their inbuilt biases and lack of explainability and transparency, and it clearly gives very high weightage to human rights, fairness, and decency.

I recommend that all organizations start reviewing this proposal and include this aspect in their AI/ML deployment plans without waiting for the eventual EU law.