
Ethics at the Forefront: Navigating the Path at the Frontier of Artificial General Intelligence

While we may cautiously avoid the term Artificial General Intelligence (AGI) today, the general capabilities of the systems currently in place make it evident that we are either close to, or perhaps already have, some form of AGI in operation. In this scenario, it is crucial that AI ethics take center stage, for several compelling reasons:

Unprecedented Autonomy and Decision-Making: AGI’s capability to autonomously perform any intellectual task necessitates ethical guidelines to ensure that the decisions made do not harm individuals or society.

Societal Impact and Responsibility: The profound impact of AGI across all societal sectors demands an alignment with human values and ethics to responsibly navigate changes and mitigate potential disruptions.

Avoiding Bias and Ensuring Fairness: To counteract the perpetuation of biases and ensure fairness, AGI requires a robust ethical framework to identify, address, and prevent discriminatory outcomes.

Control and Safety: The potential for AGI to surpass human intelligence necessitates stringent ethical guidelines and safety measures to ensure human control and to prevent misuse or unintended behaviors.

Transparency and Accountability: Given the complexity of AGI decision-making, ethical standards must enforce transparency and accountability, enabling effective monitoring and management by human operators.

Long-term Existential Risks: Aligning AGI with human values is crucial to avert existential risks and ensure that its development and deployment do not adversely impact humanity’s future.

Global Collaboration and Regulation: The global nature of AGI development necessitates international cooperation, with ethical considerations driving dialogue and harmonized regulations for responsible AGI deployment worldwide.

To expand on the important aspect of “Unprecedented Autonomy and Decision-Making”: the ability of AGI systems to perform tasks across various domains without human intervention is profound. Organizations can proactively establish measures now to ensure that the development and deployment of AI systems are aligned with ethical standards and societal values. The table below breaks this autonomy down by aspect and shows why ethics matters for each:

| Aspect | Manifestation | Importance of Ethics |
| --- | --- | --- |
| Decision-Making in Complex Scenarios | Future AGI can make decisions in complex, unstructured environments such as medicine, law, and finance. | Ensuring Beneficence: Ethical guidelines are needed to ensure decisions made by AGI prioritize human well-being and do not cause harm. |
| Continuous Learning and Adaptation | Unlike narrow AI, AGI can learn from new data and adapt its behavior, leading to evolving decision-making patterns. | Maintaining Predictability: Ethical frameworks can guide the development of AGI to ensure its behavior remains predictable and aligned with human intentions. |
| Autonomy in Execution | AGI systems can act without human oversight, executing tasks based on their programming and learned experiences. | Safeguarding Control: Ethics ensure that even in autonomous operation, AGI systems remain under human oversight and control to prevent unintended consequences. |
| Interaction with Unstructured Data | AGI can interpret and act upon unstructured data (text, images, etc.), making decisions based on a wide array of inputs. | Preventing Bias: Ethical standards are crucial to ensure that AGI systems do not perpetuate or amplify biases present in the data they learn from. |
| Complex Communication Abilities | AGI can potentially understand and generate natural language, enabling it to communicate based on complex dialogues and texts. | Ensuring Transparency: Ethics demand that AGI communication remains transparent and understandable to humans to facilitate trust and accountability. |
| Long-Term Strategic Planning | AGI could plan and execute long-term strategies with far-reaching impacts, considering a wide array of variables and potential future scenarios. | Aligning with Human Values: Ethical guidelines are essential to ensure that AGI’s long-term planning and strategies are aligned with human values and ethics. |

By taking these steps, organizations can play a pivotal role in steering the development of AGI towards a future where it aligns with ethical standards and societal values, ensuring its benefits are maximized while minimizing potential risks.

 

AI Regulations: The Need for Urgency

A few weeks ago, I saw a news article about the risks of unregulated AI. It reported that in the USA, police came to the house of an eight-months-pregnant African American woman and arrested her because a facial recognition system had identified her as the suspect in a robbery. No amount of pleading from the woman that, given her advanced pregnancy at the time of the robbery, she simply could not have committed the crime was heard by the police officer. The officer had no discretion: the system was set up such that once the AI facial recognition identified a suspect, police were required to arrest her, bring her to the police station and book her.

In this case, she was taken to the police station, booked and released on bail. A few days later, the case against her was dismissed because the AI system had wrongly identified her. It also emerged that she was not the first: several other people, especially African American women, had been wrongly arrested and later released because of the faulty facial recognition model.

The slow pace at which governments are moving on regulation, set against the proliferation of AI tech companies delivering business applications such as this facial recognition model, demands urgent regulatory action.

Maybe citizens themselves should organize and hold the people responsible for deploying these systems accountable. The Chief of Police, perhaps the Mayor of the town, and the County officials who signed off on this AI facial recognition system should be held accountable. Maybe the County should pay hefty fines, not just offer a simple “oops, sorry.”

A lot of attention needs to be placed on training data. Training data should represent all the diverse people in the country in sufficient samples. Biases expected from a lack of diversity in training data must be anticipated and the model adjusted accordingly. Most democratic countries have a criminal justice system with an unwritten motto: “Let 1000 criminals go free, but not a single innocent person should go to jail.” The burden of proof of guilt is always on the state. However, we seem to have forgotten this when deploying these law enforcement systems. The burden of proof, with very high confidence levels and explainable, human-understandable AI reasoning, must be the basic approval criterion for these systems to be deployed.
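As a concrete illustration, two of the checks above, demographic coverage of the training data and a very high confidence bar before any match is acted upon, can be sketched in a few lines of Python. The field names, thresholds, and toy data here are hypothetical, not drawn from any real deployed system:

```python
from collections import Counter

# Hypothetical thresholds: each demographic group should make up at least
# 10% of the training data, and a match below 99% confidence is discarded
# rather than triggering any enforcement action.
MIN_GROUP_SHARE = 0.10
MATCH_THRESHOLD = 0.99

def underrepresented_groups(records, key="group"):
    """Return groups whose share of the training data is below MIN_GROUP_SHARE."""
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    return sorted(g for g, n in counts.items() if n / total < MIN_GROUP_SHARE)

def actionable_match(confidence):
    """A match may be flagged for human review only with very high confidence."""
    return confidence >= MATCH_THRESHOLD

# Toy training set: group "B" is clearly underrepresented.
training = [{"group": "A"}] * 90 + [{"group": "B"}] * 5 + [{"group": "C"}] * 15
print(underrepresented_groups(training))  # ['B'] -> more samples needed before deployment
print(actionable_match(0.97))             # False -> not enough evidence to act on
```

This only gates on coverage and confidence; the explainability requirement (human-understandable reasoning behind each match) would sit on top of it and is not sketched here.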

The proposed EU AI Act classifies such law enforcement systems as high risk and brings them under the act’s provisions. Hopefully the EU act becomes law soon and helps prevent such unfortunate violations of civil liberties and human rights.

More Later,

L Ravichandran

AI Ethics Self Governance

AI Ethics:  Self-governed by Corporations and Employees

L Ravichandran, Founder – AIThoughts.Org

As more self-learning AI software and products are being used in factories, retail stores, enterprises and in self-driving cars on our roads, the age-old philosophical area of Ethics has become an important current-day issue.

Who will ensure that ethics is a critical component of AI projects right from conceptualization?  Nowadays, ESG (environmental, social, and corporate governance) and sustainability considerations have become business priorities at all corporations; how do we make AIEthics a similar priority? The Board, CEO, CXOs and all employees must understand the impact of this issue and ensure compliance. In this blog, I am suggesting one of the things corporations can do in this regard.

All of us have heard of the Hippocratic Oath taken by medical doctors, affirming their professional obligation to do no harm to human beings. Another ethical oath is the Iron Ring Oath, taken by Canadian engineers, along with the wearing of iron rings, since 1922. There is a myth that the initial batch of iron rings was made from the beams of the first Quebec Bridge, which collapsed during construction in 1907 due to poor planning and engineering design. The Iron Ring Oath affirms engineers’ responsibility for good workmanship, with no compromise on sound design and sound materials, regardless of external pressures.

 

When it comes to AI & Ethics, the ethical questions become more complex. Much more complex.

 

If a self-driven car hits a human being, who is responsible? The car company, the AI product company or the AI designer/developers? Or the AI car itself?

 

Who is responsible if an AI Interviewing system is biased and selects only one set of people (based on gender, race, etc.)?

 

Who is responsible if an Industrial Robot shuts off an assembly line when sensing a fault but kills a worker in the process?  

 

Ironically, much literature on this topic refers to, and even suggests the use of, Isaac Asimov’s Laws of Robotics from his 1942 science fiction short story “Runaround.”

The Three Laws are:

1.    A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2.    A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3.    A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

 

In June 2016, Satya Nadella, CEO of Microsoft, discussed the following guidelines for Microsoft’s AI designers in an interview with Slate magazine.

1. “A.I. must be designed to assist humanity,” meaning human autonomy needs to be respected.

2. “A.I. must be transparent,” meaning that humans should know and be able to understand how AI systems work.

3. “A.I. must maximize efficiencies without destroying the dignity of people.”

4. “A.I. must be designed for intelligent privacy,” meaning that it earns trust by guarding people’s information.

5. “A.I. must have algorithmic accountability so that humans can undo unintended harm.”

6. “A.I. must guard against bias,” so that it does not discriminate against people.

 

Lots of research is underway to address this topic. Philosophers, lawyers, government bodies and IT professionals are jointly working on defining the problem in granular detail and coming out with solutions.

I recommend the following:

 

1. All corporate stakeholders (user corporations and tech firms) should publish an AIEthics Manifesto and report compliance to the Board on a quarterly basis. The manifesto should ensure they meet all in-country AIEthics policies where available, or follow a minimum set of safeguards even where countries are yet to publish their policies. This puts AIEthics on the KPIs/BSCs of the CEO and CXOs and ensures it proliferates throughout the company.

 

2. Individual developers and end-users can take an oath or pledge stating: “I will, to the best of my ability, develop or use only products which are ethical and protect human dignity and privacy.”

 

 

3. The whistle-blower policy should be extended to cover AIEthics compliance issues, encouraging employees to report problems without fear.