AI Ethics Self-Governance

3 min read

AI Ethics: Self-Governed by Corporations and Employees

L Ravichandran, Founder – AIThoughts.Org

As more self-learning AI software and products are deployed in factories, retail stores, and enterprises, and as self-driving cars take to our roads, the age-old philosophical field of ethics has become a pressing present-day issue.

Who will ensure that ethics is a critical component of AI projects right from conceptualization? ESG (environmental, social, and corporate governance) and sustainability considerations have already become business priorities at corporations everywhere; how do we make AIEthics a similar priority? The Board, the CEO, the CXOs, and all employees must understand the impact of this issue and ensure compliance. In this blog, I suggest a few things corporations can do in this regard.

All of us have heard of the Hippocratic Oath taken by medical doctors, affirming their professional obligation to do no harm to human beings. Another ethical oath is the Iron Ring Oath, taken by Canadian engineers since 1922 along with the wearing of an iron ring. There is a myth that the initial batch of iron rings was made from the beams of the first Quebec Bridge, which collapsed during construction in 1907 due to poor planning and engineering design. The Iron Ring Oath affirms engineers' responsibility for good workmanship and no compromise on good design and good materials, regardless of external pressures.

When it comes to AI & Ethics, the ethical questions become more complex. Much more complex.

 

If a self-driving car hits a human being, who is responsible? The car company, the AI product company, or the AI designers and developers? Or the AI car itself?

Who is responsible if an AI interviewing system is biased and selects only one set of people (based on gender, race, etc.)?

Who is responsible if an industrial robot shuts down an assembly line on sensing a fault but kills a worker in the process?

Ironically, much of the literature on this topic refers to, and even suggests the use of, Isaac Asimov's Laws of Robotics from his 1942 science fiction short story.

The Three Laws are:

1.    A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2.    A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3.    A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.


In June 2016, Satya Nadella, CEO of Microsoft, set out the following guidelines for Microsoft's AI designers in Slate magazine.

1. “A.I. must be designed to assist humanity”, meaning that human autonomy needs to be respected.

2. “A.I. must be transparent”, meaning that humans should know and be able to understand how it works.

3. “A.I. must maximize efficiencies without destroying the dignity of people”.

4. “A.I. must be designed for intelligent privacy”, meaning that it earns trust by guarding people’s information.

5. “A.I. must have algorithmic accountability so that humans can undo unintended harm”.

6. “A.I. must guard against bias”, so that it does not discriminate against people.

A great deal of research is underway on this topic. Philosophers, lawyers, government bodies, and IT professionals are working together to define the problem in granular detail and develop solutions.

I recommend the following:

1. All corporate stakeholders (user corporations and tech firms) should publish an AIEthics Manifesto and report compliance to the Board quarterly. The manifesto should commit them to meeting all in-country AIEthics policies where these exist, or to following a minimum set of safeguards in countries that are yet to publish such policies. This puts an AIEthics item on the KPIs/BSCs (balanced scorecards) of the CEO and CXOs and ensures the commitment proliferates inside the company.

2. Individual developers and end-users can take an oath or pledge: ‘I will, to the best of my ability, develop or use only products which are ethical and protect human dignity and privacy.’

3. The whistle-blower policy should be extended to AIEthics compliance issues, encouraging employees to report problems without fear.
