
Global Collaborations: Managing the AI Safety Paradox

In the last few years, AI has been transforming industries and is poised to have a significant impact on our daily lives. The field has gained significant impetus since the commercial availability of generative AI. It has been said many times, by many people, that the industry needs to make sure that AI technologies are developed and deployed in a manner that is ethical, transparent, and accountable. There is a need for assurance of AI safety and trustworthiness: AI technologies that can do wonders in the right hands will create chaos if left without the required oversight. This need for oversight puts demands on governments around the world to put in place policies and governance frameworks that help steer the development and use of AI systems in directions that benefit society as a whole.

But governments around the world face challenges in ensuring the safety and trustworthiness of AI. Some of the key challenges are:

  1. Rapid Technological Advancement: AI technology is evolving at an unprecedented pace, making it difficult for regulations to keep up. This can lead to a gap between the development of AI and the implementation of effective safety measures.
  2. Complex Technical Nature: AI systems are often highly complex, involving intricate algorithms and large datasets. This complexity makes it challenging for policymakers and regulators who may not have the technical expertise to fully understand the risks and potential consequences.
  3. Diverse Applications: AI is being used in a wide range of sectors, from healthcare to finance to transportation. This diversity of applications means that different safety and trustworthiness concerns may arise in each sector, requiring tailored regulatory approaches.
  4. International Collaboration: AI development and deployment are increasingly global in nature, involving collaboration across countries. This necessitates international cooperation to establish consistent standards and regulations to prevent regulatory arbitrage and ensure global safety.
  5. Balancing Innovation and Regulation: Governments must strike a balance between encouraging innovation and ensuring safety. Overly restrictive regulations could stifle AI development, while lax regulations could lead to serious risks. This balance is a tightrope walk.
  6. Ethical Considerations: AI raises complex ethical questions, such as algorithmic bias, job displacement, and the potential for autonomous systems to make life-or-death decisions. Addressing these ethical concerns requires careful consideration and robust frameworks.
  7. Transparency and Explainability: AI systems, especially those based on machine learning, can be difficult to interpret and understand. A lack of transparency and explainability can hinder trust and accountability, and there is still a lot of work to be done in this space.
  8. Security Risks: AI systems can be vulnerable to cyberattacks and manipulation, which could have serious consequences. Ensuring the security of AI systems is crucial for their safety and trustworthiness.
  9. Data Privacy: AI systems often rely on large amounts of data, raising concerns about privacy and data protection. Governments must balance the need for data to train AI models with the rights of individuals to protect their personal information.

These challenges require a coordinated effort between governments, private sector, academia, and civil society to develop effective solutions.

Many governments have taken steps and launched initiatives to address these challenges. Some have issued executive orders and passed laws. Some have established safety institutes to take a holistic approach to AI safety. Many have also started collaborating among themselves. Below, taking the United Kingdom as an example, is a representative glimpse of how governments are responding to the recognition that the use of AI in public services and in business is going to be unavoidable.

  • AI Safety Institute (AISI): Launched at the AI Safety Summit in November 2023, AISI is dedicated to advancing AI safety and governance. It conducts rigorous research, builds infrastructure to test AI safety, and collaborates with the wider research community, AI developers, and other governments. Its aim is also to shape global policymaking on the subject through such collaboration. (Ref 1)
  • AI Management Essentials (AIME) Tool: This tool includes a self-assessment questionnaire, a rating system, and a set of action points and recommendations to help businesses manage AI responsibly. AIME is based on the ISO/IEC 42001 standard, the NIST framework, and the EU AI Act. (Ref 2)
  • AI Assurance Platform: A centralized resource offering tools, services, and frameworks to help businesses navigate AI risks and improve trust in AI systems. (Ref 3)
  • Systemic Safety Grant Program: Provides funding for initiatives that develop the AI assurance ecosystem, with up to £200,000 available for each supporting project that investigates the societal risks associated with AI, including deepfakes, misinformation, and cyber-attacks. (Ref 4)
  • UK AISI Collaboration with Singapore: The UK AISI collaborates with Singapore to advance AI safety and governance. Both countries work together to ensure the safe, ethical, and responsible development and deployment of AI technologies. (Ref 5).

The UK AISI has already started engaging with industry. For example, jointly with the US AISI, it carried out a pre-deployment evaluation of Anthropic’s upgraded Claude 3.5 Sonnet.

Many other countries have taken similar steps, such as the UK and US AISI partnership and the collaboration between UK and French AI research institutes. On the other hand, many countries have not yet made this a priority.

Recognizing that these efforts must transcend country boundaries, several initiatives have come into existence. The most notable is the International Network of AI Safety Institutes, formed to boost cooperation on AI safety. A brief overview follows (Ref 6 & 7):

  • Formation and Members: Launched at the AI Seoul Summit in May 2024, the network includes the UK, US, Japan, France, Germany, Italy, Singapore, South Korea, Australia, Canada, and the European Union.
  • Objectives: The network aims to accelerate the advancement of AI safety science globally by promoting complementarity and interoperability between institutes and fostering a common international understanding of AI safety approaches.
  • Collaboration: Members coordinate research, share resources and information, develop best practices, and exchange or co-develop AI model evaluations.

AISIs, UN initiatives, and the International Network of AI Safety Institutes have made significant strides in promoting AI safety and trustworthiness through collaboration, including industry-academia partnerships, standards setting, knowledge sharing, and the definition of comprehensive frameworks for ethical AI development and use. While concrete outcomes may take time to materialize, these initiatives have laid the foundation for a safer and more trustworthy AI future.

References:

  1. AISI (AI Safety Institute) https://www.aisi.gov.uk/
  2. UK Government Introduces Self-Assessment Tool to Help Businesses Manage AI Use by Fiona Jackson – TechRepublic https://www.techrepublic.com/article/uk-government-ai-management-essentials/
  3. UK government launches AI assurance platform for enterprises by Sebastian Klovig Skelton – TechTarget/ComputerWeekly https://www.computerweekly.com/news/366615318/UK-government-launches-AI-assurance-platform-for-enterprises
  4. AISI’s Systemic AI Safety Grant https://www.aisi.gov.uk/work/advancing-the-field-of-systemic-ai-safety-grants-open
  5. UK & Singapore collaboration on AI Safety https://www.mddi.gov.sg/new-singapore-uk-agreement-to-strengthen-global-ai-safety-and-governance/
  6. Press Release https://www.gov.uk/government/news/global-leaders-agree-to-launch-first-international-network-of-ai-safety-institutes-to-boost-understanding-of-ai
  7. CSIS report https://www.csis.org/analysis/ai-safety-institute-international-network-next-steps-and-recommendations

 

Ethics at the Forefront: Navigating the Path at the Frontier of Artificial General Intelligence

While we may want to cautiously avoid the term Artificial General Intelligence (AGI) today, it is evident from the general capabilities of the systems currently in place that we are either close to, or perhaps already have, some form of AGI in operation. In this scenario, it is crucial that AI ethics take center stage for several compelling reasons:

Unprecedented Autonomy and Decision-Making: AGI’s capability to autonomously perform any intellectual task necessitates ethical guidelines to ensure that the decisions made do not harm individuals or society.

Societal Impact and Responsibility: The profound impact of AGI across all societal sectors demands an alignment with human values and ethics to responsibly navigate changes and mitigate potential disruptions.

Avoiding Bias and Ensuring Fairness: To counteract the perpetuation of biases and ensure fairness, AGI requires a robust ethical framework to identify, address, and prevent discriminatory outcomes.

Control and Safety: The potential for AGI to surpass human intelligence necessitates stringent ethical guidelines and safety measures to ensure human control and to prevent misuse or unintended behaviors.

Transparency and Accountability: Given the complexity of AGI decision-making, ethical standards must enforce transparency and accountability, enabling effective monitoring and management by human operators.

Long-term Existential Risks: Aligning AGI with human values is crucial to avert existential risks and ensure that its development and deployment do not adversely impact humanity’s future.

Global Collaboration and Regulation: The global nature of AGI development necessitates international cooperation, with ethical considerations driving dialogue and harmonized regulations for responsible AGI deployment worldwide.

To expand on the important aspect of “Unprecedented Autonomy and Decision-Making,” the profound ability of AGI systems to perform tasks across various domains without human intervention is noteworthy. Organizations can proactively establish measures to ensure that the development and deployment of AI systems are aligned with ethical standards and societal values. The aspects below show how this autonomy manifests and why ethics must be built in from the start:

  • Decision-Making in Complex Scenarios. Manifestation: Future AGI can make decisions in complex, unstructured environments such as medicine, law, and finance. Importance of ethics (Ensuring Beneficence): Ethical guidelines are needed to ensure decisions made by AGI prioritize human well-being and do not cause harm.
  • Continuous Learning and Adaptation. Manifestation: Unlike narrow AI, AGI can learn from new data and adapt its behavior, leading to evolving decision-making patterns. Importance of ethics (Maintaining Predictability): Ethical frameworks can guide the development of AGI to ensure its behavior remains predictable and aligned with human intentions.
  • Autonomy in Execution. Manifestation: AGI systems can act without human oversight, executing tasks based on their programming and learned experiences. Importance of ethics (Safeguarding Control): Ethics ensure that even in autonomous operation, AGI systems remain under human oversight and control to prevent unintended consequences.
  • Interaction with Unstructured Data. Manifestation: AGI can interpret and act upon unstructured data (text, images, etc.), making decisions based on a wide array of inputs. Importance of ethics (Preventing Bias): Ethical standards are crucial to ensure that AGI systems do not perpetuate or amplify biases present in the data they learn from.
  • Complex Communication Abilities. Manifestation: AGI can potentially understand and generate natural language, enabling it to communicate based on complex dialogues and texts. Importance of ethics (Ensuring Transparency): Ethics demand that AGI communication remains transparent and understandable to humans to facilitate trust and accountability.
  • Long-Term Strategic Planning. Manifestation: AGI could plan and execute long-term strategies with far-reaching impacts, considering a wide array of variables and potential future scenarios. Importance of ethics (Aligning with Human Values): Ethical guidelines are essential to ensure that AGI’s long-term planning and strategies are aligned with human values and ethics.

By taking these steps, organizations can play a pivotal role in steering the development of AGI towards a future where it aligns with ethical standards and societal values, ensuring its benefits are maximized while minimizing potential risks.

 

The Executive Order!

Close on the heels of the formation of the Frontier Model Forum and a White House announcement that it had secured “voluntary commitments” from seven leading AI companies to self-regulate the risks posed by artificial intelligence, President Joe Biden yesterday issued an executive order regulating the development, and ensuring the safe and secure deployment, of artificial intelligence models. The underlying principles of the order are summarized below.

The key aspects of the order focus on what is termed “dual-use foundation models” – models that are trained on broad data, use self-supervision, and can be applied in a variety of contexts. Typically, generative AI models like GPT fall into this category, although the order is aimed at the next generation of models beyond GPT-4.

Let’s look at the key aspects of what the order says.

Safe & Secure AI

  • The need for safe and secure AI through thorough testing – even sharing test results with the government for critical systems that can impact national security, economy, public health and safety
  • Build guidelines to conduct AI red-teaming tests that involve assessing and managing the safety, security, and trustworthiness of AI models
  • The need to establish provenance of AI generated content
  • Ensure that compute & data are not in the hands of a few colluding companies, and that new businesses can thrive [This is probably the biggest “I don’t trust you” statement back to Big Tech!]

AI Education / Upskilling

  • Given its criticality, the need for investments in AI-related education, training, R&D, and protection of IP.
  • Support for programs to provide Americans with the skills they need for the age of AI and attract the world’s AI talent, via investments in AI-related education, training, development, research, and capacity and IP development
  • Encouraging AI skills import into the US [probably the one that most Indian STEM students who hope to study and work in the US will find a reason to cheer]

Protection Of Rights

  • Ensuring the protection of civil rights, protection against bias & discrimination, rights of consumers (users)
  • Lastly, the growth of governmental capacity to regulate, govern, and support responsible AI.

Development of guidelines & standards

  • Building on the Blueprint for an AI Bill of Rights & the AI Risk Management Framework, to create guidance and benchmarks for evaluating and auditing AI capabilities, particularly in areas where AI could cause harm, such as cybersecurity and biosecurity

Protecting US Interests

  • The regulations also propose that companies developing or intending to develop potential dual-use foundation models report to the government on an ongoing basis on their activities with respect to training and assurance of the models, and on the results of any red-team testing conducted
  • IaaS providers must report on the security of their infrastructure and the usage of compute (large enough to train these dual-use foundation models), as well as its usage by foreign actors training large AI models that could be used for malicious purposes

Securing Critical Infrastructure

  • With respect to critical infrastructure, the order directs that, under the Secretary of Homeland Security, an AI Safety & Security Board be established, composed of AI experts from various sectors, to provide advice and recommendations to improve security, resilience, and incident response related to AI usage in critical infrastructure
  • All critical infrastructure is to be assessed for potential risks (vulnerabilities to critical failures, physical attacks, and cyberattacks) associated with the use of AI.
  • An assessment is to be undertaken of the risks of AI misuse in developing threats in key areas such as CBRN (chemical, biological, radiological, and nuclear) weapons and the biosciences

Data Privacy

  • One section of the document deals with mitigating privacy risks associated with AI, including an assessment and standards on the collection and use of information about individuals.
  • It also seeks to ensure that the collection, use, and retention of data respects privacy and confidentiality
  • Also calls for Congress to pass Data Privacy legislation

Federal Government Use of AI

  • The order encourages the use of AI, particularly generative AI, with safeguards in place and appropriate training, across federal agencies, except for national security systems.
  • It also calls for an interagency council to be established to coordinate AI development and use.

Finally, the key element: keeping America’s leadership in AI strong, by driving efforts to expand engagement with international allies, establish international frameworks for managing AI risks and benefits, and drive an AI research agenda.

In subsequent posts, we will look at reactions, and what it means for Big Tech and for the Indian IT industry which is heavily tied to the US!

To Be Or Not To Be – GPT-4 Applications

Posting on behalf of L Ravichandran

I saw this talk organized by a company called Steamship on YouTube.
 
GPT-4 – How does it work, and how do I build apps with it? – CS50 Tech Talk
 


One of the key speakers talked about various categories of applications being built using GPT-4. No. 1 is the “Companionship” category of applications.
 
He further expanded on the Companionship category: a mentor, a coach, a friend who will give you the right feedback, a friend who will always empathize with you, and so on. People are using these personas to get solace and comfort by “talking” to these companions.
 
As I was watching this video, I was really disturbed and at the same time became inquisitive. What do we humans want? Do we want to communicate with GPT companions or with flesh-and-blood human companions? Are we settling for GPT companions because current society does not support human-to-human contact and communication?
 
The large clusters of extended family living nearby are gone as we move away into far suburbs. The number of children per family is falling fast. Physical games are being substituted with online virtual games; friends are very few, and even these few friends are happy with virtual communication.
 
I know this is a question for philosophers, psychologists, and social scientists to answer. I hope they seriously look at this new phenomenon and assess its impact on human society.
 
I will conclude with the famous Shakespearean line “To be or not to be”. “To be a human or not to be a human” is the new question.

AI Regulations: Need for Urgency

A few weeks ago, I saw a news article about the risks of unregulated AI. The article reported that in the USA, police came to the house of an eight-months-pregnant African American woman and arrested her because a facial recognition system had identified her as the theft suspect in a robbery. No amount of pleading from the lady that, given her advanced pregnancy at the time of the robbery, she simply could not have committed the said crime, was heard by the police officer. The police officer did not have any discretion. The system was set up such that once the AI facial recognition identifies a suspect, the police are required to arrest her, bring her to the police station, and book her.

In this case, she was taken to the police station, booked, and released on bail. A few days later the case against her was dismissed, as the AI system had wrongly identified her. It also emerged that she was not the first such case: a few more people, especially African American women, had been wrongly arrested and later released due to an incorrect facial recognition model.

The pace at which governments are moving on regulations, combined with the proliferation of AI tech companies delivering business applications such as this facial recognition model, demands urgent regulation.

Maybe citizens themselves should organize and hold the people responsible for deploying these systems accountable. The chief of police, maybe the mayor of the town, and the county officials who signed off on this AI facial recognition system should be made accountable. Maybe the county should pay hefty fines, and not just offer a simple "oops, sorry".

A lot of attention needs to be placed on training data. Training data should represent all the diverse people in the country in sufficient samples. Expected biases due to a lack of diversity in training data must be anticipated and the model tweaked. Most democratic countries have criminal justice systems with the unwritten motto "Let 1000 criminals go free, but not a single innocent person should go to jail." The burden of proof of guilt is always on the state. However, we seem to have forgotten this when deploying these law enforcement systems. A burden of proof with very high confidence levels, and explainable, human-understandable AI reasoning, must be the basic approval criteria before these systems are deployed.
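As a small illustration of that approval criterion, here is a minimal sketch, assuming a hypothetical match format: the threshold, field names, and review flag below are placeholders, not drawn from any deployed system. The idea is that a face match is never actionable on its own; it must clear a very high confidence bar, carry human-understandable reasoning, and be confirmed by a human reviewer.

```python
# Hypothetical sketch of a confidence-and-review gate for a face-match result.
# Threshold and field names are illustrative placeholders only.
from dataclasses import dataclass

MIN_ACTIONABLE_SCORE = 0.99   # assumed "very high confidence" bar


@dataclass
class FaceMatch:
    suspect_id: str
    similarity: float   # model's similarity score in [0, 1]
    explanation: str    # human-readable reasoning from an explainability layer


def may_proceed(match: FaceMatch, reviewed_by_human: bool) -> bool:
    """Return True only if the match clears the confidence bar, carries an
    explanation a human can inspect, and a human has actually reviewed it."""
    if match.similarity < MIN_ACTIONABLE_SCORE:
        return False          # treat as an investigative lead at most, never grounds for arrest
    if not match.explanation:
        return False          # no human-understandable reasoning, no action
    return reviewed_by_human  # the final decision always rests with a person
```

Even a gate like this is only a floor: it does not remove the need for representative training data or for independent audits of the model itself.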

The proposed EU AI Act classifies these law enforcement systems as high risk, bringing them under the act. Hopefully the EU act becomes law soon and prevents such unfortunate violations of civil liberties and human rights.

More Later,

L Ravichandran

EU AI Regulations Update

I wrote some time back about the circulation of the draft EU AI Act. After more than two years, there is some more movement toward making this an EU law. In June 2023, the EU Parliament adopted the draft and a set of negotiating principles, and the next step of discussions with member countries has started. EU officials are confident that this process will be completed by the end of 2023 and that this will become an EU law soon. As the old Hindi proverb goes, "Bhagwan ke ghar mein der hai, andher nahin", or "In God's scheme of things, there may be delays but never darkness." The EU has taken the first step, and if this becomes law by early 2024, it will be a big achievement. I am sure the USA and other large countries will follow soon.

The draft has more or less maintained its basic principles and structure. 

The basic objective of the new law is to make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly. In addition, there is a larger emphasis on AI systems being overseen by people rather than by automation alone. The principle of proportionate regulation, the risk categorization of AI systems, and a level of regulation appropriate to the risk are the central themes of the proposed law. Furthermore, there were no generative AI or ChatGPT-like products when the original draft was developed in 2021, and hence additional regulations have been added to address large language models and generative AI models. The draft also plans to establish a technology-neutral, uniform definition of AI that could be applied to future AI systems.

Just to recall from my earlier blog, the risks are categorized into limited risk, high risk, and unacceptable risk.

The draft law clearly defines systems that are categorized as "unacceptable risk" and proposes to ban them from commercial launch within EU member countries. Some examples are given below.

  • Any AI system that can change or manipulate the cognitive behaviour of humans, especially vulnerable groups such as children and the elderly.
  • Any AI system that classifies people based on personal traits such as behaviour, socio-economic status, race, or other personal characteristics.
  • Any AI system that performs real-time, remote biometric identification, such as facial recognition, which usually happens without the consent of the person targeted. The law also clarifies that analysis of past data for law enforcement purposes is acceptable with court orders.

The draft law is concerned about any negative impact on the fundamental rights of EU citizens and any impact on personal safety. Such systems will be categorized as high risk.

1) Many products such as toys, automobiles, aviation products, and medical devices are already covered by existing EU product safety legislation. Any AI systems used inside products already regulated under this legislation will also be subject to additional regulations under the high-risk category.


2) Other AI systems falling into eight specific areas will be classified as high risk, requiring registration in an EU database and subjecting them to the new regulations.

The eight areas are:

  1. Biometric identification and categorisation of natural persons
  2. Management and operation of critical infrastructure
  3. Education and vocational training
  4. Employment, worker management and access to self-employment
  5. Access to and enjoyment of essential private services and public services and benefits
  6. Law enforcement
  7. Migration, asylum and border control management
  8. Assistance in legal interpretation and application of the law.


Once these systems are registered in the EU database, they will be assessed by the appropriate agencies for functionality, safety features, transparency, grievance mechanisms for appeal, etc., and will be given approval before they are deployed in the EU market. All updates and new versions of these AI systems will be subjected to similar scrutiny.


Other AI systems not in the above two lists will be termed "limited risk" systems and subject to self-regulation. At a minimum, the law expects these systems to inform users that they are indeed interacting with an AI system and to provide options to switch to a human-operated system or discontinue using the system.

As I have mentioned before, the proposed law also covers generative AI systems. The law requires these systems to disclose to users that an output document or decision was generated or derived by a generative AI system. In addition, the system should publish the list of copyrighted training content used by the model. I am not sure how practical this is, given that ChatGPT-like systems are reading every piece of digital content on the web and are now moving into audio and video content. Even if a system produces this list, which is expected to be very large, I am not sure current copyright laws are sufficient to address the use of this copyrighted material in a different form inside deep learning neural networks.

The proposed law also wants to ensure that the generative AI models are self-regulated enough not to generate illegal content or provide illegal advice to users.


The Indian government is also looking at enacting AI regulations soon. In a June 9th, 2023 interview, the Indian IT minister talked about this. He emphasized the objective of "no harm" to citizen digital users. The government's approach to any regulation of AI will be through the prism of "user harm, or derived user harm, through the use of any AI technology". I am sure a draft will be out soon and that India will also have similar laws.

Let us discuss the implications and consequences of this regulation for the various stakeholders.

  • AI system developer companies (tech companies and enterprises)


They need to educate all their AI development teams on these laws and ensure their systems are tested for compliance prior to commercial release. Large enterprises may even ask large-scale model developers like OpenAI to indemnify them against any violations while using their APIs. Internal legal counsel at both the tech companies and the API-using enterprises need to be trained on the new laws and get ready for contract negotiations. Systems integrators and outsourcers such as Tech Mahindra, TCS, and Infosys also need to gear up for the challenge. Liability will be passed down from the enterprise to the systems integrators, and they need to ensure compliance is built in and also tested correctly, with proper documentation.

  • Governments & Regulators

Government and regulatory bodies need to upskill their staff on the new laws and on how to verify and test compliance for commercial launch approval. The tech companies are very big and will throw their best technical and legal talent at justifying that their systems are compliant; if regulatory bodies are not skilled enough to verify this, the law will become ineffective and exist only on paper. This is a huge challenge for government bodies.

  • The legal community: public prosecutors, company legal counsel, and defence lawyers

Are they ready for the avalanche of legal cases, starting with regulatory approvals and appeals, ongoing copyright violations, privacy violations, and inter-company litigation over liability sharing between tech companies, enterprises, and systems integrators?

Massive upskilling and training is needed even for senior lawyers, as the issues arising from this law are very different. The law degree curriculum needs to include a course on AI regulations. For example, consider the essence of a comedian's talk show "learnt" by a deep learning model and stored deep inside its neural networks. Is that a copyright violation? The model outputs a comedy speech in a similar style by using the "essence" stored in the neural network. Is the output a copyright violation? Who is responsible and accountable for an autonomous car accident? Who is responsible for a factory accident causing injury to a worker in an autonomous robot factory? There are lots of new legal challenges.

Most Indian systems integrators are investing large sums of money to reskill and to create new AI-based service offerings. I hope they are spending part of that investment on AI regulations and compliance; otherwise, they run the risk of losing all their profits in a few tricky legal challenges.

More later

L Ravichandran

AI & Ethics: A reminder from Anand Mahindra

I am sure many of you have seen a tweet from Anand Mahindra on video morphing and the risks associated with it. Anand clearly makes a cry for tech solutions to solve the problem. Ref:

https://twitter.com/anandmahindra/status/1616722233946411008

At AiThoughts.Org, we have been talking about AI & Ethics and the need for all stakeholders, i.e. tech developers, enterprise users, governments, and the justice ecosystem, to get ready to tackle this issue head-on.

To highlight Anand Mahindra’s tweet: the video morpher clearly demonstrated to the world that, with commonly available technologies, one can put anyone’s face on a video and create a fake news controversy. In this world of instant reading and quick opinion forming, the damage to the public person will be enormous and often irreparable. No amount of post facto counterclaims about the fakeness of the video will ever restore the lost goodwill. This is a real threat to democracy, as elections can be manipulated, and to enterprise valuations as well. Just one day after the Twitter $8 authentication feature fiasco, we saw billions of dollars of market valuation lost for a few corporations.

With ChatGPT providing robust APIs, I am sure more and more enterprises will use this powerful knowledge engine to extract research information about various facets such as competition, raw materials, political events in various countries, etc. to make business decisions. False research data can mean the failure of a strategy, costing enterprises billions. See my earlier blog on how wrong answers were confidently given by ChatGPT.

The evolutionary race between prey and predator has been happening for millions of years. We have seen it happen in various digital technologies: hackers (the predators) keep improving, and CISOs (the prey) keep deploying more powerful tools to identify and block them. In the same way, we need easy technology solutions to detect morphed images, morphed voices, etc. before they are allowed to be posted on popular social media sites. However, we are not seeing these fact-checker tools (protecting the prey) come to market as fast as the AI/ML advances (the predators). In fact, even the term "predator" that I am using to characterize these great technologies will be objected to vehemently by AI/ML proponents. My use of the prey-and-predator analogy is more to illustrate the risks associated with these technologies, as pointed out by so many people.

Large social sites such as Twitter, YouTube, Facebook, etc. should immediately evolve preventive mechanisms, not the post facto mechanisms that exist today, such as blocking or deleting posts. Today's post facto deletion of fake news posts, warning tags on posts, or even blocking of a handle are too-little, too-late fixes. Every second the fake news is on air, the damage it causes is enormous. We need preventive tech checks and blocking even before the fake item is posted.

Both the sender and the referenced person should be used as key parameters for preventive checks. For example, a sender that is a political party's official handle needs preventive checks, as any message coming out of this handle will be viewed as the official position of the party. Even if the sender is an individual, if that person's following is very large, then his or her posts also need preventive checks for fake news. The power of exponential dissemination of fake news coming from an influencer with a large following has been seen on many occasions already. The same goes for the referenced person: if a person with a large following is referenced in fake news with morphed videos, it will cause enormous harm to that individual due to tagging.
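As a rough sketch of such a preventive-check rule, under assumed thresholds (the follower cut-off, field names, and the notion of an "official" handle below are hypothetical placeholders, not any platform's actual policy), a post could be routed for pre-publication screening whenever the sender is an official handle, the sender has a large following, or the post references a widely followed person:

```python
# Hypothetical sketch of routing posts for preventive (pre-publication) checks.
# Threshold and field names are illustrative placeholders.
from dataclasses import dataclass, field
from typing import List

LARGE_FOLLOWING = 100_000  # assumed cut-off for "influencer" reach


@dataclass
class Account:
    handle: str
    followers: int
    is_official: bool = False  # e.g. a political party's official handle


@dataclass
class Post:
    sender: Account
    referenced: List[Account] = field(default_factory=list)


def needs_preventive_check(post: Post) -> bool:
    """Flag a post for screening before it goes live, keyed on sender and referenced persons."""
    if post.sender.is_official:
        return True
    if post.sender.followers >= LARGE_FOLLOWING:
        return True
    # Fake news that tags a widely followed person causes outsized harm.
    return any(person.followers >= LARGE_FOLLOWING for person in post.referenced)
```

Posts that are flagged would go through morphing and fake-news detectors before publication; everything else could continue on today's post facto path.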

I am sure the large R&D teams employed by these tech giants can easily develop preventive solutions and deploy them immediately, using the post facto solutions as an add-on for the few cases that escape the preventive checks.

AI & Ethics is a topic that is not only becoming important but also urgent.

More Later,

L Ravichandran

 

EU Artificial Intelligence Act proposal

A lot has been said about #ResponsibleAI, #ai, and #ethics. We also have a brand new field called #xai (Explainable AI), with the sole objective of creating new, simpler models to interpret more complex original models. Many tech companies such as Google, Microsoft, and IBM have released their #ResponsibleAI guiding principles.

The European Union has circulated a proposal for "The EU Artificial Intelligence Act". As per the process, this proposal will be discussed, debated, modified, and made into law by the European Parliament soon.

Let me give you a brief summary of the proposal.  

First is the definition of four risk categories, with different types of checks and balances in each category.

The categories are  

  1. Unacceptable
  2. High Risk
  3. Limited Risk
  4. Minimal Risk

For Category 1, the recommendation is a big NO: no company can deploy software in this category within the EU for commercial use.

Category 2, consisting of many of the business innovation and productivity improvement applications, will be subject to formal review and certification before being put to commercial use.

Category 3 will require full transparency to end users and an option to ask for an alternate human-in-the-loop solution.

Category 4 is not addressed in this proposal; it is expected to be self-governed by companies.

Let us look at what kinds of applications fall into Category 2:

  • Biometric identification and categorization
  • Critical Infrastructure management
  • Education and vocational training
  • Employment
  • Access to public and private services including benefits
  • Law enforcement (Police & Judiciary)
  • Border management (Migration and asylum)
  • Democratic process such as elections & campaigning.

Very clearly, the EU is worried about the ethical aspects of these complex AI systems, with their in-built biases and lack of explainability and transparency, and it also clearly gives very high weightage to human rights, fairness, and decency.

I recommend that all organizations start reviewing this and include this aspect in their AI/ML deployment plans without waiting for the eventual EU law.

What NOT to say


Teaching chatbots to speak ‘properly’ and ‘decently’

Many of us will have heard about Microsoft's Tay.ai chatbot, which was released and pulled back within 24 hours in 2016 because of the abusive language it learned. It took less than 24 hours to corrupt an innocent AI chatbot. What went wrong? Tay.ai's learning module was excellent, which ironically was the problem: it rapidly learned swear words, hate language, etc. from the large number of people who used abusive language in conversations with the chatbot. However, unlike many of us, Tay.ai had no internal filters; it went ahead and learned from these signals and started using these phrases and hate language. All this happened in less than 24 hours, which forced Microsoft to pull it from public use.

I have been observing how my son and daughter-in-law are teaching my 3-year-old granddaughter about the use of good language. Basic things like saying 'Please', 'Thank You', 'Good morning', 'Good night', etc. In other words, decent and desirable language was taught first. They have also given strict instructions to us (grandparents) and the extended family about what to say – and what not to say – in front of the kid. The child will still hear some 'bad words' in school, malls, playgrounds, etc. This is beyond the parents' control. In these cases, they teach the child that a very few bad people still use 'bad' language and that good people never use these words, thus starting to build the internal filters in my granddaughter's mind.

We should apply the same principle to these innocent but fast-learning chatbots.  Let us ‘teach’ the chatbot all the ‘good’ phrases like ‘Please’, ‘Thank you’ etc. Let us also ‘teach’ the chatbot about showing empathy, such as saying ‘Sorry that your product is not working.  We will do everything possible to fix it’, ‘Sorry to ask you to repeat as I did not understand your question’, and so on.

Finally, let us create a negative list of 'bad' phrases and hate language in all possible variations. English in the UK has British, Scottish, and Irish variations; some phrases considered acceptable in one area may be objectionable in another. The same goes for Australia, New Zealand, India, New York and Northern US English, Southern US English, etc. Let us build internal filters into these chatbots to ignore or unlearn these phrases during the learning process. By looking at the IP address of the user, the bot can identify the geographical location and apply the right language filters.
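Here is a minimal sketch of that idea, assuming hypothetical phrase lists and a placeholder locale lookup (none of the names or lists below come from a real system): every candidate reply is passed through a locale-specific negative-list filter before it is sent, and flagged exchanges would also be kept out of the learning data.

```python
# Hypothetical sketch of a locale-aware "negative list" filter for a chatbot.
# Phrase lists and the region lookup are illustrative placeholders only.

NEGATIVE_LISTS = {
    "en-GB": {"banned phrase one", "banned phrase two"},  # UK variations
    "en-IN": {"objectionable phrase"},                    # Indian English
    "en-US": {"another banned phrase"},                   # US English
}


def region_from_ip(ip_address: str) -> str:
    """Map the user's IP address to a locale code.
    A real system would call a geolocation service; here we simply default to en-US."""
    return "en-US"


def violates_negative_list(text: str, locale: str) -> bool:
    """Return True if the text contains any banned phrase for the locale."""
    banned = NEGATIVE_LISTS.get(locale, set())
    lowered = text.lower()
    return any(phrase in lowered for phrase in banned)


def filtered_reply(candidate_reply: str, user_ip: str, fallback: str) -> str:
    """Suppress replies that violate the locale's negative list."""
    locale = region_from_ip(user_ip)
    if violates_negative_list(candidate_reply, locale):
        return fallback  # a polite, pre-approved response
    return candidate_reply
```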

Will this work? As good parents, we have been doing this to teach our kids and grandkids since time immemorial. Mostly it works; very few kids grow up to become users of hate language.

Will it slow down the machine learning process?  Perhaps a little bit, but this is a price worth paying, compared to having a chatbot use foul language and upset your valuable customers.

You may be wondering whether this simple approach is supported by any AI research or whether this is just a grandfather's tale! There is a lot of research in this area that supports my approach.

There are many references to articles on the 'Seldonian algorithm' for AI ethics. I want to refer to an article titled 'Developing safer machine learning algorithms' from UMass Amherst. The authors recommend that the burden of ensuring that ML systems are well-behaved lies with the ML designer and not with the end user, and they suggest a three-step Seldonian algorithm. Let us look at this.

Step one is to provide an interface, specified by the user, to define undesirable or bad behaviour. The ML algorithm will use the interface and try as much as possible to avoid these undesirable behaviours.

Step two is to use high-probability constraints: Seldonian algorithms guarantee, with high probability, that they will not cause the undesirable behaviour that the user specified via the interface.

Step three in the algorithm is No Solution Found: Seldonian algorithms must have the ability to return No Solution Found (NSF) to indicate that they were unable to achieve what they were asked.
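Putting the three steps together, here is a minimal sketch of the structure (the helper functions, the Hoeffding-style bound, and the threshold values are my own illustrative placeholders, not the UMass implementation): propose a candidate model on a training split, run a high-probability safety test on a held-out split, and return No Solution Found if the constraint cannot be certified.

```python
# Hypothetical sketch of the three-step Seldonian structure described above.
# All helper logic and thresholds are illustrative placeholders.
import math
from typing import Callable, List, Optional

NSF = None  # "No Solution Found"


def hoeffding_upper_bound(samples: List[float], confidence: float) -> float:
    """Step 2 helper: high-probability upper bound on the mean of values in [0, 1]."""
    n = len(samples)
    mean = sum(samples) / n
    return mean + math.sqrt(math.log(1.0 / (1.0 - confidence)) / (2.0 * n))


def seldonian_train(
    train_data,
    safety_data,
    fit_candidate: Callable,   # learns a candidate model from the training split
    is_undesirable: Callable,  # step 1: the user-defined interface flagging bad behaviour per example
    max_rate: float = 0.01,    # constraint: allowed rate of undesirable behaviour
    confidence: float = 0.95,  # probability with which the bound must hold
) -> Optional[object]:
    # Step 1: search for a candidate solution on the training split.
    candidate = fit_candidate(train_data)

    # Step 2: high-probability safety test on the held-out split.
    violations = [1.0 if is_undesirable(candidate, x) else 0.0 for x in safety_data]
    if hoeffding_upper_bound(violations, confidence) <= max_rate:
        return candidate

    # Step 3: refuse to return a model that cannot be certified safe.
    return NSF
```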

 

Let us consider two examples involving human life to illustrate the interface definitions. Example one is a robot that controls a robotic assembly line. The robot senses that a welding operation has gone out of sync and is causing all welded cars to be defective. The robot controller wants to issue the instruction to immediately stop the assembly line and get the welding station fixed. However, the user knows that an abrupt stoppage of the assembly line may cause harm to factory workers who may be at another station on the line. This undesirable decision to immediately stop the assembly line needs to be defined in the interface, as it would cause harm to humans, compared with the material loss of defective cars.

Example two is an autonomous truck carrying cargo on a hilly road with a cliff on the driving side. A human driver is coming fast in the wrong lane (the human's fault), approaching the truck for a certain head-on collision. The only desirable outcome for the truck is to drive off the cliff and destroy itself along with the cargo, rather than weighing various other "optimal" decisions that carry some probability of hitting the car and harming the human.

In our chatbot good-behavior problem, the undesirable behaviors are the use of phrases from the 'Negative List' for each geographical variation. The interface will have this list and the logic to identify geographical variations.

I am in discussion with some sponsors for a research project to develop an English-language chatbot etiquette engine.  Initial reactions from the various stakeholders are positive – everyone agrees on the need for an etiquette engine as well as my approach. 

I will be delighted to receive critique and comments from all of you. 

As a closing note, I wanted to tell you that natural language processing (NLP) is taking huge strides. "NLP is eating the ML" is the talk of the town. NLP research, supported by large language models, transformers, etc., is moving way ahead. Investment is going into Q&A, language generation, knowledge management, and unsupervised/reinforcement learning.

In addition to desirable behavior, many other ethical issues need to be incorporated. For example:

  • Transparency: Does everyone know broadly how learning is done and how decisions are taken?
  • Explainability: For every individual decision, if requested, can we explain how the decision was taken?

Also, a lot of current AI/ML algorithms, especially those based on neural networks, have become black boxes. We expect a shift towards simpler algorithms for enterprise usage.

 

AI Ethics Self Governance

AI Ethics: Self-governed by Corporations and Employees

L Ravichandran, Founder – AIThoughts.Org

As more self-learning AI software and products are used in factories, retail stores, and enterprises, and in self-driven cars on our roads, the age-old philosophical area of ethics has become an important current-day issue.

Who will ensure that ethics is a critical component of AI projects right from conceptualization?  Nowadays, ESG (environmental, social, and corporate governance) and sustainability considerations have become business priorities at all corporations; how do we make AIEthics a similar priority? The Board, CEO, CXOs and all employees must understand the impact of this issue and ensure compliance. In this blog, I am suggesting one of the things corporations can do in this regard.

All of us have heard of the Hippocratic Oath taken by medical doctors, affirming their professional obligations to do no harm to human beings. Another ethical oath is called the Iron Ring Oath, taken by Canadian Engineers, along with the wearing of iron rings, since 1922. There is a myth that the initial batch of iron rings was made from the beams of the first Quebec Bridge that collapsed during construction in 1907 due to poor planning and engineering design. The iron ring oath affirms engineers’ responsibility to good workmanship and NO compromise in their work regarding good design and good material, regardless of external pressures.

 

When it comes to AI & Ethics, the ethical questions become more complex. Much more complex.

 

If a self-driven car hits a human being, who is responsible? The car company, the AI product company or the AI designer/developers? Or the AI car itself?

 

Who is responsible if an AI Interviewing system is biased and selects only one set of people (based on gender, race, etc.)?

 

Who is responsible if an Industrial Robot shuts off an assembly line when sensing a fault but kills a worker in the process?  

 

Ironically, much literature on this topic refers to, and even suggests the use of, Isaac Asimov's Laws of Robotics from his 1942 science fiction story.

The Three Laws are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

 

In June 2016, Satya Nadella, CEO of Microsoft Corporation, talked about the following guidelines for Microsoft AI designers in an interview with Slate magazine.

  1. "A.I. must be designed to assist humanity", meaning human autonomy needs to be respected.
  2. "A.I. must be transparent", meaning that humans should know and be able to understand how they work.
  3. "A.I. must maximize efficiencies without destroying the dignity of people".
  4. "A.I. must be designed for intelligent privacy", meaning that it earns trust by guarding users' information.
  5. "A.I. must have algorithmic accountability so that humans can undo unintended harm".
  6. "A.I. must guard against bias", so that it does not discriminate against people.

 

Lots of research is underway to address this topic. Philosophers, lawyers, government bodies and IT professionals are jointly working on defining the problem in granular detail and coming out with solutions.

I recommend the following:

 

  1. All corporate stakeholders (user corporations and tech firms) should publish an AIEthics Manifesto and report compliance to the board on a quarterly basis. This manifesto will ensure they meet all in-country AIEthics policies where available, or follow a minimum set of safeguards even if some countries are yet to publish their policies. This will ensure the CEO and CXOs have an item on their KPIs/BSCs regarding AIEthics and ensure proliferation inside the company.

 

  2. Individual developers and end-users can take an oath or pledge stating that 'I will, to the best of my ability, develop or use only products which are ethical and protect human dignity and privacy'.

 

 

  3. The whistle-blower policy should be extended to AIEthics compliance issues, to encourage employees to report issues without fear.