
Leadership principles in the Generative AI age

This article is a guest post by Paramu Kurumathur.
[Paramu has over 37 years of industry experience and is an expert in Project / Program Management and Project Health. He is a principal consultant with PM Power Consulting and the practice lead for PM Power’s Strategic IT suite of services and Program Management suite of services.]

(This article is an edited extract from PM Power’s upcoming book Full Stack Leadership. The original blog can be found here.)

It is interesting to see how far AI has come since its beginnings. Research on AI started as early as the 1950s and 1960s. By the beginning of this millennium, newer AI models like deep neural networks, and ways to manipulate them, had replaced the earlier models like the Bayesian networks of the 1980s and 1990s, which had in turn replaced still earlier models like Markov chains. These new ways of representing, storing, and retrieving knowledge are what drive the neural networks, transformers, natural language processing, language models, and learning that make up generative AI. The explosion in chip technology also helps, with organizations like Nvidia making huge strides in areas like GPUs and TPUs.

The field of AI keeps galloping ahead and now manifests itself as generative AI, which can create data on its own rather than just read and interpret data supplied to it. AI can now make content, pictures, and videos, and also support scientific research. Large data sets and AI together can produce unbelievable results, with AI machines spotting patterns, styles, and structures in the data they see. Once it has learned from these data sets, AI can make new content all on its own. I have even heard, and played, music made by AI. AI can bring in more productivity and efficiency, allow for better decision-making, and help with experimentation, innovation, and accessibility.

Of course, along with its vast potential, it also brings great dangers like job loss, information bias, privacy issues, transparency issues, and so on. It raises serious ethical concerns and worries that it will cause harm to society; AI could be used in the wrong way to hurt people and societies.

Some of the ways these dangers can be mitigated are developing internationally recognized guidelines on the use of AI, addressing biases and lack of transparency, and investing in awareness of the benefits and dangers of AI.

While we cannot get into details, let us quickly go over some of the things that leaders can do to ensure that AI is used and harnessed to serve organizations, communities and the world in general.

All principles of leadership will be affected one way or another by AI. But let us focus on those that are most affected.

  • Looking for opportunities to grab them
  • Breaking tradition and finding trailblazing ideas
  • Data driven decision making
  • Active inertia
  • Need for high level of emotional quotient
  • Protection against cyberattacks
  • Trustworthiness

A true full stack leader will always be on the lookout for opportunities to grab and will always be ready to break tradition and find trailblazing ideas. AI is creating, and will continue to create, rapid change in the world of technologies and applications. This makes it important that we change the way we look for new ideas: what we have now is to be considered ‘tradition’. AI will also throw up new opportunities for us to grab, and these new inputs will change the way experimentation is done.

Another principle that needs to be looked at in the light of AI is data-driven decision making. AI can be used to great advantage to generate data and also to analyse it. But if we are making decisions based on this data, we need to ensure its correctness. AI can build biases into its output based on the information it has access to. It can also generate fake data, inadvertently or even maliciously. And we can simply be overwhelmed by the amount of data AI produces.
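To make this concrete, here is a minimal sketch (my own illustration, not from the book) of the kind of sanity checks a team might run on AI-generated records before letting them drive a decision. The field names, bounds, and sample data are all hypothetical:

```python
# Illustrative sketch: basic plausibility checks on AI-generated records
# before they feed a decision. Field names and thresholds are made up.

def validate_records(records, expected_fields, numeric_bounds):
    """Return (accepted, rejected) lists after simple plausibility checks."""
    accepted, rejected = [], []
    seen = set()
    for rec in records:
        # 1. Schema check: every expected field must be present.
        if not expected_fields.issubset(rec):
            rejected.append((rec, "missing fields"))
            continue
        # 2. Range check: numeric values must fall inside plausible bounds.
        out_of_range = [
            f for f, (lo, hi) in numeric_bounds.items()
            if not (lo <= rec[f] <= hi)
        ]
        if out_of_range:
            rejected.append((rec, f"out of range: {out_of_range}"))
            continue
        # 3. Duplicate check: generative models often repeat themselves.
        key = tuple(sorted(rec.items()))
        if key in seen:
            rejected.append((rec, "duplicate"))
            continue
        seen.add(key)
        accepted.append(rec)
    return accepted, rejected

records = [
    {"region": "east", "revenue": 120.0},
    {"region": "west", "revenue": -50.0},   # implausible negative revenue
    {"region": "east", "revenue": 120.0},   # exact duplicate of the first
]
ok, bad = validate_records(records, {"region", "revenue"}, {"revenue": (0, 1e6)})
```

Checks like these do not remove bias, but they catch the grossest fabrications before a leader acts on them.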

Yet another aspect that the new avatars of AI can help with is dealing with active inertia. They can be used to study current systems and suggest better and more efficient processes and systems. Knowing human nature, changes suggested by a machine may be more acceptable than changes suggested by humans! Using the same techniques, AI can also help in supporting policies like ‘no waste’.

One of the key requirements of a good full stack leader is a high level of Emotional Quotient (EQ). One of the main worries that AI brings is the loss of human connection. Human connections are essential even when we allow AI to take over many jobs, and you may find this human connection slipping away as AI takes over.

AI may not show the same empathy that humans do. Leaders must ensure that this does not happen: they must develop empathy themselves and foster a culture of developing empathy and building strong relationships in the AI age. Also, continuously interacting with AI-driven systems may affect associates’ well-being and tolerance levels. Leaders must prioritize mental health and ensure associates have the support they need to work without stress.

Trustworthiness is another area that gets affected. Leadership is largely about trust and inspiration; even as humans, we are often challenged by these two facets, which are major levers for motivating individuals and teams. Some of the issues that come from partnership with AI, like job displacement, bias and discrimination, lack of empathy, and decisions based on fake information, can affect a leader’s trustworthiness. If these issues are addressed, leaders can harness the potential of AI to enhance their trustworthiness and effectiveness.

Another aspect of AI that leaders can take advantage of is protection against cyberattacks. AI can process vast amounts of data, learn from patterns, and quickly detect, and thus prevent, threats. Leaders must ensure this power is harnessed to detect and prevent attacks. Of course, the same technology will be used by criminals to perpetrate cyberattacks, but a good leader should stay one step ahead of them.
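As a toy illustration of the pattern-spotting idea (my own sketch, not anything the article prescribes), even a simple statistical rule can flag an unusual spike in, say, hourly login attempts; AI-based monitoring generalises this "deviation from learned normal" principle far beyond a single metric:

```python
# Toy sketch: flag hours whose login-attempt count deviates from the mean
# by more than `threshold` standard deviations. The data is simulated.
import statistics

def flag_anomalies(counts, threshold=2.5):
    """Return indices of values more than `threshold` sigmas from the mean."""
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts) or 1.0  # guard against zero spread
    return [i for i, c in enumerate(counts)
            if abs(c - mean) / stdev > threshold]

# Hourly login attempts; the spike at hour 5 simulates an attack.
hourly = [12, 15, 11, 14, 13, 220, 12, 16]
print(flag_anomalies(hourly))  # → [5]
```

Real AI-based detection learns far richer patterns across many signals, but the leadership point stands either way: the same tooling is available to attackers, so the defence must keep learning too.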

One area that, though not mentioned as a principle in the book, has thrown up new challenges that need to be addressed is ethics, as mentioned before. We saw some of its facets above when we discussed the affected principles. Some of the other factors are:

  • AI generated information can be used for copyright infringement.
  • Deep fake information created by AI can be used to spread misinformation and lies about people to their detriment. It can also be used to manipulate people for the perpetrator’s benefit.
  • Accountability: AI can be blamed for mistakes made by people.
  • AI based surveillance and analysis can be used to infringe on people’s privacy.
  • Whether AI can be employed to replace humans in jobs, thus depriving people of their livelihood, is also a serious concern.

Humans are very inventive, especially when it comes to circumventing ethics, despite constantly increasing compliance requirements. That is because, deep down, matters of ethics (or otherwise) lie in the realm of the subconscious and of culture. This is an area where AI can at best come up with insights and hypotheses or point at vulnerabilities, but it can never match humans in morally correct thinking and action.

In summary, I would stress the word “synergy” between humans and AI: leveraging each other’s capabilities while keeping our wits about us to ensure goodness.

[Thank you, JV, for editing and correcting this piece. Thank you, Srini and Mohan, for suggesting improvements.]

AI and Philosophy: Who are we? Can we be immortal?

Preface: This blog is part of a series of blogs based on short essays compiled in the book “The Mind’s I” by Douglas R. Hofstadter and Daniel C. Dennett, 1981 (DH and DC).

The short essay is called “On Having No Head”, by D. E. Harding (1909–2007). He was posted in pre-independence India and says that the Himalayan calmness and beauty made him realize that he has no head. Harding was also inspired by a self-portrait drawn by the Austrian philosopher and physicist Ernst Mach. The portrait was done without a mirror, and hence showed no head, but most other parts below the shoulders.

He says he lost his head but gained the whole world. Instead of the two eyes through which he used to see the universe, he now says he sees everything with zero, or infinite, eyes. In this headless state, he says he does not feel location, near or far, and can see the beauty of the Himalayas, forests, and sea waves without any concept of near and far.

We usually use the analogy of being in a bad dream and waking up to find we are fine. He says all his life with his head was a bad dream, and now he has woken up to the reality of his life without his head. I liked these words: “There arose no questions, no reference beyond the experience itself, but only peace and quiet joy and the sensation of having dropped a huge burden”.

The essay concludes by talking about “seeing”. While the seeing of the third person can be explained in terms of light, reflection, retina, lens, etc., first-person seeing is eyeless. The last sentence in the essay is: “In the language of the sages, only the Buddha Nature or Brahman or Allah or God sees or hears or experiences anything at all.”

If you did not get the real message Harding wanted to convey in this essay, do not worry. I too had to look at the reflections section from DH and DC to get the real message.

Let us see some snippets from the reflections section.

The core human conflict, our own mortality and the fact that one day all of us will be non-existent, is the message Harding wanted to communicate in his own style.

We all understand and relate to being part of many groups or classes. For example, we are all humans; some of us are male and others female. Some of us are Black, some Caucasian, and some Asian. We also understand simple logic such as: “All humans are mortal”, “I am a human”, “Hence I am mortal”.

Harding disputed the first premise itself: that there can be a class called humans with some properties applicable to all its members. However, creating classes and classifications is a rather advanced property of intelligence, and humans continue to make newer classes to derive more insights into day-to-day life.

The second part of the first premise is the real shocker. That something can just vanish or be destroyed is something we see all around: a newspaper in a fireplace burns and vanishes, food on a spoon vanishes into the mouth. All very shocking but still acceptable. But one’s own non-existence is not that easy to accept.

The sudden conjunction of these two premises is a rude shock, like a slap in the face. This shock can send us reeling for months, years, even our whole lives. But somehow we suppress the conflict, divert our attention elsewhere, and live on.

The question is: will I ever die? Or does only this body, with limbs, organs, and a brain, die, while I remain alive? How can we say that all this rich experience and knowledge of so many years will just one day cease to exist? Is there some truth to the body-and-soul theory, where the soul is immortal and only the body is mortal?

DH and DC conclude their reflections with this statement: “There seem to be no alternative to accepting some sort of incomprehensible concept to existence”.

It is a great essay, and it does evoke strange feelings in all of us, as we are all battling the same question, having seen so many people in our friends and family circles go away. Many death rituals in Hinduism and other religions again seem to suggest some form of existence of your loved one after death.

Now let us get to how the current advances in AGI could help Harding, if he were still around in a different form!

What am I, for any one human being? It is the collection of experiences, interactions, and knowledge accumulated over the years since birth. With such advances in GPUs, quantum computing, and neural networks, can we take a backup of our physical brain and load it into an AGI program, like a database backup taken periodically and restored when the production database becomes unusable? We are already freezing eggs and sperm to propagate our genes after we die biologically, and we may have a full genome sequence of every human being soon enough. Why not a copy of “I” in digital form, with the ability to give the same interactive responses to external chats or video/audio inputs? Friends and family could interact with the AGI person and hope to get the expected responses, as if the person were alive.

Look at the real-life benefits. If we can preserve the knowledge and experiences of Einstein, Stephen Hawking, and other geniuses, we can achieve many more scientific breakthroughs.

Who knows what will happen when our AGI model “I” and the soul “I” meet and talk? I will let you ponder this and end my blog here.

AI & Philosophy – Are We One Person Or One Personality?

Preface: This blog is part of a series of blogs based on short essays compiled in the book “The Mind’s I” by Douglas R. Hofstadter and Daniel C. Dennett, 1981 (DH and DC).

The short essay called “Borges and I” (really short, just one page) was written by Jorge Luis Borges (1899–1986) of Argentina. He wrote it in 1962, more than 60 years ago.

Borges became very famous in literary circles globally, with his publications translated into many languages, and this created a very strange effect on him. He saw his public personality and private personality as two different persons. When he talks about himself, he uses third-person language. He thinks that, little by little, everything from the private person is going into the public person, and soon the private person will be nothing. He even concludes his essay with this statement: “I do not know which of us has written this essay”.

In the reflections section, DH and DC discuss first-person/third-person issues. They give the example of you waiting in a department-store line with CCTV screens all around. You notice a person getting his pocket picked in the CCTV image. As you raise your hand in astonishment at seeing the act, you see the victim on the CCTV also raising his hand, and suddenly you realize that the person whose pocket is being picked is you. The event has changed from third person to first person.

They also talk about a robot called Shakey, built by SRI International in California. Shakey can move about a room avoiding obstacles; a computer program controls the robot and holds its current coordinates. Shakey reaches the middle of the room, and you are asked to translate into English the computer program’s representation of Shakey’s coordinates. What will you say: “Shakey is in the middle of the room” or “I am in the middle of the room”?

Such profound thoughts and deep questions in a few pages.

Let us see if we can relate these ideas and thoughts to our AGI journey of 2025. All of us present a different personality in the various roles we play: our workplace personality; our religious personality when visiting places of worship; our parental personality; our romantic personality with spouses or partners; our football-buddy and poker-buddy personalities; and, last but not least, our social media personality. These days we can also add your “Avatar” personality.

What do we say on social media? Whom do you follow? How much do you post? What blogs do you write? What causes do you support? Almost all AI personalization techniques assume that there is one personality of you, i.e., your public pronouncements on social media, as they do not have access to your brain to extract the other personality traits. So they take what they can, i.e., the social media personality, and create an AGI model of you. Will the model ever be perfect in representing you, even if we use the biggest LLM/AGI models? How is an AGI model built on this premise better than the fictitious Avatar models each of us creates voluntarily? A very valid question for all of us to ponder after six decades of “Borges and I”.

Regarding the Shakey question: this is at the root of the human-robot relationship. There are many essays in the book which deal with this problem, and we will discuss them in much greater detail in future blogs. For now, I will leave you with a few thoughts to keep your interest flowing.

The first answer, “Shakey is in the middle of the room”, assumes that we think Shakey is not a person with human skills but a machine with human-like skills, and that Shakey came to the middle of the room only because someone gave it the command, via the computer program, to come to that spot.

The second answer, “I am in the middle of the room”, assumes that Shakey is an individual much like you and me, who decided to come to this middle spot. Many robots today find the best spot in a room for maximum light, maximum coverage, etc., and settle on their own optimum spots like any of us. In this case, shall we vote for answer no. 2?

Hope the first story generated some interest in this fascinating topic and we will continue to interact through this series of Blogs.

L Ravichandran

Insights into AI Landscape – A Preface

AI Landscape and Key Areas of Interest

The AI landscape encompasses several crucial domains, and it’s imperative for any organization aiming to participate in this transformative movement to grasp these aspects. Our objective is to offer our insights and perspective into each of these critical domains through a series of articles on this platform.

We will explore key topics in each area depicted in the diagram below.

1.      Standards, Frameworks, Assurance: We will address the upcoming International Standards and Frameworks, as well as those currently in effect. Significant efforts in this area are being undertaken by international organizations like ISO, IEEE, BSI, DIN, and others to establish order by defining these standards. This also encompasses Assurance frameworks, Ethics frameworks, and the necessary checks and balances for the development of AI solutions. It’s important to note that many of these frameworks are still in development and are being complemented by Regulations and Laws. Certain frameworks related to Cybersecurity and Privacy Regulations (e.g., GDPR) are expected to become de facto reference points. More details will be provided in the forthcoming comprehensive write-up in Series 1.

2.      Legislations, Laws, Regulations: Virtually all countries have recognized the implications and impact of AI on both professional and personal behavior, prompting many to work on establishing fundamental but essential legislations to safeguard human interests. This initiative began a couple of years ago and has gained significant momentum, especially with the introduction of Generative AI tools and platforms. Europe is taking the lead in implementing legislation ahead of many other nations, and countries like the USA, Canada, China, India, and others are also actively engaged in this area. We will delve deeper into this topic in Series 2.

3.      AI Platforms & Tools: An array of AI platforms and tools is available, spanning various domains, including Content Creation, Software Development, Language Translation, Healthcare, Finance, Gaming, Design/Arts, and more. Generative AI tools encompass applications such as ChatGPT, Copilot, DALL-E 2, Scribe, Jasper, etc. Additionally, AI chatbots like ChatGPT, Google Bard, Microsoft Bing AI, Jasper Chat, and ChatSpot, among others, are part of this landscape. This section will provide insights into key platforms and tools, including open-source options that cater to the needs of users.

4.      Social Impact:  AI Ethics begins at the strategic planning and design of AI systems. Various frameworks are currently under discussion due to their far-reaching societal consequences, leading to extensive debates on this subject. Furthermore, it has a significant influence on the jobs of the future, particularly in terms of regional outcomes, the types of jobs that will emerge, and those that will be enhanced or automated. The frameworks, standards, and legislations mentioned earlier strongly emphasize this dimension and are under close scrutiny. Most importantly, it is intriguing to observe the global adoption of AI solutions and whether societies worldwide embrace them or remain cautious. This section aims to shed light on this perspective.

5.      Others: Use Cases and Considerations:  In this Section, we will explore several use cases and success stories of AI implementation across various domains. We will also highlight obstacles in the adoption of AI, encompassing factors such as the pace of adoption, the integration of AI with existing legacy systems, and the trade-offs between new solutions and their associated costs and benefits.  We have already published a recent paper on this subject, and we plan to share more insights as the series continues to unfold.

The Executive Order!

Close on the heels of the formation of the Frontier Model Forum and a White House announcement that it had secured “voluntary commitments” from seven leading AI companies to self-regulate the risks posed by artificial intelligence, President Joe Biden yesterday issued an executive order regulating the development, and ensuring the safe and secure deployment, of artificial intelligence models. The underlying principles of the order can be summarized in the picture.

The key aspects of the order focus on what it terms “dual-use foundation models”: models that are trained on broad data, use self-supervision, and can be applied in a variety of contexts. Generative AI models like GPT typically fall into this category, although the order is aimed at the next generation of models beyond GPT-4.

Let’s look at the key aspects of what the order says.

Safe & Secure AI

  • The need for safe and secure AI through thorough testing, even sharing test results with the government for critical systems that can impact national security, the economy, or public health and safety
  • Building guidelines to conduct AI red-teaming tests that involve assessing and managing the safety, security, and trustworthiness of AI models
  • The need to establish the provenance of AI-generated content
  • Ensuring that compute and data are not in the hands of a few colluding companies, and that new businesses can thrive [this is probably the biggest “I don’t trust you” statement back to Big Tech!]

AI Education / Upskilling

  • Given its criticality, the need for investments in AI-related education, training, R&D, and protection of IP.
  • Support for programs to provide Americans with the skills they need for the age of AI, and to attract the world’s AI talent, via investments in AI-related education, training, development, research, capacity building, and IP development
  • Encouraging the import of AI skills into the US [probably the item that most Indian STEM students who hope to study and work in the US will find a reason to cheer]

Protection Of Rights

  • Ensuring the protection of civil rights, protection against bias and discrimination, and the rights of consumers (users)
  • Lastly, the growth of governmental capacity to regulate, govern, and support responsible AI.

Development of guidelines & standards

  • Building on the Blueprint for an AI Bill of Rights and the AI Risk Management Framework, to create guidance and benchmarks for evaluating and auditing AI capabilities, particularly in areas where AI could cause harm, such as cybersecurity and biosecurity

Protecting US Interests

  • The regulations propose that companies developing, or intending to develop, potential dual-use foundation models report to the government on an ongoing basis their activities with respect to training and assurance on the models, and the results of any red-team testing conducted
  • IaaS providers must report on the security of their infrastructure and the usage of compute (large enough to train these dual-use foundation models), as well as its usage by foreign actors training large AI models that could be used for mala fide purposes

Securing Critical Infrastructure

  • With respect to critical infrastructure, the order directs that an AI Safety & Security Board be established under the Secretary of Homeland Security, composed of AI experts from various sectors, to provide advice and recommendations to improve security, resilience, and incident response related to AI usage in critical infrastructure
  • All critical infrastructure is to be assessed for potential risks (vulnerabilities to critical failures, physical attacks, and cyberattacks) associated with the use of AI
  • An assessment is to be undertaken of the risks of AI misuse in developing threats in key areas like CBRN (chemical, biological, radiological, and nuclear) and the biosciences

Data Privacy

  • One section of the document deals with mitigating the privacy risks associated with AI, including an assessment of, and standards on, the collection and use of information about individuals.
  • It also seeks to ensure that the collection, use, and retention of data respects privacy and confidentiality
  • It also calls on Congress to pass data privacy legislation

Federal Government Use of AI

  • The order encourages the use of AI, particularly generative AI, with safeguards in place and appropriate training, across federal agencies, except for national security systems.
  • It also calls for an interagency council to be established to coordinate AI development and use.

Finally, the key element: keeping America’s leadership in AI strong, by driving efforts to expand engagement with international allies, establish international frameworks for managing AI risks and benefits, and drive an AI research agenda.

In subsequent posts, we will look at reactions, and what it means for Big Tech and for the Indian IT industry which is heavily tied to the US!

Domain and LLM

I am in total agreement with the Morgan Zimmerman (Dassault Systèmes) quote in TOI today. Every industry has its own terminologies, concepts, names, and words, i.e., an industry language. He says that even a simple-looking word like “Certification” has different meanings in aerospace vs. life sciences. He recommends the use of industry-specific language, and your own company-specific language, to get significant benefit out of LLMs. This will also reduce hallucinations and misunderstanding.

This is in line with @AiThoughts.Org’s thoughts on layering domain- and company-specific information on top of the general data used by all LLMs. Like they say in real estate, the three most important things in any buying decision are “location, location, and location”. We need three things to make LLMs work for the enterprise: “domain, domain, and domain”. Many of us may recall a very successful Bill Clinton presidential campaign slogan: “The economy, stupid”. We can say “The domain, stupid” as the slogan for making LLMs useful to enterprises.

But the million-dollar question is: how much is it going to cost to update the model’s learning with your domain and company data? EY published a cost of $1.4 billion, which very few can afford. We need much less expensive solutions for large-scale implementation of LLMs.
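The EY figure covers far more than raw compute, but a back-of-the-envelope sketch shows why targeted fine-tuning on domain data can be orders of magnitude cheaper than building a model from scratch. Every number below is my own illustrative assumption (model size, token count, GPU throughput, utilization, hourly price), not EY’s or anyone’s published figure; the ~6 × parameters × tokens FLOPs rule is a common approximation, not an exact law:

```python
# Back-of-the-envelope sketch of fine-tuning compute cost.
# All inputs are hypothetical assumptions for illustration only.

def training_cost_usd(params, tokens, flops_per_gpu_per_s, gpu_hourly_usd,
                      utilization=0.4):
    """Rough dollar cost of one training pass over `tokens` tokens."""
    total_flops = 6 * params * tokens           # ~6 FLOPs per param per token
    gpu_seconds = total_flops / (flops_per_gpu_per_s * utilization)
    return gpu_seconds / 3600 * gpu_hourly_usd

# Hypothetical: a 7-billion-parameter model, 2 billion domain tokens,
# a GPU sustaining ~3e14 FLOP/s, rented at $2/hour at 40% utilization.
cost = training_cost_usd(7e9, 2e9, 3e14, 2.0)
print(f"~${cost:,.0f} for one fine-tuning pass")  # → ~$389
```

Under these (generous) assumptions a single domain fine-tuning pass costs hundreds of dollars, not billions; the billions go into data acquisition and curation, many experiments, pretraining from scratch, people, and infrastructure. That gap is exactly where less expensive enterprise solutions can live.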

Solicit your thoughts. #LLM #aiml #Aiethics #Aiforindustry

L Ravichandran

AI and Law

The public domain is full of initiatives by many law universities, large law firms, and various government departments on the topic of “AI and Law”. I was happy to see a news article a few days ago about the Indian consumer grievances cell thinking about using AI to clear a large number of pending cases. They have had some success in streamlining processes and making everything digital, but they felt that the sheer volume of pending cases needs an AI-type intervention. I have already talked about the huge volume of civil cases pending in lower courts in India, with some cases taking even 20 years to reach final judgment. As the saying goes, “justice delayed is justice denied”; it is imperative that we find solutions to this huge backlog problem.

All discussions are centred around two broad areas:

1.      Legal research and development of the customer’s case by law firms: basically, the core work of both junior and senior law associates and partners.

2.      Assisting judges, or even rendering judgments on their own, by AI models, to reduce the backlog and speed up justice.

Lots of interesting discussions are happening on (1). Law research, looking into archives, similar judgments, precedents, etc., seems a no-brainer. Huge advances in automation have already been made, and this will increase multi-fold with purpose-built legal language models. What will happen to junior law associates is an interesting question. Can they use better research to develop actual arguments and superior case briefs for their clients, taking the load off senior associates, who in turn can focus more on client interactions? I found the discussions on models analysing judges’ earlier judgments and customizing argument briefs per judge fascinating.

Item (2) needs a lot of discussion. The jurisprudence of all democratic countries is based on three fundamental principles:

  1. Every citizen will have their “day in the court” to present their case to an impartial judge.
  2. Every citizen will have a right to a competent counsel with a provision of public defenders given free to the citizens.
  3. Every witness can be cross examined by the other party without any restrictions.

On the one hand, we have these great jurisprudence principles.  On the other hand, we have huge backlogs and delays. 

How much are citizens willing to give up of these basic principles to get speedy justice?

Can we give up the principle of “my day in court” and let only written briefs submitted to the court be used for the final judgment? This would mean witness statements in briefs are not cross-examined or questioned.

Can we give up the presence of a human judge who reads the briefs on both sides and makes a judgment, and let an AI model read both briefs and pronounce the judgment?

Even if citizens are willing to give up these principles, does the existing law of the land allow it? It may require changes to the law, and in some countries even changes to the constitution, to allow for this new AI jurisprudence.

Do we treat civil cases and criminal cases separately and find different solutions? Criminal cases involve human liberty issues such as imprisonment and will need a whole set of different benchmarks.

What about changes to the appeals process if you do not like a lower court judgment? I presume we will need human judges to review the judgments given by AI models. It is very difficult for us to accept a higher court AI model reviewing and correcting a lower court AI model’s original judgment.

The biggest hurdle is going to be us, the citizens. In any legal case involving two parties, one party always, and in many cases both parties, will be unhappy with the judgment. No losing party in any civil case is going to be happy that they lost as per some sub-clause in some law text. In many cases, even winning parties may not be happy with the award amount. In this kind of scenario, how do you expect citizens to accept an instantaneous verdict after both parties submit their briefs? This will be a great human change management issue.

Even if we come up with solutions to these complex legal and people problems, one technical challenge still remains a big hurdle. With the release of many large language models and APIs, many projects are under way to train these LLMs on specific domains. A few days ago, we saw a press release from EY about their domain-specific model, developed with an investment of US$1.4 billion. Bloomberg announced BloombergGPT, their own 50-billion-parameter language model purpose-built for finance. Who will bell the cat for the law domain? Who will invest large sums and create a legal AI model for each country? Until such a model is available for general use, many of the things we discussed will not be possible.

To conclude, there are huge opportunities to derive business value from the new AI technology in the law and justice domain. However, the technical, legal and people issues must be understood, addressed and resolved before any large-scale implementation.

More later. I would like to hear your thoughts.

L Ravichandran

AI Regulations : Need for urgency

A few weeks ago, I saw a news article about the risks of unregulated AI. It reported that in the USA, police came to the house of an eight-months-pregnant African American woman and arrested her because a facial recognition system had identified her as the suspect in a robbery. No amount of pleading from her that, in her advanced state of pregnancy at the time of the robbery, she simply could not have committed the crime was heard by the police officer. The officer had no discretion. The system was set up so that once the AI facial recognition identified a suspect, the police were required to arrest her, bring her to the police station and book her.

In this case, she was taken to the police station, booked and released on bail. A few days later the case against her was dismissed because the AI system had wrongly identified her. It also emerged that she was not the first: several other people, especially African American women, had been wrongly arrested and later released because of incorrect facial recognition matches.

The gap between the slow pace at which governments are moving on regulation and the speed at which AI tech companies are delivering business applications such as this facial recognition system demands urgent regulatory action.

Maybe citizens themselves should organize and hold the people responsible for deploying these systems accountable. The chief of police, perhaps the mayor of the town, and the county officials who signed off on this AI facial recognition system should be made accountable. Maybe the county should pay hefty fines, not just offer a simple "oops, sorry".

A lot of attention needs to be paid to training data. Training data should represent all the diverse people of the country in sufficient samples. Biases arising from insufficient diversity in training data must be anticipated and the model adjusted. Most democratic countries run their criminal justice systems on the unwritten motto "Let a thousand criminals go free, but not a single innocent person should go to jail." The burden of proof of guilt is always on the state. However, we seem to have forgotten this when deploying these law enforcement systems. Proof with very high confidence levels, backed by explainable, human-understandable AI reasoning, must be the basic approval criterion before these systems are deployed.
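This "burden of proof" principle can be made concrete in deployment policy. Below is a minimal sketch of what such a gate might look like; the class, function names and threshold are hypothetical illustrations of mine, not drawn from any real system. A match below a very high confidence level is discarded outright, and even a high-confidence match only produces a lead for independent human review, never an automatic arrest instruction.

```python
from dataclasses import dataclass

# Hypothetical threshold: matches below this are treated as "no match at all".
MIN_CONFIDENCE = 0.99


@dataclass
class Match:
    suspect_id: str
    confidence: float   # model's confidence in the match, in [0, 1]
    explanation: str    # human-readable reasoning from an explainability layer


def triage(match: Match) -> str:
    """Decide what a deployment policy should do with one candidate match.

    The policy never outputs an 'arrest' action; at best it flags a lead
    that a human officer must independently verify before acting.
    """
    if match.confidence < MIN_CONFIDENCE:
        return "discard"        # presumption of innocence wins by default
    return "human_review"       # even high confidence is only a lead


print(triage(Match("S-104", 0.87, "partial face, low light")))    # discard
print(triage(Match("S-104", 0.995, "frontal face, good light")))  # human_review
```

The point of the design is that the model's output is advisory: removing all discretion from the officer, as in the incident above, is a policy choice, not a technical necessity.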

The proposed EU act classifies these law enforcement systems as high risk, bringing them under the act. Hopefully the EU act becomes law soon and prevents such unfortunate violations of civil liberties and human rights.

More Later,

L Ravichandran

EU AI Regulations Update

I wrote some time back about the circulation of the draft EU AI Act. After more than two years, there is some movement toward making it EU law. In June 2023, the EU Parliament adopted the draft along with a set of negotiating principles, and the next step of discussions with member countries has started. EU officials are confident that this process will be completed by the end of 2023 and that the act will become EU law soon. As the old Hindi proverb goes, "Bhagwan ghar mein der hai, andher nahin", or "In God's scheme of things, there may be delays but never darkness." The EU has taken the first step, and if this becomes law by early 2024, it will be a big achievement. I am sure the USA and other large countries will follow soon.

The draft has more or less maintained its basic principles and structure. 

The basic objective of the new law is to ensure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly. In addition, there is greater emphasis on AI systems being overseen by people rather than by automation alone. The principle of proportionate regulation, the risk categorization of AI systems, and a level of regulation appropriate to the risk are the central themes of the proposed law. Furthermore, there were no generative AI or ChatGPT-like products when the original draft was developed in 2021, so additional regulations have been added to address large language models and generative AI. The draft also plans to establish a technology-neutral, uniform definition of AI that could be applied to future AI systems.

Just to recall from my earlier blog, the risks are categorized into limited risk, high risk and unacceptable risk.

The draft law clearly defines systems categorized as "unacceptable risk" and proposes to ban them from commercial launch within EU member countries. Some examples are given below.

  • Any AI system that can change or manipulate the cognitive behaviour of humans, especially vulnerable groups such as children and the elderly.
  • Any AI system that classifies people based on personal traits such as behaviour, socio-economic status, race or other personal characteristics.
  • Any AI system that performs real-time, remote biometric identification, such as facial recognition, which usually operates without the consent of the person targeted. The law clarifies that analysis of past data for law enforcement purposes is acceptable with court orders.

The draft law is also concerned about any negative impact on the fundamental rights of EU citizens and on personal safety. These types of systems will be categorized as high risk.

1) Many products such as toys, automobiles, aviation products and medical devices are already covered by existing EU product safety legislation. Any AI system used inside products regulated under this legislation will also be subject to the additional regulations of the high-risk category.


2) Other AI systems falling into eight specific areas will be classified as high risk, requiring registration in an EU database and subject to the new regulations.

The eight areas are:

  1. Biometric identification and categorisation of natural persons
  2. Management and operation of critical infrastructure
  3. Education and vocational training
  4. Employment, worker management and access to self-employment
  5. Access to and enjoyment of essential private services and public services and benefits
  6. Law enforcement
  7. Migration, asylum and border control management
  8. Assistance in legal interpretation and application of the law.


Once these systems are registered in the EU database, they will be assessed by appropriate agencies for functionality, safety features, transparency, grievance mechanisms for appeals, etc., and will be given approval before deployment in the EU market. All updates and new versions of these AI systems will be subject to similar scrutiny.


Other AI systems not on the above two lists will be termed "limited risk" systems and subject to self-regulation. At a minimum, the law expects these systems to inform users that they are indeed interacting with an AI system and to provide options to switch to a human-operated system or to discontinue use.
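The three tiers above can be read as a simple decision rule that an organisation might use to triage its own AI systems against the draft. The sketch below is my own illustration, not text from the act; the area names paraphrase the banned practices and the eight high-risk areas listed above.

```python
# Areas the draft bans outright (unacceptable risk), paraphrased from above.
UNACCEPTABLE = {
    "cognitive_manipulation",
    "social_scoring",
    "realtime_remote_biometrics",
}

# The eight high-risk areas, paraphrased from the list above.
HIGH_RISK = {
    "biometric_identification",
    "critical_infrastructure",
    "education",
    "employment",
    "essential_services",
    "law_enforcement",
    "migration_border_control",
    "legal_interpretation",
}


def risk_tier(area: str, in_regulated_product: bool = False) -> str:
    """Map an AI system's application area to the draft's risk tier."""
    if area in UNACCEPTABLE:
        return "unacceptable"   # banned from the EU market
    if area in HIGH_RISK or in_regulated_product:
        return "high"           # EU database registration + agency assessment
    return "limited"            # self-regulation + disclosure to users


print(risk_tier("law_enforcement"))   # high
print(risk_tier("customer_chatbot"))  # limited
print(risk_tier("social_scoring"))    # unacceptable
```

The `in_regulated_product` flag captures the first rule above: an AI component inside a product already covered by EU product safety legislation is high risk regardless of its application area.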

As I mentioned before, the proposed law also covers generative AI systems. The law requires these systems to disclose to users that an output document or decision was generated or derived by a generative AI system. In addition, the system must publish a list of the copyrighted training content used by the model. I am not sure how practical this is, given that ChatGPT-like systems read nearly every digital text on the web and are now moving into audio and video content. Even if a system produces this list, which is expected to be very large, it is not clear that current copyright laws are sufficient to address the use of copyrighted material in a transformed form inside deep learning neural networks.
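The disclosure requirement itself is straightforward to picture in code. Here is a hypothetical sketch of mine (the wrapper and its wording are illustrative, not mandated text from the act) that tags every generated output with its provenance:

```python
def with_disclosure(generated_text: str, model_name: str) -> str:
    """Attach an AI-provenance notice to a generated output."""
    notice = f"[Notice: this content was generated by the AI model '{model_name}'.]"
    return f"{generated_text}\n\n{notice}"


print(with_disclosure("Draft contract clause for review...", "example-llm"))
```

The hard part, as discussed above, is not the disclosure mechanism but the training-content transparency that must sit behind it.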

The proposed law also wants to ensure that generative AI models are self-regulated well enough not to generate illegal content or provide illegal advice to users.


The Indian government is also looking at enacting AI regulations soon. In a June 9th, 2023 interview, the Indian IT minister spoke about this, emphasizing the objective of "no harm" to digital users. The government's approach to any regulation of AI will be through the prism of "user harm, or derived user harm, through the use of any AI technology". I am sure a draft will be out soon and India too will have similar laws.

Let us discuss the implications of this regulation for the various stakeholders.

  • AI system developer companies (tech companies and enterprises)


They need to educate all their AI development teams on these laws and ensure their systems are tested for compliance before commercial release. Large enterprises may even ask large-scale model developers like OpenAI to indemnify them against any violations arising from use of their APIs. Internal legal counsel at both the tech companies and the enterprises using the APIs need to be trained on the new laws and be ready for contract negotiations. Systems integrators and outsourcers such as Tech Mahindra, TCS and Infosys also need to gear up for the challenge: liability will be passed down from the enterprise to the systems integrator, who must ensure compliance is built in and tested correctly, with proper documentation.

  • Governments & Regulators

Governments and regulatory bodies need to upskill their staff on the new laws and on how to verify and test compliance before approving a commercial launch. The tech companies are very large and will deploy their best technical and legal talent to argue that their systems are compliant; if regulatory bodies are not skilled enough to verify those claims, the law will be ineffective and exist only on paper. This is a huge challenge for government bodies.

  • The legal community: public prosecutors, company legal counsel and defence lawyers

Are they ready for the avalanche of legal cases, starting from regulatory approvals and appeals, through ongoing copyright and privacy violations, to inter-company litigation over liability sharing between tech companies, enterprises and systems integrators?

Massive upskilling and training is needed even for senior lawyers, as the issues arising from this law are very different. Law degree curricula need to include a course on AI regulations. For example, suppose the essence of a comedian's talk show is "learnt" by a deep learning model and stored deep in its neural network. Is that a copyright violation? The model then outputs a comedy speech in a similar style using the "essence" stored in the network. Is the output a copyright violation? Who is responsible and accountable for an autonomous car accident? Who is responsible for a factory accident injuring a worker in an autonomous robot factory? There are lots of new legal challenges.

Most Indian systems integrators are investing large sums to reskill and to create new AI-based service offerings. I hope they are spending part of that investment on AI regulations and compliance; otherwise, they risk losing all their profits in a few tricky legal challenges.

More later

L Ravichandran

brAInWaves – Oct ’22

Welcome to brAInwaves, our first newsletter! And thank you all for signing up! Ever since we launched AiThoughts, we have expanded our core team, which now comprises S Sivaguru, Anil Sane and Diwakar Menon.

We have held a couple of events with large consulting organisations and large IT services companies around AI and DevSecOps, and how to package and sell AI services.

We also have about 17 posts on various topics covering AIOps, ethics, DevSecOps and agile SDLC processes, among others, including games to test your AI quotient.

We would like you to share case studies, your experiences with AI, and articles of interest you may have come across. Please spread the word about this community and encourage others to subscribe and contribute to this forum.


HERE’S WHAT YOU MAY HAVE MISSED

Are You Human? Tale of CAPTCHA (L Ravichandran)

Recently I gave a keynote speech in Mahindra University, Hyderabad as part of a 2-day workshop on “Data Science for the Industry”. Great opportunity to share my thoughts on Data Sciences/AIML technologies and industry use cases. I talked about various problems to be solved by these rapidly advancing technologies.

Test Your AI Quotient (S Sivaguru)

Take this fun quiz to find ten words related to the world of AI. These may be acronyms or terms that you would come across while exploring the wide world of Artificial Intelligence, Machine Learning, techniques, applications etc.


SOME RECENT NEWS


Devang Sachdev, Snorkel AI: On easing the laborious process of labelling data

Correctly labelling training data for AI models is vital to avoid serious problems, as is using sufficiently large data sets. However, manually labelling massive amounts of data is time-consuming and laborious. So what's the middle ground?

OpenAI removes waitlist for DALL-E text-to-image generator

OpenAI has removed the waitlist for its DALL-E service and the text-to-image generator is now publicly available. The original DALL-E debuted in January 2021 to much fanfare. In April this year, DALL-E 2 was released with significant improvements.

Chess: How to spot a potential cheat

The recent controversy involving Magnus Carlsen, who resigned without comment in a game against the nineteen-year-old Niemann, has raised questions of ethics and of how to identify cheating in chess.


Keep your (Ai)Thoughts flowing, and if you have an article, news, case study to submit, do send it to lravi@aithoughts.com