In the last few years, AI has been transforming industries and is poised to have a significant impact on our daily lives. The field has gained significant momentum since the commercial availability of generative AI. It has been stated many times, by many voices, that industry needs to ensure AI technologies are developed and deployed in a manner that is ethical, transparent, and accountable. There is a need for assurance of AI safety and trustworthiness: AI technologies that can do wonders in the right hands will create chaos if left without the required oversight. This need for oversight puts demands on governments around the world to put in place policies and governance frameworks that help steer the development and use of AI systems in directions that benefit society as a whole.
But governments around the world face challenges in ensuring the safety and trustworthiness of AI. Some of the key challenges are:
- Rapid Technological Advancement: AI technology is evolving at an unprecedented pace, making it difficult for regulations to keep up. This can lead to a gap between the development of AI and the implementation of effective safety measures.
- Complex Technical Nature: AI systems are often highly complex, involving intricate algorithms and large datasets. This complexity makes it challenging for policymakers and regulators who may not have the technical expertise to fully understand the risks and potential consequences.
- Diverse Applications: AI is being used in a wide range of sectors, from healthcare to finance to transportation. This diversity of applications means that different safety and trustworthiness concerns may arise in each sector, requiring tailored regulatory approaches.
- International Collaboration: AI development and deployment are increasingly global in nature, involving collaboration across countries. This necessitates international cooperation to establish consistent standards and regulations to prevent regulatory arbitrage and ensure global safety.
- Balancing Innovation and Regulation: Governments must strike a balance between encouraging innovation and ensuring safety. Overly restrictive regulations could stifle AI development, while lax regulations could lead to serious risks. This balance is a tightrope walk.
- Ethical Considerations: AI raises complex ethical questions, such as algorithmic bias, job displacement, and the potential for autonomous systems to make life-or-death decisions. Addressing these ethical concerns requires careful consideration and robust frameworks.
- Transparency and Explainability: AI systems, especially those based on machine learning, can be difficult to interpret and understand. Lack of transparency and explainability can hinder trust and accountability, and there is still a lot of work to be done in this space.
- Security Risks: AI systems can be vulnerable to cyberattacks and manipulation, which could have serious consequences. Ensuring the security of AI systems is crucial for their safety and trustworthiness.
- Data Privacy: AI systems often rely on large amounts of data, raising concerns about privacy and data protection. Governments must balance the need for data to train AI models with the rights of individuals to protect their personal information.
These challenges require a coordinated effort between governments, the private sector, academia, and civil society to develop effective solutions.
Many governments have taken steps and initiatives to address these challenges. Some have issued executive orders and enacted laws. Some have established safety institutes to take a holistic approach to AI safety. Many have also started collaborating among themselves. Below, taking the United Kingdom as an example, is a representative glimpse of how different governments are responding to the recognition that the use of AI in public services and in businesses is going to be unavoidable.
- AI Safety Institute (AISI): Launched at the AI Safety Summit in November 2023, AISI is dedicated to advancing AI safety and governance. It conducts rigorous research, builds infrastructure to test AI safety, and collaborates with the wider research community, AI developers, and other governments. Its aim is also to shape global policymaking on the subject through such collaboration. (Ref 1)
- AI Management Essentials (AIME) Tool: This tool includes a self-assessment questionnaire, a rating system, and a set of action points and recommendations to help businesses manage AI responsibly. AIME is based on the ISO/IEC 42001 standard, the NIST framework, and the EU AI Act; a sketch of how such a questionnaire-and-rating flow might look follows this list. (Ref 2)
- AI Assurance Platform: A centralized resource offering tools, services, and frameworks to help businesses navigate AI risks and improve trust in AI systems. (Ref 3)
- Systemic Safety Grant Program: Provides funding for initiatives that develop the AI assurance ecosystem, with up to £200,000 available for each supported project investigating the societal risks associated with AI, including deepfakes, misinformation, and cyber-attacks. (Ref 4)
- UK AISI Collaboration with Singapore: The UK AISI collaborates with Singapore to advance AI safety and governance. Both countries work together to ensure the safe, ethical, and responsible development and deployment of AI technologies. (Ref 5)
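To make the AIME idea above concrete: a self-assessment tool of this kind pairs a questionnaire with a rating and follow-up action points. The Python sketch below is purely illustrative; the actual AIME questions, weights, rating bands, and recommendations are the UK government's own and are not reproduced here.

```python
from dataclasses import dataclass

# Hypothetical sketch of an AIME-style self-assessment flow:
# a questionnaire, a simple rating, and threshold-based action points.
# The questions, weights, and thresholds are illustrative, not AIME's.

@dataclass
class Question:
    text: str
    weight: float  # relative importance of this governance practice

QUESTIONS = [
    Question("Is there a named owner accountable for each AI system?", 3.0),
    Question("Are training data sources documented and reviewed for bias?", 2.0),
    Question("Is there a process for reporting and handling AI incidents?", 3.0),
    Question("Are AI system decisions explainable to affected users?", 2.0),
]

def assess(answers: list[bool]) -> tuple[float, str]:
    """Turn yes/no answers into a 0-100 score and a coarse rating band."""
    earned = sum(q.weight for q, a in zip(QUESTIONS, answers) if a)
    total = sum(q.weight for q in QUESTIONS)
    score = 100 * earned / total
    if score >= 80:
        band = "Good practice: maintain and monitor"
    elif score >= 50:
        band = "Partial: action points recommended"
    else:
        band = "At risk: prioritize governance gaps"
    return score, band

if __name__ == "__main__":
    score, band = assess([True, False, True, False])
    print(f"Score: {score:.0f}/100 -> {band}")
```

Real management-system assessments along the lines of ISO/IEC 42001 are qualitative and evidence-based rather than a weighted yes/no score; the sketch only shows the questionnaire, rating, and action-point structure that the AIME description implies.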
The UK AISI has already started engaging with industry. For example, jointly with the US AISI, it carried out a pre-deployment evaluation of Anthropic's upgraded Claude 3.5 Sonnet.
Many other countries have taken similar steps, such as the UK-US AISI partnership and the collaboration between the UK and French AI research institutes. On the other hand, many countries have not yet made this a priority.
Recognizing that these efforts must transcend country boundaries, international initiatives have also come into existence. The most notable is the International Network of AI Safety Institutes, launched to boost cooperation on AI safety. A short overview of it below (Ref 6 & 7):
- Formation and Members: Launched at the AI Seoul Summit in May 2024, the network includes the UK, US, Japan, France, Germany, Italy, Singapore, South Korea, Australia, Canada, and the European Union.
- Objectives: The network aims to accelerate the advancement of AI safety science globally by promoting complementarity and interoperability between institutes and fostering a common international understanding of AI safety approaches.
- Collaboration: Members coordinate research, share resources and information, develop best practices, and exchange or co-develop AI model evaluations.
AISIs, UN initiatives, and the International Network of AI Safety Institutes have made significant strides in promoting AI safety and trustworthiness: collaboration including industry-academia partnerships, standards setting, knowledge sharing, and the definition of comprehensive frameworks for ethical AI development and use. While concrete outcomes may take time to materialize, these initiatives have laid the foundation for a safer and more trustworthy AI future.
References:
1. AI Safety Institute (AISI): https://www.aisi.gov.uk/
2. Fiona Jackson, "UK Government Introduces Self-Assessment Tool to Help Businesses Manage AI Use," TechRepublic: https://www.techrepublic.com/article/uk-government-ai-management-essentials/
3. Sebastian Klovig Skelton, "UK government launches AI assurance platform for enterprises," ComputerWeekly: https://www.computerweekly.com/news/366615318/UK-government-launches-AI-assurance-platform-for-enterprises
4. AISI's Systemic AI Safety Grants: https://www.aisi.gov.uk/work/advancing-the-field-of-systemic-ai-safety-grants-open
5. UK & Singapore collaboration on AI safety: https://www.mddi.gov.sg/new-singapore-uk-agreement-to-strengthen-global-ai-safety-and-governance/
6. UK government press release on the launch of the International Network of AI Safety Institutes: https://www.gov.uk/government/news/global-leaders-agree-to-launch-first-international-network-of-ai-safety-institutes-to-boost-understanding-of-ai
7. CSIS report, "AI Safety Institute International Network: Next Steps and Recommendations": https://www.csis.org/analysis/ai-safety-institute-international-network-next-steps-and-recommendations