Much has been said about #ResponsibleAI, #ai and #ethics. We also have a brand-new field called #xai (Explainable AI), whose sole objective is to create simpler models that interpret more complex original models. Many tech companies such as Google, Microsoft and IBM have released their #ResponsibleAI guiding principles.
The European Union has circulated a proposal for “The EU Artificial Intelligence Act”. As per process, this proposal will be discussed, debated, modified and made into law by the European Parliament.
Let me give you a brief summary of the proposal.
First is the definition of four risk categories, each with a different type of checks and balances.
The categories are:
- Unacceptable
- High Risk
- Limited Risk
- Minimal Risk
For Category 1 (Unacceptable), the recommendation is a firm no: no company may deploy software in this category within the EU for commercial use.
Category 2 (High Risk), which covers many business innovation and productivity-improvement applications, will require formal review and certification before being put to commercial use.
Category 3 (Limited Risk) will require full transparency to end users, along with the option to request an alternate human-in-the-loop solution.
Category 4 (Minimal Risk) is not addressed in this proposal; it is expected to be self-governed by companies.
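The four-tier scheme described above can be sketched as a simple lookup. This is a hypothetical illustration only: the tier names come from the proposal, but the obligation wording is my own informal summary, not legal text.

```python
# Hypothetical sketch of the EU AI Act's four risk tiers and the
# obligations summarized above. Tier names follow the proposal;
# the obligation text is an informal paraphrase, not legal language.
RISK_TIERS = {
    "unacceptable": "Prohibited: may not be deployed commercially within the EU.",
    "high": "Formal review and certification required before commercial use.",
    "limited": "Transparency to end users; option of a human-in-the-loop alternative.",
    "minimal": "Not addressed by the proposal; self-governance expected.",
}

def obligation(tier: str) -> str:
    """Return the summarized obligation for a given risk tier."""
    return RISK_TIERS[tier.lower()]
```

For example, `obligation("High")` returns the review-and-certification summary, mirroring how a compliance checklist might branch on the assessed tier.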
Let us look at what kinds of applications fall in Category 2:
- Biometric identification and categorization
- Critical Infrastructure management
- Education and vocational training
- Employment
- Access to public and private services including benefits
- Law enforcement (police & judiciary)
- Border management (migration and asylum)
- Democratic processes such as elections & campaigning
Very clearly, the EU is worried about the ethical aspects of these complex AI systems, with their inbuilt biases and lack of explainability and transparency, and it also gives very high weightage to human rights, fairness and decency.
I recommend that all organizations start reviewing this proposal and include this aspect in their AI/ML deployment plans without waiting for the eventual EU law.