I am sure many of you have seen a tweet from Anand Mahindra on video morphing and the risks associated with it. Anand makes a clear call for tech solutions to address the problem. Ref:
https://twitter.com/anandmahindra/status/1616722233946411008
At AiThoughts.Org, we have been talking about AI & Ethics and the need for all stakeholders, i.e. tech developers, enterprise users, governments, and the justice ecosystem, to get ready to tackle this issue head-on.
To highlight Anand Mahindra's tweet: the video morpher clearly demonstrated to the world that, with commonly available technologies, one can put anyone's face on a video and create a fake-news controversy. In this world of instant reading and quick opinion forming, the damage to a public person will be enormous and often irreparable. No amount of post-facto counterclaims about the fakeness of the video will ever restore the lost goodwill. This is a real threat to democracy, as elections can be manipulated, and to enterprise valuations as well. Just one day after the Twitter $8 authentication feature fiasco, we saw billions of dollars of market valuation wiped out for a few corporations.
With ChatGPT providing robust APIs, I am sure more and more enterprises will use this powerful knowledge engine to extract research information about various facets such as competition, raw materials, political events in various countries, etc. to make business decisions. False research data can mean the failure of a strategy, costing enterprises billions. See my earlier blog on how confidently ChatGPT gave wrong answers.
The evolutionary race between prey and predator has been going on for millions of years. We have seen it play out across digital technologies: hackers (the predators) keep improving, and CISOs (the prey) keep deploying more powerful tools to identify and block them. In the same way, we need easy technology solutions to detect morphed images, morphed voices, etc. before they are allowed to be posted on popular social media sites. However, these fact-checker tools (protecting the prey) are not coming to market as fast as the AI/ML advances (the predators). In fact, even the term predator that I am using to characterize these great technologies will be vehemently objected to by AI/ML proponents. My use of the prey-and-predator analogy is simply to illustrate the risks associated with these technologies, as pointed out by so many people.
Large social sites such as Twitter, YouTube, Facebook, etc. should immediately evolve preventive mechanisms, not the post-facto mechanisms that exist today, such as blocking or deleting posts. Today's post-facto deletion of fake-news posts, warning tags on posts, or even blocking of a handle are too-little, too-late fixes. Every second the fake news is on air, the damage it causes is enormous. We need preventive tech checks that block the fake item even before it is posted.
Both the sender and the referenced person should be used as key parameters for preventive checks. For example, a sender that is a political party's official handle needs preventive checks, as any message coming out of that handle will be viewed as the official position of the party. Even an individual sender, if his or her following is very large, needs preventive fake-news checks on posts. The power of exponential dissemination of fake news coming from an influencer with a large following has already been seen on many occasions. The same goes for the referenced person: if someone with a large following is referenced in fake news with a morphed video, the tagging will cause enormous harm to that individual.
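The gating logic described above can be sketched in a few lines. This is a purely illustrative sketch, not any platform's real API: the `Post` fields, the `needs_preventive_check` function, and the follower threshold are all assumptions I have invented for the example.

```python
# Hypothetical sketch of the preventive-check gating described above.
# All names and thresholds are illustrative assumptions, not a real platform API.
from dataclasses import dataclass

FOLLOWER_THRESHOLD = 100_000  # assumed cutoff for treating an account as an influencer


@dataclass
class Post:
    sender_is_official_handle: bool  # e.g. a political party's official handle
    sender_follower_count: int
    referenced_follower_count: int   # largest following among referenced/tagged people
    contains_media: bool             # video or audio that could be morphed


def needs_preventive_check(post: Post) -> bool:
    """Return True if the post should be held for fake-media checks before publishing."""
    # Official handles: anything they post reads as the organisation's position.
    if post.sender_is_official_handle:
        return True
    # Influencer senders: exponential dissemination risk.
    if post.sender_follower_count >= FOLLOWER_THRESHOLD:
        return True
    # Media referencing a widely followed person: morphed-video harm via tagging.
    if post.contains_media and post.referenced_follower_count >= FOLLOWER_THRESHOLD:
        return True
    return False
```

The point of the sketch is only that both sender attributes and referenced-person attributes feed the same pre-publication gate; real platforms would need far richer signals.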
I am sure the large R&D teams employed by these tech giants can develop preventive solutions and deploy them immediately, keeping the post-facto solutions as an add-on for the few cases that escape the preventive checks.
AI & Ethics is a topic that is not only becoming important but also urgent.
More Later,
L Ravichandran