Sunday, January 9, 2022

Natural Laws for Artificial Intelligence

I like reading science fiction by Isaac Asimov. He coined the word "robotics" and envisioned a world where robots walked among us humans, possessing abilities greater than ours, such as strength and speed. Every robot had "The Three Laws" embedded in its "positronic brain".

The following Three Laws existed to protect humans:

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

In a later book, Asimov defined a "Zeroth Law": A robot may not injure humanity, or, by inaction, allow humanity to come to harm.

If we extend the definition of "harm" to include "upset or distress," don't we really need guidelines for AI and ethics?

No one in our world manufactures a "positronic brain" yet, but any individual, organization, or government can create AI and put it to use or misuse. If only we could embed the Zeroth Law into our algorithms, so that no matter how we trained an AI system, the Zeroth Law would always override its actions.
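In software terms, such an override could only ever be an outer guard wrapped around whatever policy we train. Here is a minimal sketch in Python; the names, the harm scores, and the threshold are all illustrative assumptions, since reliably estimating harm is the genuinely unsolved part.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Action:
    name: str
    expected_harm: float  # hypothetical harm estimate in [0, 1]

class ZerothLawGuard:
    """Wrap a trained policy so a harm check always runs before any action."""

    def __init__(self, policy: Callable[[dict], Action],
                 harm_threshold: float = 0.1):
        self.policy = policy
        self.harm_threshold = harm_threshold

    def act(self, observation: dict) -> Optional[Action]:
        action = self.policy(observation)
        # The guard overrides the policy: an action whose estimated harm
        # exceeds the threshold is never executed, whatever training said.
        if action.expected_harm > self.harm_threshold:
            return None  # refuse and defer, rather than risk harm
        return action
```

The wrapper structure is the easy part; everything hard lives inside that expected_harm estimate.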

Currently, most of the AI we encounter on a daily basis is classified as "narrow AI": AI that is very specific and narrow in its utility function. Artificial General Intelligence (AGI), by contrast, is AI that, like a human, can quickly learn, adapt, pivot, and function in the real world.

In Asimov's sci-fi, there were instances where robots were accidentally or deliberately created without the Three Laws embedded, and those robots had to work out their ethics and their place in the world and society for themselves.

Our narrow AI can't do what Asimov's robots did. But what if we applied a Zeroth Law of AI to our work: AI may not harm, upset, or distress humanity, or, by inaction, allow humanity to come to harm, upset, or distress? The focus today is on AI action, not inaction, and that gap opens a whole discussion of ethics.

Leave sci-fi aside for a moment. Progress in AI will happen over time, but it is already becoming an unavoidable part of our society. Machines now recommend online videos, assist in surgery, and inform decisions about who goes to jail. The science of AI is a human activity that needs to be regulated by society, and the risks are enormous.

There are two approaches to AI. The first views it in engineering terms, where algorithms are trained on specific tasks. The second raises deeper philosophical questions about the nature of human knowledge. The engineering approach is the one pushed hardest by Silicon Valley, where AI is deployed to get products to market quickly and ethical problems are dealt with later. This has made AI a commercial success even when its goals aren't socially acceptable and there is hardly any accountability. The unscrupulous side of this approach is exemplified by the role YouTube's algorithm plays in radicalizing people, given that there is no public understanding of how it works. Fixing this requires a system of checks and balances, where machines can pause and "ask" for human intervention, and regulations to deal with anomalies.
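One way to picture that "pause and ask" mechanism is a confidence gate in front of the model. The sketch below is an assumption-laden illustration, not an established API: the model, its confidence score, and the ask_human escalation channel are all hypothetical.

```python
from typing import Callable

def decide_with_deferral(
    model: Callable[[str], tuple[str, float]],  # hypothetical: returns (label, confidence)
    item: str,
    ask_human: Callable[[str], str],            # hypothetical escalation channel
    min_confidence: float = 0.9,
) -> str:
    """Act on the model's output only when it is confident enough;
    otherwise pause and route the case to a human reviewer."""
    label, confidence = model(item)
    if confidence < min_confidence:
        return ask_human(item)  # the machine "asks" instead of deciding alone
    return label
```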

A few AI professionals back the global adoption of EU legislation that would ban the impersonation of humans by machines. Computers are getting closer to passing the Turing test, in which a machine attempts to trick people into believing they are communicating with another human, though only barely. Yet human knowledge is collective: to truly fool humans, a computer would have to grasp mutual understandings.

Some argue that AI can already produce new insights that humans have missed. But human intelligence is much more than an algorithm. Inspiration strikes when a brilliant thought arises, and this can't be explained as a logical consequence of preceding steps. For example, Einstein's theory of general relativity could not be derived from the observations available in his day; it was fully confirmed by experiment only decades later. Human beings can also learn a new task after being shown how to do it only a few times. And currently, AI can be prompted into action, but it cannot prompt itself.

Some people have predicted that a computer that could match the human brain might arrive by 2052, at a cost of $1tn. We need to find better ways to build it. We have reached an era in which the more powerful the AI system, the harder it is to explain its actions. How can we tell whether a machine is acting on our behalf and not contrary to our interests? Are all AI-backed decisions ethical?

Thus we need to ponder the biggest question: how to regulate AI algorithms, as Asimov tried to imply, and how to implement AI systems based on the key principles underlying the proposed regulatory frameworks.

AI systems that produce biased results have been making headlines. One well-known example is Apple's credit card algorithm, which has been accused of discriminating against women. There are others: online advertising algorithms may target viewers by race, religion, or gender, and Amazon's automated résumé screener filtered out female candidates. A recently published study showed that risk-prediction tools used in health care, which affect millions of people in the United States every year, exhibit significant racial bias.
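Much of this bias is detectable before deployment by comparing outcome rates across groups. Here is a minimal sketch of one common check, the disparate-impact ratio; the data and group labels are made up for illustration.

```python
import numpy as np

def disparate_impact_ratio(predictions: np.ndarray, groups: np.ndarray,
                           protected: str, reference: str) -> float:
    """Ratio of favorable-outcome rates between a protected group and a
    reference group. Values well below 1.0 flag potential bias; the widely
    cited "four-fifths rule" treats anything under 0.8 as a red flag."""
    rate_protected = predictions[groups == protected].mean()
    rate_reference = predictions[groups == reference].mean()
    return rate_protected / rate_reference

# Hypothetical audit of a credit-approval model's outputs (1 = approved)
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])
grps = np.array(["f", "m", "m", "m", "f", "m", "f", "f"])
print(disparate_impact_ratio(preds, grps, protected="f", reference="m"))  # ~0.33
```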

To comply with the more stringent AI regulations that are on the cards, companies will need new processes and tools: system audits, documentation and data protocols (for traceability), AI monitoring, and diversity awareness training. A number of companies already test each new AI algorithm across a variety of stakeholders to assess whether its output is aligned with company values and is unlikely to raise regulatory concerns. You have seen companies create CXO roles; now you will surely see a Chief AI Ethics Officer (CAEO) too.
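Of those, traceability is the most mechanical place to start: log every model decision with enough context to audit it later. A minimal sketch follows; the function names, fields, and file format are all assumptions chosen for illustration.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(model_id: str, model_version: str, inputs: dict,
                 output: object, logfile: str = "audit_log.jsonl") -> None:
    """Append one auditable record per model decision, as JSON Lines.
    Inputs are stored as a hash, so records are tamper-evident and can be
    matched against raw data retained elsewhere under a data protocol."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: record a single (hypothetical) credit decision
log_decision("credit-model", "1.4.2", {"income": 52000, "age": 31}, "approved")
```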
