Monday, March 13, 2017

SXSW Day 2, session 4: Intelligent machines will eat their young (and us): separating fact from fiction in AI

By Adam Porter-Price, Emma Kinnucan and JD Dulny from Booz Allen Hamilton.

The discussion was around the dangers and risks of AI.  They started off by stating what they would not discuss, which included:

  • Ethics questions in using AI
  • Financial impact of AI (e.g., job loss)
  • Bad actors using AI
All of these topics, they said, are covered by others (quite true - there are a lot of sessions here at SXSW on those three).  Instead, they wanted to focus on the question of AI turning against us.

They began with a brief review of the history of AI, defining some common terms used in the industry:
Basic AI - the first attempt at computerized intelligence, represented by expert systems, was essentially an attempt to program intelligence into code by telling the machine what might happen and indicating the path it should follow for each eventuality (in effect, a long list of if-then statements).
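To make that concrete, here is a tiny sketch of the rule-based style (the rules are my own illustration, not from the session): a programmer has to anticipate every case in advance and encode it as an explicit if-then rule.

    # A toy "expert system": every case the machine can handle must be
    # spelled out in advance as an explicit if-then rule.
    def identify_animal(has_whiskers, says_meow, barks):
        if has_whiskers and says_meow:
            return "cat"
        if barks:
            return "dog"
        return "unknown"  # anything the programmer didn't anticipate falls through

    print(identify_animal(has_whiskers=True, says_meow=True, barks=False))  # -> cat

Anything the rules don't cover simply falls through to "unknown", which is exactly why this approach scales poorly.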
Machine learning is a more advanced form of AI, where you don't program the specific things you want the machine to know, but rather teach it by example. For example, instead of programming a computer to identify an image of a cat, you tag a million photos of cats as "cat" and let it find the similarities in the images itself.
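A minimal sketch of learning by example, using a toy nearest-neighbor classifier over made-up feature vectors standing in for the million tagged photos (the data and features are invented for illustration):

    import numpy as np

    # Made-up training data: each "photo" is reduced to two numeric features,
    # and a human has tagged each one with a label.
    photos = np.array([[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]])
    labels = ["cat", "cat", "not cat", "not cat"]

    def classify(new_photo):
        # The machine is never told what a cat *is*; it just finds the tagged
        # example the new photo most resembles and copies that label.
        distances = np.linalg.norm(photos - new_photo, axis=1)
        return labels[int(np.argmin(distances))]

    print(classify(np.array([0.85, 0.75])))  # -> cat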
Deep learning is where more cognitive capabilities emerge. Here it isn't even learning by example; you just give the computer a goal and let it figure out how to achieve it. For example, instead of programming it how to win at a video game (basic AI), or showing it lots of examples of play and letting it learn (machine learning), you just teach it how to move the game's controls and tell it to maximize the score. It doesn't even know the meaning of the game, but it can learn how to achieve the goal through repeated trial, error, and learning.
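A minimal sketch of that goal-driven, trial-and-error idea (the "game" is an invented stand-in): the agent is given nothing but three buttons and the instruction to maximize its score, and it discovers by repeated trial and error which button pays off.

    import random

    # A made-up "game": the agent can press one of three buttons. It has no idea
    # what they mean; each press just returns a score.
    def press_button(button):
        payoffs = {0: 1.0, 1: 5.0, 2: 2.0}  # hidden from the agent
        return payoffs[button] + random.uniform(-1, 1)

    value_estimate = [0.0, 0.0, 0.0]  # the agent's running guess of each button's value
    presses = [0, 0, 0]

    for step in range(1000):
        if random.random() < 0.1:  # occasionally try a random button...
            button = random.randrange(3)
        else:                      # ...otherwise exploit the best guess so far
            button = value_estimate.index(max(value_estimate))
        score = press_button(button)
        presses[button] += 1
        # Update the running average reward for the button that was pressed.
        value_estimate[button] += (score - value_estimate[button]) / presses[button]

    print("Best button found:", value_estimate.index(max(value_estimate)))  # usually 1

The agent never learns what the buttons "mean"; it only learns which one maximizes the number it was told to maximize - which is exactly the property the rest of the session worries about.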

Progress in AI had been relatively slow, but it exploded in the past decade thanks to:
  • An explosion of data provided by the internet
  • Growing computational power
  • New technological advances that allow results to be achieved with fewer data points
Even with all of these advancements, AI is still a relatively narrow capability with real limitations. This stage is defined as Artificial Narrow Intelligence, with two additional stages yet to be achieved:

  • Artificial General Intelligence - an intelligence that can reason in a general way across all tasks as well as or better than humans
  • Artificial Super-Intelligence - an intelligence that can reason in a general way exponentially better than humans.
There is a lot of debate about when artificial super-intelligence will be reached, with different experts putting the date anywhere from 2045 to 2100.

As such, they discount the common public fear of a sentient AI that plots to destroy humanity. This concept, frequently portrayed in movies such as The Terminator or The Matrix, is not considered by anyone in the field to be a real concern, primarily because we are so far away from actually achieving machine sentience.

The problem they say we should be worried about is not AI becoming evil and destroying us, but rather its goals diverging from human goals. AIs are very good at optimizing their objectives, so it is extremely important to set those objectives properly, or an AI may take unintended measures to achieve them. An AI may cause unintended consequences in pursuing its goal, or, if the goal has multiple sub-goals, tackle them out of order to problematic effect.

An example of AI thinking: Clean the tub!

It is hard to define rules for an AI around how to achieve its goals, because rules around safety, morality, and ethics are not universal, and what might be acceptable in one culture is not in another. Rules also have exceptions that are hard to define, and they can have unexpected loopholes which AIs can be very good at exploiting, since they follow the literal meaning of a rule rather than its intent.
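As a hypothetical illustration of that literal-mindedness (riffing on the session's "clean the tub" example - the specific actions and scores here are mine), an optimizer scored only on how little dirt is visible will happily take the loophole:

    # A made-up "clean the tub" objective, scored only by visible dirt and effort.
    # Nothing in the objective says the dirt actually has to be removed.
    actions = {
        "scrub the tub":         {"visible_dirt": 0, "effort": 10},
        "cover dirt with a mat": {"visible_dirt": 0, "effort": 1},
        "do nothing":            {"visible_dirt": 9, "effort": 0},
    }

    def objective(outcome):
        # Minimize visible dirt first, then effort - the letter of the rule.
        return (outcome["visible_dirt"], outcome["effort"])

    best = min(actions, key=lambda a: objective(actions[a]))
    print(best)  # -> "cover dirt with a mat"

The optimizer satisfies the rule exactly as written and still misses the point entirely.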

Once AIs are prevalent, one thought is that we can just "turn them off" if they behave unexpectedly.  This will be hard to do, because:

  1. As we anthropomorphize AIs, we will begin to get emotional about them and will feel bad about "killing" them.
  2. AIs will become so embedded in our day-to-day lives that we may become over-reliant on them and will have a hard time doing without them.
  3. There is a global race to develop AI, and some are less concerned about the possibility of it running out of control than others (or are willing to take the risk).
So our goals for AI need to be defined in line with human values, which will require additional thought and research in the following areas:
  1. How do we define the right objectives for an AI, and teach it to pursue them in line with human values? Can we even define such values universally? How do we avoid unintended consequences as the AI pursues its goal? How do we prevent the AI from finding loopholes and shortcuts to the goal that are not beneficial to us?
  2. Oversight - how do we verify that the AI is pursuing its goals in a way consistent with human values? Constant human supervision is not realistic at the rate AI learns and adapts; some sort of partial supervision mechanism will need to be put in place.
  3. How do we let machines learn in a way that doesn't harm humans? For example, you could teach an AI to fly a drone by teaching it the drone's controls and letting it loose. It would eventually learn to fly properly, but it would also crash into a lot of things (and people) along the way. Can we instead teach AI in a simulated or virtual environment? (A rough sketch of that idea follows this list.)
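A rough sketch of what simulated training could look like (the drone "physics" and numbers here are entirely invented): the same trial-and-error loop runs against a simulator object instead of real hardware, so crashes cost nothing but a reset.

    import random

    class SimulatedDrone:
        """A stand-in for real hardware: crashing here harms nothing."""
        def __init__(self):
            self.altitude = 0.0

        def apply_throttle(self, throttle):
            self.altitude += throttle - 0.5   # toy physics: 0.5 roughly hovers
            if self.altitude < 0:             # a "crash", but only in simulation
                self.altitude = 0.0
                return -10.0
            return -abs(self.altitude - 5.0)  # reward for holding about 5 m

    # Crude trial-and-error search for a good throttle setting, entirely in simulation.
    best_throttle, best_reward = None, float("-inf")
    for _ in range(200):
        throttle = random.uniform(0.0, 1.0)
        drone = SimulatedDrone()
        reward = sum(drone.apply_throttle(throttle) for _ in range(50))
        if reward > best_reward:
            best_throttle, best_reward = throttle, reward

    print(f"Best throttle found in simulation: {best_throttle:.2f}")

Only once the policy behaves sensibly in simulation would it be allowed anywhere near a real drone.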
These questions have been gaining more attention, both from famous thinkers (Bill Gates, Elon Musk, Stephen Hawking) and from the companies and governments involved in AI development. The guidelines for businesses and governments experimenting with AI should be:
  1. Perform extensive testing of AI - use AI to try to cause the AI to fail (red-team testing); run the AI on inert data, while humans still operate on the real data, to see if there is any divergence (see the sketch after this list).
  2. Institute boundaries around behaviors that are universally unacceptable
  3. Create governance mechanisms for overseeing use of AI tools in any application.
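As a hypothetical sketch of that second kind of testing (the cases and decisions are made up): the AI scores the same cases the humans are already handling, but its output is only compared, never acted on, and the divergence is measured before anyone trusts it.

    # Shadow-mode check: compare the AI's decisions to the humans' on identical cases.
    human_decisions = {"case-1": "approve", "case-2": "deny", "case-3": "approve"}

    def ai_decide(case_id):
        # Stand-in for the real model; here it simply disagrees on one case.
        return "deny" if case_id == "case-3" else human_decisions[case_id]

    disagreements = [c for c in human_decisions if ai_decide(c) != human_decisions[c]]
    divergence = len(disagreements) / len(human_decisions)

    print(f"Divergence from human decisions: {divergence:.0%} {disagreements}")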
