Saturday, March 11, 2017

SXSW Day 2, session 1: AI on the Horizon: Challenges, Directions, Future

The speaker in this session was Eric Horvitz, Director of the Microsoft Research Lab on MS's main campus.
The lecture started with a basic primer on AI, defining the four pillars of AI as the speaker identified them - perception, learning, reasoning and natural language.  He went through a few areas where AI is being applied, from image recognition and understanding (recognizing expressions), through drone piloting (preventing collisions by training in simulated environments), to assisted surgery - a human doctor and a robot collaborating to perform a particularly complex operation.  He also talked about autonomous cars and the benefits expected from them.  All very standard for an AI conversation.
He then moved on to some of the challenges.  First, dealing with unknowns: AI is trained to identify patterns, but how should it deal with surprises?  This is especially clear in autonomous driving, where surprises can happen all the time - a kid popping out into the road, a tree branch falling and so on.  He also touched on ethical challenges, again using the example of autonomous cars: should a car swerve away from a person on the road at some risk to the passenger, or protect the passenger at the cost of hitting the person?  These kinds of decisions will have to be programmed into the cars, along with ways of handling unexpected scenarios that cannot be predetermined.

Another risk is that AI opens new "attack surfaces" that were previously unavailable - for example, adversarial machine learning.  An image recognition AI maps a set of properties and parameters it finds in an image to a definition of the image.  One group of researchers took such an AI and used it to search for the smallest possible changes to an image that would make the recognition software misidentify it.  See the below example:
The image of the stop sign on the right was tweaked by only a few pixels - to the eye it looks exactly the same; to the image recognition software, however, these small changes were enough to change the definition it matched from "stop" to "yield", even though the image looks nothing like a yield sign.  I wondered whether this sort of thing could be done with humans as well, and asked him, but he said that machines work differently from the human brain, about which we don't know nearly enough to engineer this type of attack.
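To get a feel for the idea, here is a toy sketch of a fast-gradient-sign-style perturbation - not the actual research setup; the "classifier", its weights and the numbers below are all invented, and the real attack targeted a trained deep network rather than a linear model:

```python
import random

# Toy linear "image classifier": score = dot(w, x), class 1 if score > 0.
# The weights are made up for illustration only.
random.seed(0)
w = [random.gauss(0, 1) for _ in range(100)]  # hypothetical model weights

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

def sign(v):
    return 1.0 if v > 0 else -1.0

# An input the model confidently classifies as class 1 (think: "stop").
x = [0.1 * sign(wi) for wi in w]

# Gradient-sign attack: nudge every pixel by a small fixed amount in the
# direction that most reduces the score (for a linear model, against sign(w)).
epsilon = 0.2
x_adv = [xi - epsilon * sign(wi) for xi, wi in zip(x, w)]

print(predict(x))      # 1 - original input classified as "stop"
print(predict(x_adv))  # 0 - every pixel barely changed, prediction flipped
```

The point is the same as in the stop-sign example: each individual change is tiny, but because they are all chosen in the direction the model is most sensitive to, together they flip the classification.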

Another concern is AI attacks on human minds.  Researchers here studied a target's Twitter history to craft a tweet personalized to that one specific user, designed to maximize the chance they would click a link in it.  This required learning from the Twitter feed what types of messages cause that person to click an attached link, and then constructing a message to match those parameters.  The resulting message looks like it was written by a human, not by a machine.
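A crude way to picture the mechanism (purely illustrative - the tweets, candidates and scoring below are invented, and the actual research used far more sophisticated modeling) is to score candidate messages by how well they match the vocabulary of tweets the target previously engaged with:

```python
from collections import Counter

# Invented click history for a hypothetical target user.
clicked_tweets = [
    "huge playoff win tonight, highlights here",
    "new game trailer just dropped, watch now",
]

def vocab(texts):
    # Word-frequency profile of the tweets the target clicked on.
    return Counter(word for t in texts for word in t.lower().split())

def score(candidate, profile):
    # Overlap between the candidate's words and the target's click history.
    return sum(profile[word] for word in candidate.lower().split())

profile = vocab(clicked_tweets)
candidates = [
    "quarterly earnings report attached",
    "playoff highlights you missed, watch here",
]
best = max(candidates, key=lambda c: score(c, profile))
print(best)  # the sports-themed lure wins for this particular target
```

Swap in a different user's history and a different lure wins - which is exactly what makes the attack personalized rather than generic spam.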

An additional advancement in AI that can be misused is the ability to transfer the facial expressions of one person onto a video of another in real time.  This makes it possible to take a video of someone and make their face show expressions they never made.

Another area to be careful of is machine bias.  Artificial intelligence algorithms are extremely complex, and it's hard to fully grasp precisely how they will behave.  This can lead to unexpected biases forming in the system.  For example, a system used to advise judges on early release of prisoners was found to be biased in determining who is likely to commit another crime.
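One way such bias shows up (the numbers here are made up for illustration, not taken from the actual system) is that even a model with similar overall accuracy across groups can flag harmless people in one group as "high-risk" far more often than in another:

```python
# Each pair is (prediction, actual outcome):
# prediction 1 = flagged high-risk; outcome 1 = actually reoffended.
# Both groups are invented data for the sake of the illustration.
group_a = [(1, 0), (1, 1), (0, 0), (1, 0), (0, 0), (1, 1)]
group_b = [(0, 0), (1, 1), (0, 0), (0, 0), (0, 1), (1, 1)]

def false_positive_rate(pairs):
    # Among people who did NOT reoffend, what fraction were flagged anyway?
    negatives = [(p, y) for p, y in pairs if y == 0]
    return sum(p for p, _ in negatives) / len(negatives)

print(false_positive_rate(group_a))  # 0.5 - half of non-reoffenders flagged
print(false_positive_rate(group_b))  # 0.0 - none flagged
```

A judge looking only at overall accuracy would never see this gap, which is why disparities like the false-positive rate have to be checked per group.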

The speaker then considered the concern, raised by a few, that once AI is able to design its own code, it will grow at an incredibly accelerated pace, leaving humans far behind.  This view is shared by a number of members of the computer science community, although there is some debate about when such a condition might arrive.
Looking forward, the speaker compared where we are with AI now to the Kitty Hawk moment - we've made the initial breakthrough, the equivalent of the first airplane getting into the air.  He noted that it took only about 50 years from that moment to the Boeing 707, and that software can improve at a much faster pace.

The speaker closed by looking at some of the work he's involved in now in thinking about the impact of AI.  One example is a 2016 report on AI and life in 2030, and another is a session he coordinated that pitted two AIs against each other, one coming up with nightmare scenarios of AI running out of control, and the other building countermeasures and defensive strategies.

He briefly addressed the possible combination with quantum computing and whether it would be an accelerator for AI, and said that in his opinion it was not clear this is the sort of thing quantum computing would be good at.

Another future-of-AI question is what happens when we all become completely dependent on getting our recommendations from AI.  What impact does that have on people's lives?

An excellent lecture overall.

1 comment:

  1. Keep going. It's great to get such detailed insights. Almost like being there myself.