Sunday, March 11, 2018

SXSW 2018 Day 1 Session 2: The Future of Machine Learning - is it worth the hype?

Session page, including audio: https://schedule.sxsw.com/2018/events/PP70196

Jeff Chow, TripAdvisor
Finale Doshi-Velez, Harvard University
Chris Jones, iRobot
Tom Foster, Inc. Magazine

There are two types of hype surrounding AI - the hype around AI itself, and the hype around the doomsday scenarios that supposedly follow from it.  Machine learning is one discipline within AI - AI is an umbrella term that encompasses machine learning, NLP, autonomy, and other fields.
Machine learning is basically taking huge amounts of data and having the software make sense of it based on predefined guidelines.  The guidelines may be extensive or very simple.
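(Not from the session - just a minimal sketch of what "learning from data based on guidelines" looks like in code. The data and labels here are entirely made up; the point is that the rules come from the examples, not from the programmer.)

```python
# A toy classifier: instead of hand-coding rules, we give the software
# labeled examples and let it infer the rules itself.
# Hypothetical data - any real application needs far more examples.
from sklearn.tree import DecisionTreeClassifier

# Each row: [hours_of_sleep, cups_of_coffee]; label 1 = productive day
X = [[8, 1], [7, 2], [5, 4], [4, 5], [6, 3], [9, 0]]
y = [1, 1, 0, 0, 1, 1]

# The "predefined guidelines" here are just the model's constraints,
# e.g. how deep the decision tree is allowed to grow.
model = DecisionTreeClassifier(max_depth=2)
model.fit(X, y)

print(model.predict([[6, 2]]))  # the model generalizes to unseen data
```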
Tom: Survey shows that 89% of CEOs say they are in the process of deploying machine learning.  Does this mean that it is just a buzzword?
Jeff: Probably not just a buzzword, but most likely most companies are engaging in a fairly basic form of ML.
Chris: It really should be 100%.  People have a narrow definition of AI, but even a simple data-crunching algorithm is AI.
Tom: how do you tackle big problems with Machine Learning?
Jeff: At TripAdvisor we have over 600M reviews; how do you make this more useful?  AI helps catalog and sort the raw data, classify reviews by sentiment, and surface the right review at the right time.
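(A rough sketch of the kind of sentiment classification Jeff is describing - a generic bag-of-words pipeline with invented reviews, not TripAdvisor's actual system:)

```python
# Toy review sentiment classifier; the reviews and labels are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = [
    "Great location, friendly staff, would stay again",
    "Room was dirty and the service was terrible",
    "Loved the breakfast and the view",
    "Noisy at night, and the air conditioning was broken",
]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

# Turn text into word-frequency features, then fit a linear classifier.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(reviews, labels)

# With real training volume this would score new reviews reliably;
# here it just illustrates the shape of the pipeline.
print(clf.predict(["The staff was rude and the room smelled"]))
```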
Tom: where is it all going?
Finale: Healthcare - determining which treatment works best for which patient.  The challenge is that we only have very partial data, and there are many unknown factors we may still be missing.  Missing or partial data can lead to sub-optimal decisions.
Other fields include education and law, but these also suffer from partial data, or are areas where you can't test all possibilities without real-world impact.
Chris: The Roomba makes lots of decisions each second.  It needs to take into account the spatial layout, remember where it's been and where it hasn't, and identify landmarks it can use to understand the home - mapping the interior space of the home.
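(An illustrative toy of the "remember where you've been" mapping idea - an occupancy grid with made-up cell states, nothing like iRobot's actual algorithms:)

```python
# Minimal occupancy-grid sketch: the robot's map of the home, where each
# cell records what the robot has learned about that spot.
import numpy as np

UNKNOWN, VISITED, OBSTACLE = 0, 1, 2
grid = np.full((10, 10), UNKNOWN)       # 10x10 cells, all unexplored

def observe(x, y, blocked):
    """Record one observation: we either cleaned (x, y) or bumped a wall."""
    grid[y, x] = OBSTACLE if blocked else VISITED

observe(0, 0, blocked=False)            # cleaned the starting cell
observe(1, 0, blocked=True)             # bumped into furniture

coverage = (grid == VISITED).sum() / grid.size
print(f"explored: {coverage:.0%}")      # how much of the home is mapped
```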
Tom: What other uses are there for spatial understanding? There is large value in mapping the home autonomously - manually entering all this information is not consumer friendly.
What are some of the thorny problems that will need to be dealt with?
Finale: In healthcare, the problem domain is much more complex. You can't check all the permutations, but you can try to improve by reducing the level of medical errors.
Tom: Ethically, can we trust machine decisions?
Finale: In the case of medicine, machine learning will not bring about the replacement of doctors; it will augment their capabilities and help guard against error.  However, we are still in the early days in healthcare, especially because of regulatory issues.
Jeff: Currently machine learning is narrowly focused, which makes it easier to manage.  Expanding it to have a wider view, or to integrate multiple fields, is harder.  It is more about empowering human operators than giving control to robots.
Tom: What are the unexpected consequences?  Ethical questions?
Finale: In healthcare there are a lot of ethical questions in today's system, even before you introduce AI.  When we get to the question of applying AI, we'll need to decide which human values we want represented and apply them to the software.  Once we decide on values, we can test how well we do against them.  AI allows decisions to be rolled out to large populations at once, so we need to be careful.  It's hard to predict, but we can monitor.
Tom: But there are multiple societal values - how do we choose?
Finale: These choices need to be made by us as a society through open public discussion - we can't leave these to the software developers writing the code.
Chris: We need to remember AI is a tool, not unlike other tools we use; we always have to consider how we use any tool, and AI is not unique here.  Also, there will be a very long adoption period, long enough to allow us to calibrate.
Finale: An example of the sort of problem narrow AI can run into was discovered in an application written to predict the risk level of pneumonia patients coming into the ER.  Going purely by the data, the machine determined that asthmatic patients were at lower risk from pneumonia than other patients, because statistically asthmatics had a lower mortality rate from pneumonia, so it ranked them as low risk.  However, the reason they had lower mortality was that the staff understood they were at higher risk and gave asthmatic patients a higher level of care when they came in with pneumonia; the machine learning algorithm got it wrong because it failed to see the full picture.
Bias is another problem.  If the data collected is affected by human bias, then the machine learning algorithm will replicate the bias found in the data.  Any machine learning algorithm is only as good as its data.
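(The pneumonia confound and data bias are really the same failure: the model faithfully learns whatever pattern is in its training data, relevant or not. A tiny simulation with hypothetical hiring data makes this concrete:)

```python
# Train on biased historical labels and the model reproduces the bias.
# All data here is synthetic and the scenario is hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
skill = rng.normal(size=n)            # the feature that should matter
group = rng.integers(0, 2, size=n)    # an attribute that shouldn't

# Past decisions were skewed by group membership; those are our labels.
hired = (skill + 0.8 * group + rng.normal(scale=0.5, size=n)) > 0.5

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)
print(model.coef_)  # the "group" weight is clearly nonzero: the model
                    # has learned the bias baked into its training data
```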
Tom: To what extent is a company using an ML tool responsible for making sure it works for a wide range of people?  Often the extra testing required to ensure this conflicts with the need to ship quickly, and with the natural bias developers have towards "sunny day" rather than "rainy day" scenarios.

Elon Musk, Stephen Hawking and others warn about an AI dystopia; how do you feel about these warnings?
Chris: The points are valid and worth thinking about.  At this point I don't think we're going in blind - this is a very long road, and we have time to think about it.
AI is very application specific; one can't talk about AI generically. How you manage AI for cars is not how you manage it for healthcare.
Finale: We already have a lot of regulation; we just need to make sure we apply it to AI.  The unintended consequences of AI use are far more worrying than the Skynet scenario.
Jeff: AI can be weaponized, but it also has huge benefits, so it's a question of balance.
Finale: With humans we can ask why they did something; we need to be able to do the same with AI.  We need to make sure we build, on top of the AI, the ability to translate its internal workings into human-understandable reasoning.
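(One simple version of "asking the AI why": with a linear model, each feature's contribution to a single prediction is directly readable. The features and numbers below are invented; real interpretability work goes much further, but the idea is the same.)

```python
# Per-feature contributions to one prediction from a linear model.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.2, 1.4], [1.1, 0.3], [0.9, 1.2], [0.1, 0.2]])
y = np.array([1, 0, 1, 0])
features = ["symptom_severity", "age_normalized"]  # made-up names

model = LogisticRegression().fit(X, y)

x_new = np.array([0.8, 1.0])
contributions = model.coef_[0] * x_new  # each feature's pull on the score
for name, c in zip(features, contributions):
    print(f"{name}: {c:+.2f}")          # a human-readable reason per feature
```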
