Wednesday, March 29, 2017

SXSW Day 5 session 5: Brain wearables

Tan Le (Emotiv)

Emotiv makes a product worn on the head, a strip of plastic a bit like a crown, which uses sensors to detect brain waves through the scalp.  The device can be trained to recognize specific patterns of waves, each representing a particular type of thought.  It can then translate a pattern into a wireless command to some device, or simply collect information for studying the function of the brain.

The system is an open platform that can serve a variety of uses, such as a remote control to move objects or a control input to a device.  The system is trained by having the person repeat a particular type of thought like a command (such as “up”).  Once the pattern is identified, it’s matched to an activity.
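As a rough mental model of that training step (this is an illustrative sketch, not Emotiv's actual algorithm), you can think of it as averaging the repeated samples for each command into a reference pattern, then matching each new reading to the nearest pattern:

```python
def train(samples_by_command):
    """Average repeated feature vectors per command into a reference centroid."""
    centroids = {}
    for command, samples in samples_by_command.items():
        n = len(samples)
        centroids[command] = [sum(vals) / n for vals in zip(*samples)]
    return centroids

def classify(centroids, features):
    """Map a new feature vector to the closest trained command."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda cmd: sq_dist(centroids[cmd], features))
```

Real systems extract far richer features from the raw EEG signal and use more sophisticated classifiers, but the train-then-match loop is the same.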
The device can be used for people with disabilities – example shown was a quadriplegic with no motor functions who was able to play a rudimentary video game using thoughts to control the motion on screen.
In addition, the device can be used for biofeedback – it can help train our brains by identifying positive mental states and enhancing them.  Identify a positive mental state, and have it trigger a visual cue which a person can try to repeat.
Additional applications include research.  For example, do external factors improve or harm concentration?  Another example is monitoring the brain processes of amateur card players and comparing them to those of an expert card player, to see what thought patterns experts use.
Many of these activities are possible with an EEG machine, but an EEG requires a complex hookup, and the wires limit where it can be used.  This device is wearable and wireless, so it is portable and can be used in field research and in situations where exertion or extreme mobility is required.  It can also scale better.


Standard EEG

Emotiv device

In the following video, an audience member was brought up and fitted with the device, which was set to control a Sphero ball.  First the device was “trained”: the volunteer was told to think about moving in a direction, and the device captured the patterns.  After a couple of minutes, the training was completed.  The hard part was getting the Sphero to work, though; the Wi-Fi and Bluetooth kept failing, and they couldn’t connect it to the phone and pair it with the wearable.  Eventually the speaker gave up, handed the devices to an assistant to figure out, and continued with the presentation.  Then, at some point, they were suddenly able to make it work:


Tuesday, March 28, 2017

SXSW Day 5 session 4: Beyond driverless cars - our transportation future

A panel discussion moderated by Neal Ungerleider (Fast Company) featuring Anthony Foxx (former US secretary of transportation and mayor of Charlotte), Chandra Bhat (University of Texas), and Don Civgin (Allstate insurance company)

The panel started with some statistics:

  • Cars are not utilized (i.e. parked) 96% of the time
  • 8 billion hours wasted in congestion (I guess annually)
  • For low and middle income families, transportation costs represent their second highest expense


Transportation sits on three pillars:

  • Technology: this is where advances happen most rapidly
  • Infrastructure: need to think about how to adapt infrastructure to the prevalence of autonomous cars
  • Human awareness: benefits and unintended consequences need to be thought through.  For example, if autonomous cars free me to do whatever I want during the ride (work, entertainment), then time of travel is not as much of an issue as it was in the past, which may encourage urban sprawl, lead to longer drives, and increase waste.
Will autonomous vehicles be owned, or will we just use ride sharing?  Younger, more educated people gravitate towards the sharing economy, so they will be buying fewer cars.

Autonomous cars will come sooner rather than later:
  • Autonomous cars are "killer apps"; they will be the feature people use to decide to buy a car, and which car.
  • Companies pioneering the technology (primarily Uber and Tesla) are not playing in the traditional manner; they are disruptive and aggressive.  Traditional car manufacturers have to accelerate their own plans to keep up, and as a result, the entire timeline is accelerated. 
How will liability be assessed around autonomous cars?
Today, humans who drive cars have to be tested and licensed.  How do you license the software behind a self-driving car?  We would need a centralized standard.
The government's approach to this question is still taking shape.  The US government has published guidelines but needs to expand them and have them periodically reviewed.

How do you keep your data private, but also make the information about autonomous vehicle accidents public and available to manufacturers and government so it can be reviewed and learned from?  Possibly will need to adopt a similar model to the one used for aviation accidents.

Car manufacturers are beginning to view themselves as mobility companies, rather than car companies.  One unexpected possible feature of autonomous cars is that because they are much safer than regular cars, they may be able to lose a lot of weight taken up in today's cars by safety features (bumpers, reinforcements, airbags, etc.), which could lead to smaller engines and better efficiency.

Sunday, March 26, 2017

SXSW Day 5 session 3: Connected cities, hackable streets

Panel with Nadya Bliss (Arizona State University) moderating Tom Cross (Drawbridge Networks) and Robert Hansen (OutsideIntel)

Smart cities represent the confluence of machine learning, big data and IoT in a city context.  Most cities start with power-related applications.
Security represents a special challenge in cities, which have both old, legacy, vulnerable systems and modern sensors and tools that we have not yet figured out how to protect very well.  Basically, a lot of the attention directed towards smart cities today is driven by the marketing of device manufacturers trying to increase sales of their devices under the guise of cost reduction.  The cities themselves are still not 100% clear on the use cases.

One example of a hacking of a very simple smart city feature:
Many cities have smart traffic lights – traffic lights with sensors buried under the road that sense when a car is on top of them and signal the traffic light (usually wirelessly) to change.  Since the sensor communicates wirelessly, and almost never securely, it's relatively simple to identify and decipher the signals it sends to the traffic light, and then, using a transmitter more powerful than the sensor, disrupt the operation of the traffic light, even from a distance.

There are three actor classes who may be involved in hacking cities:
1. Bored teenagers - kids who would hack a traffic light for the LOLs.  They represent a low threat overall in the category of connected cities.
2. Criminals and criminal organizations - they are usually looking for ways to monetize a vulnerability.  They would be involved in very specific use cases.
3. Nation states - possibly most interested in vulnerabilities of connected cities, especially when they can accumulate effects to generate a large impact on the whole city or state.  They can shut down devices for general disruption, or introduce subtle changes to how the system operates, to achieve a specific goal.
For example, in the case of traffic lights, a nation state could attempt to disrupt an election by causing traffic slowdowns or jams in areas with many voters for a particular candidate.  This type of attack would be extremely difficult to even identify as a hack, let alone trace back to its originator.  Also, cities are resource poor - they get all of their resources from outside the city.  An attack on delivery trucks or the transportation routes they take could deprive a city of food in a matter of days.
Not all "hacking" is done by people trying to cause disruption; there have been reports of police feeding false information into Waze to mislead users about where officers are actually located.

Because they span such diverse domains, smart cities have numerous points of failure.  And because product developers want to get their products to market fast, they don't always perform intensive security testing and fixing, which takes time and costs money.  Also, given the way some of these devices are used, many are not built with the ability, or even the accessibility, to be patched.
As such, what are the ethical considerations around disclosing discovered vulnerabilities?
The software industry has developed norms for how to disclose vulnerabilities; in newer industries, such as connected cars, however, there is less openness, as not all manufacturers have adopted the software mindset quite yet.

What should be done?

  • All messaging needs to be encrypted
  • Companies in all areas of smart city services and devices need to be made to follow rules around security - SLA on logging, encryption, upgrades, etc.
  • Need to have one throat to choke - each city needs to have a Chief Security Officer or innovation officer who can coordinate security concerns for the city.  Cities should also create panels for advising them on how to add smart city features securely
  • Determine for each device whether it really needs to be connected to the public internet - is there a good reason for public connectivity?
  • Consider data as a liability, not an asset, and treat it accordingly
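The first bullet is exactly what would have prevented the traffic-light spoofing described earlier.  As a minimal sketch (the shared key here is illustrative; real deployments need proper key management, plus encryption on top if the contents are sensitive), even just authenticating each sensor message with an HMAC lets the receiver reject forged or altered signals:

```python
import hashlib
import hmac

SHARED_KEY = b"per-device secret"  # illustrative only; provision real keys securely

def sign(payload: bytes) -> bytes:
    """Tag a sensor message so the traffic light can verify its origin."""
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()

def accept(payload: bytes, tag: bytes) -> bool:
    """Constant-time check; a spoofed or modified message fails verification."""
    return hmac.compare_digest(sign(payload), tag)
```

An attacker with a stronger transmitter can still jam the channel, but can no longer inject believable commands without the key.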
What challenges are we facing in this domain?
  • The people making the technology are not incentivized to spend money and time on security
  • People buying the technology are not aware of the risks and dangers
  • There is a lack of talent to fill the need
What are the main questions you should consider when building for security?
  • What happens if you get a reply you didn't expect?
  • What happens if you get no reply at all, when you expected one?
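Those two questions translate directly into defensive code.  A minimal sketch (the function names are hypothetical) where both a malformed reply and a missing reply fall back to a safe default instead of crashing or acting on bad data:

```python
def safe_query(query_fn, validate, default, timeout=2.0):
    """Query a device, falling back to `default` on timeout or invalid reply."""
    try:
        reply = query_fn(timeout=timeout)
    except TimeoutError:          # no reply at all, when you expected one
        return default
    if not validate(reply):       # a reply you didn't expect
        return default
    return reply
```

The point is that the failure paths are designed up front, not discovered in production.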

I asked a question about Estonia, and whether there are any specific applicable lessons that could be learned from that country's experience both in advanced digitalization of public services, as well as withstanding attacks from a nation state attacker (Russia); the panel did not have any information around this, but someone in the audience was able to give me some interesting details:
  1. Estonia uses Guardtime (an industrial blockchain platform) to log minute by minute records of anything that happens in their database, so they can be sure no one can tamper with their data undetected
  2. Estonia is planning to build "data embassies", which are secure data centers outside of the country, that act as a backup in case of attack and are treated like real embassies.
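The tamper-evidence idea in point 1 can be illustrated with a simple hash chain (a toy sketch of the core concept, not Guardtime's actual design): each entry's hash commits to every entry before it, so altering any record invalidates all subsequent hashes:

```python
import hashlib

def chain(records):
    """Build an append-only hash chain over a list of log records."""
    h = "0" * 64  # genesis value
    hashes = []
    for record in records:
        h = hashlib.sha256((h + record).encode()).hexdigest()
        hashes.append(h)
    return hashes

def verify(records, hashes):
    """Recompute the chain; any tampering produces a mismatch."""
    return chain(records) == hashes
```

Publishing the latest hash somewhere widely witnessed is what makes the log tamper-evident to outsiders, not just to the log's owner.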

Friday, March 24, 2017

SXSW Day 5 session 2: Going Beyond Moore's Law

Panel with Greg Yeric (ARM Research), Rachel Courtland (IEEE), Tom Conte (Georgia Institute of Technology) and Tsu-Jae King Liu (University of California at Berkeley)

Moore’s law says the number of transistors on a square inch of chip doubles every 1-2 years.  People take this to mean the speed of computers doubles every year, but really it was stated more as an observation about the economy of making semiconductors.  The performance prediction is attributed to a different Intel executive (David House), who predicted chip performance would double every 18 months.
Moore’s law was a good predictor of semiconductor advancement and correlated well to performance.  In 1995 transistors were still getting smaller, but the signal distance was increasing, so wire delay started to slow them down.  This was compensated with other optimizations that kept Moore’s law relevant.  In 2005 power output was leading to greater and greater heat, which led to a limitation of clock speed, and instead of running cycle speeds faster, newer chips just add more cores.
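The doubling claim is easy to put into numbers.  A quick sketch of the exponential growth, assuming an 18-month doubling period:

```python
def transistors(initial_count, months, doubling_period=18):
    """Projected transistor count after `months`, doubling every `doubling_period` months."""
    return initial_count * 2 ** (months / doubling_period)
```

Three years of this compounding quadruples the count; a decade multiplies it by roughly a hundred, which is why even small changes to the doubling period matter enormously.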

The existing process of shrinking chips is not going to be economical, so the current loop of size reduction will not be sustainable without substantial innovation.  To sustain (and move beyond) Moore's law, we probably need some disruption in the industry, which can include things like:

  • Cryogenic supercomputing can perform at lower power levels than CMOS (even when accounting for the power required for the extreme cooling) which would allow for higher clock speeds.
  • Quantum computing - this is still in the future, and has only limited applications.  Research is currently in its infancy.
Looking forward, the expectation is that innovation being worked on today will be commercially available ~10 years down the line.

What are the characteristics needed from today's semiconductor chips?
  • Ability to be always on
  • Low power - need much better efficiency (we are looking for orders-of-magnitude drops in power usage - reducing voltage from 1 volt to 1 millivolt reduces power consumption by a factor of a million.  Once we get to that level of power usage, we won't need batteries anymore - chips could run on ambient power in the air.)
  • Embedded memory
  • Small - to support applications in IoT and wearables
  • Networked (optical interconnects)
  • Flexible substrate (for wearables)
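The 1-volt-to-1-millivolt claim in the low-power bullet follows from the classic CMOS dynamic power relation P ≈ C·V²·f: power scales with the square of voltage, so cutting voltage by a factor of 1,000 cuts switching power by a factor of 1,000,000 (holding capacitance and frequency equal, and ignoring leakage):

```python
def dynamic_power(capacitance, voltage, frequency):
    """Classic CMOS dynamic (switching) power: P = C * V^2 * f."""
    return capacitance * voltage ** 2 * frequency

# Same capacitance and clock, voltage cut from 1 V to 1 mV:
reduction = dynamic_power(1e-9, 1.0, 1e9) / dynamic_power(1e-9, 1e-3, 1e9)
```

In practice leakage current dominates at very low voltages, which is part of why this is a research goal rather than a product today.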
Technologies for future systems:



Thursday, March 23, 2017

SXSW Day 5 session 1: What to do when machines do everything

Malcolm Frank (Cognizant)

AI is already here today in little helpers we use on a daily basis (personal assistants, next-gen GPS apps, etc).
When Deep Blue beat the human world champion in chess, people still belittled the achievement and did not attribute it to AI: IBM’s Deep Blue was written for the single purpose of defeating the world champion at chess.  Later, Watson defeated Jeopardy champions, and that was acknowledged as an improvement because it required real-time deciphering of human speech; but once the speech was understood, the rest was a search algorithm, so people still played the achievement down.
When DeepMind's AlphaGo beat the world Go champion, however, it was a key turning point, because it was a general-purpose AI that was taught to play Go, not a purpose-built application.  Go is also the most complex board game in existence, orders of magnitude more complex than chess, with no real way to brute-force it.  Here we pitted 2,500 years of human experience against two years of computer development, and the computer won.
Even after the decisive victory in Go, some people still said that AI could only go where it could algorithmically solve problems; in games like poker, where small human cues are important, it would not be able to compete.  But recently a computer beat the top poker players decisively.  It turns out that short-term human strategies like bluffing and visual cues do not stack up against a long-term algorithm, which can adapt to catch and learn these invisible cues over time.

Today, 8 out of 10 hedge funds are AI driven; Tesla’s automated driving has accumulated over 200 million miles of driving, and the more cars Tesla sells, the more learning it accumulates.  In radiology, AI identification of anomalies like cancer has reached an accuracy of 99.6%, exceeding human accuracy of 92%.  In the legal world, paralegals who spend two weeks on due diligence are being replaced by machines that do the same work in two hours.  JP Morgan Chase has put in place a loan review system that has automated over 360,000 hours of what used to be human work.  Helpdesk management is being replaced by voice-recognition software and chatbots, and AI is proving to be much better than humans at investment advisory, because it has none of the biases humans have.  The displacement of humans by AI is happening already, and it’s happening rapidly.
The takeover of jobs by AI can be described as a capitalist dream and labor nightmare.  For the capitalists, AI and automation
-          Radically lowers operational costs
-          Improves quality (fewer errors)
-          Boosts speed
-          Raises insights and meaning previously unavailable

What happens on the other side of the equation?  A 2013 Oxford study predicts that 47% of US jobs are at risk of being automated in the next one to two decades.  Nevertheless, Malcolm Frank is optimistic.  Why?  Because where past technological advancement focused on smaller job domains, it is now shifting to bigger systems: finance, healthcare, government.  He sees three scenarios for humans:
  • Replaced - AI takes over your job completely
  • Enhanced - AI enhances your job, taking away the rote activities and leaving you to deal with higher level functions
  • Invented - AI creates new jobs that didn't exist previously
Frank says 90% of the talk is around replacement, but he feels that 90% of the actual impact of AI will be in enhancement and invention.  He breaks it down as follows:
  • 12% of jobs will be replaced
  • 75% of jobs will be enhanced
  • 12% of jobs will be invented
Simpler jobs will be replaced altogether, but automation has been eliminating jobs for quite some time now.  For example, automated toll booths replaced human toll booth operators - and that was not a job anyone really wanted to do.  Until now, automation replaced primarily blue-collar jobs; now, however, software is automating white-collar jobs as well, if they are rote enough.
As an example, a company called "Narrative Science" has software that automates simple journalism.  Almost all minor sports events that receive small coverage in local newspapers are no longer written up by people, but rather by its software.  The following article was written just by entering the play information into its story-writing software:
Story written by Narrative Science software

A guideline he gave for whether your job is at risk is this: do you sit in a large cubicle farm?  If so, your job is at risk.

Most jobs, however, are a sum of multiple tasks: some of the tasks will be automated, but some will not.  For example, lawyers:

Some of these tasks can be automated (examining legal data, researching, preparing and drafting legal documents) but others are not so easy (presenting cases before clients and courts, gathering evidence, etc.).  Again, this is nothing we are not already used to today.  A taxi driver uses GPS for navigation, a credit card reader to automate payment, and possibly an app for a ride-hailing service like GetTaxi.

Generally speaking, you could describe a "periodic table" of jobs as follows:
The jobs on the left will be automated; the ones on the right are enhanced.
People will do the "art" of a job; machines will do the "science" of the job.  Examples of areas of augmentation:



And it's not as though our institutions do not have vast room for improvement.  Healthcare, for example, is a very inefficient, very bureaucratic industry, with so many activities not related to actual health (forms, returns, appointments, questionnaires, reports, etc).  Some of the improvements that can be expected in healthcare include:



What about new jobs?  Frank describes what he calls the Budding Effect.  Edwin Budding invented the lawnmower in 1827.  The ability to mow lawns to an even height vastly improved the ability to play sports, opening the door to a huge sporting industry which was previously limited or impossible.  A more striking example of the Budding Effect is the invention of ways to generate and deliver electricity, which enabled communication, radio, TV, and countless other industries, none of which could have been imagined when the initial inventions for generating electricity were made.  Frank quotes W. Brian Arthur, who wrote in the McKinsey Quarterly that by 2025 the second economy (the digital economy) will be as large as the entire 1995 physical economy.  (Interestingly, Arthur does acknowledge that some jobs lost to the second economy will not be coming back, and sees the lack of jobs as a problem that needs to be addressed; he says the focus should be less on job creation and more on wealth distribution.)

Where will new jobs be created?  These are some good candidate domains:

Wellness - as mentioned previously, the current healthcare system operates very poorly.  We will see a shift to patient-centered care, rather than doctor-centered care.  Furthermore, AI will be able to predict when we are likely to become unwell or get sick, so we will be able to take preventive measures in advance and reduce overall illness altogether.

Biotechnology - will be bigger than IT by 2025

VR and AR - will grow to a size similar to today's movie industry.  VR is predicted to be a bigger business than TV by 2025; Tim Cook (Apple's CEO) predicts that AR will be bigger than VR in 10 years.

The experience economy - people with time on their hands will seek more vacation, and beyond just going places they will want to experience things.  Airbnb is planning to move beyond providing lodging to providing experiences: want to live in a medieval castle, relive the Renaissance, or experience what it was like to be a pioneer?  Specialized experience packages will take tourism to different levels altogether (à la Westworld, hopefully without the murderous robots).

Smart infrastructure - smart buildings, roads, and so on.  The money for projects like these comes from automation - when self-driving cars eliminate the majority of car accidents, the savings to the US economy alone will be $1 trillion (as a point of comparison, all federal tax revenue is $1.7 trillion).

Next gen IT - cybersecurity, quantum computing - these are emerging domains that will create jobs as well.

In summary, says Frank, never sell human imagination short - human wants and desires are limitless, and they will lead to things for us all to do.

Wednesday, March 22, 2017

SXSW Day 4 session 4: A new normal: user security in an insecure world

Panel discussion with Alina Selyukh (NPR technology reporter), Bob Lord (Yahoo's chief information security officer (CISO)), and Christopher Kirchhoff (previously an assistant to the Joint Chiefs of Staff at the Pentagon)

First off, the moderator addressed the most interesting question - the Yahoo attack and how Bob Lord handled it.  He said it was most likely a nation state sponsored attack, which he called "the new norm".  He said that if in the past nation state sponsored attacks were primarily directed at government or military targets, today many corporations are also attacked by nation states, either for industrial espionage, to give its own corporations an advantage or even for revenge (e.g. Sony hacks attributed to North Korea).  He said that many corporations do not understand the meaning and impact of having a nation state attacker - the dedication of resources, time, money and people involved.
Nation state attacks are different than regular hacks in that they are very well funded and can be planned and executed over many years.

Similarly, Kirchhoff was asked about his experience - he is the one who had to deliver the news of the Snowden leaks to the Joint Chiefs of Staff.

Bob Lord was asked about what he does at Yahoo to improve security.  He mentioned a number of things:
1. Red team/Blue team exercises - like many companies, Yahoo conducts red team exercises, where a group of hackers tries to penetrate their own systems (red team) while another team tries to detect and stop them (blue team).  He says the red team always wins; they were never able to stop them.  He recommended not building the red team from the people in charge of security at your company, as they may fall into certain patterns of thinking and make assumptions based on their knowledge of the security defenses.
2. Phishing exercises - IT sends out phishing emails to employees, to see if any of them click on the included links.  He says he usually uses this technique to test how good the security orientation was.  He said that one of the lessons he learned from Phishing attacks was that the security orientation for new employees was too detailed, with so much information people didn't remember very much of it.  He now prefers more focused sessions on key points he wants employees to remember, so they're not overwhelmed with information.

He said that most failures are procedural - not using proper existing protocols, not making sure updates are applied immediately - a lot of human error.  He said that security problems are not just technical problems, they are also cultural ones.

On the subject of security culture, Kirchhoff described how the Navy developed a "high reliability" culture for places where small mistakes can have big impacts (such as nuclear submarines).  He mentioned it has 5 principles and named two of them - forceful backup (he gave the example of sending two people to do a critically delicate job, even if only one is needed for it), and integrity (if you make a mistake, speak up regardless of consequences).

[As a side note, I think he confused "high reliability" with "operational discipline"; the two pillars he mentioned come from the latter (the five pillars of operational discipline are Questioning Attitude, Level of Knowledge, Forceful Watch-team Backup, Formality and Integrity).  High Reliability Organizations have different characteristics (Preoccupation with failure, Reluctance to simplify interpretations, Sensitivity to operations, Commitment to resilience, and Deference to expertise)]

Lord mentioned that adding security after the fact cannot work, security has to be designed right into whatever is being developed as it's being developed.  For regular users, rather than corporations, he mentioned these as some of the more important steps to take to improve personal security:

  1. Keep things patched and up to date
  2. Use two factor authentication
  3. Shut down old accounts - hackers know that a lot of people reuse their passwords, so the more accounts you have out there, the better the chance a hacker can stumble across one of your passwords (oh, and don't reuse passwords).
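The two-factor codes Lord recommends are typically TOTP (RFC 6238): your phone and the server share a secret, and both derive a short code from it and the current time.  It is simple enough to sketch with the standard library:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, step=30, digits=6):
    """Time-based one-time password (RFC 6238, HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Even if a hacker does stumble across your password, they still need the current code, which changes every 30 seconds.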
Asked about biometrics, he was not enthusiastic.  He said they can sometimes be captured passively (people leave fingerprints everywhere) and then used against you.  Unlike passwords or digital certificates, you can't revoke and reissue your biometric markers.

He said that the average time between penetration and detection is 200 days, so you need your teams ready to do research that far back.  He also mentioned that two-thirds of breaches are discovered when someone calls the company to tell it it was hacked.

What are the top things they learned?
  • Log retention - because of the long detection time, it is important to keep logs very far back.  This is very expensive, but worth it when the hack happens.
  • Conduct red team/blue team exercises
  • Get top management familiar with the risks and the issues up front, and include them in exercises; so that when something happens, they are more ready to deal with it, rather than having to educate them in the middle of a crisis.

Tuesday, March 21, 2017

SXSW Day 4 session 3: GAFA: The relentless rise of tech giants (and their inevitable fall)

By James Schad, WeGrow

Google, Amazon, Facebook and Apple – giants roaming the tech landscape (he didn’t mention Microsoft, which is sometimes also bundled with these four).

Google – controls 90% of the search market
Amazon – largest online retailer, with 43% of online retail (I’m guessing in the US); the second biggest after it has 4%
Facebook – 1.9 Billion users
Apple – 91% profit share of the smartphone market (despite having far less than 50% of the market share); most valuable company in the world

Despite being such giants, there is nothing inevitable about their continued dominance.  Myspace was once thought to be an unassailable social media platform.
Apple has a loyal following which should help it survive even if it falls on hard times (as has happened in the past)
Google and Facebook are more precarious, even though they are, in essence, a digital duopoly.  Both rely very heavily on advertising: Google gets 86% of its revenue from advertising, while Facebook gets 97%.
Google:
Other players are beginning to get into Google and Facebook’s game: in search, Amazon is becoming a major competitor to Google – a lot of retail-related search is going to it, and with the Amazon Echo it will be moving into other search as well.  By 2020 it is projected that 50% of search will be voice or visual, where it is not clear how you add advertisements.  When the web is no longer an entry point, Google loses its gateway.
Google’s next major income generator is video, where it is increasingly competing with Facebook (in-feed video), Twitch (the specialty gamer domain), and OTT content providers.
Facebook:
Facebook is seeing ad fatigue – more users are ignoring ads, and Facebook needs to take stronger measures to push them at its users (for example auto-play ads, which users dislike).  Also, there is growing pressure from advertisers to open up Facebook’s metrics, so there can be more transparency around the value they are getting.
There are also a lot of privacy concerns which may cause a backlash.  For example, when Facebook acquired WhatsApp, it said it was technically impossible to link account data from the two services, but then went ahead and did it anyway, causing the European Commission to investigate.  In fact there are many investigations underway against both Facebook and Google in the EU, around both privacy and antitrust concerns.

How are these companies diversifying away from their core revenue sources?
Purchasing: Google, Facebook, Amazon and Apple have spent $130 billion purchasing other companies – many of them for innovation (patents).
Content: All four are investing heavily in content and content services (YouTube Red, Amazon Prime Video and Music, Apple Music and the iTunes Store, Facebook's content strategy)
Subscription services – Amazon Prime (which is now in 50% of US households, and 70% of US households with annual income of $112K and up!), YouTube Red and Apple Music are all content subscriptions (some more successful than others)

But is the investment paying off?  Not clear.  Alphabet has invested $46 billion in various projects, which have so far lost it $6 billion.  There are a lot of “moonshot” projects, so some may return the investment in the long run, but so far it has not found a replacement for search as a monetization platform.
Facebook, on the other hand, is trying to copy other models: Facebook Marketplace is its version of Craigslist; Facebook Jobs is its version of LinkedIn; Facebook Workplace is its version of Slack; and with Instagram it is clearly just copying whatever Snapchat is doing.  In addition, it is investing heavily in VR, but again, this is a domain which has not yet paid off.
Apple’s diversification strategy is less clear.  In contrast to Facebook, Apple seems to be focusing on AR, not VR – rumored to arrive as soon as the iPhone 8.  Its attempt to branch into wearables with the Apple Watch is not generating excitement, and Apple Music has seen slow adoption, far behind the other music streaming services.  Apple is rumored to be working on a TV service (it may just buy a major US TV provider; it has enough money), and for some time it was rumored to be working on a car.  None of these are near-term projects.  However, Apple has a unique brand loyalty which should sustain it even if it drops from dominance (as has happened in the past).

Amazon stands out from the others in terms of diversification: it generated $10 billion in revenue in 2016 from Amazon Web Services, and it is rolling out more services.  It is still very strong in retail, offering more choice at lower prices.  It has its own consumer product lines (AmazonBasics branded goods, and it's rumored to be launching a women's undergarment fashion brand) and is using its massive data collection to undercut the market.  It has had huge success with the Amazon Dash buttons and is constantly innovating in retail: the new Amazon Go store, the Amazon flying-warehouse patent, drone delivery and so on.
Amazon’s flying warehouse: a warehouse floating in the air from which drones fly down to the city to deliver packages.



Given all of these trends, the lecturer predicts Amazon will become the biggest company in the world and has the best chance of the four to survive for the long run.

I asked about Alibaba, and he acknowledged it is a potential competitor to Amazon, but said it hasn't caught on outside of China (I'm not sure he's up to date on his data on that).

2017-04-09 Update:
In just the last couple of weeks, Google has been feeling the pinch of advertisers pulling their ads from YouTube because the ads were shown near what's called "inappropriate" material.  This is proving to be quite a challenge for Google.  It won't topple them, of course, but it's clearly not a good sign.

Saturday, March 18, 2017

SXSW Day 4 session 2: Chaos Monkeys: the threat of AI and automation

By Antonio Garcia Martinez
The session was originally titled "Chaos Monkeys: A Silicon Valley adventure", and was supposed to be about the book of that name written by the lecturer, about his experience in Silicon Valley.  Instead, he decided he was tired of telling that story and wanted to discuss something he felt was much more important: the future of society in the face of AI and automation.
Chaos Monkey is a software tool written by Netflix which randomly shuts down servers and generally wreaks havoc in their systems, so they can see how well those systems react to real problems.  The tech industry, said the speaker, is the chaos monkey of the world; it throws things out of whack and lets the world deal with it as best it can.
He talked a little about his background, starting off in the financial industry (prompted to join it by reading "Liar's Poker" by Michael Lewis), and eventually winding his way into Facebook, where he experienced the type of insanity that can happen when very young people run a company drenched in money.  He also talked about his family history – his grandparents fled Spain because of Franco, to go to Cuba; his parents fled Cuba because of Castro, to the US.  He joked that his grandparents fled Europe because of fascism, his parents fled Cuba because of Communism, and now he'll be fleeing the US because of capitalism.  After that he moved on to the topic of automation taking over.

He started by talking about trucking, which everyone agrees is one of the first places where driver automation will take hold.  Commercial driving is the most common job in 20 US states, and the last well-paying job (~$73k salary) available without higher education.  There are 3.5 million truckers, and at least half of them will lose their jobs in the coming 10 years or so.
Automation, he said, is the triumph of capital over labor, with labor becoming obsolete and powerless.  As automation advances, more and more people are put out of a job, and the strength of labor is decreased.  Simultaneously, automation can reduce the costs of goods and services substantially.  Still, if you have no job and no income, even cheap goods are hard to come by.
He quoted an article by Peter Frase, which described four possible futures, given the question of abundance vs. scarcity, and whether the society is equitable or hierarchical (full article here: https://www.jacobinmag.com/2011/12/four-futures/).

                 Equality       Hierarchy
Abundance        Communism      Rentism
Scarcity         Socialism      Exterminism
The first is a combination of abundance, mixed with equality, which the article describes as communism and the target utopian state of humanity.  He didn’t get into that part of it.
If we have an abundance of resources and goods, which the technological age may provide (3D printers, automation of production reducing its price to near zero), then one way to preserve a hierarchy of power and imbalance would be to license the ability to use the technology, maintaining an artificial scarcity of it – the "rentism" future.  This is similar to DRM-ing software, which can be copied infinitely at almost no cost, and maintaining digital rights and copy-protection laws.  You then "rent" the permission to get any sort of product or service you want, and you still have a money-based hierarchy of those who have and those who don't, even though technically there is the ability to provide everyone with anything.
It is possible that even with automation we will run into scarcity of resources (there are only so many minerals and so much physical material available in the world).  If we have scarcity of resources but equality is maintained among people, you have socialism, which again is a political system he did not get into.
The last future, exterminism, is one where there is resource scarcity, but powerful ruling elites still preserve a hierarchy of people.  In this case, there are several ways to deal with the people who don't have jobs but need resources:
A system of universal living wage could be used to ensure everyone has access to basic living.  This would divide society into two – those with jobs or resources, who can live at a higher quality of life, and those without, who subsist on basic income.  Martinez says this type of situation has happened before – at some points in the Roman Empire's history, 30% of people were getting their daily bread from the government.  Today as well, the welfare system is propping up a very large number of people who do not have or cannot get jobs – if you factor in people who get disability benefits, you get to over 20% in some states.  He notes that not everyone getting disability benefits actually has a disability; some of them just can't get jobs, so the sum of unemployment and disability recipients represents the true size of the non-working population.  However, these are never stable situations, as the poor rarely put up with the disparity in the long term.
Another option is for the rich/powerful to physically move away to a place where the others can’t reach – there are a number of science fiction stories detailing this type of scenario (Elysium as an example).
The direst dystopian future is one where the masses are just killed off; again, there are science fiction stories around these types of scenarios (Logan's Run, In Time).
In the past, those who were pushed into poverty would revolt; he gave the example of the Battle of Blair Mountain, in which disenfranchised coal miners in West Virginia staged the largest uprising in US history since the Civil War (and most revolutions around the world are rooted in inequity).  In the Blair Mountain case, the miners lost after the US Army intervened in favor of the mine owners.  In a future where we are creating robotic soldiers far more powerful than any humans who might engage them, such an uprising becomes even more impossible.  As an example, he showed the work being done now by Boston Dynamics, who are creating robots for military applications:



The image showed a man trying to push the robot over, to demonstrate how good its balance is.  In a future conflict, this will literally be the type of match-up between humans and robots.


He didn't really have a positive ending, so to avoid closing on a depressing note, he showed a picture of his newborn baby.

SXSW Day 4 session 1: Five factors influencing the future of UX design

Bill Akins, Rockfish Digital; Diane Edgeworth, Lululemon; Almaz Nanjappa, Momentus Software; Ed Valdez, Momentus Software

This panel actually talked about the impact of technology on retail, and user experience in a retail setting (which was not what I initially understood the session to be).

The five factors are: Simplicity, Ubiquity, Mobility, Technology and Connectivity.

Retail is still strong – over 90% of sales still happen in physical retail (source?).  Technology is augmenting retail – over 10,000 Pepper robots are already in use in Japan.
A sample project at 7-Eleven added weather data and past purchases to optimize the app experience: if the weather is cold, offer coupons or advertisements for hot drinks rather than cold drinks.  Smart displays can also enhance the retail experience.
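The weather-plus-history logic described above can be sketched in a few lines.  The 15 °C threshold and the product names here are illustrative guesses, not details of the actual 7-Eleven project:

```python
# Hypothetical sketch of weather-aware offer selection.  The threshold
# and categories are invented for illustration.

def pick_offers(temperature_c, past_purchases):
    """Choose coupon categories based on weather and purchase history."""
    # Cold weather -> promote hot drinks; warm weather -> cold drinks.
    category = "hot drinks" if temperature_c < 15 else "cold drinks"
    offers = [f"coupon: {category}"]
    # Re-promote items the customer has bought before.
    for item in past_purchases:
        offers.append(f"ad: {item}")
    return offers

print(pick_offers(5, ["coffee"]))   # a cold day puts the hot-drink coupon first
```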
Another example: LensCrafters noticed a common problem with people trying on frames – when they take their glasses off to try a frame, they can't see their face well because the frame has no lenses.  So they enhanced the mirror to take your picture from several angles and then display what you look like wearing the frames from several directions.
Retailers are looking to create mixed-reality scenarios to pull people into the stores.  They are also experimenting with things like ultrahaptics – an array of ultrasonic speakers that project sound which gives tactile feedback, so you can touch and feel virtual controls.
Smart carts, as in the Amazon test store, let you ring up items when you put them in the cart, and then checkout can happen automatically.
Another technology being tested is overlaying visual images on physical objects (a video was shown).
It’s not clear how VR can help in retail; it seems to be more of a gimmick.  It takes you out of the retail experience.  Augmented reality keeps you in the experience but enhances it.

Challenges of user experience enhancement in retail:

  • Updating and scaling: a lot of work to update and maintain tech.  For example, touch screens can get really sticky and dirty, and need to be constantly cleaned.
  • New technology is best adopted in a concept store first, to test it out; only if it works there is it worthwhile to roll it out across more stores.


Thursday, March 16, 2017

SXSW Day 3 session 4: The future of conversational UI

Hector Ouilhet, lead designer for Google Search and Assist products

Coming on the heels of the previous session, which talked about the change from search to assist and the potential impact it may have on companies that provide services, this was a lecture on the benefits of the personal assistant by someone at Google working on it.

He started off discussing the work he did planning his trip to SXSW, including finding travel, hotel, etc.  He said he spent about 10 hours planning it out.  Then he played a brief voice interaction showing how it would work with an assistant, which lasted about a minute.  He emphasized the benefit of the assistant as the reduction of time between identifying what I want and getting it from technology (a gap he called "friction").
He reviewed the history of getting things done via the internet.  First, there were portals, which indexed content into categories, like the yellow pages (the model that was familiar then).  Then search arrived and shifted the paradigm, removing the need to manually categorize the internet; you stated what you wanted and were given a number of best matches to choose from.  Then came the feed, which identifies relevant information and pushes it to you based on identifications or subscriptions you make (e.g. Facebook, Twitter).  This was another paradigm shift because it tried to anticipate what you want and deliver it to you up front.  Chat apps are the next evolutionary step, where you have a conversation-like interface for finding information.  Personal assistants are a form of that using natural voice speech.
Where are we heading?
1. Smart everything - every physical object will be, in some way, smart.
2. Multi-user devices - objects will change from being personal (like phones) to shared (like smart appliances).  Interaction will be with any user of the device, not just its owner.
The simplest way to communicate with all of these devices would be to use voice communication; and you want a single interface as the gateway to the devices, so you don't have to build the communication intelligence into each one separately, or talk to each one with its own protocol separately.

Moving beyond the actual message spoken - additional development will include the voice cadence, tone and expression to imbue even more understanding of the intent of the speaker (just like humans do).

What are some of the challenges we will see with this interface?

  • Intuitiveness of interface - smart devices add layers of capabilities to devices that used to have a very clear purpose.  This can cause cognitive dissonance and difficulty understanding the device.
  • Conversational interfaces lack the visual immediate feedback that regular devices provide, which helps understand where the problem is.  For example, when you turn a light switch on and the light doesn't turn on, you know the problem is with the light bulb or there's a power outage.  However if you say "turn on" to the light and it doesn't turn on, is the problem with the bulb, or the interface, or with understanding the command, or hearing it, or any other type of software issue?  Hard to track the problem down.
  • Technical problems of using voice - learning accents, cultural language differences, speech impediments, etc.
  • Discoverability - how do you know what the device can do?  When you have physical switches, you can see what can be done.  With a voice interface there's no menu or visible cue to tell you what the device can do.
  • Human speech frequently assumes the listener understands context or visual cues, which the device might be missing.  For example, "turn on that light" - which light?  The assistant can't see what you're pointing at.
  • Audio is linear and non-persistent, as compared to visual interfaces, which can be non-linear and persistent.  For example, if there is a list of options, you have to wait to hear them all to be able to know which one to use; in a menu you get them all at once and can skip the first three to get to the fourth.
What are some of the opportunities with voice interfaces?
  • Accessibility to all - no need to be tech savvy to use technology, everyone can do it regardless of education (although I would say that even though this is true for voice, it needs to be and can be true for any interface).
  • Device ubiquity - no need to carry a device with you at all times to interface with the world; all smart devices can be a portal for interfacing.
What needs to be done to get to this world:
  • Technology needs to adapt to us, not the other way around
  • Need to move beyond simple input/output interfaces
  • Need to design interfaces for speech
  • Need to move from evolving products by adding features to evolving them by creating stories of how they are used (again, this needs to happen regardless of voice interfaces)
  • Need to create a persona for gluing together the different interfaces into one coherent interaction point and giving an experience across multiple devices
  • Teach the technology to understand the context of our speech - we understand what it is, technology needs to as well.  The tools we need for this are only just now being developed.
  • Need to understand that localization is not just language, it's the whole cultural frame of reference.
  • Need to strive towards conversations that are multi-modal, not just audio.



Wednesday, March 15, 2017

SXSW Day 3 session 3: AI Replaces search: the future of customer acquisition

A panel discussion with Amanda Richardson (Hotel Tonight), Brian Witlin (Yummly), Charles Jolley (Ozlo) and Rangan Majumder (Microsoft).
The panel discussed how AI, specifically through voice personal assistants (e.g. Siri, Cortana, Google Now, etc.), will be supplanting search.

With search, you get back a list of results from which you can choose.  With a voice personal assistant, you typically get back only one result, and that's determined directly by the assistant operator (Apple, Google, and such).  How do companies make sure they are selected to be the one result?  The assistant provides the assistant operator power over the service provider, especially aggregation service providers (sites that aggregate hotel services, like Hotel tonight for example).  The operator can select whoever they want, making a deal with one provider to the exclusion of all others.  A more likely scenario would be real-time bidding: if I say to the assistant to find me a hotel in Austin, the operator can hold a real time bid among all hotel service providers, and the one that wins the bid is the one that gets used (like in advertising).
The problem is that even if you win the bid, it doesn't mean the consumer is exposed to the service provider - they may only be exposed to the end product.  Taking the hotel example above, even if HotelTonight wins the bid and provides an offer for the downtown Marriott, the assistant will communicate which hotel it selected, not that it got it through HotelTonight.  That means HotelTonight loses the customer relationship, which today is one of its key assets.  So these companies will need to shift their monetization strategies to being anonymous back-end services.
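The bidding scenario above can be sketched as a toy auction.  The provider names, bid values, and the `Provider` class are invented for illustration; only the logic (highest bid wins, and the user sees only the end result) comes from the panel:

```python
# Toy sketch of the real-time bidding scenario: the assistant operator
# collects a bid and a result from each provider and surfaces only the
# winning result.  Providers, bids and hotels are made up.

class Provider:
    def __init__(self, name, bid, hotel):
        self.name, self.bid, self.hotel = name, bid, hotel

def run_auction(providers):
    """Highest bid wins, as in advertising auctions."""
    winner = max(providers, key=lambda p: p.bid)
    return winner.name, winner.hotel

providers = [Provider("HotelTonight", 2.50, "Marriott downtown"),
             Provider("OtherAggregator", 1.75, "Hilton airport")]
winner, hotel = run_auction(providers)
print(hotel)   # the assistant tells the user only this, never the winner's name
```

Note that only `hotel` ever reaches the consumer, which is exactly why the winning aggregator loses the customer relationship.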

Strangely enough, the representatives on the panel didn't seem to be very aware of the implication a move to digital assistants would have on their companies, or didn't seem to mind.  I asked the panel directly whether they are not concerned the digital assistant will turn them into a backend database service, and their potential to lose all customer relations.  Amanda Richardson challenged me back asking why they would need to own the customer relations; they get paid per hotel booked, so they just want to get bookings.  I think she's missing the bigger picture, but perhaps there's something I myself am missing.  At any rate, the digital assistant seems to me to be spelling a bleak future for these types of companies.

Tuesday, March 14, 2017

SXSW Day 3 session 2: An internet for and by the people

Vint Cerf, one of the inventors of the internet, was interviewed about his promotion of an "internet for the people by the people", but in reality was asked very little about it.  Because he is such a key figure in the development of the internet, he was asked other questions instead.


The internet for the people initiative, as he described it, is an attempt to make a people-centered internet focusing on the value of the internet to individual people, rather than organizations.  This involves making the information on it more local, in local languages and frames of reference.  He talked about providing internet access in places where there is none, and making it sustainable - he said you can't just drop internet infrastructure and leave; you need training to help people keep that infrastructure up and running.  He said penetration of internet technology has been made more difficult as a result of the Arab Spring: governments in internet-poor areas (which are frequently authoritarian) are reluctant to build internet infrastructure because of concerns it will cause the population to organize and ask for more democracy and rights.

Vint talked about how our technology has outpaced our thinking about its social consequences.  He talked about how this impacts privacy, for example.  In the past, when people lived in small towns, there was no privacy - everyone knew what everyone did.  When we moved to cities, people felt anonymous in the mass of other people around them, and the ability to spread information via rumor and small-town social chains disappeared.  As such, people gained a sense of privacy.  However, modern social media has brought back the loss of privacy we had escaped.  Also, you can now lose privacy not through something you do, but through something other people do.  For example, Facebook tags faces in photos uploaded to it, and any unwitting people in the background get tagged as well; without even being aware of it, a record of where they were and when can be captured.

Discussing the Internet of Things, he raised the concern it would be used to create attack networks against the web (which has already happened).  Companies are racing to produce products but not thinking enough about control and authority over these devices.

He was asked about IPv6, and said at the time they defined the protocol for the internet they could not conceive of ever needing more than 4 billion addresses (as that was pretty much the entire population of the earth).  Now it's clear that's not enough, and he hopes IoT will accelerate IPv6 adoption.


Asked about possible impacts of government funding cuts, he said he's concerned it would primarily hurt what he calls "curiosity projects", which are focused on long term research that doesn't have immediate benefit, but which often has long term benefit.

Asked about blockchain, he said he doesn't feel it scales very well, and he's not persuaded it is the only way to achieve a distributed ledger.  He's also worried about the software surrounding it, which may be controlled by a very small number of organizations.

On keeping the internet open and neutral, he said he feels there needs to be a legal framework to help keep net neutrality.  He also said he's hoping to see some social maturity to make the internet a safer place than it is now.

Asked about the walled-garden effect of companies like Facebook, he noted it was attempted in the past - AOL tried and failed to create a walled garden; he's similarly hoping users will push to break through today's walled gardens.

He said, "8-9 year old kids today use the internet; I didn't get to use the internet until I was 28 years old, and I had to invent it first!"

He also told an anecdote about his wife, who had lost her hearing at a young age and had implants that send signals to her nerves, simulating the activity her inner ear would normally perform.  This returned her hearing after many years of deafness.  One interesting ability she gained, he said, comes from a microphone unit she carries which transmits sounds to her implants.  The unit has a range of up to 15 meters, so she can leave it somewhere, walk away, and still hear what's going on where the microphone is.  He says that at a restaurant she sometimes leaves the device on the table when she goes to the bathroom, and he has to warn everyone not to talk about her while she's away, because she can hear every word...

SXSW Day 3, session 1: What is a smart city: technologies and challenges

A panel lecture on smart-city initiatives taken in Austin, by Catherine Crago Blanton (head of strategic initiatives and resource development at the Housing Authority of the City of Austin), Craig Watkins (professor at the University of Texas) and Sherri Greenberg (professor at the LBJ School of Public Affairs).  The panel discussed initiatives taken in the public housing and education sectors in Austin.

Public housing - two efforts directly around public housing: first, applications that help connect people who need public housing with landlords renting out properties, along with digitizing the bureaucracy involved in applying for and using public housing vouchers to help people with the process.  The second is a project to provide every public housing resident with internet access, computer literacy and some computer system - usually inexpensive or donated low-power Linux boxes preloaded with educational content.

Energy efficiency - provided smart thermostats in public housing to save energy.  An unexpected added benefit of these is that since they can be controlled by a smart phone, bed-ridden residents could control the temperature right from their bed without having to call in their caretakers just to adjust the thermostat.

Mobility and transportation - many of the advances touted in the area of transportation, such as ride-hailing applications, do not help poor people who do not have credit cards; so a lot of focus was spent on studying public transportation, mapping routes and understanding the transportation costs of activities.  How much does transportation cost people who live in public housing for activities such as buying milk, getting to work, or paying bills?  Using data collected from this research, the city could optimize its public transportation routes to make them more accessible and less expensive, and to open up new opportunities for people in public housing.

Education - one of the issues identified is that the infusion of technology into classrooms had not been translated into a proper curriculum that enables students to learn how to use it, and there was a lack of teacher training as well, so the available technology was not used as much as it could have been.  Various programs were set up to improve this, including summer training for teachers and innovation labs in schools.

To my question of whether they have quantifiable results showing the benefits of all these measures, the answer was anecdotal evidence of improvement (stories of individuals whose lives got better); so I'm guessing they don't have data on overall, systemic improvements.

SXSW Day 2, session 5: Homo Sapiens 2.0: Genetic enhancement and the future of humanity

By Jamie Metzl

The lecturer feels we are coming to the moment where humans turn the evolutionary corner and begin directing their own evolutionary path - biology is being turned into Information Technology.

Take a baby from 1000 years in the past and bring it to today, it would be indistinguishable from a modern day baby; but take a baby from 1000 years in the future and bring it to today, it would grow up to be a super human being with built in immunities and genetic capabilities that may not even exist today in any species.
The technologies that are required to bring us to this level of enhancements exist today:

IVF and genetic screening: PGD/PGS is a process of screening fertilized eggs prior to implantation.  On day 5 after fertilization you can take 2 cells from each egg and sequence them for single-gene diseases, skin color, eye color and hair color.  With time, as genome sequencing gets cheaper, more data will be available and more features will be screenable: height, intelligence, etc.  Research indicates that between 50-80% of people's traits can be attributed to genetics, and all of those will eventually be screenable.  One side effect of such screening ability will be the reduction of sex for procreation - compared to the screening capabilities of IVF, procreation by sex is just rolling the dice with your child's future.
Creating stem cells from adult cells: there have already been first successes in taking regular adult cells (such as blood cells) and converting them into stem cells, from which you can make any type of cell.  So it will eventually be possible to take 1,000 blood cells, convert them into stem cells, and convert those into egg cells, which can be fertilized.  If today a doctor can retrieve ~10 eggs from a woman, this technique would increase the number of fertilizable eggs by two orders of magnitude, allowing much better screening and a far wider range of options to choose from than is possible today.
CRISPR: while embryo selection will be more important for accelerated evolution than CRISPR, the technology that allows editing the human genome can still be used to edit non-viable embryos.  The first application in humans - fixing the anomaly that causes sickle cell disease - will be available in a few years.

Beyond enhancements in procreation, there will be a strong medical impact to genetic sequencing: the foundation for medical treatment will be your personal sequenced genome.

Once the door is opened to these types of capabilities, additional pressures will come into play, such as cultural pressures and competitive pressures between countries.  Some cultures are more open to these types of changes than others - Chinese culture, for example.  There will also be financial impact - insurance companies will want to eliminate diseases that cost them a lot of money to treat, so they will apply their own pressure in this field.
Ethical problems will also appear, as the science in this area moves exponentially fast while regulation moves very slowly.  In addition, on the popular level, people don't understand genetic manipulation and are inherently fearful of it.  For example, genetically modified crops, which were designed to bring substantial value to people, received a very negative reaction, primarily based on fear (as per the lecturer).
Another concern as we advance in research of genetic features is the possible making of bad inferences or links between genetic data and traits or race.

Will we be creating two classes of citizens - those who can afford and have access to these types of biological enhancements, and those who don't (as depicted in the movie Gattaca)?  Possibly, but that's not different from today - even now some people have access to better resources than others, without genetic enhancements; so whatever we do to address current inequality can be applied there as well.

What is the impact on life expectancy?  There's no inevitability to a particular lifespan; it should be hackable as well.  The lecturer posits that the first person to live to be 150 is already alive today.

Monday, March 13, 2017

SXSW Day 2, session 4: Intelligent machines will eat their young (and us): separating fact from fiction in AI

By Adam Porter-Price, Emma Kinnucan and JD Dulny from Booz Allen Hamilton.

The discussion was around the dangers and risks of AI.  They started off by stating what they would not discuss, which included:

  • Ethics questions in using AI
  • Financial impact of AI (i.e. job loss)
  • Bad actors using AI
All of these topics, they said, are covered by others (quite true - there are a lot of sessions here at SXSW on those three).  Instead, they wanted to focus on the question of AI turning against us.

They started off by giving a brief review of the history of AI, starting by defining some common terms in the industry:
Basic AI - the first attempt at computerized intelligence, represented by expert systems, was an attempt to program intelligence into code by telling the machine what may happen and indicating the path it should follow for each eventuality (essentially, a bunch of if-then statements).
Machine learning is a more advanced form of AI, where you don't program the meaning of specific things you want to teach, but rather teach by example.  So for example, instead of programming a computer how to identify an image of a cat, you tag a million photos of a cat as "cat" and let it find the similarities in the images itself.
Deep learning is where more cognitive capabilities emerge.  Here it isn't even learning by example; you just give the computer a goal and let it figure out how to solve it.  For example, instead of programming it how to win at a video game (basic AI), or showing it lots of examples of play and letting it learn (machine learning), you just teach it how to move the game's controls and tell it to maximize the score.  It doesn't even know the meaning of the game, but it can learn how to achieve the goal through repeated trial, error and learning sessions.
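The "just maximize the score" approach described above is essentially reinforcement learning.  As a minimal sketch, here is tabular Q-learning on a toy 5-cell track rather than a real video game; the track, rewards and hyperparameters are invented for illustration:

```python
# A tiny "maximize the score" agent: tabular Q-learning on a 5-cell track.
# The agent is given only two controls (left/right) and a score signal;
# it is never told that the goal is to reach the rightmost cell.
import random

random.seed(0)
N_STATES = 5                 # positions 0..4; scoring happens at position 4
ACTIONS = [-1, +1]           # move left / move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0   # the "score"
    return nxt, reward

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit the best known action, sometimes explore
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r = step(s, a)
        best_next = max(Q[(nxt, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = nxt

# The learned greedy policy moves right (+1) from every non-terminal cell.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)
```

The agent never sees the rule "go right"; it discovers it purely from repeated trial, error and score feedback, which is the point the speakers were making.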



The advancement of AI had gone relatively slowly, but exploded in the past decade thanks to:
  • An explosion of data provided by the internet
  • Growing computational power
  • New technological advances that allow getting results using fewer data points
Still, for all its advancements, AI remains a relatively narrow capability with certain limitations.  This stage is called Artificial Narrow Intelligence, with two additional stages yet to be achieved:

  • Artificial General Intelligence - an intelligence that can reason in a general way across all tasks as good as or better than humans
  • Artificial Super-Intelligence - an intelligence that can reason in a general way exponentially better than humans.
There is a lot of debate about when artificial super-intelligence will be reached, with different experts putting it anywhere in the range of 2045 to 2100.

As such, they discount the common public fear of AI, where sentient AI plots to destroy humanity.  This concept, frequently portrayed in movies such as Terminator or The Matrix, is not considered by anyone in the field to be a true concern, primarily because we are so far away from actually achieving machine sentience.

The problem they indicate we should be worried about is not AI becoming evil and destroying us, but rather its goals diverging from human goals.  AIs are best at optimizing their objectives, so it's extremely important to set objectives properly, or an AI may take unintended measures to achieve them.  An AI may cause unintended consequences in pursuing its goal, or, if the goal has multiple sub-goals, take them out of order to problematic effect.

An example of AI thinking: Clean the tub!

It is hard to define rules for the AI around how to achieve its goals, because rules around safety, morality, and ethics are not universal, and what might be OK in one culture is not OK in another.  Also, rules have exceptions which are hard to define, and rules can have unexpected loopholes which AIs may be very good at exploiting, as they subscribe to the literal meanings of rules rather than to intentions.
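The loophole problem can be illustrated with a toy optimizer of my own (nothing from the talk): score an agent on the literal metric "messes cleaned", with no rule against making messes, and the literal metric ends up rewarding manufactured work.

```python
# Toy "literal objective" illustration (my own, not from the talk): the
# agent is scored "+1 per mess cleaned", and no rule forbids making messes.

def score(actions):
    """Literal objective: count messes cleaned. Start with 3 real messes."""
    messes, cleaned = 3, 0
    for action in actions:
        if action == "clean" and messes > 0:
            messes -= 1
            cleaned += 1
        elif action == "knock_over":  # the loophole: manufacture a new mess
            messes += 1
    return cleaned

honest = ["clean", "clean", "clean"]   # cleans the 3 real messes: score 3
gaming = ["knock_over", "clean"] * 5   # endless manufactured work: score 5
```

The literal metric prefers the gaming strategy (5 > 3) even though the honest one is what we wanted - exactly the divergence between stated objective and intent described above.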

Once AIs are prevalent, one thought is that we can just "turn them off" if they behave unexpectedly.  This will be hard to do, because:

  1. As AIs take on more human-like qualities, we will begin to get emotional about them and will feel bad about "killing" them.
  2. AI will become so embedded in our day-to-day lives that we may become over-reliant on it and will have a hard time doing without it.
  3. There is a global race to develop AI, and some are less concerned about the possibility of it running out of control than others (or are willing to take the risk).
So our goals for AI need to be defined along human values, which will require additional thought and research into the following areas:
  1. How do we define the right objectives for an AI, and teach it to pursue them in line with human values?  Can we even define such values universally?  How do we avoid unintended consequences when the AI pursues the goal?  How do we prevent the AI from finding loopholes and shortcuts to the goal that are non-beneficial to us?
  2. Oversight - how do we monitor that the AI is pursuing the goals in a way consistent with human values?  Human supervision is not realistic at the rate AI learns and adapts; some sort of partial supervision mechanism will need to be put in place.
  3. How do we let the machines learn in a way that doesn't harm humans?  For example, you could teach an AI how to fly a drone by teaching it controls of a drone and letting it loose.  It would eventually learn how to control it properly, but it would also crash into a lot of things and people as it learns.  Can we teach AI in a simulated or virtual environment?
These questions have been gaining more attention, from famous thinkers (Bill Gates, Elon Musk, Stephen Hawking), from companies involved in AI development, and from governments.  The guidelines for businesses and governments experimenting with AI should be:
  1. Perform extensive testing of AI - use AI to try to cause the AI to fail (red-team testing); run the AI on inert data while humans still operate on the real data, to see if there is any divergence.
  2. Institute boundaries around behaviors that are universally unacceptable
  3. Create governance mechanisms for overseeing use of AI tools in any application.

Sunday, March 12, 2017

SXSW Day 2, session 3: Catch me if you can: Overview of moving target defence techniques for preventative cybersecurity

By Chris Christou of Booz Allen Hamilton

This session sounded very interesting, but unfortunately was presented by someone who didn't seem very versed in the details.  For every question I asked, he went to his tablet to search for the answer, and eventually I gave up trying to understand, figuring I may as well go to Wikipedia myself instead of having him intermediate it for me.  I haven't had a chance to do that quite yet, so this is a rather brief summary of the little I understood:

Most cyber defenses today are reactive.  But are there proactive measures we can take to improve our defenses against cyber threats?
The traditional cyber defense relies on four pillars - Prevention, Detection, Reaction, Recovery - but it primarily consists of building a moat around your assets.  The problem is that all the elements used to build the moat are based on the same underlying platform components: Linux, firewalls, etc.  Hackers know these components well, including all their weaknesses and vulnerabilities.  Furthermore, while the defenses are stationary, the attackers are a moving target - it's hard to anticipate where they might strike from.  And the way we map our network elements is static - IPs and DNS lookup tables present stationary targets.  How can you get around this structured-defense problem?

A new field of cyber defense is called moving target defense.  The idea is to constantly change the environment in some way, to deprive the attacker of the advantage of consistency.  Conceptually, this is like frequency hopping in communication: it forces the attackers to spend a lot more time and resources to find a pattern they can attack.

Four areas of research were discussed:

  1. Anti Return Oriented Programming (Anti-ROP), researched by IBM (in Haifa, so an Israeli idea; a similar solution is provided commercially by an Israeli company called Morphisec): One common attack technique exploits the fact that the code libraries used on targeted computers are frequently publicly available and can be studied.  Attackers use this knowledge to execute the code out of order, jumping around in memory to locations they know in advance will contain the instructions they want to use.  Anti-ROP mixes up and randomizes the order of the code in memory, so an attacker won't know where to find what they are looking for.  A nice little video explaining how this works can be found here.
  2. Network attribute randomization, researched by Sandia labs: As mentioned above, one of the weaknesses of a static network is that if attackers are able to gain even limited access to it, they can observe it over time and map its structure, looking for weak points.  Network attribute randomization constantly changes the network attributes, such that it's very hard to map it, and even if mapped the map very quickly becomes obsolete.  This makes attacks much harder, as you can't pinpoint the weak elements.  The paper Sandia labs published can be found here.
  3. Self-adaptive systems: this was presented more as a concept than a concrete solution.  A self-adaptive system uses some form of AI to constantly evaluate its own behavior, check whether it is accomplishing its goals, and change itself to adapt if not.  The general concept can be applied to cyber security, but no details were given as to how.
  4. Host Identity Protocol: This one I understood the least, but generally speaking: today, a client accessing a server looks up the server name in a DNS server and gets the server's IP.  Once it has the IP, it can easily find the server again.  Host Identity Protocol is a proposed enhancement to the Internet protocol stack that adds host identity as an additional layer intermediating between the namespace and the IP.  That is to say, the DNS returns a cryptographic host ID, and the client makes its calls to the host ID instead of to the IP.  This separates the transport layer from the internet layer.  To be honest, I'm not sure I understand how this helps, but you can read more about it here and here.  If you figure it out, let me know.  By the way, this is a proposal only; it's not implemented anywhere.
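To make the Anti-ROP idea from item 1 concrete, here's a toy sketch of my own (not IBM's or Morphisec's actual implementation; the gadget names are invented): model a code library as a list of instruction "gadgets" whose public layout an attacker has studied, then shuffle that layout at load time.

```python
import random

# Toy Anti-ROP sketch (my illustration, not a real implementation).
# PUBLIC_BUILD stands for a library whose layout attackers can study.
PUBLIC_BUILD = ["pop_rdi", "mov_rax", "syscall", "ret", "add_rsp"]

def load_randomized(gadgets, seed):
    """Shuffle the gadget order at load time, as Anti-ROP shuffles code."""
    layout = list(gadgets)
    random.Random(seed).shuffle(layout)
    return layout

# The attacker precomputes offsets against the publicly known build...
attacker_offsets = [PUBLIC_BUILD.index(g) for g in ("pop_rdi", "syscall")]

# ...but each load gets its own layout, so those precomputed offsets no
# longer reliably land on the intended gadgets.
memory = load_randomized(PUBLIC_BUILD, seed=2017)
fetched = [memory[i] for i in attacker_offsets]
```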
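Item 2's network attribute randomization can be sketched the same way (again my own toy, not Sandia's actual scheme; host names and addresses are invented): periodically re-deal host-to-IP assignments from a pool, so a map an attacker built in one epoch is stale in the next.

```python
import random

# Toy network-attribute randomization (my illustration, not Sandia's scheme).
HOSTS = ["web", "db", "auth"]
POOL = ["10.0.0.%d" % i for i in range(2, 20)]

def epoch_map(epoch):
    """Re-deal host->IP assignments each epoch - deterministic per epoch, so
    defenders can reproduce the mapping, while an attacker without the seed
    cannot predict it."""
    rng = random.Random(epoch)  # stand-in for a shared secret + epoch number
    return dict(zip(HOSTS, rng.sample(POOL, len(HOSTS))))

recon = epoch_map(1)   # attacker maps the network during epoch 1
later = epoch_map(2)   # by epoch 2, the assignments have rotated
```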
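And here is a toy version of the Host Identity Protocol idea from item 4, as best I understood it (heavily simplified, all names and addresses invented): the stable DNS name resolves to a cryptographic host identity, and a separate, changeable binding maps that identity to the current IP, so the IP can move without breaking what the client trusts.

```python
import hashlib

# Toy Host Identity Protocol sketch (heavily simplified; names invented).
# The host's stable identity derives from its public key, not its address.

def host_identity(public_key):
    return hashlib.sha256(public_key).hexdigest()[:16]

hi = host_identity(b"example-server-public-key")

dns = {"server.example.com": hi}   # name -> identity (stable)
locator = {hi: "203.0.113.5"}      # identity -> current IP (changeable)

def connect(name):
    """Resolve name -> identity -> current IP. Only the locator binding
    changes when the host moves; the identity the client trusts does not."""
    return locator[dns[name]]

before = connect("server.example.com")   # "203.0.113.5"
locator[hi] = "198.51.100.9"             # the host moves to a new address
after = connect("server.example.com")    # "198.51.100.9"
```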
So that's that.  Certainly an interesting topic, but unfortunately not well delivered.