Thursday, April 13, 2017

SXSW Day 7 session 2: Can you lie to an MRI?

Panel: Alain Dagher (Montreal Neurological Institute, McGill University), Cameron Craddock (Nathan S. Kline Institute for Psychiatric Research), Daniel Margulies (Max Planck Institute for Human Cognitive and Brain Sciences), Emily Finn (Yale University)

Shifting from a polygraph to an MRI – moving from trying to measure physiological signs of anxiety to trying to identify the specific cognitive responses a human exhibits when lying.  Can it be done?
fMRI brain mapping is based on the iron in blood cells: using magnetic pulses, you can measure blood flow and oxygenation in the brain.
There are several regions of the brain which are impacted by lying, but also by other things, so it’s not easy to tell what triggered a specific activity.  However, it may be possible to identify patterns in the brain that are indicative of something.
In a nutshell, how would you use an MRI to detect lies?

  • Put a person in an MRI, and direct them to answer questions, truthfully or falsely (knowing in advance which each reply will be)
  • Identify the patterns that emerge, and train software to look for them.
  • Put the person back in and have the software look for the patterns it was trained on.
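A minimal sketch of the three steps above, with synthetic vectors standing in for scans and a simple nearest-pattern rule standing in for the trained software (this is a toy illustration, not a real fMRI pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 1000

# Synthetic "scans": activation vectors drawn around two different mean
# patterns, one for truthful answers and one for lies (pure toy data).
truth_mean = rng.normal(0, 1, n_voxels)
lie_mean = truth_mean + rng.normal(0, 0.5, n_voxels)

def scan(mean):
    return mean + rng.normal(0, 1, n_voxels)  # one noisy scan

# Step 1: collect scans with known labels (the subject is told when to lie).
train_truth = [scan(truth_mean) for _ in range(20)]
train_lie = [scan(lie_mean) for _ in range(20)]

# Step 2: "train the software" - here, just store the average pattern per label.
centroids = {"truth": np.mean(train_truth, axis=0),
             "lie": np.mean(train_lie, axis=0)}

def classify(new_scan):
    # Step 3: label a new scan by whichever learned pattern it is nearest to.
    return min(centroids, key=lambda k: np.linalg.norm(new_scan - centroids[k]))
```

Real classifiers for this kind of data are far more sophisticated, but the train-then-match loop is the same shape.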

Clearly, as a lie detector it’s problematic – it only works if the person is consenting and cooperating.  Plus results are individualized.  Also, lying can take multiple forms, from trying to actively deceive to trying to hide a truth.
Can meaningful information be extracted from a single person, rather than averaging over multiple people?
You could map the connections of different nodes, and create a correlation matrix (image), or a connectivity profile.  Can a connectivity profile act as a “fingerprint” for an individual?
Research was conducted on 156 people, mapping matrices for multiple activities, to see if you could match connectivity profiles to a person.  So, given the connectivity profiles people generate, could you tell from a new profile who it belongs to?  The results showed a 70% success rate in matching profiles to people.
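The matching procedure can be sketched in code.  Everything below is synthetic (random time series standing in for scans), but the mechanics follow the description above: build a correlation matrix per scan, then attribute an unknown profile to the stored profile it correlates with most strongly.

```python
import numpy as np

rng = np.random.default_rng(1)
n_people, n_regions, n_timepoints = 10, 20, 200

def profile(signature, noise=0.3):
    # One scan: the person's stable signature time series plus session noise,
    # reduced to a connectivity profile (flattened correlation matrix).
    ts = signature + rng.normal(0, noise, signature.shape)
    corr = np.corrcoef(ts)                       # region-by-region correlations
    return corr[np.triu_indices(n_regions, k=1)]

signatures = [rng.normal(0, 1, (n_regions, n_timepoints)) for _ in range(n_people)]
session1 = [profile(s) for s in signatures]  # known, labeled profiles
session2 = [profile(s) for s in signatures]  # later scans, treated as anonymous

def identify(unknown, database):
    # The "fingerprint" match: pick the person whose stored profile
    # correlates best with the unknown one.
    return int(np.argmax([np.corrcoef(unknown, known)[0, 1] for known in database]))

matches = sum(identify(p, session1) == i for i, p in enumerate(session2))
```

With clean synthetic data the match rate is near perfect; real brains and real scanners are what bring it down to figures like the 70% reported.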
In the future, the hope is to be able to understand what the patterns tell about us.  So far, the research was able to predict, at a low level of correlation, how a certain pattern is an indicator of fluid intelligence (the general ability to think abstractly, reason, identify patterns, solve problems, and discern relationships).  However, this is still far from conclusive and there’s a long way to go yet.
Another study was conducted to test attention.  Subjects lay in an MRI and were shown images in rapid succession.  They had one or two seconds to click a button if the image was of a certain type (say, a house) and _not_ click if it was of a different type (say, a car).  They did this for 40 minutes (!!!), and connectivity maps were later generated from their brain scans.  The researchers found two types of networks in the brain: a high-attention network (from maps of people who did well on the test) and a low-attention network (from maps of those who didn’t).  Then they scanned other people while they were doing nothing, and looked for these networks to try to predict who would do well on such a test.  They were able to show correlation: people whose resting scans showed the high-attention connectivity map did well on the test, and those showing the low-attention map did not do as well.  Researchers were also able to correlate these networks with children with ADHD.  A correlation was also found with people who took Ritalin.
It’s important to remember that these mappings represent physical brain states, not mental states.  Even if you could map a physical brain connectivity pattern for “sadness”, it may not match what a person is feeling, as the mental world is interpreted in a personal manner.  If you don’t feel sad, you won’t accept someone telling you that you are, even if they waved an MRI scan in front of you.

Additional research took 2,000 teenagers who had no drug experience and tested them with gambling questions, while checking their value anticipation response.  They then followed them for four years, and were able to correlate the value anticipation responses with the probability of drug abuse.  However, they were able to predict just as well with only the written questionnaires; the MRI scans did not provide better results.

Could these technologies be used to qualify people for jobs?  There are far better tools available today than MRI scans.  Down the road, when more networks are mapped, and more correlations can be made (hundreds of them), it may be more practical, but that is very far down the road.
What about misuse?  If you get a brain scan for medical purposes, could it be used for other things as well?  That’s still an open question but a lot depends on the environment and state of mind the person was in when they were scanned.

Can an MRI read thoughts?  Right now you can tell some very basic things about what people are dreaming, in broad categories, but we are very, very far from any ability to detect specific thoughts.  There’s a lot of hype in the media on this question, with very little behind it.
How do you filter out noise?  For example, if while I’m being scanned, I’m feeling cold, or claustrophobic, or generally afraid, or otherwise distracted?  One way to eliminate noise is to use cognitive subtraction – flip tested activities “on” and “off”; if you’re looking to study brain maps for lies, have a subject answer five questions truthfully, then lie five times.  Subtract the resulting maps from each other to neutralize constant factors.
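A toy illustration of cognitive subtraction, with made-up activation vectors; the constant "background" stands in for factors like feeling cold or claustrophobic, and the lie-specific regions and effect sizes are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
n_voxels = 500

# Toy activation maps: a constant background (scanner discomfort, cold,
# claustrophobia...) present in every scan, plus a lie-specific component
# confined to the first 50 voxels.
background = rng.normal(0, 1, n_voxels)
lie_effect = np.zeros(n_voxels)
lie_effect[:50] = 2.0

truth_maps = [background + rng.normal(0, 0.2, n_voxels) for _ in range(5)]
lie_maps = [background + lie_effect + rng.normal(0, 0.2, n_voxels) for _ in range(5)]

# Cognitive subtraction: averaging each condition and subtracting cancels
# the constant factors, leaving (approximately) only lie-specific activity.
difference = np.mean(lie_maps, axis=0) - np.mean(truth_maps, axis=0)
```

The background cancels out of `difference`, which is the point of flipping the tested activity "on" and "off".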
What about cultural differences?  There are specific differences among speakers of different mother tongues, so additional variability comes into play here.  Even the shape of the brain can have an impact, so generalizations are very hard.

Wednesday, April 12, 2017

SXSW Day 7 session 1: AI - Actually still terrible

Kate Darling, MIT Media Lab and Nilesh Ashra, Creative Director at Wieden+Kennedy Lodge.

There is a lot of hype from companies around what AI can and can't do, focusing on successes, less on failures.  The discussion around AI has moved beyond research and practicality into the domain of hype.  This hype is potentially distracting from real valuable applications.
There is also confusion between AI as analytics and AI as human-like interaction.  The latter is a much harder problem than the former.  It's much easier to build AI that mimics the analytical workings of the human mind than the interactive side of it.  At the latter, we are still relatively far off.
As an example, call center speech bots can still be very frustrating; they are script based, and can navigate a conversation in a very limited domain.  Small changes in speech patterns can confuse the AI and so interaction with it can be very frustrating.  Humans have better referential context capabilities ("the other one"), and are better at taking and reacting to interruptions.

Being able to improve the way AI interacts with humans will require a lot more research in that domain.  A few examples of this type of interaction research:

  • Hitchbot - an early experiment in having a robot interact with humans in an open environment.  Hitchbot was created as a social experiment at a Canadian university.  The robot had a very basic humanoid form (basically a largish bucket with head, hands, and legs), very rudimentary communication skills, and a mission to hitchhike first across Canada, then other places.  The research here was very rudimentary: just GPS-tracking Hitchbot and tracking people's posted reactions to it on social media.  After going across Canada, Germany, and the Netherlands, it was vandalized while trying to cross the US.
  • Needybot - the Wieden+Kennedy advertising agency built a robot that has basic mobility but can't really do anything by itself, and needs to ask everyone around it for help.  This one was more sophisticated, having a camera and face recognition, so it could identify people it met.  Wieden+Kennedy set it loose in their offices.  The experiment focused on interaction effects, such as that people like being recognized by robots, or that silent robots don't generally get help.

Sunday, April 9, 2017

SXSW Day 6 session 4: AI in America: preparing our kids

Andrew Moore, Dean of school of computer science, Carnegie Mellon

How does AI work?  Let's take the example of a question asked of a voice assistant on your phone: what happens when you ask Google's voice assistant, say, “show me a picture of a celebrity in an orange dress”?  Google wants to return a response in 0.3 seconds – so how does it do it?  Here's a step-by-step breakdown:

  1. The phone's microphone generates the wave forms of the question and digitizes them.
  2. The digitized wave forms are encrypted and uploaded to Google's nearest data center.
  3. The wave form is analyzed and converted into a number of guesses as to what was asked, each ranked by how likely it is to be correct.
  4. Each of the guesses is sent to thousands of servers, who race each other to find answers for the question.
  5. Each server checks its cache of frequently asked questions to see if the question it receives matches; if it does it sends the results back.
  6. If not, it does a search on the words inside the query, looking for results that match the largest number of words.
  7. Eventually each server returns the result it found to an evaluation server, which scores how likely each result is to be the right answer to the question asked, and identifies the top scores.
  8. The answers with the highest scores are sent back to the phone.  This is done before the evaluation server even gets all the answers back.  Servers that exceed the SLA will "give up", so as not to waste time on an answer that will never be returned.
  9. The phone displays the top results.
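Steps 4 through 8 amount to a scatter-gather search.  Here is a heavily simplified sketch; the shard layout, cache contents, and word-overlap scoring are illustrative stand-ins, not Google's actual system:

```python
import heapq

# Toy index: each "server" holds a shard of documents plus a cache of
# frequently asked questions (all names and data here are invented).
SHARDS = [
    {"cache": {"orange dress celebrity": ["img_123"]},
     "docs": {"d1": "celebrity wearing an orange dress at gala",
              "d2": "orange paint supplies"}},
    {"cache": {},
     "docs": {"d3": "dress shop celebrity sightings",
              "d4": "blue dress photos"}},
]

def shard_answer(shard, query):
    # Step 5: an exact cache hit wins immediately.
    if query in shard["cache"]:
        return [(float("inf"), doc) for doc in shard["cache"][query]]
    # Step 6: otherwise score documents by how many query words they match.
    words = set(query.split())
    return [(len(words & set(text.split())), doc)
            for doc, text in shard["docs"].items()]

def search(query, top_k=2):
    # Steps 4, 7, 8: fan the query out to every shard, gather the scored
    # answers, and keep only the top scores.
    results = []
    for shard in SHARDS:
        results.extend(shard_answer(shard, query))
    return [doc for score, doc in heapq.nlargest(top_k, results) if score > 0]
```

In the real system the fan-out, timeouts, and early return of top answers all happen in parallel across thousands of machines; the sketch only shows the scoring and gathering logic.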

All of this happens in under a second.  There's no magic in what Google does, then – just a lot of work broken down across lots and lots of servers.

He then broke down the different components of AI as per the following diagram:

Nowadays, there are ready-made solutions for the individual pieces of AI, such as perceiving (software that identifies something in a picture or video), or software APIs that can be used to build projects.  Using these individual, ready-made solutions, even a person (or kid) who is not tech savvy can aggregate them into a meaningful AI application.  He gave a number of examples from camps the university held with children and teenagers, where after very quick training they were able to create their own applications using these building blocks.
He does recommend an educational foundation for all kids, in preparation for a world where AI is prevalent.  He says computer science doesn't have to be taught too early - 9th grade is probably optimal for most kids.  Before that, there are other basic skills that can be taught that will prepare kids for computer science.  He suggests the following curriculum:




Tuesday, April 4, 2017

SXSW Day 6 session 3: The creation of a hacker


Presented by Adam Tyler, chief innovation officer at CS Identity (and who looked like he got off the set of the Mr. Robot TV show)

The young generation is becoming more and more involved in hacking, cyber-crime, and fraud.  Increased access is breeding a new generation of hackers, who start off mostly curious rather than malicious.

Today, cyber-crime accounts for over 50% of all crime reported in England.  This is fueled by a huge change in the cyber-crime market, which is copying the cloud companies and providing Crime as a Service.  DDoS (Distributed Denial of Service - bombarding someone's network resource until it can't cope with requests), ransomware (software that encrypts your data and then ransoms you for the decryption key), malware, and spam engines all come as "platform as a service" nowadays.  You can set up your criminal endeavor on the cloud, and the platform providers take 25% of the revenue.

The young demographic values digital life and digital assets more than money.  Data is the core piece of the digital life; its distribution is what the internet was created for.

Adam challenged us to guess the age of the hackers behind these hacks:

  • Xbox/Playstation hack - a DDoS attack that knocked 160 million users offline.  The purpose of the attack was advertising: after the attack the hackers sold the tools they used, so the attack itself was an ad for how good their software was.  Age of attackers: 18
  • JP Morgan Chase hack - $100 million stolen from banks.  Age of attacker: 36
  • TalkTalk hack - SQL injection attack, which cost the cellular company 100,000 customers and $73 million.  Age of attacker: 16
  • Target attack - 2-3 million credit cards stolen and successfully sold on black market.  Age of attackers: 25
There is a large range of ages for hackers, although the age is trending downwards.
Common thinking is that becoming a hacker is hard - something you need to devote many years of intensive study to.  This is no longer true today.

There are three basic hacker types:
  • Type 1: Script kiddies.
    • Motivation: glory among their peers.  On the internet they hold power which they may not hold in real life.
    • Communication: open web forums
    • Attack methods: techniques learned from gaming and gaming related forums
    • Targets: other gamers
    • How do they get introduced to hacking?  Simple: via Google.  Adam gave a live example and googled "how to kick off other players".  This leads to a gaming forum (not a hacking forum!), with lots of free tools and techniques for kicking other players off whatever online game you like.  While you're at the forum, you're also exposed to other things - exploits, social engineering tricks, and so on.  He followed an ad selling stolen identities for a whole host of services, and showed how easy it was to get a stolen Netflix, Xbox Gold, Hulu, Spotify, or whatever account.
  • Type 2: Enthusiasts
    • Motivation: financial gain
    • Communication method: dark web forums
    • Attack methods: phishing emails, exploit kits
    • Targets: Individuals, small businesses
    • How do they get introduced to the dark web?  Once again, Google.  In the past it used to be relatively difficult to get on the dark web; you would have to use special browsers and know what to look for.  However, when Google created its own DNS service, suddenly any dark web site someone accessed through it would automatically get indexed and start showing up in searches.  As for the sites that require Tor browsers (special anonymizing browsers used to surf without exposing your identity), people built Tor proxies for them - gateways that let anyone with a regular browser access the site - and thus they would be indexed by Google as well.  So now, with simple searches on Google, you can access any corner of the dark web.
  • Type 3: Professionals
    • Motivation: massive financial gains
    • Communication: highly private communication methods
    • Attack methods: zero day exploits, malware
    • Targets: large corporate entities, financial institutions
Type 3 hackers are rare; what's driving the explosion in hacking is types 1 and 2.

How does one protect oneself?  This wasn't part of the presentation, but someone asked the question, and as usual, there is very little to be done.  Update software frequently - make sure you have all security updates; never reuse passwords; and understand the world of hacking - be aware of the techniques employed by hackers so you are less likely to fall for them: phishing, clicking on dubious links, what ransomware is and how to protect against it, etc.

Saturday, April 1, 2017

SXSW Day 6 session 2: Towards more humane tech

Anil Dash, CEO of Fog Creek Software

Code developers worry about what happens if their software doesn’t succeed, but not so much about what will happen if it becomes very successful and everyone uses it.  This sometimes causes unintended consequences.
Technology is used ubiquitously, but not everyone who uses it trusts it.  Ethics is a mandatory part of training in business schools, medical schools, and law schools, but not in most technology and computer schools.
There is a lot of evidence that IT is a harsh and inhumane environment.  Women, especially, are not accepted in the industry or find it a hard environment.  Diversity in tech is low – there is sexism and ageism.  Also, a lot of the recruitment in tech is referral based, which tends to keep the same profile of people – people tend to keep networks of people like them.

Evolution of markets as an example of the problems tech can introduce:

  • Old markets – in the old pre-tech markets, buyers and sellers connected naturally.  I could get to any buyer inside my geographic region, or I could venture out to find buyers, or I could mail-order.
  • eBay – eBay introduced a more efficient way of connecting buyers to sellers, and removed most of the geographic limitations.  Buyers had better access to more sellers and vice versa.
  • Google – The sophisticated search algorithm promises a better match for your needs as a buyer, but no one really knows what the algorithm is, so no one can tell if it provides better access, or whether it’s directed based on Google’s interests.
  • Amazon – Amazon represents complete control of the markets: they can decide who sells through their framework, there’s no clear understanding who is listed first in search results, and they themselves are beginning to enter as sellers; their data provides them the ability to identify the best, most lucrative parts of the market and focus solely on those.  So they are substantially disrupting the relationship between buyers and sellers.
  • Uber – got rid of markets altogether: buyers can’t choose the ride, sellers can’t set the price.  Also, Uber is substantially subsidizing its service, not only below traditional taxis, but in some places below public transportation.  In essence it is engaging in dumping.
  • Facebook – their algorithm decides what each reader sees and what ads each reader gets, and their market is not really a market.

Facebook, Google and Uber are creating fake markets – all apps lead to fake markets.  Even worse, we are moving towards no markets: when you have automated personal assistants, they will determine the sellers for you, and you won’t even necessarily know who they are.
We need to bring ethics considerations into the industry, but this collides with the values of technology (then again, the same can be said to more traditional industries, and it’s still done there).  On a personal level, you should choose carefully who you work with, what apps you use and so on, and make sure the tech you use reflects the values you believe in.

SXSW Day 6 session 1: A tale of future cities

A panel with Paula Chowles (Wired), Andrew Bolwell (Chief Disruptor, HP Inc), Alex Rosson, (Shinola Audio), and Joshua Kaufman (Wisome VC)

In 1950, 30% of the population was living in urban areas; today it's 54%, and by 2050 it'll be 66%.  Should we, or even can we, put the brakes on urbanization?
1.6 million people move to cities every day.  Cities use up less than 3% of the world's land mass, but 60-70% of its energy.  It does not look as though urbanization will stop.

How should cities navigate public/private infrastructure?  The growth rate of cities and their infrastructure needs far outstrips the ability of the private sector to provide it; we'll need to re-envision infrastructure.  Building roads, bridges, and parking is expensive and time-consuming, and usually provides very little return on investment.  Plus, in some areas the task is too expensive to even consider: by the end of the century 40% of the earth's population will be African, and the infrastructure problems there are the most challenging.
Cities will need to return to more traditional, social-based infrastructure solutions: public transportation, reducing cars in cities (which can be done through taxation) to reclaim parking space, and building more bicycle lanes and escalators instead of roads.  Bicycles are great social equalizers, and escalators are a very cost-efficient transportation method.

There is also a need to increase cities' diversity of business and employment - single-industry cities can suffer immensely when the one industry fails (Las Vegas, Detroit).

Violence in cities - technology can help alleviate some of these issues as well. Some US cities have deployed systems to triangulate the sound of gunshots for faster response times.  Additional research has shown that the way violence patterns spread is similar to the way contagion spreads, and the same type of containment of disease can be applied to containing violence.

Wednesday, March 29, 2017

SXSW Day 5 session 5: Brain wearables

Tan Le (Emotiv)

Emotiv creates a product worn on the head, which uses sensors to detect brain waves through the scalp.  The device is like a strip of plastic worn a bit like a crown.  It picks up brain waves and can be trained to recognize specific wave patterns, which can represent a particular type of thought.  It can then translate the pattern into a wireless command to some device, or just collect information for studying the function of the brain.

The system is an open platform that can be used for a variety of purposes, such as a remote control to move objects or control input to a device.  The system is trained by having the person repeat a particular type of thought as a command (such as “up”).  Once the pattern is identified, it’s matched to an activity.
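The train-then-match loop can be sketched as follows.  The channel count, the synthetic patterns, and the nearest-pattern matching are illustrative assumptions for the sketch, not Emotiv's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)
n_channels = 14  # headsets of this kind expose on the order of a dozen channels

def capture(pattern):
    # One noisy sensor reading of a user's brain-wave pattern (synthetic).
    return pattern + rng.normal(0, 0.3, n_channels)

# Training: the user repeats each command-thought; store the average reading.
up_pattern = rng.normal(0, 1, n_channels)
push_pattern = rng.normal(0, 1, n_channels)
trained = {
    "up": np.mean([capture(up_pattern) for _ in range(10)], axis=0),
    "push": np.mean([capture(push_pattern) for _ in range(10)], axis=0),
}

def command(reading):
    # Match a live reading to the nearest trained pattern; the matched
    # command can then be sent wirelessly to the controlled device.
    return min(trained, key=lambda c: np.linalg.norm(reading - trained[c]))
```

The per-user training step is why the device has to be calibrated for each wearer before it can drive anything.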
The device can be used for people with disabilities – example shown was a quadriplegic with no motor functions who was able to play a rudimentary video game using thoughts to control the motion on screen.
In addition, it can be used for biofeedback – training our brains by identifying positive mental states and enhancing them: identify a positive mental state, and have it trigger a visual cue which the person can try to repeat.
Additional applications include research.  For example, do external factors improve or harm concentration?  Another example is monitoring the brain processes of amateur card players and comparing them to those of an expert player, to see what thought patterns experts use.
Many of these types of activities are possible with an EEG machine, but an EEG requires a complex connection, and the wires limit where the device can be used.  This device is wearable and wireless, so it is portable and can be used in field research and in situations where exertion or extreme mobility is required.  It can also scale better.


Standard EEG

Emotiv device

In the following video, an audience member was brought up and fitted with the device, which was set to control a Sphero ball.  First the device was “trained”: the volunteer was told to think about moving in a direction, and the device captured the patterns.  After a couple of minutes, the training was completed.  The hard part was getting the Sphero to work, though: the Wi-Fi and Bluetooth kept failing, and they couldn’t get it connected to the phone and paired with the wearable.  Eventually the speaker gave up, handed the devices to an assistant to figure out, and continued with the presentation.  Then, at some point, they were suddenly able to make it work:


Tuesday, March 28, 2017

SXSW Day 5 session 4: Beyond driverless cars - our transportation future

A panel discussion moderated by Neal Ungerleider (Fast Company) featuring Anthony Foxx (former US secretary of transportation and mayor of Charlotte), Chandra Bhat (University of Texas), and Don Civgin (Allstate insurance company)

The panel started with some statistics:

  • Cars are not utilized (i.e. parked) 96% of the time
  • 8 Billion hours wasted on congestion (I guess annually)
  • For low and middle income families, transportation costs represent their second highest expense


Transportation sits on three pillars:

  • Technology: this is where advances happen most rapidly
  • Infrastructure: need to think about how to adapt infrastructure to the prevalence of autonomous cars
  • Human awareness: benefits and unintended consequences need to be thought through.  For example, if autonomous cars free me to do whatever I want during the ride (work, entertainment), then time of travel is not as much of an issue as it was in the past, which may encourage urban sprawl, lead to longer drives, and increase waste.
Will autonomous vehicles be owned, or will we just use ride sharing?  Younger, more educated people gravitate towards the shared economy, so they will be buying fewer cars.

Autonomous cars will come sooner rather than later:
  • Autonomous cars are "killer apps"; they will be the feature people use to decide whether to buy a car, and which car.
  • Companies pioneering the technology (primarily Uber and Tesla) are not playing in the traditional manner; they are disruptive and aggressive.  Traditional car manufacturers have to accelerate their own plans to keep up, and as a result, the entire timeline is accelerated. 
How will liability be assessed around autonomous cars?
Today, humans who drive cars have to be tested and licensed.  How do you license the software behind a self-driving car?  We would need a centralized standard.
The government's approach to this question is still at the guidelines stage.  The US government has published guidelines, but needs to expand them and have them periodically reviewed.

How do you keep your data private, but also make the information about autonomous vehicle accidents public and available to manufacturers and government so it can be reviewed and learned from?  Possibly will need to adopt a similar model to the one used for aviation accidents.

Car manufacturers are beginning to view themselves as mobility companies, rather than car companies.  One unexpected possible feature of autonomous cars is that because they are much safer than regular cars, they may be able to lose a lot of weight taken up in today's cars by safety features (bumpers, reinforcements, airbags, etc.), which could lead to smaller engines and better efficiency.

Sunday, March 26, 2017

SXSW Day 5 session 3: Connected cities, hackable streets

Panel with Nadya Bliss (Arizona State University) moderating Tom Cross (Drawbridge Networks) and Robert Hansen (OutsideIntel)

Smart cities represent the confluence of machine learning, big data, and IoT in a city context.  Most cities start with power-related applications.
Security represents a special challenge in cities, which have both old, legacy, vulnerable systems and modern sensors and tools that we have not quite figured out how to protect well yet.  Basically, a lot of the attention directed towards smart cities today is driven by the marketing of device manufacturers, who are trying to increase sales of their devices in the guise of cost reduction.  The cities themselves are still not 100% clear on the use cases.

One example of a hacking of a very simple smart city feature:
Many cities have smart traffic lights - traffic lights with sensors buried under the road that sense when a car is on top of them and signal the traffic light (usually wirelessly) to change.  Since the device communicates wirelessly, and almost never securely, it's relatively simple to identify and decipher the signals it sends to the traffic light, and then, using a transmitter more powerful than the sensor, disrupt the operation of the traffic light, even from a distance.

There are three actor classes who may be involved in hacking cities:
1. Bored teenagers - kids who would hack a traffic light for the LOLs.  They represent a low threat overall in the category of connected cities.
2. Criminals and criminal organizations - they are usually looking for ways to monetize a vulnerability.  They would be involved in very specific use cases.
3. Nation states - possibly most interested in vulnerabilities of connected cities, especially when they can accumulate effects to generate a large impact on the whole city or state.  They can shut down devices for general disruption, or introduce subtle changes to how the system operates, to achieve a specific goal.
For example, in the case of traffic lights, a nation state could attempt to disrupt an election by causing traffic slowdowns or jams in areas with many voters for a particular candidate.  This type of attack would be extremely difficult to even identify as a hack, let alone trace back to its originator.  Also, cities are resource poor - they get all of their resources from outside the city.  An attack on delivery trucks or the transportation routes they take could deprive a city of food in a matter of days.
Not all "hacking" is done by people trying to cause disruption; there have been reports of police feeding false information into Waze to mislead people about where officers are actually located.

Because they span such diverse domains, smart cities have numerous points of failure.  And because product developers want to get their products to market fast, they don't always perform intensive security testing and fixing, which takes time and costs money.  Also, given the way some of these devices are used, many are not built with the ability, or even the accessibility, to patch them.
As such, what are the ethical considerations around disclosing discovered vulnerabilities?
In the software industry there are developed norms for how to disclose vulnerabilities; in newer industries, such as connected cars, there is less openness, as not all manufacturers have adopted the software mindset quite yet.

What should be done?

  • All messaging needs to be encrypted
  • Companies in all areas of smart city services and devices need to be made to follow rules around security - SLA on logging, encryption, upgrades, etc.
  • Need to have one throat to choke - each city needs to have a Chief Security Officer or innovation officer who can coordinate security concerns for the city.  Cities should also create panels for advising them on how to add smart city features securely
  • Determine for each device whether it really needs to be connected to the public internet - is there a good reason for public connectivity?
  • Consider data as a liability, not an asset, and treat it accordingly
What challenges are we facing in this domain?
  • The people making the technology are not incentivized to spend money and time on security
  • People buying the technology are not aware of the risks and dangers
  • There is a lack of talent to fill the need
What are the main questions you should consider when building for security?
  • What happens if you get a reply you didn't expect?
  • What happens if you get no reply at all, when you expected one?
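Those two questions translate directly into defensive client code.  A sketch, where the sensor protocol, value range, and queue transport are all invented for illustration:

```python
import queue

def query_sensor(requests, responses, timeout=2.0):
    # Defensive client for a hypothetical connected-city sensor.
    requests.put("READ")
    try:
        reply = responses.get(timeout=timeout)
    except queue.Empty:
        # No reply at all, when we expected one: fail safe, don't hang.
        return {"status": "no_reply", "value": None}
    if not (isinstance(reply, int) and 0 <= reply <= 100):
        # A reply we didn't expect: reject it rather than trust it.
        return {"status": "bad_reply", "value": None}
    return {"status": "ok", "value": reply}
```

In-process queues stand in for the network here; the point is that every path out of the function is explicit, including the two failure paths the panel asked about.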

I asked a question about Estonia, and whether there are any specific applicable lessons that could be learned from that country's experience, both in advanced digitalization of public services and in withstanding attacks from a nation state attacker (Russia); the panel did not have any information around this, but someone in the audience was able to give me some interesting details:
  1. Estonia uses Guardtime (an industrial blockchain platform) to log minute by minute records of anything that happens in their database, so they can be sure no one can tamper with their data undetected
  2. Estonia is planning to build "data embassies", which are secure data centers outside of the country, that act as a backup in case of attack and are treated like real embassies.
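The tamper-evidence idea behind that kind of blockchain-style logging can be sketched with a simple hash chain (an illustration of the general technique, not Guardtime's actual system; the log entries are invented):

```python
import hashlib

def chain(records):
    # Hash-chain the log: each entry's hash covers the record plus the
    # previous hash, so rewriting any record changes every hash after it.
    hashes, prev = [], b"genesis"
    for rec in records:
        prev = hashlib.sha256(prev + rec.encode()).digest()
        hashes.append(prev)
    return hashes

log = ["03:14 record 17 read", "03:15 record 17 updated", "03:20 backup done"]
original = chain(log)

# An auditor keeps `original`; an attacker later rewrites one entry...
tampered = list(log)
tampered[1] = "03:15 record 17 read"
# ...and every hash from that entry onward no longer matches the kept chain.
```

As long as the chain heads are stored somewhere the attacker can't reach, any edit to the history is detectable by recomputing the chain.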

Friday, March 24, 2017

SXSW Day 5 session 2: Going Beyond Moore's Law

Panel with Greg Yeric (ARM Research), Rachel Courtland (IEEE), Tom Conte (Georgia Institute of Technology) and Tsu-Jae King Liu (University of California at Berkeley)

Moore’s law says the number of transistors on a square inch of chip doubles every 1-2 years.  People take this to mean the speed of computers doubles every year, but really it was stated more as an observation about the economy of making semiconductors.  The performance prediction is attributed to a different Intel executive (David House), who predicted chip performance would double every 18 months.
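The doubling periods mentioned above compound quickly, which is easy to underestimate; a quick back-of-the-envelope calculation (illustrative only):

```python
def doublings(years: float, period_years: float):
    """Number of doublings over a time span, and the resulting growth factor."""
    n = years / period_years
    return n, 2 ** n

# House's 18-month performance doubling, compounded over 30 years:
n, factor = doublings(30, 1.5)
print(f"{n:.0f} doublings -> ~{factor:,.0f}x")  # 20 doublings -> ~1,048,576x
```

Twenty doublings in thirty years is roughly a million-fold improvement, which is why even a small change in the doubling period matters so much.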
Moore’s law was a good predictor of semiconductor advancement and correlated well with performance.  In 1995 transistors were still getting smaller, but signal distances were increasing, so wire delay started to slow chips down; this was compensated for with other optimizations that kept Moore’s law relevant.  By 2005 rising power output was producing more and more heat, which capped clock speeds, so instead of running faster cycles, newer chips just add more cores.

The existing process of shrinking chips is not going to be economical, so the current loop of size reduction will not be sustainable without substantial innovation.  To sustain (and move beyond) Moore's law, we probably need some disruption in the industry, which can include things like:

  • Cryogenic supercomputing can perform at lower power levels than CMOS (even when accounting for the power required for the extreme cooling) which would allow for higher clock speeds.
  • Quantum computing - this is still in the future and has only limited applications; research is in its very infancy.
Looking forward, the expectations are that innovation being worked on today will be available ~10 years down the line commercially.

What are the characteristics needed from today's semiconductor chips?
  • Ability to be always on
  • Low power - need much better efficiency (We are looking for orders-of-magnitude drops in power usage - reducing voltage from 1 Volt to 1 millivolt cuts power consumption by a factor of a million.  Once we get to that level of power usage, we won't need batteries anymore - chips could run on ambient power in the air.)
  • Embedded memory
  • Small - to support applications in IoT and wearables
  • Networked (optical interconnects)
  • Flexible substrate (for wearables)
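The million-fold power figure in the list above follows from the standard model of dynamic (switching) power in CMOS, P = C * V^2 * f: power scales with the square of the supply voltage, so cutting voltage by a factor of 1,000 cuts dynamic power by a factor of 1,000,000.  A quick check (a rough model that ignores leakage and holds capacitance and frequency fixed):

```python
def dynamic_power(capacitance, voltage, frequency):
    """Dynamic CMOS switching power: P = C * V^2 * f."""
    return capacitance * voltage ** 2 * frequency

# Dropping supply voltage from 1 V to 1 mV at fixed C and f:
p_high = dynamic_power(1e-9, 1.0, 1e9)
p_low = dynamic_power(1e-9, 1e-3, 1e9)
print(p_high / p_low)  # ~1e6: a thousand-fold voltage cut, a million-fold power cut
```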
Technologies for future systems:



Thursday, March 23, 2017

SXSW Day 5 session 1: What to do when machines do everything

Malcolm Frank (Cognizant)

AI is already here today in little helpers we use on a daily basis (personal assistants, next-gen GPS apps, etc).
When Deep Blue beat the human world champion in chess, people belittled the achievement and did not attribute it to AI: IBM’s Deep Blue was written for the single purpose of defeating the world chess champion.  Later, Watson defeated Jeopardy champions, which was acknowledged as an improvement because it required real-time deciphering of human speech; but since, once the speech was understood, the rest was a search algorithm, people still played the achievement down.  When DeepMind’s system beat the world Go champion, however, it was a key turning point, because it was a general-purpose AI that was taught to play Go, not a purpose-built application.  Go is also the most complex board game in existence, orders of magnitude more complex than chess, with no real way to brute-force it.  Here we pitted 2,500 years of human refinement of the game against two years of computer development, and the computer won.  Even after that decisive victory, some people still said AI could only go where problems could be solved algorithmically; in games like poker, where small human cues are important, it would not be able to compete.  Yet recently a computer beat top poker players decisively: it turns out that short-term human strategies like bluffing and visual cues do not stand up to a long-term algorithm, which can adapt to catch and learn these invisible cues over time.

Today, 8 out of 10 hedge funds are AI driven; Tesla’s automated driving has accumulated over 200 million miles, and the more cars Tesla sells, the more learning it accumulates.  In radiology, AI identification of anomalies like cancer has reached an accuracy of 99.6%, exceeding human accuracy of 92%.  In the legal world, paralegals who took two weeks to do due diligence are being replaced by machines that do the same work in two hours.  JP Morgan Chase has put in place a loan-review system that has automated over 360,000 hours of what used to be human work.  Helpdesk management is being replaced by voice-recognition software and chatbots, and AI is proving much better than humans at investment advisory because it has none of the biases humans have.  The displacement of humans by AI is happening already, and it’s happening rapidly.
The takeover of jobs by AI can be described as a capitalist dream and labor nightmare.  For the capitalists, AI and automation
  • Radically lowers operational costs
  • Improves quality (fewer errors)
  • Boosts speed
  • Raises insights and meaning previously unavailable

What happens on the other side of the equation?  A 2013 Oxford study predicts that 47% of US jobs are at risk of being automated in the next decade or two.  Nevertheless, Malcolm Frank is optimistic.  Why?  Because while in the past technological advancement was focused on smaller job domains, it is now shifting to bigger systems: finance, healthcare, government.  He sees three scenarios for humans:
  • Replaced - AI takes over your job completely
  • Enhanced - AI enhances your job, taking away the rote activities and leaving you to deal with higher level functions
  • Invented - AI creates new jobs that didn't exist previously
Frank says 90% of the talk is around replacement, but he feels that 90% of the actual impact of AI will be in enhancement and invention.  He breaks it down as follows:
  • 12% of jobs will be replaced
  • 75% of jobs will be enhanced
  • 12% of jobs will be invented
Simpler jobs will be replaced altogether, but automation has been eliminating jobs for quite some time now.  For example, automated toll booths replaced human toll booth operators - and that was not a job anyone really wanted to do.  Until now, automation replaced primarily blue-collar jobs; now, however, software is automating white-collar jobs as well, if they are rote enough.
As an example, a company called "Narrative Science" has software that automates simple journalism.  Almost all minor sports events that receive small coverage in local newspapers are no longer written up by people, but by its software.  The following article was written just by entering the play information into its story-writing software:
Story written by Narrative Science software

A guideline he gave for whether your job is at risk: do you sit in a large cubicle farm?  If so, your job is at risk.

Most Jobs, however, are a sum of multiple tasks: some of the tasks will be automated, but some will not.  For example, lawyers:

Some of these tasks can be automated (examining legal data, researching, preparing and drafting legal documents) but others are not so easy (presenting cases before clients and courts, gathering evidence, etc.).  Again, this is nothing we are not used to already: a taxi driver uses GPS for navigation, a credit card reader to automate payment taking, and possibly an app for a ride-hailing service like Get Taxi.

Generally speaking, you could describe a "periodic table" of jobs as follows:
The jobs on the left will be automated; the ones on the right are enhanced.
People will do the "art" of a job; machines will do the "science" of the job.  Examples of areas of augmentation:



And it's not as though our institutions do not have vast room for improvement.  Healthcare, for example, is a very inefficient, very bureaucratic industry, with so many activities not related to actual health (forms, returns, appointments, questionnaires, reports, etc).  Some of the improvements that can be expected in healthcare include:



What about new jobs?  Frank describes what he calls the Budding Effect.  Edwin Budding invented the lawnmower in 1827.  The ability to mow lawns to an even height vastly improved the ability to play sports, opening the door to a huge sporting industry that was previously limited or impossible.  A more striking example of the Budding Effect is the invention of ways to generate and deliver electricity, which enabled communication, radio, TV, and countless other industries, none of which could have been imagined when the initial inventions for generating electricity were made.  Frank quotes W. Brian Arthur, who wrote in the McKinsey Quarterly that by 2025 the second economy created (the digital economy) will be as large as the entire 1995 physical economy.  (Interestingly, Arthur does acknowledge that jobs will be gone in the second economy that will not be coming back, and sees the lack of jobs as a problem that needs to be addressed; he says the focus should be less on job creation and more on wealth distribution.)

Where will new jobs be created?  These are some good candidate domains:

Wellness - as mentioned previously, the current healthcare system operates very poorly.  We will be seeing a shift to patient-centered care, rather than doctor-centered care.  Furthermore, AI will be able to predict when we are likely to get sick, so we will be able to take preventative measures in advance and reduce overall illness altogether.

Biotechnology - will be bigger than IT by 2025

VR and AR - will grow to a size similar to today's movie industry.  VR is predicted to be a bigger business than TV by 2025; Tim Cook (Apple CEO) predicts that AR will be bigger than VR in 10 years.

The Experience economy - people with time on their hands will seek more vacation, and beyond just going places they will want to experience things.  Airbnb is planning to move beyond providing lodging to providing experiences: want to live in a medieval castle, relive the Renaissance, or experience what it was like to be a pioneer?  Specialized experience packages will expand tourism to different levels altogether (a-la Westworld, hopefully without the murderous robots).

Smart infrastructure - smart buildings, roads, and so on.  The money for projects like these comes from automation - when self-driving cars eliminate the majority of car accidents, it will save the US economy alone $1 trillion (as a point of comparison, all federal tax revenue is $1.7 trillion).

Next gen IT - Cyber security, quantum computing - these are emerging domains that will require jobs as well.

In summary, says Frank, never sell human imagination short - human wants and desires are limitless, and they will lead to things for us all to do.

Wednesday, March 22, 2017

SXSW Day 4 session 4: A new normal: user security in an insecure world

Panel discussion with Alina Selyukh (NPR technology reporter), Bob Lord (Yahoo's chief information security officer (CISO)) and Christopher Kirchhoff (previously assistant to the Director of the Joint Chiefs of Staff at the Pentagon)

First off, the moderator addressed the most interesting question - the Yahoo attack and how Bob Lord handled it.  He said it was most likely a nation state sponsored attack, which he called "the new norm".  He said that if in the past nation state sponsored attacks were primarily directed at government or military targets, today many corporations are also attacked by nation states, either for industrial espionage, to give its own corporations an advantage or even for revenge (e.g. Sony hacks attributed to North Korea).  He said that many corporations do not understand the meaning and impact of having a nation state attacker - the dedication of resources, time, money and people involved.
Nation state attacks are different than regular hacks in that they are very well funded and can be planned and executed over many years.

Similarly, Kirchhoff was asked about his experience - he is the one who had to deliver the news of the Snowden leaks to the Joint Chiefs of Staff.

Bob Lord was asked about what he does at Yahoo to improve security.  He mentioned a number of things:
1. Red team/blue team exercises - like many companies, Yahoo conducts exercises where a group of hackers tries to penetrate its own systems (the red team) while another team tries to detect and stop them (the blue team).  He says the red team always wins; the defenders were never able to stop them.  He recommends not building the red team from the people in charge of security at your company, as they may fall into certain patterns of thinking and make assumptions based on their knowledge of the security defenses.
2. Phishing exercises - IT sends phishing emails to employees to see if any of them click the included links.  He usually uses this technique to test how good the security orientation was.  One lesson he learned from phishing exercises was that the security orientation for new employees was too detailed - with so much information, people didn't remember much of it.  He now prefers focused sessions on the key points he wants employees to remember, so they're not overwhelmed with information.

He said that most failures are procedural - not using proper existing protocols, not making sure updates are applied immediately - a lot of human error.  He said that security problems are not just technical problems, they are also cultural ones.

On the subject of security culture, Kirchhoff described how the Navy developed a "high reliability" culture for places where small mistakes can have big impacts (such as nuclear submarines).  He said it had five principles and mentioned two of them: forceful backup (for example, sending two people to do a critically delicate job even if only one is needed for it) and integrity (if you make a mistake, speak up regardless of consequences).

[As a side note, I think he confused "high reliability" with "operational discipline"; the two principles he mentioned come from the latter (the five pillars of operational discipline are Questioning Attitude, Level of Knowledge, Forceful Watch-team Backup, Formality and Integrity).  High Reliability Organizations have different characteristics (preoccupation with failure, reluctance to simplify interpretations, sensitivity to operations, commitment to resilience, and deference to expertise).]

Lord mentioned that adding security after the fact cannot work; security has to be designed into whatever is being developed, as it's being developed.  For regular users, rather than corporations, he mentioned these as some of the more important steps to improve personal security:

  1. Keep things patched and up to date
  2. Use two factor authentication
  3. Shut down old accounts - hackers know that a lot of people reuse their passwords, so the more accounts you have out there, the better the chance a hacker can stumble across one of your passwords (oh, and don't reuse passwords).
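The two-factor authentication in step 2 often takes the form of time-based one-time passwords (TOTP, RFC 6238), which can be generated with nothing beyond the standard library; a minimal sketch:

```python
import hashlib, hmac, struct

def totp(secret: bytes, at: float, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238, HMAC-SHA1 variant)."""
    counter = int(at // step)  # number of 30-second steps since the epoch
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: the 8-digit code for t=59s with this ASCII secret
print(totp(b"12345678901234567890", at=59, digits=8))  # 94287082
```

Passing `time.time()` as `at` yields the current code; an authenticator app and the server run the same computation over a shared secret, so the code proves possession of the secret without ever transmitting it.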
Asked about biometrics, he was not enthusiastic.  He said they can sometimes be captured passively (people leave fingerprints everywhere) and then used against you.  Unlike passwords or digital certificates, you can't revoke and reissue your biometric markers.

He said the average time between penetration and detection is 200 days, so you need your teams ready to do research going that far back.  He also mentioned that two thirds of breaches are discovered when someone calls the company to tell it it was hacked.

What are the top things they learned?
  • Log retention - because of the long detection time, it is important to keep logs going very far back.  This is very expensive, but worth it when a hack happens.
  • Conduct red team/blue team exercises
  • Get top management familiar with the risks and the issues up front, and include them in exercises; so that when something happens, they are more ready to deal with it, rather than having to educate them in the middle of a crisis.

Tuesday, March 21, 2017

SXSW Day 4 session 3: GAFA: The relentless rise of tech giants (and their inevitable fall)

By James Schad, WeGrow

Google, Amazon, Facebook and Apple – giants roaming the tech landscape (he didn’t mention Microsoft, which is sometimes also bundled with these four).

Google – controls 90% of the search market
Amazon – largest online retailer, with 43% of online retail (I’m guessing in the US); the second-biggest company after it has 4%
Facebook – 1.9 Billion users
Apple – 91% profit share of the smartphone market (despite having far less than 50% of the market share); most valuable company in the world

Despite being such giants, there is nothing inevitable about their continued existence, necessarily.  Myspace was once thought to be an unassailable social media platform.
Apple has a loyal following which should help it survive even if it falls on hard times (as has happened in the past)
Google and Facebook are more precarious, even though they are, in essence, a digital duopoly.  Both rely very heavily on advertising: Google gets 86% of its revenue from advertising, while Facebook gets 97% of its revenue from ads.
Google:
Other players are beginning to get into Google and Facebook’s game: in search, Amazon is becoming a major competitor to Google – a lot of retail-related search is going to it, and with Amazon Echo it will be moving into other search as well.  By 2020 it is projected that 50% of search will be voice or visual, where it is not clear how you add advertisements.  When the web is no longer an entry point, Google loses its gateway.
Google’s next major income generator is video, where it is increasingly competing with Facebook (in feed), twitch (specialty gamer domain), and OTT content providers.
Facebook:
Facebook is seeing ad fatigue – more users are ignoring ads, and Facebook needs to take stronger measures to push them at its users (for example auto-play ads, which users dislike).  Also, there is growing pressure from advertisers to open up Facebook’s metrics, so there can be more transparency around the value they are getting.
There are also a lot of privacy concerns which may cause backlash.  For example, when Facebook acquired WhatsApp, it said it was technically impossible to link account data from the two services, but then went ahead and did it anyway, prompting the European Commission to investigate.  In fact there are many investigations underway against both Facebook and Google in the EU, around both privacy and antitrust concerns.

How are these companies diversifying away from their core revenue sources?
Purchasing: Google, Facebook, Amazon and Apple have spent $130 billion purchasing other companies – many of them for innovation (patents).
Content: All four are investing heavily in content and content services (YouTube Red, Amazon Prime Video and Music, Apple Music and the iTunes Store, Facebook's content strategy)
Subscription services – Amazon Prime (which is now in 50% of US households, and 70% of US households with annual income of $112K and up!), YouTube Red and Apple Music are all content subscriptions (some more successful than others)

But is the investment paying off?  Not clear.  Alphabet has invested $46 billion in various projects, which so far have lost them $6 billion.  There are a lot of “moonshot” projects, so some may return the investment in the long run, but so far they have not found a replacement for search as a monetization platform.
Facebook, on the other hand, is trying to copy other models: Facebook Marketplace is its version of Craigslist; Facebook Jobs is its version of LinkedIn; Facebook Workplace is its version of Slack; and with Instagram it is clearly just copying whatever Snapchat does.  In addition, it is investing heavily in VR, but again, this is a domain that has not yet paid off.
Apple’s diversification strategy is less clear.  In contrast to Facebook, Apple seems to be focusing on AR, not VR – rumor has it AR will be available as soon as the iPhone 8.  Its attempt to branch into wearables with the Apple Watch is not generating excitement, and Apple Music has seen slow adoption, far behind the other music streaming services.  Apple is rumored to be working on a TV service (it may just buy a major US TV provider; it has enough money), and for some time it was rumored to be working on a car.  None of these are near-term projects.  However, Apple has a unique brand loyalty which should sustain it even if it drops from dominance (as has happened in the past).

Amazon stands out from the others in terms of diversification: it generated $10 billion in revenue in 2016 from Amazon Web Services, and is rolling out more services.  It is still very strong in retail – providing more choice at cheaper prices.  It has its own consumer product lines (AmazonBasics branded goods, and it’s rumored to be launching a line of women's undergarments), and it uses its massive data collection to undercut the market.  It has had huge success with the Amazon Dash buttons and is constantly innovating in retail: the new Amazon Go store, the Amazon flying-warehouse patent, drone delivery and so on.
Amazon’s flying warehouse: a warehouse floating in the air from which drones fly down to the city to deliver packages.



Given all of these trends, the lecturer predicts Amazon will become the biggest company in the world and has the best chance of the four to survive for the long run.

I asked about Alibaba, and he acknowledged it is a potential competitor to Amazon, but said it hasn't caught on outside of China (I'm not sure he's up to date on his data on that).

2017-04-09 Update:
In just the last couple of weeks, Google has been feeling the pinch of advertisers pulling their ads from YouTube because the ads were shown near what's called "inappropriate" material.  This is proving to be quite the challenge for Google.  It won't topple them, of course, but it's clearly not a good sign.

Saturday, March 18, 2017

SXSW Day 4 session 2: Chaos Monkeys: the threat of AI and automation

By Antonio Garcia Martinez
The session was originally titled “Chaos Monkeys: A Silicon Valley adventure”, and was supposed to be about the book the lecturer wrote on his experience in Silicon Valley.  Instead, he decided he was tired of telling that story and wanted to discuss something he felt was much more important: the future of society in the face of AI and automation.
Chaos Monkey is a software system written by Netflix which randomly shuts down servers and wreaks general havoc in its systems, so engineers can see how well they react to problems.  The tech industry, said the speaker, is the chaos monkey of the world; it throws things out of whack and lets the world deal with it as best it can.
He talked a little about his professional background, starting off in the financial industry (prompted to join it by reading “Liar’s Poker” by Michael Lewis), and eventually winding his way into Facebook, where he experienced the type of insanity that can happen when very young people run a company drenched in money.  He also talked about his family background – his grandparents fled Spain because of Franco, to go to Cuba; his parents fled Cuba because of Castro, to the US.  He joked that his grandparents fled Europe because of fascism, his parents fled Cuba because of communism, and now he’ll be fleeing the US because of capitalism.  After that he moved on to the topic of automation taking over.

He started by talking about trucking, which everyone agrees is one of the first places where driver automation will take hold.  Commercial driving is the most common job in 20 US states, and the last well-paying job available to a person without higher education (~$73k salary).  There are 3.5 million truckers; at least half of them will lose their jobs in the coming 10 years or so.
Automation, he said, is the triumph of capital over labor, with labor becoming obsolete and powerless.  As automation advances, more and more people are put out of a job, and the strength of labor is decreased.  Simultaneously, automation can reduce the costs of goods and services substantially.  Still, if you have no job and no income, even cheap goods are hard to come by.
He quoted an article by Peter Frase, which described four possible futures, given the question of abundance vs. scarcity, and whether the society is equitable or hierarchical (full article here: https://www.jacobinmag.com/2011/12/four-futures/).

              Equality       Hierarchy
Abundance     Communism      Rentism
Scarcity      Socialism      Exterminism

The first is a combination of abundance, mixed with equality, which the article describes as communism and the target utopian state of humanity.  He didn’t get into that part of it.
If we have abundance of resources and goods, which the technological age may provide (3D printers, automation of production reducing its price to be negligible), then one way to preserve a hierarchy of power and imbalance would be to license the ability to use the technology, to maintain an artificial scarcity of it.  This is similar to DRM-ing software, which can be copied infinitely with almost no costs, and maintaining digital rights and copy protection laws.  Then you “rent” the permission to get any sort of product or service you want, and you still have a money based hierarchy of those who have and those who don’t, even though technically there is the ability to provide everyone with anything.
It is possible that even with automation we will run into scarcity of resources (there’s only so many minerals and physical material available to be used in the world).  If we have scarcity of resources, then if equality is maintained among people, you have socialism, which again is a form of political government that he did not get into.
The last future is one where there is resource scarcity, but there are still powerful ruling elites who preserve a hierarchy of people.  In this case, there can be several ways to deal with the people who don’t have jobs but need resources:
A system of universal living wage could be used to ensure everyone has access to basic living.  This will divide society into two – those with jobs or resources, who can live at a higher quality of life, and those without them, who subsist on basic income.  Martinez says this type of situation happened before – at some points in the Roman Empire’s history, 30% of people were getting their daily bread from the government.  Today as well the welfare system is propping up a very large number of people who do not have or cannot get jobs – if you factor in people who get disability benefits, you get to over 20% in some states.  He notes that not everyone getting disability benefits actually has disabilities; some of them just can’t get jobs.  So the sum of unemployment and disability recipients represents the true size of the non-working population.  However, these are never stable situations, as the poor rarely put up with the disparity for long.
Another option is for the rich/powerful to physically move away to a place where the others can’t reach – there are a number of science fiction stories detailing this type of scenario (Elysium as an example).
The direst dystopian future is one where the masses are just killed off; again, there are science fiction stories around these types of scenarios (Logan’s Run, In Time).
In the past, those pushed into poverty would revolt; he gave the example of the Battle of Blair Mountain, in which disenfranchised coal miners in West Virginia staged the largest uprising in US history since the Civil War (most revolutions around the world are rooted in inequity).  In the Blair Mountain revolt, the miners lost after the US Army intervened in favor of the mine owners.  In a future where we are creating robotic soldiers far more powerful than any human could engage with, such an uprising will be even more impossible.  As an example, the work being done now by Boston Dynamics, which is creating robots for military applications:



This image shows a man trying to push the robot down, to demonstrate how good its balance is.  In a future conflict, this will literally be the match-up between humans and robots.


He didn’t really have a positive ending, so to avoid ending on a depressed note, he showed a picture of his newborn baby.

SXSW Day 4 session 1: Five factors influencing the future of UX design

Bill Akins, Rockfish Digital; Diane Edgeworth, Lululemon; Almaz Nanjappa, Momentus Software; Ed Valdez, Momentus Software

This panel actually talked about the impact of technology on retail, and user experience in a retail setting (which was not what I initially understood the session to be).

The five factors are: Simplicity, Ubiquity, Mobility, Technology and Connectivity.

Retail is still strong – over 90% of sales still happen in physical retail (source?).  Technology is augmenting retail – over 10,000 Pepper robots are already in use in Japan.
A sample project at 7-Eleven added weather data and past purchases to optimize the app experience: if the weather is cold, offer coupons or advertisements for hot drinks rather than cold ones.  Smart displays can also enhance the retail experience.
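As a sketch of how weather and purchase history might combine into an offer, here is an illustrative rule (names and thresholds are invented, not the actual project's logic):

```python
def pick_offer(temperature_c, past_purchases):
    """Choose a drink promotion from weather plus purchase history.
    (Illustrative rule only -- names and thresholds are invented.)"""
    drinks = [p for p in past_purchases if p in ("coffee", "iced tea", "cocoa")]
    if temperature_c < 10:
        preferred = "cocoa" if "cocoa" in drinks else "coffee"
        return f"coupon: hot {preferred}"
    return "coupon: iced tea" if "iced tea" in drinks else "coupon: cold drink"

print(pick_offer(4, ["coffee", "sandwich"]))   # coupon: hot coffee
print(pick_offer(28, ["iced tea"]))            # coupon: iced tea
```

The point is less the rule itself than the pattern: external context (weather) plus per-customer history feeding one small decision in the app.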
Another example: LensCrafters noticed a common problem with people trying on frames – when they take their glasses off to try a frame, they can’t see their face well because the frame has no lenses.  So they enhanced the mirror to take your picture from several angles, and then display what you look like wearing the frames from several directions.
Retailers are looking to create mixed-reality scenarios to pull people into stores.  They are also experimenting with things like ultrahaptics – an array of ultrasound speakers that projects sound with a tactile feel, so you can touch and feel it, creating virtual controls.
Smart carts, as in the Amazon test store, let you ring up items when you put them in the cart, and then checkout can happen automatically.
Another tested technology is overlaying visual images on physical objects (movie)
It’s not clear how VR can help in retail; it seems to be more of a gimmick.  It takes you out of the retail experience.  Augmented reality keeps you in the experience but enhances it.

Challenges of user experience enhancement in retail:

  • Updating and scaling: a lot of work to update and maintain tech.  For example, touch screens can get really sticky and dirty, and need to be constantly cleaned.
  • Adoption is better in a concept store first, to test a new technology out, and only if it works there, is it worthwhile to roll it out across more stores.


Thursday, March 16, 2017

SXSW Day 3 session 4: The future of conversational UI

Hector Ouilhet, lead designer for Google Search and Assist products

Coming on the heels of the previous session, which talked about the change from search to assist and the potential impact it may have on companies that provide services, was this lecture on the benefits of the personal assistant, by someone at Google working on it.

He started off discussing the work he did planning his own SXSW trip – finding flights, a hotel, etc.  He said he spent about 10 hours planning it out.  Then he played a brief voice interaction showing how it would work with an assistant, which lasted about a minute.  He emphasized that the benefit of the assistant is reducing the time between identifying what you want and getting it from technology (which he called "friction").
He reviewed the history of getting things done via the internet.  First there were portals, which indexed content into categories, like the yellow pages (the model familiar at the time).  Then search arrived and shifted the paradigm, removing the need to manually categorize the internet: you stated what you wanted and were given a number of best matches to choose from.  Then came the feed, which identifies relevant information and pushes it to you based on identifications or subscriptions you make (e.g. Facebook, Twitter); this was another paradigm shift because it tried to anticipate what you want and deliver it up front.  Chat apps are the next evolutionary step, giving you a conversation-like interface for finding information; personal assistants are a form of that using regular voice speech.
Where are we heading?
1. Smart everything - every physical object will be, in some way, smart.
2. Multi-user devices - objects will change from being personal objects (like phones) to shared objects (like smart appliances).  Interaction will be with any user of the device, not just its owner.
The simplest way to communicate with all of these devices would be voice; and you want a single interface as the gateway to the devices, so you don't have to build the communication intelligence into each one separately, or talk to each one with its own protocol.
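The single-gateway idea above can be sketched in code. This is a hypothetical illustration (none of these class or method names come from a real assistant API): each device implements one shared protocol once, and a single gateway parses the spoken command and dispatches it to whichever registered device is mentioned.

```python
# Hypothetical sketch: one assistant gateway in front of many smart devices.
# All class/method names are illustrative, not a real API.

class SmartDevice:
    """Common protocol every device implements once."""
    def __init__(self, name):
        self.name = name
        self.on = False

    def handle(self, action):
        # A real device would expose richer capabilities; this toy one
        # understands only on/off.
        if action == "turn on":
            self.on = True
        elif action == "turn off":
            self.on = False
        return f"{self.name} is {'on' if self.on else 'off'}"


class AssistantGateway:
    """Single entry point: one place to parse speech, many devices behind it."""
    def __init__(self):
        self.devices = {}

    def register(self, device):
        self.devices[device.name] = device

    def say(self, utterance):
        # Naive parsing: find which registered device the utterance mentions,
        # and treat the rest of the utterance as the action.
        for name, device in self.devices.items():
            if name in utterance:
                action = utterance.replace(name, "").strip()
                return device.handle(action)
        return "Sorry, I don't know that device."


gateway = AssistantGateway()
gateway.register(SmartDevice("kitchen light"))
gateway.register(SmartDevice("thermostat"))
print(gateway.say("turn on kitchen light"))  # → kitchen light is on
```

The point of the design is the one mentioned in the talk: the parsing intelligence lives in the gateway once, and devices only need to implement a small shared protocol.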

Moving beyond the actual message spoken - additional development will include the voice cadence, tone and expression to imbue even more understanding of the intent of the speaker (just like humans do).

What are some of the challenges we will see with this interface?

  • Intuitiveness of interface - smart devices add layers of capabilities to devices that used to have a very clear purpose.  This can cause cognitive dissonance and make the device difficult to understand.
  • Conversational interfaces lack the immediate visual feedback that regular devices provide, which helps you understand where a problem is.  For example, when you flip a light switch and the light doesn't turn on, you know the problem is with the light bulb or there's a power outage.  But if you say "turn on" to the light and it doesn't turn on, is the problem with the bulb, or the interface, or with understanding the command, or hearing it, or some other software issue?  It's hard to track the problem down.
  • Technical problems of using voice - learning accents, cultural language differences, speech impediments, etc.
  • Discoverability - how do you know what the device can do?  When you have physical switches, you can see what can be done.  With a voice interface there's no menu or visible cue to tell you what the device can do.
  • Human speech frequently assumes the listener understands context or visual cues, which the device might be missing.  For example, "turn on that light" - which light?  The assistant can't see what you're pointing at.
  • Audio is linear and non-persistent, as compared to visual interfaces, which can be non-linear and persistent.  For example, if there is a list of options, you have to wait to hear them all to be able to know which one to use; in a menu you get them all at once and can skip the first three to get to the fourth.
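The light-bulb debugging problem above can be made concrete with a toy pipeline. This is purely illustrative (the stages and their names are my own simplification): a voice command passes through several layers, any of which can fail, and without explicit instrumentation like this the user only sees that nothing happened.

```python
# Hypothetical sketch: a voice command passes through layered stages
# (microphone, speech recognition, intent parsing, device execution).
# Each toy stage returns None on failure so we can pinpoint the layer.

def run_voice_command(audio, pipeline):
    """Run each stage in order; report which layer failed, if any."""
    data = audio
    for stage_name, stage_fn in pipeline:
        data = stage_fn(data)
        if data is None:
            return f"failed at: {stage_name}"
    return f"ok: {data}"

def microphone(audio):
    return audio if audio else None            # was anything heard at all?

def speech_to_text(audio):
    return audio.lower() if audio.isascii() else None

def parse_intent(text):
    # This toy parser only understands one command.
    return ("light", "on") if "turn on" in text else None

def execute(intent):
    device, action = intent
    return f"{device} {action}"

pipeline = [
    ("microphone", microphone),
    ("speech-to-text", speech_to_text),
    ("intent parsing", parse_intent),
    ("device execution", execute),
]

print(run_voice_command("Turn on the light", pipeline))  # ok: light on
print(run_voice_command("Dim the light", pipeline))      # failed at: intent parsing
```

A light switch collapses all of these layers into one visible mechanism; a voice interface hides them, which is exactly why the failure is so hard to localize.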
What are some of the opportunities with voice interfaces?
  • Accessibility to all - no need to be tech savvy to use technology; everyone can do it regardless of education (although I would say that even though this is true for voice, it needs to be, and can be, true for any interface).
  • Device ubiquity - no need to carry a device with you at all times to interface with the world; all smart devices can be a portal for interfacing.
What needs to be done to get to this world:
  • Technology needs to adapt to us, not the other way around
  • Need to move beyond simple input/output interfaces
  • Need to design interfaces for speech
  • Need to move away from evolving products by adding features, toward evolving them by creating stories of how they are used (again, this needs to happen regardless of voice interfaces)
  • Need to create a persona for gluing together the different interfaces into one coherent interaction point and giving an experience across multiple devices
  • Teach the technology to understand the context of our speech - we understand what it is, technology needs to as well.  The tools we need for this are only just now being developed.
  • Need to understand that localization is not just language, it's the whole cultural frame of reference.
  • Need to strive towards conversations that are multi-modal, not just audio.



Wednesday, March 15, 2017

SXSW Day 3 session 3: AI Replaces search: the future of customer acquisition

A panel discussion with Amanda Richardson (Hotel Tonight), Brian Witlin (Yummly), Charles Jolley (Ozlo) and Rangan Majumder (Microsoft).
The panel discussed how AI, specifically through voice personal assistants (e.g. Siri, Cortana, Google Now, etc.), will be supplanting search.

With search, you get back a list of results from which you can choose.  With a voice personal assistant, you typically get back only one result, and that result is determined directly by the assistant operator (Apple, Google, and such).  How do companies make sure they are the one result selected?  The assistant gives the operator power over the service providers, especially aggregation service providers (sites that aggregate hotel services, like Hotel Tonight for example).  The operator can select whomever they want, making a deal with one provider to the exclusion of all others.  A more likely scenario is real-time bidding: if I ask the assistant to find me a hotel in Austin, the operator can hold a real-time bid among all hotel service providers, and the one that wins the bid is the one that gets used (as in advertising).
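The real-time bidding scenario could be sketched roughly like an ad auction. Everything here is an assumption for illustration (the provider names, bid amounts, and the second-price rule, which is common in advertising auctions but was not specified by the panel):

```python
# Hypothetical sketch: the assistant operator asks each provider for a bid
# on the user's request; the highest bidder wins and fulfills the request.
# Second-price rule: the winner pays the runner-up's bid, as in ad auctions.

def run_auction(request, providers):
    """providers: dict mapping provider name -> bid function for the request."""
    bids = {name: bid(request) for name, bid in providers.items()}
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, top_bid = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else top_bid  # second-price rule
    return winner, price

# Illustrative providers with fixed bids; real bids would depend on the request.
providers = {
    "ProviderA": lambda req: 2.50,
    "ProviderB": lambda req: 3.10,
    "ProviderC": lambda req: 1.75,
}

winner, price = run_auction("hotel in Austin", providers)
print(winner, price)  # → ProviderB 2.5
```

The key point from the session survives even in this toy version: the auction happens inside the operator's platform, invisibly to the consumer, which is what puts the operator in the position of power.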
The problem is that even if you win the bid, it doesn't mean the consumer is exposed to the service provider - they may only be exposed to the end product.  Taking the hotel example from above, even if Hotel Tonight wins the bid and offers a Marriott downtown, the assistant will communicate which hotel it selected, not that it got it from Hotel Tonight.  That means Hotel Tonight loses the customer relationship, which today is one of its key assets.  So these companies will need to shift their monetization strategies to anonymized backend services.

Strangely enough, the representatives on the panel didn't seem to be very aware of the implications a move to digital assistants would have on their companies, or didn't seem to mind.  I asked the panel directly whether they aren't concerned that the digital assistant will turn them into a backend database service and cost them all their customer relations.  Amanda Richardson challenged me back, asking why they would need to own the customer relations: they get paid per hotel booked, so they just want to get bookings.  I think she's missing the bigger picture, but perhaps there's something I myself am missing.  At any rate, the digital assistant seems to me to spell a bleak future for these types of companies.