Thursday, April 13, 2017

SXSW Day 7 session 2: Can you lie to an MRI?

Panel: Alain Dagher (Montreal Neurological Institute, McGill University), Cameron Craddock (Nathan S. Kline Institute for Psychiatric Research), Daniel Margulies (Max Planck Institute for Human Cognitive and Brain Sciences), Emily Finn (Yale University)

Shifting from a polygraph to an MRI – moving from trying to measure physiological signs of anxiety to trying to identify what are the specific cognitive responses a human exhibits when lying.  Can it be done?
fMRI brain mapping relies on the iron in the blood's hemoglobin: blood flow and blood oxygenation in the brain can be measured with magnetic pulses.
There are several regions of the brain which are impacted by lying, but also by other things, so it’s not easy to tell what triggered a specific activity.  However, it may be possible to identify patterns in the brain that are indicative of something.
In a nutshell, how would you use an MRI to detect lies?

  • Put a person in an MRI, and direct them to answer questions, truthfully or falsely (knowing in advance which each reply will be)
  • Identify the patterns that emerge, and train the software to look for them.
  • Put the person back in and have the software look for the patterns it was trained on.
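The three steps above amount to a standard supervised-learning loop. A minimal sketch in Python, using scikit-learn and synthetic "scan" vectors in place of real fMRI data (the feature shift standing in for lying is entirely made up):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_scans, n_features = 200, 50

# Step 1: collect scans with known labels (0 = truthful answer, 1 = lie).
labels = rng.integers(0, 2, n_scans)
scans = rng.normal(size=(n_scans, n_features))
scans[labels == 1, :5] += 1.0  # invented: pretend lying shifts a few "regions"

# Step 2: train the software to recognize the pattern.
clf = LogisticRegression(max_iter=1000).fit(scans, labels)

# Step 3: put the person back in and classify a new scan.
new_scan = rng.normal(size=(1, n_features))
new_scan[:, :5] += 1.0  # this one is a lie
print("lie" if clf.predict(new_scan)[0] == 1 else "truth")
```

As the panel noted, a model like this is individualized: it is trained and tested on one person's scans, and nothing guarantees it transfers to anyone else.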

Clearly, as a lie detector it’s problematic – it only works if the person is consenting and cooperating.  Plus results are individualized.  Also, lying can take multiple forms, from trying to actively deceive to trying to hide a truth.
Can meaningful information be extracted from a single person, rather than averaging over multiple people?
You could map the connections of different nodes, and create a correlation matrix (image), or a connectivity profile.  Can a connectivity profile act as a “fingerprint” for an individual?
Research was conducted on 156 people, mapping matrices for multiple activities, to see if connectivity profiles could be matched to a person: given the profiles people had already generated, could you tell who a new profile belongs to?  The results showed a 70% success rate in matching profiles to people.
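The matching procedure can be sketched in a few lines: build a correlation-matrix "profile" per subject per session, then identify a new profile by finding the stored profile it correlates with most. Everything below is synthetic (random time series standing in for fMRI node signals), so the numbers are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_nodes, n_timepoints = 10, 20, 300

def connectivity_profile(ts):
    """Flatten the upper triangle of the node-by-node correlation matrix."""
    corr = np.corrcoef(ts)
    iu = np.triu_indices_from(corr, k=1)
    return corr[iu]

# Give each subject an individual "signature" signal plus per-session noise.
signatures = [rng.normal(size=(n_nodes, n_timepoints)) for _ in range(n_subjects)]
session1 = [connectivity_profile(s + 0.5 * rng.normal(size=s.shape)) for s in signatures]
session2 = [connectivity_profile(s + 0.5 * rng.normal(size=s.shape)) for s in signatures]

# Identify each session-2 profile: pick the session-1 profile it correlates with most.
matches = [int(np.argmax([np.corrcoef(p2, p1)[0, 1] for p1 in session1]))
           for p2 in session2]
accuracy = np.mean([m == i for i, m in enumerate(matches)])
print(f"identification accuracy: {accuracy:.0%}")
```

With clean synthetic data the toy accuracy is near perfect; real scans are far noisier, which is where figures like the 70% above come from.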
In the future, the hope is to be able to understand what the patterns tell about us.  Currently, the research was able to predict, at a low level of correlation, how a certain pattern is an indicator of fluid intelligence (the general ability to think abstractly, reason, identify patterns, solve problems, and discern relationships).  However, this is still far from indicative and there's a long way to go yet.
Another study was conducted to test attention.  Subjects lay in an MRI and were shown images in rapid succession.  They had one or two seconds to click a button if the image was of a certain type (say, a house) and _not_ click if it was of a different type (say, a car).  They did this for 40 minutes (!!!), and connectivity maps were later generated from their brain scans.  The researchers found two types of networks in the brain: a high attention network (maps of people who did well on the test) and a low attention network (maps of those who didn't).  They then took other people and scanned them while they were doing nothing, and looked for these networks, to try and predict who would do well on such a test.  They were able to show correlation: people who, at rest, scanned with high attention connectivity maps did well on the test, and those scanning with low attention connectivity maps did not do as well.  Researchers were also able to correlate these networks with children with ADHD.  A correlation was also found with people who took Ritalin.
It’s important to remember that these mappings represent physical brain states, not mental states.  Even if you could map a physical brain connectivity mapping of “sadness”, it may not match what a person is feeling, as the mental world is interpreted in a personal manner.  If you don’t feel sad, you won’t accept someone telling you that you are, even if they waved an MRI scan in front of you.

Additional research took 2000 teenagers who did not have drug experience, and tested them with gambling questions, while checking their value anticipation response.  They then followed them for four years, and were able to correlate the value anticipation responses and probability of drug abuse.  However, they were able to predict just as well with only the written questionnaires, the MRI scans did not provide better results.

Could these technologies be used to qualify people for jobs?  There are far better tools available today than MRI scans.  Down the road, when more networks are mapped, and more correlations can be made (hundreds of them), it may be more practical, but that is very far down the road.
What about misuse?  If you get a brain scan for medical purposes, could it be used for other things as well?  That’s still an open question but a lot depends on the environment and state of mind the person was in when they were scanned.

Can an MRI read thoughts?  Right now you could tell some very basic things about what people are dreaming, in broad categories, but it’s very, very far from any ability to detect specific thoughts.  There’s a lot of hype in the media on this question, with very little behind it.
How do you filter out noise?  For example, if while I’m being scanned, I’m feeling cold, or claustrophobic, or generally afraid, or otherwise distracted?  One way to eliminate noise is to use cognitive subtraction – flip tested activities “on” and “off”; if you’re looking to study brain maps for lies, have a subject answer five questions truthfully, then lie five times.  Subtract the resulting maps from each other to neutralize constant factors.
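A toy version of cognitive subtraction, with synthetic 1-D "maps" in place of brain volumes (the voxel indices and effect size are made up):

```python
import numpy as np

rng = np.random.default_rng(2)
n_voxels = 100
baseline = rng.normal(size=n_voxels)   # constant state: cold, anxious, distracted...
lie_effect = np.zeros(n_voxels)
lie_effect[10:15] = 2.0                # pretend these regions engage only when lying

# Five truthful answers and five lies, each map = baseline (+ effect) + noise.
truth_maps = [baseline + 0.1 * rng.normal(size=n_voxels) for _ in range(5)]
lie_maps = [baseline + lie_effect + 0.1 * rng.normal(size=n_voxels) for _ in range(5)]

# Subtract the averaged maps: the constant factors cancel, the lie effect remains.
contrast = np.mean(lie_maps, axis=0) - np.mean(truth_maps, axis=0)
print("strongest lie-related voxels:", np.argsort(contrast)[-5:])
```

Because the baseline appears in both conditions, it drops out of the contrast entirely; only the condition-specific activity survives.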
What about cultural differences? There are specific differences among different mother tongues, so additional variability comes into play here.  Even the shape of the brain can have an impact, so generalizations are very hard.

Wednesday, April 12, 2017

SXSW Day 7 session 1: AI - Actually still terrible

Kate Darling, MIT Media Lab and Nilesh Ashra, Creative Director at Wieden+Kennedy Lodge.

There is a lot of hype from companies around what AI can and can't do, focusing on successes, less on failures.  The discussion around AI has moved beyond research and practicality into the domain of hype.  This hype is potentially distracting from real valuable applications.
There is also confusion between AI as analytics and AI as human-like interaction.  The latter is a much harder problem than the former: it's much easier to build AI that mimics the analytical workings of the human mind than the interactive side of it.  At the latter, we are still a long way off.
As an example, call center speech bots can still be very frustrating; they are script based, and can navigate a conversation in a very limited domain.  Small changes in speech patterns can confuse the AI and so interaction with it can be very frustrating.  Humans have better referential context capabilities ("the other one"), and are better at taking and reacting to interruptions.

Being able to improve the way AI interacts with humans will require a lot more research in that domain.  A few examples of this type of interaction research:

  • Hitchbot - an early experiment in having a robot interact with humans in an open environment.  Hitchbot was created as a social experiment at a Canadian university.  The robot had a very basic humanoid form (basically a largish bucket with a head, hands, and legs) and very rudimentary communication skills, and its mission was to hitchhike first across Canada, then other places.  The research here was very rudimentary: just GPS-tracking Hitchbot, and tracking people's posted reactions to it on social media.  After going across Canada, Germany, and the Netherlands, it was vandalized while trying to cross the US.
  • Needybot - the Wieden+Kennedy advertising agency built a robot that has basic mobility but can't really do anything by itself, and needs to ask everyone around it for help.  This one was more sophisticated, having a camera and face recognition, so it could identify people it met.  Wieden+Kennedy set it loose in their offices.  The experiment focused on interaction effects, such as that people like being recognized by robots, or that silent robots don’t generally get help.

Sunday, April 9, 2017

SXSW Day 6 session 4: AI in America: preparing our kids

Andrew Moore, Dean of school of computer science, Carnegie Mellon

How does AI work?  Let's take the example of a question asked of a voice assistant on your phone: what happens when you ask Google's voice assistant a question, say, “show me a picture of a celebrity in an orange dress”?  Google wants to provide a response within 0.3 seconds – so how does it do it?  Here's a step by step breakdown:

  1. The phone's microphone generates the wave forms of the question and digitizes them.
  2. The digitized wave forms are encrypted and uploaded to Google's nearest data center.
  3. The wave form is analyzed, and converted into a number of guesses as to what was asked, each with a rank of how likely the guess is correct.
  4. Each of the guesses is sent to thousands of servers, who race each other to find answers for the question.
  5. Each server checks its cache of frequently asked questions to see if the question it receives matches; if it does it sends the results back.
  6. If not, it does a search on the words in the query, looking for results that match the largest number of words.
  7. Eventually each server returns the result it found to an evaluation server, which scores how likely each result is to be the right answer to the question asked, and identifies the top scores.
  8. The answers with the highest scores are sent back to the phone.  This happens before the evaluation server even gets all the answers back: servers that go beyond the provided SLA will "give up", so as not to waste time on an answer that will never be returned anyway.
  9. The phone displays the top results.
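Steps 4-8 describe a classic scatter/gather-with-deadline pattern. A rough sketch with simulated servers, an invented cache, and an invented SLA value (this is an illustration of the pattern, not Google's actual implementation):

```python
import concurrent.futures
import random
import time

# Illustrative cache of frequently asked questions: query -> (result, score).
CACHE = {"picture of a celebrity in an orange dress": ("cached-result", 0.9)}
SLA_SECONDS = 0.3  # invented deadline

def server_answer(query, server_id):
    # Step 5: check the cache of frequently asked questions first.
    if query in CACHE:
        return CACHE[query]
    # Step 6: otherwise do a (simulated) slower keyword search.
    time.sleep(random.uniform(0.01, 0.5))
    return (f"result-from-server-{server_id}", random.random())

def answer(query, n_servers=20):
    # Step 4: fan the query out to many servers at once.
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=n_servers)
    futures = [pool.submit(server_answer, query, i) for i in range(n_servers)]
    # Step 8: wait only until the deadline; slow servers "give up".
    done, not_done = concurrent.futures.wait(futures, timeout=SLA_SECONDS)
    for f in not_done:
        f.cancel()  # queued work is dropped; late answers are simply ignored
    pool.shutdown(wait=False)
    # Step 7: score the answers that made it back and return the best one.
    results = [f.result() for f in done]
    return max(results, key=lambda r: r[1])[0] if results else None

print(answer("picture of a celebrity in an orange dress"))
```

The key design choice is returning whatever the fastest servers produced by the deadline rather than waiting for every server, which is what keeps the end-to-end latency bounded.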

All of this happens in under a second.  So there's no magic in what Google does – just a lot of work, broken down across lots and lots of servers.

He then broke down the different components of AI as per the following diagram:

Nowadays, there are ready-made solutions for the individual pieces of AI, such as perceiving (software that identifies something in a picture or video) or software APIs that can be used to build projects.  Using these individual and ready-made solutions, even a person (or kid) who is not tech savvy, can aggregate them together into a meaningful AI application.  He gave a number of examples from camps the university held with children and teenagers, where after very quick training they were able to create their own applications using these building blocks.
He does recommend an educational foundation for all kids, in preparation for a world where AI is prevalent.  He says computer science doesn't have to be taught too early - 9th grade is probably optimal for most kids.  Before that, there are other basic skills that can be taught that will prepare kids for computer science.  He suggests the following curriculum:




Tuesday, April 4, 2017

SXSW Day 6 session 3: The creation of a hacker


Presented by Adam Tyler, chief innovation officer at CS Identity (and who looked like he got off the set of the Mr. Robot TV show)

The young generation is becoming more and more involved in hacking, cyber-crime, and fraud.  The increase of access is breeding a new generation of hackers, who start off as mostly curious, rather than malicious.

Today, cyber-crime accounts for over 50% of all crime reported in England.  This is fueled by a huge change in the cyber-crime market, which is copying the cloud companies and providing Crime as a Service.  DDOS (Distributed Denial of Service - bombarding someone's network resource until it can't cope with requests), ransomware (software that encrypts your data and then ransoms you for the decryption key), malware, and spam engines all come as "platform as a service" nowadays.  You can set up your criminal endeavor on the cloud, and the platform providers take 25% of the revenue.

The young demographic values digital life and digital assets more than money.  Data is the core piece of the digital life; its distribution is what the internet was created for.

Adam challenged us to guess the age of the hackers behind these hacks:

  • Xbox/Playstation hack - a DDOS attack that knocked 160 million users offline.  The purpose of the attack was advertising: after the attack the hackers sold the tools they used, so the attack was an ad for how good their software is.  Age of attackers: 18
  • JP Morgan Chase hack - $100 million stolen from banks.  Age of attacker: 36
  • TalkTalk hack - an SQL injection attack, which cost the telecom company 100,000 customers and $73 million.  Age of attacker: 16
  • Target attack - 2-3 million credit cards stolen and successfully sold on black market.  Age of attackers: 25
There is a large range of ages for hackers, although the age is trending downwards.
Common thinking is that becoming a hacker is hard - something you need to devote many years of intensive study to.  This is no longer true these days.

There are three basic hacker types:
  • Type 1: Script kiddies.
    • Motivation - glory among their peers.  On the internet they hold power which they may not hold in their real life.
    • Communication: open web forums
    • Attack methods: techniques learned from gaming and gaming related forums
    • Targets: other gamers
    • How do they get introduced to hacking?  Simple: via Google.  Adam gave a live example and googled "how to kick off other players".  This leads to a gaming forum (not a hacking forum!), with lots of free tools and techniques for kicking other players off whatever online game you like.  While you're at the forum, you're also exposed to other things - exploits, social engineering tricks, and so on.  He followed an ad selling stolen identities for a whole host of services and showed how easy it was to get a stolen Netflix, Xbox Gold, Hulu, Spotify, or whatever account.
  • Type 2: Enthusiasts
    • Motivation: financial gain
    • Communication method: dark web forums
    • Attack methods: phishing emails, exploit kits
    • Targets: Individuals, small businesses
    • How do they get introduced to the dark web?  Once again, Google.  In the past it used to be relatively difficult to get on the dark web; you had to use special browsers and know what to look for.  However, once Google created its own DNS service, any dark web site someone accessed through it would automatically get indexed and start showing up in searches.  As for the sites that require TOR browsers (special anonymizing browsers used to surf without exposing your identity), people built TOR proxies for them - doorways that let someone with a regular browser access the site - and thus they get indexed by Google as well.  So now, with simple searches on Google, you can access any corner of the dark web.
  • Type 3: Professionals
    • Motivation: massive financial gains
    • Communication: highly private communication methods
    • Attack methods: zero day exploits, malware
    • Targets: large corporate entities, financial institutions
Type 3 hackers are rare; what's driving the explosion in hacking are types 1 and 2.

How does one protect oneself?  This wasn't part of the presentation, but someone asked the question, and as usual, there is very little to be done.  You should update software frequently - make sure you have all security updates; you should never reuse passwords; and you should understand the world of hacking - be aware of the techniques employed by hackers so you are less likely to fall for them: phishing, clicking on dubious links, what ransomware is and how to protect against it, etc.

Saturday, April 1, 2017

SXSW Day 6 session 2: Towards more humane tech

Anil Dash, CEO of Fog Creek

Code developers worry about what happens if their software doesn’t succeed, but not so much about what will happen if it becomes very successful and everyone uses it.  This sometimes causes unintended consequences.
Technology is used ubiquitously, but not everyone who uses it trusts it.  Ethics is a mandatory part of training in business schools, medical schools, and law schools, but not in most technology and computer schools.
There is a lot of evidence that IT is a harsh and inhumane environment.  Women, especially, are not accepted in the industry or find it a hard environment.  Diversity in tech is low – there is sexism and ageism.  Also, a lot of the recruitment in tech is referral based, which tends to keep the same profile of people – people tend to keep networks of people like them.

Evolution of markets as an example of the problems tech can introduce:

  • Old markets – in the old pre-tech markets, buyers and sellers connected naturally.  I could get to any buyer inside my geographic region, or I could venture out to find buyers, or I could mail-order.
  • eBay – eBay introduced a more efficient way of connecting buyers to sellers, and removed most of the geographic limitations.  Buyers had better access to more sellers and vice versa.
  • Google – The sophisticated search algorithm promises a better match for your needs as a buyer, but no one really knows what the algorithm is, so no one can tell if it provides better access, or whether it’s directed based on Google’s interests.
  • Amazon – Amazon represents complete control of the markets: they can decide who sells through their framework, there’s no clear understanding who is listed first in search results, and they themselves are beginning to enter as sellers; their data provides them the ability to identify the best, most lucrative parts of the market and focus solely on those.  So they are substantially disrupting the relationship between buyers and sellers.
  • Uber – Got rid of markets altogether: buyers can’t choose the ride, sellers can’t determine price.  Also, Uber is substantially subsidizing its service, not only below traditional taxis, but in some places below public transportation.  In essence it is engaging in dumping.
  • Facebook – their algorithm decides what each reader sees and what ads each reader reads, and their market is not really a market.

Facebook, Google and Uber are creating fake markets – all apps lead to fake markets.  Even worse, we are moving towards no markets: when you have automated personal assistants, they will determine the sellers for you, and you won’t even necessarily know who they are.
We need to bring ethics considerations into the industry, but this collides with the values of technology (then again, the same can be said to more traditional industries, and it’s still done there).  On a personal level, you should choose carefully who you work with, what apps you use and so on, and make sure the tech you use reflects the values you believe in.

SXSW Day 6 session 1: A tale of future cities

A panel with Paula Chowles (Wired), Andrew Bolwell (Chief Disruptor, HP Inc), Alex Rosson, (Shinola Audio), and Joshua Kaufman (Wisome VC)

In 1950, 30% of the population was living in urban areas; today it's 54%, and by 2050 it'll be 66%.  Should we, or even can we, put the brakes on urbanization?
1.6 million people move to cities every day.  Cities use up less than 3% of the world's land mass, but 60-70% of its energy.  It does not look as though urbanization will stop.

How should cities navigate public/private infrastructure?  The growth rate of cities and their infrastructure needs far outstrips the ability of the private sector to provide for it; we'll need to re-envision infrastructure.  Building roads, bridges, and parking is expensive and time consuming and usually provides very little return on investment.  Plus, in some areas the task is too expensive to even consider: by the end of the century 40% of the earth's population will be African, and the infrastructure problems there are the most challenging.
Cities will need to return to more traditional, social infrastructure solutions: public transportation, reducing cars in cities (which can be done through taxation) to reclaim parking space, and building bicycle lanes and escalators instead of roads.  Bicycles are great social equalizers, and escalators are a very cost-efficient transportation method.

There is also a need to increase cities' diversity of business and employment - single-industry cities can suffer immensely when the one industry fails (Las Vegas, Detroit).

Violence in cities - technology can help alleviate some of these issues as well.  Some US cities have deployed systems that triangulate the sound of gunshots for faster response times.  Additional research has shown that violence spreads in patterns similar to contagion, and the same type of containment used for disease can be applied to containing violence.
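The gunshot-location systems mentioned work on time differences of arrival (TDOA): the same bang reaches each microphone at a slightly different time, and those differences pin down the source. A toy 2-D sketch with made-up sensor positions, using a brute-force grid search rather than a real solver:

```python
import itertools
import math

SPEED_OF_SOUND = 343.0  # m/s

# Invented microphone positions (meters) and a known "true" source to recover.
sensors = [(0.0, 0.0), (500.0, 0.0), (0.0, 500.0), (500.0, 500.0)]
true_source = (120.0, 340.0)

# Simulated arrival times; the unknown emission time cancels out in differences.
arrivals = [math.dist(true_source, s) / SPEED_OF_SOUND for s in sensors]

def tdoa_error(point):
    """Mismatch between observed and expected pairwise arrival-time differences."""
    expected = [math.dist(point, s) / SPEED_OF_SOUND for s in sensors]
    return sum(
        ((arrivals[i] - arrivals[j]) - (expected[i] - expected[j])) ** 2
        for i, j in itertools.combinations(range(len(sensors)), 2))

# Search a 5 m grid over the covered area for the best-fitting source location.
best = min(((x, y) for x in range(0, 501, 5) for y in range(0, 501, 5)),
           key=tdoa_error)
print("estimated source:", best)
```

Real deployments solve the same geometry with noisy microphone clocks and many more sensors, but the principle is identical: each pairwise time difference constrains the source to a hyperbola, and the intersection locates the shot.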