Author and keynote speaker Kate O'Neill is known around the world as The Tech Humanist. Hear her thoughtful approach to keeping technology human and what it will take for emerging technology to be successful from a business standpoint.
How do we design technology that is both smart for business and good for people?
Hear her human-centered approach to voice and AI. Emily and Kate also discuss emerging voice tech issues such as deep fakes, and privacy concerns such as data mining by Facebook and other tech companies.
Topics and Timestamps:
03:15 How do we approach voice design in a human-centric way that is also good for business?
04:30 Weather skill example - take into account the context of what someone using the skill actually needs, like whether to bring an umbrella (see the sketch after the timestamps)
05:20 Businesses might build voice tech or other tech just to check a box, but it’s better to build for the person on the other end
06:00 Don’t ask, “What’s our AI strategy?” Instead, step back and ask, “What are we trying to accomplish as a business?” - Kate
06:20 Create alignment and relevance between the business and the people outside it
07:00 “Who are we building for, and how can we serve their needs?”
07:10 Avoid the unintended consequences of technology as it becomes capable of operating at such scale
07:35 Google Translatotron and deep fakes: Translatotron translates speech into another language while retaining the VOICE of the original speaker.
08:20 How should we approach technology that reminds us of the Babel fish from The Hitchhiker’s Guide to the Galaxy? Translatotron’s simultaneous translation preserves the sound of your voice. But go one step further: a sample of your voice is sufficient for machine learning (ML) and AI to synthesize your voice.
08:45 Sampling: Google would now have your voice - what will they do with it? Voice synthesis and deep fakes raise terrifying possibilities (overall: cool but scary)
09:30 Companies must govern themselves (e.g. Google)
09:50 Government has a responsibility to regulate privacy and data models
10:40 Kate doesn’t have smart speakers in her home because, she says, we don’t have a precedent for protecting user data
11:20 Facebook Ten Year Challenge - Kate’s tweet went viral in January 2019 about the trend of posting ten-year-old photos next to current ones; she pointed out that this data could be used to train facial recognition algorithms to predict aging
Kate’s related opinion piece in WIRED, “Facebook's '10 Year Challenge' Is Just a Harmless Meme—Right?”: “Opinion: The 2009 vs. 2019 profile picture trend may or may not have been a data collection ruse to train its facial recognition algorithm. But we can't afford to blithely play along.”
13:20 We have seen memes and games that ask you to provide structured information turn out to be data mining operations (e.g. Cambridge Analytica): we have good reason to be cautious
14:40 "Everything we do online is a genuine representation of who we are as people, so that data really should be treated with the utmost respect and protection. Unfortunately, it isn't always." - Kate O’Neill
15:00 Do we need government to regulate tech? Can it?
16:10 “Ask forgiveness, not permission” is clearly Facebook’s approach, so why do users seem so forgiving?
20:00 What might a future social network look like in which there are fewer privacy and data mining concerns?
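
A minimal sketch in Python of the umbrella idea from the 04:30 weather-skill example: instead of just reading back a forecast, the skill anticipates the need behind the question. The function names and forecast data below are invented for illustration; this is not a real Alexa or Google Assistant API.

```python
def get_forecast(city: str) -> dict:
    # Stand-in for a real weather API call.
    return {"condition": "showers", "high_f": 61, "precip_chance": 0.7}

def weather_response(city: str) -> str:
    forecast = get_forecast(city)
    reply = (f"Today in {city}: {forecast['condition']}, "
             f"high of {forecast['high_f']} degrees.")
    # The human-centric step: answer the need behind the question,
    # not just the literal question.
    if forecast["precip_chance"] >= 0.5:
        reply += " You'll probably want an umbrella."
    return reply

print(weather_response("Seattle"))
# Today in Seattle: showers, high of 61 degrees. You'll probably want an umbrella.
```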
Bonus info:
Deep fake (a portmanteau of "deep learning" and "fake") is a technique for human image synthesis based on artificial intelligence. It is used to combine and superimpose existing images and videos onto source images or videos using a machine learning technique known as a generative adversarial network (GAN).
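
For the curious, here is a toy Python/PyTorch sketch of the GAN training loop described above: a generator learns to produce synthetic samples while a discriminator learns to tell them from real ones. The dimensions and random "real" data are placeholders; actual deep fake systems train much larger networks on large image or audio datasets.

```python
import torch
import torch.nn as nn

LATENT_DIM = 16   # size of the random noise vector fed to the generator
DATA_DIM = 64     # stand-in for a flattened image patch or audio frame

# Generator: maps random noise to a synthetic sample.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 128), nn.ReLU(),
    nn.Linear(128, DATA_DIM), nn.Tanh(),
)

# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake).
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_batch = torch.randn(32, DATA_DIM)  # placeholder for real training data

for step in range(100):
    # 1) Train the discriminator to separate real samples from generated ones.
    noise = torch.randn(32, LATENT_DIM)
    fake_batch = generator(noise).detach()  # don't backprop into the generator here
    d_loss = (loss_fn(discriminator(real_batch), torch.ones(32, 1))
              + loss_fn(discriminator(fake_batch), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator.
    noise = torch.randn(32, LATENT_DIM)
    g_loss = loss_fn(discriminator(generator(noise)), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

The two networks improve in tandem: as the discriminator gets better at spotting fakes, the generator is pushed to produce ever more convincing ones, which is exactly what makes the resulting synthetic faces and voices so hard to detect.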
Read more about deep fakes and voice emulation, including the idea of voice skins and voice impersonation for fraud.