For the longest time, I would just drop by the barber shop in the hopes they had an opening. Why? Because I didn’t want to make a phone call to schedule an appointment. I hate making phone calls. What if they don’t answer and I have to leave a voicemail? What if they do answer and I have to talk to someone? I’m fine with in-person interactions, but there’s something about phones. Yuck. So I initially greeted the news that Google Duplex would handle phone calls for me with great glee.
Of course it’s not that simple. A voice-enabled AI that can pass for human is ripe for abuse. Imagine the phone scams you could pull.
The potential for phone scams using Google Duplex is breathtaking. Ah to be young, morally unencumbered and in possession of a list of 200,000 retiree phone numbers! https://t.co/0zwUEj4v3k
— Pinboard (@Pinboard) May 9, 2018
I recently called a local non-profit that I support to increase my monthly donation. They did not verify my identity in any way. So that's one very obvious avenue for mischief. I could also see tech support scammers adding this to their arsenal: if not to conduct the fraud itself, then to pre-screen targets so that humans only have to talk to likely victims. It's efficient!
Anil Dash, among many others, pointed out the apparent lack of consent in Google Duplex:
This stuff is really, really basic, but: any interaction with technology or the products of tech companies must exist within a context of informed consent. Something like #GoogleDuplex fails this test, _by design_. That's an unfixable flaw.
— anildash.com (@anildash) May 9, 2018
Google's insertion of "um" and other verbal placeholders into Duplex makes it seem like the company is trying to hide that callers are talking to an AI. In response to the blowback, Google has said it will disclose when a bot is calling:
One update about Duplex is that before this launches, we're going to have the system disclose that it is an automated agent.https://t.co/B22R82s5d5 https://t.co/bx28kV5Dug
— Jeff Dean (@🏡) (@JeffDean) May 12, 2018
That helps, but I wonder how much thought Google has given to abuse scenarios. Duplex will certainly help people whose disabilities make using the phone difficult, and it can save time for the Very Important Business Person™, too. But will it expand the scale of phone fraud? Could it mount a denial-of-service attack against a business's phone lines? Could it be used to harass journalists, advocates, abuse victims, and others?
As I read news coverage of this, I realized that my initial reaction didn’t consider abuse scenarios. That’s one of the many reasons diverse product teams are essential. It’s easy for folks who have a great deal of privilege to be blind to the ways technology can be misused. I think my conclusion is a pretty solid one:
In 2018, if you can't say "here are the abuse scenarios we considered and how we addressed them", your product is not ready for launch.
— Ben Cotton is no longer here (@FunnelFiasco) May 9, 2018
The tech sector still has a lot to learn about ethics.
https://twitter.com/BenedictEvans/status/994293491948650496
I was discussing this with some other attendees at the Advanced Scale Forum last week. Too many computer science and related programs require no coursework in ethics, philosophy, or the like. Most of computing isn't really about computers; it's about the humans and societies those computers interact with. We see the effects play out in open source communities, too: anything that's not code is immediately devalued. But the last few years should teach us that code without consideration is dangerous.
Ben Thompson had a great article on Stratechery last week comparing the approaches of Apple and Microsoft with those of Google and Facebook. In short: Apple and Microsoft are building AI that enhances what people can do, while Google and Facebook are building AI that does things so people don't have to. Both are needed, but the latter raises far more ethical concerns.
There are no easy answers yet, and in a few years tools like Google Duplex will probably be so ubiquitous that we won't even notice them. The ethical issues will be addressed at some point; the only question is whether that happens proactively or reactively.