Google Duplex and the future of phone calls

For the longest time, I would just drop by the barber shop in the hopes they had an opening. Why? Because I didn’t want to make a phone call to schedule an appointment. I hate making phone calls. What if they don’t answer and I have to leave a voicemail? What if they do answer and I have to talk to someone? I’m fine with in-person interactions, but there’s something about phones. Yuck. So I initially greeted the news that Google Duplex would handle phone calls for me with great glee.

Of course it’s not that simple. A voice-enabled AI that can pass for human is ripe for abuse. Imagine the phone scams you could pull.

I recently called a local non-profit that I support to increase my monthly donation. They did not verify my identity in any way. So that’s one very obvious avenue for mischief. I could also see tech support scammers using this as a tool in their arsenal: if not to conduct the fraud itself, then to pre-screen targets so that humans only have to talk to likely victims. It’s efficient!

Anil Dash, among many others, pointed out the apparent lack of consent in Google Duplex.

Google’s insertion of “um” and other verbal placeholders into Duplex makes it seem like they’re trying to hide that it’s an AI. In response to the blowback, Google has said it will disclose when a bot is calling.

That helps, but I wonder how much thought Google has given to abuse scenarios. It will definitely be helpful to people with disabilities that make using the phone difficult. It can be a time-saver for the Very Important Business Person™, too. But will it be used to expand the scale of phone fraud? Could it execute a denial-of-service attack against a business’s phone lines? Could it be used to harass journalists, advocates, abuse victims, and others?

As I read news coverage of this, I realized that my initial reaction didn’t consider abuse scenarios. That’s one of the many reasons diverse product teams are essential. It’s easy for folks who have a great deal of privilege to be blind to the ways technology can be misused. I think my conclusion is a pretty solid one:

The tech sector still has a lot to learn about ethics.

I was discussing this with some other attendees at the Advanced Scale Forum last week. Too many computer science and related programs do not require any coursework in ethics, philosophy, or the like. Most of computing is not about the computers themselves, but about the humans and societies those computers interact with. We see the effects play out in open source communities, too: anything that’s not code is immediately devalued. But the last few years should teach us that code without consideration is dangerous.

Ben Thompson had a great article on Stratechery last week comparing the approaches of Apple and Microsoft with those of Google and Facebook. In short: Apple and Microsoft are working on AI that enhances what people can do, while Google and Facebook are working on AI that does things so people don’t have to. Both are needed, but the latter raises far more ethical concerns.

There are no easy answers yet, and it’s likely that in a few years tools like Google Duplex won’t even be noticeable because they’ve become so ubiquitous. The ethical issues will be addressed at some point. The only question is whether that happens proactively or reactively.

Silicon Valley has no empathy

That’s not quite fair. The tech industry has no empathy, regardless of geography. And it’s not fair to say “no empathy”, but so many social issues around technology stem from a lack of empathy. I’m no half-Betazoid Starfleet counselor, but in my view there are two kinds of empathy: proactive and reactive.

Reactive empathy is, for example, feeling sad when someone’s cat dies. It’s putting yourself in the shoes of someone who has experienced a Thing. Most functional humans (and yes, I’m including the tech sector here) have at least some amount of reactive empathy. Some more than others, of course, but it’s there.

Proactive empathy is harder. That’s imagining how someone else is going to experience a Thing. It requires more imagination. Even when you know you have to do it, it’s a hard skill to practice.

I touched on this a little bit in a post a few weeks ago, but there I framed it as a lack of ethics. I’m not convinced that’s fully the case. More often, the issues are better attributed to a lack of empathy. You know why you can’t add alt-text to GIFs in tweets? Because Silicon Valley has no empathy.

I was thinking about this again last week as I drove down to Indianapolis. I had to pass through the remnants of Tropical Storm Cindy, which meant some very heavy downpours. Like a good citizen, I tried to report issues on Waze so that other drivers would have some warning. As it turns out, “tropical deluge” is not a weather option in Waze. Want to know how I can tell it was developed in the Valley?

It’s so easy to say “it works for me!” and then move on to the next thing. But that’s why it’s so important to bring in people who aren’t like you to help develop your product. Watch how others experience it and you’ll probably find all sorts of things you never considered.

Ethics in technology

Technology has an ethics problem. I don’t mean that it’s evil, although I’d forgive you for thinking that. Just take a look at Theranos or Mylan, or Uber’s parade of seemingly unending scandals. So yes, there are some actors for whom “they lack a moral compass” is the charitable explanation. But the main problem is that we spend so little time thinking about ethics.

It’s too easy to think that because your intent is good, your results will be good, too. But good intent is not sufficient. It’s important to consider impacts as well, especially the impacts on people not like you. (Note that I use “you” to avoid awkward wording. I’m guilty of this as well.) And when you do consider the impacts, don’t be Robert Moses, whose parkway overpasses were reportedly designed too low for the buses that poorer New Yorkers depended on. Does your new web interface make it harder for people who use screen readers? Is your insulin meter easy to misinterpret for someone whose blood sugar is off?
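To make the screen-reader question concrete, here’s a minimal sketch of how little it takes to get this right on the web; the asset path and description below are hypothetical, invented for illustration:

    // TypeScript, browser DOM. The image path and alt text are made up.
    const chart = document.createElement("img");
    chart.src = "/images/q3-signups.png"; // hypothetical asset
    // Without the next line, a screen reader has nothing useful to announce.
    chart.alt = "Line chart: sign-ups grew from 1,200 in July to 4,800 in September";
    document.body.appendChild(chart);

One line of effort from the author turns an invisible blob into something a blind user can actually consume. The empathy gap is rarely a technical gap.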

The work we do in the technology sector every day can have a significant impact on people’s lives. And yet ethics courses are often an afterthought in college curricula. Of course, many in tech are self-trained with no real professional body to provide guidance. This means they get no exposure to professional ethics at all. It’s no wonder that we, as an industry, ignore our ethical obligations.

Actually, it’s about ethics in book reviews

Bruce Schneier shared a story earlier this month about how Amazon is apparently mining information to flag book reviews when the reviewer has a relationship with the author. I write book reviews (though I don’t post them to Amazon), so this seems relevant to my interests. I can see why Amazon would do something like this. People buy books, in part, based on reviews. If Amazon’s reviews are credible, people will be more likely to buy well-reviewed books. Plus: ethics!

The first few purchases would likely be unaffected, since it takes a buyer a while to form an opinion of the reviews’ credibility. And even then, how much stock do people put in online reviews of any product or service? I tend to only look at reviews in aggregate, unless the specific reviewer has established credibility.

I hope that my occasional book reviews have established some sort of credibility with my ones of readers. I certainly try to make it clear when I might have a bias (e.g. disclosing stock ownership or a personal friendship). Mostly, though, I’m motivated to give accurate reviews in order to advance my own thought leadership. I’m very self-serving sometimes.

On the whole, I appreciate that Amazon is trying to keep reviews fully disclosed. I just don’t think they’re doing it very well. If a reviewer has a relationship with the author and that relationship is properly disclosed, there’s no reason to suppress the review.

Full disclosure: I own a small number of shares in Amazon.

Is storm chasing unethical?

Eric Holthaus wrote an article for Slate arguing that storm chasing has become unethical. This article has drawn a lot of response from the meteorological community, and not all of the dialogue has been productive. Holthaus makes some good points, but he’s wrong in a few places, too. His biggest sin is painting with too wide a brush.

At the root of the issue is Mark Farnik posting a picture of a mortally wounded five-year-old girl. The girl was injured in a tornado that struck Pilger, Nebraska, and succumbed to her injuries a short time later. To be perfectly clear, I have no problem with Farnik posting the picture, nor do I have a problem with him “profiting” off it. Photojournalism is not always pleasant, but it’s an important job. To suggest that such pictures can’t be shared, or even taken, is to do us all a disservice. Nineteen years on, the picture of a firefighter holding Baylee Almon remains the single most iconic image of the Oklahoma City bombing.

None of this would have come up had Farnik not posted the following to Facebook: “I need some highly photogenic and destructive tornadoes to make it rain for me financially.” That’s a pretty awful statement. While I enjoy tornado videos as much as anyone, I prefer the tornadoes to occur over open fields. Nobody I know ever wishes for destruction, and I’d be loath to associate with anyone who did. But this one sentence served as an entry point for condemning an entire hobby.

Let’s look at Holthaus’ points individually:

  1. “Storm chasers are not saving lives.” Some chasers make a point of reporting weather phenomena to the local NWS office immediately; some do not. Some will stop to render assistance when they come across damage and injuries; some will not. In both cases, my own preference is for the former. Patrick Marsh, the Internet’s resident weather data expert, found no evidence that the increase in chasers has had any effect on tornado fatalities. In any case, not saving lives is hardly a condemnation of an activity. Golf is not an inherently life-saving avocation, but I don’t see anyone arguing that it’s unethical.
  2. “Chasing with the intent to profit… adds to the perverse incentive for more and more risky behavior.” Some people act stupidly when money or five minutes of Internet fame are on the line. This is hardly unique to storm chasing. Those chasers who put themselves or others in danger are acting stupidly; the smart ones place a premium on safety. Holthaus adds that the glee chasers often express in viral videos is disrespectful to the people who live in a storm’s path and may be adversely affected by it. Also true. The best videos are shot from a tripod and feature quiet chasers.
  3. “A recent nationwide upgrade to the National Weather Service’s Doppler radar network has probably rendered storm chasers obsolete anyway.” Bull. Dual-polarization radar does greatly aid the radar detection of debris, but ground truth is still critical. Radar cannot determine if a wall cloud is rotating. It cannot determine if a funnel cloud is forming. It cannot observe debris that does not exist (e.g. if a tornado is over a field). If you wait for a debris signature on radar, you’ve already lost. In a post to the wx-chase mailing list, NWS meteorologist Tanja Fransen made it very clear that spotters are not obsolete. To be clear, spotters and chasers are not the same thing, even if some people (yours truly, for example) engage in both activities.

The issue here is that in the age of social media, it’s easier for the bad eggs to stand out. It’s easy to find chasers behaving stupidly; sometimes they even get their own cable shows. The well-behaved chasers, by their very nature, tend not to be noticed. Eric Holthaus is welcome to stop chasing; that’s his choice. I haven’t chased in several years, but that’s more due to family obligations than anything else. I have chased, and will continue to chase, with the safety of myself and others as the top priority.

Scattered thoughts on sysadmin ethics

Last week, a Redditor posted a rant titled “why I’m an idiot, but refuse to change my ways.” I have to give him (or her, but let’s stick with “him” for the sake of simplicity and statistical likelihood) credit for recognizing the idiocy of the situation, but his actions in this case do a disservice to the profession of systems administration. My initial reaction was tempered by my assumption that this person is early in his career, and by the fact that I could see some of myself in the post. But the more I considered it, the more I realized that even in my greenest days, I did not consider unplanned outages to be a license for experimentation.

Not being in a sysadmin role anymore, I’ve had the opportunity to consider systems administration from the perspective of a learned outsider. I was pleasantly surprised to see that the responses to the poster were fairly aghast. There are a great many ethical considerations for sysadmins, owing partly to the responsibility of keeping business-critical services running and partly to the broad access admins have to business and personal data. So much of the job is knowing the appropriate behavior, not just having the appropriate technical skills.

This may be the biggest benefit of a sysadmin degree program: teaching future systems administrators the appropriate professional ethics. I am by no means trying to imply that most sysadmins are lacking. On the contrary, almost all of the admins I’ve encountered take their ethical obligations very seriously. Nonetheless, a strain of BOFHism still runs through the community. As the world becomes increasingly reliant on computer systems, a more rigorous professional ethic will be required.

The Terry Childs case

If you pay much attention to technical news, you probably have heard of Terry Childs.  Childs is the network admin formerly employed by the City of San Francisco who was arrested in 2008 after he was fired for insubordination and subsequently refused to give his supervisor the passwords for the FiberWAN routers.  If you know this much, you probably also heard that he was found guilty of one felony count on Tuesday.  For the sake of continuing this paragraph, I’ll assume you heard that.  Since you know this, I think it’s fairly safe to assume that your response to his conviction falls into one of two summaries: “he had it coming” and “this is an outrage.”

The prevailing mood on Slashdot and elsewhere seems to favor the latter summary. My own take is more toward the former. I’m not sure if that’s because I’m a short-hair type (side note: in my experience, there are two broad classifications of admins, short-hair and long-hair, and there’s often a stark behavioral and mindset difference between the two; maybe I’ll write about that at some point), or because I’m still a youngin, or just because I’m being more sensible than everyone else.

My opinion on the case has softened a bit since it first broke.  Initially, the city was claiming that Childs had booby-trapped systems so that they would fail if anyone tried to gain access after he left.  As it turns out, things continued to run smoothly after Childs was fired.  There was a lot of stupid surrounding this case, and neither side comes out particularly sympathetic.  InfoWorld’s Paul Venezia had a good summary of the case in July 2008.

I don’t fault Terry Childs for refusing to give the passwords to people who asked for them, as the city had a very sensible password policy in place (don’t give user or system account passwords to anyone. The End).  What he didn’t do was put the passwords in the appropriate central repository.  I can understand his reasoning — we’ve all had incompetent coworkers that we didn’t want to share a password with, but sometimes that’s what we have to do.

Perhaps the city’s biggest mistake was letting Childs “own” the FiberWAN in the first place. By all accounts, it was a pretty brilliant design, and every artist should be proud of the work they do, but that pride doesn’t make the work their property. Let’s face it: except in very rare cases, the work an admin does for his employer is the property of that employer. We all like to think of systems as “ours”, but the reality is that we’re just caretakers, even when we design the system. Think of a gardener as an analogue: the garden may be the gardener’s design and labor, but it belongs to whoever owns the land.

System/network/database/whatever-else admins have access to a great deal of sensitive information: grades in education, financial or research data in the public sector, medical records in hospitals, and so on. There is definitely a compelling need to restrict access in a sensible, responsible manner, but that need must be balanced against the need to increase the bus factor. There should always be at least one other person who can get to the passwords in case something unfortunate happens to the person with primary responsibility, even if that person is only authorized to retrieve them in an emergency.
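As a minimal sketch of what that arrangement can look like (an illustration only, not anything San Francisco actually ran), the primary admin can seal a password with a key held by a second custodian. Routine access stays restricted, but no single arrest, accident, or resignation locks the organization out:

    // Break-glass password escrow, sketched in TypeScript using Node's
    // built-in crypto module. Every name and value here is hypothetical.
    import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

    // The custodian's key lives with a second person (or in a safe), never
    // on the admin's own systems. AES-256-GCM authenticates the sealed blob.
    const custodianKey = randomBytes(32);
    const iv = randomBytes(12);

    // The admin seals the password; the ciphertext can sit in the central
    // repository without exposing the password itself.
    const cipher = createCipheriv("aes-256-gcm", custodianKey, iv);
    const sealed = Buffer.concat([
      cipher.update("fiberwan-router-password", "utf8"), // hypothetical secret
      cipher.final(),
    ]);
    const authTag = cipher.getAuthTag();

    // In an emergency, the custodian produces the key, and the use is logged
    // so break-glass access is visible after the fact.
    console.log(`break-glass access at ${new Date().toISOString()}`);
    const decipher = createDecipheriv("aes-256-gcm", custodianKey, iv);
    decipher.setAuthTag(authTag);
    const recovered = Buffer.concat([decipher.update(sealed), decipher.final()]);
    console.log(`recovered: ${recovered.toString("utf8")}`);

In practice you would reach for an existing secrets manager rather than hand-rolled crypto; the point is the shape of the arrangement: restricted by default, recoverable by design, and auditable when the glass gets broken.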

Childs also failed to play nice with others, and that’s the only reason we’ve heard about this at all.  Allegedly, he harassed a new manager to the point where she locked herself in a room to get away from him.  Like it or not, admins have to deal with other people, and that’s often the skill that is most lacking.  However, it is also perhaps the most necessary.  Technical position or no, we all need to be able to manage our role in office politics.  I sometimes think that should be a required class for sysadmins.  Maybe someone could set up a certification program?