Conference talks: “how” versus “why”

Recently in the #public_speaking channel on Freenode, we were discussing two types of conference talks: the “how” talks and the “why” talks. SomeKittens said:

too many talks are “how” when I really want to hear “why”

I couldn’t agree more. I struggle with “how” talks at conferences because conferences are a fire hose of information and it can be hard to take it all in, never mind retain it. By the time I get back to real life and am ready to implement this new thing I’ve learned, I have forgotten so much. If I’m lucky, I can watch the recorded version a few months later. But then why did I go to the session in the first place?

“How” talks are also often meaningless without the context of “why”. What good is knowing how to frobble the bobulator if I don’t know why a bobulator needs to be frobbled in the first place?

“How” talks are often very specific. A certain person in a certain organization accomplished a certain task in a certain way. How much of that is applicable to another person in another organization? Even if they want to accomplish the exact same task, the conditions aren’t the same.

“Why” talks tend to be more about identifying and presenting principles that can be broadly applied. As genehack pointed out, they tend to be stories. Stories make for much more engaging talks.

If you map a talk to written form, a “how” talk is like a blog post. It might give a bit of introductory context (and if not, it’s a bad post), but then it gets straight into the matter at hand. There’s a well-defined flow and set of steps. It’s very amenable to copying and pasting.

A “why” talk is more like a book or maybe a magazine. You’re not going to copy and paste from it. You may put it down partway through, mull it over, and then pick it back up later. The aim is less about accomplishing a particular task and more about developing a mental framework.

When you’re developing a conference presentation, come up with whatever you want. But at least consider making it a “why” talk.

Silicon Valley has no empathy

That’s not quite fair. The tech industry has no empathy, regardless of geography. And it’s not fair to say “no empathy”, but so many social issues around technology stem from a lack of empathy. I’m no half-Betazoid Starfleet counselor, but in my view there are two kinds of empathy: proactive and reactive.

Reactive empathy is, for example, feeling sad when someone’s cat dies. It’s putting yourself in the shoes of someone who has experienced a Thing. Most functional humans (and yes, I’m including the tech sector here) have at least some amount of reactive empathy. Some more than others, of course, but it’s there.

Proactive empathy is harder. That’s imagining how someone else is going to experience a Thing. It requires more imagination. Even when you know you have to do it, it’s a hard skill to practice.

I touched on this a little bit in a post a few weeks ago, but there I framed it as a lack of ethics. I’m not convinced that’s fully the case. More often, issues are probably more correctly attributed to a lack of empathy. You know why you can’t add alt-text to GIFs in tweets? Because Silicon Valley has no empathy.

I was thinking about this again last week as I drove down to Indianapolis. I had to pass through the remnants of Tropical Storm Cindy, which meant some very heavy downpours. Like a good citizen, I tried to report issues on Waze so that other drivers would have some warning. As it turns out, “tropical deluge” is not a weather option in Waze. Want to know how I can tell it was developed in the Valley?

It’s so easy to say “it works for me!” and then move on to the next thing. But that’s why it’s so important to bring in people who aren’t like you to help develop your product. Watch how others experience it and you’ll probably find all sorts of things you never considered.

Who gets your Facebook messages after you die?

Last month, a court in Germany ruled that Facebook could not be compelled to give a deceased teenager’s parents access to her account. The girl died after being struck by a train. Her parents, trying to determine whether it was a suicide, wanted to look for evidence that she had been bullied. The initial court ruled in favor of the girl’s parents, but Facebook prevailed on appeal.

This is an excellent example of the “hard cases make bad law” adage. The girl’s parents argued that the account was the digital equivalent of a diary, which is an inheritable item. I understand their argument. As a parent myself, I don’t doubt that I would make the same argument were I in their place. But I think the appeals court made the right decision here, although it took some thought to get there.

It’s more than just a diary

The “it’s the same as a diary” argument makes sense only if you intentionally exclude the ways it’s not. Yes, people use Facebook to share personal musings and reflections the same way they might in a diary or journal. However, Facebook (and other social media) have an interactivity that a diary does not.

This goes beyond the fact that others may leave comments on posts. The owner of an account is not necessarily the originator of the content within it: the messages may be initiated by someone else. Denying the girl’s parents access to the account is not really about protecting her privacy; it’s about protecting the privacy of those she has communicated with.

But that’s the point, right?

The girl’s parents wanted to find evidence of bullying. Why should the privacy of the bullies be protected (in the very narrow context of their messages to the girl)? Because they’re probably not the only people who sent the girl messages. What if another friend had confided in the girl about personal matters? What right do the girl’s parents have to that communication? None, of course.

I have a hard time justifying why the girl’s account should be made available to anyone, given the risk of harm to innocent third parties. If the situation were different – if the police or a prosecutor were to ask for specific searches as part of a case – that would be more reasonable, in my opinion. In that case, the structure and process of the investigation would minimize the harm of disclosure.

This is a hard problem

In the pre-digital age, it was less complicated. Conversations that didn’t happen face-to-face (or on the telephone) probably happened via letter. Any letters that were not destroyed became part of the estate. Some heirs probably destroyed them, others not. And though there are many threats to privacy these days, the electronic age has made possible a form of privacy that was hitherto unknown.

I’m certainly in favor of people being able to explicitly opt in to allowing someone to inherit their accounts. And not all accounts are created equal. When I die, I’d like to think someone would keep my meager website around in order to provide a legacy of sorts. But I’d also like to think that my death won’t result in the exposure of correspondence my friends have sent me in confidence. It’s not my privacy I want to protect after I die, it’s the privacy of my friends.

Ethics in technology

Technology has an ethics problem. I don’t mean that it’s evil, although I’d forgive you for thinking that. Just take a look at Theranos or Mylan, or Uber’s parade of seemingly unending scandals. So yes, there are some actors for whom “they lack a moral compass” is the charitable explanation. But the main problem is that we spend so little time thinking about ethics.

It’s too easy to think that because your intent is good, your results will be, too. But good intent is not sufficient. It’s important to consider impacts as well, especially the impacts on people who are not like you. (Note that I use “you” to avoid awkward wording. I’m guilty of this as well.) And when you do consider the impacts, don’t be Robert Moses. Does your new web interface make it harder for people who use screen readers? Is your insulin meter easy to misinterpret for someone whose blood sugar is off?

The work we do in the technology sector every day can have a significant impact on people’s lives. And yet ethics courses are often an afterthought in college curricula. Of course, many in tech are self-trained with no real professional body to provide guidance. This means they get no exposure to professional ethics at all. It’s no wonder that we, as an industry, ignore our ethical obligations.

Motivations for storm chasing

Maybe I’m not the right person to write this post. Or maybe I would have been had I written it during a time when I was active. (It’s almost six years since the last time I went storm chasing; how much longer can I pretend that it’s a thing I do?) But here on Blog Fiasco, I get to make the rules, and Rule #1 is “Ben gets to write about whatever the hell he feels like writing about.”

At any rate, it seems that storm chasers have one thing in common: we/they really like to criticize the motivations of others. The most common targets are the chasers who get in extremely close in order to get the perfect shot for TV. They take risks that most of us won’t (whether or not those risks are justified is left as an exercise for the reader). As a result, they’re dismissed as mere thrill-seekers by the “serious” chasers.

He’s in it for the money, not the science.

As my friend Amos said, “there’s no single explanation for chasing. It’s like trying to count all the reasons tourists visit Paris.” “Serious” chasers like to think they’re doing it for some altruistic reason. That could be scientific research, warning the public, or whatever. These things definitely happen, and they’re very good reasons for participating in an activity, but I doubt it’s what primarily motivates people.

Warning can be done by stationary (or nearly stationary) spotting, which also probably means you’ve developed some kind of relationship with the local authorities or NWS office. Some kinds of scientific research can only happen in situ, but that also requires a degree of discipline that many don’t want. Storm chasing is a very boring hobby that involves sitting on your butt in a car for hours on end in the hopes of seeing something interesting. It takes more than a sense of civic duty for most people.

I used to think I was doing it as a learning exercise or in order to serve the public. At some point I realized I was kidding myself. I chased (and hope to chase again) because I enjoy the thrill of the hunt. Can I figure out what the atmosphere is doing? Can I stay ahead of a dangerous beast while keeping myself safe? I’ll absolutely report severe weather I see, and I’ll share pictures with the NWS and any researchers, but that’s not the primary motivation. Now to get myself back out there…

The irony of automation

Automation is touted – often rightly – as a savior. It saves time, it saves effort, it saves money, and it saves lives. Except when it doesn’t. A while back, I read a two-part post about how a mistake with an automated pharmacy system led to a 38x overdose. It’s not that the system itself made a mistake, but it enabled the medical professionals to make a mistake that they’d never have made in a pen-and-paper system.

This story has two ultimate lessons. First, modes are dangerous in user interfaces, because they are easy to overlook and can lead to wildly different outcomes. In this story, had the dosage input always required one form or the other – either the total dosage or the dosage per unit of patient weight – this would never have happened. Allowing either makes it easy to make a lethal mistake. Perhaps a better option would be an optional popup that calculates the total dosage from the per-weight dosage and the patient weight. That retains the convenience of being able to prescribe the dosage either way, while making it explicitly obvious which way is being used.
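The popup idea can be sketched as a deliberate calculation step rather than an alternate meaning for the same input field. This is a minimal illustration with hypothetical names, units, and values (milligrams and kilograms), not the actual pharmacy system:

```python
def total_dose_mg(dose_per_kg_mg: float, patient_weight_kg: float) -> float:
    """Explicitly convert a per-weight dose into a total dose.

    The units live in the parameter names, so a reviewer can always
    see which meaning of "dosage" was intended.
    """
    if dose_per_kg_mg <= 0 or patient_weight_kg <= 0:
        raise ValueError("dose and weight must be positive")
    return dose_per_kg_mg * patient_weight_kg

# The order form itself would only ever accept a total dose. Prescribing
# by weight means going through this explicit calculation, so the two
# modes can never be silently confused.
order_total_mg = total_dose_mg(dose_per_kg_mg=5.0, patient_weight_kg=30.0)
print(order_total_mg)  # 150.0
```

The design choice here is to remove the mode entirely: there is only one kind of value the order accepts, and the per-weight workflow becomes a visible conversion instead of a hidden interpretation.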

The second lesson is that it’s important for experts with specialized knowledge to apply that to their use of automation. When something doesn’t seem right, it’s easy to find ways to explain it away, especially if the automation is reliable. But “that doesn’t seem right” must remain a feeling we pay attention to.

Giving up is the better part of valor

There’s a lot to be said for sticking with a problem until it is solved. The tricky part can be knowing when it is solved enough. I’ve been known to chase down an issue until I can explain every aspect of it. The very existence of the problem is a personal insult that will not be forgiven as long as the problem continues to exist.

That approach has served me well over the years, even if it can be annoying sometimes. I’ve learned by chasing these problems to their ultimate resolution. Sometimes it even reveals conditions that can be solved before they become problems.

But as with anything, there’s a tradeoff to be made. The time it takes to run down a problem is time not spent elsewhere. It’s a matter of prioritization. Does having it 80% fixed do what you need? Can you just ignore the problem and move on?

A while back, I was trying to get a small script to build in our build system at work. For whatever reason, the build itself would work, but using the resulting executable to upload itself to Amazon S3 failed with an indecipherable (to me at least) Windows library error. It made no sense to me. This was a workflow I had used on a local virtual machine dozens of times. And it worked if I executed the build artifact by hand. Just not in the build script.

I spent probably a few hours working on solutions. But no matter what I tried, I made no progress. When I got to the point where I had exhausted all of the reasonable approaches I could think of, I implemented a workaround and moved on to something else.

It can be hard to know when to give up. Leaving a problem unsolved might come back to bite you later. But what else could you be doing if you’re not spinning your wheels?

Don’t memorize what you can look up

“Never memorize something that you can look up” is a quote often attributed to Albert Einstein. And it happens to be pretty solid advice in most cases. No value is added by being able to recite facts from memory. Value comes from being able to piece together the facts to make something new. It’s one thing to know the syntax of a command or a language function. It’s something else entirely to know how to use it to get the desired result.

I recently had a conversation with a gentleman who was applying for a volunteer gig at a non-profit. The role involved doing some work with spreadsheets, and they had him sit down and implement a few features while they watched. At one point, he looked up how to perform a particular task. They ended up not accepting him.

I don’t think we have quite caught on to the idea of looking up instead of memorizing. As Seth Godin points out, ubiquitous lookup is a very new concept. The idea of being able to rattle off easily discoverable facts is still appealing to us. In some cases, that’s still desirable. I really want EMTs to know how to perform first aid without Googling it. Pilots should know what the various switches and buttons in the cockpit do. Programmers? Meh.

Anecdotally, the tech industry is ahead of the general population in terms of avoiding memorization. I have a hunch as to why that may be. Memorization comes from repetition, and in tech repetition is something we strive to avoid. If you’re repeating the same thing over and over, you’re doing something wrong. That’s not necessarily the case in other fields.

When I think about the things I have memorized, few of them are useful. I’ll probably never need to know that the McDonald’s restaurant in Georgetown, Indiana is store 12895. Or the one in Clarksville is 383. Or the one on Grantline Road is 12900. I remember IP addresses for defunct hosts that I haven’t worked on in eight-plus years. I don’t remember the argument order for Perl’s split function (it’s always the opposite of what I think it is), but that’s okay. When I need to split a string in Perl, I can look it up. It’s more important that I know when it’s appropriate to do that than to instantly recall the implementation details (and I suspect if I spent more time coding, I’d remember more syntax).

I hope the “don’t memorize” philosophy continues to take hold. For my part, I’ll never reject someone because they had to use Google. If anything, the ability to use search engines and other fact-finding tools is among the most important skills one can have.

Journalism and leaks

Over at Lawfare, Jack Goldsmith had a great article called “Journalism in the Doxing Era”. Professor Goldsmith examined the differences between data published by Wikileaks and The New York Times. I’m no journalist, but I am a journalish, and what stood out to me is the question of what makes the act of publication journalism.

Two attributes, in my mind, make the publishing of leaked or stolen information journalism. First, authentication. Responsible journalism requires presenting facts, not rumors. It’s easy to fake correspondence that looks authentic, so if you publish documents, they had better be the real deal.

The second attribute is editorial filtering. Once you’re left with true (or at least authentic) documents, what’s newsworthy? There’s an argument that everything should be published so the public can decide for themselves what they think is important. I’m sympathetic to that, but it’s also a little lazy. Journalists should not just be gatherers of information; they should be curators of it. That means chucking out what’s not important in favor of what is.

Of course, importance is very context-sensitive, but some things are pretty clear. John Podesta’s risotto recipe? Not important (unless there’s a food blog that wants to run with it). The Clinton campaign receiving debate questions in advance? Important. (As an aside, the whole “but her emails” thing overall may prove to be one of the great tragedies of the 21st century. That doesn’t make this particular example unnewsworthy.)

An editorial filter does lend itself to bias, and an even greater perception of bias by those biased in the opposite direction. Nonetheless, most news consumers don’t have time to examine everything and draw their own informed conclusions. Journalists serve the public interest when they collect facts, but also when they curate them.

Maybe your tech conference needs less tech

My friend Ed runs a project called “Open Sourcing Mental Illness”, which seeks to change how the tech industry talks about mental health (to the extent we talk about it at all). Part of the work involves the publication of handbooks developed by mental health professionals, but a big part of it is Ed giving talks at conferences. Last month he shared some feedback on Twitter:

So I got feedback from a conf a while back where I did a keynote. A few people said they felt like it wasn’t right for a tech conf. It was the only keynote. Some felt it wasn’t appropriate for a programming conf. Time could’ve been spent on stuff that’d help career. Tonight a guy from a company that sponsored the conf said one of team members is going to seek help for anxiety about work bc of my talk. That’s why I do it. Maybe it didn’t mean much to you, but there are lots of hurting, scared people who need help. Ones you don’t see.

Cate Huston had similar feedback from a talk she gave in 2016:

the speaker kept talking about useless things like feelings

The tech industry as a whole, and some areas more than others, likes to imagine that it is as cool and rational as the computers it works with. Conferences should be full of pure technology. And yet we bemoan the fact that so many of our community are real jerks to work with.

I have a solution: maybe your tech conference needs less technology. After all, the only reason anyone pays us to do this stuff is because it (theoretically) solves problems for human beings. I’m biased, but I think the USENIX LISA conference does a great job of this. LISA has three core areas: architecture, engineering, and culture. You could look at it this way: designing, implementing, and making it so people will help you the next time around.

Culture is more than just sitting around asking “how does this make you feeeeeeeel?” It includes things like how to avoid burnout and how to train the next generation of practitioners. It also, of course, includes how to not be an insensitive jerk who inflicts harm on others with no regard for the impact they cause.

I enjoy good technical content, but I find that over the course of a multi-day conference I don’t retain very much of it. For a few brief hours in 2011, I understood SELinux and I was all set to get it going at home and work. Then I attended a dozen other sessions, and by the time I got home, I had forgotten all of the details. My notes helped, but it wasn’t the same. On the other hand, the cultural talks tend to be the ones that stick with me. I might not remember the details, but the general principles are lasting and actionable.

Every conference is different, but I like having one-third of content be not-tech as a general starting point. We’re all humans participating in these communities, and it serves no one to pretend we aren’t.