You don’t need to have an answer to report a problem

When Alex Hinrichs retired from Microsoft, he wrote the traditional goodbye email. It was so well received that he decided to post it to LinkedIn. In his letter, Hinrichs shares 11 lessons he learned over the years. Most of the advice he shares is good, but he also included something I strongly disagree with:

If you bring a problem to your boss, you must have a recommendation. When presenting solutions to your boss, give them a menu and tell them your recommendation. “Hey boss, choose solution a, b, c, or d… I recommend a”.

Hinrichs is not the first person to share this advice. I see it all over the place, and I hate it. I get the intent. If nothing else, coming up with the list of options and your recommendation forces you to think through the problem. It’s good practice and it makes you look good.

But it’s also a good way to suppress red flags. This is particularly true for early-career and under-indexed employees. For them, reporting a problem can be intimidating enough. The pressure of having to figure out a solution means they might just stay quiet instead.

Now it may turn out that the problem a junior employee reports without having an answer isn’t really a problem at all. On the other hand, some problems are much easier to identify than they are to fix. This is particularly true of ethical and cultural problems. If your policy is “don’t come to me until you have a solution”, you’re opening yourself up to letting bad culture take root. And you’re depriving your junior employees of a chance to learn from you and your team.

If someone is constantly reporting problems but not showing any willingness to address them, that’s an issue. But saying you must have a recommendation is a good way to not learn about problems until it’s too late.

Do you need a product owner?

Do you need a product owner? Many organizations seem to think so. Scrum.org says a product owner is “responsible for maximizing the value of the product resulting from the work of the Development Team”. That certainly seems like an important role to fill. But Joshua Kerievsky wrote in 2016 that customer-obsessed teams don’t have product owners.

Having a formal product owner, he argues, means engineers merely implement. They are absolved of responsibility for the final product. “Massive technical debt, poor reliability, ugly UIs, bad usability and new, barely used features” are due to having product owners. Companies that produce excellent products send engineers out into the field to learn from real-life customers. This means there’s no need for a product owner because everyone is a product owner.

Kerievsky is right, in part. Putting all responsibility for the product on one person is a recipe for disaster. Customer focus is not something you can bolt on at the end. If the people building the product don’t have the customer in mind as they build it, the product will reflect that, regardless of what the product owner does.

But responsibility and sole responsibility are not the same. Engineers need to think of the customer as they build the product. And the product owner is responsible for keeping the engineers focused. The proper role of the product owner is not setting product direction but setting culture direction.

As Eran Davidoff wrote, we are all product owners. Not just engineering teams, but everyone in the company. Marketing, sales, and everyone else must focus on driving impact, not completing tasks. Positive business impact means understanding the customer and meeting their needs. Sometimes the customer is an internal customer, not the end customer. But meeting the needs of internal customers can indirectly help the end customer.

Ultimately, the job of the product owner – whether the product is internal or external – is to make sure the measurements match the goals and that the goals match the mission. After all: we get what we measure.

Book review: Inside the Tornado

Geoffrey Moore’s Crossing the Chasm is perhaps the single most influential technology marketing book. When I first read it a few years ago, everything in it made sense and it gave me a better feel for where my company was (spoiler alert: it’s not necessarily where we thought we were). So when several people recommended Inside the Tornado – a sequel of sorts – I was ready to dig in and love it.

But I didn’t love it. It’s not because Moore is wrong. I don’t claim to know enough to assert that, and in fact I think he’s probably right on the whole. My dislike for the book instead is a matter of literary and ethical concerns.

The literary concern is what struck me first, so I’ll start there. Whereas the metaphor in Chasm is very straightforward, Tornado is a mess. You start in the bowling alley and then a tornado develops and eventually you end up on Main Street. Also, you want to be a gorilla or maybe a chimpanzee, but probably not a monkey. In fairness to Mr. Moore, some of this is because the concepts he tried to communicate became more complex in Tornado. Instead of the broad concepts of the Technology Adoption Life Cycle, he focuses on the more intricate motions that happen on a smaller scale. As a meteorologist, I can appreciate this. Nonetheless, the roughness of the metaphor distracted me from the message of the book.

I’m also not particularly keen on what Moore tells us we must do to achieve dominance in the market. “To hell with quality or what your customer wants” may be the best way to achieve the market position you want when conditions are favorable to you. That doesn’t mean it’s what I want to do. Reading this book made me think of Don McLean’s third-most popular song: “if winning is what matters I respect the ones who fail.”

I suppose it may be a disconnect between my goals and what Moore assumes my goals are. Although I am a very competitive person, I am not interested in winning for winning’s sake. I want to do work that makes the world better, and if we’re in second or third place, that just means that others are also making the world a better place. That doesn’t seem like losing to me.

Inside the Tornado is one of those books that every technology marketer should read. But that doesn’t mean I recommend it.

Reading at work

Seth Godin had a post on his blog a few weeks ago with the same title. Reading at work is a hard thing for me to accept sometimes. I’ll read industry articles in Feedly or relevant posts shared by coworkers. But when it comes to sitting down and reading a book? Nope.

This is dumb. I’m not saying I should sit around reading a novel during work (although a short diversion to refresh my mind seems worth it). I have a stack of books recommended to me that are directly relevant to being better at my job.

If getting better at my job isn’t a good use of the time I give to my employer, what is? It’s certainly a better investment than some of the meetings I’ve attended. Professional growth too often gets overlooked. When I first started working from home, I noticed that I was way more productive. I suspect that’s because I sometimes try so hard to be busy that I forget to be productive.

Seth’s post also reminded me of a fun game I used to play when I worked at a previous employer. As a public university, everyone’s salary was a matter of public record. So in a particularly pointless meeting, I’d look up everyone’s salary and figure out what that hour (or more) cost the University. Salaries are a sunk cost, so it’s easy to waste time in meetings.

But Godin reminds me that I need to focus on devoting time to getting better at my job, not just doing it day-to-day. Now is the ideal time to do that, with many coworkers out of the office for the holidays. And with the tech industry discovering job training, who can complain?

Your assumptions may outlive you

Walt Mankowski had a great article on Opensource.com last week called “Don’t hate COBOL until you’ve tried it”. In this article, he shares the story of a bug. Columns are special in fixed-format COBOL – code has to sit in columns 8 through 72, and anything past column 72 is ignored – so his code didn’t behave the way he expected.
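To make the column rule concrete, here is a toy sketch in Python (not COBOL, and not Mankowski’s actual code) of what a fixed-format compiler effectively does with each line; the exact behavior varies by compiler.

```python
# Toy illustration of fixed-format COBOL's column rule, written in Python.
# Columns 1-6 hold a sequence number, column 7 is an indicator, columns 8-72
# are the code area, and columns 73-80 were reserved for punch-card
# identification. The compiler only reads the code area.

def code_area(source_line: str) -> str:
    """Return the part of a fixed-format line the compiler actually reads."""
    return source_line[7:72]  # zero-indexed slice covering columns 8-72

# Pad a statement so its last few characters land past column 72.
line = "       MOVE 42 TO WS-GRAND-TOTAL".ljust(70) + "-FIELD."
print(code_area(line))  # everything past column 72 quietly disappears
```

Nothing in that line looks wrong to the person who wrote it; the loss happens in a part of the line they never think about.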

The lesson I took away from this is: be careful about the assumptions that you make because they might bite someone decades later.

This isn’t a knock on Grace Hopper. At the time COBOL was invented, 80-column punch cards had been in use for decades, and punch cards in general for over a century. It made sense at the time to treat that as a given. But here’s the thing about the 20th century: not only did technology change, but the rate of change increased. The punch cards that had survived over 100 years were well on their way to obsolescence 10 years later.

The future is hard. You can’t fault pioneers for not seeing how people would use computers decades later. But it turns out that this assumption was not future-proof.

Maybe that’s the better lesson: if you make something well, your assumptions will outlive you.

What might government regulation of infosec look like?

“Terrible” is the most likely answer. But let’s assume we’re talking about regulation that is effective and sound (from both a technical and civil liberties perspective).

On Sunday’s episode of This Week in Tech, the panel discussed the possibility of government regulation of internet security. I’m not fully convinced that any regulation is necessary, but the case for some form of consumer protection grows with every breach. And I don’t think it likely that companies will self-regulate.

So as neither a policy nor technical security expert, what sort of plan would I draw up?

Good infosec regulation

Any workable laws or regulations would have to be defense-oriented. It may sound like victim-blaming, but I don’t see any other path. Companies must meet some minimum standard of protection or face non-trivial fines in the event of a breach. But if a breach occurs and the company met the standard, I would not punish them. Even the best organization is going to get compromised in some way at some point.

In an ideal scenario, the punishment would instead be on the bad actor. The international nature of the Internet makes that a near impossibility. And given that a company is acting with some degree of public trust, I don’t find it unjust to demand a certain level of security compliance.

In order to avoid a heavy administrative burden, I wouldn’t require external audits (at least not for companies below a certain size). It could be something as simple as “document the security plan and show that you’ve kept to it”. The plan would have some number of required elements (e.g. customer passwords aren’t stored in plaintext) and a further list of suggested elements maintained by an expert body. So long as your plan isn’t garishly incompetent and you stick to it, you’re in the clear from a government punishment perspective.

Of course, certain systems would still be subject to a heavier burden. I wouldn’t do away with HIPAA or PCI in favor of this new model. But you can see how less-sensitive services would be nudged toward better consumer protection.

Bad infosec regulation

So what wouldn’t I include? I certainly would not require any encryption backdoor (I might even prohibit it) or prohibit users’ use of encryption. That’s an obvious choice in light of the civil liberty requirement.

I also would not include any specific technology or process in the law/regulation itself. The technology landscape is too dynamic and diverse for that to be effective. The best we can hope for is to set broader principles that need to be updated on the order of years.

The reality of regulation

I don’t see any meaningful regulation happening in the near future. For one, it’s a very difficult problem to solve from both a technical and a policy perspective. More importantly, it could be politically hot, and we all know how pleasant the current environment in Washington is.

At most, we may see a few laws, probably bad, that nibble around the edges. But as the digital age continues to change society as we’ve known it, the law must catch up somehow.

Why can’t I develop software the right way?

It’s not because I’m dumb (or not just because I’m dumb).

Readers of this blog will know that I have no shortage of opinions. I know what I think is the Right Way™ to develop software, but I have a hard time doing it when I’m a solo contributor. I’ve never written unit tests for a project where I’m the only (or primary) contributor. I’m sloppy with my git branching and commits. I leave myself fewer code comments and write less documentation. This is true even when I expect others may see it.

A few months ago, I was at the regular coffee meetup of my local tech group. We were talking about this phenomenon. I blurted out something that, upon further reflection, I’m particularly fond of.

Testing is a cultural thing and you can’t develop a culture with one person.

Okay, maybe you can develop a culture with one person, but I can’t. One could argue that a one-person culture is called a “habit”, and I’m open to that argument. But I would counter that they’re not the same thing. A habit is “a settled tendency or usual manner of behavior”, according to my friends at Merriam-Webster. In other words, it’s something you do. A culture, in my view, is a collective set of habits, but also mores. Culture defines not just what is done, but what is right.

Culture relies on peer pressure (and boss pressure, and customer pressure, etc.) to enforce the norms of the group. Since much of the Right Way™ in software development (or any practice) is not the easy way, this cultural pressure helps group members develop the habit. When left alone, it’s too easy to take the short-term gains (e.g. not going to the effort of writing tests) even if that involves long-term pain (yay regressions!).

If I wrote code regularly at work, or as a part of an open source project that has already developed a culture of testing, I’m sure I’d pick up the habit. Until then, I’ll focus on leading thoughts and let “mostly works” be my standard for code.

“Oh crap, I have to manage”

The logical extension of what I wrote above is that code testing and other excellent-but-also-no-fun practices will not spontaneously develop in a group. Once the culture includes the practice in question, it will be (to a large degree) self-perpetuating. Until then, someone has to apply pressure to the group. This often falls to a technical lead who is probably not particularly fond of putting on the boss hat to say “hey, you need to write tests or this patch doesn’t get accepted”.

Too bad. Do it anyway. That’s one of the key features of project leadership: developing the culture so that the Right Way™ becomes the way things are done.

Your crappy UI could be lethal

The tech industry embraces (some might say “fetishizes”) the philosophy of “move fast and break things”. Mistakes are common and celebrated, and we don’t always consider the effects they have. Autonomous cars may help bring that to the forefront, but many dangerous effects already exist. This is a story of how a UI decision can lead to dangerous consequences.

A friend of mine has a daughter (we’ll call her “Edith”) who has diabetes. Like many people with the disease, she checks her blood sugar and takes an appropriate dose of insulin to regulate it. One night a few months ago, Edith checked her blood sugar before dinner. The meter read “HI” (over 600 mg/dL), so even though she felt fine, Edith gave herself 15 units of insulin.

As she ate, Edith began feeling shaky. She re-checked her sugar: 85 mg/dL. It was then that my friend realized “HI” was actually “81” and had Edith disconnect her insulin pump. Severe hypoglycemia can lead to a coma or worse. Had Edith been alone, misreading the meter could have resulted in getting the full dose of insulin from the pump, which could have caused a dangerously low blood sugar level.

How could this have been prevented? Using the word “high” instead of “hi” perhaps. Or any other unambiguous message. If it’s the “digital clock” style screen, have the elements race around the edges. Or put a plus sign and have it read “600+” with the plus sign blinking in a demonic color. Whatever. So long as it’s not easy to misread (especially if the user wears glasses but does not have them on, as an example).
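As a rough sketch of that idea (the function name, threshold, and wording here are all made up, not taken from any real meter’s firmware), the fix is to make the out-of-range case look nothing like a number:

```python
# Hypothetical display formatter for a glucose meter reading.
# The limit and wording are illustrative only.

HIGH_LIMIT_MG_DL = 600  # assumed cutoff above which the meter can't report an exact value

def display_reading(mg_dl: int) -> str:
    """Format a reading so an out-of-range value can't be mistaken for a number."""
    if mg_dl > HIGH_LIMIT_MG_DL:
        return "HIGH >600"   # a full word plus an explicit bound, never "HI"
    return str(mg_dl)

print(display_reading(81))   # "81"
print(display_reading(999))  # "HIGH >600"
```

However it’s rendered, the point is the same: a glance at the screen should never produce a plausible normal value when the real situation is anything but normal.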

Unambiguous UIs are always good. When it comes to medical devices, unambiguous UIs are critical. If misunderstood, the consequences can be lethal.

P.S. Happy birthday, “Edith”!

When are you done testing?

On my way to the airport a few months ago, I happened to catch an interesting story on NPR. Tom’s of Maine was looking to remove fossil fuels from their deodorant. After developing what seemed to be a good formula in the lab, they sent it out to customers for real-world testing. When the testers reported back positively, the company released the new formula. That’s when they started getting complaints. It turns out that the new formula had problems in warm weather. All of the testers were in New England and tested it during the winter months.

The folks at Tom’s of Maine certainly thought they were giving their product a thorough test. That they didn’t was a failure of imagination. Perhaps the most valuable skill I’ve gained in my career is the ability to imagine more possible ways something can fail. Indeed, I consider that the hallmark of “senior” status. I don’t consider myself particularly excellent in this regard, just better than I was a decade ago. Much of the ability to imagine failures comes from getting burned by not anticipating what can go wrong.

That’s what makes quality assurance a challenging (and underrated) discipline. It’s more than just trying some things and seeing what breaks. Good QA first requires identifying the entire universe of possible failures and then designing tests to make sure the outcome in each of those cases is the desired one.

When are you done testing? Hopefully when you’ve exhausted the space of possible conditions. In reality, it’s closer to when you’ve exhausted the space of likely conditions that are worth handling. Excluding conditions you don’t care about (e.g. not testing your software on end-of-life operating systems because you’re not going to provide support for those platforms) is a great way to shrink the universe.
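A minimal sketch of what that looks like in practice, using pytest; the function and the platform list are hypothetical, but the shape is the point: the test cases come from deliberately mapping out the conditions you decided are worth handling.

```python
# Hypothetical example: enumerate the conditions you care about, then test each one.
import pytest

def is_supported_platform(os_name: str, version: int) -> bool:
    """Toy function under test: we only support recent releases of two platforms."""
    supported = {"linux": 5, "windows": 10}
    return os_name in supported and version >= supported[os_name]

# Each row is one point in the space of conditions we chose to cover.
CASES = [
    ("linux", 6, True),
    ("linux", 4, False),     # end-of-life release: explicitly unsupported
    ("windows", 11, True),
    ("windows", 7, False),
    ("beos", 5, False),      # platform we deliberately don't support
]

@pytest.mark.parametrize("os_name,version,expected", CASES)
def test_platform_support(os_name, version, expected):
    assert is_supported_platform(os_name, version) == expected
```

The table doubles as documentation of what you decided was in scope, and the rows you left out become exclusions you made on purpose rather than conditions you never imagined.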

Giving up is the better part of valor

There’s a lot to be said for sticking with a problem until it is solved. The tricky part can be knowing when it is solved enough. I’ve been known to chase down an issue until I can explain every aspect of it. The very existence of the problem is a personal insult that will not be satisfied as long as the problem continues to insist on being.

That approach has served me well over the years, even if it can be annoying sometimes. I’ve learned by chasing these problems to their ultimate resolution. Sometimes it even reveals conditions that can be solved before they become problems.

But as with anything, there’s a tradeoff to be made. The time it takes to run down a problem is time not spent elsewhere. It’s a matter of prioritization. Does having it 80% fixed do what you need? Can you just ignore the problem and move on?

A while back, I was trying to get a small script to build in our build system at work. For whatever reason, the build itself would work, but using the resulting executable to upload itself to Amazon S3 failed with an indecipherable (to me at least) Windows library error. It made no sense to me. This was a workflow I had used on a local virtual machine dozens of times. And it worked if I executed the build artifact by hand. Just not in the build script.

I spent probably a few hours working on solutions. But no matter what I tried, I made no progress. When I got to the point where I had exhausted all of the reasonable approaches I could think of, I implemented a workaround and moved on to something else.

It can be hard to know when to give up. Leaving a problem unsolved might come back to bite you later. But what else could you be doing if you’re not spinning your wheels?