Book review: Inside the Tornado

Geoffrey Moore’s Crossing the Chasm is perhaps the single most influential technology marketing book. When I first read it a few years ago, everything in it made sense and it gave me a better feel for where my company was (spoiler alert: it’s not necessarily where we thought we were). So when several people recommended Inside the Tornado – a sequel of sorts – I was ready to dig in and love it.

But I didn’t love it. It’s not because Moore is wrong. I don’t claim to know enough to assert that, and in fact I think he’s probably right on the whole. My dislike for the book instead is a matter of literary and ethical concerns.

The literary concern is what struck me first, so I’ll start there. Whereas the metaphor in Chasm is very straightforward, Tornado is a mess. You start in the bowling alley and then a tornado develops and eventually you end up on Main Street. Also, you want to be a gorilla or maybe a chimpanzee, but probably not a monkey. In fairness to Mr. Moore, some of this is because the concepts he tried to communicate became more complex in Tornado. Instead of the broad concepts of the Technology Adoption Life Cycle, he focuses on the more intricate motions that happen on a smaller scale. As a meteorologist, I can appreciate this. Nonetheless, the roughness of the metaphor distracted me from the message of the book.

I’m also not particularly keen on what Moore tells us we must do to achieve dominance in the market. “To hell with quality or what your customer wants” may be the best way to achieve the market position you want when conditions are favorable to you. That doesn’t mean it’s what I want to do. Reading this book made me think of Don McLean’s third-most popular song: “if winning is what matters I respect the ones who fail.”

I suppose it may be a disconnect between my goals and what Moore assumes my goals are. Although I am a very competitive person, I am not interested in winning for winning’s sake. I want to do work that makes the world better, and if we’re in second or third place, that just means that others are also making the world a better place. That doesn’t seem like losing to me.

Inside the Tornado is one of those books that every technology marketer should read. But that doesn’t mean I recommend it.

T-Mobile, Layer3, and the uncarriering of TV

Last week, T-Mobile announced it will acquire Layer3. In his usual manner, T-Mobile CEO John Legere promised to end the “complete bullshit” of traditional TV by ushering in the uncarriering of TV. But Layer3 is essentially a cable TV provider delivered over the Internet, so it’s not entirely clear how T-Mobile plans to improve things.

Unlike Sling and similar services, which offer smaller packages, Layer3’s offering is sized like a traditional cable bundle. Reporting from Ars Technica suggests the pricing is in line with traditional cable bundles, too. Layer3 does not have an app, which means customers have to use individual channels’ apps to watch when on the go.

The Layer3 website is a little short on information, so it’s hard to tell what the value proposition is. It could be that it’s cheaper for large bundles or that it has a broader offering. It seems to be a good fit for T-Mobile in this sense: it’s geographically limited and possibly cheaper. And I say this as a T-Mobile customer (and minor shareholder). What could T-Mobile do to improve it?

My idea for the uncarriering of TV

This is hardly a novel concept, but I’d like to see a true à la carte offering. Let me choose the exact channels I want to subscribe to and pay whatever that amounts to. I have Sling TV now, and it gives me most of the channels I want, but it still includes some I don’t. I would have to get the most expensive option to get all of the channels I want. At that point, I’d do just as well getting TV from my fiber provider. I have no problem paying for the content I want; I just want to be able to get it from a single source.

What will probably happen

A more likely outcome is that T-Mobile sets a single, fixed price. Instead of the “introductory offer” dance that TV providers often do, they’d just say “this is the price.” The service almost certainly would be zero-rated for T-Mobile customers: watch as much TV as you want on your T-Mobile phone and it won’t count against your data.

I would be surprised to see unbundling of channels into an à la carte offering. With its “ONE” plan, T-Mobile has shown a clear preference for simple billing. Even if customers might prefer more flexibility, a simple plan with few options is easier to manage for both customer and provider. I don’t think John Legere particularly wants to get into the TV business, so there’s little benefit to T-Mobile for going the more complicated route.

We’ll have to see what happens when the new service rolls out.

What acquisition means for Shazam

I was surprised to see the news that Apple is acquiring Shazam. After all, they’re a devices company, right? Maybe not, as the “services” division is the second-strongest line and growing. So what does Shazam do to help Apple? Two things that I see.

The first is that it gives them an avenue for selling music. Hear a song and wonder what it is? Fire up Shazam to identify it and here’s a handy link to buy it in the iTunes store. Right now (at least on Android), users have a choice between Google and Amazon for track purchases. You have to think Apple would want to get in on that. It’s a prime opportunity for impulse buys.

The second benefit is that it gives Apple more data about the songs people are interested in. The utility of this data is not immediately obvious to me, but I’m sure someone in Apple’s spaceship can figure out how to put it to use. Can they execute on that idea, though? I admittedly don’t pay a lot of attention to Apple, but they don’t seem to have the data chops of Google or Amazon.

But the title of this post is what the acquisition means for Shazam, not what it means for Apple. My first thought was “well I guess I won’t be able to use Shazam anymore.” Most of Apple’s software acquisitions have been focused on Siri or Apple Maps. Neither of those are available outside of the Apple ecosystem. CUPS (yes, the Unix print system) is the only acquisition that remains available outside of Apple, as far as I can tell.

Apple has no real desire to make its software available to non-iOS/macOS users. iTunes is a notable exception, but for the most part, you can’t expect Apple software outside of Apple hardware. Apple makes its money on services and hardware sales, not on software. And I can’t fault them for sticking to what works.

The question remains: will Shazam continue to be available across platforms? If Apple’s motivation is primarily to use it as an iTunes sales engine, I think it will. If they want to use it as a differentiator in a competitive smartphone market, they won’t. I’m inclined to favor the sales engine scenario, but time will tell.

sudo is not as bad as Linux Journal would have you believe

Fear, uncertainty, and doubt (FUD) is often used to undercut the use of open source solutions, particularly in enterprise settings. And the arguments are sometimes valid, but that’s not a requirement. So long as you make open source seem risky, it’s easier to push your solution.

I was really disappointed to see Linux Journal run a FUD article as sponsored content recently. I don’t begrudge them for running sponsored content generally. They clearly label it and it takes money to run a website. Linux Journal pays writers and that money has to come from somewhere. But this particular article was tragic.

Chad Erbe uses “Four Hidden Costs and Risks of Sudo Can Lead to Cybersecurity Risks and Compliance Problems on Unix and Linux Servers” to sow FUD far and wide. sudo, if you’re not familiar with it, is a Unix command that allows authorized users to run authorized commands with elevated privileges. The most common use case is to allow administrators to run commands as the root user, but it can also be used to give, for example, webmasters the ability to restart the web server without giving them full access.
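For illustration, here’s what that webmaster scenario might look like as a sudoers rule (the group name and service here are my own example, not something from the article):

# /etc/sudoers.d/webmasters -- always edit with visudo, not directly
# Members of the webmasters group may restart Apache, and nothing else
%webmasters ALL = (root) /usr/bin/systemctl restart httpd

Anything else they try to run through sudo gets denied and logged.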

So what’s wrong with this article?

Administrative costs

Erbe argues that using sudo adds administrative overhead because you have to maintain the configuration file. It’s 2017: if you’re not using configuration management already then you’re probably a lost cause. You’re not adding a whole new layer, you’re adding one more file to the dozens (or more) you’re coordinating across your environment.
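As a sketch of what that “one more file” looks like in practice, here’s an Ansible task that deploys a sudoers fragment the same way you’d deploy any other managed file (the file names are my own example):

- name: Deploy webmaster sudo rules
  ansible.builtin.copy:
    src: files/webmasters.sudoers
    dest: /etc/sudoers.d/webmasters
    owner: root
    group: root
    mode: "0440"
    validate: /usr/sbin/visudo -cf %s

The validate step runs the file through visudo first, so a syntax error never makes it onto a server.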

Erbe sets up a “most complicated setup” strawman and knocks it down by saying commercial solutions could help. He doesn’t say how, though, and there’s a reason for that: the concerns he raises apply to any technology that provides the solution. I have seen sites that use commercial solutions to replace sudo, and they still have to configure which users are authorized to use which commands on which servers.

Forensics and audit risks

sudo doesn’t have a key logger or log chain of custody. That’s true, but that doesn’t mean it’s the wild west. Erbe says configuration management systems can repair modified configuration files, but with a delay. That’s true, but tools like Tripwire are designed to catch these very cases. And authentication/authorization logs can be forwarded to a centralized log server. That’s probably something sysadmins should have set up already.
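With rsyslog, for example, forwarding those messages to a central server is a one-line configuration change (the log host is hypothetical):

# /etc/rsyslog.d/forward-auth.conf
# authpriv is where sudo and sshd log; @@ forwards over TCP (@ is UDP)
authpriv.* @@loghost.example.com:514

Once the logs are off the box, a rogue admin can’t quietly scrub their tracks.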

sudo provides a better level of audit logging compared to switching to the root account. It logs every command run and who runs it. Putting a key logger in it would provide no additional benefit. The applications launched with sudo (or the operating system itself) would need it.
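To see what I mean, this is roughly what a sudo entry looks like in /var/log/secure (the user and host are made up):

Nov 28 14:03:12 web01 sudo: jdoe : TTY=pts/0 ; PWD=/home/jdoe ; USER=root ; COMMAND=/usr/bin/systemctl restart httpd

Who, where, and what, every time. A key logger bolted onto sudo wouldn’t tell you anything more about the command that was run.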

Business continuity risks

You can’t roll back sudo and you can’t get support. Except that you can, in fact, downgrade the sudo version if it contains a critical bug. And you can get commercial support. Not for sudo specifically, but for your Linux installs generally.

Lack of enterprise support

This seems like a repeat of the last point, with a different focus. There’s no SLA for fixing bugs in sudo, but that doesn’t mean it’s inherently less secure. How many products developed by large commercial vendors have security issues discovered years later? A given package being open source does not imply that it is more or less secure than a proprietary counterpart, only that its source code is available.

A better title for this article

Erbe raises some good points, but loses them in the FUD. This article would be much better titled “why authorization management is hard”. That approach, followed by “and here’s how my proprietary solution addresses those difficulties” would be a very interesting article. Instead, all we get is someone paying to knock down some poorly-constructed strawmen. The fact that it appears in Linux Journal gives it a false sense of credibility and that’s what makes it dangerous.

Blocking the AT&T/Time Warner merger

News outlets reported last week that the Department of Justice intends to block the merger of AT&T and Time Warner on antitrust grounds. Depending on who is talking, a condition of approval is the sale of either CNN or DirecTV by AT&T.

I’m no mergerologist, but this seems weird. I agree there’s a good argument for blocking the merger. But that argument is predicated on the lack of competition in the broadband space. Neither DirecTV nor CNN are broadband providers.

However, the president has very publicly decried what he views as unfair coverage by CNN. Of course it follows that the DoJ’s objections are perceived to be driven by political concerns from the White House. This is especially true given the “business friendly” moniker claimed by the administration.

I’m not inclined to give the White House the benefit of the doubt, but there’s an argument for this that I’d buy. There’s an inherent danger involved when the same company owns both the content and the delivery. This gives the company the opportunity to crowd out competitors in an anticompetitive manner.

What makes this confusing is the DirecTV part. TV and Internet are often combined. Adding in satellite doesn’t seem to materially change the landscape. It’s essentially the same service over a different medium. Cynically, it’s a cover to make it look like the CNN sale isn’t politically-driven.

I have a hard time taking that argument at face value. That may be my own distrust of the Trump administration, but it doesn’t seem to make sense. Even in the parts of the argument where I agree with the ends, the means don’t seem to mesh. It will be interesting to see how this proceeds.

Deleting the president’s Twitter account

Last week, President Trump’s personal Twitter account disappeared. It was restored 11 minutes later and Twitter said it was deleted by a rogue employee on their last day. Depending on your political leanings, this was either the best thing Twitter ever did or a part of the elitist campaign against the President. I happen to think that it’s an enforcement of Twitter’s Terms of Service, but Twitter has repeatedly proved that it doesn’t care what I think.

Of course, an action like this can’t be viewed in isolation. Even if you agree with the deletion of this account, it sets a bad precedent. Yes, I’m using the slippery slope fallacy, but it’s worth considering. We entrust social media networks to fairly handle problems according to the terms of service they lay out. This is probably a silly thing, but we do it anyway. We entrust social media networks with details of our personal lives, often ones we don’t want shared.

As a professional, I find this act to be a total violation of trust. It violates the System Administration Code of Ethics. But it’s also not the dangerous act some have made it out to be. I have seen comments on Twitter and news articles to the effect of “it’s not funny. What if they had tweeted ‘I just launched the nukes’?!” It’s true Trump uses social media in a way that no past president has and that a fraudulent post could have a tremendous impact. But there’s also nothing to suggest that just because a rogue employee can remove an account that they can also impersonate it.

I’m sure at some level it is possible to somehow insert a fraudulent post. But not only does policy prevent it, it’s likely very difficult to do undetected as well. Frankly, I’m not that concerned about it. What I am worried about is the effects of vigilante ToS enforcement in a political sphere that seems ready to explode.

One bad thing about the death of Flash

Adobe Flash is only mostly dead. That means it’s slightly alive. But the death of Flash is nigh. At least if you consider 2020 “nigh”. By and large, this is celebrated. Flash is notoriously riddled with vulnerabilities, it wrecks accessibility, etc. But losing Flash is still a little bit sad.

Not just because it pioneered interactive web content, as the TechCrunch article above notes, but because of all of the games and silly websites that will become unusable. Major projects will be converted to HTML5. Sites that are mostly video (I’m thinking Homestar Runner in particular) may end up as recorded video that can be watched, but not interacted with.

But what about all of the little one-off sites? How much time did I spend in college playing miniputt.swf instead of studying for finals? (Spoiler alert: a lot) How many little educational games have been created that won’t get recreated?

Maybe the lost sites will be replaced by new projects. But it’s a concern we face with every file format: what happens when it’s no longer supported? We have centuries of printed records that can be analyzed by researchers. Centuries from now, will that be true of our digital artifacts?

This is an argument for using open formats instead of proprietary. But even that is no guarantee of future durability. An open format isn’t very helpful if no software implements it. I’m older than the JPEG and GIF standards. Will I outlive them, too? In the not-too-distant future, there may be a niche market for software that implements ancient technology for the purposes of historical preservation.

No, the cloud is not dead

If you think cloud hype is bad, cloud-is-dead hype may be worse. There’s nothing like declaring something dead to get attention. For example, this recent article in Wired. I’ll give Jeremy Hsu credit: he probably didn’t write the headline. Nonetheless, it’s an article in search of conflict.

The Wired article introduces the concept of edge computing to its readers. The idea is simple: move computation closer to the consumer, at the edge of the network instead of in a central data center.

Edge computing has great benefit in certain situations. Latency-sensitive applications such as mobile augmented reality (e.g. Pokemon Go) do better the closer the compute is to the user and their data. In fact, if all the computation can happen locally (e.g. on the user’s phone), that’s the best scenario. I don’t like Hsu’s example of self-driving cars, though. Cars that require a network connection to avoid running into things are cars that do not belong on the road.

But even with edge computing having solid use cases, that doesn’t mean a thing for the idea of cloud computing. First of all, there are still plenty of cases where edge computing doesn’t make sense. Centralization allows for greater economy of scale, which is great for many applications. Secondly, compute demand doesn’t decrease. More computing at the edge doesn’t mean less computing at the core, it means more computing total.

Now the rapid growth in cloud usage (and thus revenue) can’t go on forever at the current rate. At some point, it will level off and reach a steadier rate of growth. That’s the nature of the market. But it’s a mistake to equate maturity with death.

Disclosure: my employer is a leading public cloud provider.

SSH login failures when you have too many keys

I recently ran into an interesting issue where I got SSH login failures on both work and personal servers. When I tried to log in, I’d immediately get

Received disconnect from w.x.y.z port 22:2: Too many
authentication failures for funnelfiasco
Authentication failed.

This was a surprise, because I hadn’t tried to log in for a while. Why would I get “too many authentication failures”? I knew we ran fail2ban on the work servers and I figured my web host used something like that, too, so I thought maybe something was triggering a ban.

I checked that there wasn’t something on my network that was generating SSH attacks. tcpdump didn’t show anything (whew!).
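If you want to run the same check, something like this will show any SSH connection attempts crossing the wire (adjust the interface to match your network):

# -n skips DNS lookups; watch for anything headed to an SSH port
sudo tcpdump -n -i any 'tcp dst port 22'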

It turns out that the issue is due to how the SSH agent works. The SSH agent holds your SSH keys. This allows you to remote into a Unix server with a key without having to re-type your passphrase every time. This is really useful behavior, especially if you make remote connections regularly (whether directly SSHing or using something like git over SSH). But it has some behaviors that can cause problems.

By default, if you have an SSH agent running, it will send all of the keys in the agent, even if you’ve explicitly specified the identity to use. If you have more keys than the server’s MaxAuthTries setting, you may end up with too many login attempts before it gets to the one you want. If you don’t want this behavior, you can add IdentitiesOnly yes to your SSH config file.
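In ~/.ssh/config, that looks like this (the host and key file names are examples):

Host *
    # Only offer the identity configured (or passed with -i),
    # not every key the agent is holding
    IdentitiesOnly yes

Host myserver.example.com
    IdentityFile ~/.ssh/id_ed25519_myserver

With that in place, the server sees one key instead of a dozen, and you stay well under MaxAuthTries.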

Are shared block lists the answer to Twitter abuse?

No.

At the beginning of the year, the good folks at Lawfare suggested shared block and follow lists could be an answer to the lack of civility and excess of abuse on Twitter. This is not a new idea. The Lawfare article discusses Block Together – a robust tool for sharing block lists. And Randi Harper’s GGautoblocker builds a block list of accounts that appear to be associated with “Gamer Gate”. What Citron and Wittes propose is to take similar functionality and make it natively a part of Twitter.

I understand their reasoning, but I don’t think it’s the right answer. First, there’s the practical concern. People use blocks in different ways. Some people block liberally; they might not have a problem with the person per se, but maybe they just don’t want to be reminded of something. Others only block as a last resort. Trying to manage your own preferences while automatically inheriting someone else’s can be challenging.

And then of course there’s the fact that it doesn’t get harassers off of the platform. If a garbage human shitposts on Twitter, but no one is around to see it, are they still a garbage human? Of course they are. I understand Twitter’s commitment to being “the free speech wing of the free speech party.” The idea of free speech is critical to a free society. At the same time, there’s no reason Twitter has to give abuse a platform.

I don’t have the answers. Targeted abuse is a tough problem to solve. As an example, one or more people have been creating account after account targeting meteorologists on Twitter. At last check, Twitter has suspended some 700+ accounts – identical in every respect except for the incremented number at the end of the handle. Worse, the abuse has spread to other platforms, so even if Twitter had a good way of addressing it, they’d be limited by the borders of their service.

Shared block lists might not be the answer to abuse, but maybe they’re the best answer we have right now?