NWS warnings still make me sad

Longtime readers of Blog Fiasco know that I have some opinions about how the National Weather Service (NWS) issues and communicates warnings. Just check out the “warning” tag on this site if you’re interested. But it turns out that I have as-yet-unwritten opinions. This post is inspired by a recent tweet from an NWS meteorologist:

I mentioned in a previous post that warning products tend to be technically correct instead of useful, as in the case of the non-hurricane Sandy. This is a fine example. Tornadoes over water are waterspouts and the NWS treats maritime areas (including larger lakes) differently than land areas. The end result is that forecasters are unable to properly communicate threats to the public. This is harmful.

I understand treating land and water areas differently. A storm that is unremarkable on land could be deadly to watercraft. Special marine warnings are usefully distinct. But a tornado warning over water can be useful, too, particularly to folks on land who happen to be downstream. But forecasters aren’t allowed to provide that information because it’s not technically correct.

The National Weather Service is a great agency. Its dedicated forecasters are at work around the clock to provide life-saving (and life-enhancing) forecasts and warnings. I just wish it would get out of its own way on this issue.

HPE acquiring Cray

So HPE is acquiring Cray. Given my HPC experience, you may wonder what my take is on this. (You probably don’t, but this is my blog and I’ve only had one post this week, so…) This could either go very well or very badly. If you want deeper insight, read Timothy Prickett Morgan’s article in The Next Platform.

How it could go well

HPE is strong in the high-performance computing (HPC) market. They had nearly 50% of the systems on the Top 500 list 10 years ago. But their share of that list has fallen pretty steadily since, largely due to the growth of “others”. And they’ve never been dominant at the top end. Meanwhile, Cray, a name that was essentially synonymous with “supercomputing” for decades, has been on an upswing.

Cray had 37 systems on the Top 500 list in November 2010 and hasn’t dropped below that number since. From November 2012 through June 2015, Cray took off. They peaked at 71 systems in June 2015, and have been on a slow decline since.

But the system count isn’t the whole story. Looking at the share of performance, Cray is consistently one of the top vendors. They currently account for nearly 14% of the list’s performance, and they were in the 20-25% range during their ascent in the early part of the current decade.

And while the exascale race is good news for Cray, that revenue is bursty. When cloud providers started taking on some of the low-end HPC workloads, it wasn’t a concern for Cray. They don’t play in that space. But the cloud tide is rising (particularly as Microsoft’s acquisition of Cycle Computing starts to pay dividends). When I was at Microsoft, we entered into a partnership with Cray. It was mutually beneficial: Microsoft customers could get a Cray supercomputer without having to find a place in the datacenter for it, and Cray customers could more easily offload their smaller workloads to cloud services.

So all of this is to say there’s opportunity here. HPE can get into the top-end systems, particularly the contracts with the U.S. Departments of Defense and Energy. And Cray can ignore the low-to-mid market because the HPE mothership will cover that. And both sides get additional leverage with component manufacturers. If HPE lets Cray be Cray, this could turn out to be a great success.

How it could go poorly

Well, as a friend said, “HPE is pretty good at ruining everything they buy.” I’ll admit that I don’t have a particularly positive view of HPE, but there’s nothing in particular that I can point to as a reason. If HPE tries to absorb Cray into the larger HPE machine, I don’t think it will go well. Let Cray continue to be Cray, with some additional backing and more aggressive vendor relations, and it will do well. Try to make Cray more HPE-like, and it will be a poor way to spend a billion dollars.

The bigger picture

Nvidia gobbled up Mellanox. Xilinx bought Solarflare. Now HPE is acquiring Cray (and it bought SGI a few years ago). Long-standing HPC vendors are disappearing into the arms of larger companies. It will be very interesting to see how this plays out in the market over the next few years. Apart from the ways technological diversity helps advance the state of the art, I wonder what this says about the market generally. Acquisitions like this can often be a way to show growth without having to actually grow anything.

Competency degrees and the role of higher education

Several years ago, Purdue University introduced a “competency degree program”. I called it “test out of your degree”. Although the University’s website is short on detail, I gather the general idea is a focus on doing instead of study. Which sounds pretty good on its face, but actually isn’t.

“We’ve hired Purdue grads before,” Dave Bozell, owner of CG Visions, told the Lafayette Journal & Courier, “and they have the theory, but we still have to spend time teaching them how to apply it to what they’re working on.”

Yes, Dave. That’s the point. Universities do not exist to provide vocational training for your employees. That’s your responsibility. That’s why science majors have to take some (but not enough) humanities courses. Higher education is for broad learning. Or at least it used to be.

I wonder sometimes if the Morrill Act, which led to the creation of Purdue University and many other institutions, is what caused the shift from education to training. Uncharitably, it said “this fancy book learnin’ is fine and all, but we need people to have useful skills.” “Useful”, of course, has a pretty narrow definition.

Purdue’s College of Technology Dean Gary Bertoline said “there are plenty of high-skill, high-wage technology jobs available, but students just don’t have the skills necessary to fill them.” You know what skills are most lacking in tech these days? It’s not coding. It’s not database optimization. It’s ethics. I doubt that’s in the competency-based degree.

I’d like to see employers doing more to train their employees in the skills needed to perform the day-to-day work. Theory is important, and that’s a good fit for the university model. If you want a more streamlined approach, embrace vocational schools. Much of the work done these days that requires a college degree doesn’t need to. In fact, it might benefit from a more focused vocational approach that leaves graduates in less debt.

But universities should be catering to the needs of the student and the society, not the employer.

Is Slack revolutionary?

No, says Betteridge’s Law. But there are some who will argue it is. For example, Ben Thompson recently wrote “Zoom is in some respects a more impressive business, but its use-case was a pre-existing one. Slack, on the other hand, introduced an entirely new way to work”.

I don’t see that Slack introduced an entirely new way to work. What it did was take existing ways to work and make them suck less. When I joined a former employer, they were using consumer Skype for instant messaging and calls. It worked fairly well from the telephony side, but as a team IM client it was…bad. Channels weren’t discoverable, there were no integrations, and search (if it even existed, I don’t remember now) was useless.

When we switched to Slack, it was so much better than the way we had been working. But none of the concepts were new; they were just better executed. Many tools have attempted to address the use cases that Slack handles well. They just didn’t succeed in the same way. Does that make Slack revolutionary? Maybe it’s splitting hairs, but I could see an argument that Slack had a revolutionary impact without being revolutionary itself.

Apache Software Foundation moves to GitHub

Last week, GitHub and the Apache Software Foundation (ASF) announced that the ASF had migrated its git repositories to GitHub. This caused a bit of a stir. It’s not every day that “the world’s largest open source foundation” moves to a proprietary hosting platform.

Free software purists expressed dismay. One person described it as “a really strange move in part because Apache’s key value add [was] that they provided freely available infrastructure.” GitHub, while it may be “free as in beer”, is definitely not “free as in freedom”. git itself is open source software, but GitHub’s “special sauce” is not.

For me, it’s not entirely surprising that ASF would make this move. I’ve always seen ASF as a more pragmatically-minded organization than, for example, the Free Software Foundation (FSF). I’d argue that the ecosystem benefits from having both ASF- and FSF-type organizations.

It’s not clear what savings ASF gets from this. Their blog post says they maintain their own mirrors, so there’s still some infrastructure involved. Of course, it’s probably smaller than running the full service, but by how much?
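
For what it’s worth, the mechanical part of a one-way mirror is small. As a rough sketch (the repository name here is hypothetical, and I don’t know what tooling the ASF actually uses), it’s a couple of git commands plus something to re-run the push periodically:

# hypothetical repository name; gitbox.apache.org is the ASF's git service
git clone --mirror https://gitbox.apache.org/repos/asf/example.git
cd example.git
git push --mirror git@github.com:apache/example.git

Of course, “small” isn’t “zero”: someone still has to run and monitor that job across hundreds of repositories, so the question stands.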

More than a reduced infrastructure footprint, I suspect the main benefit to the ASF is that it lowers the barrier to contribution. Like it or not, GitHub is the go-to place to find open source code. Mirroring to GitHub makes the code available, but you don’t get the benefits of integrating issues and pull requests (at least not trivially). Major contributors will do what it takes to adopt the tool, but drive-by contributions should be as easy as possible.

There’s also another angle, which probably didn’t drive the decision but brings a benefit nonetheless. Events like Hacktoberfest and 24 Pull Requests help motivate new contributors, but they’re based on GitHub repositories. Using GitHub as your primary forge means you’re accessible to the thousands of developers who participate in these events.

In a more ideal world, ASF would use a more open platform. In the present reality, this decision makes sense.

Other writing: April 2019

What have I been writing when I haven’t been writing here?

Stuff I wrote

Red Hat/Fedora

Opensource.com

Lafayette Eats

Stuff I curated

Red Hat/Fedora

Releasing open source software is not immoral

Matt Stancliff recently made a bold statement on Twitter:

He made this comment in the context of how little money the largest tech companies put toward funding open source. If the five largest companies each contributed less than one percent of their annual revenue, open source projects would have two billion dollars of support. These projects are already subsidizing the large corporations, he argues, so they deserve some of the rewards.

This continues the recent trend of people being surprised that people will take free things and not pay for them. Developers who choose to release software under an open source license do so with the knowledge that someone else may use their software to make boatloads of money. Downstream users are under no obligation to remunerate or support upstreams in any way.

That said, I happen to think it’s the right thing to do. I contributed to Fedora as a volunteer for years as a way to “pay back” the community that gave me a free operating system. At a previous company, we made heavy use of an open source job scheduler/resource manager. We provided support on the community mailing lists and sponsored a reception at the annual conference. This was good marketing, of course, but it was also good community citizenship.

At any rate, if you want to make a moral judgment about open source, it’s not the release of open source software that’s the issue. The issue is parasitic consumption of open source software. I’m sure all of the large tech companies would say they support open source software, and they probably do in their own way. But not necessarily in the way that allows small-but-critical projects to thrive.

Toward a more moral ecosystem

Saying “releasing open source software has become immoral” is not helpful. Depriving large companies of open source would also deprive small companies and consumers. And it’s the large companies who could best survive the loss. Witness how MongoDB’s license change has Amazon using DocumentDB instead; meanwhile Linux distributions like Fedora are dropping MongoDB.

It’s an interesting argument, though, because normally when morality and software are in the mix, it’s the position that open source (or “free software”, as it’s generally called in this context) is the moral imperative. That presents us with one possible solution: licensing your projects under a copyleft license (e.g. the GNU General Public License (GPL)). Copyleft-licensed software can still be used by large corporations to make boatloads of money, but at least it requires them to make source (including of derived works) available. With permissively-licensed software, you’re essentially saying “here’s my code, do whatever you want with it.” Of course people are going to take you up on that offer.

If a thank you note is a requirement, I don’t want to work for you

Jessica Liebman wrote an article for Business Insider where she shared a hiring rule: If someone doesn’t send a thank-you email, don’t hire them. This, to be blunt, is a garbage rule. I don’t even know where to begin describing why I don’t like it, so I’ll let Twitter get us started.

When I’ve been on the hiring team, a short, sincere “thank you” email has always been nice to receive. But I’ve never held the lack of one against a candidate. It’s not like we’re doing them some huge favor. We’re trying to find a mutually beneficial fit. And employers hold most of the power, in the interview process and beyond.

You can lament it if you want, but the social norm of sending thank-you notes for gifts is greatly diminished. So even if it would have been appropriate in the past, it’s no longer expected. And, as noted above, it’s culture-specific anyway.

Until employers see fit to offer meaningful feedback to all applicants, they can keep their rule requiring thank you notes to themselves. And even after that. If an employer wants to use arbitrary gates that have no bearing on performing the job function, I don’t want to work for them.

The Linux desktop is not in trouble

Writing for ZDNet earlier this month, Steven J. Vaughan-Nichols declared trouble for the Linux desktop. He’s wrong.

Or maybe not. Maybe we’re just looking at different parts of the elephant. sjvn’s core argument, if I may sum it up, is that fragmentation is holding back the Linux desktop. Linux can’t gain significant traction in the desktop market because there are just so many options. This appeals to computer nerds, but leads to confusion for general users who don’t want to care about whether they’re running GNOME or KDE Plasma or whatever.

Fragmentation

I’m sympathetic to that argument. When I was writing documentation for Fedora, we generally wrote instructions for GNOME, since that was the default desktop. Fedora users can also choose spins of KDE Plasma, LXQt, or Xfce, and they can install other desktop environments besides. If someone installs KDE Plasma because that’s what their friend gave them, will they be able to follow the documentation? If not, will they get frustrated and move back to Windows or macOS?

Even if they stick it out, there are two large players in the GUI toolkit world: GTK and Qt. You can use an app written in one in a desktop environment written in the other, but it doesn’t always look very good. And the configuration settings may not be consistent between apps, which is also frustrating.

Corporate indifference

Apart from that, sjvn also laments the lack of desktop effort from major Linux vendors:

True, the broad strokes of the Linux desktop are painted primarily by Canonical and Red Hat, but the desktop is far from their top priority. Instead, much of the nuts and bolts of the current generation of the Linux desktop is set by vendor-related communities: Red Hat’s Fedora, SUSE’s openSUSE, and Canonical’s Ubuntu.

I would argue that this is the way it should be. As he notes in the preceding paragraph, the focus of revenue generation is on enterprise servers and cloud. There are two reasons for that: that’s where the customer money is, and enterprises don’t want to innovate on their desktops.

I’ll leave the first part to someone else, but I think the “enterprises don’t want to innovate on their desktops” part is important. I’ve worked at, and in support of, some large organizations, and in all cases they didn’t want anything more from their desktops than “it allows our users to run their business applications in a reliable manner”. Combine this with the enterprise tendency toward long upgrade cycles, and it makes no sense to make the enterprise product the home for desktop innovation.

Community distributions are generally more focused on individuals or small organizations who may be more willing to accept disruptive change as the paradigm is moved forward. This is true beyond the desktop, too. Consider changes like the adoption of systemd or replacing yum with dnf: these also appeared in the community distributions first, but I didn’t see that used as a case for “enterprise Linux distributions are in trouble.”

What’s the answer?

Looking ahead, I’d love to see a foundation bring together the Linux desktop community and have them hammer out a common desktop for everyone. Yes, I know, I know. Many hardcore Linux users love having a variety of choices. But the world is not made up of hardcore desktop Linux users. For the million or so of us, there are hundreds of millions who want an easy-to-use desktop that’s not Windows, doesn’t require buying a Mac, and comes with broad software and hardware support.

Setting aside the XKCD #927 argument, I don’t know that this is an answer. Even if the major distros agreed to standardize on the same desktop (and with Ubuntu returning to GNOME, that’s now the case), that won’t stop effort on other desktops. If the corporate sponsors don’t invest any effort, the communities still will. People will use whatever is provided to them in the workplace, so presenting a single standard desktop to consumers would require the folks who make the community distributions to agree to it. That won’t happen.

But here’s the crux of my disagreement with this article. The facts are all correct, even if I disagree with the interpretation of some of them. The issue is that we’re not looking at the success of the Linux desktop in the same way.

If you define “Linux desktop” as “a desktop environment that runs the Linux kernel”, then ChromeOS is doing quite well, and it will probably continue to grow (unless Google gets bored with it). In that case, the Linux desktop is not in trouble; it’s enjoying unprecedented success.

But when most people say “Linux desktop”, they think of a traditional desktop model. In this case, the threat to Linux desktops is the same as the threat to Windows and macOS: desktops matter less these days. So much computing, particularly for consumers, happens in the web browser, when it happens on a PC at all.

Rethinking the goal

This brings me back to my regular refrain: using a computer is a means, not an end. People don’t run a desktop environment to run a desktop environment, they run a desktop environment because it enables them to do the things they want to do. As those things are increasingly done on mobile or in the web browser, achieving dominant market share for desktops is no longer a meaningful goal (if, indeed, it ever was).

Many current Linux desktop users are (I guess) motivated at least in part by free software ideals. This is not a mainstream position. Consumers will need more practical reasons to choose any Linux desktop over the proprietary OS that was shipped by the computer’s manufacturer.

With that in mind, the answer isn’t standardization, it’s making the experience better. Fedora Silverblue and openSUSE Kubic are efforts in that direction. Using those as a base, with Flatpaks to distribute applications, the need for standardization at the desktop environment level decreases because users mostly interact at the application level, one step above.
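
As an aside, this is part of the appeal: installing and running a Flatpak looks identical no matter which desktop environment or distribution sits underneath. The commands below follow Flathub’s standard setup instructions, using Firefox’s Flathub ID as the example:

# add the Flathub remote (a no-op if it already exists), then install and run
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak install flathub org.mozilla.firefox
flatpak run org.mozilla.firefox

Whether the session is GNOME, KDE Plasma, or something else doesn’t matter to that workflow.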

The usual disclaimer applies: I am a Red Hat employee who works on Fedora. The views in this post are my own and not necessarily the views of Red Hat, the Fedora Council, or anyone else. They may not even be my views by the time you read this.

Emoji in console output

Recently, my friend was talking about some output he got from running minikube. Each line included a leading emoji character. He was not thrilled, and I don’t think minikube does it well. But when used appropriately, emoji can add valuable context to the output.

root@test# minikube start
😄 minikube v1.0.0 on linux (amd64)
🤹 Downloading Kubernetes v1.14.0 images in the background ...
💡 Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.

💣 Unable to start VM: Error getting state for host: getting connection: looking up domain: virError(Code=42, Domain=10, Message='Domain not found: no domain with matching name 'minikube'')

😿 Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉 https://github.com/kubernetes/minikube/issues/new
root@test# minikube delete
🔥 Deleting "minikube" from kvm2 ...
💔 The "minikube" cluster has been deleted

I should say, in the interests of full disclosure, that I have written tools that include unhelpful emoji in the output. Some of the emoji are not helpful. The crying cat because it crashed? Doesn’t add anything. The broken heart when the cluster is deleted? I don’t have time for your guilt trips. But the light bulb for a tip and the bomb for a Big Bad Error help draw attention to what could be a wall of text.

Here’s what I see as some guiding ideas for using emoji in output:

  • Have a fallback, but not a stupid one. For the code above, there’s a fallback to ASCII. If you thought the emoji added no value, check out how un-valuable the fallback is. The fallback should probably be “print nothing and go straight to the text output” (see the sketch after this list).
  • Don’t print emoji to logs. The console should be human-readable (and a well-placed emoji can help with drawing attention to the right places), but logs should be machine-readable (well, grep-readable). Log messages should be more structured anyway, so maybe it doesn’t really matter, but don’t rely on your user having an emoji keyboard available when they need to grep the logs.
  • Try to use unambiguous emoji. Pictograms are language-independent, which is nice, but if you’ve ever tried to communicate with A Youth via emoji, you know there’s a lot of room for nuance. Stick to well-understood characters and document them somewhere.
  • Use emoji to enhance context, not to replace text. Log parsing and screen readers are two reasons that you don’t want to get rid of text in favor of emoji. Use the emoji to draw attention to important messages and provide hints as to why they’re important, but make sure the messages still stand on their own.
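
To make the first two bullets concrete, here’s a minimal Go sketch (Go only because that’s what minikube is written in; the heuristic is my assumption, not minikube’s actual logic). It prints the emoji only when stdout is a terminal and the locale looks like UTF-8, and otherwise goes straight to the text:

package main

import (
	"fmt"
	"os"
	"strings"
)

// emojiSupported is a rough heuristic (an assumption, not minikube's
// actual logic): emoji only make sense when stdout is a terminal and the
// locale looks like UTF-8. A real tool should also offer a flag or
// environment variable to disable them outright.
func emojiSupported() bool {
	// Redirected output (a pipe or a log file) always gets plain text.
	fi, err := os.Stdout.Stat()
	if err != nil || fi.Mode()&os.ModeCharDevice == 0 {
		return false
	}
	// Check the usual locale variables for a UTF-8 encoding.
	for _, v := range []string{os.Getenv("LC_ALL"), os.Getenv("LC_CTYPE"), os.Getenv("LANG")} {
		u := strings.ToUpper(v)
		if strings.Contains(u, "UTF-8") || strings.Contains(u, "UTF8") {
			return true
		}
	}
	return false
}

// status prefixes msg with an emoji hint on capable terminals and falls
// back to printing just the text -- not an ASCII stand-in -- elsewhere.
func status(emoji, msg string) {
	if emojiSupported() {
		fmt.Printf("%s  %s\n", emoji, msg)
	} else {
		fmt.Println(msg)
	}
}

func main() {
	status("💡", "Tip: Use 'minikube start -p <name>' to create a new cluster.")
	status("💣", "Unable to start VM: no domain with matching name 'minikube'")
}

Routing log output through a separate writer that never gets the emoji prefix handles the second bullet the same way: the console stays friendly and the logs stay grep-able.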