DevOps is dead!

“$thing is dead!” is one of the more annoying memes in the world of technology. A TechCrunch article back in April claimed that managed cloud infrastructure services are the death knell of DevOps. I dislike the article for a variety of reasons, but I want to focus on why the core argument is bunk. Simply put: “the cloud” is not synonymous with “magical pixie dust.” Real hardware and software still have to exist to run these services.

Amazon Web Services (AWS) is the current undisputed leader in the infrastructure-as-a-service (IaaS) space. AWS probably underlies many of the services you use on a daily basis: Slack and Netflix are two prime examples. AWS offers dozens of compute, storage, and networking services, and it rolls out updates to datacenters across the globe many times a day. DevOps practices are what make that possible.

Oh, but the cloud means you don’t need your internal DevOps team! No. Shut up. “Why not simply teach all developers how to utilize the infrastructure tools in the cloud?” Because being able to spin up servers and being able to effectively manage and scale them are two entirely different concepts. It is true that cloud services can (not “must”!) take the “Ops” out of “DevOps” for development environments. But just as having access to WebMD doesn’t mean I’m going to perform my own surgery, being able to spin up resources doesn’t obviate the need for experienced management.

The author used “managed services provider” as an apparent synonym for “IaaS provider”. He ignored what I think of as “managed services”: a contracted team that manages a service for you. That’s what I believe to be the more realistic threat to internal DevOps teams. But it’s no different from any other outsourcing effort, and outsourcing is hardly a new concept.

At the end of the article, the author finally gets around to admitting that DevOps is a cultural paradigm, not a position or a particular brand of fairy dust. Cloud services don’t threaten DevOps; they make it easier than ever to practice. Anyone trying to convince you that DevOps is dead is just trying to get you to read their crappy article (yes, I have a well-tuned sense of irony, why do you ask?).

Crowdfunding academic research?

A few months ago, the Lafayette Journal & Courier ran a story about Purdue University turning to crowdfunding Zika research. Funding sources in higher ed are special. Grants from federal and other agencies require the submission of sometimes lengthy proposals. The approval process is slow and bureaucratic. Private sector funding can indirectly bias fields of study (why would a company fund a study that is expected to be bad for the company?) or at least lead to accusations of bias.

There are benefits to a crowdfunding model for academic research. Getting the public involved in the process gets them invested in the outcome, which is good for scientific literacy. Crowdfunding can also be a powerful tool for raising a large amount of money.

On the other hand, we already have a crowdfunding model for research: the tax-supported National Science Foundation, National Institutes of Health, and so on. Basic research generally lacks the pizzazz to attract large amounts of crowdfunding, but it is a key foundation for higher-level research.

As the article pointed out, a crowdfunding pitch on the heels of a major fundraising campaign is a bit of a sour note. But overall, using crowdfunding to augment research is an appealing idea. I just worry about the day that researchers become dependent on it.

How gamification can change our habits

A while back, a friend posted the following tweet:

Sleeping in the buff means you don’t get Fitbit credit for the steps taken to clean up dog piss in the wee hours of the morning.

I laughed at first, but then I thought about it. Gamification changes how we behave: I certainly walk a lot more since I started counting my steps on my phone, for example. Small rewards and leveling up are used not only in games but also to promote desired behavior in “serious” situations.

So what will the long-term side effects of gamification look like? Will people who normally sleep naked start sleeping with socks on so they can wear their Fitbit? Or will they instead buy a Jawbone Up that they can wear on their ankle?

Changing how HTCondor is packaged in Fedora

The HTCondor grid scheduler and resource manager follows the old Linux kernel versioning scheme: for a release x.y.z, an even y means a “stable” series that gets only bugfixes, while behavior changes and major features land in the odd-numbered development series. For a long time, the HTCondor packages in Fedora tracked the development series. That forces a choice between introducing behavior changes whenever a new development HTCondor release comes out and pinning a Fedora release to a particular HTCondor release, which means no bugfixes.
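For example, you can check which series a machine is on with a couple of quick queries (both are standard RPM and HTCondor tools; the version in the comment is only illustrative):

# Query the packaged version; an odd second digit (e.g. 8.5.x) means the development series
rpm -q condor
# Report the version the installed HTCondor binaries were built from
condor_version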

Tracking the development series also ignores the Fedora Packaging Guidelines:

As a result, we should avoid major updates of packages within a stable release. Updates should aim to fix bugs, and not introduce features, particularly when those features would materially affect the user or developer experience. The update rate for any given release should drop off over time, approaching zero near release end-of-life; since updates are primarily bugfixes, fewer and fewer should be needed over time.

Although the HTCondor developers do an excellent job of preserving backward compatibility, behavior changes can happen between x.y.1 and x.y.2. HTCondor is not a major part of Fedora, but we should still attempt to be good citizens.

After discussing the matter with upstream and the other co-maintainers, I’ve submitted a self-contained change for Fedora 25 that will

  1. Upgrade the HTCondor version to 8.6
  2. Keep HTCondor in Fedora on the stable release series going forward

Most of the bug reports against the condor-* packages have been packaging issues and not HTCondor bugs, so upstream isn’t losing a massive testing resource here. I think this will be a net benefit to Fedora since it prevents unexpected behavior changes and makes it more likely that I’ll package upstream releases as soon as they come out.

Fourth Amendment protection and your computer

Back in January, I wrote an article for Opensource.com arguing that judges need to be educated on open source licensing. A recent decision from the Eastern District of Virginia makes it clear that the judiciary needs to better understand technology in general. Before I get into the details of the case, I want to make it clear that I tend to be very pro-defendant on the 4th-8th Amendments. I don’t see them as helping the guilty go free (although that is certainly a side effect in some cases), but as preventing the persecution of the innocent.

The defendant in this case is accused of downloading child pornography, which makes him a pretty unsympathetic defendant. Perhaps the heinous nature of his alleged crime weighed on the mind of the judge when he said people have no expectation of privacy on their home computers. Specifically:

Now, it seems unreasonable to think that a computer connected to the Web is immune from invasion. Indeed, the opposite holds true: in today’s digital world, it appears to be a virtual certainty that computers accessing the Internet can – and eventually will – be hacked.

As a matter of fact, that’s a valid statement. It’s good security advice. As a matter of law, that’s a terrible reason to conclude that a warrant was not needed. Homes are broken into every day, and yet the courts have generally ruled that an expectation of privacy exists in the home.

The judge drew an analogy to Minnesota v. Carter, in which the Supreme Court ruled that a police officer peering through broken blinds did not constitute a violation of the Fourth Amendment. I find that analogy to be flawed. In this case, it’s more like the officers entered through a broken window and began looking through drawers. Discovering the contents of a computer requires more than just a passing glance, but instead at least some measure of active effort.

What got less discussion is the Sixth Amendment issue. Access to the computer was made possible by an exploit in Tor that the FBI made use of. The defendant asked for the source code, which the judge refused:

The Government declined to furnish the source code of the exploit due to its immateriality and for reasons of security. The Government argues that reviewing the exploit, which takes advantage of a weakness in the Tor network, would expose the entire NIT program and render it useless as a tool to track the transmission of contraband via the Internet. SA Alfin testified that he had no need to learn or study the exploit, as the exploit does not produce any information but rather unlocks the door to the information secured via the NIT. The defense claims it needs the exploit to determine whether the FBI closed and re-locked the door after obtaining Defendant’s information via the NIT. Yet, the defense lacks evidentiary support for such a need.

It’s a bit of a Catch-22 for the defense: they need evidence to get the evidence they need? I’m open to the argument that the exploit is not a witness per se, which makes the Sixth Amendment argument a little weak, but as a general matter, the “black boxes” used by the government must be subject to scrutiny if we are to have a just justice system.

It’s particularly obnoxious since unauthorized access to a computer by non-law-enforcement has been punished rather severely at times. If a citizen can get 10 years in jail for something, it stands to reason the government should have some accountability when undertaking the same action.

I have seen nothing that suggests the judge wrote this decision out of malice or incompetence. He probably felt that he was making the correct decision. But those who make noise about the “government taking our rights away” would be better served paying attention to the papercut cases like this instead of the boogeyman narratives.

The easy answer here is “don’t download child pornography.” While that’s good advice, it does nothing to protect the innocent from malicious prosecution. Hopefully this will be overturned on appeal.

Fedora 24 upgrade

Fedora 24 was released last week, so of course I had to upgrade my machines. As has become the norm, there weren’t any serious issues, but I hit a few annoyances this time around. The first was due to packages in the RPMFusion repos not being signed. This isn’t Fedora’s fault, as RPMFusion is a completely separate project. And it was temporary: by the time I upgraded my laptop on Sunday night, the packages had all been signed.

Several packages had to be dropped by using the --allowerasing argument to dnf. Mostly these were packages installed from RPMFusion, but there were a couple of Fedora packages as well.
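For anyone following along at home, here’s a rough sketch of the dnf system-upgrade route (the standard upgrade method for this release; adjust flags to taste):

# Make sure the current release is fully up to date first
sudo dnf upgrade --refresh
# Install the plugin that drives release upgrades
sudo dnf install dnf-plugin-system-upgrade
# Download the Fedora 24 packages, letting dnf remove anything that blocks the transaction
sudo dnf system-upgrade download --releasever=24 --allowerasing
# Reboot into the upgrade environment and apply the downloaded packages
sudo dnf system-upgrade reboot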

The biggest annoyance was that post-upgrade, I had no graphical login. I had to explicitly enable and start the display manager service with:

systemctl enable kdm
systemctl start kdm

kdm had previously been enabled on both machines, but the upgrade nuked that in both cases. It looks like I’m not the only person to hit this: https://bugzilla.redhat.com/show_bug.cgi?id=1337546

And now, my traditional meaningless torrent stats!

Here are my seeding ratios for Fedora 23:

Flavor       i686   x86_64
KDE          16.2   35.6
Security     10.3   21.1
Workstation  30.9   46.7
Server       17.5   25.0

The “ratio ratio”, as I call it, is the x86_64 seeding ratio divided by the i686 seeding ratio for each flavor:

Flavor       x86_64:i686
KDE          2.20
Security     2.05
Workstation  1.51
Server       1.43
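(The KDE row, for example, works out to 35.6 ÷ 16.2 ≈ 2.20.)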

So what does all of this tell us? Nothing, of course. Just because someone downloads a torrent doesn’t mean they use it. Still, if we pretend that it’s a proxy for usage, all of the seeding ratios are higher than they were on the last release day. That tells me Fedora is becoming more popular (yay!), and the 64-bit architecture continues to claim a larger portion of the pie.

Now that I’m starting to build a record of these, I can start reporting trends with the Fedora 25 release.

Feeling snappy? Self-contained apps won’t replace distro packages

Keeping with this week’s theme of Linux software repositories, an announcement from Canonical last week caused quite a commotion in various Linux communities. As Ars Technica reported, Canonical is touting “snap packages” as the next big thing in software distribution. The client software has been ported to platforms other than Ubuntu, which some coverage has misrepresented as other distros actively supporting it.

The idea behind snaps (and Flatpak, which is under development in Fedora) is that software projects can build a single, self-contained package that runs on any version of Linux. It’s certainly an appealing prospect for some use cases. Not every open source project wants to rely on community packagers or spend their own effort being the maintainer in a dozen different distros. Some applications, particularly large web apps, could definitely benefit from an easier install process. I’ve found several open source project management tools I wanted to try out, but wasn’t interested enough to go through all of the setup.
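To give a sense of the appeal: once the snapd client is available on a distro, installing a self-contained app is a one-liner (a sketch only; the application name here is a placeholder):

# Fetch and install a self-contained application from the snap store
sudo snap install some-webapp
# Snaps are updated independently of the distro’s package manager
sudo snap refresh some-webapp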

Increased security is also being touted as a benefit, but I’m less inclined to buy into that. I count 63 packages on my Fedora machine that depend on the openssl-libs package. That means the next Heartbleed would require 63 updates instead of one. If some of the upstreams are slow to release updated snaps, sorry about your luck. I did see some discussion that snaps can depend on each other, but that sort of kills the “self-contained” aspect. Containerizing the applications does offer some improvements, but in most implementations, the containerization is disabled.
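If you’re curious about your own machine, you can get a comparable count with a quick query (a rough sketch; “depends on” is fuzzy here since most dependencies are expressed on library sonames rather than the package name, so your number will differ):

# Count installed packages that pull in openssl-libs
dnf repoquery --installed --whatrequires openssl-libs | wc -l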

I think snapd (and Flatpak) are going to be useful tools, but they’re not there yet. And they certainly won’t solve all of the problems that the tech press seems to think they will. For the foreseeable future, maintainers will matter.

Are Linux repos like proprietary app stores?

Bryan Lunduke had a column in Network World last week comparing Linux software repositories to proprietary app stores. His point is a valid one: just because software was once available doesn’t mean it will be available in perpetuity.

It’s tempting to argue that his example isn’t a fair comparison because the repos in question were hosted by Nokia, which was making money off device sales, instead of being a community project. That’s a weak argument, though, because community is no guarantee of longevity. Community provides some extra momentum, but if Red Hat decided to pull its support for Fedora tomorrow, would the community be able to keep the infrastructure running? Would Ubuntu continue without Canonical? Perhaps, at least in the short term. At least in those cases, community mirrors would allow the current state to be maintained for a while.

This isn’t really a problem with repositories, though. If you had to go to each project’s website to get the code, you’d still be in the same boat. Or in many small boats that are all the same. At any rate, software you get from the Internet is always going to be at risk of going poof. This is true for both commercial and open source software.

Somewhere in my parents’ house, there are still probably 3.5″ floppies of DOS games I bought at Target for $5. If I had a 3.5″ drive, I’d see if the disks are still readable. But the software I actually use? It all comes from Internet distribution. I guess good backup hygiene includes keeping the appropriate install media stashed away somewhere.

Fixing a lack of documentation

In its 2016 developer survey, Stack Overflow reported that poor documentation is the second-biggest problem faced by developers, with 34.7% of respondents citing it as a challenge in their daily work. It’s not clear whether that means internal documentation or upstream documentation, but in either case the pain is largely self-inflicted.

Just as not everyone can write good code, not everyone can write good documentation. It’s a skill that requires practice, experience, and some talent. But the best documentation is the documentation that actually gets written. Documentation often has to start with the developer, even if someone else is responsible for cleaning it up.

The developer culture must embrace documentation. Test-driven development says write the tests first and code to the tests. Documentation-driven development is similar. We can argue about which should come first, but I’d suggest the documentation should inform the writing of the tests. Essentially, the documentation is the use case: the business reason you’re going to write code in the first place.
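Here’s a minimal sketch of what that looks like in practice (the script and its options are made up for illustration): the help text gets written first, and the implementation exists to honor it.

#!/bin/sh
# Documentation-driven development in miniature: the usage text below was
# written before any logic existed, and it doubles as the use case that the
# implementation (and its tests) must satisfy.
usage() {
    cat <<'EOF'
Usage: rotate-logs [-c DAYS] LOGDIR
Compress *.log files in LOGDIR that are older than DAYS days (default: 7).
EOF
}

days=7
while getopts "c:h" opt; do
    case "$opt" in
        c) days="$OPTARG" ;;
        h) usage; exit 0 ;;
        *) usage; exit 1 ;;
    esac
done
shift $((OPTIND - 1))
[ -n "$1" ] || { usage; exit 1; }

# The implementation follows the documented contract above
find "$1" -name '*.log' -mtime +"$days" -exec gzip -f {} +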

Writing documentation isn’t fun for a lot of people, so fostering a documentation culture is not easy. You can’t just say “thou shalt document” and expect it to happen. It has to be something you actively enforce. Your pull request doesn’t include documentation? It’s not getting merged until it does. You haven’t written a word of docs all year? No raise for you.

Once documentation takes root, though, it can be self-reinforcing. In the meantime, you do your users and your future self a big favor by making the effort now.

Product review: Divoom AuraBox

When Opensource.com hit 1 million page views back in March, Red Hat bought all of the community moderators a Divoom AuraBox as a token of its appreciation. I’ve been using mine for a little over a month now and I thought I’d share my impression with my ones of readers.

The back of the AuraBox, complete with special Opensource.com branding.

This is a fun toy. Its primary function is as a speaker (with both Bluetooth and audio jack support), a task it is pretty okay at. I mainly use it to listen to public radio podcasts while I’m washing dishes or to baseball games while I’m doing work outside. I have used it for music some, and if your goal is simply to have sound come out, it works. PC Mag’s review took the AuraBox to task over the sound quality, but I think it’s acceptable. No, it’s not great (though I’m no audiophile), but it gets the job done.

The fun part comes in with the LED panel on the front. By itself, the AuraBox can show you the time, temperature, or some pretty patterns. When paired with a phone, the LED panel becomes your own. It will display notification icons for a few select apps (notably and annoyingly, that does not include Google Hangouts), but you can also create your own art. The app supports static or eight-frame animated images on the 10×10 grid. When I first unboxed it, my kids and I had a lot of fun coming up with new images to put on it. I spent more time than I should admit making a train animation. The grid size, frame count (and fixed rate), and limited color palette constrain what you can display, but it’s still fun.

The front of the AuraBox, displaying my attempt at recreating the Fedora logo.

The internal battery charges via a USB port, which means I can put either my phone or my AuraBox on the charger, whichever one seems to be lower. I haven’t tested the battery life, but I’ve done several hours of use without charging and it hasn’t died on me. The Bluetooth connection seems reliable, though it is a little jarring when you go just out of range and suddenly your pants are talking to you again.

My only complaint (and if anyone from Opensource.com is reading this, please don’t mistake this for a lack of gratitude!) is that it is a completely closed environment. I expect that from consumer technology, but given that it was a gift from a website focused on open source, I’d have hoped for more. If the box had a published API, I could think of several fun things to do with it.

Still, this is a useful device to have around. If sound quality is your primary focus, this isn’t the device for you. If you just want to listen to things and have a little bit of fun, then I recommend it.