The role of the Linux distro in modern computing

John Mark Walker recently published a post on how the Linux distribution lost prominence in the DevOps ecosystem (right around the time we started using the term “DevOps”) and how it might be poised for a comeback. His take centers on the idea that “download your dependencies on the fly as you build and deploy software” is a risky proposition, supply-chain-securitally-speaking.

I can’t disagree with that. I’ve always felt that modern language ecosystems are bonkers in that regard. Yes, it’s great that there are better dependency relationships than in C/C++ where the relationships might as well be “LOL, GFY”. But the idea that you’d just download some libraries on the fly and hope it all works as expected? Well, we’ve learned a few times how that can go sideways.

So, yeah. I generally agree with what John Mark says here. Heck, I’ve even given a talk on several occasions with a premise of “operating systems are boring.” So can the Linux distribution become un-boring again and help us fix our woeful supply chains?

Why distros aren’t the answer

The fundamental problem that Linux distributions face is sometimes called “too fast, too slow.” Distributions prioritize stability and coherence to some degree, even the cutting edge distributions. Enterprise distributions — or more accurately, their customers — place a particular emphasis on long-term stability. But development tools and programming languages change quickly.

Some applications need the latest version of libraries. Others update slowly and need old versions. It’s hard to meet both of those needs at the same time. Various attempts have been made, like Fedora’s Modularity, but there’s no great answer. The modern language ecosystems exist, in part, to sidestep this problem.

Another problem is that not all distribution package maintainers are technically solid or even ethical. The vast majority are, of course, but there are exceptions. As a package maintainer myself, I’m not adding a ton of value from a supply chain security standpoint. I take the upstream sources, wrap them in an RPM spec file, and submit them to the build system.

Distributions also create a disconnect between developers and end users. This separation has some value, but it also creates additional points for slow bug fixes, malware injection, and social engineering attacks.

Why distros can be the answer

While it’s true that not all distribution package maintainers are capable of fixing issues in the packages they maintain, some can. And even when a maintainer can’t fix an issue, they can get help from other maintainers in the distribution’s community. Issues that upstream chooses not to fix can be fixed in the distribution’s build.

Distributions aren’t just RPMs or debs or whatever anymore. Contained application delivery methods like Flatpak, Snap, and toolbx give distributions the ability to provide curated environments in a way that improves parallel installability. Of course, upstreams can produce their own containers, but this gives end users the ability to pick the right option for their needs.

In conclusion

I’m not sure I see distributions making the comeback that John Mark hopes for. Enterprise distros move too slowly, by and large, to address the needs of language ecosystems. And there’s not a ton of value in distributions blindly packaging language libraries.

John Mark suggested that organizations would want to “outsource risk mitigation to a curated distribution as much as possible.” I don’t disagree. But community distributions can’t take on the risk that companies want to outsource.

That said, the problem won’t fix itself, so let’s work toward something. If that ends up being Linux distributions, then great. If not, distributions will still have an important role to play.

Maybe we should think about how we use language ecosystems

Over the weekend, Bleeping Computer reported on thousands of packages breaking because the developer of a package inserted infinite loops. He did this with intent. The developer had grown frustrated with his volunteer labor being used by large corporations with no compensation. This brings up at least three issues that I see.

FOSS sustainability

How many times have we had to relearn this lesson? A key package somewhere in the dependency chain relies entirely on volunteer or vastly-underfunded labor. The XKCD “Dependency” comic is only a year and a half old, but it represents a truth that we’ve known since at least the 2014 Heartbleed vulnerability. More recently, a series of log4j vulnerabilities made the holidays very unpleasant for folks tasked with remediation.

The log4j developers were volunteers, maintaining code that they didn’t particularly like but felt obligated to support. And they worked their butts off while receiving all manner of insults. The fact that seemingly the entire world depended on their code only became apparent once it was a problem.

Many people are paid well to maintain software on behalf of their employer. But certainly not everyone. And companies are generally not investing in the sustainability of the projects they rely on.

We depend on good behavior

The reason companies don’t invest in FOSS in proportion to the value they get from it is simple. They don’t have to. Open source licenses don’t (and can’t) require payment. And I don’t think they should. But companies have to see open source software as something to invest in for the long-term success of their own business. When they don’t, it harms the whole ecosystem.

I’ve seen a lot of “well you chose a license that let them do that, so it’s your fault.” Yes and no. Just because people can build wildly profitable companies while underinvesting in the software they use doesn’t mean they should. I’m certainly sympathetic to the developer’s position here. Even the small, mostly unknown software that I’ve developed sometimes invokes an “ugh, why am I doing this for free?” from me—and no one is making money off it!

But we also depend on maintainers behaving. When they get frustrated, we expect they won’t take their ball and go home as in the left-pad case or insert malicious code as in this case. While the anger is understandable, a lot of other people got hurt in the process.

Blindly pulling from package repos is a bad idea

Speaking of lessons we’ve learned over and over again, it turns out that blindly pulling the latest version of a package from a repo is not a great idea. You never know what’s going to break, even if it’s accidental. This still seems to be a common mode in some language ecosystems and it baffles me. With the increasing interest in software supply chains, I wonder if we’ll start seeing that as an area where large companies suddenly decide to start paying attention.

How I configure sshd at home

My “server” at home isn’t particularly important to the outside world. But by virtue of being on the Internet, it’s subject to a lot of SSH logins. The easiest thing to do is to shut it off from the outside world. But I need to access it when away from home, so that’s not a particularly useful solution.

So what I’ve done is use the SSH daemon’s (sshd) configuration to reduce the risk profile. The first thing I wanted to do was forbid login as root:

PermitRootLogin no

I also don’t want anyone to be able to log in with passwords. “Anyone” is essentially me here, but since I have sudo on the box, if someone is able to figure out my password they are able to get root remotely.

PasswordAuthentication no
ChallengeResponseAuthentication no

Finally, I want to restrict remote login to only explicitly-permitted users. I do this with a dedicated Unix group that I call “sshusers”.

AllowGroups sshusers

These are pretty standard changes and not really worth a blog post. But it turns out that sshd has a very flexible configuration. When a client is coming from inside the LAN, I want to enable password authentication. This is particularly helpful when I’m installing a new system and don’t have SSH keys set up yet.

Match Address 192.168.1.*
    PasswordAuthentication yes

Also within the LAN, it’s easier to run Ansible playbooks across machines if the root user can SSH in with a key. So I combine user and address matching to permit key-based root login only from the server with the Ansible playbooks.

Match User root Address 192.168.1.10
    AllowGroups root sshusers
    PermitRootLogin prohibit-password

Finally, I want my ex to be able to access the server in order to access photos, etc. So I set up her account so that she can use an sftp client but can’t log in (not that she would anyway, but it was a fun challenge to set this up).

Match User angie
    ForceCommand internal-sftp
    PasswordAuthentication yes
    PermitTunnel no
    AllowAgentForwarding no
    AllowTcpForwarding no
    X11Forwarding no

Why didn’t you … ?

The configuration above isn’t the only way to secure my SSH server from the outside world. It’s not even necessarily the best. I could, for example, move SSH to a different port, which would cut down on the drive-by attempts significantly. I resisted that in the past because I felt “security through obscurity isn’t security.” But in practice, it can be a layer in a more secure approach. In the past, I also recall some clients I used (particularly on mobile) not having the ability to use a non-default port. If that recollection is correct, it seems to also be outdated now. So basically I’m still on port 22 because of inertia.

I could also set up a VPN server and use that for remote access. That requires an additional service to manage, of course. And it also presents challenges when I’m also connected to a work VPN server. The sshd configuration approach is a simpler way for my needs.

If everyone followed good password advice, we’d be less secure

Passwords are hard. To be useful, they must be hard to guess. But the rules we put in place to make them hard to guess also make them hard to remember. So people do the minimum they can get away with.

Earlier this week, security company Webroot took a look at the unintended consequences of password constraints. The rules organizations set in order to ensure passwords are sufficiently complex reduce the total number of possible passwords. This can make automated password guessing more effective.

Good passwords are easy for the user to remember and hard for computers and other humans to guess. Let’s say I wanted to use a password like 2Clippy2Furious!! Various password checking sites rate it highly. It’s 18 characters long and contains upper- and lower-case letters, digits, and special characters. But because it contains consecutive repeating letters, some companies won’t allow it.
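The consecutive-repeat rule is easy to express. Here’s a sketch in Python of how such a check might work — the function name and the exact rule are my assumptions, not any particular company’s implementation:

```python
import re

def has_consecutive_repeat(password: str) -> bool:
    # True if any character appears twice in a row, e.g. "pp" or "!!".
    return re.search(r"(.)\1", password) is not None

# "2Clippy2Furious!!" trips the rule on "pp" (and again on "!!"),
# so a site with this constraint rejects an otherwise strong password.
has_consecutive_repeat("2Clippy2Furious!!")
```

A rule like this shrinks the space of allowed passwords without making the surviving ones any easier to remember.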

Writing for Webroot, Randy Abrams says “it’s length, not complexity that matters.” And he’s right. That’s the point behind the “correct horse battery staple” password in XKCD #936. So let’s all do that, right?

Well…it’s not so simple. If I were trying to brute force passwords, and I knew everyone was using four (or five or six) words, suddenly instead of “CorrectHorseBatteryStaple” being 25 characters, it’s four. Granted, the “alphabet” grows from 95 characters to (using /usr/share/dict/words on my laptop) 479,828 words. “CorrectHorseBatteryStaple” is many powers of 10 more secure if the attacker doesn’t know you’re using words.
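Back-of-the-envelope entropy numbers make the point. This is my own arithmetic, not anything from the Webroot piece; the 479,828-word count is the dictionary size quoted above:

```python
import math

char_space = 95        # printable ASCII characters
word_space = 479_828   # words in /usr/share/dict/words (quoted above)

# If the attacker guesses character-by-character (25 letters):
as_chars = 25 * math.log2(char_space)   # ~164 bits

# If the attacker knows it's four dictionary words:
as_words = 4 * math.log2(word_space)    # ~75 bits

advantage = as_chars - as_words         # ~89 bits, roughly 27 powers of 10
```

In other words, the same password is vastly weaker against an attacker who guesses words instead of characters — which is the whole argument of this section.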

And let’s be real: they don’t. This hypothetical weakness has a long time before it becomes a real concern. Don’t believe me? Just look at the password dumps when a site gets hacked. There are a lot of really bad passwords out there. If we took all the constraints off (except for minimum length), people would just use really dumb, easily-guessed passwords again. But it amuses me that if everyone followed good password advice, we’d actually make it worse for ourselves. Passwords are hard.

Sidebar: Yes, I know

The savvier among you probably read this and thought “it’s better to use a random string that you never have to memorize because your password manager handles it for you. Just set a very long and memorable password on that and you’re good to go.” Yes, you’re right. But people, even those who use password managers, will often go to memorable passwords for low-risk sites or passwords they have to use often (e.g. to log in to their computer so they can access the password manager). 

What might government regulation of infosec look like?

“Terrible” is the most likely answer. But let’s assume we’re talking about regulation that is effective and sound (from both a technical and civil liberties perspective).

On Sunday’s episode of This Week in Tech, the panel discussed the possibility of government regulation of internet security. I’m not fully convinced that any regulation is necessary, but the case for some form of consumer protection grows with every breach. And I don’t think it likely that companies will self-regulate.

So as neither a policy nor technical security expert, what sort of plan would I draw up?

Good infosec regulation

Any workable laws or regulations would have to be defense-oriented. It may sound like victim-blaming, but I don’t see any other path. Companies must meet some minimum standard of protection or face non-trivial fines in the event of a breach. But if a breach occurs and the company met the standard, I would not punish them. Even the best organization is going to get compromised in some way at some point.

In an ideal scenario, the punishment would instead be on the bad actor. The international nature of the Internet makes that a near impossibility. And given that a company is acting with some degree of public trust, I don’t find it unjust to demand a certain level of security compliance.

In order to avoid a heavy administrative burden, I wouldn’t require external audits (at least not for companies below a certain size). It could be something as simple as “document the security plan and show that you’ve kept to it”. The plan would have some number of required elements (e.g. customer passwords aren’t stored in plaintext) and a further list of suggested elements maintained by an expert body. So long as your plan isn’t garishly incompetent and you stick to it, you’re in the clear from a government punishment perspective.

Of course, certain systems would still be subject to heavier burden. I wouldn’t do away with HIPAA or PCI in favor of this new model. But you can see how less-sensitive services would be nudged toward better consumer protection.

Bad infosec regulation

So what wouldn’t I include? I certainly would not require any encryption backdoor (I might even prohibit it) or prohibit users’ use of encryption. That’s an obvious choice in light of the civil liberty requirement.

I also would not include any specific technology or process in the law/regulation itself. The technology landscape is too dynamic and diverse for that to be effective. The best we can hope for is to set broader principles that need to be updated on the order of years.

The reality of regulation

I don’t see any meaningful regulation happening in the near future. For one, it’s a very difficult problem to solve from both a technical and a policy perspective. More importantly, it could be politically hot, and we all know how pleasant the current environment in Washington is.

At most, we may see a few laws, probably bad, that nibble around the edges. But as the digital age continues to change society as we’ve known it, the law must catch up somehow.

How not to code your bank website

When is a number not a number? When it is a PIN. Backstory: recently my bank overhauled its website. On the whole, it’s an improvement, but it hasn’t been entirely awesome. One of the changes was that special characters were no longer allowed in the security questions. As it turns out, that’s a good way to lock your users out. Me included.

Helpfully, if you lock yourself out, there’s a self-service unlock feature. You just need your Social Security Number and your PIN (and something else that I don’t recall at the moment). Like any good form, it validates the fields before proceeding. Except holy crap, if your PIN begins with 0, pressing “Submit” means the PIN field becomes three characters and you can never proceed. That’s right: it treats the PIN as an integer when really it should be a string.
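I don’t know what the bank’s stack looks like, but the failure mode is easy to reproduce. Here’s a minimal Python sketch of the integer-vs-string mistake (the names are mine):

```python
import re

pin = "0123"                 # a PIN that begins with 0

# The buggy approach: treating the PIN as an integer drops the leading zero.
broken = str(int(pin))       # "123" -- now only three characters long

# The right approach: keep the PIN a string and validate the pattern.
def valid_pin(pin: str) -> bool:
    return re.fullmatch(r"\d{4}", pin) is not None
```

A PIN is an identifier that happens to use digits, not a quantity — the same reason ZIP codes and phone numbers should never be stored as integers.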

I’ve made my share of dumb mistakes, so I try to be pretty forgiving. But bank websites need to be held to a very high standard, and this one clearly misses the mark. Breaking existing functionality and mistreating PINs are bad enough, but the final part that led me to a polite-but-stern phone call was the fact that special characters are not allowed in the password field. This is 2016 and if your website can’t handle special characters, I have to assume you’re doing something terribly, terribly wrong.

In the meantime, I’ve changed my PIN.

What3Words as a password generator

One of my coworkers shared an interesting site last week. What3Words assigns a three-word “address” to every 3m-by-3m square on Earth. The idea behind the site is that many areas of the world don’t have street numbers and names, and a three-word combination is much easier to remember than latitude/longitude pairs. Similar combinations are deliberately placed far apart so as to make them unambiguous.

It’s an interesting idea, but I immediately began thinking of a different use for it. What if people used it to come up with long, memorable, and hard-to-guess passwords? After all, the longer a password is (generally speaking), the better it is. And while correcthorsebatterystaple might be amusing, it’s much easier to remember a place. So you pick a memorable spot on the map and now you have a long password that you can look up if you forget it.


XKCD "Password Strength" by Randall Munroe. Used under the Creative Commons Attribution-NonCommercial 2.5 license.

This method isn’t perfect. The main problem is that with a 3x3m grid, it’s very sensitive to differences in location. But especially for the technically unsavvy, it can be a good way to enable better password habits.

Sidebar: why Randall Munroe is wrong (-ish)
There’s another reason What3Words isn’t perfect, and the XKCD cartoon above is subject to the same weakness. If a password cracker knows people are mostly using concatenated words, they’ll start guessing combinations of words instead of combinations of characters. These sorts of passwords are stronger when they’re rare. Of course, there are trivial ways to mitigate the risks (insertion of special characters, selective capitalization, etc.).

Still, given the choice between a 20-character random string and a 20-character set of words, I’ll take the random string as my password (unless the site/app disables paste, in which case I’ll cry). I use a password manager precisely so I don’t have to worry about trying to balance security and memorability. The What3Words method could be helpful as a password for my password safe, though.

Another reason to disable what you’re not using

A common and wise security suggestion is to turn off what you’re not using. That may be a service running on a computer or the bluetooth radio on a phone. This reduces the potential attack surface of your device and in the case of phones, tablets, and laptops helps to preserve battery life. On the way to a family gathering over the weekend, I discovered another, less intriguing reason.

As I exited the interstate, I passed a Comfort Inn. Having stayed at Comfort Inns in the past, my phone remembered the Wi-Fi network and apparently it tried to connect. The signal was just strong enough that my phone switched from 4G to Wi-Fi, and since the Comfort Inn had a registration portal, this messed up the navigation in the maps app. Oops.

I turned the Wi-Fi antenna off for the rest of the trip. It was a good reminder to shut off what I’m not using.

CERIAS Recap: Featured Commentary and Tech Talk #3

Once again, I’ve attended the CERIAS Security Symposium held on the campus of Purdue University. This is the final post summarizing the talks I attended.

I’m combining the last two talks into a single post. The first was fairly short, and by the time the second one rolled around, my brain was too tired to focus.

Thursday afternoon included a featured commentary from The Honorable Mark Weatherford, Deputy Undersecretary of Cybersecurity at the U.S. Department of Homeland Security. Mr. Weatherford was originally scheduled to speak at the Symposium, but restrictions in federal travel budgets forced him to present via pre-recorded video. Mr. Weatherford opened with an observation that “99% secure means 100% vulnerable.” There are many cases where a single failure in security resulted in compromise.

The cyber threat is real. DHS Secretary Napolitano says infrastructure is dangerously vulnerable to cyber attack. Banks and other financial institutions have been under sustained DDoS attack and it has become very predictable. In the future, there will be more attacks, they will be more disruptive, and they will be harder to defend against.

So what does DHS do in this space? DHS provides operational protection for the .gov domain. They work with the .com sector to improve protection, especially against critical infrastructure. DHS responds to national events and works with other agencies to foster international cooperation.

Cybersecurity got two paragraphs in President Obama’s 2013 State of the Union address. Obama’s recent cybersecurity executive order has goals of establishing an up-to-date cybersecurity network and enhancing information sharing among key stakeholders. DHS is involved in the Scholarship for Service student program which is working to create professionals to meet current and future needs.

The final session was a tech talk by Stephen Elliott, Associate Professor of Technology Leadership and Innovation at Purdue University, entitled “What is missing in biometric testing.” Traditional biometric testing is algorithmic, with well-established metrics and methodologies. Operational testing is harder to do because test methodologies are sometimes dependent on the test. Many papers have been written about the contributions of individual error on performance. Some papers have been written on the contribution of metadata error. Elliott is focused on training: how do users get accustomed to devices, how they remember how to use them, and how can training be provided to users with a consistent message.

One way to improve biometrics is understanding the stability of the user’s response. If we know how stable a subject is, we can reduce the transaction time by requiring fewer measurements. Many factors, including the user, the agent, and system usability affect the performance of biometric systems. Improving performance is not a matter of simply improving the algorithms, but improving the entire system.


CERIAS Recap: Panel #3

Once again, I’ve attended the CERIAS Security Symposium held on the campus of Purdue University. This is one of several posts summarizing the talks I attended.

The “E” in CERIAS stands for “Education”, so it comes as no surprise that the Symposium would have at least one event on the topic. On Thursday afternoon, a panel addressed issues in security education and training. I found this session particularly interesting because it paralleled many discussions I have had about education and training for system administrators.

Interestingly, the panel consisted entirely of academics. That’s not particularly a surprise, but it does bias the discussion toward higher education issues and not vocational-type training. This is often a contentious issue in operations education discussions. I’m not sure if such a divide exists in the infosec world. Three Purdue professors sat on the panel: Allen Gray, Professor of Agriculture; Melissa Dark, Professor of Computer & Information Technology and Associate Director of Educational Programs at CERIAS; and Marcus Rogers, Professor of Computer & Information Technology. They were joined by Ray Davidson, Dean of Academic Affairs at the SANS Technology Institute; and Diana Burley, Associate Professor of Human and Organizational Learning at The George Washington University.

Professor Gray began the opening remarks by telling the audience he had no cyber security experience. His expertise is in distance learning, as he is the Director of a MS/MBA distance program in food and agribusiness management. The rise of MOOCs has made information more available than ever before, but Gray notes that merely providing the information is not education. The MS/MBA program offers a curriculum, not just a collection of courses, and requires interaction between students and instructors.

Dean Davidson is in charge of the master’s degree programs offered by the SANS Technology Institute. This is a new offering and they are still working on accreditation. Although it incorporates many of the SANS training courses, it goes beyond those. “The old days of protocol vulnerabilities are starting to go away, but people still need to know the basics,” he said. “Vulnerabilities are going up the stack. We’re at layers 9 and 10 now.” Students need training in legal issues and organizational dynamics in order to become truly effective practitioners.

Professor Dark joined CERIAS without any experience in providing cybersecurity education. In her opening remarks, she talked about the appropriate use of language: “We always talk about the war on defending ourselves, the war on blah. We’re not using the language right. We should reserve ‘professionalization’ for people who deal with a lot of uncertainty and a lot of complexity.” Professor Burley also discussed vocabulary. We need to consider who is the cybersecurity workforce. Most cybersecurity professionals are in hybrid roles, so it’s not appropriate to focus on the small number who have roles entirely focused on cybersecurity.

Professor Rogers drew parallels to other professions. Historically, professionals of any type have been developed through training, certification, education, apprenticeship or some combination of those. In cybersecurity, all of these methods are used. Educators need to consider what a professional in the field should know, and there’s currently no clear-cut answer. How should education respond? “Better than we currently are.” Rogers advocates abandoning the stove pipe approach. Despite talk of being multidisciplinary, programs are often still very traditional. “We need to bring back apprenticeship and mentoring.”

The opening question addressed differences between education and training. Gray reiterated that disseminating information is not necessarily education; education is about changing behavior. Universities tend to focus on theory, but professionalization is about applying that theory. As the talk drifted toward certifications, which are often the result of training, Rogers said “we’re facing the watering-down of certifications. If everybody has a certification, how valuable is it?” Dark launched a tangent when she observed that cybersecurity is in the same space as medicine: there’s so much that practitioners can’t know. This led to a distinction being made (by Spafford, if I recall correctly) between EMTs and brain surgeons as an analogy for various cybersecurity roles. Rogers said we need both. They are different professions, Burley noted, but they both consider themselves professionals.

One member of the audience said we have a great talent pool entering the work force, but they’re all working on the same problems. How many professionals do we need? Davidson said “we need to change the whole ecosystem.” When the barn is on fire, everyone’s a part of the bucket brigade; nobody has time to design a better barn or better fire fighting equipment. Burley pointed out that the NSF’s funding of scholarships in cybersecurity is shifting toward broader areas, not just computer science. This point was reinforced by Spafford’s observation that none of the panelists have their terminal degree in computer science. “If we focus on the job openings that we have right now,” Rogers said, “we’re never going to catch up with the gaps in education.” One of the panelists, in regard to NSF and other efforts, said “you can’t rely on the government to be visionary. You might be able to get the government to fund vision,” but not set it.

The final question was “how do you ensure that ethical hackers do not become unethical hackers?” Rogers said “in education, we don’t just give you knowledge, we give you context to that knowledge.” Burley drew a parallel to the Hippocratic Oath and stressed the importance of socialization and culturalization processes. Davidson said the jobs have to be there as well. “If people get hungry, things change.”
