CERIAS Recap: Opening Keynote

Once again, I’ve attended the CERIAS Security Symposium held on the campus of Purdue University. This is the first of several posts summarizing the talks I attended.

The opening keynote was delivered by Todd Gebhart, the co-president of McAfee, Inc. Mr. Gebhart opened by reminding the audience that a “certain individual” who happens to share a name with the company is no longer involved with the McAfee corporation. Gebhart set the stage by addressing why McAfee employees go to work every day. The company focuses on protecting four areas: personal, business, government, and critical infrastructure.

The nature of security has changed over the years. In 1997, updates to antivirus subscriptions were physically mailed on disk to McAfee customers every three months, and only 17,000 known pieces of malware had been identified. Today, the growth in the number of connected devices has spurred a corresponding growth in malware. McAfee estimates one billion devices are connected to the Internet today, a number forecast to grow to 50 billion by 2020. Despite improvements in security procedures and products, the rate of growth in malware does not appear to be slowing.

The growth rate is greatest for mobile devices, where “only” 36,000 unique pieces of malware are known to exist (according to a preliminary study, 4% of all mobile apps are designed with malicious intent). Consolidation of mobile operating systems into two main players (iOS and Android) has made life easier for malware writers. The nature of the threat on mobile has changed as well. Whereas desktop and server-based attacks were often about gaining control of or denying service to a machine, mobile threats are more focused on the loss of data and devices. The addition of WiFi, while of considerable benefit to users, has opened up a whole new realm of attack vectors that did not exist a few years ago.

Gebhart gave a brief survey of current malware threats in the four sectors listed above. He noted that attacks are no longer about machines; they’re about people and organizations. Accordingly, spam and botnets are becoming less of a concern in favor of malicious URLs. Behavior- and pattern-based attacks allow bad actors to focus their efforts more efficiently, and the development of Hacker-as-a-Service (HaaS) offerings allows for attackers with little-to-no technical knowledge.

The evolving threat has led to greater awareness among non-technical business leaders. Security companies are now having discussions not only with technical leadership in organizations, but also with high-level business and government leaders.

The industry is evolving to face the new and emerging threats. The use of real-time data to make real-time decisions can improve the response to attacks, or perhaps prevent them. Multi-organization cooperation can help defend against so-called “trial-and-error” attacks. Cloud-based threat intelligence allows McAfee to analyze malware traffic across 120 million devices worldwide. Hardware and software vendors are working together (or in the case of Intel, buying McAfee) to develop systems that can detect malware at the hardware interaction layer.

Gebhart closed by saying “it’s an exciting time to be in security” and noting that his company is always looking for talented security researchers and practitioners.

Bad security from SpeedDate.com

The problem with having my first initial and last name as my email address is that I get a lot of email from other people. The other day, I started getting messages from SpeedDate.com. Not wanting to keep Barry from true love, I looked for a support address. There wasn’t one, so I tried going to the site by clicking the link. Imagine my surprise when I had full access to Barry’s profile.

Since I was in a good mood, I didn’t pretend to be Barry. I simply deactivated his account so that maybe he’d notice. The next day, I got another email saying a woman was interested in me. Er, Barry. Once again, I clicked the link to take me to the site. This time, I unchecked all of the notification options and put in an unused email address. I was interested to see what I could do if I were a bad person, so I looked at Barry’s profile information. He didn’t have much filled out, but he had his date of birth and his ZIP code. From those, I could use an online phone book to find his address and phone number.

That’s plenty of information for a social engineering attack. Considering he couldn’t enter his email address correctly, I’m willing to bet that if I called pretending to be from his bank, library, local police, etc., I could get even more information out of him. Identity theft city, folks. It’s lucky for Barry that I’m generally a nice guy. Though it’s not really Barry’s fault that SpeedDate.com is stupid enough to allow unauthenticated access to user settings. I’ve tried to contact SpeedDate.com and have not yet received a response, so I’m opting to shame them publicly.

But ladies, if you’re looking for Barry, he’s not here.

Data breaches suck

Despite our best efforts, machines sometimes get compromised. The culprit isn’t always (or even usually) a highly publicized group in it for the laughter. It could be a curious student, or an overzealous admin, or the Russians. Whoever is behind it, when it happens, it sucks. Especially if sensitive data is involved. So I really feel bad for my colleagues in the Math department at Purdue, who had to deal with this recently. According to the University News Service, over 7000 former students have been notified that an attacker potentially accessed their Social Security Number.

I know only as much about this as has been publicized, so I can’t speak to any specifics. What I can say is that stuff like this kept me up at night in my previous job. For years, SSNs were used to “anonymously” distribute grades to students. They’re nice because they’re a unique identifier and nearly everyone has one. Unfortunately, they’re also kind of important elsewhere and protected by state and federal laws. The upshot is that many faculty had files containing SSNs on their desktop or on removable media or on a file server.

In 2006, if memory serves, we were tasked with scanning every machine owned by the department for SSNs. This involved adapting some existing tools (basically just some really fancy regular expressions to grep with), doing a room-by-room inventory, and then asking users to scan their machines and sift through the output. After the machine owners ran their own scans and cleaned up offending files, we did it again, this time forcing the scans and having the IT staff review the output for offending files. It was a months-long project that was by no means pleasant.

From the article, it sounds like it was similarly awful after this breach. You can’t assume that a SSN will be formatted 000-00-0000, so you have to look for 9-digit strings, which occur with alarming frequency. In this case, it appears that no one’s number was actually divulged, which simultaneously lends relief and futility to the exercise.
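The scanning tools themselves were in-house, but the core of such a scan is a small regular expression. Here’s a rough sketch in Python of what “look for 9-digit strings” amounts to; the pattern and function names are my own illustration, not the actual tools we used:

```python
import re

# Candidate SSNs: the dashed form 000-00-0000, or any bare 9-digit run.
# The bare form is what generates the flood of false positives noted above.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b|\b\d{9}\b")

def scan_text(text):
    """Return every candidate SSN-looking string found in a blob of text."""
    return SSN_PATTERN.findall(text)

# A dashed SSN and a bare 9-digit order number both match; a 10-digit
# phone number does not, thanks to the word boundaries.
hits = scan_text("SSN 123-45-6789, order 987654321, phone 7654944000")
```

On a real sweep you’d feed each file’s contents through something like this and then, as described above, sift the hits by hand, which is exactly where the pain comes from.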

Dropping Dropbox

When Dropbox first came to my attention, I was in love. What a great way to keep various config files synchronized across computers. Then it came out that Dropbox’s encryption wasn’t quite as awesome as they let on. It turns out there’s no technical restriction on (at least certain) employees accessing your files. The data is encrypted, but server-side. Now, I’m not all that concerned that someone will target me to find out what my .ssh/config file contains (heck, I’d put it on dotfiles if someone asked nicely), but it does make me reconsider what is appropriate for Dropbox.

Recently, Dropbox announced some changes to the Terms of Service. While the license part is what caused the most uproar on the Internet, the de-duplication part is what stood out the most to me. I know it’s not in Dropbox’s best interests to pay to store a thousand copies of Rebecca_Black-Friday.mp3, but that’s not my concern. The wording suggests that the de-duplication is block-level as opposed to file-level, which is less worrisome, but given their previous lack of transparency about the encryption, I wonder how they’re actually implementing it. If it’s file-level and if it spans multiple accounts, then that seems like a really terrible idea.
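To make the worry concrete, here’s a toy sketch (entirely my own illustration, not Dropbox’s implementation) of file-level de-duplication keyed on a content hash. If the store is shared across accounts, the second uploader of a byte-identical file learns that somebody has uploaded it before:

```python
import hashlib

class DedupStore:
    """Toy file-level de-duplication: one stored copy per unique content hash."""

    def __init__(self):
        self._blobs = {}  # sha256 hex digest -> stored bytes

    def put(self, data):
        """Store data once; return (digest, already_present)."""
        digest = hashlib.sha256(data).hexdigest()
        already_present = digest in self._blobs
        if not already_present:
            self._blobs[digest] = data
        return digest, already_present

store = DedupStore()
_, dup_first = store.put(b"contents of Rebecca_Black-Friday.mp3")   # account A
_, dup_second = store.put(b"contents of Rebecca_Black-Friday.mp3")  # account B
# dup_second is True: account B just learned that this exact file
# already exists somewhere in the system -- an information leak.
```

Block-level de-duplication weakens that oracle (you’d have to guess exact block contents rather than whole files), which is why the distinction in the ToS wording matters.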

I’ve recently switched everything I had in Dropbox over to SpiderOak. The synchronization seems a bit slower and the configuration is less simple (but it’s much easier to back up multiple directories, instead of having to barf symlinks everywhere), but the encryption is client-side so that it’s impossible for SpiderOak to divulge user data (unless they’re lying, too). If you’re interested in trying SpiderOak for yourself, sign up through this link and we’ll both get an extra 1 GB of storage for free.

Making secure passwords

A recent ZDNet article claimed that GPU computing has rendered even the most secure passwords dangerously crackable. It’s true that passwords developed using the conventional wisdom are easier to brute-force than they used to be, but that doesn’t mean all hope is lost. The tradeoff is normally between complexity/length and memorability, but in a “Security Now” episode earlier this month, Steve Gibson tossed that tradeoff out the window. His idea: burying your password in a haystack. The general idea is this: if your password is “r4Nd0mBunn1es”, you can make it less crackable by doing something like “/\/\/\/\r4Nd0mBunn1es/\/\/\/\” or “---r4Nd0m*****Bunn1es+++” or some other method of padding with extra characters.

Even if you use the same pattern every time, so long as the password needle is different, the overall password will be very difficult to crack. So if the padded password is already so difficult, why use a different needle for each site? Because you can’t trust the site to do the right thing. As recent attacks against Sony and other sites have shown, some sites still store passwords in plain text. At least with the password haystack, you can remember shorter passwords and apply the appropriate pattern to fill them out.

I’ve not been as quick to react to this as I perhaps should have been. Admittedly, I reuse my throwaway passwords a lot, so I’m taking advantage of this opportunity to fix this glitch. I’ll probably just create gibberish ones and save them in KeePassX.

Understanding Condor security settings

Recently our colleagues at the University of Nebraska asked us to add host certificates to our Condor servers to enable encryption between sites. After fighting my way through Cfengine, I got the certificates in place. The next step was to enable GSI authentication on the servers. Having wised up over the years, I chose to commit the change early in the day. My commit made the following two edits:

Modified: prod/tmpl/condor/condor_config.negotiator
 ===================================================================
 --- prod/tmpl/condor/condor_config.negotiator   2011-05-18 17:04:21 UTC (rev 12379)
 +++ prod/tmpl/condor/condor_config.negotiator   2011-05-18 17:12:46 UTC (rev 12380)
 @@ -45,6 +45,12 @@
 QUILL_DB_NAME           = quill
 QUILL_DB_IP_ADDR        = quill-00.rcac.purdue.edu:5432

 +# Use host certs to provide some inter-site security
 +SEC_DAEMON_AUTHENTICATION = preferred
 +SEC_DAEMON_AUTHENTICATION_METHODS = GSI, PASSWORD
 +SEC_NEGOTIATOR = preferred
 +SEC_NEGOTIATOR_AUTHENTICATION_METHODS = GSI, PASSWORD
 +GSI_DAEMON_DIRECTORY = /etc/grid-security

 # define sub collectors
 COLLECTOR2 = $(COLLECTOR)

 Modified: prod/tmpl/condor/condor_config.submit
 ===================================================================
 --- prod/tmpl/condor/condor_config.submit       2011-05-18 17:04:21 UTC (rev 12379)
 +++ prod/tmpl/condor/condor_config.submit       2011-05-18 17:12:46 UTC (rev 12380)
 @@ -40,6 +40,11 @@
 condor1.ipfw.edu
 #      condorcm.pnc.edu, \

 +# Use host certs to provide some inter-site security
 +SEC_ADVERTISE_SCHEDD_AUTHENTICATION = preferred
 +SEC_ADVERTISE_SCHEDD_AUTHENTICATION_METHODS = GSI, PASSWORD
 +GSI_DAEMON_DIRECTORY = /etc/grid-security
 +
 # Use global event log (like the userlog)
 # Set MAX_EVENT_LOG to a huge number, since there's a bug keeping the
 # "set it to zero to let it grow unlimited" bit from working

I had tested the changes a bit and hadn’t noticed anything too out of whack. Until the following morning. Users were complaining about slow queueing and execution of PBS jobs. At first, I thought it was a license problem on one of the PBS servers, since that had happened a few times in the recent past. As the others in the group began investigating, though, they noticed that the PBS prologue script wasn’t completing as a job tried to land. It turns out the condor_config_val call that changes the PBSRunning attribute was failing because the node couldn’t talk to the collector.

The root cause was my misunderstanding of the Condor security documentation. With clients set to “optional” and daemons set to “preferred”, both sides try to use the relevant security features. But since the authentication methods didn’t match, they refused to talk to each other instead of failing gracefully. Changing “preferred” to “optional” restored performance and job throughput. Having gone through this, the behavior now makes sense, but it’s more than a little embarrassing to bring the entire infrastructure down.
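For completeness, here is what the negotiator snippet above looks like after the fix (the submit template got the same one-word treatment). The keys mirror the committed diff exactly; only the policy value changed:

```
# Use host certs to provide some inter-site security, but fail gracefully:
# "optional" lets a peer whose methods don't match fall back instead of
# refusing to talk.
SEC_DAEMON_AUTHENTICATION = optional
SEC_DAEMON_AUTHENTICATION_METHODS = GSI, PASSWORD
SEC_NEGOTIATOR = optional
SEC_NEGOTIATOR_AUTHENTICATION_METHODS = GSI, PASSWORD
GSI_DAEMON_DIRECTORY = /etc/grid-security
```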

A Cfengine learning experience

Note: This post refers to Cfengine 2. The difficulties I had may quite likely be a result of peculiarities in our environment or the limits of my own knowledge.

A few weeks ago, my friends at the University of Nebraska politely asked us to install host certificates on our Condor collectors and submitters so that flocking traffic between our two sites would be encrypted. It seemed like a reasonable request, so after getting certificates for 17-ish hosts from our CA, I set about trying to put them in place. I could have plopped them all in place easily enough with a for loop, but I decided it would make more sense to let Cfengine take care of it. This has the added advantage of making sure the certificate gets put back in place automatically when a host gets reinstalled or upgraded.

I thought it would be nice if I tested my Cfengine changes locally first. I know just enough Cfengine to be dangerous, and I don’t want to spam the rest of the group with mail as I check in modifications over and over again. So after editing the input file on one of the servers, I ran cfagent -qvk. It didn’t work. The syntax looked correct, but nothing happened. After a bit, I asked my soon-to-be-boss for help.

It turned out that I didn’t quite get the meaning of the -k option. I always used it to run against the local cache of the input files, not realizing that it killed all copy actions. Had I looked at the documentation, I would have figured that out. Like I said, I know just enough to be dangerous.

I didn’t want to generate a bunch of error email, since some hosts wouldn’t be getting host certificates, so I went with an IfFileExists statement that I could use to define a group to use in the copy: stanza. So I committed what I thought were the correct changes and tried running cfagent again. The certificates still weren’t being copied into place. Looking at the output, I saw that it couldn’t find the file. Nonsense. It’s right there on the Cfengine server.

As it turns out, that’s not where IfFileExists looks; it looks on the server running cfagent. The file, of course, doesn’t exist locally because Cfengine hasn’t yet copied it. Eventually I surrendered and defined a separate group in cf.groups to reference in the appropriate input file. This makes the process more manual than I would have liked, but it actually works.

Oh, except for one thing. In testing, I had been using $(hostname) in a shellcommand: to make sure the input file was actually getting read. When I finally got the copy: stanza sorted out, the certificates still weren’t being copied out. The cfagent output said it couldn’t find ‘/masterfiles/tmpl/security/host-certs/$(hostname).pem’. It turns out I was wrong in thinking $(hostname) was a valid Cfengine variable. Instead, it was being passed through to the shell command and expanded by the shell. The end result was indistinguishable from what I intended in that case, but it didn’t translate to the copy: stanza. The variable I wanted was $(fqhost).
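For the curious, the working arrangement looked roughly like the following Cfengine 2 fragments. I’m reconstructing this from memory, so treat the group name, hostnames, variables, and copy attributes as illustrative rather than exact:

```
# In cf.groups: a manually maintained group of hosts that get certificates
# (hostnames here are made up)
condor_cert_hosts = ( condor-cm condor-submit-00 )

# In the relevant input file: copy each host's own cert into place
copy:
    condor_cert_hosts::
        $(master)/tmpl/security/host-certs/$(fqhost).pem
            dest=/etc/grid-security/hostcert.pem
            server=$(policyhost)
            mode=0644
            type=checksum
```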

Comcast’s bot alert service is a good idea terribly implemented

Brian Krebs reported yesterday that Comcast will be implementing its bot detection feature nationwide.  Comcast will apparently put an overlay on websites when visited from an IP that exhibits signs of bot activity.  I don’t claim to be a security expert, but I think I’ve been in the business long enough to say “that’s really stupid.”

While I agree with Comcast’s efforts to fight bot infestations, they are going about it in exactly the wrong way.  Running man-in-the-middle code is unacceptable, regardless of the intent.  If the code is inserted into anything other than HTTP traffic, it will almost certainly break things, and I imagine that certain kinds of HTTP applications will break, too (specifically automated retrieval/parsing of sites).   Additionally, it opens up another attack vector if Comcast itself suffers a breach.

Perhaps the worst part of this plan, though, is the impact it has on user education.  For most users, nuance is not appropriate.  Despite repeated warnings about the illegitimacy of “Your computer is infected!” pop-ups, people still click on them.  Now suddenly there’s the Comcast nag with a link to download anti-malware tools.  Comcast seems to assume that users can handle the nuance.  My own experience suggests otherwise.

Unlike the authors of some of the comments on the post, I’m not concerned that Comcast can determine when a host (well, a customer’s connection, which may have several hosts behind the router) is operating as part of a botnet.  While they could be inspecting the contents of the packets, it’s more likely that they’re just using the routing information and other already-visible data.  There are some hosts and traffic patterns that are generally indicative of bot activity, but not conclusively so.  That’s how the network security group at my employer works, in fact: they determine that a host is displaying suspicious behavior, and notify the local admins to investigate.  Sometimes, it’s a false alarm, which is another cause for concern. If users get the Comcast “you’re a bot!” warning, act on it, and it turns out to be false, will they take it seriously again?

I don’t have an answer for Comcast.  They’re trying to do a great thing by combating botnets (not altruistically, of course, but helping their network helps their customers too, so who’s to complain?), but the current method of informing affected users is a really bad idea.

Summary of the 2010 CERIAS Information Security Symposium

Earlier this week, Purdue’s Center for Education and Research in Information Assurance and Security (CERIAS) held its annual Information Security Symposium. This year’s symposium was well-attended, and the keynote speakers perhaps had something to do with that.  The keynote speaker for the first day was the Honorable Mike McConnell, a former Director of National Intelligence, among several other posts.  The day two keynote speaker was the current Under Secretary for the National Protection and Programs Directorate in the Department of Homeland Security, the Honorable Rand Beers. Of course, the internationally-renowned director of CERIAS, Gene Spafford, was there, along with a collection of academic and industry representatives serving on three speaking panels.

With the exception of the poster session, the content of the symposium was largely non-technical.  This is fitting, since many of the greatest challenges in cyber security revolve around social or political difficulties, not technical limitations.  Both Admiral McConnell and Mr. Beers discussed at great length the interactions between the public and private sectors and the need for a mature cyber security policy.

Cyber security awareness month: Other uses for SSH

As I noted a few weeks ago, October is cyber security awareness month.  I’d planned on writing a big how-to for remotely and securely connecting to another computer, but time has escaped me, so what I’ll give here is the quick and dirty version, and trust that my readers can use Google to fill in the backstory.

Back in May, I wrote an article about using SSH as a proxy to help secure your web browsing when away from home.  SSH was designed primarily to provide shell (command line) access to remote machines, using encryption and other features to prevent eavesdropping, but it can be used to tunnel all kinds of other traffic.  For example, you can tunnel your Subversion version control over SSH by using an svn+ssh URL (e.g. svn co svn+ssh://username@my.server.org/path/to/repo). Or you could tunnel your VNC (a remote desktop protocol) session over an SSH connection.

Why would you want to tunnel VNC?  The first reason is that VNC by default passes all traffic in plain text, which means all of your keystrokes (read: passwords) are exposed.  By using an SSH tunnel, your session is encrypted. The second reason is that by using an SSH tunnel, you don’t have to open the firewall for the VNC port(s).

So how do you tunnel VNC, or another protocol?  The -L argument to SSH (or LocalForward in the config file) tells SSH to forward locally.  To tunnel to a VNC server running on display :1, you’d do something like:  ssh -L 5901:localhost:5901 username@my.server.org   and then point your VNC viewer to localhost:1.
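If you use a tunnel regularly, the same forward can live in your ~/.ssh/config via the LocalForward option mentioned above, so that a plain ssh my.server.org sets it up every time:

```
Host my.server.org
    LocalForward 5901 localhost:5901
```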

In addition to interactive-type uses, SSH can be used for file transport as well.  The scp command copies files to and from a remote server in the same manner that the cp command works locally.  sftp can be used as a secure replacement for the FTP protocol (but there’s no provision for anonymous access).  And most importantly, the venerable rsync command can be used with SSH by specifying it as the argument to the -e flag (e.g. rsync -e "ssh" -av /some/local/directory username@my.server.org:/the/remote/directory).

So the moral of the story is: SSH can help keep you secure.