CERIAS Recap: Thursday keynote

Once again, I’ve attended the CERIAS Security Symposium held on the campus of Purdue University. This is one of several posts summarizing the talks I attended.

Thursday’s keynote address was delivered by Christopher Painter, the Coordinator for Cyber Issues at the U.S. State Department. Mr. Painter has a long and distinguished career in law and policy, starting with the U.S. Attorney’s office in Los Angeles and moving through several roles in the Justice Department. He served as acting Cyber Czar during his time in the White House, and finally ended up in the State Department.

Cyber security issues have started receiving increased attention in recent years. Painter said President Obama came to the White House with a unique understanding of security because his 2008 campaign was hacked. In his 2013 State of the Union address, Mr. Obama became the first president to address cyber security on such a stage.

As Todd Gebhart noted the morning before, the conversation has evolved from being purely technical to involving senior policy officials. This requires the technical community to work with the policy community so that policy is well informed. Painter takes heart in observing senior officials discuss cyber security issues beyond the scope of their prepared notes.

Although the State Department has a role in responding to DoS attacks against diplomatic institutions, the primary focus seems to be on fostering international cooperation. The international nature of cyber crime makes it very difficult to combat. Many different targets and intents are involved, as well. Although there have not been any [publicly reported] terrorist attacks on critical infrastructure, the threat exists. There are financial motivations for other cyber crimes. For example, one man spoofed Bloomberg web pages to publish fake articles in order to manipulate the stock price of a company. Although he got cold feet about executing the trade, people lost money in their own trades.

Regardless of the specific incident, the international nature of cyber crime makes it difficult to pursue and prosecute offenders. Some governments are more interested in “regime security”, protecting the interests of their own authoritarian states. The goal of U.S. cyber policy is an open, secure, reliable Internet system. To accomplish this, the State Department is promoting a shared framework of existing norms grounded in existing international law. Larger embassies have created “cyber attache” positions in order to help foster international cooperation.


CERIAS Recap: Panel #1

Once again, I’ve attended the CERIAS Security Symposium held on the campus of Purdue University. This is one of several posts summarizing the talks I attended. This post will also appear on the CERIAS Blog.

With “Big Data” being a hot topic in the information technology industry at large, it should come as no surprise that it is being employed as a security tool. To discuss the collection and analysis of data, a panel was assembled from industry and academia. Alok Chaturvedi, Professor of Management, and Samuel Liles, Associate Professor of Computer and Information Technology, both of Purdue University, represented academia. Industry representatives were Andrew Hunt, Information Security Researcher at the MITRE Corporation, Mamani Older, Citigroup’s Senior Vice President for Information Security, and Vincent Urias, a Principal Member of Technical Staff at Sandia National Laboratories. The panel was moderated by Joel Rasmus, the Director of Strategic Relations at CERIAS.

Professor Chaturvedi made the first opening remarks. His research focus is on reputation risk: the potential damage to an organization’s reputation – particularly in the financial sector. Reputation damage arises from the failure to meet the reasonable expectations of stakeholders and has six major components: customer perception, cyber security, ethical practices, human capital, financial performance, and regulatory compliance. In order to model risk, “lots and lots of data” must be collected; reputation drivers are checked daily. An analysis of the data showed that malware incidents can be an early warning sign of increased reputation risk, allowing organizations an opportunity to mitigate reputation damage.

Mr. Hunt gave brief introductory comments. The MITRE Corporation learned early that good data design is necessary from the very beginning in order to properly handle a large amount of often-unstructured data. They take what they learn from data analysis and re-incorporate it into their automated processes in order to reduce the effort required by security analysts.

Mr. Urias presented a less optimistic picture. He opened his remarks with the assertion that Big Data has not fulfilled its promise. Many ingestion engines exist to collect data, but the analysis of the data remains difficult. This is due in part to the increasing importance of meta characteristics of data. The rate of data production is challenging as well. Making real-time assertions from data flowing at line rates is a daunting problem.

Ms. Older noted that Citigroup gets DDoS attacks every day, though some groups stage attacks on a somewhat predictable schedule. As a result, Citigroup employs a strong perimeter defense. She noted, probably hyperbolically, that it takes 20 minutes to boot her laptop. Despite the large volume of data produced by the perimeter defense tools, they don’t necessarily have good data on internal networks.

Professor Liles focused on the wealth of metrics available and how most of them are not useful. “For every meaningless metric,” he said, “I’ve lost a hair follicle. My beard may be in trouble.” It is important to focus on the meaningful metrics.

The first question posed to the panel was “if you’re running an organization, do you focus on measuring and analyzing, or mitigating?” Older said that historically, Citigroup has focused on defending perimeters, not analysis. With the rise of mobile devices, they have recognized that mere mitigation is no longer sufficient. The issue was put rather succinctly by Chaturvedi: “you have to decide if you want to invest in security or invest in recovery.”

How do organizations know if they’re collecting the right data? Hunt suggested collecting everything, but that’s not always an option, especially in resource-starved organizations. Understanding the difference between trend data and incident data is important, according to Liles, and you have to understand how you want to use the data. Organizations with an international presence face unique challenges, since legal restrictions and requirements can vary from jurisdiction to jurisdiction.

Along the same lines, the audience wondered how long data should be kept. Legal requirements sometimes dictate how long data must be kept (either at a minimum or a maximum) and what kind of data may be stored. The MITRE Corporation uses an algorithmic system to determine retention periods and storage media for data. Liles noted that some organizations are under long-term attack, and sometimes the hardware refresh cycle is shorter than the duration of the attack. Awareness of what local log data is lost when a machine is discarded is important.

Because much of the discussion had focused on ways that Big Data has failed, the audience wanted to know of successes in data analytics. Hunt pointed to the automation of certain analysis tasks, freeing analysts to pursue more things faster. Sandia National Labs has been able to correlate events across systems and quantify sensitivity effects.

One audience member noted that as much as companies profess a love for Big Data, they often make minimal use of it. Older replied that it is industry-dependent. Where analysis drives revenue (e.g. in retail), it has seen heavier use. An increasing awareness of analysis in security will help drive future use.


CERIAS Recap: Opening Keynote

Once again, I’ve attended the CERIAS Security Symposium held on the campus of Purdue University. This is the first of several posts summarizing the talks I attended.

The opening keynote was delivered by Todd Gebhart, the co-president of McAfee, Inc. Mr. Gebhart opened by reminding the audience that a “certain individual” who happens to share a name with the company is no longer involved with the McAfee corporation. Gebhart set the stage by addressing why McAfee employees go to work every day. The company focuses on protecting four areas: personal, business, government, and critical infrastructure.

The nature of security has changed over the years. In 1997, updates to antivirus subscriptions were physically mailed on disk to McAfee customers every three months. 17,000 known pieces of malware had been identified. Today, a growth in the number of connected devices has spurred a growth in malware. McAfee estimates one billion devices are connected to the Internet today, a number which is forecast to grow to 50 billion by 2020. Despite improvements in security procedures and products, the rate of growth in malware does not appear to be slowing.

The growth rate is greatest for mobile devices, where “only” 36,000 unique pieces of malware are known to exist (according to a preliminary study, 4% of all mobile apps are designed with malicious intent). Consolidation of mobile operating systems into two main players (iOS and Android) has made it easier for malware writers. The nature of the threat on mobile has changed as well. Whereas desktop and server-based attacks were often about gaining control of or denying service to a machine, mobile threats are more focused on the loss of data and devices. The addition of WiFi, while of considerable benefit to users, has opened up a whole new realm of attack vectors that did not exist a few years ago.

Gebhart gave a brief survey of current malware threats in the four sectors listed above. He noted that attacks are no longer about machines; they’re about people and organizations. Accordingly, spam and botnets are becoming less of a concern in favor of malicious URLs. Behavior- and pattern-based attacks allow bad actors to focus their efforts more efficiently, and the development of Hacker-as-a-Service (HaaS) offerings allows for attackers with little-to-no technical knowledge.

The evolving threat has led to greater awareness among non-technical business leaders. Security companies are now having discussions not only with technical leadership in organizations, but also with high-level business and government leaders.

The industry is evolving to face the new and emerging threats. The use of real-time data to make real-time decisions can improve the response to attacks, or perhaps prevent them. Multi-organization cooperation can help defend against so-called “trial-and-error” attacks. Cloud-based threat intelligence allows McAfee to analyze malware traffic across 120 million devices worldwide. Hardware and software vendors are working together (or in the case of Intel, buying McAfee) to develop systems that can detect malware at the hardware interaction layer.

Gebhart closed by saying “it’s an exciting time to be in security” and noting that his company is always looking for talented security researchers and practitioners.


Bad security from SpeedDate.com

The problem with having my first initial and last name as my email address is that I get a lot of email from other people. The other day, I started getting messages from SpeedDate.com. Not wanting to keep Barry from true love, I looked for a support address. There wasn’t one, so I tried going to the site by clicking the link. Imagine my surprise when I had full access to Barry’s profile.

Since I was in a good mood, I didn’t pretend to be Barry. I simply deactivated his account so that maybe he’d notice. The next day, I got another email saying a woman was interested in me, er, Barry. Once again, I clicked the link to take me to the site. This time, I unchecked all of the notification options and put in an unused email address. I was interested to see what I could do if I were a bad person, so I looked at Barry’s profile information. He didn’t have much filled out, but he had his date of birth and his ZIP code, from which I could use an online phone book to find his address and phone number.

That’s plenty of information for a social engineering attack. Considering he couldn’t enter his email address correctly, I’m willing to bet that if I called pretending to be from his bank, library, local police, etc. that I could get even more information out of him. Identity theft city, folks. It’s lucky for Barry that I’m generally a nice guy. Though it’s not really Barry’s fault that SpeedDate.com is stupid enough to allow unauthenticated access to user settings. I’ve tried to contact SpeedDate.com and have not yet received a response, so I’m opting to shame them publicly.

But ladies, if you’re looking for Barry, he’s not here.

Data breaches suck

Despite our best efforts, machines sometimes get compromised. The culprit isn’t always (or even usually) a highly publicized group in it for the laughter. It could be a curious student, or an overzealous admin, or the Russians. Whoever is behind it, when it happens, it sucks. Especially if sensitive data is involved. So I really feel bad for my colleagues in the Math department at Purdue, who had to deal with this recently. According to the University News Service, over 7000 former students have been notified that an attacker potentially accessed their Social Security Number.

I know only as much about this as has been publicized, so I can’t speak to any specifics. What I can say is that stuff like this kept me up at night in my previous job. For years, SSNs were used to “anonymously” distribute grades to students. They’re nice because they’re a unique identifier and nearly everyone has one. Unfortunately, they’re also kind of important elsewhere and protected by state and federal laws. The upshot is that many faculty had files containing SSNs on their desktop or on removable media or on a file server.

In 2006, if memory serves, we were tasked with scanning every machine owned by the department for SSNs. This involved adapting some existing tools (which were basically just really fancy regular expressions to grep for), doing a room-by-room inventory, and then asking users to scan their machines and sift through the output. After the machine owners ran their own scans and cleaned up offending files, we did it again, this time forcing the scans and having the IT staff look for offending files. It was a many-month project that was not by any means pleasant.

From the article, it sounds like it was similarly awful after this breach. You can’t assume that a SSN will be formatted 000-00-0000, so you have to look for 9-digit strings, which occur with alarming frequency. In this case, it appears that no one’s number was actually divulged, which simultaneously lends relief and futility to the exercise.
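As a rough illustration of why those scans were so noisy, here is a minimal Python sketch of the kind of pattern matching involved. This is my own reconstruction of the idea, not the actual tools we used:

```python
import re

# Formatted SSNs are easy to spot: 000-00-0000.
FORMATTED = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
# Unformatted SSNs are just nine digits in a row, which also matches
# phone numbers, student IDs, and plenty of other innocent values.
# That's why a human ends up sifting through the output.
BARE = re.compile(r"\b\d{9}\b")

def scan_text(text):
    """Return every substring of 'text' that might be an SSN."""
    return FORMATTED.findall(text) + BARE.findall(text)
```

A real scan would walk the filesystem and run something like this over every readable file; the bare nine-digit pattern is what generates most of the false positives.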

Dropping Dropbox

When Dropbox first came to my attention, I was in love. What a great way to keep various config files synchronized across computers. Then it came out that Dropbox’s encryption wasn’t quite as awesome as they let on. It turns out there’s no technical restriction on (at least certain) employees accessing your files. The data is encrypted, but server-side. Now, I’m not all that concerned that someone will target me to find out what my .ssh/config file contains (heck, I’d put it on dotfiles if someone asked nicely), but it does make me reconsider what is appropriate for Dropbox.

Recently, Dropbox announced some changes to the Terms of Service. While the license part is what caused the most uproar on the Internet, the de-duplication part is what stood out the most to me. I know it’s not in Dropbox’s best interests to pay to store a thousand copies of Rebecca_Black-Friday.mp3, but that’s not my concern. The wording suggests that the de-duplication is block-level as opposed to file-level, which is less worrisome, but given their previous lack of transparency about the encryption, I wonder how they’re actually implementing it. If it’s file-level and if it spans multiple accounts, then that seems like a really terrible idea.

I’ve recently switched everything I had in Dropbox over to SpiderOak. The synchronization seems a bit slower and the configuration is less simple (but it’s much easier to back up multiple directories, instead of having to barf symlinks everywhere), but the encryption is client-side so that it’s impossible for SpiderOak to divulge user data (unless they’re lying, too). If you’re interested in trying SpiderOak for yourself, sign up through this link and we’ll both get an extra 1 GB of storage for free.

Making secure passwords

A recent ZDNet article claimed that GPU computing has rendered even the most secure passwords dangerously crackable. It’s true that passwords developed using the conventional wisdom are subject to easier brute-forcing, but that doesn’t mean all hope should be abandoned. The tradeoff is normally between complexity/length and memorability, but in a “Security Now” episode earlier this month, Steve Gibson tossed that tradeoff out the window. His idea: burying your password in a haystack. The general idea is this: if your password is “r4Nd0mBunn1es”, you can make it less crackable by doing something like “/\/\/\/\r4Nd0mBunn1es/\/\/\/\” or “—r4Nd0m*****Bunn1es+++” or some other method of padding extra characters.
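A minimal sketch of the haystack idea in Python (my own illustration of the technique, not Gibson’s code; the padding pattern and target length are arbitrary):

```python
def haystack(needle, pad="/\\", total_length=24):
    """Wrap a short, memorable password 'needle' in a repeating
    padding pattern until the result is 'total_length' characters.
    The padding adds length (which is what defeats brute force)
    without adding much for the user to remember."""
    if len(needle) >= total_length:
        return needle
    padding_needed = total_length - len(needle)
    padding = (pad * total_length)[:padding_needed]
    left = padding_needed // 2
    return padding[:left] + needle + padding[left:]
```

The point of the exercise: an attacker who doesn’t know your pattern has to search the full 24-character space, while you only have to remember the short needle and your one padding pattern.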

Even if you use the same pattern every time, so long as the password needle is different, the overall password will be very difficult to crack. So if the password is so difficult, why use a different needle for each site? Because you can’t trust the site to do the right thing. As recent attacks against Sony and other sites have shown, some sites still store passwords in plain text. At least with the password haystack, you can remember shorter passwords and apply the appropriate pattern to fill them out.

I’ve not been as quick to react to this as I perhaps should be. Admittedly, I reuse my throwaway passwords a lot, so I’m taking advantage of this opportunity to fix that glitch. I’ll probably just create gibberish ones and save them in KeePassX.

Understanding Condor security settings

Recently our colleagues at the University of Nebraska asked us to add host certificates to our Condor servers to enable encryption between sites. After fighting my way through Cfengine, I got the certificates in place. The next step was to enable GSI authentication on the servers. Having wised up over the years, I chose to commit the change early in the day. My commit made the following two edits:

Modified: prod/tmpl/condor/condor_config.negotiator
 --- prod/tmpl/condor/condor_config.negotiator   2011-05-18 17:04:21 UTC (rev 12379)
 +++ prod/tmpl/condor/condor_config.negotiator   2011-05-18 17:12:46 UTC (rev 12380)
 @@ -45,6 +45,12 @@
 QUILL_DB_NAME           = quill
 QUILL_DB_IP_ADDR        = quill-00.rcac.purdue.edu:5432

 +# Use host certs to provide some inter-site security
 +SEC_NEGOTIATOR = preferred
 +GSI_DAEMON_DIRECTORY = /etc/grid-security

 # define sub collectors

 Modified: prod/tmpl/condor/condor_config.submit
 --- prod/tmpl/condor/condor_config.submit       2011-05-18 17:04:21 UTC (rev 12379)
 +++ prod/tmpl/condor/condor_config.submit       2011-05-18 17:12:46 UTC (rev 12380)
 @@ -40,6 +40,11 @@
 #      condorcm.pnc.edu, \

 +# Use host certs to provide some inter-site security
 +GSI_DAEMON_DIRECTORY = /etc/grid-security
 # Use global event log (like the userlog)
 # Set MAX_EVENT_LOG to a huge number, since there's a bug keeping the
 # set it to zero to let it grow unlimited bit work

I had tested the changes a bit and hadn’t noticed anything too out of whack. Until the following morning. Users were complaining about slow queueing and execution of PBS jobs. At first, I thought it was a license problem on one of the PBS servers, since that had happened a few times in the recent past. As the others in the group began investigating, though, they noticed that the PBS prologue script wasn’t completing as a job tried to land. It turns out the condor_config_val call that changes the PBSRunning attribute was failing because the node couldn’t talk to the collector.

The root cause was my misunderstanding of the Condor security documentation. With clients set to “optional” and daemons set to “preferred”, they try to use the relevant security features. But since the methods didn’t match, they refused to talk to each other instead of failing gracefully. Changing the “preferred” to “optional” restored performance and job throughput. Having gone through this, it now makes sense, but it’s more than a little embarrassing to bring the entire infrastructure down.
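For the record, the fix amounted to a one-word change to the setting from the diff above, so that a method mismatch degrades gracefully instead of causing a refusal to communicate:

```
# Use host certs to provide some inter-site security
SEC_NEGOTIATOR = optional
GSI_DAEMON_DIRECTORY = /etc/grid-security
```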

A Cfengine learning experience

Note: This post refers to Cfengine 2. The difficulties I had may quite likely be a result of peculiarities in our environment or the limits of my own knowledge.

A few weeks ago, my friends at the University of Nebraska politely asked us to install host certificates on our Condor collectors and submitters so that flocking traffic between our two sites would be encrypted. It seemed like a reasonable request, so after getting certificates for 17-ish hosts from our CA, I set about trying to put them in place. I could have plopped them all in place easily enough with a for loop, but I decided it would make more sense to let Cfengine take care of it. This has the added advantage of making sure the certificate gets put in place automatically when a host gets reinstalled or upgraded.

I thought it would be nice if I tested my Cfengine changes locally first. I know just enough Cfengine to be dangerous, and I don’t want to spam the rest of the group with mail as I check in modifications over and over again. So after editing the input file on one of the servers, I ran cfagent -qvk. It didn’t work. The syntax looked correct, but nothing happened. After a bit, I asked my soon-to-be-boss for help.

It turned out that I didn’t quite get the meaning of the -k option. I always used it to run against the local cache of the input files, not realizing that it killed all copy actions. Had I looked at the documentation, I would have figured that out. Like I said, I know just enough to be dangerous.

I didn’t want to create a bunch of error email since some hosts wouldn’t be getting host certificates, so I went with an IfFileExists statement that I could use to define a group to use in the copy: stanza. So I committed what I thought to be the correct changes and tried running cfagent again. The certificates still weren’t being copied into place. Looking at the output, I saw that it couldn’t find the file. Nonsense. It’s right there on the Cfengine server.

As it turns out, that’s not where IfFileExists looks; it looks on the server running cfagent. The file, of course, doesn’t exist locally because Cfengine hasn’t yet copied it. Eventually I surrendered and defined a separate group in cf.groups to reference in the appropriate input file. This makes the process more manual than I would have liked, but it actually works.

Oh, except for one thing. In testing, I had been using $(hostname) in a shellcommand: to make sure that the input file was actually getting read. When I finally got the copy: stanza sorted out, the certificates still weren’t being copied out. The cfagent output said it couldn’t find ‘/masterfiles/tmpl/security/host-certs/$(hostname).pem’. As it turns out, I thought $(hostname) was a valid Cfengine variable. Instead, it was actually being passed to the shell command and executed by the shell. The end result was indistinguishable from what I intended in that case, but it didn’t translate to the copy: stanza. The variable I wanted was $(fqhost).
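For anyone hitting the same wall, the working copy: stanza ended up looking roughly like this. This is a sketch from memory of Cfengine 2 syntax; the class name and server are illustrative, and the source path comes from the error message above with $(fqhost) in place of my mistaken $(hostname):

```
copy:
    hostcert_hosts::
        /masterfiles/tmpl/security/host-certs/$(fqhost).pem
            dest=/etc/grid-security/hostcert.pem
            mode=0644
            server=cfengine.example.edu
```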

Comcast’s bot alert service is a good idea terribly implemented

Brian Krebs reported yesterday that Comcast will be implementing its bot detection feature nationwide.  Comcast will apparently put an overlay on websites when visited from an IP that exhibits signs of bot activity.  I don’t claim to be a security expert, but I think I’ve been in the business long enough to say “that’s really stupid.”

While I agree with Comcast’s efforts to fight bot infestations, they are going about it in exactly the wrong way.  Running man-in-the-middle code is unacceptable, regardless of the intent.  If the code is inserted into anything other than HTTP traffic, it will almost certainly break things, and I imagine that certain kinds of HTTP applications will break, too (specifically automated retrieval/parsing of sites).   Additionally, it opens up another attack vector if Comcast itself suffers a breach.

Perhaps the worst part of this plan, though, is the impact it has on user education.  Most users do not handle nuance well.  Despite repeated warnings about the illegitimacy of “Your computer is infected!” pop-ups, people still click on them.  Now suddenly there’s the Comcast nag with a link to download anti-malware tools.  Comcast seems to assume that users can handle the nuance.  My own experience suggests otherwise.

Unlike the authors of some of the comments on the post, I’m not concerned that Comcast can determine when a host (well, a customer’s connection, which may have several hosts behind the router) is operating as part of a botnet.  While they could be inspecting the contents of the packets, it’s more likely that they’re just using the routing information and other already-visible data.  There are some hosts and traffic patterns that are generally indicative of bot activity, but not conclusively so.  That’s how the network security group at my employer works, in fact: they determine that a host is displaying suspicious behavior, and notify the local admins to investigate.  Sometimes, it’s a false alarm, which is another cause for concern. If users get the Comcast “you’re a bot!” warning, act on it, and it turns out to be false, will they take it seriously again?

I don’t have an answer for Comcast.  They’re trying to do a great thing by combating botnets (not altruistically, of course, but helping their network helps their customers too, so who’s to complain?), but the current method of informing affected users is a really bad idea.