Archive for the ‘AppSec’ Category

Lastpass, Risk, and Security Expectations

Wednesday, March 29th, 2017

Last week was a rough week for LastPass. In his continuing work of scrutinizing security products in general, and password managers in particular of late, Tavis Ormandy has disclosed a series of critical bugs in LastPass (Tweet, Writeups). They’re exactly the sort of nightmare scenario that scares the crap out of users adopting password managers, especially cloud-connected ones. However, it’s too easy and too simplistic to look at this week and conclude anything sweeping about password managers.

Don’t Lose Sight of the Big Picture

Password managers are a hugely important measure for most people, and something that all information security professionals should be advocating for. It’s not necessarily an easy sell, so it really pisses me off when I see self-professed experts muddying the waters with security absolutism (phrases like “well a hacker could still do…”, or advocating pet approaches that are infeasible for most people). When a vulnerability like this comes around, it provides another opportunity for those sorts of folks to shout and snark and win some personal points, at the expense of laymen who don’t have the ability to put these histrionics in context. So, even if these issues were a death knell for LastPass (I don’t believe they are, but more on that below), password managers would *still* be a good idea for nearly everyone.

One important-but-often-overlooked benefit of password managers is that they condition users to place less emphasis on specific passwords and to grow more comfortable with the abstract idea of authenticating to a service through some intermediary. That forms one part of the bridge (along with the spread of federated auth and mobile push authentication, among others) toward a post-password future.

So, 3 bad vulns in LastPass, but the concept of password managers is still a net positive for security. How do we reconcile those?

Let’s put two points together:

  • All software has bugs, and (essentially) all important software has had critical vulns at some point (MS08-067, Stagefright, Heartbleed, Dirty CoW on Linux, iOS Trident, and various IE vulns)
  • Risk is more than just the count or severity of known vulns

I would argue that although the severity of the vulnerabilities was “critical”, the actual risk was relatively low for most users. By the time most people became aware of the issues, a fix had already shipped and was effective for anyone who was online to receive the update. I don’t have metrics on how long it took the patch to actually reach users, but my browser picked it up before I even finished reading the published disclosure. For highly-targeted users, that window might be enough time to put together an attack, but the vast majority of the population had a fix before they needed to care. A non-browser-based manager is an option that avoids some of these issues, but I rarely meet a person using one who’s not in infosec already.

Everyone else continued to realize all of the risk-reduction benefits of a password manager (in addition to convenience, etc.), and never actually realized significant risk from the vulnerabilities.

Planning for Failure

There’s a longer blogpost to be written here about the significant attack surface of a browser, defense in depth, and the security configuration decisions around all that, but it’ll have to wait for another time.

At the moment, I just want to point out that LastPass’s ability to respond to a bug submission, triage it, then develop and deploy an appropriate fix in time to limit user impact is not luck or accident; I would argue that it speaks to internal engineering values, and that it’s table stakes for a modern software shop, *especially* in security software. Even while he was hammering on them, Tavis pointed out that their responsiveness is a better experience than he’s used to having with vendors. One of the most controversial points in my BlackHat talk from a few years ago was congratulating WordPress on their approach to automatic updates; while it’s easy to dump on the project as a poster child for security vulns, in practice their work on automatic updates does more to keep their users safe than some other projects with fewer bugs.

Final Thoughts

They got beat up pretty bad, but I’ll continue to use LastPass. We don’t often get to see in a very public way how a company handles a security issue, but when the response shows us that they’re thinking and doing the right things both before and during an incident (LastPass 2015, Kickstarter 2014), it helps with the decision about whether to continue using them.

On Pentesting, Professionalism, & “Chill”

Tuesday, September 13th, 2016

After a recent penetration test report-out call with a client, I asked my interns if anything from the call surprised them. One of them noted that he was surprised how “chill” the call was. That was interesting to me because it reminded me that I had thought the exact same thing when I first got into consulting and pentesting. It’s easy to see how a readout call could be an incredibly tense, combative affair, but in my experience the best pentesters manage to not only avoid that but reverse it.

The mood of the report-out call is an excellent barometer for something that’s critical, and often lacking, in our industry: a constructive relationship between the red team and blue team. While critical, it’s also subtle, and creating the conditions for a good relationship is a process that requires real work and empathy for everyone at the table. My advice: Start Early, Be Meticulously Professional, and Remember the Goal.

Start Early

If the goal is a relaxed, productive (even “fun”) readout call, then the groundwork must start early. While there are other things that come before it, a detailed kickoff is really the first big chance to get moving in the right direction. As a tester your goal should be to make sure that everyone is clear and comfortable with what’s about to happen, and with what exactly the client hopes to get out of it. In practice that means a combination of a lot of standard questions, plus sniffing around for any hint that there are concerns or complexities going unaddressed. It’s also important at this point to really understand context from the client’s view. What’s a critical vs. a high or medium? What do they care less about than you might expect them to? Why? The more understanding you have now, the more the entire report can be placed in context. If it feels like you’re facilitating a group therapy session where the clients are sharing their (security-relevant) hopes and fears, then you’re probably doing something right. My team literally asks questions like “what keeps you up at night?” and “what’s the scariest thing we could do here?” Asking the big questions frankly and early helps take the elephant out of the room and moves the conversation toward productive discussion of the big issues rather than tentatively working up to them through peripheral ones.

Aside from the kickoff meeting itself, “start early” means start doing things well now so that you have a buffer of goodwill to draw on later. I’ve heard it called an “emotional bank account”: make people feel good about you, make a deposit; let them down, make a withdrawal. Ideally you always want that balance going up, but when something happens (and it’s definitely going to), you want to make sure that you’ve got a nice buffer of goodwill so that it’s understood as a blip in an otherwise solid relationship. Neil Gaiman once explained that people keep getting work because their work is good, because they’re pleasant to work with, and because they deliver on time. But the secret, he says, is that it only takes two out of the three. Different people are going to be able to take a different two for granted, but know that if you always shoot for all three you’ve bought yourself some leeway if something slips.

Be Meticulously Professional

Beyond simple goodwill, one of the important reasons clients call in pentesters (or consultants of any kind) is to get that feeling that they’re in good hands; that someone is going to make sure that messy, complex things get taken care of properly. We’re expected to drop into situations where deadlines, resources, or nerves are already strained and provide some useful answers and confidence that the “Right Things” are being done. So, standard consulting practices like “communicate well and often” and “don’t surprise people” apply, of course.

But one area that security folks sometimes struggle with is ego. There are typically already plenty of personalities and internal politics involved; that makes it critical for us as outsiders not to bring further ego into the situation.

This ego can take a few forms. The first is a tendency toward fearmongering and overselling findings: wanting to be perceived as one of those “scary hacker types”. That can be helpful (to a point) for establishing technical credibility, but it’s important to realize that being cool isn’t in the job description. Likewise, neither is taking credit or passing blame. Remember: Amateurs get credit, professionals get paid. The rule for blame is similar: as in the airport in Fight Club, never imply ownership of the bug. If the goal is to make something more secure, it’s rarely relevant who exactly created a bug when it’s likely process, tooling, or training that really needs to change.

Remember The Goal

This leads into another place that unhelpful ego pops up: security absolutism.

I hear security absolutism in language like “Windows sucks because…” or “Well, actually there’s no point in fixing that because hackers could still…” (or really anytime someone starts handwaving about esoteric TLS attacks or Van Eck phreaking … you know the type).  

Real professionals need to be able to set aside the hacker mindset long enough to have productive, nuanced discussions about how to fix things. There are rarely perfect solutions, and the imperfect ones come with tradeoffs. We should all be willing to be as pragmatic on defense as we are on offense. The perception that security people will naysay or ridicule every suggestion hurts all of us, and makes us less effective as an industry. The “Nick Burns” mentality is a self-reinforcing stereotype we need to fight against at every encounter. Similarly, there’s an odor of superiority that comes off some pentesters when they break a thing and speak about it publicly, as if that somehow demonstrates that they are smarter than the person who designed it. Sometimes a thing is *so* bad that an example must be made, but for my tastes those instances are far rarer than Twitter and blogs would make you think.

Even if a client is a pain to work with, doesn’t take good advice, and fights you on everything, they made at least one smart call: they asked for help. The better we’re able to appreciate that, understand their perspective, and work toward improving the system, the better the relationship and the better the results. I’ll feel happier about our industry when dev and ops actually look forward to their calls with security folks, and I’ll tell you this: life’s a lot better when we look forward to them too. So remember: be professional, be empathetic, be helpful — and be chill.

How I Use Firefox as a Web App Pentesting Browser

Sunday, April 3rd, 2016

I’m spending more of my time these days helping other people be effective at security testing applications, and as part of that I’m a huge fan of “over the shoulder” mentoring. Some of the most useful things that I’ve learned from others are not things they thought to mention, but rather those moments of “hey, back up a second — what was that thing you just did?“. Sometimes it’s commands or small utility tools, shortcut keys or capabilities of a program I didn’t know about, or just some quick and dirty technique that someone uses all the time but doesn’t think is special enough to talk about.

To that end, here’s a quick walkthrough of the broad strokes of how I set up Firefox for use in testing. My preferred testing setup is Firefox through Burp: even the simplest version of that setup is useful, but there are a lot of small configuration details that can help a stock Firefox become even more of a pentesting asset.

Use Profiles

To test authentication and authorization issues you’re really going to need two browsers open at the same time, in different principal contexts (such as “User”/”Admin”, “Tenant1”/”Tenant2”, and the ever-popular “Unauthenticated”). Then, when you notice something that might have horizontal or vertical privilege issues, you can simply paste the request into the other browser, or swap cookies between your two active browsers. I prefer to run both through the same Burp instance so that I can easily diff or replay between equivalent requests/responses for different principals.
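To make that concrete, here’s roughly the same check done from the command line: replay one request as two different principals through Burp and diff the results. The host, endpoint, cookie name, and session values below are all hypothetical:

curl -sk -x http://127.0.0.1:8080 -H "Cookie: session=USER_SESSION_VALUE" \
    "https://target.example/api/accounts/42" > as_user.txt
curl -sk -x http://127.0.0.1:8080 -H "Cookie: session=ADMIN_SESSION_VALUE" \
    "https://target.example/api/accounts/42" > as_admin.txt
# Identical output for an admin-only resource suggests a privilege problem.
diff as_user.txt as_admin.txt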

That’s where profiles come in. Normally when you launch Firefox it’ll give you multiple windows that share a common profile; however, if you launch with special command line flags, you can run two completely separate profiles at the same time. To create and manage profiles, launch the Firefox Profile Manager by adding the appropriate flag:

firefox -no-remote -ProfileManager

After creating different profiles, you can create shortcuts to launch them directly, e.g.:

"C:\path\to\firefox.exe" -no-remote -profile "C:\path\to\Mozilla\Firefox\Profiles\Assess1"
"C:\path\to\firefox.exe" -no-remote -profile "C:\path\to\Mozilla\Firefox\Profiles\Assess2"

To keep track of which is which, both visually and in Burp, I add contrasting color themes (such as blue and red) and use a plugin to ensure that each profile sends an identifying header (see the plugins section below).
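As a sidenote on launching: the same approach works on Linux and macOS, and anywhere Firefox is on your PATH you can skip the full profile path and launch by profile name with the -P flag (the profile names here match the examples above):

firefox -no-remote -P "Assess1"
firefox -no-remote -P "Assess2"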

[Screenshot: Firefox testing config]

As a sidenote to auth testing, I’m really excited about the AuthMatrix Burp Plugin. I haven’t gotten to properly put it through its paces yet, but more info to come when I have an informed opinion.

Plugins

There’s a Firefox Add-On collection that includes a lot of the tools mentioned below, and that you may find useful during a penetration test.

Some specific pieces of functionality you’ll definitely want, each of which can be filled by various plugins:

  • Manage proxy settings:
    • FoxyProxy
    • ProxySelector
  • Change User Agents
    • UserAgentSwitcher
  • Simplify JS and JSON
    • JSONView
    • Javascript Deminifier
  • Passively detect remote technologies:
    • Wappalyzer
  • Fetch lots of content at once:
    • DownThemAll!
  • Interact with REST services:
    • RESTClient (although Chrome’s Postman is better, SoapUI is quite serviceable, and Burp will also work)

For FoxyProxy, I like to just blacklist a bunch of domains right in the browser so that they’ll never get passed to the proxy. This keeps the Burp request history cleaner and means I don’t have to make too many assumptions in Burp about what hosts an application will talk to. (It also means you won’t have to reconfigure Firefox for each engagement to keep it clean.) If the browser is too chatty through Burp, you risk losing valuable information when you rely on “Show only in-scope items”.
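For example, a blacklist along these lines keeps common analytics and browser-vendor chatter from ever reaching Burp (the exact pattern syntax varies between FoxyProxy versions, so treat these wildcards as illustrative):

*://*.google-analytics.com/*
*://*.doubleclick.net/*
*://*.mozilla.org/*
*://*.mozilla.net/*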
[Screenshot: FoxyProxy blacklist]

When advertising and tracking domains are out of scope, you can also load large lists of advertisers and blacklist those from your proxy to keep the Burp state even trimmer.
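As a sketch of how to generate such a list, assuming a hosts-format blocklist as input (the URL is just one example of a public ad-server list, and you’d adjust the output to whatever format your proxy manager imports):

curl -s "https://pgl.yoyo.org/adservers/serverlist.php?hostformat=hosts&showintro=0" \
    | awk '/^127\.0\.0\.1/ {print "*" $2 "*"}' > ad_domains.txt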

I use the ModifyHeaders plugin to send a unique header from each browser profile (e.g., “BrowserProfile: AssessRed”). This helps me keep track in Burp during my testing, and it can also seriously help with potential client issues, since they can easily identify and (hopefully) rule out your traffic as the cause of a problem.

Disable Chatty Features

Speaking of chatty features, you’ll probably want to disable a bunch of automatic/implicit traffic that could bloat your Burp state or create red herrings in testing: https://support.mozilla.org/en-US/kb/how-stop-firefox-making-automatic-connections

You’ll also want to tweak some settings in about:config, both to prevent chatty traffic and to avoid sending potentially sensitive client URLs to public antimalware services:

browser.safebrowsing.enabled -> false
browser.safebrowsing.malware.enabled -> false
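Depending on your Firefox version there are a few related prefs worth checking too; these names were current around this era of Firefox, but they do tend to move between releases:

network.prefetch-next -> false
network.dns.disablePrefetch -> true
network.http.speculative-parallel-limit -> 0
browser.search.suggest.enabled -> false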

A Few Words on Chrome

You can do a lot of this with Chrome. It supports profiles, has many approximately equivalent plugins, and can be configured to not use the system proxy by installing proxy manager plugins. That said, it feels like you have to work harder to make Chrome play nice in a pentesting environment. YMMV.
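If you do go the Chrome route, profiles and an explicit proxy can at least be handled per-instance from the command line rather than via plugins; a sketch (the profile directory is an arbitrary example):

google-chrome --user-data-dir=/tmp/chrome-assess1 --proxy-server=127.0.0.1:8080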

Burp Testing Profile

Although it’s not related to Firefox, one thing that I notice biting a lot of people is that they don’t load a consistent profile. Every single new test I do starts with a standard, clean Burp state file with all of my preferences loaded into it. I just copy “InitialEngagementBurpState.burp” into my notes directory, load it in, and know that I’m getting all my standard preferences, such as autosave (every hour (!), and into a directory that I can regularly clean up), logging, and plugin config. I’ve seen colleagues forget this on back-to-back tests and lose their first day of testing each time because they hadn’t manually enabled autosave and then hit a crash. (Update Sept 2016: this is less relevant now with Burp’s new project file feature. I’m still figuring out if there are any gotchas in it, but it really helps with persisting defaults and making it harder to be dumb.)
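In shell terms, my start-of-engagement ritual amounts to something like this (the paths are hypothetical; the point is simply to never start from a blank state):

mkdir -p ~/engagements/client-x/notes
cp ~/templates/InitialEngagementBurpState.burp ~/engagements/client-x/notes/
# then load the copied state into Burp and confirm autosave is on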

What about you?

What did I miss? Some favorite plugin, or special approach? What’s unique about your own setup that you take some pride in?