Archive for the ‘Tools’ Category

How I Use Firefox as a Web App Pentesting Browser

Sunday, April 3rd, 2016

I’m spending more of my time these days helping other people be effective at security testing applications, and as part of that I’m a huge fan of “over the shoulder” mentoring. Some of the most useful things that I’ve learned from others are not things they thought to mention, but rather those moments of “hey, back up a second — what was that thing you just did?”. Sometimes it’s commands or small utility tools, shortcut keys or capabilities of a program I didn’t know about, or just some quick and dirty technique that someone uses all the time but doesn’t think is special enough to talk about.

To that end, here’s a quick walkthrough of the broad strokes of how I set up Firefox for use in testing. My preferred testing setup is Firefox through Burp: even the simplest setup is useful, but a lot of small configuration details can help a stock Firefox become even more of a pentesting asset.

Use Profiles

To test authentication and authorization issues you’re really going to need two browsers open at the same time, in different principal contexts (such as “User”/“Admin”, “Tenant1”/“Tenant2”, and the ever-popular “Unauthenticated”). Then, when you notice something that might have horizontal or vertical privilege issues, you can simply paste the request into the other browser, or swap cookies between your two active browsers. I prefer to run both through the same Burp instance so that I can easily diff or replay equivalent requests/responses for different principals.
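A minimal sketch of the cookie-swap idea in Python (the cookie values and the similarity heuristic here are made up for illustration; in practice you’d make both requests through Burp so they land in the same history and can be diffed there):

```python
# Sketch: replay one request under two principals and flag responses
# that look suspiciously similar. Cookie values, hostname, and the
# length threshold are hypothetical.

def with_cookie(headers, cookie):
    """Copy a header dict, swapping in another principal's Cookie."""
    swapped = {k: v for k, v in headers.items() if k.lower() != "cookie"}
    swapped["Cookie"] = cookie
    return swapped

def possible_authz_issue(resp_a, resp_b):
    """Crude heuristic: two different principals both getting a 200 with
    near-identical body lengths on a principal-specific URL is worth a
    manual diff for horizontal privilege issues."""
    (status_a, body_a), (status_b, body_b) = resp_a, resp_b
    return status_a == status_b == 200 and abs(len(body_a) - len(body_b)) < 16

user1 = {"Host": "app.example.com", "Cookie": "session=USER1TOKEN"}
user2 = with_cookie(user1, "session=USER2TOKEN")
```

The heuristic is deliberately dumb; it only tells you where to point the Burp Comparer, not whether there’s actually a bug.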

That’s where profiles come in. Normally when you launch Firefox it’ll give you multiple windows that share a common profile; however, if you launch with special command line flags, you can run two completely separate profiles at the same time. To create and manage profiles, launch the Firefox Profile Manager with the -ProfileManager flag:

firefox -no-remote -ProfileManager

After creating different profiles, you can create shortcuts to launch them directly, e.g.:

"C:\path\to\firefox.exe" -no-remote -profile "C:\path\to\Mozilla\Firefox\Profiles\Assess1"
"C:\path\to\firefox.exe" -no-remote -profile "C:\path\to\Mozilla\Firefox\Profiles\Assess2"

To keep track of which is which, both visually and in Burp, I add contrasting color themes (such as blue and red) and use a plugin to ensure that each profile sends an identifying header (see the plugins section below).

[Screenshot: Firefox testing configuration]

As a sidenote to auth testing, I’m really excited about the AuthMatrix Burp Plugin. I haven’t gotten to properly put it through its paces yet, but more info to come when I have an informed opinion.


There’s a Firefox Add-On collection that includes a lot of the tools mentioned below and that you may find useful during a penetration test.

Some specific plugins you’ll definitely want:

And a couple other pieces of functionality which can be filled by various plugins:

  • Manage proxy settings:
    • FoxyProxy
    • ProxySelector
  • Change User Agents
    • UserAgentSwitcher
  • Simplify JS and JSON
    • JSONView
    • Javascript Deminifier
  • Passively detect remote technologies:
    • Wappalyzer
  • Fetch lots of content at once:
    • DownThemAll!
  • Interact with REST services:
    • RESTClient (although Chrome’s Postman is better, SoapUI is quite serviceable, and Burp will also work)

For FoxyProxy, I like to just blacklist a bunch of domains right in the browser so that they’ll never get passed to the proxy. This keeps the Burp request history cleaner and means I don’t have to make too many assumptions in Burp about what hosts an application will talk to (it also means you won’t have to reconfigure Firefox for each engagement to keep it clean). If the browser is too chatty through Burp, you risk losing valuable information when you rely on “Show only in-scope items”.
[Screenshot: FoxyProxy blacklist]


When advertising and tracking domains are out of scope, you can also load large lists of advertisers and blacklist those from your proxy to keep the Burp state even trimmer.
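As an illustration, such a blacklist might start with wildcard patterns along these lines (example domains only; the exact pattern syntax depends on your FoxyProxy version, so check its docs before pasting):

```
*.google-analytics.com
*.doubleclick.net
*.gstatic.com
*.mozilla.com
*.mozilla.net
```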

I use the ModifyHeaders plugin to send a unique header from each browser profile (e.g., “BrowserProfile: AssessRed”). This helps me keep track in Burp during my testing, and it can also seriously help with potential client issues, since the client can easily identify and (hopefully) rule out your traffic as a potential cause of a problem.

Disable Chatty Features

Speaking of chatty features, you’ll probably want to disable a bunch of automatic/implicit traffic that could bloat your Burp state or create red herrings in testing:

You’ll also want to tweak some settings in about:config to prevent both chatty traffic and sending potentially sensitive client URLs to public antimalware lists:

browser.safebrowsing.enabled -> false
browser.safebrowsing.malware.enabled -> false
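Depending on your Firefox version, a few other prefs are worth reviewing in the same spirit (pref names change between releases, so verify each one exists in your about:config before relying on it):

```
network.prefetch-next -> false
network.dns.disablePrefetch -> true
toolkit.telemetry.enabled -> false
```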

A Few Words on Chrome

You can do a lot of this with Chrome. It supports profiles, has many approximately equivalent plugins, and can be configured to not use the system proxy by installing proxy manager plugins. That said, it feels like you have to work harder to make Chrome play nice in a pentesting environment. YMMV.

Burp Testing Profile

Although it’s not related to Firefox, one thing that I notice biting a lot of people is that they don’t load a consistent profile. Every single new test I do starts with a standard, clean Burp state file with all of my preferences loaded in it. I just copy “InitialEngagementBurpState.burp” into my notes directory, load it in, and know that I’m getting all my standard preferences such as autosave (every hour (!) and into a directory that I can regularly clean up), logging, plugin config, etc. I’ve seen colleagues forget this on back-to-back tests and lose their first day of testing each time because they didn’t manually enable autosave and hit a crash. (Update Sept 2016: this is less relevant now with Burp’s new project file feature. I’m still figuring out if there are any gotchas in it, but it really helps with persisting defaults and making it harder to be dumb.)

What about you?

What did I miss? Some favorite plugin, or special approach? What’s unique about your own setup that you take some pride in?


BlackHat USA Multipath TCP Tool Release & Audience Challenge

Wednesday, August 6th, 2014

(Crossposting & backdating some content from the Neohapsis blog, which will soon be defunct)

We hope everyone found something interesting in our talk today on Multipath TCP. We’ve posted the tools and documents mentioned in the talk at:

Update (Aug 12, 2014): We’ve now also added the slides from the talk.

At the end we invited participants to explore MPTCP in a little more depth via a PCAP challenge.

Without further ado, here’s the PCAP: neohapsis_mptcp_challenge.pcapng

It’s a simple scenario: one MPTCP-capable machine sending data to another. The challenge is “simply” to reassemble and recover the original data. The data itself is not complex so you should be able to tell if you’re on the right track, but getting it exactly right will require some understanding of how MPTCP works.

If you think you have it, tweet us and follow us (@secvalve and @coffeetocode) and we’ll PM you to check your solution. You can also ask questions or request clarifications on Twitter; use #BHMPTCP so others can follow along. Winner snags a $100 Amazon gift card!

Hints #0:

  • The latest version of Wireshark supports decoding mptcp options (see “tcp.options.mptcp”).
  • The scapy version in the git repo is based on Nicolas Maitre’s and supports decoding mptcp options. It will help although you don’t strictly need it.
  • There is an MPTCP option field that tells the receiver how a TCP packet fits into the overall logical MPTCP data flow (what it is and how it works is an exercise for the user 🙂)
  • It’s possible to get close with techniques that don’t fully understand MPTCP (you’ll know you’re close). However, the full solution should match exactly (we’ll use md5sum)

Depending on how people do and questions we get, we’ll update here with a few more hints tonight or tomorrow. Once we’ve got a winner, we’ll post the solution and code examples.

Update: Winners and Solution

We have some winners! Late last night @cozinuzo contacted us with a correct answer, and early this morning @darkfiberiru got it too.

The challenge was created using our fragmenter PoC tool, pushing to a netcat-opened socket on an MPTCP-aware destination host:

python -n 9 --file=MPTCP.jpg --first_src_port 46548 -p 3000

The key to this exercise was to look at the mechanism that MPTCP uses to tell how a particular packet fits into the overall data flow. You can see that field in Wireshark as tcp.options.mptcp.dataseqno, or in mptcp-capable scapy as packet[TCPOption_MP].mptcp.dsn.




The mptcp-capable scapy in our mptcp-abuse git repo can easily do the reassembly across all the streams using this field.

Here’s the code (or as a Gist):

# Uses Nicolas Maitre's MPTCP-capable scapy impl, so that should be
# on the python path, or run this from a directory containing that "scapy" dir
from scapy.all import *

packets = rdpcap("pcaps/neohapsis_mptcp_challenge.pcap")
payload_packets = [p for p in packets if TCP in p
                   and p[IP].src in ("", "")  # the two endpoint addresses (elided here)
                   and TCPOption_MP in p
                   and p[TCPOption_MP].mptcp.subtype == 2  # subtype 2 = DSS (data sequence signal)
                   and Raw in p]

f = open("out.jpg", "wb")  # binary mode, since we're writing image bytes

# The data sequence number (DSN) orders payload bytes across all subflows
for p in sorted(payload_packets, key=lambda p: p[TCPOption_MP].mptcp.dsn):
    f.write(p[Raw].load)

f.close()

These reassemble to create this image:

[Image: the reassembled MPTCP.jpg]
The md5sum for the image is 4aacab314ee1a7dc5d73a030067ae0f0, so you’ll know you’ve correctly put the stream back together if your file matches that.
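If you’d rather verify from Python than shell out to md5sum, something like this works (a small convenience sketch, not part of the original challenge tooling):

```python
# Check a reassembled file against an expected MD5 checksum.
import hashlib

def md5_of(path):
    """Return the hex MD5 digest of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()

# For the challenge: md5_of("out.jpg") should be
# "4aacab314ee1a7dc5d73a030067ae0f0" if the stream was put back together
# correctly.
```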

Thanks to everyone who took a crack at it, discussed, and asked questions!

Fuzzing Comes in from the Cold

Thursday, May 20th, 2010

So, after a couple months of living in webapp security land and having my developer hat on, I finally took a few days to do some good old fashioned vulnerability hunting. These days, that means fuzzing.

I’m going to go ahead and say that fuzzing is ready to come out of the cold, from being primarily thought of as something security researchers and blackhats do, to eventually being something as expected as unit tests (…though I’m probably about 2 years late in saying that). With fuzzing a part of the SDL (gj MS) and Charlie Miller publicly calling on companies to get with the program, it’s well on its way.

Now, unit testing (and code coverage) took a while to be considered expected practice (and depending on where you work, might still raise eyebrows), but by and large they’re generally considered something that helps improve quality and reduce risk in a project. I have hopes that fuzzing will get there too.

The tooling is there, except that I think coming from the security community has hindered it a bit. There’s no standout leader like xUnit (cpp, n, j, etc), and instead we have dozens of tools grown from individual developers, which range from utterly broken to pretty good; most serious fuzzing undertakings end up piecing together a solution out of a number of partial solutions. If you’re Charlie Miller, you have it figured out and built into a fusion-powered spaceship, but the rest of us are still getting there (seriously, if you read one thing on fuzzing, check out that presentation from CanSecWest this year… we can all aspire).

Trying to piece together such a solution myself, I started with FileFuzz and the excellent text Fuzzing: Brute Force Vulnerability Detection by Sutton, Greene and Amini.

Get Started Quickly With FileFuzz

If you’re looking to start file-based fuzzing as quickly as possible, FileFuzz is a good bet. It’s a mutational fuzzer, so all you need to get started is a single sample file, and it’s “batteries included” (unlike many solutions) in that it incorporates the three big moving pieces of fuzzing: sample creation, test running, and error detection. Crash triage automation is a task that it doesn’t try to address, but if you’re just trying to get started, it’s going to help immensely.

While using it though, I found some bugs. Fuzzing a series of binary files, this pops up:

The output char buffer is too small to contain the decoded characters, encoding 'Unicode (UTF-8)' fallback 'System.Text.DecoderReplacementFallback'. Parameter name: chars.

A bit of googling suggests that PeekChar can’t reliably be used on binary data. I made the following change in the readBinary() function of Read.cs (line numbers are approximate because I made some other changes):

            //while (brSourceFile.PeekChar() != -1)
            while (brSourceFile.BaseStream.Position < brSourceFile.BaseStream.Length)

If you’re running FileFuzz on a modern .NET runtime (or through VisualStudio) you may see problems such as:

InvalidOperationException: Cross-thread operation not valid: Control 'Foo' accessed from a thread other than the thread it was created on

It looks like the FileFuzz UI was written before these cross-thread checks were enforced in .NET, so if you don’t want to spend a lot of time writing threadsafe delegates, you can add one line to revert to the old (unchecked) behavior. Add this at the beginning of InitializeComponent() in Main.cs (again, line numbers are approximate):

            Control.CheckForIllegalCrossThreadCalls = false;

At one point I thought I found that FileFuzz was only generating different files for the first 10 bytes or so, and identical files after that. It may have been some config error on my part and I couldn’t duplicate it later, but you may want to give your files a quick run through md5sum, just to make sure you don’t waste a lot of CPU cycles. (Has anyone else seen this?)
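A quick way to run that sanity check, assuming your fuzzer wrote its test cases into a single output directory (a generic sketch, not anything FileFuzz-specific):

```python
# Hash every generated file and report any groups of identical cases,
# so you notice if a mutational fuzzer silently produced duplicates.
import collections
import hashlib
import os

def duplicate_groups(directory):
    """Return lists of filenames whose contents are byte-identical."""
    by_hash = collections.defaultdict(list)
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        if os.path.isfile(path):
            with open(path, "rb") as f:
                by_hash[hashlib.md5(f.read()).hexdigest()].append(name)
    return [names for names in by_hash.values() if len(names) > 1]
```

If this returns anything, your fuzzer is burning cycles on repeats and you should look at its mutation config before a long run.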

Structured Exception Handling

While running FileFuzz against a particular target, I found a number of hits that didn’t reproduce nicely when run alone. When the target binary was executed via crash.exe (included w/ FileFuzz), it would show an access violation:

[*] "crash.exe" "C:\Program Files\xxx" 1000 C:\fuzzing\xxx\output\136
[*] Access Violation
[*] Exception caught at 1001c06d mov eax,[esi+0x8]
[*] EAX:0011f050 EBX:00000030 ECX:00000000 EDX:00000092
[*] ESI:00000000 EDI:0011f54c ESP:0011f0ec EBP:0011f0f4

When run with the same file from the command line: nothing, just an error message and a clean exit. Initially puzzling, but I found that this is a result of Windows Structured Exception Handling. (Here’s an old but worthwhile read on what really goes on under the hood in SEH.) So, hook it up under OllyDbg or IDA and boink, there it is.

When I get a chance I need to get set up with !exploitable (presentation here), but I’ll have to share that in a later post.