Archive for May, 2010

Humble Helps

Sunday, May 23rd, 2010

I just ran across a post at PreachSecurity about a CSRF vulnerability recently discovered in OpenCart, and a blog post by the discoverer about his interactions with the maintainer. I share Rafal’s (and Ben’s) frustration with the exchange, but I think there’s an additional lesson to be learned here. Clearly this was a loss for the cause of security, but it was a loss in more ways than many of the participants realize.

Ben didn’t just lose the battle to get one bug fixed, he also lost an ally in the fight. Daniel (of OpenCart) was and still is in the best position to make small changes to vastly improve the security of OpenCart for all users, but after a public browbeating what are the chances he’ll be willing to get help from security folks? That’s the critical part.

The bug by bug approach to information security is trench warfare: it will take us too many years and too many lives (or at least careers) to gain ten feet of mutilated ground. We need people and practices on our side, and “wins” in security should be measured in those terms, not just in bug counts.

Yes, we as security people often have to tell people that some code or process or idea is absolutely wrong. But to say those things without destroying the human capital we desperately need, we must do it with an overabundance of patience, humility, goodwill, and tact.

I think Ben did a reasonable job of trying to be respectful and helpful, but there was something jarring in it for me.

From: “Ben”
Sent: Friday, January 22, 2010 8:06 PM
To: < *******@opencart.com>
Subject: OpenCart – Enquiry

Hi,

I recently installed OpenCart and I noticed that it is vulnerable to CSRF attacks. I have created a sample page that is capable of inserting a rouge user (the page currently prompts you but could be done silently if the attacker knows the url of the site).

http://visionsource.org/*********.html

Please let know that you are looking into the security issue and are going to release an update with a fix otherwise I will make the issue public.

If you need any help fixing the problem please let me know.

Thanks,
Ben.

See it? “…otherwise I will make the issue public.” Now, as a security person that may not seem hugely disconcerting; we know that when the carrot of coordinated disclosure fails, the stick of public disclosure can get results. However, put yourself in Daniel’s position: he just got threatened. Reread the email, and think honestly about whether anything else constructive Ben said will have as much influence on Daniel as the mere feeling that he’s being bullied.

So, Daniel gets a lot of emails from people who misunderstand some code, and he sends back a quick response (on a Friday evening, no less):

On 2010-01-22, at 4:50 PM, Daniel Kerr wrote:
Ben you seem to be very clever to come up with this. But! you need to be logged in for this to happen.

So, clearly he didn’t get it. Amazingly, he still seems to be receptive to Ben. On the technical side, he may not even have checked out the PoC Ben provided. Ben responds with a lot of good information. It’s a technically accurate and helpful response for someone who is ready to learn about CSRF, but this is how he leads off:

HI Daniel,
That is the whole point of a CSRF attack. Please read http://en.wikipedia.org/wiki/Csrf for an explanation on the attack.

Ouch. I’m sure it wasn’t Ben’s intent (or maybe he was just frustrated; understandable), but that line right there is going to put Daniel in defense mode. It’s subtle, but it’s an “I’m right, you’re wrong” moment. Even if Ben is right (he is), anyone’s ego would step in and interrupt rational thought right there.

Imagine if the second email had been even more patient and humble.

Hi Daniel,
Yes, you’re right that it requires the OpenCart admin to be logged in, but CSRF really is a commonly used attack, and it can be very dangerous <<insert Ben’s other paragraphs here; they’re a great description of how CSRF could be exploited.>>

There’s more good information on wikipedia [link], and there’s actually a pretty straightforward fix that can eliminate CSRF vulnerabilities [link to owasp CSRF page, or whatever you like]. I’ve attached some files that fix these vulns, and added some anti-CSRF functionality to the URL class to make it easy to clean up any other cases.
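For reference, the “straightforward fix” that hypothetical email alludes to is the classic synchronizer-token pattern. OpenCart itself is PHP, so this is just a minimal sketch in Python with illustrative names, not the actual patch Ben attached:

```python
import hashlib
import hmac
import secrets

# Sketch of the synchronizer-token pattern: a per-session server-side
# secret is used to derive a token that gets embedded in every
# state-changing form or admin URL; requests without a matching token
# are rejected, so a cross-site page can't forge them.

def issue_token(session_secret: bytes, session_id: str) -> str:
    """Derive a CSRF token bound to the user's session."""
    return hmac.new(session_secret, session_id.encode(), hashlib.sha256).hexdigest()

def verify_token(session_secret: bytes, session_id: str, submitted: str) -> bool:
    """Compare in constant time so the token can't be recovered via timing."""
    expected = issue_token(session_secret, session_id)
    return hmac.compare_digest(expected, submitted)

secret = secrets.token_bytes(32)  # generated server-side, never sent to the attacker
token = issue_token(secret, "admin-session-1")
assert verify_token(secret, "admin-session-1", token)          # legitimate form post
assert not verify_token(secret, "admin-session-1", "forged")   # cross-site forgery
```

The key property is that the attacker’s page can make the victim’s browser send a request, but it can’t read the token out of the victim’s session to include with it.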

Instead, things kinda spiral downward. It ends with:

On 2010-01-22, at 8:05 PM, Daniel Kerr wrote:
what protection do you recommend?

followed immediately by:

On 2010-01-22, at 8:05 PM, Daniel Kerr wrote:
“…your [sic] just wasting my time.”

Communication is no longer occurring. We as security people must take responsibility for preventing that. What if, at the moment things were spiraling downhill, Ben had sent an email like this:

Hi Daniel,
I hear your frustration with trying to protect users from doing dumb stuff, and I agree there’s no way to fix all the stupid things they could do, but at least CSRF is one type of attack that we can stop cold. If you have ten minutes, I’d love to talk about why I think it’s really important, and how some protections could be added to OpenCart without too much effort.

My phone number is 555-555-1212: call me any time, or let me know if there’s a good time to call you.

Yes, I know it’s crazy. It’s over the top, it’s above and beyond the call of duty, and it’s kinda weird: use the telephone… really? But that might, just might, have helped Ben win not just the battle over a single bug, but might have won an ally in the security of an entire application, and gained the cause of security goodwill in the bargain.

I’m not saying there aren’t complete, incorrigible asshat people out there, or that killing them with kindness is any kind of panacea. Ultimately, Ben may not have been able to make progress even if he were a genetic hybrid of Mother Teresa and Bruce Schneier. What I am saying is that we too often forget that real security comes from the people, not just the code.

So, when you find yourself in a situation like Ben’s, please consider digging deep into your well of patience and giving a bit more when you’re tempted to give less.

Fuzzing Comes in from the Cold

Thursday, May 20th, 2010

So, after a couple months of living in webapp security land and having my developer hat on, I finally took a few days to do some good old fashioned vulnerability hunting. These days, that means fuzzing.

I’m going to go ahead and say that fuzzing is ready to come in from the cold: from being primarily thought of as something security researchers and blackhats do, to eventually being something as expected as unit tests (…though I’m probably about two years late in saying that). With fuzzing part of the SDL (gj MS) and Charlie Miller publicly calling on companies to get with the program, it’s well on its way.

Now, unit testing (and code coverage) took a while to be considered expected practice (and depending on where you work, might still raise eyebrows), but by and large they’re generally considered something that helps improve quality and reduce risk in a project. I have hopes that fuzzing will get there too.

The tooling is there, though I think its roots in the security community have hindered it a bit. There’s no standout leader like xUnit (cpp, n, j, etc.); instead we have dozens of tools grown by individual developers, ranging from utterly broken to pretty good, and most serious fuzzing undertakings end up piecing together a solution out of a number of partial ones. If you’re Charlie Miller, you have it figured out and built into a fusion-powered spaceship, but the rest of us are still getting there (seriously, if you read one thing on fuzzing, check out that presentation from CanSecWest this year… we can all aspire).

Trying to piece together such a solution myself, I started with FileFuzz and the excellent text Fuzzing: Brute Force Vulnerability Detection by Sutton, Greene and Amini.

Get Started Quickly With FileFuzz

If you’re looking to start file-based fuzzing as quickly as possible, FileFuzz is a good bet. It’s a mutational fuzzer, so all you need to get started is a single sample file, and it’s “batteries included” (unlike many solutions) in that it incorporates the three big moving pieces of fuzzing: sample creation, test running, and error detection. Crash triage automation is a task it doesn’t try to address, but if you’re just trying to get started, it’s going to help immensely.
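Those three moving pieces are simpler than they sound. Here’s a minimal sketch of a mutational file fuzzer in Python; the target command and paths are placeholders, and the crash check relies on POSIX return-code semantics, so treat it as an illustration of the shape, not a FileFuzz replacement:

```python
import pathlib
import random
import subprocess

def mutate(data: bytes, n_flips: int = 8, seed: int = 0) -> bytes:
    """Sample creation: overwrite a few random bytes of a known-good file."""
    rng = random.Random(seed)
    buf = bytearray(data)
    for _ in range(n_flips):
        buf[rng.randrange(len(buf))] = rng.randrange(256)
    return bytes(buf)

def run_case(target: str, case_path: str, timeout: int = 5) -> bool:
    """Test running + error detection: on POSIX, a negative return code
    means the target died on a signal (SIGSEGV, SIGABRT, ...)."""
    try:
        result = subprocess.run([target, case_path],
                                capture_output=True, timeout=timeout)
        return result.returncode < 0
    except subprocess.TimeoutExpired:
        return True  # a hang is worth looking at too

def fuzz(target: str, sample: str, out_dir: str, n_cases: int = 100) -> list:
    """Generate n_cases mutated files and report the ones that crash."""
    data = pathlib.Path(sample).read_bytes()
    out = pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    crashes = []
    for i in range(n_cases):
        case = out / f"case_{i:04d}"
        case.write_bytes(mutate(data, seed=i))  # seed per case: reproducible
        if run_case(target, str(case)):
            crashes.append(str(case))
    return crashes
```

Seeding the mutator per test case means any crashing input can be regenerated later, which matters when you get to triage.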

While using it, though, I found some bugs. Fuzzing a series of binary files, this pops up:

The output char buffer is too small to contain the decoded characters, encoding 'Unicode (UTF-8)' fallback 'System.Text.DecoderReplacementFallback'. Parameter name: chars.

A bit of googling suggests that PeekChar can’t reliably be used on binary data. I made the following change in the readBinary() function of Read.cs (line numbers are approximate because I made some other changes):

            //while (brSourceFile.PeekChar() != -1)
            while (brSourceFile.BaseStream.Position < brSourceFile.BaseStream.Length)

If you’re running FileFuzz on a modern .NET runtime (or through VisualStudio) you may see problems such as:

InvalidOperationException: Cross-thread operation not valid: Control 'Foo' accessed from a thread other than the thread it was created on

It looks like the FileFuzz UI was written before these cross-thread checks were enforced in .NET, so if you don’t want to spend a lot of time writing threadsafe delegates, you can add one line to revert to the old (unchecked) behavior. Add this at the beginning of InitializeComponent() in Main.cs (again, line numbers are approximate):

            Control.CheckForIllegalCrossThreadCalls = false;

At one point I thought I saw FileFuzz generating different files only for the first 10 bytes or so, and identical files after that. It may have been some config error on my part and I couldn’t duplicate it later, but you may want to give your files a quick run through md5sum, just to make sure you don’t waste a lot of CPU cycles. (Has anyone else seen this?)
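If you’d rather script that sanity check than eyeball md5sum output, something like this catches byte-identical test cases in a generated corpus (the directory layout is illustrative):

```python
import collections
import hashlib
import pathlib

def find_duplicate_cases(directory: str) -> dict:
    """Group generated fuzz files by MD5 of their contents; any group
    with more than one member is wasted fuzzing time."""
    groups = collections.defaultdict(list)
    for path in sorted(pathlib.Path(directory).iterdir()):
        if path.is_file():
            digest = hashlib.md5(path.read_bytes()).hexdigest()
            groups[digest].append(path.name)
    return {digest: names for digest, names in groups.items() if len(names) > 1}
```

An empty result means every case is unique; anything else tells you exactly which files the mutator failed to vary.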

Structured Exception Handling

While running FileFuzz against a particular target, I found a number of hits that didn’t reproduce nicely when run alone. When the target binary was executed via crash.exe (included w/ FileFuzz), it would show an access violation:

[*] "crash.exe" "C:\Program Files\xxx" 1000 C:\fuzzing\xxx\output\136
[*] Access Violation
[*] Exception caught at 1001c06d mov eax,[esi+0x8]
[*] EAX:0011f050 EBX:00000030 ECX:00000000 EDX:00000092
[*] ESI:00000000 EDI:0011f54c ESP:0011f0ec EBP:0011f0f4

When run with the same file from the command line, nothing: just an error message and a clean exit. Initially puzzling, but it turns out this is a result of Windows Structured Exception Handling. (Here’s an old but worthwhile read on what really goes on under the hood in SEH.) So, hook it up under OllyDbg or IDA and boink, there it is.

When I get a chance I need to get set up with !exploitable (presentation here), but I’ll have to share that in a later post.

Shodan Now Exporting More Than 1K Results

Monday, May 17th, 2010

If you’re not familiar with Shodan, you should definitely check it out. It’s billed as a Computer Search Engine, and that’s exactly what it does. Want to find every FTP server out there? No sweat. How about webservers that provide a default password as part of the authentication realm?

If you sign up and log in, you’ll be able to run other interesting queries like every webserver in Nigeria (find your favorite spammer!).

I’ve personally been using Shodan heavily to calibrate a webapp fingerprinter, and the biggest pain has been the inability to export more than 1000 results. I emailed John and begged for the feature, and after some back and forth, as of Sunday night it’s ready! If you click the Export button, you’ll now be prompted for the number of hosts you want to export (in increments of 1000). He says it will accommodate up to a million hosts, but it might take a while to make the XML available.

Shodan Export

Incremental export (essentially pagination) isn’t yet supported, but if there’s demand he might add it.

I still think that $50 for 20 credits (20k hosts) is highway robbery (more begging is probably in order), but it’s a unique tool and may save you a lot of time with nmap and a scripting language.