Archive for the ‘Uncategorized’ Category

Lastpass, Risk, and Security Expectations

Wednesday, March 29th, 2017

Last week was a rough week for LastPass. In his continuing work of scrutinizing security products in general, and recently password managers in particular, Tavis Ormandy has released a series of critical bugs against LastPass (Tweet, Writeups). They’re exactly the sort of nightmare scenario that scares the crap out of users adopting password managers, and especially cloud-connected ones. However, it’s too easy and too simplistic to look at this week and conclude anything sweeping about password managers.

Don’t Lose Sight of the Big Picture

Password managers are a hugely important measure for most people, and something that all information security professionals should be advocating for. It’s not necessarily an easy sell, so it really pisses me off when I see self-professed experts muddying the waters with security absolutism (phrases like “well a hacker could still do…”, or advocating pet approaches that are infeasible for most people). When a vulnerability like this comes around, it provides another opportunity for those sorts of folks to shout and snark and win some personal points, at the expense of laymen who don’t have the ability to put these histrionics in context. So, even if these issues were a death knell for LastPass (I don’t believe they are, but more on that below), then password managers would *still* be a good idea for nearly everyone.

One important-but-often-overlooked benefit of password managers is that they condition users to emphasize specific passwords less and become more comfortable with the abstract idea of authenticating to a service through some intermediary. That forms one part of the bridge (along with the spread of federated auth and mobile push authentication, among others) toward a post-password future.

So, 3 bad vulns in LastPass, but the concept of password managers is still a net positive for security. How do we reconcile those?

Let’s put two points together:

  • All software has bugs, and (essentially) all important software has had critical vulns at some point (MS08-067, Stagefright, Heartbleed, Dirty CoW on Linux, iOS Trident, and various IE vulns)
  • Risk is more than just the count or severity of known vulns

I would argue that although the severity of the vulnerabilities was “critical”, the actual risk was relatively low for most users. By the time most people became aware of the issues, they had already been fixed, and the patch was effective for anyone who was online to receive the update. I don’t have metrics on how long it took the patch to actually reach users, but my browser picked it up before I had even finished reading the published disclosure. For highly-targeted users, that window might be enough time to put together an attack, but the vast majority of the population had a fix before they needed to care. A non-browser-based manager is an option that avoids some of these issues, but I rarely meet a person using one who’s not in infosec already.

The rest of the users continued to realize all of the risk-reduction benefits of a password manager (in addition to convenience, etc.), and never actually incurred significant risk from the vulnerability.

Planning for Failure

There’s a longer blogpost here about the significant attack surface of a browser, defense in depth, and security configuration decisions around that, but that’ll have to be another time.

At the moment, I just want to point out that LastPass’s ability to respond to a bug submission, triage it, then develop and deploy an appropriate fix in time to limit user impact is not luck or accident; I would argue that it speaks to internal engineering values, and that it’s table stakes for a modern software shop, *especially* in security software. Even while he was hammering on them, Tavis pointed out that their responsiveness is a better experience than he’s used to having with vendors. One of the most controversial points in my BlackHat talk from a few years ago was congratulating WordPress on their approach to automatic updates; while it’s easy to dump on the project as a poster child for security vulns, in practice their automatic update efforts do more to keep their users safe than some other projects with fewer bugs.

Final Thoughts

They got beat up pretty bad, but I’ll continue to use LastPass. We don’t often get to see in a very public way how a company handles a security issue, but when the response shows us that they’re thinking and doing the right things both before and during (Lastpass 2015, Kickstarter 2014), then it helps with the decision about whether to continue using them.

Netgear r7000 Command Injection Temporary Workaround

Sunday, December 11th, 2016

On Friday CERT issued a warning about the Netgear R7000 and R6400 lines of routers. They are vulnerable to a trivial, unauthenticated command injection via the internal-facing HTTP administrative interface.

[Image: proof of vulnerability]

Yeah, that’s almost as bad as it gets.

There’s plenty of other reporting for confirmation, exploit info, and further details. However, CERT’s official guidance is, well, not all that practical for a lot of people:

“The CERT/CC is currently unaware of a practical solution to this problem and recommends the following workaround: Discontinue use.”

Since Netgear is so far mum on when they’re going to issue a patch, and not everyone has the luxury of getting a new router or doing without indefinitely, there are a couple of workarounds. The first is to use the vulnerability itself to kill the admin web interface; that appears to work, though the router will become vulnerable again next time it reboots.
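For reference, the widely-circulated way to do that (reportedly; use at your own risk) is to request a URL of roughly this shape from a browser on the LAN, which injects a command that kills the router’s web server until the next reboot:

```
http://[router-address]/cgi-bin/;killall$IFS'httpd'
```

With the admin web server dead, the injection vector is unreachable, but so is the admin console, and the protection only lasts until the router restarts.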

A better solution is to migrate clients to a network that has no access to the admin interface. The r7000 at least (I can’t speak for other lines) has the ability to make guest wireless networks; since the guest networks have no access by default to the admin web interface, clients on those networks can’t be used to exploit it. This won’t work to isolate clients that physically plug into the router. Also, if you’re currently using the guest network feature for isolating some machines from others, then you all get to be on one network until a patch comes out.

So, as a temporary workaround, we can rename and hide the “main” network and create a new guest network using the same configuration options as the old main network. Clients will see the new one and migrate to it.

Step 1: Physically Connect to the Router

Physically plug your machine into the back of the router; we’re going to be messing with the wireless networks, and you don’t want to lose access in the middle. Access the admin console (http://[router-address]/, where your router address is probably 192.168.1.1 or 10.0.0.1), and log in using your credentials (admin/password if you haven’t changed them… which you should have).

Step 2: Record The Configuration

Browse to the “Wireless” tab on the left and copy down details of your primary wireless network. You’ll use these to configure the new guest network.

Step 3: Disable Main Wireless Network

Still on the Wireless tab, change the “Name (SSID)” of the network(s) (both if you’re using both 2.4GHz and 5GHz) to something like DONOTUSE. It’s not necessary, but unchecking “Enable SSID Broadcast” will prevent it from cluttering up your network view. Hit apply, and wait for the change to apply and the page to reload.

[Image: disabling the primary wireless network]

Step 4: Configure and Enable Guest Networks

Browse to the “Guest Network” tab on the left and fill in the details you copied down from the primary page. Ensure that “Enable Guest Network” is checked, and “Allow guests to see each other and access my local network” is unchecked. Hit apply, and wait for the change to apply and the page to reload.

[Image: guest network configuration]


Now, any clients will transparently migrate to the new guest networks, and clients on those networks won’t be able to exploit the vulnerability.

So, it’s something, but keep watching for an official patch.

Sort()ing out a Nerdsnipe

Saturday, October 1st, 2016

First, a disclaimer: this is really stupid stuff, and basically the opposite of work worth doing. But still, I got nerdsniped and the resulting rabbit hole went somewhere interesting, so I figured I’d write it up.

Update 10/2: Someone else got bit by the same bug. @swisshttp took a deeper dive into the implementation details of different scripting engines. https://swisshttp.blogspot.ch/2016/10/latacora-riddle.html


The new security startup Latacora uses a small piece of javascript to randomize the names of the founders in the first paragraph. It seems like a reasonable & egalitarian thing to do, but after idly hitting F5 a few times (like you do…) I noticed that it’s not really random.

[Image: Latacora homepage]

So, I went looking for the code that does the randomizing. It’s at the bottom and looks like:

[Image: original JavaScript snippet]
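The snippet was shown as an image; based on the description that follows (an anonymous comparator passed to sort, built from Math.random and Math.ceil, returning -1, 0, or 1), a reconstruction looks roughly like this (the exact code and the third founder’s name are assumptions on my part):

```javascript
// Reconstruction (not the original snippet): "shuffle" the founders by
// sorting with a comparator that ignores its inputs and returns a
// random value in {-1, 0, 1}.
var founders = ["Erin", "Thomas", "Jeremy"];

founders.sort(function (a, b) {
  // Math.random() is in [0, 1), so Math.random() * 3 is in [0, 3);
  // Math.ceil maps that to 1, 2, or 3 (0 only if random() is exactly 0),
  // and subtracting 2 yields -1, 0, or 1.
  return Math.ceil(Math.random() * 3) - 2;
});
```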

Now, these are the same folks who put together the Matasano Crypto challenges and Starfighters.io, so something like that would not at all be out of the realm of possibility. And, whether intentional or not (hey, I said this whole rabbit hole was dumb), if we test it (Gist) we find that the results are anything but random.

[Image: test code]

[Image: test results]
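The test code itself was an image; as a stand-in for the linked Gist (a sketch, not the original; the third name is assumed), a quick harness that tallies the resulting orderings looks like:

```javascript
// Run the random-comparator "shuffle" many times and count how often
// each ordering comes up. A genuinely uniform shuffle would put each
// of the 6 permutations near 1/6 of the trials.
function randomSort(names) {
  return names.slice().sort(function () {
    return Math.ceil(Math.random() * 3) - 2;
  });
}

var counts = {};
var trials = 100000;
for (var i = 0; i < trials; i++) {
  var key = randomSort(["Erin", "Thomas", "Jeremy"]).join(",");
  counts[key] = (counts[key] || 0) + 1;
}
console.log(counts);
```

The exact percentages depend on the engine’s sort implementation (modern V8 uses a stable TimSort, for instance), so the numbers you see today may differ from the original screenshots, but the skew away from uniform shows up either way.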

So, what gives? Is it in the call(s) to Math.random as the comment teases, or elsewhere? The code is really simple looking. Now, we hold as an article of faith that you cannot trust Javascript’s Math.random() (thanks @ropnop for the pointer), but there doesn’t seem to be any funny business with the seed or anything that might suggest actually messing with the generator. Plus, we’re not looking for a small bias, we’re looking for something really significant.

Could the issue be in the anonymous function passed to sort? Nope, not really. It correctly fulfills the contract expected by a custom sort function (return a negative, zero, or positive number based on the intended ordering of the compared values), and the body of the function does what it says (implies) on the tin: it returns each of those values close-enough-to-randomly, regardless of input.

Could it be something weird in the use of “ceil” to integerize the random number? That could create a small bias given the domain of Math.random, but again we’re looking for something that causes a huge bias.

Spoilers below – if you want to reason through this yourself, go ahead and do it, then come back.

No, the actual cause is both more subtle and more interesting: it’s in the interaction between the implementation of Array.sort and the anonymous compare function. Several things are going on here:

  • the sort implementation (either quicksort or insertion sort in Chrome – I’m assuming insertion sort for an array this small, based on the observed behavior below) assumes that the custom sort function passed into it does a sane comparison
  • the sort implementation moves left to right through the array
  • each comparison between two elements has 3 possible values (-1, 0, 1)
  • the sort operation is unstable (edit: this operation is stable. I misinterpreted the docs and how stability interacts with the random comparator. Thanks @fzzfzzfzzz)

So, there is no “correct” ordering of the array: the sorting algorithm isn’t checking to ensure that compares have a transitive property (nor should it). The result is not so much a true “sort” as a series of probabilistic changes of position.

I refactored the code a little bit to do a poor-man’s implementation of instrumenting the sort function (Gist). Here are three runs of it. Notice that the only way Erin *doesn’t* get bumped is if she “loses” the first compare. A “win” (compare result of -1) does not result in a swap because of the intent of the sort function, and a “tie” (compare result of 0) is never rechecked because the sort implementation does not guarantee a stable result.

[Image: instrumented sort runs]
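The instrumented runs were screenshots; the idea can be sketched like this (a stand-in for the Gist, not the original code, with the third name assumed):

```javascript
// Wrap the random comparator so every comparison is logged. The log
// shows which pairs the engine's sort actually compares, and what
// random result each comparison returned.
var log = [];
var names = ["Erin", "Thomas", "Jeremy"];

names.sort(function (a, b) {
  var result = Math.ceil(Math.random() * 3) - 2;
  log.push(a + " vs " + b + " -> " + result);
  return result;
});

console.log(log.join("\n"));
console.log("final order: " + names.join(", "));
```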

In effect: Someone starting in a given position has (1/3)^n chance of being bumped “n” positions in the array.
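As an aside: if a uniformly random order is actually what’s wanted, the standard approach is a Fisher–Yates shuffle rather than a random comparator. A minimal sketch:

```javascript
// Fisher–Yates: walk the array from the end, swapping each element with
// a uniformly chosen element at or before it. Every permutation is
// equally likely (up to the quality of Math.random()).
function shuffle(array) {
  for (var i = array.length - 1; i > 0; i--) {
    var j = Math.floor(Math.random() * (i + 1));
    var tmp = array[i];
    array[i] = array[j];
    array[j] = tmp;
  }
  return array;
}
```

Unlike the random comparator, this never asks a sorting algorithm to make sense of inconsistent answers, so position in the input confers no advantage.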

So, that explains why Erin gets to be first more often than expected, and why Thomas gets to be second. Here’s the breakdown of that math:

[Image: test results]

(1/3)^1 = .33333 ~= % of time Erin is bumped to position 1
(1/3)^2 = .11111 ~= % of time Erin is bumped to position 2

As expected, the pattern holds up if we add a “Dummy” to the end of the list:

[Image: test results with four entries]

(1/3)^3 = .037 ~= % of time Erin is bumped to position 3
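The (1/3)^n model can be sanity-checked without relying on any particular engine by writing out the left-to-right insertion sort assumed above and measuring where the first element lands (a sketch of the model, not any engine’s actual implementation; third name assumed):

```javascript
// Model: textbook insertion sort driven by a comparator that returns
// -1, 0, or 1 uniformly at random. Track how often the element that
// starts in position 0 ("Erin") ends up in each final position.
function insertionSort(array, cmp) {
  for (var i = 1; i < array.length; i++) {
    var element = array[i];
    var j = i - 1;
    while (j >= 0 && cmp(array[j], element) > 0) {
      array[j + 1] = array[j];
      j--;
    }
    array[j + 1] = element;
  }
  return array;
}

function randomCmp() {
  return Math.ceil(Math.random() * 3) - 2;
}

var trials = 100000;
var positions = [0, 0, 0];
for (var t = 0; t < trials; t++) {
  var result = insertionSort(["Erin", "Thomas", "Jeremy"], randomCmp);
  positions[result.indexOf("Erin")]++;
}
console.log(positions.map(function (c) { return c / trials; }));
```

Under this model the bump-by-two case comes out at exactly 1/9, and bump-by-one lands a bit under 1/3 (the two bump events overlap), which is in the same ballpark as the estimates above.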



So… there you have it. It’s a (probably unintended) nerdsnipe that nonetheless took us into some interesting territory. Hope you had fun. Did I miss something or explain it wrong? Lemme know in the comments or on twitter (@coffeetocode).