Archive for the ‘Non-Technical’ Category

A Technical Manager’s View of Imposter Syndrome

Monday, July 17th, 2017

“Hey [boss] — am I fired?”

This is one of those “ha ha, only serious” jokes; one of those moments when we use humor to broach a topic that’s hard to just come right out and ask. I remember using this (admittedly hamfisted) line with various managers over the years. What I really wanted to know was “Am I doing okay? You’d tell me if I wasn’t, right?”

I remember one job where I was so deeply uncomfortable about whether I was measuring up that the sound of the Skype ringtone (of a boss or coworker calling) would give me an instant anxiety attack. To this day I can’t hear that sound without it triggering a Pavlovian response. And yet, when I finally left that job, they made an aggressive counter-offer. The lesson: our internal barometer is not necessarily a reliable indicator of our external effectiveness.

Fast forward to recently as I was managing a team, except that I was on the receiving end of the “Am I fired?” (or equivalent) joking. It’s a whole different set of emotions to hear it from people who have every reasonable expectation of knowing where they stand with you; even more, this was coming from people who are diligent, talented, and effective at their jobs. Exactly the sort of people who really shouldn’t have to be walking around with some doubt in the back of their minds.


I recently changed jobs from that position where I was a manager of a technical team, to one where I’m a member of a team and not in a managing role. It’s not a move that a lot of people choose to make, and it has given me an unusual perspective and a chance to reflect on some things that technical managers can easily become disconnected from.

The Imposter Phenomenon

InfoSec has gone through some previous rounds of talking about imposter syndrome, so I won’t re-explain the concepts here. If you’re not already familiar with it, Jess Barker, Ben Hughes, and Micah Hoffman have all shared good explanations and perspectives on it (or outside of infosec: Neil Gaiman, Adam Savage). Likewise, the original paper on the “imposter phenomenon” (they specifically didn’t think it was a disorder, so “syndrome” is wrong) is still a relevant and interesting read, and has some actual thoughts on approaches to dealing with it; I highly recommend it. Finally, I think many people know about the main point of Dunning-Kruger (“Unskilled and Unaware of It”), but the core idea of imposter syndrome also shows up on the right hand side of each of the graphs in that paper; even high achievers tend to perceive themselves as essentially just a little better than average. Put another way, we tend to devalue a skill as we improve. That’s the insidious thing about imposter syndrome: it doesn’t go away as you get better.

Figure from Dunning-Kruger. Notice the top quartile: top performers who assess themselves as performing similar to the third quartile.

With all that in mind, I’d like to use my recent experience and perspective to share some approaches to addressing and handling the imposter experience for both managers and individual contributors.

How Managers Can Mitigate Imposter Syndrome In Their Teams

Encourage “I Don’t Know”

As a manager, you should both foster and model team norms that encourage asking questions, getting help, and learning without judgement. Treat good questions as a valuable resource: ask questions publicly yourself, and encourage others to seek help visibly (on shared mailing lists or channels), and then summarize the answers and how they found them. When people share new learnings in private, encourage them to proudly share it more broadly; from something as trivial as a new keyboard shortcut to big strategic ideas, both the knowledge itself and the meta-level learning of how someone came to that knowledge can help the team. I would even argue that the latter is more important: it represents an investment in getting better at learning together, an investment which compounds over time.


As a corollary to this, be vigilant for people undermining these values; a toxic individual on a team can affect this for an entire organization. Which takes us to…

Don’t Tolerate “Brilliant Jerks”

There are lots of ways that “brilliant jerks” manifest, but the bottom line is that their negative impact on the team always outweighs whatever individual contributions they’re making. If we truly value the investment in collective learning discussed above, then it becomes easier to reason about the damage caused by these people. Particularly in relation to imposter syndrome, I think some of these “jerks” thrive because other people who are as good but more cautious don’t call them out on it. That silence emboldens and normalizes the behavior, and the effect spreads. As a manager, specifically screen for it when hiring, and excise it if it slips through.

Give Specific Feedback (Both Positive and Negative)

There’s a lot of guidance out there about how to give critical/constructive feedback, so let me focus on positive feedback for a second. For strong performers, “good job, keep it up” is lazy feedback. It’s an easy trap to fall into, and I’ve personally been guilty of it, but it’s a disservice. It doesn’t help the recipient grow or continue to improve and it passes over an important opportunity to reinforce values, but there’s something more germane to imposter syndrome: it doesn’t come off as credible. Remember that the challenge here is that someone doing good work simply doesn’t believe it’s good. So specificity is an important tool here to address a bug in the recipient’s perception.

Especially for more junior positions where roles are relatively concretely defined, I encourage you to put in the thought to tie “good job” back to specific areas of professional development and, hopefully, their next tier of advancement. If we, as managers, can’t link “good job” back to accomplishment of specific, collaboratively-set goals and visible milestones then we’re robbing our best people of a chance to recognize their own accomplishments and maybe even surprise us. (Aside: More senior roles need different types of feedback, where the specificity is about helping to bring patterns and trends into focus. But that’s a whole other post.)

Encourage Collaboration

Finally, find ways for people to work together. This can take the form of pair programming, cross-functional projects, outside blogs/talks/research, or really anything that has people of different skillset and skill levels doing meaningful work together. I’m a fan of the mantra that “we all have something to teach, and we all have something to learn.”

Close, collaborative work helps people move from the view on the left to the one on the right (Graphic from @aliciatweet & @rundavidrun)

Nothing drives that point home faster for team members than having them plunk their laptops next to each other and actually work on a problem together using their differing approaches. As a manager, look for ways to help your team internalize that lesson.

How Individual Contributors (ICs) Can Reduce Imposter Syndrome

Position versus Trajectory

The first thing you probably need to know about good managers is that they care a lot less about where you are than about where you’re going. If you’re making progress, learning from mistakes, and getting better at your role, then you shouldn’t worry too much about individual day to day bumps.

This also applies importantly in hiring: it’s not uncommon for folks to somehow think they “lucked” or “conned” their way into a job offer, that they’re not actually good enough to get the job they just got. From a manager’s perspective, this is bollocks. The single most important thing that people managers do is pick their team. Jess Barker does a good job of translating a technique from the original Clance & Imes paper into a useful infosec context: imagine for a moment “confessing” to your new manager about how you feel like you conned them into giving you the position, and then try to imagine the response. Realistically, the response should be something like “I put a lot of thought into who I hire, and there are really good reasons I think you belong here; it’s my job to see that.”

Stop looking at your feet; look at the road ahead. Ask “where should I be in 6 months?”, then work on getting there, a little bit at a time.

Embrace the “Thirty Percent Feedback” Protocol

Fear of not being good enough can make us less productive. There’s a strange, destructive feedback loop that I’ve noticed (and been a victim of) where the more we worry about our work being worthwhile, the longer we’ll hold it away from scrutiny and criticism. This can manifest as trying to perfect a project or assignment right up to deadline, then dumping the entire thing in “final” form, without ever soliciting meaningful feedback. Ultimately this is counterproductive because it means that we’re not getting potentially useful feedback at the earlier, formative stages of the work. Worse, that lack of useful feedback increases our uncertainty about the quality of our work, which increases the inclination to work privately longer. Ever felt like you worked really hard for really long on something, and even at the end thought “that’s really not my best work”? Yup, that’s this. It sucks.

That’s a tough cycle to break. One approach I really like is Jason Freedman’s “Thirty Percent Feedback”. Give it a read for the entire concept, but the essence is to force yourself to seek feedback early, and provide context to your reviewer about how “done” the thing is. An outline, a whiteboard sketch, or a chunk of pseudocode is totally fine if you preface it with “this thing is 10% done”. Then you’re inviting responses that are useful at the 10% or 30% mark and can drive important holistic improvements, instead of getting bogged down in discussing things that aren’t important until the 90% mark.


Don’t Shoot for 100% Perfect

The “X% Complete” protocol actually helps with another challenge in complex fields: there is rarely a single, completely correct, un-nuanced answer. This can lead to analysis paralysis as we try to force a solution asymptotically to 100% complete and correct. Likewise, it contributes to imposter experience as we are best positioned to see the flaws in our own work. I’ve listened to phenomenally talented people absolutely *trash* their own work, even though it was objectively useful and important. Whether it’s releasing code as open source, giving a conference talk, or delivering a PhD thesis, the creator is inevitably the greatest critic. All the work that goes into understanding a topic or building a thing makes us better at seeing failings, which means that the horizon of perfection inevitably moves during the process.

There’s no fix for that, nor would we really want one; it means that we’ve grown while doing the work. To counteract the imposter experience that comes with this effect, change your goal: explicitly do not aim for perfect. Make your goal instead to make things a little better, and also to make it easier for others to make them better still.

  • Don’t write all the code to solve a problem: write enough, and make it maintainable and extensible.
  • Don’t write an entire textbook in a technical answer: write some guidelines and examples, and document the most useful references for others to expand later.
  • Don’t try to cram everything into a 50-minute conference talk: instead, try to convey the improved mental model you have after doing all the prep work, and help listeners get to your level faster than you did.



The entire impetus for this post was my recent change of roles; a good reminder that starting a new, larger job is a chance to be both excited and scared. Does knowing this stuff on an intellectual basis actually help with the feelings? I truly think it can, but the reason this article focuses on technical managers and leaders is because the real meaningful solution is in the culture of an organization. One reviewer pointedly asked if these ideas can work at high performing companies. I think that’s reversing causality; these are the kinds of ideas that *create* high performing teams and companies.

The role that’s currently scaring and exciting me is application security at Netflix. When I have a little more perspective I’ll certainly write about the first sixish months, but in the meanwhile I’ll answer the reviewer by pointing at Netflix’s recently updated culture essay. It really is how the company operates, and the cumulative effects of an entire organization working from those principles and at that level is a strong argument for the effectiveness of building a team around those values (…and if the culture deck resonates with you, you’ll find that we’re always hiring).

Talk Back

These are my thoughts, but I really hope they encourage others to share theirs. Managers — are there approaches that are useful in your team that others should use? ICs — are there things you or your managers do that improve or exacerbate the challenges of imposter experience? What am I not asking about that I should?

Please share in comments, twitter (@coffeetocode), or email (pst @ this domain).

Thanks to the many reviewers of various drafts of this post. 

On Pentesting, Professionalism, & “Chill”

Tuesday, September 13th, 2016

After a recent penetration test report-out call with a client, I asked my interns if anything from the call surprised them. One of them noted that he was surprised how “chill” the call was. That was interesting to me because it reminded me that I had thought the exact same thing when I first got into consulting and pentesting. It’s easy to see how a readout call could be an incredibly tense, combative affair, but in my experience the best pentesters manage to not only avoid that but reverse it.

The mood of the report-out call is an excellent barometer for something that’s critical, and often lacking, in our industry: a constructive relationship between the red team and blue team. While critical, it’s also subtle, and creating the conditions for a good relationship is a process that requires real work and empathy for everyone at the table. My advice: Start Early, Be Meticulously Professional, and Remember the Goal.

Start Early

If the goal is a relaxed, productive, (even “fun”) readout call, then the groundwork must start early. While there are other things that come before it, a detailed kickoff is really the first big chance to get moving in the right direction. As a tester your goal should be to make sure that everyone is clear and comfortable with what’s about to happen, and what exactly the client hopes to get out of it. In practice that means a combination of standard questions and sniffing around for any hint that there are either concerns or complexities going unaddressed. It’s also important at this point to really understand context from the client’s view. What’s a critical vs. a high or medium? What do they care less about than you might expect them to? Why? The more understanding you have now, the more that the entire report can be placed in context. If it feels like you’re facilitating a group therapy session where the clients are sharing their (security-relevant) hopes and their fears, then you’re probably doing something right. My team literally asks questions like “what keeps you up at night?” and “what’s the scariest thing we could do here?” Asking the big questions frankly and early helps take the elephant out of the room and moves toward productive discussion of the big questions rather than tentatively working up to them through peripheral issues.

Aside from the kickoff meeting itself, “start early” means start doing things well now so that you have a buffer of goodwill to draw on later. I’ve heard it called an “emotional bank account”: make people feel good about you, make a deposit; let them down, make a withdrawal. Ideally you always want that balance going up, but when something happens (and it’s definitely going to), you want to make sure that you’ve got a nice buffer of goodwill so that it’s understood that it was a blip in an otherwise solid relationship. Neil Gaiman once explained that people keep working because their work is good, they’re pleasant to work with, and because they deliver on time. But the secret, he says, is that it only takes two out of the three. Different people are going to be able to take a different two for granted, but know that if you always shoot for all three you’ve bought yourself some leeway if something happens.

Be Meticulously Professional

Beyond simple goodwill, one of the important reasons clients call in pentesters (or consultants of any kind) is to get that feeling that they’re in good hands; that someone is going to make sure that messy, complex things get taken care of properly. We’re expected to drop into situations where deadlines, resources, or nerves are already in trouble and provide some useful answers and confidence that the “Right Things” are being done. So, standard consulting practices like “Communicate well and often”, and “Don’t surprise people” apply, of course.

But one area that security folks sometimes struggle with is ego. There typically are already plenty of personalities and internal politics involved; that makes it critical for us as outsiders to not bring further ego into the situation.

This ego can take a few forms. The first is a tendency toward fearmongering and overselling findings; wanting to be perceived as one of those “scary hacker types”. That can be helpful (to a point) for establishing technical credibility but it’s important to realize that being cool isn’t in the job description. Likewise, neither is taking credit or passing blame. Remember: Amateurs get credit, professionals get paid. The rule for blame is similar: as in the airport in Fight Club, never imply ownership of the bug. If the goal is to make something more secure, it’s rarely relevant who exactly created a bug when it’s likely process, tooling, or training that really needs to change.   

Remember The Goal

This leads into another place that unhelpful ego pops up: security absolutism.

I hear security absolutism in language like “Windows sucks because…” or “Well, actually there’s no point in fixing that because hackers could still…” (or really anytime someone starts handwaving about esoteric TLS attacks or Van Eck phreaking … you know the type).  

Real professionals need to be able to set aside the hacker mindset long enough to have productive, nuanced discussions about how to fix things. There are rarely perfect solutions, and the imperfect ones come with tradeoffs. We should all be willing to be as pragmatic on defense as we are on offense. The perception that security people are going to naysay or ridicule every suggestion hurts all of us, and makes us less effective as an industry. The “Nick Burns” mentality is a self-reinforcing stereotype we need to fight against at each encounter. Similarly, there’s an odor of superiority that often comes off some pentesters when they break a thing and speak about it publicly, as if that somehow demonstrates that they are smarter than the person who designed it. Sometimes a thing is *so* bad that an example must be made, but for my tastes those instances are far more rare than twitter and blogs would make you think.    

Even if a client is a pain to work with, doesn’t take good advice, and fights you on everything, they made at least one smart call: they asked for help. The better we’re able to appreciate that, understand their perspective, and work toward improving the system, the better the relationship and better the results. I’ll feel happier about our industry when dev and ops actually look forward to their calls with security folks, and I’ll tell you this: life’s a lot better when we look forward to them too. So remember; be professional, be empathetic, be helpful — and be chill.

What Kickstarter Did Right

Monday, February 17th, 2014

Only a few details have emerged about the recent breach at Kickstarter, but it appears that this one will be a case study in doing things right both before and after the breach.

What Kickstarter has done right:

  • Timely notification
  • Clear messaging
  • Limited sensitive data retention
  • Proper password handling

Timely notification

The hours and days after a breach is discovered are incredibly hectic, and there will be powerful voices both attempting to delay public announcement and attempting to rush it. When users’ information may be at risk beyond the immediate breach, organizations should strive to make an announcement as soon as it will do more good than harm. An initial public announcement doesn’t have to have all the answers; it just needs to give users an idea of how they are affected, and what they can do about it. While it may be tempting to wait for full details, an organization that shows transparency in the early stages of a developing story is going to have more credibility as it goes on.

Clear messaging

Kickstarter explained in clear terms what was and was not affected, and gave straightforward actions for users to follow as a result. The logging and access control groundwork for making these strong, clear statements at the time of a breach needs to be laid far in advance and thoroughly tested. Live penetration testing exercises with detailed post mortems can help companies decide if their systems will be able to capture this critical data.

Limited sensitive data retention

One of the first questions in any breach is “what did they get?”, and data handling policies in place before a breach are going to have a huge impact on the answer. Thinking far in advance about how we would like to be able to answer that question can be a driver for getting those policies in place. Kickstarter reported that they do not store full credit card numbers, a choice that is certainly saving them some headaches right now. Not all businesses have quite that luxury, but thinking in general about how to reduce the retention of sensitive data that’s not actively used can reduce costs in protecting it and chances of exposure over the long term.

Proper password handling (mostly)

Kickstarter appears to have done a pretty good job in handling user passwords, though not perfect. Password reuse across different websites continues to be one of the most significant threats to users, and a breach like this can often lead to ripple effects against users if attackers are able to obtain account passwords.

In order to protect against this, user passwords should always be stored in a hashed form, a representation that allows a server to verify that a correct password has been provided without ever actually storing the plaintext password. Kickstarter reported that their “passwords were uniquely salted and digested with SHA-1 multiple times. More recent passwords are hashed with bcrypt.” When reading breach reports, the level of detail shared by the organization is often telling and these details show that Kickstarter did their homework beforehand.

A strong password hashing scheme must protect against the two main approaches that attackers can use: hash cracking, and rainbow tables. The details of these approaches have been well-covered elsewhere, so we can focus on what Kickstarter used to make their users’ hashes more resistant to these attacks.

To resist hash cracking, defenders want to massively increase the amount of work an attacker has to do to check each possible password. The problem with hash algorithms like SHA1 and MD5 is that they are too efficient; they were designed to be completed in as few CPU cycles as possible. We want the opposite from a password hash function, so that it is reasonable to check a few possible passwords in normal use but computationally ridiculous to try out large numbers of possible passwords during cracking. Kickstarter indicated that they used “multiple” iterations of the SHA1 hash, which multiplies the attacker effort required for each guess (so 5 iterations of hashing means 5 times more effort). Ideally we like to see a hashing attempt take at least 100 ms, which is a trivial delay during a legitimate login but makes large scale hash cracking essentially infeasible. Unfortunately, SHA1 is so efficient that it would take more than 100,000 iterations to raise the effort to that level. While Kickstarter probably didn’t get to that level (it’s safe to assume they would have said so if they did), their use of multiple iterations of SHA1 is an improvement over many practices we see.
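As an illustration only (Kickstarter hasn’t published their exact construction, and new code should use bcrypt or PBKDF2 instead), an iterated SHA-1 scheme might look like this sketch in Python, where the iteration count directly multiplies the attacker’s per-guess cost:

```python
import hashlib

def iterated_sha1(password: str, salt: bytes, iterations: int) -> bytes:
    """Sketch of iterated ("stretched") SHA-1 hashing: each extra
    iteration multiplies the work an attacker must do per guess."""
    digest = salt + password.encode("utf-8")
    for _ in range(iterations):
        digest = hashlib.sha1(digest).digest()
    return digest

# 100,000 iterations raises per-guess cost roughly 100,000x over one SHA-1.
h = iterated_sha1("correct horse battery staple", b"per-user-random-salt", 100_000)
```

Timing this loop on your own hardware is a quick way to see why it takes SHA-1 so many iterations to reach the ~100 ms target mentioned above.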

To resist rainbow tables, it is important to use a long, random, unique salt for each password. Salting passwords removes the ability of attackers to simply look up hashes in a precomputed rainbow table. Using a random, unique salt on each password also means that an attacker has to perform cracking on each password individually; even if two users have an identical password, it would be impossible to tell from the hashes. There’s no word yet on the length of the salt, but Kickstarter appears to have gotten the random and unique parts right.
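A minimal sketch of per-password salting (using SHA-256 here purely for brevity; a real scheme would combine this with stretching as discussed above):

```python
import hashlib
import os

def hash_with_salt(password: str) -> tuple[bytes, bytes]:
    """Hash a password with a fresh 16-byte random salt. Identical
    passwords get different salts, hence different digests, which
    defeats precomputed rainbow tables."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + password.encode("utf-8")).digest()
    return salt, digest

s1, h1 = hash_with_salt("hunter2")
s2, h2 = hash_with_salt("hunter2")
# Same password, different salts, so the stored digests differ.
```

This is exactly the property described above: two users with the same password are indistinguishable from their stored hashes.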

Finally, Kickstarter’s move to bcrypt for more recent passwords is particularly encouraging. Bcrypt is a modern key derivation function specifically designed for storing password representations. It builds in the idea of strong unique salts and a scalable work factor, so that defenders can easily dial up the amount of computation required to try out a hash as computers get faster. Bcrypt and similar functions such as PBKDF2 and the newer scrypt (which adds memory requirements) are purpose-built to make it easy to get password handling right; they should be the go-to approach for all new development, and a high-priority change for any codebases still using MD5 or SHA1.
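To make that recommendation concrete, here’s a minimal sketch using PBKDF2 from Python’s standard library (bcrypt requires a third-party package, so PBKDF2 stands in here; the structure is the same: a unique random salt plus a tunable work factor stored alongside the digest):

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 200_000) -> tuple[bytes, int, bytes]:
    """Derive a password hash with PBKDF2-HMAC-SHA256. Storing the salt
    and iteration count with the digest lets the work factor be raised
    later as hardware gets faster."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return salt, iterations, digest

def verify_password(password: str, salt: bytes, iterations: int, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, n, digest = hash_password("tr0ub4dor&3")
```

Verification recomputes the derivation with the stored salt and count, so raising the default iteration count for new passwords never breaks old ones.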