Katie Moussouris - CEO, Luta Security



You can’t talk about vulnerability disclosure without talking about Katie Moussouris.

For the past 15 years, Moussouris has pushed enterprises, researchers and anyone in between on the proper ways to report, detect and fix vulnerabilities. Apart from process, Moussouris has done extensive work making sure that everyone is on the same page when it comes to thinking about the issue in the right way.

But beyond vulnerability research, Moussouris has been a vocal advocate for everything from workplace equality to better cyber-hygiene at the individual and enterprise level.

CyberScoop: It’s been a busy 12 months for you, so let’s highlight all of the work you have been doing around vulnerability disclosure.

Katie Moussouris:  My customers that everybody knows about, folks like the U.K. government and a couple of others, have been going through their transformations internally in the past year.

Certainly the U.K. government is a great example, because of course, they didn’t really have a formal process for disclosure, and more importantly, they didn’t have a way to guide new services to have the capabilities to handle bug reports. What they’ve been working on is developing those capabilities basically by introducing target teams to the concept of vulnerability coordination, and developing a set of criteria. But they needed to formalize this process. I think that, in and of itself, is a much more mature way to deliver. It’s been very inspiring to work with a government organization that understands that this is about a process that needs to be rolled out, and that it’s not just a quick fix like slapping an email address up front and hoping for the best.

This is a problem that pervades most of computing and will certainly pervade medical device security, IoT security, car security — pretty much everything that has a well-known supply chain is going to have to deal with this problem.

CS: Do you think that we will ever reach a balance where things are secure by design, in that they will stay secure five to 10 years into their life cycle? 

KM: Secure by design is the holy grail. We’ve been preaching it in software development for a very, very long time with the security life cycle. Secure by design takes on different forms and has different challenges. For example, you could perhaps build something today that is as secure as it can be. Yet any number of factors could change the fact that your originally secure design is now no longer secure. Those things are kind of out of your control, right?

There’s also an issue with this notion that “OK, it was designed securely from the ground up and it will magically hold.” You see things that indirectly affect the ability to be truly secure by design. For example, we’re not writing code from scratch all the time. Even when we are writing code, it will integrate with the rest of the world’s software.  We do our best to secure it, but there’s other code that’s intrinsically already deployed and already insecure. So we have backwards compatibility as one of the major impediments to modern security by design.

CS: A big part of security by design is taking humans out of the equation as much as possible. Do you think the cybersecurity community is making progress in that regard? What more could the community do?

KM: At the design level, one of the biggest choices that you can make is in what language you use. C and C++ obviously have a lot of unsafe functions associated with them, for instance. There are also development frameworks that developers can use, where you can ban the use of certain very unsafe functions. If you’re using those, there are ways to basically make it harder for developers to introduce errors.

So step one: Let’s create a language that makes it a lot harder for developers to make mistakes and let’s get developers to use it. Step two: Let’s get people who might use some of these apps excited about the apps themselves. Step three: Get widespread adoption of a new technology that might make it harder to make common security errors.

CS: What is one big thing the cybersecurity community could improve upon? 

KM: I have been professionally in this industry for the past 20 years, and the one thing that I feel like we can improve upon is the use of marketing to oversell and overpromise whatever security solution it is that we’re offering to the world. This has been bugging me for literally 20 years. We’ve been doing things in cycles. Every new security technology gets this overhyped marketing beginning.

I think everybody wants an easy solution. I think that marketing at this point should be mature enough to say that there is no silver bullet. We’re never going to be 100 percent secure. So instead of promising that you’re going to prevent breaches, I think we need to be realistic and say this is about prevention as best we can, detection as early as we can and recovery as completely as we can.