Tuesday, September 23, 2014

No Thing Left Behind

You're Damn Right

Adorable Crochet Puppy Mauls Researcher

Most of what we've heard about the Internet of Things (IoT) has been pushing fear, uncertainty, and doubt with regard to security. But the effect has not been an improvement in security. Rather, the result has been fear that information security researchers aren't taking the time to look at the actual threat surface of the devices they analyze; uncertainty regarding what direction engineers should take to solve realistic IoT security concerns; and doubt as to whether the talking heads driving the industry conversation are basing their perspectives on any semblance of reality whatsoever. In effect, the Internet of Things landscape looks much like a foggy moor with Dracula hovering menacingly over a bubbling bog. The reality, however, is quite the contrary. Internet of Things security is about as terrifying as Grayson the Vampire Puppy, whose cute and cuddly fangs will snuggle their way into your palpitating heart.
Grayson Will Stare Deep into Your Void, Researcher Soul

Lead Your Flock

My biggest concern with the current conversation about IoT has nothing to do with Dave's concerns. While his concerns are valid, I don't have to sit around reading CFP submissions for Infiltrate and SyScan. Though hacking some random device has value, it has less value than it did four years ago. This is not to invalidate the work people are putting into getting up to speed; it's a comment on the structure of our industry. If we look at the industry from a scientific perspective, we need only take enough samples to identify patterns of behavior. In IoT, we have established not only patterns of behavior, but substantial models in which the Things exist. So, at some point, we should move on from hacking things to solving models. Instead of breaking down hardware just to score a point on the information security conference scene, we should be holding the hands of these devices as they prepare to enter the brave new world they're designing for us.

Off to School, Little Guy
This year at Black Hat, Zach Lanier and I (Don A. Bailey) started this very conversation by getting people to talk more about the models than the devices. This reinforced the ideas I set forth at the CyberHive Securing the Internet of Things: Masters dinner, where I presented on this very concept just prior to Black Hat. The only way we can move forward as an industry is if we start talking about the correct models in IoT, the realistic risks associated with those models, and practical ways to mitigate those risks.

The Butt Dance 

Another major concern is the platform upon which people speak. I am of the belief that everyone, and I do mean everyone, has value to bring to a community. Agreeing or disagreeing with a person or a perspective is often moot. What is important is whether people's opinions are heard, and whether a consensus is reached as to whether the opinion has merit. For this to work, opinions must be presented with facts or some semblance of research to support the opinion as more than subjective ranting. That said, we also live in a highly competitive world, and an even more competitive industry. I get it. We all want to be the best at all the Things. That's fine.

But when one or a few of us decide to be the mouthpiece for the industry, they'd better have their shit together. There are a few things wrong with being a talking head without a body. Without an appropriate and well-researched technological platform on which to stand, a mouthpiece is spouting nothing anyone wants to hear. The intentions may come from a heartwarming, fuzzy, Care Bear Stare place of good will, but the words are jumbled and juxtaposed webs of bullshit that look like something a spider might spin on copious amounts of drugs.

Charlie Can Serve It Up
People aren't stupid. Engineers aren't stupid. Project managers and product drivers are not stupid. They might not be as technically capable as a security researcher, but they understand the technology and business risks. What they often lack is the ability to prioritize threats according to probability of abuse, and to couple those prioritized concerns with tangible, practical, and cost-effective solutions. That's our job: to bridge that gap. If we waste their time with hyperbole, they will realize that we're full of shit and look for solutions elsewhere. This helps no one and distills a significant and imperative security message down to what is essentially a Butt Dance: people think you're either insulting them or yourself, but they can't tell which. That's a problem.

IoT: The Next Generation

Let's stop for a moment and take a break. Put down the Saleae Logic analyzer. Put down the oscilloscope. Put down the multimeter. Let's circle back as an industry and get our message right. There's a whole world of product owners, engineers, lawyers, and politicians out there who are willing to help build and enforce security. But in order to make it happen, we need a cohesive message.

Delicious Breaks...
As a result, there are a few rules we need to adhere to. These aren't my rules. They're really just common-sense points that should be emphasized.

For Speakers

  • Debugging is not a hack
  • Sunlight shining down on garbage don't make garbage smell sweeter
  • Trust is built by new research, not describing the wheel
Telling hardware, firmware, or software engineers that using interfaces specifically designed to analyze or alter code is a hack is really a poor attempt at hacking humans. You're trying to convince someone that you performed a significant attack by doing something the device or interface was designed to do. This is universally ridiculous, and is essentially insulting your potential customer. Sure, they might need to disable debugging capability in production-level devices, but do they? What's the true risk to the business or to the end user if debugging capability is left enabled? Every device has a different threat surface. Identify whether this even falls into the category of reasonable risk.
Ricky Understands Metaphors

There's a famous saying in the South about sunlight never sweetening garbage. It essentially means that no matter what light you shine on a certain topic, you'll never frame it in a way that makes it seem desirable. This is the case with presenting an attack from the perspective of the wrong threat model. For example, I recently had a discussion with an individual who couldn't get beyond the use of Zigbee in a home product. They were absolutely infuriated that this product used Zigbee because of "all the security risks" with Zigbee. Sure, Zigbee has issues, but are those issues in scope? Their perspective was that Zigbee was a serious problem because it "makes everything critically vulnerable". But they completely hand-waved over the fact that the risk can't be abused remotely. So, sure, you might be able to break the crypto key in a reasonable amount of time for every single instance of this particular device. But you'd have to figure out how to do it at each instance's location. This means you'd need a secondary (or larger) set of attacks just to get to the Zigbee layer, or you'd have to be on site at every attack location. This is not a realistic attack! Sure, you'll get some laughs at a conference about how Zigbee can be broken if the crypto is weak, but you're not really breaking the IoT device at that point. You're just breaking Zigbee. So, talk about Zigbee and be done with it. Oh, Zigbee talks have already been done? Whoops.

And since you're not in the business of reinventing the wheel, why do something someone else has already done? Break new ground! That's why we're here, right? Scanning the Internet for VNC ports may not be a valuable use of your time when you could instead be saying "hey, let's push for standards and legislation that require ISPs to restrict access to critical ports unless subscribers explicitly ask" or some such variation. More importantly, what if the real research focused on how embedded photovoltaic systems are being designed with VNC enabled by default in some models? Wouldn't it be more useful to talk about the specific issues, and assist those companies with engineering to diminish the problem at the source, rather than shaking a finger at the aftermath while using the same finger to point every would-be attacker in the direction of all the vulnerable things? Yeah.

Not one bit.

For Talking-Heads

  • Your job is to get us through the golden gates
  • Never build a house on sand
  • Kiwis can't fly, and they don't tell people they can
The number one job of a talking head is to open doors and burst through gates. Oftentimes technical researchers don't have the time or negotiating skills to get into the right places to effect change at high levels. The talking head is designed, through evolution, for this specific goal. But if the talking head presumes he or she can fulfill the role of a technical researcher once through those gates, they risk losing the ear of the audience in the secret rooms and clubhouses of the elite. You cannot afford to lose the contacts you've just made by augmenting your verbiage with less-than-honest technical details. Someone will notice, and you'll be excommunicated from those golden halls. Accept your position as, essentially, a politician. A politician's job is to speak the word of their constituents; in this case, it's the voice of the technical community. Be the channel through which the technical community speaks to the executive decision makers of the world. Only together can you facilitate action. Separately, the story becomes imbalanced and full of holes that even non-technical people can identify quickly.

Building a house on sand is the same as making technical presumptions without the technical or engineering experience to know they are true. A great example of this is Karsten Nohl's brilliant work on USB hacking. The attack works because removable devices have no trust mechanism, regardless of what type of device they are. Seeing a USB mouse is believing the device is a USB mouse. But anyone who has ever written firmware (especially for a USB module) knows that the firmware can be written on a generic module, and the firmware can present itself as any type of USB device it wants to be. Without the technical details, the attack surface will look like "what if someone uses this mouse while you're gone to click on an administrative interface", when the real attack surface may be "this mouse can detect when it has been idle for over five minutes, and will then switch to a USB network dongle and hijack your DNS". Without accurate technical details you can't facilitate proper mitigation, which will cause high-level decision makers to kick off initiatives with poorly designed scope. High-level initiatives may take years to adjust if they aren't formulated accurately. Think about how long it took to integrate proper security engineering into Microsoft's SDLC. This was the result of a model that incorrectly dealt with security gaps and needed to be adjusted over time to accommodate existing infrastructure and personnel.

Finally, and this is just reiterating the points made above, never tell someone you know something you don't. Jack Welch of GE fame was a brilliant business leader because he knew the value of his gaps in knowledge. Jack openly admits that his sole job was to hire people smarter than him to fill those gaps. This allowed him to run GE with not only an iron fist, but an iron mind. He had the support of his employees, because they knew he put his trust in them to advise him. Use the technical research community in the same way. While none of us will want to be seen as your employees (unless we technically are), if you present a message as a community initiative, not your own, the community will participate.


We've got a long way to go with IoT. The technology is moving fast, but it's not a terror or a lost cause. In fact, it's getting better every day. Today, we have access to organizations like Bugcrowd and HackerOne to facilitate crowd-scale security testing. We've got more researchers and white papers on security process than we can shake a stick at. We've got increasingly well-engineered technology for deploying trusted hardware. Sure, it's not perfect, but we're light years beyond where we thought we'd be five years ago.

So let's stop talking about how the world is burning, and start working together to put out the real fires. In other words, PoC||GTFO.

Only You Can Save the World