MindShaRE: Analyzing BSD Kernels for Uninitialized Memory Disclosures using Binary Ninja

Disclosure of uninitialized memory is one of the common problems faced when copying data across trust boundaries. This can happen between the hypervisor and guest OS, kernel and user space, or across the network. The most common bug pattern noticed among these cases is where a structure or union is allocated in memory, and some of the fields or padding bytes are not initialized before copying it across trust boundaries. The question is, is it possible to perform variant analysis of such bugs?

The idea here is to perform a control-flow-insensitive analysis that statically tracks all memory store operations. Any memory region that is never written to is flagged as uninitialized when data from it is copied across a trust boundary.

Generalizing the code pattern for analysis

Consider the case of CVE-2018-17155, a FreeBSD kernel memory disclosure in the getcontext() and swapcontext() system calls due to a lack of structure initialization. Shown below is the patch for sys_getcontext(). The listing on the left shows the patched code. sys_swapcontext() was patched in a similar fashion.

Figure 1 - Patch for sys_getcontext() information disclosure. Vulnerable code appears on the right.

The vulnerable code declared a ucontext_t structure on the stack, wrote to some but not all of its fields, and finally used copyout() to copy UC_COPY_SIZE bytes of data from the structure to userland. The problem is that not all fields are initialized, so any data occupying the uninitialized parts of the structure's memory region is disclosed. To solve the problem, the patched code zeroes out the entire structure using the bzero() function.

The generalization of the above code pattern looks like this:

• A memory region (structure, union, etc.) is declared on the stack or allocated on the heap, which could be the source of uninitialized memory.
• The memory region may get fully or partially written.
• There is an API that transfers data across trust boundaries. This could be the sink for uninitialized memory.
• The API generally takes at least 3 parameters: source buffer, destination buffer, and size. In this case, the source of the memory is a stack offset, and the size of the transfer is a constant value. A constant transfer size means the value is either the entire size of the memory region (via the sizeof operator) or a portion of it up to some offset.
• The memory region may be zeroed out before usage using functions like memset() or bzero().

The sink function is application-specific. To mention a few of the more likely sinks: copy_to_user() in the case of the Linux kernel, copyout() in the case of the BSD kernels, and send() or sendto() for network transfers, or any wrappers around them. The definitions of these functions are either documented or, if the target is closed source, recovered through reverse engineering.

Searching the code pattern for analysis

Once the sink function and its definition are known, we can query for calls to the sink function with a constant size argument and a source buffer pointing to a stack offset or heap memory. Querying for a pointer to stack memory is straightforward, whereas detecting heap pointers requires visiting the definition site of source variables. Consider the definition of the copyout() function in BSD:

         copyout(const void *kaddr, void *uaddr, size_t len)

When looking for stack memory disclosures, search for cross-references to the copyout() function where kaddr is pointing to a stack offset and the len parameter is a constant.

Binary Ninja has a static data flow feature that propagates known values within a function, including stack frame offsets and type information. Using this feature, it is possible to narrow down calls to copyout() that satisfy our search criteria. To understand this better, let’s inspect the arguments passed to copyout() from sys_getcontext().

Figure 2 - sys_getcontext() invoking copyout(kaddr, uaddr, len)

The kaddr parameter, or params[0], holds a kernel stack pointer, shown as the stack frame offset -0x398. The value of the len parameter, or params[2], is shown as the constant 0x330. Since Binary Ninja has no information regarding uaddr, it is shown as <undetermined>. With this register type information for kaddr and len, the following query fetches all instances of calls to copyout() with a kernel stack pointer and a constant size:
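
A minimal sketch of such a query using the Binary Ninja Python API might look like the following (the database file name is illustrative):

    from binaryninja import BinaryViewType, RegisterValueType

    bv = BinaryViewType.get_view_of_file("kernel.bndb")  # illustrative path
    copyout = bv.get_symbols_by_name("copyout")[0]

    for ref in bv.get_code_refs(copyout.address):
        caller = ref.function
        # copyout(kaddr, uaddr, len): parameter 0 is kaddr, parameter 2 is len
        kaddr = caller.get_parameter_at(ref.address, None, 0)
        length = caller.get_parameter_at(ref.address, None, 2)
        if (kaddr.type == RegisterValueType.StackFrameOffset
                and length.type == RegisterValueType.ConstantValue):
            print(f"{caller.name} @ {ref.address:#x}: "
                  f"kaddr={kaddr.value:#x}, len={length.value:#x}")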

Statically tracking memory stores

The core idea of the analysis is to track all memory store operations using Binary Ninja's static data flow capability and to propagate pointers manually using Static Single Assignment (SSA) form whenever necessary. For tracking stack memory stores in local function scope, we rely on Low Level IL (LLIL), because Medium Level IL (MLIL) abstracts stack access and might eliminate some of the memory stores. For tracking inter-procedure store operations, where the address is passed to another function, we rely on the MLIL SSA form to propagate the pointers. The visitor class implemented to handle IL instructions is based on Josh Watson's Emilator.

Tracking stack memory stores with LLIL

In LLIL, any instruction writing to memory is represented as an LLIL_STORE operation. It has a source and destination parameter. The idea is to linearly visit each LLIL instruction in a function and check if it is an LLIL_STORE operation having a stack frame offset as its destination. When a memory store writing to stack is identified, we will log the source offset of the write and its size. Consider a simple 8-byte memory move operation and its corresponding LLIL information provided by Binary Ninja:

Figure 3 - LLIL_STORE operation in freebsd32_sigtimedwait()

The StackFrameOffset value is the offset from the base of the stack, and the size property gives the size of the store operation. Using this information, it is possible to know which memory addresses are being written. In this case, the addresses from stack base offset -116 to -109 (8 bytes) are being initialized.
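
As a rough sketch, the LLIL walk could look like this, where mark_initialized() is a hypothetical helper that records the written byte range:

    from binaryninja import LowLevelILOperation, RegisterValueType

    def track_stack_stores(func, mark_initialized):
        # Visit each LLIL instruction linearly and log stores whose
        # destination resolves to a known stack frame offset.
        for block in func.llil:
            for insn in block:
                if insn.operation != LowLevelILOperation.LLIL_STORE:
                    continue
                dest = insn.dest.value  # RegisterValue of the destination expression
                if dest.type == RegisterValueType.StackFrameOffset:
                    # insn.size is the width of the store in bytes
                    mark_initialized(dest.value, insn.size)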

Static function hooks and memory writing APIs

While memory store instructions are one way to initialize memory, functions like memset() and bzero() are frequently used to initialize a memory region with NULLs. Similarly, functions such as memcpy(), memmove(), bcopy(), strncpy(), and strlcpy() are also used to write to a memory region. All these functions have something in common: there is a destination memory pointer and a size to write. If the destination and size values are known, it is possible to know the memory region being written to. Consider the case of bzero(), which is used to clear stack memory in the patched sys_getcontext():

Figure 4 - Clearing stack memory using bzero()

By querying the destination pointer and size parameters, it is possible to know their respective values and hence the target memory region.

Now let us consider how the analyzer handles CALL operations. Static hooks are handlers for functions we intend to treat differently from other functions. For any CALL instruction with a known destination (i.e., MLIL_CONST_PTR), the symbol is fetched and checked against the static hooks.

A JSON configuration with the function names as well as their positional parameters (destination buffer and size) is provided to the analyzer for static hooking:
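
A representative configuration might look like this (the exact schema is an assumption; parameter positions are zero-based):

    {
        "memset":  { "dest": 0, "size": 2 },
        "bzero":   { "dest": 0, "size": 1 },
        "memcpy":  { "dest": 0, "size": 2 },
        "memmove": { "dest": 0, "size": 2 },
        "bcopy":   { "dest": 1, "size": 2 },
        "strncpy": { "dest": 0, "size": 2 },
        "strlcpy": { "dest": 0, "size": 2 },
        "copyin":  { "dest": 1, "size": 2 }
    }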

The copyin() function is specific to the BSD kernels. It is used to initialize kernel buffers with data from user space. Any target-specific functions to hook can be added to the JSON config and handled in visit_function_hooks() as needed.
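
A sketch of the hook lookup, with check_static_hook() as a hypothetical helper:

    from binaryninja import MediumLevelILOperation

    def check_static_hook(bv, call_insn, hooks):
        # Return the hook entry if the MLIL call targets a constant
        # destination whose symbol appears in the hook table.
        dest = call_insn.dest
        if dest.operation != MediumLevelILOperation.MLIL_CONST_PTR:
            return None  # indirect call: destination unknown, nothing to hook
        sym = bv.get_symbol_at(dest.constant)
        if sym is not None and sym.name in hooks:
            return hooks[sym.name]  # e.g. {"dest": 0, "size": 2}
        return None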

Handling x86 REP optimization

Compilers often optimize memory-writing functions into REP instructions or a series of store operations. While store operations introduced by optimization can be handled like any other store operation, REP instructions require special handling. Static function hooks are not useful in detecting memory writes due to REP. So how do we handle such optimizations and avoid missing those memory writes? First, let's look at how Binary Ninja translates the REP instruction in LLIL or MLIL.

Figure 5 - memcpy() optimized to REP instruction

Figure 6 - REP instruction translation in MLIL

The REP instruction repeats the string operation until RCX is 0. The direction of copy operation depends on the Direction Flag (DF), hence the branching where one branch increments the source (RSI) and destination (RDI) pointers and the other decrements them. In general, it is reasonably safe to assume that DF will be 0, and that pointers are incremented.

When linearly walking through the ILs, the translated REP instruction will look no different from other instructions. The idea is to check each GOTO instruction in the IL and fetch the disassembly at the same address. If the disassembly is a REP instruction, we fetch the destination pointer and size arguments and mark the memory region as initialized.
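
A sketch of this check, reusing the hypothetical mark_initialized() helper; the element width is derived from the string-operation mnemonic:

    from binaryninja import LowLevelILOperation, RegisterValueType

    WIDTHS = {"movsb": 1, "movsw": 2, "movsd": 4, "movsq": 8,
              "stosb": 1, "stosw": 2, "stosd": 4, "stosq": 8}

    def handle_rep_goto(bv, insn, mark_initialized):
        # On a GOTO, check whether the raw instruction is a REP string
        # operation and, if so, log the region it writes.
        if insn.operation != LowLevelILOperation.LLIL_GOTO:
            return
        disasm = bv.get_disassembly(insn.address)
        if not disasm or not disasm.startswith("rep"):
            return
        dest = insn.get_reg_value("rdi")
        count = insn.get_reg_value("rcx")
        if (dest.type == RegisterValueType.StackFrameOffset
                and count.type == RegisterValueType.ConstantValue):
            width = WIDTHS.get(disasm.split()[1], 1)
            mark_initialized(dest.value, count.value * width)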

The LLIL has a get_possible_reg_values() API to read values of registers statically. The MLIL provides a couple of APIs, get_var_for_reg() and get_ssa_var_version(), to map architecture registers to SSA variables. This is very useful when propagating values manually using SSA variables in the absence of RegisterValueType information (i.e., RegisterValueType.UndeterminedValue). Similar APIs are currently missing in LLIL and tracked as a feature request: API to get SSARegister for a register at a given LLIL instruction.

Tracking inter-procedure memory stores with MLIL

At this point we can track memory store operations, CALL operations such as bzero(), memset(), and also deal with REP optimization. The next task is to track memory writes across function calls, as when a caller passes a memory address to a callee. The interesting challenge here is that once a stack pointer has been passed into another function, it can no longer be tracked using the register value type information (StackFrameOffset) as we did within the local function scope using LLIL (see above).

To solve this problem, we propagate the pointers within the callee function using MLIL SSA variables, just like propagating taint information. Whenever an MLIL_STORE_SSA instruction is encountered, we log the offset and size of the write, provided the destination of the memory write can be resolved manually from the values of SSA variables. The set_function_args() function shown below iterates through the MLIL variables and assigns the value (pointer) passed by the caller:
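
A minimal sketch of that seeding step, assuming the visitor keeps resolved values in an ssa_values dictionary keyed by SSAVariable:

    from binaryninja.mediumlevelil import SSAVariable

    def set_function_args(self, callee, args):
        # Seed SSA version 0 of each parameter variable of the callee
        # with the pointer value passed by the caller.
        for index, value in args.items():
            if value is None:
                continue  # the caller could not resolve this argument
            var = callee.parameter_vars[index]
            self.ssa_values[SSAVariable(var, 0)] = value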

Once the initial SSA variables are set, we visit all the instructions linearly to propagate the pointer and log memory writes. While doing this, the most common operation performed on the pointer is addition. Therefore, it is necessary to emulate the MLIL_ADD instruction to handle pointer arithmetic. Additionally, it is also important to emulate instructions such as MLIL_SUB, MLIL_LSR, and MLIL_AND to handle certain pointer-aligning operations introduced by optimizations. Here is an example of how these MLIL SSA expressions are resolved to log a memory store operation:
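
The relevant visitor methods might be sketched as follows (names follow the Emilator visitor convention; mark_initialized() is the same hypothetical helper as before):

    def visit_MLIL_STORE_SSA(self, insn):
        dest = self.visit(insn.dest)  # e.g. resolves rax_43#65 + 0x28
        if dest is not None:
            self.mark_initialized(dest, insn.size)

    def visit_MLIL_ADD(self, insn):
        left = self.visit(insn.left)
        right = self.visit(insn.right)
        if left is None or right is None:
            return None
        return left + right

    def visit_MLIL_VAR_SSA(self, insn):
        # Value seeded by set_function_args() or propagated earlier;
        # None means the pointer did not come from the caller.
        return self.ssa_values.get(insn.src)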

Treating the SSA variable rax_43#65 as a manually propagated pointer value, it is possible to resolve both the destination of the store operation and the size of the write. When the value of the SSA variable rax_43#65 is not available, the memory write is not associated with a pointer propagated by the caller and is therefore not logged.

Handling pointer-aligning optimizations

When performing inter-procedure analysis, further optimizations were noticed in addition to the REP optimization as seen in the “Handling x86 REP optimization” section above. A variable allocated on the stack will usually be aligned to meet the needs of upcoming operations. Let’s say a stack pointer is passed to memset() and the compiler inlines the call as a REP instruction. In this case, it is very likely the memory will be allocated at an aligned address such that the fastest instructions can be used during REP operation.

However, when a pointer is received as an argument by a callee or as the return value of an allocator function, the compiler may have to generate pointer- and size-alignment opcodes, which can rely on branching decisions before reaching the REP instruction. Here is an example of such an optimization commonly found in the NetBSD kernel used for analysis:

Figure 7 - An example memset() optimization from NetBSD

When such branching decisions are involved, the pointer as well as the size can take multiple possible values (from the perspective of static analysis) at the point of the REP instruction. This is different from what we observed in the “Handling x86 REP optimization” section, where there is only one possible value for the pointer and size. Our goal here is to find the actual values of the pointer and size in the absence of pointer-aligning computations. To achieve this, a couple of SSA expressions were identified that can be used to resolve the original values:

• Search for an expression involving (ADDRESS & BYTESIZE). This could be the first use of ADDRESS before taking any conditional branches.
• Search for an expression involving (SIZE >> 3). This is where the adjusted size is passed to a REP instruction.

I had a couple of ideas in mind to track back the above expressions from the point of the REP instruction, one relying entirely on SSA and the other based on dominators:

• Use get_ssa_var_definition() and get_ssa_var_uses() APIs to get a variable’s definition site and its uses.
• Alternatively, get the dominators of the basic block containing the REP instruction and visit the instructions in the dominator blocks.

The function resolve_optimization() shown below uses dominators to get the basic blocks to perform the search operation. Since the pointer is manually passed by the caller, the value is fetched from the SSA variables.
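
A compressed sketch of such a routine; the real logic must handle more operand shapes, but this shows the dominator walk and the two expression patterns:

    from binaryninja import MediumLevelILOperation, RegisterValueType

    def resolve_optimization(self, rep_block):
        # Search the dominator blocks of the MLIL SSA block containing
        # the REP for the pre-alignment pointer and size expressions.
        pointer, size = None, None
        for block in rep_block.dominators:
            for insn in block:
                if insn.operation != MediumLevelILOperation.MLIL_SET_VAR_SSA:
                    continue
                src = insn.src
                if src.operation == MediumLevelILOperation.MLIL_AND:
                    # (ADDRESS & BYTESIZE): first use of the unaligned pointer
                    pointer = self.visit(src.left)
                elif (src.operation == MediumLevelILOperation.MLIL_LSR
                        and src.right.operation == MediumLevelILOperation.MLIL_CONST
                        and src.right.constant == 3):
                    # (SIZE >> 3): byte count converted to a qword count for REP
                    vals = src.left.possible_values
                    if vals.type == RegisterValueType.InSetOfValues:
                        size = max(vals.values)
                    elif vals.type == RegisterValueType.ConstantValue:
                        size = vals.value
        return pointer, size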

In the case of a possible constant size value, we fetch the maximum from the list of available size values. Once both pointer and size values are available, we log the memory region as initialized.

Tracking memory stores in dynamic memory allocations

So far, all our analyses have concentrated on stack memory as the source buffer for information disclosure. This is largely due to the prevalence of stack memory disclosure bugs, as described in KLEAK: Practical Kernel Memory Disclosure Detection (PDF). What about other memory regions, such as the heap? Can we model some of the heap memory disclosures too?

When looking for heap memory disclosures, the idea remains the same. We are still looking for calls to sink functions with known size value. But instead of the source pointer being RegisterValueType.StackFrameOffset, we check for RegisterValueType.UndeterminedValue. Consider the code for sys_statfs():

Figure 8 - Dynamic memory allocation in sys_statfs()

Here the kernel pointer rdi_1#2 in copyout() is undetermined because Binary Ninja does not know what the allocator function returns. However, by using the SSA form, we can manually track back whether rdi_1#2 holds the return value of malloc(). For example, following the highlighted instructions in Figure 8, the variable assignments flow as rax_1#1 -> r15#1 -> rdi_1#2. This information can be obtained programmatically using the MLIL get_ssa_var_definition() API. Once the definition site of an SSA variable is obtained, we can check whether the variable is initialized by a CALL operation, as demonstrated below:
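
A sketch of that backwards walk, following simple SSA copies until a call site is reached:

    from binaryninja import MediumLevelILOperation

    def defined_by_allocator(bv, mlil_ssa, ssa_var, allocators):
        # Walk the SSA definition chain backwards; return the allocator
        # call instruction if the value originates from one.
        seen = set()
        while ssa_var is not None and ssa_var not in seen:
            seen.add(ssa_var)
            defn = mlil_ssa.get_ssa_var_definition(ssa_var)
            if defn is None:
                return None
            if defn.operation == MediumLevelILOperation.MLIL_CALL_SSA:
                dest = defn.dest
                if dest.operation == MediumLevelILOperation.MLIL_CONST_PTR:
                    sym = bv.get_symbol_at(dest.constant)
                    if sym is not None and sym.name in allocators:
                        return defn
                return None
            if (defn.operation == MediumLevelILOperation.MLIL_SET_VAR_SSA
                    and defn.src.operation == MediumLevelILOperation.MLIL_VAR_SSA):
                ssa_var = defn.src.src  # follow the copy, e.g. rdi_1#2 -> r15#1
                continue
            return None
        return None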

How does the analyzer know the definition of allocator functions? We can take the same approach used for providing information regarding static function hooks (see the “Static function hooks and memory writing APIs” section above). A JSON configuration with a list of allocator functions and an index of size parameters is provided to the analyzer. For any CALL instruction with a known destination (i.e., MLIL_CONST_PTR), the symbol is fetched to check for known allocator functions. Here is a sample JSON configuration used for analysis:
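
The function list and exact schema shown here are illustrative and will vary by target kernel:

    {
        "malloc":     { "size": 0 },
        "kmem_alloc": { "size": 0 }
    }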

Once we have established the connection between the source pointer and the allocator call, the next question is: what pointer value should be assigned as the return value of the allocator call? Stack pointers are tracked as negative offsets in Binary Ninja, as seen below:
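
For example, querying the kaddr argument at the sys_getcontext() call site from earlier yields something like the following (output is illustrative):

    >>> caller.get_parameter_at(call_addr, None, 0)
    <stack frame offset -0x398>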

To represent stack and heap pointers uniformly, I decided to set the return value of a heap allocator call to the negative of the allocation size. For the malloc() call in sys_statfs(), rax_1#1 is set to -0x1d8 as the starting address. Therefore, the memory region that needs to be initialized ranges from -0x1d8 to 0 [start + size of allocation]. Even when the allocation size is undetermined, the starting address can be set to some arbitrary value such as -0x10000. All that matters here is knowing whether the contiguous memory region accessed by copyout() is initialized.

Filtering memory stores using dominators and post dominators

A dominator in graph theory provides information on the order of execution of basic blocks. While we have already used dominators for handling pointer-aligning optimizations in the “Handling pointer-aligning optimizations” section above, this section details the use of dominators in detecting control-flow-sensitive memory store operations.

To analyze uninitialized memory disclosures, we explore two ideas: dominators and post-dominators. A basic block X is said to dominate another basic block Y if every path to Y goes through X. A basic block Y is said to post-dominate basic block X if every path from X to any of the function's return blocks goes through Y. Consider this example from Wikipedia:

Figure 9 - Graph demonstrating dominators and post dominators

In the provided graph, node B dominates nodes C, D, E, and F because all paths to these nodes must go through node B. By definition, every node dominates itself, so the set of all nodes dominated by node B will be B, C, D, E, and F. Also, node A dominates all the nodes in the graph. Therefore, the dominators of nodes C, D, E, F are A and B.

Similarly, when A is considered as the function entry node, with E and F being exit nodes, node B is the post-dominator of node A. This is because all paths from A to the exit nodes must go through B.

Now, how can dominators and post-dominators help us in this analysis?

We can perform dominator analysis on the callers of the sink function. The idea is to log only memory stores in basic blocks which dominate the basic block calling copyout(), that is, basic blocks which will be executed irrespective of branching decisions. Consider the code below:

Figure 10 - Dominators of basic block calling copyout()

Here the basic block calling copyout() is the highlighted MLIL block, and there are five dominator blocks on the path from the function entry to the copyout() call. When performing dominator-based analysis, we log only memory stores within these five dominator blocks. Memory store operations in other basic blocks might not execute at run time, so they are skipped. The same applies to callee functions: we perform an inter-procedure analysis only when the function is called from a dominator block.
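
Programmatically, the filter reduces to collecting the dominators of the block containing the sink call, roughly as follows:

    def blocks_to_analyze(mlil, call_insn):
        # The blocks guaranteed to execute before the copyout() call
        # are the dominators of the block containing it.
        call_block = next(b for b in mlil.basic_blocks
                          if b.start <= call_insn.instr_index < b.end)
        # A block dominates itself, so call_block is included.
        return set(call_block.dominators)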

Post-dominator analysis is done on the callee function during inter-procedure analysis. It is meant to find bugs where a callee can return before initializing the memory region it is supposed to initialize. Consider the callee function do_sys_waitid() from Figure 10.

Figure 11 - Post dominators of function entry block in do_sys_waitid()

The function entry block is always executed, and two other basic blocks (highlighted in Figure 11) are executed irrespective of the branching decisions. Once again, memory stores and callee analysis are limited to these three basic blocks.

Dominator- and post-dominator-based analysis tries to fill the gaps in the control-flow-insensitive analysis performed by the analyzer. The general assumption here is that memory is initialized or cleared before further operations are performed, and that this initialization therefore dominates the other basic blocks. However, this assumption is not always true. For example, there are cases where individual code paths perform the same operation as done in the dominators. Moreover, when a callee returns due to an error condition, the return value could be validated by the caller before calling copyout(). Consequently, dominator-based analysis as done in this implementation is prone to large numbers of false positives.

Checking for uninitialized memory disclosures

Once all the memory store operations are statically logged with information on the offset and size of each write, the memory region copied out to user space using copyout() can be evaluated for uninitialized memory disclosure. Consider the call to copyout() from sys_getcontext() seen earlier in Figure 2:

The source pointer is -0x398 and the size copied is 0x330 bytes. Therefore, the analyzer has to validate that all the bytes in the memory range from -0x398 to (-0x398 + 0x330) are initialized and, if not, flag the call as a bug.
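
Assuming the store-tracking passes logged every written byte as an offset in a set, the final check is a simple range walk:

    def find_uninitialized(kaddr, size, written):
        # kaddr: source offset (e.g. -0x398); size: bytes copied out
        # (e.g. 0x330); written: set of byte offsets logged as stored.
        return [off for off in range(kaddr, kaddr + size)
                if off not in written]

    # logged_stores: the set built during the analysis passes.
    # Any offsets returned are copied out without initialization.
    leaked = find_uninitialized(-0x398, 0x330, logged_stores)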

False positives and limitations

The analyzer is written with the goal of finding memory regions that never get written to in any possible code paths. False positives occur in cases when it is unable to track a memory store operation. Below are some common false positive conditions and limitations of the implementation:

• The analyzer does not emulate branching instructions. Therefore, false positives are seen in code constructs involving control flow decisions. Consider a memory region such as an array that is initialized in a loop operation. In this case, the store operation would be detected only once because the loop body is visited only once by the analyzer, and not in a loop as it would be during execution.

• Indirect calls are not resolved statically. Consequently, any memory store done during indirect calls is not tracked.

• Optimizations may make it harder to track memory stores. Some common optimizations noticed were tackled in the “Handling x86 REP optimization” and “Handling pointer-aligning optimizations” sections.

• Binary Ninja may incorrectly detect the type information of functions used for static hooking or of sink functions like copyout(). Since our analysis relies on RegisterValueType information, any failure to accurately detect the function prototype may lead to false results. Verify the type information before analysis and update it if necessary.

• The analyzer looks only for code patterns where the memory source and sink function are within the same function. There is no tracking back of memory source beyond the local function scope.

• Dominator analysis is experimental. You should use it only as a guideline to perform code reviews.

When there is access to source code, some of these false positives can be resolved by changing the optimization flags or by unrolling loops to reduce branching decisions.

Analysis and results

The target kernel executable is loaded in Binary Ninja to generate the BNDB analysis database. The analyzer is then executed against the database for faster analysis. There are a couple of scripts: one for analyzing stack memory disclosures and another for analyzing sink functions with a known size and an unknown source pointer. Since the source pointer could come from a heap allocator, a JSON configuration with a list of allocator functions is provided as an argument. The dominator analysis is experimental and is enabled with an optional argument when needed:
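
Invocation looks roughly like this (script and option names are illustrative):

    $ python stack_disclosure.py kernel.bndb --hooks hooks.json
    $ python unknown_source.py kernel.bndb --hooks hooks.json \
          --allocators allocators.json --dominator-analysis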

Conclusion

The scripts were tested on Binary Ninja version 2.4.2846 against the FreeBSD 11.4, NetBSD 9.2, and OpenBSD 6.9 kernels. Among the results, code paths that are potentially reachable by an unprivileged user were evaluated. The OpenBSD bugs were found in sysctls related to multicast routing in IPv4 as well as IPv6, which are tracked as ZDI-22-073 and ZDI-22-012 respectively.

The four vulnerabilities found in NetBSD (ZDI-22-075, ZDI-22-1036, ZDI-22-1037, ZDI-22-1067) are related to syscalls supporting backward compatibility for older NetBSD releases. ZDI-22-075 and ZDI-22-1036 are information disclosures in VFS syscalls for NetBSD 3.0 and NetBSD 5.0 compatibility, respectively. Details regarding the fixes can be found here. Next, ZDI-22-1037 is an information disclosure in the getkerninfo syscall provided for 4.3BSD compatibility. This bug was fixed along with many other potential issues, as seen here. Finally, ZDI-22-1067 is another information disclosure related to VFS syscalls, this time in NetBSD 2.0 compatibility. Details regarding the fix can be found here.

The FreeBSD bug found in version 11.4 was also related to compatibility, in this case support for 32-bit binaries. However, this bug was fixed without a disclosure during a large change made for the 64-bit inode project: the uninitialized structure fields were cleared in the copy_stat function. Though this commit landed in May 2017, it was tagged for release 12.0 and above. Therefore, the bug remained unfixed in release 11.4 until that release reached EOL in September 2021, soon after our bug report.

