Introduction: In the previous post we described how we run iOS on QEMU and boot it into an interactive bash shell. In this post we go into the details of some of the specific research tasks that were required to get there.
Our work builds on that earlier project, which booted a slightly different iOS kernel version for a different iPhone with no secure monitor, patched the kernel at runtime to make it boot, and ran a pre-existing ramdisk image and its launchd services with no interactive I/O. In this post we will cover:
1. How we insert our code into the QEMU project as a new device type.
2. How we boot the kernel without patching it, neither at runtime nor in advance.
3. How we load and execute a secure monitor image in EL3.
4. How we add a new static trust cache so that self-signed executables can run.
5. How we add a new launchd item to execute an interactive shell instead of the existing services on the ramdisk.
6. How we set up full serial I/O.
The project is now available at qemu-aleph-git, together with the required scripts at qemu-scripts-aleph-git.
The QEMU Code
To make it easier to later rebase the code onto newer versions of QEMU, and to add support for more iDevices and iOS versions, we moved all of our QEMU code changes into a new module: hw/arm/n66_iphone6splus.c. It is the main module for the iPhone 6s Plus (n66ap) iDevice in QEMU, and it is responsible for the following (a minimal sketch of how such a machine type is registered appears after the list):
1. Defining the new device type.
2. Defining the memory layout: the UART memory-mapped I/O, the loaded kernel, the secure monitor, the boot arguments, the device tree and the trust cache, at the different exception levels.
3. Defining the iDevice's proprietary registers (they currently do nothing and just behave as general-purpose registers).
4. Defining the device's features and properties, such as EL3 support and starting execution at the secure monitor entry point.
5. Wiring the built-in timer interrupt to the FIQ.
6. Getting the command-line arguments that specify the following files: kernel image, secure monitor image, device tree, ramdisk, static trust cache, and kernel boot arguments.
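For readers who have not added a machine type to QEMU before, the sketch below shows the minimal registration boilerplate such a module is built around. This is an illustrative skeleton, not the project's actual code; the type name, description and empty init callback are placeholders:

#include "qemu/osdep.h"
#include "hw/boards.h"

// called when the machine is instantiated: map RAM and MMIO regions,
// load the images and wire up interrupts here
static void n66_machine_init(MachineState *machine)
{
}

static void n66_machine_class_init(ObjectClass *oc, void *data)
{
    MachineClass *mc = MACHINE_CLASS(oc);
    mc->desc = "iPhone 6s Plus (n66ap)"; // placeholder description
    mc->init = n66_machine_init;
    mc->max_cpus = 1;
}

static const TypeInfo n66_machine_info = {
    .name       = MACHINE_TYPE_NAME("n66-iphone6splus"), // placeholder name
    .parent     = TYPE_MACHINE,
    .class_init = n66_machine_class_init,
};

static void n66_machine_register_types(void)
{
    type_register_static(&n66_machine_info);
}

type_init(n66_machine_register_types)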
The other main module is hw/arm/xnu.c, which is responsible for the following (a minimal loading sketch appears after the list):
1. Loading the device tree into memory, and adding the ramdisk and static trust cache addresses to it so the kernel can actually find them.
2. Loading the ramdisk into memory.
3. Loading the static trust cache into memory.
4. Loading the kernel image into memory.
5. Loading the secure monitor image into memory.
6. Loading and setting up the boot arguments for both the kernel and the secure monitor.
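Most of these responsibilities reduce to reading a file from the host and placing its contents at a guest physical address. Here is a minimal sketch built from the same QEMU helpers (g_file_get_contents() and rom_add_blob_fixed_as()) that appear in the loader code quoted later in this post; the wrapper itself is ours:

#include "qemu/osdep.h"
#include "hw/loader.h"

// read a host file and register it as a ROM blob that QEMU will copy
// into guest memory at physical address pa; returns the size loaded
static gsize xnu_load_file_at(const char *filename, const char *rom_name,
                              hwaddr pa, AddressSpace *as)
{
    uint8_t *data = NULL;
    gsize len = 0;

    if (!g_file_get_contents(filename, (char **)&data, &len, NULL)) {
        return 0;
    }
    rom_add_blob_fixed_as(rom_name, data, len, pa, as);
    g_free(data);
    return len;
}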
Booting the Kernel Without Patches
Based on the earlier project, we were already able to boot different iOS versions for different iPhones into user mode while patching the kernel at runtime with the kernel debugger. The patches were needed because, after changing the device tree so it boots from the ramdisk, we got stuck in a function that never returns, waiting for an event that never happens. By placing breakpoints in the kernel debugger and stepping through, we found that the non-returning function is IOSecureBSDRoot(), which can be found in the XNU source code Apple released for xnu-4903.221.2:
And this is what happens at runtime when debugging the kernel itself:
The function does not return because the call to pe->callPlatformFunction() does not return. We had no reference code for that function, so we disassembled the kernel:
Examining this function, we saw that it does a lot of processing on specific members of the object in x19, and that the flow changes based on those members. We tried a few approaches to understand what these members represent, without success. The members did seem to sit at distinctive offsets, so after a while we used Ghidra to search the entire kernel for functions that use an object with members at offsets 0x10a, 0x10c and 0x110, and we got lucky! We found this function:
In this function it is easy to see that when the property secure-root-prefix is not in the device tree, the member at offset 0x110 stays 0, the original function (pe->callPlatformFunction()) returns, and there is no need to patch the kernel.
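To make the flow explicit, here is a rough pseudo-C reconstruction of the behavior we inferred; every name in it is invented, and only the secure-root-prefix property and the member at offset 0x110 come from the disassembly:

// hypothetical reconstruction of the disassembled flow - not real XNU code
static void secure_bsd_root_flow(struct io_obj *obj /* the object in x19 */)
{
    // when "secure-root-prefix" is absent from the device tree, the member
    // at offset 0x110 is never set and stays 0...
    if (obj->member_0x110 == 0) {
        return; // ...so pe->callPlatformFunction() returns and boot continues
    }
    // otherwise the code blocks here, waiting for a secure-root event that
    // never arrives under emulation
    wait_for_event_forever(obj);
}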
Loading the Secure Monitor Image
By this point we were able to boot an iPhone X image into user mode. That image boots directly into EL1 and has no secure monitor, so we decided to switch to an iPhone 6s Plus image, because Apple left many symbols in it and we figured that would make the research easier. It turns out that devices with KPP (Kernel Patch Protection) but without KTRR (Kernel Text Readonly Region) have a secure monitor image that has to be loaded with its own boot arguments and executed in EL3. This part of the project involved finding the secure monitor image embedded in the kernel file, loading it, understanding its boot-argument structure, and configuring QEMU to start execution at its entry point in EL3. After completing these steps it still did not work. The reason appeared to be that the secure monitor image tries to parse the kernel's mach-o header at the kernel base address (read from the kernel boot arguments), and we had no kernel image at that base address. This all happens in the following function:
We believe this function is responsible for the KPP functionality, and we assume it saves the mappings of the kernel sections together with the permissions each section should have, but this assumption still needs to be verified.
As can be seen in the code from the original project, the virt_base argument points to the lowest segment of the loaded kernel:
static uint64_t arm_load_macho(struct arm_boot_info *info, uint64_t *pentry,
                               AddressSpace *as)
{
    hwaddr kernel_load_offset = 0x00000000;
    hwaddr mem_base = info->loader_start;

    uint8_t *data = NULL;
    gsize len;
    bool ret = false;
    uint8_t* rom_buf = NULL;

    if (!g_file_get_contents(info->kernel_filename, (char**) &data, &len, NULL)) {
        goto out;
    }

    struct mach_header_64* mh = (struct mach_header_64*)data;
    struct load_command* cmd = (struct load_command*)(data + sizeof(struct mach_header_64));

    // iterate through all the segments once to find highest and lowest addresses
    uint64_t pc = 0;
    uint64_t low_addr_temp;
    uint64_t high_addr_temp;
    macho_highest_lowest(mh, &low_addr_temp, &high_addr_temp);

    uint64_t rom_buf_size = high_addr_temp - low_addr_temp;
    rom_buf = g_malloc0(rom_buf_size);

    for (unsigned int index = 0; index < mh->ncmds; index++) {
        switch (cmd->cmd) {
        case LC_SEGMENT_64: {
            struct segment_command_64* segCmd = (struct segment_command_64*)cmd;
            memcpy(rom_buf + (segCmd->vmaddr - low_addr_temp),
                   data + segCmd->fileoff, segCmd->filesize);
            break;
        }
        case LC_UNIXTHREAD: {
            // grab just the entry point PC
            uint64_t* ptrPc = (uint64_t*)((char*)cmd + 0x110); // for arm64 only.
            pc = VAtoPA(*ptrPc);
            break;
        }
        }
        cmd = (struct load_command*)((char*)cmd + cmd->cmdsize);
    }

    hwaddr rom_base = VAtoPA(low_addr_temp);
    rom_add_blob_fixed_as("macho", rom_buf, rom_buf_size, rom_base, as);
    ret = true;

    uint64_t load_extra_offset = high_addr_temp;

    uint64_t ramdisk_address = load_extra_offset;
    gsize ramdisk_size = 0;

    // load ramdisk if exists
    if (info->initrd_filename) {
        uint8_t* ramdisk_data = NULL;
        if (g_file_get_contents(info->initrd_filename, (char**) &ramdisk_data,
                                &ramdisk_size, NULL)) {
            info->initrd_filename = NULL;
            rom_add_blob_fixed_as("xnu_ramdisk", ramdisk_data, ramdisk_size,
                                  VAtoPA(ramdisk_address), as);
            load_extra_offset = (load_extra_offset + ramdisk_size + 0xffffull) & ~0xffffull;
            g_free(ramdisk_data);
        } else {
            fprintf(stderr, "ramdisk failed?!\n");
            abort();
        }
    }

    uint64_t dtb_address = load_extra_offset;
    gsize dtb_size = 0;

    // load device tree
    if (info->dtb_filename) {
        uint8_t* dtb_data = NULL;
        if (g_file_get_contents(info->dtb_filename, (char**) &dtb_data,
                                &dtb_size, NULL)) {
            info->dtb_filename = NULL;
            if (ramdisk_size != 0) {
                macho_add_ramdisk_to_dtb(dtb_data, dtb_size,
                                         VAtoPA(ramdisk_address), ramdisk_size);
            }
            rom_add_blob_fixed_as("xnu_dtb", dtb_data, dtb_size,
                                  VAtoPA(dtb_address), as);
            load_extra_offset = (load_extra_offset + dtb_size + 0xffffull) & ~0xffffull;
            g_free(dtb_data);
        } else {
            fprintf(stderr, "dtb failed?!\n");
            abort();
        }
    }

    // fixup boot args
    // note: device tree and args must follow kernel and be included in the
    // kernel data size. macho_setup_bootargs takes care of adding the size
    // for the args
    // osfmk/arm64/arm_vm_init.c:arm_vm_prot_init
    uint64_t bootargs_addr = VAtoPA(load_extra_offset);
    uint64_t phys_base = (mem_base + kernel_load_offset);
    uint64_t virt_base = low_addr_temp & ~0x3fffffffull;

    macho_setup_bootargs(info, as, bootargs_addr, virt_base, phys_base,
                         VAtoPA(load_extra_offset), dtb_address, dtb_size);

    // write bootloader
    uint32_t fixupcontext[FIXUP_MAX];
    fixupcontext[FIXUP_ARGPTR] = bootargs_addr;
    fixupcontext[FIXUP_ENTRYPOINT] = pc;
    write_bootloader("bootloader", info->loader_start, bootloader_aarch64,
                     fixupcontext, as);
    *pentry = info->loader_start;

out:
    if (data) {
        g_free(data);
    }
    if (rom_buf) {
        g_free(rom_buf);
    }
    return ret ? high_addr_temp - low_addr_temp : -1;
}
In our case, that segment is mapped below the address of the loaded mach-o header. This means virt_base does not point to the kernel's mach-o header, so the secure monitor code mentioned above cannot work with it. One way we tried to get around this was to set virt_base to the address of the mach-o header, but that left some kernel driver code loaded below virt_base, which broke many things, such as the following function:
vm_offset_t ml_static_vtop(vm_offset_t va)
{
    for (size_t i = 0; (i < PTOV_TABLE_SIZE) && (ptov_table[i].len != 0); i++) {
        if ((va >= ptov_table[i].va) && (va < (ptov_table[i].va + ptov_table[i].len)))
            return (va - ptov_table[i].va + ptov_table[i].pa);
    }
    if (((vm_address_t)(va) - gVirtBase) >= gPhysSize)
        panic("ml_static_vtop(): illegal VA: %p\n", (void*)va);
    return ((vm_address_t)(va) - gVirtBase + gPhysBase);
}
Another approach we tried was to skip the secure monitor's execution entirely and start directly at the kernel entry point in EL1. That broke as soon as we hit the first SMC instruction. It could probably be solved by patching the kernel wherever SMC is used, but we did not want to do that. In the end we set virt_base to a lower address, below the lowest loaded segment, and placed another copy of the entire raw kernelcache file at that location. This solution puts virt_base below all the virtual addresses the kernel actually uses, makes it point at a kernel mach-o header, and still loads the kernel segment by segment at the preferred addresses where it executes.
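To illustrate the final layout, here is a sketch of the address computation under the approach just described; the constants and helper names are illustrative rather than taken from the project:

// sketch: place a raw copy of the whole kernelcache file below the lowest
// loaded segment and let virt_base point at that copy's mach-o header
uint64_t low_addr, high_addr;
macho_highest_lowest(mh, &low_addr, &high_addr);

uint64_t raw_copy_base = (low_addr - raw_kernelcache_len) & ~0xfffull;
uint64_t virt_base = raw_copy_base; // below every VA the kernel uses

rom_add_blob_fixed_as("raw_kc", raw_kernelcache, raw_kernelcache_len,
                      VAtoPA(raw_copy_base), as);
// the kernel segments themselves are still loaded at their preferred
// addresses, exactly as in arm_load_macho() above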
The Trust Cache
In this section we describe the work we did to load self-signed, non-Apple executables. iOS normally executes only trusted executables: ones that are in a trust cache, or signed by Apple or by an installed provisioning profile. More material on this subject can be found at http://www.newosxbook.com/articles/CodeSigning.pdf. Generally speaking, there are three types of trust caches:
1. A trust cache hardcoded in the kernelcache.
2. A trust cache that can be loaded from a file at runtime.
3. A trust cache in memory, pointed to by the device tree.
In this post we mainly deal with the third type. The following function contains the top-level logic that checks whether an executable has a code signature approved for execution, whether based on a trust cache or otherwise:
Digging deeper, we eventually reach this function, which checks the static trust cache:
Using XREFs, we can see where the value it relies on is set:
The function above parses the raw trust cache format. Following the code and its error messages, the trust cache format works out to be:
// pseudo-C describing the on-disk layout (a real C struct cannot have two
// flexible array members)
struct cdhash {
    uint8_t hash[20];   //first 20 bytes of the cdhash
    uint8_t hash_type;  //left as 0
    uint8_t hash_flags; //left as 0
};

struct static_trust_cache_entry {
    uint64_t trust_cache_version; //should be 1
    uint64_t unknown1;            //left as 0
    uint64_t unknown2;            //left as 0
    uint64_t unknown3;            //left as 0
    uint64_t unknown4;            //left as 0
    uint64_t number_of_cdhashes;
    struct cdhash cdhashes[];
};

struct static_trust_cache_buffer {
    uint64_t number_of_trust_caches_in_buffer;
    uint64_t offsets_to_trust_caches_from_beginning_of_buffer[];
    struct static_trust_cache_entry entries[];
};
It also seems that even though the structure supports multiple trust caches in one buffer, the code actually limits the count to 1.
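Given this format, building a buffer with a single trust cache is mostly bookkeeping. Below is a minimal builder sketch of ours using the structs above; it assumes out is large enough and that the cdhashes were extracted elsewhere:

#include <stdint.h>
#include <string.h>

// serialize one static trust cache into out; returns the total size
static size_t build_static_trust_cache(uint8_t *out,
                                       const struct cdhash *hashes,
                                       uint64_t count)
{
    uint64_t *hdr = (uint64_t *)out;
    hdr[0] = 1;                    // number_of_trust_caches_in_buffer (the kernel allows only 1)
    hdr[1] = 2 * sizeof(uint64_t); // offset of the single trust cache

    struct static_trust_cache_entry *tc =
        (struct static_trust_cache_entry *)(out + hdr[1]);
    memset(tc, 0, sizeof(*tc));    // unknown1..4 left as 0
    tc->trust_cache_version = 1;
    tc->number_of_cdhashes = count;
    memcpy(tc->cdhashes, hashes, count * sizeof(struct cdhash));

    return hdr[1] + sizeof(*tc) + count * sizeof(struct cdhash);
}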
XREFing from this function, we get to the following code:
So presumably, this data is read from the device tree.
Now all that was left was to load the trust cache into memory and point the device tree at it. We had to decide where to put it in memory. There is a kernel boot argument for the top of the kernel data, which points to the address right after the kernel, the ramdisk, the device tree and the boot arguments. The first spots we tried were near the top of the kernel data address (both right before and right after it). That did not go well, and the reason is in the code below, which is the source that matches the assembly above:
void arm_vm_prot_init(boot_args * args)
{
    segLOWESTTEXT = UINT64_MAX;
    if (segSizePRELINKTEXT && (segPRELINKTEXTB < segLOWESTTEXT))
        segLOWESTTEXT = segPRELINKTEXTB;
    assert(segSizeTEXT);
    if (segTEXTB < segLOWESTTEXT)
        segLOWESTTEXT = segTEXTB;
    assert(segLOWESTTEXT < UINT64_MAX);

    segEXTRADATA = segLOWESTTEXT;
    segSizeEXTRADATA = 0;

    DTEntry memory_map;
    MemoryMapFileInfo *trustCacheRange;
    unsigned int trustCacheRangeSize;
    int err;

    err = DTLookupEntry(NULL, "chosen/memory-map", &memory_map);
    assert(err == kSuccess);

    err = DTGetProperty(memory_map, "TrustCache", (void**)&trustCacheRange,
                        &trustCacheRangeSize);
    if (err == kSuccess) {
        assert(trustCacheRangeSize == sizeof(MemoryMapFileInfo));

        segEXTRADATA = phystokv(trustCacheRange->paddr);
        segSizeEXTRADATA = trustCacheRange->length;

        arm_vm_page_granular_RNX(segEXTRADATA, segSizeEXTRADATA, FALSE);
    }

    /* Map coalesced kext TEXT segment RWNX for now */
    arm_vm_page_granular_RWNX(segPRELINKTEXTB, segSizePRELINKTEXT, FALSE); // Refined in OSKext::readPrelinkedExtensions

    /* Map coalesced kext DATA_CONST segment RWNX (could be empty) */
    arm_vm_page_granular_RWNX(segPLKDATACONSTB, segSizePLKDATACONST, FALSE); // Refined in OSKext::readPrelinkedExtensions

    /* Map coalesced kext TEXT_EXEC segment RWX (could be empty) */
    arm_vm_page_granular_ROX(segPLKTEXTEXECB, segSizePLKTEXTEXEC, FALSE); // Refined in OSKext::readPrelinkedExtensions

    /* if new segments not present, set space between PRELINK_TEXT and xnu TEXT to RWNX
     * otherwise we no longer expect any space between the coalesced kext read only segments and xnu rosegments
     */
    if (!segSizePLKDATACONST && !segSizePLKTEXTEXEC) {
        if (segSizePRELINKTEXT)
            arm_vm_page_granular_RWNX(segPRELINKTEXTB + segSizePRELINKTEXT,
                                      segTEXTB - (segPRELINKTEXTB + segSizePRELINKTEXT), FALSE);
    } else {
        /*
         * If we have the new segments, we should still protect the gap between kext
         * read-only pages and kernel read-only pages, in the event that this gap
         * exists.
         */
        if ((segPLKDATACONSTB + segSizePLKDATACONST) < segTEXTB) {
            arm_vm_page_granular_RWNX(segPLKDATACONSTB + segSizePLKDATACONST,
                                      segTEXTB - (segPLKDATACONSTB + segSizePLKDATACONST), FALSE);
        }
    }

    /*
     * Protection on kernel text is loose here to allow shenanigans early on. These
     * protections are tightened in arm_vm_prot_finalize(). This is necessary because
     * we currently patch LowResetVectorBase in cpu.c.
     *
     * TEXT segment contains mach headers and other non-executable data. This will become RONX later.
     */
    arm_vm_page_granular_RNX(segTEXTB, segSizeTEXT, FALSE);

    /* Can DATACONST start out and stay RNX?
     * NO, stuff in this segment gets modified during startup (viz. mac_policy_init()/mac_policy_list)
     * Make RNX in prot_finalize
     */
    arm_vm_page_granular_RWNX(segDATACONSTB, segSizeDATACONST, FALSE);

    /* TEXTEXEC contains read only executable code: becomes ROX in prot_finalize */
    arm_vm_page_granular_RWX(segTEXTEXECB, segSizeTEXTEXEC, FALSE);

    /* DATA segment will remain RWNX */
    arm_vm_page_granular_RWNX(segDATAB, segSizeDATA, FALSE);

    arm_vm_page_granular_RWNX(segBOOTDATAB, segSizeBOOTDATA, TRUE);
    arm_vm_page_granular_RNX((vm_offset_t)&intstack_low_guard, PAGE_MAX_SIZE, TRUE);
    arm_vm_page_granular_RNX((vm_offset_t)&intstack_high_guard, PAGE_MAX_SIZE, TRUE);
    arm_vm_page_granular_RNX((vm_offset_t)&excepstack_high_guard, PAGE_MAX_SIZE, TRUE);

    arm_vm_page_granular_ROX(segKLDB, segSizeKLD, FALSE);
    arm_vm_page_granular_RWNX(segLINKB, segSizeLINK, FALSE);
    arm_vm_page_granular_RWNX(segPLKLINKEDITB, segSizePLKLINKEDIT, FALSE); // Coalesced kext LINKEDIT segment
    arm_vm_page_granular_ROX(segLASTB, segSizeLAST, FALSE); // __LAST may be empty, but we cannot assume this

    arm_vm_page_granular_RWNX(segPRELINKDATAB, segSizePRELINKDATA, FALSE); // Prelink __DATA for kexts (RW data)

    if (segSizePLKLLVMCOV > 0)
        arm_vm_page_granular_RWNX(segPLKLLVMCOVB, segSizePLKLLVMCOV, FALSE); // LLVM code coverage data

    arm_vm_page_granular_RWNX(segPRELINKINFOB, segSizePRELINKINFO, FALSE); /* PreLinkInfoDictionary */

    arm_vm_page_granular_RNX(phystokv(args->topOfKernelData), BOOTSTRAP_TABLE_SIZE, FALSE); // Boot page tables; they should not be mutable.
}
From this we can see that when a static trust cache is present, segEXTRADATA is set to the trust cache buffer instead of to segLOWESTTEXT.
And in the following two functions we can see that if there is meaningful data between gVirtBase and segEXTRADATA, scary things happen to it:
static void arm_vm_physmap_init(boot_args *args, vm_map_address_t physmap_base,
                                vm_map_address_t dynamic_memory_begin __unused)
{
    ptov_table_entry temp_ptov_table[PTOV_TABLE_SIZE];
    bzero(temp_ptov_table, sizeof(temp_ptov_table));

    // Will be handed back to VM layer through ml_static_mfree() in arm_vm_prot_finalize()
    arm_vm_physmap_slide(temp_ptov_table, physmap_base, gVirtBase,
                         segEXTRADATA - gVirtBase, AP_RWNA, FALSE);

    arm_vm_page_granular_RWNX(end_kern, phystokv(args->topOfKernelData) - end_kern, FALSE); /* Device Tree, RAM Disk (if present), bootArgs */

    arm_vm_physmap_slide(temp_ptov_table, physmap_base,
                         (args->topOfKernelData + BOOTSTRAP_TABLE_SIZE - gPhysBase + gVirtBase),
                         real_avail_end - (args->topOfKernelData + BOOTSTRAP_TABLE_SIZE),
                         AP_RWNA, FALSE); // rest of physmem

    assert((temp_ptov_table[ptov_index - 1].va + temp_ptov_table[ptov_index - 1].len) <= dynamic_memory_begin);

    // Sort in descending order of segment length. LUT traversal is linear, so largest (most likely used)
    // segments should be placed earliest in the table to optimize lookup performance.
    qsort(temp_ptov_table, PTOV_TABLE_SIZE, sizeof(temp_ptov_table[0]), cmp_ptov_entries);

    memcpy(ptov_table, temp_ptov_table, sizeof(ptov_table));
}

void arm_vm_prot_finalize(boot_args * args __unused)
{
    /*
     * At this point, we are far enough along in the boot process that it will be
     * safe to free up all of the memory preceeding the kernel. It may in fact
     * be safe to do this earlier.
     *
     * This keeps the memory in the V-to-P mapping, but advertises it to the VM
     * as usable.
     */

    /*
     * if old style PRELINK segment exists, free memory before it, and after it before XNU text
     * otherwise we're dealing with a new style kernel cache, so we should just free the
     * memory before PRELINK_TEXT segment, since the rest of the KEXT read only data segments
     * should be immediately followed by XNU's TEXT segment
     */

    ml_static_mfree(phystokv(gPhysBase), segEXTRADATA - gVirtBase);

    /*
     * KTRR support means we will be mucking with these pages and trying to
     * protect them; we cannot free the pages to the VM if we do this.
     */
    if (!segSizePLKDATACONST && !segSizePLKTEXTEXEC && segSizePRELINKTEXT) {
        /* If new segments not present, PRELINK_TEXT is not dynamically sized, free DRAM between it and xnu TEXT */
        ml_static_mfree(segPRELINKTEXTB + segSizePRELINKTEXT,
                        segTEXTB - (segPRELINKTEXTB + segSizePRELINKTEXT));
    }

    /*
     * LowResetVectorBase patching should be done by now, so tighten executable
     * protections.
     */
    arm_vm_page_granular_ROX(segTEXTEXECB, segSizeTEXTEXEC, FALSE);

    /* tighten permissions on kext read only data and code */
    if (segSizePLKDATACONST && segSizePLKTEXTEXEC) {
        arm_vm_page_granular_RNX(segPRELINKTEXTB, segSizePRELINKTEXT, FALSE);
        arm_vm_page_granular_ROX(segPLKTEXTEXECB, segSizePLKTEXTEXEC, FALSE);
        arm_vm_page_granular_RNX(segPLKDATACONSTB, segSizePLKDATACONST, FALSE);
    }

    cpu_stack_alloc(&BootCpuData);
    arm64_replace_bootstack(&BootCpuData);
    ml_static_mfree(phystokv(segBOOTDATAB - gVirtBase + gPhysBase), segSizeBOOTDATA);

#if __ARM_KERNEL_PROTECT__
    arm_vm_populate_kernel_el0_mappings();
#endif /* __ARM_KERNEL_PROTECT__ */

#if defined(KERNEL_INTEGRITY_KTRR)
    /*
     * __LAST,__pinst should no longer be executable.
     */
    arm_vm_page_granular_RNX(segLASTB, segSizeLAST, FALSE);

    /*
     * Must wait until all other region permissions are set before locking down DATA_CONST
     * as the kernel static page tables live in DATA_CONST on KTRR enabled systems
     * and will become immutable.
     */
#endif

    arm_vm_page_granular_RNX(segDATACONSTB, segSizeDATACONST, FALSE);

#ifndef __ARM_L1_PTW__
    FlushPoC_Dcache();
#endif
    __builtin_arm_dsb(DSB_ISH);
    flush_mmu_tlb();
}
Based on these observations, we decided to put the trust cache buffer right after the raw kernel file that we place at virt_base. Of course, this still did not work. Going over the code that sets up the page tables, we found the place that unmaps this memory region from the tables, and eventually learned that a few pages past the end of the raw kernel file get unmapped from memory at some point. Looking at the code:
You can read that function and see that it implements a binary search, so after sorting the hashes in our buffer, everything finally worked.
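In other words, the cdhash entries in the buffer must be sorted, or the kernel's binary search will miss them. A small sketch of how one might sort them before serializing, using the struct cdhash layout above:

#include <stdlib.h>
#include <string.h>

// compare two entries by their 20-byte cdhash prefix
static int cmp_cdhash(const void *a, const void *b)
{
    return memcmp(a, b, 20);
}

static void sort_cdhashes(struct cdhash *hashes, size_t count)
{
    qsort(hashes, count, sizeof(struct cdhash), cmp_cdhash);
}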
A Bash launchd Item
At this point we were able to execute our own self-signed, non-Apple executables, so we wanted launchd to execute bash instead of the services that exist on the ramdisk. To do that, we removed all the files in /System/Library/LaunchDaemons/ and added a single new file, com.apple.bash.plist, with the following contents:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>EnablePressuredExit</key>
    <false/>
    <key>Label</key>
    <string>com.apple.bash</string>
    <key>POSIXSpawnType</key>
    <string>Interactive</string>
    <key>ProgramArguments</key>
    <array>
        <string>/iosbinpack64/bin/bash</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>StandardErrorPath</key>
    <string>/dev/console</string>
    <key>StandardInPath</key>
    <string>/dev/console</string>
    <key>StandardOutPath</key>
    <string>/dev/console</string>
    <key>Umask</key>
    <integer>0</integer>
    <key>UserName</key>
    <string>root</string>
</dict>
</plist>
This made launchd try to execute bash, but the attempt failed because the ramdisk has no dyld shared cache. To fix that, we copied the dyld cache from the full disk image to the ramdisk (after resizing the ramdisk so it has enough room). The attempt failed again, and the problem still appeared to be the same missing libraries, even with the dyld cache in place. To debug it we needed a better understanding of where the failure happens. Eventually we found that loading the cache happens inside dyld:
static void mapSharedCache()
{
    uint64_t cacheBaseAddress = 0;
    // quick check if a cache is already mapped into shared region
    if ( _shared_region_check_np(&cacheBaseAddress) == 0 ) {
        sSharedCache = (dyld_cache_header*)cacheBaseAddress;
        // if we don't understand the currently mapped shared cache, then ignore
#if __x86_64__
        const char* magic = (sHaswell ? ARCH_CACHE_MAGIC_H : ARCH_CACHE_MAGIC);
#else
        const char* magic = ARCH_CACHE_MAGIC;
#endif
        if ( strcmp(sSharedCache->magic, magic) != 0 ) {
            sSharedCache = NULL;
            if ( gLinkContext.verboseMapping ) {
                dyld::log("dyld: existing shared cached in memory is not compatible\n");
                return;
            }
        }
        // check if cache file is slidable
        const dyld_cache_header* header = sSharedCache;
        if ( (header->mappingOffset >= 0x48) && (header->slideInfoSize != 0) ) {
            // solve for slide by comparing loaded address to address of first region
            const uint8_t* loadedAddress = (uint8_t*)sSharedCache;
            const dyld_cache_mapping_info* const mappings = (dyld_cache_mapping_info*)(loadedAddress+header->mappingOffset);
            const uint8_t* preferedLoadAddress = (uint8_t*)(long)(mappings[0].address);
            sSharedCacheSlide = loadedAddress - preferedLoadAddress;
            dyld::gProcessInfo->sharedCacheSlide = sSharedCacheSlide;
            //dyld::log("sSharedCacheSlide=0x%08lX, loadedAddress=%p, preferedLoadAddress=%p\n", sSharedCacheSlide, loadedAddress, preferedLoadAddress);
        }
        // if cache has a uuid, copy it
        if ( header->mappingOffset >= 0x68 ) {
            memcpy(dyld::gProcessInfo->sharedCacheUUID, header->uuid, 16);
        }
        // verbose logging
        if ( gLinkContext.verboseMapping ) {
            dyld::log("dyld: re-using existing shared cache mapping\n");
        }
    }
    else {
#if __i386__ || __x86_64__
        // <rdar://problem/5925940> Safe Boot should disable dyld shared cache
        // if we are in safe-boot mode and the cache was not made during this boot cycle,
        // delete the cache file
        uint32_t safeBootValue = 0;
        size_t safeBootValueSize = sizeof(safeBootValue);
        if ( (sysctlbyname("kern.safeboot", &safeBootValue, &safeBootValueSize, NULL, 0) == 0) && (safeBootValue != 0) ) {
            // user booted machine in safe-boot mode
            struct stat dyldCacheStatInfo;
            // Don't use custom DYLD_SHARED_CACHE_DIR if provided, use standard path
            if ( my_stat(MACOSX_DYLD_SHARED_CACHE_DIR DYLD_SHARED_CACHE_BASE_NAME ARCH_NAME, &dyldCacheStatInfo) == 0 ) {
                struct timeval bootTimeValue;
                size_t bootTimeValueSize = sizeof(bootTimeValue);
                if ( (sysctlbyname("kern.boottime", &bootTimeValue, &bootTimeValueSize, NULL, 0) == 0) && (bootTimeValue.tv_sec != 0) ) {
                    // if the cache file was created before this boot, then throw it away and let it rebuild itself
                    if ( dyldCacheStatInfo.st_mtime < bootTimeValue.tv_sec ) {
                        ::unlink(MACOSX_DYLD_SHARED_CACHE_DIR DYLD_SHARED_CACHE_BASE_NAME ARCH_NAME);
                        gLinkContext.sharedRegionMode = ImageLoader::kDontUseSharedRegion;
                        return;
                    }
                }
            }
        }
#endif
        // map in shared cache to shared region
        int fd = openSharedCacheFile();
        if ( fd != -1 ) {
            uint8_t firstPages[8192];
            if ( ::read(fd, firstPages, 8192) == 8192 ) {
                dyld_cache_header* header = (dyld_cache_header*)firstPages;
#if __x86_64__
                const char* magic = (sHaswell ? ARCH_CACHE_MAGIC_H : ARCH_CACHE_MAGIC);
#else
                const char* magic = ARCH_CACHE_MAGIC;
#endif
                if ( strcmp(header->magic, magic) == 0 ) {
                    const dyld_cache_mapping_info* const fileMappingsStart = (dyld_cache_mapping_info*)&firstPages[header->mappingOffset];
                    const dyld_cache_mapping_info* const fileMappingsEnd = &fileMappingsStart[header->mappingCount];
                    shared_file_mapping_np mappings[header->mappingCount+1]; // add room for code-sig
                    unsigned int mappingCount = header->mappingCount;
                    int codeSignatureMappingIndex = -1;
                    int readWriteMappingIndex = -1;
                    int readOnlyMappingIndex = -1;
                    // validate that the cache file has not been truncated
                    bool goodCache = false;
                    struct stat stat_buf;
                    if ( fstat(fd, &stat_buf) == 0 ) {
                        goodCache = true;
                        int i=0;
                        for (const dyld_cache_mapping_info* p = fileMappingsStart; p < fileMappingsEnd; ++p, ++i) {
                            mappings[i].sfm_address = p->address;
                            mappings[i].sfm_size = p->size;
                            mappings[i].sfm_file_offset = p->fileOffset;
                            mappings[i].sfm_max_prot = p->maxProt;
                            mappings[i].sfm_init_prot = p->initProt;
                            // rdar://problem/5694507 old update_dyld_shared_cache tool could make a cache file
                            // that is not page aligned, but otherwise ok.
                            if ( p->fileOffset+p->size > (uint64_t)(stat_buf.st_size+4095 & (-4096)) ) {
                                dyld::log("dyld: shared cached file is corrupt: %s" DYLD_SHARED_CACHE_BASE_NAME ARCH_NAME "\n", sSharedCacheDir);
                                goodCache = false;
                            }
                            if ( (mappings[i].sfm_init_prot & (VM_PROT_READ|VM_PROT_WRITE)) == (VM_PROT_READ|VM_PROT_WRITE) ) {
                                readWriteMappingIndex = i;
                            }
                            if ( mappings[i].sfm_init_prot == VM_PROT_READ ) {
                                readOnlyMappingIndex = i;
                            }
                        }
                        // if shared cache is code signed, add a mapping for the code signature
                        uint64_t signatureSize = header->codeSignatureSize;
                        // zero size in header means signature runs to end-of-file
                        if ( signatureSize == 0 )
                            signatureSize = stat_buf.st_size - header->codeSignatureOffset;
                        if ( signatureSize != 0 ) {
                            int linkeditMapping = mappingCount-1;
                            codeSignatureMappingIndex = mappingCount++;
                            mappings[codeSignatureMappingIndex].sfm_address = mappings[linkeditMapping].sfm_address + mappings[linkeditMapping].sfm_size;
#if __arm__ || __arm64__
                            mappings[codeSignatureMappingIndex].sfm_size = (signatureSize+16383) & (-16384);
#else
                            mappings[codeSignatureMappingIndex].sfm_size = (signatureSize+4095) & (-4096);
#endif
                            mappings[codeSignatureMappingIndex].sfm_file_offset = header->codeSignatureOffset;
                            mappings[codeSignatureMappingIndex].sfm_max_prot = VM_PROT_READ;
                            mappings[codeSignatureMappingIndex].sfm_init_prot = VM_PROT_READ;
                        }
                    }
#if __MAC_OS_X_VERSION_MIN_REQUIRED
                    // sanity check that /usr/lib/libSystem.B.dylib stat() info matches cache
                    if ( header->imagesCount * sizeof(dyld_cache_image_info) + header->imagesOffset < 8192 ) {
                        bool foundLibSystem = false;
                        if ( my_stat("/usr/lib/libSystem.B.dylib", &stat_buf) == 0 ) {
                            const dyld_cache_image_info* images = (dyld_cache_image_info*)&firstPages[header->imagesOffset];
                            const dyld_cache_image_info* const imagesEnd = &images[header->imagesCount];
                            for (const dyld_cache_image_info* p = images; p < imagesEnd; ++p) {
                                if ( ((time_t)p->modTime == stat_buf.st_mtime) && ((ino_t)p->inode == stat_buf.st_ino) ) {
                                    foundLibSystem = true;
                                    break;
                                }
                            }
                        }
                        if ( !sSharedCacheIgnoreInodeAndTimeStamp && !foundLibSystem ) {
                            dyld::log("dyld: shared cached file was built against a different libSystem.dylib, ignoring cache.\n"
                                      "to update dyld shared cache run: 'sudo update_dyld_shared_cache' then reboot.\n");
                            goodCache = false;
                        }
                    }
#endif
#if __IPHONE_OS_VERSION_MIN_REQUIRED
                    {
                        uint64_t lowAddress;
                        uint64_t highAddress;
                        getCacheBounds(mappingCount, mappings, lowAddress, highAddress);
                        if ( (highAddress-lowAddress) > SHARED_REGION_SIZE )
                            throw "dyld shared cache is too big to fit in shared region";
                    }
#endif
                    if ( goodCache && (readWriteMappingIndex == -1) ) {
                        dyld::log("dyld: shared cached file is missing read/write mapping: %s" DYLD_SHARED_CACHE_BASE_NAME ARCH_NAME "\n", sSharedCacheDir);
                        goodCache = false;
                    }
                    if ( goodCache && (readOnlyMappingIndex == -1) ) {
                        dyld::log("dyld: shared cached file is missing read-only mapping: %s" DYLD_SHARED_CACHE_BASE_NAME ARCH_NAME "\n", sSharedCacheDir);
                        goodCache = false;
                    }
                    if ( goodCache ) {
                        long cacheSlide = 0;
                        void* slideInfo = NULL;
                        uint64_t slideInfoSize = 0;
                        // check if shared cache contains slid info
                        if ( header->slideInfoSize != 0 ) {
                            // <rdar://problem/8611968> don't slide shared cache if ASLR disabled (main executable didn't slide)
                            if ( sMainExecutable->isPositionIndependentExecutable() && (sMainExecutable->getSlide() == 0) )
                                cacheSlide = 0;
                            else {
                                // generate random slide amount
                                cacheSlide = pickCacheSlide(mappingCount, mappings);
                                slideInfo = (void*)(long)(mappings[readOnlyMappingIndex].sfm_address + (header->slideInfoOffset - mappings[readOnlyMappingIndex].sfm_file_offset));
                                slideInfoSize = header->slideInfoSize;
                                // add VM_PROT_SLIDE bit to __DATA area of cache
                                mappings[readWriteMappingIndex].sfm_max_prot |= VM_PROT_SLIDE;
                                mappings[readWriteMappingIndex].sfm_init_prot |= VM_PROT_SLIDE;
                            }
                        }
                        if ( gLinkContext.verboseMapping ) {
                            dyld::log("dyld: calling _shared_region_map_and_slide_np() with regions:\n");
                            for (int i=0; i < mappingCount; ++i) {
                                dyld::log("  address=0x%08llX, size=0x%08llX, fileOffset=0x%08llX\n", mappings[i].sfm_address, mappings[i].sfm_size, mappings[i].sfm_file_offset);
                            }
                        }
                        if (_shared_region_map_and_slide_np(fd, mappingCount, mappings, codeSignatureMappingIndex, cacheSlide, slideInfo, slideInfoSize) == 0) {
                            // successfully mapped cache into shared region
                            sSharedCache = (dyld_cache_header*)mappings[0].sfm_address;
                            sSharedCacheSlide = cacheSlide;
                            dyld::gProcessInfo->sharedCacheSlide = cacheSlide;
                            //dyld::log("sSharedCache=%p sSharedCacheSlide=0x%08lX\n", sSharedCache, sSharedCacheSlide);
                            // if cache has a uuid, copy it
                            if ( header->mappingOffset >= 0x68 ) {
                                memcpy(dyld::gProcessInfo->sharedCacheUUID, header->uuid, 16);
                            }
                        }
                        else {
#if __IPHONE_OS_VERSION_MIN_REQUIRED
                            throw "dyld shared cache could not be mapped";
#endif
                            if ( gLinkContext.verboseMapping )
                                dyld::log("dyld: shared cached file could not be mapped\n");
                        }
                    }
                }
                else {
                    if ( gLinkContext.verboseMapping )
                        dyld::log("dyld: shared cached file is invalid\n");
                }
            }
            else {
                if ( gLinkContext.verboseMapping )
                    dyld::log("dyld: shared cached file cannot be read\n");
            }
            close(fd);
        }
        else {
            if ( gLinkContext.verboseMapping )
                dyld::log("dyld: shared cached file cannot be opened\n");
        }
    }

    // remember if dyld loaded at same address as when cache built
    if ( sSharedCache != NULL ) {
        gLinkContext.dyldLoadedAtSameAddressNeededBySharedCache = ((uintptr_t)(sSharedCache->dyldBaseAddress) == (uintptr_t)&_mh_dylinker_header);
    }

    // tell gdb where the shared cache is
    if ( sSharedCache != NULL ) {
        const dyld_cache_mapping_info* const start = (dyld_cache_mapping_info*)((uint8_t*)sSharedCache + sSharedCache->mappingOffset);
        dyld_shared_cache_ranges.sharedRegionsCount = sSharedCache->mappingCount;
        // only room to tell gdb about first four regions
        if ( dyld_shared_cache_ranges.sharedRegionsCount > 4 )
            dyld_shared_cache_ranges.sharedRegionsCount = 4;
        const dyld_cache_mapping_info* const end = &start[dyld_shared_cache_ranges.sharedRegionsCount];
        int index = 0;
        for (const dyld_cache_mapping_info* p = start; p < end; ++p, ++index ) {
            dyld_shared_cache_ranges.ranges[index].start = p->address+sSharedCacheSlide;
            dyld_shared_cache_ranges.ranges[index].length = p->size;
            if ( gLinkContext.verboseMapping ) {
                dyld::log("        0x%08llX->0x%08llX %s%s%s init=%x, max=%x\n",
                          p->address+sSharedCacheSlide, p->address+sSharedCacheSlide+p->size-1,
                          ((p->initProt & VM_PROT_READ) ? "read " : ""),
                          ((p->initProt & VM_PROT_WRITE) ? "write " : ""),
                          ((p->initProt & VM_PROT_EXECUTE) ? "execute " : ""),
                          p->initProt, p->maxProt);
            }
#if __i386__
            // If a non-writable and executable region is found in the R/W shared region, then this is __IMPORT segments
            // This is an old cache. Make writable. dyld no longer supports turn W on and off as it binds
            if ( (p->initProt == (VM_PROT_READ|VM_PROT_EXECUTE)) && ((p->address & 0xF0000000) == 0xA0000000) ) {
                if ( p->size != 0 ) {
                    vm_prot_t prot = VM_PROT_EXECUTE | PROT_READ | VM_PROT_WRITE;
                    vm_protect(mach_task_self(), p->address, p->size, false, prot);
                    if ( gLinkContext.verboseMapping ) {
                        dyld::log("%18s at 0x%08llX->0x%08llX altered permissions to %c%c%c\n", "",
                                  p->address, p->address+p->size-1,
                                  (prot & PROT_READ) ? 'r' : '.',
                                  (prot & PROT_WRITE) ? 'w' : '.',
                                  (prot & PROT_EXEC) ? 'x' : '.' );
                    }
                }
            }
#endif
        }
        if ( gLinkContext.verboseMapping ) {
            // list the code blob
            dyld_cache_header* header = (dyld_cache_header*)sSharedCache;
            uint64_t signatureSize = header->codeSignatureSize;
            // zero size in header means signature runs to end-of-file
            if ( signatureSize == 0 ) {
                struct stat stat_buf;
                // FIXME: need size of cache file actually used
                if ( my_stat(IPHONE_DYLD_SHARED_CACHE_DIR DYLD_SHARED_CACHE_BASE_NAME ARCH_NAME, &stat_buf) == 0 )
                    signatureSize = stat_buf.st_size - header->codeSignatureOffset;
            }
            if ( signatureSize != 0 ) {
                const dyld_cache_mapping_info* const last = &start[dyld_shared_cache_ranges.sharedRegionsCount-1];
                uint64_t codeBlobStart = last->address + last->size;
                dyld::log("        0x%08llX->0x%08llX (code signature)\n", codeBlobStart, codeBlobStart+signatureSize);
            }
        }
#if __IPHONE_OS_VERSION_MIN_REQUIRED
        // check for file that enables dyld shared cache dylibs to be overridden
        struct stat enableStatBuf;
        // check file size to determine if correct file is in place.
        // See <rdar://problem/13591370> Need a way to disable roots without removing /S/L/C/com.apple.dyld/enable...
        sDylibsOverrideCache = ( (my_stat(IPHONE_DYLD_SHARED_CACHE_DIR "enable-dylibs-to-override-cache", &enableStatBuf) == 0)
                                 && (enableStatBuf.st_size < ENABLE_DYLIBS_TO_OVERRIDE_CACHE_SIZE) );
#endif
    }
}
Using the handy capability described in the previous post, debugging user-mode applications with the gdb kernel debugger, we could step through this function and see where it fails. To do that, we patched dyld with an HLT instruction, which our modified QEMU treats as a breakpoint. We then re-signed the executable with jtool and added the new signature to the static trust cache.
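To illustrate the patching step: AArch64 HLT #0 encodes to 0xd4400000, so placing such a breakpoint amounts to overwriting one 4-byte instruction at the right file offset in the dyld binary. A sketch under our assumptions (finding the offset and re-signing with jtool are separate steps):

#include <stdint.h>
#include <stdio.h>

// overwrite the instruction at file_offset with HLT #0 (0xd4400000)
static int patch_in_hlt(const char *path, long file_offset)
{
    const uint32_t hlt0 = 0xd4400000; // stored little-endian on disk
    FILE *f = fopen(path, "r+b");
    if (f == NULL) {
        return -1;
    }
    fseek(f, file_offset, SEEK_SET);
    fwrite(&hlt0, sizeof(hlt0), 1, f);
    fclose(f);
    return 0;
}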
And so we finally found where it fails:
if (_shared_region_map_and_slide_np(fd, mappingCount, mappings, codeSignatureMappingIndex, cacheSlide, slideInfo, slideInfoSize) == 0) {
    // successfully mapped cache into shared region
    sSharedCache = (dyld_cache_header*)mappings[0].sfm_address;
    sSharedCacheSlide = cacheSlide;
    dyld::gProcessInfo->sharedCacheSlide = cacheSlide;
    //dyld::log("sSharedCache=%p sSharedCacheSlide=0x%08lX\n", sSharedCache, sSharedCacheSlide);
    // if cache has a uuid, copy it
    if ( header->mappingOffset >= 0x68 ) {
        memcpy(dyld::gProcessInfo->sharedCacheUUID, header->uuid, 16);
    }
}
else {
#if __IPHONE_OS_VERSION_MIN_REQUIRED
    throw "dyld shared cache could not be mapped";
#endif
    if ( gLinkContext.verboseMapping )
        dyld::log("dyld: shared cached file could not be mapped\n");
}
Luckily we had the kernel debugger running, so we could simply step into the kernel system call and see where it fails. We also have the source code of some version of it:
int shared_region_map_and_slide_np(
    struct proc *p,
    struct shared_region_map_and_slide_np_args *uap,
    __unused int *retvalp)
{
    struct shared_file_mapping_np *mappings;
    unsigned int mappings_count = uap->count;
    kern_return_t kr = KERN_SUCCESS;
    uint32_t slide = uap->slide;

#define SFM_MAX_STACK 8
    struct shared_file_mapping_np stack_mappings[SFM_MAX_STACK];

    /* Is the process chrooted?? */
    if (p->p_fd->fd_rdir != NULL) {
        kr = EINVAL;
        goto done;
    }

    if ((kr = vm_shared_region_sliding_valid(slide)) != KERN_SUCCESS) {
        if (kr == KERN_INVALID_ARGUMENT) {
            /*
             * This will happen if we request sliding again
             * with the same slide value that was used earlier
             * for the very first sliding.
             */
            kr = KERN_SUCCESS;
        }
        goto done;
    }

    if (mappings_count == 0) {
        SHARED_REGION_TRACE_INFO(
            ("shared_region: %p [%d(%s)] map(): "
             "no mappings\n",
             (void *)VM_KERNEL_ADDRPERM(current_thread()),
             p->p_pid, p->p_comm));
        kr = 0; /* no mappings: we're done ! */
        goto done;
    } else if (mappings_count <= SFM_MAX_STACK) {
        mappings = &stack_mappings[0];
    } else {
        SHARED_REGION_TRACE_ERROR(
            ("shared_region: %p [%d(%s)] map(): "
             "too many mappings (%d)\n",
             (void *)VM_KERNEL_ADDRPERM(current_thread()),
             p->p_pid, p->p_comm,
             mappings_count));
        kr = KERN_FAILURE;
        goto done;
    }

    if ( (kr = shared_region_copyin_mappings(p, uap->mappings, uap->count, mappings))) {
        goto done;
    }

    kr = _shared_region_map_and_slide(p, uap->fd, mappings_count, mappings,
                                      slide, uap->slide_start, uap->slide_size);
    if (kr != KERN_SUCCESS) {
        return kr;
    }

done:
    return kr;
}
Stepping through this code in the debugger, we found that it is the call to _shared_region_map_and_slide() that actually fails:
/*
 * shared_region_map_np()
 *
 * This system call is intended for dyld.
 *
 * dyld uses this to map a shared cache file into a shared region.
 * This is usually done only the first time a shared cache is needed.
 * Subsequent processes will just use the populated shared region without
 * requiring any further setup.
 */
int _shared_region_map_and_slide(
    struct proc *p,
    int fd,
    uint32_t mappings_count,
    struct shared_file_mapping_np *mappings,
    uint32_t slide,
    user_addr_t slide_start,
    user_addr_t slide_size)
{
    int error;
    kern_return_t kr;
    struct fileproc *fp;
    struct vnode *vp, *root_vp, *scdir_vp;
    struct vnode_attr va;
    off_t fs;
    memory_object_size_t file_size;
#if CONFIG_MACF
    vm_prot_t maxprot = VM_PROT_ALL;
#endif
    memory_object_control_t file_control;
    struct vm_shared_region *shared_region;
    uint32_t i;

    SHARED_REGION_TRACE_DEBUG(
        ("shared_region: %p [%d(%s)] -> map\n",
         (void *)VM_KERNEL_ADDRPERM(current_thread()),
         p->p_pid, p->p_comm));

    shared_region = NULL;
    fp = NULL;
    vp = NULL;
    scdir_vp = NULL;

    /* get file structure from file descriptor */
    error = fp_lookup(p, fd, &fp, 0);
    if (error) {
        SHARED_REGION_TRACE_ERROR(
            ("shared_region: %p [%d(%s)] map: "
             "fd=%d lookup failed (error=%d)\n",
             (void *)VM_KERNEL_ADDRPERM(current_thread()),
             p->p_pid, p->p_comm, fd, error));
        goto done;
    }

    /* make sure we're attempting to map a vnode */
    if (FILEGLOB_DTYPE(fp->f_fglob) != DTYPE_VNODE) {
        SHARED_REGION_TRACE_ERROR(
            ("shared_region: %p [%d(%s)] map: "
             "fd=%d not a vnode (type=%d)\n",
             (void *)VM_KERNEL_ADDRPERM(current_thread()),
             p->p_pid, p->p_comm,
             fd, FILEGLOB_DTYPE(fp->f_fglob)));
        error = EINVAL;
        goto done;
    }

    /* we need at least read permission on the file */
    if (! (fp->f_fglob->fg_flag & FREAD)) {
        SHARED_REGION_TRACE_ERROR(
            ("shared_region: %p [%d(%s)] map: "
             "fd=%d not readable\n",
             (void *)VM_KERNEL_ADDRPERM(current_thread()),
             p->p_pid, p->p_comm, fd));
        error = EPERM;
        goto done;
    }

    /* get vnode from file structure */
    error = vnode_getwithref((vnode_t) fp->f_fglob->fg_data);
    if (error) {
        SHARED_REGION_TRACE_ERROR(
            ("shared_region: %p [%d(%s)] map: "
             "fd=%d getwithref failed (error=%d)\n",
             (void *)VM_KERNEL_ADDRPERM(current_thread()),
             p->p_pid, p->p_comm, fd, error));
        goto done;
    }
    vp = (struct vnode *) fp->f_fglob->fg_data;

    /* make sure the vnode is a regular file */
    if (vp->v_type != VREG) {
        SHARED_REGION_TRACE_ERROR(
            ("shared_region: %p [%d(%s)] map(%p:'%s'): "
             "not a file (type=%d)\n",
             (void *)VM_KERNEL_ADDRPERM(current_thread()),
             p->p_pid, p->p_comm,
             (void *)VM_KERNEL_ADDRPERM(vp),
             vp->v_name, vp->v_type));
        error = EINVAL;
        goto done;
    }

#if CONFIG_MACF
    /* pass in 0 for the offset argument because AMFI does not need the offset
       of the shared cache */
    error = mac_file_check_mmap(vfs_context_ucred(vfs_context_current()),
                                fp->f_fglob, VM_PROT_ALL, MAP_FILE, 0, &maxprot);
    if (error) {
        goto done;
    }
#endif /* MAC */

    /* make sure vnode is on the process's root volume */
    root_vp = p->p_fd->fd_rdir;
    if (root_vp == NULL) {
        root_vp = rootvnode;
    } else {
        /*
         * Chroot-ed processes can't use the shared_region.
         */
        error = EINVAL;
        goto done;
    }

    if (vp->v_mount != root_vp->v_mount) {
        SHARED_REGION_TRACE_ERROR(
            ("shared_region: %p [%d(%s)] map(%p:'%s'): "
             "not on process's root volume\n",
             (void *)VM_KERNEL_ADDRPERM(current_thread()),
             p->p_pid, p->p_comm,
             (void *)VM_KERNEL_ADDRPERM(vp), vp->v_name));
        error = EPERM;
        goto done;
    }

    /* make sure vnode is owned by "root" */
    VATTR_INIT(&va);
    VATTR_WANTED(&va, va_uid);
    error = vnode_getattr(vp, &va, vfs_context_current());
    if (error) {
        SHARED_REGION_TRACE_ERROR(
            ("shared_region: %p [%d(%s)] map(%p:'%s'): "
             "vnode_getattr(%p) failed (error=%d)\n",
             (void *)VM_KERNEL_ADDRPERM(current_thread()),
             p->p_pid, p->p_comm,
             (void *)VM_KERNEL_ADDRPERM(vp), vp->v_name,
             (void *)VM_KERNEL_ADDRPERM(vp), error));
        goto done;
    }
    if (va.va_uid != 0) {
        SHARED_REGION_TRACE_ERROR(
            ("shared_region: %p [%d(%s)] map(%p:'%s'): "
             "owned by uid=%d instead of 0\n",
             (void *)VM_KERNEL_ADDRPERM(current_thread()),
             p->p_pid, p->p_comm,
             (void *)VM_KERNEL_ADDRPERM(vp), vp->v_name, va.va_uid));
        error = EPERM;
        goto done;
    }

    if (scdir_enforce) {
        /* get vnode for scdir_path */
        error = vnode_lookup(scdir_path, 0, &scdir_vp, vfs_context_current());
        if (error) {
            SHARED_REGION_TRACE_ERROR(
                ("shared_region: %p [%d(%s)] map(%p:'%s'): "
                 "vnode_lookup(%s) failed (error=%d)\n",
                 (void *)VM_KERNEL_ADDRPERM(current_thread()),
                 p->p_pid, p->p_comm,
                 (void *)VM_KERNEL_ADDRPERM(vp), vp->v_name,
                 scdir_path, error));
            goto done;
        }

        /* ensure parent is scdir_vp */
        if (vnode_parent(vp) != scdir_vp) {
            SHARED_REGION_TRACE_ERROR(
                ("shared_region: %p [%d(%s)] map(%p:'%s'): "
                 "shared cache file not in %s\n",
                 (void *)VM_KERNEL_ADDRPERM(current_thread()),
                 p->p_pid, p->p_comm,
                 (void *)VM_KERNEL_ADDRPERM(vp),
                 vp->v_name, scdir_path));
            error = EPERM;
            goto done;
        }
    }

    /* get vnode size */
    error = vnode_size(vp, &fs, vfs_context_current());
    if (error) {
        SHARED_REGION_TRACE_ERROR(
            ("shared_region: %p [%d(%s)] map(%p:'%s'): "
             "vnode_size(%p) failed (error=%d)\n",
             (void *)VM_KERNEL_ADDRPERM(current_thread()),
             p->p_pid, p->p_comm,
             (void *)VM_KERNEL_ADDRPERM(vp), vp->v_name,
             (void *)VM_KERNEL_ADDRPERM(vp), error));
        goto done;
    }
    file_size = fs;

    /* get the file's memory object handle */
    file_control = ubc_getobject(vp, UBC_HOLDOBJECT);
    if (file_control == MEMORY_OBJECT_CONTROL_NULL) {
        SHARED_REGION_TRACE_ERROR(
            ("shared_region: %p [%d(%s)] map(%p:'%s'): "
             "no memory object\n",
             (void *)VM_KERNEL_ADDRPERM(current_thread()),
             p->p_pid, p->p_comm,
             (void *)VM_KERNEL_ADDRPERM(vp), vp->v_name));
        error = EINVAL;
        goto done;
    }

    /* check that the mappings are properly covered by code signatures */
    if (!cs_system_enforcement()) {
        /* code signing is not enforced: no need to check */
    } else for (i = 0; i < mappings_count; i++) {
        if (mappings[i].sfm_init_prot & VM_PROT_ZF) {
            /* zero-filled mapping: not backed by the file */
            continue;
        }
        if (ubc_cs_is_range_codesigned(vp,
                                       mappings[i].sfm_file_offset,
                                       mappings[i].sfm_size)) {
            /* this mapping is fully covered by code signatures */
            continue;
        }
        SHARED_REGION_TRACE_ERROR(
            ("shared_region: %p [%d(%s)] map(%p:'%s'): "
             "mapping #%d/%d [0x%llx:0x%llx:0x%llx:0x%x:0x%x] "
             "is not code-signed\n",
             (void *)VM_KERNEL_ADDRPERM(current_thread()),
             p->p_pid, p->p_comm,
             (void *)VM_KERNEL_ADDRPERM(vp), vp->v_name,
             i, mappings_count,
             mappings[i].sfm_address,
             mappings[i].sfm_size,
             mappings[i].sfm_file_offset,
             mappings[i].sfm_max_prot,
             mappings[i].sfm_init_prot));
        error = EINVAL;
        goto done;
    }

    /* get the process's shared region (setup in vm_map_exec()) */
    shared_region = vm_shared_region_trim_and_get(current_task());
    if (shared_region == NULL) {
        SHARED_REGION_TRACE_ERROR(
            ("shared_region: %p [%d(%s)] map(%p:'%s'): "
             "no shared region\n",
             (void *)VM_KERNEL_ADDRPERM(current_thread()),
             p->p_pid, p->p_comm,
             (void *)VM_KERNEL_ADDRPERM(vp), vp->v_name));
        error = EINVAL;
        goto done;
    }

    /* map the file into that shared region's submap */
    kr = vm_shared_region_map_file(shared_region,
                                   mappings_count,
                                   mappings,
                                   file_control,
                                   file_size,
                                   (void *) p->p_fd->fd_rdir,
                                   slide,
                                   slide_start,
                                   slide_size);
    if (kr != KERN_SUCCESS) {
        SHARED_REGION_TRACE_ERROR(
            ("shared_region: %p [%d(%s)] map(%p:'%s'): "
             "vm_shared_region_map_file() failed kr=0x%x\n",
             (void *)VM_KERNEL_ADDRPERM(current_thread()),
             p->p_pid, p->p_comm,
             (void *)VM_KERNEL_ADDRPERM(vp), vp->v_name, kr));
        switch (kr) {
        case KERN_INVALID_ADDRESS:
            error = EFAULT;
            break;
        case KERN_PROTECTION_FAILURE:
            error = EPERM;
            break;
        case KERN_NO_SPACE:
            error = ENOMEM;
            break;
        case KERN_FAILURE:
        case KERN_INVALID_ARGUMENT:
        default:
            error = EINVAL;
            break;
        }
        goto done;
    }

    error = 0;

    vnode_lock_spin(vp);
    vp->v_flag |= VSHARED_DYLD;
    vnode_unlock(vp);

    /* update the vnode's access time */
    if (! (vnode_vfsvisflags(vp) & MNT_NOATIME)) {
        VATTR_INIT(&va);
        nanotime(&va.va_access_time);
        VATTR_SET_ACTIVE(&va, va_access_time);
        vnode_setattr(vp, &va, vfs_context_current());
    }

    if (p->p_flag & P_NOSHLIB) {
        /* signal that this process is now using split libraries */
        OSBitAndAtomic(~((uint32_t)P_NOSHLIB), &p->p_flag);
    }

done:
    if (vp != NULL) {
        /*
         * release the vnode...
         * ubc_map() still holds it for us in the non-error case
         */
        (void) vnode_put(vp);
        vp = NULL;
    }
    if (fp != NULL) {
        /* release the file descriptor */
        fp_drop(p, fd, fp, 0);
        fp = NULL;
    }
    if (scdir_vp != NULL) {
        (void)vnode_put(scdir_vp);
        scdir_vp = NULL;
    }

    if (shared_region != NULL) {
        vm_shared_region_deallocate(shared_region);
    }

    SHARED_REGION_TRACE_DEBUG(
        ("shared_region: %p [%d(%s)] <- map\n",
         (void *)VM_KERNEL_ADDRPERM(current_thread()),
         p->p_pid, p->p_comm));
    return error;
}
Stepping through this function in the kernel debugger, we found that the part that really fails is this one:
/* make sure vnode is owned by "root" */
VATTR_INIT(&va);
VATTR_WANTED(&va, va_uid);
error = vnode_getattr(vp, &va, vfs_context_current());
if (error) {
    SHARED_REGION_TRACE_ERROR(
        ("shared_region: %p [%d(%s)] map(%p:'%s'): "
         "vnode_getattr(%p) failed (error=%d)\n",
         (void *)VM_KERNEL_ADDRPERM(current_thread()),
         p->p_pid, p->p_comm,
         (void *)VM_KERNEL_ADDRPERM(vp), vp->v_name,
         (void *)VM_KERNEL_ADDRPERM(vp), error));
    goto done;
}
if (va.va_uid != 0) {
    SHARED_REGION_TRACE_ERROR(
        ("shared_region: %p [%d(%s)] map(%p:'%s'): "
         "owned by uid=%d instead of 0\n",
         (void *)VM_KERNEL_ADDRPERM(current_thread()),
         p->p_pid, p->p_comm,
         (void *)VM_KERNEL_ADDRPERM(vp), vp->v_name, va.va_uid));
    error = EPERM;
    goto done;
}
The problem was that the cache file was not owned by root, so we found a way to mount the ramdisk on OS X with file ownership enabled and changed the file's owner to root. This got bash working! At this point we had stdout, but no input support.
Interactive I/O over UART
All that was left was to enable UART input. After reviewing and reversing some of the serial I/O handling code, we found the part that decides whether UART input should be enabled (it is off by default):
This code reads a global value and checks bit #1. If that bit is set, UART input is enabled. XREFing this global, we can see where it is set:
The value is derived from the serial boot argument. So finally, by setting the serial boot-arg to 2, we got an interactive bash shell!
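As a closing illustration, here is a rough sketch of that logic as we understand it from the disassembly; the variable names are ours, and only the serial boot-arg and the bit-1 check come from the analysis (PE_parse_boot_argn() is XNU's standard boot-argument parser):

// hypothetical reconstruction - booting with serial=2 sets bit 1
uint32_t serial_arg = 0;
if (PE_parse_boot_argn("serial", &serial_arg, sizeof(serial_arg)) &&
    (serial_arg & 2)) {
    uart_input_enabled = 1; // enable UART input
}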