
For the last few months, we’ve been hard at work on today’s release, Binary Ninja 5.2 (Io)! This release delivers some of our most impactful and highly requested features yet, including bitwise data-structure support (second most requested), container support (fifth most requested), full Hexagon architecture support for disassembly and decompilation, and much more. Under the hood, 5.2 also contains changes that will help us chart a course toward even bigger improvements in the future.
Let’s dig in!
Well, not quite free candy, but it might seem like it if you’re a user of the Free edition of Binary Ninja! In 5.2, we’re adding many new features from the paid versions to Free:
Previous versions of Binary Ninja had no support for bitwise structure members, commonly referred to as bitfields. With 5.2, we can now represent structure members at a given bit position and bit width.
Currently, we use bitfield information when rendering structures in the data renderer, as shown above. The included debug info plugins (e.g. DWARF, PDB) have been updated to express bitfields alongside other plugins like the built-in SVD import, where MMIO peripherals make heavy use of bitfields.
In a future release, we will extend our analysis to resolve common access patterns for bitfields in Medium and High Level IL.
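As a refresher on the semantics involved, reading a member at a given bit position and bit width is just a shift and a mask. Here’s a minimal plain-Python sketch of that operation (the function name and register values are illustrative, not part of the Binary Ninja API):

```python
def read_bitfield(value: int, bit_offset: int, bit_width: int) -> int:
    """Extract a bitfield member from a raw integer (e.g. an MMIO register)."""
    mask = (1 << bit_width) - 1
    return (value >> bit_offset) & mask

# e.g. a status register where bits 4..6 hold a 3-bit "mode" field
reg = 0x35  # 0b0110101
print(read_bitfield(reg, 4, 3))  # 3
```

This is exactly the kind of access pattern the future IL analysis will resolve automatically.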
One of our most-requested features is finally here: full container support. With the introduction of [Container Transforms](https://de-docs.binary.ninja/dev/containertransforms.html), Binary Ninja can now seamlessly handle nested formats like ZIP, IMG4, or CaRT directly in-memory — no manual extraction required.
At its core, container support lets you browse inside archives and automatically follow transformation layers to reach the data you care about. When a container resolves to a single target, Binary Ninja can transparently open it for analysis. Combined with the files.container.defaultPasswords setting, this makes it effortless to open password-protected samples or malware archives safely — everything happens in memory, so nothing ever touches disk and you can create an analysis database immediately.
When there are multiple payloads, the new Container Browser lets you explore and choose exactly which one to load:
You can easily extend this functionality since it leverages the Transform API. A complete ZipInfo example is included with the API, so you can add support for whatever container formats you need.
Want to add your own custom string deobfuscator that can be applied automatically merely by adding a type? How about support for custom string formats for a new language? Here are some of the changes we made in this release to support it:
We’ll show off more details in an upcoming blog post, but here are a few examples to whet your appetite. First, we asked one of our favorite YouTubers and malware analysis trainers, Josh Reynolds from InvokeRE, for a good sample that uses a custom string obfuscation implementation. He suggested a recent Amadey variant (1, 2). Next, we wrote a quick example plugin that allows us to deobfuscate strings using simple type annotations, which is useful for lots of other samples as well.
The sample (SHA-256: 4cfd8b1592254d745d8f654e97b393c620ed463e317e09caa13b78d4cd779fdd) decodes its strings in an aDecrypt function using a hard-coded subtraction key from offset 00405000. Navigate to that location, right-click on the string, and choose Copy As/Binary/Raw Hex. The sub_encoded attribute will be used by our plugin from step 1 above to automatically replace strings:

typedef char __attr("sub_encoded", "31656537366531313932396130373434356335616264373434616134303764623239613037343435633561626437343461613430376462")* deobfuscate;

Then apply the new type to the argument of the aDecrypt function: int32_t aDecrypt(deobfuscate arg1) (Note: the type is NOT a pointer since we want to apply the transformation directly).
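For intuition, a subtraction-key transform like this boils down to byte-wise arithmetic against a hard-coded key. The sketch below is a generic repeating-key decoder, not a reproduction of the sample’s actual algorithm, and the key bytes shown are made up:

```python
def sub_decrypt(data: bytes, key: bytes) -> bytes:
    """Subtract a repeating key from each byte, modulo 256."""
    return bytes((b - key[i % len(key)]) & 0xFF for i, b in enumerate(data))

def sub_encrypt(data: bytes, key: bytes) -> bytes:
    """Inverse transform: add the key back (handy for testing)."""
    return bytes((b + key[i % len(key)]) & 0xFF for i, b in enumerate(data))

key = b"\x31\x65\x65"  # hypothetical key bytes, not the sample's real key
blob = sub_encrypt(b"C2.example.com", key)
print(sub_decrypt(blob, key))  # b'C2.example.com'
```

With the __attr annotation in place, the plugin runs the equivalent of sub_decrypt over each annotated variable and replaces the string in the listing.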
Another useful feature is custom constant rendering. Check out how the bid64_constant.py example plugin handles rendering a lesser-known floating point format.
We didn’t just make custom string renderers for this specific feature. Part of the vision for Binary Ninja over the next few releases is a push toward language-specific decompilation. You can already see that with our current Objective-C support. A lot of the work over the past few releases has been in core APIs and features needed to support specific architectures or languages, and custom string support is one such feature. Many languages have their own string representations and encodings, so keep an eye out as we really begin to take advantage of this over the next several releases.
As a side benefit, any strings identified by __builtin_strcpy and related functions will now also show up in the string list as Outlined. This makes it even easier to quickly spot stack strings that previously might not have appeared in the strings view despite being identified during analysis.
Stuck working with co-workers who prefer another reverse engineering tool? While Binary Ninja has had IDB import support for some time now, with 5.2, you can also import directly from Ghidra!
You can either import the data into an existing file using the Plugins/Ghidra Import/Import Database... menu (or command-palette action), or just open the database directly with Plugins/Ghidra Import/Open Database... instead. You’ll be able to select a single .gbf or use the file browser from a .gpr to select the specific file to apply or load.
When importing, you can choose which categories of information to import into your current analysis:
With Commercial and above editions, you can also directly import part or all of a Ghidra project into a Binary Ninja project using Plugins/Ghidra Import/Import Project... if you run it inside an existing Binary Ninja Project.
We plan to include Ghidra export support as well in a future version of Binary Ninja for bi-directional compatibility when working with collaborators who are using Ghidra.
WARP, our function signature matching plugin, can now optionally push and retrieve function and type information from a server! This allows user-contributed signatures so you can more easily share reverse engineering information with others using our WARP server. Another benefit is that we can provide signatures for uncommon libraries or one-off functions without worrying about the size on disk for users that may never need them.
While Binary Ninja contains the first network implementation of WARP, we have publicly documented the format and API and look forward to other tools adding WARP support. Our goal is to make WARP a common format for sharing function signatures across all reverse engineering tools, no matter what your tool of choice is.
By default, WARP’s network functionality is disabled. The first time you access the WARP sidebar icon in Binary Ninja, you’ll be asked whether you want to enable the setting.
When fetching from a server, we send the function’s GUID along with the platform name, keeping transmission of sensitive information to a minimum. No authentication is required to query the database, though authentication is required to push changes to the server.
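To illustrate just how little leaves your machine, a lookup request only needs to carry those two fields. The sketch below is hypothetical: the endpoint URL and payload shape are placeholders, and the real protocol is defined by the WARP API documentation:

```python
import json
import urllib.request

def build_lookup(guid: str, platform: str) -> urllib.request.Request:
    """Build an unauthenticated signature lookup: just a GUID and a platform."""
    body = json.dumps({"guid": guid, "platform": platform}).encode()
    return urllib.request.Request(
        "https://warp.example.com/v1/lookup",  # placeholder, not the real endpoint
        data=body,
        headers={"Content-Type": "application/json"},
    )

req = build_lookup("d3b07384-d9a0-4f5c-8c9e-000000000000", "windows-x86_64")
print(req.get_method())  # a Request with a body defaults to POST
```

No function bytes, names, or analysis data are needed for a query, which is what keeps transmission of sensitive information to a minimum.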
You can also push signatures to a WARP server using a free Binary Ninja account. See the documentation for more details!
Version 2.0 of the Enterprise server is scheduled for the Io Release 2 milestone and includes an integrated WARP server for all of our enterprise customers.
For more information regarding WARP server support, see the documentation and the WARP website and be sure to keep an eye out for another blog post showing several examples of using the WARP network service.
With Binary Ninja 5.2, we’re excited to announce we’ve added support for the Qualcomm Hexagon DSP architecture in our Ultimate and Enterprise editions.
Hexagon is a particularly tricky target for decompilation due to several characteristics of the DSP’s pipeline. In particular, we support hardware loops, which we believe to be an industry first! We’ll be back with a blog post with more details on those features and how we were able to add support, but you might remember in 5.1 when we mentioned how the custom basic block analysis was a precursor for some tricky architectures — this is the first architecture released using that new system.
That brings the total count of first-party supported architectures for decompilation to 17 in Ultimate and above! Our Commercial and Non-Commercial editions include first-party support for 12 architectures, all of which are open source [1, 2, 3]. Of course, there are even more third-party architectures available in the extension manager, so the total count is even higher.
One of the very first design decisions we made in Binary Ninja was to do things differently from other tools with our Cross References (xrefs). We love having a small xrefs window available that updates as you click around. That said, there are plenty of people with muscle memory from other tools, so who are we to limit your choice? We now support three different modes of xrefs, available in the ui.defaultXrefInterface setting.
Can’t decide which you like best? No problem, you can even bind all of them to different hotkeys and keep them all at your fingertips if you prefer:
- Pin Cross References
- Focus Cross References
- Cross References Dialog...

This release brings major enhancements to WinDbg TTD (Time-Travel Debugging) integration. A TTD trace is a vast information source, and efficient querying is the key to unlocking its full potential. We’ve added powerful new widgets to make running TTD queries easier and expanded the Python API to enable seamless automation.
The TTD Calls widget allows you to query and analyze function call events from your TTD trace. This is equivalent to WinDbg’s dx @$cursession.TTD.Calls() functionality, but integrated directly into Binary Ninja. It also lets you set a return address range, which in most cases identifies the caller, so you can limit results to calls from one specific module to another. This is invaluable for extracting API usage patterns and getting high-level behavioral information.
The TTD Memory widget allows you to query memory access events from your TTD trace. This is equivalent to WinDbg’s dx @$cursession.TTD.Memory() functionality. Use it to query read/write/execute operations in a given address range. This is especially helpful for surgical access to the trace—whether you’re hunting for specific memory accesses or tracking down executed instructions.
The TTD Events widget displays important events that occurred during the TTD trace, such as thread creation/termination, module loads/unloads, and exceptions. This is equivalent to WinDbg’s dx @$cursession.TTD.Events() functionality. It creates three tabs by default, showing module/thread/exception information, giving you a high-level overview of the program’s behavior.
Creating UI widgets for TTD data model queries is great, but we can do even better! If you’ve ever used lighthouse or bncov, you know how valuable code coverage visualization is in reverse engineering. Here’s the exciting part: your TTD trace already contains that information! We’ve included TTD Code Coverage Analysis, which processes the coverage data and uses a render layer to highlight executed instructions directly in disassembly. More TTD analyses are coming soon!
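Under the hood, coverage analysis is conceptually simple: map each executed instruction address from the trace onto block ranges and count hits. Here’s a plain-Python sketch of that mapping (illustrative only, not the actual render-layer implementation):

```python
from collections import Counter

def coverage_hits(executed_addrs, block_ranges):
    """Count trace hits per basic-block range [start, end)."""
    hits = Counter()
    for addr in executed_addrs:
        for start, end in block_ranges:
            if start <= addr < end:
                hits[(start, end)] += 1
                break
    return hits

blocks = [(0x401000, 0x401010), (0x401010, 0x401030)]
trace = [0x401000, 0x401004, 0x401020]
print(coverage_hits(trace, blocks))
```

The render layer then highlights every block (or instruction) with a nonzero hit count directly in the disassembly view.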
We created Python APIs that allow you to run TTD queries easily and build your own analyses. Here is a quick example:
```python
# Get the debugger controller
dbg = binaryninja.debugger.DebuggerController.get_controller(bv)

# Query all calls to a function
calls = dbg.get_ttd_calls_for_symbols("user32!MessageBoxA")
print(f"Found {len(calls)} calls to MessageBoxA")

# Query memory writes to an address range
events = dbg.get_ttd_memory_access_for_address(0x401000, 0x401004, "w")
print(f"Found {len(events)} writes to 0x401000-0x401004")

# Query all TTD events
print(dbg.get_ttd_events())
```
If you haven’t yet explored TTD, now is the perfect time to transform your dynamic analysis workflow! Read the documentation, or check out an awesome list of TTD resources.
In 5.2, we continue to work on ensuring we have the best Objective-C decompilation around. We’ve rewritten our Objective-C workflow in Rust, and added two new features that drastically improve decompilation.
First, we propagate type information from [super init], which you can see below:
Second, we added a setting to remove reference counting calls which can really simplify decompilation in the Pseudo Objective-C view:
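Conceptually, that setting filters Objective-C runtime bookkeeping out of the decompilation; as plain data, that’s just dropping a known set of callees. A sketch of the idea (not the workflow’s actual implementation, and the set below is not exhaustive):

```python
# Common ARC runtime entry points seen in Objective-C binaries
ARC_BOOKKEEPING = {
    "_objc_retain",
    "_objc_release",
    "_objc_autorelease",
    "_objc_retainAutoreleasedReturnValue",
    "_objc_storeStrong",
}

def strip_arc_calls(callees: list[str]) -> list[str]:
    """Drop reference-counting calls, keeping the ones that matter for analysis."""
    return [name for name in callees if name not in ARC_BOOKKEEPING]

print(strip_arc_calls(["_objc_retain", "_objc_msgSend", "_objc_release"]))
# ['_objc_msgSend']
```

The real workflow does this at the IL level, so the removed calls disappear from the Pseudo Objective-C view entirely rather than just from a call list.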
Let us know if you have any other feature requests for Objective-C, but just as a teaser, we’re already working on giving Swift the same treatment with a decompilation workflow and support for types, symbol demangling, debug information, and more.
Special thanks to the following open source contributors whose PRs were merged into this release:
We appreciate your contributions!
- AND operations
- void* types
- [round](https://github.com/Vector35/binaryninja-api/issues/7263) and renders HLIL_SPLIT operands
- Create Array dialog would sometimes fail
- [u"](https://github.com/Vector35/binaryninja-api/issues/7427) string prefix
- DataVariables
- TBZ/TBNZ and CBZ/CBNZ on AArch64
- ldrsw ARM64 instruction
- tbnz condition on ARM architectures
- Architecture::GetRegisterInfo handles invalid register IDs gracefully
- ILTransparentCopy
- get_basic_block_at
- std::find_if
- FunctionTypeInfo equality checks
- [BasicBlockAnalysisContext](https://api.binary.ninja/binaryninja.architecture-module.html#binaryninja.architecture.BasicBlockAnalysisContext)
- str or QualifiedName
- [PossibleValueSet](https://github.com/Vector35/binaryninja-api/issues/7484)
- api_REVISION.txt
- Type//* context menu
- UTF8 decoding
- unwrap() in dwarf_import to handle errors more gracefully
- None
- DownloadInstance
- impl AsRef<Path> for path arguments
- get_data
- HighLevelILFunction
- .create_database()
- cargo check only if inputs change
- @installed keyword search in the Plugin Manager

Even this massive list isn’t everything! For even more items that were not included here, including the usual assortment of performance improvements and more, check out our closed milestone on GitHub.