The (Long) Journey To A Multi-Architecture Disassembler

Last week we presented a talk at REcon on the internals of JEB’s native disassembler.

During this talk, we focused on some of the research problems we encountered while developing our custom disassembler engine, the foundation of the JEB native decompiler.

Interested readers can find an extended version of the slide deck here.

If you have questions, comments or suggestions, feel free to reach out to us (email, Twitter, Slack).

Native Signatures Generation

JEB 3.3 ships with our internal tool SiglibGen to generate signatures for native routines. Until now, users could sign individual routines only from the JEB user interface (menu Native > Create Signature for Procedure), or with the auto-signing mode.

With the release of SiglibGen, users can now create signatures for whole files in batch mode, notably executables (PE, ELF), libraries (Microsoft COFF and AR files), and JDB2 files (JEB project files)1.

In this post, we will explain how SiglibGen allows power-users to generate custom signature libraries, in order to quickly identify similar code between different executables.

Signature Libraries (siglibs)

Signature libraries are stored in the <JEB install folder>/siglibs folder. Each signature contains a set of features identifying a routine (detailed below), and a set of attributes representing the knowledge about the routine (name, internal labels, comments…).

JEB currently ships with signature libraries for x86/x64 Microsoft Visual Studio libraries (from Visual Studio 2008 to 2017), and for ARM/ARM64 Android NDKs (from NDKr10 to NDKr19). These signatures will be automatically loaded when a suitable file is opened (see File>Engines>Signature Libraries for the complete list of available signature libraries).

These compiler signatures are intended to be “false positive free”, i.e. they should only identify the exact same routine (though it can be mapped at a different location). Therefore, the signatures can be blindly trusted by users, and by JEB automatic analysis2.

But users might want to generate their own signature libraries, for example in the following scenarios:

  • A user analyzed an unknown executable. The resulting JDB2 file can then be signed, such that all routines can be identified in other executables and the related information (names, comments, labels) imported.
  • A user found out that an executable is statically linked with a public library. The library can then be compiled with symbols and signed, such that the library routines will be renamed in the analyzed executable3.

Use Case: Operation ShadowHammer

To illustrate the signatures generation process, we are going to use the recent attack dubbed “Operation ShadowHammer” as an example. This operation was originally documented by Kaspersky. Roughly summarized, malicious code was inserted into ASUS’s legitimate automatic update tool, named “ASUS Live Update Utility”4.

In this use case, we are going to put ourselves in the shoes of an analyst wanting to understand the trojanized ASUS installers. We do not intend to analyze them in depth (it has been done several times already), but rather to show how SiglibGen can accelerate the analysis.

First, we got our hands on three samples, originally mentioned in CounterCept’s analysis, with their dates of use:

SHA-256                                                          | Date Of Use
6aedfef62e7a8ab7b8ab3ff57708a55afa1a2a6765f86d581bc99c738a68fc74 | June
736bda643291c6d2785ebd0c7be1c31568e7fa2cfcabff3bd76e67039b71d0a8 | September
9a72f971944fcb7a143017bc5c6c2db913bbb59f923110198ebd5a78809ea5fc | October

Oldest Sample

Quick Analysis

An analyst would likely start looking at the oldest sample (6aedfef6…), in order to investigate possible evolution of the attack. In this sample, the installer’s main() routine was modified to load a malicious PE executable from its resources:

JEB Project View. The embedded executable can be seen in resources5.

Here is the memory map after opening the malicious executable in JEB:

Embedded PE navigation view. Blue is code, cyan is code identified by siglib, green is data.

The large chunks of cyan correspond to routines identified as being part of “Microsoft Visual C++ 2010 /MT” libraries. Then, we analyzed the remaining seven routines (the blue chunk in the navigation view), and renamed them as follows:

Malicious Routines (our names)

These routines implement the following logic: check whether one of the machine’s MAC addresses matches a hardcoded list and, if so, download a payload (otherwise, a .idx log file is dropped).
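To make that logic concrete, here is a heavily simplified C++ sketch of it. All names and helpers below are invented stand-ins (stubbed so the sketch compiles); this is not the sample’s actual code, and the MD5 detail comes from the comparison with the third sample later in this post.

#include <string>
#include <vector>

// Hypothetical stand-ins for the sample's real routines.
static std::vector<std::string> getLocalMacAddresses() { return {}; }  // adapter enumeration
static std::string md5Hex(const std::string& mac) { return mac; }      // the target list is stored as MD5 hashes
static bool downloadPayload() { return true; }                         // second-stage download
static void dropIdxLog() {}                                            // ".idx" log file

// Hardcoded list of targeted MAC addresses (as MD5 hashes), values elided.
static const std::vector<std::string> kTargetHashes = {};

static void checkAndTrigger() {
    for (const std::string& mac : getLocalMacAddresses()) {
        for (const std::string& target : kTargetHashes) {
            if (md5Hex(mac) == target) {   // machine is targeted
                downloadPayload();
                return;
            }
        }
    }
    dropIdxLog();                          // not targeted: drop the .idx log file
}

int main() { checkAndTrigger(); }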

Now in order to re-use this knowledge on more recent trojanized ASUS installers, let’s generate signatures for this first sample.

Generating Signatures

In order to sign the analyzed file, we are going to create a configuration file from the sample file provided in <JEB install folder>/siglibs/custom:

;------------------------------------------------------------------------------
; *** SAMPLE *** JEB Signature Library configuration file
;------------------------------------------------------------------------------

;template file used to configure the generation of a *.siglib file for JEB

;how to generate the siglib specified by this file?
;open a terminal and execute: (eg, on Windows)
;  $ ..\..\jeb_wincon.bat -c --siglibgen=sample-siglib.cfg

;(mandatory) name of the folder containing files to sign
; must be in the same folder as this configuration file 
input_folder_name=

;(mandatory) processor type 
; see com.pnfsoftware.jeb.core.units.codeobject.ProcessorType 
; eg: X86, X86_64, ARM, ARM64, MIPS, MIPS64
processor=

;(mandatory) output siglib file name
; '.siglib' extension will be appended to it
; IMPORTANT! once generated, this file must be moved to the <JEB>/siglibs/ folder
; (user generated siglibs have to be manually loaded)
output_file_name=mysiglib

;(mandatory) unique identifier for your siglib
; keep it < 0 and decrement for each package you generate
uuid=-1

;(mandatory) *absolute* path to JEB typelibs folder, usually <JEB>/typelibs
typelibs_folder=

;(mandatory) name of your package
; e.g. 'Microsoft Visual C++ 2008 signatures' (without '')
package_name=

;(mandatory) package version
package_version=0

;(optional) description of your package
package_description=

;(optional) package author
package_author=

;(mandatory) list of features included in each signature
; i.e. the characteristics of the signed routines serving to identify them
; see com.pnfsoftware.jeb.core.units.code.asm.sig.NativeFeatureSignerID
; note: defaults should be suitable for most cases. ROUTINE_SIZE must always be included. 
features=ROUTINE_SIZE,ROUTINE_CODE_HASH,CALLED_ROUTINE_NAME_ONLY_EXTERN 

;(mandatory) list of attributes included in each signature
; i.e. additional knowledge on the signed routines conveyed by signatures 
; (other than routine name)
; see com.pnfsoftware.jeb.core.units.code.asm.sig.NativeAttributeSignerID
attributes=COMMENT,LABEL

A particularly interesting part of this configuration is the features field, where users can select the characteristics of the routine they want to put in signatures. The complete feature list can be found here; here are the features we included in our case (the default ones):

Feature Name                    | Description
ROUTINE_SIZE                    | Size of the routine (number of instructions).
ROUTINE_CODE_HASH               | Custom hash computed from the routine assembly code.
CALLED_ROUTINE_NAME_ONLY_EXTERN | Names of the external routines called by the signed routine.

Note that by including ROUTINE_CODE_HASH, our signatures will only match routines with the exact same code (but possibly mapped at a different location). Including CALLED_ROUTINE_NAME_ONLY_EXTERN allows distinguishing wrapper routines that have the same code but call different API routines (e.g., two thunks that merely jump to different imported functions).

Here is the specific configuration file shadowhammer-oldest.cfg we made for this first sample:

input_folder_name=input
processor=X86
output_file_name=shadowhammer-6aedfef6
uuid=-1
typelibs_folder=[...REDACTED...]\typelibs
package_name=ShadowHammer -- sample 6aedfef6 (oldest)
package_version=0
package_description=Signatures generated from the analysis of the oldest sample known
package_author=Joan Calvet
features=ROUTINE_SIZE,ROUTINE_CODE_HASH,CALLED_ROUTINE_NAME_ONLY_EXTERN 
attributes=COMMENT,LABEL

Then we put the JDB2 file of the analyzed sample into the input folder (see the configuration’s input_folder_name field). SiglibGen can then be called by executing the JEB startup script (e.g. jeb_wincon.bat) with the following flags:

$ jeb -c --siglibgen=shadowhammer-oldest.cfg

The generated signature libraries will then be written to the output folder. In our case, SiglibGen signed our seven routines, as indicated in the siggen_stat.log file6:

> Package created on 2019.05.01.15.29.23
> metadata: X86/ShadowHammer -- sample 6aedfef6 (oldest)/0/Signatures generated from the analysis of the oldest sample known/Joan Calvet/1556738959
> # sigs created: 7
> # very small routines: 0
> # small routines: 0
> # medium routines: 6
> # large routines: 1
> # unnamed routines: 1
> # blacklisted routines: 0
> # duplicated routines: 0

We can now copy shadowhammer-6aedfef6.siglib to the <JEB>/siglibs/ folder. It will then be available under File>Engines>Signature Libraries to be manually loaded.

Second Sample Analysis

Now, it is time to turn to the second sample (736bda6432…). The workflow is quite different from the previous one: a routine call has been inserted into the Visual Studio library method __crtExitProcess, which is called whenever the program exits:

Trojanized __crtExitProcess. Call to __crtCorExitProcess was replaced by a call to malicious code.

The astute reader might wonder why the routine is still named __crtExitProcess(), as if it were the original one, given that one of its calls has been rewritten to point elsewhere. In this case, the routine’s name comes from the fact that several caller routines were identified as library code (and are known to call __crtExitProcess()), as indicated by the routine header comment “Routine’s name comes from a caller […]”.

Following the dubious call, we end up decrypting the malicious payload, which is then executed. We can load the malicious dump in JEB with the x86 processor and the correct base address. After manually defining the code area, we obtain the following navigation view:

Memory dump’s initial navigation view. Blue is code.

For now, no compiler signature libraries were loaded because it is a memory dump without a proper PE header. As we know the previous malicious sample was compiled with Visual Studio 2010 /MT libraries, we can manually load the corresponding signatures (File>Engines>Signature Libraries). Here is the navigation bar at this time:

Memory dump’s navigation view with Visual Studio 2010 /MT signature libraries loaded. Blue is code, cyan is library code.

Most of the code has been identified. Now, we can load the custom signatures we generated from the previous sample, and we end up with two more routines being identified (i.e. miscreants directly re-used them from the first sample):

We can now look at the non-identified routines, without having to reanalyze the duplicates.

Finally, after having analyzed the remaining routines, we can generate a new signature library, following the same steps previously described. This time we put two samples in the input folder (the trojanized installer’s JDB2, and the memory dump’s JDB2). Eight routines are then signed.

Third Sample Analysis

The most recent sample (9a72f971944f…) follows the same logic as the previous one, namely it dynamically decrypts the malicious code, which is then executed. As previously, we load the memory dump in JEB with Visual Studio 2010 /MT signatures:

Memory dump’s navigation view with Visual Studio 2010 /MT signature libraries loaded.

Finally, we load the ShadowHammer signature libraries generated from the previous two samples:

Memory dump’s navigation view with
Visual Studio 2010 /MT and ShadowHammer signature libraries loaded.

At this point, only one malicious routine has not been identified (the large blue area in the navigation view). We can now focus on it, knowing that the rest of the code is the same.

If we open the two binaries side-by-side, we can rapidly pinpoint that the unidentified routine has indeed been modified between the two samples. For example:

It appears the hardcoded list of searched MAC addresses (represented by their MD5 hashes) has been modified between the two samples.

Conclusion

We hope this blog post demonstrated how SiglibGen allows users to speed up their analysis by easily re-using their work. Remember that signatures can be generated in a lighter manner directly from the JEB UI (as shown in the auto-signing mode video). As usual, do not hesitate to contact us if you have any questions (email, Twitter, Slack).

Note: SiglibGen might set the .parsers.*.AnalysisStyle and .parsers.*.AllowAdvancedAnalysis engine options to specific values suitable for signature generation, without restoring the original values after the generation. For now, JEB power-users have to manually restore these two engine options to the intended values after having generated signatures (menu Edit>Options>Engines). This will be fixed in the next release, JEB 3.4.

Annex: SiglibGen Log Files

A typical SiglibGen run will produce several log files (in the same folder):

File Name          | Purpose
siggen_stat.log    | Summary log (number of signatures created, etc.). A new entry is appended to the log file at each signature generation.
siggen_report.html | Complete HTML log file; each signed routine is shown with the corresponding features and attributes.
conflicts.txt      | Conflict resolution file; users can tweak here the decisions taken when several routines have the same features (and then regenerate the signatures).
removals.txt       | Removals resolution file; users can tweak here the automatic decisions regarding removing certain signatures (and then regenerate the signatures).

  1. More formats could be handled; do not hesitate to contact us if you have such needs.
  2. While the signatures shown in this blog post will also be generated in a false-positive-free manner, SiglibGen allows building more flexible signatures; this will be the topic of another blog post.
  3. If signatures were built to be strict (i.e. not allowing any modifications to the original routine), this can be far from trivial, as the library needs to be compiled with the exact same options as the analyzed executable.
  4. There are numerous excellent analyses available for Operation ShadowHammer, like the one from CounterCept.
  5. Note that thanks to JEB’s recursive processing, the embedded executable does not need to be extracted, and can be directly analyzed within the original JEB project.
  6. See Annex for a description of all log files produced by SiglibGen.

Traveling Around Mars With C Emulation (Part 1)

In previous blog posts, we explained how JEB’s custom Intermediate Representation can serve to analyze an executable and perform advanced deobfuscation. Now it’s time to turn to the final output produced by JEB native decompilers: C code1!

In this series of blog posts, we will describe our journey toward analyzing a heavily obfuscated crackme dubbed “MarsAnalytica”, by working with JEB’s decompiled C code.

To reproduce the analysis presented in this post, make sure to update JEB to version 3.1.3+.

MarsAnalytica Challenge Reconnaissance

MarsAnalytica crackme was created by 0xTowel for NorthSec CTF 2018. The challenge was made public after the CTF with an intriguing presentation by its author:

My reverse engineering challenge ‘MarsAnalytica’ went unsolved at #nsec18 #CTF. Think you can be the first to solve it? It features heavy #obfuscation and a unique virtualization design.

0xTowel

Given that exciting presentation, I decided to use this challenge mainly as a playground to explore and push JEB’s limits (and if we happen to solve it on the road, that would be great!).

The MarsAnalytica sample analyzed in these blogs is the one available on 0xTowel’s GitHub 2. Another version seems to be available on RingZer0 website, called “MarsReloaded”.

So, let’s examine the beast! The program is a large x86-64 ELF (around 10.8 MB) which, once executed, greets the user like this:

Inserting a dummy input gives:

So I guess we have to find a correct Citizen ID! Now let’s open the executable in JEB. First, the entry point routine:

Entry Point

Ok, the classic libc entry point, now let’s look at strings and imports:

A few interesting imports: getchar() to read user input, and putchar() and puts() to write. Also, some memory manipulation routines, malloc() and memcpy(). No particular strings stand out though, not even the greeting message we previously saw. This suggests we might be missing something.

Actually, looking at the native navigation bar (right-side of the screen by default), it seems JEB analyzed very few areas of the executable:

Navigation Bar
(green is cursor’s location, grey represents area without any code or data)

To understand what happened let’s first look at JEB’s notifications window (File > Notifications):

Notifications Window

Two interesting notifications here: first the file was deemed “malformed/obfuscated”, due to its sections being stripped, and second the analysis style was initially set to CONSERVATIVE.

The CONSERVATIVE analysis style means JEB was cautious during disassembly; it only followed safe control-flow relationships (i.e. branches with known targets), and searched for common routine prologue patterns to find unreferenced routines (e.g. push rbp / mov rbp, rsp).

This likely explains why most of the executable was not analyzed: the control-flow could not be safely followed and the unreferenced code does not start with common prologue patterns.

JEB usually employs AGGRESSIVE analysis on standard Linux executables, and disassembles (almost) anything within code areas (also known as “linear sweep disassembly”). In this case, JEB went CONSERVATIVE because the ELF file looks non-standard (sections are stripped).

Explore The Code (At Assembly Level)

Let’s take a look at the actual main() (first argument of __libc_start_main()):

main() code
(part 4)

Ok… that’s where the fun begins!

So, first a few memcpy() calls to copy large memory areas onto the stack, followed by a series of “obfuscated” computations on this data. The main() routine eventually returns to an address computed in the rax register. In the end, JEB’s disassembler was not able to determine this value, hence it stopped analyzing there.

Let’s open the binary in the JEB debugger, and retrieve the final rax value at runtime: 0x402335. We ask JEB to create a routine at this address (“Create Procedure”, P), and end up on very similar code. After manually following the control flow, we reach very large routines (around 8k bytes), with complex control flow, built on similar obfuscated patterns.

And yet at this point we have only seen a fraction of this 10MB executable… We might naively estimate that there are more than 1000 routines like these, if the whole binary is built this way (10MB/8KB = 1250)!

It should be noted that most of the obfuscated routines re-use the same stack area (initialized in main() with the series of memcpy()).

Disassemble’em All!

To automatically discover more routines, I configured JEB to disassemble “everything” within code areas, i.e. to apply an AGGRESSIVE analysis rather than a CONSERVATIVE one3.

This did not end well, due to an anti-disassembly trick that interleaves useless 0xE8 bytes within correct code.

Correct disassembly
(i.e. faithful to what will be executed)

When disassembling linearly, those bytes will be considered as code, because 0xE8 is the x86 opcode for the CALL instruction. Hence the disassembly listing ends up desynchronized from what will actually be executed:

Incorrect disassembly
(0xE8 byte has been disassembled as CALL, and “stole” the next bytes)

We could tweak the disassembly algorithm, but at this point we don’t know if there are other hidden “gifts”. Thus, it seems that the most robust option to correctly analyze the whole executable is to find a way to follow the control flow.

Explore The Code (At C Level)

Let’s now take a look at the pseudo-C code produced by JEB for those first routines. For example, here is main():

Decompiled main()

The code is (pretty) nice and short! Actually… too short, a lot of the original code is not present in the decompiled code. What happened? Many instructions write values into the stack, and those values will not be re-used later on in the same routine; therefore the instructions have been deemed “useless” and removed.

But those written values will likely be used by the next routines, which share the same stack, as noted earlier. So we need to keep them if we want to have a chance to correctly analyze the code.

This can be done by configuring JEB to not apply “aggressive optimizations” during decompilation 4. Here is the new main() decompiled code:

Decompiled main() without aggressive optimizations

A few lengthy expressions, but it remains pretty decent, given the complexity of the original assembly code: 60 lines of C, most of them simple assignments, to represent around 200 non-trivial assembly instructions.

We Need A Plan

Current Understanding

The executable is divided into (not so small) handler routines, each of them passing control to the next one by computing its address. For that purpose, each handler reads values from a large stack, makes a series of non-trivial computations on them, then writes new values back into the stack.

After following manually a bunch of the handlers it seems the user input is only processed after a lot of them have been executed.

As originally mentioned by 0xTowel, the crackme author, it looks like a virtual-machine style obfuscation, where bytecodes are read from memory, and are interpreted to guide the execution.

Also, let’s notice that while the executable is impressively obfuscated, there are some “good news”:

  • There does not seem to be any self-modifying code, meaning that all the code is statically visible; we “just” have to compute the control flow to find it.
  • JEB’s decompiled C code looks (pretty) simple: most C statements are simple assignments, except for some lengthy expressions always based on the same operations; the decompilation pipeline simplified away part of the complexity of the various assembly code patterns.
  • There are very few subroutines called (more on that in the next blogs), and also few system APIs, so most of the logic is contained within the chain of obfuscated handlers, connected through jmp rax or push rax/ret instructions.

Current Objective

Our first goal would be to find where the user input starts to be processed (typically a call to getchar()), and what the exact memory state is at this point (as we are likely going to need it to solve this madness).

Here Is A Plan

Given all that, we could pass through all the “deterministic” part of the execution (i.e. until the user’s input is processed) by implementing a C emulator.

The emulator would simulate the execution of each handler routine, update a memory state, and retrieve the address of the next handler, which can be described by the following pseudo-code:

emulatorState = initEmulator() // initialize memory
handlerAddress = 0x400DA9 // first handler (known address)
while(true){
  analyze(handlerAddress) // disassemble and decompile
  emulatorState = emulate(handlerAddress, emulatorState)
  handlerAddress = emulatorState.getRAX() // RAX=next handler
}

The emulator will then produce an execution trace, and give us access to the exact program state. Hence, we should find at some point where the user’s input is processed (typically, a call to getchar()).

The main advantage of this approach is that we are going to work on C code, rather than assembly code. This will come in handy later when analyzing how the user’s input is processed.

Also, there are a few reasons I decided to go down that (unusual?) road, which have very little to do with MarsAnalytica challenge:

  • The emulator would be architecture-independent (several native architectures are decompiled to C by JEB), allowing us to re-use it in situations where we cannot easily execute the target (e.g. MIPS/ARM).
  • It will be an interesting use-case for JEB public API to manipulate C code. Users could then extend the emulator to suit their needs.
  • This approach can only work if the decompilation is correct, i.e. if the C code remains faithful to the original native code. In other words, it allows us to “test” the decompilation pipeline’s correctness, which is, as a JEB developer, interesting!

Nevertheless, a major drawback of emulating C code on this particular executable is that we need the C code in the first place! Decompiling 10MB of obfuscated code is going to take a while; therefore this “plan” is certainly not the best one for time-limited Capture-The-Flag competitions.

Cliffhanger Ending

How to implement a C emulator with the JEB API? Is MarsAnalytica’s decompiled code correct? Is it really such a good plan to use an emulator? Will I perish under performance problems?

Answers to those questions (and more) in part 2!


  1. While JEB’s default decompiled code follows (most of) C syntactic rules and their semantics, some custom operators might be inserted to represent low-level operations and ease the reading; hence, strictly speaking, JEB’s decompiled code should be called pseudo-C. The decompiled output can also be variants of C, e.g. the Ethereum decompiler produces pseudo-Solidity code.
  2. SHA1 of the UPX-packed executable: fea9d1b1eb9d3f93cea6749f4a07ffb635b5a0bc
  3. Changing the analysis style can be done with .parsers.x86_64.AnalysisStyle engine option
  4. Disabling aggressive optimizations can be done with .parsers.dcmp_x86_64.IROptimizerDisableAggressivePass engine option

JEB Native Analysis Pipeline – Part 1: Intermediate Representation

JEB native code analysis components make use of a custom intermediate representation (IR) to perform code analysis.

Some background: after analysis of a code object, the native assembly of a reconstructed routine is converted to an intermediate representation.1 That IR subsequently goes through a series of transformation passes, including massaging and optimizations. Final stages include the generation of high-level C-like code. Most stages in this pipeline can be customized by users via the use of plugins. A high-level, simplified view of the pipeline could be as follows:

CodeObject (*)
-> Reconstructed Routines & Data
-> Conversion to IR (low-level, non-optimized)
-> IR Optimizations
-> Final IR (higher-level, optimized, typed)
-> Generation of AST
-> AST Optimizations
-> Final AST (final, cleaned)
-> High-level output (eg, C variant)

(*) Examples of code objects: a Windows PE file with x86 code, an ELF library with MIPS code, a headless ARM firmware, a Wasm binary file, an Ethereum smart contract, etc.

Two important JEB API components to hook into and customize the native analysis pipeline are:

  • The IR classes
  • The AST classes

We will start looking at IR components through the rest of this part 1.

IR Description

JEB IR can be seen as a low-level, imperative assembly language, made of expressions. Highest-level expressions are statements. Statements contain expressions. Generally, expressions can contain expressions. IR can be accessed via interfaces in the JEB API. The top-level interface for all IR expressions is IEGeneric. All IR elements start with IExxx. 2

The diagram below shows the current hierarchy of IR expression interfaces:

Note that IEGeneric sits at the top. All other IRE’s (short for IR Expressions from now on) derive from it. Let’s go through those interfaces:

  • IEImm: Integer immediate of arbitrary length. Eg,
    Imm(0x1122, 64) would represent the 64-bit integer value 0x1122.
  • IEVar: Generic IRE to represent variables. Variables can represent underlying physical registers, virtual registers, local function variables, global program variables, etc.
  • IEMem: Piece of memory of arbitrary length. The memory address itself is an IRE; the accessed bitsize is not.
  • IECond: A ternary expression “c ? a: b”, where a, b and c are IRE’s.
  • IERange: A fixed integer range, commonly used with IESlice.
  • IESlice: A chunk (contents range) of an existing IR. Eg, Slice(Imm(0x11223344, 32), 16, 24) can be simplified to Imm(0x22, 8).
  • IECompose: The concatenation of two or more IRE’s (IR0, IR1, …), resulting in an IR of size SUM(i=0->n, bitsize(IRi))
  • IEOperation: A generic operation expression, with IRE operands and an operator. Eg, Operation(ADD,Imm(0x10,8),Mem(Imm(0x10000,32),8)). Most standard operators are supported, as well as less standard operators such as the Parity or Carry functions.
  • IEStatement: the super-interface for IR statements; we will detail them below.

An IR translation unit, resulting from the conversion of a native routine, consists of a sequential list of IEStatement objects. An IR statement has a size (generally, but not necessarily, 1) and an address (generally, a 0-based offset relative to its position in the translation unit).

As of JEB 3.0.8, IR statements can be:

  • IEAssign: The most common of all statements: an assignment from a right-side source to a left-side destination. While the source can be virtually anything, the destination IRE is restricted to a subset of expressions.
  • IENop: This statement does nothing but consumes virtual size in the translation unit.
  • IEJump: An unconditional or conditional jump within the translation unit, expressed using IR offsets.
  • IEJumpFar: An unconditional or conditional far jump (can be outside the translation unit), expressed using native addresses.
  • IESwitch: The N-branch equivalent of IEJump.
  • IECall: Represents a well-formed static or dynamic dispatch to another IR translation unit. The dispatch expression can be any IRE (eg, an Imm for a static dispatch; a Var or Mem for a dynamic dispatch).
  • IEReturn: A high-level expression used to denote a return-to-caller from a translation unit representing a routine. This IRE is always introduced by later optimization passes.
  • IEUntranslatedInstruction: This powerful statement can be used to express anything. It is generally used to represent native instructions that cannot be readily translated using other IR expressions. (Users may see it as an IECall on steroids, using native addresses. In that sense, it is to IECall what IEJumpFar is to IEJump.)

Now, let’s look at a few examples of conversions.

IR Examples

Let’s assume the following EVars were previously defined by an Intel x86 (or x86-64) converter: tmp (a 32-bit EVar representing a virtual placeholder register); eax (an EVar representing the physical register %eax); ?f (1-bit EVars representing standard x86 flags).

  • x86: mov eax, 1
s32:_eax = s32:00000001h

Translating this mov instruction is straight-forward, and can be done with a single Assign IR statement.

  • x86-64: not r9d
s64:_r9 = C(~(s64:_r9[0:32[), i32:00000000h)

Translating a not-32-bit-register on an x86-64 platform is slightly more complex, as the upper 32-bit of the register are zeroed out. Here, the converter is making use of three nested IREs: (IECompose(IEOperation(NOT, Slice(r9, 0, 32))))

Reading IR. IECompose are pretty-printed as C(lo, …, hi), IESlice as Expr[m:n[ 

  • x86-64: xor rax, qword ds:[ecx+1]
0000 : s64:_rax = (s64:_rax ^ 64<s16:_ds>[(s64:_rcx[0:32[ + i32:00000001h)])
0001 : s1:_zf = (s64:_rax ? i1:0 : i1:1)
0002 : s1:_sf = s64:_rax[63:64[
0003 : s1:_pf = PARITY(s64:_rax[0:8[)
0004 : s1:_of = i1:0
0005 : s1:_cf = i1:0

One side effect of arithmetic operations on x86 is the modification of flag registers. The converter makes those side effects explicit. Consequently, translating the exclusive-or above resulted in several Assign IR statements to represent register and flag updates.3

Reading IR. IEMem are pretty-printed as bitsize<SegmentIR>[AddressIR]

  • x86: add eax, 2
0000 : s32:_tmp = s32:_eax
0001 : s32:_eax = (s32:_eax + i32:00000002h)
0002 : s1:_zf = (s32:_eax ? i1:0 : i1:1)
0003 : s1:_sf = s32:_eax[31:32[
0004 : s1:_pf = PARITY(s32:_eax[0:8[)
0005 : s1:_af = ((s32:_tmp ^ i32:00000002h) ^ s32:_eax)[4:5[
0006 : s1:_cf = (s32:_tmp CARRY i32:00000002h)
0007 : s1:_of = ((s32:_tmp ^ s32:_eax) & ~((s32:_tmp ^ i32:00000002h)))[31:32[

The translation of add makes use of the temporary, virtual EVar tmp. It holds the original value of %eax, before the addition was done. That value is necessary for some flag update computations (eg, the overflow flag.) Also take note of the use of special operators Parity and Carry in the converted stub.

  • x86-64: @100000h: jz $+1
s64:_rip = (s1:_zf ? i64:0000001000000003h : i64:0000001000000002h)

Note that a native address is written to the RIP-IEVar (or any EVar representing the Program Counter – PC). PC-assignments like those can later be optimized to IEJump, making use of IR Offsets instead of Native Addresses.

Also note that the Control Flow Graphs (CFG) of the native instructions in the examples thus far are isomorphic to their translated IR-CFG counterparts. That is not always the case, as seen in the example below.

  • x86: repe cmpsb
0000 : if (s32:_ecx == i32:00000000h) goto 000B
0001 : s1:_zf = ((8<s16:_ds>[s32:_esi] - 8<s16:_es>[s32:_edi]) ? i1:0 : i1:1)
0002 : s1:_sf = (8<s16:_ds>[s32:_esi] - 8<s16:_es>[s32:_edi])[7:8[
0003 : s1:_pf = PARITY((8<s16:_ds>[s32:_esi] - 8<s16:_es>[s32:_edi]))
0004 : s1:_cf = (8<s16:_ds>[s32:_esi] <u 8<s16:_es>[s32:_edi])
0005 : s1:_of = ((8<s16:_ds>[s32:_esi] ^ (8<s16:_ds>[s32:_esi] - 8<s16:_es>
       [s32:_edi])) & (8<s16:_ds>[s32:_esi] ^ 8<s16:_es>[s32:_edi]))[7:8[
0006 : s1:_af = ((8<s16:_ds>[s32:_esi] ^ 8<s16:_es>[s32:_edi]) ^ (8<s16:_ds>
       [s32:_esi] - 8<s16:_es>[s32:_edi]))[4:5[
0007 : s32:_esi = (s32:_esi + (s1:_df ? i32:FFFFFFFFh : i32:00000001h))
0008 : s32:_edi = (s32:_edi + (s1:_df ? i32:FFFFFFFFh : i32:00000001h))
0009 : s32:_ecx = (s32:_ecx - i32:00000001h)
000A : if s1:_zf goto 0000

Reading IR. Conditional IEJump statements are pretty-printed as “if (cond) goto IROffset”; unconditional ones are rendered as simple “goto IROffset”.

This IR-CFG is not isomorphic to the native CFG. Additional edges (per the presence of 2x IEJump) are used to represent the compare “[esi+xxx] to [edi+xxx]” loop.

Accessing IR

The JEB back-end API allows full access to several IR-CFG’s, from low-level, raw IR to partially optimized IR, to fully lifted IR just before AST generation phases.

Navigating the IR in the GUI

The UI client currently provides access to the most optimized IR of routines. Those IR-CFG’s can be examined in the aptly-named fragment right next to the source fragment showing decompiled code. Here is an example of side-by-side assemblies (x86, IR). The next screenshot shows the decompiled source.

Left-side: x86 routine / Right-side: optimized IR of the converted routine
(Click to enlarge)
Decompiled source

IR via API

The API is the preferred method when it comes to power-users wanting to manipulate the IR for specific needs, such as writing a custom optimizer, as we will see in the next blog in this series.

Reminder: JEB back-end plugins can be written in Java (preferably) or Python. JEB front-end scripts can be written in Python, and can run both in headless clients (eg, using the built-in command line client) or the UI client.

For now, let’s see how to write a Python script to:

  • Retrieve a decompiled routine
  • Get the generated Intermediate Representations
  • Print it out

The following script retrieves the first internal routine of a native code unit, decompiles it, retrieves the default (latest) IR, and prints out its CFG. The full script is available on GitHub.

# retrieve `unit`, the code unit

# GlobalAnalysis is assumed to be on (default)
decomp = DecompilerHelper.getDecompiler(unit)
if not decomp:
  print('No decompiler unit found')
  return

# retrieve a handle on the method we wish to examine
method = unit.getInternalMethods().get(0)#('sub_1001929')
src = decomp.decompile(method.getName(True))
if not src:
  print('Routine was not decompiled')
  return
print(src)
    
decompTargets = src.getDecompilationTargets()
print(decompTargets)

decompTarget = decompTargets.get(0)
ircfg = decompTarget.getContext().getCfg()
# CFG object reference
# see package com.pnfsoftware.jeb.core.units.code.asm.cfg
print("+++ IR-CFG for %s +++" % method)
print(ircfg.formatSimple())

Running on Desktop Client. Run this script in the UI client via File, Scripts, Run… (hotkey: F3). Remember to open a binary file first, with a version of JEB that ships with the decompiler for that file’s architecture.

Running on the command-line. You may also decide to run it on the command-line. Example, on Windows:

$ jeb_wincon.bat -c --srv2 --script=PrintNativeRoutineIR.py -- winxp32bit/notepad.exe

Example output:

... <trimmed>
...
+++ IR-CFG for Method{sub_1001929}@1001929h +++
0000/1>  s32:_$eax = 32<s16:_$ds>[s32:_gvar_100A4A8]
0001/1:  if !(s32:_$eax) goto 0003
0002/1+  call s32:_GlobalFree(s32:_$eax)->(s32:_$eax){i32:0100193Ch}
0003/1+  s32:_$eax = 32<s16:_$ds>[s32:_gvar_100A4AC]
0004/1:  if !(s32:_$eax) goto 0006
0005/1+  call s32:_GlobalFree(s32:_$eax)->(s32:_$eax){i32:01001948h}
0006/1+  32<s16:_$ds>[s32:_gvar_100A4A8] = i32:00000000h
0007/1:  32<s16:_$ds>[s32:_gvar_100A4AC] = i32:00000000h
0008/1:  return s32:_$eax

Conclusion

That is it for part 1. In part 2, we will continue our exploration of the IR and see how we can hook into the decompilation pipeline to write our custom optimizers to clean packer-specific obfuscation, as well as make use of the data flow analysis components available with the IR-CFG. Stay tuned!

  1. Working on IR presents several advantages, two of them being: a/ reducing the coupling between the analysis pipeline and the input native architecture; b/ offering a side-effect-free representation of a program.
  2. The design choices of JEB IR are out-of-scope for this blog. They may be the subject of a separate document.
  3. When decompiling routines, IR optimization passes will iteratively refactor and clean-up unnecessary operations. In practice, most flag assignments will end up being removed or consolidated.

Native types and type libraries

JEB 3.0.7 ships with our internal type library generation tool. In this post, we will show how to use native types with the client and API, and how power-users can generate custom type libraries.

Type libraries (typelibs)

Type libraries are *.typelib files stored in JEB’s typelibs/ folder. They contain type information for a given component (eg, an OS or an SDK), such as:

  • Types (aliases, structures, enumerations, etc.) and prototypes (~function pointers)
  • Publicly exported routines
  • Constants

JEB ships with typelibs for major sub-systems (such as Windows win32 (user-mode), Windows Driver Kit (kernel), Linux GNU, Linux Android, etc.) running on the most popular architectures (x86, x86-64, arm, aarch64, mips).

Let’s see how types can be used to ease your reverse-engineering tasks.

Using native types with the UI client

Applying types

Using types with JEB is straightforward. If your file’s target environment was identified (or partially identified), then matching typelibs will be loaded and their types made available to the user.

The file shown below is an x86 file compiled for Windows 32-bit:

As such,  win32 typelibs were loaded. You can verify that by clicking File, Engines, Type Libraries…:

Let’s define the bytes at address 0x403000 as belonging to a FILETIME structure. You may right-click and select Edit Type (Y):

and input the exact type name (the type must exist):

Alternatively, it is easier to select a type using Select Type (T). A list of available types is displayed. Filter on “FILETIME”:

And apply it.

The resulting updated disassembly listing will be:

Type editor

JEB features a powerful native type editor that allows the modification of existing “complex” types (that is, structures and their derivatives) and the definition of new types. Open it with Ctrl+Alt+T (macOS: Cmd+Alt+T).

Below, we are selecting an existing well-known Windows type, IMAGE_DOS_HEADER.

The left panel allows you to define the exact structure layout. The right panel is a C-like view of the structure, with actual offsets.

Let’s create a new type.

To create a structure type, click Create, and input a name, such as MyStruc1. The type editor will display your empty structure:

You may then add or remove fields, using the following hotkeys:

Here, we define MyStruc1 to be as such: a structure containing primitives, a nested structure, and arrays.

As seen earlier, we can apply our type MyStruc1 anywhere on raw bytes, eg at offset 0x403027:

Constants

Typelib files also bundle well-known constants, generally defined in header files with #define preprocessor directives. You may use them to replace immediate values in your assembly or decompiler views.

Here is an example, again coming from a Windows win32 file. The following decompiled method makes use of the SendMessage routine:

Note that the second parameter is the message id. The MSDN provides a long list of well-known ids; most of them are bundled with the Windows typelibs shipping with JEB.

Right click on the immediate value (176), and select Replace to see what is offered:

Click OK to perform the replacement:

More readable, isn’t it?

Custom typelibs

There exist scenarios where users will want to create their own typelibs, generally when many custom types would have to be created and/or may need to be reused later. Examples:

  • Analysis of a Windows kernel component making use of Driver Kit headers whose types were not added to JEB’s pre-built WDK typelibs (our own wdk10-<arch>.typelib files do not contain all WDK components, although they do contain the most important ones).
  • The types of platform X were not compiled for a given architecture (eg, JEB does not ship with Linux types specific to Atmel AVR microcontrollers).
  • The binary to be analyzed makes use of a third-party SDK and the program is dynamically linked to that SDK. In that scenario, a user may want to generate typelibs for the SDK for the platform of their choosing.

Creating custom typelibs

Creating a custom typelib file is a fairly simple process: the generator is called by executing your JEB startup script (eg, jeb_wincon.bat) with the following flags:

$ jeb -c --typelibgen=<typelib_configuration_file>

JEB ships with a sample typelib cfg file: typelibs/custom/sample-typelib.cfg. This key-value file is mostly self-explanatory; please refer to it for reference. (Below, we focus solely on the two most important entries, hdrsrc and cstsrc.)

You may want to copy the sample configuration file and adjust it to match your requirements.

The input files can be either or both of the following:

  • An aggregated, preprocessed header file: it should contain C types and exported methods
  • A constant file containing a list of named constants

Types and public routines

The aggregated header can be generated by pre-processing a simple C file including your target header file(s).

Example: let’s say we want to generate types for stdio.h, on the Windows ARM64 platform. We can use the Microsoft compiler’s /P flag to preprocess a sample file, 1.c, which includes the target headers:

// 1.c
#include "stdio.h"
int main(void) {return 0;}
$ cl.exe" /P 1.c /D "WIN32" /D "NDEBUG" /D "_CONSOLE" /D "_UNICODE" /D "UNICODE" /D "_ARM64_WINAPI_PARTITION_DESKTOP_SDK_AVAILABLE=1"

The resulting file will be quite large – and is likely to contain much more than just stdio.h type information (all headers recursively-included by stdio.h would be processed as well).

We can rename that file as hdr.h and feed it to JEB’s Typelib Generator. (entry: hdrsrc)

Quick reference: To preprocess a file with…

JEB’s built-in C declaration parser

Our C parser is C11 based, and supports most standard C declarations, as well as common MSVC and GCC extensions. Two important caveats to remember:

  • anonymous structure bitfields are not supported: things like “int :4” will need to be massaged to, eg, “int _:4”
  • anonymous aliased parameters for single-parameter methods are not supported: things like “void foo(X)” will need to be massaged to, eg, “void foo(X _)”

Predefined constants

As seen earlier, typelib files can also contain lists of named constants; generally, they will be those constants that are #define’d in header files.

They can be scraped from C/C++ header files. JEB ships with a handy Python script that will help you do that quickly: see typelibs/custom/collectDefines.py (other tools exist, such as GCC’s -dM flag, but they may not generate all constants, only those that are preprocessed with a given set of precompilation parameters).

Example:

$ ./collectDefines.py -r w10ddk
CDF_DVCR_625_50_BLOCK_PERIOD:3276
CDF_DVCR_625_50_BLOCK_PERIOD_REMAINDER:800000000
CDROM_AUDIO_CONTROL_PAGE:14
CDROM_CD_TEXT_PACK_ALBUM_NAME:128
CDROM_CD_TEXT_PACK_ARRANGER:132
...
...

We can save that file as, eg cst.txt, and feed it to JEB’s Typelib Generator. (entry: cstsrc)

Loading custom typelibs

If your typelib configuration matches your input files (most notably, the groupid and processor fields), then JEB will load it automatically during analysis of your input file.

Example, with the sample typelib shipping with JEB (groupid=GROUPID_TYPELIB_WIN32, processor=X86):

Obviously, you may decide to force-load a type lib by ticking the “Loaded” checkbox.

Programmatic access with JEB API

Native types, like any other component of JEB, can be accessed with the API. Scripts and plugins can use the API to programmatically retrieve, define, and apply types, as well as manipulate type libraries.

The two most important classes are:

Below is a reference to a sample JEB Python script that will get you started with the API. It shows how to define the following custom type:

struct MyStruct1 {
  int a;
  unsigned char[3][2] b;
};

Source: https://github.com/pnfsoftware/jeb2-samplecode/blob/master/scripts/AddCustomNativeTypes.py

We shall upload more sample scripts in the future. Feel free to share your contributions with us as well.

Conclusion

If you have questions, comments or suggestions, feel free to reach out to us (email, Twitter, Slack).

JEB3 is still in Beta, for a few more weeks. General availability should be expected during the first or second week of January. If you haven’t done so, feel free to ask for a Beta build right away.

Once again, thank you to all our users, we are very grateful for your feedback and support. Finally, a special thank you note to our user “Andy P.” who pushed JEB’s boundaries relatively far (!) and allowed us to uncover interesting corner cases when working with large firmware binaries.

Android NDK Libraries Signatures

In this blog post, we present a new batch of native signatures released with JEB3 to identify Android Native Development Kit (NDK) libraries.

First, let’s briefly give some context. The Android NDK is a set of tools allowing developers to embed compiled C/C++ code into their Android applications. Thus, developers can integrate existing native code libraries, develop performance-sensitive code in C/C++ or obfuscate algorithms with native code protectors.

In practice, native code within Android applications comes in the form of ELF shared libraries (“.so”); the native methods can then be called from Java using Java Native Interface (JNI), which we described in a previous blog post.

NDK Pre-Built Libraries

The Android NDK provides some pre-built libraries that can be linked against. For example, there are several C++ Standard Template Library (STL) implementations1, or the Zlib decompression library.

As an example, let’s compile a “hello world” Android NDK C++ library with NDK r17. By default, the C++ implementation will be gnustl — the default choice before NDK r18.

Here is the C++ code:
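The original listing is shown as an image; as a rough illustration only (invented class, method and body, not the actual sample code), a JNI method backed by gnustl might look like this:

#include <jni.h>
#include <sstream>
#include <string>
#include <vector>

// Hypothetical "hello world" routine exercising gnustl containers and streams.
extern "C" JNIEXPORT jstring JNICALL
Java_com_example_hello_MainActivity_helloFromCpp(JNIEnv* env, jobject /*thiz*/) {
    std::vector<std::string> words = {"Hello", "from", "C++"};
    std::ostringstream oss;
    for (const std::string& w : words)
        oss << w << ' ';
    return env->NewStringUTF(oss.str().c_str());
}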

When compiled with Android Studio’s default settings, libraries are linked dynamically, and libgnustl_shared.so is directly included in the application (because it is not a system library), for each supported Application Binary Interface (ABI).

Files hierarchy of the Android application containing our “hello world” native library

If we open the ARM library we can pretty easily understand the — already convoluted — logic of our “hello world” routine, thanks to the names of gnustl external API calls:

Control-flow graph of ARM “hello world” with gnustl dynamically linked. Note that JEB displays mangled names when API calls correspond to external routines.

Now, Android NDK also provides static versions for most of the pre-built libraries. A developer — especially a malware developer wishing to hinder analysis — might prefer to use those.

When compiled in static mode, the gnustl library is now ‘included’ in our native library, and here is our “hello world” routine:

Control-flow graph of ARM “hello world” with gnustl statically linked. Subroutines bear no specific names.

In this case, the analysis will be slowed down by the numerous routine calls with no specific names; each of these subroutines will need to be looked at to understand the overall purpose.

This brings us to a common reverse-engineering problem: is there a way to automatically identify and rename static library code, such that the analyst can focus on the application code?

JEB3 NDK Signatures

That’s when JEB native signatures come to the rescue! Indeed, JEB3 now provides signatures for the following Android NDK static libraries:

  • gnustl
  • libc++
  • STLport
  • libc
  • libmath
  • zlib

We provide signatures for ARM/ARM64 ABIs (including all variants like arm-v7a, arm-v7a-hard, thumb or ARM mode, etc) of these libraries, from NDK r10 to NDK r18.

These signatures are built in a similar fashion to our x86/x64 Visual Studio native signatures, and are intended to be “false-positive free”, which means a match should be blindly trustable. Note that JEB users can create their own signatures directly from the UI.

So, within JEB, if we open our statically-linked library with the signatures loaded, gnustl library routines are identified and renamed:

Control-flow graph of ARM “hello world” with gnustl statically linked and NDK signatures loaded. Subroutines have been renamed.

Note: the attentive reader might have noticed some “unk_lib_subX” routines in the previous image. Those names correspond to cases where several library routines match the routine. The user can then see the conflicting names in the target routine and use the most suitable one.

Due to the continuous evolution of compilers and libraries, it is not an easy task to provide up-to-date and useful signatures, but we hope this first NDK release will help our users. Nevertheless, more libraries should certainly be signed in the future, and we encourage users to comment on that  (email, Twitter, Slack).

  1. NDK C++ support is a turbulent story, to say the least. Historically, different implementations of C++ have been provided with the NDK (gnustl, STLport, libc++, …), each of them coming with a different set of features (exception handling, RTTI…). Since the very recent r18 version (released in September 2018), Android developers must now use only libc++.

JEB3 Auto-Signing Mode

In this video we introduce a novel JEB 3.0 feature: auto-signing mode for native code.

In a nutshell, when this mode is activated, all modifications made by users to native code in JEB (renaming a routine, adding a comment, etc.) are “signed”.

The newly created signatures can then be loaded against another executable, and all the information of the original analysis will be imported if the same code is recognized. Therefore, the user only needs to analyze each routine once.

Without further ado, here is the video, which begins by introducing native signatures before showcasing auto-signing:

As usual, feel free to reach out to us (email, Twitter, Slack) if you have questions or suggestions.

Dynamic JNI Detection Plugin

Update (Nov 29): the plugin was open-sourced on our GitHub repository. JEB 3.0.7+ is required to load and run it.

Java applications can call native methods stored in dynamic libraries via the Java Native Interface (JNI) framework. Android apps can do the same: developers can use the NDK to write their own .so library to use and distribute.

In this post, we briefly present how the binding mechanisms work, allowing a piece of bytecode to invoke native code routines.

Named Convention Method

The easiest way to call a native method is as follows:

In Java, class com.example.hellojni.HelloJni:

In C:
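(The original post shows the code as screenshots. As a stand-in, here is roughly what the standard NDK hello-jni pair looks like; the Java declaration is reproduced as a comment, the native side is written with the C++ flavor of the JNIEnv API, and the returned string is assumed.)

#include <jni.h>

// Java side, in class com.example.hellojni.HelloJni:
//     public native String stringFromJNI();

// Native side: the exported symbol name encodes "Java_" + package + class + method,
// which is what allows the automatic resolution and binding described below.
extern "C" JNIEXPORT jstring JNICALL
Java_com_example_hellojni_HelloJni_stringFromJNI(JNIEnv* env, jobject thiz) {
    return env->NewStringUTF("Hello from JNI!");
}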

The native method name adheres to the standard JNI naming convention, allowing automatic resolution and binding.

The corresponding Dalvik bytecode is:

and here are the corresponding ARM instructions:

JEB automatically binds those methods together, to allow easy debugging from bytecode to native code.

However, there is another way to bind native code to Java.

Dynamic JNI Method

One can decide to bind any function to Java without adhering to the naming convention, by using the JNIEnv->RegisterNatives method.

For example, the following line of code dynamically binds the Java method add(II)I to the native method add():
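The exact code is shown as a screenshot in the original post; a minimal sketch of such a registration, with the surrounding JNI_OnLoad boilerplate and the class name assumed (C++ JNIEnv syntax), could look like this:

#include <jni.h>

// Native implementation that will back the Java method add(II)I.
static jint add(JNIEnv* env, jobject thiz, jint a, jint b) {
    return a + b;
}

// JNINativeMethod entries: Java method name, JNI signature, bound native function.
static const JNINativeMethod gMethods[] = {
    {"add", "(II)I", reinterpret_cast<void*>(add)},
};

extern "C" JNIEXPORT jint JNICALL JNI_OnLoad(JavaVM* vm, void* /*reserved*/) {
    JNIEnv* env = nullptr;
    if (vm->GetEnv(reinterpret_cast<void**>(&env), JNI_VERSION_1_6) != JNI_OK)
        return JNI_ERR;
    // Hypothetical class name; the actual target class depends on the application.
    jclass clazz = env->FindClass("com/example/MyClass");
    env->RegisterNatives(clazz, gMethods, 1);   // the dynamic binding itself
    return JNI_VERSION_1_6;
}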

Due to the dynamic nature of this mechanism, statically resolving those bindings can prove difficult in practice, e.g. if names were removed or mangled, or if the code is obfuscated. Therefore, not all calls to RegisterNatives may be found and/or successfully processed.

However, JEB 3.0-beta.2 (to be released this week) ships with an EnginesPlugin to heuristically detect some of these methods and perform the binding; and of course, you will also be able to debug into them.

Execute the plugin via the File, Plugins menu

Once run, it will:

  • annotate the dex code with the target addresses:

  • rename targets (prefixing names with __jni_):

  • enable you to seamlessly debug into them (jump from Java to this JNI method)

 

Heuristics

As of this writing, the plugin uses several heuristics, implemented for ARM and ARM64 (Aarch64):

  • The first is the simplest one: the JNIEnv->RegisterNatives method is commonly called from the standard JNI initialization function JNI_OnLoad, so JEB searches for this method and attempts to find calls to RegisterNatives.

Once the ‘BL RegisterNatives‘ is found, JEB uses the decompiler to create an IR representation of the block, and determines the values of R2 and R3 (X2 and X3 on Aarch64). R3 indicates the number of native methods to register; R2 is a pointer to the array of JNI native method structures (each with a pointer to the method name, a pointer to the method signature, and a pointer to the bound native function):

Even if accurate, this method does not work when the branch is issued via a register (BL R4) or when the method name is hidden.

  • The second heuristic is based on method names. First, in the Dalvik bytecode, we search for all invocations of native methods. Then, for each method found, we search the binaries for a String reference matching the method name. (This heuristic is dangerous but yields decent results. A future plugin update may allow users to disable it.)

If found, the plugin looks at the cross-references of this String and checks whether the referencing data looks like the expected JNI structure.

  • The third and last heuristic is the same as the previous one, but based on arguments. Since names can be shortened, they may not be interpreted as String, and thus not referenced, whereas it is easier to find argument signatures.

These three heuristics only work when the method table is defined as a static array variable. Dynamically-built tables would need some emulation of the JNI_OnLoad method to be resolved.

As you can see, detection is currently based on heuristics, so obfuscated methods may be missing. Feel free to tweak and improve the plugin, it is available on our GitHub repository. As usual, feel free to reach out to us (email, Twitter, Slack) if you have questions or suggestions.

Having Fun with Obfuscated Mach-O Files

Last week saw the release of JEB 2.3.7, with a brand new parser for Mach-O, the executable file format of Apple’s macOS and iOS operating systems. This file format, like its cousins PE and ELF, contains a lot of technical peculiarities, and implementing a reliable parser is not a trivial task.

During the journey leading to this first Mach-O release, we encountered some interesting executables. This short blog post is about one of them, which uses some Mach-O features to make reverse-engineering harder.

Recon

The executable in question belongs to a well-known adware family dubbed InstallCore, which is usually bundled with other applications to display ads to users.

The sample we will be using in this post is the following:

57e4ce2f2f1262f442effc118993058f541cf3fd: Mach-O 64-bit x86_64 executable

Let’s first take a look at the Mach-O sections:

Figure 1 – Mach-O sections
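For readers who would like to reproduce this section listing outside of JEB, here is a rough Python sketch parsing a 64-bit little-endian Mach-O with the standard struct module (layouts follow Apple’s mach-o/loader.h; the target file is passed on the command line):

#!/usr/bin/env python3
# Hedged sketch: dump the segment/section list of a 64-bit little-endian Mach-O.
import struct, sys

LC_SEGMENT_64 = 0x19
MH_MAGIC_64 = 0xFEEDFACF

def list_sections(path):
    data = open(path, "rb").read()
    magic, cputype, cpusubtype, filetype, ncmds, sizeofcmds, flags, reserved = \
        struct.unpack_from("<8I", data, 0)
    assert magic == MH_MAGIC_64, "sketch only handles 64-bit little-endian Mach-O"
    off = 32  # sizeof(mach_header_64)
    for _ in range(ncmds):
        cmd, cmdsize = struct.unpack_from("<2I", data, off)
        if cmd == LC_SEGMENT_64:
            segname = struct.unpack_from("<16s", data, off + 8)[0].rstrip(b"\0").decode()
            nsects = struct.unpack_from("<I", data, off + 64)[0]
            sect = off + 72  # sizeof(segment_command_64)
            for _ in range(nsects):
                sectname = struct.unpack_from("<16s", data, sect)[0].rstrip(b"\0").decode()
                addr, size = struct.unpack_from("<2Q", data, sect + 32)
                print("%-16s %-20s addr=0x%x size=0x%x" % (segname, sectname, addr, size))
                sect += 80  # sizeof(section_64)
        off += cmdsize

if __name__ == "__main__":
    list_sections(sys.argv[1])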

Interestingly, there are some sections related to the Objective-C language (“__objc_…”). Roughly summarized, Objective-C was the main programming language for OS X and iOS applications prior to the introduction of Swift. It adds some object-oriented features on top of C, and it can be difficult to analyze at first, in particular because of its way of calling methods by “sending messages”.

Nevertheless, the good news is that Objective-C binaries usually come with a lot of metadata describing methods and classes, which the Objective-C runtime uses to implement message passing. This metadata is stored in the “__objc_…” sections previously mentioned, and the JEB Mach-O parser processes it to find and properly name Objective-C methods.

After the initial analysis, JEB leaves us at the entry point of the program (the yellow line below):

Figure 2 – Entry point

Wait a minute… there is no routine here and it is not even correct x86-64 machine code!

Most of the detected routines do not look good either; first, there are a few Objective-C methods with random-looking names like this one:

Figure 3 – Objective-C method

Again the code makes very little sense…

Then come around 50 native routines, whose code also clearly cannot be executed “as is”, for example:

Figure 4 – Native routine

Moreover, there are no cross-references to any of these routines! Why would JEB’s disassembler engine (which follows a recursive algorithm combined with heuristics) even think there are routines here?!

Time for a Deep Dive

Code Versus Data

First, let’s deal with the numerous unreferenced routines containing no correct machine code. After some digging, we found that they are declared in the LC_FUNCTION_STARTS Mach-O command, “command” being the Mach-O term for an entry in the file header.

This command provides a table containing the function entry points of the executable. It allows debuggers, for example, to know function boundaries even without symbols. At first, this may seem like a blessing for program analysis tools, because distinguishing code from data in a stripped executable is usually a hard problem, to say the least. Hence JEB, like other analysis tools, uses this command to enrich its analysis.
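As a side note, the table itself is, to our understanding, stored as a zero-terminated sequence of ULEB128-encoded deltas accumulating into function start offsets within the __TEXT segment. A small decoding sketch, where blob stands for the raw bytes referenced by the command:

# Hedged sketch: decode an LC_FUNCTION_STARTS blob (ULEB128 deltas, zero-terminated).
# 'blob' is the raw bytes referenced by the command; offsets are understood to be
# relative to the __TEXT segment, so pass its base address to get absolute values.
def decode_function_starts(blob, text_base=0):
    starts, addr, i = [], text_base, 0
    while i < len(blob):
        delta, shift = 0, 0
        while True:                       # ULEB128: 7 data bits per byte, MSB = continue
            b = blob[i]; i += 1
            delta |= (b & 0x7F) << shift
            shift += 7
            if not (b & 0x80):
                break
        if delta == 0:                    # a zero delta terminates the table
            break
        addr += delta
        starts.append(addr)
    return starts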

But this gift from Mach-O comes with a drawback: nothing prevents miscreants from declaring function entry points where there are none, and analysis tools will end up analyzing random data as code.

In this binary, none of the routines declared in the LC_FUNCTION_STARTS command are actually executable. Knowing that, we can simply remove the command from the Mach-O header (i.e. nullify the entry) and ask JEB to re-analyze the file, to ease the reading of the disassembly. We end up with a much shorter routine list:

Figure 5 – Routine list
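For readers who want to reproduce the patch described above, here is a minimal sketch that nullifies the entry by zeroing the dataoff/datasize fields of the LC_FUNCTION_STARTS command; it assumes a 64-bit little-endian Mach-O, and the input/output paths are placeholders:

#!/usr/bin/env python3
# Hedged sketch: zero out the LC_FUNCTION_STARTS data pointer of a 64-bit
# little-endian Mach-O so that the bogus function table is ignored.
import struct, sys

LC_FUNCTION_STARTS = 0x26
MH_MAGIC_64 = 0xFEEDFACF

def neutralize_function_starts(path_in, path_out):
    data = bytearray(open(path_in, "rb").read())
    header = struct.unpack_from("<8I", data, 0)   # mach_header_64
    magic, ncmds = header[0], header[4]
    assert magic == MH_MAGIC_64, "sketch only handles 64-bit little-endian Mach-O"
    off = 32  # sizeof(mach_header_64)
    for _ in range(ncmds):
        cmd, cmdsize = struct.unpack_from("<2I", data, off)
        if cmd == LC_FUNCTION_STARTS:
            # linkedit_data_command layout: cmd, cmdsize, dataoff, datasize
            struct.pack_into("<2I", data, off + 8, 0, 0)   # dataoff = datasize = 0
        off += cmdsize
    open(path_out, "wb").write(data)

if __name__ == "__main__":
    neutralize_function_starts(sys.argv[1], sys.argv[2])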

The remaining routines are mostly Objective-C methods declared in the metadata. Once again, nothing prevents developers from forging this metadata to declare method entry points in data. For now, let’s keep those methods and focus on a more pressing question…

Where Is the Entry Point?

The entry point value used by JEB comes from the LC_UNIXTHREAD command contained in the Mach-O header, which specifies a CPU state to load at startup. How could this program even be executable if the declared entry point is not correct machine code (see Figure 2)?

Surely, there has to be another entry point which is executed first. There is one indeed, and it has to do with the way the Objective-C runtime initializes classes. An Objective-C class can implement a method named “+load” (the + means it is a class method, rather than an instance method), which will be called during the executable’s initialization, that is, before the program’s main() function is executed.

If we look back at Figure 5, we see that among the random-looking method names there is one class with this famous +load method, and here is the beginning of its code:

Figure 6 – +load method

Finally, some decent looking machine code! We just found the real entry point of the binary, and now the adventure can really begin…

That’s it for today, stay tuned for more technical sweetness on JEB blog!

Firmware exploitation with JEB part 3: Reversing the SmartRG’s sr505n

For the final blog post of this series (part 1, part 2), let’s reverse a real router firmware. First off, no 0-days or security-sensitive information will be disclosed in this blog post, but if you have a contact at SmartRG, let us know.

To be able to reverse easily and test my findings, I wanted a MIPS router that was still in use, that had a public firmware update I could dig into, and that was relatively cheap. I began with the ZyXel NBG6716 by downloading the firmware update from their website and buying one on Amazon. Sadly, I received the wrong model, so I decided to try another approach.

Interestingly enough, the router I personally own met all my criteria, and some locals were selling it cheaply on the internet, which allowed me to avoid bricking my own device. This is where the sr505n comes into play. I encourage you to follow this blog post by looking at the firmware update while reading; here is where you can download it.

Static analysis

Extraction and file system

The `file` command tells us that the firmware update is plain data, but let’s see what binwalk thinks of it:

binwalk CA_PBCA_2.5.0.14_698450e_sr505n_cfe_fs_kernel 

DECIMAL       HEXADECIMAL     DESCRIPTION
--------------------------------------------------------------------------------
0             0x0             Broadcom 96345 firmware header, header size: 256, firmware version: "68", board id: "63168MBV_17AZZ", ~CRC32 header checksum: 0x64494342, ~CRC32 data checksum: 0xACF56C90
14308         0x37E4          LZMA compressed data, properties: 0x6D, dictionary size: 4194304 bytes, uncompressed size: 230336 bytes
61648         0xF0D0          Squashfs filesystem, little endian, non-standard signature, version 4.0, compression:gzip, size: 6672344 bytes, 1060 inodes, blocksize: 65536 bytes, created: 2017-05-31 18:49:24
[...]

Let’s rerun binwalk with the `-e` switch to extract the squashfs file system and begin reversing the firmware binaries. One thing to note here is that squashfs is read-only, but we can still write to other mounted file systems, as we can see here:

# mount
rootfs on / type rootfs (rw)
/dev/root on / type squashfs (ro,relatime)
proc on /proc type proc (rw,relatime)
tmpfs on /var type tmpfs (rw,relatime,size=420k)
tmpfs on /mnt type tmpfs (rw,relatime,size=16k)
sysfs on /sys type sysfs (rw,relatime)

We have the usual file system structure, so let’s head to /bin and /sbin. A good portion of the binaries are links to busybox, but the majority are real ELF binaries. Interestingly, there is a /lib/private directory where proprietary libraries seem to be stored, as we can confirm from the proprietary binaries linked against them.

Binaries

At first, I found myself flooded with binaries to reverse, some with helpful names and some without. So I had the idea of creating a simple plugin to kick-start the research (whether for vulnerability research, malware analysis or other reverse-engineering tasks) by taking some user-selected function names (or sub-strings of names) and building a list of which binaries call them and where. Let’s see an example:

I chose memory-sensitive functions as well as networking functions to identify binaries that deal with user input (possibly without requiring authentication). For example, the `smd` binary is the service manager daemon, and caCaptivePortal has the functionality its name implies.

If you want to use that plugin, here is the repo. You’ll need to copy the `functionList.json` file (or create your own) into ${JEB_HOME}/bin/cl/ for it to work properly. Specify the functions that interest you and add all the artifacts you want to search.
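The plugin itself uses the JEB API, but the underlying idea is easy to illustrate outside of JEB. Here is a rough standalone sketch (not the plugin’s code) that uses the pyelftools package to report which ELF binaries of an extracted root file system import any of a chosen set of functions; the watch-list below is only an example. This simplified version only looks at imports, whereas the actual plugin goes further and lists where each function is called.

#!/usr/bin/env python3
# Standalone illustration (not the JEB plugin): report which ELF binaries in an
# extracted rootfs import any of the functions we care about. Requires pyelftools.
import os, sys
from elftools.elf.elffile import ELFFile
from elftools.common.exceptions import ELFError

WATCHED = ["strcpy", "sprintf", "system", "recv", "read"]   # example watch-list

def imported_functions(path):
    try:
        with open(path, "rb") as f:
            elf = ELFFile(f)
            dynsym = elf.get_section_by_name(".dynsym")
            if dynsym is None:
                return []
            # undefined dynamic symbols are imports resolved at load time
            return [s.name for s in dynsym.iter_symbols()
                    if s.name and s["st_shndx"] == "SHN_UNDEF"]
    except (OSError, ELFError):
        return []

def scan(rootfs):
    for dirpath, _, filenames in os.walk(rootfs):
        for name in filenames:
            path = os.path.join(dirpath, name)
            imports = imported_functions(path)
            hits = [w for w in WATCHED if any(w in imp for imp in imports)]
            if hits:
                print("%s: %s" % (path, ", ".join(hits)))

if __name__ == "__main__":
    scan(sys.argv[1])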

There are some other things you will quickly notice if you analyze the firmware. Each user has a simple, hardcoded password; I can confirm that ISPs seem to change those, though in my experience for passwords that are just as simple and hardcoded. I grabbed the latest firmware I could find, and while there might be newer ones, some of the software needs updating, as the /tmp/bootupmessages file reveals:

# cat tmp/bootupmessages 
<5>Linux version 2.6.30 (root@cpebuild.smartrg.local) (gcc version 4.4.2 (Buildroot 2010.02-git) ) #1 SMP PREEMPT Mon May 18 13:51:47 PDT 2015

You will also see some interesting in-house memory management functions if you analyze the binaries.

Go grab a copy of our trial, reverse some binaries and share your findings with us!

Dynamic Analysis

One thing that will help you along the way is being able to upload binaries to the device and run them. The way I did it was to set up a web server on my computer and `wget` the statically-linked binaries into /var or /tmp (as /var is a tmpfs as well, there is not much difference between the two); a minimal sketch of the serving side is shown after the netstat output below. I took one trick from this great presentation, which mentioned that you can upload your own busybox binary to break out of the limits imposed by the default busybox binary inside the firmware. For example, the `netstat` utility (which is not among the applets of the device’s default busybox) becomes useful when you want to assess the possible attack vectors:

# ./busybox-mips netstat -tunlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:44401 0.0.0.0:* LISTEN 259/smd
tcp 0 0 0.0.0.0:30103 0.0.0.0:* LISTEN 1031/caCaptivePorta
tcp 0 0 0.0.0.0:5431 0.0.0.0:* LISTEN 1240/upnp
tcp 0 0 127.0.0.1:5916 0.0.0.0:* LISTEN 799/acsd
tcp 0 0 :::80 :::* LISTEN 259/smd
tcp 0 0 :::21 :::* LISTEN 259/smd
tcp 0 0 :::30005 :::* LISTEN 259/smd
tcp 0 0 :::22 :::* LISTEN 259/smd
tcp 0 0 :::23 :::* LISTEN 259/smd

One other thing you’ll want to have is a statically-linked gdbserver. You can find one online or build a newer one with buildroot, and connect to it from JEB, for example.

QEMU

You can of course emulate the binaries, as I did for the DVRF challenges, but I found it hard to recreate the whole environment with its running daemons. Make sure you reproduce the exact same behaviour as the real device, with all the required files created at boot.

Further analysis

I did find what seems to be a UART pinout and want to play with that as well (OpenOCD might become useful for later analysis). More binaries and shared libraries need to be checked, and even ARM devices will become interesting since we released the alpha version of the ARM decompiler in the latest update.

And those were the tips and tools I wanted to share! If you want to see some MIPS memory corruption, head over to the previous blog posts for more.