Using Binject

Binject is a sweet multipart library, made up of several tools for code-caving and backdooring binaries via Go. The project was originally inspired as a rewrite of the Backdoor Factory in Go, and now that it's functional, this post will show you how to use it. We are going to explore how you can use the library operationally for a number of tasks. We will start with an example of using some of the command line tools included with the project for the arbitrary backdooring of files. Next we will look at using the library to backdoor a file programmatically. Finally, we will use the bdf caplet with bettercap to backdoor binaries on-the-fly as they are transmitted over the network. I want to give a shout out to the homie Vyrus, as a lot of this was inspired by him, but in non-public projects, so I can't link to his stuff. I also want to give a shoutout to Awgh, as he's been an awesome mentor and powerhouse in implementing a lot of the Binject features. Below you can see the binjection command line tool being used to backdoor an arbitrary Windows PE, on Linux. In the next section we will explore some of the command line features of Binject.

Using the command line tools included with Binject is pretty straightforward; the main library, Binject/binjection, contains a command line interface tool that exposes all of the existing functionality for backdooring files on macOS, Windows, and Linux. Above we can see go-donut being used to turn a gscript program into position-independent shellcode, then the binjection command line tool being used to backdoor a Windows PE (a .exe file), all on a Linux OS. The binjection CLI tool takes three main command line flags: "-f" to specify the target file to backdoor, "-s" to specify a file containing your shellcode in raw bytecode format, and "-o" to specify where to write your new backdoored file. Optionally, you can give a "-l" to write the output to a logfile instead of standard out. You can also specify the injection method to use, although the tool currently supports only a very limited, mostly default set. The binjection CLI tool will automatically detect the executable type and backdoor it accordingly. Another library and command line tool included with the framework is Binject/go-donut, which is essentially a port of TheWover/donut. We can see this being used above to prepare another program to be embedded in our target executable. I really like both of these command line tools because it's easy to cross compile them for Linux or macOS, giving me a really convenient way to generate my target shellcode regardless of what OS I'm operating from. Having the entire toolchain in Go lets me easily move my tools to whatever operating system I'm working from, or use them all together in the same codebase. Even if you're not familiar with Go, you can just as easily compile the CLI tools and script them together with something like bash or PowerShell. Below we can see the binjection CLI tool being used to backdoor ELF executables on Linux.
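Since binjection picks an injection method based on the target's format, that automatic detection comes down to checking magic bytes at the front of the file. Here's a self-contained sketch in Go of that kind of check; this is a simplification for illustration, not binjection's actual code:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// DetectFormat is a simplified illustration of the kind of magic-byte check a
// tool like binjection performs before choosing an injection method.
// It is NOT binjection's actual implementation.
func DetectFormat(b []byte) string {
	switch {
	case bytes.HasPrefix(b, []byte{0x7f, 'E', 'L', 'F'}):
		return "ELF"
	case bytes.HasPrefix(b, []byte("MZ")):
		// DOS/MZ stub; a fuller check would follow e_lfanew to the "PE\x00\x00" signature
		return "PE"
	case len(b) >= 4 && binary.LittleEndian.Uint32(b) == 0xfeedfacf:
		return "Mach-O 64-bit"
	case len(b) >= 4 && binary.LittleEndian.Uint32(b) == 0xfeedface:
		return "Mach-O 32-bit"
	case len(b) >= 4 && binary.BigEndian.Uint32(b) == 0xcafebabe:
		// Fat headers are stored big-endian; note Java class files share this magic
		return "Mach-O Universal/Fat"
	default:
		return "unknown"
	}
}

func main() {
	fmt.Println(DetectFormat([]byte{0x7f, 'E', 'L', 'F', 2, 1, 1})) // ELF
	fmt.Println(DetectFormat([]byte{'M', 'Z', 0x90, 0x00}))         // PE
	fmt.Println(DetectFormat([]byte{0xca, 0xfe, 0xba, 0xbe}))       // Mach-O Universal/Fat
}
```

Real tools look past the first four bytes (the 0xcafebabe magic alone is ambiguous, for example), but this is the gist of how a single CLI can dispatch to the right backdooring logic per format.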

Using binjection programmatically as a Go library is also super simple, and arguably far more useful, because you can now integrate it into so many more projects. The library calls are just as straightforward: basically a single function call, depending on the binary type you're backdooring. Here we can see it as a standalone example for others to use. We can also see it being implemented here for Windows in Sliver, a Go-based C2 framework with tons of features. We can also use binjection in gscript, although it requires this embarrassingly small shim interface. This is insanely powerful functionality to be able to ship in an implant binary, as the implant can now backdoor legitimate, already-persisted binaries on the target system. You can even break down the supporting libraries and use other parts of Binject, like Binject/debug, as a triage tool, which we demonstrate with bintriage. Finally, to bring the project full circle, Binject has been integrated with bettercap for the on-the-fly backdooring of files on the network. It currently accomplishes this using bettercap's ARP spoofing module, the network proxy module, and a helper tool to manage the file queue, making the whole process really clean. Using the integration is easy with the Binject/backdoorfactory helper tool. Simply follow these usage instructions, which just involve installing all of the necessary prerequisite tools, and then Binject/backdoorfactory will spit out the caplet and command you need for bettercap. You can see a demo of all of this together in the video at the end. So now you have a pretty good idea of some different ways you can use Binject. We also encourage people to submit pull requests to the library with new injection methods, or to further enumerate the executable types. There is still a lot of work to be done here, but you can already use the library to great effect.

Back to the Backdoor Factory

backdoorfactory setting up the man-in-the-middle with bettercap and injecting a binary inside of a tar.gz as it’s being downloaded by wget (courtesy of sblip)

Backdoor Factory documentation

Backdoor Factory source code

About six years ago, during a conversation with a red teamer friend of mine, I received "The Look". You know the look I'm talking about. It's the one you keep reading every PoC and threat feed and hacker blog trying to avoid. The look that says "What rock have you been under, buddy? Literally everyone already knows about this."

In this case, my transgression was my ignorance of The Backdoor Factory.

The Backdoor Factory was released by Josh Pitts in 2013 and took the red teaming world by storm. It let you set up a network man-in-the-middle attack using ettercap and then intercept any files downloaded over the web and inject platform-appropriate shellcode into them automatically.

Man-in-the-Middle Attack Using ARP Spoofing

In the days before binary signing was widely enforced and wifi security was even worse than it is now, this was BALLER. People were using this right and left to intercept installer downloads, pop boxes, and get on corpnet (via wifi) or escalate (via ARP). It was like a rap video, or that scene in Goodfellas before the shit hits the fan.

But nothing lasts forever. Operating systems made some subtle changes and entropy took over, and so the age of The Backdoor Factory came to an end. Some time later, the thing actually stopped working and red teamers sadly packed up their shit and lumbered off to the fields of Jenkins.

Fear not, gentle reader, for our tale does not end here.

For some reason, a year and change back, I found myself once again needing something very much like The Backdoor Factory and stumbled on this “end of support” announcement. Perhaps still motivated by my shameful ignorance years ago, I thought “maybe I owe this thing something for all the good times” and took a look into the code to see if something could be fixed easily.

No, no it couldn’t. Not at all. But the general design and the vast majority of the logic was all in there. It worked alongside ettercap to do ARP spoofing, then intercepted file downloads, determined what format they were, selected an appropriate shellcode if one was available, and then had a bunch of different configurable methods to inject shellcode into all binary formats.

…It’s just that it was heaps and heaps of prototype-grade Python and byte-banged files. I have heard a rumor, similar to On The Road, that the original version had been written in a single night. It clearly was going to take longer than that to port this to something maintainable, but… I mean… automatic backdooring of downloaded files! This needed to happen. This needed to be a capability that red teamers just had available in the future. Fuck entropy.

Around this time, I pitched the idea of an end-to-end rewrite to some others and we started a little group of enthusiasts.

For each of the abstract areas of functionality from the original, we made a separate Go library. The shellcode repository functions went into shellcode. The logic that handles how to inject shellcode into different binary formats went into binjection. To replace the binary parsing and writing logic, we forked the standard Golang debug library, which already parsed all binary formats, and we simply added the ability to write modified files back out.

This gives us a powerful tool to write binary analysis and modification programs in Go. All of these components work together to re-implement the original functionality of BDF, but since they’ve been broken into separate libraries, they can be re-used in other programs easily.

Finally, to replace the ailing ettercap, we used bettercap, the new Golang replacement, which supports both ARP spoofing and DNS poisoning attacks. bettercap allows for extension “caplet” scripts using an embedded Javascript interpreter, so we wrote the Binject caplet that intercepts file downloads and sends them to our local backdoorfactory service for injection via a named pipe and then passes the injected files along to the original downloader.

The flow of a file through the components of the Backdoor Factory, on its journey to infection

Injection methods have been updated to work on current OS versions for Mach-O, PE, and ELF formats, and will be much easier to maintain in the future, since they’re now written to an abstract “binary manipulation” API.

To put a little extra flair on it, we’ve added the ability to intercept archives being downloaded, decompress them on the fly, inject shellcode into any binaries inside, recompress them, and send them on. Just cuz. In the future, we’re planning on adding some extra logic to bypass signature checks on certain types of files and some other special handlers for things like RPMs.

Now, you will have to provide your own shellcode; backdoorfactory only ships with some test code. But if you're targeting Windows, I've also ported the Donut loader to Golang, so you can use go-donut to convert any existing Windows binary (EXE/DLL/.NET/native) into injectable, encrypted shellcode. It even has remote stager capabilities.

We fully intend to get into a lot more detail about how to use Donut and BDF in future posts, but don’t wait for us to get it together for some vaporware future blog post that may never come… You can try it yourself right now!

Mach-O Universal / Fat Binaries

This is part two of our series on Mach-O executables; in the first part we did a deep dive on the Mach-O file format. During the Apple-Intel transition, when Apple was moving from PowerPC to the Intel x86_64 architecture, Apple came out with the Mach-O Universal executable, a play on existing Fat binaries. This is one of the more interesting binary types I've ever encountered. The Fat / Universal binary format acts as a wrapper for multiple Mach-O binaries of different architectures and CPU types, meaning a single file can run on CPUs of multiple different architectures. Already having the ability to parse and write the Mach-O file structures, parsing the additional Fat headers is actually pretty simple, and we've added both parsing and writing support for this format in our binject/debug library. Originally this was to support third-party signing attacks such as those described in this set of posts. But legacy file types often contain many exploitable properties, and this file type may even be coming back, as Apple is rumored to be moving from Intel CPUs to ARM CPUs in the future, so let's dig into the file format and learn some more.

Universal Binary Overview
A high-level overview of a Fat / Universal Mach-O file is actually pretty simple, as the Fat headers simply act as a wrapper for the multiple Mach-O files contained within. While these can hold any number of Mach-O files, on macOS they are most often found with two, being used to bridge CPU transitions. Below we dive into what these Fat headers actually contain, but suffice it to say it's not much. Here is an example of a Fat file structure that contains two Mach-O structures within one Universal file:

An example of the Fat / Universal Mach-O binary format

Universal Header
The Fat Mach-O file header starts with some special magic bytes, then moves on to a 4-byte arch count; this count gives the number of entries in the next dynamically sized section, the arch headers array, which holds one descriptor for each Mach-O structure contained within the Fat binary. Each arch header contains the same 5 properties, each 4 bytes in length. First are the cpu and subcpu fields, which specify the architecture and CPU type the corresponding Mach-O file is for; when a Universal binary is loaded into memory, these are checked to decide which of the contained Mach-O files should be loaded. Following those are the arch offset, arch size, and arch alignment: the offset and size specify where the corresponding Mach-O structure is located in the file and how large it is, while the alignment is a power-of-two exponent giving the boundary the structure must start on.
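Those fields can be parsed in a few lines. One detail worth calling out: the Fat header is stored big-endian regardless of the target CPUs. Here's a minimal sketch in Go against a synthetic header; this illustrates the layout and is not the Binject/debug implementation:

```go
package main

import (
	"encoding/binary"
	"errors"
	"fmt"
)

// FatArch mirrors the five 4-byte fields of one arch header descriptor.
type FatArch struct {
	CPUType, CPUSubtype uint32
	Offset, Size        uint32
	Align               uint32 // power-of-two exponent, e.g. 12 = page-aligned
}

// ParseFatHeader reads the Fat magic, the arch count, and the arch headers
// array. Fat headers are big-endian on disk. Sketch only.
func ParseFatHeader(b []byte) ([]FatArch, error) {
	const fatMagic = 0xcafebabe
	if len(b) < 8 || binary.BigEndian.Uint32(b) != fatMagic {
		return nil, errors.New("not a fat binary")
	}
	n := binary.BigEndian.Uint32(b[4:]) // number of arch descriptors
	archs := make([]FatArch, 0, n)
	for i := uint32(0); i < n; i++ {
		off := 8 + i*20 // each descriptor is 5 * 4 bytes
		if int(off)+20 > len(b) {
			return nil, errors.New("truncated fat header")
		}
		archs = append(archs, FatArch{
			CPUType:    binary.BigEndian.Uint32(b[off:]),
			CPUSubtype: binary.BigEndian.Uint32(b[off+4:]),
			Offset:     binary.BigEndian.Uint32(b[off+8:]),
			Size:       binary.BigEndian.Uint32(b[off+12:]),
			Align:      binary.BigEndian.Uint32(b[off+16:]),
		})
	}
	return archs, nil
}

func main() {
	// A synthetic Fat header describing one x86_64 slice at offset 0x1000.
	hdr := []byte{
		0xca, 0xfe, 0xba, 0xbe, // FAT_MAGIC
		0x00, 0x00, 0x00, 0x01, // arch count = 1
		0x01, 0x00, 0x00, 0x07, // cputype = CPU_TYPE_X86_64
		0x00, 0x00, 0x00, 0x03, // cpusubtype
		0x00, 0x00, 0x10, 0x00, // offset = 0x1000
		0x00, 0x00, 0x40, 0x00, // size
		0x00, 0x00, 0x00, 0x0c, // align = 2^12
	}
	archs, err := ParseFatHeader(hdr)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d arch(es), first at offset %#x\n", len(archs), archs[0].Offset)
	// → 1 arch(es), first at offset 0x1000
}
```

The 0-byte padding between the arch headers array and the first Mach-O slice falls out of the offset/align fields: each slice simply starts at its recorded (aligned) offset, and everything in between is left as zeroes.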

N Mach-O Structures
Following the Fat binary header, there is a Mach-O structure for each of the corresponding arch headers. Each Mach-O file starts at its specified offset, with large pads of 0 bytes between the headers and each respective Mach-O file. The Mach-O files themselves follow the exact structure outlined in the first post on Mach-Os, except for a slight difference in the reserved bytes (the i386 Mach-O binaries don't seem to have the four reserved 0-bytes after the header). Simply put, these are normal Mach-O files.

Conclusion
This is an interesting file format that is still around on modern macOS versions, and may even be coming back if Apple makes the move from Intel CPUs to ARM CPUs in the future. Ultimately, this is a pretty simple file format, but I think there is a lot of room for abuse here, which we plan to experiment with in our Binjection library! Stop by soon for more interesting low level file format posts!

ahhh
Latest posts by ahhh (see all)

So You Want To Be A Mach-O Man?

Probably one of the least understood executable file formats is Mach-O, short for the Mach object file format. It was originally designed for NeXTSTEP systems and later adopted by macOS. When researching this file format, a lot of the documents we encountered were misleading, and the ones we did find helpful were pretty old, prompting us to write this post. We've mirrored two docs (one and two) we found particularly helpful, although a little outdated. This post will not cover Universal binaries, otherwise known as Fat Mach-O binaries, which can hold multiple Mach-O objects for different CPU architectures.

Tooling
While there are limited tools for investigating Mach-O files, there are some important ones you will want in your toolbox. One of the most useful tools you can use for parsing and visualizing a Mach-O file is MachOExplorer. otool is also a very useful native tool when investigating these files. Finally, bintriage is an interface to our custom debug library and our command line utility for investigating these files.

Intro / Background
The Mach-O file format arrived on macOS by way of NeXTSTEP, which targeted CMU's Mach kernel design, and Steve Jobs winning his way back into Apple's good graces. The merger of the computing models, known as the Apple-Intel transition, gave way to multi-architecture Universal file formats, which will be the subject of a follow-up post. Out of those came the general Mach-O Intel 64-bit executable binary format for macOS. That Mach-O executable file is the modern executable on macOS (inside all Apps), and will be the subject of our deep dive today.

High-Level Overview of the File Structure
Mach-O files themselves are very easy to parse, as almost every field is 4 bytes or a multiple of 4 bytes in length. Only larger abstract structures within the file format have offsets and lengths which must be observed, and even fewer data structures have required alignments. If I were to draw a diagram of the file structure based on my interactions and understanding of critical components, it would look like:

High level view of the Mach-O file structure

Mach-O Header
The header is extremely important: it provides data such as the magic bytes, the cpu type, and the subcpu type, which indicate the architecture and exact CPU model the binary is for. There is also the type, a 4-byte field that says whether this is an object file, a dynamic library (dylib), or an executable Mach-O file. Next are the ncmd and cmdsz fields, indicating the number of load commands and the total size of the load commands. Then there is the flags field, which is 4 bytes of bitwise flags specifying special linker options. At the end of the header is a reserved 4-byte space.

Example of a Mach-O file header

Load Commands
Probably the most important part of the Mach-O file structure is the load commands. Our debug library parses all of the critical structures required to rebuild a Mach-O file; however, there are still several load commands we don't parse, but rather copy directly over. Each load command structure varies and needs to be parsed differently based on the type of load command. Some load commands are self-contained, telling the loader how to load dynamic libraries within the load command itself, whereas other load commands reference other structures of data contained within the file, such as sections and tables that get loaded into segments. Together, the load commands truly describe how the file is mapped into virtual memory.

Example of some Mach-O load commands

Segments with Sections
The segments described by the load commands follow, with their respective sections. This is where a lot of the machine-executable code, variables, and pointers to various functions from the program code live. Sections from the __TEXT segment and __DATA segment will almost always be present, while other segments and sections may be optional, depending on the libraries the binary calls and how it was compiled.

Example of some Mach-O sections

Optional Segments
Some segments are only present in certain Mach-O binary types, depending on whether the corresponding load commands are in place. An example of an optional segment would be the __DWARF segment and its sections, which are used in debugging. We will likely write more on DWARF sections in a later post, as similar optional debug data is often present in ELF files.

Example of an optional __DWARF segment in a Mach-O file

LinkEdit Segment
The __LINKEDIT segment is the final segment in most Mach-O binaries and contains a number of critical components, such as the dynamic loader info, the symbol table, the string table, the dynamic symbol table, the digital signature, and potentially even more. This is one of the most important segments, and it is unfortunately left out of a lot of documentation, but make no mistake: this segment is critical for the binary to run.

Example of some of the data in the LinkEdit Segment

Conclusion
Those are the major parts of the Mach-O file format as I understood them and worked with them to recreate working executable files. Stay tuned for a follow-up post covering Fat / Universal Mach-O binaries, which act as a wrapper for multiple Mach-O binaries of different architectures. I hope this information is useful; if there are questions or discussions, let's talk about them in the comments and forums! If I missed anything major, please let me know, and pull requests are definitely welcome on our Binject projects!


Introducing Symbol Crash

Welcome to the Symbol Crash repository of write-ups related to binary formats, injections, signing, and our group's various projects. This all started as a revamp of the Backdoor Factory techniques and a port to Go, so BDF functionality could be used as a shared library. It has since blossomed into something much more: a wellspring of cool research and a deep technical community. We recruited a number of passionate computer science and information security professionals along the way and decided to form this group to document our work. We also wanted to give back some of the neat things we were discovering and document some of the harder edge cases we came across in the process.

Most of our current projects live in the Binject organization on GitHub. We formed this blog mostly to discuss the nuances of these projects and our lessons learned from these deep dives.

Binject

We’ve divided the projects up into several libraries. These are as follows, with a short description of each:

debug
We have forked the debug/ folder from the standard library, to take direct control of the debug/elf, debug/macho, and debug/pe binary format parsers. To these parsers, we have added the ability to also generate executable files from the parsed intermediate data structures. This lets us load a file with debug parsers, make changes by interacting with the parser structures, and then write those changes back out to a new file.

shellcode
This library collects useful shellcode fragments and transforms for all platforms, similar to the functionality provided by msfvenom or internally in BDF, except as a Go library.

binjection
The Binjection library/utility actually performs an injection of shellcode into a binary. It automatically determines the binary type and format, and calls into the debug and shellcode libraries to actually perform the injection. Binjection includes multiple different injection techniques for each binary format.

bintriage
This utility is used as an extension of the debug library to provide more insight and debug information around specific binaries. It provides a verbose interface to enumerate and compare all of the features we parse in the binject/debug library.

forger
This is an experimental library to play with various binary specific code signing attacks.

We also plan to write a lot about low-level file formats such as the ELF, PE, and Mach-O formats in the coming months, so def stop by and follow the blog for those updates. Finally, we are always looking for new members who want to join us on this journey of bits and documentation 🙂 If this resonates with you, please reach out or comment.
