Tradecraft Orchestration in the Garden

What’s more relaxing than a beautiful fall day, a crisp breeze, a glass of Sangria, and music from the local orchestra? Of course, I expect you answered: writing position-independent code projects that separate capability from tradecraft. If you didn’t answer that way, you’re wrong.

In the last six months, Tradecraft Garden has covered a lot of ground. This project started as a linker to make it easier to write position-independent code DLL loaders and it has rapidly evolved into an Aspect Oriented Programming tool to weave tradecraft into PIC and PICO capabilities. It can still write position-independent code DLL loaders too (modular and self-incepting DLL loaders, at that)—but there’s a lot more potential here.

One gap in the current model is that while we can separate tradecraft from capability in C, the linker script forces us to define our architecture and tradecraft as one monolithic thing. Separating base architecture from tradecraft, within the specification files themselves, is the theme of this release.

Build Templates with %variables

This release introduces %variables into the Crystal Palace specification language. These are user-defined strings passed in at the beginning of the program. %variables are usable anywhere you would use a “string” argument in Crystal Palace’s specification language.

For example:

load %foo

This command will resolve %foo and load its contents. Easy enough, right?

We also have string concatenation with the <> operator. So:

load %foo <> ".x64.o"

This will resolve %foo and append .x64.o to it.

With %variables comes the need to better understand what a script is doing. The new echo command prints its arguments to STDOUT (CLI) or to a SpecLogger object (API):

echo "I am looking at: " %hooks

%variables evolve the Crystal Palace specification file language from a collection of commands into a build templating language. That is, a specification file may now define the architecture of a loader or capability with the information about specific tradecraft getting added later.

Simple Loader – Execution Guardrails uses these new features. The loader is now agnostic to the follow-on specification file it encrypts and packs. The %STAGE2 variable is a stand-in for that file. You can pair this stage with one of the other examples from the Tradecraft Garden (including a COFF loader or just straight PIC).
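For a sense of what that looks like, a base spec can reference the follow-on stage abstractly. A hypothetical two-liner using the commands above (the encryption and packing steps of the real example are omitted; %STAGE2 is supplied at link time):

```
echo "packing stage 2: " %STAGE2
load %STAGE2
```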

Populating %variables (“The Glue”)

For this separation of base architecture and specific tradecraft to work, we need a glue. That is, a means to specify %variables.

Specifying %variables via the Java API is easy. Put a “%variable” key into the environment map with a “string” value.

You may also specify %var="value" arguments via the ./link and ./piclink CLI tools.

If you want to keep a variable configuration in a file, add @file.spec to your program’s CLI. This will read file.spec and use its commands to populate variables before the main linker script is run. And yes, we have new commands to set %variables from a Crystal Palace .spec file.

setg sets a variable within the global scope for this build session:

setg "%bar" "0xAABBCCDD"

Crystal Palace has a concept of scope for variables too. You will almost always want global variables in these configuration files. But, local scope exists for variables that are visible only within the current label’s execution context. Use set to set a local variable:

set "%foo" "file.spec"

With any commands that set or update %variables, quote the variable name to prevent Crystal Palace from evaluating the variable to its contents before command execution.

And, pack is a way to marshal %variables (and other strings) into a $VARIABLE byte array.

pack $VAR "template" "arg1" %arg2 …

The Perl programmer in me can’t resist a good pack command. pack accepts a variable, template string, and list of arguments that correspond to the characters in the template string. Think of the template as a condensed C struct definition. Each character specifies how pack will interpret the argument string and what type it will marshal the value to.
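A purely hypothetical use, to make the shape concrete (the template characters and %c2host here are invented for illustration; consult the Crystal Palace documentation for the real template codes): marshal a 32-bit value and a string into $CONFIG:

```
pack $CONFIG "iz" "4096" %c2host
```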

While I see these commands as configuration tools, they are generic Crystal Palace commands. You can use them anywhere.

Layering and Chaining

While straight variable substitution is handy, sometimes we want to mix and match modules of an unknown quantity. That's where the next feature helps.

Crystal Palace’s foreach command expects a comma-separated list of items as its argument.

foreach %libs: mergelib %_

The foreach command walks the provided list and calls another Crystal Palace command with %_ set to the current element of the list. %libs needs to exist, but an empty value is OK. The goal of foreach is to act as a placeholder for layered tradecraft that’s dynamically brought into a base architecture.

The next command is a list shift-and-execute tool. next expects a "%variable" name argument, and it expects %variable to be a comma-separated list of values. If the list is empty, next does nothing. If the list is not empty, next removes %variable's first item and runs the specified command with %_ set to that element.

next “%NEXT”: run %_

The goal of next is to support chaining tradecraft together, giving cooperating modules a means to pass execution through each other.

Modular Specification File Contracts

Crystal Palace's main modularity tool is to create separate .spec files and use run to execute another file's commands. This release gives us more modularity options.

The run command now accepts positional arguments. And, it passes them on to the child script as %1, %2, %3, etc.

run "file.spec" "arg1" %arg2

The positional arguments of “file.spec” are local to that run of that file. If file.spec runs another .spec file, the positional arguments are local to that specification file’s runs. The local scope is necessary to prevent our .spec files from stepping on each other’s arguments.

This release also adds callable labels to specification files. Think of them as local runnable files, user-defined commands, or Crystal Palace functions. To create a callable label, use:

name.x64:

Then, list your commands like normal. That’s it. These labels are callable with a dot name syntax and accept positional arguments too:

.name "arg1" "arg2"

The nice thing about this feature is we now have a choice of whether to split a function into its own .spec file or let it co-exist inside the current .spec file. And, while this is nice, that's not why I chose to build this feature. The real payoff is the call command.

call runs a callable label from another specification file:

call "file.spec" "name" "arg1" "arg2"

call is an encapsulation tool. That is, our base specification defines the architecture of our capability or loader. %variables become the placeholder for our tradecraft implementation. And, callable labels and call let us present the pieces of a tradecraft in a single file, with a common interface (callable labels), that our base architecture expects to act on.
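Sketching that contract (the file reference, label names, and %STAGE variable are hypothetical; %variables are usable wherever a "string" argument is): a base spec leaves the tradecraft choice to a %variable and invokes the module's labels by convention:

```
echo "applying tradecraft from: " %hooks
call %hooks "init" "0xAABBCCDD"
call %hooks "apply" %STAGE
```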

Simple Loader – Hooking demonstrates this idea. It defines a base architecture for hooking a DLL and a modular contract for hooking tradecraft modules. XORhooks implements this contract. Stack Cutting is now a module that composes on top of this base loader. What's really cool? You can layer them together. See the Simple Loader – Hooking notes for more on this.

File Paths

One of the mundane, but important details in this scheme is file path resolution. Crystal Palace commands treat file paths as relative to the current .spec file. With %variables, things get messy, because we might specify a file path in one place (e.g., the CLI, @config.spec, an argument to run/call) and it gets used elsewhere. The resolve command brings some predictability to this.

resolve "%files"

resolve will walk through %files (assuming it's a comma-separated list of files), canonicalize each entry to a filename relative to the current .spec, and set %files to these full paths. This gives you control over when this path resolution happens and which .spec context it's relative to.

The -r CLI option will resolve file paths in a %key="value" argument provided on the command line.

Migration Notes

  1. The link and piclink shell scripts changed in this version. You’ll want to update to the latest from the Crystal Palace distribution or source archive.

  2. I’ve updated the CLI syntax for piclink and link to accept % and $ sigils for %VAR and $DATAKEY values specified on the command-line. The old behavior of KEY=[bytes] to set $KEY still works, but going forward, the documentation and other materials will use explicit sigils. You may want to update your scripts to use explicit sigils too.

  3. Several of the methods in crystalpalace.spec.LinkerSpec were deprecated. I cleaned up the API to make it easier to share arguments between a configuration LinkerSpec and the main LinkerSpec. https://tradecraftgarden.org/docs.html#javaapi

Closing Thoughts

This release introduces several commands and features that change how Crystal Palace specification files work together.

The combination of %variables and the ability to set them via the Java API or CLI allows our users to combine a base specification with specific tradecraft choices later on. This turns a base PIC into a re-usable component.

Callable labels and call allow us to encapsulate tradecraft modules into a single file with different touch points (e.g., initialization, apply hooks) available via a common convention. Pair this with foreach and we have a means to specify tradecraft modules in one %variable and use them at the right spots within the base project.

The goal of this release is to move Tradecraft Garden projects away from singular monolithic examples that each redefine everything, toward reusable components for composing tradecraft cocktails.

To see the full list of what’s new, check out the release notes.

Tradecraft Engineering with Aspect-Oriented Programming

It's 2025 and apparently, I'm still a Java programmer. One of the things I never liked about Java's culture, going back many years, was the tendency to hype frameworks that seemed to over-engineer everything. One of the hot trends I never understood was Aspect-Oriented Programming. I now have a healthy respect for AOP's core ideas, as there's real overlap with Tradecraft Garden's goals and the Crystal Palace effort.

Aspect-Oriented Programming

Aspect-Oriented Programming is a paradigm to add functionality to a base program that would normally require code in a lot of places, but to do it from one place. The classic example is logging. A base program might not have any logging built in at all. Aspect-Oriented Programming is a way to take this cross-cutting need and uniformly apply it to different points of interest in the program. In some AOP models (as we'll discuss here), it's possible to add features without changing the base program at all.

Source: Gall, Harald. Aspect-Oriented Programming: Based on the Example of AspectJ. Presentation Slides, University of Zurich. (Slides 8, 9).

Aspect-Oriented Programming ideas and design patterns are absolutely relevant to separating capability and tradecraft, whether in a use-case- and tool-agnostic way (as I'm working to do) or to make a C2 modular and flexible.

Here’s a quick run-down of Aspect-Oriented Programming vocabulary:

A cross-cutting concern is functionality one needs to apply in many places across a base program.

Join Points are specific instrumentation opportunities or places in the program one can attach an Advice to. These are the opportunities to get into the flow of the program.

An Advice is code that implements cross-cutting functionality.

A Pointcut is a mechanism to define (and even dynamically select) Join Points to insert Advice into. Pointcut refers to both the mechanism and a specific query to choose Join Points.

An Aspect is a module that packages Advices, Pointcut mechanisms, and Pointcut selectors.

The process of adding Aspects to a program is called Weaving. There are different methods of weaving. Some AOP frameworks process source code directly. Some managed-runtime AOP frameworks dynamically generate object proxies. Some rely on binary transformations, like what I'm doing with Crystal Palace. LLVM plugins that dynamically add tradecraft at compile time are AOP and Weaving too.

Hooking and other instrumentation techniques are a form of weaving. What separates AOP weaving from dynamic instrumentation like Frida is that AOP-woven functionality is part of production use, rather than a temporary and risky intrusion into running code.
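To make the vocabulary concrete, here's a minimal C sketch (my illustration, not Crystal Palace code): transfer() is the join point's target, logged_transfer() is the Advice, and re-pointing the caller stands in for Weaving, which a real weaver would do by rewriting source, bytecode, or binary.

```c
#include <stdio.h>
#include <string.h>

/* Base program: a function with no logging anywhere. */
int transfer(int amount) { return amount * 2; }

/* Advice: the cross-cutting logging concern, written once. */
char log_buf[64];
int (*proceed)(int) = transfer;        /* link back to the join point's target */

int logged_transfer(int amount) {
    snprintf(log_buf, sizeof(log_buf), "transfer(%d) called", amount);
    return proceed(amount);            /* continue into the original code */
}

/* Weaving: the call site now reaches the advice instead of the target.
   An AOP weaver rewrites call sites to achieve the same effect. */
int (*call_transfer)(int) = logged_transfer;
```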

Crystal Palace PIC and AOP

Crystal Palace’s PIC service modules are an example of Aspect-Oriented Programming. Every PIC needs to resolve functions and the specific method is often painfully tangled with the program. This is true for using (or awkwardly avoiding) global variables too. These are cross-cutting concerns.

In Crystal Palace, Win32 API resolutions are a PIC Join Point. dfr attaches function resolution advice (the resolver) to these join points. dfr demonstrates a selective Pointcut feature too. Crystal Palace dynamically selects DFR advice for each Win32 API using its module as criteria. The end result is that a PIC capability exists blissfully agnostic to the service module it's paired with. This is possible because Crystal Palace weaves these Advices into the program through its binary transformation framework.

My point in summoning all of this isn't to flex with arcane enterprise programming jargon from 20+ years ago. Rather, it's to show that there's a body of work on these ideas that pre-dates a niche profession's 2025 needs. And, to highlight that others used this paradigm to separate tightly coupled and disparately spread functionality into something architecturally clean, centralized, and swappable. This exists.

Attach, Redirect, Protect, Preserve, and Optout

In the recent Arranging the PIC Parterre, Daniel Duggan speculated:

“CP’s ability to weave helper functions inline with a merged capability provides a lot of exciting potential. A feature that I think would be really useful is changing how an API call is performed after it’s been resolved… but being able to rewrite DFR functions to push execution through a similar proxy, but without having to do explicit hooking, would be awesome.”

I fully agree. And today, I’ve got another update to Crystal Palace and Tradecraft Garden. This update brings weaving-enabled AOP primitives to PIC and PICOs.

One cross-cutting concern is Win32 API calls. Each Win32 API call is an opportunity to choose something equivalent, but stealthier, or to decorate the call with some sort of evasive action (e.g., prepping a fake call stack). To attach Advice (tradecraft) to a Win32 API call, we have the attach command:

attach "MODULE$Function" "_Hook1"

attach disassembles our program, finds references to MODULE$Function (M$F’er), and switches instructions and relocations to use _Hook1. This is weaving.

Crystal Palace applies these changes in a context-aware way. For example, M$F'er calls and references in _Hook1 are not updated. This means you can call M$F'er from _Hook1 and access the original functionality.

M$F’er advices stack too. Let’s add this command after the above:

attach "MODULE$Function" "_Hook2"

This attach updates _Hook1’s M$F’er reference to _Hook2. _Hook2’s use of the Win32 API is untouched. A _Hook3 would update _Hook2’s reference to the API. Crystal Palace tracks this chain and knows which functions get which hooks. Each hook decides whether to call the target API and continue the chain. Hooks execute in a first-attached first-executed order.
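As a C analog of that chain (function pointers stand in for the instruction and relocation rewriting attach actually performs; all names here are invented for illustration):

```c
/* Stand-in for MODULE$Function, the Win32 API being hooked. */
int api(int x) { return x + 1; }

/* Each hook reaches "the API" through its own reference; attach rewrites
   exactly these references while leaving the hook's own body alone. */
int (*hook1_api)(int) = api;
int hook1(int x) { return hook1_api(x * 10); }   /* decorate, then continue */

int (*hook2_api)(int) = api;
int hook2(int x) { return hook2_api(x); }        /* pass through untouched  */

/* The capability's reference to the API. */
int (*call_api)(int) = api;

void weave(void) {
    call_api  = hook1;   /* attach "MODULE$Function" "_Hook1" */
    hook1_api = hook2;   /* attach "MODULE$Function" "_Hook2": hook1's API
                            reference is re-pointed at hook2; hook2's own
                            API reference stays untouched. */
}
```

Calling through call_api now runs hook1 first, then hook2, then the API: first-attached, first-executed.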

Crystal Palace’s model to add hooks to a capability is to merge the hooks COFF with the capability COFF. That’s the merge verb introduced a few releases ago.

The updated Stack Cutting project in the Tradecraft Garden demonstrates attach. This IAT hooking demo now uses attach to incept some of the PIC loader’s Win32 API calls and push them through the stack cutting call proxy. I mean, why not?

redirect is another primitive, similar to attach. redirect targets the program’s local function calls:

redirect "function" "_hook"

The semantics of redirect are like attach. Calls and references to the target function within the hook aren’t touched. And, hooks stack too. Calls to function() within _hook and its successors execute the rest of the chain.

While redirect is awesome, don't use it in place of the obvious. If you wish to lay functionality over a base program with a 1:1 mapping, good ol' fashioned modular C is the right tool. Declare an undefined function in a base module, use it, and merge the chosen implementation later.

One design pattern redirect enables is Chain of Responsibility. Imagine a C2 agent ships with a processinject() function whose implementation just prints an "I failed" error. With redirect, an end-user could stack processinject() advices onto this function in a desired handling order. If a processinject() advice is not applicable to the context (or it fails), it calls the next processinject(), on down the chain until one succeeds or the original "I failed" is reached. merge brings processinject() advices in dynamically; maybe a big library with a bunch of them. With link-time optimization enabled, the unused functions go away. This same pattern could apply to communication modules or any number of things. That's the power here.
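A sketch of that Chain of Responsibility in plain C (hypothetical names; explicit next pointers simulate the call rewriting redirect performs):

```c
#include <stddef.h>

/* The shipped fallback: the "I failed" implementation. */
const char *processinject_base(int ctx) { (void)ctx; return "I failed"; }

/* Two advices; each handles the contexts it supports or defers onward.
   redirect would rewrite each advice's processinject() call to point at
   the next link in the chain; pointers stand in for that here. */
const char *(*after_a)(int) = processinject_base;
const char *processinject_a(int ctx) {
    return ctx == 1 ? "technique A" : after_a(ctx);
}

const char *(*after_b)(int) = processinject_base;
const char *processinject_b(int ctx) {
    return ctx == 2 ? "technique B" : after_b(ctx);
}

/* The chain as woven: A first, then B, then the original fallback. */
const char *(*processinject)(int) = processinject_a;
void build_chain(void) { after_a = processinject_b; }
```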

If attach or redirect is used on a non-existent Join Point (e.g., the function or Win32 API doesn't exist), the command does nothing. If link-time optimization is enabled, Advices not attached to anything are removed when the PICO or PIC is exported.

These powerful cross-program hooking tools require ways to isolate parts of the program from these hooks. That's what protect, preserve, and optout are for.

protect isolates the listed functions from all attach and redirect hooks. dprintf gets this protection automatically. Use this tool to protect debugging tools and other sensitive code.

protect "function1, function2, etc."

preserve protects a specific target (local function or Win32 API) from attach and redirect hooks in the listed function contexts. Use preserve for functions that need direct access to the target function or its pointer.

preserve "target|MODULE$Function" "function1, function2"

optout prevents specific hooks from taking effect within a function. Use optout to isolate a tradecraft setup function from its own hooks. This makes it possible for other tradecraft to modify the setup function later.

optout "function" "hook1, hook2, hook3"

When I started this work, I had ambitions to explore a many-to-one call proxy for attach and redirect. I thought I could come up with a scheme that used %rax/%eax to keep the original function pointer. But, as I thought on this further, I didn't like it. I would need a many-to-one proxy for each number of arguments I expect to proxy (e.g., proxy2, proxy3, etc.). I also didn't have a means to have Advices follow function pointers. I opted to abandon this approach and go with tractable 1:1 mappings.

I really like this implementation of attach and redirect. Nothing feels janky about it. The program output after attach and redirect is indistinguishable from the original program calling the hook function. In some cases, attach even simplifies the instructions (e.g., it detects a load/call pair that fetches a Win32 API's pointer from an IAT slot before calling it, and simplifies this to a direct call to the hook).

Right-sized IAT Hooking

One of the tradecraft and capability separation models I cater to is the PIC loader paired with a DLL (or PICO) capability. Here, Win32 APIs are a Join Point of interest. And, our method to apply Aspects/tradecraft to Win32 API calls is to hook these APIs during the load process.

This Crystal Palace update adds some features to help with the above. addhook registers a MODULE$Function hook with Crystal Palace.

addhook "MODULE$Function" "hook"

Specify as many of these as you like. Crystal Palace won’t throw an error if a MODULE$Function hook isn’t used.

These registered hooks are used by the __resolve_hook() linker intrinsic (defined in tcg.h):

FARPROC __resolve_hook(DWORD functionHash);

A linker intrinsic is a pseudo-function that expands into linker-generated code later in the process. __resolve_hook() generates code to turn a function's ror13 hash into its hook.
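Conceptually, the generated code is a hash-to-pointer map over the registered hooks. A sketch of that idea (the exact hash variant and the generated shape are my assumptions, not lifted from Crystal Palace; hook names are invented):

```c
#include <stdint.h>
#include <stddef.h>

/* A classic ror13 string hash. Assumption: Crystal Palace's exact
   variant may differ (case handling, wide characters, etc.). */
uint32_t ror13(const char *s) {
    uint32_t h = 0;
    while (*s) {
        h = (h >> 13) | (h << 19);   /* rotate right by 13 */
        h += (uint8_t)*s++;
    }
    return h;
}

typedef void (*HOOKPROC)(void);
void my_sleep_hook(void) {}
void my_alloc_hook(void) {}

/* What __resolve_hook() might conceptually expand to after addhook:
   a lookup from a function's hash to its registered hook. */
HOOKPROC resolve_hook_sketch(uint32_t functionHash) {
    if (functionHash == ror13("Sleep"))        return my_sleep_hook;
    if (functionHash == ror13("VirtualAlloc")) return my_alloc_hook;
    return NULL;                               /* no hook registered */
}
```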

One of the problems with IAT hooks is that we don't know which capability (or variation) they're going to get attached to. And so, we run the risk of specifying too many hooks and having a bloated loader/tradecraft layer, or specifying too few in the name of code economy. This release helps with this problem too:

filterhooks $DLL

The filterhooks command walks the imports of a specified $DLL or $OBJECT and removes registered hooks that aren’t needed in this capability context. What this means is our dynamically generated __resolve_hook() function only maps hashes to hooks that are needed by $DLL or $OBJECT. Pair this with link-time optimization and the unused hooks are removed when the final program is generated. Right-sized IAT hooking!
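Put together, a hooks module's spec might register generously and then trim. A sketch following the syntax above (the hook names are illustrative):

```
addhook "KERNEL32$Sleep" "sleep_hook"
addhook "KERNEL32$VirtualAlloc" "alloc_hook"
addhook "WININET$InternetOpenA" "inet_hook"
filterhooks $DLL
```

If $DLL never imports InternetOpenA, that registration is dropped, and link-time optimization can then discard inet_hook entirely.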

The Simple Loader (Hooking) and Stack Cutting examples demonstrate these IAT hooking features.

Tips for Hooking

Here are some tips for writing addhook and attach hook functions compatible with the original API call:

  1. hook should accept the same number and type of arguments as the Win32 API you’re attaching to.
  2. hook should have the same decorators (e.g., WINAPI) as the Win32 API you’re attaching to. These set things like the calling convention (e.g., __cdecl vs. __stdcall). If the hook and API don’t match here, it’ll bite ya—especially on x86.
  3. Don’t decorate hooks with __declspec(dllimport) or something that aliases it (e.g., WINBASEAPI, DECLSPEC_IMPORT, etc.). Crystal Palace won’t treat these symbols like local functions.
  4. Beware: x86 __stdcall convention function symbols have @## after the function name. Crystal Palace expects full function symbols in most places. The ## is the size (in bytes) of arguments pushed onto the stack. Multiply the number of arguments by four and you've guessed the ## value. If you forget @## in a symbol, Crystal Palace will throw an error, BUT it'll also suggest which full symbol name it thinks you meant.
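Tips 1 through 3 in miniature, as a hedged sketch (the typedefs stub out windows.h so the fragment stands alone; a real hook would include the SDK headers and decide whether to continue the chain to the real API):

```c
#include <stddef.h>

/* Stand-ins for the windows.h decorators/types this sketch needs. */
#if defined(_WIN32)
#define WINAPI __stdcall
#else
#define WINAPI
#endif
typedef void *HWND;
typedef const char *LPCSTR;
typedef unsigned int UINT;

/* Target prototype: int WINAPI MessageBoxA(HWND, LPCSTR, LPCSTR, UINT);
   The hook mirrors the argument count, types, and the WINAPI decorator,
   and is a plain local function (no __declspec(dllimport) anywhere). */
int WINAPI MessageBoxA_hook(HWND owner, LPCSTR text, LPCSTR caption, UINT type) {
    (void)owner; (void)caption; (void)type;
    /* a real hook would act here, then optionally call onward */
    return text != NULL ? 1 : 0;
}
```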

PICO Function Exports

This release adds PICO function exports and a PICO API to get their function pointers.

exportfunc exports a function and assigns it a tag intrinsic:

exportfunc "function" "__tag_function"

This process generates a random integer tag for the function. This tag makes the exported function discoverable via the PICO’s loading directives.

We don’t know this hidden tag value. That’s where __tag_function() comes in. This pseudo-function is replaced with the hidden tag at link time. __tag_function() is available globally to any PIC/PICO exported after exportfunc is used.

Here’s the prototype to declare __tag_function:

int __tag_function();

Use this intrinsic with PicoGetExport to get a pointer to the exported function:

PICOMAIN_FUNC pFunction = PicoGetExport(picoSrc, picoDst, __tag_function());

The purpose of the tag scheme is to make dynamic linking safer. Originally, I had a manual user-assigned tag scheme. And, even I, illustrious developer of this cockamamie tool, found I couldn't keep an integer synced between two files. I chased a crash because I forgot exportfunc too. If __tag_function() resolves, we know that exportfunc was used on a valid PICO function during this linking session.

Simple Loader (Hooking) merges two PICOs to save a memory allocation and relies on exportfunc to call code in the merged PICO. Simple Loader (Hooking) and Stack Cutting use exportfunc to make post-merge tradecraft setup APIs available between a hooks module and base loader in their shared architecture.

Migration Notes

(1) I’ve changed the function signatures of findFunctionByHash and findModuleByHash in tcg.h to use the right types. You’ll need to update your DFR resolvers to keep the compiler happy:

FARPROC resolve(DWORD modHash, DWORD funcHash) {
	HANDLE hModule = findModuleByHash(modHash);
	return findFunctionByHash(hModule, funcHash);
}

(2) I’ve removed reladdr from Crystal Palace and revoked permission for linked contents to use partial pointers. fixptrs is now required for x86 PIC doing anything interesting.

Closing Thoughts

I led this post with Aspect-Oriented Programming for good reason. Aspect-Oriented Programming is an instrumentation-enabled programming paradigm. AOP does not replace modular C programming. Instead, it complements it, layering cross-program changes onto a base program. Evasion tradecraft is a cross-cutting concern that benefits from AOP-style tools.

While Crystal Palace continues to serve the PIC loader + DLL capability problem set, I think this technology is most exciting when using PICOs (usable as PIC or with a loader) as an AOP-friendly canvas to compose fully realized tradecraft demonstrations onto.

To see what’s new, check out the release notes.

Tradecraft Garden’s PIC Parterre

The goal of Tradecraft Garden is to separate evasion tradecraft from C2. Part of this effort involves looking for logical lines of separation. And, with PIC, I think we’ve just found one of them.

Two weeks ago, I introduced several features that brought a unified convention model for Crystal Palace PIC and its BOF-like PICOs. This release continues that effort by fixing some bugs and expanding on these features.

Dynamic Function Resolution, Revisited

Last release, I added a means for Crystal Palace PIC to call Win32 APIs using the Dynamic Function Resolution convention. The implementation has Crystal Palace disassemble the program, find any calls to a MODULE$Function Win32 API, and dynamically insert a call to a user-provided resolver (specified in the specification file) to turn that call into a ready-to-use pointer.

Part of where I'm going with this work is I'd like one set of conventions to write PICOs and PIC, with the same program working as-is in either context. When I introduced PIC DFR, I realized… shit… I painted myself into a corner here.

PICO loaders have a convention to act on libraries that are not yet loaded in the current process. The current DFR model for PIC doesn't. I ran into this problem when recording my PIC ergonomics video. I had to debug why a call to User32$MessageBoxA crashed. No User32 library. 😦 The solution was to add LoadLibraryA("User32") at the beginning of the program.

This release solves the above problem. Crystal Palace specification files can now specify multiple Dynamic Function Resolution resolvers and either associate a resolver with a specific set of modules OR set a default resolver.

Now, it’s possible to do this:

dfr "resolve" "ror13" "KERNEL32, NTDLL"

And Crystal Palace will use the resolve() function, with ror13 module and function hashes as arguments, to resolve APIs tied to KERNEL32 or NTDLL.

And, for everything else, we can set a default resolver:

dfr "resolve_ext" "strings"

The above is the same syntax introduced in the last release. Any API not matched to another resolver will fall back to the default resolver. And, it’s perfectly OK for the default resolver to use KERNEL32 and NTDLL APIs to aid its work (since they’re resolved in a different resolver):

char * resolve_ext(char * mod, char * func) {
    HANDLE hModule = KERNEL32$GetModuleHandleA(mod);
    if (hModule == NULL)
        hModule = KERNEL32$LoadLibraryA(mod);

    return (char *)KERNEL32$GetProcAddress(hModule, func);
}

Say yes to the .bss (PIC Global Variables)

One of the ass-pains that inspired Crystal Palace was my attempt to implement Page Streaming as a DLL loader without access to global variables to keep state. Clearly, there’s a lot we can do in PIC without global variables—but it’s nice to have the option when it’s needed.

Crystal Palace's fixbss command is a way to bring (uninitialized) global variables to Crystal Palace position-independent code. Like fixptrs and dfr, this command relies on a binary transform and a helper function. When a fixbss function is provided, Crystal Palace will disassemble your program, find instructions that reference .bss data (uninitialized global variables), and dynamically insert a call to your function. This function must reliably return the same pointer to read/write memory where the .bss data can live. The beautiful part of restoring just the .bss section is we don't have to worry about copying over any initial data. We just need read/write memory of a suitable size that is initialized to (or already set to) zeroes.

Now, the question becomes… how to implement our fixbss function? This is a tradecraft question and there are certainly different creative ways to do this.

The Simple PIC Tradecraft Garden example demonstrates one way. Its specification file defines getBSS as our .bss fixing function.

fixbss "getBSS"

Simple PIC implements getBSS by looking for a data cave. What is a data cave? It's unused space within a loaded library or program we can stash data into. Simple PIC's getBSS uses the slack space between a read/write mapped section's real size and its virtual size, rounded up to the nearest page. We simply walk a few modules (in my example, the current program and Kernel32), find their mapped .data section, and check if the slack space is large enough to accommodate our program's .bss section. This space is already read/write, it's predictable to find with a walk, and it's initialized to zero. Perfect for our demonstration. But, beware: my implementation is not friendly to multiple PICs in the same process using this technique.
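The slack-space arithmetic behind that walk, sketched in C (the page size and helper names are mine; the real getBSS additionally walks modules and section headers to find candidate sections):

```c
#include <stdint.h>

#define PAGE_SIZE 0x1000u

/* Round a section's size up to the page boundary it occupies in memory;
   the tail beyond the real data is the "data cave". */
uint32_t page_round_up(uint32_t size) {
    return (size + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);
}

/* Slack available in a mapped section, given its real (virtual) size. */
uint32_t slack_space(uint32_t virtual_size) {
    return page_round_up(virtual_size) - virtual_size;
}

/* A .bss fits in the cave only if it's no bigger than the slack. */
int bss_fits(uint32_t bss_size, uint32_t section_virtual_size) {
    return bss_size <= slack_space(section_virtual_size);
}
```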

Other caveats apply. Crystal Palace only transforms certain common instruction forms (here: load address, load, store, and store a constant for byte/word/dword/qword types) with this feature. As soon as you start doing esoteric stuff or enabling optimizations—you may run into instruction forms Crystal Palace can’t handle.

While I think transparent availability of globals in PIC is a cool trick, I want to clarify something important. This is not a push to always use transformed PIC. My goal is more parity between PIC and PICOs. In situations where I’m allocating or repurposing eXecutable memory in the current process for a capability… I think the better choice is a loaded PICO (globals, strings, Win32 API, etc. without binary transforms). What we get with this parity is code re-use between PIC and PICOs, a common set of skills to write PIC and PICOs, and flexibility to choose the right output for the needs of the situation. That’s why I’ve made this work part of the Crystal Palace effort.

Remap

This release adds a symbol remapping command (e.g., remap "old" "new"). This is a dumb primitive that simply renames a symbol in your COFF's symbol table.

One use of symbol remapping is to aid those of you who dislike the Dynamic Function Resolution convention. Simply create a .spec file of APIs you might use and fill it with:

x86:
  remap "__imp__Function@8" "__imp__MODULE$Function@8"
  remap "__imp__VirtualAlloc@16" "__imp__KERNEL32$VirtualAlloc@16"

x64:
  remap "__imp_Function" "__imp_MODULE$Function"
  remap "__imp_VirtualAlloc" "__imp_KERNEL32$VirtualAlloc"

Remap won’t complain if a function you intend to remap doesn’t exist.

The above isn’t the only use of remap. The remap command is also a way to build one binary and, in the context of a specification file, remap one of several stand-by implementations of an expected function to the expected name. This is better described with an example.

Tradecraft Garden’s Simple Object and DLL loader demonstrated (with too much complexity) what it looks like to write a loader that works for a PICO or a DLL. The remap command gives us a tool to simplify this example some.

Rather than have two Makefiles (one for a DLL loader, another for an Object loader), we instead compile one loader binary. And, our loader binary as-built has no go() function defined. Instead, it has go_dll and go_object.

With remap, our specification file can remap “go_object” “go” when we have an object target. And, we can also remap “go_dll” “go” when we have a DLL target.

What about the extra unneeded function? Use make pic +optimize and let the link-time optimizer get rid of the unused function. The end result: starting with one binary, we use our .spec file context to tailor that program down to exactly what’s needed for the situation.
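Tying this together, a hypothetical object-target specification context might read like this sketch (the load target and surrounding structure are illustrative, not taken from the actual project):

```
load "loader.x64.o"
remap "go_object" "go"
make pic +optimize
```

The DLL-target context would look the same, except it remaps go_dll instead.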

Migration Notes

None.

Closing Thoughts

With these new features introduced, let’s revisit this logical line of separation thing.

One of the responsibilities of a DLL loader is to bootstrap the needed execution “stuff” to allow loading a DLL. That is, finding Win32 APIs, allocating memory, etc. These bootstrapping behaviors are a detection opportunity and an area for exploring and finding alternate techniques.

Today, it’s easy to think of a PIC program as a monolithic thing that must do everything, including its own bootstrapping. But, with Crystal Palace we now have a clear line between bootstrapping and capability within PIC itself.

The bootstrapping functionality for our PIC loaders is the stuff that Crystal Palace’s helper functions implement: the x86 pointer fixing, the .bss fixing to get global variables back, and our strategy to satisfy the contract of dynamic function resolution. The choice made for each of these bootstrapping chores is a tradecraft choice, and it makes sense to keep these bootstrapping cocktails separate from and agnostic to the follow-on PIC. We can do this now.

The new Simple PIC example in the Tradecraft Garden demonstrates this separation well. I’ve implemented a PIC bootstrapping cocktail as something I call a service module. This service module is scoped to the functionality our PIC capability needs to restore key language features and start doing real work in a Win32 process space. It handles bootstrapping like a loader, but it’s not a loader—because it doesn’t load anything.

Crystal Palace’s model to pair a service module with a capability is to merge the two programs into one and dynamically weave calls to the different helper functions into the end-capability to make everything work as expected. [I intend to explore what else this approach can buy us.]

The Simple PIC project merges its service module with an arbitrary PICO to make a functional PIC program. This separation makes the code easier to digest, it makes the follow-on loader agnostic to these bootstrapping choices, and it containerizes an important set of discrete tradecraft choices (e.g., we could have an arsenal of PIC bootstrappers to choose from and they could stand alone as self-contained units of study).

Thoughtful containerization, finding these lines of separation, is the name of the game here. This is all part of an effort to reframe offensive security research as a systemic security science. And, I believe componentization of the attack chain is a needed step to enable this.

To see what’s new, check out the release notes.

Enjoy the release.

Weeding the Tradecraft Garden

When I started work on Crystal Palace, my initial thought was to see how much I could ease development of position-independent code DLL capability loaders using the tools and manipulations possible with a linker. The initial Tradecraft Garden examples were built with these primitives.

Crystal Palace doesn’t just merge and append COFFs though. It’s a powerful binary transformation framework as well. That is, Crystal Palace has the ability to disassemble a compiled program, turn it into an abstract form, manipulate it, and put it back together. This technology is the foundation of Crystal Palace’s code mutation, link-time optimization, and function disco features.

Before we go further, I’d like to shout out Austin Hudson’s Orchestrating Modern Implants with LLVM presentation, given at the Fortra booth at Black Hat 2025. One of the LLVM uses he described was transforming non-PIC x86 programs into working PIC. It’s a cool talk and it inspired me to look at binary transformations to make Crystal Palace PIC more ergonomic. Which gets to the new features…

This release adds features to transform x86 and x64 programs, written to PICO conventions, into working position-independent code. While these binary transformations are opt-in (you can still manually copy and paste ror13 hashes from elsewhere, if you really want to), they greatly simplify writing PIC and I’m all-in with them. All of the examples in the Tradecraft Garden were simplified with these new tools.

And, taking full advantage of unified conventions for x86 PIC, x64 PIC, and x86/x64 PICOs, this release introduces a Crystal Palace concept of shared libraries to aid modularity and code re-use, and to open up opportunities to participate in the ground-truth eco-system this project seeks to build.

Dynamic Function Resolution for PIC

One of the defining features of Cobalt Strike BOFs and Crystal Palace PICOs is Dynamic Function Resolution (DFR). That is, calling Win32 APIs as MODULE$Function to give their loaders the needed hints to resolve the right function pointer.

Crystal Palace now has Dynamic Function Resolution for x86 and x64 PIC. Because PIC has no loader, acting on this convention requires a different approach. Here, the binary transformation framework helps. Crystal Palace walks your program, finds instructions that reference __imp_MODULE$Function symbols, and dynamically inserts code to call a user-provided resolver function. This new code replaces the original instruction that would have expected a patch from the loader.

To opt into this feature, specify a DFR resolver in your Crystal Palace specification file:

dfr “resolver” “method”

There are two resolver methods right now: ror13 and strings. The method dictates the arguments generated and passed to the resolver function.

The ror13 method will turn MODULE and Function into ror13 hashes and pass them as arguments to your resolver function. The strings method pushes strings onto the stack and calls your resolver with pointers to these strings. Most Tradecraft Garden Simple Loaders demonstrate the ror13 method. Simple Loader – Pointer Patching (which patches-in a pre-existing GetProcAddress/GetModuleHandle) demonstrates the strings method.
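For the ror13 method, the resolver receives hashes built with the classic rotate-right-13 scheme. Here is a minimal sketch of that hash; the exact details Crystal Palace uses (case normalization, wide vs. narrow strings, terminator handling) are assumptions here:

```c
#include <stdint.h>

/* Classic ror13 string hash: rotate the 32-bit accumulator right by
 * 13 bits, then add the next character. Details such as case handling
 * are assumed, not taken from Crystal Palace's implementation. */
uint32_t ror13_hash(const char *str) {
    uint32_t hash = 0;
    while (*str) {
        hash = (hash >> 13) | (hash << 19);  /* rotate right by 13 */
        hash += (uint8_t)*str++;
    }
    return hash;
}
```

A ror13 resolver would then walk the loaded modules’ export tables, hashing each name the same way until it finds a match for the hashes it was handed.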

With this feature, the hand-maintained ror13 hash lookups go away. In their place, your PIC simply calls Win32 APIs using the MODULE$Function convention.

Crystal Palace’s PIC DFR convention expects you’re resolving Win32 APIs from already loaded libraries. If you’re writing a full-fledged capability as PIC, make sure you load any needed libraries at the beginning of your program.

If you’re worried that the above introduces unwelcome fixed constants into your program, fear not. Crystal Palace’s code mutation, when enabled with “make pic +mutate”, will rewrite these constants for you.

Fixing x86 PIC Pointers

One of the barriers to unified code for x86 PIC and x64 PIC is the addressing model. x86 programs reference data using a model known as direct addressing, that is they expect to know the full address of any data they work with. Crystal Palace PIC, so far, has worked around x86’s direct addressing model with hacks. That is, Crystal Palace patches in an offset to any referenced data and expects, somehow, that the PIC’s C code takes special steps to turn that offset into a full pointer. These hacks are why Tradecraft Garden was full of #ifdef macros and manual pointer math.

But, thanks to another opt-in transform—we can get away from the above:

fixptrs “_caller”

The fixptrs command accepts the name of a caller function. This is a simple contract: the caller function’s job is simply to return its own return address.

When fixptrs is set, Crystal Palace will walk your x86 PIC, find any data references, and rewrite the instructions to create a full pointer. (The caller function aids this process.) This means you don’t have to rely on #ifdef hacks and manually calculated instruction offsets to reference linked data or function pointers in x86 PIC.
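The caller contract can be sketched in a couple of lines of C. In real x86 PIC this would likely be a tiny assembly stub; __builtin_return_address is a GCC/Clang convenience used here purely for illustration:

```c
/* Hypothetical caller helper: returns its own return address. That
 * address lands inside the PIC itself, giving the fixptrs-rewritten
 * instructions a reference point to rebuild full pointers from.
 * noinline keeps the call frame (and thus the return address) real. */
__attribute__((noinline)) void *caller(void) {
    return __builtin_return_address(0);
}
```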

This also opens another door. Crystal Palace’s “make pic” now appends .rdata to x86 and x64 PIC programs. Thanks to the fixptrs feature, x86 instruction references to .rdata now work too—giving x86 PIC access to string constants without any special handling in your C code.

The above makes “make pic64” redundant and I’ve removed “make pic64” from Crystal Palace. “make pic64” was an opt-in way to append .rdata to x64 PIC programs and (thanks to indirect addressing) get access to string constants from x64 PIC.

While the above is a lot of fun, there are some caveats. x86 references to .rdata, especially with compiler optimizations enabled, generate a lot of possible instruction forms. Crystal Palace only has strategies for the most common forms. When a novel instruction form is encountered, Crystal Palace will catch it, and fail with an error stating it can’t fix the pointer. This is a good hint to either disable optimizations in your x86 PIC or… at least… to disable optimizations within the offending function.

What should go() first?

In position-independent code, your entry point must live at position 0 of your program. Tradecraft Garden used to satisfy this requirement by specifying a go() function at the top of the program, before anything else is included, and having it call the real entry point.
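That old pattern amounted to a hand-written trampoline, something like this sketch (the names and the stand-in body are illustrative, not the project’s actual code):

```c
/* Sketch of the old convention: go() is placed first in link order and
 * simply forwards to the real entry point defined elsewhere. */
static int entered = 0;                  /* stand-in for the real capability */

static void real_entry(void) { entered = 1; }

void go(void) { real_entry(); }
```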

Now, you can declare your entry point as go. And, with make pic +gofirst—Crystal Palace will move this function to position 0 of your PIC.

This is a minor ergonomic feature for developers and another way to make PIC work more like PICOs. Like all of the above features, it’s opt-in via your Crystal Palace specification file.

Shared Libraries

With the above work, Crystal Palace now has shared conventions for x86 PIC, x64 PIC, and x86/x64 PICOs. In practice, this means a compiled object written to these conventions should “just work” in an x86 PIC, x64 PIC, or PICO context. And, this paves the way for Crystal Palace shared libraries.

Crystal Palace shared libraries are zip files with compiled objects that use these common conventions to offer easy code re-use between PIC and PICO projects. Note: Crystal Palace PIC does not support read/write global variables, whereas PICOs do. Shared libraries that wish to work with both PIC and PICOs should not use read/write global variables either.

The first Crystal Palace shared library is LibTCG, the Tradecraft Garden Library. LibTCG now holds all of the common functionality from the Tradecraft Garden examples. It offers DLL parsing and loading, PICO running, printf debugging, and Export Address Table API resolution. LibTCG replaces all of the duplicate #include header files that implemented this functionality in previous iterations of the Tradecraft Garden examples.

Use Crystal Palace’s mergelib "lib.x64.zip" command to merge all of the objects from a Crystal Palace shared library into a PIC or PICO.

Now, you may have some concerns about the above in the context of offensive security use cases. The first concern is size. Naturally, a shared library will come with a lot of stuff your current program doesn’t need, making it artificially larger than it needs to be. Well, you can still load and merge individual objects if you like. But, there’s another solution to this problem too: link-time optimization. Crystal Palace’s link-time optimization walks your program, starting at the entry point go, and builds a set of all called/referenced functions. It then rewrites your program to keep the used stuff and get rid of the unused stuff.
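The optimizer’s reachability pass can be sketched as a simple graph walk. This toy model (the function names and call table are invented for illustration) marks everything reachable from go and drops the rest:

```c
#include <string.h>

/* Toy sketch of a link-time optimizer's reachability pass: starting
 * from the entry point "go", mark every function reached through the
 * call graph; anything unmarked is dead and can be dropped. */
#define NFUNCS 4

static const char *names[NFUNCS] = { "go", "helper", "unused", "print" };

/* calls[i][j] != 0 means function i calls (or references) function j */
static const int calls[NFUNCS][NFUNCS] = {
    { 0, 1, 0, 0 },   /* go -> helper     */
    { 0, 0, 0, 1 },   /* helper -> print  */
    { 0, 0, 0, 0 },   /* unused -> (none) */
    { 0, 0, 0, 0 },   /* print -> (none)  */
};

static void mark(int idx, int *keep) {
    if (keep[idx]) return;
    keep[idx] = 1;
    for (int j = 0; j < NFUNCS; j++)
        if (calls[idx][j]) mark(j, keep);
}

/* Returns 1 if the named function survives optimization from "go". */
int survives(const char *name) {
    int keep[NFUNCS] = { 0 };
    mark(0, keep);                       /* entry point "go" is index 0 */
    for (int i = 0; i < NFUNCS; i++)
        if (strcmp(names[i], name) == 0) return keep[i];
    return 0;
}
```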

Of course, incorporating a shared library means a static content fingerprinting playground. But, that’s why we have code mutation. And, you can use function disco to shuffle functions and effectively interweave imported content with your bespoke content. It’s up to you. Further, if a shared library has a stable API—there’s nothing to say you can’t maintain a private version of that library and keep it for your projects while letting the generic fingerprinted version demonstrate functionality publicly.

Shared libraries also fill an eco-system gap in Tradecraft Garden’s model. Up until now, I struggled with how I (or others) might expose one-off tradecraft ideas or variations of existing ideas without writing a bespoke loader for each. Crystal Palace’s shared libraries are a path to package and document one-off or related ideas into something easy to incorporate and use from both PIC and PICOs.

License Change

When I initially released the Tradecraft Garden, I chose the BSD license for Crystal Palace. This was a deliberate choice to welcome this component’s integration and use in commercial and open source projects.

Separately, I chose the GPLv2 for Tradecraft Garden’s example loaders. I liked the idea of the GPLv2 as a way to keep the derived tradecraft open. My goal isn’t just an open source eco-system though. I want something that welcomes security conversation-aligned commercial players to co-create value with this commons. A permissive license better serves my goal to create this security conversation-aligned eco-system and more opportunities for offensive security research.

So, in the spirit of weeding the garden and cleaning things up, I’m relicensing the Tradecraft Garden example loaders under the BSD license. (And, for those of you who’ve built on the previous code, I’ve released an updated package with those materials under a BSD license—which you’re welcome to use if you wish for the permissive freedoms). For anyone building open source on this tech, I encourage you to use a permissive license (e.g., BSD, MIT, Apache 2.0).

Migration Path

Here are the migration notes for projects updating to the latest release of Crystal Palace:

This release removes “make pic64”. Use make pic instead.

The Makefile for LibTCG requires zip in your WSL environment. Use sudo apt-get install zip to install it.

The rest of the features are opt-in and won’t break existing code and specification files. But, I recommend looking at these new conventions as they are the future of Tradecraft Garden:

  1. Use “make pic +gofirst” in all of your PIC programs and rename the intended entry point to go()
  2. Move away from explicit ROR13 hashes and port your PIC programs to use Dynamic Function Resolution
  3. Opt into the x86 pointer fixing with fixptrs and get rid of the manual #ifdef hacks and pointer offsetting required before.
  4. Ditch the original Tradecraft Garden common functionality headers and move over to the Tradecraft Garden library. You just need to #include “tcg.h” and use mergelib to include libtcg.x86.zip (or libtcg.x64.zip) in your program.

To see what’s new, check out the release notes.

Enjoy the release.

Analysis of a Ransomware Breach

“An individual working in one of our facilities accidentally downloaded a malicious file that they thought was legitimate. We have no reason to believe this was anything but an honest mistake.”

– An Ascension Spokesperson, June 12, 2024

This post revisits the 2024 ransomware breach of Ascension Healthcare. This is news again because U.S. Senator Wyden pressed Microsoft to warn its users and address systemic Active Directory flaws that enable these horrible events. But, the story goes deeper than that. This is a case study in how what we know dictates what we talk about and informs who we blame. In this post, we’ll look into this breach, follow the reveals, and keep asking: what do we know now and who do we blame?

Ascension Healthcare Ransomware Breach

On May 8, 2024, a months-long IT disaster became visible at Ascension, a St. Louis-based healthcare organization with 131,000 employees, 37,000 aligned providers, and 140 hospitals. They were the latest victims of a Black Basta ransomware attack.

This event shut down Ascension’s Electronic Health Record system and forced hospitals to use pen-and-paper processes. In some cases, ambulances were routed to other hospitals. An NPR article shared stories from on-the-ground clinicians who struggled to work in this environment and recounted close calls and mix-ups from the chaos of this situation.

Ascension dedicated a webpage to the breach and shared updates with the general public. It would take them until mid-June to restore IT functionality and return to mostly normal business.

The ransomware breach of Ascension wasn’t the first or last of 2024. Another group extorted $22 million from Change Healthcare in February 2024. Lurie Children’s hospital was attacked in January 2024, but refused to pay a ransom. The stolen data was posted by the threat actors.

The healthcare news organization STAT reports that Ascension suffered a $1.3 billion loss from this cyber attack. Despite this, there are silver linings. Ascension stated Black Basta only stole files from 7 of 25,000 servers. For the people whose data was on those servers, Ascension has generously offered the post-breach cure-all of free credit monitoring for two years–a windfall for those excited by this service.

What Happened?

In February 2024, a contractor working from an Ascension laptop conducted a search on Microsoft’s Bing, using Microsoft’s Edge browser, and they clicked on a link that they thought was legitimate. Unbeknownst to them, they ran malware that would open the door for Black Basta to compromise the entire organization.

Headlines like “How One Employee’s Honest Mistake Caused a Massive Ransomware Attack” and “A ‘mistake’ allowed hackers into Ascension’s IT system” said to the public: there’s nothing to see here. This is a story of bad luck, a click, some malware, and a healthcare system IT collapse.

These headlines are not just a journalist’s take. One post-breach podcast featured a Healthcare CISO expressing sadness that an organization can do everything right—security hygiene, centralized change management—and still have their whole system undone by a single click.

But, as a former security professional, I have questions. Microsoft Edge has features like Microsoft Defender SmartScreen, an on-by-default feature said to provide ‘peace of mind, for every click.‘ Where was the peace of mind for this click?

Surely, based on what we know, or can speculate… there’s more to this story than a contractor, a click, and Microsoft Edge’s SmartScreen. Fortunately, there is!

What the threat actors say…

We know more about the Ascension breach, thanks to the February 2025 leak of chat logs from the Black Basta ransomware group.

Reading the chats, I noticed the threat actors strategized about the negotiation. And, this discussion raised something I don’t see in public discourse much:

It’s not easy to pay a ransom! The victim organizations likely don’t have processes to make payments with cryptocurrency. And, that’s not the main hurdle. A victim that engages with organized crime risks breaking U.S. laws. For example, OFAC imposes strict liability for sending funds, even unknowingly, to a sanctioned entity.

This gives rise to the ransomware negotiator, a service often bundled with incident response. These deal makers are key to closing high-dollar transactions with organized crime. They know the price of decryptors and understand extortion’s going rates. They also bring turn-key processes for the transaction’s compliance, diligence, and disclosure needs, helping the victim and criminal effect a “lawful” transaction smoothly and quickly.

Of course, the above is non-technical business speak. Ransomware is a cybersecurity failure, which is a technical failure. Certainly, this technical problem has a technical solution.

Let’s pivot to the technical. Black Basta’s chats say they encrypted and effectively seized 12,000 of Ascension’s systems. This differs from the official statement that data was only stolen from seven servers. While encryption and exfiltration are different acts, the two numbers suggest different scales of compromise. This said, if Black Basta controlled Ascension’s Active Directory, it’s not an act of magic to touch 12,000 systems. Black Basta can do as they wish with Microsoft Windows remote execution and file sharing primitives. There’s no need for malware on each host.

The leaked chats include Black Basta’s technical reconnaissance of Ascension, where we learn something new: the installed security products. They saw Cylance EDR, McAfee, and Tanium.

I’m sad that we need threat actor chats to learn about present controls. We miss the chance to compare defense-in-depth claims with real-world failure context. During my time, I’ve only read about security product successes. Aren’t the failures a teachable moment too?

After a breach, industry experts often say AI will restore the advantage for defenders. Cylance EDR is an AI-powered solution!

Cylance is known for its prevention-first philosophy. The product uses AI to predict if a file is malicious before it’s run. We know Cylance’s efficacy claims at the time of the Ascension breach too.

On May 6, 2024, days before Black Basta toppled Ascension, Cylance announced nearly 100% detection of the threats in a commissioned independent study. The study was run by the Tolly Group, who used 1,000 then-recent malware samples in the test. The Tolly Group lauded Cylance’s low CPU use and ability to detect well in both offline and internet-connected scenarios.

We don’t know if Cylance was installed on the contractor’s laptop or the systems where Black Basta used malware. But, I want to point out a discrepancy between the security we sell and the reality of how it’s used. Given the outcome at Ascension, I ask, how did this AI-powered product help?

Senator Wyden Investigates

If this was just another tragic ransomware breach, with the same “someone clicked” narrative, this blog post wouldn’t exist. But, this incident had a recent reveal that fills in details.

U.S. Senator Wyden’s office investigated the Ascension Healthcare breach. This included interviews to find the ground truth. The investigation found a key detail: Black Basta used Kerberoasting to go from the contractor’s laptop to full control of Ascension’s Windows enterprise. This full control is what allowed Black Basta to carry out their devastating ransomware attack.

Contrast this new detail with the industry discourse. Reputable organizations like the U.S. government and FIRST released advisories with IOCs, pie-chart breakdowns of .com vs. .net threat actor TLDs, and helpful advice. The advice includes: train users, implement 2FA, patch, use legal software, and watch out for remote management tools. For all of this well-meaning guidance, delivered at a moment of peak interest, the connective tissue of these events, Active Directory privilege escalation as a systemic issue, didn’t come up.

Why didn’t Active Directory domain privilege escalation come up? Is it because everyone understands this issue, so there’s no point repeating it? Or, does this missing (and not missed) detail suggest cybersecurity’s narrative is about the wrong things?

Kerberoasting

Kerberoasting is a technique to become another, often globally privileged, user in networks managed with Microsoft’s Active Directory. I’m going to spare you a full description of Ticket Granting Service Tickets and cut to the chase.

Any user on an Active Directory-managed computer can request a file encrypted with the password of certain special accounts. These special accounts are for software services (e.g., an SQL database), not people. This encrypted file exists to support the Kerberos authentication protocol. Once the attacker has this file, the next step is to guess passwords to decrypt it. We call this password cracking. This cracking happens on an attacker asset; it’s not visible to defenders. When cracking yields a password, the attacker has a credential for this other account in the Active Directory network.

Any user can ask Active Directory for a list of these special accounts. This attack doesn’t require any special insider knowledge.

Cobalt Strike never had a Kerberoasting feature. So much of my work with the product was to provide methods to operationalize new and existing techniques. For example, SOCKS pivoting is how Cobalt Strike’s users Kerberoast with Impacket or other stand-alone tools. My work with Cobalt Strike also set conventions to develop and use post-exploitation tools, specifically .NET and BOFs. This made Cobalt Strike a first choice to use Rubeus and Kerbeus BOF. While Cobalt Strike only had basic post-exploitation primitives, I considered Kerberoasting a staple attack, and I taught it in my 2019 Red Team Operations with Cobalt Strike course.

Kerberoasting isn’t the root cause of the Ascension ransomware event. The root cause is Active Directory privilege escalation. This process involves harvesting credential materials and using them to become users who can do things the current user can’t. Pivoting identities is how an attacker goes from control of one laptop, via one person of 131,000+ clicking a link, to the power to enact their will at scale within the enterprise.

Active Directory privilege escalation is not a Kerberoasting problem. There are many ways to escalate privileges in Active Directory. More importantly, this is not a fully understood area of risk either. New features and work on Microsoft’s technologies occasionally introduce new Active Directory privilege escalation options. And, new privilege escalations are occasionally discovered too.

Active Directory privilege escalation isn’t an obscure issue that plagues irresponsible organizations. It’s a pervasive problem. An anecdote that was once relayed to me: a group of consultants went into a financial organization and measured how many employee computers had an automate-able path to full control of the network. I think the number was 96% or higher. The ubiquity of these paths from an unprivileged context to full control of a network is why I refer to Active Directory as IT’s lead-in-the-pipes danger. These paths are not patched with software updates. And, they don’t necessarily exist in a network during day one of its build (or rebuild). Instead, they creep in as unintended consequences of routine IT activity.

I do not want to discount the importance of Kerberoasting as a contributing factor in these ransomware breaches, either. In the 2025 blog post, Why Kerberoasting Still Matters for Security Teams, Varonis shared: “In nearly every Windows domain compromise our Forensics Team investigated last year, Kerberoasting was attempted, and often succeeded.” Kerberoasting is a go-to attack technique because the issue is fundamental to how network operators manage service accounts in Active Directory.

Since when?

With such a devastating issue, one of many, it’s fair to ask: when did we find out about this? What was done since we found out about it? Let’s discuss these things.

Tim Medin introduced Kerberoasting at SANS Hackfest 2014 in his talk: Kicking the Guard Dog of Hades. This talk was the beginning of the offensive security work to understand and socialize the issue. Recently, Tim wrote: “When I came up with Kerberoasting in 2014, I never thought it would live for more than a year or two. I (erroneously) thought that people would clean up the poor, dated credentials and move to more secure encryption. Here we are 11 years later, and unfortunately it still works more often than it should.”

The future of any security discovery is always uncertain. But, we do know others have to understand the issue before it’s possible to defend against it. And, understanding isn’t a given. I was in the audience that day. And, I didn’t understand the talk. I’m not an Active Directory expert and I never was. But, thanks to community tutorials, tools, and other exploration of Active Directory issues–I have a working knowledge I wouldn’t have gotten on my own. And, I assert that it’s the same for other security professionals, including security architects at Microsoft. This is why I’m so passionate about the Security Conversation. Without it, we have no chance to address the issue.

But, what happens when issues don’t get addressed? They move from temporary curiosity to repeated tragedy. This is where Active Directory attacks are today. These issues are absent from the broader cybersecurity narrative. What replaces these issues are simple narratives that point blame at other targets. This is the substance of the OST debate, a strategic campaign to educate the public on how criminals co-opt tools and research from the offensive security eco-system. While these critics may acknowledge the issues are a problem too, they hold individual offensive security researchers and tool authors morally responsible for harm and imply they must conclusively fix it.

I always felt these debates missed the big picture of the offensive security profession. Red Teamers are not a tiny cabal executing niche security tests solely for the benefit of “mature” client organizations. Offensive security insight is the technical truth of complex issues and yields strategic security leaps. Its formerly default-open culture is what socialized offense ideas to the security profession. But, that culture has changed, partially to align with its critics. The space is moving toward closed eco-systems and a fragmented community. Some continue with openness and others prize offense insights as trade secrets to “win” security tests. While this does reduce visibility and blame for offensive security, I believe it stagnates the security profession and further separates discourse from technical truth.

Senator Wyden’s Call to Action

We now have movement on Kerberoasting. In October 2024, Microsoft published guidance to help mitigate Kerberoasting. In the same post, Microsoft announced Windows Server 2025 and Windows 11 24H2 will deprecate RC4. And, Microsoft also introduced Delegated Managed Service Accounts–a new Kerberoasting-resilient way to manage software service accounts in Active Directory.

We know now that this action is from Senator Wyden’s pressure. In a public letter to the chair of the U.S. FTC, Senator Wyden revealed that his staff spoke with Microsoft in July 2024 and urged them to warn their customers about Kerberoasting.

Microsoft’s use of RC4 in Kerberos is one of the details Senator Wyden uses to justify his concerns about Microsoft’s security choices. The focus on RC4 is well-meaning, but it’s not the full picture of the attack. Tim Medin addressed this in Kerberoasting, Microsoft, and a Senator:

“With the Kerberoast attack, attackers are targeting the encryption key (the server’s password), not the underlying encryption protocol. The common encryption protocols for encryption on tickets are RC4, AES128, and AES256. The attackers aren’t fundamentally breaking RC4, but…”

“The fundamental problem that leads to a successful Kerberoasting attack is bad passwords. Ideally use Managed Service Accounts (see the Microsoft blog post) or use a long, secure, and preferably random password. Remember, attackers are guessing billions of passwords per second. A secure password for a user is much different than a secure password for these accounts.”

Kerberoasting’s BadSuccessor

Microsoft introduced delegated Managed Service Accounts (dMSA) to address Kerberoasting. This feature creates a path to manage service accounts with long random passwords, which makes them near-impossible to crack. dMSA also has an easy migration path for existing service accounts that are susceptible to Kerberoasting. This Windows Server 2025 feature shipped days after Microsoft’s October 2024 Kerberoasting guidance.

I’d love to say, problem solved. But, that’s not the whole story.

Yuval Gordon, a researcher at Akamai, looked at dMSA and found something horrifying. Microsoft’s dMSA feature added a trivial path to escalate privileges in Active Directory. What’s worse, this attack doesn’t require the organization to use dMSA; it just requires a Windows Server 2025 Active Directory domain.

Akamai reported the issue to Microsoft in April. Microsoft accepted the issue and said they would fix it. But, they rated the issue as Moderate severity. This means they didn’t see a need to immediately fix the issue (or warn users about it).

Yuval and his colleagues disagreed with Microsoft’s severity assessment. They worried that the ease of exploitation and lack of urgency for a fix put others at risk. They went public with their findings and steps to demonstrate the attack, which they dubbed BadSuccessor.

This decision re-ignited the coordinated disclosure debate. Critics maligned Yuval for seeking clout over the safety of potential BadSuccessor victims. Their logic: because Microsoft will fix the problem and there’s no fix now–it is harmful to make this attack public and arm criminals. This criticism was echoed in some of the industry media around BadSuccessor. In some cases, it featured prominently.

Some defended Yuval’s actions, but rarely with a blanket trust of Yuval’s professional judgement or respect for the role of offensive security. Rather, supporters described why it was OK “this time”. One supporter argued that Windows Server 2025 isn’t widely deployed yet and it’s better to know about the issue early. The public information pressures Microsoft to act. Another ally offered that, even if no fix is available, system administrators have a right to know if a supposed upgrade introduces a no-fix devastating attack into their environment.

Outsiders may see this as a spirited industry debate, two well-meaning sides working through difficult questions. But, these debates are not always nuanced considerations of two valid choices. They’re often character attacks for not making a specific choice. The party that loses is the selfish “clout chasing” security researcher.

CVE-2025-53779 was issued for BadSuccessor and Microsoft did patch it. This sounds like a win! The issue was identified, reported, and fixed. Except, BadSuccessor is not a software mistake. It’s a byproduct of complex interactions of permissions and features in Microsoft’s Active Directory. Yuval reviewed Microsoft’s fix and found that it did address the wide-open privilege escalation he reported. But, even with the fix, dMSA adds attack-friendly credential and privilege abuse primitives.

Microsoft’s Bad Precedent

Kerberoasting and BadSuccessor remind me of a pattern I’ve seen before. To understand this pattern, we need to revisit pass-the-hash discourse from a decade ago.

We understand that if an attacker knows a username and password, they can do things as that user. Well, before 2FA anyways. But, it was quite a reveal in 1997, when Paul Ashton posted to NTBugTraq: Windows allows authentication with the user’s encrypted password. There’s no need to crack it. Just use it to authenticate. This is the pass-the-hash attack.

Mark Russinovich and Nathan Ide’s Pass-the-Hash: How Attackers Spread and How to Stop Them presentation at TechEd 2014 is a favorite overview. This talk was part of Microsoft’s efforts to socialize their new mitigations to address pass-the-hash, an effort that began in 2012 with the publication of guidance to mitigate pass-the-hash techniques.

As with Kerberoasting, Microsoft waited to acknowledge and act on the issue. 15 years for pass-the-hash. 11 for Kerberoasting.

I chalk this up to Microsoft’s Public Security Servicing criteria. This is a list of stuff that Microsoft will act on when it’s reported. The list is a necessity. A lot of intended functionality aids attackers and authorized users alike–Microsoft has to set expectations. But, dogmatic adherence to these criteria slows Microsoft’s action on risks from intended behavior–even when attack trends show a serious issue. User Account Control bypasses, as an example, don’t violate Microsoft’s concept of security boundaries and would go unaddressed for years.

We see this cultural tap dance with Active Directory issues. Mark and Nathan’s talk frames pass-the-hash as intended Single Sign-on behavior. This is the ability to use a share (the way Black Basta used shares) or print files without typing your username and password each time. Microsoft never admits their choices contribute to this problem, a stance that attracted a rebuke in Skip Duckwall and Chris Campbell’s post, “Why We Don’t Get It and Why We Shouldn’t.”

Microsoft’s pass-the-hash efforts followed a pattern we see with Kerberoasting now. Microsoft published guidance, made baseline changes, added mitigation features, and declared the problem mostly solved. Like Kerberoasting, Microsoft’s pass-the-hash efforts added a new attack tool. Prior to Windows 8.1, the Remote Desktop Protocol required the attacker to know the plaintext username and password to log in. But, thanks to the mitigation features, it became possible to pass-the-hash with RDP.

Researchers evaluated Microsoft’s baseline changes and found that yes, they took away some options. But, the attack remains intact. Will Schroeder’s 2014 Pass-the-Hash is Dead: Long Live Pass-the-Hash is one of the definitive explorations of this topic. The pass-the-hash (really, pass-the-*) attack is still alive today and part of breaches like Ascension.

Microsoft did deliver powerful mitigation features and I imagine they expect them to address the remaining attacks. The challenge is some of the features are complex and difficult to use without specialized knowledge. Or, not possible to use without disabling a needed legacy feature. This makes their adoption less common. But, Microsoft doesn’t weigh this. They argue their product is defensible and it’s now their users’ problem to figure out these features.

With Microsoft’s dMSA, we’ve again welcomed an opt-in mitigation with new attack opportunities into Active Directory. This is after Microsoft’s patch to “fix” BadSuccessor. Our collective awareness of the new attack primitives isn’t from Microsoft. It’s from Yuval and the red teamers who will further explore his work. It was probably no accident that Yuval titled his post-fix review: BadSuccessor Is Dead, Long Live BadSuccessor(?)

This Blame Game’s Final Round

Senator Wyden’s public letter to the U.S. FTC starts with a damning conclusion:

“I write to request that the Federal Trade Commission (FTC) investigate and hold Microsoft responsible for its gross cybersecurity negligence, resulting in ransomware attacks against critical infrastructure, including U.S. health care organizations, which have caused enormous harm to health care providers, put patient care at risk, and continues to threaten U.S. national security.”

It’s in this letter that we also learned Senator Wyden wasn’t too pleased with Microsoft’s actions so far:

“While my staff specifically requested that Microsoft publish and publicize clear guidance in plain English so that senior executives would understand this serious, avoidable cyber risk, Microsoft instead published a highly technical blog post on an obscure area of the company’s website on a Friday afternoon. Microsoft took no meaningful steps to publicize this blog post. Moreover, Microsoft declined to explicitly warn its customers that they are vulnerable to the Kerberoasting hacking technique unless they change the default settings chosen by Microsoft. As such, it is highly likely that most companies, government agencies, and nonprofits that are Microsoft customers remain vulnerable to Kerberoasting.”

I share Senator Wyden’s frustration here. When these issues were well-known and used by organized crime, Microsoft started a campaign to message it as human-operated ransomware. A tool and actor framing with a flawed defense roadmap that deflects blame from the systemic weaknesses in Active Directory. This message was bigger than my ability to take on, but it’s a message that’s fed the industry’s mob mentality to blame security researchers and the offensive security profession for these issues too.

Detecting and Mitigating Active Directory Compromises published by the Australian Signals Directorate has the guidance and acknowledgements Microsoft’s materials lack. It is one of many extra-Microsoft efforts to address this root cause issue.

Senator Wyden’s letter concludes with a damning statement about the whole thing:

“There is one company benefiting from this status quo: Microsoft itself. Instead of delivering secure software to its customers, Microsoft has built a multibillion dollar secondary business selling cybersecurity add-on services to those organizations that can afford it. At this point, Microsoft has become like an arsonist selling firefighting services to their victims.”

Ars Technica and other media outlets covered Senator Wyden’s letter. And, the Ars Technica article published a response from Microsoft:

”RC4 is an old standard, and we discourage its use both in how we engineer our software and in our documentation to customers – which is why it makes up less than .1% of our traffic. However, disabling its use completely would break many customer systems. For this reason, we’re on a path to gradually reduce the extent to which customers can use it, while providing strong warnings against it and advice for using it in the safest ways possible. We have it on our roadmap to ultimately disable its use. We’ve engaged with The Senator’s office on this issue and will continue to listen and answer questions from them or others in government.

In Q1 of 2026 any new installations of Active Directory Domains using Windows Server 2025 will have RC4 disabled by default, meaning any new domain will inherently be protected against attacks relying on RC4 weaknesses. We plan to include additional mitigations for existing in-market deployments with considerations for compatibility and continuity of critical customer services.”

Here, Microsoft latched onto the red herring of RC4, and failed another chance to own the Active Directory Domain privilege escalation problem set. Tim Medin also reacted to Microsoft’s response in his Kerberoasting, Microsoft, and a Senator post:

“Microsoft claims this affects “less than 0.1% of traffic”, but this statistic is misleading. It’s like saying “only 0.1% of our doors are used by burglars” – attackers will simply choose the weak door. Microsoft has prioritized backward compatibility over security, leaving all organizations vulnerable by default.

The truth is, since the protocol is enabled (by default), all an attacker needs to do is request an RC4 ticket. The surrounding traffic (99.9%) is irrelevant. Microsoft has left RC4 enabled because they have chosen to support backward compatibility (as mentioned in the quote), which leads to a faster Kerberoast attack.

The second paragraph is misleading as well. They state that “in Q1 of 2026 any new installations of Active Directory Domains using Windows Server 2025 will have RC4 disabled by default”. The key here is “new”. Existing Active Directory Domains will still maintain their backward compatibility, and RC4, by default. If a new server is installed, and it doesn’t support RC4, that doesn’t prevent the rest of the domain, and the Domain Controllers (the brains of the Domain) from preventing RC4.”

Working with these new details, Ars Technica published a second article. Their article used Tim Medin’s comments and explained Kerberoasting with more depth. When I saw the first article, I was excited by Senator Wyden’s attention to this systemic issue. But, Ars Technica’s second article went back to the same ol’ blame exercise. The article’s social media hook says it all:

“There’s no excuse for an organization as big and sensitive as Ascension suffering a Kerberoasting attack in 2025, though Microsoft is also partially to blame for the breach.”

Closing Thoughts

Thank you for your patience in staying with this long post. I didn’t know what this post would become when I started to write it. I saw a thematic opportunity to talk about Kerberoasting and advocate for offensive security. I didn’t expect, until I sat with the details, that the Ascension Breach, Senator Wyden’s pressure, Microsoft’s action on Kerberoasting, and the dMSA debacle were connected events.

Much of this post is about cybersecurity discourse and the blame game within it. What I hope you saw, with each reveal of new information, was a shifting concept of blame and moral responsibility. Whose actions led to this preventable tragedy? Whose actions or inactions contributed to this tragedy?

The initial narrative focused on the contractor and how their one-click “honest” mistake led to the downfall of Ascension. Some pointed fingers at Ascension, speculating that they did not take actions to prevent this foreseeable harm. Of course, we don’t know what actions Ascension did or didn’t take. This isn’t public information. We know nothing about their security maturity or IT baseline. If we did, we might have a chance to evaluate Ascension’s defense-in-depth and ask ourselves, what was missing or what didn’t work? This information doesn’t exist though. I think this is the industry being self-protective. But, if we knew this information, breach after breach, we might find that many organizations do the reasonable things and organized crime still succeeds. Senator Wyden’s letter brought Microsoft into this exercise. And, Senator Wyden asserts that because Microsoft failed to warn their customers or address these issues–they are blameworthy too.

This cybersecurity culture of blame and moral outrage is why its narratives omit details, encourage deflection, and repeat easy answers. Players either feed this ugliness or withdraw altogether, because it’s a no-win discourse. There is no sober ground truth in cybersecurity, even though the frequency, scale, and repeated patterns of failure suggest a systemic cybersecurity failure rather than persistent one-off cases of organizational negligence or incompetence.

If I were playing the blame game, I would point at this broken discourse. I started my commercial offensive security work in the Windows XP-era. Back then, cybersecurity was a staid cost-center with established players. Since then, we’ve had technical improvements, cybersecurity has become a large industry, and society feels the stakes and consequences of our work. Yet, in this climate of innovation, resources, and buy-in; a single click from a contractor can collapse the IT of a large healthcare system? That’s the narrative? It’s like saying a skyscraper collapsed because someone left a window open. The problem isn’t the click. It’s this vacuum of information that no one questions.

For the security discourse, who to blame isn’t the right question. Instead, we should look at breaches systematically, with the full ground truth, and ask a different question. For each contributing factor, viewed as a system, who has the most leverage to act and prevent future criminal success? Active Directory privilege escalation is a root cause cybersecurity issue and Microsoft alone has the leverage to act on this in a systemic way. While I do not believe Microsoft perpetuates Active Directory privilege escalation to sell security products, I would like them to own Active Directory privilege escalation as a product flaw and action it to give their customers a fighting chance. I applaud Senator Wyden for adding weight to this downplayed part of the security conversation. And, to the Yuval Gordon and Tim Medin offensive security researchers out there—thank you for bringing these issues to light. You did your job, and we couldn’t have this discussion without your insight.

Disclosures: I was a cybersecurity researcher and entrepreneur, until I stepped away from the industry in 2021. The above are my opinions. I do not speak for any projects or companies I’ve had or have affiliation with. I own stock in Fortra and SpecterOps.

COFFing out the Night Soil

I’m back with another update to the Tradecraft Garden project. Again, this release is focused on the Crystal Palace linker. My priority in this young project is to build the foundation first, then the rest can move in earnest.

What’s New?

The major focus of this release was the re-development of Crystal Palace’s COFF parser and intermediate COFF representation. I’ve also added COFF normalization (e.g., collapsing multiple related sections) into Crystal Palace’s internals.

This work paved the way for some standard linker features in Crystal Palace:

(1) This release adds COFF merging via the Crystal Palace ‘merge’ command.

(2) Crystal Palace can now export a COFF file as output. This is done with the ‘make coff’ command.

And, with these new features comes complexity and a lot of room for regressions. So, on my end, I’ve put together a local unit testing regimen consisting of open source BOFs, personal code from Raffi’s stash, and specifically crafted test cases to help defend the quality of this project going forward. This part of the project is not released, but I wanted to mark that this is part of the development process now.

Why these features?

Inspired by Daniel Duggan’s Modular PIC C2 agents blog post, I spent some time asking myself, what would it look like to use Crystal Palace to assemble a capability and later, as a separate step, apply tradecraft to it? I didn’t like my answer and developed the above features to help.

Here’s the concept:

If you’d like to use Crystal Palace to assemble a capability, now you can. Create a specification file to build your capability. It’s fair to think of your .spec file as a linker script, because that’s exactly what it is. And, within that .spec file, do whatever you want to assemble your capability. At its simplest, you can do something like this:

x64:
   load "bin/kernel.x64.o"
      make coff

      load "bin/http_c2.x64.o"
         merge

      load "bin/utils.x64.o"
         merge

      export

The above is a hypothetical specification file for a simple agent with a kernel, http_c2, and utils modules. make coff directs Crystal Palace to export a normalized COFF file. The merge command acts on the content at the top of the Crystal Palace program stack and merges it into the object (COFF export, PICO, PIC/PIC64) that’s next on the program stack.

There’s nothing special about the above. Merging and normalizing to a target convention is what every linker ever does. But, the above option now co-exists with Crystal Palace’s other tools specialized to this PIC problem set.

Exporting COFF via a .spec file is convenient, because Crystal Palace can pair COFF with tradecraft implemented as a position-independent PICO runner. PICOs are Crystal Palace’s executable COFF convention.
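To make this pairing concrete, here’s a hypothetical sketch. The file names, the use of push $OBJECT with make coff, and the "pico_data" resource name are my assumptions, mirroring the patterns elsewhere in this post rather than quoting a tested spec:

x64:
   load "bin/pico_runner.x64.o"
      make pic

      push $OBJECT
         load "bin/capability.x64.o"
         make coff
         export
         link "pico_data"

      export

The idea: the PICO runner is built as position-independent code, and the capability’s normalized COFF is linked into it as a named resource for the runner to execute at runtime.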

The technical benefits of PICOs, as a capability container:

  • Projects are smaller than DLLs
  • You can split your code and data in memory!
  • It’s mutate-able (Crystal Palace has a built-in binary transformation framework to mutate, remove unused functions, and shuffle functions in your compiled code)
  • A PICO paired with a loader is easier to develop than straight PIC.
  • The PICO loader is very simple, because so much merging and processing happens within the linker itself
  • And, you get that tradecraft and capability separation ground truth thing I’m going on about!

Migration Notes

If you’re already starting to experiment with Crystal Palace, I have a few notes for you:

Crystal Palace will no longer auto-resolve x86 PIC relocations with a partial function pointer. Use reladdr “_symbol” to give Crystal Palace permission to apply this non-conventional strategy. This will come up most often where a reference to the entry point _go is used to determine the beginning of the position-independent code.
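For example, a hypothetical x86 spec block granting this permission for the _go entry point (the placement of reladdr relative to the other commands is my guess; check the release notes for the exact convention):

x86:
   load "bin/loader.x86.o"
      reladdr "_go"
      make pic
      export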

I’ve updated picorun.h to support relocations in your PICO’s data. In theory, the old picorun.h is still compatible (with previously working projects). But, if you have troubles on update, grab the new picorun.h from Tradecraft Garden and recompile your project. There’s no API change.

To see what’s new, check out the release notes.

Taking them to the SHITTER: an analysis of vendor abuse of security research in-the-wild

I have a personal interest in incidents of vendor disparagement and attacks on security researchers (and their security research).

It’s in this context I need to address Elastic’s July 2025 blog post, ‘Taking SHELLTER: a commercial evasion framework abused in-the-wild’.

I see this as a vendor fearmongering exercise. Elastic has built a case for why SHELLTER is a blog-worthy security concern. They’ve conducted a technical analysis and released tooling to knock this bogeyman down. Then, they conclude the world is inevitably less safe because Shellter exists.

I hesitated to jump in on this one, because while I do see this as vendor disparagement and an attack on security research—on the spectrum of what I’ve seen towards others and personally experienced, this isn’t especially egregious.

Still, this is a helpful case study. There’s good here. And, its less egregious nature demands a finer point on the specific ethical lines Elastic crossed. This perspective isn’t in the discourse, because anyone too sympathetic risks the scapegoat anti-security label.

The Good

First, I like that Elastic made clear that Shellter is a commercial security testing tool. I like that Elastic linked to Shellter’s website. I really like that Elastic acknowledged that they have read Shellter’s public documentation. And, I especially appreciate that Elastic incorporated Shellter’s vetting process document into their narrative and linked to it. These are details many vendors deliberately omit and I recognize them as a step to soften the direct reputation damage to Shellter.

As a self-protective measure, Shellter was well served having published information about their company’s processes and controls in advance. The best time for a business to make a statement in its defense is in advance of anything happening.

I also want to laud Shellter’s use of customer attribution markers as a product protection control. Clearly, this showed efficacy when Shellter acted on these samples and shut the leak down. And, while I’m adamant that Elastic’s post is overstated fear, Shellter published a second, follow-on statement to this incident, sharing added product control measures they will implement.

Similarly, understand what risk the Shellter leaker has taken on. If those samples get connected to a law enforcement investigation, one subpoena is all that’s needed to identify this party for a potential conspiracy charge. I hope this true culprit is shitting their pants right now. Attribution isn’t some never-used reactive control. It’s a dual-use deterrent. The leaker’s current vulnerability is a reminder of this.

Analysis

I take no umbrage with Elastic analyzing samples they find on VirusTotal or elsewhere. That’s part of how they do their job. And, I appreciate that unlocking technical knowledge from samples is threat intelligence.

I can’t speak for them, but I imagine the Shellter team has zero concerns with this either. This analysis (even when it requires adaptation) is part of “the game”.

Many other firms have analyzed and acted on Shellter samples in the past. I’m certain this isn’t Elastic’s first rodeo with it. And, no instance of past analysis has likely served as anything other than a speed bump (IF that) for Shellter in its 10 years of commercial existence. I’m certain this analysis and tooling release is no different.

I am not against published analysis. It’s part of the security conversation. Published analysis of my work fed my thinking. But, there’s an often ignored line here. Over time, many parties co-opted my name and efforts to sell unchecked security efficacy and generate fear. I became hyper-vigilant to this. While I am not anti-published analysis, I care deeply about the integrity of how it’s used.

The Bad (Faith)

I have big concerns when technical analysis is wielded in a self-serving way to disparage a party or mislead the public. Elastic is doing so here.

While I could nitpick propaganda-style techniques in the post (e.g., putting VirusTotal sample submission dates on a graphical timeline, to represent observed “scary campaigns”)—the core issue here is framing. That is: the facts selected, how they’re presented, and the non-dialectical conclusion Elastic steers their readers to.

What we have in this blog post is 37 printed pages of technical analysis, all of it informing and technically well done. Whether it’s 37 pages, 100 pages, or a Wheel of Time-length saga, Elastic’s message remains the same: Shellter’s existence is an inevitable problem.

Here’s Elastic’s conclusion, a key public discussion point. Let’s break it down:

“Although the Shellter Project is a victim in this case through intellectual property loss and future development time”

Elastic has pre-emptively framed any concern Shellter might raise as a concern about a hit to their evasion IP. Boo-fucking-hoo for them. This feigned sympathy is self-serving and misleading. It casts Elastic’s effort as defenders winning at their job vs. the shadowy, “what good are they, really?” offensive security tooling vendor stomaching a technical loss. This is viscerally satisfying to some cybersecurity readers, but is detached from any impact to Shellter’s efficacy or that of loaders like it. Having sat in Shellter’s shoes here, the impacts are diminishment of our character reputation (e.g., integrity, commitment to cybersecurity, respect for professional responsibility, etc.) and erosion of public good will towards our efforts.

Which gets to the mask-off sentiment Elastic is stoking here:

“other participants in the security space must now contend with real threats wielding more capable tools”

Elastic is stating, despite Shellter’s possible concerns, the real weight is now borne by Elastic and other cybersecurity professionals engaged in more important work. Elastic and society, collectively, are the greater-loss victims in this circumstance. A convenient assertion for a bully, right?

Elastic’s imbalanced and misleading framing undermines Shellter’s validity, overshadows its security contributions, and diminishes the offensive security profession.

This is all part of a non-nuanced norm in the cybersecurity industry: A suspect sample on VirusTotal is self-assigned license for any company to seize someone’s brand, assign the narrative they choose, and insist their motives are immaterial–they are acting on moral authority beyond reproach. This deprives subject researchers of agency and voice in their work.

Why does this matter?

Something most client-facing offensive security practitioners understand: do not point fingers and do not create blame. We understand that when we talk about vulnerabilities, bypasses, or describe security failure—there’s a human tendency to latch onto a single element of that chain (maybe a vendor, maybe a person) and unhelpfully hold them to account for all failures, blind spots, and shortcomings that led to the outcome.

We respect that when we draw security implications from someone else’s work, we hold their reputation in our hands. And, if we’re acting in good faith, we have a professional obligation to honor this. Our discussion of findings, at its best a set of systemic lessons learned that apply elsewhere, should not unfairly diminish the unlucky subject.

These values are why I am so taken aback by Elastic’s post. It’s disproportionate finger pointing directed at Shellter.

It’s possible that, existing in a different professional community, Elastic doesn’t cherish or care for the above values and ideas. But, they matter and they’ve come about after years of offensive security professionals going through our behave-like-assholes-with-our-findings phase. This behavior seeds deep animus. We learned that’s not the way to get things done. It’s not the way to move things forward. Elastic’s leadership would do well to give these values and ideas consideration in their company’s culture.

But, in the meantime, I need to draw attention to Shellter’s plight. Elastic has a larger voice, a broader media reach, and a different relationship with the general public vs. Shellter and other players in their often misunderstood niche professional community. Elastic’s choice to misframe Shellter as solely an inevitable cybersecurity problem feeds sentiment that’s led to social media mobbing and normalizes researcher disparagement.

The InElastic (Inconvenient to Elastic) Truth

Shellter is an important cybersecurity tool. Here, I’ll remind why.

Security is hard. And, despite all of the marketing bluster and sales promises, every technology has limitations. While EDR is a fantastic host-based compensating control, EDR by itself doesn’t win the security game. It’s only part of the puzzle.

Now, here’s the problem. There’s so much noise, misinformation, and complexity in cybersecurity that it’s hard to know what matters and how to navigate security decisions. Many security vendors, unfortunately, are too eager to sell their flatulence and contribute to this noise and misinformation.

Security needs objective voices, and offensive security experts are just that. Beyond the constant horn-locking over definitions and engagement models, what red teaming offers is clear: ground-truth information, rooted in experience with how attacks really work, to inform effective security practice. This battle-tested insight is something that benefits organizations and the broader industry.

Shellter plays into this. Shellter is packaged ground truth demonstrating the limits of EDR. For any security advisor (and their clients), Shellter is a tool to demonstrate the need for defense-in-depth and advocate for resources to have a complete defense posture. And, it’s invaluable for exercising the same.

Elastic’s uncomfortable truth is this: they likely get support tickets and sales pushback asking, “Why did a Shellter payload blind your product?” Shellter’s own brand-seizing marketing has not helped. But even good EDR has limitations. Every EDR customer should assume that some loaders will bypass their EDR and blind some actions. This isn’t some difficult bar met solely by Shellter. It’s a baseline everyone should know when they assess the security value their licensed products give them. A few samples on VirusTotal paired to an info stealer does not change this security reality or require action from the broader security profession.

Closing Thoughts

Broadly, there’s a tendency for some EDR and Threat Intelligence teams, acting more like Military Psy Ops than objective intelligence professionals, to treat anyone working in seeming opposition to their business model as insulting their profession and existence. This leads to articles like what Elastic published here.

Regardless of what Elastic says, Shellter is an important security tool. It’s representative of how far EDR has come (the fact some red teams buy Shellter vs. writing loaders themselves) and it’s a reminder of where EDR still needs to go.

Security thinkers, sitting at the edge of these problem sets, understand that tools like Shellter are not the problem. They represent a category of attacker possibility. The way to objective progress is to address these attacker possibilities with categorical solutions. Elastic understands this. They’re at the forefront of marrying call stack analysis with telemetry to reveal memory-injected payloads. And, as Intel’s control-flow enforcement technology becomes more widespread, many stack spoofing techniques used by evasive loaders will become invalid. That’s Elastic’s technical leadership.

I would have loved to see Elastic frame Shellter as an exemplar of what a modern loader and evasion cocktail looks like. It would demonstrate good faith if Shellter wasn’t in the headline. The same technical analysis would apply. But, Elastic could have owned the EDR limits Shellter acts on, articulated why those exist, and emphasized Elastic’s vision of how coming technologies will obsolete some of these techniques. This would have demonstrated a coherence of vision within Elastic and remained congruent with their commitment to technical transparency. Instead, we get this regressive cheap shot, demonizing a niche cybersecurity player meeting a valid need. Do better.

Tradecraft Garden: Tilling the Soil

Today, I’m releasing another update to the various Tradecraft Garden projects. This release is a dose of Future C2 and some cool additions to the Crystal Palace tech. Here’s the latest:

Code Mutation and More…

This release adds a Binary Transformation Framework (BTF) to Crystal Palace. The BTF gives Crystal Palace the ability to disassemble programs, modify them, and put them back together into a working program.

Tools built on the Binary Transformation Framework are exposed as +options to Crystal Palace’s make object/make pic specification file commands.

x64:
	load "bin/loader.x64.o"
		make pic +optimize

		push $OBJECT
			make object +optimize
			export
			link "my_data"

		disassemble "out.txt"

		export

The options are:

+optimize enables a link-time optimization pass over the COFF. This is a way to remove unused functions from the COFF or PIC. For example, if you #include a mini C runtime into your loaders/programs, this will get rid of any functions you didn’t use.

+disco is a feature I call function disco. It randomizes the order of functions within the COFF or PIC, but if you’re using “make pic” it respects the first function as the required entry point and doesn’t randomize that.

+mutate is a code mutator. The intent of this feature isn’t to frustrate reverse engineering; rather, it’s to bring some variety to the code and create content-signature resilience. The initial Crystal Palace mutator focuses on breaking up constants, stack strings, and creating some noise in the program.
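For a flavor of what breaking up constants means, here is a sketch in plain C. This illustrates the general technique, not Crystal Palace’s actual mutator logic: an immediate constant K is replaced by a pair (a, b) chosen so a ^ b recovers K, so the bytes encoding the constant vary per build while runtime behavior is unchanged.

```c
#include <stdint.h>

/* One constant-splitting idea a code mutator might use: replace an
 * immediate constant k with a pair (a, b) such that a ^ b == k.
 * The cover value varies per build, so the encoded bytes vary too. */
typedef struct { uint32_t a, b; } split_const;

split_const split_constant(uint32_t k, uint32_t cover) {
    split_const s;
    s.a = cover;      /* arbitrary cover value, different each build */
    s.b = k ^ cover;  /* complement chosen so a ^ b recovers k       */
    return s;
}

uint32_t rebuild_constant(split_const s) {
    return s.a ^ s.b; /* what the mutated code computes at runtime */
}
```

The same shape works with add/sub pairs or other invertible operations; the point is that a content signature keyed on the literal constant no longer matches.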

Future C2 Pivot

I don’t see Tradecraft Garden as a Future C2 effort. Rather, my goal is to containerize and separate evasion tradecraft from C2s. I see this as helpful to socialize security research as security ground truth, something applicable to uses beyond security testing exercises. I see this as a path to invite a broader market for this capability. This isn’t an abandonment of red teaming, quite the opposite: red teaming is the idea factory. My goal is more opportunities for researchers and the intellectual freedom to pursue our work without harassment.

Bigger ends aside, I appreciate that pursuing the above requires this project to go where researcher interests live.

To accommodate those of you working with position-independent code projects, I’ve added a ./piclink command (and Java API) to use a Crystal Palace specification file to assemble a position-independent code project without a target capability to pair with. This gives you the benefit of Crystal Palace’s error checking, code mutation, link-time optimization, and ability to pack multiple resources together and link them to symbols within your PIC (or… PICOs).

If I were looking to replace DLLs as a capability container, I wouldn’t bet everything on position-independent code. I would also look at COFF combined with a PIC loader, similar to what we do with memory-injected DLLs now. The one downside of the loader is the need to allocate new memory for the capability (vs. executing it in place). But, the upside is you can instrument a COFF and separate tradecraft much the same way that’s possible with a DLL (e.g., the Tradecraft Garden design patterns for proxying calls, execution guardrails, etc. are the same). The benefit of COFF is it’s still very small, it’s mutate-able, and it’s easier to work with than position-independent code. Further, executing a capability via a PIC loader with follow-on one-time and persistent tradecraft pieces allows memory clean-up of the tradecraft as it loads and executes.

To support the above, I’ve made some changes to Crystal Palace:

The ./link command and Java API in Crystal Palace now transparently accept PICOs (Crystal Palace COFFs) or DLLs. If a COFF file is specified, it’s set to $OBJECT. To state the obvious: the specification file (and loader code) need to accommodate your PICO too, but it’s possible to support DLL and COFF capability via a single specification file.

I’ve also ported the test demonstration program from DLL to COFF. And, so, Crystal Palace now comes with test.x86.o and test.x64.o which will pop a “Hello World” message box. Pretty exciting, right? And, I’ve added Simple Loaders to Tradecraft Garden to demonstrate the basics of building tradecraft that can accommodate either a DLL or COFF.

And, some helpful material…

I’ve also published a video on PIC development fundamentals to the Tradecraft Garden Amphitheater. The goal of this video is to share some fundamental knowledge for writing PIC, whether you’re using Crystal Palace, Stardust, or something else.

Check out the release notes to see a full list of what’s changed.

Enjoy!

Beacon Object Files – Five Years On…

When I was active in the red teaming space, one of my stated goals was to act on problems with solutions that would have utility 5-10 years from the time of their release. This long-term thinking was my super-power. One of the lasting fruits of my approach is Beacon Object Files.

Five years ago yesterday, I shipped Beacon Object Files with Cobalt Strike 4.1.

In this blog post, I’ll walk you through some history of Beacon Object Files and add some insight to the questions and discourse around them today.

Late-2010s Problems

Go back to the late-2010s and Cobalt Strike had a fork&run problem. Fork&run was not some doomed-from-the-start design choice. It was a thoughtful way to separate post-exploitation functionality from Beacon, to ensure the stability of and limit the size of the Beacon agent. Cobalt Strike’s fork&run (+ remote inject) post-ex architecture also allowed user exploitation (taking screenshots, logging keystrokes) from remote processes without migrating the agent. And, in the messy time when Beacon was x86-only, fork&run was a way to make sure hashdump, mimikatz, and other architecture-sensitive capabilities ran inside a native-arch container process.

When EDR showed up in 2016, targeting behaviors related to fork&run was a vogue thing. I responded to a lot of that by creating flexibility in Beacon’s fork&run chain and advocating for a practice I called session prepping.

Go back to the late-2010s and every C2 had a memory-injected DLL problem. The common container for Cobalt Strike’s post-exploitation functionality was the Reflective DLL. I recognized that memory-injected DLLs and their tells were about to become a cat&mouse focus of defensive security. Every evasion eventually becomes a tell! I started my work on in-memory evasion in 2017. To play this anticipated cat and mouse game, I followed a playbook I brought to many situations: I looked at the observable attack chain (as a defense-minded programmer might), I identified the observable properties of the attack events, and I mapped out the modalities and opportunities to play with these properties. This wasn’t driven by “reverse engineering EDR” (the favored future-predicting activity in C2 today?!?). This was a thought exercise and guesswork about what variation is possible, and exposing tools to play with that.

These two problem sets got me thinking about what was next. While I recognized I could buy breathing room by self-injecting DLLs into Beacon’s process, I didn’t see that as the path forward by itself. I needed something else.

What was next?

In 2019, I was working on my plans for what would become Cobalt Strike 4.0. During this time, I was in discussions with Fortra about a potential acquisition of my company. And, I had a choice to navigate. I could keep improving the 3.x codebase and focus on the acquisition itself. Or, I could work the 4.0 release and set the bones and direction of the product for the next four years. I chose to work the 4.0 release and even asked for a pause in the acquisition discussion to finish out the 4.0 release and 2019 red team training course. My future colleagues at Fortra were more than gracious about this.

One of the things on my mind, for the 4.0 release, was this post-exploitation weaponization problem. I wanted another way to bring external capability into the Beacon agent, but without fork&run and without memory-injected DLLs.

My instinct was to create something I called inline-execute. Basically, a mechanism to run a capability passed to Beacon, give it access to some internal APIs, and have Beacon clean it out of memory after it had run.

I saw this mechanism as useful for tools that might have done well baked directly into Beacon, but instead lived as fork&run DLLs because Cobalt Strike had no other mechanism for external post-ex tools. Think privilege escalations, information gathering utilities, lateral movement primitives, etc.

Given that I wanted to avoid memory-injected DLLs, my first instinct for inline-execute was to look at writing post-exploitation tools as position-independent code. I knew some folks had success with Matt Graeber’s 2013 Writing Optimized Windows Shellcode in C and that’s what I took cues from.

I copied one of my precious Win32 Reflective DLL project templates and started to write my various inline-execute PIC tools into a single DLL. I used function inlining to create a situation where each capability compiled into one mega-function exported by the post-ex DLL. All Cobalt Strike had to do was identify the bounds of that function in the DLL, carve it out as needed, and voila—position-independent code.

To support building capability, I created a source code framework to aid resolving Win32 APIs and making the needed functions and pointers available within my PIC capability. This old screenshot gives a good flavor of what I had:

After porting several capabilities over to this framework, I concluded I didn’t have something I could expose for others to build on. What I had was a level of tedium and error-prone development that I wasn’t willing to wish on my worst enemy.

That’s something about position-independent code that doesn’t come up much. It’s very error prone. When you rip shellcode from a DLL, executable, or COFF—your compiler assumes that the linker and operating system loader are going to share the burden of giving your program all the details it needs to run, before it runs. When you go to PIC, you’re taking the operating system loader (and sometimes, the linker) out of the equation and there’s no error checking to help you. You’re on your own to know the limitations, know why things break, and proactively avoid those pitfalls. (I’m trying to alleviate some of this with Crystal Palace.)
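To make the failure mode concrete, here is a hedged example in plain C of the classic PIC pitfall: data references that assume a loader and data sections will be present. The stack-string workaround embeds the bytes in the code itself.

```c
#include <string.h>

/* Why plain C breaks as PIC: a string literal lives in a data section
 * (.rdata) and the compiler references it by an address the linker and
 * loader normally make valid. Carve the code bytes out of the module
 * and that reference can point at nothing. */
const char *literal_string(void) {
    return "hello";   /* compiler emits a reference into .rdata */
}

/* The classic PIC workaround: build the string character-by-character
 * so the bytes are embedded in the instructions themselves and no
 * data section is required at runtime. */
void stack_string(char *out) {
    out[0] = 'h'; out[1] = 'e'; out[2] = 'l';
    out[3] = 'l'; out[4] = 'o'; out[5] = '\0';
}
```

Both versions behave identically in a normally linked program; the difference only bites once the code is ripped out as raw shellcode, which is exactly why there is no error checking to warn you.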

The above is what I shipped with Cobalt Strike 4.0 though. And, if you fired any of the inline-execute tools in that release, they were built with this approach. Still, I knew I needed something else.

Beacon Object Files

Shortly after Cobalt Strike 4.0 shipped, maybe December 2019, I was rifling around in the bin/ folder of my compiler output. And, I was looking at the .o files. I was interested in whether or not I could parse the COFF and find ways to make my inline-execute DLL carving tighter. I started playing around with objdump and looking at the information in these files, and it hit me: maybe I can just parse and run my capability from these files directly.

Now, knowing what I was coming from with straight PIC—imagine my delight when I first POC’d the above and found that global variables, strings, function pointers, and a lot of other things suddenly “worked” in this context—and, important to Cobalt Strike at the time, in both x86 and x64 builds.
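For the curious, the COFF file header that makes this kind of tooling feasible is a fixed 20-byte structure at the front of every .o file. A minimal, portable sketch of reading it (field offsets per the PE/COFF specification; the struct and function names are mine):

```c
#include <stdint.h>
#include <stddef.h>

/* Fields pulled from the 20-byte COFF file header (IMAGE_FILE_HEADER). */
typedef struct {
    uint16_t machine;       /* 0x8664 = x64, 0x014c = x86 */
    uint16_t num_sections;
    uint32_t symtab_offset; /* PointerToSymbolTable */
    uint32_t num_symbols;
} coff_header;

/* COFF is little-endian on disk; read it portably. */
static uint16_t rd16(const uint8_t *p) { return (uint16_t)(p[0] | p[1] << 8); }
static uint32_t rd32(const uint8_t *p) {
    return (uint32_t)p[0] | (uint32_t)p[1] << 8 |
           (uint32_t)p[2] << 16 | (uint32_t)p[3] << 24;
}

/* Parse the fixed header at the front of a .o file. Returns 0 on success. */
int parse_coff_header(const uint8_t *buf, size_t len, coff_header *out) {
    if (len < 20) return -1;
    out->machine       = rd16(buf + 0);
    out->num_sections  = rd16(buf + 2);
    /* bytes 4-7: TimeDateStamp (skipped here) */
    out->symtab_offset = rd32(buf + 8);
    out->num_symbols   = rd32(buf + 12);
    /* bytes 16-17: SizeOfOptionalHeader, 18-19: Characteristics */
    return 0;
}
```

Section headers, symbols, and relocations follow from these counts and offsets, which is all objdump is walking when it prints this information.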

But, this discovery wasn’t the full equation of the feature. I had other problems to solve.

I wanted to get rid of the source code framework altogether. And, that meant, one of the first problems I had to solve was resolving Win32 APIs. I was firm on the design goal that these programs should act as stand-alone units of execution (e.g., everything needed to load and run them is within the file). I didn’t want the user to have to provide “hints” about which libraries were associated with a Win32 API. I didn’t want the loader, for these files, to “guess” the appropriate library either. This is where the Dynamic Function Resolution convention of MODULE$Function comes from.

Some folks have an aversion to the Dynamic Function Resolution mechanism that puzzles me. DFR literally happens when the capability is loaded, and its memory fingerprint (in data memory) is just an array of function pointers. The module/function strings aren’t in the post-load code or data. Further, the loader determines how these functions are resolved (e.g., you don’t have to use GetProcAddress and LoadLibraryA) and the loader can zero out the file’s pre-load content (where the DFR strings live) before the capability runs.
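A loader acting on the MODULE$Function convention does something like the following at load time. This is a simplified, portable sketch with my own names; the actual resolution step (LoadLibrary/GetProcAddress or an alternative) and the relocation patching are left to the loader:

```c
#include <string.h>
#include <stddef.h>

/* Split a Dynamic Function Resolution symbol ("KERNEL32$GetProcAddress")
 * into its library and function parts. A real BOF loader does this for
 * each undefined symbol, resolves the address, and writes the pointer
 * into the relocation site before the capability runs. */
int dfr_split(const char *symbol, char *module, size_t mlen,
              char *func, size_t flen) {
    const char *sep = strchr(symbol, '$');
    if (!sep) return -1;                       /* not a DFR symbol */
    size_t m = (size_t)(sep - symbol);
    if (m + 1 > mlen || strlen(sep + 1) + 1 > flen)
        return -1;                             /* names don't fit */
    memcpy(module, symbol, m);
    module[m] = '\0';
    strcpy(func, sep + 1);
    return 0;
}
```

After resolution, only the array of function pointers remains in memory; the MODULE$Function strings can be zeroed with the rest of the pre-load content.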

Another point of concern is argument passing. While BOFs later became a convention adopted by most other C2s, my interest was a successful and thought-out system for Cobalt Strike users. And, in this context, argument passing was appropriately handled. One of my use cases for BOFs was privilege escalation. And, to do privilege escalation, in some contexts I needed to pass shellcode. It made sense to me to just pack arguments, based on their type, and provide a developer-friendly C API to recover those packed arguments.

The piece Cobalt Strike has, that not every C2 framework got the memo on, is an integrated glue scripting language. Sleep, in this case. Some consider Sleep Cobalt Strike’s nepo-library (e.g., I wrote the language, so it’s what I chose over ‘better’ options). Sleep was a deliberate and thoughtful choice. Sleep was a success in my early-2000s IRC client jIRCii, which had a vibrant community of scripters. When I recast Sleep/Cortana into Aggressor Script in Cobalt Strike 3.0, I drew heavily on IRC scripting community best practices going back to the 1990s. (Does anyone remember the community-created mIRC theming standard, using aliases to redefine output in a script-neutral way? I do, and that’s the “set” keyword in Aggressor Script.)

Cobalt Strike’s Sleep integration, scoped to acting as a glue language (e.g., defining new aliases, pop-ups, etc.), is part of why BOFs work in Cobalt Strike and why what I delivered feels like a complete-enough solution. Cobalt Strike users, who are also BOF developers, have no issue defining a new command for Beacon and packing BOF arguments as needed.
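The pack-by-type idea can be sketched with a toy parser in the spirit of Beacon’s data API. The real BeaconDataParse/BeaconDataInt/BeaconDataExtract functions are declared in beacon.h; this stand-in uses my own names and a simplified wire format (4-byte little-endian ints, length-prefixed blobs):

```c
#include <stdint.h>
#include <string.h>
#include <stddef.h>

/* Cursor over a packed argument buffer, in the spirit of Beacon's
 * datap structure. This is an illustrative stand-in, not the actual
 * Cobalt Strike wire format. */
typedef struct { const uint8_t *buf; size_t len, off; } parser;

void parser_init(parser *p, const uint8_t *buf, size_t len) {
    p->buf = buf; p->len = len; p->off = 0;
}

/* Read a 4-byte little-endian integer argument. */
int parse_int(parser *p, int32_t *out) {
    if (p->off + 4 > p->len) return -1;
    const uint8_t *b = p->buf + p->off;
    *out = (int32_t)((uint32_t)b[0] | (uint32_t)b[1] << 8 |
                     (uint32_t)b[2] << 16 | (uint32_t)b[3] << 24);
    p->off += 4;
    return 0;
}

/* Return a pointer into the buffer plus a length, the way a
 * length-prefixed binary blob (e.g., shellcode) would be recovered. */
const uint8_t *parse_blob(parser *p, uint32_t *outlen) {
    int32_t n;
    if (parse_int(p, &n) != 0 || n < 0 || p->off + (size_t)n > p->len)
        return NULL;
    const uint8_t *data = p->buf + p->off;
    p->off += (size_t)n;
    *outlen = (uint32_t)n;
    return data;
}
```

The controller-side script packs arguments by type; the BOF walks them back out in the same order with calls like these, which is why shellcode-sized arguments are no problem.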

The loader is a place where I had a lot of choice too. Something I find fascinating is that many C2s opt to send the entire COFF down, parse it within their agent, and make the right things happen there. This is easier, and I appreciate why it’s attractive. But, I was trying to preserve the lightness of PIC and I wanted minimal loading code (and strings) within the Beacon agent. I did a lot of the pre-processing of the COFF within Cobalt Strike and sent something much more stripped down to Beacon. This stripped-down thing only required a small function to handle dynamic function resolution and a few relocations before passing execution to the capability.

All of this is what I shipped with Cobalt Strike 4.1.

Of course, that’s not the full story. One of the first to see the potential of Beacon Object Files, and jump on it full bore, was Chris Paschen from TrustedSec. He immediately set to work porting several commands from ReactOS to Beacon Object Files and summarized the lessons learned in A Developer’s Introduction to Beacon Object Files. And, in 2021, Kevin Haubris, also from TrustedSec, released the open source COFFLoader to demonstrate how to run BOFs outside of Cobalt Strike. These two actions were catalysts to Beacon Object Files becoming a widespread convention beyond Cobalt Strike and demonstrating the potential of this tech to others.

What I didn’t do…

When I shipped any feature, I tended to favor an approach where I would release something minimal first, see what I learned from the discourse after releasing it, and use that to evolve the feature going forward. I’m a big believer in iterating on something over time and I absolutely relied on the public conversation to do this successfully.

Here are some of the problems I didn’t address:

The first one is uninitialized global variables. The initial release of Cobalt Strike Beacon Object Files did not have an awareness of the .bss section of a COFF. I don’t know if, five years later, this was ever addressed. This wasn’t a design decision. It was my oversight. If a loader doesn’t support .bss today, I’d consider that a bug. If you’re waiting for a design document, approval from on high, or a blog post discussing the benefits of everyone adding .bss support to their BOF runners—consider this your sign: it’s OK to fix this now. It’s an easy fix.
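The .bss fix really is small. In a COFF object, an uninitialized-data section has a size but no file contents, so the loader just needs to hand back zeroed memory instead of copying bytes from the file. A hedged sketch, with the section header simplified to the three fields that matter here:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* IMAGE_SCN_CNT_UNINITIALIZED_DATA, per the PE/COFF spec. */
#define SCN_UNINITIALIZED 0x00000080u

/* Simplified view of a COFF section header: for .bss, the size is
 * nonzero but there are no bytes in the file to copy. */
typedef struct {
    uint32_t size;         /* SizeOfRawData */
    uint32_t file_offset;  /* PointerToRawData; 0 for .bss */
    uint32_t flags;        /* section Characteristics */
} section_hdr;

/* Map one section: zero-filled memory for uninitialized data, copied
 * file bytes otherwise. Skipping the .bss branch is exactly the bug
 * that leaves uninitialized globals pointing at nothing. Caller frees. */
uint8_t *load_section(const section_hdr *s, const uint8_t *file) {
    if (s->flags & SCN_UNINITIALIZED)
        return (uint8_t *)calloc(s->size, 1);  /* zeroed, no file bytes */
    uint8_t *mem = (uint8_t *)malloc(s->size);
    if (mem) memcpy(mem, file + s->file_offset, s->size);
    return mem;
}
```

A real runner also has to point the symbol table entries for .bss symbols at this allocation, but the allocation itself is the whole of the missing feature.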

I didn’t have a plan for long-running Beacon Object Files. I don’t consider this an oversight. It was a thought-out scoping decision. My goal with BOFs was to move existing Beacon functionality out of the agent and to move stuff that used fork&run (but didn’t need it) away from that convention. That’s it. My run ended before I would have had space to visit this problem. I like how Cobalt Strike 4.11 addressed this problem with asynchronous BOFs.

BOFs have limitations as a single-capability, single-compiled-file format. If you have a larger project, with modules in different files, the BOF convention is ill-suited to it. Further, if you’re trying to use certain non-basic C/C++ language features, your BOF loader may break too. I didn’t feel constrained by these limitations, but I appreciate that they are there. I do want to share one difference of mind; this may make the limitations I accepted make more sense. I didn’t see BOFs as a replacement for DLLs or EXEs. I saw BOFs as an easier way to write position-independent code in C. Where BOFs failed to do something that absolutely required a DLL, my thinking was that a memory-injected DLL or EXE probably made more sense.

There is some neat discourse proposing solutions to these limitations. For example, boflink is a linker to combine multiple objects (and static libraries?) into a single BOF. The benefit of this tool is its output is compatible with existing BOF runners. And there’s the PE-BOF proposal to execute a PE (EXE) in memory, but use a pseudo-library to make internal agent APIs available to the program. This approach loses some of the advantages I cared about (see below), but solves a lot of problems related to language features and working with larger projects. It differentiates itself from existing PE runners by solving the problem of exposing the internal agent API to the PE. Both are interesting. And here, we have a good lesson about research and engineering features. Solving a problem is never about arriving at some universal truth of what is “meant to be”. Solving a problem is always about trade-offs. Here, I’ve made clear my context, thinking, and trade-offs. And, in both of these projects, the authors have made different trade-offs to arrive at solutions with different strengths and weaknesses. The security conversation in action!

Why I like BOFs

I still like BOFs and working with COFFs. And, later, I may argue why I believe COFF + lightweight PIC loaders are an attractive medium-future C2 development solution over position-independent code or memory-injected DLLs. But, that’s a separate discussion. For now, here’s my answer to why I like how BOFs turned out:

I like that BOFs are small and I know exactly what’s in them. Early on, I cited size, in terms of C2 channel constraints, as one of the reasons I liked BOFs’ smallness. But, I’m also a big believer that one way to reduce content detections (e.g., challenge the cult of the brittle signature) is to simply reduce content. Smaller is better. This isn’t possible only with BOFs though. As written about in Beacon Object Files vs Tiny EXE Files: with a .def file, a CRT stub, and some well-placed compiler flags—you can compile a very tiny executable file.

But, part of the magic of BOFs for me is that I don’t need the well-placed compiler flags, .def files, or other crazy recipes to get the compiler to do exactly what’s needed. With a simple -c flag, I get a .o file, and I can try to run it in something that supports it. I loathe complex build environments. I believe one of the keys to engaging hackers who aren’t Visual Studio “mal devs” is to make the build process as simple, repeatable, understandable, and turn-key as possible. BOFs satisfy this.

Another advantage BOFs offer, that I think is worth noting, is you can keep the .text section (eXecutable code) and data in disparate regions of memory. With DLLs and executables, we’re expected to keep the .text section and follow-on data in a contiguous region of memory. And, I hypothesize that breaking this assumption breaks some detection engineering. And, even where it doesn’t, splitting the capability up in memory certainly changes some assumptions about how to recover something from memory.

And finally, if the COFF is processed controller-side, the amount of code needed to finish the linking agent-side is pretty small. Loaded this way, COFF provides most of the advantages and lightness of PIC without the same level of development pain.

BOFs or COFFs?

And, I want to nerd out on one last point! It’s appropriate to refer to Beacon Object Files as Beacon Object Files. I don’t think calling what I created a COFF runner is accurate. Here’s why. The goal of a linking process is to take the intermediate compiler output (COFF) and create something ready for consumption by a loader expecting a certain convention. We give these conventions names: PE, ELF, etc. And, in the case of this lightweight linking of PIC, dynamic function resolution, the Beacon API, etc.—the things we associate with BOFs are the conventions that BOF loaders expect and act on. It’s a simple convention, but it’s still a convention. In this case, the intermediate file (COFF) processing happens within the agent and/or controller, but that processing is still there too. I say this to assert that a simple result doesn’t mean an obvious one. Sometimes, quite a bit of thought and many false steps occur before the simple solution is found.

What I’d like to see for BOFs?

One thing I’d love to see for BOFs is the return of something like Armitage’s module browser in Cobalt Strike (#FastAndEasyHacking baby). It’s not lost on me that some BOFs have many arguments and it takes a bit of reading to get up and running with them. The solution for these kinds of things is to have a common module format, à la Metasploit, with well-defined options an operator can set. From a UX point of view, I think this makes a lot of sense given the sheer number of BOFs that exist out there now. What’s old is new again?

Planting a Tradecraft Garden

Last year, I sat down to explore exception handlers and page permissions for masking payloads in memory. The POC was easy. I hit trouble building it into a position-independent DLL loader. I needed global variables and a means to package multiple resources together. Basic things were just too hard. I realized, wow, these are fundamental problems that could use a common solution. And thus, Tradecraft Garden was born.

Project overview

Tradecraft Garden is a collection of projects centered around the development of position-independent DLL loaders. They include:

The Crystal Palace, a linker and linker script language specialized to the needs of writing position-independent DLL loaders. Crystal Palace solved my page streaming project problems, by making it possible to access appended resources from my PIC via linked symbols. Crystal Palace also gave me PICOs, a BOF-like convention to run one-time or persistent COFFs from my position-independent code. The project has evolved more features since then, but the initial motivation was COFFs and easily working with multiple appended resources.

The Tradecraft Garden, a corpus of well-commented, easy-to-build, and hackable DLL loaders. Today, the focus is on demonstrating design patterns and Crystal Palace features. Over time, the garden will include more examples and commentary on tradecraft ideas best expressed at this place in an attack chain.

And, eventually… my goal is to create a video course, hosted on the site, walking through how to write Win32 evasion tradecraft, both load and runtime, into position-independent DLL loaders.

The above is a typical security research playbook for me. Pick a topic. Develop a foundation or tool to explore the topic. Explore the topic. Invite others to explore the topic with the same tool. Collect the first-hand insights and lessons learned. Publish blog posts or video materials that others can learn from.

Why I chose this project

I’ve chosen this project for a few reasons:

First, it’s interesting to me. And, I believe, it’s an area that’s still interesting to others. I explored in-memory evasions in earnest from 2017 to 2020, back when it was a new area of public work. I also published an in-memory evasion course to capture my understanding of this then-nascent area. And, the shared work between myself and my excellent then-colleagues at Fortra led to User-defined Reflective Loaders—a convention that invited others to explore this research area. This invitation brought groundbreaking research [1, 2, 3, 4] and ideas into the public commons. I didn’t do this work to “fuck defenders”, “win at offense”, etc. I did this work to respond to vogue ideas in that security conversation and advance it.

Second, I always considered any work that “enabled” other researchers to far outweigh tactical research I might do on my own. Beacon Object Files, execute-assembly, UDRLs, etc. are far more impactful—as vehicles to express and demonstrate ideas, especially when it’s a community effort—than anything I (or one team) could have done on our own. I believe, with UDRLs, there’s a lot of potential left and public exploration has stalled. It’s stalled, partially, because expressing cool tradecraft as position-independent code is a pain in the ass. With Crystal Palace, I’m making another attempt to make this work “fun”, enable researchers, and argue that decoupling tradecraft from capability is an advantageous approach. The ability to build something once and apply it to multiple capabilities, within and across toolsets, is a leverage-granting superpower. We’ll see if you agree, but this is a nudge to reboot and re-enable a public ecosystem of this kind of security research.

Third, I consider this a way to course correct market failures in the red team C2 space and re-assert my vision for what I think the field should be. I always saw offensive security as understanding what’s possible in computer hacking, contributing to that knowledge where each of us can, and using it to inform the security conversation. This is something that happens at industry-wide and within-organization scales. And, both ARE important. I did NOT see the purpose of red teaming as securing the “Win” and holding onto that knowledge and ability—for the sole sake of continuing to “Win” this engagement and next. But, that’s where the space has gone.

Riddle me this, Batman: if I’m an operator, and I have someone’s trade secrets in my hands, and I’m wielding ideas that are kept secret—even from me—how the fuck am I supposed to communicate and advise my customers on why what I did worked? How am I better enabled as a researcher and security thinker? What can I build on this? The answers are: I can’t, I’m not, and nothing. This is the by-product of a market approach that spoon-feeds red team operators, via secretive easy buttons, rather than democratizing knowledge and literacy within and across security professions.

By the way, some may whine that red teaming is fucking hard—but there’s a strategic secret sauce that made my model work. It’s called leverage. Develop technologies that give individual operators and researchers LEVERAGE to act on hypotheses and make it fast to try things, adapt, and modify. Do this right and the fear of “oh my God, vendor X saw this, and my weeks of hard work are burned” goes away. That’s THE secret.

Unfortunately, leverage isn’t just an engineering effort. It’s a discovery effort. And, the lasting leverage wins are few and far between. The open source Metasploit Framework is many leverage-granting wins for vulnerability research and exploit development. Malleable C2, as an implementation and concept, demonstrated leverage over signature-based detection of malicious network traffic. This wasn’t to fuck defenders and be unethical. It was to say: signature-based network detection is not the future of boxing adversaries in and getting security wins. This was the “drive objective and meaningful security advances” of Cobalt Strike’s mission and vision statements. BOFs were a pivot away from fork&run, created a full-fledged ecosystem of security research, and brought the lightness advantages of position-independent code with enough development pains addressed to remain a go-to convention, five years on.

When I was active (before I was shamed and harassed out of the industry), I was trying to find what would bring LEVERAGE to the in-memory evasion security conversation. Again, to advance it. Even if a stand-alone Java-coded linker, like Crystal Palace, isn’t THE solution, the search for leverage that ENABLES researcher productivity is something I consider worthwhile. And, it’s a concept that’s LOST, like a Roman concrete recipe, in the current red teaming space.

Fourth, I’m pissed at how the red teaming space has embraced dog-eat-dog, weaponized content marketing—disguised as technical material, but meant as “fuck them, we’re better” signals. As a researcher, I do NOT want to work in a professional community where secretive cliques benefit from the public effort of myself and others AND simultaneously use those public efforts as props to assert unverifiable claims about their secret awesomeness. Defensive security vendors are NOTORIOUS for doing this. Basic humility to acknowledge and maintain respect for how others informed and aided us—is a mandatory glue for having intra-profession good will and collaboration. Seeing the shit-flinging behavior bleed over into the red team community market fucking sucks. By the way. If I seem distant and not eager to suddenly drink beers and swallow SHIT again, it’s because I SEE this dynamic, I recognize its naked ugliness, and I, as a potential contributor, refuse to live THERE. I’m humbly hoping that by falling back to my roots of delivering contextualized security information, without a self-serving agenda, with the goal of serving an honest professional literacy—maybe, it’ll wake some people up that the current arrogance, shitting, and secrecy isn’t the only way forward.

And, last, but not least… I see opportunity for security research that extends beyond C2 and red teaming. I was there, at the beginning, the middle, and late into the ride, for the industry’s interest in and grappling with Windows post-exploitation techniques—a shared journey towards finding real defensive leverage on these ideas. When the “Golden Era” of Windows post-exploitation eventually closes, I believe we will see tradecraft have a market value and multiple buyers—similar to vulnerability research. I hope we can see a situation where that market develops in a way that directly serves security and empowers researchers with more ways to earn a living off of their work and insight. I believe common conventions to containerize security research—separate of specific tools and use cases—may help unlock this. BOFs are organically going in this direction. Elastic is the first vendor I’ve seen process Beacon Object Files and use them to extract signals that can find coverage gaps that exist deep in the attack chain and usually remain unexamined. Elastic is also the first vendor I’ve seen offer a bounty for novel tradecraft. My goal in containerizing evasion research into DLL loaders is to nudge towards a common convention for this security research. By the way, when research is packaged this way, and decoupled from tools and use cases—it’s not “Github malware”, salacious naughties, or dirty tricks meant to flick the security industry in the nose and invite each individual analyst to a one-on-many chess match. It’s informative ground truth. It’s participation in the conversation. And, it still enables security exercises. But, this research output also has other security uses and opportunities too. I think this is something worth advocating and driving for.

Will this effort succeed? I don’t know. But, I’m happy to advocate for these messages, say some overdue fuck you’s along the way, and make the self-righteous spoon feeders actually sell a vision—rather than letting this devolvement and squandering of the profession’s potential go unchallenged.

Next Steps

If you want an overview of the project, you can watch my demonstration video below. Otherwise, head on over to The Garden to check out the code.

Disclosures: I was a cybersecurity researcher and entrepreneur, until I stepped away from the industry in 2021. The above are my opinions. I do not speak for any projects or companies I’ve had or have affiliation with. I own stock in Fortra and SpecterOps.