In this project I explore Blueprint-based approaches to custom post-processing in Unreal Engine, and how its Post Process Materials let developers inject custom logic into the rendering pipeline by leveraging existing buffers (SceneColor, SceneDepth, Velocity, etc.). I'll start with a simple trails effect and gradually extend it with more advanced features: displacement driven by various render buffers, datamosh and glitches, and feedback loops that use Render Targets to store the previous frame's image and feed it back into the current frame.
The end goal is to produce a module or pack of tools that can be comfortably and efficiently reused in different Unreal Engine scenes. This research aims to investigate existing methods and experiment with new approaches.
The video feedback effect—where a video camera captures its own output displayed on a monitor—has evolved from an unintended technical glitch to a significant artistic and cultural phenomenon.
Discovered shortly after the invention of the first videotape recorder by Charles Ginsburg's team in 1956, video feedback was initially seen as a problem. Technicians avoided it because pointing a camera at its own monitor could overload early video pickup tubes, potentially damaging equipment and causing screen burn-in.
In the 1960s, artists began to explore video feedback's creative potential. By the 1970s and 1980s, video feedback had found its way into mainstream media. Notably, the opening title sequence of the British TV series Doctor Who (1963–1973) used a technique known as "howl-around," a form of video feedback. Music videos such as Queen's "Bohemian Rhapsody" (1975) and Earth, Wind & Fire's "September" (1978) also employed feedback effects to create visually captivating imagery.
A post-process shader that creates a frame-blending effect by simulating a video-editing technique in which footage is deliberately manipulated to look "glitchy," usually by tampering with its compression.
That plugin is built with low-level C++ pipeline hooks. I'm going to explore the boundaries of what is allowed and how deep I can get into the engine with Blueprints alone, to make the approach accessible even to non-programmers. Looking ahead, I can already say that Blueprints turn out to be enough!
- M_PP_Feedback – A material that blends the last frame with the current one, manipulating pixels with different G-buffers.
- AC_CameraCapture – The main component for the pawn; it dynamically creates and renders RenderTargets while adjusting the material of the post-process volume.
- SceneCapture_Feedback – An adjusted SceneCapture2D component.
- PostProcess Volume with an empty slot for the M_PP_Feedback material in the pipeline.
- SceneCapture2D as a child of the main camera, precisely aligned with its parent.
- Cache Size – The number of RenderTargets to be created.
- Cache Offset – Defines the delay in frames. Useful for cached feedback.
- Highest Dimension – Sets a resolution limit to avoid excessive processing of 2K textures. By default, the component creates textures scaled down to 1280, or full-size if set to 0.
- Copy to Render Target (optional) – Assign a RenderTarget if you want to use it as a model texture somewhere.
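Roughly, the component's logic with these settings could be sketched in C++ like this. The actual implementation is pure Blueprint; the function names, the `LastFrame` material parameter, and the exact scaling/ring-buffer math below are illustrative assumptions:

```cpp
// Illustrative sketch only — the real AC_CameraCapture is a Blueprint component.
#include "CoreMinimal.h"
#include "Kismet/KismetRenderingLibrary.h"
#include "Components/SceneCaptureComponent2D.h"
#include "Engine/TextureRenderTarget2D.h"
#include "Materials/MaterialInstanceDynamic.h"

// Allocate the RenderTarget cache, scaled so the longest side is HighestDimension (0 = full size).
void CreateFeedbackCache(UObject* WorldContext, FIntPoint ViewportSize,
                         int32 CacheSize, int32 HighestDimension,
                         TArray<UTextureRenderTarget2D*>& OutCache)
{
    float Scale = 1.f;
    if (HighestDimension > 0)
    {
        const int32 Longest = FMath::Max(ViewportSize.X, ViewportSize.Y);
        Scale = FMath::Min(1.f, (float)HighestDimension / (float)Longest);
    }
    const int32 W = FMath::Max(1, FMath::RoundToInt(ViewportSize.X * Scale));
    const int32 H = FMath::Max(1, FMath::RoundToInt(ViewportSize.Y * Scale));

    OutCache.Reset();
    for (int32 i = 0; i < CacheSize; ++i)
    {
        OutCache.Add(UKismetRenderingLibrary::CreateRenderTarget2D(WorldContext, W, H, RTF_RGBA16f));
    }
}

// Every frame: capture into the "write" target and feed a delayed target back to the material.
void TickFeedbackCache(USceneCaptureComponent2D* Capture, UMaterialInstanceDynamic* FeedbackMID,
                       const TArray<UTextureRenderTarget2D*>& Cache, int32 CacheOffset, int32& FrameIndex)
{
    const int32 Offset     = FMath::Clamp(CacheOffset, 0, Cache.Num() - 1);
    const int32 WriteIndex = FrameIndex % Cache.Num();
    const int32 ReadIndex  = (WriteIndex + Cache.Num() - Offset) % Cache.Num();

    Capture->TextureTarget = Cache[WriteIndex];
    Capture->CaptureScene();  // manual capture; automatic capture stays disabled

    // "LastFrame" is a hypothetical texture parameter inside M_PP_Feedback.
    FeedbackMID->SetTextureParameterValue(TEXT("LastFrame"), Cache[ReadIndex]);
    ++FrameIndex;
}
```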
Since all operations are handled manually, any automatic capture actions should be disabled. It's not immediately obvious, but "Persist State" (Always Persist Rendering State) is not enabled by default on a SceneCapture—unlike the standard camera path—so it has to be switched on explicitly. I'll experiment with the other "Final Color" capture sources in the future.
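In C++ terms, the relevant SceneCapture_Feedback settings would look roughly like this (a sketch of the flags set on the component in the editor, not code shipped with the project; the function name is hypothetical):

```cpp
#include "Components/SceneCaptureComponent2D.h"

// Assumed configuration of SceneCapture_Feedback: everything automatic is off,
// captures are triggered manually by AC_CameraCapture.
void ConfigureFeedbackCapture(USceneCaptureComponent2D* Capture)
{
    Capture->bCaptureEveryFrame = false;            // no automatic per-frame capture
    Capture->bCaptureOnMovement = false;            // no automatic capture when the component moves
    Capture->bAlwaysPersistRenderingState = true;   // "Persist State": keeps TAA/velocity history between manual captures
    Capture->CaptureSource = ESceneCaptureSource::SCS_FinalColorLDR;  // one of the "Final Color" sources
}
```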
- Effect Power – Interpolation between the current frame and the last processed frame.
- UV Roundness – Controls how pixelated the sampling appears.
- Velocity Impact – A multiplier for displacement based on the Velocity G-Buffer.
- Velocity Noise Amp – Noise factor added to, or multiplied by, the velocity.
- Depth Impact – Applies Smoothstep(From, To) to the depth measure and interpolates the resulting value with Lerp(Min, Max).
- Noise – Controls the amplitude and scale of the noise; Speed multiplies Time to animate the noise in 3D space.
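For reference, here is a rough C++-style sketch of how I assume these parameters combine per pixel. The real logic lives in the material graph, so the exact composition may differ; `FeedbackPixel`, `SampleLastFrame`, and the snapping/offset math are illustrative:

```cpp
#include "CoreMinimal.h"

// Hypothetical per-pixel sketch; SampleLastFrame stands in for reading the cached RenderTarget.
FLinearColor FeedbackPixel(
    const FVector2D& UV, const FLinearColor& SceneColor, const FVector2D& Velocity,
    float SceneDepth, const FVector2D& NoiseOffset,
    float EffectPower, float UVRoundness, float VelocityImpact, float VelocityNoiseAmp,
    float DepthFrom, float DepthTo, float DepthMin, float DepthMax,
    TFunctionRef<FLinearColor(const FVector2D&)> SampleLastFrame)
{
    // UV Roundness: snap UVs to a grid for visibly pixelated sampling (assumed implementation).
    const float Grid = (UVRoundness > 0.f) ? 1.f / UVRoundness : 0.f;
    const FVector2D SnappedUV(FMath::GridSnap(UV.X, Grid), FMath::GridSnap(UV.Y, Grid));

    // Velocity Impact / Velocity Noise Amp: displace the feedback lookup.
    const FVector2D Offset = Velocity * VelocityImpact + NoiseOffset * VelocityNoiseAmp;

    // Depth Impact: Smoothstep(From, To) on depth, then Lerp(Min, Max) for the blend weight.
    const float DepthWeight = FMath::Lerp(DepthMin, DepthMax,
                                          FMath::SmoothStep(DepthFrom, DepthTo, SceneDepth));

    // Effect Power: blend the displaced last frame over the current frame.
    const float Alpha = FMath::Clamp(EffectPower * DepthWeight, 0.f, 1.f);
    return SceneColor * (1.f - Alpha) + SampleLastFrame(SnappedUV + Offset) * Alpha;
}
```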
Since Post Process Materials must be compiled before runtime, this material should be copied and customized for specific use cases — such as utilizing a (blurred) CustomStencil buffer, or positioning relative to the player's camera. You'll find useful Named Reroutes inside, and you can freely adjust the node noodles to achieve the desired effect.
Use the mouse wheel with the Ctrl and Shift modifiers to tweak parameters in the example.
Migrate ThirdPerson, Characters, StarterContent, and LevelPrototyping from another project, or use symlinks. Keep them immutable!
Using Custom Stencil, we can achieve different effects for the character, as sketched below.
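For this to work, the character's meshes have to write into the Custom Depth/Stencil buffer. In the project this is just a couple of flags on the mesh (and the project setting "Custom Depth-Stencil Pass" set to "Enabled with Stencil", i.e. r.CustomDepth=3); a minimal C++ equivalent, with a hypothetical helper name and an arbitrary stencil value, might look like this:

```cpp
#include "Components/PrimitiveComponent.h"
#include "GameFramework/Character.h"

// Make a character's primitives write into Custom Depth/Stencil so the
// post-process material can mask the effect per object.
void EnableStencilMask(ACharacter* Character, int32 StencilValue)
{
    TArray<UPrimitiveComponent*> Primitives;
    Character->GetComponents<UPrimitiveComponent>(Primitives);
    for (UPrimitiveComponent* Prim : Primitives)
    {
        Prim->SetRenderCustomDepth(true);               // write into the Custom Depth pass
        Prim->SetCustomDepthStencilValue(StencilValue); // tag pixels for the material to read
    }
}
```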
Predator:
Emitting smoke:
It can look more saturated with objects in the background of the scene:

And also live, constantly evolving glitches:

G-buffer textures aren't exposed directly by the Unreal render pipeline. The only way to access them is to call Capture() on a SceneCapture, which essentially triggers another full render and introduces unnecessary overhead.
I found that CineCaptureComponent2D, from a still-experimental built-in plugin, avoids its own additional render, but it has limitations and overcomplicates tweaking some physical CineCamera properties, so more experiments are needed before it is practically usable.
The velocity buffer isn't linear as expected—it seems like the vectors are calculated from the lower-left corner of the screen, which makes interpretation tricky. To be honest, I noticed that this problem remains unresolved in most plugins.
The alpha channel isn't available in post-processing, even though it would be useful for feeding a mask back through one of the channels.