The Great Disappearing Act: Making of a Possession Effect

A few weeks ago, we found ourselves crunching towards an expo deadline, prioritizing various polish items and gameplay tweaks. Perhaps our largest chunk of work centred around visual effects – there were quite a few so-called “delighters” that we wanted to add in, and we had little more than a week to put the finishing touches on our demo. But there was one effect in particular that we wanted to implement, and we started out with absolutely no idea of how to handle it – trying to animate a “possession” effect for Spirit. We wanted to give the impression that he dissolved into ghostly energy, which we could then animate on a curve to “enter” different objects. But how could we make a mesh appear as if it was disintegrating into energy, or gradually breaking apart into the aether?

Our artist, Josh, pulled up a few effects from different games that were similar to what we envisioned, giving us a point of reference for what we wanted to achieve:


Top: Simple but functional transformation of Mario into coloured particles in Super Mario Sunshine (source), Bottom: Beautiful and envy-inspiring dematerialization of Link in The Legend of Zelda: Breath of the Wild (source).

A bit more digging online revealed that the effect we were looking for was probably based on a dissolve shader, which we could combine with a particle system to create that suave torn-to-pieces-by-supernatural-forces look. The particle system would be the easy part, tech-wise, and so our big challenge was tackling the dissolve shader.

We wanted something that was flashy, customizable, and portable, so that we could use it on different objects – a custom Unity surface shader with support for fancy materialization and dematerialization effects. The finished product will let us create something like this:


Here’s a quick breakdown of the steps we’ll take to create this effect:

  • Use a grayscale noise texture to fade mesh alpha based on an interpolation factor.
  • Use model-space fragment position to control dissolution based on a specified direction vector.
  • Combine texture- and geometry-based alpha/clipping control to create a hybrid dissolve effect.
  • Add in a glow effect by “predicting” the next areas to dissolve and adjusting model emission accordingly.

The great thing about these features is that they can be easily configured to work in tandem with one another, without interfering with any other shader features you might want to support, such as specular/normal/emission mapping, rimlighting, and so on. For simplicity’s sake, let’s assume we’re starting with Unity’s standard surface shader template, and a humble cube destined for greatness. The first thing you’ll want to do, assuming you want to support a gradual alpha fade, is adjust your shader tags and #pragma declaration accordingly:

    "Queue" = "Transparent"
#pragma surface surf Standard /*...any additional features you want...*/ alpha:fade

If you’d prefer something that eats away at the mesh without fading the alpha gradually, feel free to skip this step. In that case, rather than setting the output alpha value directly (as I’ll be doing here), use clip() in your fragment routine to cull any dissolved fragments.
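As a sketch, the clip()-based variant would look something like this, using the remapped noise value (dTexRead) computed in the next section:

```
//Hard-edged dissolve: cull fragments outright instead of blending them.
//clip() discards the fragment whenever its argument is negative.
clip(dTexRead);
o.Alpha = 1.0f; //Remaining fragments stay fully opaque.
```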

The first item on the agenda is to control our dissolution based on a noise texture. This will let us create different effects reminiscent of burning, cracking, slicing, and so on. Here, I’ve used Photoshop’s clouds and difference clouds filters to create some high-contrast Perlin-type noise:


For our object to fade away based on this pattern, we’ll just add it as a texture map to our shader, along with a floating-point parameter, _DissolveScale, on the range [0, 1] to control the progression of the effect. For convenience, I’ve set zero to mean “fully intact” and one to mean “completely dematerialized”, so that as we move the slider from left to right in the Inspector, the object will gradually disappear.
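In the shader, this amounts to two new entries in the Properties block (matching the final property list at the end of this post), plus the corresponding variable declarations:

```
//Properties block entries.
_DissolveTex("Dissolve Texture", 2D) = "white" {}
_DissolveScale("Dissolve Progression", Range(0.0, 1.0)) = 0.0

//Matching declarations inside CGPROGRAM.
sampler2D _DissolveTex;
half _DissolveScale;
```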

If we think of the texture as a map to control our object’s dissolution, we want areas of different values (light/dark) to dissolve at different times. Let’s say that we want the black/dark parts of our texture to dissolve first, giving the appearance that the mesh cracks into pieces which then fade away. To accomplish this, for each fragment, we’ll add the luminance value of the dissolution map to our interpolation factor and use the result as our output alpha value:

//Remap dissolve progression from [0, 1] to [1, -1].
half dBase = -2.0f * _DissolveScale + 1.0f;

//Read from noise texture.
fixed4 dTex = tex2D(_DissolveTex, IN.uv_MainTex);
//Convert dissolve texture sample based on dissolve progression.
half dTexRead = dTex.r + dBase;

//Set output alpha value.
half alpha = clamp(dTexRead, 0.0f, 1.0f);
o.Alpha = alpha;

Note that we’ve converted the interpolation factor to the space of [-1, 1] for this operation – don’t worry if this doesn’t make sense at first. All we’ve done is effectively ensure that our global alpha value will be 1 (fully opaque) at the very start of the effect, and 0 (fully transparent) at the very end. (If you happen to be unfamiliar with this sort of operation, it’s a little trick commonly called range remapping or range conversion, and it’s useful for all sorts of things).
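As a quick aside, here’s the general form of that trick as a small helper function – remap is a hypothetical name for illustration, not a built-in:

```
//Hypothetical helper: remap value from [inMin, inMax] to [outMin, outMax].
inline half remap(half value, half inMin, half inMax, half outMin, half outMax)
{
    return outMin + (value - inMin) * (outMax - outMin) / (inMax - inMin);
}

//dBase above is just _DissolveScale remapped from [0, 1] to [1, -1]:
//half dBase = remap(_DissolveScale, 0.0f, 1.0f, 1.0f, -1.0f);
```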

We’re left with an effect that looks like this – not too shabby for a single noise texture and a few lines of code:


The next order of business is controlling this effect based on geometry – what if we want to dissolve the object from top to bottom, for example? There are two straightforward ways to accomplish this. If you’re looking to create a particularly complicated progression (dissolving a character’s hands, bow tie, and eyes before the rest of them, for example), you might just want to create your texture with this in mind, using your object’s UVs as a guide and hand-painting a dissolve texture to your liking (remember, with the code above, darker dissolves first).

A more interesting challenge is to control the effect based on a direction vector. You’ll need three new parameters for this:

  • A Vector for the starting point of the effect in model space.
  • A Vector for the ending point of the effect in model space.
  • A floating-point control representing the width of the “gradient” or “edge” along which the object is dissolving – I call this the “band size”.
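These correspond to three new shader properties, shown here exactly as they appear in the final property list:

```
_DissolveStart("Dissolve Start Point", Vector) = (1, 1, 1, 1)
_DissolveEnd("Dissolve End Point", Vector) = (0, 0, 0, 1)
_DissolveBand("Dissolve Band Size", Float) = 0.25
```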

You can visualize the effect as a gradient sweeping across the object, controlling the alpha and “wiping” the mesh from your starting point to your endpoint as it vanishes. Achieving this is pretty simple, but you’ll first want to add a vertex routine to your shader program, since you’ll be needing some geometry data that isn’t carried through to the fragment function by default. Outside of any of our shader functions, we’ll calculate a few global values based on our new parameters:

//Precompute dissolve direction.
static float3 dDir = normalize(_DissolveEnd - _DissolveStart);

//Precompute gradient start position.
static float3 dissolveStartConverted = _DissolveStart - _DissolveBand * dDir;

//Precompute reciprocal of band size.
static float dBandFactor = 1.0f / _DissolveBand;

Then, we’ll write our vertex routine to calculate an “alpha value” for the current vertex based on the effect progression. Note that we’ve modified the shader’s fragment Input struct to have an additional parameter for this (dGeometry) – we’ll let Unity handle the interpolation for each individual fragment to help reduce artifacts. Here’s what the complete calculation looks like:

//Don't forget to specify your vertex routine.
#pragma surface surf Standard /*...your other #pragma tags...*/ vertex:vert
void vert (inout appdata_full v, out Input o)
{
    //Initialize the output struct to avoid uninitialized-variable errors.
    UNITY_INITIALIZE_OUTPUT(Input, o);

    //Calculate geometry-based dissolve coefficient.
    //Compute top of dissolution gradient according to dissolve progression.
    float3 dPoint = lerp(dissolveStartConverted, _DissolveEnd, _DissolveScale);

    //Project vector between current vertex and top of gradient onto dissolve direction.
    //Scale coefficient by band (gradient) size.
    o.dGeometry = dot(v.vertex - dPoint, dDir) * dBandFactor;
}
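For reference, the modified Input struct carrying dGeometry might look like this – a sketch assuming uv_MainTex is the only other field in use:

```
struct Input
{
    float2 uv_MainTex;
    //Geometry-based dissolve coefficient, interpolated per-fragment.
    half dGeometry;
};
```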

Then, in our fragment shader, we simply use the interpolated dGeometry value (clamped to the range of [0, 1]) to set our alpha and we’re left with an effect that progresses like this:
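In code, the geometry-only version of that fragment step is just:

```
//Use the interpolated geometry coefficient directly as the output alpha.
half alpha = clamp(IN.dGeometry, 0.0f, 1.0f);
o.Alpha = alpha;
```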


Combining this with our texture-based dissolve to create a hybrid effect is dead simple – just add the raw value of dGeometry to the luminance of the noise texture, clamp to [0, 1] as per usual, and use that as your alpha value:

//Combine texture factor with geometry coefficient from vertex routine.
half dFinal = dTexRead + IN.dGeometry;

//Clamp and set alpha.
half alpha = clamp(dFinal, 0.0f, 1.0f);
o.Alpha = alpha;


Our last task is adding in some emissivity, so that the edges of pieces about to dissolve can glow before fading away. There are quite a few ways to handle this, and the one that works best will depend on the approach you’ve taken so far. You can use offset versions of the interpolation parameter to calculate the glow strength, you can shift a “band” of emission down your mesh as it fades away, you can apply thresholding logic to your final alpha value to have a fragment “emit” at low values before clipping itself from view, and so on.

For our purposes here, I’ve chosen an approach which supports the “hybrid” texture/geometry dissolve fairly intuitively, by defining the size of the glow region in accordance with the “band size” specified for the rest of the effect. I use this factor to offset the alpha value calculated previously, using this shifted value to control the glow strength. I’ve also included a couple of additional parameters which control the sharpness of the glow’s edge (an intensity multiplier) and create a gradient to calculate the glow’s colour (start/end colours, and a parameter to shift the boundary between them):

//Shift the computed raw alpha value based on the scale factor of the glow.
//Scale the shifted value based on effect intensity.
half dPredict = (_GlowScale - dFinal) * _GlowIntensity;
//Change colour interpolation by adding in another factor controlling the gradient.
half dPredictCol = (_GlowScale * _GlowColFac - dFinal) * _GlowIntensity;

//Calculate and clamp glow colour.
fixed4 glowCol = dPredict * lerp(_Glow, _GlowEnd, clamp(dPredictCol, 0.0f, 1.0f));
glowCol = clamp(glowCol, 0.0f, 1.0f);

By outputting the computed colour as the emissive colour of the fragment (o.Emission), the mesh will now glow in anticipation of its disappearance. (In the following examples, the albedo tint is adjusted according to the glow factor to boost the colour even more.) You can play with different noise textures, glow colours, and effect parameters to create quite a few different dematerialization effects:
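Writing the glow out is then a one-liner – here, the glow is the only emissive source; if you’re also using an emission map, simply add the two together:

```
//Drive fragment emission with the computed glow colour.
o.Emission = glowCol.rgb;
```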


Top: “Magma” effect using Perlin-type noise, high-intensity red-yellow glow, and top-to-bottom effect direction. Middle: “Boules” effect using pin-light radial gradients, purple glow, and bottom-to-top effect direction. Bottom: “Glitch” effect using offset barcode pattern, cyan-green glow, and corner-to-corner effect direction.

For reference, here’s the final property list and Inspector panel for the shader used to create the above effects:

    _Color ("Color", Color) = (1,1,1,1)
    _MainTex ("Albedo (RGB)", 2D) = "white" {}
    _DissolveScale ("Dissolve Progression", Range(0.0, 1.0)) = 0.0
    _DissolveTex("Dissolve Texture", 2D) = "white" {}
    _GlowIntensity("Glow Intensity", Range(0.0, 5.0)) = 0.05
    _GlowScale("Glow Size", Range(0.0, 5.0)) = 1.0
    _Glow("Glow Color", Color) = (1, 1, 1, 1)
    _GlowEnd("Glow End Color", Color) = (1, 1, 1, 1)
    _GlowColFac("Glow Colorshift", Range(0.01, 2.0)) = 0.75
    _DissolveStart("Dissolve Start Point", Vector) = (1, 1, 1, 1)
    _DissolveEnd("Dissolve End Point", Vector) = (0, 0, 0, 1)
    _DissolveBand("Dissolve Band Size", Float) = 0.25


Finally, here’s a look at the effect in action on our little poltergeist fellow, synchronized with a particle system which we’ve animated on a curve to give that extra little bit of spooky panache:


And voilà, now we’ve created a nice, customizable shader perfect for teleportation, burning, dissolving, or any other bit of dematerialization magic.

Shading Spirit

Over the past few weeks, our artist has been fleshing out the details of our final character model and starting on animations. And so, the time had come – no more placeholder shaders for the little guy. Time to sit down and take a crack at a custom surface shader for our poltergeist friend, and we already had a few key features in mind. Since the beginning, we’d envisioned something similar to the ghosts from Luigi’s Mansion: Dark Moon:

In particular, take a look at the little green guy – he was a big inspiration for Spirit’s character design and shows off some of what we’d like to achieve with our visual effects.

Let’s break down the visual features of the model:

  • Base colour (green)
  • Glowing eyes/mouth
  • Surface detail (bumps/pores)
  • Edge/rim lighting (white/green)
  • Exterior glow (green halo)

Additionally, we wanted Spirit to have adjustable partial alpha, so that he’d appear semi-transparent, for maximum spookiness. Most of what we want to accomplish (aside from the exterior halo, which we’ll add in post-processing) can be done with a standard surface shader in Unity. Here’s a list of the components we’ll need to integrate for each feature:

  • Depth pre-pass and alpha intensity
  • Albedo map and tint
  • Emission map
  • Normal map, intensity, and smoothness
  • Rimlighting map, intensity, and tint

And here are the texture maps we’ll be using to achieve the final effect:


Unity’s built-in CG features make writing this shader pretty easy if you know which tools to use – for our final effect, we started from the standard surface shader template, which already includes our albedo map, base tint, and smoothness:


The albedo is there, but this hardly looks like a ghost – more like a plastic toy. Let’s add a bit of texture first with our normal map. Shader veterans will be happy to hear that Unity will do all of the tangent-space conversions for us, provided you’ve imported your texture with the “Normal Map” texture type selected. All you need to do is call the UnpackNormal function. If you’d like to adjust the intensity of your normal map, just employ one of the worst-kept secrets in computer graphics – multiply the result of your normal map read by a colour with your desired intensity factor plugged into the red and green channels, while leaving the blue channel at 1:

o.Normal = fixed3(_BumpIntensity, _BumpIntensity, 1.0)
    * UnpackNormal(tex2D(_BumpMap, IN.uv_BumpMap));

So here’s what Spirit looks like with some detail, because real ghosts have pores (slightly enhanced for demonstration):


The material texture is closer to correct now, but he still looks like a regular plastic object without any glow. Let’s start by adding our emission map, which will make his eyes glow, by simply setting the Emission property of our output structure to read from our emissive texture:
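In code, this step is as simple as it sounds, sampling the _EmissiveTex map from our property list:

```
//Sample the emission map and write it to the emissive output.
o.Emission = tex2D(_EmissiveTex, IN.uv_EmissiveTex).rgb;
```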


While it’s a little too satanic for our purposes, we’re starting to see a promising glow. Unfortunately, combined with our full-force smooth albedo – which happens to be a bright base colour – the result is less “mischievous ghost” and more “irradiated cyclops”. Let’s fix this by toning down our albedo map with a darker tint colour and letting most of Spirit’s apparent colour come from our rimlight map, a toned-down modification of our base colour map. Rimlighting works by comparing the viewing angle (i.e., the camera’s view direction vector) with the surface of the object (i.e., our final surface normal). We want the edges of the object to glow, meaning that the glow should be maximized where the two vectors are perpendicular. Therefore, we’ll take the dot product, clamp it, and subtract the result from 1 to get our base rimlight intensity, which we can then modify using a custom intensity variable, a tint colour, and our rimlight map. For our purposes, we’ll add the resulting colour to our emissive output:

half rimTerm = 1.0 - saturate(dot(normalize(IN.viewDir), o.Normal));
o.Emission = tex2D(_EmissiveTex, IN.uv_EmissiveTex)
+ _RimColor * tex2D(_RimTex, IN.uv_RimTex)
* smoothstep(0.0, 1.0, rimTerm * _RimIntensity);

After tweaking the colours to our liking, we’ve got something like this:


Finally, that looks quite a bit more like what we’re going for. Now, for one last feature – our partial alpha. The tricky part here is getting the depth buffer to behave properly. Here’s what happens if we add an alpha slider and flag the shader as transparent using tags:


Ouch. Not what we want at all – we want to be able to see the background through our little guy, but not his disembodied limbs – note the horrible clipping effect that’s happening as well. Resolving this is surprisingly easy – we add a depth pre-pass that writes to the depth buffer with the colour output masked off, ensuring that the final render only shades the parts of the surface closest to the camera and disregards all that back geometry. Here’s the code for our pre-pass, which is painfully short:

//Depth-only pre-pass: fill the depth buffer without writing any colour.
Pass
{
    ZWrite On
    ColorMask 0
}

//Set up our main pass.
Cull Back
ZWrite On
Blend SrcAlpha OneMinusSrcAlpha
//CGPROGRAM begins here...

Now let’s have a look at the little guy with some stuff behind him:


There we go! While we’ve got some texturing and post-processing tweaks we can make to improve the effect, there’s our surface shader, now with 100% fewer disembodied limbs. For reference, here’s our final list of properties in the surface shader, and the adjustment panel:

_Color ("Color", Color) = (1,1,1,1)
_Alpha ("Base Alpha", Range(0,1)) = 1.0
_MainTex ("Albedo (RGB)", 2D) = "white" {}
_Glossiness ("Smoothness", Range(0,1)) = 0.5
_BumpMap("Normal Map", 2D) = "bump" {}
_BumpIntensity("Normal Intensity", Float) = 1.0
_EmissiveTex("Emission Map", 2D) = "black" {}
_RimColor("Rimlight Color", Color) = (1,1,1,1)
_RimTex("Rimlight Texture", 2D) = "white" {}
_RimIntensity("Rimlight Intensity", Range(0.0, 2.0)) = 0.0


Finally, here’s a family portrait with our old model on the left, with standard shading, and our new and improved shaded model on the right:


And there we have it – our little friend is ready to wreak havoc in style. We’ll be back soon with more updates on Spirit!

Spirit Development Update: July 2017

Now that we’re rounding out our third official month of development, we’d like to take some time to review our progress and share it with the community. It’s been a busy season for us, with lots of business meetings and new opportunities. We’re excited to announce that we are working with Northumberland CDFC to help fund our development efforts and that we will be participating in the UOIT business incubator throughout the year!

Regarding Spirit, we’ve nearly finished the game’s core prototype functionality, having implemented alpha versions of our core mechanics, user interface, input system, save system, and application management. While we’ve got a long way to go in refining our gameplay and fleshing out our level design, it’s been quite rewarding to see the first few pieces come together. In this post, we’ll reflect on everything we’ve done so far, and how we plan to build on our existing foundation for Spirit in the coming months.


Navigation & Camera

Naturally, our first step in development after setting up our basic input & state management (see below) was the integration of basic player navigation. Our case is a bit unusual in this regard, since players will be controlling a lot of different objects, so a one-size-fits-all solution just doesn’t work for us. We’ve set up a system that lets us define movement controllers for different objects with varying degrees of deviation from standard rigidbody-based or character controller-based motion, which we will expand as we add new player-controlled objects, integrate animations, and improve the feel of our character movement.


We’ve also set up a basic camera system allowing for locked and free-form camera controls, which supports a couple of different modes of operation depending on the object the player is controlling. It’s currently quite similar to the camera in our initial prototype, with some improvements to interpolation and adjustment behaviour. We’ve also developed a simple cutscene system built from our path editor utility, which has allowed us to start thinking about cinematic aspects of the game. Our next goals with the camera will be the integration of some basic physics and location-based constraints to improve gameplay feel and make it easier for the camera to adapt to different level geometries.


Possession & Summoning

Possession is our core mechanic, and so it will be something that is in a constant state of expansion, refinement, and testing throughout the development process. Right now, we have a few different objects in our prototype for players to control (a large, bouncy ball, a marble, and a paper airplane), in addition to Spirit himself. Objects control quite differently depending on their physical properties – shape, size, weight, air resistance, and so on. We’re using three primary controllers at the moment for our current set of objects – a character controller-based model for Spirit, a controller we’ve designed specifically for flight, and a controller for rolling objects (the latter two are both heavily physics-based).


Additionally, we’re integrating our post-processing system with possession to give each object a unique aesthetic when players are “inside” the object, which we’ll be expanding on in later updates. Right now, we’re experimenting with a few different effects related to image warping and colour distortion. No psychedelics have been involved in the process, we promise.

Rescue & Collection

Players’ primary objective in Spirit is to rescue their ghostly cohorts from a devious team of paranormal exterminators, who’ve trapped them for later disposal. Spirit, who managed to evade the exterminators’ dastardly traps, will not stand by and allow innocent poltergeists to suffer in captivity. After all, a little innocent haunting never hurt anyone!

We’re going to be designing a number of different friends for Spirit to rescue, which our artist Josh will be bringing to life shortly (we’re acquiring supplies for the ritual). In the meantime, we’ve integrated the mechanic with a host of adorable magenta ghost clones, which aren’t terrifying at all, thanks to their giant yellow eyes. Ever watching. Staring. Judging.


Players can also collect little bits of concept art, tutorial images, and assorted bits of photographs and the like throughout their adventure, which they’ll be able to visit in a little journal menu, hosting their collection and detailing their current objectives.

Interaction & Dialogue

Once we’ve fleshed out our story and side characters a bit more, we’ll be integrating a fair bit of narrative, sassy conversation, and general tomfoolery to complement the game world. Right now, we’ve prototyped our system for interaction and conversation with a couple of talkative books. We’ve integrated this system with our collection mechanic, so that players can “take” things from NPCs via interaction, and we’ll be adding some basic fetch quests, riddles, and so on in the future.



A new feature we’ve been working on is an ability/talent tree similar to what you might find in an RPG perk system (though far less complicated!) or a game like Ori and the Blind Forest. At the moment, we’ve just finished implementing a skeleton for defining abilities, acquiring perk points, and spending those points to acquire and use abilities. We’ll be working on designing and implementing unique talents over the next few months.


Application Management & State Saving

I like to have the application back-end up and running before taking on almost anything else, so that we can switch between scenes and deal with global GameObjects effectively. This helps us avoid snafus like being unable to test state transitions or getting tangled up with persistent objects. Thus, our app manager was one of the first things we worked on, and we’ve expanded it steadily to accommodate new features as necessary.

We’ve also been extending our save system to support better player data management, improved file handling, and the ability and collection mechanics.


Input was another of our initial areas of focus, as we wanted to develop a custom wrapper for Unity’s input system that allows us to query input based on actions defined outside of Unity’s input manager. We did this so that we can build a system for players to rebind their inputs effectively in-game, rather than having to rely on the Unity launcher. Furthermore, this leaves us the flexibility to import custom input plugins if we want to integrate support for different controllers or improved input polling in the future.

User Interface

Our UI is largely prototypical for now, with many placeholder assets and sprites taken from our older iterations. However, we’ve fleshed out the functionality of the HUD, menus, and hubworld, and we’ve built a solid foundation for our redesign of UI elements, which we’ll be working on soon. We’ve also spent some time wrestling with Unity’s default UI navigation, to ensure the best experience for players using a gamepad.


Animation & Sound

Our path editor has been serving us well, and we plan to use it as a tool to help animate obstacles, characters, and visual effects once we’ve finalized our level designs. We’ll have the all-new Spirit character model and animations within the next few weeks, but for now, we’ve integrated our old animations into Unity’s animation system, with a small bit of customization built on top for our gameplay needs. We’ll be extending this system as we continue to refine our character movement and generate new designs. For now, little old Spirit still looks pretty adorable, though.

We’ve also integrated some of our old sound effects and music, and we’re working on balancing and extending our simple sound system to better handle transitions and the overlay of multiple effects. We’ll also be working with some brand-new editing software and digital instruments soon, so stay tuned for music updates!


What’s Next

Our next major priority is revamping Spirit’s model and animations, and updating our navigation code to ensure a great platforming experience for players. From there, we’ll be refining our core mechanics and working on level design and asset creation, before drilling down into our puzzle design and adding depth to the game. We’re having a great time working on Spirit and we really hope that you’ll enjoy it when the time comes!