Programming

The Great Disappearing Act: Making of a Possession Effect

A few weeks ago, we found ourselves crunching towards an expo deadline, prioritizing various polish items and gameplay tweaks. Perhaps our largest chunk of work centred around visual effects – there were quite a few so-called “delighters” that we wanted to add in, and we had little more than a week to put the finishing touches on our demo. But there was one effect in particular that we wanted to implement, even though we started out with absolutely no idea of how to handle it – animating a “possession” effect for Spirit. We wanted to give the impression that he dissolved into ghostly energy, which we could then animate on a curve to “enter” different objects. But how could we make a mesh appear as if it was disintegrating into energy, or gradually breaking apart into the aether?

Our artist, Josh, pulled up a few effects from different games that were similar to what we envisioned, giving us a point of reference for what we wanted to achieve:

DissolveExampleComposite.png

Top: Simple but functional transformation of Mario into coloured particles in Super Mario Sunshine (source), Bottom: Beautiful and envy-inspiring dematerialization of Link in The Legend of Zelda: Breath of the Wild (source).

A bit more digging online revealed that the effect we were looking for was probably based on a dissolve shader, which we could combine with a particle system to create that suave torn-to-pieces-by-supernatural-forces look. The particle system would be the easy part, tech-wise, and so our big challenge was tackling the dissolve shader.

We wanted something that was flashy, customizable, and portable, so that we could use it on different objects – a custom Unity surface shader with support for fancy materialization and dematerialization effects. The finished product will let us create something like this:

DemoGIF.gif

Here’s a quick breakdown of the steps we’ll take to create this effect:

  • Use a grayscale noise texture to fade mesh alpha based on an interpolation factor.
  • Use model-space fragment position to control dissolution based on a specified direction vector.
  • Combine texture- and geometry-based alpha/clipping control to create a hybrid dissolve effect.
  • Add in a glow effect by “predicting” the next areas to dissolve and adjusting model emission accordingly.

The great thing about these features is that they can be easily configured to work in tandem with one another, without interfering with any other shader features you might want to support, such as specular/normal/emission mapping, rimlighting, and so on. For simplicity’s sake, let’s assume we’re starting with Unity’s standard surface shader template, and a humble cube destined for greatness. The first thing you’ll want to do, assuming you want to support a gradual alpha fade, is adjust your shader tags and #pragma declaration accordingly:

Tags
{
    "Queue" = "Transparent"
    "RenderType" = "Transparent"
}
//...//
#pragma surface surf Standard /*...any additional features you want...*/ alpha:fade

If you’d prefer something that eats away at the mesh without fading the alpha gradually, feel free to skip this step. However, when writing to the output of your fragment routine, just make sure to use clip() to cull any dissolved fragments, rather than setting the output alpha value directly (as I’ll be doing here).
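In that case, the end of the fragment routine boils down to a one-liner along these lines (a sketch only – alpha here is the dissolve value we compute below, and the 0.5 cutoff is an arbitrary threshold you can tune):

//Discard any fragment whose dissolve value has fallen below the cutoff.
clip(alpha - 0.5f);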

The first item on the agenda is to control our dissolution based on a noise texture. This will let us create different effects reminiscent of burning, cracking, slicing, and so on. Here, I’ve used Photoshop’s clouds and difference clouds filters to create some high-contrast Perlin-type noise:

MagmaNoiseBright.png

For our object to fade away based on this pattern, we’ll just add it as a texture map to our shader, along with a floating-point parameter, _DissolveScale, in the range [0, 1], to control the progression of the effect. For convenience, I’ve set zero to mean “fully intact” and one to mean “completely dematerialized”, so that as we move the slider from left to right in the Inspector, the object will gradually disappear.
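In ShaderLab terms, those two additions look like this (the full property list for the finished shader appears near the end of the post):

//In the Properties block:
_DissolveTex("Dissolve Texture", 2D) = "white" {}
_DissolveScale("Dissolve Progression", Range(0.0, 1.0)) = 0.0

//...and the matching declarations inside the CGPROGRAM block:
sampler2D _DissolveTex;
half _DissolveScale;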

If we think of the texture as a map to control our object’s dissolution, we want areas of different values (light/dark) to dissolve at different times. Let’s say that we want the black/dark parts of our texture to dissolve first, giving the appearance that the mesh cracks into pieces which then fade away. To accomplish this, for each fragment, we’ll add the luminance value of the dissolution map to our interpolation factor and use the result as our output alpha value:

//Convert dissolve progression to -1 to 1 scale.
half dBase = -2.0f * _DissolveScale + 1.0f;

//Read from noise texture.
fixed4 dTex = tex2D(_DissolveTex, IN.uv_MainTex);
//Convert dissolve texture sample based on dissolve progression.
half dTexRead = dTex.r + dBase;

//Set output alpha value.
half alpha = clamp(dTexRead, 0.0f, 1.0f);
o.Alpha = alpha;

Note that we’ve converted the interpolation factor to the space of [-1, 1] for this operation – don’t worry if this doesn’t make sense at first. All we’ve done is effectively ensure that our global alpha value will be 1 (fully opaque) at the very start of the effect, and 0 (fully transparent) at the very end. (If you happen to be unfamiliar with this sort of operation, it’s a little trick commonly called range remapping or range conversion, and it’s useful for all sorts of things).

We’re left with an effect that looks like this – not too shabby for a single noise texture and a few lines of code:

TextureGIF.gif

The next order of business is controlling this effect based on geometry – what if we want to dissolve the object from top to bottom, for example? There are two straightforward ways we might accomplish this. If you’re looking to create a particularly complicated progression (such as dissolving a character’s hands, bow tie, and eyes before the rest of them), you might just want to create your texture with this in mind, using your object’s UVs as a guide and hand-painting a dissolve texture to your liking (remember, with the code above, darker dissolves first).

A more interesting challenge is to control the effect based on a direction vector. You’ll need three new parameters for this:

  • A Vector for the starting point of the effect in model space.
  • A Vector for the ending point of the effect in model space.
  • A floating-point control representing the width of the “gradient” or “edge” along which the object is dissolving – I call this the “band size”.

You can visualize the effect as a gradient sweeping across the object, controlling the alpha and “wiping” the mesh from your starting point to your endpoint as it vanishes. Achieving this is pretty simple, but you’ll first want to add a vertex routine to your shader program, since you’ll be needing some geometry data that isn’t carried through to the fragment function by default. Outside of any of our shader functions, we’ll calculate a few global values based on our new parameters:

//Precompute dissolve direction.
static float3 dDir = normalize(_DissolveEnd - _DissolveStart);

//Precompute gradient start position.
static float3 dissolveStartConverted = _DissolveStart - _DissolveBand * dDir;

//Precompute reciprocal of band size.
static float dBandFactor = 1.0f / _DissolveBand;

Then, we’ll write our vertex routine to calculate an “alpha value” for the current vertex based on the effect progression. Note that we’ve modified the shader’s fragment Input struct to have an additional parameter for this (dGeometry) – we’ll let Unity handle the interpolation for each individual fragment to help reduce artifacts. Here’s what the complete calculation looks like:

//Don't forget to specify your vertex routine.
#pragma surface surf Standard /*...your other #pragma tags...*/ vertex:vert
//...//
void vert (inout appdata_full v, out Input o) 
{
    UNITY_INITIALIZE_OUTPUT(Input,o);

    //Calculate geometry-based dissolve coefficient.
    //Compute top of dissolution gradient according to dissolve progression.
    float3 dPoint = lerp(dissolveStartConverted, _DissolveEnd, _DissolveScale);

    //Project vector between current vertex and top of gradient onto dissolve direction.
    //Scale coefficient by band (gradient) size.
    o.dGeometry = dot(v.vertex - dPoint, dDir) * dBandFactor;		
}

Then, in our fragment shader, we simply use the interpolated dGeometry value (clamped to the range of [0, 1]) to set our alpha.
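For the geometry-only version of the effect, that write is just a couple of lines (assuming the Input struct carries dGeometry as above):

//Use the interpolated geometry coefficient directly as the output alpha.
half alpha = clamp(IN.dGeometry, 0.0f, 1.0f);
o.Alpha = alpha;

We’re left with an effect that progresses like this: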

GeometryGIF.gif

Combining this with our texture-based dissolve to create a hybrid effect is dead simple – just add the raw value of dGeometry to the luminance of the noise texture, clamp to [0, 1] as per usual, and use that as your alpha value:

//Combine texture factor with geometry coefficient from vertex routine.
half dFinal = dTexRead + IN.dGeometry;

//Clamp and set alpha.
half alpha = clamp(dFinal, 0.0f, 1.0f);
o.Alpha = alpha;

CombinedGIF.gif

Our last task is adding in some emissivity, so that the edges of pieces about to dissolve can glow before fading away. There are quite a few ways to handle this, and the one that works best for you will depend on the approach you’ve taken. You can use offset versions of the interpolation parameter to calculate the glow strength, you can shift a “band” of emission down your mesh as it fades away, you can apply thresholding logic to your final alpha value to have a fragment “emit” at low values before clipping itself from view, and so on.

For our purposes here, I’ve chosen an approach which supports the “hybrid” texture/geometry dissolve fairly intuitively, by defining the size of the glow region in accordance with the “band size” specified for the rest of the effect. I use this factor to offset the alpha value calculated previously, using this shifted value to control the glow strength. I’ve also included a couple of additional parameters which control the sharpness of the glow’s edge (an intensity multiplier) and create a gradient to calculate the glow’s colour (start/end colours, and a parameter to shift the boundary between them):

//Shift the computed raw alpha value based on the scale factor of the glow.
//Scale the shifted value based on effect intensity.
half dPredict = (_GlowScale - dFinal) * _GlowIntensity;
//Change colour interpolation by adding in another factor controlling the gradient.
half dPredictCol = (_GlowScale * _GlowColFac - dFinal) * _GlowIntensity;

//Calculate and clamp glow colour.
fixed4 glowCol = dPredict * lerp(_Glow, _GlowEnd, clamp(dPredictCol, 0.0f, 1.0f));
glowCol = clamp(glowCol, 0.0f, 1.0f);

By outputting the computed colour as the emissive colour of the fragment (o.Emission), the mesh will now glow in anticipation of its disappearance. (In the following examples, the albedo tint is adjusted according to the glow factor to boost the colour even more.)
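In the surface function, that final write is a single line – plus, if you want the albedo boost used in the examples, something along the lines of the second (illustrative) line:

//Output the glow as emission so the edges light up before dissolving.
o.Emission = glowCol.rgb;
//Optionally nudge the albedo toward the glow colour as well (illustrative, not exact).
o.Albedo = lerp(o.Albedo, glowCol.rgb, clamp(dPredict, 0.0f, 1.0f));

You can play with different noise textures, glow colours, and effect parameters to create quite a few different dematerialization effects: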

CompositeGIF.gif

Top: “Magma” effect using Perlin-type noise, high-intensity red-yellow glow, and top-to-bottom effect direction. Middle: “Boules” effect using pin-light radial gradients, purple glow, and bottom-to-top effect direction. Bottom: “Glitch” effect using offset barcode pattern, cyan-green glow, and corner-to-corner effect direction.

For reference, here’s the final property list and Inspector panel for the shader used to create the above effects:

Properties 
{
    _Color ("Color", Color) = (1,1,1,1)
    _MainTex ("Albedo (RGB)", 2D) = "white" {}
    _DissolveScale ("Dissolve Progression", Range(0.0, 1.0)) = 0.0
    _DissolveTex("Dissolve Texture", 2D) = "white" {}
    _GlowIntensity("Glow Intensity", Range(0.0, 5.0)) = 0.05
    _GlowScale("Glow Size", Range(0.0, 5.0)) = 1.0
    _Glow("Glow Color", Color) = (1, 1, 1, 1)
    _GlowEnd("Glow End Color", Color) = (1, 1, 1, 1)
    _GlowColFac("Glow Colorshift", Range(0.01, 2.0)) = 0.75
    _DissolveStart("Dissolve Start Point", Vector) = (1, 1, 1, 1)
    _DissolveEnd("Dissolve End Point", Vector) = (0, 0, 0, 1)
    _DissolveBand("Dissolve Band Size", Float) = 0.25
}

InspectorPanel.png

Finally, here’s a look at the effect in action on our little poltergeist fellow, synchronized with a particle system which we’ve animated on a curve to give that extra little bit of spooky panache:

PossessGIF.gif

And voilà, now we’ve created a nice, customizable shader perfect for teleportation, burning, dissolving, or any other bit of dematerialization magic.

Prototyping a Dynamic Camera System

Every player seems to have a different idea of the features that are most important to them in a game – depending on who you ask, that might be the level design, graphics, story, music, adequate inclusion of puffins, and so on. However, one key element dictates our perception of each and every one of these features, serving as the player’s window into the game world–the camera. A game’s camera is the oft-unsung hero (or hated villain) of the complete experience, almost solely responsible for defining the player’s perspective. Cameras need to consider everything from user input and avatar movement to physics constraints and cinematic intent. Most players may never notice a great camera system, but most every player will notice a terrible one.

Spirit has been particularly challenging in this regard, as we have a number of factors to consider in designing our camera system. The game world is relatively open, and puzzles are nonlinear, so strict designer-imposed controls are out of the question. We want users to be able to control and reset the camera freely, but integrate a degree of automation to prevent the need for constant manual adjustment. We also need to integrate the camera with our animation system, allowing for cutscenes and dynamic transitions. Finally, for want of a better phrase, we have a lot of stuff in our levels, so physics-based adaptation is a must. We’ve prototyped these features into a single dynamic system that looks something like this:

OverallCamera.gif

As a disclaimer, we’re still quite a ways from some of the amazing dynamic camera systems out there, but our current system has all the functionality we’ll need to move forward with refining the design. Here’s a look at how we’ve designed and developed our prototype camera system using Unity 2017:

Phase I: Basic Controls & Follow Camera

Our first step is creating a basic third-person camera that follows the player around while permitting them to adjust their view and look around. For this task, we designed a few basic constraints defining the valid operating space of the camera:

  • Minimum and maximum pitch angles.
  • Minimum and maximum distance from the player.
  • Incremental yaw around the player, which resets by facing the camera in the same direction as the player.

To start, we can calculate our default position using an offset vector based on the negative of our player’s forward vector, a default zoom distance, and a default pitch angle:

Transform pTarget = player.curObject.camTarget;
Quaternion offsetPitch = Quaternion.AngleAxis(pitchLevel, pTarget.right);
Vector3 offset = offsetPitch * zoomLevel * -pTarget.forward;
Vector3 targetPos = pTarget.position + offset;

From our default position, the camera’s target position changes if the player moves or if they manually adjust the camera’s angle or distance. The three key factors in this adjustment will be pitch, yaw, and zoom. We’ll treat pitch and zoom as a continuum, since we’re defining those qualities relative to the horizon and the player’s position respectively.

Yaw, on the other hand, is a bit trickier. The obvious answer here is to define “zero yaw” as facing the same direction as the player. However, in the interest of avoiding dizziness, vertigo, and the inevitable cavalcade of lawsuits that would follow, we don’t want the camera to turn instantly as the player moves. In fact, we’d like the movement input to appear relative to the camera: the player pushes right on the movement stick, and they appear to move right on the screen, and so on. Thus, we’ll define any yaw change as incremental, by rotating the camera around the player in the y-axis independent of the player’s current movement direction. This will let us more easily reference the camera’s forward vector in our movement logic to determine which way the player should go.
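The movement side of that arrangement is cheap to satisfy – a sketch of camera-relative input handling might look like this in the player’s movement code (moveInput is a hypothetical Vector2 of stick input):

//Flatten the camera's forward vector onto the ground plane...
Vector3 camForward = cam.transform.forward;
camForward.y = 0.0f;
camForward.Normalize();

//...derive a matching right vector, and map stick input to what the player sees on screen.
Vector3 camRight = Vector3.Cross(Vector3.up, camForward);
Vector3 moveDir = camForward * moveInput.y + camRight * moveInput.x;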

Obtaining the final target position involves a few basic transformation calculations:

  1. Calculate a base offset using the XZ components of the camera’s current forward vector.
  2. Normalize the offset and rotate it by two quaternions, one for pitch adjustment (using the camera’s right vector as a rotation axis), and one for yaw adjustment (using the world up vector).
  3. Multiply the resulting offset by the current zoom level.
  4. Calculate the camera’s position by adding the final offset to the player’s position.
  5. Rotate the camera to look at the player.

We’ll implement that as follows:

Transform pTarget = player.curObject.camTarget;
Transform rTarget = cam.transform;
Vector3 offset = -rTarget.forward;
offset.y = 0.0f;
offset *= zoomLevel;
Quaternion offsetPitch = Quaternion.AngleAxis(pitchLevel, rTarget.right);
Quaternion offsetYaw = Quaternion.AngleAxis(yawAdjust, Vector3.up);
offset = offsetYaw * offsetPitch * offset;
Vector3 targetPos = pTarget.position + offset;

The resulting controls look something like this:

BasicControls

Phase II: Physics

Adding basic physics is actually far less painful than you might think, after you’ve implemented your core control scheme. A lot of the decisions in this respect, at least early on, will be a question of design rather than programming. While the details of this aspect will be dependent on your physics engine, here’s some guidelines we’ve used in developing our camera physics:

  • Use a spherical collider to maximize smoothness when colliding with objects and reduce the chances that the camera will get “stuck”.
  • Set the collider radius slightly larger than the camera’s near clip distance to avoid unwanted geometry clipping.
  • Ensure that you can toggle physics on the camera – disabling physics during camera reset and animation is generally a must to prevent unwanted side effects.

If you happen to be using Unity, a quick way to set up your camera physics is to tack on a sphere collider and a rigidbody, and use Rigidbody.MovePosition to update the camera’s target position, using a distance threshold to prevent clipping through thin walls and other geometry:

Vector3 posChange = targetPos - cam.transform.position;
posChange = Mathf.Clamp(posChange.magnitude, 0.0f, maxSpeed * Time.fixedDeltaTime)
    * posChange.normalized;
rb.MovePosition(cam.transform.position + posChange);

(As a word of warning for Unity users – if you’re using this, or a variation, as your quick and dirty camera physics solution, be sure to set your camera rigidbody mass to zero and set its velocity to zero during every update – lest you be plagued with unwanted force interactions.)
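The per-update half of that warning amounts to a couple of lines at the top of FixedUpdate (a sketch, using the rb and cam fields from the snippets above):

//Zero out any accumulated motion every physics step so collisions can't fling the camera.
rb.velocity = Vector3.zero;
rb.angularVelocity = Vector3.zero;
//...then compute targetPos and call rb.MovePosition as shown above.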

Here’s a comparison between our initial camera and our physics-capable camera when confronted with a wall:

PhysicsLarge

Admittedly, this is a fairly basic implementation, but the result is suitably robust in many situations. The camera has respectable behaviour when crammed into walls, floors, and most level geometry. We can improve this with some dynamic physics constraints and raycasting, but that’s a post left for another day.

Phase III: Animation

For our purposes, we have three main animation requirements for our camera:

  • Smooth transition to and from predefined cutscene animations.
  • Short transition sequences for certain mechanics.
  • Automatic camera reset.

We handle each of these slightly differently, though each updates by overriding the default physics and player-controlled camera mode. For cutscenes, we write our camera’s pre-cutscene transformation to the same format used for cutscene keyframes – which we then append to either end of the cutscene’s frame list. The result is a modified animation which transitions to and from the cutscene’s path without hiccuping between camera control modes:

Cutscene

Our transition sequences are used when a player jumps between different controllable objects. To accomplish this, we capture a keyframe of the camera’s transformation at the instant the player triggers the mechanic, and calculate the default position of the camera for the new object in a second keyframe. We then animate the camera between these frames before restoring control to the player:

Possession

Finally, we allow players to “reset” the camera at any time, smoothly swivelling back to the default pitch, yaw, and zoom for the player’s current world position and orientation. Here, we key the camera’s pitch, zoom, and absolute yaw relative to the player for both the current and default camera orientations. We interpolate the values simultaneously to yield a smooth swivel effect:

Reset
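Under the hood, a sketch of that reset might interpolate the three controls in parallel each frame, something like this (resetTimer, resetTime, and the start/default values are illustrative names):

float u = Mathf.Clamp01(resetTimer / resetTime);

//Blend all three controls at once for a single smooth swivel.
pitchLevel = Mathf.LerpAngle(startPitch, defaultPitch, u);
yawAdjust = Mathf.LerpAngle(startYaw, 0.0f, u);
zoomLevel = Mathf.Lerp(startZoom, defaultZoom, u);
//The offset calculation from Phase I then places the camera for the current frame.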

Phase IV: Extras

A nice additional feature is the inclusion of a damping system, which adds a light springlike quality to the way that the camera adjusts its position.

The obvious implementation here is to simply apply a damping or acceleration function to our camera’s transformation update. However, damping the camera’s target position alone will have the effect of making our yaw and pitch controls feel sluggish. Instead, we’ll effectively damp the scaling of our offset vector, preserving the snappiness of our view controls while giving the camera a sense of springiness as the player moves around. We can implement this by using Unity’s Vector3.SmoothDamp function (or by keeping track of the camera’s frame-to-frame velocity and applying acceleration manually):

Vector3 realTarget = Vector3.SmoothDamp(cam.transform.position, targetPos,
    ref camVel, dampingTime, player.curObject.moveSpeed);
offset.Normalize();
offset *= (realTarget - pTarget.position).magnitude;
targetPos = pTarget.position + offset;
rb.MovePosition(targetPos);

The resultant camera behaves something like this (undamped on the left, and damped on the right – we can reduce the exaggeration of the effect by tweaking the damping time):

DampLarge

With that in place, we’ve got ourselves a simple, versatile camera system that can adapt to all of our basic in-game needs. We’ll be back soon with updates on art direction and our final gameplay prototype!

Shading Spirit

Over the past few weeks, our artist has been fleshing out the details of our final character model and starting on animations. And so, the time had come – no more placeholder shaders for the little guy. Time to sit down and take a crack at a custom surface shader for our poltergeist friend, and we already had a few key features in mind. Since the beginning, we’d envisioned something similar to the ghosts from Luigi’s Mansion: Dark Moon:

In particular, take a look at the little green guy – he was a big inspiration for Spirit’s character design and shows off some of what we’d like to achieve with our visual effects.

Let’s break down the visual features of the model:

  • Base colour (green)
  • Glowing eyes/mouth
  • Surface detail (bumps/pores)
  • Edge/rim lighting (white/green)
  • Exterior glow (green halo)

Additionally, we wanted Spirit to have adjustable partial alpha, so that he’d appear semi-transparent, for maximum spookiness. Most of what we want to accomplish (aside from the exterior halo, which we’ll add in post-processing) can be done with a standard surface shader in Unity. Here’s a list of the components we’ll need to integrate for each feature:

  • Depth pre-pass and alpha intensity
  • Albedo map and tint
  • Emission map
  • Normal map, intensity, and smoothness
  • Rimlighting map, intensity, and tint

And here are the texture maps we’ll be using to achieve the final effect:

TextureComposite.png

Unity’s built-in CG features make writing this shader pretty easy if you know which tools to use – for our final effect, we started from the standard surface shader template, which already includes our albedo map, base tint, and smoothness:

spirit-albedo
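For reference, the surf routine in that template is only a handful of lines – roughly the following, minus the metallic slider we won’t dwell on here:

void surf (Input IN, inout SurfaceOutputStandard o)
{
    //Albedo comes from a texture tinted by the base colour.
    fixed4 c = tex2D(_MainTex, IN.uv_MainTex) * _Color;
    o.Albedo = c.rgb;
    o.Smoothness = _Glossiness;
    o.Alpha = c.a;
}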

The albedo is there, but this hardly looks like a ghost – more like a plastic toy. Let’s add a bit of texture first with our normal map. Shader veterans will be happy to hear that Unity will do all of the tangent-space conversions for us, provided you’ve imported your texture with the “Normal Map” texture type selected. All you need to do is use the UnpackNormal function. If you’d like to adjust the intensity of your normal map, just employ one of the worst-kept secrets in computer graphics – multiply the result of your normal map read by a colour with your desired intensity factor plugged into the red and green channels, while leaving the blue channel at 1:

o.Normal = fixed3(_BumpIntensity, _BumpIntensity, 1.0)
    * UnpackNormal(tex2D(_BumpMap, IN.uv_BumpMap));

So here’s what Spirit looks like with some detail, because real ghosts have pores (slightly enhanced for demonstration):

spirit-normals

The material texture is closer to correct now, but he still looks like a regular plastic object without any glow. Let’s start with our emission map, which will make his eyes glow, simply by setting the Emission property of our output structure to read from our emissive texture:

spirit-emission

While it’s a little too satanic for our purposes, we’re starting to see a promising glow – unfortunately, when combined with our full-force smooth albedo, which happens to be a bright base colour, the result is less “mischievous ghost” and more “irradiated cyclops”. Let’s fix this by toning down our albedo map with a darker tint colour and letting most of Spirit’s apparent colour come from our rimlight map, which is a toned-down modification of our base colour map. Rimlighting works by comparing the angle of the viewer’s eye (i.e., the camera view direction vector) with the surface of the object (i.e., our final surface normal). We want the edges of the object to glow, meaning that if the two vectors are perpendicular, the glow should be maximized. Therefore, we’ll use the dot product, clamp it, and subtract the result from 1 to give us our base rimlight intensity, which we can then modify using a custom intensity variable, tint colour, and our rimlight map. For our purposes, we’ll add the resulting colour to our emissive output:

half rimTerm = 1.0 - saturate(dot(normalize(IN.viewDir), o.Normal));
o.Emission = tex2D(_EmissiveTex, IN.uv_EmissiveTex)
    + _RimColor * tex2D(_RimTex, IN.uv_RimTex)
    * smoothstep(0.0, 1.0, rimTerm * _RimIntensity);

After tweaking the colours to our liking, we’ve got something like this:

spirit-rimlight-normally

Finally, that looks quite a bit more like what we’re going for. Now, for one last feature – our partial alpha. The tricky part here is getting the depth buffer to behave properly. Here’s what happens if we add an alpha slider and flag the shader as transparent using tags:

spirit-demonic

Ouch. Not what we want at all – we want to be able to see the background through our little guy, but not his disembodied limbs – note the horrible clipping effect that’s happening as well. Resolving this is surprisingly easy – we add a pre-pass that fills the depth buffer while writing no colour output (an empty colour mask), ensuring that our final render will only deal with the bits of the surface closest to the camera, disregarding all that back geometry. Here’s the code for our pre-pass, which is painfully short:

//First pass.
Pass
{
    ZWrite On
    ColorMask 0
}
//Set up our next pass.
Cull Back
ZWrite On
Blend SrcAlpha OneMinusSrcAlpha
//CGPROGRAM begins here...

Now let’s have a look at the little guy with some stuff behind him:

spirit-final

There we go! While we’ve got some texturing and post-processing tweaks we can make to improve the effect, there’s our surface shader, now with 100% fewer disembodied limbs. For reference, here’s our final list of properties in the surface shader, and the adjustment panel:

Properties
{
    _Color ("Color", Color) = (1,1,1,1)
    _Alpha ("Base Alpha", Range(0,1)) = 1.0
    _MainTex ("Albedo (RGB)", 2D) = "white" {}
    _Glossiness ("Smoothness", Range(0,1)) = 0.5
    _BumpMap("Normal Map", 2D) = "bump" {}
    _BumpIntensity("Normal Intensity", Float) = 1.0
    _EmissiveTex("Emission Map", 2D) = "black" {}
    _RimColor("Rimlight Color", Color) = (1,1,1,1)
    _RimTex("Rimlight Texture", 2D) = "white" {}
    _RimIntensity("Rimlight Intensity", Range(0.0, 2.0)) = 0.0
}

shader-panel

Finally, here’s a family portrait with our old model on the left, with standard shading, and our new and improved shaded model on the right:

family-portrait-spoorits

And there we have it – our little friend is ready to wreak havoc in style. We’ll be back soon with more updates on Spirit!

Spirit Development Update: July 2017

Now that we’re rounding out our third official month of development, we’d like to take some time to review our progress and share it with the community. It’s been a busy season for us, with lots of business meetings and new opportunities. We’re excited to announce that we are working with Northumberland CDFC to help fund our development efforts and that we will be participating in the UOIT business incubator throughout the year!

Regarding Spirit, we’ve nearly finished the game’s core prototype functionality, having implemented alpha versions of our core mechanics, user interface, input system, save system, and application management. While we’ve got a long way to go in refining our gameplay and fleshing out our level design, it’s been quite rewarding to see the first few pieces come together. In this post, we’ll reflect on everything we’ve done so far, and how we plan to build on our existing foundation for Spirit in the coming months.

Gameplay

Navigation & Camera

Naturally, our first step in development after setting up our basic input & state management (see below) was the integration of basic player navigation. Our case is a bit unusual in this regard since players will be controlling a lot of different objects, so a one-size-fits-all solution just doesn’t work for us. We’ve set up a system that lets us define movement controllers for different objects with varying degrees of deviation from standard rigidbody-based or character controller-based motion, which we will expand as we add new player-controlled objects, integrate animations, and improve the feel of our character movement.

Movement.gif

We’ve also set up a basic camera system allowing for locked and free-form camera controls, which supports a couple of different modes of operation depending on the object the player is controlling. It’s currently quite similar to the camera in our initial prototype, with some improvements to interpolation and adjustment behaviour. We’ve also developed a simple cutscene system built from our path editor utility, which has allowed us to start thinking about cinematic aspects of the game. Our next goals with the camera will be the integration of some basic physics and location-based constraints to improve gameplay feel and make it easier for the camera to adapt to different level geometries.

Camera.gif

Possession & Summoning

Possession is our core mechanic, and so it will be something that is in a constant state of expansion, refinement, and testing throughout the development process. Right now, we have a few different objects in our prototype for players to control (a large, bouncy ball, a marble, and a paper airplane), in addition to Spirit himself. Objects control quite differently depending on their physical properties – shape, size, weight, air resistance, and so on. We’re using three primary controllers at the moment for our current set of objects – a character controller-based model for Spirit, a controller we’ve designed specifically for flight, and a controller for rolling objects (the latter two are both heavily physics-based).


Additionally, we’re integrating our post-processing system with possession to give each object a unique aesthetic when players are “inside” the object, which we’ll be expanding on in later updates. Right now, we’re experimenting with a few different effects related to image warping and colour distortion. No psychedelics have been involved in the process, we promise.

Rescue & Collection

Players’ primary objective in Spirit is to rescue their ghostly cohorts from a devious team of paranormal exterminators, who’ve trapped them for later disposal. Spirit, who managed to evade the exterminators’ dastardly traps, will not stand by and allow innocent poltergeists to suffer in captivity. After all, a little innocent haunting never hurt anyone!

We’re going to be designing a number of different friends for Spirit to rescue, which our artist Josh will be bringing to life shortly (we’re acquiring supplies for the ritual). In the meantime, we’ve integrated the mechanic with a host of adorable magenta ghost clones, which aren’t terrifying at all, thanks to their giant yellow eyes. Ever watching. Staring. Judging.


Players can also collect little bits of concept art, tutorial images, and assorted bits of photographs and the like throughout their adventure, which they’ll be able to visit in a little journal menu, hosting their collection and detailing their current objectives.

Interaction & Dialogue

Once we’ve fleshed out our story and side characters a bit more, we’ll be integrating a fair bit of narrative, sassy conversation, and general tomfoolery to complement the game world. Right now, we’ve prototyped our system for interaction and conversation with a couple of talkative books. We’ve integrated this system with our collection mechanic, so that players can “take” things from NPCs via interaction, and we’ll be adding some basic fetch quests, riddles, and so on in the future.

Dialogue.gif

Abilities

A new feature we’ve been working on is an ability/talent tree similar to what you might find in an RPG perk system (though far less complicated!) or a game like Ori and the Blind Forest. At the moment, we’ve just finished implementing a skeleton for defining abilities, acquiring perk points, and spending those points to acquire and use abilities. We’ll be working on designing and implementing unique talents over the next few months.

AbilityUI.png

Application Management & State Saving

I like to have the application back-end up and running before taking on almost anything else, so that we can switch between scenes and deal with global GameObjects effectively. This helps us avoid snafus like being unable to test state transitions, getting tangled up with persistent objects, and so on. Thus, our app manager was one of the first things we worked on, and we’ve expanded it steadily to accommodate new features as necessary.

We’ve also been extending our save system to support better player data management, improved file handling, and the ability and collection mechanics.

Input

Input was another of our initial areas of focus, as we wanted to develop a custom wrapper for Unity’s input system that allows us to query input based on actions defined outside of Unity’s input manager. We did this so that we can build a system for players to rebind their inputs effectively in-game, rather than having to rely on the Unity launcher. Furthermore, this leaves us the flexibility to import custom input plugins if we want to integrate support for different controllers or improved input polling in the future.

User Interface

Our UI is largely prototypical for now, with many placeholder assets and sprites taken from our older iterations. However, we’ve fleshed out the functionality of the HUD, menus, and hubworld, and we’ve built a solid foundation for our redesign of UI elements, which we’ll be working on soon. We’ve also spent some time wrestling with Unity’s default UI navigation, to ensure the best experience for players using a gamepad.


Animation & Sound

Our path editor has been serving us well, and we plan to use it as a tool to help animate obstacles, characters, and visual effects once we’ve finalized our level designs. We’ll have the all-new Spirit character model and animations within the next few weeks, but for now, we’ve integrated our old animations into Unity’s animation system, with a small bit of customization built on top for our gameplay needs. We’ll be extending this system as we continue to refine our character movement and generate new designs. For now, little old Spirit still looks pretty adorable, though.

We’ve also integrated some of our old sound effects and music, and we’re working on balancing and extending our simple sound system to better handle transitions and the overlay of multiple effects. We’ll also be working with some brand-new editing software and digital instruments soon, so stay tuned for music updates!


What’s Next

Our next major priority is revamping Spirit’s model and animations, and updating our navigation code to ensure a great platforming experience for players. From there, we’ll be refining our core mechanics and working on level design and asset creation, before drilling down into our puzzle design and adding depth to the game. We’re having a great time working on Spirit and we really hope that you’ll enjoy it when the time comes!

UnderConstruction

Playing Nice with Unity: Editor Customization

Can I please just delete one element in this godforsaken array without having to navigate eight bloody dropdowns, six unnamed text fields, and the 1989 Portuguese Census?

It’s happened to all Unity users at some point. You write a new script, attach it to an object, see it in the Inspector for the very first time, and recoil in disgust. Perhaps it’s the inability to name array elements, or the absence of constraints on one of your variables, a texture preview that’s just a bit too small, or – perish the thought – something’s alignment is off by a few pixels. The ensuing chaos is simply too much to bear. You close the editor, pour yourself a hot cup of chamomile tea, and swear you’ll never have to look at that script ever again. Two fields misaligning by three pixels is, after all, an offense worthy of litigation in the eyes of your designer, artist, and/or public relations manager. Luckily, in your hour of need, the very editor you’ve cursed as the bane of your existence has had your back the entire time, and in your capable hands, the Unity Editor API shall save your team countless headaches.

GetPropertyHiehgt

Just try and remember to override the GetPropertyHeight function.

A word of caution before we move on – if you’re a pro-tier editor wizard yourself, this article may not be for you. While I’m a decently advanced programmer, I consider myself to be only intermediately experienced with the Editor API – this post is intended to help beginners know why and how they should go about learning the basics and implementing simple editor customizations in their own projects. And so, without further ado, let us delve into the realm of the editor.

What is editor scripting?

Broadly speaking, when I use the term “editor script”, I’m referring to any script that uses the Editor API to explicitly tell the Unity editor what to display, which UI elements to use, and which operations should be performed upon interacting with those UI elements. Examples of editor scripts I will discuss include PropertyAttributes, PropertyDrawers, CustomEditors, and EditorWindows. All of these scripts can be used to customize existing functionality or create new functionality within the Unity Editor at varying levels of scope.

When should I use my own editor scripts?

If you’re relatively new to Unity or working with the Inspector, there are a number of simple customizations for function and form alike that can be accomplished entirely without editor scripting. Unity has a number of built-in Attributes (check the sidebar of the docs for a full list) that function as a tagging system of sorts, allowing you to specify, among other things, script dependencies, Inspector headings, and the use of sliders or input restrictions for variables. Unless you’re looking to impart project-specific or relatively detailed functionality, using these tools can be a great way to tweak the Inspector to your needs without excessive overhead.
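For instance, a few of the built-in attributes in action (the class and fields here are purely hypothetical):

using UnityEngine;

[RequireComponent(typeof(Rigidbody))]   //Script dependency: a Rigidbody is added automatically.
public class ExampleEnemy : MonoBehaviour
{
    [Header("Movement")]                //Inspector heading.
    [Range(0.0f, 10.0f)]                //Slider with an input restriction.
    public float moveSpeed = 5.0f;

    [Tooltip("Shown when hovering over the field in the Inspector.")]
    public string displayName;

    [TextArea]                          //Multi-line text box instead of a single-line field.
    public string description;
}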

Eventually, your requirements will exceed the capabilities of these tweaks, and you’ll write your very first custom editor. Then, your pixel-retentive self might realize that just about every script could use a few extra half-millimetres of layout padding, or perhaps your designer will cry for customized transform gizmos for every different type of enemy in the game. Here’s where you need to know when to pump the brakes – as with any programming job, there’s a trade-off of effort in versus increased productivity here, and it can be a challenging one to gauge.

Writing a new editor for every new MonoBehaviour is, generally speaking, a horrible idea – the default Inspector is there for a reason, and irrespective of its occasional quirkiness, it gets the job done efficiently for the vast majority of scripts, particularly simpler ones. My personal advice is to only use a custom script if you need an improvement of function, rather than form, and if you or someone on your team will be actively using that desired functionality on a semi-regular basis. This avoids both bloat in your codebase and wasted time, as not every little tweak will end up being worth it in the long run.

Writing Editor Scripts

Before delving into any editor scripts, you’ll likely want to familiarize yourself with Unity’s SerializedObject and SerializedProperty classes – these represent how the editor sees your objects, and are used for, among other things, identifying data fields, fetching array elements, and sending updated data back to the object. Luckily, apart from the things you’re actively customizing, Unity will automate the majority of tedious tasks like layout spacing and field display for primitive datatypes.
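The basic round trip looks like this inside a custom Editor’s OnInspectorGUI, for example (the “moveSpeed” field name is just a placeholder):

serializedObject.Update();                            //Pull the object's latest values.
SerializedProperty speed = serializedObject.FindProperty("moveSpeed");
EditorGUILayout.PropertyField(speed);                 //Draw it with the default field UI.
serializedObject.ApplyModifiedProperties();           //Write any edits back, with undo support.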

The type of script you’ll need to create depends entirely on the level of functionality you’re trying to implement. Here’s a quick guide to the technique you’ll most likely want to use, based on your needs:

Modifying UI for individual fields: PropertyDrawers

If you want to customize a certain type of datafield, then a PropertyDrawer is probably the way to go. PropertyDrawers are used to define the Inspector GUI for a specific type – commonly a custom data structure class found in multiple MonoBehaviours. Once the PropertyDrawer has been defined, all instances of that data structure will appear with the same customized Inspector layout, unless overridden. Let’s look at this with an example.

In Spirit, we want to use a few different post-processing effects depending on the object the player is currently controlling. Right now, this includes colourshifting and different types of image warping. I wanted to give our design team a quick way to edit these effects for each object, having context-sensitive fields that would display based on the effect type. Since this functionality was fairly self-contained, I wrapped the necessary fields in a utility data class, FXSpecs, and defined a PropertyDrawer for that data type:

[CustomPropertyDrawer(typeof(PossessionFX.FXSpecs))]
public class FXSpecsPropertyDrawer : PropertyDrawer
{
    public override void OnGUI(Rect position, SerializedProperty property, GUIContent label)
    {
        //Here's where you define your GUI elements:
        //Buttons, sliders, textfields, and so on.
    }

    public override float GetPropertyHeight(SerializedProperty property, GUIContent label)
    {
        //I keep track of the cumulative height of the panel by modifying a
        //variable in OnGUI, which you can then return here.
        return currentHeight;
    }
}

The result is that any MonoBehaviour with an FXSpecs field will display our custom UI – meaning that we can pop effect data onto other types of behaviours, such as cutscenes or animations, without having to redefine the UI in each instance:

SimplePropertyDrawer

In this way, PropertyDrawers can be thought of as little custom editor modules that can be used to define the UI of individual data structures, rather than modifying the layout for entire script components. (If you do need to redefine the UI for an entire script, you’ll want to create a CustomEditor as outlined below.)

Selectively modifying field display: PropertyAttributes

Consider the following scenario: you have a class with a string field you’d like to mark read-only in the inspector for debugging purposes, while preventing accidental editing. While you’re at it, you’d like to be able to tag any field as read-only for any future applications. The solution – custom PropertyAttributes. In addition to Unity’s built-in attributes (e.g. Range, TextArea, Header), you can create your own. Generally speaking, you’ll need to create two scripts to define a new PropertyAttribute. First, what usually amounts to a shell class declaring your attribute keyword:

public class ReadOnlyPropertyAttribute : PropertyAttribute { }

You’ll define the Inspector behaviour for your attribute in a new PropertyDrawer script:

[CustomPropertyDrawer(typeof(ReadOnlyPropertyAttribute))]
public class ReadOnlyPropertyDrawer : PropertyDrawer
{
    public override void OnGUI(Rect pos, SerializedProperty prop, GUIContent label)
    {
        pos.height = 16;
        EditorGUI.LabelField(pos, label, new GUIContent(prop.stringValue));
    }
}

Having done this, anything you tag with your attribute will implement the corresponding PropertyDrawer, leaving untagged fields untouched:

SimplePropertyAttribute

I like to think of this as a selective approach to PropertyDrawers – allowing you to choose which fields will implement your custom UI on a case-by-case basis. For certain applications – such as wanting to make some fields readonly, or restricting an input range – this makes much more sense than the broad-brush approach of defining a PropertyDrawer without an accompanying attribute. However, if you want all datafields of a certain type to function the same way in the Inspector, a PropertyDrawer alone is generally sufficient.

Custom Inspectors for certain script types: CustomEditors

A CustomEditor affords the highest level of customization on a per-script basis in the Inspector. This allows you to redefine the display and functionality of all of an object’s fields, as well as imparting any additional behaviours (for example, buttons that execute custom routines on the object). However, it should be noted that creating a CustomEditor is typically much more involved than creating a PropertyDrawer, particularly for scripts with complex datatypes or a large number of data fields. If the functionality you wish to customize can easily be wrapped in a small utility class independent of its parent, a PropertyDrawer may be the wiser choice.

For cases where your desired functionality cannot be accomplished through PropertyDrawers alone, the CustomEditor affords a diverse and powerful toolset for enhancing your script’s Inspector pane.

During the creation of our path animation utility, it quickly became apparent that the default Inspector simply wouldn’t do, for several reasons:

  1. The default layout for arrays would make manually managing a list of waypoints excruciatingly time-consuming.
  2. We wanted to be able to toggle things like curve shape and gizmo drawing from the Inspector with GUI buttons.
  3. We wanted to manage adding and removing waypoints from the Inspector, rather than manually creating and deleting new Transforms and dragging them around into arrays.

Here’s an example of what the default Inspector for our utility would look like:

BasicEditor

Since the functionality we wanted was fairly specific to our path utility, I opted to create a CustomEditor for the utility, adding new UI elements for the desired functionality, redefining the display of the waypoint list, and letting Unity handle the display of simple field types:

[CustomEditor(typeof(OGPath))]
public class PathEditor : Editor
{
    public override void OnInspectorGUI()
    {
        //...With EditorGUILayout spacing is handled for you.
        EditorGUILayout.PropertyField(visualizeColor);
        EditorGUILayout.PropertyField(showLoop);
        //...
        if (GUILayout.Button("Set to Linear..."))
            path.SetInterpolationMode(OGPath.PathMode.Linear);
        //...Make sure you take care of all the fields you want
        //to be exposed in the Inspector!
    }
}

(Note: The EditorGUILayout class is invaluable here, as it can quickly automate layout tasks that are otherwise borderline agonizing to perfect.)

The result is something like this:

SimpleCustomEditor

The beauty of this is that we can easily edit the functionality of the Inspector if our needs change later on, extending or modifying the behaviours we’ve defined to improve our custom UI.

Global functionality: EditorWindows

When you want to automate a task or expose some functionality pertaining to something other than an individual script – whether your whole application, an entire scene, or a collection of assets – you’ll want to look into EditorWindows. These are standalone panels of functionality that, once created, are accessible from Unity’s Window menu and allow you to create a UI for pretty much anything you can imagine – editing assets, performing operations on an entire level full of objects, resetting configuration files, testing procedural generation…

A simple EditorWindow is easy to create and permits the implementation of any global functionality (or local functionality, if you obtain handles through object fields or by searching the active scene). Let’s look at a simple example – an editor window that interfaces with a static file management class to provide a UI for wiping application data and save files (something that proves to be necessary rather frequently when iterating on your game’s file system, and becomes tedious when done manually). The script looks something like this:

public class SpiritAdminPanel : EditorWindow
{
    //This tells Unity where to put your window in the menu.
    [MenuItem("Window/Spirit Admin", false, 25)]
    public static void ShowWindow()
    {
        SpiritAdminPanel window = GetWindow<SpiritAdminPanel>();
        //Set up your window properties (size, title, etc.) here.
    }

    private void OnGUI()
    {
        if (GUILayout.Button("Reset all app data", buttonStyle))
        {
            //...And so on, define your UI here.
        }
        //GUILayout, like EditorGUILayout, allows for easier UI definition.
    }
}

The resulting window is a simple panel with two buttons allowing us to wipe out our data as we please:

SimpleEditorWindow.png

Any time you’re looking to automate some global process that occurs outside of runtime, an EditorWindow can be a great option that maximizes your ability to customize and perform these processes without needing to hard-code every change.

Moving forward with customization

If you’re relatively new to editor customization in Unity, you’ll probably be doing a lot of reading – much of the functionality of the Editor API, however useful, is not always immediately apparent. While follow-along tutorials on editor scripting do exist all over the web, they will rarely accomplish the exact functionality you’re looking for. The result is a lot of digging through documentation and forums, and experimenting on your own. This can be an arduous process, but ultimately you’ll find it well worth the effort, as the workflow benefits are potentially immense.

Plus, you’ll finally be rid of those dastardly few pixels standing between you and perfect Feng Shui in the Inspector.

Path Animation in Unity

When we originally came up with the concept for Spirit, we were a bunch of green, bright-eyed second-year undergrads, a little ragged from surviving our first year, but diehard optimists nonetheless. Inevitably, within a few weeks of beginning our first semester, the second-year curriculum had already kicked our teeth in, knocked us to the ground, and started dancing on our broken glasses, just for kicks. You see, our program has us create a game engine from scratch before so much as touching a prebuilt one, a form of education which proved to be both incredibly valuable and inexplicably cruel.

Our shepherd in this matter was the legendary Dan Buckstein, who in the space of a year was responsible for teaching us everything from quaternion math to bloom shaders. However, one of the most invaluable topics covered came along fairly early in the year, in the form of interpolation. While the most obvious application of this technique is the generation of paths from a series of waypoints, I would be remiss if I neglected to mention that interpolation itself is an algorithm applicable to a wide range of objectives – skeletal animation, colour gradients, image warping, and physics approximation, to name a few – after all, it’s just data (thanks, Dan).

Today, I’ll be focusing on path animation, something we used quite extensively to create animations in our original prototype. Since we were stuck without an existing engine as a starting point, we built a basic level editor for our original custom engine (the OG engine, as we dubbed it). The editor had several features built in for creating and transforming objects in a level, with a few sub-editors for object properties – path animation, collision, and in-game properties.

OGPathEditor

If we’re being honest, the path animation editor started as nothing more than a homework requirement, though it became eminently more useful as the year went on – particularly for particle effects, as we could use it to create breadcrumb trails and little flourishes of light and sparks that brought life to even the dustiest corners of the game.

In making the switch to Unity, we had initially [naively] assumed that Unity would have a more polished inbuilt path animation system. Much to our chagrin, despite its myriad of useful features, Unity has no such system out-of-the-box. You can achieve basic linear animations using the animation editor, or use navmesh-based traversal for AI characters, but the vanilla editor is somewhat lacking in the way of path animation. So, if we want to animate our particle systems, or camera, or anything, along a spline that we can visualize in the scene, we need to develop our own system. 

Our system will require a few key features to constitute a working prototype:

  1. A waypoint system that can smoothly animate objects along a curve at runtime.
  2. Support for different types of interpolation.
  3. A custom editor that lets us add, remove, and edit waypoints and/or control handles.
  4. A way of visualizing our path in the scene view so that we can preview and edit it more easily.

I’m going to start by tackling #2, as an understanding of interpolation forms the fundamental basis of any path animation system. Interpolation experts can feel free to skip ahead, and those completely unfamiliar with the topic should definitely look into some further personal research – a firm grasp on the concepts of different kinds of interpolation is immensely helpful in a surprising number of applications.

The basic concept of interpolation is this – you have a number of waypoints, and an equation for defining a curve based on those waypoints. Anything you animate along that curve runs off of a timer – that timer (well, a normalized version of it) is used as input, along with some of your waypoints, to an equation that spits out a point in space representative of your object’s position on the curve at the current moment in time. And that’s it. The equation that you choose defines how your curve will look – a series of lines, a smooth curve, a fancy Illustrator-style Bézier curve, and so forth.

For our path editor, we’ve chosen to support basic linear interpolation, Catmull-Rom interpolation, and cubic Bézier interpolation. Here’s what each of those looks like in a very basic nutshell – I won’t go into the reasoning behind the math here, though you can and should read the mathematics behind each one of these methods.

(Note that the t in each case is the object’s timer for the current frame/waypoint divided by the total time for that segment of the path – something that you can define manually, or with speed control, discussed below.)

Linear Interpolation (LERP):

LERPExample.png

public static Vector3 Lerp(Vector3 p0, Vector3 p1, float t)
{
    return (1.0f - t) * p0 + t * p1;
}

Catmull-Rom:

CatmullExample

public static Vector3 Catmull(Vector3 p0, Vector3 p1, Vector3 p2, Vector3 p3, float t)
{
    return ((p1 * 2.0f)
        + (-p0 + p2) * t
        + ((p0 * 2.0f) - (p1 * 5.0f) + (p2 * 4.0f) - p3) * (t * t)
        + (-1 * p0 + (p1 * 3.0f) - (p2 * 3.0f) + p3) * (t * t * t)) * 0.5f;
}

Bézier:

BezierExample

public static Vector3 Bezier(Vector3 p0, Vector3 p1, Vector3 p2, Vector3 p3, float t)
{
    return (1.0f - t) * (1.0f - t) * (1.0f - t) * p0
        + 3.0f * (1.0f - t) * (1.0f - t) * t * p1
        + 3.0f * (1.0f - t) * t * t * p2
        + t * t * t * p3;
}

(For this implementation of Bézier, points p1 and p2 are actually the handles on the curve, which essentially define tangents that can shape smooth or sharp corners, or create loops, as I’ve done above.)

Note that each of these curves uses waypoints that are more or less in the exact same positions – so the interpolation method you choose is instrumental in determining how the final animation will look.

To define your object’s motion along the path, you have a couple of options: you can manually define the time taken for each curve segment – which requires a lot of tweaking and produces inconsistent speeds along tight curves – or you can implement a technique called speed control. Speed control lets you define your object’s desired movement speed and uses a table of curve samples to create smooth, consistent motion. The process goes something like this (a rough sketch follows the list):

  1. Resample each segment of the curve using your interpolation method of choice, calculating a number of subsamples in between (8 is a nice number for this, 16 if you want extra precision or have particularly long curves).
  2. As you resample, use the distance between samples to calculate the approximate cumulative path length.
  3. Calculate the time taken to traverse the entire curve based on your object’s speed and the total length of the curve.
  4. As you animate the object, use the object’s timer to measure its traversal along the entire curve, using the timer to track its place in the table of samples and using LERP to move the object between all subsamples.
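
To make steps 1 and 2 a little more concrete, here’s a minimal sketch of building that sample table for a single Catmull-Rom segment. The Catmull function is the one shown above; the subsample count and the two lists are just illustrative choices, not our exact implementation:

//Resamples one Catmull-Rom segment (running from p1 to p2) and appends the results
//to a cumulative table of positions and path lengths.
void SampleSegment(Vector3 p0, Vector3 p1, Vector3 p2, Vector3 p3, int subsamples,
                   List<Vector3> samples, List<float> lengths)
{
    //Seed the table with the segment's starting point if this is the first segment.
    if (samples.Count == 0)
    {
        samples.Add(p1);
        lengths.Add(0.0f);
    }

    Vector3 prev = samples[samples.Count - 1];
    float total = lengths[lengths.Count - 1];

    for (int i = 1; i <= subsamples; i++)
    {
        float t = (float)i / subsamples;
        Vector3 point = Catmull(p0, p1, p2, p3, t);

        //Accumulate the straight-line distance between consecutive samples
        //to approximate the length of the curve so far.
        total += Vector3.Distance(prev, point);
        samples.Add(point);
        lengths.Add(total);
        prev = point;
    }
}

The total traversal time (step 3) is then just the final cumulative length divided by your desired speed, and at runtime (step 4) you find the pair of samples that bracket your current distance along the curve and LERP between them.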

Using these concepts, it’s fairly obvious how you might build a simple system for storing a path and animating objects along that path – keep a list of Transforms pointing to your waypoints in the scene, use their positions for interpolation and/or subsampling, and adjust your object’s trajectory in the Update function.
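
As a rough sketch of that idea (the field names here are assumptions for illustration, and you’d swap the LERP call for Catmull/Bézier plus the sample table above as needed):

using System.Collections.Generic;
using UnityEngine;

public class SimplePathFollower : MonoBehaviour
{
    public List<Transform> waypoints;   //Waypoints placed in the scene.
    public float segmentTime = 2.0f;    //Seconds per segment (or derive this via speed control).

    private int segment = 0;
    private float timer = 0.0f;

    private void Update()
    {
        if (waypoints == null || waypoints.Count < 2)
            return;

        timer += Time.deltaTime;
        float t = Mathf.Clamp01(timer / segmentTime);

        //Basic LERP between the current pair of waypoints.
        Vector3 p0 = waypoints[segment].position;
        Vector3 p1 = waypoints[(segment + 1) % waypoints.Count].position;
        transform.position = Vector3.Lerp(p0, p1, t);

        //Move on to the next segment once this one is finished, looping back to the start.
        if (t >= 1.0f)
        {
            segment = (segment + 1) % waypoints.Count;
            timer = 0.0f;
        }
    }
}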

The real magic here is in creating a custom editor for your paths, so that you can switch between interpolation methods, add and remove waypoints, push them around, and watch as your path changes in the scene. Here’s an example of what we’re shooting for (excuse GIF quality):

PathSweg

To achieve this, we’ll want to append the [ExecuteInEditMode] attribute to our main path script for the purposes of gizmo drawing, and create a basic Editor script with a custom layout for adding/removing waypoints and setting them up automatically on our curve. Check out Unity’s documentation on the Editor class, GUILayout, and SerializedProperty for a quick rundown, and make sure to tag your Editor script with [CustomEditor(typeof(YourClass))]. To display custom property fields, we’ll need to grab a handle to the serialized version of our object:

private void OnEnable()
{
    //The 'target' variable is built into the Editor class.
    //Cast it according to the class type for which you're building your Editor.
    path = (OGPath)target;
    serial = new SerializedObject(path);

    waypoints = serial.FindProperty("waypoints");
    lockTangents = serial.FindProperty("lockTangents");
    visualizeColor = serial.FindProperty("visualizeColor");
    showLoop = serial.FindProperty("showLoop");
}

From here, we can grab properties using the FindProperty function, and display those using PropertyFields in OnInspectorGUI – just remember to call Update on your serialized object at the beginning of the routine, and ApplyModifiedProperties at the end, lest you be baffled by checkboxes that refuse to change. Here’s an example of how that looks:

public override void OnInspectorGUI()
{
    serial.Update();

    //Insert PropertyFields, Layout functions, buttons, etc.
    //Check out the documentation for some examples of what you can do.
    //Here's a snippet of what we used for our purposes:

    EditorGUILayout.PropertyField(visualizeColor);
    EditorGUILayout.PropertyField(showLoop);

    GUILayout.Label(string.Format("Interpolation: {0}", path.pathMode.ToString()));

    EditorGUILayout.PropertyField(lockTangents);

    if (GUILayout.Button("Set to Linear..."))
        path.SetInterpolationMode(OGPath.PathMode.Linear);
    //...And so on for your other Inspector functionality...

    serial.ApplyModifiedProperties();
}

Getting the curve to display in the editor was far easier than I had anticipated – simply implement the OnDrawGizmos function in your main path script, and use Gizmos.DrawLine to plot your curve along your waypoints (in the case of LERP) or your subsample table (if you’re using Catmull, Bézier, or some other more complex method). Bézier can be a bit tricky because some waypoints are actually “handles” – I got around any potential confusion for our level designers by keeping the handles in a separate list, so that swapping between interpolation methods keeps a more consistent general trajectory. Regardless of how you choose to handle different curves (sorry), though, a custom editor is key, and not all that difficult to set up.
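
In our path script, the gizmo routine boils down to something like this minimal sketch – it assumes you’ve cached a list of sample points (or just use the waypoint positions for LERP), and that visualizeColor is a Color field on the path:

private void OnDrawGizmos()
{
    //'samples' is assumed to be the list of points built from your subsample table
    //(or simply your waypoint positions, if you're using plain LERP).
    if (samples == null || samples.Count < 2)
        return;

    Gizmos.color = visualizeColor;

    //Draw a line between each consecutive pair of samples to trace out the curve.
    for (int i = 0; i < samples.Count - 1; i++)
        Gizmos.DrawLine(samples[i], samples[i + 1]);
}

The resulting prototype functions something like this: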

CandleAnimate.gif

In the couple of weeks since developing our initial prototype, we’ve extended it to support features like transitioning between curves, modes of object rotation, and a basic cutscene system built on chaining and timing multiple paths together.

In the next couple of weeks, I’ll be posting an update on working more in-depth with Unity’s systems for implementing custom editors, property drawers, and so on. Until then, it’s back to the grindstone! Thanks for reading.

Feel free to reach out in the comments if you have any questions or you’d like to add to the discussion!

Persistence in Unity

Since we’re designing Spirit with revisitable levels, persistent collectibles, and a simple system for player abilities, players will be able to save their progress in the world and return to their adventure later, picking up where they left off. Putting together a skeleton system for this functionality seemed like a pretty straightforward task – tap into Unity’s persistent object ID system, and serialize each dynamic object’s transformation info to a file alongside an ID for later retrieval.

Naturally, Unity has no persistent object ID system, and Unity’s Transform class can’t be serialized. Done and done. It’s perfect.

Hold on, I know what you’re thinking – “But what about the GetInstanceID function?” That’s all fine and well, but Unity regenerates these IDs every time your application runs. Using instance IDs to identify objects between play-mode runs in the editor might very well just work (since the editor process stays alive), but the next time you launch Unity – or, eventually, the next time your player restarts the game – you’ll be confronted with a mass of data and virtually no means of identifying its context.

instance-id.png

If you set your Inspector to Debug mode, you can view an object’s Instance ID – but unless you expect players to keep their computers on with your game running in the background 24/7, you’ll be out of luck trying to use this for saving purposes.

A number of tutorials exist on the subject of overcoming this unfortunate little roadblock, covering options from the mundane to the ridiculous. For saving basic data and configurations, Unity offers the PlayerPrefs utility, a nifty tool for persisting things like in-game options as little chunks of data. However, I’m fairly certain that if you attempted to use PlayerPrefs for saving an entire game state, someone would find you and hit you over the head with a spatula in a last-ditch effort to knock some sense into you.

C# offers a myriad of options for file IO, but without some method of identifying objects uniquely and automatically, we’re left with a number of unappealing options:

  1. Identifying GameObjects in the hierarchy by name or tag, which is horribly inadvisable, as objects can be renamed/retagged and the process is entirely manual.
  2. Re-instantiating most of the dynamic things in your level/world from prefabs based on your saved file, which requires labyrinthine save structures for anything remotely complex and reduces your designers’ ability to move things about without breaking your save system.
  3. Buying a pint of ice cream and drowning your sorrows whilst redesigning your game to not require a save system.

The fourth, and far less demoralizing option, is to write our own persistent, automatic ID system with a centralized manager. Following this, we can write some simple serializable data structures to store basic information (e.g. transformations), associate each object’s data with its ID, and write the lot to a file that we can reread to restore the game state at a later time. (As a sidenote, I highly recommend Joshua Smyth’s blogpost on automatic IDs in Unity, linked at the end of this post, which was an invaluable reference while working on this.)

Our first step is the matter of choosing a method to generate our IDs – a hash based on the time of creation would be suitable, but luckily, C# has a built-in GUID class that can generate a guaranteed-unique string for us in a single line:

id = System.Guid.NewGuid().ToString();

To associate our ID with a particular object, we’ll need a global manager class – which is really nothing more than a singleton with a couple of containers for key lookup (Dictionaries, most likely, assuming we’re working in C#). Depending on any extra functionality you want from the manager, you can build some basic initialization/setup into an Awake function (updating your script execution order to ensure that the manager precedes the ID scripts themselves, if necessary). At any rate, you’ll probably want two main lookups – one mapping our custom ID to instance ID (for checking duplicates, as noted below) and one mapping ID directly to GameObject (for finding objects by ID while loading a file).
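
Here’s a stripped-down sketch of what that manager might look like – the class and method names are illustrative rather than our exact implementation:

using System.Collections.Generic;
using UnityEngine;

[ExecuteInEditMode]
public class IDManager : MonoBehaviour
{
    public static IDManager Instance { get; private set; }

    //Maps our custom ID to the Unity instance ID (for duplicate detection)...
    private Dictionary<string, int> idToInstanceID = new Dictionary<string, int>();
    //...and to the GameObject itself (for lookups while loading a file).
    private Dictionary<string, GameObject> idToObject = new Dictionary<string, GameObject>();

    private void Awake()
    {
        Instance = this;
    }

    public void Register(string id, GameObject obj)
    {
        idToInstanceID[id] = obj.GetInstanceID();
        idToObject[id] = obj;
    }

    public void Deregister(string id)
    {
        idToInstanceID.Remove(id);
        idToObject.Remove(id);
    }

    //True if this ID is already "on file" for a *different* instance (i.e. a duplicate/prefab copy).
    public bool IsDuplicate(string id, GameObject obj)
    {
        int existing;
        return idToInstanceID.TryGetValue(id, out existing) && existing != obj.GetInstanceID();
    }

    public GameObject Find(string id)
    {
        GameObject obj;
        return idToObject.TryGetValue(id, out obj) ? obj : null;
    }
}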

For our ID script, we need to generate the ID automatically on object creation and have it persist between runs. We’ll need to keep a few key things in mind when writing our script:

  1. Store the id in a public string variable, so that it will be saved with the object and persist between launches when Unity saves and reloads our scene. (To display the value as read-only in the inspector, we can use a custom property drawer – this should be very straightforward for anyone familiar with Unity property drawers, and the post I’ve linked at the end has a helpful tutorial for the uninitiated.)
  2. Generate the id in Awake() if it is null or invalid*, and register it with our manager.
  3. De-register the key from the manager in OnDestroy(), to prevent us from accidentally finding a null reference if we iterate through our lookup list later on.

*We can check if a key is “valid” to avoid duplicates by seeing if the object’s instance ID matches the one the manager has “on file” for our custom ID. If it doesn’t, we generate a new custom ID. Why does this happen? If we instantiate an ID’d prefab or duplicate a GameObject with an ID, Unity will also copy our once-unique ID, and there’s no way to tell in Awake() (or anywhere else) whether or not this has happened. However, our manager will have the ID on file if we’ve set it up properly, so we can simply check in the lookup to see whether or not our key is an impostor, and remedy the situation if this is the case.

Lastly, make sure to tag the ID and manager scripts with [ExecuteInEditMode] to make sure our utility will actually run while we are editing our scene.
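
Put together, the ID component itself ends up being quite small. Here’s a minimal sketch, assuming the manager exposes the Register/Deregister/IsDuplicate methods from the earlier sketch:

using UnityEngine;

[ExecuteInEditMode]
public class OGID : MonoBehaviour
{
    //Public so Unity serializes it with the scene and it persists between launches.
    //(A custom property drawer can display it as read-only in the Inspector.)
    public string id;

    private void Awake()
    {
        if (IDManager.Instance == null)
            return;

        //Generate a fresh ID if we don't have one, or if ours was copied
        //along with a duplicated or instantiated object.
        if (string.IsNullOrEmpty(id) || IDManager.Instance.IsDuplicate(id, gameObject))
            id = System.Guid.NewGuid().ToString();

        IDManager.Instance.Register(id, gameObject);
    }

    private void OnDestroy()
    {
        if (IDManager.Instance != null)
            IDManager.Instance.Deregister(id);
    }
}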

After the manager and ID scripts are set up properly, we can simply add an ID component to any object we want to be able to identify for loading data later on – to reduce bloat, I suggest only adding the script to objects you’ll actually need to restore – no need to tag every static piece of décor with a GUID.

id-system.png

Here, I’ve used the OGID script to tag the object with a unique ID. The Saveable script is another little utility written purely for data storage – internally, it builds a structure (tagged with the [System.Serializable] attribute) to store the data needed to restore the object’s state later on – for example, its position and orientation.
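
That structure doesn’t need to be anything fancy – something along the lines of the sketch below works (the class name is illustrative; note that Unity’s Vector3 and Quaternion aren’t serializable by the BinaryFormatter, hence the raw floats):

using UnityEngine;

//Plain data container for an object's transformation, tagged so the formatter can serialize it.
[System.Serializable]
public class TransformData
{
    public string id;
    public float posX, posY, posZ;
    public float rotX, rotY, rotZ, rotW;

    public void CopyFrom(string objectID, Transform t)
    {
        id = objectID;
        posX = t.position.x; posY = t.position.y; posZ = t.position.z;
        rotX = t.rotation.x; rotY = t.rotation.y; rotZ = t.rotation.z; rotW = t.rotation.w;
    }

    public void ApplyTo(Transform t)
    {
        t.position = new Vector3(posX, posY, posZ);
        t.rotation = new Quaternion(rotX, rotY, rotZ, rotW);
    }
}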

The rest of the process for creating a saved-game system is fairly straightforward – using built-in path variables and C#’s directory utilities, getting a basic file system manager off the ground is a breeze. I’m a fan of using C#’s DateTime utility for generating save IDs, as it’s a great way to create a human-readable and unique (at least, unique to the system the app is running on, assuming no time-travel shenanigans) tag for identifying saved games.
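
For example, something like this produces a filename-safe, chronologically sortable tag (the exact format string is just a matter of taste):

//e.g. "2018-03-14_21-42-07" - human-readable, sorts chronologically, and is safe in a filename.
string saveID = System.DateTime.Now.ToString("yyyy-MM-dd_HH-mm-ss");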

Saving and loading the data itself is actually the simplest part of the process – create a few custom classes for storing an object’s transformation and any other data you’d like to save, write functions to copy the data to and from GameObjects, and stick them on any ID’d objects you’d like to save. From here, you could choose to write a custom file format for your data – or tag everything with the [System.Serializable] attribute, make another serializable class with arrays of your custom containers, copy over references to your data containers in your save function, and then use just four lines to scribble the whole thing to a file:

System.IO.Stream s = new System.IO.FileStream(filepath, System.IO.FileMode.Create, System.IO.FileAccess.Write);
System.Runtime.Serialization.IFormatter formatter = new System.Runtime.Serialization.Formatters.Binary.BinaryFormatter();
formatter.Serialize(s, saveGame);
s.Close();
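
(For context, the saveGame object above is just that outer container – something like this minimal sketch, reusing the hypothetical TransformData class from earlier:)

[System.Serializable]
public class SaveGame
{
    public string saveID;
    public TransformData[] objects;
    //...arrays of any other custom containers you want to persist...
}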

You can choose to use a binary or XML serializer to encode your data. While I’m by no means an expert in making the choice, here’s a quick rundown of my understanding of the pros/cons on either side:

Binary – harder to read/edit as a text file (not great for development but might be a plus to avoid player trickery), slightly faster to read/write, slightly more prone to corruption.

XML – easier to read/edit as a text file, slightly slower to read/write, less prone to corruption/easier to “fix” a savefile if something goes horribly wrong.

I’ve gone with binary for now, because there’s something oddly satisfying about opening your savefile to see it come out as a string of gobbledygook that magically turns back into a game state upon loading:

System.IO.Stream s = new System.IO.FileStream(filepath, System.IO.FileMode.Open, System.IO.FileAccess.Read);
System.Runtime.Serialization.IFormatter formatter = new System.Runtime.Serialization.Formatters.Binary.BinaryFormatter();
SaveGame saveGame = (SaveGame) formatter.Deserialize(s);
s.Close();

After you’ve deserialized your savefile, you can call your own custom routines to copy data back from your containers to your GameObjects (using your ID lookup to reference the appropriate object as a matter of course) – presto, you’ve successfully loaded your game! Assuming you build your file by iterating through ID’d objects, new objects will add to your savefiles automatically, and you’ll only need to go back to your code if you need to save additional data types or you want to tinker with file management.
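
In a rough sketch (again reusing the hypothetical containers and manager from earlier), that restore step is just a loop:

//Walk the deserialized containers and push the saved data back onto the live objects.
foreach (TransformData data in saveGame.objects)
{
    GameObject obj = IDManager.Instance.Find(data.id);

    if (obj != null)
        data.ApplyTo(obj.transform);
}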

Save Menu

And with a bit of finagling with Unity’s UI Canvas, a custom screenshot utility, and some layout tweaks, you can create a nifty little save/load menu that decodes your savefile’s name and/or custom metadata (I use a hash of player ID and timestamp) to display some at-a-glance info for the player’s saved files.

And that’s just about it for some basic, but fairly robust, persistence in Unity.

If you’re interested in developing your own system for saved games using Unity, I highly recommend these pages as follow-up reads:

That’s it for today, thanks for reading, and feel free to reach out if you have any questions.