The Great Disappearing Act: Making of a Possession Effect

A few weeks ago, we found ourselves crunching towards an expo deadline, prioritizing various polish items and gameplay tweaks. Perhaps our largest chunk of work centred around visual effects – there were quite a few so-called “delighters” that we wanted to add in, and we had little more than a week to put the finishing touches on our demo. But there was one effect in particular that we wanted to implement, and we started out with absolutely no idea of how to handle it – trying to animate a “possession” effect for Spirit. We wanted to give the impression that he dissolved into ghostly energy, which we could then animate on a curve to “enter” different objects. But how could we make a mesh appear as if it was disintegrating into energy, or gradually breaking apart into the aether?

Our artist, Josh, pulled up a few effects from different games that were similar to what we envisioned, giving us a point of reference for what we wanted to achieve:

DissolveExampleComposite.png

Top: Simple but functional transformation of Mario into coloured particles in Super Mario Sunshine (source), Bottom: Beautiful and envy-inspiring dematerialization of Link in The Legend of Zelda: Breath of the Wild (source).

A bit more digging online revealed that the effect we were looking for was probably based on a dissolve shader, which we could combine with a particle system to create that suave torn-to-pieces-by-supernatural-forces look. The particle system would be the easy part, tech-wise, and so our big challenge was tackling the dissolve shader.

We wanted something that was flashy, customizable, and portable, so that we could use it on different objects – a custom Unity surface shader with support for fancy materialization and dematerialization effects. The finished product will let us create something like this:

DemoGIF.gif

Here’s a quick breakdown of the steps we’ll take to create this effect:

  • Use a grayscale noise texture to fade mesh alpha based on an interpolation factor.
  • Use model-space fragment position to control dissolution based on a specified direction vector.
  • Combine texture- and geometry-based alpha/clipping control to create a hybrid dissolve effect.
  • Add in a glow effect by “predicting” the next areas to dissolve and adjusting model emission accordingly.

The great thing about these features is that they can be easily configured to work in tandem with one another, without interfering with any other shader features you might want to support, such as specular/normal/emission mapping, rimlighting, and so on. For simplicity’s sake, let’s assume we’re starting with Unity’s standard surface shader template, and a humble cube destined for greatness. The first thing you’ll want to do, assuming you want to support a gradual alpha fade, is adjust your shader tags and #pragma declaration accordingly:
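In ShaderLab terms, the adjustments look something like this (sketched against the standard template – adapt the tag and pragma lines to whatever your shader already declares):

```hlsl
// In the SubShader block: render in the transparent queue so the
// gradual alpha fade composites correctly with the rest of the scene.
Tags { "Queue" = "Transparent" "RenderType" = "Transparent" }

// In the CGPROGRAM block: the alpha:fade option tells the surface
// shader to use traditional alpha blending for its output.
#pragma surface surf Standard alpha:fade
```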

If you’d prefer something that eats away at the mesh without fading the alpha gradually, feel free to skip this step. However, when writing to the output of your fragment routine, just make sure to use clip() to cull any dissolved fragments, rather than setting the output alpha value directly (as I’ll be doing here).

The first item on the agenda is to control our dissolution based on a noise texture. This will let us create different effects reminiscent of burning, cracking, slicing, and so on. Here, I’ve used Photoshop’s clouds and difference clouds filters to create some high-contrast Perlin-type noise:

MagmaNoiseBright.png

For our object to fade away based on this pattern, we’ll just add it as a texture map to our shader, along with a floating-point parameter, _DissolveScale​, on the range of [0, 1] to control the progression of the effect. For convenience, I’ve set zero to mean “fully intact” and one to mean “completely dematerialized”, so that as we move the slider from left to right in the Inspector, the object will gradually disappear.
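In ShaderLab, the new entries might look like the following (the property name _DissolveTex is my own choice – match it to whatever you declare alongside your other shader variables):

```hlsl
Properties
{
    // ...existing properties (albedo texture, tint, and so on)...
    _DissolveTex ("Dissolution Map", 2D) = "white" {}
    // 0 = fully intact, 1 = completely dematerialized.
    _DissolveScale ("Dissolve Progression", Range(0.0, 1.0)) = 0.0
}
```

These need corresponding `sampler2D _DissolveTex;` and `float _DissolveScale;` declarations in the CGPROGRAM block.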

If we think of the texture as a map to control our object’s dissolution, we want areas of different values (light/dark) to dissolve at different times. Let’s say that we want the black/dark parts of our texture to dissolve first, giving the appearance that the mesh cracks into pieces which then fade away. To accomplish this, for each fragment, we’ll add the luminance value of the dissolution map to our interpolation factor and use the result as our output alpha value:
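As a sketch, the surface routine might read as follows (I'm sampling the dissolve map with the main texture's UVs and using its red channel as luminance, since the map is grayscale; _DissolveTex is my name for the noise map):

```hlsl
void surf (Input IN, inout SurfaceOutputStandard o)
{
    fixed4 c = tex2D(_MainTex, IN.uv_MainTex) * _Color;
    o.Albedo = c.rgb;

    // Remap _DissolveScale from [0, 1] to [1, -1], then offset the
    // result by the map's luminance: dark areas hit zero alpha first.
    float dBase = 1.0 - _DissolveScale * 2.0;
    float dTexRead = tex2D(_DissolveTex, IN.uv_MainTex).r + dBase;
    o.Alpha = saturate(dTexRead) * c.a;
}
```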

Note that we’ve converted the interpolation factor to the space of [-1, 1] for this operation – don’t worry if this doesn’t make sense at first. All we’ve done is effectively ensure that our global alpha value will be 1 (fully opaque) at the very start of the effect, and 0 (fully transparent) at the very end. (If you happen to be unfamiliar with this sort of operation, it’s a little trick commonly called range remapping or range conversion, and it’s useful for all sorts of things).
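If you'd like to see the trick in isolation, here's the general form in a few lines of Python (a standalone illustration, not part of the shader):

```python
def remap(value, in_min, in_max, out_min, out_max):
    """Linearly remap value from [in_min, in_max] to [out_min, out_max]."""
    t = (value - in_min) / (in_max - in_min)
    return out_min + t * (out_max - out_min)

# Our interpolation factor: 0 (intact) maps to a base alpha of 1,
# and 1 (dematerialized) maps to a base alpha of -1.
print(remap(0.0, 0.0, 1.0, 1.0, -1.0))  # 1.0
print(remap(0.5, 0.0, 1.0, 1.0, -1.0))  # 0.0
print(remap(1.0, 0.0, 1.0, 1.0, -1.0))  # -1.0
```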

We’re left with an effect that looks like this – not too shabby for a single noise texture and a few lines of code:

TextureGIF.gif

The next order of business is controlling this effect based on geometry – what if we want to dissolve the object from top to bottom, for example? There are two straightforward ways we might accomplish this. If you’re looking to create a particularly complicated progression (such as dissolving a character’s hands, bow tie, and eyes before the rest of them) – you might just want to create your texture with this in mind, using your object’s UVs as a guide and hand-painting a dissolve texture to your liking (remember, with the code above, darker dissolves first).

A more interesting challenge is to control the effect based on a direction vector. You’ll need three new parameters for this:

  • A Vector for the starting point of the effect in model space.
  • A Vector for the ending point of the effect in model space.
  • A floating-point control representing the width of the “gradient” or “edge” along which the object is dissolving – I call this the “band size”.

You can visualize the effect as a gradient sweeping across the object, controlling the alpha and “wiping” the mesh from your starting point to your endpoint as it vanishes. Achieving this is pretty simple, but you’ll first want to add a vertex routine to your shader program, since you’ll be needing some geometry data that isn’t carried through to the fragment function by default. Outside of any of our shader functions, we’ll calculate a few global values based on our new parameters:
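For instance (the parameter names _DissolveStart, _DissolveEnd, and _DissolveBand are my own; depending on your target platform, you may prefer to fold the derived values into the vertex routine itself):

```hlsl
// Variables matching the three new shader properties.
float3 _DissolveStart;  // Starting point of the effect, model space.
float3 _DissolveEnd;    // Ending point of the effect, model space.
float _DissolveBand;    // Width of the dissolving edge ("band size").

// Derived values shared by the vertex routine.
float3 dDir = normalize(_DissolveEnd - _DissolveStart);
float dLength = distance(_DissolveStart, _DissolveEnd);
```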

Then, we’ll write our vertex routine to calculate an “alpha value” for the current vertex based on the effect progression. Note that we’ve modified the shader’s fragment Input struct to have an additional parameter for this (dGeometry) – we’ll let Unity handle the interpolation for each individual fragment to help reduce artifacts. Here’s what the complete calculation looks like:
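A sketch of the routine is below; the names reuse the direction-vector parameters described above, and the exact band math is one choice among many:

```hlsl
struct Input
{
    float2 uv_MainTex;
    // Per-vertex dissolve value, interpolated across each triangle.
    float dGeometry;
};

void vert (inout appdata_full v, out Input o)
{
    UNITY_INITIALIZE_OUTPUT(Input, o);

    // Project this vertex onto the dissolve axis to find how far along
    // the effect it sits, in the range [0, dLength].
    float dist = dot(v.vertex.xyz - _DissolveStart, dDir);

    // Sweep the band across the full length plus one band width, so the
    // edge clears the mesh entirely. Fragments behind the band come out
    // negative (dissolved); fragments ahead of it come out above one.
    float dPoint = _DissolveScale * (dLength + _DissolveBand);
    o.dGeometry = (dist - (dPoint - _DissolveBand)) / _DissolveBand;
}
```

Remember to register the routine by adding `vertex:vert` to your `#pragma surface` line.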

Then, in our fragment shader, we simply use the interpolated dGeometry value (clamped to the range of [0, 1]) to set our alpha and we’re left with an effect that progresses like this:

GeometryGIF.gif

Combining this with our texture-based dissolve to create a hybrid effect is dead simple – just add the raw value of dGeometry to the luminance of the noise texture, clamp to [0, 1] as per usual, and use that as your alpha value:
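In code, that amounts to a small change in the alpha calculation (where _DissolveTex is my name for the noise map from earlier):

```hlsl
// The geometry sweep and the noise map each contribute to the final
// alpha; their sum produces the hybrid dissolve edge.
float dTexRead = tex2D(_DissolveTex, IN.uv_MainTex).r;
o.Alpha = saturate(IN.dGeometry + dTexRead) * _Color.a;
```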

CombinedGIF.gif

Our last task is adding in some emissivity, so that the edges of pieces about to dissolve can glow before fading away. There are quite a few ways to handle this, and the one that works best for you will vary depending on the approach you’ve taken. You can use offset versions of the interpolation parameter to calculate the glow strength, you can shift a “band” of emission down your mesh as it fades away, you can apply thresholding logic to your final alpha value to have a fragment “emit” at low values before clipping itself from view, and so on.

For our purposes here, I’ve chosen an approach which supports the “hybrid” texture/geometry dissolve fairly intuitively, by defining the size of the glow region in accordance with the “band size” specified for the rest of the effect. I use this factor to offset the alpha value calculated previously, using this shifted value to control the glow strength. I’ve also included a couple of additional parameters which control the sharpness of the glow’s edge (an intensity multiplier) and create a gradient to calculate the glow’s colour (start/end colours, and a parameter to shift the boundary between them):
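Here's a rough sketch of that calculation (every parameter name here – _GlowSize, _GlowIntensity, _GlowColFirst, _GlowColSecond, _GlowColShift – is a stand-in for whatever you expose in your own shader):

```hlsl
// dFinal: the combined (pre-clamp) dissolve value used for alpha.
float dFinal = IN.dGeometry + tex2D(_DissolveTex, IN.uv_MainTex).r;

// Fragments just ahead of the dissolving edge land in [0, 1] here –
// these are the "next to go" and receive the strongest glow.
float dPredict = (_GlowSize - dFinal) / _GlowSize;
float dGlow = saturate(dPredict) * _GlowIntensity;

// Blend between two colours across the glow region for a gradient.
fixed4 glowCol = lerp(_GlowColFirst, _GlowColSecond,
                      saturate(dPredict - _GlowColShift));
o.Emission = glowCol.rgb * dGlow;
```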

By outputting the computed colour as the emissive colour of the fragment (o.Emission), the mesh will now glow in anticipation of its disappearance. (In the following examples, the albedo tint is adjusted according to the glow factor to boost the colour even more.) You can play with different noise textures, glow colours, and effect parameters to create quite a few different dematerialization effects:

CompositeGIF.gif

Top: “Magma” effect using Perlin-type noise, high-intensity red-yellow glow, and top-to-bottom effect direction. Middle: “Boules” effect using pin-light radial gradients, purple glow, and bottom-to-top effect direction. Bottom: “Glitch” effect using offset barcode pattern, cyan-green glow, and corner-to-corner effect direction.

For reference, here’s the final property list and Inspector panel for the shader used to create the above effects:

InspectorPanel.png

Finally, here’s a look at the effect in action on our little poltergeist fellow, synchronized with a particle system which we’ve animated on a curve to give that extra little bit of spooky panache:

PossessGIF.gif

And voilà, now we’ve created a nice, customizable shader perfect for teleportation, burning, dissolving, or any other bit of dematerialization magic.

A User-Centred Approach to Control Design

It’s a familiar image – the furious slamming of fists into a laggy keyboard; the innocent mouse knocked aggressively from its perch; the once-glorious gamepad, now lying cracked on the floor beneath a film of cheese dust. Who among us has never resorted to blaming the controller for our own failures? And yet, in some cases, perhaps we are justified in our rage against the machine, for a poor control implementation can lead to any manner of misclicks, misunderstandings, and missed opportunities. Controls are a fundamental aspect of any game’s design, serving as a key factor in determining the game’s playability. Simply put, a game’s controls facilitate each and every interaction available to the player. This holds true regardless of input device – whether mouse and keyboard, gamepad, motion sensors, or brain-computer interface. No matter the input chosen, developers are tasked with designing a set of controls that is logical, easy to learn, fluid, responsive, and as unobtrusive to gameplay as possible. Ultimately, this process boils down to the creation of an interaction schema that effectively maps real-world actions, such as a button press or a wave of the arm, to in-game actions, like jumping or swinging a sword.

All of this rhetoric prompts us to ask: how might we define good or great interaction design for game controls? We might assess a control scheme as effective if it is usable and contributes to a good user experience – that is to say, it enhances a player’s experience, rather than detracting from it. However, accurately measuring usability and user experience necessitates user testing, which presumes that we’ve already implemented our design. Can we evaluate some aspects of our design a priori, before development begins, and thus improve the quality of our initial efforts? The answer, thankfully, is yes – through the application of user-centred design, or UCD. UCD is a process oft-applied in the realm of productivity and web applications, though it is increasingly applied to the development of interactive entertainment, including games. In a nutshell, UCD focuses on understanding user needs, developing system requirements based on those needs, prototyping alternative designs, and finally evaluating the effectiveness of those designs. In this post, we’ll focus on how we can leverage the first two phases of UCD methodology – understanding needs and formulating requirements – to inform our designs pre-implementation.

Case Study: Designing Controls for Spirit

Our team’s interest in UCD is motivated by our current project, a 3D puzzle-platformer in a quasi-open world. Controls are of particular importance in the platforming genre, where players are frequently tasked with executing a precisely timed sequence of movement, jumps, and other abilities. Poorly-mapped or unresponsive controls spell disaster for any platforming game, as they maximize player frustration, or worse, make certain challenges impossible to complete. Our need for great controller design is compounded by the nature of our project in particular; since our core mechanic allows players to control a number of different objects, we may find ourselves designing a dozen control variants for any given input device. Furthermore, many of our puzzles are physics-based, demanding that our controls seem physically realistic while maintaining a good game feel. We’ve chosen to apply UCD in achieving these objectives to ensure that our players’ needs form the basis of our interaction design.

Spoorit-Wave.png

The first step in our design process is the establishment of our target user population, and an understanding of player needs based on their demographics, preferences, and past experience. Next, we formulate design requirements based on lessons learned from existing titles, expected use context, and player behaviours. Following this, we create our initial designs for three different player-controlled entities and refine them through early internal testing. Finally, we plan our next steps in usertesting and iteration to validate and refine our designs.

Understanding Players & Establishing Design Requirements

Our target audience for Spirit comprises players between the ages of 18 and 34 with at least a moderate amount of gaming experience. The ideal user has fairly extensive experience with platforming games, enjoys exploration and puzzle-solving, and is willing to devote an hour or more to individual play sessions.

Based on the needs of our target players, we can categorize the requirements of our design into a few key groups:

Functional Requirements – What the controls should do.

For each set of controls, we need to support core game interactions – primarily, we are concerned with movement, jumping, interacting with objects, and manipulating the camera. Each interaction should be mapped to its own region on the appropriate input device, and real-world manipulation of the input should translate sensibly into in-game action.

Non-Functional Requirements – What the controls should be.

We’re developing Spirit for PC, so we’d like to offer both keyboard and gamepad support for players. Right now, we’re focused on interaction design for both mouse/keyboard and Xbox controllers. Since players will find themselves in situations where they might need to rapidly time jumps, make precise changes in direction, or switch between objects, responsiveness and fluidity should feature prominently in the eventual implementation.

Data Requirements – What the controls should know.

Our controls need to respond differently depending on game scenarios – connecting with in-game feedback like contextual hints, restricting input when appropriate, and even responding to in-game physics. Thus, our control system should interface with game data to pull information regarding the camera, game state, position of interactive objects, and so on.

Context Requirements – How the controls will be used.

Since players will want to concentrate on what’s happening on-screen, we need to ensure that they won’t feel the need to glance down at the keyboard or gamepad to be sure of their next input. We expect that players will have prior gaming experience, and so our design can borrow from established conventions in the genre to assist in this effort.

Usability and Experience Requirements – What the user should perceive.

We want our controls to feel unobtrusive and fluid, minimizing the barrier that users perceive between their actions and in-game results. Controls should be easy to learn, and easy to use – we want challenge to come from puzzle-solving and platforming, not wrangling a gamepad. Lastly, we want players to feel good about mastering the abilities of any given object that they control, and so our controls should integrate with our animation and gameplay systems to create the most fluid experience possible.

Following the establishment of our design requirements, we examined (and played!) a number of different games. Since we’re concerned with designing controls for several different objects, our research extended beyond the platforming genre to include games like flight simulators and shooters. For the purposes of our case study, we’ll look at the first few entities that we’ve implemented in our gameplay prototype – our main character, a beach ball, a marble, and a paper plane. Each design follows from a core set of universal attributes that we’ve developed based on estimated player expectations, with refinements to individual objects focused on improving game feel and maximizing usability.

Control Designs

Universal attributes. At its core, Spirit is a platformer, and so we looked at a lot of different platforming games to get a feel for the sort of controls players would expect – from classics like Super Mario 64, Banjo-Kazooie, and Chibi-Robo to modern incarnations of the genre, like Yooka-Laylee. We also played quite a bit of Ori and the Blind Forest – though a 2D platformer, the controls in Ori are outstandingly responsive and fluid, with snappy animations that respond near-instantaneously to most inputs.

ScreenshotCollection.png

Since players will be in a 3rd-person, 3D environment regardless of the object they’re controlling, we also looked at games like The Legend of Zelda: Breath of the Wild to learn from some truly great 3rd-person control schemes. Ultimately, we decided on a few standards from which we could build and refine each individual entity’s control scheme:

Locomotion. No need to reinvent the wheel on mapping basic movement – we’re going to keep primary directional movement on WASD for keyboard users and LS for gamepad players. We’ll map jumping to spacebar on keyboard, and A on gamepad. For each individual control variant, we’ll use the physical qualities of the entity that the player controls to determine how movement controls should behave – including acceleration, directional changes, and any animation delays.

Camera Movement. Following the aiming conventions of first- and third-person games alike, we’ll map this movement to RS and mouse movement. For keyboard and mouse users, we’ll allow toggling of locked and free camera modes with the tab key.

Interactions. We’ll map primary and secondary actions, like possessing objects and interacting with NPCs, to Q/E on keyboard and Y/X on gamepad. We’ve opted for a primarily one-button scheme (using E and X respectively), which will perform the correct action based on the interaction available in-world. To accomplish this, we’ll check for interactive areas within the player’s FOV and display a prompt in-world to highlight the available interaction.

Spoorit-Prompt
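As a rough sketch of how the check behind that prompt might work in Unity (the Interactive component, interactRange, interactFOV, and interactMask are placeholders, not our final implementation):

```csharp
// Finds the nearest interactive object within the player's field of
// view; E (or X on gamepad) then triggers whatever action it offers.
Interactive FindAvailableInteraction()
{
    Collider[] hits = Physics.OverlapSphere(transform.position,
                                            interactRange, interactMask);
    Interactive best = null;
    float bestDot = Mathf.Cos(interactFOV * 0.5f * Mathf.Deg2Rad);

    foreach (Collider hit in hits)
    {
        Vector3 toHit = (hit.transform.position - transform.position).normalized;
        float dot = Vector3.Dot(transform.forward, toHit);

        // Keep the candidate most directly in front of the player.
        if (dot >= bestDot)
        {
            best = hit.GetComponent<Interactive>();
            bestDot = dot;
        }
    }
    return best;
}
```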

Each of the control variants below is based on these core universal attributes, with variations based on the physical attributes of the entity in question, and any expectations players may have from previous gaming experiences.

Third-person humanoid. Players will spend most of their time as Spirit himself, a tiny ghost with roughly humanoid features. We want motion to feel snappy and responsive, so for this design, we’ll map the movement axes directly to the player’s velocity, with a slight acceleration timed to match the character’s run animation. We’ll base this on a character controller that considers game feel first, and physics second – to give players a fluid experience that integrates well with animation. Following the standard of most platformers and third-person games, we’ll allow players to “turn on a dime” – turning animations are nice and cinematic, but may prove frustrating when players want tight controls above all else. The result in-prototype looks something like this:

Spoorit-Move

Round objects (physics-based). Two of our initial objects, the beach ball and marble, both follow a scheme inspired by the feel of locomotion in games like Katamari Damacy or Super Monkey Ball. In contrast to the main ghost controller, we’ll base this scheme almost entirely on physics, mapping movement axes to forces applied on the object, rather than instantly changing the object’s velocity. By configuring the amount of acceleration applied, we can create the feeling of a large, weighty rubber ball or a small marble with a tight turning radius. We’ll let the physics engine handle angular momentum, preserving it through jumps and collisions, to create an experience that feels more physically realistic. In past iterations, we experimented with more direct schemes that were less physics-based, as in the main character’s controls; the consensus from the majority of players was that they preferred and expected more physics-dependent behaviour for traditionally “inanimate” objects. The end result is a nice, responsive force-based controller that “fights back” if the object being controlled is particularly weighty:

Ball-Move

Flight (hybrid). Developing controls for a paper airplane was particularly interesting – though inanimate like the ball, the plane functions as more of a vehicle than a dead weight, and so we looked to spaceflight and flight simulators like X-Plane for inspiration. Stripping away the complexities of a bona fide flight sim, we can reduce the act of flight to a few primary controls – throttle (forward motion), yaw (turning), pitch, and roll.

Since throttle and yaw correspond to motion in the XZ plane, we can associate this with the “locomotion” controls for other objects – as such, we map throttle to the z-axis of movement (W/S on keyboard, or up/down on LS for gamepad) and yaw (which we’ll tie into roll for smoother animation) to the x-axis of movement (A/D on keyboard, left/right on LS). Pitch and roll are a bit more interesting – in keeping with the conventions of traditional flight controls, we’ll lock the camera to the direction the aircraft is travelling, and use the axes freed up from camera controls to modulate pitch (up/down with mouse or RS) and roll (which, coupled with yaw, is offered secondarily by using left/right with mouse or RS). The result is something that feels like a simple, zippy little flight simulator:

Plane-Move

Next Steps

Over the course of our work so far, communication among the dev team has proven crucial – our initial implementations have undergone many iterations to improve responsiveness, physical accuracy, and animation. However, we’re still very much at a prototype stage, and we’ll need to test our current variations with real players to validate and further improve our designs. Our next step will be finalizing control variants for a few more in-game objects, before proceeding to some early alpha testing with potential players. As part of our user testing efforts, we’ll be integrating techniques like gameplay recording, questionnaires, and semi-structured interviews to understand our players’ perspectives on controls and interactions in our game. In the meantime, we’ll be working on improving in-game feedback to help users learn available interactions more effectively, and designing some simple puzzles to create an in-game testing environment where users can focus most of their effort on evaluating the game’s controls.

Overall, the UCD approach has proven immensely helpful to our interaction design process, improving the quality of our initial designs and our efficiency as a team. Be sure to check back in soon for an update on our creative direction and level design as we move forward from our initial prototype!

Prototyping a Dynamic Camera System

Every player seems to have a different idea of the features that are most important to them in a game – depending on who you ask, that might be the level design, graphics, story, music, adequate inclusion of puffins, and so on. However, one key element dictates our perception of each and every one of these features, serving as the player’s window into the game world – the camera. A game’s camera is the oft-unsung hero (or hated villain) of the complete experience, almost solely responsible for defining the player’s perspective. Cameras need to consider everything from user input and avatar movement to physics constraints and cinematic intent. Most players may never notice a great camera system, but nearly every player will notice a terrible one.

Spirit has been particularly challenging in this regard, as we have a number of factors to consider in designing our camera system. The game world is relatively open, and puzzles are nonlinear, so strict designer-imposed controls are out of the question. We want users to be able to control and reset the camera freely, but integrate a degree of automation to prevent the need for constant manual adjustment. We also need to integrate the camera with our animation system, allowing for cutscenes and dynamic transitions. Finally, for want of a better phrase, we have a lot of stuff in our levels, so physics-based adaptation is a must. We’ve prototyped these features into a single dynamic system that looks something like this:

OverallCamera.gif

As a disclaimer, we’re still quite a ways from some of the amazing dynamic camera systems out there, but our current system has all the functionality we’ll need to move forward with refining the design. Here’s a look at how we’ve designed and developed our prototype camera system using Unity 2017:

Phase I: Basic Controls & Follow Camera

Our first step is creating a basic third-person camera that follows the player around while permitting them to adjust their view and look around. For this task, we designed a few basic constraints defining the valid operating space of the camera:

  • Minimum and maximum pitch angles.
  • Minimum and maximum distance from the player.
  • Incremental yaw around the player, with a reset that turns the camera to face the same direction as the player.

To start, we can calculate our default position using an offset vector based on the negative of our player’s forward vector, a default zoom distance, and a default pitch angle:

Transform pTarget = player.curObject.camTarget;
Quaternion offsetPitch = Quaternion.AngleAxis(pitchLevel, pTarget.right);
Vector3 offset = offsetPitch * -pTarget.forward * zoomLevel;
Vector3 targetPos = pTarget.position + offset;

From our default position, the camera’s target position changes if the player moves or if they manually adjust the camera’s angle or distance. The three key factors in this adjustment will be pitch, yaw, and zoom. We’ll treat pitch and zoom as a continuum, since we’re defining those qualities relative to the horizon and the player’s position respectively.

Yaw, on the other hand, is a bit trickier. The obvious answer here is to define “zero yaw” as facing the same direction as the player. However, in the interest of avoiding dizziness, vertigo, and the inevitable cavalcade of lawsuits that would follow, we don’t want the camera to turn instantly as the player moves. In fact, we’d like the movement input to appear relative to the camera: the player pushes right on the movement stick, and they appear to move right on the screen, and so on. Thus, we’ll define any yaw change as incremental, by rotating the camera around the player in the y-axis independent of the player’s current movement direction. This will let us more easily reference the camera’s forward vector in our movement logic to determine which way the player should go.
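For reference, converting raw movement input into a camera-relative direction can be sketched in a few lines (variable names here are illustrative):

```csharp
// Flatten the camera's forward vector into the XZ plane so that
// pushing "up" on the stick always moves the player away from
// the camera, regardless of the camera's pitch.
Vector3 camForward = cam.transform.forward;
camForward.y = 0.0f;
camForward.Normalize();
Vector3 camRight = Vector3.Cross(Vector3.up, camForward);

// input.x and input.y are the horizontal/vertical movement axes.
Vector3 moveDir = camRight * input.x + camForward * input.y;
```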

Obtaining the final target position involves a few basic transformation calculations:

  1. Calculate a base offset using the XZ components of the camera’s current forward vector.
  2. Normalize the offset and rotate it by two quaternions, one for pitch adjustment (using the camera’s right vector as a rotation axis), and one for yaw adjustment (using the world up vector).
  3. Multiply the resulting offset by the current zoom level.
  4. Calculate the camera’s position by adding the final offset to the player’s position.
  5. Rotate the camera to look at the player.

We’ll implement that as follows:

Transform pTarget = player.curObject.camTarget;
Transform rTarget = cam.transform;
Vector3 offset = -rTarget.forward;
offset.y = 0.0f;
offset.Normalize();
Quaternion offsetPitch = Quaternion.AngleAxis(pitchLevel, rTarget.right);
Quaternion offsetYaw = Quaternion.AngleAxis(yawAdjust, Vector3.up);
offset = offsetYaw * offsetPitch * offset;
offset *= zoomLevel;
Vector3 targetPos = pTarget.position + offset;

The resulting controls look something like this:

BasicControls

Phase II: Physics

Adding basic physics is actually far less painful than you might think, after you’ve implemented your core control scheme. A lot of the decisions in this respect, at least early on, will be a question of design rather than programming. While the details of this aspect will be dependent on your physics engine, here’s some guidelines we’ve used in developing our camera physics:

  • Use a spherical collider to maximize smoothness when colliding with objects and to reduce the chances that the camera will get “stuck”.
  • Set the collider radius slightly larger than the camera’s near clipping plane distance to avoid unwanted geometry clipping.
  • Ensure that you can toggle physics on the camera – disabling physics during camera reset and animation is generally a must to prevent unwanted side effects.

If you happen to be using Unity, a quick way to set up your camera physics is to tack on a sphere collider and a rigidbody, and use Rigidbody.MovePosition to update the camera’s target position, using a distance threshold to prevent clipping through thin walls and other geometry:

Vector3 posChange = targetPos - cam.transform.position;
posChange = Mathf.Clamp(posChange.magnitude, 0.0f,
                        maxSpeed * Time.fixedDeltaTime) * posChange.normalized;
rb.MovePosition(cam.transform.position + posChange);

(As a word of warning for Unity users – if you’re using this, or a variation, as your quick and dirty camera physics solution, be sure to set your camera rigidbody mass to zero and set its velocity to zero during every update – lest you be plagued with unwanted force interactions.)

Here’s a comparison between our initial camera and our physics-capable camera when confronted with a wall:

PhysicsLarge

Admittedly, this is a fairly basic implementation, but the result is suitably robust in many situations. The resultant camera has respectable behaviour when crammed into walls, floors, and most level geometry. We can improve this with some dynamic physics constraints and raycasting, but that’s a post left for another day.

Phase III: Animation

For our purposes, we have three main animation requirements for our camera:

  • Smooth transition to and from predefined cutscene animations.
  • Short transition sequences for certain mechanics.
  • Automatic camera reset.

We handle each of these slightly differently, though each updates by overriding the default physics and player-controlled camera mode. For cutscenes, we write our camera’s pre-cutscene transformation to the same format used for cutscene keyframes – which we then append to either end of the cutscene’s frame list. The result is a modified animation which transitions to and from the cutscene’s path without hiccuping between camera control modes:

Cutscene

Our transition sequences are used when a player jumps between different controllable objects. To accomplish this, we capture a keyframe of the camera’s transformation at the instant the player triggers the mechanic, and calculate the default position of the camera for the new object in a second keyframe. We then animate the camera between these frames before restoring control to the player:

Possession

Finally, we allow players to “reset” the camera at any time, smoothly swivelling back to the default pitch, yaw, and zoom for the player’s current world position and orientation. Here, we key the camera’s pitch, zoom, and absolute yaw relative to the player for both the current and default camera orientations. We interpolate the values simultaneously to yield a smooth swivel effect:
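A rough sketch of that simultaneous interpolation, assuming hypothetical start/default values keyed as described (this isn't our shipping code, just the idea):

```csharp
// Interpolate pitch, yaw, and zoom together with a single parameter t,
// using LerpAngle for the angular values so we always swivel the short
// way around.
float pitch = Mathf.LerpAngle(startPitch, defaultPitch, t);
float yaw   = Mathf.LerpAngle(startYaw, defaultYaw, t);
float zoom  = Mathf.Lerp(startZoom, defaultZoom, t);

// Rebuild the camera offset from the interpolated values, relative to
// the player's current position and orientation.
Quaternion rot = Quaternion.Euler(pitch, yaw, 0.0f) * player.transform.rotation;
cam.transform.position = player.transform.position + rot * (Vector3.back * zoom);
cam.transform.LookAt(player.transform.position);
```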

Reset

Phase IV: Extras

A nice additional feature is the inclusion of a damping system, which adds a light springlike quality to the way that the camera adjusts its position.

The obvious implementation here is to simply apply a damping or acceleration function to our camera’s transformation update. However, damping the camera’s target position alone will have the effect of making our yaw and pitch controls feel sluggish. Instead, we’ll effectively damp the scaling of our offset vector, preserving the snappiness of our view controls while giving the camera a sense of springiness as the player moves around. We can implement this by using Unity’s Vector3.SmoothDamp function (or by keeping track of the camera’s frame-to-frame velocity and applying acceleration manually):

Vector3 realTarget = Vector3.SmoothDamp(cam.transform.position, targetPos,
ref camVel, dampingTime, player.curObject.moveSpeed);
offset.Normalize();
offset *= (realTarget - pTarget.position).magnitude;
targetPos = pTarget.position + offset;
rb.MovePosition(targetPos);

The resultant camera behaves something like this (undamped on the left, and damped on the right – we can reduce the exaggeration of the effect by tweaking the damping time):

DampLarge

With that in place, we’ve got ourselves a simple, versatile camera system that can adapt to all of our basic in-game needs. We’ll be back soon with updates on art direction and our final gameplay prototype!

Shading Spirit

Over the past few weeks, our artist has been fleshing out the details of our final character model and starting on animations. And so, the time had come – no more placeholder shaders for the little guy. Time to sit down and take a crack at a custom surface shader for our poltergeist friend, and we already had a few key features in mind. Since the beginning, we’d envisioned something similar to the ghosts from Luigi’s Mansion: Dark Moon:

In particular, take a look at the little green guy – he was a big inspiration for Spirit’s character design and shows off some of what we’d like to achieve with our visual effects.

Let’s break down the visual features of the model:

  • Base colour (green)
  • Glowing eyes/mouth
  • Surface detail (bumps/pores)
  • Edge/rim lighting (white/green)
  • Exterior glow (green halo)

Additionally, we wanted Spirit to have adjustable partial alpha, so that he’d appear semi-transparent, for maximum spookiness. Most of what we want to accomplish (aside from the exterior halo, which we’ll add in post-processing) can be done with a standard surface shader in Unity. Here’s a list of the components we’ll need to integrate for each feature:

  • Depth pre-pass and alpha intensity
  • Albedo map and tint
  • Emission map
  • Normal map, intensity, and smoothness
  • Rimlighting map, intensity, and tint

And here are the texture maps we’ll be using to achieve the final effect:

TextureComposite.png

Unity’s built-in CG features make writing this shader pretty easy if you know which tools to use – for our final effect, we started from the standard surface shader template, which already includes our albedo map, base tint, and smoothness:

spirit-albedo

The albedo is there, but this hardly looks like a ghost – more like a plastic toy. Let’s add a bit of texture first with our normal map. Shader veterans will be happy to hear that Unity will do all of the tangent-space conversions for us, provided you’ve imported your texture with the “Normal Map” texture type selected. All you need to do is use the UnpackNormal function. If you’d like to adjust the intensity of your normal map, just employ one of the worst-kept secrets in computer graphics – multiply the result of your normal map read by a colour with your desired intensity factor plugged into the red and green channels, while leaving the blue channel at 1:

o.Normal = fixed3(_BumpIntensity, _BumpIntensity, 1.0)
* UnpackNormal(tex2D(_BumpMap, IN.uv_BumpMap));

So here’s what Spirit looks like with some detail, because real ghosts have pores (slightly enhanced for demonstration):

spirit-normals

The material texture is closer to correct now, but he still looks like a regular plastic object without any glow. Let’s start by adding our emission map, which will make his eyes glow, by simply setting the Emission property of our output structure to read from our emissive texture:

spirit-emission

While it’s a little too satanic for our purposes, we’re starting to see a promising glow – unfortunately, when combined with our full-force smooth albedo, which happens to be a bright base colour, the result is less “mischievous ghost” and more “irradiated cyclops”. Let’s fix this by toning down our albedo map with a darker tint colour and letting most of Spirit’s apparent colour come from our rimlight map, which is a toned-down modification of our base colour map. Rimlighting works by comparing the angle of the viewer’s eye (i.e., the camera view direction vector) with the surface of the object (i.e., our final surface normal). We want the edges of the object to glow, meaning that if the two vectors are perpendicular, the glow should be maximized. Therefore, we’ll use the dot product, clamp it, and subtract the result from 1 to give us our base rimlight intensity, which we can then modify using a custom intensity variable, tint colour, and our rimlight map. For our purposes, we’ll add the resulting colour to our emissive output:

half rimTerm = 1.0 - saturate(dot(normalize(IN.viewDir), o.Normal));
o.Emission = tex2D(_EmissiveTex, IN.uv_EmissiveTex)
+ _RimColor * tex2D(_RimTex, IN.uv_RimTex)
* smoothstep(0.0, 1.0, rimTerm * _RimIntensity);

After tweaking the colours to our liking, we’ve got something like this:

spirit-rimlight-normally

Finally, that looks quite a bit more like what we’re going for. Now, for one last feature – our partial alpha. The tricky part here is getting the depth buffer to behave properly. Here’s what happens if we add an alpha slider and flag the shader as transparent using tags:

spirit-demonic

Ouch. Not what we want at all – we want to be able to see the background through our little guy, but not his disembodied limbs – note the horrible clipping effect that’s happening as well. Resolving this is surprisingly easy – we add a pre-pass that fills the depth buffer while writing no colour (an empty colour mask), ensuring that our final render will only deal with the bits of the surface closest to the camera, disregarding all that back geometry. Here’s the code for our pre-pass, which is painfully short:

//First pass.
Pass
{
ZWrite On
ColorMask 0
}
//Set up our next pass.
Cull Back
ZWrite On
Blend SrcAlpha OneMinusSrcAlpha
//CGPROGRAM begins here...

Now let’s have a look at the little guy with some stuff behind him:

spirit-final

There we go! While we’ve got some texturing and post-processing tweaks we can make to improve the effect, there’s our surface shader, now with 100% fewer disembodied limbs. For reference, here’s our final list of properties in the surface shader, and the adjustment panel:

Properties
{
_Color ("Color", Color) = (1,1,1,1)
_Alpha ("Base Alpha", Range(0,1)) = 1.0
_MainTex ("Albedo (RGB)", 2D) = "white" {}
_Glossiness ("Smoothness", Range(0,1)) = 0.5
_BumpMap("Normal Map", 2D) = "bump" {}
_BumpIntensity("Normal Intensity", Float) = 1.0
_EmissiveTex("Emission Map", 2D) = "black" {}
_RimColor("Rimlight Color", Color) = (1,1,1,1)
_RimTex("Rimlight Texture", 2D) = "white" {}
_RimIntensity("Rimlight Intensity", Range(0.0, 2.0)) = 0.0
}

shader-panel

Finally, here’s a family portrait with our old model on the left, with standard shading, and our new and improved shaded model on the right:

family-portrait-spoorits

And there we have it – our little friend is ready to wreak havoc in style. We’ll be back soon with more updates on Spirit!

Spirit Development Update: July 2017

Now that we’re rounding out our third official month of development, we’d like to take some time to review our progress and share it with the community. It’s been a busy season for us, with lots of business meetings and new opportunities. We’re excited to announce that we are working with Northumberland CDFC to help fund our development efforts and that we will be participating in the UOIT business incubator throughout the year!

Regarding Spirit, we’ve finished much of the game’s core prototype functionality, having implemented alpha versions of our core mechanics, user interface, input system, save system, and application management. While we’ve got a long way to go in refining our gameplay and fleshing out our level design, it’s been quite rewarding to see the first few pieces come together. In this post, we’ll reflect on everything we’ve done so far, and how we plan to build on our existing foundation for Spirit in the coming months.

Gameplay

Navigation & Camera

Naturally, our first step in development after setting up our basic input & state management (see below) was the integration of basic player navigation. Our case is a bit unusual in this regard, since players will be controlling a lot of different objects, so a one-size-fits-all solution just doesn’t work for us. We’ve set up a system that lets us define movement controllers for different objects with varying degrees of deviation from standard rigidbody-based or character controller-based motion, which we will expand as we add new player-controlled objects, integrate animations, and improve the feel of our character movement.

Movement.gif

We’ve also set up a basic camera system allowing for locked and free-form camera controls, which supports a couple of different modes of operation depending on the object the player is controlling. It’s currently quite similar to the camera in our initial prototype, with some improvements to interpolation and adjustment behaviour. We’ve also developed a simple cutscene system built from our path editor utility, which has allowed us to start thinking about cinematic aspects of the game. Our next goals with the camera will be the integration of some basic physics and location-based constraints to improve gameplay feel and make it easier for the camera to adapt to different level geometries.

Camera.gif

Possession & Summoning

Possession is our core mechanic, and so it will be something that is in a constant state of expansion, refinement, and testing throughout the development process. Right now, we have a few different objects in our prototype for players to control (a large, bouncy ball, a marble, and a paper airplane), in addition to Spirit himself. Objects control quite differently depending on their physical properties – shape, size, weight, air resistance, and so on. We’re using three primary controllers at the moment for our current set of objects – a character controller-based model for Spirit, a controller we’ve designed specifically for flight, and a controller for rolling objects (the latter two are both heavily physics-based).

 

Additionally, we’re integrating our post-processing system with possession to give each object a unique aesthetic when players are “inside” the object, which we’ll be expanding on in later updates. Right now, we’re experimenting with a few different effects related to image warping and colour distortion. No psychedelics have been involved in the process, we promise.

Rescue & Collection

Players’ primary objective in Spirit is to rescue their ghostly cohorts from a devious team of paranormal exterminators, who’ve trapped them for later disposal. Spirit, who managed to evade the exterminators’ dastardly traps, will not stand by and allow innocent poltergeists to suffer in captivity. After all, a little innocent haunting never hurt anyone!

We’re going to be designing a number of different friends for Spirit to rescue, which our artist Josh will be bringing to life shortly (we’re acquiring supplies for the ritual). In the meantime, we’ve integrated the mechanic with a host of adorable magenta ghost clones, which aren’t terrifying at all, thanks to their giant yellow eyes. Ever watching. Staring. Judging.

 

Players can also collect little bits of concept art, tutorial images, and assorted bits of photographs and the like throughout their adventure, which they’ll be able to visit in a little journal menu, hosting their collection and detailing their current objectives.

Interaction & Dialogue

Once we’ve fleshed out our story and side characters a bit more, we’ll be integrating a fair bit of narrative, sassy conversation, and general tomfoolery to complement the game world. Right now, we’ve prototyped our system for interaction and conversation with a couple of talkative books. We’ve integrated this system with our collection mechanic, so that players can “take” things from NPCs via interaction, and we’ll be adding some basic fetch quests, riddles, and so on in the future.

Dialogue.gif

Abilities

A new feature we’ve been working on is an ability/talent tree similar to what you might find in an RPG perk system (though far less complicated!) or a game like Ori and the Blind Forest. At the moment, we’ve just finished implementing a skeleton for defining abilities, acquiring perk points, and spending those points to acquire and use abilities. We’ll be working on designing and implementing unique talents over the next few months.

AbilityUI.png

Application Management & State Saving

I like to have the application back-end up and running before taking on almost anything else, so that we can switch between scenes and deal with global GameObjects effectively. This helps us avoid snafus like being unable to test state transitions or getting tangled up in persistent objects. Thus, our app manager was one of the first things we worked on, and we’ve expanded it steadily to accommodate new features as necessary.

We’ve also been extending our save system to support better player data management, improved file handling, and the ability and collection mechanics.

Input

Input was another of our initial areas of focus, as we wanted to develop a custom wrapper for Unity’s input system that allows us to query input based on actions defined outside of Unity’s input manager. We did this so that we can build a system for players to rebind their inputs effectively in-game, rather than having to rely on the Unity launcher. Furthermore, this leaves us the flexibility to import custom input plugins if we want to integrate support for different controllers or improved input polling in the future.
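The wrapper boils down to querying named actions against a rebindable table. Here’s a hypothetical sketch of the pattern (names and bindings are illustrative, not our actual system):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of an action-based input wrapper: gameplay code queries named
// actions, and the binding table can be rewritten at runtime to support
// in-game rebinding without touching Unity's input manager.
public static class ActionInput
{
    private static readonly Dictionary<string, KeyCode> bindings =
        new Dictionary<string, KeyCode>
        {
            { "Jump", KeyCode.Space },
            { "Possess", KeyCode.E },
        };

    public static bool GetActionDown(string action)
    {
        KeyCode key;
        return bindings.TryGetValue(action, out key) && Input.GetKeyDown(key);
    }

    public static void Rebind(string action, KeyCode newKey)
    {
        bindings[action] = newKey; // persisted to player settings in practice
    }
}
```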

User Interface

Our UI is largely prototypical for now, with many placeholder assets and sprites taken from our older iterations. However, we’ve fleshed out the functionality of the HUD, menus, and hubworld, and we’ve built a solid foundation for our redesign of UI elements, which we’ll be working on soon. We’ve also spent some time wrestling with Unity’s default UI navigation, to ensure the best experience for players using a gamepad.

 

Animation & Sound

Our path editor has been serving us well, and we plan to use it as a tool to help animate obstacles, characters, and visual effects once we’ve finalized our level designs. We’ll have the all-new Spirit character model and animations within the next few weeks, but for now, we’ve integrated our old animations into Unity’s animation system, with a small bit of customization built on top for our gameplay needs. We’ll be extending this system as we continue to refine our character movement and generate new designs. For now, little old Spirit still looks pretty adorable, though.

We’ve also integrated some of our old sound effects and music, and we’re working on balancing and extending our simple sound system to better handle transitions and the overlay of multiple effects. We’ll also be working with some brand-new editing software and digital instruments soon, so stay tuned for music updates!

 

What’s Next

Our next major priority is revamping Spirit’s model and animations, and updating our navigation code to ensure a great platforming experience for players. From there, we’ll be refining our core mechanics and working on level design and asset creation, before drilling down into our puzzle design and adding depth to the game. We’re having a great time working on Spirit and we really hope that you’ll enjoy it when the time comes!

UnderConstruction

Playing Nice with Unity: Editor Customization

Can I please just delete one element in this godforsaken array without having to navigate eight bloody dropdowns, six unnamed text fields, and the 1989 Portuguese Census?

It’s happened to all Unity users at some point. You write a new script, attach it to an object, see it in the Inspector for the very first time, and recoil in disgust. Perhaps it’s the inability to name array elements, or the absence of constraints on one of your variables, a texture preview that’s just a bit too small, or – perish the thought – something’s alignment is off by a few pixels. The ensuing chaos is simply too much to bear. You close the editor, pour yourself a hot cup of chamomile tea, and swear you’ll never have to look at that script ever again. Two fields misaligning by three pixels is, after all, an offense worthy of litigation in the eyes of your designer, artist, and/or public relations manager. Luckily, in your hour of need, the very editor you’ve cursed as the bane of your existence has had your back the entire time, and in your capable hands, the Unity Editor API shall save your team countless headaches.

GetPropertyHiehgt

Just try and remember to override the GetPropertyHeight function.

A word of caution before we move on – if you’re a pro-tier editor wizard yourself, this article may not be for you. While I’m a decently advanced programmer, I consider myself to be only intermediately experienced with the Editor API – this post is intended to help beginners know why and how they should go about learning the basics and implementing simple editor customizations in their own projects. And so, without further ado, let us delve into the realm of the editor.

What is editor scripting?

Broadly speaking, when I use the term “editor script”, I’m referring to any script that uses the Editor API to explicitly tell the Unity editor what to display, which UI elements to use, and which operations should be performed upon interacting with those UI elements. Examples of editor scripts I will discuss include PropertyAttributes, PropertyDrawers, CustomEditors, and EditorWindows. All of these scripts can be used to customize existing functionality or create new functionality within the Unity Editor at varying levels of scope.

When should I use my own editor scripts?

If you’re relatively new to Unity or working with the Inspector, there are a number of simple customizations for function and form alike that can be accomplished entirely without editor scripting. Unity has a number of built-in Attributes (check the sidebar of the docs for a full list) that function as a tagging system of sorts, allowing you to specify, among other things, script dependencies, Inspector headings, and the use of sliders or input restrictions for variables. Unless you’re looking to impart project-specific or relatively detailed functionality, using these tools can be a great way to tweak the Inspector to your needs without excessive overhead.

Eventually, your requirements will exceed the capabilities of these tweaks, and you’ll write your very first custom editor. Then, your pixel-retentive self might realize that just about every script could use a few extra half-millimetres of layout padding, or perhaps your designer will cry for customized transform gizmos for every different type of enemy in the game. Here’s where you need to know when to pump the brakes – as with any programming job, there’s a trade-off of effort in versus increased productivity here, and it can be a challenging one to gauge.

Writing a new editor for every new MonoBehaviour is, generally speaking, a horrible idea – the default Inspector is there for a reason, and irrespective of its occasional quirkiness, it gets the job done efficiently for the vast majority of scripts, particularly simpler ones. My personal advice is to only use a custom script if you need an improvement of function, rather than form, and if you or someone on your team will be actively using that desired functionality on a semi-regular basis. This avoids both bloat in your codebase and wasted time, as not every little tweak will end up being worth it in the long run.

Writing Editor Scripts

Before delving into any editor scripts, you’ll likely want to familiarize yourself with Unity’s SerializedObject and SerializedProperty classes – these represent how the editor sees your objects, and are used for, among other things, identifying data fields, fetching array elements, and sending updated data back to the object. Luckily, apart from the things you’re actively customizing, Unity will automate the majority of tedious tasks like layout spacing and field display for primitive datatypes.

The type of script you’ll need to create depends entirely on the level of functionality you’re trying to implement. Here’s a quick guide to the technique you’ll most likely want to use, based on your needs:

Modifying UI for individual fields: PropertyDrawers

If you want to customize a certain type of datafield, then a PropertyDrawer is probably the way to go. PropertyDrawers are used to define the Inspector GUI for a specific type – commonly a custom data structure class found in multiple MonoBehaviours. Once the PropertyDrawer has been defined, all instances of that data structure will appear with the same customized Inspector layout, unless overridden. Let’s look at this with an example.

In Spirit, we want to use a few different post-processing effects depending on the object the player is currently controlling. Right now, this includes colourshifting and different types of image warping. I wanted to give our design team a quick way to edit these effects for each object, having context-sensitive fields that would display based on the effect type. Since this functionality was fairly self-contained, I wrapped the necessary fields in a utility data class, FXSpecs, and defined a PropertyDrawer for that data type:

[CustomPropertyDrawer(typeof(PossessionFX.FXSpecs))]
public class FXSpecsPropertyDrawer : PropertyDrawer
{
public override void OnGUI(Rect position, SerializedProperty property, GUIContent label)
{
//Here's where you define your GUI elements:
//Buttons, sliders, textfields, and so on.
}

public override float GetPropertyHeight(SerializedProperty property, GUIContent label)
{
//I keep track of the cumulative height of the panel by modifying a
//variable in OnGUI, which you can then return here.
return currentHeight;
}
}

The result is that any MonoBehaviour with an FXSpecs field will display our custom UI – meaning that we can pop effect data onto other types of behaviours, such as cutscenes or animations, without having to redefine the UI in each instance:

SimplePropertyDrawer

In this way, PropertyDrawers can be thought of as little custom editor modules that can be used to define the UI of individual data structures, rather than modifying the layout for entire script components. (If you do need to redefine the UI for an entire script, you’ll want to create a CustomEditor as outlined below.)

Selectively modifying field display: PropertyAttributes

Consider the following scenario: you have a class with a string field you’d like to mark read-only in the inspector for debugging purposes, while preventing accidental editing. While you’re at it, you’d like to be able to tag any field as read-only for any future applications. The solution – custom PropertyAttributes. In addition to Unity’s built-in attributes (e.g. Range, TextArea, Header), you can create your own. Generally speaking, you’ll need to create two scripts to define a new PropertyAttribute. First, what usually amounts to a shell class declaring your attribute keyword:

public class ReadOnlyPropertyAttribute : PropertyAttribute { }

You’ll define the Inspector behaviour for your attribute in a new PropertyDrawer script:

[CustomPropertyDrawer(typeof(ReadOnlyPropertyAttribute))]
public class ReadOnlyPropertyDrawer : PropertyDrawer
{
public override void OnGUI(Rect pos, SerializedProperty prop, GUIContent label)
{
pos.height = 16;
EditorGUI.LabelField(pos, label, new GUIContent(prop.stringValue));
}
}

Having done this, anything you tag with your attribute will implement the corresponding PropertyDrawer, leaving untagged fields untouched:

SimplePropertyAttribute
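Applying the attribute looks like this – a purely illustrative class, but it shows the per-field opt-in (note that C# lets you drop the “Attribute” suffix when tagging):

```csharp
using UnityEngine;

// Hypothetical usage: only the tagged field is rendered by our
// ReadOnlyPropertyDrawer; the other field keeps the default UI.
public class DebugReadout : MonoBehaviour
{
    [ReadOnlyProperty]
    public string lastLoadedScene;  // shown as a read-only label

    public float ordinaryField;     // default Inspector field
}
```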

I like to think of this as a selective approach to PropertyDrawers – allowing you to choose which fields will implement your custom UI on a case-by-case basis. For certain applications – such as wanting to make some fields readonly, or restricting an input range – this makes much more sense than the broad-brush approach of defining a PropertyDrawer without an accompanying attribute. However, if you want all datafields of a certain type to function the same way in the Inspector, a PropertyDrawer alone is generally sufficient.

Custom Inspectors for certain script types: CustomEditors

A CustomEditor affords the highest level of customization on a per-script basis in the Inspector. This allows you to redefine the display and functionality of all of an object’s fields, as well as imparting any additional behaviours (for example, buttons that execute custom routines on the object). However, it should be noted that creating a CustomEditor is typically much more involved than creating a PropertyDrawer, particularly for scripts with complex datatypes or a large number of data fields. If the functionality you wish to customize can easily be wrapped in a small utility class independent of its parent, a PropertyDrawer may be the wiser choice.

For cases where your desired functionality cannot be accomplished through PropertyDrawers alone, the CustomEditor affords a diverse and powerful toolset for enhancing your script’s Inspector pane.

During the creation of our path animation utility, it quickly became apparent that the default Inspector simply wouldn’t do, for several reasons:

  1. The default layout for arrays would make manually managing a list of waypoints excruciatingly time-consuming.
  2. We wanted to be able to toggle things like curve shape and gizmo drawing from the Inspector with GUI buttons.
  3. We wanted to manage adding and removing waypoints from the Inspector, rather than manually creating and deleting new Transforms and dragging them around into arrays.

Here’s an example of what the default Inspector for our utility would look like:

BasicEditor

Since the functionality we wanted was fairly specific to our path utility, I opted to create a CustomEditor for the utility, adding new UI elements for the desired functionality, redefining the display of the waypoint list, and letting Unity handle the display of simple field types:

[CustomEditor(typeof(OGPath))]
public class PathEditor : Editor
{
public override void OnInspectorGUI()
{
//...With EditorGUILayout spacing is handled for you.
EditorGUILayout.PropertyField(visualizeColor);
EditorGUILayout.PropertyField(showLoop);
//...
if (GUILayout.Button("Set to Linear..."))
path.SetInterpolationMode(OGPath.PathMode.Linear);
//...Make sure you take care of all the fields you want
//to be exposed in the Inspector!
}
}

(Note: The EditorGUILayout class is invaluable here, as it can quickly automate layout tasks that are otherwise borderline agonizing to perfect.)

The result is something like this:

SimpleCustomEditor

The beauty of this is that we can easily edit the functionality of the Inspector if our needs change later on, extending or modifying the behaviours we’ve defined to improve our custom UI.

Global functionality: EditorWindows

When you want to automate a task or expose some functionality pertaining to something other than an individual script – whether your whole application, an entire scene, or a collection of assets – you’ll want to look into EditorWindows. These are standalone panels of functionality that, once created, are accessible from Unity’s Window menu and allow you to create a UI for pretty much anything you can imagine – editing assets, performing operations on an entire level full of objects, resetting configuration files, testing procedural generation…

A simple EditorWindow is easy to create and permits the implementation of any global functionality (or local functionality, if you obtain handles through object fields or by searching the active scene). Let’s look at a simple example – an editor window that interfaces with a static file management class to provide a UI for wiping application data and save files (something that proves to be necessary rather frequently when iterating on your game’s file system, and becomes tedious when done manually). The script looks something like this:

public class SpiritAdminPanel : EditorWindow
{
//This tells Unity where to put your window in the menu.
[MenuItem("Window/Spirit Admin", false, 25)]
public static void ShowWindow()
{
SpiritAdminPanel window = GetWindow<SpiritAdminPanel>();
//Set up your window properties (size, title, etc.) here.
}

private void OnGUI()
{
if (GUILayout.Button("Reset all app data", buttonStyle))
//...And so on, define your UI here.
//GUILayout, like EditorGUILayout, allows for easier UI definition.
}
}

The resulting window is a simple panel with two buttons allowing us to wipe out our data as we please:

SimpleEditorWindow.png

Any time you’re looking to automate some global process that occurs outside of runtime, an EditorWindow can be a great option that maximizes your ability to customize and perform these processes without needing to hard-code every change.

Moving forward with customization

If you’re relatively new to editor customization in Unity, you’ll probably be doing a lot of reading – much of the functionality of the Editor API, however useful, is not always immediately apparent. While follow-along tutorials on editor scripting exist all over the web, they will rarely accomplish the exact functionality you’re looking for. The result is a lot of digging through documentation and forums, and experimenting on your own. This can be an arduous process, but ultimately you’ll find it well worth the effort, as the workflow benefits are potentially immense.

Plus, you’ll finally be rid of those dastardly few pixels standing between you and perfect Feng Shui in the Inspector.

Path Animation in Unity

When we originally came up with the concept for Spirit, we were a bunch of green, bright-eyed second-year undergrads, a little ragged from surviving our first year, but diehard optimists nonetheless. Inevitably, within a few weeks of beginning our first semester, the second-year curriculum had already kicked our teeth in, knocked us to the ground, and started dancing on our broken glasses, just for kicks. You see, our program has us create a game engine from scratch before so much as touching a prebuilt one, a form of education which proved to be both incredibly valuable and inexplicably cruel.

Our shepherd in this matter was the legendary Dan Buckstein, who in the space of a year was responsible for teaching us everything from quaternion math to bloom shaders. However, one of the most invaluable topics came along fairly early in the year, in the form of interpolation. While the most obvious application of this technique is the generation of paths from a series of waypoints, I would be remiss if I neglected to mention that interpolation itself is a technique applicable to a wide range of objectives – skeletal animation, colour gradients, image warping, and physics approximation, to name a few – after all, it’s just data (thanks, Dan).

Today, I’ll be focusing on path animation, something we used quite extensively to create animations in our original prototype. Since we were stuck without an existing engine as a starting point, we built a basic level editor for our original custom engine (the OG engine, as we dubbed it). The editor had several features built in for creating and transforming objects in a level, with a few sub-editors for object properties – path animation, collision, and in-game properties.

OGPathEditor

If we’re being honest, the path animation editor started as nothing more than a homework requirement, though it became eminently more useful as the year went on – particularly for particle effects, as we could use it to create breadcrumb trails and little flourishes of light and sparks that brought life to even the dustiest corners of the game.

In making the switch to Unity, we had initially [naively] assumed that it would have a more polished inbuilt path animation system. Much to our chagrin, despite its myriad useful features, Unity has no such system out of the box. You can achieve basic linear animations using the animation editor, or use navmesh-based traversal for AI characters, but the vanilla editor is somewhat lacking in the way of path animation. So, if we want to animate our particle systems, camera, or anything else along a spline that we can visualize in the scene, we need to develop our own system.

Our system will require a few key features to constitute a working prototype:

  1. A waypoint system that can smoothly animate objects along a curve at runtime.
  2. Support for different types of interpolation.
  3. A custom editor that lets us add, remove, and edit waypoints and/or control handles.
  4. A way of visualizing our path in the scene view so that we can preview and edit it more easily.

I’m going to start by tackling #2, as an understanding of interpolation is the foundation of any path animation system. Interpolation experts can feel free to skip ahead, and those completely unfamiliar with the topic should definitely do some further reading – a firm grasp of the different kinds of interpolation is immensely helpful in a surprising number of applications.

The basic concept of interpolation is this – you have a number of waypoints, and an equation for defining a curve based on those waypoints. Anything you animate along that curve runs off of a timer – that timer (well, a normalized version of it) is used as input, along with some of your waypoints, to an equation that spits out a point in space representative of your object’s position on the curve at the current moment in time. And that’s it. The equation that you choose defines how your curve will look – a series of lines, a smooth curve, a fancy Illustrator-style Bézier curve, and so forth.

For our path editor, we’ve chosen to support basic linear interpolation, Catmull-Rom interpolation, and cubic Bézier interpolation. Here’s what each of those looks like in a very basic nutshell – I won’t go into the reasoning behind the math here, though you can and should read the mathematics behind each one of these methods.

(Note that the t in each case is the object’s timer for the current frame/waypoint divided by the total time for that segment of the path – something that you can define manually, or with speed control, discussed below.)

Linear Interpolation (LERP):

LERPExample.png

public static Vector3 Lerp(Vector3 p0, Vector3 p1, float t)
{
    return (1.0f - t) * p0 + t * p1;
}

Catmull-Rom:

CatmullExample

public static Vector3 Catmull(Vector3 p0, Vector3 p1, Vector3 p2, Vector3 p3, float t)
{
    return ((p1 * 2.0f)
        + (-p0 + p2) * t
        + ((p0 * 2.0f) - (p1 * 5.0f) + (p2 * 4.0f) - p3) * (t * t)
        + (-p0 + (p1 * 3.0f) - (p2 * 3.0f) + p3) * (t * t * t)) * 0.5f;
}

Bézier:

BezierExample

public static Vector3 Bezier(Vector3 p0, Vector3 p1, Vector3 p2, Vector3 p3, float t)
{
    return (1.0f - t) * (1.0f - t) * (1.0f - t) * p0
        + 3.0f * (1.0f - t) * (1.0f - t) * t * p1
        + 3.0f * (1.0f - t) * t * t * p2
        + t * t * t * p3;
}

(For this implementation of Bézier, points p1 and p2 are actually the handles on the curve, which essentially define tangents that can shape smooth or sharp corners, or create loops, as I’ve done above.)

Note that each of these curves uses waypoints that are more or less in the exact same positions – so the interpolation method you choose is instrumental in determining how the final animation will look.

To define your object’s motion along the path, you have a couple of options: you can manually define the time taken for each curve segment – which requires a lot of tweaking and causes inconsistent speeds along tight curves – or you can implement a technique called speed control. Speed control lets you define your object’s desired movement speed and uses a table of curve samples to create smooth, consistent motion. The process goes something like this:

  1. Resample each segment of the curve using your interpolation method of choice, calculating a number of subsamples in between (8 is a nice number for this, 16 if you want extra precision or have particularly long curves).
  2. As you resample, use the distance between samples to calculate the approximate cumulative path length.
  3. Calculate the time taken to traverse the entire curve based on your object’s speed and the total length of the curve.
  4. As you animate the object, use its timer to measure its traversal along the entire curve, tracking its place in the table of samples and using LERP to move the object between adjacent subsamples.

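The steps above might look something like this in code – a minimal sketch for a single curve segment, where the curve is passed in as a function of t (e.g. the Bezier method above), and names like CurveSample and BuildSampleTable are ours for illustration:

```csharp
using System.Collections.Generic;
using UnityEngine;

// One entry in the arc-length table built during resampling.
public struct CurveSample
{
    public Vector3 position;        // Subsample position on the curve
    public float cumulativeLength;  // Approximate path length up to here
}

public static class SpeedControl
{
    // Steps 1 & 2: resample the segment and accumulate distance.
    public static List<CurveSample> BuildSampleTable(System.Func<float, Vector3> curve, int subsamples = 8)
    {
        var table = new List<CurveSample>();
        float length = 0.0f;
        Vector3 prev = curve(0.0f);

        for (int i = 0; i <= subsamples; ++i)
        {
            Vector3 pos = curve(i / (float)subsamples);
            length += Vector3.Distance(prev, pos);
            table.Add(new CurveSample { position = pos, cumulativeLength = length });
            prev = pos;
        }
        return table;
    }

    // Step 3: total traversal time = total curve length / desired speed.
    public static float TraversalTime(List<CurveSample> table, float speed)
    {
        return table[table.Count - 1].cumulativeLength / speed;
    }

    // Step 4: map distance travelled (speed * elapsed time) to a position
    // by finding the bracketing samples and LERPing between them.
    public static Vector3 Evaluate(List<CurveSample> table, float distance)
    {
        for (int i = 1; i < table.Count; ++i)
        {
            if (table[i].cumulativeLength >= distance)
            {
                float segment = table[i].cumulativeLength - table[i - 1].cumulativeLength;
                float t = segment > 0.0f ? (distance - table[i - 1].cumulativeLength) / segment : 0.0f;
                return Vector3.Lerp(table[i - 1].position, table[i].position, t);
            }
        }
        return table[table.Count - 1].position; // Past the end of the curve
    }
}
```

You’d call BuildSampleTable with something like `t => Bezier(p0, p1, p2, p3, t)`, once per segment, and sum the segment lengths for the total.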
Using these concepts, it’s fairly obvious how you might build a simple system for storing a path and animating objects along that path – keep a list of Transforms pointing to your waypoints in the scene, use their positions for interpolation and/or subsampling, and adjust your object’s trajectory in the Update function.
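As a concrete illustration, here’s a bare-bones sketch of that idea – a component that LERPs an object between scene waypoints in Update (the name SimplePathFollower and the manual per-segment timing are ours for illustration, not our full system):

```csharp
using UnityEngine;

public class SimplePathFollower : MonoBehaviour
{
    public Transform[] waypoints;    // Waypoints placed in the scene
    public float segmentTime = 1.0f; // Seconds per segment (manual timing)

    private int current = 0;    // Index of the segment we're currently on
    private float timer = 0.0f; // Time elapsed on the current segment

    private void Update()
    {
        if (waypoints == null || waypoints.Length < 2)
            return;

        timer += Time.deltaTime;
        float t = Mathf.Clamp01(timer / segmentTime);

        // LERP between the current waypoint and the next (wrapping to loop).
        Vector3 from = waypoints[current].position;
        Vector3 to = waypoints[(current + 1) % waypoints.Length].position;
        transform.position = Vector3.Lerp(from, to, t);

        // Advance to the next segment once this one is done.
        if (t >= 1.0f)
        {
            timer = 0.0f;
            current = (current + 1) % waypoints.Length;
        }
    }
}
```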

The real magic here is in creating a custom editor for your paths, so that you can switch between interpolation methods, add and remove waypoints, push them around, and watch as your path changes in the scene. Here’s an example of what we’re shooting for (excuse GIF quality):

PathSweg

To achieve this, we’ll want to add the [ExecuteInEditMode] attribute to our main path script for the purposes of gizmo drawing, and create a basic Editor script with a custom layout for adding/removing waypoints and setting them up automatically on our curve. Check out Unity’s documentation on the Editor class, GUILayout, and SerializedProperty for a quick rundown, and make sure to tag your Editor script with [CustomEditor(typeof(YourClass))]. To display custom property fields, we’ll need to grab a handle to the serialized version of our object:

private void OnEnable()
{
    //The 'target' variable is built into the Editor class.
    //Cast it according to the class type for which you're building your Editor.
    path = (OGPath)target;
    serial = new SerializedObject(path);

    waypoints = serial.FindProperty("waypoints");
    lockTangents = serial.FindProperty("lockTangents");
    visualizeColor = serial.FindProperty("visualizeColor");
    showLoop = serial.FindProperty("showLoop");
}

From here, we can grab properties using the FindProperty function, and display those using PropertyFields in OnInspectorGUI – just remember to call Update on your serialized object at the beginning of the routine, and ApplyModifiedProperties at the end, lest you be baffled by checkboxes that refuse to change. Here’s an example of how that looks:

public override void OnInspectorGUI()
{
    serial.Update();

    //Insert PropertyFields, Layout functions, buttons, etc.
    //Check out the documentation for some examples of what you can do.
    //Here's a snippet of what we used for our purposes:

    EditorGUILayout.PropertyField(visualizeColor);
    EditorGUILayout.PropertyField(showLoop);

    GUILayout.Label(string.Format("Interpolation: {0}", path.pathMode.ToString()));

    EditorGUILayout.PropertyField(lockTangents);

    if (GUILayout.Button("Set to Linear..."))
        path.SetInterpolationMode(OGPath.PathMode.Linear);

    //...And so on for your other Inspector functionality...

    serial.ApplyModifiedProperties();
}

Getting the curve to display in the editor was far easier than I had anticipated – simply implement the OnDrawGizmos function in your main path script, and use Gizmos.DrawLine to plot your curve along your waypoints (in the case of LERP) or your subsample table (if you’re using Catmull-Rom, Bézier, or some other more complex method). Bézier can be a bit tricky because some waypoints are actually “handles” – I got around any potential confusion for our level designers by keeping the handles in a separate list, so that swapping between different interpolation methods keeps a more consistent general trajectory. Regardless of how you choose to handle different curves (sorry), a custom editor is key, and not all that difficult to set up. The resulting prototype functions something like this:
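For reference, the gizmo drawing for the linear case might look something like this – a sketch that lives in the main path script, assuming a List<Transform> of waypoints plus the visualizeColor and showLoop fields serialized in the editor snippets above:

```csharp
// Inside the main path script (OGPath):
private void OnDrawGizmos()
{
    if (waypoints == null || waypoints.Count < 2)
        return;

    Gizmos.color = visualizeColor;

    // Linear case: connect each waypoint to the next.
    // For Catmull-Rom or Bezier, draw between subsamples instead.
    for (int i = 0; i < waypoints.Count - 1; ++i)
        Gizmos.DrawLine(waypoints[i].position, waypoints[i + 1].position);

    // Optionally close the loop back to the first waypoint.
    if (showLoop)
        Gizmos.DrawLine(waypoints[waypoints.Count - 1].position, waypoints[0].position);
}
```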

CandleAnimate.gif

In the couple of weeks since developing our initial prototype, we’ve extended it to support features like transitioning between curves, modes of object rotation, and a basic cutscene system built on chaining and timing multiple paths together.

In the next couple of weeks, I’ll be posting an update on working more in-depth with Unity’s systems for implementing custom editors, property drawers, and so on. Until then, it’s back to the grindstone! Thanks for reading.

Feel free to reach out in the comments if you have any questions or you’d like to add to the discussion!