Design

A User-Centred Approach to Control Design

It’s a familiar image – the furious slamming of fists into a laggy keyboard; the innocent mouse knocked aggressively from its perch; the once-glorious gamepad, now lying cracked on the floor beneath a film of cheese dust. Who among us has never resorted to blaming the controller for our own failures? And yet, in some cases, perhaps we are justified in our rage against the machine, for a poor control implementation can lead to any manner of misclicks, misunderstandings, and missed opportunities. Controls are a fundamental aspect of any game’s design, serving as a key factor in determining the game’s playability. Simply put, a game’s controls facilitate each and every interaction available to the player. This holds true regardless of input device – whether mouse and keyboard, gamepad, motion sensors, or brain-computer interface. No matter the input chosen, developers are tasked with designing a set of controls that is logical, easy to learn, fluid, responsive, and as unobtrusive to gameplay as possible. Ultimately, this process boils down to the creation of an interaction schema that effectively maps real-world actions, such as a button press or a wave of the arm, into an in-game action, like jumping or swinging a sword.

All of this rhetoric prompts us to inquire: how might we define good or great interaction design for game controls? We might assess a control scheme as effective if it is usable and contributes to a good user experience – that is to say, it enhances a player’s experience, rather than detracting from it. However, accurately measuring usability and user experience necessitates user testing, which presumes that we’ve already implemented our design. Can we determine some aspects of our design a priori, before development begins, thus improving the quality of our initial efforts? The answer, thankfully, is yes, through the application of user-centred design, or UCD. UCD is a process oft-applied in the realm of productivity and web applications, though it is increasingly applied to the development of interactive entertainment, including games. In a nutshell, UCD focuses on understanding user needs, developing system requirements based on those needs, prototyping alternative designs, and finally evaluating the effectiveness of those designs. In this post, we’ll focus on how we can leverage the first two phases of UCD methodology – understanding needs and formulating requirements – to inform our designs pre-implementation.

Case Study: Designing Controls for Spirit

Our team’s interest in UCD is motivated by our current project, a 3D puzzle-platformer in a quasi-open world. Controls are of particular importance in the platforming genre, where players are frequently tasked with executing a precisely timed sequence of movement, jumps, and other abilities. Poorly-mapped or unresponsive controls spell disaster for any platforming game, as they maximize player frustration, or worse, make certain challenges impossible to complete. Our need for great controller design is compounded by the nature of our project in particular; since our core mechanic allows players to control a number of different objects, we may find ourselves designing a dozen control variants for any given input device. Furthermore, many of our puzzles are physics-based, demanding that our controls seem physically realistic while maintaining a good game feel. We’ve chosen to apply UCD in achieving these objectives to ensure that our players’ needs form the basis of our interaction design.

Spoorit-Wave.png

The first step in our design process is the establishment of our target user population, and an understanding of player needs based on their demographics, preferences, and past experience. Next, we formulate design requirements based on lessons learned from existing titles, expected use context, and player behaviours. Following this, we create initial designs for three control variants covering our first player-controlled entities and refine them through early internal testing. Finally, we plan our next steps in user testing and iteration to validate and refine our designs.

Understanding Players & Establishing Design Requirements

Our target audience for Spirit comprises players between the ages of 18 and 34 with at least a moderate amount of gaming experience. The ideal user has fairly extensive experience with platforming games, enjoys exploration and puzzle-solving, and is willing to devote an hour or more to individual play sessions.

Based on the needs of our target players, we can categorize the requirements of our design into a few key groups:

Functional Requirements – What the controls should do.

For each set of controls, we need to support core game interactions – primarily, we are concerned with movement, jumping, interacting with objects, and manipulating the camera. Each interaction should be mapped to its own region on the appropriate input device, and real-world manipulation of the input should translate sensibly into in-game action.

Non-Functional Requirements – What the controls should be.

We’re developing Spirit for PC, so we’d like to offer both keyboard and gamepad support for players. Right now, we’re focused on interaction design for both mouse/keyboard and Xbox controllers. Since players will find themselves in situations where they might need to rapidly time jumps, make precise changes in direction, or switch quickly between objects, responsiveness and fluidity should feature prominently in the eventual implementation.

Data Requirements – What the controls should know.

Our controls need to respond differently depending on game scenarios – connecting with in-game feedback like contextual hints, restricting input when appropriate, and even responding to in-game physics. Thus, our control system should interface with game data to pull information regarding the camera, game state, position of interactive objects, and so on.

Context Requirements – How the controls will be used.

Since players will want to concentrate on what’s happening on-screen, we need to ensure that they won’t feel the need to glance down at the keyboard or gamepad to be sure of their next input. We expect that players will have prior gaming experience, and so our design can borrow from established conventions in the genre to assist in this effort.

Usability and Experience Requirements – What the user should perceive.

We want our controls to feel unobtrusive and fluid, minimizing the barrier that users perceive between their actions and in-game results. Controls should be easy to learn, and easy to use – we want challenge to come from puzzle-solving and platforming, not wrangling a gamepad. Lastly, we want players to feel good about mastering the abilities of any given object that they control, and so our controls should integrate with our animation and gameplay systems to create the most fluid experience possible.

Following the establishment of our design requirements, we examined (and played!) a number of different games. Since we’re concerned with designing controls for several different objects, our research extended beyond the platforming genre to include games like flight simulators and shooters. For the purposes of our case study, we’ll look at the first few entities that we’ve implemented into our gameplay prototype – our main character, a beach ball, a marble, and a paper plane. Each design follows from a core set of universal attributes that we’ve developed based on estimated player expectations, with refinements to individual objects focused on improving game feel and maximizing usability.

Control Designs

Universal attributes. At its core, Spirit is a platformer, and so we looked at a lot of different platforming games to get a feel for the sort of controls players would expect – from classics like Super Mario 64, Banjo-Kazooie, and Chibi-Robo to modern incarnations of the genre, like Yooka-Laylee. We also played quite a bit of Ori and the Blind Forest – though a 2D platformer, the controls in Ori are outstandingly responsive and fluid, with snappy animations that respond near-instantaneously to most inputs.

ScreenshotCollection.png

Since players will be in a 3rd-person, 3D environment regardless of the object they’re controlling, we also looked at games like The Legend of Zelda: Breath of the Wild to learn from some truly great 3rd-person control schemes. Ultimately, we decided on a few standards from which we could build and refine each individual entity’s control scheme:

Locomotion. No need to reinvent the wheel on mapping basic movement – we’re going to keep primary directional movement on WASD for keyboard users and LS for gamepad players. We’ll map jumping to spacebar on keyboard, and A on gamepad. For each individual control variant, we’ll use the physical qualities of the entity that the player controls to determine how movement controls should behave – including acceleration, directional changes, and any animation delays.

Camera Movement. Following the aiming conventions of first- and third-person games alike, we’ll map this movement to RS and mouse movement. For keyboard and mouse users, we’ll allow toggling of locked and free camera modes with the tab key.

Interactions. We’ll map primary and secondary actions, like possessing objects and interacting with NPCs, to Q/E on keyboard and Y/X on gamepad. We’ve opted for a primarily one-button scheme (using E and X respectively), which will perform the correct action based on the interaction available in-world. To accomplish this, we’ll check for interactive areas within the player’s FOV and display a prompt in-world to highlight the available interaction.

Spoorit-Prompt
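For the curious, the logic behind that prompt boils down to a proximity-and-view check. Here’s a rough sketch of how it might work in Unity – the Interactable component and the tuning values below are illustrative stand-ins, not our production code:

using UnityEngine;

// Illustrative stand-in for whatever component marks an object as interactive.
public class Interactable : MonoBehaviour { }

public class InteractionProbe : MonoBehaviour
{
    public float interactRange = 2.0f;   // hypothetical max distance to an interactable
    public float fovThreshold = 0.5f;    // min dot product with the camera's forward

    // Returns the interactable most central in the player's view, or null.
    public Interactable FindBestInteractable(Transform camTransform)
    {
        Interactable best = null;
        float bestDot = fovThreshold;

        foreach (Collider hit in Physics.OverlapSphere(transform.position, interactRange))
        {
            Interactable candidate = hit.GetComponent<Interactable>();
            if (candidate == null)
                continue;

            // Prefer whichever candidate lies closest to the centre of view;
            // that's the one whose in-world prompt we display.
            Vector3 toTarget = (hit.transform.position - camTransform.position).normalized;
            float dot = Vector3.Dot(camTransform.forward, toTarget);
            if (dot > bestDot)
            {
                bestDot = dot;
                best = candidate;
            }
        }
        return best;
    }
}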

Each of the control variants below is based on these core universal attributes, with variations based on the physical attributes of the entity in question, and any expectations players may have from previous gaming experiences.

Third-person humanoid. Players will spend most of their time as Spirit himself, a tiny ghost with roughly humanoid features. We want motion to feel snappy and responsive, so for this design, we’ll map the movement axes directly to the player’s velocity, with a slight acceleration timed to match the character’s run animation. We’ll base this on a character controller that considers game feel first, and physics second – to give players a fluid experience that integrates well with animation. Following the standard of most platformers and third-person games, we’ll allow players to “turn on a dime” – turning animations are nice and cinematic, but may prove frustrating when players want tight controls above all else. The result in-prototype looks something like this:

Spoorit-Move
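For reference, a minimal version of this velocity-driven approach might look something like the sketch below – the moveSpeed and accelTime fields are hypothetical tuning values, and gravity and jumping are omitted for brevity:

using UnityEngine;

[RequireComponent(typeof(CharacterController))]
public class GhostMovement : MonoBehaviour
{
    public float moveSpeed = 6.0f;   // target run speed (illustrative)
    public float accelTime = 0.1f;   // short ramp-up matched to the run animation
    public Transform cam;            // camera used for input-relative movement

    private CharacterController controller;
    private Vector3 velocity;
    private Vector3 velocityRef;     // scratch variable for SmoothDamp

    void Awake() { controller = GetComponent<CharacterController>(); }

    void Update()
    {
        // Read raw input and make it camera-relative in the XZ plane.
        Vector3 input = new Vector3(Input.GetAxisRaw("Horizontal"), 0f,
                                    Input.GetAxisRaw("Vertical"));
        Vector3 forward = Vector3.ProjectOnPlane(cam.forward, Vector3.up).normalized;
        Vector3 right = Vector3.Cross(Vector3.up, forward);
        Vector3 wishDir = (forward * input.z + right * input.x).normalized;

        // Drive velocity directly toward the desired direction with a slight
        // acceleration; direction changes are effectively instant.
        Vector3 target = wishDir * moveSpeed;
        velocity = Vector3.SmoothDamp(velocity, target, ref velocityRef, accelTime);

        // Face the movement direction immediately ("turn on a dime").
        if (wishDir.sqrMagnitude > 0.01f)
            transform.rotation = Quaternion.LookRotation(wishDir);

        controller.Move(velocity * Time.deltaTime);
    }
}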

Round objects (physics-based). Two of our initial objects, the beach ball and the marble, follow a scheme inspired by the feel of locomotion in games like Katamari Damacy or Super Monkey Ball. In contrast to the main ghost controller, we’ll base this scheme almost entirely on physics, mapping movement axes to forces applied on the object, rather than instantly changing the object’s velocity. By configuring the amount of acceleration applied, we can create the feeling of a large, weighty rubber ball or a small marble with a tight turning radius. We’ll let the physics engine handle angular momentum, preserving it through jumps and collisions, to create an experience that feels more physically realistic. In past iterations, we experimented with more direct schemes that were less physics-based, as in the main character’s controls; the consensus from the majority of players was that they preferred and expected more physics-dependent behaviour for traditionally “inanimate” objects. The end result is a nice, responsive force-based controller that “fights back” if the object being controlled is particularly weighty:

Ball-Move
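A stripped-down sketch of the force-based variant is below; moveForce and maxSpeed are illustrative numbers rather than our actual tuning:

using UnityEngine;

[RequireComponent(typeof(Rigidbody))]
public class BallMovement : MonoBehaviour
{
    public float moveForce = 20.0f;  // larger = heavier-feeling push (illustrative)
    public float maxSpeed = 8.0f;    // soft cap on rolling speed
    public Transform cam;

    private Rigidbody rb;

    void Awake() { rb = GetComponent<Rigidbody>(); }

    void FixedUpdate()
    {
        Vector3 input = new Vector3(Input.GetAxis("Horizontal"), 0f,
                                    Input.GetAxis("Vertical"));
        Vector3 forward = Vector3.ProjectOnPlane(cam.forward, Vector3.up).normalized;
        Vector3 right = Vector3.Cross(Vector3.up, forward);
        Vector3 pushDir = forward * input.z + right * input.x;

        // Apply a force rather than setting velocity; the physics engine
        // preserves angular momentum through jumps and collisions for us.
        if (rb.velocity.magnitude < maxSpeed)
            rb.AddForce(pushDir * moveForce);
    }
}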

Flight (hybrid). Developing controls for a paper airplane was particularly interesting – though inanimate like the ball, the plane functions as more of a vehicle than a dead weight, and so we looked to spaceflight and flight simulators like X-Plane for inspiration. Stripping away the complexities of a bona fide flight sim, we can reduce the act of flight to a few primary controls – throttle (forward motion), yaw (turning), pitch, and roll.

Since throttle and yaw correspond to motion in the XZ plane, we can associate this with the “locomotion” controls for other objects – as such, we map throttle to the z-axis of movement (W/S on keyboard, or up/down on LS for gamepad) and yaw (which we’ll tie into roll for smoother animation) to the x-axis of movement (A/D on keyboard, left/right on LS). Pitch and roll are a bit more interesting – in keeping with the conventions of traditional flight controls, we’ll lock the camera to the direction the aircraft is travelling, and use the axes freed up from camera controls to modulate pitch (up/down with mouse or RS) and roll (which, coupled with yaw, is offered secondarily using left/right with mouse or RS). The result is something that feels like a simple, zippy little flight simulator:

Plane-Move
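To give a sense of the mapping, here’s a rough sketch of the hybrid flight scheme – the force and rate values are placeholders, and we’re assuming Unity’s default Mouse X/Mouse Y axes standing in for the right stick:

using UnityEngine;

[RequireComponent(typeof(Rigidbody))]
public class PlaneMovement : MonoBehaviour
{
    public float throttleForce = 15.0f;  // illustrative tuning values
    public float turnRate = 90.0f;       // yaw degrees/sec at full deflection
    public float pitchRate = 60.0f;
    public float rollFactor = 0.5f;      // fraction of yaw fed into roll

    private Rigidbody rb;

    void Awake() { rb = GetComponent<Rigidbody>(); }

    void FixedUpdate()
    {
        float throttle = Input.GetAxis("Vertical");    // W/S or LS up/down
        float yaw = Input.GetAxis("Horizontal");       // A/D or LS left/right
        float pitch = Input.GetAxis("Mouse Y");        // mouse or RS up/down
        float roll = Input.GetAxis("Mouse X");         // secondary roll input

        // Thrust along the plane's nose; lift and drag omitted for brevity.
        rb.AddForce(transform.forward * throttle * throttleForce);

        // Yaw banks the plane slightly (rollFactor) so turns animate smoothly.
        Vector3 spin = new Vector3(
            pitch * pitchRate,
            yaw * turnRate,
            -(yaw * rollFactor + roll) * turnRate);
        rb.MoveRotation(rb.rotation * Quaternion.Euler(spin * Time.fixedDeltaTime));
    }
}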

Next Steps

Over the course of our work so far, communication among the dev team has proven crucial – our initial implementations have undergone many iterations to improve responsiveness, physical accuracy, and animation. However, we’re still very much at a prototype stage, and we’ll need to test our current variations with real players to validate and further improve our designs. Our next step will be finalizing control variants for a few more in-game objects, before proceeding to some early alpha testing with potential players. As part of our user testing efforts, we’ll be integrating techniques like gameplay recording, questionnaires, and semi-structured interviews to understand our players’ perspectives on controls and interactions in our game. In the meantime, we’ll be working on improving in-game feedback to help users learn available interactions more effectively, and designing some simple puzzles to provide an in-game testing environment where users can focus most of their efforts on evaluating the game’s controls.

Overall, the UCD approach has proven immensely helpful to our interaction design process, improving the quality of our initial designs and our efficiency as a team. Be sure to check back in soon for an update on our creative direction and level design as we move forward from our initial prototype!

Prototyping a Dynamic Camera System

Every player seems to have a different idea of the features that are most important to them in a game – depending on who you ask, that might be the level design, graphics, story, music, adequate inclusion of puffins, and so on. However, one key element dictates our perception of each and every one of these features, serving as the player’s window into the game world – the camera. A game’s camera is the oft-unsung hero (or hated villain) of the complete experience, almost solely responsible for defining the player’s perspective. Cameras need to consider everything from user input and avatar movement to physics constraints and cinematic intent. Most players may never notice a great camera system, but nearly every player will notice a terrible one.

Spirit has been particularly challenging in this regard, as we have a number of factors to consider in designing our camera system. The game world is relatively open, and puzzles are nonlinear, so strict designer-imposed controls are out of the question. We want users to be able to control and reset the camera freely, but integrate a degree of automation to prevent the need for constant manual adjustment. We also need to integrate the camera with our animation system, allowing for cutscenes and dynamic transitions. Finally, for want of a better phrase, we have a lot of stuff in our levels, so physics-based adaptation is a must. We’ve prototyped these features into a single dynamic system that looks something like this:

OverallCamera.gif

As a disclaimer, we’re still quite a ways from some of the amazing dynamic camera systems out there, but our current system has all the functionality we’ll need to move forward with refining the design. Here’s a look at how we’ve designed and developed our prototype camera system using Unity 2017:

Phase I: Basic Controls & Follow Camera

Our first step is creating a basic third-person camera that follows the player around while permitting them to adjust their view and look around. For this task, we designed a few basic constraints defining the valid operating space of the camera:

  • Minimum and maximum pitch angles.
  • Minimum and maximum distance from the player.
  • Incremental yaw around the player, which resets by turning the camera to face the same direction as the player.
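Enforcing these constraints is mostly a matter of clamping the camera’s parameters whenever the player adjusts them – something like the following, where the min/max limits and per-frame input values are hypothetical fields:

// Clamp user adjustments to the valid operating space (limits illustrative).
pitchLevel = Mathf.Clamp(pitchLevel + pitchInput, minPitch, maxPitch);
zoomLevel = Mathf.Clamp(zoomLevel + zoomInput, minZoom, maxZoom);
yawAdjust += yawInput;   // incremental, so unclamped; the reset realigns it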

To start, we can calculate our default position using an offset vector based on the negative of our player’s forward vector, a default zoom distance, and a default pitch angle:

// Camera target on the currently-controlled object.
Transform pTarget = player.curObject.camTarget;
// Tilt the offset by the default pitch around the target's right axis.
Quaternion offsetPitch = Quaternion.AngleAxis(pitchLevel, pTarget.right);
// Note the parentheses: Quaternion * float isn't defined, so scale the vector first.
Vector3 offset = offsetPitch * (zoomLevel * -pTarget.forward);
Vector3 targetPos = pTarget.position + offset;

From our default position, the camera’s target position changes if the player moves or if they manually adjust the camera’s angle or distance. The three key factors in this adjustment will be pitch, yaw, and zoom. We’ll treat pitch and zoom as continuous values, since we’re defining those qualities relative to the horizon and the player’s position, respectively.

Yaw, on the other hand, is a bit trickier. The obvious answer here is to define “zero yaw” as facing the same direction as the player. However, in the interest of avoiding dizziness, vertigo, and the inevitable cavalcade of lawsuits that would follow, we don’t want the camera to turn instantly as the player moves. In fact, we’d like the movement input to appear relative to the camera: the player pushes right on the movement stick, and they appear to move right on the screen, and so on. Thus, we’ll define any yaw change as incremental, rotating the camera around the player about the y-axis independent of the player’s current movement direction. This will let us more easily reference the camera’s forward vector in our movement logic to determine which way the player should go.
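In our movement logic, that camera-relative conversion looks roughly like this (stickInput here is a hypothetical Vector2 of raw movement input):

// Flatten the camera's heading onto the XZ plane and build a basis from it.
Vector3 camForward = Vector3.ProjectOnPlane(cam.transform.forward, Vector3.up).normalized;
Vector3 camRight = Vector3.Cross(Vector3.up, camForward);
// "Right on the stick" now always means "right on the screen".
Vector3 moveDir = camForward * stickInput.y + camRight * stickInput.x;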

Obtaining the final target position involves a few basic transformation calculations:

  1. Calculate a base offset using the XZ components of the camera’s current forward vector.
  2. Normalize the offset and rotate it by two quaternions, one for pitch adjustment (using the camera’s right vector as a rotation axis), and one for yaw adjustment (using the world up vector).
  3. Multiply the resulting offset by the current zoom level.
  4. Calculate the camera’s position by adding the final offset to the player’s position.
  5. Rotate the camera to look at the player.

We’ll implement that as follows:

Transform pTarget = player.curObject.camTarget;
Transform rTarget = cam.transform;
// 1. Base offset from the XZ components of the camera's current forward vector.
Vector3 offset = -rTarget.forward;
offset.y = 0.0f;
// 2. Normalize, then rotate by the pitch and yaw adjustment quaternions.
offset.Normalize();
Quaternion offsetPitch = Quaternion.AngleAxis(pitchLevel, rTarget.right);
Quaternion offsetYaw = Quaternion.AngleAxis(yawAdjust, Vector3.up);
offset = offsetYaw * offsetPitch * offset;
// 3. Scale by the current zoom level.
offset *= zoomLevel;
// 4. Offset from the player's position to get the camera's target position.
Vector3 targetPos = pTarget.position + offset;
// 5. Rotate the camera to look at the player.
cam.transform.LookAt(pTarget);

The resulting controls look something like this:

BasicControls

Phase II: Physics

Adding basic physics is actually far less painful than you might think once you’ve implemented your core control scheme. A lot of the decisions in this respect, at least early on, will be questions of design rather than programming. While the details will depend on your physics engine, here are some guidelines we’ve used in developing our camera physics:

  • Use a spherical collider to maximize smoothness when colliding with objects and to reduce the chances that the camera will get “stuck”.
  • Set the collider radius slightly larger than the camera’s near clipping plane distance to avoid unwanted geometry clipping.
  • Ensure that you can toggle physics on the camera – disabling physics during camera reset and animation is generally a must to prevent unwanted side effects.

If you happen to be using Unity, a quick way to set up your camera physics is to tack on a sphere collider and a rigidbody, and use Rigidbody.MovePosition to update the camera’s target position, using a distance threshold to prevent clipping through thin walls and other geometry:

// Clamp per-tick movement so a fast-moving camera can't tunnel through
// thin walls and other geometry.
Vector3 posChange = targetPos - cam.transform.position;
posChange = Mathf.Clamp(posChange.magnitude, 0.0f,
    maxSpeed * Time.fixedDeltaTime) * posChange.normalized;
rb.MovePosition(cam.transform.position + posChange);

(As a word of warning for Unity users – if you’re using this, or a variation, as your quick and dirty camera physics solution, be sure to set your camera rigidbody mass to zero and set its velocity to zero during every update – lest you be plagued with unwanted force interactions.)
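For completeness, here’s roughly what that setup looks like as a component – assuming the camera, rigidbody, and sphere collider all live on the same GameObject:

using UnityEngine;

[RequireComponent(typeof(Rigidbody), typeof(SphereCollider))]
public class CameraPhysicsBody : MonoBehaviour
{
    private Rigidbody rb;

    void Awake()
    {
        rb = GetComponent<Rigidbody>();
        rb.useGravity = false;
        rb.mass = 0.0f;   // effectively massless, per the note above
        rb.constraints = RigidbodyConstraints.FreezeRotation;

        // Keep the collider just beyond the near plane to prevent clipping.
        SphereCollider col = GetComponent<SphereCollider>();
        col.radius = GetComponent<Camera>().nearClipPlane + 0.05f;
    }

    void FixedUpdate()
    {
        rb.velocity = Vector3.zero;          // cancel accumulated forces...
        rb.angularVelocity = Vector3.zero;   // ...and torques, every tick
    }
}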

Here’s a comparison between our initial camera and our physics-capable camera when confronted with a wall:

PhysicsLarge

Admittedly, this is a fairly basic implementation, but the result is suitably robust in many situations. The resultant camera behaves respectably when crammed into walls, floors, and most level geometry. We can improve it further with some dynamic physics constraints and raycasting, but that’s a post left for another day.

Phase III: Animation

For our purposes, we have three main animation requirements for our camera:

  • Smooth transition to and from predefined cutscene animations.
  • Short transition sequences for certain mechanics.
  • Automatic camera reset.

We handle each of these slightly differently, though each updates by overriding the default physics and player-controlled camera mode. For cutscenes, we write our camera’s pre-cutscene transformation to the same format used for cutscene keyframes – which we then append to either end of the cutscene’s frame list. The result is a modified animation which transitions to and from the cutscene’s path without hiccuping between camera control modes:

Cutscene
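In sketch form, the padding step looks something like this – CamKeyframe is an illustrative stand-in for our actual keyframe format, and the timing bookkeeping is omitted:

struct CamKeyframe { public Vector3 position; public Quaternion rotation; }

// Pad the authored cutscene with the camera's current transform so the
// animation blends in and out without a visible mode switch.
List<CamKeyframe> BuildCutsceneFrames(List<CamKeyframe> cutscene, Transform camT)
{
    CamKeyframe current = new CamKeyframe { position = camT.position, rotation = camT.rotation };

    var frames = new List<CamKeyframe> { current };   // blend in from the current view...
    frames.AddRange(cutscene);                        // ...play the authored path...
    frames.Add(current);                              // ...then blend back out again
    return frames;
}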

Our transition sequences are used when a player jumps between different controllable objects. To accomplish this, we capture a keyframe of the camera’s transformation at the instant the player triggers the mechanic, and calculate the default position of the camera for the new object in a second keyframe. We then animate the camera between these frames before restoring control to the player:

Possession

Finally, we allow players to “reset” the camera at any time, smoothly swivelling back to the default pitch, yaw, and zoom for the player’s current world position and orientation. Here, we key the camera’s pitch, zoom, and absolute yaw relative to the player for both the current and default camera orientations. We interpolate the values simultaneously to yield a smooth swivel effect:

Reset
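The interpolation itself can be as simple as a coroutine that lerps all three parameters in lockstep (field names as in the earlier snippets; resetTime is a hypothetical duration):

IEnumerator ResetCamera(float fromPitch, float fromYaw, float fromZoom,
    float toPitch, float toYaw, float toZoom, float resetTime)
{
    for (float t = 0.0f; t < 1.0f; t += Time.deltaTime / resetTime)
    {
        float s = Mathf.SmoothStep(0.0f, 1.0f, t);
        pitchLevel = Mathf.Lerp(fromPitch, toPitch, s);
        yawAdjust = Mathf.LerpAngle(fromYaw, toYaw, s);   // swivel the short way round
        zoomLevel = Mathf.Lerp(fromZoom, toZoom, s);
        yield return null;   // the usual camera update applies these values
    }
}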

Phase IV: Extras

A nice additional feature is the inclusion of a damping system, which adds a light springlike quality to the way that the camera adjusts its position.

The obvious implementation here is to simply apply a damping or acceleration function to our camera’s transformation update. However, damping the camera’s target position alone will have the effect of making our yaw and pitch controls feel sluggish. Instead, we’ll effectively damp the scaling of our offset vector, preserving the snappiness of our view controls while giving the camera a sense of springiness as the player moves around. We can implement this by using Unity’s Vector3.SmoothDamp function (or by keeping track of the camera’s frame-to-frame velocity and applying acceleration manually):

// Damp toward the target position, capped at the controlled object's speed.
Vector3 realTarget = Vector3.SmoothDamp(cam.transform.position, targetPos,
    ref camVel, dampingTime, player.curObject.moveSpeed);
// Re-scale the offset vector (rather than damping the position directly)
// so that pitch/yaw controls stay snappy while following feels springy.
offset.Normalize();
offset *= (realTarget - pTarget.position).magnitude;
targetPos = pTarget.position + offset;
rb.MovePosition(targetPos);

The resultant camera behaves something like this (undamped on the left, and damped on the right – we can reduce the exaggeration of the effect by tweaking the damping time):

DampLarge

With that in place, we’ve got ourselves a simple, versatile camera system that can adapt to all of our basic in-game needs. We’ll be back soon with updates on art direction and our final gameplay prototype!

Spirit Development Update: July 2017

Now that we’re rounding out our third official month of development, we’d like to take some time to review our progress and share it with the community. It’s been a busy season for us, with lots of business meetings and new opportunities. We’re excited to announce that we are working with Northumberland CDFC to help fund our development efforts and that we will be participating in the UOIT business incubator throughout the year!

Regarding Spirit, we’ve finished much of the game’s core prototype functionality, having implemented alpha versions of our core mechanics, user interface, input system, save system, and application management. While we’ve got a long way to go in refining our gameplay and fleshing out our level design, it’s been quite rewarding to see the first few pieces come together. In this post, we’ll reflect on everything we’ve done so far, and how we plan to build on our existing foundation for Spirit in the coming months.

Gameplay

Navigation & Camera

Naturally, our first step in development after setting up our basic input & state management (see below) was the integration of basic player navigation. Our case is a bit unusual in this regard since players will be controlling a lot of different objects, so a one-size-fits-all solution just doesn’t work for us. We’ve set up a system that lets us define movement controllers for different objects with varying degrees of deviation from standard rigidbody-based or character controller-based motion, which we will expand as we add new player-controlled objects, integrate animations, and improve the feel of our character movement.

Movement.gif

We’ve also set up a basic camera system allowing for locked and free-form camera controls, which supports a couple of different modes of operation depending on the object the player is controlling. It’s currently quite similar to the camera in our initial prototype, with some improvements to interpolation and adjustment behaviour. We’ve also developed a simple cutscene system built from our path editor utility, which has allowed us to start thinking about cinematic aspects of the game. Our next goals with the camera will be the integration of some basic physics and location-based constraints to improve gameplay feel and make it easier for the camera to adapt to different level geometries.

Camera.gif

Possession & Summoning

Possession is our core mechanic, and so it will be something that is in a constant state of expansion, refinement, and testing throughout the development process. Right now, we have a few different objects in our prototype for players to control (a large, bouncy ball, a marble, and a paper airplane), in addition to Spirit himself. Objects control quite differently depending on their physical properties – shape, size, weight, air resistance, and so on. We’re using three primary controllers at the moment for our current set of objects – a character controller-based model for Spirit, a controller we’ve designed specifically for flight, and a controller for rolling objects (the latter two are both heavily physics-based).

 

Additionally, we’re integrating our post-processing system with possession to give each object a unique aesthetic when players are “inside” the object, which we’ll be expanding on in later updates. Right now, we’re experimenting with a few different effects related to image warping and colour distortion. No psychedelics have been involved in the process, we promise.

Rescue & Collection

Players’ primary objective in Spirit is to rescue their ghostly cohorts from a devious team of paranormal exterminators, who’ve trapped them for later disposal. Spirit, who managed to evade the exterminators’ dastardly traps, will not stand by and allow innocent poltergeists to suffer in captivity. After all, a little innocent haunting never hurt anyone!

We’re going to be designing a number of different friends for Spirit to rescue, which our artist Josh will be bringing to life shortly (we’re acquiring supplies for the ritual). In the meantime, we’ve integrated the mechanic with a host of adorable magenta ghost clones, which aren’t terrifying at all, thanks to their giant yellow eyes. Ever watching. Staring. Judging.

 

Players can also collect little bits of concept art, tutorial images, and assorted photographs throughout their adventure, which they’ll be able to revisit in a little journal menu hosting their collection and detailing their current objectives.

Interaction & Dialogue

Once we’ve fleshed out our story and side characters a bit more, we’ll be integrating a fair bit of narrative, sassy conversation, and general tomfoolery to complement the game world. Right now, we’ve prototyped our system for interaction and conversation with a couple of talkative books. We’ve integrated this system with our collection mechanic, so that players can “take” things from NPCs via interaction, and we’ll be adding some basic fetch quests, riddles, and so on in the future.

Dialogue.gif

Abilities

A new feature we’ve been working on is an ability/talent tree similar to what you might find in an RPG perk system (though far less complicated!) or a game like Ori and the Blind Forest. At the moment, we’ve just finished implementing a skeleton for defining abilities, acquiring perk points, and spending those points to acquire and use abilities. We’ll be working on designing and implementing unique talents over the next few months.

AbilityUI.png
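At the data level, the skeleton is about as simple as you’d expect – something like the sketch below, where the Ability type and the perkPoints field are illustrative rather than our real types:

[System.Serializable]
public class Ability
{
    public string name;
    public int cost;              // perk points required to unlock
    public Ability prerequisite;  // must be unlocked first (may be null)
    public bool unlocked;
}

public bool TryUnlock(Ability ability)
{
    bool prereqMet = ability.prerequisite == null || ability.prerequisite.unlocked;
    if (ability.unlocked || !prereqMet || perkPoints < ability.cost)
        return false;

    perkPoints -= ability.cost;   // spend the points and grant the ability
    ability.unlocked = true;
    return true;
}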

Application Management & State Saving

I like to have the application back-end up and running before taking on almost anything else, so that we can switch between scenes and deal with global GameObjects effectively. This helps us avoid snafus like being unable to test state transitions, getting tangled up with persistent objects, and so on. Thus, our app manager was one of the first things we worked on, and we’ve expanded it steadily to accommodate new features as necessary.
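If you’re curious, the core of our manager follows the familiar persistent-singleton pattern, which in simplified form looks like:

using UnityEngine;

public class AppManager : MonoBehaviour
{
    public static AppManager Instance { get; private set; }

    void Awake()
    {
        if (Instance != null && Instance != this)
        {
            Destroy(gameObject);   // a manager already survived from a previous scene
            return;
        }
        Instance = this;
        DontDestroyOnLoad(gameObject);   // persist across scene transitions
    }
}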

We’ve also been extending our save system to support better player data management, improved file handling, and the ability and collection mechanics.

Input

Input was another of our initial areas of focus, as we wanted to develop a custom wrapper for Unity’s input system that allows us to query input based on actions defined outside of Unity’s input manager. We did this so that we can build a system for players to rebind their inputs effectively in-game, rather than having to rely on the Unity launcher. Furthermore, this leaves us the flexibility to import custom input plugins if we want to integrate support for different controllers or improved input polling in the future.
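The shape of the wrapper is roughly as follows – a heavily simplified sketch, with hard-coded defaults standing in for our serialized binding data:

using System.Collections.Generic;
using UnityEngine;

public static class GameInput
{
    // Action names map to keys; gamepad bindings would live in a parallel table.
    private static readonly Dictionary<string, KeyCode> bindings =
        new Dictionary<string, KeyCode>
        {
            { "Jump", KeyCode.Space },
            { "Interact", KeyCode.E },
        };

    // Gameplay code queries actions, never raw keys.
    public static bool GetActionDown(string action)
    {
        KeyCode key;
        return bindings.TryGetValue(action, out key) && Input.GetKeyDown(key);
    }

    // In-game rebinding just swaps the table entry (persisted elsewhere).
    public static void Rebind(string action, KeyCode newKey)
    {
        bindings[action] = newKey;
    }
}

Gameplay code then asks for GameInput.GetActionDown("Jump") rather than a specific key, which keeps rebinding trivial and leaves the door open for custom input plugins down the road.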

User Interface

Our UI is largely prototypical for now, with many placeholder assets and sprites taken from our older iterations. However, we’ve fleshed out the functionality of the HUD, menus, and hubworld, and we’ve built a solid foundation for our redesign of UI elements, which we’ll be working on soon. We’ve also spent some time wrestling with Unity’s default UI navigation, to ensure the best experience for players using a gamepad.

 

Animation & Sound

Our path editor has been serving us well, and we plan to use it as a tool to help animate obstacles, characters, and visual effects once we’ve finalized our level designs. We’ll have the all-new Spirit character model and animations within the next few weeks, but for now, we’ve integrated our old animations into Unity’s animation system, with a small bit of customization built on top for our gameplay needs. We’ll be extending this system as we continue to refine our character movement and generate new designs. For now, little old Spirit still looks pretty adorable, though.

We’ve also integrated some of our old sound effects and music, and we’re working on balancing and extending our simple sound system to better handle transitions and the overlay of multiple effects. We’ll also be working with some brand-new editing software and digital instruments soon, so stay tuned for music updates!

 

What’s Next

Our next major priority is revamping Spirit’s model and animations, and updating our navigation code to ensure a great platforming experience for players. From there, we’ll be refining our core mechanics and working on level design and asset creation, before drilling down into our puzzle design and adding depth to the game. We’re having a great time working on Spirit and we really hope that you’ll enjoy it when the time comes!

UnderConstruction