This is a work-in-progress tank battle scene; everything here is still very early.
This is a personal project I’ve recently started building for Steam/HTC Vive with room-scale and a teleporting mechanic. I’ve also been using Substance Designer and the Quixel Suite to improve my texturing workflow, and utilising UE4’s fantastic material layering to enable smart materials. Please keep in mind this is all extremely early work in progress. I have the teleport mechanic working, as well as the start of the flooring, some bedding and stand-in geo for light placement / furniture.
This is an asset I made several years ago for a PS Vita title. Left click-drag on it to tumble the model:
The goal was to create an animated crowd with customisable shirt colours without further degrading performance on extremely limited OpenGL ES 2.0 devices (e.g. the Mali-400 in the Samsung Galaxy S3) and fill-rate-bound platforms like the iPad 3.
I also had to keep the application size as small as possible (~5MB, i.e. two compressed 1024 RGBA textures) and keep shader complexity to a minimum (ES 2.0 only).
Since the PS3/360 console era there have been some great examples of dynamic crowds in modern sports games. These tend to favour dynamic imposters coupled with animated sprite-sheet shaders. These shaders use deferred render targets to relight the 2D screen-aligned quads (one per crowd member) using the normal and diffuse outputs. [There is a great write-up in GPU Pro 3, Chapter 3, where Alan Chambers describes this technique in Rugby Challenge.]
Due to limitations on our target platforms it was not possible to use deferred rendering (it requires OpenGL ES 3.0). Also, requiring an on-screen quad (2 tris) for each crowd member was too expensive, as was the texture space for the different angles imposters require. [At the time of writing (July 2016) Unity 5 is just beginning to implement GPU batching, and it still has limitations in this use case.]
Our current crowd uses simple rectangular quads per row of crowd (up to approximately 20 people wide). [To tune static batching I did a lot of performance profiling, testing single quads (lower tri count per batch and better frustum culling) against the entire crowd divided into six meshes (more quads wasted off screen but far fewer batches). Static batching with few batches was far more performance friendly, so we’re currently using six meshes for the crowd.]
Concept Phase 1 (animated crowd):
- Sculpt and Rig some characters over a couple of days to use as crowd members.
- Create a basic looping clapping / idle animation [I experimented with Brekel/Kinect2.0 at this point]
- Setup materials and lighting
- Render out a PNG RGBA beauty-pass image sequence
- Create a sprite-sheet and assign to existing crowd quads (save time here by reusing existing assets / placement)
Concept Phase 2 (tinting):
Create RGBA masks using specific material settings, then recombine the sprite-sheets, extracting data where necessary with custom shaders.
I knew that creating an animated sprite-sheet covering the entire crowd quads would either require far too much texture resolution (stressing both app size and hardware) or result in sub-par texel resolution. This meant creating a shader that could both run a sprite-sheet AND tile it within the same quad.
Also, because we wanted to tint the colour of our crowd’s shirts, the crowd sprite-sheet needed animated masks for the shirt areas, plus different masks for variation (we don’t want every crowd member to have the same t-shirt colour, even if it’s tinted at runtime).
Whilst ideally there would be a lot of shirt colour variation, each mask required a texture channel (some masks were created at runtime by inverting existing masks with shader maths, but this had to be kept to a minimum). Another issue is that whilst a white base in the original sprite-sheet could be tinted using a mask, the result would not be pleasing up close (there would be no definition on the shirts). To remedy this I used a greyscale AO map (to multiply colour against) packed into the original sprite-sheet’s red channel.
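The AO-multiply tinting can be sketched per pixel. This is a minimal Python sketch of the idea only; all the names here are hypothetical, and in practice this maths lives in the shader:

```python
def tint_pixel(ao, shirt_mask, base_grey, tint_rgb):
    """Blend a runtime tint into a masked shirt region of one pixel.

    ao         -- greyscale AO term packed in the sheet's red channel (0..1)
    shirt_mask -- 0..1 mask selecting this shirt's area
    base_grey  -- untinted sprite luminance outside the mask (0..1)
    tint_rgb   -- team colour chosen at runtime

    Illustrative names; the shipped version is shader code.
    """
    return tuple(
        # Inside the mask use the tint colour, outside keep the base
        # greyscale, then multiply everything by AO for definition.
        ao * (shirt_mask * c + (1.0 - shirt_mask) * base_grey)
        for c in tint_rgb
    )
```

Multiplying by the packed AO is what keeps shirt folds and shading visible even when a flat team colour is applied at runtime.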
To understand why tiling the sprite-sheet was a problem, it’s important to realise that the custom sprite-sheet shader is essentially offsetting X pixels (the width of one cell) per unit T (time), and Y pixels (one row) every N frames (the number of cells per row). Tiling is an issue because the shader is already doing offset operations and does everything in the same pass. The best way around this would be a second pass; however, that isn’t possible in Shader Forge, and with all the art assets left to sculpt/texture/animate/light/render, I decided to stick with Shader Forge and look for a way to ‘fake’ a second shader pass.
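The cell-selection maths can be sketched on the CPU. A minimal Python sketch of the idea (names and layout are illustrative, not the actual Shader Forge graph):

```python
def spritesheet_uv(u, v, t, cols, rows, fps):
    """Map a quad's base UV into the current cell of an animated sprite sheet.

    Picks a frame from time t, then scales and offsets the base UV so the
    quad samples only that cell. Illustrative parameters, not the real graph.
    """
    frame = int(t * fps) % (cols * rows)
    col = frame % cols    # X offset: step one cell to the right per frame
    row = frame // cols   # Y offset: drop a row every `cols` frames
    cell_u = (u + col) / cols
    cell_v = (v + row) / rows
    return cell_u, cell_v
```

The conflict with tiling is visible here: tiling needs the base UV wrapped (e.g. `u * tiles` taken modulo 1) *before* this cell mapping, but a single-pass shader is already consuming the UV for the frame offsets, so the two offset operations fight over the same coordinates.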
As a workaround for being limited to a single pass, I split the shader workload into two separate shaders (less maths for older GPUs as well): the first shader does all the colour, timing and other modifications and draws onto a transparent quad outside the stadium (where no one will see it). This quad is then captured to a Render Texture (rTex) by another camera. That rTex can easily be tiled, offset and used like any normal texture.
Creating the Art
For the crowd I wanted to reuse as many assets as possible. I’d already sculpted the heads, arms and a t-shirt for our football players and created a customised biped rig used in game. I just needed an alternative top (I sculpted a quick shirt), some trousers and really simple boots. I also added some geo in 3ds Max for a beanie and a scarf, and ran a cloth sim on the scarf (a quick win for adding movement and shape contrast).
The crowd members were made in Zbrush and rigged, shaded and rendered in 3dsmax (using Mental Ray / IBL). No texturing was needed due to how small they’d be (AO / good lighting would provide enough detail). I reused the rig I’d created for the main players and skin-wrapped the new geo to it. I’d been experimenting with Kinect 2.0 and Brekel, and so made some (somewhat embarrassing) captures of myself cheering / clapping etc. These came through extremely roughly but were fun to do and provided a better starting point than from scratch.
For rendering I set up a few MR shaders for cloth and a very basic skin shader from a default node (the Fast SSS+ type shaders were unnecessary here). I used IBL for the ambient light and a strong key light. I rendered the animation out in fourths (one frame in every four) to allow more texel resolution on the final sheet. After trying some (painfully slow) Photoshop action scripts off GitHub, I used the premium version of TexturePacker, which is superb for packing sprite sheets.
Proof of Concept (Unity)
Phase 2 – Tinting using packed RGBA masks (and Vertex Colours / UV Coordinate Masking)
At this stage the proof of concept was working and we were quite pleased with the results. I decided we needed a lot more variety, though, but we were already using two RGBA textures. So I exposed a float variable as a slider called ‘Random’, which checks the current pixel’s U value (its horizontal position within the currently displayed animation cell) and compares it to a vertex colour.
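That comparison can be sketched in a few lines. A loose Python sketch of the selection logic, with purely hypothetical names, not the actual node graph:

```python
def use_alt_mask(cell_u, vertex_r, random_slider):
    """Decide whether a given crowd member flips to the alternate shirt mask.

    cell_u        -- horizontal position inside the current animation cell (0..1)
    vertex_r      -- per-quad value painted into vertex colour at author time
    random_slider -- the exposed 'Random' material slider

    Scaling the cell U by the slider and comparing against the painted
    vertex colour means different members cross the threshold at
    different slider values, giving variation without extra textures.
    """
    return (cell_u * random_slider) > vertex_r
```

The appeal of this trick is that it costs one comparison and one multiply in the shader, rather than any additional mask channels.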
Cloth Sim Sprite-Sheets
There’s a lot I’d like to improve with more time and resources. I think I’ve achieved the goal of adding a lot more ‘life’ to the crowd, mainly through movement, and perhaps more importantly through the crowd being representative of the player’s team: as the player’s team does better, more members of the crowd wear the player’s chosen dominant colours.
Obviously this approach cannot compete with true imposters, and viewing the crowd from above or at oblique angles destroys the illusion. I did some tests using a shader to do perspective warping based on the view direction (similar to Photoshop’s perspective warp tool). This actually worked pretty well, though it required logic to determine which side of the coordinate system you were looking down. I got a proof of concept working; however, the risk of running extra math ops on really old mobile phones made the extra expense too much of an unnecessary trade-off.
Extra animation sheets could be added – say, one for the crowd jumping up in the air to celebrate. At the moment we use a scoreboard animation that I created in 3dsmax and edited further in After Effects.
At the game studio where I work, we use Unity3D, which has no native retargeting (until Mecanim, which we cannot currently use).
This meant that (as the artist/animator) every time I made changes to the proportions of the rig (fairly often, due to tight deadlines and iteration), all the animations had to be loaded one by one and have their figure (.fig) files reset. Then they had to be exported to FBX (again, one at a time). This was a very time-consuming process (especially when cleaning up thousands of mocap animations), so I created this script to make it effortless and, more importantly, fast.
It now quickly runs through a batch input folder (c:\batch\in) and applies all the retargeting automatically.
The video here demos this. I hope it helps someone else. This is my first attempt at MAXScript, so I’m sure it could be improved, but it has made my life much nicer 🙂
You can download the script at ScriptSpot here http://www.scriptspot.com/3ds-max/scripts/biped-bip-animation-batch-retargeting-fbx-export
2 Day Freelance Job – Design/Sculpt/LowPoly/Rig/Setup(Unreal Engine)
A freelance job done over two days (short notice) and with no sleep (I was asked to make the rig on the day of the Brixton Academy show). Skrillex used the creature (a bionic grasshopper) for the finale of his whole 2012 European tour. Despite the fact that I can see many, many flaws (especially on the deformation side), I’m happy to have worked on this exciting project.