The office I work in has just run a Game Jam, so I took the risk of humiliating myself by entering against a bunch of coders when I can hardly rub two bits of JS together. What made it more fun was that I chose to do it in 2D and use C#, a language I’d never used before.
As it’s a running gag that my surname gets spelled Kirby rather than Kirkby, I went with the idea of creating a tiny me that looks like the Nintendo character. I created all of the in-game graphics in Photoshop and then imported them into Unity. It was an interesting little project, and I can think of a few things to tweak to improve it if I ever find the motivation.
To make the second video I shot some wobbly footage of a table, then motion tracked it in After Effects to create a 3D camera. The camera was imported into Cinema 4D, where the 3D scene was created, and that was then brought back into After Effects using Cineware rather than rendering the footage out of C4D. This meant I was able to tweak the project on the fly rather than re-render everything every time I made a change.
Finally, just to prove that sometimes my work and home lives overlap a bit, I was asked to make one of the awards for the Jam. So, I present the “People’s Choice Award Golden Joystick”, made from an old Atari joystick sprayed gold.
The winner of both the main competition and the People’s Choice award was Clone Attack, and well done to all who entered. I will try and grab some links to everyone’s game and add them below when I can.
Monday night (May 27th) was FE Suffolk night and I’d volunteered to give a talk about my use of Cinema 4D and how I have used it to make content for websites, games, videos and other such tomfoolery. To try and make it interesting to the web people in the audience, who have probably never been near a NURBS or a metaball in their lives, I thought it would be more fun to demo both Cinema 4D and Unity to give an idea of how a 3D app can be slotted into a workflow. That was the theory anyway but, as the saying goes, the best laid plans…
I thought a good way to grab the audience’s attention would be to build an iconic object from a well-known game, so I picked the weighted companion cube, simply because it’s a cube shape and so works well for going from a primitive up to something more complex. I built the cube on a Sunday afternoon the week before the talk and spent a few hours over the next couple of nights putting the texture together while having some fun with the model. This shows my step-by-step process, starting with a primitive cube. The second one is a demo showing a default cube with its standard UV and how simple it is to drop a texture on to get a basic object textured. The third cube is me using a boolean to build some of the shape up. Yeah yeah yeah, booleans are not the best way to work, but sometimes they can be used to get good results. The fourth cube is the final thing cleaned up, and five is the final textured cube with the UVs cleaned up, as the process of modelling the cube creates some horrible overlapping on the UV. I would go into depth on what some of the terms are, but I doubt I could explain them well enough without a load of pictures.
A quick video was rendered up to show an example of dynamics.
I then edited an existing Unity sandbox I’d made to include the cube, along with a script to interact with it (pick it up and chuck it), but I left that as a backup as I really wanted to demo how to import objects from C4D into Unity and then work with them. The landscape was built using a selection of the default items and scripts within Unity, but some of the assets are mine, such as the crate, the rather crude toilet (crude model rather than crude as in rude) and the picture of famed sci-fi author Isaac Asimov, which is something I don’t really think I can explain. Finally I installed the Unity player onto my laptop, tested it, copied all the files over and then set off for the talk.
Not like this:
So, not one to be daunted too much, I shuffled on and managed to go through a few of the steps I took in creating the cube, explaining a bit about UV mapping along the way. I should add that while UV mapping is a pretty important step when trying to make a good model, it’s also one of the most mind-numbing processes ever developed by man, and it often takes me the longest out of any stage of making something as I tend to lose the will to live while UVing.
The biggest hurdle came when trying to work with Unity, as I couldn’t find any of the stuff I wanted to talk through properly, and then, to rub it in further, the object imported fine into Unity but refused to be dropped into the scene. I ended up going straight to the finished version and back-pedalling through the steps to show how I added a box collider and a rigidbody, then applied the script to the scene.
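For anyone curious what the pick-up-and-chuck script roughly looks like, here’s a minimal sketch of the idea rather than my exact code. It assumes the cube has a Rigidbody and a Collider (as above) and that the script sits on the player camera; all the names and numbers are placeholders:

```csharp
using UnityEngine;

// Rough sketch of a pick-up-and-throw script (not the exact demo code).
// Assumes the cube has a Rigidbody and a box collider, and that this
// component is attached to the player's camera.
public class PickupThrow : MonoBehaviour
{
    public float reach = 3f;        // how far away we can grab from
    public float holdDistance = 2f; // how far in front of the camera the cube floats
    public float throwForce = 10f;  // oomph applied when chucking it

    private Rigidbody held;

    void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            if (held == null)
            {
                // Try to grab whatever physics object is in front of the camera
                RaycastHit hit;
                if (Physics.Raycast(transform.position, transform.forward, out hit, reach)
                    && hit.rigidbody != null)
                {
                    held = hit.rigidbody;
                    held.useGravity = false;
                }
            }
            else
            {
                // Chuck it: gravity back on, shove it forward
                held.useGravity = true;
                held.AddForce(transform.forward * throwForce, ForceMode.Impulse);
                held = null;
            }
        }

        if (held != null)
        {
            // While held, keep the object floating in front of the camera
            held.MovePosition(transform.position + transform.forward * holdDistance);
        }
    }
}
```

The box collider is what lets the raycast find the cube, and the rigidbody is what lets it be shoved about, which is why both had to be added before the script would do anything.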
The test scene worked fine in Unity but fell over when I exported it and ran it, as it turns out there’s an export-for-offline mode. Well, I didn’t know, did I? It seems you need a whole load of JS nonsense to make it work, and without an internet connection it just fails, so there’s a little tickbox for offline mode that pumps out the needed files. You live and learn.
In the end I think I got there, just about, but I think GLaDOS had a hand in a few of the things that threw me off.
A big thanks to Kerry Buckley for taking a photo of me mid ramble.
If anyone has any questions, feel free to chip in below but finally… here’s the link to the scene: FE Suffolk Unity Demo
There was a bit of banter on Twitter about DOF in Cinema 4D, and I have to say I’ve never had really good results with it. Most people seem to complain about it not working with fur, reflections and other post effects, which are common issues from what I gather, but my reason for not using it was always time. It takes too long to render out the effect, test to see if it looks OK, tweak and re-render. So, years ago, while I was still working in video, I stumbled across the RPF file format. This was in the days before Twitter, blogging and general social sharing of info, so it was a shocker at the time just to find the info you needed. RPF files store a lot of info: you end up with a set of files that contain not only your scene and alpha but also some 3D data. What I will attempt to demo is how to make use of this info. I admit that this may not be the best workflow, and several people have mentioned other tricks that I may play with, but this is to show how I’ve done things in the past.
So, enough blathering. The first thing is to put together a basic scene to demo how the RPF file works. Here I’ve got a bunch of cubes stretching away, a couple of lights and a camera. Here’s the first thing to note: there is a light with a brightness of 0. I’ve called it ‘place 1’ and it’s tucked away between the 4th and 5th cubes. More on that later.
Next I animate the camera, go to the render settings and fill in the options. This is our second take-note part: see the ‘Compositing Project File’ options? Ever seen that before? No? Great Scott, man, you’ve missed out then. Tick the save box and the 3D data box, then click the save project file option, select your destination and hit render.
What you get is a bunch of RPF files and a project file with the extension .aec. ‘What’s that then?’ I hear you cry (actually everyone is snoring, so I’ll shout it myself).
WHAT’S THAT THEN?
Well, it’s an After Effects project file, but if you try to open it in After Effects right now it will just throw a fit, get moody and ignore it, so you need to install the AE plugin that comes with Cinema 4D.
Go to the Cinema 4D folder, find the ‘Exchange Plugins’ folder > After Effects > win/osx folder (pick the one for your system), and within there should be CS folders with plugins for each version of CS. Open up yours, unzip the file within and place it in your AE plugins folder.
Good. Now start up AE, go to Import > File and pick the .aec file, NOT the RPF files. I know the normal instinct is to import an image sequence, but this is the magic part.
What imports (if your plugin is in place) is a folder and a couple of comps. One of them will be the scene, another will be called something like Camera+Light, and the third thing will be your image sequence of the RPF files. Time to have some fun.
Double-click the Camera+Light comp and you will see each of your lights and a camera as a layer. Each item will be labelled as you left it in Cinema 4D, so some logic helps, and you should find your placeholders there. Here I’ve imported a random image into the comp, made it 3D and copied the x,y,z position from ‘place 1’ into twittertop’s position (0,0,785.158)… And lo, did the JPG take on the exact position of the placeholder.
It’s also worth remembering that you can edit the lights within AE and change the colour, brightness etc… It won’t affect your rendered RPF files, but it will affect anything you add to the scene in AE.
Now we switch back to our main scene (‘untitled 1’ on the screenshots), and if you scrub the timeline back and forth you will see the camera pan around the JPG as if it’s part of the scene. Well, that’s a bit rubbish, isn’t it? It just appears over the top of the scene. Pointless at this stage, but let’s dig a little deeper into AE.
Firstly, put the Camera+Light layer below the RPF layer. Select the RPF layer, go to Effect > 3D Channel > Depth Matte and apply it to the sequence. You may find your layer vanishes in the preview window, but this is fine; drop down the effects on the RPF layer and then drop down the Depth Matte. Now comes the magic: you need to tweak the depth number until you see something happen. The number sometimes has to be quite high, and in this case it’s -2600. As you change the number you should see the cubes vanish as the depth matte moves down the depth. Oooooo, it’s like some kind of evil wizardry that’s stealing the cubes…
Scrub the timeline back and forth and you may notice that the depth matte isn’t perfect if your camera moves. It’s working on a fixed distance from the camera, so you may need to keyframe the effect if your camera moves.
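One way to dodge the keyframing entirely, which I’ll flag as a suggestion rather than what I did here, is to drop an expression on the Depth Matte’s depth property so it tracks the camera-to-placeholder distance every frame. Alt-click (Option-click on a Mac) the property’s stopwatch and put in something along these lines; the layer names are from this demo, so swap in your own:

```javascript
// After Effects expression for the Depth Matte "Depth" property.
// Recomputes the matte plane as the camera-to-placeholder distance
// on every frame, so it follows a moving camera with no keyframes.
// "Camera" and "place 1" are the layer names from this demo scene.
var cam = thisComp.layer("Camera");
var target = thisComp.layer("place 1");
// Negated to match the negative depth values used above (e.g. -2600);
// flip the sign if your RPF depth runs the other way.
-length(cam.position, target.position);
```

The expression returns its last value, so the depth updates itself as the camera animates, and the duplicated, inverted layer picks up the same trick if you paste the expression across.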
Once you have this set up you want to have the other cubes appear, so select the RPF layer, duplicate it (Cmd+D on a Mac; buggered if I know on a PC) and move it below the Camera+Light layer. Finally, invert the depth matte with the invert option (it’s that simple). We now have our image wedged in between our cubes.
Now we come to the final bit and what kicked all of this off: DOF. You can use Effect > 3D Channel > Depth of Field for this. At this point the info from the depth matte comes in handy, as if you want the DOF to be based around your imported image you simply use the same depth numbers here. Yes, you will need to keyframe, but you know how to copy and paste keyframes, don’t you? If you don’t: copy the depth matte keyframe, select the DOF focal plane property and paste it on. Copy the DOF filter and paste it onto the other RPF layer, and you’re done, apart from the mass amount of tinkering your mind is currently ticking over with.
I used this depth trick to produce the Kamppi advert a few years ago so you can see it in action on my YouTube channel.
For those who care to try out a final thing, here’s my quick project with files and stuff. You may need to tweak some of the paths, but that’s your problem, not mine. Suck it up and act like a man… or a woman. I’m not sexist.
rpf-demo project download, it’s 16MB if you’re interested. Please feel free to ask away either on here or Twitter and I will do my best to answer as best I can.