Brick Street View



Had some more street view panorama fun a couple of weeks ago, this time combining it with virtual LEGO. If you haven't seen the site, go there first and I'll tell you more afterwards:

A short walkthrough of what’s going on:


To make the map look like a giant baseplate, I use a tiled transparent texture overlay. When you move the center of the map, the background-position of the layer is updated to match the movement, and when zooming, the background-size is resized. On top of that I first add a gradient fill for some shininess, and then the three.js layer. This contains some trees and bushes, and also the landmarks at predefined locations (found in the shortcuts popup, marked with a star symbol in the top menu). Every object placed on the map has a corresponding google.maps.Marker. When needed (on map-center change or zooming) the marker position is transformed into 3D space to update the meshes. It's limited to desktop users because of a bug on touch-enabled devices that prevents the marker positions from updating while dragging (or rather, the map projection that is used internally to calculate the container position in pixels). The result is that the meshes jump to the correct position only on the touchend event. To make the objects look even more attached to the map, I added soft shadows.
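The marker-to-world math is plain Web Mercator. Here is a sketch of what the projection does internally (an assumption: the real code presumably relies on map.getProjection().fromLatLngToPoint and the current zoom):

```javascript
// Google Maps' "world" space is 256x256 units at zoom 0;
// a pixel position at zoom z is the world point scaled by 2^z.
function latLngToWorldPoint(lat, lng) {
  var siny = Math.sin(lat * Math.PI / 180);
  siny = Math.min(Math.max(siny, -0.9999), 0.9999); // avoid infinity at the poles
  return {
    x: 256 * (0.5 + lng / 360),
    y: 256 * (0.5 - Math.log((1 + siny) / (1 - siny)) / (4 * Math.PI))
  };
}

function worldToPixel(point, zoom) {
  var scale = Math.pow(2, zoom);
  return { x: point.x * scale, y: point.y * scale };
}
```

From the pixel offset relative to the map center it is then straightforward to position the corresponding mesh in the three.js scene.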

To place the trees in good spots, I use the Google Places API, searching for parks within a radius of 1500 m. To display the name of the current map center, I query the geocoder in the Google Maps library.
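The park lookup could look roughly like this (a hypothetical sketch; the request shape follows the Places library, and placeTreeAt is made up):

```javascript
// Build the nearby-search request: parks within 1500 m of the center.
function buildParkRequest(center) {
  return {
    location: center, // a google.maps.LatLng
    radius: 1500,     // metres
    type: 'park'
  };
}

// In the browser, roughly:
// var service = new google.maps.places.PlacesService(map);
// service.nearbySearch(buildParkRequest(map.getCenter()), function (results, status) {
//   if (status === google.maps.places.PlacesServiceStatus.OK) {
//     results.forEach(placeTreeAt);
//   }
// });
```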

Street View

I have to render the scene in two steps: first the background, then the rest of the scene in a second pass. This makes sure the panorama sphere is always sorted between the background and the LEGO models. The additional cost is low, since both the background and the panorama are very low poly.

Constructing the LEGO panorama

This is the core idea of the whole experiment. To be honest, the results range from 'nice idea, but…' to 'WTF'. It depends very much on the quality of the depth data and on the color of the sky. Here is the workflow for creating a panorama:

– Load the low-res panorama with a low zoom value. I use GSVPano for this, which supports high-resolution tiling, but since my goal is to pixelate the image that information is not needed. The result is returned in a canvas.

– Flood-fill the canvas along the top edge of the texture, where there is hopefully a clear blue sky (or smog, for that matter). I sample some more points vertically in the direction of streets to wipe out more sky there. I have fiddled with this back and forth to find good settings, but there is still room for improvement, like color detection or comparing against the depth/normal data. The erased pixels are replaced with a single color, for easy detection in the next step.
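A minimal sketch of the sky fill, assuming the panorama is a flat RGBA array and using a simple tolerance against the seed color (the sentinel color is made up):

```javascript
var SKY_MARKER = [255, 0, 255, 255]; // hypothetical sentinel color

function floodFillSky(pixels, width, height, seedX, seedY, tolerance) {
  var idx = function (x, y) { return (y * width + x) * 4; };
  var s = idx(seedX, seedY);
  var r0 = pixels[s], g0 = pixels[s + 1], b0 = pixels[s + 2];
  var visited = new Uint8Array(width * height);
  var stack = [[seedX, seedY]];

  while (stack.length) {
    var p = stack.pop(), x = p[0], y = p[1];
    if (x < 0 || y < 0 || x >= width || y >= height) continue;
    if (visited[y * width + x]) continue;
    visited[y * width + x] = 1;
    var i = idx(x, y);
    var diff = Math.abs(pixels[i] - r0) +
               Math.abs(pixels[i + 1] - g0) +
               Math.abs(pixels[i + 2] - b0);
    if (diff > tolerance) continue; // not sky-colored, stop here
    // Replace the pixel with the sentinel so the next step can detect it.
    pixels[i] = SKY_MARKER[0]; pixels[i + 1] = SKY_MARKER[1];
    pixels[i + 2] = SKY_MARKER[2]; pixels[i + 3] = SKY_MARKER[3];
    stack.push([x + 1, y], [x - 1, y], [x, y + 1], [x, y - 1]);
  }
}
```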

– Create a high-res canvas and draw LEGO bricks into it based on the color values of the smaller canvas. Pixels with the color defined by the flood fill are flagged as sky, and no bricks are drawn for them, nor for any bricks above a detected sky pixel. That way clouds above a detected sky pixel also get erased, so bricks don't float in the air. If you browse the site and press "C" you'll see the normal map as an overlay; pressing "X" reveals the original image. For each brick, I also check if it belongs to the ground plane (in the panorama depth data) and skip those as well.

– Three different bricks are drawn to make it less repetitive: a flat one, one with a variable rotation and one with a peg on the side. The last one is used more often when the color is closer to green, so trees and bushes get some more volume. The build is nowhere close to correct in any sense, especially when it comes to perspective, sizes or wrapping textures on spheres, so you need some imagination (or to scroll around very fast) to accept the illusion.
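The brick selection could be sketched like this (the weighting and the names are assumptions, not the real values):

```javascript
// The greener the pixel, the more likely the side-peg brick is picked,
// giving trees and bushes extra volume.
function pickBrickType(r, g, b, random) {
  random = random || Math.random;
  var greenness = Math.max(0, g - (r + b) / 2) / 255; // 0..1
  if (random() < greenness * 0.8) return 'side-peg';
  return random() < 0.5 ? 'flat' : 'rotated';
}
```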

– Then the fun starts, playing with real LEGO:


LEGO models are added, such as cars, flowers and trees. Two kinds of street baseplates are loaded, used for a crossroad in the center and for roads. These are only visible if there are links provided in the panorama metadata. Sometimes in crossings you can see a road in the imagery, but it's not linked until the next panorama in front of you, so it does not always reflect the visible roads correctly. For the 3D models I use a format called LDraw, an open standard for LEGO CAD programs that allows users to create virtual LEGO models and scenes. All parts are stored in a folder and each file can reference other parts in the library. For example, every single peg on all bricks is just a reference to the same file, called stud.dat. With software like Bricksmith and LEGO Digital Designer you can create models and export to this format. There are also loads of models available on the internet, created by the LEGO community. Since it's just a text file containing references, colors, positions and rotations, it's straightforward to parse as geometry.
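The interesting part of the format is line type 1, a subfile reference: a color code, a 3x4 placement matrix and the referenced file. A sketch of parsing such a line, following the LDraw file-format spec (error handling kept minimal):

```javascript
// Parses e.g. "1 4 0 -4 0 1 0 0 0 1 0 0 0 1 stud.dat"
function parseSubfileLine(line) {
  var t = line.trim().split(/\s+/);
  if (t[0] !== '1' || t.length < 15) return null; // not a subfile reference
  var n = t.slice(2, 14).map(function (s) { return parseFloat(s); });
  return {
    color: parseInt(t[1], 10),
    position: { x: n[0], y: n[1], z: n[2] },
    // Row-major 3x3 rotation/scale part of the placement matrix.
    matrix: [n[3], n[4], n[5], n[6], n[7], n[8], n[9], n[10], n[11]],
    file: t.slice(14).join(' ') // filenames may contain spaces
  };
}
```

Recursing into each referenced file and accumulating the transforms yields the final geometry.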


To show the LDraw models I use BRIGL, a JavaScript library made by Nicola Lugato that loads LDraw files into three.js. I modified it a bit to work with the latest revision of three.js and its material handling.

It takes some CPU power to parse the files and create the meshes. I tried implementing a WebWorker to parse the geometry, but without success; it would be nice to parse the models without freezing the browser. Using BufferGeometry would also be better for parsing time and memory. The web is perhaps not the best medium for this format, since every triangle in every piece is parsed and rendered, even those not visible, like the pegs inside a brick. In this demo I do a very simple optimisation on the landmark models: while parsing the specification I exclude parts that are inside or beneath bricks, like the inner peg that connects to the brick beneath. This reduced the triangle count and parse time by 50%.

To avoid hundreds of requests at runtime when loading the models' subparts, I use a gulp script that looks up all active models (.ldr, .mpd, .dat), recursively transforms the parts to JSON and adds them to a bundle as cached modules, so they can be required instead of loaded. The size, compared to storing the final parsed geometry as JSON, is significantly smaller, since many parts are reused across models.

For more information, visit the about page. The source code is available on GitHub.



I played around some with the physics-library cannon.js together with three.js and ended up with this Christmas card.

To show the power of iteration, this is a screenshot from the first version that I thought was almost finished, since my main goal was to play around with physics, and this is how physics demos usually look.


Each small bulb in the garland is a joint constraint. The end joints have zero mass so they stick to the facade without falling down. You can click/tap to change those attachment positions. To create the reflections, the geometry is cloned and inverted, then added to a second scene which renders to a separate render target with blur applied. The rest of the scene is then rendered on top of the first pass without clearing the colors, just the depth buffer. It was a bit tricky to render in the correct order and clear the right things, and I'm sure there is a more optimised way of doing this without the need for two scenes.

A transparent layer is also added at ground level in the "above" scene to add some bluish diffuse and catch the lighting coming from the stars. Each star has a point light as a child. A tiled diffuse map and a normal map are assigned to the ground floor to add detail and an interesting surface. All geometry besides the star and the sign is created procedurally with just primitives (box, plane and sphere). The facade got a specular map so the light could be more reflective in the windows. A bump map adds some depth as well.

Then finally: I couldn't get the Christmas feeling without the mighty bloom filter pass 🙂

Try messing around with the attachment points of the garland to see the physics, lighting and reflections in action. Notice how the wind affects the lights and the snow; it's actually an impulse force applied to the scene each frame.

Psst! Even though it’s Christmas, there is an easter egg somewhere. Can you find it?

Open Card

Interested in source? Here is the Github repo

Urban Jungle Street View – Behind the scenes


Urban Jungle Street View is an experiment I did to play with the depth data in Google Street View. I knew the information was there, but had never successfully managed to parse it. Then, some weeks ago, I found a library that did it for me. Around the same time I started playing "The Last of Us" from Naughty Dog. About 15 minutes in, stunned by the environments, I just had to try out this idea instead (damn you, inspiration).


Here is a screenshot of the final result:

I posted the link on Twitter and the response was overwhelming. About 200,000 visits the first week, and it reached the front page on several sites. Even television picked it up; I got interviewed by the Japanese TV show "Sukkiri!". Something triggers the inner adventurer in the audience for sure 🙂

2D Map

Using the Google Maps API is so easy and powerful. So much data in your hands. For the style of the map, I use the Styled Maps Wizard to see what different kinds of settings there are and export them as JSON. One tip is to turn off all features first with

[{
  "stylers": [
    { "visibility": "off" }
  ]
}]

and after that turn on the things you want to show and style.

Street view coverage layer

I use the StreetViewCoverageLayer while dragging the little guy to show all roads available in Street View. It's not just roads, but also indoor places and special sites like the Taj Mahal. I like the effect in Google Maps when holding Pegman over a road and he snaps to it with a direction arrow aligned to the road. But that functionality is currently only available on the Google Maps website, not in API version 3. So I made a custom version that works not nearly as well, but close enough. You can calculate the pixel position and the tile index, then load the coverage tile manually, write it to a canvas and read the pixel at the corresponding local position. If the pixel is not transparent, we're hovering over a Street View-enabled road or place. I also inspect the color to avoid changing the state when hovering over an indoor panorama, which is orange. You can still drop him, but it adds a visual cue that it's not optimal to drop him there. Sometimes you end up inside a building anyway, but I have not found a way to detect that without loading the data first.
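The tile math is straightforward. A sketch of the lookup (assuming Google's 256-pixel tiles; the canvas read is commented since it needs a browser):

```javascript
// Which tile a world-pixel position falls in, and where inside that tile.
function pixelToTile(pixelX, pixelY, tileSize) {
  tileSize = tileSize || 256;
  return {
    tileX: Math.floor(pixelX / tileSize),
    tileY: Math.floor(pixelY / tileSize),
    localX: pixelX % tileSize,
    localY: pixelY % tileSize
  };
}

// The coverage tile is then drawn to a canvas and sampled:
// ctx.drawImage(tileImage, 0, 0);
// var px = ctx.getImageData(t.localX, t.localY, 1, 1).data;
// px[3] > 0 means Street View coverage; an orange-ish color means indoor.
```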

Street View

I use a library called GSVPano, created by Jaume Sanchez, to show the panoramas with WebGL and three.js (the source has since been removed from the repo). To access the depth and normal data I used a different library; read more about that here.

The data for each panorama is not a depth map directly (from the API), but is stored as a Base64-encoded string, to save bandwidth and requests, and in a form that makes it quick to parse for its actual purpose: finding the normal (the direction of a surface) and the distance at a point in the panorama to aid the navigation UI. If you hover the ground with the cursor in Google Street View you get the circle cursor, and if you point at a building you get a rectangle aligned with the surface of the building. I use that behaviour as well when plotting out the foliage, but I also pass the data to the shader as textures.

In the demo, the normal map is used in the shader to make the ground transparent, so I can place a plane under the panorama sphere. The ground plane is then visible through the transparent "hole" (maybe geometry could be created instead somehow). The depth map is used to interpolate from transparent to opaque ground; the depth data has a limited (visible) range, so it would otherwise end in a strange-looking hard edge.

I can also add a little fog to the environment. The planes in the map do not exactly match the real-world geometry, so there can't be too much fog or the edges around the buildings become visible. In the end I removed it almost completely, since I liked a sunnier weather, but here is a screenshot with some more added:



Foliage is sprites (flat billboards) or simple geometry positioned in 3D space. To place them, I sample random positions in a predefined area of the texture. First, to get a sense of how a flat image is wrapped around a sphere, look at this image (from Wikipedia):

Here you see the depth data parsed and displayed as a normal texture. The dots are where I calculated things should grow (I describe that more below).


When selecting a point in the texture you get the position on a sphere like this:

// u and v range between 0 and 1
var lat = u * 180 - 90;
var lon = v * 360 - 180;
var r = Math.cos(DEG_TO_RAD * lat);

var pos = new THREE.Vector3();
pos.x = r * Math.cos(DEG_TO_RAD * lon);
pos.y = Math.sin(DEG_TO_RAD * lat);
pos.z = r * Math.sin(DEG_TO_RAD * lon);

But if all objects are placed at a fixed distance they will all have the same scale, so the distance from the camera is important: the object is pushed away from the camera along its direction. That distance is stored in the depth data, so the unit vector is simply scaled by it (pos.multiplyScalar(distance)).

If I want the object to point at the camera, like the trees, it's just a matter of making it face the camera position, e.g. mesh.lookAt(camera.position); sprites do this automatically.

But if the object should align with the surface it's supposed to stick to, we use the normal value, the direction the wall is facing:

var v = pos.clone();
v.add( pointData.normal );


Here is a demo with this setting. To get a true 3D perspective, the sprites could be replaced with real geometry, but leaves are quite forgiving of perspective issues, and sprites are much more performant. If the angle to the camera is too big, it looks really strange though.

If you click or tap the scene you can put more foliage into it. Then it's just the same process reversed: get the collision point on the sphere, convert it to UV space (0-1) on the texture, test against the data and get the preferred plant for that position, if any.
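The reverse mapping is just the inverse of the sphere math above; a sketch:

```javascript
// Take a unit vector on the panorama sphere and recover the 0-1 uv.
function pointToUV(pos) {
  var RAD_TO_DEG = 180 / Math.PI;
  var lat = Math.asin(pos.y) * RAD_TO_DEG;
  var lon = Math.atan2(pos.z, pos.x) * RAD_TO_DEG;
  return {
    u: (lat + 90) / 180,
    v: (lon + 180) / 360
  };
}
```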

Fine tuning

Every time you visit a place it's randomly generated. There are some heuristics that give a smarter distribution of the plants. If the calculated direction points upwards, grass can grow on the ground. If it's perpendicular to the ground, we can put some wall-foliage sprites there. To hide the seam between the ground and the walls I use a trick to find that edge: the ground always has the same normal, shown as the pink color above. I iterate over the pixels in the texture from the bottom up to find the first pixel with a different color than the ground; that is where the edge is. I put a grass sprite there and move on to the next column of pixels. To enhance the feeling of walls even further, another type of plant is introduced: vines. These have stalks made of 3D geometry (a modified ExtrudeGeometry extruded along a bezier path, or "spline") with sprite-based leaves. To find a good place for them to grow I make another lookup in the normal map, starting from the previously detected edge. About 50 pixels up from each of those points I test if the value is still the same as the wall I found first. If it's a different color, it's probably a low wall and not an optimal place for a vine to grow; otherwise I can place one there, but not too many, so I also test for a minimum distance between them. It's buggy sometimes, but close enough for this demo. A couple of trees are also inserted as sprites pointing toward the camera. They could be more procedural for a more interesting result.
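The edge search can be sketched like this (assumptions: the normal map is a flat RGBA array and the ground encodes to one known color):

```javascript
// Walk every column from the bottom up; the first pixel that is not
// ground-colored is where the wall meets the ground.
function findGroundEdges(normals, width, height, groundColor, tolerance) {
  var edges = [];
  for (var x = 0; x < width; x++) {
    for (var y = height - 1; y >= 0; y--) {
      var i = (y * width + x) * 4;
      var diff = Math.abs(normals[i] - groundColor[0]) +
                 Math.abs(normals[i + 1] - groundColor[1]) +
                 Math.abs(normals[i + 2] - groundColor[2]);
      if (diff > tolerance) {
        edges.push({ x: x, y: y });
        break;
      }
    }
  }
  return edges;
}
```

Each returned point is a candidate for a grass sprite, and the vine test then samples the same map about 50 pixels further up.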


The links to nearby panoramas are included in the API result, so it's just a matter of adding the arrows and detecting interactions with THREE.Raycaster (example). The transition effect between locations is part of the post-effects pass; I just turn up the amount of blur in a blur pass…

Post effects

Again, Jaume Sanchez provides valuable source code in the form of Wagner, a post-effect composer for three.js. To blend the panorama layer and the 3D layer together, a couple of effects are added to the end result: first a bloom filter, which adds glow to bright areas, then a dirt pass, which adds dust and raindrops to the composition. The final touch is a little camera movement, so the scene looks more alive and less like a still image. There is definitely room for improvement, especially in the color balance of the scene, and with more advanced geometry and procedurally generated plants.

Try it now

Now that that is out of the system, let's finally return to "The Last of Us"…



This was a demo I did for the advent calendar: create a number of snowballs and put them together into a snowman. The trail is created by drawing into a canvas, which is sent to the ground shader as a height map. In the vertex shader the vertices are displaced along the y-axis, and in the fragment shader it is used for bump mapping. The snow also becomes a bit darker below a threshold height. The shape of the track is defined by the radial gradient used when drawing into the canvas. I like that you can run over existing tracks and leave a bit of a trail in the current direction; that adds more depth than a static layer painted on top.
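The real demo draws a radial gradient into a canvas; the same falloff can be sketched directly on a plain height map (array layout and parameters are assumptions):

```javascript
// Stamp a circular depression into a row-major height map.
function stampTrail(heights, width, height, cx, cy, radius, depth) {
  for (var y = Math.max(0, Math.floor(cy - radius)); y <= Math.min(height - 1, cy + radius); y++) {
    for (var x = Math.max(0, Math.floor(cx - radius)); x <= Math.min(width - 1, cx + radius); x++) {
      var d = Math.sqrt((x - cx) * (x - cx) + (y - cy) * (y - cy));
      if (d > radius) continue;
      // Radial falloff: deepest at the center, like the gradient stamp.
      var press = depth * (1 - d / radius);
      heights[y * width + x] = Math.min(heights[y * width + x], -press);
    }
  }
}
```

Taking the minimum is what lets a new track cross an old one without erasing the deeper parts.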

In an earlier version the ball geometry was affected by actually touching the snow, extruding vertices once per rotation if a vertex was below the snow. But when the focus shifted to creating a snowman, it made more sense to have perfectly round snowballs. I'm not so happy about the material on the snowballs; the UV wrapping of the spherical texture creates the familiar pinching artefacts at the poles. There was not enough time to look into this back then. A procedural approach would have been nice to avoid it, or not using UV coordinates at all and instead calculating them in a shader. Like these guys (@thespite, @claudioguglieri, @Sejnulla), who made a nice site around the same time with a much better snow shader. That shader can also be seen here.

Collision detection is just a simple distance test between the spheres and the bounding box.
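That test is cheap enough to run every frame; as a sketch:

```javascript
// Two spheres overlap when the squared distance between their centers
// is less than the squared sum of their radii (avoids the sqrt).
function spheresCollide(a, b) {
  var dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
  var r = a.radius + b.radius;
  return dx * dx + dy * dy + dz * dz < r * r;
}
```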

Try it out here.

View the source at github.

Cube Slam – Behind the THREE.Scene()


Cube Slam is an online game conceived, designed and built for Google Creative Lab that showcases the possibilities of the WebRTC API in a playful manner. We at North Kingdom, together with Public Class and Dinahmoe, shaped the creative concept, user experience, technical direction, design, art direction and production. It's a Pong-like game taken to the next level: we added physics, obstacles, extras and effects. But most importantly, you can invite friends to play face to face, peer to peer, in real time with your webcam. The game logic communicates via RTCDataChannels, and you and your friend can see each other inside the WebGL-powered world with the help of getUserMedia and MediaStream. If you want to play alone, we have created Bob, our AI bear. Try to beat him; the longer you play, the better he becomes. And you as well.

As Technical Director and developer on the project, it was extremely educational, challenging and fun. I also got the chance to practise my WebGL skills in a real project for the first time, which was a huge opportunity: leaving the playground and doing it for real. This blog has slowly faded away, but I'm glad to share this with you now; maybe I'll find time to make more stuff in this space.

In this article I will, as the title implies, share some insights and tips from the process of making the WebGL related stuff with the 3d-engine three.js. I will also reveal some easter-eggs and show some prototypes and demos.

Devices and desktop fallback

Before we dive into the WebGL stuff, I also want to mention the mobile version of the game, which Public Class pulled off with some CSS magic. We will soon have WebGL and WebRTC support in Chrome for Android (currently behind a flag and in beta) and hopefully on other devices, but in the meantime we made a CSS version to reach as many users as possible. It's still viewed in 3D perspective, but we use sprite sheets for the assets and CSS to position the elements. It runs smoothly as long as hardware-accelerated CSS transitions are supported. It even runs at 60fps on an iPad 1, which is pretty amazing.

The game logic is completely separated from the presentation layer. This makes it possible to provide different renderers: a canvas renderer in 2D for debugging purposes, a CSS3 version for mobile and browsers without WebGL, and a full-blown version with three.js for those with support. Three.js has built-in support for different renderers like canvas and CSS, but we chose to build our CSS version from scratch to make the best use of its features.

It turned out many players did not notice they were running the fallback version, since it's still 3D and the gameplay is the same. But as long as they enjoyed it, it's fine I guess. IE is still not supported though, since CSS3D is not fully implemented; our approach needed nested 3D layers with inherited perspective, which IE does not currently support. I'm so happy they decided to jump on board the WebGL train with IE11, so there is hope for IE users.

Creating the world

So here we go, let's start with the scene and how it's built. The world is quite simple, in the low-poly style we have used in many earlier projects at North Kingdom. To make it a bit dynamic (and fun to program) I aimed at creating some elements procedurally, to make each game unique. In the end pretty much everything is static meshes, but it really helped in the process of creating the world.


The terrain in the distance is made of regular planes manipulated with Perlin noise. A pretty redundant but fun detail is that the mountains are random each time you visit the page. To avoid the faces looking like a grid when just offsetting the vertices of a plane, I first added a random value to each vertex in the x- and z-directions, then merged vertices that were close enough to each other, and finally offset along the y-axis. Three different planes go through the same process, with different parameters, to create the different levels of terrain. The terrain closest to the arena needed to look more natural, so that is a hand-modeled mesh. We also have animals walking on the mesh, so a static model makes it easier to optimise away some raycasting when attaching them to the surface.



Trees are also distributed using Perlin noise, but the computation time to place them on the terrain was too long when initiating the game. The method was still used to generate the final distribution, but the tree positions were saved to an array that is parsed at runtime instead. If I want another forest I can just regenerate the data. Since the trees are built from primitives, there is no need to load external models, just the positions to plot them out. I added a tool to create a new forest in the settings panel if you want to try it out. Beware, it takes some time to parse if there are too many trees, mainly because of the raycasting calculation. For better performance all trees are merged into a single geometry, so there is only one draw call for all the trees, and one for the shadow planes. Each shadow is aligned to the normal of the collision point (also saved during generation), but I could not get 100% accuracy with the rotation; that is why the shadows sometimes collide with the triangle underneath, which is also the case if the plane intersects a neighbouring triangle positioned higher.


These little fellows were fun to work with. So much personality in just a few polygons.

Here is an example of two ways to control a morph animation: by manually setting the play-head, or by playing it as an ordinary animation. Move your mouse to make the bear look around, and click to trigger a little animation.

And here are all animals, click to look closer and swap between them.

To make them blend nicely into the environment I did a little trick with the texturing. The texture does not contain the final colors; instead we store different information in the channels and do the colouring in the shader. The texture looks like this:

The red channel holds the diffuse amount, green the ambient, and blue the white details on the animals. With these parameters we can change the diffuse color (including shadows and ambient occlusion), make the terrain reflect color onto selected areas of the mesh, and keep white details like the nose or wings the same. We don't calculate any light on these animals, so the coloring is pretty cheap. Try changing the color of the terrain in the settings panel and notice how the animals blend into the nature when the color changes. A downside is that the animations are not reflected in the lighting, so shadows are static on the mesh. All animals share the same texture, so one material to rule them all.

uniform sampler2D map;
uniform vec3 ambient;
uniform vec3 details;
uniform vec3 diffuse;
varying vec2 vUv;

void main() {

  vec4 texelColor = texture2D( map, vUv );
  vec3 bodyColor = diffuse * texelColor.r * 0.3 + texelColor.g * ambient;
  gl_FragColor = vec4( bodyColor + vec3( step( 0.9, texelColor.b ) * details ) * bodyColor * 8.0, 1.0 ); = sqrt( );
}


Here is the result; notice how they blend with the color of the terrain.


To make them walk and fly around, TweenMax and TimelineMax from GreenSock have a nifty feature to animate objects along a spline made of control points. I also wanted them to walk on the surface of the terrain, so I saved a list of the raycasted y-positions during one loop; the next time, those values are used instead.

var tl = new TimelineLite({paused:true});
// The tween target was lost from the original post; this.mesh is a guess.
tl.append(, 1, {bezier:{values:this.controlPoints, autoRotate:["x","z","rotation",-Math.PI*2,true]}, ease: Linear.easeNone}));

//3d-line for debugging path
var beziers = BezierPlugin.bezierThrough(this.path,1,true);
var line = new THREE.Line(new THREE.Geometry(), new THREE.LineBasicMaterial( { color: 0xffffff, opacity: 1, linewidth: 1 } ) );

The arena

This is created with simple planes and boxes. In the beginning of the project all dimensions could be set at runtime. Later, when we found a good balance, the dimensions were locked and the settings option was taken away, and we could focus on adjusting the surroundings. Most work was put into the reflections: lots of trying out things like environment maps, transparency and depth-sorting issues. More about that in the reflections section below.



Bob the Bear. He became an annoying friend during development. As his teacher and trainer, I'm kind of proud of him now; at the time of writing, over 2 million players have met him :). He uses a mix of morph animations (idle loop, blinking, morph-frame expressions and triggered animations) and transforms (shaking, walking out, focusing on or following the ball, etc).

An early exploration experimenting with expressions using transforms. Press the walkOut button with the HAL character; I love that animation 🙂
Here is a demo where you can control Bob's expressions:

Video destruction


When you hit the opponent's screen the display is partly destroyed, and when you win the game it shatters into pieces. I tried some different approaches:

Screen front made up of small tiles that fall apart (same demo as one of the above, but this time focus on the video screen and see how it breaks when you click it):

Cubes falling apart:

Cubes with real physics (slow)

Animated cubes


If you have noticed, the table is slightly reflective. This is done with a mix of simple tricks; we are not dealing with ray-traced rendering here, so we have to use geometry and shaders. One way would be to use the stencil buffer while rendering, using the floor as a mask, and invert all the geometry along the y-axis. Instead, the floor is made of a box in which we create our own local clear-color, so to speak. The top is transparent and the inner sides have the same color but opaque, so they act like part of the floor plane. Geometry put inside this box now looks like a reflection if we adjust the transparency of the floor, without the need for extra draw calls. The pucks, paddles and obstacles are just twice as high, with the center point at the level of the floor, so no extra geometry is needed there, and animating the height is automatically correct. The extras icons are a bit more special: these floating objects are duplicated and put underneath the floor. Since the bouncy animation is handled in the vertex shader, we can just invert the y-scale, and the mesh will animate correctly.

The video-box is also reflected in the floor. This box moves up and down, so the reflection can not just be inverted geometry; it looks wrong if the reflection box simply scales. Instead, it has to be the same height as the visible part of the box, but without distorting the UVs. A plane with the same material as the video in the cube is placed under the floor, inside the reflection box mentioned above, and its UV coordinates are adjusted to match the face of the box above while animating. For a nice gradient fade, a second plane is placed in front, just a couple of units away. I could have done this in a shader, but I wanted to reuse the material from the video-cube. The trick here is to make this gradient the same color as the floor, so it blends with the reflection box: a canvas is created and a gradient-filled rectangle is drawn with the floor's diffuse color, with alpha ranging from 0 to 1. I struggled a bit with the alpha here and the solution might not be best in class, but when dealing with transparent objects and depth sorting I always end up in strange iterations; sometimes it works and I keep it.

It's hardly noticeable, but an extra gradient is added to the bottom area of the video screen to reflect back the floor color.

Seen from above:


Seen from inside the reflection box:


Optimising performance

Performance was really important because we needed to make room for the game-engine and especially the webRTC encoding/decoding. I can mention some of the things I implemented:

Disabling anti-aliasing

An obvious one. This parameter was toggled on and off many times during development. I really wanted it enabled, because the many straight lines that the square-shaped objects create look pretty jagged without it, especially with all the slow camera movements. But the performance on slower machines made us turn it off by default. Too bad you can't toggle anti-aliasing at runtime.

Anti-aliasing substitute trick

With simple planes, a basic color material and no anti-aliasing, it looked better to use a texture with a small padding instead. The smooth edge is then created by the texture filtering: instead of having the graphics go right to the edge of the texture, leave a small space before the edge and let the filtering create a smooth edge for you. Maybe not great for performance with the extra texture lookups, but it's always a balance. The whole arena could have been a model with a single texture, but I wanted full, dynamic control of all dimensions and elements.


Post-effects alternatives

Post-rendering effects were also too slow to implement with such a restrictive frame budget. But we do have an overlay rendered on top of the scene: the dusty tape texture and a scanline effect (the lines were removed in the final version, but are optional in the settings menu). This would naturally be added in a post-effect stage. Instead we use a plane added as a child of the camera, placed at a z-position and scale that match the screen (thanks Mr.doob for the advice). Note that the plane takes part in depth writing, so be careful at which distance you place it if you have geometry close to the camera, otherwise those fragments will be overwritten by your scene. Without depth writing, you need to make sure the plane is drawn to the screen last. One limitation is that you don't have access to the pre-rendered scene, so effects like depth of field, blur or color correction are not possible.
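The scale that makes a camera-attached plane exactly fill the view follows from the camera frustum; a sketch of the math (the three.js usage at the end is an assumption):

```javascript
// Visible height at a given distance for a perspective camera:
// h = 2 * d * tan(fov / 2), with fov in degrees.
function fullscreenPlaneSize(fovDeg, aspect, distance) {
  var height = 2 * distance * Math.tan(fovDeg * Math.PI / 360);
  return { width: height * aspect, height: height };
}

// e.g.:
// var size = fullscreenPlaneSize(camera.fov, camera.aspect, 10);
// overlay.scale.set(size.width, size.height, 1);
// overlay.position.z = -10;
// camera.add(overlay);
```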

Multiple render targets

Some of the effects needed more render targets. A render target is a canvas that we render to; you can have ones that are never displayed on screen and use them as textures in your main scene. The camera that records Bob, for example: that is a separate scene, rendered separately and sent as input to the video-box shader. Adjusting the resolution of the separate render target is important for performance, so we don’t render more pixels than we need. And set generateMipmaps to false if the texture is not power of two.
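A small helper can decide when to flip that flag (the helper is my own, not a three.js API):

```javascript
// A texture side is a power of two when exactly one bit is set.
function isPowerOfTwo(n) {
  return n > 0 && (n & (n - 1)) === 0;
}

// Hypothetical usage when creating a render-target texture:
// texture.generateMipmaps = isPowerOfTwo(width) && isPowerOfTwo(height);
```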


The mirror effect also uses a render target of its own. You can try the effect by pressing “M” in the game. The scene is rendered to this render target and used as a texture on a plane that matches the screen size, which we can animate. When the transition is completed the body-tag is styled with “transform: scaleX(-1)” and we can switch back to regular render mode. A bonus is that all the HTML is mirrored as well. Try it out, but it’s really hard to play, so add ?god to the URL to keep playing.

Garbage and object pooling

Keeping the garbage collector as calm as possible is very important. The GC will always have work to do, even with highly optimized code, but avoid unnecessary garbage. A basic example: instead of creating a new THREE.Vector3() every time you position an object each frame, call .set(x,y,z) on the existing object. And use object pooling. For common objects, allocate as many as you know you will use up front, and save and reuse objects that you know will appear again. Extend the pool automatically when it runs dry, or allocate in a state where a steady framerate matters less, like right after game over or between rounds. Not everything needs to be pooled; sometimes it’s better to let the GC take care of things. It’s a balance, and measuring is the key. You can also put in a console.warn if you allocate more than a fixed threshold, so you quickly see if there is a potential leak.

Mini tutorial 1: Making a simple pool

function Pool(C, size){
  var totalPooled = size || 1;
  var freeList = [];

  function expand(howMany){
    console.warn('pool expand %s: %s',, howMany);
    for(var i = 0; i < howMany; i++){
      freeList[i] = new C();
    }
    totalPooled += howMany;
  }

  C.alloc = function(){
    if( freeList.length < 1 ){
      expand(totalPooled); // *= 2
    }
    var instance = freeList.pop();
    instance.alloc && instance.alloc();
    return instance;
  }; = function(instance){ && instance.free();
    freeList.push(instance);
  };
}

To use it, just wrap your constructor with the Pool function like this:

function MyObject(){}
Pool(MyObject, 5);

Then the methods alloc and free are available. So instead of using var myObject = new MyObject(), use:

var myObject = MyObject.alloc();

and the pool will return a new or a recycled object. To give it back to the pool, run:;

Remember to reset state when you return an instance or when initiating it; it’s easy to forget that the instance still carries variables and state from its previous use.

Mini tutorial 2: Making a scalable frame

A few tips regarding the overlay texture. If you want an overlay to act like a frame, you can use a 9-slice-scaling trick with the vertices and UVs on a plane geometry with 9 faces. You can then lower the texture size and scale the plane to match full screen, while still keeping the ratio of the borders.

Here is the result rendered with a UV debug texture, notice the fixed margin values:


To see how it behaves in action, try this demo. The texture used looks like this:


Some code:

Create the plane:

var planeGeo = new THREE.PlaneGeometry(100,100,3,3);
var uvs = planeGeo.faceVertexUvs[0];

face-index:      vertex-index:   uv-mapping:
-------------    1---2           0,1 --- 1,1
| 0 | 1 | 2 |    |  /|              |  /|
-------------    | / |              | / |
| 3 | 4 | 5 |    |/  |              |/  |
-------------    0---3           0,0 --- 1,0
| 6 | 7 | 8 |

var marginTop = 0.1
  , marginLeft = marginTop
  , marginBottom = 0.9
  , marginRight = marginBottom;

//center face
var face = uvs[4];
face[0].x = face[1].x = marginLeft;
face[0].y = face[3].y = marginRight;
face[1].y = face[2].y = marginTop;
face[2].x = face[3].x = marginBottom;

//top left
face = uvs[0];
face[0].x = face[1].x = 0;
face[0].y = face[3].y = 1;
face[1].y = face[2].y = marginRight;
face[2].x = face[3].x = marginLeft;

//top right
face = uvs[2];
face[0].x = face[1].x = marginBottom;
face[0].y = face[3].y = 1;
face[1].y = face[2].y = marginRight;
face[2].x = face[3].x = 1;

//top center
face = uvs[1];
face[0].x = face[1].x = marginLeft;
face[0].y = face[3].y = 1;
face[1].y = face[2].y = marginRight;
face[2].x = face[3].x = marginBottom;

//bottom left
face = uvs[6];
face[0].x = face[1].x = 0;
face[0].y = face[3].y = marginTop;
face[1].y = face[2].y = 0;
face[2].x = face[3].x = marginLeft;

//bottom center
face = uvs[7];
face[0].x = face[1].x = marginLeft;
face[0].y = face[3].y = marginTop;
face[1].y = face[2].y = 0;
face[2].x = face[3].x = marginBottom;

//bottom right
face = uvs[8];
face[0].x = face[1].x = marginBottom;
face[0].y = face[3].y = marginTop;
face[1].y = face[2].y = 0;
face[2].x = face[3].x = 1;

//center left
face = uvs[3];
face[0].x = face[1].x = 0;
face[0].y = face[3].y = marginRight;
face[1].y = face[2].y = marginTop;
face[2].x = face[3].x = marginLeft;

//center right
face = uvs[5];
face[0].x = face[1].x = marginBottom;
face[0].y = face[3].y = marginRight;
face[1].y = face[2].y = marginTop;
face[2].x = face[3].x = 1;

planeGeo.uvsNeedUpdate = true;

var plane = new THREE.Mesh(planeGeo, Materials.overlay )

And scale and position the plane on init and window resize:

var w = window.innerWidth
, h = window.innerHeight
, cornerDistW = 50-texture_width/10/w*100
, cornerDistH = 50-texture_height/10/h*100;
this.overlay.scale.set( w/1000, h/1000,1);

this.overlay.position.z = -h*0.1 /(2*Math.tan( this.camera.fov*(Math.PI/360)) ); // this.camera.fov: vertical fov in degrees

var verts = this.overlay.geometry.vertices;
verts[1].x = -cornerDistW;
verts[5].x = -cornerDistW;
verts[9].x = -cornerDistW;
verts[13].x = -cornerDistW;

verts[2].x = cornerDistW;
verts[6].x = cornerDistW;
verts[10].x = cornerDistW;
verts[14].x = cornerDistW;

verts[4].y = cornerDistH;
verts[5].y = cornerDistH;
verts[6].y = cornerDistH;
verts[7].y = cornerDistH;

verts[8].y = -cornerDistH;
verts[9].y = -cornerDistH;
verts[10].y = -cornerDistH;
verts[11].y = -cornerDistH;

this.overlay.geometry.verticesNeedUpdate = true;

Try the game to see how the final result looks.

Optimizing workflow

3d assets

One really repetitive task is converting and loading 3d models. There are numerous ways of doing this, but in this setup we exported obj-files from 3D Studio Max and converted them with the Python script that is available as a standalone command-line tool. To make life easier we added this command to the build process and put the model folder on the watch list, so dropping a file into the folder automatically created the JSON model. We took it a couple of steps further and stringified it into a js-file that could be added as a module in requirejs. So drop a file, register it in packages.json, and the model was automatically available as an object ready to be used in the application, without loading it at runtime.
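The module-wrapping step is essentially a string template. A sketch of what the build step boils down to (the function name and module shape are my own simplification of what we did):

```javascript
// Turn a converted JSON model string into an AMD module that requirejs
// can pull in as a plain dependency, so no XHR is needed at runtime.
function wrapModelAsModule(jsonString) {
  return 'define(function(){ return ' + jsonString + '; });';
}
```

The build step then writes that string to a .js file next to the other modules.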

The same setup was used for shader files: a folder with glsl-files, easily referenced as shortcuts in the material module.

Settings panel and querystrings

Building tools to streamline the workflow is really important; everyone making applications and games knows that. It can be a hidden menu, keyboard shortcuts, querystrings/deeplinking, build processes or anything that makes you test quicker or collaborate smoother.

Like adding this to the URL: “?level=4&extras=multiball,extralife&god&play&dev”. That loads the game at level 4, jumps straight to gameplay, makes you and Bob immortal, spawns the extras you want to debug and shows the settings panel. A time saver indeed, and good to be able to send around to clients or team-mates, or to let designers try a new asset quickly. We also use a debug component in each module, and can select a query to choose what to output, like this: ?d=networking:webrtc. No need to add and remove console.logs all the time. And handy to send a URL with a selected query to the client and ask for the result in the console.
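A querystring like that can be parsed in a few lines. A minimal sketch (not our actual implementation):

```javascript
// Parse flags like "?level=4&extras=multiball,extralife&god&play&dev".
// Bare keys (god, play, dev) become boolean true.
function parseQuery(search) {
  var params = {};
  search.replace(/^\?/, '').split('&').forEach(function(pair) {
    if (!pair) return;
    var kv = pair.split('=');
    params[kv[0]] = kv.length > 1 ? decodeURIComponent(kv[1]) : true;
  });
  return params;
}

var q = parseQuery('?level=4&extras=multiball,extralife&god');
// q.level === '4', q.extras === 'multiball,extralife', q.god === true
```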

The dat.GUI settings panel has also been really helpful.


With that we could adjust colors, lights, gameplay parameters, camera settings and more in real time. Try it yourself with the ?dev querystring.

Some of the settings also bind to a key. In the game, try selecting the different cameras by pressing 1-5. Or press E and H to preview the explosion/heal effect. Or stress-test the physics by enabling God Mode in the menu and pressing “Create Puck”, or just hit P.


Early prototypes

Well, what I just explained is what we ended up doing in the project, graphics-wise. Getting there was, as always, a process full of iterations and changes, trial and most of all error. When we started the project the idea was based around motion tracking as input, so we explored that as the primary control, with mouse or keyboard as a secondary one. We got the motion tracking working okay, but not well enough. As the game progressed it became more and more about speed and accuracy, which didn’t play well with the general lack of precision in motion tracking. Problems related to camera processing, lighting conditions and noise were too severe, and not as stable as the wider public would rightfully expect. It also didn’t feel intuitive or simple enough to select a tracking color or learn the boundaries of the motion area. With more forgiving gameplay, or better feature/color tracking, and the right visual feedback, it might still be feasible. But there is another problem we would have had to tackle down that path. The logic that keeps the game in sync in two-player mode depends on delta movements, like saying ‘I moved the mouse in this direction at this speed’, which works well with touch and keyboard, but less so with direct controls like the mouse or the position of an object in your webcam. For the latter you need to send the exact position every frame, causing bloated data traffic, and it makes it harder to interpolate/extrapolate the game to hide latency. So I’m glad we decided to steer away from motion tracking, even though I spent almost a month waving colourful objects in the air.

Here are some of the early test prototypes. To process the image and isolate the movement I use a modified version of glfx.js. Don’t forget to allow webcam access, the UX in these demos isn’t great. Also, be sure to select a good tracking color by clicking in the video monitor in the top left corner. No game logic here, just color-tracking exploration:








Most of the available easter-eggs can be found in the dev-menu in the top left corner when you append ?dev to the url, or click this link: But there are some more, so I created this page with more info. Another thing you might not know is that you can run the WebGL version on your device if you go to this url: There are other flags you can enable for even more fun, but more on that in a forthcoming version 😉


There is a bunch of people at North Kingdom that made this come true, but I want to send some extra love to the guys at Public Class who did most of the heavy code lifting. They made the framework, took the lead on the game engine, did the html5 front-end, did magic in the mobile game, handled WebRTC, set up TURN servers and coded the backend in Go! They have really shown their dedication and excellent skills, working their asses off for a long time, and for that I’m really happy. Hats off to you guys! Love you!

Dinahmoe also did excellent work, as always, reinventing themselves over and over. Next time you play the game, take some time to really listen to the original music and the effects. Feel how responsive and dynamic it is, reacting to everything you do, and always on the beat.

Thanks to all the folks at Google that gave us the opportunity and helped us with this amazing project!

Also, many thanks to all gamers! Over 2 million visitors in a month!



I almost forgot to post this little demo. It’s a remake of an old experiment with procedural eyes I made in Flash a long time ago. (Actually, the old one looks kind of cooler than this; I think it’s the normal mapping that makes the difference. I haven’t gotten to the point of doing that in a procedural shader yet; I’d probably have to use off-screen renderers and such.)

This time it’s done with three.js and shaders. I basically started off with the sphere geometry. The vertices near the edge of the z-axis got an inverted z-position relative to the original radius, forming the iris as a concave parabolic disc on the sphere.

The shader then does its thing. By reading the uv-coordinates and the surface positions I apply different patterns to different sections. I separate those sections in the shader by combining step (or smoothstep) with mix. Step is almost like an if-statement, returning 1 if inside the range and 0 outside. Smoothstep blurs the edge for a smoother transition. You can then use that value in mix to choose between two colors, for example the pupil and the iris. I mentioned this method in my last post as well.
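For experimenting outside the shader, the three GLSL helpers are easy to mirror in JavaScript. A sketch (GLSL’s built-ins are the real thing; these are just for sanity-checking values):

```javascript
// step: 0 below the edge, 1 at or above it
function step(edge, x) {
  return x < edge ? 0 : 1;
}
// smoothstep: Hermite interpolation between the two edges, like GLSL
function smoothstep(e0, e1, x) {
  var t = Math.min(Math.max((x - e0) / (e1 - e0), 0), 1);
  return t * t * (3 - 2 * t);
}
// mix: linear interpolation between a and b
function mix(a, b, f) {
  return a * (1 - f) + b * f;
}

// e.g. blending between pupil (0.0) and iris (1.0) brightness at a
// normalized distance of 0.25 from the center:
var shade = mix(0.0, 1.0, smoothstep(0.2, 0.3, 0.25)); // shade === 0.5
```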

Check out the fragment shader here

To give the eye its reflective look, I added a second sphere just outside the first one with an environment map. The reason for not using the same geometry was that I wanted the reflection on the correct surface around the iris. And making the second sphere just slightly larger gives the surface some sense of thickness.

To spice it up a little, I added some real-time audio-analysis effects with a nifty little library called Audiokeys.js. With it you can specify a range of frequencies to listen to and get the summed level. Perfect for controlling parameters and sending them to the shaders as uniforms.

Open demo

WebGL Lathe Workshop

This is take two on a little toy that I posted some weeks ago on Twitter. I’ve added some new features so it’s not completely old news: two new materials and dynamic sounds (Chrome only).

This demo is a fun way to demonstrate a simple procedural shader. As you start working with the lathe machine you carve into an imaginary space defined by numbers and calculations. I like to imagine the surface as a viewport into a calculated world, ready to be explored and tweaked. It’s nothing fancy compared to the fractal ray-marching shaders around, but there is something magical about setting up some simple rules and being able to travel through them visually. Remember, you get access to the object’s surface coordinates in 3D for each rendered pixel. This is a big difference compared to 3d without shaders, where you have to calculate this on a 2D plane and wrap it around the object with UV-coordinates instead. I did an experiment in Flash 10 ages ago doing just that, one face at a time, generated to a bitmapData. Now you get it for free.

Some words about the process of making this demo. Some knowledge of GLSL may come in handy, but this is more of a personal log than a public tutorial. I won’t recommend any particular workflow since it really depends on your needs from case to case. My demos are also completely unoptimized, with lots of scattered files all over the place. That’s why I’m not on Github 🙂 I’m already off to the next experiment instead.

When testing a new idea for a shader, I often assemble all the bits and pieces into a pair of external glsl-files to get a quick overview. There are often tweaks I want to try that aren’t supported, or are structured differently, in the standard shader-chunks available. My theory is also that if I edit the source manually and view it more often, I get a better understanding of what’s going on. And don’t underestimate the force of changing values randomly (a strong force for me). You can literally turn stone into gold if you are lucky.

The stone material in the demo is just ordinary procedural Perlin noise. The wood and metal shaders, though, are built around these features:

1. Coat

What if I could have a layer that is shown before you start carving? My first idea was to draw on an offstage canvas and mix it with the procedural shader like a splat-map. But then I found a more obvious solution: just use the radius with the step-method (where you set the edge values for a range). Like setting the value to 1 when the normalized radius is more than 0.95 and 0 when it’s less. With smoothstep the edge gets less sharp. Then I use this value with mix to interpolate between two different colors, the coat and the underlying part. In the wood-shader the coat is a texture, and in the metal-shader it’s a single color.

DiffuseColour = mix(DiffuseColor1, DiffuseColor2 , smoothstep( .91, .98, distance( mPosition, vec3(0.0,0.0,mPosition.z))/uMaxRadius));

2. Patterns

These are the grains revealed inside the wood and the tracks that the chisel makes in the metal cylinder. Basically I again interpolate between two colors, depending on the fractional part or the Math.sin-output of the position. Say we give all positions between x.8 and x along the x-axis a different color: over a width of 10 that gives 10 stripes, each 0.2 thick. By scaling the values you get different sizes of the pattern. But it’s pretty static without the…
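In JavaScript terms the stripe mask looks roughly like this (a sketch; in the demo this happens per fragment in GLSL):

```javascript
// 1.0 inside a stripe, 0.0 outside: positions whose fractional part is
// above 0.8 get the second color. Scaling x first changes stripe size.
function stripeFactor(x, scale) {
  var scaled = x * scale;
  var fract = scaled - Math.floor(scaled);
  return fract > 0.8 ? 1.0 : 0.0;
}
```

Feed the result into mix to pick between the two colors, exactly like the coat above.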

3. Noise

To make the pattern from the previous step look more realistic I turn to a very common method: making noise. I’m using 3D Simplex noise (Perlin noise 2.0, kind of) with the surface position to get the corresponding noise value. The value returned for each calculation/pixel is a single float that modifies the mix between the two colors in the stripes and rings. The result is like a 3d smudge-tool, pushing the texture in different directions.

4. Lighting

I’m using Phong shading with three directional light sources to make the metal look more reflective without the need for an environment map. You can see how the lights form three horizontal trails on the cylinder, with the brightest one in the middle.

5. Shadows

I just copied the shadow-map part from WebGLShaders.js into my shader. One thing worth mentioning is the properties available on directional lights for setting the shadow frustum (shadowCameraLeft, shadowCameraRight, shadowCameraTop and shadowCameraBottom). You also get helper guides if you enable dirLight.shadowCameraVisible=true, which makes it easier to set the values. When the shadows are calculated and rendered, the light is used as a camera, and with these settings you can limit the “area” of the light’s view that gets squeezed into the shadow map. With spotlights the frustum is already set, but with directional lights you can define one that suits your needs. The shadow map has a fixed size for each light, and when drawing to this buffer the render pipeline automatically fits all your shadows into it. So the smaller the bounding box of geometry you send to it, the higher resolution of detail you get. Play around with these settings if you have a large scene with limited shadowed areas, and keep an eye on the resolution of the shadows. If the scene is small, your shadow-map size large and the shadows still low-res, your settings are probably wrong. Also, check the scales in your scene. I’ve had trouble in some cases when exporting from 3D Studio Max if the scaling is too small or too big, so that the relations to the cameras go all wrong.
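As a sketch, the frustum tuning in code looks roughly like this (property names from the three.js API of that era, as discussed above; the values are made up for illustration):

```javascript
// Tighten the directional light's shadow frustum around the area that
// actually receives shadows; a smaller box means more of the fixed-size
// shadow map is spent on the details you can see.
dirLight.castShadow = true;
dirLight.shadowMapWidth = 1024;
dirLight.shadowMapHeight = 1024;
dirLight.shadowCameraLeft = -50;
dirLight.shadowCameraRight = 50;
dirLight.shadowCameraTop = 50;
dirLight.shadowCameraBottom = -50;
dirLight.shadowCameraVisible = true; // shows the helper guides while tuning
```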

That’s it for now, try it out here. And please ask questions and correct me if I’m wrong or missing something. See you later!



Clove orange

One last post before the holidays. Some days ago I posted a link to a WebGL Christmas craft demo made with three.js. For those wondering what this was about: it’s called a “Christmas orange pomander”, a decoration that we make in Sweden and some other European countries around Christmas. You push cloves, that nice fragrant spice, into an orange or clementine, either at random or, like me, in structured patterns. Put some silk ribbons around it and hang it somewhere for a nice Christmas atmosphere. Try making your own here. The fun part is attaching the cloves with javascript; just open the “into code?”-button and play around. As an example, the image above loads and plots a Twitter avatar on the orange. You can also save your orange as an image, or share the 3D version with a unique url.

I want to take the opportunity to thank all readers and followers for a great year! I’m amazed that over 62,000 users found this little blog. I have really enjoyed experimenting and making demos to share with you. Many blogs in the community have disappeared in favor of instant Twitter publishing. I will continue to use Twitter as an important channel, but I will also try to keep this space updated, since here I can go a little deeper and do some write-ups about the techniques behind things.

Next year has even more in store for me. I will join the talented people over at North Kingdom, Stockholm. Exciting future ahead!

Enjoy your holiday and see you next year!

Plasma ball

Run demo

More noisy balls. Time to get nostalgic with a classic from the 80’s: the plasma lamp. I never had the opportunity to own one myself; maybe that’s why I’m so fascinated by them. There are even small USB-driven ones nowadays. I’ve had this idea for a demo for a long time and finally made a rather simple one.

The electric lightning is made up of tubes that are displaced and animated in the vertex shader, just as in my last experiment. I reuse the procedurally generated noise-texture that is applied to the emitting sphere. I displace the local x- and z-position with an offset multiplied by the value I get from the noise-texture at the corresponding uv-position (along the y-axis). To make the lightning progress more naturally I also apply an easing function to this factor. Notice that there is less displacement near the center and more at the edges.

I have this idea of controlling the lightning over a multi-touch websocket connection between a mobile browser and the desktop browser, just to learn node.js and sockets. Let’s wait and see how that evolves. Running this on a touch-screen directly would of course be even nicer. If someone can do that, I will add multiple touch-points, just let me know.

Set a sphere on fire with three.js

Fire it up!

Next element of nature in line: smoke and fire! I did a version in Flash about a year ago, playing with 4D Perlin noise. That bitmapData was something like 128×128, using haxe to work with fast memory, at 20 fps. Now we can do fullscreen at 60+ fps. Sweet.

First, note that the code snippets below are taken out of context and use built-in variables and syntax from three.js. If you have done shaders before you will recognize variables like mvPosition and viewMatrix, or uniforms and texture-lookups.

The noise is generated in a fragment shader, applied to a plane inside a hidden scene, rendered to a texture and linked to the target material. This process is forked from this demo by @alteredq. Before wrapping the texture around the sphere we have to adjust the output of the noise to be suitable for spherical mapping. If we don’t, the texture will look pinched at the poles, as seen in many examples. This is one of the advantages of generating procedural noise: you can modify the output right from the start. By doing this, each fragment on the sphere gets the same level of detail in the texture, instead of stretching the image across the surface. For spheres, the following technique is one way of finding where (in the imaginary volume of noise) to fetch the noise value:

float PI = 3.14159265;
float TWOPI = 6.28318531;
float baseRadius = 1.0;

vec3 sphere( float u, float v) {
	u *= PI;
	v *= TWOPI;
	vec3 pSphere;
	pSphere.x = baseRadius * cos(v) * sin(u);
	pSphere.y = baseRadius * sin(v) * sin(u);
	pSphere.z = baseRadius * cos(u);
	return pSphere;
}

Notice the 2d-position that goes into the function and the returned vec3-type. The uv-coordinates are translated to a 3d-position on the sphere. Now we can just look up the noise value there and plot it at the original uv-position. Changing baseRadius affects the repetition of the texture.
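The same mapping is easy to mirror in JavaScript for sanity-checking values outside the shader (the function name uvToSphere is mine):

```javascript
// Map a normalized uv-coordinate (0..1) to a point on a sphere of
// baseRadius, mirroring the GLSL sphere() function above.
function uvToSphere(u, v, baseRadius) {
  u *= Math.PI;        // latitude: 0..PI
  v *= 2 * Math.PI;    // longitude: 0..2PI
  return {
    x: baseRadius * Math.cos(v) * Math.sin(u),
    y: baseRadius * Math.sin(v) * Math.sin(u),
    z: baseRadius * Math.cos(u)
  };
}

// u = 0 and u = 1 hit the poles; u = 0.5, v = 0 lands on the equator at x = 1
```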

Another approach is to do everything in the same shader pair, both noise and final output. We get the interpolated world position of each fragment from the vertex shader, so no need for manipulation, right? But since this example uses displacements in the vertex shader, I need to create the texture in a separate pass and send it as a flat texture to the last shader.

So now we have a sphere with a noise texture applied to it. It looks nice, but the surface is flat.

I mentioned displacements, this is all you have to do:

#ifdef VERTEX_TEXTURES
	vec3 texturePosition = texture2D( tHeightMap, vUv ).xyz;
	float dispFactor = uDispScale * texturePosition.x + uDispBias;
	vec4 dispPos = vec4( * dispFactor, 0.0 ) + mvPosition;
	gl_Position = projectionMatrix * dispPos;
#else
	gl_Position = projectionMatrix * mvPosition;
#endif


uDispScale and uDispBias are uniform variables that we can control from javascript at runtime. We just add a value to the original position along the normal direction. The #ifdef-condition lets us use a fallback for browsers or environments that don’t support texture lookups in the vertex shader. It’s not available in all implementations of WebGL, but depending on your hardware and drivers it should work in the latest Chrome and Firefox builds. Correct me if I’m wrong here. This feature is crucial for the demo, so I prompt an alert-box for the unlucky users.

After adding displacement I also animate the vertices. You can e.g. modify the vertex positions based on time, the vertex position itself or the uv-coordinates to make the effects more dynamic. One of the effects available in the settings panel is “twist”. I found it somewhere I don’t remember now, but here it is if you’re interested:

vec4 DoTwist( vec4 pos, float t ) {
	float st = sin(t);
	float ct = cos(t);
	vec4 new_pos;

	new_pos.x = pos.x*ct - pos.z*st;
	new_pos.z = pos.x*st + pos.z*ct;

	new_pos.y = pos.y;
	new_pos.w = pos.w;

	return( new_pos );
}

//this goes into the main function
float angle_rad = uTwist * 3.14159 / 180.0;
float force = (position.y-uHeight*0.5)/-uHeight * angle_rad;

vec4 twistedPosition = DoTwist(mPosition, force);
vec4 twistedNormal = DoTwist(vec4(vNormal,1.0), force);

//change matrix
vec4 mvPosition = viewMatrix * twistedPosition;


The variable force has to be adapted to your needs. Basically you want to set a starting position and a range that is going to be modified.

Then we reach the final fragment shader. This one is fairly simple. I blend two colors, multiply them with one of the channels in the height-map (the noise texture), then mix the result with the smoke color. All colors get their values based on the screen coordinate, a quick fix in this solution. Lastly I apply fog to make the fire blend into the background, so it looks less like a sphere.

For the final touch, a bloom filter is applied and some particles are thrown in as well. I had some help from @aerotwist and his particle tutorial.

That’s it folks, thanks for reading.




More time playing with three.js and WebGL. My primary goal was to get a deeper understanding of shaders by just messing around a little. If you want to know the basics about shaders in three.js, I recommend reading this tutorial by Paul Lewis (@aerotwist). In this post I will try to explain some of the things I’ve learned in the process so far. I know it’s like kindergarten for “real” CG programmers, but history often repeats itself in this field. Let us pretend that we are cool for a while 😉

Run the demo

The wave

I started off making a wave model, and a simple plane as the ocean in the distance, in 3D Studio Max. I later painted the wave with vertex colors. These colors were used in the shader to mimic the translucence of the wave. I could have used real subsurface scattering, but this simple workaround worked out well. The vertex colors are mixed with regular diffuse and ambient colors. A normal map was then applied, with a tiled noise pattern for ripples. The normals in this texture don’t update relative to the vertex animation, but that’s fine. On top of that, a foam texture with white Perlin noise was blended with the diffuse color. I tried an environment map for reflections as well, but that was removed in the final version; the water looked more like metal or glass.

I’ve always liked the “behind the scenes” parts showing the passes of a rendering, with all layers separated and finally blended, so here is one of my own so you can see the different steps:


Animating with the vertex shader

My wave is a static mesh. For the wobbly motion across the wave, I animate vertices in the vertex shader: one movement along the y-axis that creates a wavy motion, and one along the x-axis for some smaller ripples. You have probably seen this often, since it’s an easy way to animate something with just a time-value passed in every frame.

vertexPosition.y += sin( (vertexPosition.x/waveLength) + theta )*maxDiff;

The time-offset (theta) together with the x-position goes into a sin-function that generates a periodic cycle with a value between -1 and 1. Multiply that with a factor and you animate the vertices up and down. To get more crests along the mesh, divide the position by waveLength.
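The same expression on the CPU side, for reference (parameter names follow the shader snippet):

```javascript
// Vertical offset for a vertex at position x, at time-offset theta.
// maxDiff scales the amplitude; a smaller waveLength gives more crests.
function waveOffset(x, theta, waveLength, maxDiff) {
  return Math.sin(x / waveLength + theta) * maxDiff;
}
```

Increasing theta every frame slides the whole sine wave along the mesh, which is the entire animation.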

A side note: I tried to use vertex texture look-ups (supported in the ANGLE version) for real vertex displacements. With that I could get more unique and dynamic animations. I plotted a trail in a hidden canvas to get jetski backwash, but the resolution of the wave mesh was too low, and I had problems matching the position of the jetski with the UV-mapping. A flat surface is simple, but the jetski uses collision detection to stick to the surface, and I have no idea how to get the uv from the collision point. However, I learned to use a separate dynamic canvas as a texture. I used a similar technique in my snowboard experiment in Flash, but that was of course without hardware acceleration.

Animating with the fragment shader

The wave is not actually moving, it’s just the uv-mapping. The same time-offset used in the vertex shader is here responsible for “pushing” the uv-map forward. You can also edit the UV-mapping in your 3D program to modify the motion. In this case, the top of the breaking part of the wave had to go in the opposite direction, so I just rotated the uv-mapping 180 degrees for those faces. My solution works fine as long as you don’t look over the ridge of the wave and see the seam. In theory you could make a seamless transition between the two directions, but it’s beyond my UV-mapping skills. You can also use different offset values for different textures, like a Quake sky with parallaxed layers of clouds.

Fighting about z with the Depth Buffer

I was aware of the depth buffer, or z-buffer, which stores the depth value for each fragment. Besides lighting calculations, post-effects and other stuff, this data is used to determine whether a fragment is in front of or behind other fragments. But I had never reflected on how it actually works before. For those of us coming from Flash 3D (pre-Molehill) this is a new technique in the pipeline (even though some experiments, like this one, used it). Now, when the mighty GPU is in charge, we don’t have flickering sorting problems, right? No, artifacts still appear. We have the same problem as with triangles, but now on a fragment level. Basically, it happens when two primitives (triangle, line or point) get the same value in the buffer, with the result that sometimes one shows up and sometimes it doesn’t. The reason is that the resolution of the buffer is too small in that specific range, which leads to z-values snapping to the same depth-map value.

It can be a little hard to imagine the meaning of resolution here, but I will try to explain, with emphasis on trying, since I just learned this. The buffer is non-linear. The amount of numbers available depends on the z-position and on the near and far planes. There are more numbers to use, or higher resolution, close to the eye/camera, and fewer and fewer as the distance increases. Things in the distance share a smaller range of available numbers than the things right in front of you. That is quite useful: you probably need more detail right in front of you, where details are larger. The total amount of numbers, call it precision, differs between graphics cards, with possible 16, 24 or 32 bit buffers. Simply put, more space for data.

To avoid these artifacts there are some techniques to be aware of, and here comes the newbie lesson learned this time. By adjusting the near and far planes of the camera, you can decide where the resolution will be focused. Previously, I used near=0.001 and far="some large number". This causes the resolution to be very high between 0.001 and "another small number". (There is a formula for this, but it's not important right now.) Thus, the depth buffer resolution was too low where the nearest triangles were actually drawn. So the simple thing to remember is: set the near value as far out as you can while your scene still works. The value of the far plane is not as important, because of the nonlinear nature of the scale, as long as the near value is relatively small.
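The effect of the near plane can be demonstrated numerically. The formula below is the standard window-space depth for a perspective projection, quantized to a 24-bit buffer; the sample distances are my own illustration, not values from the experiment.

```javascript
// Window-space depth in 0..1 for a perspective projection:
//   d(z) = f * (z - n) / (z * (f - n))
// Most of the buffer's precision is spent close to the near plane n.
function depth01(z, near, far) {
  return (far * (z - near)) / (z * (far - near));
}

// Quantize to a 24-bit depth buffer, as the GPU effectively does.
function quantized(z, near, far) {
  return Math.round(depth01(z, near, far) * (2 ** 24 - 1));
}

// Two surfaces 0.01 units apart, 100 units from the camera, far = 1000:
// with near = 0.001 they collapse to the same buffer value (z-fighting),
// while with near = 1 they remain distinguishable.
```

This is the whole lesson in two numbers: pushing `near` from 0.001 to 1 moves the wasted precision out of the first millimeter and into the range where the geometry actually lives.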

Another problem with occlusion calculations arises when using color values with alpha. In those cases we sometimes get multiple "layers" of primitives, but only one value in the depth buffer. That can be handled by blending the color with the one already stored in the output (frame buffer). Another solution is to set depthTest to false on the material, but obviously that is not possible in cases where we need the face to be behind something. We can also draw the scene from front to back, or in different passes with the transparent objects last, but I have not used that technique yet.
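That last idea, drawing transparent objects in a separate pass, usually also means sorting them by camera distance so the blending comes out right. A minimal sketch in plain JavaScript, with invented names (three.js does a version of this internally):

```javascript
// Split objects into opaque and transparent, then draw the transparent
// ones last, sorted back to front relative to the camera so that
// alpha blending accumulates in the correct order.
function sortForDrawing(objects, camera) {
  const dist2 = (o) =>
    (o.x - camera.x) ** 2 + (o.y - camera.y) ** 2 + (o.z - camera.z) ** 2;
  const opaque = objects.filter((o) => !o.transparent);
  const transparent = objects
    .filter((o) => o.transparent)
    .sort((a, b) => dist2(b) - dist2(a)); // farthest first
  return opaque.concat(transparent);
}
```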

Thanks AlteredQualia (@alteredq) for pointing this out!

Collision detection

At first I used the original mesh for collision detection. It ran pretty fast in Chrome but painfully slow in Firefox; iterating over arrays seems to differ greatly in performance between browsers. A quick tip: when using collision detection, use separate hidden meshes with only the faces needed to do the job.
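The proxy-mesh idea boils down to raycasting against a handful of triangles instead of the whole model. Below is the standard Möller–Trumbore ray/triangle test over a tiny face list; the shapes and names are illustrative, not taken from the experiment.

```javascript
// Möller–Trumbore ray/triangle intersection. Returns the ray distance t,
// or null on a miss. Points and vectors are [x, y, z] arrays.
function rayTriangle(orig, dir, a, b, c) {
  const sub = (p, q) => [p[0] - q[0], p[1] - q[1], p[2] - q[2]];
  const cross = (p, q) => [
    p[1] * q[2] - p[2] * q[1],
    p[2] * q[0] - p[0] * q[2],
    p[0] * q[1] - p[1] * q[0],
  ];
  const dot = (p, q) => p[0] * q[0] + p[1] * q[1] + p[2] * q[2];
  const e1 = sub(b, a);
  const e2 = sub(c, a);
  const h = cross(dir, e2);
  const det = dot(e1, h);
  if (Math.abs(det) < 1e-8) return null; // ray parallel to triangle
  const f = 1 / det;
  const s = sub(orig, a);
  const u = f * dot(s, h);
  if (u < 0 || u > 1) return null;
  const q = cross(s, e1);
  const v = f * dot(dir, q);
  if (v < 0 || u + v > 1) return null;
  const t = f * dot(e2, q);
  return t > 0 ? t : null;
}

// A collision proxy is just a short list of triangles; test them all
// and keep the nearest hit.
function raycastProxy(orig, dir, triangles) {
  let best = null;
  for (const [a, b, c] of triangles) {
    const t = rayTriangle(orig, dir, a, b, c);
    if (t !== null && (best === null || t < best)) best = t;
  }
  return best;
}
```

Casting straight down from the vehicle onto a two-triangle proxy quad gives the surface height to stick to, at a fraction of the cost of iterating every face of the visible mesh.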

Smoke, Splashes and Sprites

All the smoke and white water are instances of the object type Sprite: a faster and more lightweight kind of object that always faces you, like a billboard. I have said it before, but one important aspect here is to use an object pool so that you can reuse every sprite created. Constantly creating new ones will kill performance and churn memory allocation.
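The pool itself is simple bookkeeping: hand out a free object if one exists, otherwise create one, and put released objects back on the free list. A generic sketch with invented names (in the real case the factory would create a THREE.Sprite):

```javascript
// Minimal object pool: reuses released objects instead of allocating
// new ones every frame, which keeps the garbage collector quiet.
class ObjectPool {
  constructor(factory) {
    this.factory = factory; // creates a new object when the pool is empty
    this.free = [];
    this.created = 0; // total allocations, for diagnostics
  }
  acquire() {
    if (this.free.length > 0) return this.free.pop();
    this.created++;
    return this.factory();
  }
  release(obj) {
    this.free.push(obj); // caller is expected to reset/hide the object
  }
}
```

Each particle acquires a sprite when a splash spawns and releases it when it dies, so after a few seconds the total number of allocations levels off instead of growing every frame.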

I’m not really pleased with the final solution for these particles, but it’s good enough for a quick experiment like this.

Learning JavaScript

As a newcomer to the JavaScript world I’m still learning the obvious stuff, like console debugging. It is a huge benefit to change variables in real time via the console in Chrome, for instance setting the camera position without the need for a UI or switching programs.

And I still end up writing almost all of my code in one huge file. With all the frameworks and build tools around, I’m not really sure where to start. It’s a lame excuse, I know, but at the same time, when doing experiments I rarely know where I’m going, so writing “correct” OOP and structured code is not the main goal.

Once again, thanks for reading! And please share your thoughts with me. I really appreciate feedback, especially corrections or pointers to misunderstandings that can help me do things in a better way.