Brick Street View



Had some more Street View panorama fun a couple of weeks ago, this time combining it with virtual LEGO. If you haven't seen the site, go there first and I'll tell you more afterwards:

A short walkthrough of what’s going on:


To make the map look like a giant baseplate, I use a tiled transparent texture overlay. When you move the center of the map, the background-position of the layer is updated to match the movement, and when you zoom, the background-size is resized. On top of that I first add a gradient fill for some shininess, and then the three.js layer. This contains some trees and bushes, and also the landmarks at predefined locations (found in the shortcuts popup, marked with a star symbol in the top menu).

Every object placed on the map has a corresponding google.maps.Marker. When needed (on map-center change or zooming) the marker position is transformed into 3D space to update the meshes. The demo is limited to desktop users, because a bug on touch-enabled devices prevents the marker positions from updating while dragging (or rather, the projection object the map uses internally to calculate the container position in pixels stops updating). The result is that the meshes jump to the correct position only on the touchend event. To make the objects look even more attached to the map, I added soft shadows.
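The background-position sync can be sketched like this. This is a minimal illustration of the idea, not the demo's code; the function name and tile size are my own assumptions:

```javascript
// Sketch of keeping a tiled CSS overlay in sync with map panning/zooming.
// The offset is wrapped modulo the (zoom-scaled) tile size so the repeating
// baseplate texture never visibly jumps while dragging.
var TILE_SIZE = 64; // assumed pixel size of one baseplate tile at scale 1

function overlayStyleForMap(pixelOffsetX, pixelOffsetY, zoomScale) {
  var size = TILE_SIZE * zoomScale;
  var x = ((pixelOffsetX % size) + size) % size; // handles negative offsets too
  var y = ((pixelOffsetY % size) + size) % size;
  return {
    backgroundPosition: x + 'px ' + y + 'px',
    backgroundSize: size + 'px ' + size + 'px'
  };
}
```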

To place the trees in good spots, I use the Google Places API, searching for parks within a radius of 1500 m. To display the name of the location at the current center, I query the geocoder library in Google Maps.

Street View

I have to render the scene in two steps: first the background, then the rest of the scene in a second pass. This makes sure the panorama sphere is always sorted between the background and the LEGO models. The additional cost is low, since both the background and the panorama are very low poly.

Constructing the LEGO panorama

This is the core idea behind the whole experiment. To be honest, the range of results spans from 'nice idea, but…' to 'WTF'. It depends very much on the quality of the depth data and the color of the sky. Here is the workflow for creating a panorama:

– Load a low-res panorama with a low zoom value. I use GSVPano for this, which supports high-resolution tiling, but since my goal is to pixelate the image, that extra information is not needed. The result is returned in a canvas.

– Flood-fill the canvas along the top edge of the texture, where it's hopefully a clear blue sky (or smog, for that matter). I sample some more points further down in the direction of the streets to wipe out more of the sky there. I have fiddled back and forth to find good settings, but there is still room for improvement, like color detection, or comparing against the depth/normal data. The erased pixels are replaced with a single color, for easy detection in the next step.
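The flood-fill step can be sketched on a flat RGBA pixel array instead of a real canvas. This is an illustration with names and tolerance values of my own choosing, not the demo's code:

```javascript
// Flood-fill from a seed pixel, replacing connected similar pixels with a
// marker color that is easy to detect later (assumed marker: magenta).
var SKY_MARKER = [255, 0, 255];

function colorsMatch(data, i, r, g, b, tolerance) {
  return Math.abs(data[i] - r) <= tolerance &&
         Math.abs(data[i + 1] - g) <= tolerance &&
         Math.abs(data[i + 2] - b) <= tolerance;
}

function floodFillSky(data, width, height, startX, startY, tolerance) {
  var i0 = (startY * width + startX) * 4;
  var r = data[i0], g = data[i0 + 1], b = data[i0 + 2]; // seed color (the sky)
  var stack = [[startX, startY]];
  var visited = new Uint8Array(width * height);
  while (stack.length) {
    var p = stack.pop();
    var x = p[0], y = p[1];
    if (x < 0 || y < 0 || x >= width || y >= height) continue;
    var idx = y * width + x;
    if (visited[idx]) continue;
    visited[idx] = 1;
    var i = idx * 4;
    if (!colorsMatch(data, i, r, g, b, tolerance)) continue;
    data[i] = SKY_MARKER[0];
    data[i + 1] = SKY_MARKER[1];
    data[i + 2] = SKY_MARKER[2];
    stack.push([x + 1, y], [x - 1, y], [x, y + 1], [x, y - 1]);
  }
}
```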

– Create a high-res canvas and draw LEGO bricks into it based on the color values of the smaller canvas. Pixels with the color defined by the flood fill are flagged as sky, and those bricks are skipped, along with all bricks above a detected sky pixel. That way clouds above a detected sky pixel also get erased, so bricks don't float in the air. If you browse the site and press "C" you'll see the normal map as an overlay; pressing "X" reveals the original image. For each brick I also check if it belongs to the ground plane (in the panorama depth data) and skip those as well.

– Three different bricks are drawn to make it less repetitive: a flat one, one with a variable rotation, and one with a peg on the side. The last one is used more often when the color is closer to green, so trees and bushes get some more volume. The build is nowhere close to correct in any sense, especially when it comes to perspective, sizes or wrapping textures on spheres, so you need some imagination (or to scroll around very fast) to accept the illusion.

– Then the fun starts, playing with real LEGO:


LEGO models are added, such as cars, flowers and trees. Two kinds of street baseplates are loaded: one for a crossroad in the center and one for roads. Those are only visible if links are provided in the panorama metadata. Sometimes in crossings you can see a road in the imagery, but it's not linked until the next panorama in front of you, so the baseplates don't always reflect the visible roads correctly. For the 3D models I use a format called LDraw, an open standard for LEGO CAD programs that lets users create virtual LEGO models and scenes. All parts are stored in a folder, and each file can reference other parts in the library. For example, every single peg on every brick is just a reference to the same file, called stud.dat. With software like Bricksmith and LEGO Digital Designer you can create models and export to this format. There are also loads of models available on the internet, created by the LEGO community. Since it's just a text file containing references, colors, positions and rotations, it's straightforward to parse into geometry.
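To make the format concrete, here is a hedged sketch of parsing one LDraw "type 1" line, the sub-part reference mentioned above. Real files also contain other line types (0 for comments/meta, 3 and 4 for triangles and quads), which this sketch ignores:

```javascript
// An LDraw type-1 line looks like:
//   1 <color> x y z a b c d e f g h i <file>
// where x y z is the translation and a..i a 3x3 rotation/scale matrix.
function parseReferenceLine(line) {
  var t = line.trim().split(/\s+/);
  if (t[0] !== '1') return null; // not a sub-part reference
  return {
    color: parseInt(t[1], 10),
    position: t.slice(2, 5).map(Number), // x y z
    matrix: t.slice(5, 14).map(Number),  // 3x3 transform, row major
    file: t[14]                          // e.g. "stud.dat"
  };
}
```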


To show the LDraw models I use BRIGL, a Javascript library made by Nicola Lugato that loads LDraw files into three.js. I modified it a bit to work with the latest revision of three.js and its material handling.

It takes some CPU power to parse the files and create the meshes. I tried implementing a WebWorker to parse the geometry, but without success; it would be nice to parse the models without freezing the browser. Using BufferGeometries would also be better for parsing time and memory. The web is perhaps not the best medium for this format, since every triangle in every piece is parsed and rendered, even those never visible, like the pegs inside a brick. In this demo I do a very simple optimisation on the landmark models: while parsing the specification I exclude parts that are inside or beneath bricks, like the inner peg that connects to the brick beneath. This reduced the triangle count and parse time by 50%.

To avoid hundreds of requests at runtime when loading the models' subparts, I use a gulp script that looks up all active models (.ldr, .mpd, .dat), recursively transforms the parts to JSON and adds them to a bundle as cached modules, so they can be required instead of loaded. The size is significantly smaller than storing the final parsed geometry as JSON, since many parts are reused across models.

For more information, visit the about page. The source code is available on GitHub.



I played around some with the physics-library cannon.js together with three.js and ended up with this Christmas card.

To show the power of iteration, this is a screenshot from the first version that I thought was almost finished, since my main goal was to play around with physics, and this is how physics demos usually look:


Each small bulb in the garland is a joint constraint. The end joints have zero mass so they stick to the facade without falling down. You can click/tap to change those attachment points. To create the reflections, the geometry is cloned and inverted, then added to a second scene which renders to a separate render target with blur applied to it. The rest of the scene is then rendered on top of the first pass without clearing the colors, just the depth buffer. It was a bit tricky to render in the correct order and clear the right things, and I'm sure there is a more optimised way of doing this without the need for two scenes.
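The zero-mass trick is easy to show with a tiny verlet chain in plain JS. This is a sketch of the idea, not the demo's cannon.js code (in cannon.js terms, mass 0 means a static body); all names here are mine:

```javascript
// A chain of points connected by distance constraints. Points with zero
// inverse mass never move, so the endpoints stay pinned to the "facade"
// while gravity pulls the rest of the garland down.
function makeChain(n, spacing) {
  var points = [];
  for (var i = 0; i < n; i++) {
    points.push({ x: i * spacing, y: 0, px: i * spacing, py: 0,
                  invMass: (i === 0 || i === n - 1) ? 0 : 1 });
  }
  return points;
}

function step(points, spacing, gravity) {
  // verlet integration with a little damping so the chain settles
  points.forEach(function (p) {
    if (p.invMass === 0) return; // pinned: never moves
    var vx = (p.x - p.px) * 0.9;
    var vy = (p.y - p.py) * 0.9;
    p.px = p.x; p.py = p.y;
    p.x += vx;
    p.y += vy + gravity; // +y is "down" here
  });
  // relax the distance constraints a few times
  for (var k = 0; k < 5; k++) {
    for (var i = 0; i < points.length - 1; i++) {
      var a = points[i], b = points[i + 1];
      var dx = b.x - a.x, dy = b.y - a.y;
      var d = Math.sqrt(dx * dx + dy * dy) || 1e-9;
      var diff = (d - spacing) / d;
      var w = a.invMass + b.invMass;
      if (w === 0) continue;
      a.x += dx * diff * (a.invMass / w);
      a.y += dy * diff * (a.invMass / w);
      b.x -= dx * diff * (b.invMass / w);
      b.y -= dy * diff * (b.invMass / w);
    }
  }
}
```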

A transparent layer is also added at ground level in the "above" scene to add some bluish diffuse and catch the lighting coming from the stars. Each star has a point light as a child. A tiled diffuse map and a normal map are assigned to the ground floor to add details and an interesting surface. All geometry besides the star and the sign is created procedurally with just primitives (box, plane and sphere). The facade got a specular map so the light reflects more in the windows, and a bump map adds some depth as well.

And finally, I couldn't get the Christmas feeling without the mighty bloom filter pass 🙂

Try messing around with the attachment points of the garland to see the physics, lighting and reflections in action. Notice how the wind affects the lights and the snow; it's actually an impulse force applied to the scene each frame.

Psst! Even though it’s Christmas, there is an easter egg somewhere. Can you find it?

Open Card

Interested in the source? Here is the GitHub repo.

Urban Jungle Street View – Behind the scenes


Urban Jungle Street View is an experiment I did to play with the depth data in Google Street View. I knew the information was there but had never successfully managed to parse it. Then, some weeks ago, I found a library that did it for me. Around the same time I started playing "The Last of Us" by Naughty Dog. About 15 minutes in, stunned by the environments, I just had to try out this idea instead (damn you, inspiration).


Here is a screenshot of the final result:

I posted the link on Twitter and the response was overwhelming: about 200,000 visits the first week, and it reached the front page of several sites. Even television picked it up, and I got interviewed for the Japanese TV show "Sukkiri!". Something triggers the inner adventurer in the audience for sure 🙂

2D Map

Using the Google Maps API is so easy and powerful; so much data in your hands. For the style of the map, I used the Styled Maps Wizard to see what different settings there are and to export them as JSON. One tip is to turn off all features first with

[{
    "stylers": [
        { "visibility": "off" }
    ]
}]

and after that turn on the features you want to show and style.

Street view coverage layer

I use the StreetViewCoverageLayer while dragging the little guy, to show all roads with Street View available. It's not just roads but also indoor places and special sites like the Taj Mahal. I like the effect in Google Maps where, when you hold Pegman over a road, he snaps to it with a direction arrow aligned to the road. That functionality is currently only available on the Google Maps website, not in version 3 of the API. So I made a custom version that doesn't work nearly as well, but comes close enough. You can calculate the pixel position and the tile index, then load the coverage tile manually, write it to a canvas and read the pixel at the corresponding local position. If the pixel is not transparent, we're hovering over a Street View-enabled road or place. I also inspect the color to avoid changing the state when hovering over an indoor panorama, which is orange. You can still drop him there, but it adds a visual cue that it's not the best spot. Sometimes you end up inside a building anyway, but I have not found a way to detect that without loading the data first.
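The tile-index and local-pixel math can be sketched with standard Web Mercator formulas. This is my own sketch, not the demo's code; the 256px tile size matches Google's map tiles:

```javascript
// Convert a lat/lng into a tile index plus the pixel position inside that
// tile, so the coverage tile can be fetched, drawn to a canvas, and read back.
var TILE = 256;

function latLngToTilePixel(lat, lng, zoom) {
  var scale = TILE * Math.pow(2, zoom);
  var siny = Math.sin(lat * Math.PI / 180);
  // world pixel coordinates (Web Mercator projection)
  var x = scale * (0.5 + lng / 360);
  var y = scale * (0.5 - Math.log((1 + siny) / (1 - siny)) / (4 * Math.PI));
  return {
    tileX: Math.floor(x / TILE),
    tileY: Math.floor(y / TILE),
    pixelX: Math.floor(x) % TILE,
    pixelY: Math.floor(y) % TILE
  };
}
```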

Street View

I use a library called GSVPano, created by Jaume Sanchez, to show the panoramas with WebGL and three.js. The source has been removed from the repo, though. To access the depth and normal data I used a different library. Read more about that here.

The data for each panorama is not a depth map directly (from the API) but is stored as a Base64-encoded string, to save bandwidth and requests, and in a form that makes it quick to parse for its actual purpose: finding the normal (the direction of a surface) and the distance to a point in the panorama, to aid the navigation UI. If you hover over the ground with the mouse cursor in Google Street View you get the circle cursor, and if you point at a building you get a rectangle aligned with the surface of the building. I'm using that behaviour as well when plotting out the foliage, but also passing the data to the shader as textures.

In the demo, the normal map takes care of making the ground transparent in the shader, so I can place a plane under the panorama sphere. The ground plane is visible through the transparent 'hole' (maybe I could create geometry instead somehow). The depth map is used to interpolate the ground from transparent to opaque. The depth data has a limited range (the visible range), so without the interpolation there would be a strange hard edge.

I can also add a little fog to the environment. The planes in the map don't exactly match the real-world geometry, so there can't be too much fog or the edges around the buildings become visible. In the end I removed it almost completely, since I liked a sunnier weather, but here is a screenshot with some more of it added:



The foliage consists of sprites (flat billboards) or simple geometry positioned in 3D space. To place the items, I pick random positions in a predefined area of the texture. First, to get a sense of how a flat image is wrapped around a sphere, look at this image (from Wikipedia):

Here you see the depth data parsed and displayed as a normal texture. The dots are where I calculated things to grow (I describe that more below).


When selecting a point in the texture you get the position on a sphere like this:

// u and v range between 0 and 1
var lat = u * 180 - 90;
var lon = v * 360 - 180;
var r = Math.cos(DEG_TO_RAD * lat);

var pos = new THREE.Vector3();
pos.x = r * Math.cos(DEG_TO_RAD * lon);
pos.y = Math.sin(DEG_TO_RAD * lat);
pos.z = r * Math.sin(DEG_TO_RAD * lon);

But if all objects are placed at a fixed distance they will all have the same scale, so the distance from the camera is important: the object is pushed away from the camera along the same angle. That value is stored in the depth data:
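That step boils down to scaling the unit direction by the stored distance. Here is a minimal sketch with plain objects instead of THREE.Vector3 (in three.js it would be pos.multiplyScalar(distance)); the names are assumptions:

```javascript
// The position on the unit sphere is just a direction; multiplying it by the
// distance from the depth data pushes the object to the right depth while
// keeping the same angle from the camera.
function placeAtDepth(unitPos, distance) {
  return {
    x: unitPos.x * distance,
    y: unitPos.y * distance,
    z: unitPos.z * distance
  };
}
```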


If I want the object to point at the camera, like the trees, it's just a matter of orienting it toward the camera position.


But if the object should align with the surface it's supposed to stick to, we use the normal value, the direction the wall is facing:

var v = pos.clone();
v.add( pointData.normal );


Here is a demo with this setting. To get a true 3D perspective the sprites could be replaced with real geometry, but leaves are quite forgiving of perspective issues, and sprites perform much better. If the angle to the camera gets too big it does look really strange, though.

If you click or tap the scene you can add more foliage. That's just the same process reversed: get the collision point on the sphere, convert it to UV space (0-1) on the texture, test against the data and get the preferred plant for that position, if any.

Fine tuning

Every time you visit a place, the foliage is randomly generated. I use some heuristics to get a smarter distribution of plants. If the calculated direction points upwards, grass is made to grow on the ground. If it's perpendicular to the ground, we can put some wall-foliage sprites there.

To hide the seam between the ground and the walls I use a trick to find that edge. The ground always has the same normal, which you can see as the pink color above. I iterate over the pixels in the texture from the bottom up to find the first pixel in a different color than the ground; that is where the edge is. I put a grass sprite there and move on to the next column of pixels to do the same.

To enhance the feeling of walls even further, let's introduce another type of plant: vines. These have stalks made of 3D geometry (a modified ExtrudeGeometry extruded along a bezier path, or "spline") with sprite-based leaves. To find a good place for them to grow I make another lookup in the normal map, starting from the edges I detected earlier. About 50 pixels up from each of those points I test if the value is still the same as the wall I found first. If it's a different color, it's probably a small wall and not an optimal place for a vine to grow; otherwise I can place one there, but not too many, so I also test for a minimum distance between them. It's buggy sometimes, but close enough for this demo. A couple of trees are also inserted as sprites pointing toward the camera. They could be more procedural, for a more interesting result.
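The bottom-up edge scan can be sketched like this. It operates on a flat RGB array standing in for the normal-map canvas; the function and parameter names are mine:

```javascript
// For each column, walk from the bottom row upwards until the pixel color
// differs from the known ground color; that row is where ground meets wall.
function findGroundEdges(data, width, height, groundColor, tolerance) {
  var edges = [];
  for (var x = 0; x < width; x++) {
    var edgeY = -1; // -1 means the whole column is ground
    for (var y = height - 1; y >= 0; y--) {
      var i = (y * width + x) * 3;
      var isGround =
        Math.abs(data[i] - groundColor[0]) <= tolerance &&
        Math.abs(data[i + 1] - groundColor[1]) <= tolerance &&
        Math.abs(data[i + 2] - groundColor[2]) <= tolerance;
      if (!isGround) { edgeY = y; break; }
    }
    edges.push(edgeY);
  }
  return edges;
}
```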


The links to nearby panoramas are included in the API result, so it's just a matter of adding the arrows and detecting interactions with THREE.Raycaster (example). The transition effect between locations is part of the post-effects pass; I just raise the amount of blur in a blur pass…

Post effects

Again, Jaume Sanchez provides more valuable source code in the form of Wagner, a post-effects composer for three.js. To blend the panorama layer and the 3D layer together, a couple of effects are added to the end result: first a bloom filter, which adds glow to bright areas, then a dirt pass, which adds dust and raindrops to the composition. The final touch is a little camera movement, so the scene looks more alive and less like a still image. There is definitely room for improvement, especially in the color balance of the scene, and with more advanced geometry and procedurally generated plants.

Try it now

Now that that's out of my system, let's finally return to "The Last of Us"…



This was a demo I did for an advent calendar. You create a number of snowballs and put them together into a snowman. The trail is created by drawing into a canvas which is sent to the ground shader as a height map. In the vertex shader the vertices are displaced along the y-axis, and in the fragment shader the map is used for bump mapping. The snow also becomes a bit darker where it is below a threshold height. The shape of the track is defined by the radial gradient used when drawing into the canvas. I like that you can run over the existing tracks and leave a bit of a trail in the current direction; that adds more depth than a static layer painted on top.
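The trail drawing can be sketched by stamping a radial falloff into a height grid each frame, like drawing a radial gradient into the canvas. This is an illustration with a plain array instead of a canvas; all names are assumptions:

```javascript
// Stamp a radial falloff (depth at the center, 0 at the rim) into a height
// grid at the ball's position. The vertex shader would later read this as a
// displacement along y.
function stampTrail(heightmap, width, height, cx, cy, radius, depth) {
  for (var y = Math.ceil(cy - radius); y <= cy + radius; y++) {
    for (var x = Math.ceil(cx - radius); x <= cx + radius; x++) {
      if (x < 0 || y < 0 || x >= width || y >= height) continue;
      var d = Math.sqrt((x - cx) * (x - cx) + (y - cy) * (y - cy));
      if (d > radius) continue;
      var amount = depth * (1 - d / radius);         // radial falloff
      var i = y * width + x;
      heightmap[i] = Math.max(heightmap[i], amount); // the deepest stamp wins
    }
  }
}
```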

In an earlier version the ball geometry was affected by actually touching the snow, extruding vertices once per rotation if a vertex was below the snow. But when the focus shifted to creating a snowman, it made more sense to have perfectly round snowballs. I'm not so happy about the material on the snowballs: the UV wrapping of the spherical texture creates the familiar artefacts at the poles, pinching the texture. There was not enough time to look into it back then. A procedural approach would have been nice to avoid it, or skipping UV coordinates and calculating them in a shader instead. Like these guys (@thespite, @claudioguglieri, @Sejnulla), who did a nice site around the same time with a much better snow shader. That shader can also be seen here.

Collision detection is just a simple distance test between the spheres, plus a test against the bounding box.
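The sphere-sphere part of that test is a one-liner: two spheres overlap when the distance between their centers is less than the sum of their radii. A sketch, comparing squared distances to avoid the square root:

```javascript
// Sphere-sphere overlap test using squared distances.
function spheresCollide(a, b) {
  var dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
  var r = a.radius + b.radius;
  return dx * dx + dy * dy + dz * dz < r * r;
}
```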

Try it out here.

View the source at github.



I almost forgot to post this little demo. It's a remake of an old experiment with procedural eyes that I made in Flash a long time ago. (Actually, the old one looks kind of cooler than this; I think it's the normal mapping that makes the difference. I haven't gotten to the point of doing that procedurally in a shader yet; I'd probably have to use off-screen renderers and such.)

This time it’s done with three.js and shaders. So basically I just started of with the sphere-geometry. The vertices near the edge of the z-axis got an inverted z-position in relation to the original radius, forming the iris as a concave parabolic disc on the sphere.

The shader then does its thing. By reading the uv-coordinates and the surface positions I apply different patterns to different sections. I separate those sections in the shader using a combination of step (or smoothstep) and mix. Step is almost like an if-statement, returning a factor that is 1 inside the range and 0 outside; smoothstep blurs the edge for a smoother transition. You can then use that factor in mix to choose between two colors, for example the pupil and the iris. I mentioned this method in my last post as well.
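To make the trick concrete, here are JS equivalents of those GLSL helpers, plus a toy example of picking between two shades by radius (the 0.18/0.22 thresholds and brightness values are invented for the example, not taken from the demo):

```javascript
// JS versions of the GLSL built-ins used for sectioning the eye.
function step(edge, x) { return x < edge ? 0 : 1; }
function smoothstep(e0, e1, x) {
  var t = Math.min(Math.max((x - e0) / (e1 - e0), 0), 1);
  return t * t * (3 - 2 * t); // Hermite interpolation, like GLSL smoothstep
}
function mix(a, b, t) { return a + (b - a) * t; }

// e.g. blending from a dark pupil (0.0) to a brighter iris (0.8) by radius:
function eyeShade(radius) {
  return mix(0.0, 0.8, smoothstep(0.18, 0.22, radius));
}
```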

Check out the fragment shader here

To give the eye its reflective look, I added a second sphere, just outside the first one, with an environment map. The reason for not using the same geometry was that I wanted the reflection on the correct surface around the iris. Making the second sphere just slightly larger also gives the surface a kind of thickness.

To spice it up a little, I added some realtime audio-analysis effects with a nifty little library called Audiokeys.js. With it you can specify a range of frequencies to listen to and get the summed level. Perfect for controlling parameters and sending them to the shaders as uniforms.

Open demo

WebGL Lathe Workshop

This is take two on a little toy that I posted some weeks ago on Twitter. I've added some new features, so it's not completely old news: there are two new materials and dynamic sounds (Chrome only).

This demo is a fun way to demonstrate a simple procedural shader. As you start working with the lathe machine, you carve into an imaginary space defined by numbers and calculations. I like to imagine the surface as a viewport into a calculated world, ready to be explored and tweaked. It's nothing fancy compared to the fractal ray-marching shaders around, but there is something magical about setting up some simple rules and being able to travel through them visually. Remember, you get access to the object's surface coordinates in 3D for each rendered pixel. This is a big difference compared to 3D without shaders, where you have to calculate this on a 2D plane and wrap it around the object with UV coordinates instead. I did an experiment in Flash 10 ages ago doing just that, one face at a time generated to a bitmapData. Now you get it for free.

Some words about the process of making this demo. Some knowledge of GLSL may come in handy; this is more of a personal log than a public tutorial. I won't recommend any particular workflow, since that really depends on your needs from case to case. My demos are also completely unoptimized, with lots of scattered files all over the place. That's why I'm not on GitHub 🙂 I'm already off to the next experiment instead.

When testing a new idea for a shader, I often assemble all the bits and pieces into a pair of external glsl files to get a quick overview. There are often tweaks I want to try that aren't supported, or are structured differently, in the standard shader chunks available. My theory is also that if I edit the source manually and view it more often, I get a better understanding of what's going on. And don't underestimate the force of changing values randomly (a strong force for me). You can literally turn stone into gold if you are lucky.

The stone material in the demo is just ordinary procedural Perlin noise. The wood and metal shaders, though, are built around these features:

1. Coat

What if I could have a layer that is shown before you start carving? My first idea was to have an offstage canvas to draw on and then mix it with the procedural shader like a splat map. But then I found a more obvious solution: I just use the radius with the step method (where you set the edge values for a range). Like setting the value to 1 when the normalized radius is more than 0.95 and 0 when it's less; with smoothstep the edge gets less sharp. Then I use this value with mix to interpolate between two different colors: the coat and the underlying part. In the wood shader the coat is a texture, and in the metal shader it's a single color.

DiffuseColor = mix(DiffuseColor1, DiffuseColor2, smoothstep(.91, .98, distance(mPosition, vec3(0.0, 0.0, mPosition.z)) / uMaxRadius));

2. Patterns

This is the grain revealed inside the wood and the tracks the chisel makes in the metal cylinder. Basically, I again interpolate between two colors depending on the fractional part, or the sine output, of the position. Say we give all positions between x.8 and x along the x-axis a different color: over a width of 10 units that is 10 stripes, each 0.2 units thick. By scaling the values you get different sizes of the pattern. But it's pretty static without the…
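The fractional-part trick above can be sketched in a few lines (plain JS rather than GLSL; the names are mine):

```javascript
// Take the fractional part of a scaled coordinate and pick one of two colors
// depending on where in the cycle it falls. Positions whose fractional part is
// above 0.8 get the second color, giving stripes 0.2 of a unit thick; changing
// the scale changes the stripe width.
function fract(x) { return x - Math.floor(x); }

function stripeColor(x, scale, colorA, colorB) {
  return fract(x * scale) > 0.8 ? colorB : colorA;
}
```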

3. Noise

To make the pattern from the previous step look more realistic, I turn to a very common method: making noise. I'm using 3D simplex noise (a kind of Perlin noise 2.0) with the surface position to get a corresponding noise value. The value returned for each calculation/pixel is a single float, which in turn modifies the mix between the two colors in the stripes and rings. The result is like a 3D smudge tool, pushing the texture in different directions.
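The plumbing can be illustrated without a full simplex implementation. This stand-in uses a cheap deterministic hash instead of real simplex noise (the real thing varies smoothly, this hash does not), just to show how a per-position noise value offsets the stripe coordinate before the color lookup:

```javascript
// Deterministic pseudo-random value in [0, 1) per 3D position. A stand-in for
// simplex noise; the constants are the common sin-hash trick, not from the demo.
function hashNoise(x, y, z) {
  var n = Math.sin(x * 12.9898 + y * 78.233 + z * 37.719) * 43758.5453;
  return n - Math.floor(n);
}

// Perturb the stripe coordinate with the noise value before sampling the
// stripes; strength controls how far the pattern gets "smudged".
function smudgedStripeCoord(x, y, z, scale, strength) {
  var offset = (hashNoise(x, y, z) - 0.5) * strength;
  return (x + offset) * scale; // feed this into the stripe color lookup
}
```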

4. Lighting

I’m using Phong-shading with three directional light sources to make the metal looking more reflective without the need of an environment map. You can see how the lights forms three horizontal trails in the cylinder, with the brightest one in the middle.

5. Shadows

I just copied the shadow-map part from WebGLShaders.js into my shader. One thing worth mentioning is the set of properties available on directional lights for setting the shadow frustum (shadowCameraLeft, shadowCameraRight, shadowCameraTop and shadowCameraBottom). You also get helper guides if you enable dirLight.shadowCameraVisible = true, which makes it easier to set the values.

When the shadows are calculated and rendered, the light is used as a camera, and with these settings you can limit the "area" of the light's view that gets squeezed into the shadow map. With spotlights the frustum is already set, but with directional lights you can define one that suits your needs. The shadow map has a fixed size for each light, and when drawing to this buffer the render pipeline will automatically fit all your shadows into it. So the smaller the bounding box of the geometries you send to it, the higher the resolution of detail you get. Play around with these settings if you have a large scene with limited shadowed areas, and keep an eye on the resolution of the shadows: if the scene is small, your shadow-map size is large and the shadows are still low-res, your settings are probably wrong. Also check the scales in your scene. I've had trouble in some cases when exporting from 3D Studio Max, when the scaling is too small or too large, so that the relations to the cameras go all wrong.

That’s it for know, try it out here. And please, ask questions and correct me if I’m wrong or missing something. See you later!



Clove orange

One last post before the holidays. Some days ago I posted a link to a WebGL Christmas craft demo made with three.js. For those wondering what it was about: it's called a Christmas orange pomander, a decoration that we traditionally make in Sweden and some other European countries around Christmas. You push cloves, that nice fragrant spice, into an orange or clementine, either randomly or, like me, in structured patterns. Put some silk ribbons around it and hang it somewhere for a nice Christmas atmosphere. Try making your own here. The fun part is attaching the cloves with javascript; just open the "into code?" button and play around. As an example, the image above loads a Twitter avatar and plots it onto the orange. You can also save your orange as an image, or share the 3D version with a unique URL.

I want to take the opportunity to thank all readers and followers for a great year! I'm amazed that over 62,000 users found this little blog. I have really enjoyed experimenting and making demos to share with you. Many blogs in the community have disappeared in favor of instant Twitter publishing. I will continue to use Twitter as an important channel, but I will also try to keep this space updated, since here I can go a little deeper and do write-ups about the techniques behind things.

Next year has even more in store for me: I will join the talented people over at North Kingdom in Stockholm. Exciting future ahead!

Enjoy your holiday and see you next year!

Set a sphere on fire with three.js

Fire it up!

Next element of nature in line: smoke and fire! I did a version in Flash about a year ago, playing with 4D Perlin noise. That bitmapData was something like 128×128 pixels, using haxe to work with fast memory, at 20 fps. Now we can do fullscreen at 60+ fps. Sweet.

First, note that the code snippets below are taken out of context and use built-in variables and syntax from three.js. If you have written shaders before you will recognize variables like mvPosition and viewMatrix, or uniforms and texture lookups.

The noise is generated in a fragment shader applied to a plane inside a hidden scene, rendered to a texture and linked to the target material. This process is forked from this demo by @alteredq. Before wrapping the texture around the sphere we have to adjust the output of the noise to be suitable for spherical mapping. If we don't, the texture will look pinched at the poles, as seen in many examples. This is one of the advantages of generating procedural noise: you can modify the output right from the start. This way, each fragment on the sphere gets the same level of detail in the texture, instead of stretching a flat image across the surface. For spheres, the following technique is one way of finding where (in the imaginary volume of noise) to fetch the noise value:

float PI = 3.14159265;
float TWOPI = 6.28318531;
float baseRadius = 1.0;

vec3 sphere( float u, float v) {
	u *= PI;
	v *= TWOPI;
	vec3 pSphere;
	pSphere.x = baseRadius * cos(v) * sin(u);
	pSphere.y = baseRadius * sin(v) * sin(u);
	pSphere.z = baseRadius * cos(u);
	return pSphere;
}

Notice the 2D position that goes into the function and the returned vec3 type: the uv-coordinates are translated to a 3D position on the sphere. Now we can just look up the noise value there and plot it at the original uv-position. Changing baseRadius affects the repetition of the texture.

Another approach is to do everything in the same shader pair, both noise and final output. We get the interpolated world position of each fragment from the vertex shader, so no need for the mapping, right? But since this example uses displacement in the vertex shader, I need to create the texture in a separate pass and send it as a flat texture to the last shader.

So now we have a sphere with a noise texture applied to it. It looks nice, but the surface is flat.

I mentioned displacement; this is all you have to do:

#ifdef VERTEX_TEXTURES
	vec3 texturePosition = texture2D( tHeightMap, vUv ).xyz;
	float dispFactor = uDispScale * texturePosition.x + uDispBias;
	vec4 dispPos = vec4( normal * dispFactor, 0.0 ) + mvPosition;
	gl_Position = projectionMatrix * dispPos;
#else
	gl_Position = projectionMatrix * mvPosition;
#endif


uDispScale and uDispBias are uniform variables that we can control from javascript at runtime. We just add a value to the original position along the normal direction. The #ifdef condition lets us use a fallback for browsers or environments that don't support texture lookups in the vertex shader. That feature isn't available in all implementations of WebGL, but depending on your hardware and drivers it should work in the latest Chrome and Firefox builds (correct me if I'm wrong here). It's crucial for this demo, so I show an alert box to the unlucky users without it.

After adding displacement I also animate the vertices. You can, for example, modify the vertex positions based on time, the vertex position itself, or the uv-coordinates, to make the effects more dynamic. One of the effects available in the settings panel is "twist". I found it somewhere I can't remember now, but here it is if you're interested:

vec4 DoTwist( vec4 pos, float t ) {
	float st = sin(t);
	float ct = cos(t);
	vec4 new_pos;

	new_pos.x = pos.x*ct - pos.z*st;
	new_pos.z = pos.x*st + pos.z*ct;

	new_pos.y = pos.y;
	new_pos.w = pos.w;

	return( new_pos );
}

//this goes into the main function
float angle_rad = uTwist * 3.14159 / 180.0;
float force = (position.y-uHeight*0.5)/-uHeight * angle_rad;

vec4 twistedPosition = DoTwist(mPosition, force);
vec4 twistedNormal = DoTwist(vec4(vNormal,1.0), force);

//change matrix
vec4 mvPosition = viewMatrix * twistedPosition;


The force variable has to be adapted to your needs. Basically you set a starting position and a range that is going to be modified.

Then we reach the final fragment shader. This one is fairly simple: I blend two colors, multiply them with one of the channels of the height map (the noise texture), then mix the result with the smoke color. All colors get their values based on the screen coordinate, a quick fix for this solution. Lastly I apply fog to make the fire blend into the background, so it looks less like a sphere.

For some final touch, a bloom filter is applied and some particles are thrown in as well. Had some help from @aerotwist and his particle tutorial.

That’s it folks, thanks for reading.




More time playing with Three.js and WebGL. My primary goal was to get a deeper understanding of shaders by just messing around a little. If you want to know the basics about shaders in three.js I recommend reading this tutorial by Paul Lewis (@aerotwist). In this post I will try to explain some of the things I've learned in the process so far. I know this is like kindergarten for "real" CG programmers, but history often repeats itself in this field. Let us pretend that we are cool for a while 😉

Run the demo

The wave

I started off making a wave model, and a simple plane as the ocean in the distance, in 3D Studio Max. Later on I painted the wave with vertex colors. These colors were used in the shader to mimic the translucence of the wave. I could have used real subsurface scattering, but this simple workaround worked out well. The vertex colors are mixed with regular diffuse and ambient colors. A normal map was then applied with a tiled noise pattern for ripples. The normals from this texture don't update relative to the vertex animation, but that's fine. On top of that, a foam texture with white Perlin noise was blended with the diffuse color. I tried an environment map for reflections as well, but that was removed in the final version since the water looked more like metal or glass.

I've always liked the "behind the scenes" part showing the passes of a rendering with all layers separated and finally blended, so here is one of my own so you can see the different steps:


Animating with the vertex shader

My wave is a static mesh. For the wobbly motion across the wave, I animate vertices in the vertex shader. One movement along the y-axis creates a wavy motion and one along the x-axis adds some smaller ripples. You have probably seen this often, since it's an easy way to animate something with just a time value passed in every frame.

vertexPosition.y += sin( (vertexPosition.x/waveLength) + theta )*maxDiff;

The time offset (theta) together with the x-position goes into a sine function that generates a periodic cycle with a value between -1 and 1. Multiply that with a factor and you animate the vertices up and down. To get more crests along the mesh, divide the position by waveLength.
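The same line in plain JavaScript, if you want to play with the numbers (waveLength and maxDiff are just tuning values here):

```javascript
// Vertical displacement for a vertex at x, given this frame's time offset.
function waveHeight(x, theta, waveLength, maxDiff) {
  return Math.sin(x / waveLength + theta) * maxDiff;
}

// As theta grows each frame, the crest travels along x:
console.log(waveHeight(0, 0, 10, 2));           // 0 – flat at the start
console.log(waveHeight(0, Math.PI / 2, 10, 2)); // 2 – a full crest
```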

A side note: I tried to use vertex texture look-ups (supported in the ANGLE version) for real vertex displacements. With that I could get more unique and dynamic animations. I plotted a trail in a hidden canvas to get jetski backwash, but the resolution of the wave mesh was too low and I had problems matching the position of the jetski with the UV-mapping. A flat surface is simple, but the jetski uses collision detection to stick to the surface, and I have no idea how to get the uv from the collision point. However, I learned to use a separate dynamic canvas as a texture. I used a similar technique in my snowboard experiment in Flash, but that was of course without hardware acceleration.

Animating with the fragment shader

The wave is not actually moving, it's just the uv-mapping. The same time offset used in the vertex shader is here responsible for "pushing" the uv-map forward. You can also edit the UV-mapping in your 3D program to modify the motion. In this case, the top of the breaking part of the wave had to go in the opposite direction, so I just rotated the uv-mapping 180 degrees for those faces. My solution works fine as long as you don't look over the ridge of the wave and see the seam. I think, in theory, you could make a seamless transition between the two directions, but it's beyond my UV-mapping skills. You can also use different offset values for different textures, like a Quake sky with parallaxed layers of clouds.
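A minimal sketch of the idea in plain JavaScript (not the actual shader code): push the u-coordinate forward with the time offset and wrap it into [0,1) so the texture repeats.

```javascript
// Scroll a u-coordinate by theta and wrap into [0,1). The double modulo
// keeps negative offsets (the faces going the other way) in range too.
function scrollU(u, theta) {
  return ((u + theta) % 1 + 1) % 1;
}

console.log(scrollU(0.8, 0.5));  // ~0.3 – wrapped past the seam
console.log(scrollU(0.2, -0.5)); // ~0.7 – scrolling backwards
```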

Fighting about z with the Depth Buffer

I was aware of the depth buffer, or z-buffer, which stores the depth value for each fragment. Besides lighting calculations, post-effects and other stuff, this data is used to determine if a fragment is in front of or behind other fragments. But I had not reflected on how it actually works before. For those of us coming from Flash 3D (pre-Molehill) this is a new technique in the pipeline (even though some experiments like this used it). Now, when the mighty GPU is in charge, we don't have flickering sorting problems, right? No, artifacts still appear. We have the same problem as with triangles, but now on a fragment level. Basically, it happens when two primitives (triangle, line or point) get the same value in the buffer, with the result that sometimes a fragment shows up and sometimes it doesn't. The reason is that the resolution of the buffer is too small in that specific range, which leads to z-values snapping to the same depth-map value.

It can be a little hard to imagine the meaning of resolution, but I will try to explain, with emphasis on trying, since I just learned this. The buffer is non-linear. The amount of numbers available is related to the z-position, the near plane and the far plane. There are more numbers to use, or higher resolution, closer to the eye/camera, and fewer and fewer assigned numbers as the distance increases. Stuff in the distance will share a shorter range of available numbers than the things right in front of you. That is quite useful: you probably need more detail right in front of you, where details are larger. The total amount of numbers, or call it precision, differs depending on the graphics card, with possible 16-, 24- or 32-bit buffers. Simply put, more space for data.

To avoid these artifacts we have some techniques to be aware of, and here comes the newbie lesson learned this time. By adjusting the far and near planes of the camera object, you can decide where the resolution will be focused. Previously, I used near=0.001 and far="some large number". This causes the resolution to be very high between 0.001 and "another small number". (There is a formula for this, but it's not important right now.) Thus, the depth buffer resolution was too low when the nearest triangle was drawn. So the simple thing to remember is: set the near value as far away as you can while your scene still works. The value of the far plane is not as important because of the nonlinear nature of the scale, as long as the near value is relatively small.
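The formula I'm waving at is the standard depth mapping, and a small plain-JavaScript sketch (my own illustration, not code from the demo) shows why the near plane matters so much:

```javascript
// Window-space depth for an eye-space distance z, with the standard
// OpenGL-style mapping: d(near) = 0, d(far) = 1, strongly non-linear.
function depthValue(z, near, far) {
  return (far / (far - near)) * (1 - near / z);
}

// With near = 0.1 and far = 1000, roughly half of all depth values are
// spent on the tiny sliver between z = 0.1 and z = 0.2:
console.log(depthValue(0.2, 0.1, 1000)); // ~0.5
// Pushing near out to 1.0 spreads that dense half over 1..2 units instead:
console.log(depthValue(2.0, 1.0, 1000)); // ~0.5
```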

Another problem with occlusion calculations is when using color values with alpha. In those cases we sometimes get multiple "layers" of primitives, but only one value in the depth buffer. That can be handled by blending the color with the one already stored in the output (frame buffer). Another solution is to set depthTest to false on the material, but obviously that is not possible in cases where we need the face to be behind something. We can also draw the scene from front to back or in different passes, with the transparent objects last, but I have not used that technique yet.

Thanks AlteredQualia (@alteredq) for pointing this out!

Collision detection

I first used the original mesh for collision detection. It ran pretty fast in Chrome but painfully slowly in Firefox. Iterating over arrays seems to differ greatly in performance between browsers. A quick tip: when using collision detection, use separate hidden meshes with only the faces needed to do the job.

Smoke, Splashes and Sprites

All the smoke and white water are instances of the object type Sprite: a faster and more lightweight type of object, always facing you like a billboard. I have said it before, but one important aspect here is to use an object pool so that you can reuse every sprite created. Constantly creating new ones will kill performance and memory allocation.
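A minimal object pool sketch in plain JavaScript (the factory function is a stand-in for whatever creates your sprites):

```javascript
// Reuse objects instead of allocating new ones every frame.
class SpritePool {
  constructor(factory) {
    this.factory = factory;
    this.free = [];
  }
  get() {
    // Hand out a recycled object if one exists, otherwise create one.
    return this.free.length > 0 ? this.free.pop() : this.factory();
  }
  release(sprite) {
    this.free.push(sprite); // back in the pool for later reuse
  }
}

const pool = new SpritePool(() => ({ x: 0, y: 0 }));
const a = pool.get();  // pool empty: a fresh object is created
pool.release(a);       // sprite leaves the screen: return it
const b = pool.get();  // the same object comes back out
console.log(a === b);  // true
```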

I’m not really pleased with the final solution of these particles, but good enough for a quick experiment like this.

Learning Javascript

As a newcomer to the JavaScript world I'm still learning the obvious stuff, like console debugging. It is a huge benefit to change variables in real time via the console in Chrome. For instance, setting the camera position without the need for a UI or switching programs.

And I still end up writing almost all of my code in one huge file. With all the frameworks and build tools around, I'm not really sure where to start. It's a lame excuse, I know, but at the same time, when doing experiments I rarely know where I'm going, so writing "correct" OOP and structured code is not the main goal.

Once again, thanks for reading! And please share your thoughts with me. I really appreciate feedback, especially corrections or pointers to misunderstandings, so I can do things in a better manner.

The beanstalk

Some of you have already seen my latest experiment, which I published via Twitter some days ago. Nothing new since then; I just want to put it up here as well. This is my second time trying out three.js. It's a very competent framework with lots of features and examples available. If you have an idea, it's blazing fast to put together a quick demo.

The inspiration for this experiment is planted in my own garden. The picture to the left shows the beanstalk growing at massive speed. The virtual one uses the object from the last demo as a base, but without the recursive child branches. It's a custom geometry object, so I can control the initial shape and uvs, and save references to the vertices in different arrays for later manipulation. The illusion of movement is made up of points acting as offset values for each ring of vertices in this tube (or cone), but controlling the xy-position only. Those values are calculated just like a classic ribbon: the body recursively follows the head. In each frame the offset of one joint is copied from the one in front of it. The speed is directly related to the frame rate and the length of a segment; otherwise I would have to interpolate the values somehow.
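The follow-the-head logic can be sketched like this in plain JavaScript (a simplified 2D version with made-up names):

```javascript
// Ribbon-style follow: each joint copies the offset of the joint in front
// of it, so the head's motion ripples down the body one frame per segment.
const joints = [{ x: 0, y: 0 }, { x: 0, y: 0 }, { x: 0, y: 0 }];

function updateRibbon(headX, headY) {
  // Walk from tail to head so each joint takes last frame's value ahead of it.
  for (let i = joints.length - 1; i > 0; i--) {
    joints[i].x = joints[i - 1].x;
    joints[i].y = joints[i - 1].y;
  }
  joints[0].x = headX;
  joints[0].y = headY;
}

updateRibbon(10, 5); // head moves; the body lags behind
updateRibbon(20, 8);
console.log(joints[1]); // { x: 10, y: 5 } – one frame behind the head
```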

At a given interval a leaf (an object exported from 3D Studio Max) is spawned at the position of the head and then, on each frame, translated along the z-axis by the length of a segment. One important thing here is to reuse those leaves, so when they are behind the camera it's time for a swim in the object pool for later use.

The control of the plant is made up of three components: mouse movements, sound spectrum and a circular movement. The style changes every 15 seconds, blending these components in various ways. The sound spectrum, or amplitude, is exported with this little Python snippet from @mrdoob.

I made some tests with post-processing effects like bloom as well, but the result was not what I wanted, so I ended up disabling them. The bloom effect is still in the source though, if you want to see how it's done. (forked from BTW…)

I'm having some problems with artifacts between intersecting objects though. I have minimal experience with depth-buffer tests and triangle sorting, so I can't tell if it's a standard problem or caused by me or the engine. Depth-buffer resolution is my guess: too long a range between near and far and too few "steps" in the depth texture. Maybe it's different on different machines and hardware. If you have a clue, just let me know, right?

I really want to do real projects with real 3D graphics soon. I envy those of you who can put on the label "a Chrome experience". However, I hope these little demos made by the community help spread the knowledge and the inspiration that you can do more than games with this technique. Some of you have clients with tech-friendly target groups, with the latest browsers installed or who at least know where the update button is. Let's help the future and start accelerating some hardware!

Enough talking, show me the demo already!

Some more pictures:


Tree.s and Three.js

This is a remake of an old experiment I did in Papervision3D a couple of years ago. This version is written in JavaScript and Three.js (by @mrdoob and his friends). It is really nice to play with the powers of WebGL. I have only used the built-in shaders in Three.js so far, but it seems to be a quite straightforward process. Molehill and Away3D still get a lot of attention from me, but it is a special feeling to work straight in the browser. I can really recommend trying it out.

Launch demo

A brief introduction to 3D

With WebGL and Molehill around there is a new playground for us Flash developers. At first I felt a bit stressed over the fact that I have to compete with people who have been doing this for years on desktops, consoles, mobiles and with plugins like Unity. But it's just a good thing. There are loads of information and knowledge around waiting for us to consume. We can cut to the chase and use the techniques developed by the people before us. We are also several years behind cutting-edge 3D regarding performance and features, so the 3D community has produced loads of forum threads, tutorials and papers. It will be interesting to see how the Flash community will embrace this "new" technique. Games are an obvious field, but what about campaigns and microsites?

Some days ago I had the opportunity to give a presentation about 3D basics. My goal was to give an overview of topics that could be good to look into as a starter. I have tried to pick them from the perspective of an "online developer", with Molehill, WebGL and Unity as a foundation. The word cloud above shows some of the words I put into context. It's basic stuff, in keywords and bullet lists, so think of this as a dictionary of topics to find more information about on your own. There are some cool demos and resources in there as well. If you are into 3D programming already, this is probably not so interesting for you; instead you can help me review the slides and point out possible improvements ;).

Here is the presentation as PowerPoint (16 MB, with fancy transitions and animations) and PDF (6 MB, not so fancy, but the comments are more visible). If you don't have PowerPoint installed you can just download the PowerPoint Viewer here instead.

Important topics missing? Things I got wrong? I would love to hear some feedback on things to improve.