Brick Street View



Had some more Street View panorama fun a couple of weeks ago, this time combining it with virtual LEGO. If you haven't seen the site, go there first and I'll tell you more afterwards:

A short walkthrough of what’s going on:


To make the map look like a giant baseplate, I use a tiled transparent texture overlay. When the center of the map moves, the background-position of the layer is updated to match the movement, and when zooming, the background-size is resized. On top of that I first add a gradient fill for some shininess, and then the three.js layer. This contains some trees and bushes, and also the landmarks at predefined locations (found in the shortcuts popup, marked with a star symbol in the top menu).

Every object placed on the map has a corresponding google.maps.Marker. When needed (on map-center change or zooming), the marker position is transformed into 3D space to update the meshes. The experience is limited to desktop users, because of a bug on touch-enabled devices that prevents the marker positions from updating while dragging (or rather, the projection on the map that is used internally to calculate the container position in pixels). The result is that the meshes jump to the correct position only on the touchend event. To make the objects look even more attached to the map, I added soft shadows.
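The marker-to-scene transform boils down to Web Mercator math. Here is a hedged sketch (not the site's actual code, and `markerOffsetPx` is an illustrative name) of projecting a marker's lat/lng into pixel offsets relative to the map center, the kind of value you would feed into the three.js scene:

```javascript
// Web Mercator world coordinates, the same math that
// google.maps.Projection.fromLatLngToPoint implements.
// World space is 256x256 at zoom 0; multiply by 2^zoom for pixels.
var TILE_SIZE = 256;

function latLngToWorld(lat, lng) {
  // clamp sin(lat) so the poles don't go to infinity
  var siny = Math.min(Math.max(Math.sin((lat * Math.PI) / 180), -0.9999), 0.9999);
  return {
    x: TILE_SIZE * (0.5 + lng / 360),
    y: TILE_SIZE * (0.5 - Math.log((1 + siny) / (1 - siny)) / (4 * Math.PI))
  };
}

// Pixel offset of a marker relative to the map center at a given zoom.
function markerOffsetPx(markerLatLng, centerLatLng, zoom) {
  var scale = Math.pow(2, zoom);
  var m = latLngToWorld(markerLatLng.lat, markerLatLng.lng);
  var c = latLngToWorld(centerLatLng.lat, centerLatLng.lng);
  return { x: (m.x - c.x) * scale, y: (m.y - c.y) * scale };
}
```

With an offset like this per marker, updating the meshes on map movement is just a matter of mapping the pixel offset onto the ground plane's x/z axes.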

To place the trees in good spots, I use the Google Places API, searching for parks within a radius of 1500 m. To display where the map center currently is, I query the geocoder in the Google Maps API.

Street View

I have to render the scene in two steps: first the background, then the rest of the scene in a second pass. I do this to make sure the panorama sphere is always sorted between the background and the LEGO models. The additional cost is low, since both the background and the panorama are very low-poly.

Constructing the LEGO panorama

This is the core idea of the whole experiment. To be honest, the results range from 'nice idea, but…' to 'WTF'. It very much depends on the quality of the depth data and the color of the sky. Here is the workflow for creating a panorama:

– Load the low-res panorama with a low zoom value. I use GSVPano for this, which supports high-resolution tiling, but since my goal is to pixelate the image, that detail is not needed. The result is returned in a canvas.

– Flood-fill the canvas along the top edge of the texture, where there is hopefully a clear blue sky (or smog, for that matter). I sample some more points vertically in the direction of the streets to wipe out more of the sky there. I have fiddled with this back and forth to find good settings, but there is still room for improvement, like color detection, or comparing against the depth/normal data. The erased pixels are replaced with a single color, for easy detection in the next step.

– Create a high-res canvas and draw LEGO bricks into it based on the color values of the smaller canvas. Pixels with the color defined by the flood-fill are flagged as sky, and no bricks are drawn for them, nor for any bricks above a detected sky pixel. That way, clouds above a detected sky pixel also get erased, so bricks don't float in the air. For the ground, the depth data is used to exclude bricks: for each brick, I check if it belongs to the ground plane (in the panorama depth data) and skip it as well. If you browse the site and press "C", you'll see the normal map as an overlay; pressing "X" reveals the original image.

– Three different bricks are drawn to make it less repetitive: a flat one, one with a variable rotation, and one with a peg on the side. The last one is used more often when the color is closer to green, so trees and bushes get some more volume. The build is nowhere close to being correct in any sense, especially when it comes to perspective, sizes or wrapping textures on spheres, so you need some imagination (or to scroll around very fast) to accept the illusion.

– Then the fun starts, playing with real LEGO:
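The sky flood-fill from the steps above can be sketched in plain JavaScript. This is a simplified, hypothetical version, with a flat array of 24-bit colors standing in for the canvas ImageData and a single seed color taken from the top-left corner:

```javascript
var SKY_SENTINEL = 0xff00ff; // the single replacement color the brick pass detects

function floodFillSky(pixels, width, height, tolerance) {
  var visited = new Uint8Array(pixels.length);
  var stack = [];
  for (var x = 0; x < width; x++) stack.push(x); // seed along the top edge

  var ref = pixels[0]; // assume the top-left pixel is sky
  function close(c) {
    return Math.abs(((c >> 16) & 0xff) - ((ref >> 16) & 0xff)) < tolerance &&
           Math.abs(((c >> 8) & 0xff) - ((ref >> 8) & 0xff)) < tolerance &&
           Math.abs((c & 0xff) - (ref & 0xff)) < tolerance;
  }

  while (stack.length) {
    var i = stack.pop();
    if (visited[i] || !close(pixels[i])) continue;
    visited[i] = 1;
    pixels[i] = SKY_SENTINEL; // erase: replace with the sentinel color
    var px = i % width, py = (i / width) | 0;
    if (px > 0) stack.push(i - 1);
    if (px < width - 1) stack.push(i + 1);
    if (py > 0) stack.push(i - width);
    if (py < height - 1) stack.push(i + width);
  }
  return pixels;
}
```

The real version also seeds extra points vertically in the direction of the streets, as described above; a per-channel tolerance somewhere around 30 is a plausible starting point.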


LEGO models are added, such as cars, flowers and trees. Two kinds of street baseplates are loaded, used for a crossroad in the center and for the roads. Those are only visible if there are links provided in the panorama metadata. Sometimes in crossings you can see a road in the imagery, but it's not linked until the next panorama in front of you, so the baseplates don't always reflect the visible roads correctly. For the 3D models I use a format called LDraw, an open standard for LEGO CAD programs that lets users create virtual LEGO models and scenes. All parts are stored in a folder, and each file can reference other parts in the library. For example, every single peg on all bricks is just a reference to the same file, called stud.dat. With software like Bricksmith and LEGO Digital Designer you can create models and export them to this format. There are also loads of models available on the internet, created by the LEGO community. Since it's just a text file containing references, colors, positions and rotations, it's straightforward to parse into geometry.
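Since LDraw files are line-based, the references are easy to pull out. A minimal, hypothetical parser for line type 1 (the sub-file reference) could look like this:

```javascript
// LDraw line type 1: "1 <color> x y z a b c d e f g h i <file>"
// The nine values a..i are a 3x3 rotation/scale matrix, x y z the translation.
function parseSubpartLine(line) {
  var t = line.trim().split(/\s+/);
  if (t[0] !== '1') return null; // not a sub-part reference (0 = comment, 3/4 = triangles/quads)
  var n = t.slice(1, 14).map(Number);
  return {
    color: n[0],
    position: { x: n[1], y: n[2], z: n[3] },
    matrix: n.slice(4, 13), // row-major 3x3
    file: t[14]             // e.g. "stud.dat", resolved against the parts library
  };
}
```

Every peg on a brick resolves to the same `stud.dat` reference this way, which is exactly why the format is so compact.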


To show the LDraw models I use BRIGL, a JavaScript library by Nicola Lugato that loads LDraw files into three.js. I modified it a bit to work with the latest revision of three.js and its material handling.

It takes some CPU power to parse the files and create meshes. I tried implementing a WebWorker to parse the geometry, but without success; it would be nice to parse the models without freezing the browser. Using BufferGeometries would also be better for parsing and memory. The web is perhaps not the best medium for this format, since every triangle in every piece is parsed and rendered, even those that are never visible, like the pegs inside a brick. In this demo, I do a very simple optimisation on the landmark models: while parsing the specification, I exclude parts that are inside or beneath bricks, like the inner peg that connects to the brick beneath. This reduced the triangle count and parse time by 50%.

To avoid hundreds of requests at runtime when loading the models' subparts, I use a gulp script that looks up all active models (.ldr, .mpd, .dat), recursively transforms their parts to JSON and adds them to a bundle as cached modules, so they can be required instead of loaded. The size is significantly smaller than storing the final parsed geometry as JSON, since many parts are reused across models.
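The bundling idea can be sketched like this. This is not the actual gulp task, just an illustration where an in-memory `library` map stands in for the part files on disk:

```javascript
// Walk every "1 ..." sub-part reference recursively, starting from the
// top-level models, so each part file ends up in the bundle exactly once.
function collectParts(entryFiles, library) {
  var bundle = {};
  function visit(name) {
    if (bundle[name] !== undefined) return; // already bundled, reuse it
    var src = library[name];
    bundle[name] = src;
    src.split('\n').forEach(function (line) {
      var t = line.trim().split(/\s+/);
      // line type 1 is a sub-part reference; the last token is the file name
      if (t[0] === '1' && t.length >= 15) visit(t[14]);
    });
  }
  entryFiles.forEach(function (name) { visit(name); });
  return bundle;
}
```

Because shared primitives like `stud.dat` are visited once no matter how many models reference them, the bundle stays small compared to storing fully parsed geometry.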

For more information, visit the about page. The source code is available on GitHub.

Urban Jungle Street View – Behind the scenes


Urban Jungle Street View is an experiment I did to play with the depth data in Google Street View. I knew the information was there, but had not successfully managed to parse it. Then, some weeks ago, I found a library that did it for me. Around the same time I started to play "The Last of Us" from Naughty Dog. About 15 minutes in, stunned by the environments, I just had to try out this idea instead (damn you, inspiration).


Here is a screenshot with the final result:

I posted the link on Twitter and the response was overwhelming. About 200,000 visits the first week, and it reached the front page of several sites. Even television picked it up, and I got interviewed by the Japanese TV show "Sukkiri!". Something triggers the inner adventurer in the audience for sure 🙂

2D Map

Using the Google Maps API is easy and powerful, with so much data in your hands. For the style of the map, I used the Styled Maps Wizard to explore the different settings and export them as JSON. One tip is to turn off all features first with

[
  {
    "stylers": [
      { "visibility": "off" }
    ]
  }
]

and after that turn on the things you want to show and style.

Street view coverage layer

I use the StreetViewCoverageLayer while dragging the little guy, to show all roads available in Street View. It's not just roads, but also indoor places and special sites like the Taj Mahal. I like the effect in Google Maps when holding Pegman over a road: he snaps to the road with a direction arrow aligned to it. That functionality is currently only available on the Google Maps website, not in API version 3. So I made a custom version that works not nearly as well, but close enough. You can calculate the pixel position and the tile index, then load the coverage tile manually, write it to a canvas and read the pixel at the corresponding local position. If the pixel is not transparent, we are hovering over a Street View-enabled road or place. I also inspect the color to avoid changing the state when hovering over an indoor panorama, which is orange. You can still drop him there, but it adds a visual cue that it's not optimal. Sometimes you end up inside a building anyway, but I have not found a way to detect that without loading the data first.
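The pixel-position and tile-index calculation can be sketched like this. It's plain Web Mercator math; the coverage tile URL scheme itself is internal to Google Maps, so actually fetching the tile is left out:

```javascript
// Project a lat/lng to world pixels at the current zoom, then split the
// result into a tile index (which tile to fetch) and a local offset
// (which pixel to read from that tile's canvas).
var TILE_SIZE = 256;

function coverageTileLookup(lat, lng, zoom) {
  var siny = Math.min(Math.max(Math.sin((lat * Math.PI) / 180), -0.9999), 0.9999);
  var worldX = TILE_SIZE * (0.5 + lng / 360);
  var worldY = TILE_SIZE * (0.5 - Math.log((1 + siny) / (1 - siny)) / (4 * Math.PI));
  var scale = Math.pow(2, zoom);
  var px = Math.floor(worldX * scale);
  var py = Math.floor(worldY * scale);
  return {
    tileX: Math.floor(px / TILE_SIZE), // tile index
    tileY: Math.floor(py / TILE_SIZE),
    x: px % TILE_SIZE,                 // local pixel inside the tile
    y: py % TILE_SIZE
  };
}
```

Draw the fetched tile to a canvas, read the pixel at `(x, y)` with `getImageData`, and test alpha for coverage and hue for the orange indoor panoramas.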

Street View

I use a library called GSVPano, created by Jaume Sanchez, to show the panoramas with WebGL and three.js (the source has since been removed from the repo). To access the depth and normal data I used a different library; read more about that here.

The data for each panorama is not a depth map directly (from the API), but is instead stored in a Base64-encoded string, to save bandwidth and requests, and to make it quick to parse for its actual purpose: finding the normal (the direction of a surface) and the distance at a point in the panorama, to aid the navigation UI. If you hover over the ground with the mouse cursor in Google Street View you get the circle cursor, and if you point at a building you get a rectangle aligned with the surface of the building. I'm using that behaviour as well when plotting out the foliage, but also passing the data to the shader as textures.

In the demo, the normal map is used to make the ground transparent in the shader, so I can place a plane under the panorama sphere. The ground plane is then visible through the transparent 'hole' (maybe I could create geometry instead, somehow). The depth map is used to interpolate from transparent to opaque ground. The depth data has a limited range (the visible range), so it would otherwise look strange, with a hard edge.
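The transparent-to-opaque interpolation could look something like this in plain form, assuming a depth value normalized to 0..1 where 1 is the far limit of the depth data; `fadeStart` is an illustrative parameter, not the demo's actual value:

```javascript
// Ground alpha: fully transparent near the camera (the panorama imagery
// shows through) and opaque at the edge of the depth range, so the limited
// range of the depth data doesn't produce a hard seam.
function groundAlpha(depth, fadeStart) {
  var t = Math.min(Math.max((depth - fadeStart) / (1 - fadeStart), 0), 1);
  return t * t * (3 - 2 * t); // smoothstep for a soft transition
}
```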

I can also add a little fog to the environment. The planes in the depth map do not exactly match the real-world geometry, so it can't be too much fog, or the edges around the buildings become visible. In the end I removed it almost completely, since I liked a more sunny weather, but here is a screenshot with some more of it added:



Foliage consists of sprites (flat billboards) or simple geometry positioned in 3D space. To place them, I pick random positions in a predefined area of the texture. First, to get a sense of how a flat image is wrapped around a sphere, look at this image (from Wikipedia):

Here you see the depth data parsed and displayed as a normal texture. The dots are where I calculated things to grow (described more below).


When selecting a point in the texture you get the position on a sphere like this:

// u and v are texture coordinates in the range 0-1
var DEG_TO_RAD = Math.PI / 180;

var lat = u * 180 - 90;
var lon = v * 360 - 180;
var r = Math.cos(DEG_TO_RAD * lat);

var pos = new THREE.Vector3();
pos.x = r * Math.cos(DEG_TO_RAD * lon);
pos.y = Math.sin(DEG_TO_RAD * lat);
pos.z = r * Math.sin(DEG_TO_RAD * lon);

But if all objects are placed at a fixed distance, they will all appear at the same scale, so the distance from the camera is important: the object is pushed away from the camera along the same angle. That value is stored in the depth data, so:
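Putting the snippet above together with the stored distance, a plain-JavaScript version (the demo uses THREE.Vector3 and would scale with multiplyScalar) could look like this:

```javascript
var DEG_TO_RAD = Math.PI / 180;

// Unit-sphere position from UV coordinates (0-1), pushed out to the
// distance read from the depth data so objects get a correct scale.
function spherePoint(u, v, distance) {
  var lat = u * 180 - 90;
  var lon = v * 360 - 180;
  var r = Math.cos(DEG_TO_RAD * lat);
  return {
    x: r * Math.cos(DEG_TO_RAD * lon) * distance,
    y: Math.sin(DEG_TO_RAD * lat) * distance,
    z: r * Math.sin(DEG_TO_RAD * lon) * distance
  };
}
```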


If I want the object to point at the camera, like the trees, it's just a matter of setting:


But if the object should align with the surface it's supposed to stick to, we should use the normal value, the direction the wall is facing:

var v = pos.clone();
v.add( pointData.normal );
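In plain numbers, the two orientation cases look something like this (the demo itself uses THREE.Vector3 and mesh.lookAt; the names here are illustrative):

```javascript
// Look-at target one normal-length out from the surface point, so the
// object ends up facing the same way the wall does.
function alignTarget(pos, normal) {
  return { x: pos.x + normal.x, y: pos.y + normal.y, z: pos.z + normal.z };
}

// mesh.lookAt(camera.position);                    // billboard: trees face the camera
// mesh.lookAt(alignTarget(pos, pointData.normal)); // wall foliage faces along the normal
```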


Here is a demo with this setting. To get a true 3D perspective, the sprites could be replaced with real geometry, but leaves are quite forgiving of perspective issues, and sprites are much more performant. If the angle to the camera gets too big, it does look really strange though.

If you click or tap the scene you can add more foliage. It's the same process, reversed: get the collision point on the sphere, convert it to UV space (0-1) on the texture, test against the data and get the preferred plant for that position, if any.
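That reverse mapping, from a point on the unit sphere back to UV space, is just the inverse of the earlier formulas:

```javascript
var RAD_TO_DEG = 180 / Math.PI;

// Inverse of the UV-to-sphere mapping: recover lat/lon from a point on
// the unit sphere, then normalize back to 0-1 texture coordinates.
function pointToUV(pos) {
  var lat = Math.asin(pos.y) * RAD_TO_DEG;         // -90..90
  var lon = Math.atan2(pos.z, pos.x) * RAD_TO_DEG; // -180..180
  return { u: (lat + 90) / 180, v: (lon + 180) / 360 };
}
```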

Fine tuning

Every place you visit is randomly generated. I use some estimations to get a smarter distribution of plants. If the calculated direction points upwards, grass grows on the ground; if it's perpendicular to the ground, wall foliage sprites can be placed there. To hide the seam between the ground and the walls, I use a trick to find that edge. The ground always has the same normal, which you can see as the pink color above. I iterate over the pixels in the texture from the bottom up to find the first pixel with a different color than the ground; that is where the edge is. I put a grass sprite there and move to the next column of pixels to do the same.

To enhance the feeling of walls even further, another type of plant is introduced: vines. These have stalks made of 3D geometry (a modified ExtrudeGeometry extruded along a bezier path, or "spline") with sprite-based leaves. To find a good place for them to grow, I make another lookup in the normal map, starting from the previously detected edge. About 50 pixels up from those points, I test if the value is still the same as the wall I found first. If it's a different color, it's probably a low wall and not an optimal place for a vine to grow; otherwise I can place one there, but not too many, so I also test for a minimum distance between them. It's buggy sometimes, but close enough for this demo. A couple of trees are also inserted as sprites pointing toward the camera. They could be more procedural, for a more interesting result.
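The bottom-up edge scan can be sketched like this, with the normal map as a flat array of colors and the constant ground color as input (a simplified stand-in for reading canvas pixels):

```javascript
// For each pixel column, walk from the bottom of the normal map upwards and
// stop at the first color that differs from the ground normal: that's where
// the ground meets a wall, and where a grass sprite can hide the seam.
function findWallEdges(normals, width, height, groundColor) {
  var edges = [];
  for (var x = 0; x < width; x++) {
    var edgeY = -1;
    for (var y = height - 1; y >= 0; y--) {
      if (normals[y * width + x] !== groundColor) { edgeY = y; break; }
    }
    edges.push(edgeY); // -1 means the whole column is ground (no wall found)
  }
  return edges;
}
```

From each edge, the vine placement then samples the same map roughly 50 pixels further up to check that the wall continues.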


The links to nearby panoramas are included in the API result, so it's just a matter of adding the arrows and detecting interactions with THREE.Raycaster (example). The transition effect between locations is part of the post-effects pass; I just set the amount of blur in a blur pass…

Post effects

Again, Jaume Sanchez provides more valuable source code in the form of Wagner, a post-effects composer for three.js. To blend the panorama layer and the 3D layer together, a couple of effects are added to the end result: first a bloom filter, which adds glow to bright areas, then a dirt pass, which adds dust and raindrops to the composition. The final touch is a little camera movement, so the scene looks more alive and less like a still image. There is definitely room for improvement, especially with the color balance of the scene, and with more advanced geometry and procedurally generated plants.

Try it now

Now that that is out of the system, let's finally return to "The Last of Us"…



This was a demo I did for the advent calendar: create a number of snowballs and put them together into a snowman. The trail is created by drawing into a canvas, which is sent to the ground shader as a height map. In the vertex shader the vertices are displaced along the y-axis, and in the fragment shader it is used for bump mapping. The snow also becomes a bit darker when it's below a threshold height. The shape of the track is defined by the radial gradient used when drawing to the canvas. I like that you can run over the existing tracks and leave a bit of a trail in the current direction; that adds more depth than just a static layer painted on top.
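What the two shaders do can be illustrated per sample in plain JavaScript; the numbers here are illustrative, not the demo's actual values:

```javascript
// Vertex-shader idea: the trail canvas is sampled as a height in 0..1,
// and the snow surface is pressed down by it along the y-axis.
function displaceVertex(baseY, trailHeight, amplitude) {
  return baseY - trailHeight * amplitude;
}

// Fragment-shader idea: once the snow has been pressed below a threshold,
// shade it a bit darker so the tracks read as depth.
function snowShade(trailHeight, threshold) {
  return trailHeight > threshold ? 0.75 : 1.0;
}
```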

In an earlier version the ball geometry was affected by actually touching the snow, extruding vertices once per rotation if a vertex was below the snow. But when the focus shifted to creating a snowman, it made more sense to make perfectly round snowballs. I'm not so happy about the material on the snowballs: the UV-wrapping of the spherical texture creates the familiar artefacts at the poles, pinching the texture. There was not enough time to look into it back then. A procedural approach would have been nice to avoid it, or skipping UV-coordinates altogether and calculating them in a shader. Like these guys (@thespite, @claudioguglieri, @Sejnulla), who released a nice site around the same time with a much better snow shader. That shader can also be seen here.

Collision detection is just a simple distance test between the spheres and the bounding box.

Try it out here.

View the source at github.

Plus & Minus


This is a small game that my oldest daughter Hilda helped me out with. We made it so she could practice basic math, addition and subtraction, in a fun and simple way. When you answer correctly, the guard lets you through the gate and another track is added to the soundtrack, evolving the music as you go. The country-inspired music is an 8-bar loop with multiple tracks, made in Logic. Every track is added to the WebAudioAPI-powered sound controller and starts playing at the same time, to keep everything in sync. I found this neat helper to quickly put it together. Each track gets its own volume control. The drawings were captured with an iPhone camera, a really quick way of getting the graphics in. Animations and transitions are managed with CSS3/SASS and a little bit of JavaScript to trigger them and add some logic. The game also supports touch input and various screen sizes, so you can practice on the school bus.

Play now