Brick Street View



Had some more street view panorama fun a couple of weeks ago, this time combining it with virtual LEGO. If you haven't seen the site, go there first and I'll tell you more afterwards:

A short walkthrough of what’s going on:


To make the map look like a giant baseplate, I use a tiled transparent texture overlay. When you move the center of the map, the background-position of the layer is updated to match the movement of the map; when zooming, the background-size is scaled to match. On top of that I first add a gradient fill for some shininess, and then the three.js layer on top. This contains some trees and bushes, and also the landmarks at predefined locations (found in the shortcuts popup, marked with a star symbol in the top menu).

Every object placed on the map has a corresponding google.maps.Marker. When needed (on map-center change or zooming) the marker position is transformed into 3D space to update the meshes. This is limited to desktop users, because a bug on touch-enabled devices prevents the marker positions from updating while dragging (or rather, the projection on the map that is used internally to calculate the container position in pixels). The result is that the meshes jump to the correct position only on the touchend event. To make the objects look even more attached to the map, I added soft shadows.
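The marker-to-3D transform boils down to standard Web Mercator math: project the marker and the map center to pixel space at the current zoom, and use the offset as the mesh position. A minimal sketch (function names are mine, not the demo's; the real code goes through the Maps projection object instead):

```javascript
// Web Mercator projection, as used by Google Maps tiles (256px base tile).
const TILE_SIZE = 256;

// lat/lng -> absolute pixel coordinates at a given zoom level.
function project(lat, lng, zoom) {
  const scale = TILE_SIZE * Math.pow(2, zoom);
  const x = ((lng + 180) / 360) * scale;
  const siny = Math.sin((lat * Math.PI) / 180);
  const y = (0.5 - Math.log((1 + siny) / (1 - siny)) / (4 * Math.PI)) * scale;
  return { x, y };
}

// Pixel offset from the map center becomes the mesh position in the scene.
// y stays 0 because every object sits flat on the baseplate.
function markerToScene(center, marker, zoom) {
  const c = project(center.lat, center.lng, zoom);
  const m = project(marker.lat, marker.lng, zoom);
  return { x: m.x - c.x, z: m.y - c.y };
}
```

Running this on every map-center change keeps the three.js meshes glued to their markers; the touch bug mentioned above is exactly this transform not being fed fresh values while dragging.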

To place the trees in good spots, I use the Google Places API, searching for parks within a radius of 1500 m. To display the name of the current map center, I query the Geocoder in the Google Maps library.
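In Places API terms that search is a "nearby search". A small sketch of the request, with the parameters from the post (the builder function is my name; the callback wiring is the standard Places JS API shape):

```javascript
// Request object for a Places nearby search: parks within 1500 m of a point.
// parkSearchRequest is a hypothetical helper, not from the demo's source.
function parkSearchRequest(center) {
  return {
    location: center, // a google.maps.LatLng or a {lat, lng} literal
    radius: 1500,     // metres, matching the post
    type: 'park'
  };
}

// In the browser it would be issued roughly like this:
// new google.maps.places.PlacesService(map)
//   .nearbySearch(parkSearchRequest(map.getCenter()), (results, status) => {
//     if (status === google.maps.places.PlacesServiceStatus.OK) placeTrees(results);
//   });
```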

Street View

I have to render the scene in two steps: first the background, then the rest of the scene in a second pass. I do this to make sure the panorama sphere is always sorted between the background and the LEGO models. The additional cost is low, since both the background and the panorama are very low-poly.
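In three.js terms the trick is to disable auto-clearing and clear only the depth buffer between the passes. A sketch of the loop structure (my reconstruction, not the demo's actual code):

```javascript
// Two-pass rendering: draw the background scene, forget its depth, then draw
// the main scene (panorama sphere + LEGO models) on top of it.
function renderFrame(renderer, backgroundScene, mainScene, camera) {
  renderer.autoClear = false;          // we manage clearing ourselves
  renderer.clear();                    // clear color + depth once per frame
  renderer.render(backgroundScene, camera); // pass 1: background
  renderer.clearDepth();               // drop background depth values
  renderer.render(mainScene, camera);  // pass 2: always sorted in front
}
```

With a `THREE.WebGLRenderer` those are all real calls (`autoClear`, `clear`, `clearDepth`, `render`); clearing depth between passes is what guarantees the sphere never z-fights with the background.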

Constructing the LEGO panorama

This is the core idea of the whole experiment. To be honest, the results range from 'nice idea, but…' to 'WTF'. It very much depends on the quality of the depth data and also the color of the sky. Here is the workflow for creating a panorama:

– Load the low-res panorama with a low zoom value. I use GSVPano for this, which supports high-resolution tiling, but since my goal is to pixelate the image that detail isn't needed. The result is returned in a canvas.

– Flood-fill the canvas along the top edge of the texture, where it's hopefully a clear blue sky (or smog, for that matter). I sample some more points vertically in the direction of the streets to wipe out more sky there. I have fiddled with this back and forth to find good settings, but there is still room for improvement, like color detection, or comparing against the depth/normal data. The erased pixels are replaced with a single color, for easy detection in the next step.

– Create a high-res canvas and draw LEGO bricks into it based on the color values of the smaller canvas. Pixels with the color defined by the flood fill are flagged as sky, and those bricks are skipped, along with all bricks above a detected sky pixel. That way, clouds above a detected sky pixel also get erased, so bricks don't float in the air. For each brick, I also check against the panorama depth data whether it belongs to the ground plane, and skip those as well. If you browse the site and press "C" you'll see the normal map as an overlay; pressing "X" reveals the original image.

– Three different bricks are drawn to make it less repetitive: a flat one, one with a variable rotation, and one with a peg on the side. The last one is used more often when the color is closer to green, so trees and bushes get some more volume. The build is nowhere close to being correct in any sense, especially when it comes to perspective, sizes or wrapping textures on spheres, so you need some imagination (or to scroll around very fast) to accept the illusion.

– Then the fun starts, playing with real LEGO:
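The sky-removal steps above can be sketched in plain canvas-pixel terms. This is a simplified stand-in, assuming the panorama is a flat RGBA array as returned by `getImageData()`; the demo's real heuristics (street-direction sampling, tuned tolerances) are more involved, and all names here are mine:

```javascript
// Sentinel color written over erased pixels. Magenta is deliberately far from
// any plausible sky color, so filled pixels are never revisited by the fill.
const SKY = [255, 0, 255, 255];

function colorAt(data, w, x, y) {
  const i = (y * w + x) * 4;
  return [data[i], data[i + 1], data[i + 2], data[i + 3]];
}

function isClose(a, b, tol) {
  return Math.abs(a[0] - b[0]) <= tol &&
         Math.abs(a[1] - b[1]) <= tol &&
         Math.abs(a[2] - b[2]) <= tol;
}

// Stack-based 4-neighbour flood fill from a seed pixel (e.g. on the top edge),
// replacing everything close to the seed color with the sentinel.
function floodFillSky(data, w, h, sx, sy, tol = 30) {
  const seed = colorAt(data, w, sx, sy);
  const stack = [[sx, sy]];
  while (stack.length) {
    const [x, y] = stack.pop();
    if (x < 0 || y < 0 || x >= w || y >= h) continue;
    if (!isClose(colorAt(data, w, x, y), seed, tol)) continue;
    const i = (y * w + x) * 4;
    for (let k = 0; k < 4; k++) data[i + k] = SKY[k];
    stack.push([x + 1, y], [x - 1, y], [x, y + 1], [x, y - 1]);
  }
}

// For each column, find the lowest sky pixel. Every brick at or above that row
// is skipped, so clouds above detected sky don't leave bricks floating in the air.
function skyColumns(data, w, h) {
  const lowest = new Array(w).fill(-1);
  for (let x = 0; x < w; x++)
    for (let y = h - 1; y >= 0; y--)
      if (isClose(colorAt(data, w, x, y), SKY, 0)) { lowest[x] = y; break; }
  return lowest;
}
```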


LEGO models such as cars, flowers and trees are added to the scene. Two kinds of street baseplates are loaded: one for a crossroad in the center and one for the roads. These are only visible if links are provided in the panorama metadata. Sometimes in crossings you can see a road in the imagery, but it isn't linked until the next panorama in front of you, so the baseplates don't always reflect the visible roads correctly.

For the 3D models I use a format called LDraw, an open standard for LEGO CAD programs that lets users create virtual LEGO models and scenes. All parts are stored in a folder, and each file can reference other parts in the library. For example, every single peg on every brick is just a reference to the same file, called stud.dat. With software like Bricksmith and LEGO Digital Designer you can create models and export to this format. There are also loads of models available on the internet, created by the LEGO community. Since it's just a text file containing references, colors, positions and rotations, it's straightforward to parse into geometry.
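The referencing mechanism is LDraw's "line type 1": per the LDraw spec, such a line reads `1 <colour> x y z a b c d e f g h i <file>`, i.e. a color code, a translation, a row-major 3x3 transform, and the referenced file. A minimal parser sketch (function name is mine):

```javascript
// Parse an LDraw type-1 line: a sub-file reference with color, position,
// a 3x3 rotation/scale matrix, and the referenced part file (e.g. stud.dat).
function parseSubfileRef(line) {
  const t = line.trim().split(/\s+/);
  if (t[0] !== '1') return null; // not a sub-file reference line
  const n = t.slice(2, 14).map(Number); // x y z + nine matrix entries
  return {
    colour: parseInt(t[1], 10),
    position: { x: n[0], y: n[1], z: n[2] },
    matrix: [n[3], n[4], n[5], n[6], n[7], n[8], n[9], n[10], n[11]], // row-major
    file: t.slice(14).join(' ') // filenames may contain spaces
  };
}
```

This is essentially what a loader like BRIGL has to do for every line, then recurse into the referenced file and compose the transforms.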


To show the LDraw models I use BRIGL, a JavaScript library made by Nicola Lugato that loads LDraw files into three.js. I modified it a bit to work with the latest revision of three.js and its material handling.

It takes some CPU power to parse and create the meshes. I tried implementing a WebWorker to parse the geometry, but without success; it would be nice to parse the models without freezing the browser. Using BufferGeometries would also be better for parsing and memory. The web is perhaps not the best medium for this format, since every triangle in every piece is parsed and rendered, even those that are never visible, like the inside pegs of a brick. In this demo, I do a very simple optimisation on the landmark models: while parsing the specification I exclude parts that are inside or beneath other bricks, like the inner peg that connects to the brick beneath. This reduced the triangle count and parse time by 50%.

To avoid hundreds of requests at runtime when loading a model's subparts, I use a gulp script that looks up all active models (.ldr, .mpd, .dat), recursively transforms their parts to JSON and adds them to a bundle as cached modules, so they can be required instead of loaded. Compared to storing the final parsed geometry as JSON, the size is significantly smaller, since many parts are reused across models.
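The build-time idea can be sketched as a recursive walk over type-1 references: start from the models actually used, follow every sub-file reference, and emit each part once no matter how many models reuse it. In this sketch `library` stands in for the LDraw parts folder on disk, and all names are mine:

```javascript
// Collect every part reachable from the entry models into one bundle object,
// following LDraw type-1 references (the filename is the 15th token onward).
function bundleParts(library, entryFiles) {
  const bundle = {};
  const visit = (name) => {
    if (bundle[name] !== undefined) return; // already bundled: reused, not duplicated
    const src = library[name];
    bundle[name] = src;
    for (const line of src.split('\n')) {
      const t = line.trim().split(/\s+/);
      if (t[0] === '1') visit(t.slice(14).join(' ')); // recurse into sub-file refs
    }
  };
  entryFiles.forEach(visit);
  return bundle;
}
```

Serializing that bundle is what keeps the payload small: `stud.dat` appears once in the output even though nearly every brick references it.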

For more information, visit the about page. The source code is available on GitHub.