Streetlights

[image: streetlights]

I played around some with the physics library cannon.js together with three.js and ended up with this Christmas card.

To show the power of iteration, here is a screenshot from the first version that I thought was almost finished, since my main goal was to play around with physics, and this is how physics demos usually look.

[image: prestagelights]

Each small bulb in the garland is a joint constraint. The end joint has zero mass so it sticks to the facade without falling down. You can click/tap to change the position of those attachment points. To create the reflections, the geometry is cloned and inverted, then added to a second scene which renders to a separate render target with blur applied to it. The rest of the scene is then rendered on top of the first pass without clearing the colours, just the depth buffer. It was a bit tricky to render in the correct order and clear the right buffers, and I’m sure there is a more optimised way of doing this without the need for two scenes.
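For reference, a minimal sketch of that two-pass idea in three.js; names like mirroredScene and blurPass are illustrative, not the actual demo code:

// Render the mirrored copy to a texture, blur it, then draw the real
// scene on top, clearing only the depth buffer between the passes.
renderer.autoClear = false;

function render() {
  // pass 1: the cloned, inverted geometry lives in its own scene
  renderer.setRenderTarget(reflectionTarget);
  renderer.clear();
  renderer.render(mirroredScene, camera);
  blurPass.render(renderer, reflectionTarget); // hypothetical blur helper

  // pass 2: draw the blurred reflection first, then the main scene on top,
  // keeping the colours but resetting the depth buffer
  renderer.setRenderTarget(null);
  renderer.clear();
  renderer.render(reflectionQuadScene, orthoCamera);
  renderer.clearDepth();
  renderer.render(scene, camera);
}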

A transparent layer is also added at ground level in the “above” scene to add some bluish diffuse and catch the light coming from the stars. Each star has a point light as a child. A tiled diffuse map and a normal map are assigned to the ground floor to add detail and an interesting surface. All geometry besides the star and the sign is created procedurally with just primitives (box, plane and sphere). The facade got a specular map so the light could be more reflective in the windows. A bump map adds some depth as well.
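Something along these lines, assuming standard three.js Phong materials; the texture variables are placeholders:

var groundMaterial = new THREE.MeshPhongMaterial({
  map: groundDiffuse,      // tiled diffuse for surface detail
  normalMap: groundNormal  // small-scale lighting variation
});
groundDiffuse.wrapS = groundDiffuse.wrapT = THREE.RepeatWrapping;
groundDiffuse.repeat.set(8, 8);

var facadeMaterial = new THREE.MeshPhongMaterial({
  map: facadeDiffuse,
  specularMap: facadeSpecular, // brighter highlights in the windows
  bumpMap: facadeBump          // fakes depth in the brickwork
});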

Then finally, I couldn’t get the Christmas feeling without the mighty bloom filter pass :)

Try messing around with the attachment points of the garland to see the physics, lighting and reflections in action. Notice how the wind affects the lights and the snow; it’s actually an impulse force applied to the scene each frame.
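In cannon.js terms, the per-frame wind could look roughly like this; the constants and the gusting are made up for illustration:

function applyWind(bodies, time) {
  // gusting strength, varying slowly over time
  var strength = 0.05 + 0.03 * Math.sin(time * 0.5);
  bodies.forEach(function (body) {
    var impulse = new CANNON.Vec3(strength * Math.random(), 0, 0);
    body.applyImpulse(impulse, body.position); // nudge at the body's centre
  });
}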

Psst! Even though it’s Christmas, there is an easter egg somewhere. Can you find it?

Open Card

Interested in the source? Here is the Github repo

Urban Jungle Street View – Behind the scenes

Introduction

Urban Jungle Street View is an experiment I did to play with the depth data in Google Street View. I knew the information was there but had not successfully managed to parse it. Then, some weeks ago, I found a library that provided this for me. Around the same time I started playing “The Last of Us” by Naughty Dog. About 15 minutes in, stunned by the environments, I just had to try out this idea instead (damn you, inspiration).

[image: last-of-us]

Here is a screenshot of the final result:

[image: endresult]

I posted the link on Twitter and the response was overwhelming: about 200,000 visits the first week, and it reached the front pages of theverge.com, dailymail.co.uk, wired.com and psfk.com, among others. Even television picked it up, and I got interviewed on the Japanese TV show “Sukkiri!”. Something triggers the inner adventurer in the audience for sure :)

2D Map

[image: map2d]

Using the Google Maps API is so easy and powerful. So much data in your hands. For the style of the map, I used the Styled Maps Wizard to see what different kinds of settings there are and exported the settings as JSON. One tip is to turn off all features first with

{"stylers": [
    { "visibility": "off" },
]},

and after that turn on the things you want to show and style.
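Once exported, the style JSON is applied to the map along these lines; the map type id and name are arbitrary:

var styledType = new google.maps.StyledMapType(styleJson, { name: 'Custom' });
map.mapTypes.set('custom_style', styledType);
map.setMapTypeId('custom_style');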

Street view coverage layer

I use the StreetViewCoverageLayer while dragging the little guy to show all roads available in Street View. It’s not just roads but also indoor places and special sites like the Taj Mahal. I like the effect in Google Maps where, when you hold Pegman over a road, he snaps to it with a direction arrow aligned to the road. But that functionality is currently only available on the Google Maps website, not in version 3 of the API. So I made a custom version that works nowhere near as well, but close enough. You can calculate the pixel position and the tile index, then load the coverage tile manually, write it to a canvas and read the pixel at the corresponding local position. If the pixel is not transparent, we are hovering over a Street View enabled road or place. I also inspect the colour to avoid changing the state when hovering over an indoor panorama, which is orange. You can still drop him there, but it adds a visual cue that it’s not optimal to drop him there. Sometimes you end up inside a building anyway, but I have not found a way to detect that without loading the data first.
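A rough sketch of that lookup, assuming a standard Mercator tile layout; note that the coverage tile URL is undocumented, so treat that part as an assumption that may break at any time:

function checkCoverage(map, latLng, zoom, callback) {
  var world = map.getProjection().fromLatLngToPoint(latLng); // 0..256 world space
  var scale = Math.pow(2, zoom);
  var px = Math.floor(world.x * scale);
  var py = Math.floor(world.y * scale);

  var img = new Image();
  img.crossOrigin = 'anonymous';
  // assumed, undocumented tile endpoint for the coverage layer
  img.src = 'https://mts1.googleapis.com/mapslt?lyrs=svv&x=' +
            Math.floor(px / 256) + '&y=' + Math.floor(py / 256) +
            '&z=' + zoom + '&w=256&h=256';
  img.onload = function () {
    var canvas = document.createElement('canvas');
    canvas.width = canvas.height = 256;
    var ctx = canvas.getContext('2d');
    ctx.drawImage(img, 0, 0);
    var p = ctx.getImageData(px % 256, py % 256, 1, 1).data;
    var covered = p[3] > 0;                           // any coverage at all?
    var indoor = covered && p[0] > 200 && p[2] < 100; // orange-ish: indoor
    callback(covered, indoor);
  };
}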

Street View

I use a library called GSVPano, created by Jaume Sanchez, to show the panoramas in WebGL and three.js (the source has since been removed from the repo, though). To access the depth and normal data I used a different library. Read more about that here.

[image: depthmap]

The data for each panorama is not a depth map directly (from the API); instead it is stored in a Base64-encoded string, to save bandwidth and requests, and in a form that is quick to parse for its actual purpose: finding the normal (the direction of a surface) and the distance at a point in the panorama to aid the navigation UI. If you hover over the ground with the mouse cursor in Google Street View you get the circle cursor, and if you point at a building you get a rectangle aligned with the surface of the building. I use that behaviour as well when plotting out the foliage, but I also pass the data to the shader as textures.
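The first decoding step looks something like this; in practice the bytes are additionally zlib-compressed, so a library such as pako is needed before the header and plane data can be read:

function decodeDepthString(raw) {
  // the string uses URL-safe base64, so restore the standard alphabet first
  var b64 = raw.replace(/-/g, '+').replace(/_/g, '/');
  while (b64.length % 4 !== 0) b64 += '=';
  var binary = atob(b64);
  var bytes = new Uint8Array(binary.length);
  for (var i = 0; i < binary.length; i++) bytes[i] = binary.charCodeAt(i);
  return bytes; // run through e.g. pako.inflate(bytes) before parsing
}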

In the demo, the normal map takes care of making the ground transparent in the shader, so I can place a plane under the panorama sphere. The ground plane is then visible through the transparent ‘hole’ (maybe one could create geometry there instead somehow). The depth map is used to interpolate from transparent to opaque ground. The depth data has a limited range (the visible range), so without the fade it would look strange, with a hard edge.
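Conceptually the fragment shader does something like this; the exact encoding of the normal and depth textures is assumed here, thresholds included:

var fragmentShader = [
  'uniform sampler2D tPano;   // the panorama itself',
  'uniform sampler2D tNormal; // parsed normal data',
  'uniform sampler2D tDepth;  // parsed depth data',
  'varying vec2 vUv;',
  'void main() {',
  '  vec3 n = texture2D(tNormal, vUv).rgb;',
  '  float depth = texture2D(tDepth, vUv).r;',
  '  float ground = step(0.9, n.g); // treat "mostly up" normals as ground',
  '  // ground: transparent up close, fading to opaque toward the depth limit',
  '  float alpha = mix(1.0, smoothstep(0.4, 1.0, depth), ground);',
  '  gl_FragColor = vec4(texture2D(tPano, vUv).rgb, alpha);',
  '}'
].join('\n');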

I can also add a little fog to the environment. The planes in the map do not exactly match the real-world geometry, so there can’t be too much fog or the edges become visible around the buildings. In the end I removed it almost completely since I preferred a sunnier weather, but here is a screenshot with some more of it added:

[image: extra-fog]

Foliage

The foliage consists of sprites (flat billboards) or simple geometry positioned in 3D space. To place them, I pick random positions in a predefined area of the texture. First, to get a sense of how a flat image is wrapped around a sphere, look at this image:

[image: uv-wrap] (Image from Wikipedia)

Here you see the depth data parsed and displayed as a normal texture. The dots are where I calculated things to grow (I describe that more below).

[image: normalmap]

When selecting a point in the texture you get the position on a sphere like this:

// u and v are texture coordinates in the range 0-1
var lat = u * 180 - 90;
var lon = v * 360 - 180;
var r = Math.cos(DEG_TO_RAD * lat);

// point on the unit sphere
var pos = new THREE.Vector3();
pos.x = r * Math.cos(DEG_TO_RAD * lon);
pos.y = Math.sin(DEG_TO_RAD * lat);
pos.z = r * Math.sin(DEG_TO_RAD * lon);

But if all objects are placed at a fixed distance they will all get the same scale, so the distance from the camera is important; think of it as pushing the object away from the camera along the same angle. That value is stored in the depth data, so:

pos.multiplyScalar(pointData.distance);

If I want an object to face the camera, like the trees, it’s just a matter of setting:

myPlantObject.lookAt(camera.position);

But if the object should align with the surface it’s supposed to stick to, we use the normal value, the direction the wall is facing:

var v = pos.clone();
v.add( pointData.normal ); // look target offset along the surface normal
myPlantObject.lookAt(v);

[image: normalmap3d]

Here is a demo with this setting. To get true 3D perspective the sprites could be replaced with real geometry, but leaves are quite forgiving of perspective issues and sprites are much more performant. If the angle to the camera gets too big it looks really strange though.

If you click or tap the scene you can add more foliage to it. It’s just the same process reversed: get the collision point on the sphere, convert it to UV space (0-1) on the texture, test against the data and get the preferred plant for that position, if any.
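In three.js terms the reversed lookup is roughly this; helper names like plantAt are hypothetical:

var raycaster = new THREE.Raycaster();

function onTap(event, camera, panoramaSphere) {
  var ndc = new THREE.Vector2(
    (event.clientX / window.innerWidth) * 2 - 1,
    -(event.clientY / window.innerHeight) * 2 + 1
  );
  raycaster.setFromCamera(ndc, camera);
  var hit = raycaster.intersectObject(panoramaSphere)[0];
  if (!hit || !hit.uv) return;
  // map UV (0-1) to a pixel in the depth/normal data
  var x = Math.floor(hit.uv.x * depthMapWidth);
  var y = Math.floor(hit.uv.y * depthMapHeight);
  plantAt(x, y); // hypothetical: picks a plant type for that texel, if any
}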

Fine tuning

Every time you visit a place it’s randomly generated. I use some heuristics to get a smarter distribution of plants. If the calculated direction points upwards, grass grows on the ground. If it’s perpendicular to the ground, we can put some wall foliage sprites there. To hide the seam between the ground and the walls, I use a trick to find that edge. The ground always has the same normal, which you can see as the pink colour above. I iterate over the pixels in the texture from the bottom up to find the first pixel in a different colour than the ground; that is where the edge is. I put a grass sprite there and move to the next column of pixels to do the same. To enhance the feeling of walls even further, let’s introduce another type of plant: vines. These have stalks made of 3D geometry (a modified ExtrudeGeometry extruded along a bezier path, or “spline”) with sprite-based leaves. To find a good place for them to grow, I make another lookup in the normal map from where I detected the edge previously. About 50 pixels up from each of those points, I test whether the value is still the same as the wall I found first. If it’s a different colour it’s probably a small wall and not an optimal place for a vine to grow; otherwise I can place one there, but not too many, so I also test for a minimum distance between them. It’s buggy sometimes, but it’s close enough for this demo. A couple of trees are also inserted as sprites pointing toward the camera. They could be more procedural for a more interesting result.
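The core of the edge scan is a per-column walk through the pixel data, roughly like this; the colour tolerance is an assumption:

function findGroundEdge(data, width, height, column, groundColor) {
  // walk the column bottom-up; the first non-ground pixel is the wall base
  for (var y = height - 1; y >= 0; y--) {
    var i = (y * width + column) * 4;
    var differs = Math.abs(data[i]     - groundColor.r) > 10 ||
                  Math.abs(data[i + 1] - groundColor.g) > 10 ||
                  Math.abs(data[i + 2] - groundColor.b) > 10;
    if (differs) return y;
  }
  return -1; // column is all ground: no edge here
}

// data comes from ctx.getImageData(0, 0, width, height).data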

Navigation

The links to nearby panoramas are included in the API result, so it’s just a matter of adding the arrows and detecting interactions with THREE.Raycaster (example). The transition effect between locations is part of the post-effects pass; I just turn up the amount of blur in a blur pass…

Post effects

Again, Jaume Sanchez has more valuable source code in the form of Wagner, a post-effects composer for three.js. To blend the panorama layer and the 3D layer together, a couple of effects are added to the end result: first a bloom filter, which adds glow to bright areas, then a dirt pass, which adds dust and raindrops to the composition. The final touch is a little camera movement so the scene looks more alive and less like a still image. There is definitely room for improvement, especially in the colour balance of the scene, and with more advanced geometry and procedurally generated plants.

Try it now

Now that that’s out of my system, let’s finally return to “The Last of Us”…

Snowroller

[image: snowroller]

This was a demo I did for the advent calendar christmasexperiments.com. You create a number of snowballs and put them together into a snowman. The trail is created by drawing into a canvas which is sent to the ground shader as a height map. In the vertex shader the vertices are displaced along the y-axis with it, and in the fragment shader it is used for bump mapping. The snow also becomes a bit darker where it’s below a threshold height. The shape of the track is defined by the radial gradient used when drawing into the canvas. I like that you can run over the existing tracks and leave a bit of a trail in the current direction. That adds more depth than just a static layer painted on top.
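A stripped-down version of that shader pair could look like this; uniform names and constants are made up:

var vertexShader = [
  'uniform sampler2D tHeight;',
  'varying float vHeight;',
  'void main() {',
  '  vHeight = texture2D(tHeight, uv).r;',
  '  vec3 p = position;',
  '  p.y -= vHeight * 2.0; // press the snow down where the trail is drawn',
  '  gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1.0);',
  '}'
].join('\n');

var fragmentShader = [
  'varying float vHeight;',
  'void main() {',
  '  vec3 snow = vec3(1.0);',
  '  snow *= mix(1.0, 0.85, step(0.2, vHeight)); // darken inside the tracks',
  '  gl_FragColor = vec4(snow, 1.0);',
  '}'
].join('\n');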

In an earlier version the ball geometry was affected by actually touching the snow, extruding vertices once per rotation if a vertex was below the snow surface. But when the focus shifted to creating a snowman, it made more sense to make perfectly round snowballs. I’m not so happy with the material on the snowballs. The UV-wrapping and the spherical texture create the familiar artefacts at the poles, pinching the texture. But there was not enough time to look into this back then. A procedural approach would have been nice to avoid it. Or skipping UV-coordinates altogether and calculating them in a shader. Like these guys (@thespite, @claudioguglieri, @Sejnulla) who made a nice site around the same time with a much better snow shader: http://www.snwbx.com/. That shader can also be seen here.

Collision detection is just a simple distance test between the spheres and against the bounding box.
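For the sphere-to-sphere part it’s roughly this, assuming each ball keeps its radius as a property:

function spheresCollide(a, b) {
  // colliding when the centre distance is less than the radii combined
  return a.position.distanceTo(b.position) < a.radius + b.radius;
}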

Try it out here.

View the source at github.

Plus & Minus

[image: plusminus]

This is a small game that my oldest daughter Hilda helped me out with. We made it so she could practice basic math, addition and subtraction, in a fun and simple way. When you answer correctly the guard lets you through the gate and another track is added to the soundtrack, evolving the music as you go. The country-inspired music is an 8-bar loop with multiple tracks made in Logic. Every track is added to the WebAudioAPI-powered sound controller and starts playing at the same time to keep them in sync. I found this neat helper to quickly put it together. Each track gets its own volume control. The drawings were captured with an iPhone camera, a really quick way of getting the graphics in. Animations and transitions are managed with CSS3/SASS and a little bit of Javascript to trigger them and add some logic. The game also supports touch input and various screen sizes, so you can practice on the school bus.
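The sync trick boils down to starting every buffer source at the same AudioContext time, each behind its own gain node; a sketch, not the actual helper:

var audioCtx = new (window.AudioContext || window.webkitAudioContext)();

function playStems(buffers) {
  var startAt = audioCtx.currentTime + 0.1; // same start time for all tracks
  return buffers.map(function (buffer) {
    var source = audioCtx.createBufferSource();
    source.buffer = buffer;
    source.loop = true;
    var gain = audioCtx.createGain();
    gain.gain.value = 0; // tracks are faded in as you answer correctly
    source.connect(gain);
    gain.connect(audioCtx.destination);
    source.start(startAt);
    return gain; // keep the gain nodes around as volume controls
  });
}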

Play now

Clove orange

One last post before the holidays. Some days ago I posted a link to a WebGL christmas craft demo made with three.js. For those wondering what it was about: it’s called a “Christmas orange pomander”, a decoration that we make in Sweden and some other European countries around Christmas. You stick cloves, that nice fragrant spice, into an orange or clementine, either randomly or, like me, in structured patterns. Put some silk ribbons around it and hang it somewhere for a nice christmas atmosphere. Try making your own here. The fun part is attaching the cloves with javascript: just open the “into code?” button and play around. As an example, the image above loads a twitter avatar and plots it onto the orange. You can also save your orange as an image or share the 3D version with a unique URL.

I want to take the opportunity to thank all readers and followers for a great year! I’m amazed that over 62,000 users found this little blog. I have really enjoyed experimenting and doing demos to share with you. Many blogs in the community have disappeared in favor of instant twitter publishing. I will continue to use Twitter as an important channel, but I will try to keep this space updated as well, since here I can go a little deeper and do some write-ups about the techniques behind the demos.

Next year has even more in store for me. I will join the talented people over at North Kingdom, Stockholm. Exciting future ahead!

Enjoy your holiday and see you next year!

Plasma ball

Run demo

More noisy balls. Time to get nostalgic with a classic from the ’80s: the plasma lamp. I have never had the opportunity to own one myself; maybe that’s why I’m so fascinated by them. There are even small USB-powered ones nowadays. I’ve had this idea for a demo for a long time and finally made a rather simple version of it.

The electric lightning is made up of tubes that are displaced and animated in the vertex shader, just as in my last experiment. I reuse the procedurally generated noise texture that is applied to the emitting sphere. I displace the local x- and z-positions with an offset multiplied by the value I get from the noise texture at the corresponding UV position (along the y-axis). To make the lightning progress more naturally I also apply an easing function to this factor. Notice that there is less displacement near the centre and more at the edges.
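As a sketch, the displacement in a tube’s vertex shader might read like this, with uv.y running from the centre to the glass; the noise lookup and constants are illustrative:

var vertexShader = [
  'uniform sampler2D tNoise;',
  'uniform float time;',
  'void main() {',
  '  vec3 p = position;',
  '  float t = uv.y;     // 0 at the emitting sphere, 1 at the glass',
  '  float ease = t * t; // quadratic ease-in: calm centre, lively edge',
  '  float n = texture2D(tNoise, vec2(uv.x, t + time)).r - 0.5;',
  '  p.x += n * ease * 2.0;',
  '  p.z += n * ease * 2.0;',
  '  gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1.0);',
  '}'
].join('\n');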

I have this idea of controlling the lightning over a multi-touch websocket connection between a mobile browser and the desktop browser, just to learn node.js and sockets. Let’s wait and see how that evolves. Running this on a touch screen directly would of course have been even nicer. If someone can do that, I will add multiple touch points, just let me know.

Tree.s and Three.js

This is a remake of an old experiment I did in Papervision3D a couple of years ago. This version is written in javascript and three.js (by @mrdoob and his friends). It is really nice to play with the powers of WebGL. I have only used the built-in shaders in three.js so far, but it seems to be a quite straightforward process. Molehill and Away3D still get a lot of attention from me, but it is a special feeling to work directly in the browser. I can really recommend trying it out.

Launch demo