VisualEYEzer

 

I almost forgot to post this little demo. It’s a remake of an old experiment with procedural eyes I made in Flash a long time ago. (Actually, the old one looks kind of cooler than this; I think it’s the normal mapping that makes the difference. I haven’t gotten around to doing that in a procedural shader yet, and would probably have to use off-screen renderers and such.)

This time it’s done with three.js and shaders. Basically, I started off with the sphere geometry. The vertices near the pole on the z-axis got an inverted z-position relative to the original radius, forming the iris as a concave parabolic disc on the sphere.

The shader then does its thing. By reading the uv-coordinates and the surface positions I apply different patterns to different sections. I separate those sections in the shader with a combination of step (or smoothstep) and mix. Step is almost like an if-statement, returning a factor that is 1 inside the range and 0 outside. Smoothstep blurs the edge for a smoother transition. You can then use that factor in mix to choose between two colors, for example the pupil and the iris. I mentioned this method in my last post as well.
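
Something along these lines, as a minimal sketch of the masking idea (not the demo’s actual shader; the radii and colors are placeholders, and dist here is just the distance from the middle of the uv-square rather than how the demo measures it):

varying vec2 vUv;

void main() {
	float dist = distance( vUv, vec2( 0.5 ) );

	vec3 pupilColor  = vec3( 0.0 );
	vec3 irisColor   = vec3( 0.1, 0.4, 0.6 );
	vec3 scleraColor = vec3( 0.95 );

	// 0.0 inside the inner radius, 1.0 outside, with a soft edge in between
	float irisMask   = smoothstep( 0.08, 0.10, dist );
	float scleraMask = smoothstep( 0.30, 0.32, dist );

	vec3 color = mix( pupilColor, irisColor, irisMask );
	color      = mix( color, scleraColor, scleraMask );

	gl_FragColor = vec4( color, 1.0 );
}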

Check out the fragment shader here

To give the eye its reflective look, I added a second sphere just outside the first one with an environment map. The reason for not using the same geometry was that I wanted the reflection on the correct surface around the iris. And by making the second sphere just slightly larger, it gives the surface some sense of thickness.

To spice it up a little, I added some real-time audio-analysis effects with a nifty little library called Audiokeys.js. With it you can specify a range of frequencies to listen to and get the summed level of that band. Perfect for controlling parameters and sending them to the shaders as uniforms.

Open demo

Plasma ball

Run demo

More noisy balls. Time to get nostalgic with a classic from the 80s: the plasma lamp. I have never had the opportunity to own one myself; maybe that’s why I’m so fascinated by them. There are even small USB-driven ones nowadays. I’ve had this idea for a demo for a long time and finally made a rather simple one.

The electric lightning is made up of tubes that are displaced and animated in the vertex shader, just as in my last experiment. I reuse the procedurally generated noise texture that is applied to the emitting sphere. I displace the local x- and z-positions with an offset multiplied by the value I read from the noise texture at the corresponding uv-position (along the y-axis). To make the lightning progress more naturally I also apply an easing function to this factor. Notice that there is less displacement near the center and more at the edges.
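
Roughly like this in the tube’s vertex shader, as a sketch only (uNoiseMap, uTime and uStrength are made-up uniform names, and uv.y is assumed to run from the center of the ball to the outer end of the tube):

uniform sampler2D uNoiseMap;
uniform float uTime;
uniform float uStrength;

varying vec2 vUv;

void main() {
	vUv = uv;

	// sample the noise along the tube; offsetting by uTime animates it
	float n = texture2D( uNoiseMap, vec2( uv.x, uv.y + uTime ) ).r - 0.5;

	// easing: little displacement near the center, more at the edges
	float ease = uv.y * uv.y;

	vec3 displaced = position;
	displaced.x += n * uStrength * ease;
	displaced.z += n * uStrength * ease;

	gl_Position = projectionMatrix * modelViewMatrix * vec4( displaced, 1.0 );
}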

I have this idea of controlling the lightning over a multi-touch WebSocket connection between a mobile browser and the desktop browser, just to learn node.js and sockets. Let’s wait and see how this evolves. Running this directly on a touch-screen would of course have been even nicer. If someone can do that, I will add multiple touch-points; just let me know.

Set a sphere on fire with three.js

Fire it up!

Next element of nature in line: smoke and fire! I did a version in Flash about a year ago, playing with 4D Perlin noise. That BitmapData was something like 128×128, using Haxe to work with fast memory, running at 20 fps. Now we can go fullscreen at 60+ fps. Sweet.

First, note that the code snippets below are taken out of context and that they use built-in variables and syntax from three.js. If you have written shaders before you will recognize variables like mvPosition and viewMatrix, and the uniforms and texture lookups.

The noise is generated in a fragment shader, applied to a plane inside a hidden scene, rendered to a texture and linked to the target material. This process is forked from this demo by @alteredq. Before wrapping the texture around the sphere we have to adjust the output of the noise to be suitable for spherical mapping. If we don’t, the texture will look pinched at the poles, as seen in many examples. This is one of the advantages of generating procedural noise: you can modify the output right from the start. By doing this, each fragment on the sphere gets the same level of detail in the texture instead of stretching an image across the surface. For spheres, the following technique is one way of finding where (in the imaginary volume of noise) you should fetch the noise value:

float PI = 3.14159265;
float TWOPI = 6.28318531;
float baseRadius = 1.0;

// map a uv-coordinate (u and v in 0..1) to a point on a sphere
vec3 sphere( float u, float v ) {
	u *= PI;      // polar angle
	v *= TWOPI;   // azimuthal angle
	vec3 pSphere;
	pSphere.x = baseRadius * cos(v) * sin(u);
	pSphere.y = baseRadius * sin(v) * sin(u);
	pSphere.z = baseRadius * cos(u);
	return pSphere;
}

 

Notice the 2D position that goes into the function and the vec3 return type. The uv-coordinates are translated to a 3D position on the sphere. Now we can just look up the noise value there and plot it at the originating uv-position. Changing baseRadius affects how the texture repeats.
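
In the noise pass, the lookup could look something like this (a sketch: snoise stands in for whatever 3D noise function the shader provides, and uTime is a made-up uniform used to slide through the volume over time):

varying vec2 vUv;
uniform float uTime;

void main() {
	// find the point on the sphere that this uv corresponds to
	vec3 p = sphere( vUv.x, vUv.y );

	// fetch the noise at that point and plot it at the originating uv
	float n = snoise( vec3( p.x, p.y, p.z + uTime ) );

	gl_FragColor = vec4( vec3( n * 0.5 + 0.5 ), 1.0 );
}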

Another approach is to do everything in the same shader pair, both the noise and the final output. We get the interpolated world position of each fragment from the vertex shader, so there would be no need for this mapping, right? But since this example uses displacement in the vertex shader, I need to create the texture in a separate pass and send it as a flat texture to the final shader.

So now we have a sphere with a noise texture applied to it. It looks nice, but the surface is flat.

I mentioned displacement; this is all you have to do:

#ifdef VERTEX_TEXTURES
	// sample the height map and push the vertex out along its normal
	vec3 texturePosition = texture2D( tHeightMap, vUv ).xyz;
	float dispFactor = uDispScale * texturePosition.x + uDispBias;
	vec4 dispPos = vec4( normal.xyz * dispFactor, 0.0 ) + mvPosition;
	gl_Position = projectionMatrix * dispPos;
#else
	// fallback when vertex texture fetches are not supported
	gl_Position = projectionMatrix * mvPosition;
#endif

 

uDispScale and uDispBias are uniform variables that we can control from JavaScript at runtime. We just add a value to the original position along the normal direction. The #ifdef condition lets us fall back for browsers or environments that don’t support texture lookups in the vertex shader. It’s not available in all implementations of WebGL, but depending on your hardware and drivers it should work in the latest Chrome and Firefox builds. Correct me if I’m wrong here. This feature is crucial for this demo, so I show an alert box for those unlucky users.

After adding displacement I also animate the vertices. You can, for example, modify the vertex positions based on time, the position itself or the uv-coordinates to make the effect more dynamic. One of the effects available in the settings panel is “twist”. I found it somewhere I can’t remember now, but here it is if you’re interested:

// rotate pos around the y-axis by angle t (radians)
vec4 DoTwist( vec4 pos, float t )
{
	float st = sin(t);
	float ct = cos(t);
	vec4 new_pos;

	new_pos.x = pos.x*ct - pos.z*st;
	new_pos.z = pos.x*st + pos.z*ct;

	new_pos.y = pos.y;
	new_pos.w = pos.w;

	return( new_pos );
}

//this goes into the main function
float angle_rad = uTwist * 3.14159 / 180.0;
float force = (position.y-uHeight*0.5)/-uHeight * angle_rad;

vec4 twistedPosition = DoTwist(mPosition, force);
vec4 twistedNormal = DoTwist(vec4(vNormal,1.0), force);

//change matrix
vec4 mvPosition = viewMatrix * twistedPosition;

 

The force variable has to be adapted to your needs. Basically you want to set a starting position and a range that is going to be affected.

Then we reach the final fragment shader. This one is fairly simple. I blend two colors, multiply the result with one of the channels in the height map (the noise texture), then mix that color with the smoke color. All colors get their values based on the screen coordinate, a quick fix for this solution. Lastly I apply fog to make the fire blend into the background, so it looks less like a sphere.
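
A loose sketch of that blending (the uniform names are made up, and I blend along the uv here rather than the screen coordinate the demo uses):

uniform sampler2D tHeightMap;
uniform vec3 uColor1;
uniform vec3 uColor2;
uniform vec3 uSmokeColor;
uniform vec3 uFogColor;
uniform float uFogNear;
uniform float uFogFar;

varying vec2 vUv;

void main() {
	float height = texture2D( tHeightMap, vUv ).r;

	// blend the two fire colors and scale them by the noise value
	vec3 fire = mix( uColor1, uColor2, vUv.y ) * height;

	// fade towards the smoke color where the noise is weak
	vec3 color = mix( uSmokeColor, fire, smoothstep( 0.1, 0.6, height ) );

	// simple depth-based fog to blend the sphere into the background
	float depth = gl_FragCoord.z / gl_FragCoord.w;
	float fogFactor = smoothstep( uFogNear, uFogFar, depth );

	gl_FragColor = vec4( mix( color, uFogColor, fogFactor ), 1.0 );
}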

For a final touch, a bloom filter is applied and some particles are thrown in as well. I had some help from @aerotwist and his particle tutorial.

That’s it folks, thanks for reading.

 

 

Wave

More time playing with Three.js and WebGL. My primary goal was to get a deeper understanding of shaders by just messing around a little. If you want to know the basics about shaders in three.js I recommend reading this tutorial by Paul Lewis (@aerotwist). In this post I will try to explain some of the things I’ve learned in the process so far. I know this is like kindergarten for “real” CG programmers, but history often repeats itself in this field. Let us pretend that we are cool for a while ;)

Run the demo

The wave

I started off making a wave model, and a simple plane as the ocean in the distance, in 3D Studio Max. I later painted the wave with vertex colors. These colors were used in the shader to mimic the translucence of the wave. I could have used real subsurface scattering, but this simple workaround worked out well. The vertex colors are mixed with the regular diffuse and ambient colors. A normal map with a tiled noise pattern was then applied for ripples. The normals from this texture don’t update relative to the vertex animation, but that’s fine. On top of that, a foam texture with white Perlin noise was blended with the diffuse color. I tried an environment map for reflections as well, but it was removed from the final version; the water looked more like metal or glass.

I have always liked the “behind the scenes” part showing the passes of a render with all layers separated and finally blended, so here is one of my own so you can see the different steps:

 

Animating with the vertex shader

My wave is a static mesh. For the wobbly motion across the wave, I animate vertices in the vertex shader. One movement along the y-axis creates the wavy motion, and one along the x-axis adds some smaller ripples. You have probably seen this often, since it’s an easy way to animate something with just a time value passed in every frame.

vertexPosition.y += sin( (vertexPosition.x/waveLength) + theta )*maxDiff;

The time offset (theta), together with the x-position, goes into a sin function that generates a periodic cycle with a value between -1 and 1. Multiply that by a factor and the vertices animate up and down. To get more crests along the mesh, divide the position by waveLength.
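
Put into a vertex shader, the whole thing could look something like this (a sketch with made-up uniform names and amplitudes, not the demo’s exact code):

uniform float uTime;
uniform float uWaveLength;

varying vec2 vUv;

void main() {
	vUv = uv;
	vec3 p = position;

	float theta = uTime;

	// large swell along y, driven by the x-position
	p.y += sin( ( p.x / uWaveLength ) + theta ) * 2.0;

	// smaller, faster ripple along x
	p.x += sin( ( p.y / ( uWaveLength * 0.25 ) ) + theta * 2.0 ) * 0.3;

	gl_Position = projectionMatrix * modelViewMatrix * vec4( p, 1.0 );
}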

A side note: I tried to use vertex texture lookups (supported in the ANGLE version) for real vertex displacement. With that I could get more unique and dynamic animation. I plotted a trail into a hidden canvas to get jet-ski backwash, but the resolution of the wave mesh was too low and I had problems matching the position of the jet ski with the UV mapping. A flat surface is simple, but the jet ski uses collision detection to stick to the surface, and I have no idea how to get the uv from the collision point. However, I learned to use a separate dynamic canvas as a texture. I used a similar technique in my snowboard experiment in Flash, but that was of course without hardware acceleration.

Animating with the fragment shader

The wave is not actually moving; it’s just the uv-mapping. The same time offset used in the vertex shader is here responsible for “pushing” the uv-map forward. You can also edit the UV mapping in your 3D program to modify the motion. In this case, the top of the breaking part of the wave had to go in the opposite direction, so I just rotated the uv-mapping 180 degrees for those faces. My solution works fine as long as you don’t look over the ridge of the wave and see the seam. In theory you could make a seamless transition between the two directions, but that’s beyond my UV-mapping skills. You can also use different offset values for different textures, like a Quake sky with parallaxed layers of clouds.
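
The scrolling itself is tiny; here is a sketch with made-up uniform names:

uniform sampler2D tDiffuse;
uniform float uTime;
uniform float uSpeed;

varying vec2 vUv;

void main() {
	// the mesh never moves; only the texture coordinates are pushed along u
	vec2 scrolledUv = vec2( vUv.x + uTime * uSpeed, vUv.y );
	gl_FragColor = texture2D( tDiffuse, scrolledUv );
}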

Fighting over z with the depth buffer

I was aware of the depth buffer, or z-buffer, which stores a depth value for each fragment. Besides lighting calculations, post effects and other things, this data is used to determine whether a fragment is in front of or behind other fragments. But I had not reflected on how it actually works before. For those of us coming from Flash 3D (pre-Molehill) this is a new part of the pipeline (even though some experiments, like this one, used it). Now that the mighty GPU is in charge we don’t have flickering sorting problems, right? No, artifacts still appear. We have the same problem as with triangles, but now at the fragment level. Basically, it happens when two primitives (triangles, lines or points) end up with the same value in the buffer, with the result that sometimes one shows up and sometimes it doesn’t. The reason is that the resolution of the buffer is too low in that specific range, which leads to z-values snapping to the same depth-buffer value.

It can be a little hard to imagine the meaning of resolution, but I will try to explain, with emphasis on trying, since I only just learned this. The buffer is non-linear. The amount of numbers available is related to the z-position and the near and far planes. There are more numbers to use, or higher resolution, close to the eye/camera, and fewer and fewer assigned numbers as the distance increases. Stuff in the distance shares a smaller range of available numbers than the things right in front of you. That is quite useful; you probably need more detail right in front of you, where details are larger. The total amount of numbers, call it precision, differs depending on the graphics card, with possible 16, 24 or 32 bit buffers. Simply put, more space for data.

To avoid these artifacts there are some techniques to be aware of, and here comes the newbie lesson learned this time. By adjusting the near and far planes of the camera object, you can decide where the resolution will be focused. Previously, I used near=0.001 and far=“some large number”. This caused the resolution to be very high between 0.001 and “another small number” (there is a formula for this; see the small sketch below). Thus, the depth buffer resolution was too low by the time the nearest triangle was drawn. So the simple thing to remember is: set the near value as far away as you can while your scene still works. The value of the far plane is not as important because of the non-linear nature of the scale, as long as the near value is relatively small.
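
For the curious, here is roughly the value that ends up in the depth buffer for a fragment at eye-space distance z, given near plane n and far plane f (assuming a standard perspective projection and the default 0 to 1 depth range):

// approximate depth-buffer value for a fragment at eye-space distance z
float depthValue( float z, float n, float f ) {
	return ( 1.0 / n - 1.0 / z ) / ( 1.0 / n - 1.0 / f );
}

Because of the 1/z terms, most of the 0 to 1 range is spent just in front of the near plane, which is why a tiny near value starves the rest of the scene of precision.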

Another problem with occlusion arises when using color values with alpha. In those cases we sometimes have multiple “layers” of primitives, but only one value in the depth buffer. That can be handled by blending the color with the one already stored in the output (the frame buffer). Another solution is to set depthTest to false on the material, but obviously that is not possible when we need the face to be behind something. We can also draw the scene from front to back, or in different passes with the transparent objects last, but I have not used that technique yet.

Thanks to AlteredQualia (@alteredq) for pointing this out!

Collision detection

I first used the original mesh for collision detection. It ran pretty fast in Chrome but painfully slowly in Firefox. Iterating over arrays seems to differ greatly in performance between browsers. A quick tip: when using collision detection, use a separate hidden mesh with only the faces needed to do the job.

Smoke, Splashes and Sprites

All the smoke and white water are instances of the object type Sprite: a faster and more lightweight kind of object, always facing you like a billboard. I have said it before, but one important aspect here is to use an object pool so that you can reuse every sprite created. Continuously creating new ones will kill performance and churn memory.

I’m not really pleased with the final look of these particles, but it’s good enough for a quick experiment like this.

Learning Javascript

As a newcomer to the JavaScript world I’m still learning the obvious stuff, like console debugging. It is a huge benefit to be able to change variables in real time via the console in Chrome, for instance setting the camera position without the need for a UI or switching programs.

And I still end up writing almost all of my code in one huge file. With all the frameworks and build tools around I’m not really sure where to start. It’s a lame excuse, I know, but at the same time, when doing experiments I rarely know where I’m going, so writing “correct” OOP and structured code is not the main goal.

Once again, thanks for reading! And please share your thoughts with me. I really appreciate feedback, especially corrections or pointers to misunderstandings that can help me do things in a better way.

Interior mapping in Unity3D

Background

I have finally started to play with Unity3D. My first goal is to get an overview of how shaders work. This will come in handy regardless of which platform I choose as my target. My favorite way of learning is to port stuff from other languages. Here I get the opportunity to learn two new languages at the same time, and all the trial and error involved often inspires me to do things that weren’t intended in the first place.

The shader I chose to implement is written by Joost van Dongen (@JoostDevBlog) and he calls it “interior mapping”. It’s a technique for rendering buildings with a facade with reflections and lighting, and perspectively correct interiors (well, at least the walls are correct; furniture and details on the walls are of course flat) seen through the windows, giving the illusion of loads of geometry on a wall made of just two triangles. All done by a single shader. The use of raycasting together with interpolated fragments opens up a lot of interesting ideas. It’s not a new technique though; I’m just exploring this field for the first time. Check out his excellent paper for details, complete with source and explanations.

In this demo I use another source of inspiration as well. If you’re into Unity, you have probably seen this cityscape made by @bartekd. I used his example to plot out procedural buildings in a grid, and at the same time learned how to create meshes by scripting. That led me to add roads, sidewalks and lights. I also added a simple day/night system which fades between two skyboxes (it looks weird with a fading sun, but that’s enough for this demo); this parameter also controls the lighting and the activity inside the buildings. Check it out:

Demo

View the procedural city here


Optimization

One thing I’m totally missing here is optimization skills. Unity folks will shake their heads and laugh out loud. I don’t really know where to start. The buildings and ground are just a few triangles, but I have added loads of point lights and streetlight models. I should probably use some sort of lightmapping, but I think that is a Pro-license feature only. The ground is also a grid that looks exactly the same in each cell; maybe there is a way to optimize that. I have tried combining children and meshes, but I got stuck with lighting problems or hit the ceiling for max vertices. The shader has to be unique for each building, so I can’t combine those either. As for shader models, this shader uses shader model 3.0; the instructions needed would not fit otherwise. But drop the random walls from a texture atlas and it will compile to model 2.0. The framerate is pretty smooth for this experiment anyway, and at least I have a playground where I can practice my forthcoming skills.

Source

Download the building shader I put together in Unity. It would be nice to do a surface shader instead to get real-time shadows, but once again I’m not an owner of a Pro license, so that will have to wait. If you like, check out the whole scene package. These are my first Unity experiments, so please be honest and tell me what could be done better.

Now back to the drawing board. I have an idea that involves physics, so that’s my next stop on the Unity bus. Until then, thanks for reading.