Wave

More time playing with Three.js and WebGL. My primary goal was to get a deeper understanding of shaders by just messing around a little. If you want to know the basics about shaders in three.js I recommend reading this tutorial by Paul Lewis (@aerotwist). In this post I will try to explain some of the things I’ve learned in the process so far. I know this is like kindergarten for “real” CG programmers, but history often repeats itself in this field. Let us pretend that we are cool for a while ;)

Run the demo

The wave

I started off by making a wave model, and a simple plane as the ocean in the distance, in 3D Studio Max. Later on I painted the wave with vertex colors. These colors were used in the shader to mimic the translucence of the wave. I could have used real subsurface scattering, but this simple workaround worked out well. The vertex colors are mixed with the regular diffuse and ambient colors. A normal map was then applied with a tiled noise pattern for ripples. The normals for this texture don’t update relative to the vertex animation, but that’s fine. On top of that, a foam texture with white perlin noise was blended with the diffuse color. I tried an environment map for reflections as well, but that was removed in the final version since it made the water look more like metal or glass.
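To give a rough idea of the kind of mixing I mean, here is a hedged sketch of a fragment-shader chunk, written as a Three.js shader string. It is only an illustration of the principle, not the actual shader from the demo; vColor, diffuseTex, foamTex and the mix factors are made up for the example:

// Sketch of the color mixing as a shader-string chunk (illustration only).
var colorMixChunk = [
    '// vColor     = the painted vertex color, faking the translucence',
    '// diffuseTex = the diffuse texture, foamTex = the white perlin-noise foam',
    'vec3 base = texture2D( diffuseTex, vUv ).rgb;',
    'vec3 translucence = mix( base, vColor.rgb, 0.5 );   // vertex colors mixed in',
    'float foam = texture2D( foamTex, vUv ).r;',
    'gl_FragColor = vec4( mix( translucence, vec3( 1.0 ), foam ), 1.0 );'
].join('\n');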

I have always liked the “behind the scenes” part showing the passes of a rendering with all layers separated and finally blended, so here is one of my own so you can see the different steps:

 

Animating with the vertex shader

My wave is a static mesh. For the wobbly motion across the wave, I animate the vertices in the vertex shader: one movement along the y-axis that creates the wavy motion and one along the x-axis for some smaller ripples. You have probably seen this often, since it’s an easy way to animate something with just a time value passed in every frame.

vertexPosition.y += sin( (vertexPosition.x/waveLength) + theta )*maxDiff;

The time offset (theta) together with the x-position goes into a sin function that generates a periodic cycle with a value between -1 and 1. Multiply that by a factor (maxDiff) and the vertices animate up and down. To get more crests along the mesh, divide the position by waveLength.
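For context, here is a minimal sketch of how such a displacement could be wired up with a THREE.ShaderMaterial. It is only an illustration of the idea, not the material from the demo, and the uniform names (time, waveLength, maxDiff) are my own picks:

// A hedged sketch of the vertex animation as a ShaderMaterial.
var waveMaterial = new THREE.ShaderMaterial({
    uniforms: {
        time:       { value: 0.0 },  // the theta above, increased every frame
        waveLength: { value: 20.0 },
        maxDiff:    { value: 5.0 }
    },
    vertexShader: [
        'uniform float time;',
        'uniform float waveLength;',
        'uniform float maxDiff;',
        'void main() {',
        '    vec3 p = position;',
        '    // same idea as above: x-position plus time into sin() gives the wavy y-motion',
        '    p.y += sin( ( p.x / waveLength ) + time ) * maxDiff;',
        '    gl_Position = projectionMatrix * modelViewMatrix * vec4( p, 1.0 );',
        '}'
    ].join('\n'),
    fragmentShader: 'void main() { gl_FragColor = vec4( 0.0, 0.4, 0.6, 1.0 ); }'
});

// In the render loop:
waveMaterial.uniforms.time.value += 0.05;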

A side note: I tried to use vertex texture look-ups (supported in the ANGLE version) for real vertex displacement. With that I could get more unique and dynamic animations. I plotted a trail in a hidden canvas to get jetski backwash, but the resolution of the wave mesh was too low and I had problems matching the position of the jetski with the UV mapping. A flat surface is simple, but the jetski uses collision detection to stick to the surface, and I have no idea how to get the UV from the collision point. However, I learned to use a separate dynamic canvas as a texture. I used a similar technique in my snowboard experiment in Flash, but that was of course without hardware acceleration.
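The canvas-as-texture part is roughly this simple. A hedged sketch (exact details depend on the Three.js version, and the trail point is hypothetical):

// Draw into an ordinary 2D canvas and use it as a texture.
var trailCanvas = document.createElement('canvas');
trailCanvas.width = trailCanvas.height = 256;
var trailCtx = trailCanvas.getContext('2d');

var trailTexture = new THREE.Texture( trailCanvas );

// ...later, every time a new point of the backwash trail is plotted:
var x = 100, y = 120; // hypothetical trail position in canvas space
trailCtx.fillStyle = 'rgba(255,255,255,0.8)';
trailCtx.fillRect( x, y, 4, 4 );
trailTexture.needsUpdate = true; // tell Three.js to re-upload the texture to the GPU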

Animating with the fragment shader

The wave is not actually moving forward; it’s just the UV mapping that is. The same time offset used in the vertex shader is here responsible for “pushing” the UV map forward. You can also edit the UV mapping in your 3D program and modify the motion. In this case, the top of the breaking part of the wave had to go in the opposite direction, so I just rotated the UV mapping 180 degrees for those faces. My solution works fine as long as you don’t look over the ridge of the wave and see the seam. In theory, I think you could make a seamless transition between the two directions, but it’s beyond my UV-mapping skills. You can also use different offset values for different textures, like a Quake sky with parallaxed layers of clouds.
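In GLSL terms, the “pushing” is nothing more than adding the time offset to the UV before the texture lookup. A hedged sketch of such a fragment shader, again as a shader string (diffuseMap and the speed factor are made up; the vertex shader has to pass vUv along, and the texture needs repeat wrapping):

// Fragment shader sketch: scroll the UVs with the same time offset.
var waterFragmentShader = [
    'uniform float time;',
    'uniform sampler2D diffuseMap;',
    'varying vec2 vUv;',
    'void main() {',
    '    vec2 uv = vUv;',
    '    uv.x += time * 0.05;                        // push the map forward over time',
    '    gl_FragColor = texture2D( diffuseMap, uv ); // repeat wrapping handles the overflow',
    '}'
].join('\n');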

Fighting over z with the Depth Buffer

I was aware of the depth buffer, or z-buffer, which stores the depth value for each fragment. Besides lighting calculations, post effects and other stuff, this data is used to determine whether a fragment is in front of or behind other fragments. But I had not reflected on how it actually works before. For us coming from Flash 3D (pre Molehill) this is a new technique in the pipeline (even though some experiments like this one use it). Now, when the mighty GPU is in charge we don’t have flickering sorting problems, right? No, artifacts still appear. We have the same problem as with triangles, but now on a fragment level. Basically, it happens when two primitives (triangle, line or point) get the same value in the buffer, with the result that sometimes one shows up and sometimes it doesn’t. The reason is that the resolution of the buffer is too small in that specific range, which leads to z-values snapping to the same depth-map value.

It can be a little hard to imagine the meaning of resolution here, but I will try to explain, with emphasis on trying, since I just learned this. The buffer is non-linear. The amount of numbers available is related to the z-position and the near and far planes. There are more numbers to use, or higher resolution, close to the eye/camera, and fewer and fewer assigned numbers as the distance increases. Stuff in the distance will share a smaller range of available numbers than the things right in front of you. That is quite useful; you probably need more precision right in front of you, where details are larger. The total amount of numbers, call it precision, differs between graphics cards, with possible 16, 24 or 32 bit buffers. Simply put, more space for data.
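To make the non-linearity a bit more concrete, here is a small sketch of the standard perspective depth mapping (for an OpenGL-style projection; this is my own illustration, not something from the demo):

// Depth value written to the buffer for a fragment at eye-space distance d,
// with near plane n and far plane f (standard perspective projection).
function depthValue( d, n, f ) {
    return ( f * ( d - n ) ) / ( d * ( f - n ) );
}

// With n = 1 and f = 1000, half of the whole [0..1] range is already used
// up by the first unit of distance behind the near plane:
depthValue(   2, 1, 1000 ); // ~0.50
depthValue(  10, 1, 1000 ); // ~0.90
depthValue( 100, 1, 1000 ); // ~0.99
depthValue( 500, 1, 1000 ); // ~0.999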

To avoid these artifacts there are some techniques to be aware of, and here comes the newbie lesson learned this time. By adjusting the near and far planes of the camera object, you can decide where the resolution will be focused. Previously, I used near=0.001 and far=”some large number”. This caused the resolution to be very high between 0.001 and “another small number” (there is a formula for this, but it’s not important right now). Thus, the depth buffer resolution was too low by the time the nearest triangles were actually drawn. So the simple thing to remember is: set the near value as far out as you can while your scene still works. The value of the far plane is not as important because of the non-linear nature of the scale. As long as the near value is relatively small, that is.
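In Three.js this just means being a bit more deliberate with the camera constructor arguments. A sketch with arbitrary example values:

// Pushing the near plane out gives the depth buffer more useful resolution
// in the range where the geometry actually lives.
var camera = new THREE.PerspectiveCamera(
    45,                                     // field of view
    window.innerWidth / window.innerHeight, // aspect ratio
    1,                                      // near: as large as you can get away with
    10000                                   // far: less critical, thanks to the non-linear scale
);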

Another problem with occlusion calculations is when using color values with alpha. In those cases we sometimes get multiple “layers” of primitives, but only one value in the depth buffer. That can be handled by blending the color with the one already stored in the output (the frame buffer). Another solution is to set depthTest to false on the material, but obviously that is not possible in cases where we need the face to be behind something. We can also draw the scene from front to back or in different passes, with the transparent objects last, but I have not used that technique yet.
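A hedged sketch of the material flags involved (the smoke texture is hypothetical, and which option works depends on the scene):

// Option one: mark the material as transparent so its color is blended
// with what is already in the frame buffer.
var smokeMaterial = new THREE.MeshBasicMaterial({
    map: smokeTexture,   // hypothetical texture with an alpha channel
    transparent: true
});

// Option two: skip the depth test for this material entirely.
// Only an option when the faces never need to be hidden behind anything.
smokeMaterial.depthTest = false;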

Thanks AlteredQualia (@alteredq) for pointing this out!

Collision detection

At first I used the original mesh for collision detection. It ran pretty fast in Chrome but painfully slow in Firefox; iterating over arrays seems to differ greatly in performance between browsers. A quick tip: when using collision detection, use a separate hidden mesh with only the faces needed to do the job.
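Something along these lines (a sketch of the idea with made-up names, using the current Raycaster API rather than whatever the demo actually used):

// A low-poly, invisible proxy mesh used only for the ray tests.
var collisionMesh = new THREE.Mesh( lowPolyWaveGeometry, new THREE.MeshBasicMaterial() );
collisionMesh.visible = false;
scene.add( collisionMesh );

// Cast a ray straight down from above the jetski to find the surface height.
var origin = jetski.position.clone();
origin.y += 100; // start well above the surface
var raycaster = new THREE.Raycaster( origin, new THREE.Vector3( 0, -1, 0 ) );

var hits = raycaster.intersectObject( collisionMesh );
if ( hits.length > 0 ) {
    jetski.position.y = hits[ 0 ].point.y; // stick the jetski to the wave surface
}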

Smoke, Splashes and Sprites

All the smoke and white water are instances of the object type Sprite, a faster and more lightweight kind of object that is always pointed at you like a billboard. I have said it before, but one important aspect here is to use an object pool so that you can reuse every sprite created. Constantly creating new ones will kill performance and drive up memory allocation.
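A minimal sketch of such a pool (hypothetical code, not the particle system from the demo):

// A tiny sprite pool: grab a sprite when a splash is spawned, hand it back when it fades out.
var spritePool = {
    items: [],
    get: function ( material ) {
        return this.items.pop() || new THREE.Sprite( material );
    },
    release: function ( sprite ) {
        sprite.visible = false;
        this.items.push( sprite );
    }
};

// Usage (splashMaterial, jetski and scene are assumed to exist):
var splash = spritePool.get( splashMaterial );
splash.visible = true;
splash.position.copy( jetski.position );
scene.add( splash );
// ...and when the particle's life is over:
spritePool.release( splash );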

I’m not really pleased with the final solution for these particles, but it’s good enough for a quick experiment like this.

Learning Javascript

As a newcomer to the Javascript world I’m still learning the obvious stuff, like console debugging. It is a huge benefit to be able to change variables in real time via the console in Chrome, for instance setting the camera position without the need for a UI or switching programs.

And I still end up writing almost all of my code in one huge file. With all the frameworks and build tools around, I’m not really sure where to start. It’s a lame excuse, I know, but at the same time, when doing experiments I rarely know where I’m going, so writing “correct” OOP and structured code is not the main goal.

Once again, thanks for reading! And please share your thoughts with me. I really appreciate feedback, especially corrections or pointing out misunderstandings that can help me do things in a better manner.

Comments (8)

  1. Hey, congrats on your latest experiment! This looks like another very valuable set of lessons to have learned. I especially appreciate that you’ve shared your observations on the z-buffer and near clipping plane. It probably would have taken me a rather long time to discover the non-linear nature of that beast myself had you not pointed it out in this post.

    On the topic of being a newcomer to Javascript, I just recently finished reading “JavaScript: The Good Parts”, and I’ve got to say – holy crap that book was awesome. So many misconceptions blown away, so many features I had been abusing terribly were corrected with infinitely more elegant approaches, and so many syntactical questions answered with simple absolutes… the book was simply a masterpiece. I would certainly say that the book offered me perspectives that made me a better programmer in general. In addition, since ActionScript3 is a dialect of ECMAScript (where Javascript is an implementation of ECMAScript), I would imagine that if I ever end up writing in ActionScript3 again, my practices there will be much improved as well. Seriously, if you are confused about or having problems understanding anything in Javascript at all, pick up a copy of “JavaScript: The Good Parts” and your confusion will melt away. Douglas Crockford For The Win!

  2. Thanks! I really enjoy doing this stuff. I only wish that the clients thought their customers would enjoy it as well :) Anyhow, I’ve got some time to gain experience before “modern” browsers become standard to them.
