I almost forgot to post this little demo. It’s a remake of an old experiment with procedural eyes that I made in Flash a long time ago. (Actually, the old one looks kind of cooler than this; I think it’s the normal mapping that makes the difference. I haven’t gotten to the point of doing that in a procedural shader yet, and would probably have to use off-screen renderers and such.)
This time it’s done with three.js and shaders. Basically, I started off with the sphere geometry. The vertices near the pole of the z-axis got an inverted z-position relative to the original radius, forming the iris as a concave parabolic disc on the sphere.
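The vertex step above might be sketched like this. This is my guess at the carve, not the demo’s exact code: vertices of a unit sphere close enough to the +z pole are mirrored back through a cutoff plane, turning the spherical cap into a concave dish. The cutoff value and the mirroring rule are assumptions.

```javascript
// Hypothetical sketch: mirror the polar cap of a unit sphere inward
// about the plane z = cutoff, forming a concave iris disc.
function carveIris(vertices, cutoff) {
  return vertices.map(function (v) {
    if (v.z > cutoff) {
      // reflect the cap vertex through the cutoff plane
      return { x: v.x, y: v.y, z: 2 * cutoff - v.z };
    }
    return v; // the rest of the sphere is untouched
  });
}

// the pole vertex (z = 1) ends up below the cutoff plane
var carved = carveIris([{ x: 0, y: 0, z: 1 }], 0.9);
console.log(carved[0].z); // 0.8
```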
The shader then does its thing. By reading the uv-coordinates and the surface positions I apply different patterns to different sections. I separate those sections in the shader with a combination of step (or smoothstep) and mix. Step is almost like an if-statement, returning 1 if the value is past the edge and 0 otherwise. Smoothstep blurs the edge for a smoother transition. You can then use that factor in mix to choose between two colors, for example the pupil and the iris. I mentioned this method in my last post as well.
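The step/smoothstep/mix combination can be shown in JS with the same semantics as the GLSL built-ins. The pupil/iris radii here are made-up example values, not the demo’s.

```javascript
// JS mirrors of the GLSL built-ins used to select sections of the eye.
function step(edge, x) { return x < edge ? 0.0 : 1.0; }
function smoothstep(e0, e1, x) {
  var t = Math.min(Math.max((x - e0) / (e1 - e0), 0.0), 1.0);
  return t * t * (3.0 - 2.0 * t); // Hermite interpolation, like GLSL
}
function mix(a, b, f) { return a * (1.0 - f) + b * f; }

// Pick between pupil (0.0, black) and iris (0.6, grey) by distance
// from the iris center; smoothstep blurs the border.
function eyeShade(dist) {
  var pupilRadius = 0.2; // assumed value
  var f = smoothstep(pupilRadius - 0.02, pupilRadius + 0.02, dist);
  return mix(0.0, 0.6, f);
}
```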
Check out the fragment shader here
To give the eye its reflective look, I added a second sphere just outside the first one with an environment map. The reason for not using the same geometry was that I wanted the reflection on the correct surface around the iris. And by making the second sphere just slightly larger, the surface gets some sense of thickness.
To spice it up a little, I added some real-time audio-analysis effects with a nifty little library called Audiokeys.js. With it you can specify a range of frequencies to listen to and get the summed level for that range. Perfect for controlling parameters and sending them to the shaders as uniforms.
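The idea of a frequency-range level can be sketched independently of the library (this is not Audiokeys.js’s actual API): map a Hz range onto FFT bin indices and average the byte levels in that range. In a browser, `bins` would come from something like `AnalyserNode.getByteFrequencyData`.

```javascript
// Sum the FFT bins covering [loHz, hiHz] and normalize to 0..1,
// ready to be sent to a shader as a uniform.
function rangeLevel(bins, sampleRate, fftSize, loHz, hiHz) {
  var hzPerBin = sampleRate / fftSize;
  var lo = Math.max(0, Math.floor(loHz / hzPerBin));
  var hi = Math.min(bins.length - 1, Math.ceil(hiHz / hzPerBin));
  var sum = 0;
  for (var i = lo; i <= hi; i++) sum += bins[i];
  return sum / ((hi - lo + 1) * 255);
}
```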
This is take two on a little toy that I posted some weeks ago on Twitter. I’ve added some new features so it’s not completely old news. There are two new materials and dynamic sounds (Chrome only).
This demo is a fun way to demonstrate a simple procedural shader. As you start working the lathe you carve into an imaginary space defined by numbers and calculations. I like to imagine the surface as a viewport into a calculated world, ready to be explored and tweaked. It’s nothing fancy compared to the fractal ray-marching shaders around, but there is something magical about setting up some simple rules and being able to travel through the result visually. Remember, you get access to the object’s surface coordinates in 3D for each rendered pixel. That’s a big difference compared to 3D without shaders, where you have to calculate all this on a 2D plane and wrap it around the object with uv-coordinates instead. I did an experiment in Flash 10 ages ago doing just that, one face at a time generated to a BitmapData. Now you get it for free.
Some words about the process of making this demo. Some knowledge of GLSL may come in handy, but this is more a personal log than a public tutorial. I won’t recommend any particular workflow since it really depends on your needs from case to case. My demos are also completely unoptimized, with lots of scattered files all over the place. That’s why I’m not on GitHub; I’m already off to the next experiment instead.
When testing a new idea for a shader, I often assemble all the bits and pieces into a pair of external glsl-files to get a quick overview. There are often tweaks I want to try that aren’t supported, or aren’t structured the same way, in the standard shader-chunks available. My theory is also that if I edit the source manually and view it more often, I get a better understanding of what’s going on. And don’t underestimate the force of changing values randomly (a strong force for me). You can literally turn stone into gold if you are lucky.
The stone material in the demo is just ordinary procedural Perlin noise. The wood and metal shaders, though, are built around these features:
What if I could have a layer that is shown before you start carving? My first idea was to have an off-screen canvas to draw on and then mix it with the procedural shader like a splat map. But then I found a more obvious solution: I just use the radius with the step method (where you set the edge values for a range). For example, the value is 1 when the normalized radius is more than 0.95 and 0 when it’s less. With smoothstep the edge gets less sharp. This value then goes into mix, which interpolates between two colors: the coat and the underlying part. In the wood shader the coat is a texture, and in the metal shader it’s a single color.
DiffuseColor = mix(DiffuseColor1, DiffuseColor2, smoothstep(0.91, 0.98, distance(mPosition, vec3(0.0, 0.0, mPosition.z)) / uMaxRadius));
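That mix line can be evaluated in JS to see what the coat mask does. `distance(mPosition, vec3(0,0,mPosition.z))` is just the radius in the xy-plane; the positions and `uMaxRadius` below are example values, not the demo’s.

```javascript
// GLSL-style smoothstep.
function smoothstep(e0, e1, x) {
  var t = Math.min(Math.max((x - e0) / (e1 - e0), 0), 1);
  return t * t * (3 - 2 * t);
}
// 0 deep inside the cylinder (carved material shows),
// 1 near the original surface (the coat shows).
function coatMask(x, y, uMaxRadius) {
  var r = Math.sqrt(x * x + y * y) / uMaxRadius; // radius in the xy-plane
  return smoothstep(0.91, 0.98, r);
}
```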
These are the grains revealed inside the wood and the tracks the chisel makes in the metal cylinder. Basically I once again interpolate between two colors depending on the fractional part, or the sin output, of the position. Say we give all positions between x.8 and x.0 along the x-axis a different color: a width of 10 units gives 10 stripes that are 0.2 units thick. By scaling the values you get different sizes of the pattern. But it’s pretty static without the…
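The fractional-part stripes look like this in JS. The 0.8 threshold matches the “between x.8 and x.0” band above; the scale factor is the size control mentioned.

```javascript
// Stripe mask from the fractional part of a coordinate:
// positions whose fraction falls in the last 0.2 of each unit
// get the second color. `scale` controls stripe frequency.
function fract(x) { return x - Math.floor(x); }
function stripeMask(x, scale) {
  return fract(x * scale) > 0.8 ? 1 : 0;
}
```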
To make the pattern from the previous step look more realistic I turn to a very common method: making noise. I’m using 3D simplex noise (Perlin noise 2.0, kind of) with the surface position to get the corresponding noise value. The value returned for each calculation/pixel is a single float that in turn modifies the mix between the two colors in the stripes and rings. The result is like a 3D smudge tool, pushing the texture in different directions.
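The smudge idea can be shown with a cheap stand-in for simplex noise: a deterministic hash produces a pseudo-random value per 3D position, which offsets the coordinate fed into the stripe pattern. The real demo uses proper 3D simplex noise; this hash is only an illustration.

```javascript
// Deterministic hash "noise" in 0..1 (a common shader trick,
// standing in for real simplex noise here).
function hashNoise(x, y, z) {
  var s = Math.sin(x * 12.9898 + y * 78.233 + z * 37.719) * 43758.5453;
  return s - Math.floor(s);
}
function fract(x) { return x - Math.floor(x); }
// Offset the stripe coordinate by the noise: the pattern gets
// pushed around in 3D, like a smudge tool.
function smudgedStripe(x, y, z, strength) {
  var offset = (hashNoise(x, y, z) - 0.5) * strength;
  return fract(x + offset) > 0.8 ? 1 : 0;
}
```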
I’m using Phong shading with three directional light sources to make the metal look more reflective without the need for an environment map. You can see how the lights form three horizontal trails on the cylinder, with the brightest one in the middle.
I just copied the shadow-map part from WebGLShaders.js into my shader. One thing worth mentioning is the set of properties available on directional lights for setting the shadow frustum (shadowCameraLeft, shadowCameraRight, shadowCameraTop and shadowCameraBottom). There are also helper guides if you enable dirLight.shadowCameraVisible = true, which makes it easier to set the values. When the shadows are calculated and rendered, the light is used as a camera, and with these settings you can limit the “area” of the light’s view that gets squeezed into the shadow map. With spotlights the frustum is already set, but with directional lights you can define one that suits your needs. The shadow map has a fixed size for each light, and when drawing to this buffer the render pipeline will automatically fit all your shadows into it. So the smaller the bounding box of the geometries you send to it, the higher the resolution of details you get. Play around with these settings if you have a larger scene with limited shadowed areas, and keep an eye on the resolution of the shadows: if the scene is small, your shadow-map size is large, and the shadows are still low-res, your settings are probably wrong. Also, check the scales in your scene. I’ve had trouble in some cases when exporting from 3D Studio Max where the scaling is too small or too large, so that the relation to the cameras goes all wrong.
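The frustum-size trade-off is easy to quantify: the shadow map has a fixed resolution, so the texels available per world unit depend on how much area the light’s frustum covers. The numbers below are made up for illustration.

```javascript
// Texels-per-world-unit of a directional light's shadow map,
// given a square map and an orthographic frustum.
function shadowTexelsPerUnit(mapSize, left, right, top, bottom) {
  var width = right - left;
  var height = top - bottom;
  return { x: mapSize / width, y: mapSize / height };
}
// Halving the frustum extent doubles the shadow detail
// without touching the map size.
var wide = shadowTexelsPerUnit(1024, -50, 50, 50, -50);
var tight = shadowTexelsPerUnit(1024, -25, 25, 25, -25);
```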
That’s it for now, try it out here. And please, ask questions and correct me if I’m wrong or missing something. See you later!
More noisy balls. Time to get nostalgic with a classic from the 80’s: the plasma lamp. I have never had the opportunity to own one myself; maybe that’s why I’m so fascinated by them. There are even small USB-driven ones nowadays. I’ve had this idea for a demo for a long time and finally made a rather simple version.
The electric lightning is made up of tubes that are displaced and animated in the vertex shader, just as in my last experiment. I reuse the procedurally generated noise texture that is applied to the emitting sphere. I displace the local x- and z-positions with an offset multiplied by the value I get from the noise texture at the corresponding uv-position (along the y-axis). To make the lightning progress more naturally I also apply an easing function to this factor. Notice that there is less displacement near the center and more at the edges.
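A guess at that displacement rule, sketched per vertex: sample a noise value along the tube’s v coordinate, ease it so the ends move more than the middle, and use the result to offset x and z. The names and the exact easing curve are assumptions.

```javascript
// Smoothstep-style ease.
function easeInOut(t) { return t * t * (3 - 2 * t); }
// v runs 0..1 along the tube; distance from the center (0.5)
// drives the easing so the middle barely moves.
function displace(v, noiseValue, amount) {
  var d = Math.abs(v - 0.5) * 2;       // 0 at center, 1 at the ends
  var factor = easeInOut(d);
  return noiseValue * amount * factor; // applied to local x and z
}
```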
I have this idea of controlling the lightning over a multi-touch websocket connection between a mobile browser and the desktop browser, just to learn node.js and sockets. Let’s wait and see how that evolves. Running this on a touch screen directly would have been even nicer, of course. If someone can do that, I will add multiple touch points, just let me know.
Time for one more step in this snowy experiment. I can’t help feeling like I’ve fallen off the wagon right now, with all the kick-ass 3D demos around for Molehill, the upcoming Flash 3D API. Of course, it’s not just about the amount of triangles, but also the creativity and love put into the ones you’ve got. Anyway, I hope that you will like this piece while we’re all waiting for salvation. Just put a smooth-modifier in your mental stack.
To mention some of the additions in this version: I have added collada animations, a background view and different camera angles, including the typical fish-eye lens seen in many snowboard videos. But the main feature is that you can now control the rider and carve nice bezier curves in the snow yourself. You have three different inputs to play with:
Steer with the arrow keys. Press the down arrow to bend your knees and push the snow a little harder. With this input method it’s simple to keep a continuous pace, but it feels a little static.
In my opinion a more dynamic way to get smoother turns and more control. Until you find your pace it can look a little funny, because the engine needs a previous turn to calculate the leaning angle. Try to move the mouse back and forth in an arc across the slope to get a balance of speed and pressure, simulating the g-force of surfing through the snow.
This is quite fun to play with. I like the idea of moving your own body to control the movement. The face-detection algorithm steals some CPU, so it’s maybe not the best choice. Use the same technique as with the mouse; the y-axis controls the pressure. Calibrate yourself against the webcam preview image to find a good position.
Now, let’s make some turns.
I don’t have a multi-touch trackpad, otherwise it would have been cool to control each foot with a finger, like a fingerboard. Or a Wacom tablet, with different levels of pressure.
It would be even cooler to control the board in 3D. Or maybe connect a Wii controller to it?
To get a natural and correct-looking turn, the character has to lean the body through the whole turn, almost before the turn even starts. So how do we know which phase of the turn we’re in? We could “record” the position, then interpolate and adjust it afterwards, like a bezier-drawing application. But I don’t want that delay, since it affects the response and the feeling of riding in real time. So I came up with a different approach. I keep two markers, xMin and xMax. Each time the turn reaches its maximum or minimum position the values are updated. That gives me an estimated range for the next turn (assuming the next turn will be the same length). The current x-position is then compared with the estimated range, which gives a normalized value between 0 and 1. I can then ease that value with a Sine.easeInOut function, and the eased value is used for the leaning angle. If you make a shorter or longer turn than expected it will of course look different, as it will before you find your pace, but it still looks OK.
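The turn-phase trick can be sketched in a few lines. The function names are mine; the sine ease matches the Sine.easeInOut mentioned above.

```javascript
// Normalize the current x against the estimated turn range
// (previous turn's extremes), clamped for over/undershoot.
function turnPhase(x, xMin, xMax) {
  var t = (x - xMin) / (xMax - xMin);
  return Math.min(Math.max(t, 0), 1);
}
// Classic sine ease-in-out, 0..1 -> 0..1.
function sineEaseInOut(t) {
  return -0.5 * (Math.cos(Math.PI * t) - 1);
}
// Eased phase drives the leaning angle.
function leanFactor(x, xMin, xMax) {
  return sineEaseInOut(turnPhase(x, xMin, xMax));
}
```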
That’s all for now. Thanks for reading!
I have some improvements for you since last time! New in this version are terrain, a rider and some snow particles. In this demo, try combining different values to see how the rider and the snow are affected. All parameters are connected, so you can get pretty cool results.
I mentioned earlier that I could add effects to each layer in the heightmap/texture comp. I have now added Perlin noise to the base layer. The offset for each octave is the same as the track-layer scroll, so they stay in sync with each other. By resizing the y-scale of the noise, the terrain gets a more stretched look, like ridges or dunes created by the wind. It’s a simple example of distortion; more parameters and effects will be added to create different slopes. I also read out the height value at the snowboarder’s position. Notice how the rider floats on top of the terrain, minus the snow depth, depending on the current pressure.
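Reading the height under the rider might look like this: bilinear sampling of a heightmap stored as a flat array, with the snow depth scaled by pressure subtracted. The blend and the parameter names are assumptions, not the demo’s code.

```javascript
// Bilinear sample of a w*h heightmap stored row-major in a flat array.
function sampleHeight(map, w, h, x, y) {
  var x0 = Math.floor(x), y0 = Math.floor(y);
  var x1 = Math.min(x0 + 1, w - 1), y1 = Math.min(y0 + 1, h - 1);
  var fx = x - x0, fy = y - y0;
  var top = map[y0 * w + x0] * (1 - fx) + map[y0 * w + x1] * fx;
  var bot = map[y1 * w + x0] * (1 - fx) + map[y1 * w + x1] * fx;
  return top * (1 - fy) + bot * fy;
}
// The rider sinks into the snow proportionally to the pressure.
function riderY(map, w, h, x, y, snowDepth, pressure) {
  return sampleHeight(map, w, h, x, y) - snowDepth * pressure;
}
```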
The guy is a fully rigged model imported from 3D Studio Max. I have just begun to test the bone support in Away3D, which is why he looks like a rookie. For now I just control joint rotations, but later on I will try to add bone animations and poses, especially if I add jumps and tricks. I tried to move the joints up and down, but I could not sort that out with correct IK and without destroying the mesh. I’m sure that method would be pretty CPU-intensive as well, with all the recursive translations going on in IK. I have to dig deeper into that later.
Just for the record, I’m in fact a skier, so maybe the movements are completely wrong since I have never even tried a snowboard.
I also added some snow particles, or animated sprites to be more specific. The animation is dynamic, depending on the speed and the pressure. With more pressure and deeper snow, the particles fly farther.
I guess next time I will try to make the rider controllable.
Winter is just around the corner. If you, like me, enjoy alpine winter sports, this little toy will get you in the mood. Imagine that you have climbed all the way up the mountain, before the lifts start feeding it with people. You want to be sure to get a ride on fresh, untouched snow all the way down. You’re now standing at the top, watching the sun rise behind a nearby peak. Goggles on. Gear on your back. It’s you and the mountain… I love that feeling!
This is the first part of this experiment in making an off-piste simulator/game in Away3D. The first step is to create the slope and generate tracks in the snow. Check out the first test.
I got inspired by @tonylukasavage and his morphing mesh experiments with HeightMapModifier in Away3D.
Some notes about the texture. I start off with a bitmap filled with the color value 0x800000. That fill represents the center of the offset. I then draw the track to another BitmapData, ranging from black to red. To make the scrolling effect I found a method I had never used before: BitmapData.scroll(x, y). The two layers are composited together into a single bitmap. One advantage of using layers is that I can add noise, shadows and other effects to each layer. The heightmap is then converted to a normal map with NormalMapUtil and to a texture with paletteMap. Now I have what I need to create a nice Phong-shaded Pixel Bender material. The grid has very few triangles (25×25 segments); I have to save some CPU for the character and the other stuff. I’m not doing any real-time triangulation either, so in some cases the vertices start to jump around. I have to accept that for now.
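The heightmap-to-normal-map step can be sketched with central differences (NormalMapUtil’s actual algorithm may differ). Heights here are 0..1 floats; the output is the usual 0..255 RGB encoding of a normal.

```javascript
// Normal at (x, y) of a w*h heightmap via central differences,
// encoded into 0..255 RGB like a normal-map texel.
function heightToNormal(map, w, h, x, y, strength) {
  var l = map[y * w + Math.max(x - 1, 0)];
  var r = map[y * w + Math.min(x + 1, w - 1)];
  var u = map[Math.max(y - 1, 0) * w + x];
  var d = map[Math.min(y + 1, h - 1) * w + x];
  var nx = (l - r) * strength, ny = (u - d) * strength, nz = 1;
  var len = Math.sqrt(nx * nx + ny * ny + nz * nz);
  return [
    Math.round((nx / len * 0.5 + 0.5) * 255),
    Math.round((ny / len * 0.5 + 0.5) * 255),
    Math.round((nz / len * 0.5 + 0.5) * 255)
  ];
}
// A flat heightmap encodes to the familiar normal-map blue: [128, 128, 255].
```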
I don’t really know the outcome of all the future parts yet, but next up is to add a character and some controls.
I realize all my posts so far have one thing in common: Perlin noise. So why make an exception this time? Remember the post about fire? Make the animation go in the opposite direction, change the colors and some parameters, put it on a texture, mix with blend modes and filters, and you’ve got yourself another element of nature: water.
When I play games like Just Cause, Far Cry or Uncharted 2, there is one thing I always remember most: the water effects. I forget about the story and just play around with them. I hope you will remember this one 😉
The cliff and the water are simple models created in 3D Studio Max, then imported as a scene into Away3D with Loader3D. The materials are converted to PhongMaterials. I tried using a normal map and a Pixel Bender material on the cliff to get a wet look, but it was too CPU-intensive together with the other effects, so that had to go.
The trick to making the water refract the light was to render the view twice: one pass with the waterfall alone, in just black and red, and another pass with the final result. The first pass is then used to drive a DisplacementMapFilter on the view. It’s quite slow, but I couldn’t resist activating it by default since the visual result is so much better. Try switching it off to see the difference in quality and speed.
Update: I strongly recommend using Flash Player 10.1 with this one; it makes a huge difference in speed! And how come Chrome is so much faster than the rest? I added a slider for the texture resolution as well, so that performance can be adjusted. Lame, I know…
I found the texture for the cliff over here.
Procedural graphics is surely my favorite: the concept of creating visual things with just code. It really comes in handy when you’re a crappy designer and still want to do visual stuff. Here you can see the result of yet another experiment along this track: an eye shader for generating procedural eyes. The 3D engine of choice is Away3D. I use a CompositeMaterial consisting of a PhongMultiPassMaterial and a SphericEnvMapPBMaterial (for the environment reflections). The bitmap used in the material is generated by a Haxe swf that is loaded at runtime, just as in previous experiments. The same bitmap is used as the base for the spherical normal map, which adds a nice displacement to the surface. The iris is quite simple and could use more layers and complexity to get a more realistic pattern. Anyway, the texture looks something like this.
When wrapped around a sphere it fits seamlessly. I know, the blood vessels aren’t that great. Perhaps I should use lines with turbulence instead.
Design your own eye
Here is a tool where you can try the different settings and create a unique eye.
This “Mars Attacks” demo follows the mouse. To get two eyes I duplicate the output from the view. That’s why he can’t look at his nose (if it existed). Notice that he reacts to light, sort of…
Last time I made some wood. Now let’s burn it up. I’m still experimenting with the same 3D Perlin noise as before, but this time animated. When you can take control over each pixel of noise, it’s easy to add extra rules to shape it, such as turbulence based on the y-value or an animated offset along the z-axis to gain some depth. Applying “ridges” to the Perlin noise (inverting all negative values when the range is -1 to 1) makes the flames look more interesting. The generated bitmap is just 80×80 with 2 octaves of Perlin noise, so the possibilities are quite limited.
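The “ridges” trick in isolation is just this: for noise values in -1..1, fold the negative half upward with an absolute value, which creates sharp creases where the noise crosses zero.

```javascript
// Ridged transform: invert the negative values of noise in [-1, 1].
// (Some implementations also flip with 1 - Math.abs(n) so the
// creases become bright ridges instead of dark ones.)
function ridged(n) {
  return Math.abs(n);
}
```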
Click here or on the fireplace above to get warm.
If you want to see how it works, here is a zip with the sources for a simple example. No graphics or UI are included, though. The noise is compiled with Haxe, but a swf is included if you want to skip that step.
See this post for additional credits.
The images in the last post were created with this tool. I use the heightmap to create a normal map with an excellent class called SphericalDispToNormConverter. Since I now have a heightmap and a normal map, Away3D can do the rest. The material used in this example is the PhongMultiPassMaterial. When “displacement map” is disabled, a simple BitmapMaterial is used. The @away3d team has really done a great job with the Pixel Bender materials.
Among the settings you can choose between regular Perlin noise and a ridged version. You will notice that the difference is pretty big. Also, check out the different presets.
The resolution of the heightmap is 400×200, but it’s scaled down in the UI. (By the way, I can’t tell you enough how much I love the minimalcomps by mr @bit101.) The heightmap generation is pretty heavy on the CPU, so choose your browser with care and stay away from debug players 😉
What do you think? Time for a generative material-library for Away3D?
I’m not quite done experimenting with 3D Perlin noise. Another cool feature that I haven’t seen in Flash is wrapping the noise seamlessly around a sphere. Here is the result:
Click the spheres for a slightly larger version
To get the texture to wrap seamlessly without distortion, we can use the 3D nature of the noise in an interesting way. By evaluating the 3D position of each point on the surface of a sphere, we get the noise value at that particular point. To convert a point from 2D texture space to 3D space, some trigonometry is needed. My math skills really suck, but my cut-and-paste skills are excellent. Once again, LibNoise showed me the way with five lines of code. At first I only got chaotic noise with some sort of repeating pattern; it took some time to figure out that I had to convert the numbers to a positive range.
lat = py / height * 180 - 90;
lon = px / width * 360 - 180;
r = Math.cos(DEG_TO_RAD * lat);
// shift each component into the 0-1 range
_x = (r * Math.cos(DEG_TO_RAD * lon) + 1) * 0.5;
_y = (Math.sin(DEG_TO_RAD * lat) + 1) * 0.5;
_z = (r * Math.sin(DEG_TO_RAD * lon) + 1) * 0.5;
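Those lines can be wrapped into a function to check the seamlessness: a pixel (px, py) on an equirectangular texture maps to a point on the unit sphere, shifted into the 0..1 range the noise expects.

```javascript
var DEG_TO_RAD = Math.PI / 180;
// Map a texture pixel to its 3D sample point on the unit sphere,
// with every component shifted into the positive 0..1 range.
function sphereSample(px, py, width, height) {
  var lat = py / height * 180 - 90;
  var lon = px / width * 360 - 180;
  var r = Math.cos(DEG_TO_RAD * lat);
  return {
    x: (r * Math.cos(DEG_TO_RAD * lon) + 1) * 0.5,
    y: (Math.sin(DEG_TO_RAD * lat) + 1) * 0.5,
    z: (r * Math.sin(DEG_TO_RAD * lon) + 1) * 0.5
  };
}
// The left and right edges of the texture map to the same 3D point,
// which is why the wrapped noise has no visible seam.
```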
For the sphere and the material/lighting I use Away3D. In the next post I will show you more about that, and the tool for creating the different materials. Oh, and it looks a lot better animated.