Streetlights

streetlights

I played around a bit with the physics library cannon.js together with three.js and ended up with this Christmas card.

To show the power of iteration, here is a screenshot from the first version that I thought was almost finished, since my main goal was to play around with physics, and this is how physics demos usually look.

prestagelights

Each small bulb in the garland is a joint constraint. The end joints have zero mass, so they stick to the facade without falling down. You can click/tap to change those attachment points. To create the reflections, the geometry is cloned and inverted, then added to a second scene which renders to a separate render target with blur applied to it. The rest of the scene is then rendered on top of the first pass without clearing the colours, just the depth buffer. It was a bit tricky to render in the correct order and clear the right things, and I’m sure there is a more optimised way of doing this that doesn’t need two scenes.
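
Here is a minimal sketch of the garland setup, assuming the current cannon.js API (the counts, masses and the bulbMeshes name are made up for the example):

var world = new CANNON.World();
world.gravity.set(0, -9.82, 0);

var segmentLength = 0.5;
var bulbs = [];
for (var i = 0; i < 20; i++) {
	// zero mass pins the end points to the facade
	var isEnd = (i === 0 || i === 19);
	var body = new CANNON.Body({
		mass: isEnd ? 0 : 0.1,
		shape: new CANNON.Particle(),
		position: new CANNON.Vec3(i * segmentLength, 5, 0)
	});
	world.addBody(body);
	// each bulb is linked to the previous one with a distance constraint
	if (i > 0) world.addConstraint(new CANNON.DistanceConstraint(bulbs[i - 1], body, segmentLength));
	bulbs.push(body);
}

// each frame: step the simulation and copy positions to the three.js bulb meshes
function update(dt) {
	world.step(1 / 60, dt);
	// bulbMeshes[i].position.copy(bulbs[i].position);
}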

A transparent layer is also added at ground level in the “above” scene to add some bluish diffuse and catch the light coming from the stars. Each star has a point light as a child. A tiled diffuse map and a normal map are assigned to the ground floor to add detail and an interesting surface. All geometry besides the star and the sign is created procedurally with just primitives (box, plane and sphere). The facade got a specular map so the light could be more reflective in the windows. A bump map adds some depth as well.

And finally, I couldn’t get the Christmas feeling without the mighty bloom filter pass :)

Try messing around with the attachment points of the garland to see the physics, lighting and reflections in action. Notice how the wind affects the lights and the snow; it’s actually an impulse force applied to the scene each frame.
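
Continuing the cannon.js sketch above, the wind could look something like this (the gust shape is just an assumption):

// a gusty wind as a small impulse applied to every non-static bulb each frame
var wind = new CANNON.Vec3(0, 0, 0);
function applyWind(time) {
	wind.set(Math.sin(time * 0.8) * 0.3 + 0.2, 0, Math.cos(time * 0.5) * 0.1);
	for (var i = 0; i < bulbs.length; i++) {
		if (bulbs[i].mass > 0) bulbs[i].applyImpulse(wind, bulbs[i].position);
	}
}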

Psst! Even though it’s Christmas, there is an easter egg somewhere. Can you find it?

Open Card

Interested in the source? Here is the Github repo

Snowroller

snowroller

This was a demo I did for the advent calendar christmasexperiments.com. Create a number of snowballs and put them together into a snowman. The trail is created by drawing into a canvas which is sent to the ground shader as a height map. In the vertex shader the vertices are displaced along the y-axis, and in the fragment shader it is used for bump mapping. The snow also becomes a bit darker when it’s below a threshold height. The shape of the track is defined by the radial gradient used when drawing to the canvas. I like that you can run over the existing tracks and leave a bit of a trail in the current direction. That adds more depth than just a static layer painted on top.
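
A rough sketch of the trail technique, with assumed names and values: draw a radial gradient into a canvas, use the canvas as a texture, and let the ground shader read it as a height map.

var canvas = document.createElement('canvas');
canvas.width = canvas.height = 512;
var ctx = canvas.getContext('2d');
ctx.fillStyle = '#808080'; // mid grey = undisturbed snow height
ctx.fillRect(0, 0, 512, 512);

var heightMap = new THREE.Texture(canvas);

function stampTrack(x, y, radius) {
	// darker = pressed-down snow, fading out towards the edge of the track
	var g = ctx.createRadialGradient(x, y, 0, x, y, radius);
	g.addColorStop(0, 'rgba(0,0,0,0.5)');
	g.addColorStop(1, 'rgba(0,0,0,0)');
	ctx.fillStyle = g;
	ctx.beginPath();
	ctx.arc(x, y, radius, 0, Math.PI * 2);
	ctx.fill();
	heightMap.needsUpdate = true; // re-upload the canvas to the GPU
}

// and in the vertex shader, something along the lines of:
// displaced.y += (texture2D(tHeightMap, vUv).r - 0.5) * uSnowDepth;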

In an earlier version the ball geometry was affected by actually touching the snow, extruding vertices once per rotation if a vertex was below the snow. But when the focus shifted to creating a snowman, it made more sense to have perfectly round snowballs. I’m not so happy with the material on the snowballs. The UV wrapping and the spherical texture create the familiar artefacts at the poles, pinching the texture. But there was not enough time to look into this back then. A procedural approach would have been nice to avoid it. Or to skip UV coordinates and calculate them in a shader instead. Like these guys (@thespite, @claudioguglieri, @Sejnulla) who did a nice site around the same time with a much better snow shader: http://www.snwbx.com/. That shader can also be seen here.

Collision detection is just a simple distance test between the spheres and the bounding box.
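
Something along these lines (a sketch, not the demo’s exact code):

// sphere vs sphere: compare center distance with the sum of the radii
function spheresCollide(a, b) {
	return a.position.distanceTo(b.position) < a.radius + b.radius;
}

// sphere vs bounding box: clamp the center to the box and measure the distance
function sphereHitsBox(s, box) {
	var clamped = s.position.clone().clamp(box.min, box.max);
	return clamped.distanceTo(s.position) < s.radius;
}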

Try it out here.

View the source at github.

VisualEYEzer

 

I almost forgot to post this little demo. It’s a remake of an old experiment with procedural eyes that I made in flash a long time ago. (Actually, the old one looks kind of cooler than this; I think it’s the normal mapping that makes the difference. I haven’t gotten to the point of doing that in a procedural shader yet, it would probably take off-screen render targets and such.)

This time it’s done with three.js and shaders. Basically, I just started off with the sphere geometry. The vertices near the pole of the z-axis got an inverted z-position in relation to the original radius, forming the iris as a concave parabolic disc on the sphere.
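
A sketch of that vertex manipulation with the old Geometry API (the angle threshold and the falloff are assumptions):

var geometry = new THREE.SphereGeometry(1, 64, 64);
var pole = new THREE.Vector3(0, 0, 1);
var irisAngle = 0.35; // radians from the pole that become iris

geometry.vertices.forEach(function (v) {
	var angle = v.angleTo(pole);
	if (angle < irisAngle) {
		// push the cap inwards with a parabolic falloff: deepest at the pole
		var t = 1.0 - angle / irisAngle;
		v.z -= t * t * 0.5;
	}
});
geometry.verticesNeedUpdate = true;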

The shader then does its thing. By reading the uv coordinates and the surface positions, I apply different patterns to different sections. I separate those sections in the shader using a combination of step (or smoothstep) and mix. Step is almost like an if-statement, returning a factor that is 1 inside the range and 0 outside. Smoothstep blurs the edge for a smoother transition. You can then use that factor in mix to choose between two colors, for example the pupil and the iris. I mentioned this method in my last post as well.
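
In GLSL it can look like this (a sketch; vPosition is assumed to be a varying with the surface position, and the uniform names and radii are made up):

// 0.0 inside the pupil radius, 1.0 outside, with a soft edge between
float d = length(vPosition.xy);
float pupilMask = smoothstep(0.08, 0.10, d);
vec3 color = mix(uPupilColor, uIrisColor, pupilMask);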

Check out the fragment shader here

To give the eye a reflective look, I added a second sphere just outside the first one, with an environment map. The reason for not using the same geometry was that I wanted the reflection on the correct surface around the iris. And making the second sphere just slightly larger gives the surface a kind of thickness.

To spice it up a little, I added some real-time audio-analysis effects with a nifty little library called Audiokeys.js. With it, you can specify a range of frequencies to listen to and get the summed level of that band. Perfect for controlling parameters and sending them to the shaders as uniforms.
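
I won’t show the Audiokeys.js API here; instead, here is the same idea sketched with the plain Web Audio API: sum a band of frequency bins and feed the normalized level to a uniform each frame.

var audioCtx = new AudioContext();
var analyser = audioCtx.createAnalyser();
analyser.fftSize = 512;
var bins = new Uint8Array(analyser.frequencyBinCount);

function bandLevel(from, to) {
	analyser.getByteFrequencyData(bins);
	var sum = 0;
	for (var i = from; i < to; i++) sum += bins[i];
	return sum / ((to - from) * 255); // normalized 0..1
}

// in the render loop, assuming a uniform named uLevel:
// material.uniforms.uLevel.value = bandLevel(2, 16);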

Open demo

WebGL Lathe Workshop

This is take two on a little toy that I posted some weeks ago on Twitter. I’ve added some new features, so it’s not completely old news: there are two new materials and dynamic sounds (Chrome only).

This demo is a fun way to demonstrate a simple procedural shader. As you start working with the lathe machine, you carve into an imaginary space defined by numbers and calculations. I like to imagine the surface as a viewport into a calculated world, ready to be explored and tweaked. It’s nothing fancy compared to the fractal ray-marching shaders around, but there is something magical about setting up some simple rules and being able to travel through them visually. Remember, you get access to the object’s surface coordinates in 3D for each rendered pixel. This is a big difference compared to 3D without shaders, where you have to calculate this on a 2D plane and wrap it around the object with UV coordinates instead. I did an experiment in flash 10 ages ago doing just that, one face at a time generated to a BitmapData. Now you get it for free.

Some words about the process of making this demo. Some knowledge of GLSL may come in handy, but this is more of a personal log than a public tutorial. I won’t recommend any particular workflow, since it really depends on your needs from case to case. My demos are also completely unoptimized, with lots of scattered files all over the place. That’s why I’m not on Github :) I’m already off to the next experiment instead.

When testing a new idea for a shader, I often assemble all the bits and pieces into a pair of external glsl files to get a quick overview. There are often tweaks I want to try that aren’t supported, or aren’t structured the same way, in the standard shader chunks available. My theory is also that if I edit the source manually and look at it more often, I get a better understanding of what’s going on. And don’t underestimate the force of changing values randomly (a strong force for me). You can literally turn stone into gold if you are lucky.

The stone material in the demo is just ordinary procedural Perlin noise. The wood and metal shaders, though, are built around these features:

1. Coat

What if I could have a layer that is shown before you start carving? My first idea was to draw into an offstage canvas and then mix this with the procedural shader like a splat map. But then I found a more obvious solution: I just use the radius with the step method (where you set the edge values for a range). Like setting the value to 1 when the normalized radius is more than 0.95, and to 0 when it’s less. With smoothstep the edge gets less sharp. That value then goes into mix, which interpolates between two different colors: the coat and the underlying part. In the wood shader the coat is a texture, and in the metal shader it’s a single color.

DiffuseColor = mix( DiffuseColor1, DiffuseColor2, smoothstep( .91, .98, distance( mPosition, vec3( 0.0, 0.0, mPosition.z ) ) / uMaxRadius ) );

2. Patterns

These are the grains revealed inside the wood and the tracks that the chisel makes in the metal cylinder. Basically, I again interpolate between two different colors depending on the fractional part, or the Math.sin output, of the position. Say we make all positions between x.8 and x along the x-axis a different color: over a width of 10 units, that is 10 stripes that are each 0.2 thick. By scaling the values you get different sizes of the pattern.
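
In GLSL the stripe mask can be as simple as this (a sketch; uScale and the colors are assumed uniforms):

// color every position whose fractional x-part is above 0.8,
// i.e. a 0.2-wide stripe per unit
float stripe = step(0.8, fract(mPosition.x * uScale));
vec3 color = mix(uColorA, uColorB, stripe);

But it’s pretty static without the…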

3. Noise

To make the pattern from the previous step look more realistic, I turn to a very common method: making noise. I’m using 3D simplex noise (Perlin noise 2.0, kind of) with the surface position to get the corresponding noise value. The value returned for each calculation/pixel is a single float, which in turn modifies the mix between the two colors in the stripes and rings. The result is like a 3D smudge tool, pushing the texture in different directions.
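
Continuing the stripe sketch, the noise just nudges the coordinate before the stripe lookup (snoise is assumed to be a simplex implementation compiled into the shader, and the uniforms are made up):

float n = snoise(mPosition * uNoiseScale);
float stripe = step(0.8, fract(mPosition.x * uScale + n * uSmudge));
vec3 color = mix(uColorA, uColorB, stripe);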

4. Lighting

I’m using Phong shading with three directional light sources to make the metal look more reflective without the need for an environment map. You can see how the lights form three horizontal trails on the cylinder, with the brightest one in the middle.

5. Shadows

I just copied the shadow-map part from WebGLShaders.js into my shader. One thing worth mentioning is the properties available on directional lights for setting the shadow frustum (shadowCameraLeft, shadowCameraRight, shadowCameraTop and shadowCameraBottom). You also get helper guides if you enable dirLight.shadowCameraVisible=true, which makes it easier to set the values. When the shadows are calculated and rendered, the light is used as a camera, and with these settings you can limit the “area” of the light’s view that gets squeezed into the shadow map. With spotlights the frustum is already set, but with directional lights you can define one that suits your needs. The shadow map has a fixed size for each light, and when drawing to this buffer the render pipeline will automatically fit all your shadows into it. So the smaller the bounding box of the geometries you send to it, the more resolution and detail you get. Play around with these settings if you have a large scene with limited shadowed areas, and keep an eye on the resolution of the shadows. If the scene is small, your shadow-map size is large and the shadows are still low-res, your settings are probably wrong. Also check the scales in your scene: I’ve had trouble in some cases when exporting from 3d studio max if the scaling is too small or too big, so that the relation to the cameras goes all wrong.
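
For reference, a sketch of such a setup with the three.js API of that era (the numbers are arbitrary):

var dirLight = new THREE.DirectionalLight(0xffffff, 1.0);
dirLight.castShadow = true;
dirLight.shadowMapWidth = 1024;
dirLight.shadowMapHeight = 1024;

// limit the orthographic shadow frustum to the area around the machine,
// so the whole shadow-map resolution is spent where it matters
dirLight.shadowCameraLeft = -10;
dirLight.shadowCameraRight = 10;
dirLight.shadowCameraTop = 10;
dirLight.shadowCameraBottom = -10;
dirLight.shadowCameraNear = 1;
dirLight.shadowCameraFar = 50;

dirLight.shadowCameraVisible = true; // draw the frustum helper while tuning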

That’s it for now, try it out here. And please, ask questions and correct me if I’m wrong or missing something. See you later!

Clove orange

One last post before the holidays. Some days ago I posted a link to a WebGL christmas craft demo made with three.js. For those wondering what this was about: it’s called a “Christmas orange pomander”, a decoration that we make in Sweden and some other European countries around Christmas. You stick cloves, that nice fragrant spice, into an orange or clementine, either at random or, like me, in structured patterns. Tie some silk ribbons around it and hang it somewhere for a nice christmas atmosphere. Try making your own here. The fun part is attaching the cloves with javascript: just open the “into code?” button and play around. As an example, the image above loads a twitter avatar and plots it on the orange. You can also save your orange as an image or share the 3D version with a unique url.

I want to take the opportunity to thank all readers and followers for a great year! I’m amazed that over 62,000 users found this little blog. I have really enjoyed experimenting and doing demos to share with you. Many blogs in the community have disappeared in favor of instant twitter publishing. I will continue to use Twitter as an important channel, but I will try to keep this space updated as well, since here I can go a little deeper and do write-ups about the techniques behind the demos.

Next year has even more in store for me. I will join the talented people over at North Kingdom, Stockholm. Exciting future ahead!

Enjoy your holiday and see you next year!

Plasma ball

Run demo

More noisy balls. Time to get nostalgic with a classic from the 80s: the plasma lamp. I have never had the opportunity to own one myself; maybe that’s why I’m so fascinated by them. There are even small USB-driven ones nowadays. I’ve had this idea for a demo for a long time and finally made a rather simple one.

The electric lightning is made up of tubes that are displaced and animated in the vertex shader, just as in my last experiment. I reuse the procedurally generated noise texture that is applied to the emitting sphere. I displace the local x- and z-positions with an offset multiplied by the value I get from the noise texture at the corresponding uv position (along the y-axis). To make the lightning progress more naturally I also apply an easing function to this factor. Notice that there is less displacement near the center and more at the edges.
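
A sketch of the displacement in the vertex shader (the uniform names and the easing are assumptions; uv.y is taken to run along the tube):

// sample the shared noise texture along the tube and push x/z sideways
float n = texture2D(uNoiseMap, vec2(uTime * 0.1, uv.y)).r - 0.5;

// quadratic ease: no displacement at the center, full at the outer end
float ease = uv.y * uv.y;

vec3 displaced = position + vec3(n, 0.0, n) * uAmplitude * ease;
gl_Position = projectionMatrix * modelViewMatrix * vec4(displaced, 1.0);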

I have this idea of controlling the lightning over a multi-touch websocket connection between a mobile browser and the desktop browser, just to learn node.js and sockets. Let’s wait and see how that evolves. Running this on a touch screen directly would of course be even nicer. If someone can do that, I will add multiple touch points, just let me know.

Set a sphere on fire with three.js

Fire it up!

Next element of nature in line: smoke and fire! I did a version in flash about a year ago, playing with 4D Perlin noise. That bitmapdata was something like 128×128, using haxe to work with fast memory, running at 20fps. Now we can do fullscreen at 60+ fps. Sweet.

First, note that the code snippets below are taken out of context and that they use built-in variables and syntax from three.js. If you have done shaders before, you will recognize variables like mvPosition and viewMatrix, or uniforms and texture lookups.

The noise is generated in a fragment shader, applied to a plane inside a hidden scene, rendered to a texture and linked to the target material. This process is forked from this demo by @alteredq. Before wrapping the texture around the sphere, we have to adjust the output of the noise to suit spherical mapping. If we don’t, the texture will look pinched at the poles, as seen in many examples. This is one of the advantages of generating procedural noise: you can modify the output right from the start. By doing this, each fragment on the sphere gets the same level of detail in the texture, instead of an image stretched across the surface. For spheres, the following technique is one way of finding where (in the imaginary volume of noise) you should fetch the noise value:

float PI = 3.14159265;
float TWOPI = 6.28318531;
float baseRadius = 1.0;

vec3 sphere( float u, float v) {
	u *= PI;
	v *= TWOPI;
	vec3 pSphere;
	pSphere.x = baseRadius * cos(v) * sin(u);
	pSphere.y = baseRadius * sin(v) * sin(u);
	pSphere.z = baseRadius * cos(u);
	return pSphere;
}

 

Notice the 2d position that goes into the function and the vec3 return type. The uv coordinates are translated into a 3d position on the sphere. Now we can just look up the noise value there and plot it at the original uv position. Changing baseRadius affects the tiling of the texture.

Another approach is to do everything in the same shader pair, both noise and final output. We get the interpolated world position of each fragment from the vertex shader, so no need for manipulation, right? But since this example uses displacement in the vertex shader, I need to create the texture in a separate pass and send it as a flat texture to the last shader.

So now we have a sphere with a noise texture applied to it. It looks nice, but the surface is flat.

I mentioned displacement; this is all you have to do:

#ifdef VERTEX_TEXTURES
	vec3 texturePosition = texture2D( tHeightMap, vUv ).xyz;
	float dispFactor = uDispScale * texturePosition.x + uDispBias;
	vec4 dispPos = vec4( normal.xyz * dispFactor, 0.0 ) + mvPosition;
	gl_Position = projectionMatrix * dispPos;
#else
	gl_Position = projectionMatrix * mvPosition;
#endif

 

uDispScale and uDispBias are uniform variables that we can control from javascript at runtime. We just add a value to the original position along the normal direction. The #ifdef condition lets us provide a fallback for browsers or environments that don’t support texture lookups in the vertex shader. It’s not available in all implementations of webGL, but depending on your hardware and drivers it should work in the latest Chrome and Firefox builds. Correct me if I’m wrong here. This feature is crucial for this demo, so I show an alert box for the unlucky users.
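
On the javascript side, the two knobs are plain float uniforms (a sketch with made-up values; the texture uniform syntax differs between three.js versions, so I leave it as a comment):

var uniforms = {
	uDispScale: { type: 'f', value: 12.0 },
	uDispBias:  { type: 'f', value: -4.0 }
	// plus tHeightMap, pointing at the noise render target
};

// passed to the ShaderMaterial, then animated at runtime, e.g.:
// uniforms.uDispScale.value = 12.0 + Math.sin(time) * 2.0;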

After adding displacement I also animate the vertices. You can, for example, modify the vertex positions based on time, the vertex position itself or the uv coordinates to make the effect more dynamic. One of the effects available in the settings panel is “twist”. I found it somewhere I can’t remember now, but here it is if you’re interested:

vec4 DoTwist( vec4 pos, float t )
{
	float st = sin(t);
	float ct = cos(t);
	vec4 new_pos;

	new_pos.x = pos.x*ct - pos.z*st;
	new_pos.z = pos.x*st + pos.z*ct;

	new_pos.y = pos.y;
	new_pos.w = pos.w;

	return( new_pos );
}

//this goes into the main function
float angle_rad = uTwist * 3.14159 / 180.0;
float force = (position.y-uHeight*0.5)/-uHeight * angle_rad;

vec4 twistedPosition = DoTwist(mPosition, force);
vec4 twistedNormal = DoTwist(vec4(vNormal,1.0), force);

//change matrix
vec4 mvPosition = viewMatrix * twistedPosition;

 

The force variable has to be adapted to your needs. Basically, you want to set a starting position and a range that is going to be modified.

Then we reach the final fragment shader. This one is fairly simple. I blend two colors and multiply them with one of the channels in the height map (the noise texture). Then I mix this color with the smoke color. All colors get their values based on the screen coordinates, a quick fix for this solution. Lastly, I apply fog to blend the fire into the background, making it look less like a sphere.

For a final touch, a bloom filter is applied and some particles are thrown in as well. I had some help from @aerotwist and his particle tutorial.

That’s it folks, thanks for reading.

Wave

More time playing with Three.js and WebGL. My primary goal was to get a deeper understanding of shaders by just messing around a little. If you want to know the basics about shaders in three.js, I recommend reading this tutorial by Paul Lewis (@aerotwist). In this post I will try to explain some of the things I’ve learned in the process so far. I know this is like kindergarten for “real” CG programmers, but history often repeats itself in this field. Let us pretend that we are cool for a while ;)

Run the demo

The wave

I started off making a wave model, and a simple plane as the ocean in the distance, in 3D Studio Max. Later on I painted the wave with vertex colors. These colors were used in the shader to mimic the translucence of the wave. I could have used real subsurface scattering, but this simple workaround worked out well. The vertex colors are mixed with regular diffuse and ambient colors. A normal map was then applied with a tiled noise pattern for ripples. The normals from this texture don’t update relative to the vertex animation, but that’s fine. On top of that, a foam texture with white perlin noise was blended with the diffuse color. I tried an environment map for reflections as well, but it was removed in the final version; the water looked more like metal or glass.

I have always liked the “behind the scenes” breakdowns showing the passes of a render with all layers separated and finally blended, so here is one of my own so you can see the different steps:

 

Animating with the vertex shader

My wave is a static mesh. For the wobbly motion across the wave, I animate vertices in the vertex shader: one movement along the y-axis that creates a wavy motion, and one along the x-axis for some smaller ripples. You have probably seen this often, since it’s an easy way to animate something with just a time value passed in every frame.

vertexPosition.y += sin( (vertexPosition.x/waveLength) + theta )*maxDiff;

The time offset (theta), together with the x-position, goes into a sin function that generates a periodic cycle with a value between -1 and 1. Multiply that by a factor and you animate the vertices up and down. To get more crests along the mesh, divide the position by waveLength.

A side note: I tried to use vertex texture lookups (supported in the ANGLE version) for real vertex displacement. With that I could get more unique and dynamic animations. I plotted a trail into a hidden canvas to get jetski backwash, but the resolution of the wave mesh was too low, and I had problems matching the position of the jetski with the UV mapping. A flat surface is simple, but the jetski uses collision detection to stick to the surface, and I have no idea how to get the uv from the collision point. However, I learned to use a separate dynamic canvas as a texture. I used a similar technique in my snowboard experiment in flash, but that is of course without hardware acceleration.

Animating with the fragment shader

The wave is not actually moving; it’s just the uv mapping. The same time offset used in the vertex shader is here responsible for “pushing” the uv map forward. You can also edit the UV mapping in your 3D program to modify the motion. In this case, the top of the breaking part of the wave had to go in the opposite direction, so I just rotated the uv mapping 180 degrees for those faces. My solution works fine as long as you don’t look over the ridge of the wave and see the seam. In theory, I think you could make a seamless transition between the two directions, but it’s beyond my UV-mapping skills. You can also use different offset values for different textures, like a Quake sky with parallaxed layers of clouds.
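
In the fragment shader the scroll is just an offset added to the texture lookup (a sketch; the names are assumed):

vec2 scrolledUv = vUv + vec2(theta * uScrollSpeed, 0.0);
vec4 diffuseColor = texture2D(tDiffuse, scrolledUv);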

Fighting about z with the Depth Buffer

I was aware of the depth buffer, or z-buffer, which stores the depth value for each fragment. Besides lighting calculations, post effects and other stuff, this data is used to determine if a fragment is in front of or behind other fragments. But I had never reflected on how it actually works before. For those of us coming from Flash 3D (pre-Molehill) this is a new technique in the pipeline (even though some experiments, like this one, used it). Now, when the mighty GPU is in charge, we don’t have flickering sorting problems, right? No, artifacts still appear. We have the same problem as with triangles, but now on a fragment level. Basically, it happens when two primitives (triangle, line or point) get the same value in the buffer, with the result that sometimes a fragment shows up and sometimes it doesn’t. The reason is that the resolution of the buffer is too small in that specific range, which leads to z-values snapping to the same depth-map value.

It can be a little hard to imagine the meaning of resolution here, but I will try to explain, with emphasis on trying, since I just learned this. The buffer is non-linear. The amount of numbers available is related to the z-position and the near and far planes. There are more numbers to use, or higher resolution, close to the eye/camera, and fewer and fewer assigned numbers as the distance increases. Stuff in the distance shares a shorter range of available numbers than the things right in front of you. That is quite useful: you probably need more detail right in front of you, where details are larger. The total amount of numbers, call it precision, differs depending on the graphics card, with possible 16-, 24- or 32-bit buffers. Simply put, more space for data.

To avoid these artifacts there are some techniques to be aware of, and here comes the newbie lesson learned this time. By adjusting the far and near planes of the camera object, you can decide where the resolution will be focused. Previously, I used near=0.001 and far=“some large number”. This caused the resolution to be very high between 0.001 and “another small number” (there is a formula for this, but it’s not important right now). Thus, the depth-buffer resolution was too low where the nearest triangles were drawn. So the simple thing to remember is: set the near value as far out as your scene allows. The value of the far plane is not as important, because of the non-linear nature of the scale. As long as the near value is relatively small, that is.
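
In three.js terms (illustrative values only):

// most of the depth precision sits near the near plane, so pushing it out helps
var risky = new THREE.PerspectiveCamera(45, aspect, 0.001, 100000); // near way too close
var safer = new THREE.PerspectiveCamera(45, aspect, 1, 10000);      // near pushed out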

Another problem with occlusion calculations is when using color values with alpha. In those cases we sometimes get multiple “layers” of primitives, but only one value in the depth buffer. That can be handled by blending the color with the one already stored in the output (frame buffer). Another solution is to set depthTest to false on the material, but obviously that is not possible in cases where we need the face to be behind something. We can also draw the scene from front to back, or in different passes with the transparent objects last, but I have not used that technique yet.

Thanks AlteredQualia (@alteredq) for pointing this out!

Collision detection

I first used the original mesh for collision detection. It ran pretty fast in Chrome but painfully slow in Firefox. Iterating over arrays seems to differ greatly in performance between browsers. A quick tip: when using collision detection, use separate hidden meshes with only the faces needed to do the job.

Smoke, Splashes and Sprites

All the smoke and white water are instances of the object type Sprite, a faster and more lightweight kind of object, always facing you like a billboard. I have said it before, but one important aspect here is to use an object pool so that you can reuse every sprite created. Creating new ones all the time will kill performance and trash memory allocation.
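
A minimal pool can be this simple (a sketch; spriteMaterial and scene are assumed to exist, and the Sprite constructor varies between three.js versions):

var pool = [];

function getSprite() {
	// reuse a dead sprite if one is available, otherwise allocate a new one
	var sprite = pool.pop() || new THREE.Sprite(spriteMaterial);
	scene.add(sprite);
	return sprite;
}

function releaseSprite(sprite) {
	scene.remove(sprite);
	pool.push(sprite); // back in the pool for later reuse
}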

I’m not really pleased with the final solution for these particles, but it’s good enough for a quick experiment like this.

Learning Javascript

As a newcomer to the Javascript world I’m still learning the obvious stuff, like console debugging. It is a huge benefit to change variables in real time via the console in Chrome, for instance setting the camera position without the need for a UI or switching programs.

And I still end up writing almost all of my code in one huge file. With all the frameworks and build tools around, I’m not really sure where to start. It’s a lame excuse, I know, but at the same time, when doing experiments I rarely know where I’m going, so writing “correct” OOP and structured code is not the main goal.

Once again, thanks for reading! And please share your thoughts with me. I really appreciate feedback, especially corrections and pointed-out misunderstandings that can help me do things in a better way.

The beanstalk

Some of you have already seen my latest experiment, which I published via twitter some days ago. Nothing new since then; I just want to put it up here as well. This is my second time trying out three.js. It’s a very competent framework with lots of features and examples available. If you have an idea, it’s blazing fast to put together a quick demo.

The inspiration for this experiment is planted in my own garden. The picture to the left shows the beanstalk growing at massive speed. The virtual one uses the object from the last demo as a base, but without the recursive child branches. It’s a custom geometry object, so I can control the initial shape and uvs, and save references to the vertices in different arrays for later manipulation. The illusion of movement is created by points acting as offset values for each ring of vertices in the tube (or cone), controlling the xy-position only. Those values are calculated just like a classic ribbon: the body recursively follows the head. Each frame, the offset of one joint is copied from the one in front of it. The speed is directly related to the frame rate and the length of a segment; otherwise I would have to interpolate the values somehow.
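
The follow logic is tiny; a sketch (the array and names are assumptions):

// each ring's xy-offset is copied from the ring in front of it,
// so the body trails the head like a classic ribbon
function updateOffsets(offsets, headX, headY) {
	for (var i = offsets.length - 1; i > 0; i--) {
		offsets[i].x = offsets[i - 1].x;
		offsets[i].y = offsets[i - 1].y;
	}
	// the head follows the current input (mouse, sound or circular motion)
	offsets[0].x = headX;
	offsets[0].y = headY;
}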

At a given interval a leaf (an object exported from 3d studio max) is spawned at the position of the head and then, each frame, translated along the z-axis by the length of a segment. One important thing here is to reuse those leaves, so when they are behind the camera it’s time for a swim in the object pool until they are needed again.

The control of the plant is made up of three components: mouse movements, the sound spectrum and a circular movement. The style changes every 15 seconds, blending these components in various ways. The sound spectrum, or amplitude, is exported with this little python snippet from @mrdoob.

I made some tests with post-processing effects like bloom as well, but the result was not what I wanted, so I ended up disabling them. The bloom effect is still in the source though, if you want to see how it’s done (forked from the ro.me project BTW…).

I’m having some problems with artifacts between intersecting objects though. I have minimal experience with depth-buffer tests and triangle sorting, so I can’t tell whether it’s a standard problem or caused by me or the engine. Depth-buffer resolution is my guess: too long a range between the far and near planes and too few “steps” in the depth texture. Maybe it differs between machines and hardware. If you have a clue, just let me know, right?

I really want to do real projects with real 3D graphics soon. I envy those of you who can put on the label “a chrome experience”. However, I hope these little demos made by the community help spread the knowledge, and the inspiration that you can do more than games with this technique. Some of you have clients with tech-friendly target groups, with the latest browsers installed or who at least know where the update button is. Let’s help the future and start accelerating some hardware!

Enough talking, show me the demo already!

Some more pictures:

 

A brief introduction to 3D

With WebGL and Molehill around, there is a new playground for us flash developers. At first I felt a bit stressed by the fact that I have to compete with people who have been doing this for years on desktops, consoles and mobiles, and with plugins like Unity. But it’s just a good thing. There are loads of information and knowledge around, waiting for us to consume. We can cut to the chase and use the techniques developed by the people before us. We are also several years behind cutting-edge 3D regarding performance and features, so the 3D community has already produced loads of forum threads, tutorials and papers. It will be interesting to see how the flash community embraces this “new” technique. Games are an obvious field, but what about campaigns and microsites?

Some days ago I had the opportunity to hold a presentation about 3D basics. My goal was to give an overview of topics that could be good to look into as a starter. I have tried to pick them from the perspective of an “online developer”, with Molehill, WebGL and Unity as a foundation. The word cloud above shows some of the words I put into context. It’s basic stuff, in keywords and bullet lists, so think of this as a dictionary of topics to find more information about on your own. There are some cool demos and resources in there as well. If you are into 3D programming already, this is probably not so interesting for you; instead you can help me review the slides and point out possible improvements ;)

Here is the presentation as PowerPoint (16 MB, with fancy transitions and animations) and as pdf (6 MB, not as fancy, but the comments are more visible). If you don’t have PowerPoint installed you can just download the PowerPoint viewer here instead.

Important topics missing? Things I got wrong? I would love to hear some feedback on what to improve.

Off-piste simulator: Part 3

Time for one more step in this snowy experiment. I can’t help feeling like I’ve fallen off the wagon right now, with all the kick-ass 3D demos around for the upcoming flash 3D API Molehill. Of course, it’s not just about the amount of triangles, but also about the creativity and love put into the ones you’ve got. Anyway, I hope you will like this piece while we all wait for the salvation. Just put a smooth modifier in your mental stack.

To mention some of the additions in this version: I have added collada animations, a background view and different camera angles, including the typical fisheye lens seen in many snowboard videos. But the main feature is that you can now control the rider yourself and make nice bezier curves in the snow. You have three different inputs to play with:

Keyboard
Steer with the arrow keys. Press the down arrow to bend your knees and push the snow a little harder. With this input method it’s simple to keep a continuous pace, but it feels a little static.

Mouse
In my opinion a more dynamic way to get smoother turns and more control. Until you find your pace it can look a little funny, because the engine needs a previous turn to calculate the leaning angle. Try to move the mouse back and forth in an arc across the slope to get a balance of speed and pressure, simulating the g-force when surfing through the snow.

Webcam
This is quite fun to play with. I like the idea of moving your own body to control the movement. The face-detection algorithm steals some CPU, so it’s maybe not the best choice. Use the same technique as with the mouse; the y-axis controls the pressure. Calibrate yourself against the webcam preview image to find a good position.

Now, let’s make some turns.

Alternative controls

Multi-touch
I don’t have a multi-touch trackpad, otherwise it would have been cool to control each foot with a finger, like a fingerboard. Or a wacom board, with different levels of pressure.

Gyro
It would be even cooler to control the board in 3D. Or maybe connect a wii controller to it?

Animation

To get a natural and correct-looking turn, the character has to lean the body through the whole turn, almost before the turn has even started. So how do we know which phase of the turn we’re in? We could “record” the position, then interpolate and adjust it afterwards, like a bezier-drawing application. But I don’t want that delay, since it affects the response and the feeling of riding in real time. So I came up with a different approach. I have two markers, xMin and xMax. Each time a turn reaches its maximum or minimum position, the values are updated. That gives me an estimated range for the next turn (if I assume the next turn will be the same length). The current x-position is then compared with the estimated range, giving a normalized value between 0 and 1. Now I can ease that value with a Sine.easeInOut function, and the eased value is used for the leaning angle. If you do a shorter or longer turn than expected it will of course look different, as it will before you find your pace, but it still looks ok.
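
In code the idea boils down to something like this (a javascript sketch with made-up names; the demo itself is in actionscript):

var xMin = -1;
var xMax = 1; // both updated whenever a turn peaks

function leanFactor(x) {
	// normalize the current x against the previous turn's extremes...
	var t = Math.min(Math.max((x - xMin) / (xMax - xMin), 0), 1);
	// ...and ease it, equivalent to Sine.easeInOut
	return 0.5 - 0.5 * Math.cos(t * Math.PI);
}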

That’s all for now. Thanks for reading!

Off-piste simulator: Part 2

I have some improvements for you since last time! New stuff in this version: terrain, a rider and some snow particles. In this demo, try combining different values to see how the rider and the snow are affected. All parameters are connected, so you can get pretty cool results.

Launch demo

Terrain
I mentioned that I could add effects to each layer in the heightmap/texture comp. I have now added perlin noise to the base layer. The offset for each octave is the same as the track-layer scroll, so they stay in sync with each other. By resizing the y-scale of the noise, the terrain gets a more stretched look, like ridges or dunes created by the wind. It’s a simple example of distortion; more parameters and effects will be added to create different slopes. I also read out the height value at the snowboarder’s position. Notice how the rider floats on top of the terrain, minus the snow depth, depending on the current pressure.

The rider
The guy is a fully rigged model imported from 3d studio max. I have just begun to test the bone support in Away3D; that is why he looks like a rookie. For now I just control joint rotations, but later on I will try to add bone animations and poses, especially if I add jumps and tricks. I tried to move the joints up and down, but I could not sort that out with correct IK and without destroying the mesh. I’m sure that method would be pretty CPU-intensive as well, with all the recursive translations going on in IK. I have to dig deeper into that later.

Just for the record, I’m in fact a skier, so maybe the movements are completely wrong, since I have never even tried a snowboard :)

Particles
I also added some snow particles, or animated sprites to be more specific. The animation is dynamic, depending on the speed and the pressure: with more pressure and deeper snow, the particles fly longer.

I guess next time I will try to make the rider controllable.

Off-piste simulator: Tracks

The winter is just around the corner. If you, like me, enjoy alpine winter sports, this little toy will get you in the mood. Imagine that you have climbed all the way up the mountain, before the lifts start feeding it with people. You want to be sure to get a ride on fresh, untouched snow all the way down. You are standing at the top, watching the sun rise behind a nearby peak. Goggles on. Gear on your back. It’s you and the mountain… I love that feeling!

This is the first part of this experiment in making an off-piste simulator/game in Away3D. The first step is to create the slope and generate tracks in the snow. Check out the first test.

I got inspired by @tonylukasavage and his morphing mesh experiments with HeightMapModifier in Away3D.

Some notes about the texture: I start off with a bitmap filled with the color value 0x80000. That fill represents the center of the offset. I then draw the track into another BitmapData, ranging from black to red. To make the scrolling effect I found a method I had never used before: BitmapData.scroll(x,y). The two layers are comped together into a single bitmap. One advantage of using layers is that I can add noise, shadows and other effects to each layer. The heightmap is then converted to a normalmap with NormalMapUtil, and to a texture with paletteMap. Now I have what I need to create a nice phong-shaded Pixelbender material. The grid has very few triangles (25×25 segments); I have to save some cpu for the character and the other stuff. I’m not doing any real-time triangulation either, so in some cases the vertices start to jump around. I have to accept that for now.

I don’t really know the outcome of all the future parts yet, but next up is to add a character and some controls.