Archive for April, 2008

22 Apr 08

More Isosurface Raytracer Screenshots

I’ve just twigged what was wrong with the perspective in the previous screenshots: I was setting the range for the Z-axis of the eye coordinates so that it went negative. Negative Z-values put the eye right through the surface, so you end up looking at it from behind, but with reversed perspective. Very odd.

Here are some screenshots of the corrected version.

Incidentally, I should mention that the Isosurface being rendered is Paul Bourke’s ‘The Blob’, though according to tonfilm’s comments in the HLSL code, you can use other Isosurface recipes too. Time to look for some, I think! This looks like a good candidate.

It’s also worth noting that this shader produces quite a lot of aliasing. I may be wrong, but I think it’s probably quite hard to get around that with this type of rendering method. Raytracing works by casting rays at the object, and ultimately, these rays either hit something, in which case a shading calculation is carried out, or they don’t, and the background colour is returned. They can’t ‘half-hit’ the object, so you get aliasing.

These screenshots look nice and smooth because I captured them full-screen (1440 x 900), then reduced them down in Photoshop. Supersampling, essentially. I can’t imagine supersampling working at anything approaching a decent framerate with this shader in QC though, unless the output size was very small, or you had a REALLY fast graphics card. I’m sure it will be possible in the future though.
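
Just to illustrate the idea (this is only a sketch, not the actual shader: shadePixel() is a made-up stand-in for whatever per-pixel ray setup, marching and shading the real code does), in-shader 2x2 supersampling would look something like this:

/*
Very rough sketch of in-shader 2x2 supersampling - not the actual
shader. shadePixel() is a stand-in for the raytracer's per-pixel
work (ray setup, marching and shading). Four rays per pixel means
roughly four times the cost.
*/

vec4 shadePixel(vec2 fragCoord)
{
	// ...cast and shade a ray for this (sub)pixel...
	return vec4(0.0);
}

void main()
{
	vec2 p = gl_FragCoord.xy;

	// Four sub-samples, offset by a quarter-pixel in each direction
	vec4 colour = shadePixel(p + vec2(-0.25, -0.25))
	            + shadePixel(p + vec2( 0.25, -0.25))
	            + shadePixel(p + vec2(-0.25,  0.25))
	            + shadePixel(p + vec2( 0.25,  0.25));

	// Average of the four sub-samples
	gl_FragColor = colour * 0.25;
}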



The next challenge will be to make the camera orbit around the raytraced object in a convincing way. Currently, you can only move the virtual camera on the 3 axes, but you can’t change the ‘point of interest’, so what you essentially end up doing is moving the rendered object past the camera. I have a screenshot showing how this is done in VVVV. The difference is, in VVVV, you can get access to the list of vertices outside the shader itself, so it’s possible to apply matrix transformations to the mesh and camera position values outside the shader. This kind of thing isn’t possible in QC, so it has to be implemented in the Vertex Shader. I just have to try and get my head around how to do this though…
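
I imagine it would look something like this in the vertex shader (a very rough sketch of the general idea, not tonfilm’s actual approach: the uniform names orbitAngle, elevation, orbitRadius and target are all made up):

/*
Rough sketch of an orbiting camera - NOT tonfilm's code.
The eye position and a look-at basis are built in the vertex shader
and passed to the fragment shader as varyings. All the uniform names
(orbitAngle, elevation, orbitRadius, target) are made up.
*/

uniform float orbitAngle;	// azimuth around the object, in radians
uniform float elevation;	// elevation above the 'equator', in radians
uniform float orbitRadius;	// distance from the point of interest
uniform vec3 target;		// the point of interest

varying vec3 eyePos;
varying mat3 viewBasis;

void main()
{
	// Eye position on a sphere around the target
	eyePos = target + orbitRadius * vec3(cos(elevation) * sin(orbitAngle),
	                                     sin(elevation),
	                                     cos(elevation) * cos(orbitAngle));

	// Simple look-at basis (right / up / forward)
	vec3 zAxis = normalize(target - eyePos);
	vec3 xAxis = normalize(cross(zAxis, vec3(0.0, 1.0, 0.0)));
	vec3 yAxis = cross(xAxis, zAxis);
	viewBasis = mat3(xAxis, yAxis, zAxis);

	// The fragment shader would then rotate each pixel's ray direction
	// by viewBasis, and march from eyePos, instead of sliding the
	// object past a fixed camera
	gl_Position = ftransform();
	gl_TexCoord[0] = gl_MultiTexCoord0;
}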

21 Apr 08

Isosurface Raytracer GLSL

Another HLSL > GLSL project. This is one I’ve been working on for a while, translating it function-by-function, but it’s only this morning that I finally tried the code out.

It’s a translation of the ‘IsoRaytrace.fx’ HLSL shader for vvvv by tonfilm, itself a reworking of Keenan Crane’s original ‘Ray Tracing Quaternion Julia Sets on the GPU’ Cg code. So, this code has been translated twice, and still works!!

Virtually the whole thing is done in the Fragment Shader. It’s not really realtime: framerates are pretty low, but it does look cool.




Thanks to desaxismundi for helping me out with the control ranges.

18 Apr 08

Spherical Height Field

It’s dawning on me that it’s not been particularly clear exactly what I’ve been on about with this Height Field stuff, so I thought I’d post some examples of the end result, just to make it a bit clearer. I’ve also made a little demo QTZ, which can be downloaded here.

So, this is the result of feeding an image like those in the previous post into the GL Height Field plugin:

This whole thing will be a lot more impressive once I can map images onto the surface of the mesh.

It would also be great to be able to stick the whole thing inside a GLSL Shader patch, and apply a sprinkling of GLSL magic. I’m not sure if that’s possible though. The current Height Field plugin won’t render at all when placed inside a GLSL Patch, for some reason. I’ll have to try and find out why, when I have a little more time.

17 Apr 08

Spherical Height Maps

These Heightmaps create distorted spherical forms when fed into the input of the GL Height Field plugin in QC.



The three images use different methods of combining the live input video with the height field. All three methods seem to produce similar results in terms of the 3D mesh forms created.

11 Apr 08

Spherical Heightmap Modulation CIFilter Code

This code goes in a CIFilter that takes as its inputs the output of the filter in the previous post (or a static image of it, which kind of works), and modulates it with the luminosity of a second image. The offsets and rescaling ensure that the centre of the radial distortion effect remains in the correct place.

/*
Modulates HeightMap image with luminosity of ModImg input.
For creating radially-distorted spherical 3D meshes from
VBO-based heightmap.

NOTE:
Input images must have same dimensions!

toneburst 2008
https://machinesdontcare.wordpress.com
*/

// Constants (thanks to vade for pointing out I can declare constants using the 'const' keyword).
const vec4 lumCoeff = vec4(0.299,0.587,0.114,0.0);

///////////////////////////
////	MAIN LOOP    ////
///////////////////////////

kernel vec4 heightMapModulate(sampler HeightMap, sampler ModImg, float ModAmt)
{
	// Heightmap pixel
	vec4 mapPix = sample(HeightMap, samplerCoord(HeightMap));

	// Shift mapPix to -1.0 > 1.0 range
	// Required or distortion center will be incorrect
	mapPix = 2.0 * mapPix - 1.0;

	// Modulating image pixel
	vec4 modPix = sample(ModImg, samplerCoord(ModImg));

	// Luminosity of mod image pixel
	float lum = dot(modPix,lumCoeff);

	// Displacement amount
	lum = mix(1.0,lum,ModAmt);

	// Remap range to -1.0 > 1.0
	lum = 2.0 * lum - 1.0; 

	// Modulate heightmap image with mod image
	vec4 outPix = mapPix * lum;
	outPix.a = 1.0;

	// Remap outPix to 0.0 > 1.0 range
	outPix = 0.5 * outPix + 0.5;

	// Output modulated displacement map
	return outPix;
}

As usual, I’ve probably sacrificed efficiency for readability, and I’m sure the same thing could be done with far fewer instructions.

11 Apr 08

Spherical Heightmap for GL Height Field Plugin

I’ve been tinkering with the GL Height Field plugin from the Developer/Examples/Quartz Composer/Plugins folder. More specifically, I’ve been attempting to create an image to bend the height field around into a sphere shape. The idea is to then mix this image with another image, to produce radially-distorted meshes. I’ve done this before with GLSL-vertex-shader-based vertex displacement mapping, but I was intrigued to know if this method would be faster.

Here’s the CIFilter code to produce the heightmap:

/*
Generates spherical and planar displacement maps for VBO-based 3D heightfield.

toneburst 2008
https://machinesdontcare.wordpress.com
*/

////////////////////////////
//// 	 CONSTANTS	////
////////////////////////////

const float PI = 3.14159265359;
const float TWOPI = 6.28318530718;

////////////////////////////
//// 	 MAIN LOOP	////
////////////////////////////

kernel vec4 heightMapSphere(sampler Image)
{
	// Normalised pixel coordinates
	vec2 xyNorm = samplerCoord(Image) / samplerSize(Image);

	////////////////////////////
	////   Spherical Map	////
	////////////////////////////

	// Parametric UV coordinates
	float u = xyNorm.x * PI;
	float v = xyNorm.y * TWOPI;
	// Spherical map values
	vec3 spherical;
	spherical.r = cos(v) * sin(u);
	spherical.g = sin(v) * sin(u);
	spherical.b = cos(u);
	
	// Scale and offset color channels to 0.0 > 1.0
	spherical = spherical * 0.5 + 0.5;
	
	// Output displacement map
	// RGB channels mapped to height field XYZ axes
	return vec4(spherical,1.0);
}

The CIFilter should be fed with an image of 256x256px. The content of the image is ignored, and it will produce an output looking like this (click for full-size version):

Pipe this image into the GL Height Field plugin, and you’ll get a spherical mesh. That’s not very exciting on its own, but if you then take this image, and combine it with another image (say a live video input) before sending it to the Height Field patch, you’ll get all sorts of nice mesh distortions.

Incidentally, this filter uses exactly the same formula as the one used to produce a spherical mesh from a flat GLSL Grid using GLSL Shader patch vertex shader code. It’s a basic parametric surface equation. The only difference is that we’re producing variations in the levels across the RGB channels, rather than variations in the positions of vertices in 3D space. Because we’re dealing with color channels, which are clamped to the range 0.0 > 1.0, we have to scale and offset the results of our parametric equation so they fit into that range (results are initially in the range -1.0 > 1.0). This is what confused me yesterday. Big thanks once again to -NiCo- from the OpenGL GLSL forum for pointing out where I was going wrong.
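
For comparison, here’s roughly what the same equation looks like as vertex shader code (again just a sketch, and it assumes the flat grid’s coordinates arrive in gl_Vertex.xy already normalised to the 0.0 > 1.0 range):

/*
The same parametric sphere as a GLSL vertex shader - just a sketch.
Assumes the flat grid's coordinates arrive in gl_Vertex.xy,
already normalised to the 0.0 > 1.0 range.
*/

const float PI = 3.14159265359;
const float TWOPI = 6.28318530718;

void main()
{
	float u = gl_Vertex.x * PI;
	float v = gl_Vertex.y * TWOPI;

	// Identical formula to the CIFilter above, but the result is used
	// directly as a vertex position, so no scale/offset into the
	// 0.0 > 1.0 colour range is needed
	vec3 spherical = vec3(cos(v) * sin(u),
	                      sin(v) * sin(u),
	                      cos(u));

	gl_Position = gl_ModelViewProjectionMatrix * vec4(spherical, 1.0);
	gl_TexCoord[0] = gl_MultiTexCoord0;
}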

05 Apr 08

Spherical Harmonics, NVIDIA Cards

Well, I’ve finally got around to trying the Spherical Harmonics qcClip on my new NVIDIA GeForce 8800GT-equipped Mac Pro.

Doesn’t work. Or rather, it renders with large holes in the mesh.

I’m working on a fix at the moment. It looks like a problem with the pow() function, specifically when it receives a base value of zero or less. On NVIDIA cards, the function returns ‘undefined’ in that case, which explains why many of the vertices are missing, whereas on ATI cards, a value is returned. It’s actually the NVIDIA card that is sticking more strictly to the GLSL language spec here. The trick is to find out what result the ATI card is producing, and reverse-engineer how it got there, to come up with some code that produces the same result on both systems. Sadly, that’s WAY beyond my meagre mathematical abilities. Fortunately, HexCat, from the OpenGL GLSL forum, responded to my cry for help, and is very kindly working on an alternative method of calculating powers, which will hopefully emulate the way ATI does it.

He’s actually already come up with a very simple solution:

pow(abs(x),y);
instead of
pow(x,y);

The good news is, this renders fine on my NVIDIA-equipped Mac Pro. On the downside, it gives VERY different results from the original code running on my MacBook Pro. As soon as I’ve got a working solution that gives the same results on both GPU architectures, I’ll post an updated tb_sphericalHarmonics.
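
For what it’s worth, one guess at a sign-preserving version (just a sketch; the name safePow is mine, and there’s no guarantee it actually matches what the ATI drivers return) would be:

/*
One guess at a 'safer' pow() - NOT the final fix, just a sketch.
It keeps the sign of the base rather than discarding it with abs(),
and returns 0.0 for a zero base. Whether this matches what the ATI
drivers actually return is an open question.
*/
float safePow(float x, float y)
{
	if (x == 0.0)
		return 0.0;
	return sign(x) * pow(abs(x), y);
}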

Until then, it’s ATI-only, sadly…



