19 Jan 11

### Non-Marching Squares

I was looking into Marching Cubes algorithms for extracting meshes from volume data yesterday, and thought, as a little test, I’d try making a very simple 2D edge-detection-type setup using some of the same principles. There’s lots of stuff on the web about Marching Cubes, and its 2D version, Marching Squares, so I’m not going to go into a detailed explanation here, but to summarise my setup:

• divide the image into tiles
• for each tile, test whether the colour/luminosity value of the pixel at each corner is above or below a given threshold value.
• use the results of these tests to look up a cell in a table.
In my case, this table is an image strip something like this (I’ve added the numbers for debugging purposes):

In the original Marching Cubes code, the logic for choosing the correct cell from the table is quite elegant. You basically create a binary number, where each digit is either a 0 or a 1, according to whether the corresponding grid-point value is above or below the threshold. Decoding that binary number into an integer gives you the index into the table. Unfortunately, because GLSL doesn’t have support for bitwise operations, I’ve had to kludge that a bit, but it works fine.
• draw the selected table cell into the corresponding tile of the output image
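The corner-test-to-index step above can be sketched on the CPU like this (Python used purely as illustration; the function and parameter names are mine, not from the shader):

```python
# Sketch of the marching-squares table lookup described above.
# tl, tr, bl, br are the four corner luminance values of a tile
# (top-left, top-right, bottom-left, bottom-right).

def cell_index(tl, tr, bl, br, threshold):
    """Build a 4-bit index: each bit is 1 if that corner is at/above the threshold."""
    index = 0
    if tl >= threshold: index |= 1   # bit 0: top-left
    if tr >= threshold: index |= 2   # bit 1: top-right
    if bl >= threshold: index |= 4   # bit 2: bottom-left
    if br >= threshold: index |= 8   # bit 3: bottom-right
    return index  # 0..15, picks a cell from the 16-tile strip

# GLSL (without bitwise ops) builds the same number by summing
# weighted step() results, as in the fragment shader below:
def cell_index_glsl_style(tl, tr, bl, br, threshold):
    step = lambda edge, x: 1.0 if x >= edge else 0.0  # GLSL step()
    return (step(threshold, tl)
            + step(threshold, tr) * 2.0
            + step(threshold, bl) * 4.0
            + step(threshold, br) * 8.0)
```

Both versions produce the same 0–15 index; the second just mirrors the step()-and-sum kludge used in the shader.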

Here are some results.

It’s a bit glitchy around the edges of the tiles, but I’m quite pleased with it. I’ve added some controls to recolour the tiles in various ways, and to deliberately scan the corners in the wrong order, for a more abstract look.

I call this ‘non-marching squares’ because I’m not really doing the marching bit. Variations on this technique are often used iteratively to detect and then follow edges in an image. Tools like the Magic Wand in Photoshop use similar techniques to create selections based on colour values, for example.

Vertex Shader code for the simple black-and-white version:

```glsl
uniform float CellCount;
varying vec2 cellCoords;

void main()
{
//Transform vertex by modelview and projection matrices
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;

//Forward current color and texture coordinates after applying texture matrix
gl_FrontColor = gl_Color;
gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;
cellCoords = gl_TexCoord[0].xy * CellCount;
}
```

Fragment Shader code:

```glsl
uniform sampler2D Texture, EdgeTable;
uniform vec2 CellSize;
uniform float Threshold;
uniform int Scramble;
uniform vec2 ClampAmt;  // Values for clamping edges
varying vec2 cellCoords;
const float eTableCellWidth = 1.0 / 16.0;

void main()
{
// Cell corners (putting in vec4 allows swizzling for different tile patterns)
vec4 corners;
corners[0] = max(floor(cellCoords.x) * CellSize.x, ClampAmt[0]);       // Left
corners[1] = min(ceil(cellCoords.x) * CellSize.x, 1.0 - ClampAmt[0]);  // Right
corners[2] = max(floor(cellCoords.y) * CellSize.y, ClampAmt[0]);       // Top
corners[3] = min(ceil(cellCoords.y) * CellSize.y, 1.0 - ClampAmt[0]);  // Bottom

// Swizzle corners for different effects
corners = (Scramble == 1) ? corners.xywz : (Scramble == 2) ? corners.yxzw : (Scramble == 3) ? corners.zwyx : corners;

// Sample corners
float c0	= texture2D(Texture, vec2(corners[0], corners[2])).r;
float c1 	= texture2D(Texture, vec2(corners[1], corners[2])).r;
float c2 	= texture2D(Texture, vec2(corners[0], corners[3])).r;
float c3 	= texture2D(Texture, vec2(corners[1], corners[3])).r;

// Clamp coords for edge-table
vec2 cellCoordsClamp = fract(cellCoords);
cellCoordsClamp.x = clamp(cellCoordsClamp.x, ClampAmt[1], 1.0 - ClampAmt[1]);
cellCoordsClamp.y = clamp(cellCoordsClamp.y, ClampAmt[1], 1.0 - ClampAmt[1]);

// Calculate index for edge-table
// Produces 0.0 to 15.0, used to index into the edge-table strip
float index = 0.0;
index += step(Threshold, c0);
index += step(Threshold, c1) * 2.0;
index += step(Threshold, c2) * 4.0;
index += step(Threshold, c3) * 8.0;

// Sample edge-table and return
vec2 ttcoords	= vec2(cellCoordsClamp.x * eTableCellWidth + (eTableCellWidth * index), cellCoordsClamp.y);
gl_FragColor = texture2D(EdgeTable, ttcoords);
}
```

#### 11 Responses to “Non-Marching Squares”

1. January 19, 2011 at 1:01 pm

Looks interesting, crystallising, montaging all come to mind.

2. toneburst
January 19, 2011 at 1:31 pm

Thanks!

I’ve not really looked into it, but I have a suspicion some of the algorithms people have been using to do blob-detection on the Kinect depthmap might use a similar kind of setup, albeit with actually marching squares.

My version is really just an effect, though.

a|x

3. January 19, 2011 at 4:38 pm

Nice work.

You may wish to check out the “blob tree” theory on linking up blobs into a hierarchy.

Yeah, this is a lot like combining a Hough with a SUSAN algo for finding corners.

I’ve looked at the same thing for creating side C of the implied triangle of sides A/B that’s created when you have a depth-map slope from one pixel value to the next in the z axis.

4. January 19, 2011 at 10:28 pm

Nice!

I’ve put together something similar using a CoreImage filter patch in QC. It essentially pixelizes the image, samples the color of each square, then draws one of five colorized tiles based on the luminance. This was my first CoreImage filter, so there may be significant room for optimization.

```
kernel vec4 shapelFilter(sampler image, sampler shape, vec2 shapeSize, float offset)
{
	vec4 pixelSample;
	float lumSample;
	float lumAverage;
	const vec2 sourcePos = destCoord();
	vec2 shapeOffset = mod(sourcePos, shapeSize);
	const vec2 sourceSampleLoc1 = sourcePos - (shapeOffset - vec2(2));
	const vec2 sourceSampleLoc2 = sourcePos - (shapeOffset - vec2(4));
	pixelSample = sample(image, samplerTransform(image, sourceSampleLoc1));
	pixelSample = pixelSample + sample(image, samplerTransform(image, sourceSampleLoc2));
	const vec4 averagePixel = pixelSample / vec4(2);
	lumSample = averagePixel.r;
	lumSample = lumSample + averagePixel.g;
	lumSample = lumSample + averagePixel.b;
	lumAverage = lumSample / 3.0;
	const vec2 shapeSpacing = shapeOffset - shapeSize;
	shapeOffset.x = shapeOffset.x + (floor(lumAverage / .17) * shapeSize.x);
	const vec4 shapePixel = (shapeSpacing.x > shapeSize.x ? vec4(0) : sample(shape, shapeOffset));
	return averagePixel * shapePixel;
}
```

• toneburst
January 25, 2011 at 10:49 am

That looks cool. I’d probably write some of it slightly differently, but it looks good in principle. Have you looked at the Apple ASCII Art filter in the system compositions repository? That does a very similar thing, in fact. I prefer to use GLSL to do this kind of thing, because I find it easier to deal with coordinates in the 0.0 to 1.0 range. Also, GLSL shaders are more transferable.

Keep up the good work, anyway. What’s your next CoreImage/QC project?

a|x

5. February 2, 2011 at 8:13 pm

I’ve just started digging into OpenGL and GLSL on a recent iOS project. However, from what I understand, Core Image filters share many similarities with fragment shaders. I’m hoping this will make the transition easier.

What I find attractive about Core Image is its ability to merge filter chains into one or more filter recipes optimized for available hardware using just-in-time compilation. This includes utilizing available CPUs and GPUs. When GPUs are targeted, merging the filter chain can result in significantly better performance, as the texture data can remain in VRAM longer. However, I do not think chained CI patches in QC can take advantage of this feature.

Core Image filters are not yet supported in iOS, but I’m guessing it’s only a matter of time. CALayer in iOS has an unused CI filter property, which leaves the door open for Apple to add support for CI in a future version. With OpenCL support in the SGX543 GPU from Imagination, CI support could appear this year for second-gen. iPads and fifth-gen. iPhones.

Most of my QC work is focused on creating realtime visuals for events. My current VJ rig uses a custom QC image-processing network which is controlled via MIDI and OSC. I’ve started on a Cocoa wrapper around the top-level patch and plan to incrementally replace QC functionality with Objective-C and Core Image.

I’ve found the current gen. iPad can decode two 720p video streams and play them back concurrently at 1024×768 with varying levels of opacity using the new 4.2 AVPlayerLayer in AV Foundation. The results are very impressive, as there are no obvious dropped frames. This is likely due to SSD storage and hardware H.264 decompression.

Based on this test, I’m looking into ways to integrate the iPad beyond use as a touch-based OSC controller.

6. 639me
February 7, 2011 at 5:37 pm

do i have to apply one copy of the glsl shader to every tile? i’m asking because i like the effect, and it looks somewhat like some glsl texture coordinate errors you had going in march 08…!

7. toneburst
February 7, 2011 at 6:23 pm

Hiya,

the effect just needs to be applied once. The glitches between tiles are because of an error in my lookup-table, in fact. I realised shortly after posting the screenshots that I’d left a 1px black line at the bottom of one of the cells in the strip, so every time that particular tile was drawn, that annoying hairline glitch appeared. Having said that, there are issues with filtering if mipmapping is turned on for the LUT: you get ‘bleeding’ of edges between LUT tiles, which is annoying, as mipmapping smoothes the tiles out nicely, particularly at small cell-sizes.

a|x

8. 639me
February 7, 2011 at 7:47 pm

i’d be super happy for an example comp!

9. toneburst
February 7, 2011 at 8:59 pm

‘tb mosaic 0.1 qcFX.qtz’ in the Box.net widget ->>

a|x

10. 639me
February 8, 2011 at 12:50 am