Development Pages
Dave Collier

Edge Detection Scripts
My approach is a specific one
Each of these images can be clicked to expand to full screen or 2048 pixels, whichever is smaller. Notice that the images processed with the Canny formula look very pretty, but they aren’t really perceptual edge detection: they have areas that look like shading, which is why they look so convincing. My software finds only edges that are lines.
Colour photo
Photo processed with Canny formula and Sobel filter.
Photo processed with Canny formula and Sobel filter, inverted to black on white. Very impressive, though in parts there are so many lines that it looks like shading. It is unsurprising that it looks impressive: it is almost a monochrome conversion of the photo.
Photo processed with Canny formula and Sobel filter, non-maximum suppression.
Photo processed with Canny formula and Sobel filter, non-maximum suppression, inverted to black on white. Similar problems to the previous sample.
Photo processed with my Edge Detect by Line script: 21 adjacent pixels constituting a line, brightness difference threshold 13, hue difference threshold 8°, including the colour name check, 2 pixels spread on output, image-rendering high quality (i.e. antialiased in the browser via CSS). These variables allow the density of edge lines produced to be adjusted to the usage requirement.
Note: if an image renders fuzzily, that is a browser image-rendering problem; click on the picture to expand it.
All – so far as I can see – of the edge detection formulas have most if not all of the following in common:
Work upon an image in greyscale, without defining which greyscale conversion formula they are working to (see my page Formulas for Calculating Pixel Brightness (i.e. Greyscale)).
Do a level of Gaussian blur on the image before calculating.
Apply a single mathematical formula, usually quite a complex one, to each pixel being looked at.
Tend to make a stronger line, the greater the difference in brightness between two adjoining areas.
They pretty-well all, too, produce impressive-looking results.
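To make the common pipeline concrete, here is a minimal sketch of the Sobel gradient step on a tiny hypothetical greyscale image (my own illustration, not the author’s code; the Gaussian blur step is omitted for brevity):

```python
# Sketch of the shared pipeline described above: take a greyscale image
# and compute a Sobel gradient magnitude per pixel. The 5x5 test image
# is made up for illustration (values 0-255, dark left / bright right).
import math

img = [[0, 0, 0, 255, 255] for _ in range(5)]

def convolve3(img, k):
    """Apply a 3x3 kernel to interior pixels only (borders left at 0)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(k[j][i] * img[y + j - 1][x + i - 1]
                            for j in range(3) for i in range(3))
    return out

# Sobel kernels: horizontal and vertical intensity differences.
gx = convolve3(img, [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
gy = convolve3(img, [[-1, -2, -1], [0, 0, 0], [1, 2, 1]])

# Gradient magnitude: largest where brightness changes most sharply,
# which is the "stronger line for a greater brightness difference" point.
mag = [[math.hypot(gx[y][x], gy[y][x]) for x in range(5)] for y in range(5)]
```

The magnitude peaks along the dark/bright boundary and is zero in the flat regions, which is exactly why these formulas reward brightness contrast above all else.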
But I’m not sure that’s how the eye really works. I think the eye is looking at shapes; hue difference will also come into it. The following two blocks can be distinguished by most people in colour, but will convert to identical shades in greyscale (depending on which formula is used to calculate greyscale). See my page Equal Brightness Blocks.
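The equal-brightness point can be checked numerically. Assuming the common Rec.601 luma formula as the greyscale conversion (one of several possibilities; the particular pair of colours below is hypothetical, chosen to collide after rounding):

```python
# Two colours the eye easily tells apart that land on the same
# greyscale value under one common conversion formula (Rec.601 luma).
def luma_601(r, g, b):
    return round(0.299 * r + 0.587 * g + 0.114 * b)

red   = (255, 0, 0)    # clearly red to the eye
green = (0, 130, 0)    # clearly green to the eye

# Both round to the same greyscale value, so a greyscale-only edge
# detector sees no boundary at all between the two blocks.
print(luma_601(*red), luma_601(*green))
```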
There’s also the question of what one is doing edge detection for. I know what I am doing it for, which is essentially two reasons. One is to emphasise shapes that I have made by simplifying the colours in an image. The second may be a dream: I would like to simulate fairly accurately a hand-assembled tile mosaic, of the type you see on monuments in various parts of the world, or the mosaics by Boris Anrep in the National Gallery in London. Obviously I would need to define the edges that I need before I could begin to do that, and they will not be the edges that are produced by any of the widely-used edge detection formulas; those would not do at all, with too many shady spots.
My software is nowhere near that stage yet; I am still experimenting. The approach I am currently taking is to look at each pixel as being part of a line: if it is, then I consider that it could be part of an edge. My software has a parameter that defines how many adjacent contrasting pixels may be considered to be a line.
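A rough sketch of that line-length idea, as I read it (this is my own illustration, not the author’s actual script; the parameter and function names are made up, and a short minimum line length is used so the example stays small):

```python
# Candidate edge pixels are those whose brightness differs from a
# neighbouring row by more than a threshold; only runs of at least
# MIN_LINE adjacent candidates are kept as a "line".
MIN_LINE = 4       # the caption above mentions 21 for real photos
THRESHOLD = 13     # brightness difference threshold from the caption

def edge_runs(row_above, row, min_line=MIN_LINE, thr=THRESHOLD):
    """Mark contrasting pixels, then keep only runs of min_line or more."""
    candidate = [abs(a - b) > thr for a, b in zip(row_above, row)]
    keep = [False] * len(row)
    i = 0
    while i < len(candidate):
        if candidate[i]:
            j = i
            while j < len(candidate) and candidate[j]:
                j += 1                 # extend the run of candidates
            if j - i >= min_line:
                for k in range(i, j):  # long enough: keep the whole run
                    keep[k] = True
            i = j
        else:
            i += 1
    return keep

# A lone contrasting pixel (noise) is dropped; a long run survives.
above = [200] * 10
below = [200, 200, 50, 200, 200, 50, 50, 50, 50, 50]
print(edge_runs(above, below))
```

Treating isolated contrasting pixels as noise is what lets this approach skip the blur stage: the line-length requirement does the de-noising instead.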
A pixel could be part of an edge if it is sufficiently different from the pixels surrounding it: different in brightness, or different in hue, or different in perceptual hue, by which I mean colour name. Red at 0° on the colour circle is a different perceptual hue from yellow at 50°, but two colours 50° apart at 90° and 140° are both perceptually green.
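The two hue tests described above could be sketched like this (the helper names and the colour-name bands are my own rough choices for illustration, not the author’s):

```python
# Hue difference must wrap around the 360-degree colour circle, and the
# colour-name check adds the perceptual layer on top of the raw angle.
def hue_diff(h1, h2):
    """Shortest angular distance on the colour circle, in degrees."""
    d = abs(h1 - h2) % 360
    return min(d, 360 - d)

def colour_name(h):
    """Very coarse perceptual hue bands (illustrative only)."""
    if h < 30 or h >= 330: return "red"
    if h < 90:  return "yellow"
    if h < 150: return "green"
    if h < 270: return "blue"
    return "magenta"

# Red at 0° vs yellow at 50°: 50° apart AND different colour names.
print(hue_diff(0, 50), colour_name(0), colour_name(50))
# 90° vs 140°: also 50° apart, but both perceptually green.
print(hue_diff(90, 140), colour_name(90), colour_name(140))
```

The point of the second check is visible in the output: the raw hue difference is identical in both cases, yet only the first pair crosses a colour-name boundary.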
I have done experiments with what some of the edge detection formulas call gradients – the level of gradated difference between the pixel being looked at and its adjacent pixels – and I find it doesn’t make any noticeable difference: the software works just as well looking only at the pixels immediately adjacent to the one being evaluated.
And I don’t think that doing a Gaussian blur is very useful. I can see why many of the academic formulas do it: it is to try to avoid too many pixels being selected. But by treating the selected pixels as part of a significant line I don’t need to bother with that; I can use the original photo in the raw.
We aren’t at perfection (yet!), especially with complex photos such as the one I am using in the samples to the left, but I have to say that I find my flexible variables much more useful for what I am looking for than all the currently-defined academic recommendations. And they don’t involve a lot of complex mathematical formulas (which may possibly un-endear my approach to some!).
Different from the academic received wisdom, then. I wouldn’t say it is better – certainly not yet, it has some way to go – but for me it holds more promise; it feels like I can develop it in the direction of what I want to achieve. The academic mathematical formulas, by contrast, feel rather like an end in themselves.
The software for edge detection by line can be found on my page Edge Detect by Line.