There are many formulas for detecting edges; the Wikipedia page for
Edge Detection gives links.
So far as I can see, all of the edge detection formulas have some things in common – most if not all of them:
• Do a level of Gaussian blur on the image before calculating.
• Apply a single mathematical formula, usually quite a complex one, to each pixel being looked at.
• Produce a stronger line the greater the difference in brightness between two adjoining areas.
They pretty well all produce impressive-looking results, too.
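The three common steps in the list above can be sketched in a few lines. This is a minimal, illustrative example using the well-known Sobel operator (one of the formulas the Wikipedia page links to), not any one method in particular; the tiny image and 3×3 blur kernel are placeholder values.

```python
import math

def convolve3x3(image, kernel):
    """Apply a 3x3 kernel to every interior pixel of a 2D list of brightness values."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    acc += kernel[dy + 1][dx + 1] * image[y + dy][x + dx]
            out[y][x] = acc
    return out

GAUSS = [[1/16, 2/16, 1/16], [2/16, 4/16, 2/16], [1/16, 2/16, 1/16]]
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def edge_strength(image):
    blurred = convolve3x3(image, GAUSS)      # step 1: Gaussian blur first
    gx = convolve3x3(blurred, SOBEL_X)       # step 2: one formula per pixel
    gy = convolve3x3(blurred, SOBEL_Y)
    # step 3: bigger brightness difference -> stronger line
    return [[math.hypot(a, b) for a, b in zip(rx, ry)]
            for rx, ry in zip(gx, gy)]

# A vertical brightness step: dark on the left, bright on the right.
img = [[0, 0, 0, 255, 255]] * 5
strength = edge_strength(img)
print(strength[2])  # the strongest response sits on the brightness boundary
```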
But I’m not sure that’s how the eye really works. I think the eye is looking at shapes; hue difference will also come into it. The following two blocks can be distinguished by most people in colour but will convert to identical shades in greyscale (depending on which formula is used to calculate greyscale). See my page
Equal Brightness Blocks.
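To see how two clearly different colours can collapse to the same grey, here is a small sketch using the common Rec.601 weighted-luma formula. The formula is standard; the particular red/green pair is my own illustrative choice, not taken from the Equal Brightness Blocks page.

```python
def to_grey(r, g, b):
    """Rec.601 weighted luma, one common greyscale conversion formula."""
    return round(0.299 * r + 0.587 * g + 0.114 * b)

red = (255, 0, 0)    # vivid red
green = (0, 130, 0)  # mid green, chosen so its luma matches the red's

# Easily distinguished in colour, identical once converted to greyscale.
print(to_grey(*red), to_grey(*green))
```

A different conversion formula (equal-weight averaging, say) would separate this particular pair, which is why the result depends on which formula is used.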
There’s also the question of what one is doing edge detection for. I know what I am doing it for, which is essentially two reasons: one is to emphasise shapes that I have made by simplifying the colours in an image, and the second may be a dream – I would like to simulate fairly accurately a hand-assembled tile mosaic of the type you see on monuments in various parts of the world, or the mosaics by Boris Anrep in the National Gallery in London. Obviously I would need to define the edges I need before I could begin to do that, and they will not be the edges produced by any of the widely-used edge detection formulas; those would not do at all – too many shady spots.
My software is nowhere near that stage yet; I am still experimenting. The approach I am currently taking is to look at each pixel as potentially being part of a line; if it is, then I consider that it could be part of an edge. My software has a parameter that defines how many adjacent contrasting pixels might be considered to be a line.
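The post does not show the implementation, but the idea – a run of adjacent contrasting pixels long enough to count as a line – might look something like this one-dimensional sketch. The names `min_line_length` and `threshold` are hypothetical, not the author's actual parameters.

```python
def contrasting(row, threshold):
    """Flag pixels whose brightness differs enough from their left neighbour."""
    return [abs(row[i] - row[i - 1]) >= threshold for i in range(1, len(row))]

def line_runs(flags, min_line_length):
    """Yield (start, length) for each run of contrasting pixels long enough to count."""
    start = None
    for i, flag in enumerate(flags + [False]):  # sentinel closes a trailing run
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_line_length:
                yield (start, i - start)
            start = None

# One row of brightness values: a sustained ramp, then an isolated jump.
row = [10, 12, 80, 150, 220, 222, 221, 90]
flags = contrasting(row, threshold=40)
# Only the three-pixel run qualifies as a line; the lone jump at the end does not.
print(list(line_runs(flags, min_line_length=3)))
```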
A pixel could be part of an edge if it is sufficiently different from the pixels surrounding it: different in brightness, or different in hue, or different in perceptual hue, by which I mean colour name. Red at 0° on the colour circle is a different perceptual hue from yellow at 50°, but two colours 50° apart at 90° and 140° are both perceptually green.
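The red/yellow versus green/green point can be made concrete by comparing colour names rather than hue angles. The bucket boundaries below are my own rough guesses for illustration, not the author's values.

```python
HUE_NAMES = [            # (upper bound in degrees, name) - illustrative buckets only
    (15, "red"), (45, "orange"), (70, "yellow"),
    (170, "green"), (260, "blue"), (330, "magenta"), (360, "red"),
]

def hue_name(hue):
    """Map a hue angle on the colour circle to a rough colour name."""
    hue %= 360
    for upper, name in HUE_NAMES:
        if hue < upper:
            return name
    return "red"

# Both pairs are 50 degrees apart, but only the first pair changes name.
print(hue_name(0), hue_name(50))    # different perceptual hues
print(hue_name(90), hue_name(140))  # the same perceptual hue
```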
I have done experiments looking at what some of the edge detection formulas call gradients – the level of gradated difference between the pixel being looked at and its adjacent pixels – and I find it doesn’t make any noticeable difference: the software works just as well looking only at the pixels immediately adjacent to the one being evaluated.
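The immediate-neighbour comparison described above is simple to state in code: a pixel is a candidate if it contrasts with any of its eight adjacent pixels, with no wider gradient window. A minimal sketch, with an illustrative threshold:

```python
def differs_from_neighbours(image, y, x, threshold):
    """True if the pixel contrasts with any of its 8 immediate neighbours."""
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            ny, nx = y + dy, x + dx
            if 0 <= ny < len(image) and 0 <= nx < len(image[0]):
                if abs(image[y][x] - image[ny][nx]) >= threshold:
                    return True
    return False

img = [
    [10, 10, 10],
    [10, 10, 200],
    [10, 10, 10],
]
print(differs_from_neighbours(img, 1, 1, threshold=50))  # the bright 200 is adjacent
print(differs_from_neighbours(img, 0, 0, threshold=50))  # all neighbours are near 10
```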
And I don’t think that doing a Gaussian blur is very useful. I can see why many of the academic formulas do it – to try to avoid selecting too many pixels – but by treating the selected pixels as part of a significant line I don’t need to bother with that; I can use the original photo raw.
We aren’t at perfection (yet!), especially with complex photos such as the one I am using in the samples to the left, but I have to say that I do find my flexible variables much more useful for what I am looking for than all the currently-defined academic recommendations. And they don’t involve a lot of complex mathematical formulas (possibly un-endearing my approach to some!).
Different from the academic received wisdom, then. I wouldn’t say it is better – certainly not yet, it has some way to go – but for me it holds more promise; it feels like I can develop it in the direction of what I want to achieve. The academic mathematical formulas, by contrast, feel rather like an end in themselves.
The software for edge detection by line can be found on my page
Edge Detect by Line.