Edge Detection Scripts
there are many . . .
There are many formulas for detecting edges; the Wikipedia page for Edge Detection gives links. And Adobe Capture on iPad has a number of options that can work as edge detection, though they are essentially brightness-difference detection.
What is the difference between edge detection and brightness difference detection? Not very much, it’s largely a matter of how much the software fills in the shapes.
Many of the published edge detection formulas have things in common (a minimal sketch of the pipeline follows the list):
They work upon an image in greyscale, without defining what formula for greyscale conversion they are working to (see my page Formulas for Calculating Pixel Brightness (i.e. Greyscale)).
They recommend a level of Gaussian blur on the image before calculation.
They apply a single mathematical formula, usually quite a complex one, to each pixel being looked at.
They tend to make a stronger line, the greater the difference in brightness between two adjoining areas.
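To make those common steps concrete, here is a minimal sketch of that kind of pipeline using OpenCV. It is purely illustrative: the greyscale conversion is OpenCV’s default, and the blur size and thresholds are values invented for the example, not ones any particular formula prescribes.

```python
# Minimal sketch of the standard pipeline: greyscale -> Gaussian blur ->
# per-pixel gradient formula (Canny here) -> thresholded edge map.
# Blur size and Canny thresholds are illustrative values only.
import cv2

img = cv2.imread("photo.jpg")                    # hypothetical input file
grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)     # greyscale, using OpenCV's own formula
blurred = cv2.GaussianBlur(grey, (5, 5), 1.4)    # Gaussian blur before calculation
edges = cv2.Canny(blurred, 50, 150)              # stronger response where adjoining
                                                 # areas differ more in brightness
cv2.imwrite("edges.png", edges)                  # white lines on a black background
```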
They pretty well all produce impressive-looking results, too. BUT . . . I’m not sure that’s how the eye/brain combination really works. I think the eye is looking at shapes. Hue difference will also come into it. The following two blocks can be distinguished by most people in colour but will convert to identical shades in greyscale (depending on which formula is used to calculate greyscale). See my page Equal Brightness Blocks.
There’s also the question of what one is doing edge detection for. There are edges everywhere, lots of little ones, that we mostly ignore, while at the same time there are edges that are highly significant, like those you don’t want to bump your head on. Edges vary in their perceptual significance. When someone does a line drawing, they are, without thinking about it, picking out the significant perceptual edges. The most effective software will be that which can do the same, pick out the significant perceptual edges. I’m not sure this can ever be possible with a single mathematical formula.
My software Edge Detect By Lines tries to look at significant lines; I am still experimenting with this. The approach I am currently taking is to look at each pixel as potentially being part of a line; if it is, then I consider that it could be part of an edge. There are parameters that define how many adjacent contrasting pixels might be considered to be a line, among other criteria.
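The actual Edge Detect By Lines code is not shown here; purely to illustrate the idea of “enough adjacent contrasting pixels counting as a line”, here is a rough Python sketch. The MIN_RUN and CONTRAST parameters are invented for the example, and it only looks for horizontal runs of pixels that contrast with the row below them.

```python
# Illustrative only: flag a pixel as a candidate edge pixel when it sits in a
# run of at least MIN_RUN horizontally adjacent pixels that all contrast with
# the row below. Not the actual Edge Detect By Lines implementation.
import numpy as np

MIN_RUN = 4      # assumed parameter: minimum run length to count as a line
CONTRAST = 30    # assumed parameter: brightness difference threshold (0-255)

def candidate_line_pixels(grey: np.ndarray) -> np.ndarray:
    """grey: 2-D uint8 array. Returns a boolean mask of candidate line pixels."""
    h, w = grey.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(h - 1):
        run_start = None
        for x in range(w):
            # a pixel "contrasts" if it differs enough from the pixel below it
            contrasts = abs(int(grey[y, x]) - int(grey[y + 1, x])) >= CONTRAST
            if contrasts and run_start is None:
                run_start = x
            elif not contrasts and run_start is not None:
                if x - run_start >= MIN_RUN:      # long enough to call it a line
                    mask[y, run_start:x] = True
                run_start = None
        if run_start is not None and w - run_start >= MIN_RUN:
            mask[y, run_start:w] = True
    return mask
```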
A pixel could be part of an edge if it is sufficiently different from the pixels surrounding it; different in brightness, or different in hue, or different in perceptual hue, by which I mean colour name: red at 0° on the colour circle is a different perceptual hue from yellow at 50°, but two colours 50° apart at 90° and 140° are both perceptually green.
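As an illustration of those three kinds of difference, here is a small Python sketch. The colour-name bands and the thresholds are my own assumptions for the example, not the values Edge Detect By Lines uses.

```python
# Three ways a pair of pixels can differ: brightness, hue angle, and
# perceptual hue (colour name). Bands and thresholds are assumptions.
import colorsys

HUE_BANDS = [(0, 30, "red"), (30, 75, "yellow"), (75, 165, "green"),
             (165, 195, "cyan"), (195, 285, "blue"), (285, 345, "magenta"),
             (345, 360, "red")]

def hue_deg(rgb):
    """Hue angle in degrees for an (r, g, b) triple with components 0-255."""
    h, _, _ = colorsys.rgb_to_hsv(*(c / 255 for c in rgb))
    return h * 360

def colour_name(deg):
    return next((name for lo, hi, name in HUE_BANDS if lo <= deg < hi), "red")

def brightness(rgb):
    return sum(rgb) / 3      # crude average; see the greyscale-formulas page

def could_be_edge(rgb_a, rgb_b, bright_t=30, hue_t=40):
    """True if the two pixels differ enough in any of the three senses."""
    gap = abs(hue_deg(rgb_a) - hue_deg(rgb_b))
    gap = min(gap, 360 - gap)
    return (abs(brightness(rgb_a) - brightness(rgb_b)) >= bright_t
            or gap >= hue_t
            or colour_name(hue_deg(rgb_a)) != colour_name(hue_deg(rgb_b)))

print(colour_name(0), colour_name(50))     # red yellow  (different colour names)
print(colour_name(90), colour_name(140))   # green green (50° apart, same name)
```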
I have done experiments in looking at what some of the formulas for edge detection call gradients: the level of graduated difference between the pixel being looked at and its adjacent pixels, as used in a number of the theoretical formulas. I find it doesn’t make a difference that is significant in the right direction; for the purposes of what I am trying to achieve, the software works just as well looking only at the pixels immediately adjacent to the one being evaluated.
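For anyone wanting to compare the two measures themselves, here is a NumPy sketch of each: a Sobel-style gradient magnitude over a 3x3 neighbourhood, and a plain difference from the immediately adjacent pixels. The Sobel kernel is just one common choice of gradient, not necessarily the one any given formula uses.

```python
# Two measures of local change: a 3x3 Sobel-style gradient magnitude versus
# the largest absolute difference from the four immediately adjacent pixels.
import numpy as np

def sobel_gradient(grey: np.ndarray) -> np.ndarray:
    """Gradient magnitude from 3x3 Sobel kernels (array borders are ignored)."""
    g = grey.astype(float)
    gx = (g[:-2, 2:] + 2 * g[1:-1, 2:] + g[2:, 2:]
          - g[:-2, :-2] - 2 * g[1:-1, :-2] - g[2:, :-2])
    gy = (g[2:, :-2] + 2 * g[2:, 1:-1] + g[2:, 2:]
          - g[:-2, :-2] - 2 * g[:-2, 1:-1] - g[:-2, 2:])
    return np.hypot(gx, gy)

def neighbour_difference(grey: np.ndarray) -> np.ndarray:
    """Largest absolute difference from the pixels directly above, below,
    left and right of each pixel."""
    g = grey.astype(float)
    d = np.zeros_like(g)
    d[1:, :]  = np.maximum(d[1:, :],  np.abs(g[1:, :]  - g[:-1, :]))   # above
    d[:-1, :] = np.maximum(d[:-1, :], np.abs(g[:-1, :] - g[1:, :]))    # below
    d[:, 1:]  = np.maximum(d[:, 1:],  np.abs(g[:, 1:]  - g[:, :-1]))   # left
    d[:, :-1] = np.maximum(d[:, :-1], np.abs(g[:, :-1] - g[:, 1:]))    # right
    return d
```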
And I don’t think that doing a Gaussian blur is very useful. I can see why many of the academic formulas do it: to try and avoid too many pixels being selected. But surely that is using a trick to overcome a problem that may be the wrong problem to be looking at. By treating the selected pixels as part of a significant line I don’t think I need to bother with that; I can use the original photo raw.
We are by no means at perfection (yet!) in any of the methods that have been devised. This is especially so with complex photos such as the one I am using in the samples below. But I have to say that I find my flexible variables much more useful for what I am looking for than many of the currently defined academic recommendations. And they don’t involve a lot of complex mathematical formulas (which possibly won’t endear my approach to some!).
Different from the academic received wisdom, then. I wouldn’t say it is better, certainly not yet, it has some way to go, but for me it holds more promise; it feels like I can develop it in the direction of what I want to achieve. The academic mathematical formulas, by contrast, feel like they’re rather an end in themselves.
My software for edge detection by line can be found on my page Edge Detect by Line.
Each of these images may be clicked on to expand to full-screen or 2048 pixels high, whichever is the smaller.
Samples
original pic
Colour photo
Canny Sobel
Photo processed with Canny formula and Sobel filter.
Canny Sobel Inverse
Photo processed with Canny formula and Sobel filter, inverted to black on white. Very impressive, though in parts there are so many lines that it looks like shading, almost a monochrome conversion of the photo. Too many edges.
Canny Sobel non-maximum suppression
Photo processed with Canny formula and Sobel filter, non-maximum suppression.
Canny Sobel non-maximum suppression inverted
Photo processed with Canny formula and Sobel filter, non-maximum suppression, inverted to black on white. Similar problems to the previous sample using Canny formula for edge detection.
Adobe Capture Edge
Adobe Capture using the Edge option. As with a classic Canny formula – which perhaps this option is using – this gives white lines on a black background.
Adobe Capture Edge Inverted
The Adobe Capture Edge option inverted. This looks superb, though it is not entirely just edges; there are some filled-in shapes. And it tends not to do faces very well.
Adobe Capture Line
Adobe Capture using the Line option. Also a bit unsatisfactory on faces.
Edge Detect By Line
My own software Edge Detect by Line. A different approach, looking for significant perceptual lines as lines. Still too much noise, which I’m working on eliminating.
Note: Where an image renders fuzzy, that is a browser image-rendering problem; click on the pic to expand it and see a better-resolution picture.