DaveCOLLIER
Edge Detection Scripts
there are many . . .
There are many formulas for detecting edges; the Wikipedia page for Edge Detection gives links to several. And Adobe Capture on iPad has a number of options that can work as edge detection, though they are essentially brightness difference detection.
What is the difference between edge detection and brightness difference detection? Not very much; it's largely a matter of how much the software fills in the shapes.
Many of the published edge detection formulas have things in common:
They work upon an image in greyscale, without defining what formula for greyscale conversion they are working to (see my page Formulas for Calculating Pixel Brightness (i.e. Greyscale)).
They recommend a level of Gaussian blur on the image before calculation.
They apply a single mathematical formula, usually quite a complex one, to each pixel being looked at.
They tend to make a stronger line, the greater the difference in brightness between two adjoining areas.
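To make the pattern concrete, here is a minimal sketch of the kind of single-formula detector described above, using the well-known Sobel kernels on a small greyscale grid. It is an illustration of the general approach, not any particular published implementation, and it skips the Gaussian blur step those formulas recommend.

```python
# A minimal sketch of a single-formula edge detector: Sobel gradients on a
# small greyscale grid (values 0-255). Pure Python, no libraries; a full
# implementation would typically blur the image first.

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(grey, x, y):
    """Gradient magnitude at (x, y); larger where brightness changes more."""
    gx = gy = 0
    for dy in range(3):
        for dx in range(3):
            p = grey[y + dy - 1][x + dx - 1]
            gx += SOBEL_X[dy][dx] * p
            gy += SOBEL_Y[dy][dx] * p
    return (gx * gx + gy * gy) ** 0.5

# A dark region meeting a bright region: a vertical edge down the middle.
grey = [
    [10, 10, 200, 200],
    [10, 10, 200, 200],
    [10, 10, 200, 200],
]
print(sobel_magnitude(grey, 1, 1))  # → 760.0, a strong response on the edge
```

The greater the brightness difference across the neighbourhood, the larger the magnitude, which is exactly the "stronger line for a bigger difference" behaviour noted above.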
They pretty well all produce impressive-looking results, too. BUT . . . I'm not sure that's how the eye/brain combination really works. I think the eye is looking at shapes. Hue difference will also come into it. The following two blocks can be distinguished by most people in colour but will convert to identical shades in greyscale (depending on which formula is used to calculate greyscale). See my page Equal Brightness Blocks.
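The effect is easy to demonstrate numerically. The two RGB colours below are my own illustrative picks, not the actual blocks from the page mentioned above, and the greyscale formula used is the common Rec. 601 weighting (one of several discussed on the Formulas for Calculating Pixel Brightness page).

```python
# Two colours that are obviously different to the eye yet come out almost
# identical in greyscale under the common Rec. 601 luma weights.
# Illustrative values only, not the blocks from the Equal Brightness page.

def luma_601(r, g, b):
    """Rec. 601 greyscale value for an RGB colour (0-255 channels)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

red = (255, 0, 0)    # pure red
green = (0, 130, 0)  # a mid green

print(luma_601(*red))    # 76.245
print(luma_601(*green))  # 76.31
```

Under a different greyscale formula the two values would differ, which is why the result "depends on which formula is used".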
There's also the question of what one is doing edge detection for. There are edges everywhere, lots of little ones that we mostly ignore, while at the same time there are edges that are highly significant, like those you don't want to bump your head on. Edges vary greatly in their perceptual significance. When someone does a line drawing, they are, without thinking about it, picking out the significant perceptual edges. It would be intriguing to think that software could do the same, could pick out significant perceptual edges; however, that raises the question of what is significant. I think maybe this can never be possible with a single mathematical formula.
My software Edge Detect By Averages looks at edge detection in a somewhat different way; I am still experimenting with this. The approach I am currently taking is to look at relative brightness and hue between adjoining pixels and then to try to work out whether this is relevant to a given shape. There are parameters that define how many adjacent contrasting pixels might be considered to be a line, among other criteria.
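To give a rough flavour of that idea, here is a sketch, and only a sketch: it is not the actual Edge Detect By Averages code. It marks a pixel as a candidate when it contrasts enough with an immediate neighbour, then keeps only candidates that have enough contrasting neighbours to look like part of a line. The `threshold` and `min_neighbours` parameters are hypothetical stand-ins for the software's real criteria, and hue comparison is omitted for brevity.

```python
# Sketch of "contrast with immediate neighbours, then keep only line-like
# runs". Not the real Edge Detect By Averages code; threshold and
# min_neighbours are illustrative parameters, and hue is left out.

def edge_candidates(grey, threshold=40):
    """Pixels whose brightness differs from a 4-neighbour by > threshold."""
    h, w = len(grey), len(grey[0])
    out = set()
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and \
                        abs(grey[y][x] - grey[ny][nx]) > threshold:
                    out.add((x, y))
                    break
    return out

def keep_lines(candidates, min_neighbours=2):
    """Keep candidates with enough adjacent candidates to form a line."""
    kept = set()
    for (x, y) in candidates:
        n = sum((x + dx, y + dy) in candidates
                for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0))
        if n >= min_neighbours:
            kept.add((x, y))
    return kept

grey = [[10, 10, 200, 200] for _ in range(4)]  # vertical edge mid-image
lines = keep_lines(edge_candidates(grey))
print(sorted(lines))  # the two columns either side of the edge
```

Note there is no single formula here: the decision is split into simple, adjustable criteria, which is the point of the flexible-variables approach.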
A pixel could be part of an edge if it is sufficiently different from the pixels surrounding it: different in brightness, different in hue, or different in perceptual hue, by which I mean colour name; red at 0° on the colour circle is a different perceptual hue from yellow at 50°, but two colours 50° apart at 90° and 140° are both perceptually green.
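That colour-name idea can be sketched as binning the hue circle into named bands and calling two hues perceptually different only when they fall in different bands. The band boundaries below are my own illustrative guesses, not values taken from the software.

```python
# "Perceptual hue" as colour names: hue angles are binned into named bands.
# Band boundaries are illustrative guesses, not the software's values.

HUE_BANDS = [  # (name, start degrees, end degrees)
    ("red", 345, 360), ("red", 0, 15), ("orange", 15, 45),
    ("yellow", 45, 70), ("green", 70, 165), ("cyan", 165, 200),
    ("blue", 200, 260), ("purple", 260, 300), ("magenta", 300, 345),
]

def hue_name(degrees):
    d = degrees % 360
    for name, start, end in HUE_BANDS:
        if start <= d < end:
            return name
    return "red"  # unreachable: the bands cover the full circle

def same_perceptual_hue(a, b):
    return hue_name(a) == hue_name(b)

print(hue_name(0), hue_name(50))    # red yellow -> different names
print(hue_name(90), hue_name(140))  # green green -> same name
```

With this binning, 0° and 50° differ (red vs yellow) while 90° and 140° are both green, matching the example in the text, even though both pairs are 50° apart.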
I have done experiments with what some of the edge detection formulas call gradients: the level of gradated difference between the pixel being looked at and its adjacent pixels, as used in a number of the theoretical formulas. I find it doesn't make a difference that is significant in the right direction; for the purposes of what I am trying to achieve, the software works just as well looking only at the pixels immediately adjacent to the one being evaluated.
And I don't think that doing a Gaussian blur is very useful. I can see why many of the academic formulas do it: to try to avoid there being too many selected pixels. But surely that is using a trick to overcome a problem that may be the wrong problem to be looking at. By treating the selected pixels as part of a significant line I don't think I need to bother with that; I can use the original photo in the raw.
We are by no means at perfection (yet!) in any of the methods that have been devised. This is especially so with complex photos that have lots of little shapes, but I have to say that I find my flexible variables much more useful for what I am looking for than many of the currently-defined academic recommendations. And they don't involve a lot of complex mathematical formulas; it's more a case of unravelling what is strong and significant for the job being looked at.
Different from the academic received wisdom, then. I wouldn't say it is better, certainly not yet; it has some way to go. But for me it holds more promise: it feels like I can develop it in the direction of what I want to achieve. The academic mathematical formulas, by contrast, feel rather like an end in themselves.
My software for edge detection by line can be found on my page Edge Detect by Averages.
I also have a page that uses the widely-recommended Canny formulas for edge detection, Canny Edge Detection. This produces impressive results once one has the parameters right for a given original image – different photos require different variables.
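Canny's method chains together several stages: Gaussian smoothing, gradient magnitude and direction, non-maximum suppression, and finally a double threshold with hysteresis. Its sensitivity to the two thresholds is exactly why different photos need different variables. Here is a minimal sketch of just the double-threshold/hysteresis stage, with made-up gradient magnitudes; it is an illustration of the principle, not the code behind the Canny Edge Detection page.

```python
# Sketch of Canny's double-threshold / hysteresis stage. Pixels at or above
# `high` are strong edges; pixels between `low` and `high` are weak and
# survive only if connected (8-way) to a strong pixel. Magnitudes invented
# for illustration; not the code from the Canny Edge Detection page.

def hysteresis(mag, low, high):
    h, w = len(mag), len(mag[0])
    strong = {(x, y) for y in range(h) for x in range(w) if mag[y][x] >= high}
    weak = {(x, y) for y in range(h) for x in range(w)
            if low <= mag[y][x] < high}
    edges, stack = set(strong), list(strong)
    while stack:  # flood-fill from strong pixels through connected weak ones
        x, y = stack.pop()
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                p = (x + dx, y + dy)
                if p in weak and p not in edges:
                    edges.add(p)
                    stack.append(p)
    return edges

mag = [
    [0,   0,  0, 0, 0],
    [60, 90, 60, 0, 0],   # weak-strong-weak run: all three are kept
    [0,   0,  0, 0, 55],  # isolated weak pixel: discarded
]
print(sorted(hysteresis(mag, low=50, high=80)))  # → [(0, 1), (1, 1), (2, 1)]
```

Raise or lower `low` and `high` and the kept set changes, which is the per-photo tuning described above.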
All this has to depend on what one is doing edge detection for. In my case, with my software, I am coming at it from an artistic image perspective, looking to see how one might break down a picture into simple component parts automatically, a form of image abstraction. Rather different from the more scientific approaches, and requiring different inputs.