Distance transforms are among the most useful and versatile tools in image processing and analysis. Shape matching, object classification, morphological operations, skeletonization, shape-driven segmentation and comparison, and a large variety of shape features are just some of their many uses.
For obvious reasons we all want fast and exact methods. Whether the task is to measure the size of flying lumps of lava, the distance between activated regions within a cell nucleus, or to merge several manual segmentations of a liver into an atlas, there is a general desire for results that are as accurate and precise as possible. For decades now, the exact Euclidean distance transform has been computable in linear time, offering an excellent combination of speed and accuracy.
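One well-known linear-time exact algorithm of the kind referred to above is the separable lower-envelope method of Felzenszwalb and Huttenlocher: a 1D squared-distance transform is applied first along columns, then along rows. The sketch below is a minimal pure-Python illustration, not the method presented in this talk; the function names and the finite `BIG` sentinel (standing in for infinity) are my own choices.

```python
import math

BIG = 1e20  # stands in for +infinity; keeping it finite avoids inf - inf = nan


def edt_1d(f):
    """Exact 1D squared distance transform (lower-envelope algorithm)."""
    n = len(f)
    d = [0.0] * n
    v = [0] * n          # indices of the parabolas forming the lower envelope
    z = [0.0] * (n + 1)  # boundaries between consecutive envelope parabolas
    k = 0
    z[0], z[1] = -BIG, BIG
    for q in range(1, n):
        # Intersection of parabola from q with the rightmost envelope parabola.
        s = ((f[q] + q * q) - (f[v[k]] + v[k] * v[k])) / (2 * q - 2 * v[k])
        while s <= z[k]:
            k -= 1
            s = ((f[q] + q * q) - (f[v[k]] + v[k] * v[k])) / (2 * q - 2 * v[k])
        k += 1
        v[k] = q
        z[k], z[k + 1] = s, BIG
    k = 0
    for q in range(n):
        while z[k + 1] < q:
            k += 1
        d[q] = (q - v[k]) ** 2 + f[v[k]]
    return d


def edt_2d(mask):
    """Euclidean distance from each pixel to the nearest True pixel in `mask`."""
    rows, cols = len(mask), len(mask[0])
    # Seed cost: 0 at object pixels, "infinite" elsewhere.
    g = [[0.0 if mask[r][c] else BIG for c in range(cols)] for r in range(rows)]
    # The squared EDT is separable: transform columns, then rows.
    for c in range(cols):
        col = edt_1d([g[r][c] for r in range(rows)])
        for r in range(rows):
            g[r][c] = col[r]
    for r in range(rows):
        g[r] = edt_1d(g[r])
    return [[math.sqrt(x) for x in row] for row in g]
```

For example, with a single seed pixel at the center of a 5x5 grid, `edt_2d` returns 0 at the seed, 2 at the edge midpoints, and sqrt(8) at the corners. The separability is what makes the method trivially parallel and easy to extend to higher dimensions: each 1D pass over a row or column is independent of the others.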
Often, however, the customary representation of image objects as collections of pixels (or voxels) severely limits performance.
In this talk I suggest looking beyond the classic approach of representing a shape as a set of pixels or voxels. I will present a simple solution that offers a considerable improvement in accuracy and precision at a negligible cost in time and memory. The method is easy to implement, extends readily to higher dimensions, and is straightforward to parallelize.