The curse of the big table

June 3rd, 2015

As an Area Editor for Pattern Recognition Letters, I’m frequently confronted with papers containing big tables of results. It is often the deblurring and denoising papers (obviously using PSNR as a quality metric!) that display lots of large tables comparing the proposed method with the state of the art on a set of images. I’m seriously tired of this. Now I’ve put my foot down and asked an author to remove the table and provide a plot instead. In this post I will show what is wrong with these tables and propose a good alternative.


No, that’s not a Gaussian filter

February 6th, 2015

I recently got a question from a reader regarding Gaussian filtering, in which he says:

I have seen some code use a 3×3 Gaussian kernel like
    h1 = [1, 2, 1]/4
to do the separable filtering.
The paper by Burt and Adelson (“The Laplacian Pyramid as a Compact Image Code,” IEEE Transactions on Communications, 31:532-540, 1983) seems to use a 5×5 Gaussian kernel like
    h1 = [1/4 - a/2, 1/4, a, 1/4, 1/4 - a/2],
where a is between 0.3 and 0.6. A typical value of a is 0.375, which gives the kernel:
    h1 = [0.0625, 0.25, 0.375, 0.25, 0.0625]
or
    h1 = [1, 4, 6, 4, 1]/16.

I have written previously about Gaussian filtering, but neither of those posts makes it clear what a Gaussian filter kernel looks like.
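The numbers in the reader’s question are easy to verify. Here is a minimal sketch (assuming NumPy; the choice of sigma = 1.0 for the comparison Gaussian is arbitrary, picked only for illustration):

```python
import numpy as np

def burt_adelson_kernel(a):
    """5-tap generating kernel from Burt & Adelson (1983):
    w(0) = a, w(+-1) = 1/4, w(+-2) = 1/4 - a/2."""
    return np.array([1/4 - a/2, 1/4, a, 1/4, 1/4 - a/2])

# With a = 0.375 this reduces to the binomial coefficients [1, 4, 6, 4, 1]/16.
h1 = burt_adelson_kernel(0.375)
assert np.allclose(h1, np.array([1, 4, 6, 4, 1]) / 16)

# The kernel is normalized for any a:
assert abs(burt_adelson_kernel(0.5).sum() - 1.0) < 1e-12

# For comparison: normalized samples of an actual Gaussian (sigma = 1.0,
# an arbitrary choice). These do not reproduce the binomial weights.
x = np.arange(-2, 3)
g = np.exp(-x**2 / 2.0)
g /= g.sum()
print(np.round(g, 4))
```

The binomial kernel is a reasonable small-kernel approximation to a Gaussian, but, as the title of the post says, it is not actually a Gaussian.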


Computer vision is hard!

September 24th, 2014

Today’s xkcd comic is relevant to this blog.

xkcd comic #1425, “Tasks”

Mouse-over text: “In the 60s, Marvin Minsky assigned a couple of undergrads to spend the summer programming a computer to use a camera to identify objects in a scene. He figured they’d have the problem solved by the end of the summer. Half a century later, we’re still working on it.”

Proper counting

September 23rd, 2014

I just came across an editorial in the Journal of the American Society of Nephrology (Kirsten M. Madsen, J Am Soc Nephrol 10(5):1124-1125, 1999), which states:

A considerable number of manuscripts submitted to the Journal include quantitative morphologic data based on counts and measurements of profiles observed in tissue sections or projected images. Quite often these so-called morphometric analyses are based on assumptions and approximations that cannot be verified and therefore may be incorrect. Moreover, many manuscripts have insufficient descriptions of the sampling procedures and statistical analyses in the Methods section, or it is apparent that inappropriate (biased) sampling techniques were used. Because of the availability today of many new and some old stereologic methods and tools that are not based on undeterminable assumptions about size, shape, or orientation of structures, the Editors of the Journal believe that it is time to dispense with the old, often biased, model-based stereology and change the way we count and measure.

It then goes on to say that the journal would require that appropriate stereological methods be employed for quantitative morphologic studies. I have never read a paper in this journal, but I certainly hope that they managed to hold on to this standard during the 15 years since the editorial was written. Plenty of journals have not come this far yet.

DIPimage 2.6 released

April 14th, 2014

Today we released version 2.6 of DIPimage and DIPlib. The change list is rather short, but there are two items that I think are important: 1) we fixed a bug that caused an unnecessary copy of the output image(s) in the DIPlib–MEX interface, slowing down functions, especially for large images; and 2) we introduced a new setting to automatically make use of a feature introduced in the previous release.
