
General theory and tools

The Stochastic Watershed
Bettina Selig, Cris Luengo, Ida-Maria Sintorn, Filip Malmberg, Robin Strand
Funding: S-faculty, SLU
Period: 1102-
Abstract: The stochastic watershed is an image segmentation method that builds on the classical seeded watershed algorithm. It creates a probability density function for edges in the image by repeated applications of the seeded watershed with random seeds.
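
The Monte Carlo procedure described above can be sketched as follows. This is only an illustration of the principle, not the project's implementation: it uses SciPy's seeded watershed (scipy.ndimage.watershed_ift) on a gradient image, and the seed count, number of repetitions, and gradient landscape are arbitrary choices.

```python
import numpy as np
from scipy import ndimage as ndi

def stochastic_watershed(image, n_seeds=10, n_reps=50, rng=None):
    """Monte Carlo estimate of the edge strength map: apply a seeded
    watershed repeatedly with uniformly random seeds and count how
    often each pixel lies on a region boundary."""
    rng = np.random.default_rng(rng)
    # flood the gradient magnitude image, as is usual for watersheds
    grad = ndi.morphological_gradient(image.astype(float), size=3)
    grad = (255 * (grad - grad.min()) / (np.ptp(grad) + 1e-12)).astype(np.uint8)
    pdf = np.zeros(image.shape)
    for _ in range(n_reps):
        markers = np.zeros(image.shape, dtype=np.int32)
        ys = rng.integers(0, image.shape[0], n_seeds)
        xs = rng.integers(0, image.shape[1], n_seeds)
        markers[ys, xs] = np.arange(1, n_seeds + 1)
        labels = ndi.watershed_ift(grad, markers)
        # a pixel is on a boundary if a 4-neighbour has another label
        boundary = np.zeros(image.shape, dtype=bool)
        boundary[:-1, :] |= labels[:-1, :] != labels[1:, :]
        boundary[:, :-1] |= labels[:, :-1] != labels[:, 1:]
        pdf += boundary
    return pdf / n_reps
```

Averaged over many repetitions, true object edges, which separate random seeds in almost every repetition, accumulate high values, while spurious boundaries inside homogeneous regions appear at different positions each time and average out.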

Previously, we developed a perturbation-based approach to improve the properties of the algorithm: by adding noise to the input image at every application of the seeded watershed, we were able to avoid larger regions being split. We have also proposed an efficient, deterministic algorithm that computes the result that one would obtain after an infinite number of repetitions of the seeded watershed (Pattern Recognition Letters), as well as an efficient algorithm to convert this tree-based result back to all edges in the image's graph.

During 2015, we published a paper describing a method for combining the perturbation-based approach with the deterministic algorithm. We also submitted a manuscript describing a method for exact evaluation of stochastic watersheds applied to supervised, or targeted, image segmentation.

Adaptive Mathematical Morphology
Vladimir Curic, Cris Luengo, Gunilla Borgefors
Partners: Anders Landström, Matthew Thurley, Luleå University of Technology, Luleå; Sébastien Lefèvre, University of South Brittany, Vannes, France; Jesús Angulo, Santiago Velasco-Forero, Centre for Mathematical Morphology, MINES ParisTech, Fontainebleau, France
Funding: Graduate School in Mathematics and Computing (FMB)
Period: 1101-1506
Abstract: The construction of adaptive structuring elements that adjust their shape and size to the local structures in the image has recently been a popular topic in mathematical morphology. Although several methods for the construction of spatially adaptive structuring elements have been proposed, the problem is still open, both from a theoretical and an implementation point of view. We have proposed salience adaptive structuring elements, which modify their shape and size according to the saliency of nearby edges in the image, as well as a structuring element with a predefined shape that changes only its size based on the saliency of nearby edges.
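
The general adaptivity idea, though not the published construction, can be sketched as follows: a dilation in which the structuring element at each pixel is a square whose radius grows with the distance to the nearest salient edge, so that the filter is gentle near edges and aggressive in flat regions. The edge mask and the cap max_radius are assumed inputs.

```python
import numpy as np
from scipy import ndimage as ndi

def adaptive_dilation(image, edges, max_radius=5):
    """Dilation with a spatially adaptive (square) structuring element:
    the SE radius at each pixel equals the distance to the nearest
    edge pixel, capped at max_radius, so detail near edges survives."""
    # distance from every pixel to the nearest pixel marked as edge
    dist = ndi.distance_transform_edt(~edges)
    radius = np.minimum(dist, max_radius)
    h, w = image.shape
    out = image.astype(float).copy()
    for y in range(h):
        for x in range(w):
            r = int(radius[y, x])
            win = image[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            out[y, x] = win.max()
    return out
```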

This year, we presented a paper at the International Symposium on Mathematical Morphology (ISMM) describing the very first adaptive Hit-or-Miss transform. We showed how making this filter adaptive makes it better at detecting faint signals in a noisy background.

Digital Distance Functions and Distance Transforms
Robin Strand, Gunilla Borgefors
Partners: Benedek Nagy, Dept. of Computer Science, Faculty of Informatics, University of Debrecen, Hungary; Nicolas Normand, IRCCyN, University of Nantes, France
Funding: TN-faculty, UU; S-faculty, SLU
Period: 9309-
Abstract: The distance between any two grid points in a grid is defined by a distance function. In this project, weighted distances have been considered for many years. A generalization of the weighted distances is obtained by using both weights and a neighborhood sequence to define the distance function. The neighborhood sequence allows the size of the neighborhood to vary along the paths.
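
Such a distance can be computed as a shortest path over (point, step-phase) states. The sketch below does this with Dijkstra's algorithm in 2D; the default weights 3 (axial) and 4 (diagonal) and the alternating neighborhood sequence are illustrative choices, not values from the project.

```python
import heapq

def ns_weighted_distance(p, q, weights=(3, 4), seq=(1, 2)):
    """Weighted neighborhood-sequence distance on the square grid:
    step k of a path may move diagonally only if seq[k % len(seq)] == 2;
    axial steps cost weights[0], diagonal steps weights[1]."""
    period = len(seq)
    axial = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    diag = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
    best = {(p, 0): 0}
    heap = [(0, p, 0)]  # (cost so far, point, step index mod period)
    while heap:
        cost, pt, k = heapq.heappop(heap)
        if pt == q:
            return cost
        if cost > best.get((pt, k), float('inf')):
            continue  # stale heap entry
        moves = axial + (diag if seq[k % period] == 2 else [])
        for dx, dy in moves:
            w = weights[1] if dx != 0 and dy != 0 else weights[0]
            nxt = (pt[0] + dx, pt[1] + dy)
            nk = (k + 1) % period
            new = cost + w
            if new < best.get((nxt, nk), float('inf')):
                best[(nxt, nk)] = new
                heapq.heappush(heap, (new, nxt, nk))
    return float('inf')
```

With seq=(2,), every step may be diagonal and the function reduces to the classical 3-4 weighted (chamfer) distance.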

In 2015, a manuscript on optimal path extraction and spatially-varying cost functions was accepted for the DGCI 2016 conference.

Precise Image-Based Measurements through Irregular Sampling
Teo Asplund, Robin Strand, Cris Luengo, Gunilla Borgefors
Partner: Matthew Thurley, Luleå University of Technology, Luleå
Funding: Swedish Research Council
Period: 1604-
Abstract: Operations within mathematical morphology depend strongly on the sampling grid, and therefore generally produce a result different from the corresponding continuous-domain operation. Ideally, image-based measurements are sampling invariant, but the morphological operators are not, for three reasons. First, the output depends on local suprema and infima, and local extrema are very likely to fall between sampling points. Second, the operators produce lines along which the derivative is discontinuous, thereby introducing infinitely high frequencies, so that the result is not band limited and cannot be represented using the classical sampling theorem. Third, the structuring element is limited by the sampling grid.

To tackle these issues we will use irregular sampling to capture local maxima and minima, and increase the sampling density in areas where the derivative is discontinuous. A further benefit of moving towards mathematical morphology on irregularly sampled data is that morphological operators can then be applied to such data directly, without resampling and interpolation.
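
On irregularly sampled data, a flat morphological operator can be defined directly on the sample positions, with the structuring element acting as an interval in the continuous domain. A minimal 1-D sketch of such a dilation (our own illustration, not the project's algorithm):

```python
def dilate_irregular(xs, ys, radius):
    """Flat dilation of an irregularly sampled 1-D signal: the dilated
    value at sample position x is the supremum of the signal over all
    samples within `radius` of x. No resampling or interpolation of
    the data is needed."""
    return [max(y2 for x2, y2 in zip(xs, ys) if abs(x2 - x) <= radius)
            for x in xs]
```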

Image Enhancement Based on Energy Minimization
Nataša Sladoje
Partners: Joakim Lindblad, Buda Bajic, Faculty of Engineering, University of Novi Sad, Serbia
Funding: Swedish Governmental Agency for Innovation Systems (VINNOVA); TN-faculty, UU
Period: 1409-
Abstract: A common approach to the very important but severely ill-posed problem of image deconvolution is to formulate it as an energy minimization problem. Typically, some regularization utilizing available a priori knowledge is applied. Total variation regularization is among the most popular approaches, due to its generally good performance.
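
As a minimal illustration of the energy-minimization formulation, the 1-D sketch below minimizes 0.5||k*x - b||^2 + lambda sum huber(Dx) by plain gradient descent. This is a toy version under a Gaussian-noise data term, not the optimizers or parameter settings used in the project.

```python
import numpy as np

def huber(t, delta):
    """Huber potential: quadratic near zero, linear in the tails."""
    a = np.abs(t)
    return np.where(a <= delta, 0.5 * t ** 2, delta * (a - 0.5 * delta))

def huber_grad(t, delta):
    """Derivative of the Huber potential."""
    return np.clip(t, -delta, delta)

def deconvolve_huber(b, k, lam=0.1, delta=0.05, n_iter=500, step=0.5):
    """Minimize 0.5*||k*x - b||^2 + lam * sum huber(diff(x)) by plain
    gradient descent; k is an odd-length blur kernel, b the observed
    1-D signal."""
    x = b.astype(float).copy()
    kr = k[::-1]  # correlation with k is the adjoint of convolution with k
    for _ in range(n_iter):
        r = np.convolve(x, k, mode='same') - b
        data_grad = np.convolve(r, kr, mode='same')
        g = huber_grad(np.diff(x), delta)
        # gradient of sum_i huber(x[i+1]-x[i]) w.r.t. x[j] is g[j-1] - g[j]
        reg_grad = np.concatenate(([0.0], g)) - np.concatenate((g, [0.0]))
        x = x - step * (data_grad + lam * reg_grad)
    return x
```

The Huber potential behaves like total variation for large jumps but is quadratic for small differences, which avoids staircasing in smooth regions.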

During 2015, we studied the performance of energy minimization based restoration methods for enhancing images degraded by blur and one of three noise types: Gaussian, Poisson, or mixed Poisson-Gaussian. For degradation with Poisson noise, we considered both a Bayesian approach and an approach based on the Anscombe variance stabilizing transformation (VST), and compared their performance. For the restoration of images degraded by blur and mixed Poisson-Gaussian noise, we considered the generalized Anscombe VST. For all three noise types, we explored the use of the Huber potential function in the regularization, in combination with both the Bayesian and the VST approaches.

We summarized the results of a large empirical study on images affected by different levels of Gaussian, Poisson, and mixed Poisson-Gaussian noise and by point spread functions of different sizes in a paper that is currently under evaluation. We concluded that restoration utilizing the Huber potential function outperforms classical total variation regularization, and that, for higher levels of noise (lower counts), the VST approach outperforms the Bayesian one.

We presented some of these results at the Swedish Symposium on Image Analysis, held in Ystad in March 2015.

We have also studied so-called blind deblurring methods, applicable when the point spread function (PSF) is unknown and both the PSF and the original image have to be reconstructed simultaneously. We focused in particular on a noise model that is a mixture of Poisson (signal-dependent) and Gaussian (signal-independent) noise. We have proposed a blind deconvolution method, based on regularized energy minimization, for images degraded by such mixed noise, applied it to Transmission Electron Microscopy images of cilia, and summarized the results in a paper that we will present at the IEEE International Symposium on Biomedical Imaging, ISBI 2016.

Figure 29: (Top) An astronomical image degraded with blur and different types of noise. (Bottom) Images reconstructed by minimizing different Huber regularized energy functions with the spectral projected gradient method. Regularization parameters are optimized to maximize the peak signal-to-noise ratio.

Coverage Model and its Application to High Precision Medical Image Processing
Nataša Sladoje
Partners:  Joakim Lindblad, Vladimir Ilic, Faculty of Technical Sciences, University of Novi Sad, Serbia
Funding: TN-faculty, UU
Period: 1409-
Abstract: The coverage model, which we have been developing for several years, provides a framework for representing objects in digital images as spatial fuzzy subsets. The assigned membership values indicate to what extent image elements are covered by the imaged objects. In recent years, we have shown, both theoretically and in applications, that the model can be used to improve information extraction from digital images and to reduce problems originating from limited spatial resolution.

During 2015, we developed two algorithms of linear time complexity for estimating the Euclidean Distance Transform (EDT) with sub-voxel precision. Due to discretization effects, distance transforms defined on a binary image have limited precision, including reduced rotational and translational invariance. We have shown that the performance of EDTs improves significantly if voxel coverage values are utilized and the position of the object boundary is estimated with sub-voxel precision. The study was published in Pattern Recognition Letters.
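
The role of the coverage values is easy to see in one dimension: if a pixel is covered to fraction c by an object lying to its left, the boundary sits at fraction c through that pixel. A toy sketch of this sub-pixel localization (an illustration of the principle only, not the published EDT algorithm):

```python
def subpixel_boundary(coverage):
    """Given per-pixel coverage values along a 1-D row (object on the
    left, background on the right), estimate the boundary position
    with sub-pixel precision; pixel i spans [i - 0.5, i + 0.5)."""
    for i, c in enumerate(coverage):
        if c < 1.0:
            # the object covers fraction c of this partially covered pixel
            return i - 0.5 + c
    return len(coverage) - 0.5  # fully covered row
```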

We have also proposed a method for computing, in linear time, the exact EDT of sets of points such that one coordinate of a point can be assigned any real value, whereas the other coordinates are restricted to discrete sets of values. The proposed distance transform is applicable to objects represented by grid line sampling, and readily provides sub-pixel precise distance values. The method performs very well and exhibits a number of appealing properties, such as simple implementation, easy parallelization, and straightforward extension to higher dimensions. The results of this study were published in the proceedings of the 12th International Symposium on Mathematical Morphology (ISMM) and presented in Reykjavik, Iceland, in May.

In 2015, our work on the coverage model also included the development of a coverage segmentation method for extracting thin structures in 3D images. The method needs a reliable crisp segmentation as input, and uses information from local linear unmixing together with this crisp segmentation to create a high-resolution crisp reconstruction of the object, which can be used as the final result or down-sampled to a coverage segmentation at the original image resolution. We suggested an implementation that keeps memory consumption and processing time low, making the method applicable to real CTA data. The study was published in the proceedings of the 5th International Conference on Image Processing Theory, Tools and Applications, IPTA 2015, and presented in Orléans, France, in November.

Predictive Modelling of Real Time Video of Outdoor Scenes Captured With a Moving Handheld Camera
Nataša Sladoje
Partner: Joakim Lindblad, Protracer AB, Stockholm
Funding: Swedish Governmental Agency for Innovation Systems (VINNOVA); UU; TN-faculty
Period: 1510-
Abstract: This project is inspired by the growing market demand for real time matchmoving technologies in sports broadcasting. Matchmoving, also referred to as video tracking or camera tracking, is a technique that allows 3D computer graphics to be inserted into a live broadcast to enhance the visual experience for the viewing audience. The major technological and functional limitation of existing real time matchmoving technology is its reliance on cameras installed on stands and on known background settings. Within this project, we will work towards software for robust predictive modeling (statistical analysis) of real time video of outdoor scenes captured with a moving handheld camera. We want to be able to identify, track, and trace sub-pixel sized objects moving at speed within a freely moving video stream. This is a collaborative project with Protracer AB, the world-leading provider of ball tracking technology.

Figure 30: Protracer is a world leading provider of ball tracking technology. Products include real-time tracking and display of golf shots in TV broadcasts. This project aims to bring their technology to a more dynamic environment.

Feature Point Descriptors for Image Stitching
Anders Hast, Ida-Maria Sintorn, Damian Matuszewski, Carolina Wählby
Partners: Vironova AB; Dept. of Electronic Computers RSREU, Ryazan, Russia
Funding: TN-faculty; UU; Science for Life Laboratory
Period: 1501-
Abstract: When microscopy images are to be combined into an image larger than a single field of view, they are stitched together based on key point features. Several methods for matching such images exist, but they are often general in the sense that they can handle scale and rotation, which are not present in this particular case. These methods are therefore like cracking a nut with a sledgehammer, and we have investigated how simpler, more efficient, and faster methods can be developed and applied to this task. Several key point descriptors have been investigated, based on new sampling strategies and on new ways of combining these samples, using for instance elements of the Fourier transform instead of histograms of gradients. A paper describing two versions of a fast and simple feature point descriptor, with and without rotation invariance, was presented at the WSCG conference.

The whole matching pipeline has been investigated and several improvements have been suggested. We have shown, for instance, that RANSAC can be substituted by a fast clustering method, which makes computing the transformation between images and removing false positives not only faster but also deterministic; the latter is otherwise a problem with RANSAC, as it is based on random sampling. This alternative to RANSAC was presented at the second workshop on Features and Structures (FEAST), co-located with the International Conference on Machine Learning, Lille, France, in July.
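
For the stitching case, where the transformation is close to a pure translation, the clustering idea can be sketched as follows: histogram the displacement vectors of the tentative matches, take the densest bin, and average its members. The bin size and voting scheme here are illustrative, not the exact method presented at FEAST.

```python
import numpy as np

def translation_by_clustering(pts_a, pts_b, bin_size=2.0):
    """Estimate a pure translation between two matched point sets by
    voting: quantize the displacement vectors into bins, keep the
    densest bin, and average its members (a deterministic alternative
    to RANSAC for the no-rotation, no-scale case)."""
    d = np.asarray(pts_b, float) - np.asarray(pts_a, float)
    keys = np.floor(d / bin_size).astype(int)
    # count votes per bin; the winning bin holds the inlier displacements
    uniq, inverse, counts = np.unique(keys, axis=0, return_inverse=True,
                                      return_counts=True)
    inliers = inverse.ravel() == np.argmax(counts)
    return d[inliers].mean(axis=0)
```

Unlike RANSAC, the result is deterministic: the same input always votes the same bin to the top.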

Figure 31: Sample results of region matching in TEM images.

Regional Orthogonal Moments for Texture Analysis
Ida-Maria Sintorn, Carolina Wählby, Amit Suveer
Partners: Vironova AB; Dept. of Immunology, Genetics and Pathology, UU
Funding: Swedish Research Council
Period: 1501-
Abstract: The purpose of this project is to investigate and systematically characterize a novel approach to texture analysis, which we have termed Regional Orthogonal Moments (ROMs). The idea is to combine the descriptive strength and compact information representation of orthogonal moments with the well-established local filtering approach to texture analysis. We will explore ROMs and quantitative texture descriptors derived from the ROM filter responses, and characterize them with special consideration to noise, rotation, contrast, and scale robustness as well as generalization performance, important factors in applications with natural images. To do this, we will utilize and expand available image texture datasets and adapt machine learning methods to the prerequisites of microscopy images. The two main applications in which we will validate the ROM texture analysis framework are viral pathogen detection and identification in MiniTEM images (related to project 33), and glioblastoma phenotyping of patient-specific cancer stem cell cultures for disease modeling and personalized treatment (see project 17).
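
The flavor of the approach can be sketched as follows (a generic construction used only for illustration, not the ROM definition from the project): build an orthonormal polynomial basis over a local window, filter the image with each basis kernel, and summarize the responses, here simply by their variance.

```python
import numpy as np
from scipy import ndimage as ndi

def local_moment_features(image, window=5, order=2):
    """Texture descriptors from local orthogonal-moment filter
    responses: orthonormalize monomial kernels x^p * y^q over the
    window via QR, convolve the image with each resulting kernel,
    and return the variance of each response."""
    c = np.arange(window) - window // 2
    X, Y = np.meshgrid(c, c)
    basis = [(X ** p * Y ** q).ravel().astype(float)
             for p in range(order + 1) for q in range(order + 1 - p)]
    Q, _ = np.linalg.qr(np.stack(basis, axis=1))  # orthonormal columns
    feats = []
    for k in range(Q.shape[1]):
        kern = Q[:, k].reshape(window, window)
        resp = ndi.convolve(image.astype(float), kern)
        feats.append(resp.var())
    return np.array(feats)
```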

Digital Hyperplanes
Christer Kiselman
Partner: Adama Koné, Université des Sciences, des Techniques et des Technologies de Bamako, USTTB, Bamako I (Mali)
Period: 1001-
Abstract: Digital planes in all dimensions are studied. The general goal is to generalize to any dimension the results of Kiselman's 2011 paper in Mathematika. An important part of the study was finished with Adama Koné's thesis, presented on 2016 January 16. There are, however, several possible generalizations still to be investigated.

Convexity of Marginal Functions in the Discrete Case
Christer Kiselman
Partner: Shiva Samieinia, KTH
Period: 1011-
Abstract: We define, using difference operators, classes of functions defined on the set of points with integer coordinates which are preserved under the formation of marginal functions. The duality between classes of functions with certain convexity properties and families of second-order difference operators plays an important role and is explained using notions from mathematical morphology.
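
In standard notation (as a reminder; the precise function classes are those of the manuscript), the marginal function of a function f defined on Z^n x Z^m is

```latex
h(x) = \inf_{y \in \mathbf{Z}^m} f(x, y), \qquad x \in \mathbf{Z}^n,
```

and the question studied is which convexity classes, defined via second-order difference operators, are preserved when passing from f to h.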

A manuscript, joint with Shiva, was accepted on 2015 April 11. Several generalizations are now being studied.

Euclid's Straight Lines
Christer Kiselman
Period: 0701-1503
Abstract: This project was both linguistic and mathematical. We raise two questions on Euclid's Elements: How can one explain that Propositions 16 and 27 in his first book do not follow, strictly speaking, from his postulates (or are perhaps meaningless)? And what are the mathematical consequences of the meanings of the term eutheia, meanings that we today often prefer to consider as distinct?

The answer to the first question is that orientability is a tacit assumption. The answer to the second is rather a discussion of efforts to avoid actual infinity, and of the need to (in some sense or another) construct equivalence classes of segments to achieve uniqueness. The project finished with a publication in Normat.

Discrete Convolution Equations
Christer Kiselman
Period: 1201-
Abstract: We study solvability of convolution equations for functions with discrete support in Euclidean space, a special case being functions with support in the integer points. The more general case is of interest for several grids in Euclidean space, like the body-centred and face-centred tessellations of three-space, as well as for the non-periodic grids that appear in the study of quasicrystals. The theorem on existence of fundamental solutions by de Boor, Höllig & Riemenschneider is generalized to arbitrary discrete supports, using only elementary methods. We also study the asymptotic growth of sequences and arrays using the Fenchel transformation. Estimates using the Fourier transformation are studied. Duality of convolution will now be investigated. A paper was published on 2015 May 07 in Mathematika. A second paper was submitted on 2015 December 31.
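
For reference, with the standard definitions (the notation here is ours), the convolution of two functions with discrete support and the fundamental-solution property read

```latex
(\mu * u)(x) = \sum_{y} \mu(y)\, u(x - y), \qquad \mu * E = \delta,
```

so that, given a fundamental solution E, the equation mu * u = f is solved by u = E * f whenever the convolutions are defined.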

Mathematical Spaces / Mathematical Rooms
Christer Kiselman
Partner: Hania Uscka-Wehlou
Period: 1310-1503
Abstract: A survey of mathematical spaces, mathematical terminology, Euclidean and digital geometry, discretization of space and time, tropical mathematics, mathematical morphology, research policy, evaluation of research. Finished with a publication in Sundelöfs Societet.

Complex Convexity
Christer Kiselman
Period: 6710-
Abstract: A bounded open set with boundary of class C1 which is locally weakly lineally convex is weakly lineally convex, but, as shown by Yurii Zelinskii, this is not true for unbounded domains. We construct explicit examples, Hartogs domains, showing this; their boundaries can be taken with different degrees of regularity. Obstructions to constructing smoothly bounded domains with certain homogeneity properties are presented.

There are several publications in this project. The latest manuscript was accepted in May 2015. A current activity is the study of one-sided regularity of subsets of Rn, presented in an invited lecture at Stockholm University on September 16.

DIPimage and DIPlib
Cris Luengo
Partners: Bernd Rieger, Lucas van Vliet, Quantitative Imaging Group, Delft University of Technology, The Netherlands; Michael van Ginkel, Unilever Research and Development, Colworth House, Bedford, UK
Funding: ERC grant to Bernd Rieger
Period: 0807-1506
Abstract: DIPimage is a MATLAB toolbox for scientific image analysis, useful for both teaching and research. It has been in active development since 1999, when it was created at Delft University of Technology. In 2008, when Cris Luengo moved to Uppsala, CBA was added to the project as a main development site. DIPlib, created in 1995, is a C library containing many hundreds of image analysis routines. DIPlib is the core of the DIPimage toolbox, and both projects are developed in parallel. Because DIPlib provides efficient algorithms, MATLAB is useful for image analysis beyond the prototyping stage. Together, MATLAB and DIPimage form a powerful tool for working with scalar and vector images in any number of dimensions.

During 2015 we looked for and obtained funding to port the DIPlib library to C++ and modernise its infrastructure. When this port is finished, DIPlib and DIPimage will become open source projects.
