

Journal articles

  1. Image Segmentation and Identification of Paired Antibodies in Breast Tissue
    Authors: Jimmy C. Azar, Martin Simonsson, Ewert Bengtsson, Anders Hast
    Journal: Computational & Mathematical Methods in Medicine, Vol. 2014, Article ID 647273, 11 pages
    Abstract: Comparing staining patterns of paired antibodies designed toward a specific protein but toward different epitopes of the protein provides quality control over the binding and the antibodies' ability to identify the target protein correctly and exclusively. We present a method for automated quantification of immunostaining patterns for antibodies in breast tissue using the Human Protein Atlas database. In such tissue, the dark brown dye 3,3'-diaminobenzidine is used as an antibody-specific stain, whereas the blue dye hematoxylin is used as a counterstain. The proposed method is based on clustering and relative scaling of features following principal component analysis. Our method is able (1) to accurately segment and identify staining patterns and quantify the amount of staining and (2) to detect paired antibodies by correlating the segmentation results among different cases. Moreover, the method is simple, operating in a low-dimensional feature space, and computationally efficient, which makes it suitable for high-throughput processing of tissue microarrays.
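
    The two ingredients named in the abstract, clustering after principal component analysis with relative scaling of features, can be illustrated with a minimal Python sketch (illustrative only, not the authors' exact pipeline; the image layout and cluster count are assumptions):

      # Minimal sketch: PCA + feature rescaling + k-means pixel clustering
      # for separating stain classes (e.g. brown DAB vs. blue hematoxylin).
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.cluster import KMeans

      def segment_stains(rgb, n_clusters=3):
          """Cluster pixels of an RGB tissue image (H x W x 3, floats in [0, 1])."""
          pixels = rgb.reshape(-1, 3)
          feats = PCA(n_components=2).fit_transform(pixels)  # decorrelate stains
          feats /= feats.std(axis=0)                         # relative scaling
          labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
          return labels.reshape(rgb.shape[:2])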

  2. Screening for Cervical Cancer Using Automated Analysis of PAP-Smears
    Authors: Ewert Bengtsson, Patrik Malm
    Journal: Computational and Mathematical Methods in Medicine, Vol. 2014, Article ID 842037, 12 pages
    Abstract: Cervical cancer is one of the most deadly and common forms of cancer among women if no action is taken to prevent it, yet it is preventable through a simple screening test, the so-called PAP-smear. This is the most effective cancer prevention measure developed so far. But the visual examination of the smears is time-consuming and expensive, and there have been numerous attempts at automating the analysis ever since the test was introduced more than 60 years ago. The first commercial systems for automated analysis of the cell samples appeared around the turn of the millennium, but they have had limited impact on the screening costs. In this paper we examine the key issues that need to be addressed when an automated analysis system is developed and discuss how these challenges have been met over the years. The lessons learned may be useful in the efforts to create a cost-effective screening system that could make affordable screening for cervical cancer available for all women globally, thus preventing most of the quarter million annual unnecessary deaths still caused by this disease.

  3. 3D Tree-Ring Analysis Using Helical X-Ray Tomography
    Authors: Jan van den Bulcke(1), Erik L.G. Wernersson, Manuel Dierick(2), Denis Van Loo(2),
    Bert Masschaele(2), Loes Brabant(2), Matthieu N. Boone(2), Luc Van Hoorebeke(2), Kristof Haneca(3), Anders Brun, Cris L. Luengo Hendriks, Joris Van Acker(1)
    (1) UGCT Ghent University, Dept. of Forest and Water Management, Laboratory of Wood Technology, Ghent, Belgium
    (2) UGCT Ghent University, Dept. of Physics and Astronomy, Ghent, Belgium
    (3) Flanders Heritage Agency, Brussels, Belgium
    Journal: Dendrochronologia, Vol. 32, nr 1, pages 39-46
    Abstract: The current state of the art in tree-ring analysis and densitometry is still mainly limited to two dimensions and mostly requires proper treatment of the surface of the samples. In this paper we elaborate on the potential of helical X-ray computed tomography for 3D tree-ring analysis. Microdensitometrical profiles are obtained by processing the reconstructed volumes. Correction for the structure direction, taking into account the angle of growth rings and grain, results in very accurate microdensity and precise ring width measurements. Both a manual and an automated methodology are proposed, for which MATLAB code is available. Examples are given for pine (Pinus sylvestris L.), oak (Quercus robur L.) and teak (Tectona grandis L.). In all, the methodologies applied here on the 3D volumes are useful for growth-related studies, enabling fast and non-destructive analysis.
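
    As a pointer to how ring widths fall out of such density profiles, here is a minimal Python sketch (peak spacing on a 1D microdensity profile; the peak-detection parameters are invented, and this is not the authors' released MATLAB code):

      import numpy as np
      from scipy.signal import find_peaks

      def ring_widths(density, min_width_px=5):
          """Distances between successive latewood density peaks, in pixels."""
          peaks, _ = find_peaks(density, distance=min_width_px,
                                prominence=0.1 * np.ptp(density))
          return np.diff(peaks)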

  4. Efficient Algorithm for Finding the Exact Minimum Barrier Distance
    Authors: Krzysztof Chris Ciesielski(1,2), Robin Strand(3), Filip Malmberg(3), Punam K. Saha(4)
    (1) Dept. of Mathematics, West Virginia University, Morgantown, WV, USA
    (2) Dept. of Radiology, MIPG, University of Pennsylvania, Philadelphia, PA, USA
    (3) Dept. of Radiology, Oncology and Radiation Science, UU
    (4) Dept. of Electrical and Computer Engineering and Dept. of Radiology, The University of Iowa, Iowa City, USA
    Journal: Computer Vision and Image Understanding, Vol. 123, pages 53-64
    Abstract: The minimum barrier distance, MBD, introduced recently in [1], is a pseudo-metric defined on a compact subset D of a Euclidean space, whose values depend on a fixed map (an image) f from D into R. The MBD between two points is defined as the minimal value of the barrier strength of a path between them, which is the length of the smallest interval containing all values of f along the path.

    In this paper we present a polynomial-time algorithm that provably calculates the exact values of the MBD for digital images. We compare this new algorithm, theoretically and experimentally, with the algorithm presented in [1], which computes approximate values of the MBD. Moreover, we notice that every generalized distance function can be naturally translated to an image segmentation algorithm. The algorithms that fall into this category include Relative Fuzzy Connectedness and those associated with the minimum barrier, fuzzy distance, and geodesic distance functions. In particular, we compare these four algorithms experimentally on 2D and 3D natural and medical images with known ground truth and at varying levels of noise, blur, and inhomogeneity.
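
    To make the barrier-strength idea concrete, the following Python sketch propagates the interval [min f, max f] along paths from a seed in Dijkstra fashion, in the spirit of the approximate algorithm of [1] (the exact algorithm of this paper is more involved; the 4-neighbourhood and 2D restriction are simplifying assumptions):

      import heapq
      import numpy as np

      def mbd_map(image, seed):
          """Approximate MBD from `seed` to every pixel of a 2D float image."""
          h, w = image.shape
          dist = np.full((h, w), np.inf)   # best barrier strength found so far
          dist[seed] = 0.0
          heap = [(0.0, image[seed], image[seed], seed[0], seed[1])]
          while heap:
              d, lo, hi, y, x = heapq.heappop(heap)
              if d > dist[y, x]:
                  continue
              for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  ny, nx = y + dy, x + dx
                  if 0 <= ny < h and 0 <= nx < w:
                      nlo = min(lo, image[ny, nx])   # extend the path's interval
                      nhi = max(hi, image[ny, nx])
                      nd = nhi - nlo                 # barrier strength of the path
                      if nd < dist[ny, nx]:
                          dist[ny, nx] = nd
                          heapq.heappush(heap, (nd, nlo, nhi, ny, nx))
          return dist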

  5. Adaptive Mathematical Morphology: A Survey of the Field
    Authors: Vladimir Curic, Anders Landström(1), Matthew J. Thurley(1), Cris L. Luengo Hendriks
    (1) Dept. of Computer Science, Electrical and Space Engineering, Luleå University of Technology, Sweden
    Journal: Pattern Recognition Letters, Vol. 47, pages 18-28
    Abstract: We present an up-to-date survey on the topic of adaptive mathematical morphology. A broad review of research performed within the field is provided, as well as an in-depth summary of the theoretical advances within the field. Adaptivity can come in many different ways, based on different attributes, measures, and parameters. Similarities and differences between a few selected methods for adaptive structuring elements are considered, providing perspective on the consequences of different types of adaptivity. We also provide a brief analysis of perspectives and trends within the field, discussing possible directions for future studies.

  6. A New Set Distance and Its Application to Shape Registration
    Authors: Vladimir Curic, Joakim Lindblad(1), Nataša Sladoje(1), Hamid Sarve, Gunilla Borgefors
    (1) Faculty of Technical Sciences, University of Novi Sad, Serbia.
    Journal: Pattern Analysis and Applications, Vol. 17, nr 1, pages 141-152
    Abstract: We propose a new distance measure, called the Complement weighted sum of minimal distances, between finite sets in Z^n and evaluate its usefulness for shape registration and matching. In this set distance the contribution of each point of each set is weighted according to its distance to the complement of the set. In this way, outliers and noise contribute less to the new similarity measure. We evaluate the performance of the new set distance for registration of shapes in binary images and compare it to a number of often used set distances found in the literature. The most extensive evaluation uses a set of synthetic 2D images. We also show three examples of real problems: registering a set of 2D images extracted from synchrotron radiation micro-computed tomography (SRµCT) volumes depicting bone implants; the difficult multi-modal registration task of finding the exact location of a 2D slice of a bone implant, as imaged by a light microscope, within a 3D SRµCT volume of the same implant; and finally recognition of handwritten characters. The evaluation shows that our new set distance performs well for all tasks and outperforms the other observed distance measures in most cases. It is therefore useful in many image registration and shape comparison tasks.
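
    One plausible reading of the definition can be written down directly with distance transforms (a sketch for 2D binary images; the symmetrization by averaging is an assumption, not taken from the paper):

      import numpy as np
      from scipy.ndimage import distance_transform_edt

      def cwsmd_oneway(A, B):
          """Weighted average distance from points of set A to set B."""
          w = distance_transform_edt(A)        # weight = distance to A's complement
          d_to_B = distance_transform_edt(~B)  # distance from any pixel to B
          return (w[A] * d_to_B[A]).sum() / w[A].sum()

      def cwsmd(A, B):
          return 0.5 * (cwsmd_oneway(A, B) + cwsmd_oneway(B, A))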

  7. Canine Body Composition Quantification Using 3 Tesla Fat Water MRI
    Authors: Aliya Gifford(1,2), Joel Kullberg(3), Johan Berglund(3), Filip Malmberg, Katie C. Coate(4), Phillip E. Williams(4), Alan D. Cherrington(4), Malcolm J. Avison(1,2,5,6), E. Brian Welch(1,2,6)
    (1) Vanderbilt University Institute of Imaging Science, Vanderbilt University School of Medicine, Nashville, Tennessee, USA
    (2) Chemical and Physical Biology Program, Vanderbilt University School of Medicine, Nashville, Tennessee, USA
    (3) Dept. of Radiology, UU, Uppsala, Sweden
    (4) Dept. of Molecular Physiology and Biophysics, Vanderbilt University School of Medicine, Nashville, Tennessee, USA
    (5) Dept. of Pharmacology, Vanderbilt University School of Medicine, Nashville, Tennessee, USA
    (6) Dept. of Radiology & Radiological Sciences, Vanderbilt University School of Medicine, Nashville, Tennessee, USA
    Journal: Journal of Magnetic Resonance Imaging, Vol. 39, Issue 2, pages 485-491
    Abstract: Purpose: To test the hypothesis that a whole-body fat-water MRI (FWMRI) protocol acquired at 3 Tesla combined with semi-automated image analysis techniques enables precise volume and mass quantification of adipose, lean, and bone tissue depots that agree with static scale mass and scale mass changes in the context of a longitudinal study of large-breed dogs placed on an obesogenic high-fat, high-fructose diet.

    Materials and Methods: Six healthy adult male dogs were scanned twice, at weeks 0 (baseline) and 4 of the dietary regimen. FWMRI-derived volumes of adipose tissue (total, visceral, and subcutaneous), lean tissue, and cortical bone were quantified using a semi-automated approach. Volumes were converted to masses using published tissue densities.

    Results: FWMRI-derived total mass corresponds with scale mass with a concordance correlation coefficient of 0.931 (95% confidence interval = [0.813, 0.975]), and slope and intercept values of 1.12 and -2.23 kg, respectively. Visceral, subcutaneous and total adipose tissue masses increased significantly from weeks 0 to 4, while neither cortical bone nor lean tissue masses changed significantly. This is evidenced by a mean percent change of 70.2% for visceral, 67.0% for subcutaneous, and 67.1% for total adipose tissue.

    Conclusion: FWMRI can precisely quantify and map body composition with respect to adipose, lean, and bone tissue depots. The described approach provides a valuable tool to examine the role of distinct tissue depots in an established animal model of human metabolic disease.
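
    The concordance correlation coefficient quoted above is a standard agreement measure; a generic Python implementation (not the paper's code) is:

      import numpy as np

      def concordance_ccc(x, y):
          """Lin's concordance correlation coefficient between two series."""
          x, y = np.asarray(x, float), np.asarray(y, float)
          mx, my = x.mean(), y.mean()
          cov = ((x - mx) * (y - my)).mean()
          return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)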

  8. Simple Filter Design for First and Second Order Derivatives by a Double Filtering Approach
    Author: Anders Hast
    Journal: Pattern Recognition Letters, Vol. 42, pages 65-71
    Abstract: Spline filters are usually implemented in two steps, where the first step computes the basis coefficients by deconvolving the sampled function with a factorized filter and the second step reconstructs the sampled function. It is shown how separable spline filters using different splines can be constructed with fixed kernels, requiring no inverse filtering. In particular, it is discussed how first and second order derivatives can be computed correctly using cubic or trigonometric splines by a double filtering approach, giving filters of length 7.

  9. How to Promote Student Creativity and Learning Using Tutorials in Teaching Graphics and Visualisation
    Author: Anders Hast
    Journal: Journal for Geometry and Graphics, Vol. 18, nr 2, pages 237-245
    Abstract: Course assignments play an important role in the learning process. However, they can be constructed in such a way that they prohibit creativity rather than promoting it. We therefore investigated how the programming assignments that students encounter in computer science education are set up, which approaches could help students in problem solving, and whether these would help or hinder their creativity. In particular, an online tutorial about visualisation using VTK and Python was used as an example in different courses on visualisation. It was also examined how students in the computer graphics courses that did not have access to such a tutorial answered questions about assignments.

  10. Improved Illumination Correction That Preserves Medium-Sized Objects
    Authors: Anders Hast, Andrea Marchetti(1)
    (1) Consiglio Nazionale delle Ricerche, Institute of Informatics and Telematics, Pisa, Italy
    Journal: Machine Graphics & Vision, Vol. 23, nr 1/2, pages 3-20
    Abstract: Illumination correction is a method for removing the influence of light coming from the environment and of other distorting factors in the image capturing process. An algorithm based on luminance mapping is proposed that can be used to remove low-frequency variations in the intensity, and to increase the contrast in low-contrast areas when necessary. Moreover, the algorithm can be employed to preserve the intensity of medium-sized objects with a different intensity or colour than their surroundings, which would otherwise tend to be washed out. Furthermore, examples are given showing how the method can be used for both greyscale images and colour photos.
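
    A common baseline for this kind of correction, offered only as an illustrative sketch (a wide Gaussian as the illumination estimate is an assumption; the paper's luminance-mapping algorithm differs), is to estimate the low-frequency field and normalise by it:

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def correct_illumination(gray, sigma=50):
          """Remove smooth intensity variations from a 2D float image."""
          illumination = gaussian_filter(gray, sigma)
          corrected = gray / np.maximum(illumination, 1e-6)
          return corrected * gray.mean()  # restore the overall intensity level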

  11. Effects of Defects on the Tensile Strength of Short-Fibre Composite Materials
    Authors: Thomas Joffre(1), Arttu Miettinen(2), Erik L.G. Wernersson, Per Isaksson(1),
    E. Kristofer Gamstedt(1)
    (1) Ångström Laboratory, Dept. of Engineering Sciences, UU
    (2) Dept. of Physics, University of Jyväskylä, Jyväskylä, Finland
    Journal: Mechanics of Materials, Vol. 75, pages 125-134
    Abstract: Heterogeneous materials tend to fail at the weakest cross-section, where the presence of microstructural heterogeneities or defects controls the tensile strength. Short-fibre composites are an example of heterogeneous materials, where unwanted fibre agglomerates are likely to initiate tensile failure. In this study, the dimensions and orientation of fibre agglomerates have been analysed from three-dimensional images obtained by X-ray microtomography. The geometry of the specific agglomerate responsible for failure initiation has been identified and correlated with the strength. At the plane of fracture, a defect in the form of a large fibre agglomerate was almost inevitably found. These new experimental findings highlight a problem with some existing strength criteria, which are principally based on a rule of mixture of the strengths of the constituent phases, and not on the weakest link. Only a weak correlation was found between the stress concentration induced by the critical agglomerate and the strength. A strong correlation was, however, found between the stress intensity and the strength, which underlines the importance of the size of the largest defects in the formulation of improved failure criteria for short-fibre composites. The increased use of three-dimensional imaging will facilitate the quantification of the dimensions of critical flaws.

  12. Automated Analysis of Dynamic Behavior of Single Cells in Picoliter Droplets
    Authors: Mohammad Ali Khorshidi(1), Prem Kumar Periyannan Rajeswari(1), Carolina Wählby(2,3), Håkan N. Jönsson(1), Helene Andersson Svahn(1)
    (1) Division of Proteomics and Nanobiotechnology, Science for Life Laboratory, KTH Royal Institute of Technology, Stockholm, Sweden
    (2) Science for Life Laboratory, UU
    (3) Broad Institute of Harvard and MIT, Cambridge, USA
    Journal: Lab on a Chip, Vol. 14, pages 931-937
    Abstract: We present a droplet-based microfluidic platform to automatically track and characterize the behavior of single cells over time. This high-throughput assay allows encapsulation of single cells in micro-droplets and traps intact droplets in arrays of miniature wells on a PDMS-glass chip. Automated time-lapse fluorescence imaging and image analysis of the incubated droplets on the chip allows the determination of the viability of individual cells over time. In order to automatically track the droplets containing cells, we developed a simple method based on circular Hough transform to identify droplets in images and quantify the number of live and dead cells in each droplet. Here, we studied the viability of several hundred single isolated HEK293T cells over time and demonstrated a high survival rate of the encapsulated cells for up to 11 hours. The presented platform has a wide range of potential applications for single cell analysis, e.g. monitoring heterogeneity of drug action over time and rapidly assessing the transient behavior of single cells under various conditions and treatments in vitro.
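
    The droplet detection step names the circular Hough transform; a minimal scikit-image version of that idea (the radius range and peak count are invented example parameters) looks like:

      import numpy as np
      from skimage.feature import canny
      from skimage.transform import hough_circle, hough_circle_peaks

      def find_droplets(gray, radii=np.arange(20, 40), max_droplets=100):
          """Detect circular droplets in a 2D grayscale image."""
          edges = canny(gray, sigma=2)
          accumulator = hough_circle(edges, radii)
          _, cx, cy, r = hough_circle_peaks(accumulator, radii,
                                            total_num_peaks=max_droplets)
          return list(zip(cy, cx, r))  # (row, col, radius) per droplet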

  13. 3D Texture Analysis in Renal Cell Carcinoma Tissue Image Grading
    Authors: Tae-Yun Kim(1), Nam-Hoon Cho(2), Goo-Bo Jeong(3), Ewert Bengtsson, Heung-Kook Choi(1)
    (1) Dept. of Computer Engineering, Inje University, Gyeongnam, Republic of Korea
    (2) Dept. of Pathology, Yonsei University, Seoul, Republic of Korea
    (3) Dept. of Anatomy, Gachon University, Incheon, Republic of Korea
    Journal: Computational and Mathematical Methods in Medicine, Vol. 2014, Article ID 536217, 12 pages
    Abstract: One of the most significant processes in cancer cell and tissue image analysis is the efficient extraction of features for grading purposes. This research applied two types of three-dimensional texture analysis methods to the extraction of feature values from renal cell carcinoma tissue images, and then evaluated the validity of the methods statistically through grade classification. First, we used a confocal laser scanning microscope to obtain image slices of four grades of renal cell carcinoma, which were then reconstructed into 3D volumes. Next, we extracted quantitative values using a 3D gray level cooccurrence matrix (GLCM) and a 3D wavelet based on two types of basis functions. To evaluate their validity, we predefined 6 different statistical classifiers and applied these to the extracted feature sets. In the grade classification results, 3D Haar wavelet texture features combined with principal component analysis showed the best discrimination results. Classification using 3D wavelet texture features was significantly better than 3D GLCM, suggesting that the former has potential for use in a computer-based grading system.
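
    For readers unfamiliar with GLCM features, the 2D analogue is readily computed with scikit-image (a 2D stand-in for the paper's 3D matrices; the distances, angles and chosen properties are illustrative):

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops

      def glcm_features(gray_uint8):
          """Co-occurrence texture features of a 2D uint8 image."""
          glcm = graycomatrix(gray_uint8, distances=[1],
                              angles=[0, np.pi / 2], levels=256,
                              symmetric=True, normed=True)
          return {p: graycoprops(glcm, p).mean()
                  for p in ("contrast", "homogeneity", "energy", "correlation")}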

  14. Priors for X-Ray in-Line Phase Tomography of Heterogeneous Objects
    Authors: Max Langer(1,2), Peter Cloetens(2), Bernhard Hesse(1,2,3), Heikki Suhonen(2),
    Alexandra Pacureanu(4), Kay Raum(3), Françoise Peyrin(1,2)
    (1) Université de Lyon, Creatis, Lyon, France
    (2) European Synchrotron Radiation Facility, Grenoble, France
    (3) Julius Wolff Institute, Berlin-Brandenburg School for Regenerative Therapies, Charité-Universitätsmedizin Berlin, Germany
    (4) Science for Life Laboratory, UU
    Journal: Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, Vol. 372, nr 2010, 20130129, pages 1-9
    Abstract: We present a new prior for phase retrieval from X-ray Fresnel diffraction patterns. Fresnel diffraction patterns are achieved by letting a highly coherent X-ray beam propagate in free space after interaction with an object. Previously, either homogeneous or multi-material object assumptions have been used. The advantage of the homogeneous object assumption is that the prior can be introduced in the Radon domain. Heterogeneous object priors, on the other hand, have to be applied in the object domain. Here, we let the relationship between attenuation and refractive index vary as a function of the measured attenuation index. The method is evaluated using images acquired at beamline ID19 (ESRF, Grenoble, France) of a phantom where the prior is calculated by linear interpolation and of a healing bone obtained from a rat osteotomy model. It is shown that the ratio between attenuation and refractive index in bone for different levels of mineralization follows a power law. Reconstruction was performed using the mixed approach but is compatible with other, more advanced models. We achieve more precise reconstructions than previously reported in literature. We believe that the proposed method will find application in biomedical imaging problems where the object is strongly heterogeneous, such as bone healing and biomaterials engineering.
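
    The reported power law between attenuation and refractive index amounts to a straight-line fit in log-log space; a generic sketch of such a fit (not the beamline processing code; the symbol names mu and delta are assumptions):

      import numpy as np

      def fit_power_law(mu, delta):
          """Return (a, b) such that delta is approximately a * mu**b."""
          b, log_a = np.polyfit(np.log(mu), np.log(delta), 1)
          return np.exp(log_a), b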

  15. Light Scattering in Fibrous Media with Different Degrees of in-Plane Fiber Alignment
    Authors: Tomas Linder(1), Torbjörn Löfqvist(1), Erik L. G. Wernersson, Per Gren(2)
    (1) EISLAB, Dept. of Computer Science, Electrical and Space Engineering, Luleå University of Technology, Luleå, Sweden
    (2) Division of Fluid and Experimental Mechanics, Luleå University of Technology, Luleå, Sweden
    Journal: Optics Express, Vol. 22, Issue 14, pages 16829-16840
    Abstract: Fiber orientation is an important structural property in paper and other fibrous materials. In this study we explore the relation between light scattering and in-plane fiber orientation in paper sheets. Light diffusion from a focused light source is simulated using a Monte Carlo technique where parameters describing the paper micro-structure were determined from 3D X-ray computed tomography images. Measurements and simulations of both spatially resolved reflectance and transmittance light scattering patterns show an elliptical shape whose main axis is aligned with the fiber orientation. Good qualitative agreement was found at low intensities, and the results indicate that fiber orientation in thin fiber-based materials can be determined using spatially resolved reflectance or transmittance.
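
    The Monte Carlo idea can be conveyed with a toy photon walk through a slab (isotropic scattering and the slab parameters are simplifying assumptions; the paper uses anisotropic scattering with micro-structure parameters from the CT data):

      import numpy as np

      def transmit_positions(n_photons=10000, thickness=1.0,
                             mean_free_path=0.1, seed=0):
          """(x, y) exit positions of photons transmitted through a slab."""
          rng = np.random.default_rng(seed)
          exits = []
          for _ in range(n_photons):
              pos = np.zeros(3)
              direction = np.array([0.0, 0.0, 1.0])  # launched into the slab
              while 0.0 <= pos[2] <= thickness:
                  pos = pos + rng.exponential(mean_free_path) * direction
                  v = rng.normal(size=3)              # isotropic new direction
                  direction = v / np.linalg.norm(v)
              if pos[2] > thickness:                  # transmitted photon
                  exits.append(pos[:2])
          return np.asarray(exits)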

  16. Evaluation of Prostate Segmentation Algorithms for MRI: The PROMISE12 Challenge
    Authors: Geert Litjens(1), Robert Toth(2), Wendy van de Ven(1), Caroline Hoeks(1), Sjoerd Kerkstra(1), Bram van Ginneken(1), Graham Vincent(3), Gwenael Guillard(3), Neil Birbeck(4), Jindang Zhang(4), Robin Strand, Filip Malmberg, Yangming Ou(5), Christos Davatzikos(5), Matthias Kirschner(6), Florian Jung(6), Jing Yuan(7), Wu Qiu(7), Qinquan Gao(8), Philip Eddie Edwards(8), Bianca Maan(9), Ferdinand van der Heijden(9), Soumya Ghose(10,11,12), Jhimli Mitra(10,11,12), Jason Dowling(10), Dean Barratt(13), Henkjan Huisman(1), Anant Madabhushi(2)
    (1) Radboud University Nijmegen Medical Centre, The Netherlands
    (2) Case Western Reserve University, USA
    (3) Imorphics, England, United Kingdom
    (4) Siemens Corporate Research, USA
    (5) University of Pennsylvania, USA
    (6) Technische Universitat Darmstadt, Germany
    (7) Robarts Research Institute, Canada
    (8) Imperial College London, England, United Kingdom
    (9) University of Twente, The Netherlands
    (10) Commonwealth Scientific and Industrial Research Organisation, Australia
    (11) Université de Bourgogne, France
    (12) Universitat de Girona, Spain
    (13) University College London, England, United Kingdom
    Journal: Medical Image Analysis, Vol. 18, nr 2, pages 359-373
    Abstract: Prostate MRI image segmentation has been an area of intense research due to the increased use of MRI as a modality for the clinical workup of prostate cancer. Segmentation is useful for various tasks, e.g. to accurately localize prostate boundaries for radiotherapy or to initialize multi-modal registration algorithms. In the past, it has been difficult for research groups to evaluate prostate segmentation algorithms on multi-center, multi-vendor and multi-protocol data. Especially because we are dealing with MR images, image appearance, resolution and the presence of artifacts are affected by differences in scanners and/or protocols, which in turn can have a large influence on algorithm accuracy. The Prostate MR Image Segmentation (PROMISE12) challenge was set up to allow a fair and meaningful comparison of segmentation methods on the basis of performance and robustness. In this work we discuss the initial results of the online PROMISE12 challenge, and the results obtained in the live challenge workshop hosted by the MICCAI2012 conference. In the challenge, 100 prostate MR cases from 4 different centers were included, with differences in scanner manufacturer, field strength and protocol. A total of 11 teams from academic research groups and industry participated. Algorithms showed a wide variety in methods and implementation, including active appearance models, atlas registration and level sets. Evaluation was performed using boundary and volume based metrics which were combined into a single score relating the metrics to human expert performance. The winners of the challenge were the algorithms by teams Imorphics and ScrAutoProstate, with scores of 85.72 and 84.29 overall. Both algorithms were significantly better than all other algorithms in the challenge (p < 0.05) and had an efficient implementation with a run time of 8 min and 3 s per case, respectively. Overall, active appearance model based approaches seemed to outperform other approaches like multi-atlas registration, both on accuracy and computation time. Although average algorithm performance was good to excellent and the Imorphics algorithm outperformed the second observer on average, we showed that algorithm combination might lead to further improvement, indicating that optimal performance for prostate segmentation is not yet obtained. All results are available online at http://promise12.grand-challenge.org/.
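
    The volume-overlap part of such scoring typically rests on the Dice coefficient; a generic utility (not the official PROMISE12 evaluation code) is:

      import numpy as np

      def dice(seg, ref):
          """Dice overlap between two boolean masks of equal shape."""
          seg, ref = np.asarray(seg, bool), np.asarray(ref, bool)
          denom = seg.sum() + ref.sum()
          return 2.0 * np.logical_and(seg, ref).sum() / denom if denom else 1.0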

  17. Detection of Façade Regions in Street View Images from Split-and-Merge of Perspective Patches
    Authors: Fei Liu, Stefan Seipel
    Journal: Journal of Image and Graphics, Vol. 2, nr 1, pages 8-14
    Abstract: Identification of building façades from digital images is one of the central problems in mobile augmented reality (MAR) applications in the built environment. Directly analyzing the whole image can increase the difficulty of façade identification due to the presence of image portions which are not façades. This paper presents an automatic approach to façade region detection, given a single street view image, as a pre-processing step to subsequent steps of façade identification. We devise a coarse façade region detection method based on the observation that façades are image regions with repetitive patterns containing a large number of vertical and horizontal line segments. Firstly, scan lines are constructed from vanishing points and center points of image line segments. Hue profiles along these lines are then analyzed and used to decompose the image into rectilinear patches with similar repetitive patterns. Finally, patches are merged into larger coherent regions and the main building façade region is chosen based on the occurrence of horizontal and vertical line segments within each of the merged regions. A validation of our method showed that, on average, façade regions are detected in conformity with manually segmented images as ground truth.

  18. An Efficient Algorithm for Exact Evaluation of Stochastic Watersheds
    Authors: Filip Malmberg, Cris L. Luengo Hendriks
    Journal: Pattern Recognition Letters, Vol. 47, pages 80-84
    Abstract: The stochastic watershed is a method for unsupervised image segmentation proposed by Angulo and Jeulin (2007). The method first computes a probability density function (PDF), assigning to each piece of contour in the image the probability to appear as a segmentation boundary in seeded watershed segmentation with randomly selected seeds. Contours that appear with high probability are assumed to be more important. This PDF is then post-processed to obtain a final segmentation. The main computational hurdle with the stochastic watershed method is the calculation of the PDF. In the original publication by Angulo and Jeulin, the PDF was estimated by Monte Carlo simulation, i.e., repeatedly selecting random markers and performing seeded watershed segmentation. Meyer and Stawiaski (2010) showed that the PDF can be calculated exactly, without performing any Monte Carlo simulations, but did not provide any implementation details. In a naive implementation, the computational cost of their method is too high to make it useful in practice. Here, we extend the work of Meyer and Stawiaski by presenting an efficient (quasi-linear) algorithm for exact computation of the PDF. We demonstrate that in practice, the proposed method is faster than any previously reported method by more than two orders of magnitude. The algorithm is formulated for general undirected graphs, and thus trivially generalizes to images with any number of dimensions.
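
    For contrast with the exact algorithm, the Monte Carlo baseline described above fits in a few lines of Python (seed and run counts are arbitrary example values):

      import numpy as np
      from skimage.segmentation import watershed, find_boundaries

      def stochastic_watershed_pdf(gradient, n_seeds=10, n_runs=100, seed=0):
          """Estimate boundary probabilities by repeated seeded watershed."""
          rng = np.random.default_rng(seed)
          pdf = np.zeros(gradient.shape)
          for _ in range(n_runs):
              markers = np.zeros(gradient.shape, dtype=int)
              ys = rng.integers(0, gradient.shape[0], n_seeds)
              xs = rng.integers(0, gradient.shape[1], n_seeds)
              markers[ys, xs] = np.arange(1, n_seeds + 1)
              labels = watershed(gradient, markers)
              pdf += find_boundaries(labels, mode='thick')
          return pdf / n_runs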

  19. New Insights into the Mechanisms behind the Strengthening of Lignocellulosic Fibrous Networks with Polyamines
    Authors: Andrew Marais(1,5), Mikael S. Magnusson(2,5), Thomas Joffre(3), Erik L. G. Wernersson,
    Lars Wågberg(1,4,5)
    (1) Division of Fibre Technology, School of Chemical Science and Engineering, KTH Royal Institute of Technology, Stockholm, Sweden
    (2) Dept. of Solid Mechanics, School of Engineering Sciences, KTH Royal Institute of Technology, Stockholm, Sweden
    (3) Ångström Laboratory, UU
    (4) The Wallenberg Wood Science Centre, School of Chemical Science and Engineering, KTH Royal Institute of Technology, Stockholm, Sweden
    (5) VINN Excellence Centre BiMaC Innovation, Stockholm, Sweden
    Journal: Cellulose, Vol. 21, pages 3941-3950
    Abstract: Polyelectrolytes have been used extensively in the papermaking industry for various purposes. Although recent studies have shown that polyamines can be efficient dry-strength additives, the mechanism governing the strength enhancement of paper materials following the adsorption of polyamines onto pulp fibres is still not well understood. In this study, the effect of the adsorption of polyallylamine hydrochloride (PAH) onto the surface of unbleached kraft pulp fibres was investigated on both the fibre and the network scale. Isolated fibre crosses were mechanically tested to evaluate the impact of the chemical additive on the interfibre joint strength on the microscopic scale and the effect was compared with that previously observed on the paper sheet scale. X-ray microtomography was used to understand structural changes in the fibrous network following the adsorption of a polyamine such as PAH. Using image analysis methods, it was possible to determine the number of interfibre contacts (or joints) per unit length of fibre as well as the average interfibre joint contact area. The results showed that the median interfibre joint strength increased by 18% upon adsorption of PAH. This can be achieved both by a larger molecular contact area in the contact zones and by a stronger molecular adhesion. The addition of the polymer also increased the number of efficient interfibre contacts per sheet volume. This combination of effects is the reason why polyamines such as PAH can increase the dry tensile strength of paper materials.

  20. Automatic Mapping of Standing Dead Trees after an Insect Outbreak Using the Window Independent Context Segmentation Method
    Authors: Michael Nielsen(1), Marco Heurich(2), Bo Malmberg(1), Anders Brun
    (1) Stockholm University
    (2) Bavarian Forest National Park
    Journal: Journal of Forestry, Vol. 112, nr 6, pages 564-571
    Abstract: Since the 1980s, there has been an increase in the spruce bark beetle population in the Bavarian Forest National Park in southeastern Germany. There is a need for accurate and time-effective methods for monitoring the outbreak, because manual interpretation of image data is time-consuming and expensive. In this article, the window independent context segmentation method is used to map deadwood areas. The aim is to evaluate the method's ability to monitor deadwood areas on a yearly basis. Two color-infrared scenes with a spatial resolution of 40 x 40 cm, from 2001 and 2008, were used for the study. The method was found to be effective, with an overall accuracy of 88% for the 2001 scene and 90% for the 2008 scene.

  21. A Streaming Distance Transform Algorithm for Neighborhood-Sequence Distances
    Authors: Nicolas Normand(1), Robin Strand, Pierre Evenou(1), Aurore Arlicot(1)
    (1) LUNAM Université, Université de Nantes, IRCCyN UMR CNRS 6597, France
    Journal: Image Processing On Line, Vol. 4, pages 196-203
    Abstract: We describe an algorithm that computes a "translated" 2D Neighborhood-Sequence Distance Transform (DT) using a look-up table approach. It requires a single raster scan of the input image and produces one line of output for every line of input. The neighborhood sequence is specified either by providing one period of some integer periodic sequence or by providing the rate of appearance of neighborhoods. The full algorithm optionally derives the regular (centered) DT from the "translated" DT, providing the result image on the fly, with minimal delay, before the input image is fully processed. Its efficiency can benefit all applications that use neighborhood-sequence distances, particularly when pipelined processing architectures are involved, or when the size of objects in the source image is limited.
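
    A plain (non-streaming) reference version of a neighborhood-sequence DT clarifies what is being computed; here propagation alternates between the 4- and 8-neighbourhood, the classic octagonal sequence (the streaming single-scan algorithm of the paper is different):

      import numpy as np
      from collections import deque

      N4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]
      N8 = N4 + [(-1, -1), (-1, 1), (1, -1), (1, 1)]

      def ns_distance_transform(background, sequence=(N4, N8)):
          """Distance from every pixel to the nearest True pixel of `background`."""
          h, w = background.shape
          dist = np.where(background, 0, -1)
          frontier = deque(zip(*np.nonzero(background)))
          d = 0
          while frontier:
              neighbourhood = sequence[d % len(sequence)]  # neighbourhood of step d+1
              d += 1
              nxt = deque()
              for y, x in frontier:
                  for dy, dx in neighbourhood:
                      ny, nx = y + dy, x + dx
                      if 0 <= ny < h and 0 <= nx < w and dist[ny, nx] < 0:
                          dist[ny, nx] = d
                          nxt.append((ny, nx))
              frontier = nxt
          return dist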

  22. Spotting Words in Medieval Manuscripts
    Authors: Fredrik Wahlberg, Mats Dahllöf(1), Lasse Mårtensson(2), Anders Brun
    (1) Dept. of Linguistics and Philology, UU
    (2) University of Gävle, Sweden
    Journal: Studia Neophilologica, Vol. 86, pages 171-186
    Abstract: This article discusses the technology of handwritten text recognition (HTR) as a tool for the analysis of historical handwritten documents. We give a broad overview of this field of research, but the focus is on the use of a method called 'word spotting' for finding words directly and automatically in scanned images of manuscript pages. We illustrate and evaluate this method by applying it to a medieval manuscript. Word spotting uses digital image analysis to represent stretches of writing as sequences of numerical features. These are intended to capture the linguistically significant aspects of the visual shape of the writing. Two potential words can then be compared mathematically and their degree of similarity assigned a value. Our version of this method gives a false positive rate of about 30% when the true positive rate is close to 100%, for an application where we search for very frequent short words in a 16th-century Old Swedish cursiva recentior manuscript. Word spotting would be of use e.g. to researchers who want to explore the content of manuscripts when editions or other transcriptions are unavailable.
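
    The sequence-comparison step in word spotting is commonly realised with dynamic time warping; a textbook sketch (the authors' actual features and matching details are not reproduced here):

      import numpy as np

      def dtw_distance(a, b):
          """DTW cost between two sequences of feature vectors (n x d and m x d)."""
          n, m = len(a), len(b)
          D = np.full((n + 1, m + 1), np.inf)
          D[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  cost = np.linalg.norm(a[i - 1] - b[j - 1])
                  D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
          return D[n, m] / (n + m)  # length-normalised matching cost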

  23. Characterisations of Fibre Networks in Paper Using Micro Computed Tomography Images
    Authors: Erik L. G. Wernersson, Svetlana Borodulina(1), Artem Kulachenko(1), Gunilla Borgefors
    (1) Dept. of Solid Mechanics, KTH Royal Institute of Technology, Stockholm, Sweden
    Journal: Nordic Pulp & Paper Research Journal, Vol. 29, No. 3, pages 468-475
    Abstract: Although several methods exist for characterisation of the morphology of wood fibres, the application of these procedures for the analysis of paper microstructure has been limited due to their complexity or shortcomings. Here, a methodology for microstructure characterisation of individual fibres, as well as paper, is presented which is based on three dimensional computed tomography images of paper at micrometer resolution. The first step of the method consists of a graphical user interface (GUI), designed to minimize the amount of manual labour. To manually identify a fibre from a 2 x 2 paper sheet takes about one minute with this GUI. Then several algorithms are available to analyse the image data automatically guided by the user input. With this approach it is possible to measure several characteristic properties without complete segmentation of the individual fibres. The methodology includes a method to calculate the contact areas between fibres even in extreme cases of severely deformed fibres, which are naturally present in paper. Among the measurable properties are also estimators for the free fibre lengths and fibre wall thickness. Comment: Journal front page illustration

  24. High- and Low-Throughput Scoring of Fat Mass and Body Fat Distribution in C. Elegans
    Authors: Carolina Wählby(1,2), Annie Lee Conery(3), Mark-Anthony Bray(2), Lee Kamentsky(2),
    Jonah Larkins-Ford(3), Katherine L. Sokolnicki(2), Matthew Veneskey(2), Kerry Michaels(4),
    Anne E. Carpenter(2), Eyleen J. O'Rourke(4)
    (1) Science for Life Laboratory, UU
    (2) Imaging Platform, Broad Institute of Harvard and MIT, Cambridge, MA, USA
    (3) Massachusetts General Hospital, Boston, MA, USA
    (4) Dept. of Biology, University of Virginia, Charlottesville, VA, USA
    Journal: Methods, Vol. 68, Issue 3, pages 492-499
    Abstract: Fat accumulation is a complex phenotype affected by factors such as neuroendocrine signaling, feeding, activity, and reproductive output. Accordingly, the most informative screens for genes and compounds affecting fat accumulation would be those carried out in whole living animals. Caenorhabditis elegans is a well-established and effective model organism, especially for biological processes that involve organ systems and multicellular interactions, such as metabolism. Every cell in the transparent body of C. elegans is visible under a light microscope. Consequently, an accessible and reliable method to visualize worm lipid-droplet fat depots would make C. elegans the only metazoan in which genes affecting not only fat mass but also body fat distribution could be assessed at a genome-wide scale.

    Here we present a radical improvement in oil red O worm staining together with high-throughput image-based phenotyping. The three-step sample preparation method is robust, formaldehyde-free, and inexpensive, and requires only 15 min of hands-on time to process a 96-well plate. Together with our free and user-friendly automated image analysis package, this method enables C. elegans sample preparation and phenotype scoring at a scale that is compatible with genome-wide screens. Thus we present a feasible approach to small-scale phenotyping and large-scale screening for genetic and/or chemical perturbations that lead to alterations in fat quantity and distribution in whole animals.

  25. Bone Canalicular Network Segmentation in 3D Nano-CT Images through Geodesic Voting and Image Tessellation
    Authors: Maria A Zuluaga(1,2,3), Maciej Orkisz(2), Pei Dong(2,3), Alexandra Pacureanu(4), Pierre-Jean Gouttenoire(2,3), Françoise Peyrin(2,3)
    (1) Centre for Medical Image Computing, University College London, UK
    (2) CREATIS, Université de Lyon, France
    (3) European Synchrotron Radiation Facility, Grenoble, France
    (4) Science for Life Laboratory, UU
    Journal: Physics in Medicine and Biology, Vol. 59, No. 9, pages 2155-2171
    Abstract: Recent studies have emphasized the role of the bone lacuno-canalicular network (LCN) in the understanding of bone diseases such as osteoporosis. However, suitable methods to investigate this structure are lacking. The aim of this paper is to introduce a methodology to segment the LCN from three-dimensional (3D) synchrotron radiation nano-CT images. Segmentation of such structures is challenging due to several factors, such as limited contrast and signal-to-noise ratio, partial volume effects, and the huge amount of data that needs to be processed, which restricts user interaction. We use an approach based on minimum-cost paths and geodesic voting, for which we propose a fully automatic initialization scheme based on a tessellation of the image domain. The centroids of pre-segmented lacunæ are used as Voronoi-tessellation seeds and as start-points of a fast-marching front propagation, whereas the end-points are distributed in the vicinity of each Voronoi-region boundary. This initialization scheme was devised to cope with complex biological structures involving cells interconnected by multiple thread-like, branching processes, whereas the seminal geodesic-voting method only copes with tree-like structures. Our method has been assessed quantitatively on phantom data and qualitatively on real datasets, demonstrating its feasibility. To the best of our knowledge, the presented 3D renderings of lacunæ interconnected by their canaliculi were achieved for the first time.
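
    The initialization idea (seeds at lacuna centroids, Voronoi regions, minimum-cost paths towards the region boundaries) can be sketched in 2D with standard tools; a stand-in for the nano-CT pipeline, with function choices that are assumptions:

      import numpy as np
      from scipy.ndimage import distance_transform_edt
      from skimage.graph import route_through_array

      def voronoi_labels(shape, seeds):
          """Label map assigning each pixel to its nearest seed point."""
          markers = np.zeros(shape, bool)
          markers[tuple(np.transpose(seeds))] = True
          _, nearest = distance_transform_edt(~markers, return_indices=True)
          return nearest[0] * shape[1] + nearest[1]  # one label per seed pixel

      def trace_path(cost_image, start, end):
          """Minimum-cost path between two pixels of a 2D cost image."""
          path, cost = route_through_array(cost_image, start, end,
                                           fully_connected=True)
          return np.asarray(path), cost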

  26. Evaluation of the Automatic Methods for Building Extraction
    Authors: Julia Åhlén(1), Stefan Seipel, Fei Liu
    (1) Dept. of Industrial Development, IT and Land Management, University of Gävle, Sweden
    Journal: International Journal of Computers and Communications, Vol. 8, pages 171-176
    Abstract: Recognition of buildings is not a trivial task, yet it is highly demanded in many applications, including augmented reality for mobile phones. The recognition rate can be increased significantly if building façade extraction takes place prior to the recognition process. It is also a challenging task, since each building can be viewed from different angles or under different lighting conditions. In natural outdoor situations, buildings are occluded by trees, street signs and other objects, which interferes with successful building façade recognition. In this paper we evaluate a knowledge-based approach to automatically segment out the whole building façade or major parts of it. This automatic building detection algorithm is then evaluated against other segmentation methods, such as SIFT and a vanishing-point approach. This work contains two main steps: segmentation of building façade regions using two different approaches, and evaluation of the methods using a database of reference features. The building recognition model (BRM) includes an evaluation step that uses Chamfer metrics. The BRM is then compared to vanishing-point segmentation. In the evaluation mode, the two segmentation methods are compared using data from ZuBuD. Reference matching is also done using the Scale Invariant Feature Transform. The results show that the recognition rate is satisfactory for the BRM and that there is no need to extract the whole building façade for successful recognition.
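
    The Chamfer metric used in the evaluation step reduces to a distance-transform lookup; a generic sketch (not the paper's code) is:

      import numpy as np
      from scipy.ndimage import distance_transform_edt

      def chamfer_score(image_edges, template_points):
          """Mean distance from template edge points to the nearest image edge."""
          dist = distance_transform_edt(~image_edges)   # distance to an edge pixel
          rows, cols = template_points[:, 0], template_points[:, 1]
          return dist[rows, cols].mean()                # lower is a better match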

