
Refereed conference proceedings

Authors affiliated with CBA are in bold.
  1. Document Binarization Using Topological Clustering Guided Laplacian Energy Segmentation
    Authors: Kalyan Ram Ayyalasomayajula, Anders Brun
    In Proceedings: International Conference on Frontiers in Handwriting Recognition (ICFHR)
    Abstract: The proposed approach to text binarization uses a clustering algorithm as a preprocessing stage to an energy-based segmentation method. The clustering yields a coarse estimate of the background (BG) and foreground (FG) pixels. These estimates are used as a prior for the source and sink points of a graph cut implementation, which is used to efficiently find the minimum energy solution of an objective function to separate the BG and FG. The binary image thus obtained is used to refine the edge map that guides the graph cut algorithm. A final binary image is obtained by once again performing the graph cut, guided by the refined edges, on a Laplacian of the image.
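The clustering-then-graph-cut pipeline above can be illustrated in miniature. The sketch below uses a plain two-cluster k-means on pixel intensities to produce the kind of coarse FG/BG estimate that could seed a graph cut's source and sink points; the paper's topological clustering and Laplacian-energy formulation are more elaborate, so the k-means choice and function name here are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch: coarse foreground/background seeding for a graph
# cut, via two-cluster k-means on scalar pixel intensities. This stands
# in for the paper's topological clustering stage.

def kmeans_1d(values, iters=20):
    """Two-cluster Lloyd's algorithm on scalar intensities.
    Returns 0 for the darker cluster (likely text/FG seeds) and
    1 for the brighter cluster (likely BG seeds)."""
    lo, hi = min(values), max(values)
    c = [lo, hi]  # initialize centroids at the intensity extremes
    for _ in range(iters):
        groups = ([], [])
        for v in values:
            # index 0 if closer to c[0], else 1
            groups[abs(v - c[0]) > abs(v - c[1])].append(v)
        c = [sum(g) / len(g) if g else c[i] for i, g in enumerate(groups)]
    return [int(abs(v - c[0]) > abs(v - c[1])) for v in values]

pixels = [10, 12, 8, 240, 250, 245, 11, 248]
labels = kmeans_1d(pixels)
# dark pixels -> 0 (FG seed candidates), bright pixels -> 1 (BG)
```

In a full pipeline, the 0-labelled pixels would be wired to the graph cut's source and the 1-labelled pixels to the sink.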

  2. An evaluation of potential functions for regularized image deblurring
    Authors: Buda Bajic(1), Joakim Lindblad(1), Nataša Sladoje
    (1) Faculty of Technical Sciences, University of Novi Sad, Serbia
    In Proceedings: International Conference on Image Analysis and Recognition (ICIAR), Vilamoura, Portugal, Lecture Notes in Computer Science 8814, pages 150-158
    Abstract: We explore the use of seven different potential functions in the restoration of images degraded by both noise and blur. The Spectral Projected Gradient method confirms its excellent performance in terms of speed and flexibility for the optimization of complex energy functions. Results obtained on images affected by different levels of Gaussian noise and different sizes of the Point Spread Function are presented. The Huber potential function demonstrates outstanding performance.
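Of the seven potential functions compared, the Huber function is singled out for its performance: it is quadratic near zero (smoothing small intensity differences) and linear in the tails (preserving edges). A minimal definition follows; the threshold parameter `delta` is chosen arbitrarily here, not taken from the paper.

```python
def huber(t, delta=1.0):
    """Huber potential: quadratic for |t| <= delta, linear beyond.
    The two branches meet continuously at |t| = delta."""
    a = abs(t)
    if a <= delta:
        return 0.5 * t * t
    return delta * (a - 0.5 * delta)
```

In a regularized deblurring energy, this potential would typically be applied to the magnitudes of local intensity differences (image gradients) and summed over the image.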

  3. Quantitative and Automated Microscopy - Where Do We Stand after 80 Years of Research?
    Author: Ewert Bengtsson
    In Proceedings: IEEE 11th International Symposium on Biomedical Imaging (ISBI), pages 274-277
    Abstract: Visual information is essential in medicine; almost all cancer is diagnosed through visual examination of tissue samples. But while the human visual system is excellent at recognizing patterns it is poor at providing reproducible quantitative data. Many tasks also require inspection of many thousands of images. Computerized image analysis has been developed ever since the first computers became available to provide quantitative data and to automate tedious tasks. Still the impact on routine pathology is limited. In this paper the historical development of the field is briefly outlined and the reasons for the limited impact so far are analyzed and some predictions are made about the future.

  4. Picro-Sirius-HTX Stain for Blind Color Decomposition of Histopathological Prostate Tissue
    Authors: Ingrid Carlbom, Christophe Avenel, Christer Busch(1)
    (1) Dept. of Immunology Genetics and Pathology, UU
    In Proceedings: IEEE 11th International Symposium on Biomedical Imaging (ISBI), pages 282-285
    Abstract: Gleason grading is the most widely used system for determining the severity of prostate cancer. The Gleason grade is determined visually under a microscope from prostate tissue that is most often stained with Hematoxylin-Eosin (H&E). In an earlier study we demonstrated that this stain is not ideal for machine learning applications, but that other stains, such as Sirius-hematoxylin (Sir-Htx), may perform better. In this paper we illustrate the advantages of this stain over H&E for blind color decomposition. When compared to ground truth defined by an experienced pathologist, the relative root-mean-square errors of the color decomposition mixing matrices for Sir-Htx are better than those for H&E by a factor of two, and the Pearson correlation coefficients of the density maps resulting from the decomposition of Sir-Htx-stained tissue give a 99% correlation with the ground truth. Qualitative examples of the density maps confirm the quantitative findings and illustrate that the density maps will allow accurate segmentation of morphological features that determine the Gleason grade.
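The two evaluation measures used above are standard and easy to state. A small sketch, assuming the mixing matrices and density maps are flattened into plain lists; the function names are mine, not the paper's.

```python
import math

def rel_rmse(estimate, truth):
    """Relative root-mean-square error between an estimated and a
    ground-truth matrix (both flattened): RMS of the difference,
    normalized by the RMS magnitude of the ground truth."""
    num = sum((e - t) ** 2 for e, t in zip(estimate, truth))
    den = sum(t ** 2 for t in truth)
    return math.sqrt(num / den)

def pearson(x, y):
    """Pearson correlation coefficient of two density maps (flattened)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```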

  5. Pixel Classification Using General Adaptive Neighborhood-Based Features
    Authors: Víctor González-Castro(1), Johan Debayle(1), Vladimir Curic
    (1) École Nationale Supérieure des Mines de Saint-Étienne, France
    In Proceedings: 22nd International Conference on Pattern Recognition (ICPR), pages 3750-3755
    Abstract: This paper introduces a new descriptor for characterizing and classifying the pixels of texture images by means of General Adaptive Neighborhoods (GANs). The GAN of a pixel is a spatial region surrounding it and fitting its local image structure. The features describing each pixel are then region-based and intensity-based measurements of its corresponding GAN. In addition, these features are combined with the gray-level values of adaptive mathematical morphology operators using GANs as structuring elements. The classification of each pixel of images belonging to five different textures of the VisTex database has been carried out to test the performance of this descriptor. For the sake of comparison, other adaptive neighborhoods introduced in the literature have also been used to extract these features: the Morphological Amoebas (MA), adaptive geodesic neighborhoods (AGN) and salience adaptive structuring elements (SASE). Experimental results show that the GAN-based method outperforms the others for the performed classification task, achieving an overall accuracy of 97.25% in the five-way classifications, and area under curve values close to 1 in all five "one class vs. all classes" binary classification problems.

  6. Robust and Invariant Phase Based Local Feature Matching
    Author: Anders Hast
    In Proceedings: 22nd International Conference on Pattern Recognition (ICPR), pages 809-814
    Abstract: Any feature matching algorithm needs to be robust, producing few false positives, but it also needs to be invariant to changes in rotation, illumination and scale. Several improvements are proposed to a previously published Phase Correlation based algorithm, which operates on local disc areas, using the Log Polar Transform to sample the disc neighborhood and the FFT to obtain the phase. It will be shown that the matching can be done in the frequency domain directly, using the Chi-squared distance, instead of computing the cross power spectrum. Moreover, it will be shown how combining these methods yields an algorithm that sorts out a majority of the false positives. The need for a peak to sub lobe ratio computation in order to cope with sub pixel accuracy will be discussed, as well as how the FFT of the periodic component can enhance the matching. The result is a robust local feature matcher that is able to cope with rotational, illumination and scale differences to a certain degree.
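The chi-squared distance used for comparing descriptors directly in the frequency domain has a common generic form for non-negative vectors, sketched below. This is not the paper's exact implementation; the `eps` guard against division by zero is my addition.

```python
def chi_squared_distance(p, q, eps=1e-12):
    """Chi-squared distance between two non-negative descriptor
    vectors: small when the descriptors agree, larger when mass sits
    in different bins. The 0.5 factor is a common convention."""
    return 0.5 * sum((a - b) ** 2 / (a + b + eps) for a, b in zip(p, q))
```

Two candidate feature descriptors would be accepted as a match when this distance falls below a threshold tuned on training data.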

  7. Towards Automatic Stereo Pair Extraction for 3D Visualisation of Historical Aerial Photographs
    Author: Anders Hast
    In Proceedings: International Conference on 3D Imaging (IC3D), 8 pages
    Abstract: An efficient and almost automatic method for stereo pair extraction of aerial photos is proposed. There are several challenging problems that need to be taken into consideration when creating stereo pairs from historical aerial photos. These problems are discussed and solutions are proposed in order to obtain an almost automatic procedure with as little input as possible needed from the user. The result is a rectified and illumination corrected stereo pair. It will be discussed why viewing aerial photos in stereo is important, since the depth cue gives more information than single photos do.

  8. Invariant Interest Point Detection Based on Variations of the Spinor Tensor
    Authors: Anders Hast, Andrea Marchetti(1)
    (1) Consiglio Nazionale delle Ricerche, Institute of Informatics and Telematics, Pisa, Italy
    In Proceedings: 22nd International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG), Communication papers proceedings, pages 49-56
    Abstract: Image features are obtained by using some kind of interest point detector, which often is based on a symmetric matrix such as the structure tensor or the Hessian matrix. These features need to be invariant to rotation and to some degree also to scaling in order to be useful for feature matching in applications such as image registration. Recently, the spinor tensor has been proposed for edge detection. It was investigated herein how it also can be used for feature matching and it will be proven that some simplifications, leading to variations of the response function based on the tensor, will improve its characteristics. The result is a set of different approaches that will be compared to the well known methods using the Hessian and the structure tensor. Most importantly the invariance when it comes to rotation and scaling will be compared.

  9. An Evaluation of the Faster STORM Method for Super-resolution Microscopy
    Authors: Omer Ishaq(1), Johan Elf(1,2), Carolina Wählby(1,3)
    (1) Science for Life Laboratory, SciLifeLab, UU
    (2) Dept. of Cell and Molecular Biology, UU
    (3) Imaging Platform, Broad Institute of Harvard and MIT, Cambridge, MA, USA
    In Proceedings: 22nd International Conference on Pattern Recognition, Stockholm, Sweden, pages 4435-4440
    Abstract: Development of new stochastic super-resolution methods together with fluorescence microscopy imaging enables visualization of biological processes at increasing spatial and temporal resolution. Quantitative evaluation of such imaging experiments calls for computational analysis methods that localize the signals with high precision and recall. Furthermore, it is desirable that the methods are fast and possible to parallelize so that the ever-increasing amounts of collected data can be handled in an efficient way. We herein address signal detection in super-resolution microscopy by approaches based on compressed sensing. We describe how a previously published approach can be parallelized, reducing processing time at least four times. We also evaluate the effect of a greedy optimization approach on signal recovery at high noise and molecule density. Furthermore, our evaluation reveals how previously published compressed sensing algorithms have a performance that degrades to that of a random signal detector at high molecule density. Finally, we show how the approximation of the imaging system's point spread function affects recall and precision of signal detection, illustrating the importance of parameter optimization. We evaluate the methods on synthetic data with varying signal to noise ratio and increasing molecular density, and visualize performance on real super-resolution microscopy data from a time-lapse sequence of living cells.
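A greedy optimization approach of the kind evaluated above can be sketched as classic matching pursuit over a dictionary of shifted point spread functions: repeatedly pick the PSF position most correlated with the residual, then subtract its contribution. This toy 1-D version with a Gaussian PSF is an illustrative assumption, much simplified from the compressed-sensing formulations the paper evaluates.

```python
import math

def psf(center, n, sigma=1.0):
    """Sampled 1-D Gaussian point spread function, normalized to unit
    L2 norm so inner products act as correlation scores."""
    col = [math.exp(-0.5 * ((i - center) / sigma) ** 2) for i in range(n)]
    norm = math.sqrt(sum(c * c for c in col))
    return [c / norm for c in col]

def matching_pursuit(signal, n_molecules):
    """Greedy sparse recovery: repeatedly pick the PSF position most
    correlated with the residual and subtract its contribution."""
    n = len(signal)
    atoms = [psf(c, n) for c in range(n)]
    residual = list(signal)
    found = []
    for _ in range(n_molecules):
        scores = [sum(a * r for a, r in zip(atom, residual))
                  for atom in atoms]
        best = max(range(n), key=scores.__getitem__)
        found.append(best)
        residual = [r - scores[best] * a
                    for r, a in zip(residual, atoms[best])]
    return sorted(found)

# a synthetic frame: two well-separated molecules at positions 5 and 20
n = 32
signal = [0.0] * n
for center in (5, 20):
    for i, v in enumerate(psf(center, n)):
        signal[i] += 3.0 * v
```

For well-separated molecules the greedy picks recover the true positions; the abstract's point is precisely that this breaks down as molecule density grows.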

  10. Geovisualization of Uncertainty in Simulated Flood Maps
    Authors: Nancy Joy Lim(1), Stefan Seipel
    (1) University of Gävle, Sweden
    In Proceedings: IADIS conference in Computer Graphics, Visualization, Computer Vision and Image Processing (CGCVIP), pages 206-214
    Abstract: The paper presents a three-dimensional (3D) geovisualisation model of uncertainties in simulated flood maps that can help communicate uncertain information in the data being used. An entropy-based measure was employed for uncertainty quantification. In developing the model, the Visualisation Toolkit (VTK) was utilised. Different data derived from an earlier simulation study and other maps were represented in the model. Cartographic principles were considered in the map design. A Graphical User Interface (GUI), developed in Tkinter, was also created to further support exploratory data analysis. The resulting model allowed visual identification of uncertain areas, as well as displaying the spatial relationship between the entropy and the slope values. This geovisualisation has still to be tested to assess its effectiveness as a communication tool. However, this type of uncertainty visualisation in flood mapping is an initial step that can lead to its adoption in decision-making when presented comprehensively to its users. Thus, further improvement and development is still suggested for this kind of information presentation.
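For a per-cell probability of being flooded, an entropy-based uncertainty measure has a natural binary-outcome form: zero where the outcome is certain, maximal at p = 0.5. The abstract does not spell out the exact measure used, so the Shannon-entropy sketch below is an assumption about its general shape, not the paper's formula.

```python
import math

def flood_entropy(p):
    """Shannon entropy (in bits) of a binary flooded/not-flooded
    outcome with probability p: 0 when the outcome is certain
    (p = 0 or 1), 1 when it is maximally uncertain (p = 0.5)."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
```

Mapped over a raster of simulated flood probabilities, this yields the per-cell uncertainty surface that a 3D geovisualisation can drape over the terrain.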

  11. Optimizing Optics and Imaging for Pattern Recognition Based Screening Tasks
    Authors: Joakim Lindblad(1), Nataša Sladoje(1), Patrik Malm, Ewert Bengtsson, Ramin Moshavegh(2), Andrew Mehnert(2)
    (1) Faculty of Technical Sciences, University of Novi Sad, Serbia
    (2) Dept. of Signals and Systems, Chalmers University of Technology, Gothenburg, Sweden
    In Proceedings: 22nd International Conference on Pattern Recognition (ICPR), Stockholm, Sweden, pages 3333-3338
    Abstract: We present a method for simulating lower quality images starting from higher quality ones, based on acquired image pairs from different optical setups. The method does not require estimates of point (or line) spread functions of the system, but utilizes the relative transfer function derived from images of real specimens of interest in the observed application. Thanks to the use of a larger number of real specimens, excellent stability and robustness of the method is achieved. The intended use is exploring the influence of image quality on features and classification accuracy in pattern recognition based screening tasks. Visual evaluation of the obtained images strongly confirms usefulness of the method. The approach is quantitatively evaluated by observing stability of feature values, proven useful for PAP-smear classification, between synthetic and real images from seven different microscope setups. The evaluation shows that features from the synthetically generated lower resolution images are as similar to features from real images at that resolution, as features from two different images of the same specimen, taken at the same low resolution, are to each other.

  12. Anti-Aliased Euclidean Distance Transform on 3D Sampling Lattices
    Authors: Elisabeth Linnér, Robin Strand
    In Proceedings: 18th IAPR International Conference on Discrete Geometry for Computer Imagery (DGCI), Siena, Italy, Lecture Notes in Computer Science 8668, pages 88-98
    Abstract: The Euclidean distance transform (EDT) is used in many essential operations in image processing, such as basic morphology, level sets, registration and path finding. The anti-aliased Euclidean distance transform (AAEDT), previously presented for two-dimensional images, uses the gray-level information in, for example, area sampled images to calculate distances with sub-pixel precision. Here, we extend the studies of AAEDT to three dimensions, and to the Body-Centered Cubic (BCC) and Face-Centered Cubic (FCC) lattices, which are, in many respects, considered the optimal three-dimensional sampling lattices. We compare different ways of converting gray-level information to distance values, and find that the lesser directional dependencies of optimal sampling lattices lead to better approximations of the true Euclidean distance.
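The core step of converting a gray-level (area-sampled coverage) value into a sub-pixel distance admits a very simple linear approximation: a sample with coverage a lies roughly 0.5 - a sample spacings outside the boundary (negative values meaning inside). The paper compares several such conversions; the sketch below shows only this simplest one, and the function name is mine.

```python
def subpixel_offset(coverage):
    """Signed distance (in sample spacings) from a sample point to the
    object boundary, estimated from its area-sampled gray value under
    a locally straight-edge assumption. coverage is 1 fully inside the
    object, 0 fully outside, fractional on boundary samples."""
    c = max(0.0, min(1.0, coverage))  # clamp noisy values into [0, 1]
    return 0.5 - c
```

These signed offsets are what the anti-aliased EDT adds to ordinary sample-to-sample distances to reach sub-pixel precision; on BCC and FCC lattices the sample spacing differs per neighbor direction.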

  13. A Graph-Based Implementation of the Anti-Aliased Euclidean Distance Transform
    Authors: Elisabeth Linnér, Robin Strand
    In Proceedings: 22nd International Conference on Pattern Recognition (ICPR), Stockholm, Sweden, pages 1025-1030
    Abstract: With this paper, we present an algorithm for the anti-aliased Euclidean distance transform, based on wave front propagation, that can easily be extended to images of arbitrary dimensionality and sampling lattices. We investigate the behavior and weaknesses of the algorithm, applied to synthetic two-dimensional area-sampled images, and suggest an enhancement to the original method, with complexity proportional to the number of edge elements, that may reduce the amount and relative magnitude of the errors in the transformed image by as much as a factor of 10.

  14. Exact Evaluation of Stochastic Watersheds : From Trees to General Graphs
    Authors: Filip Malmberg, Bettina Selig, Cris L. Luengo Hendriks
    In Proceedings: 18th IAPR International Conference on Discrete Geometry for Computer Imagery (DGCI), Siena, Italy, Lecture Notes in Computer Science 8668, pages 309-319
    Abstract: The stochastic watershed is a method for identifying salient contours in an image, with applications to image segmentation. The method computes a probability density function (PDF), assigning to each piece of contour in the image the probability to appear as a segmentation boundary in seeded watershed segmentation with randomly selected seedpoints. Contours that appear with high probability are assumed to be more important. This paper concerns an efficient method for computing the stochastic watershed PDF exactly, without performing any actual seeded watershed computations. A method for exact evaluation of stochastic watersheds was proposed by Meyer and Stawiaski (2010). Their method does not operate directly on the image, but on a compact tree representation where each edge in the tree corresponds to a watershed partition of the image elements. The output of the exact evaluation algorithm is thus a PDF defined over the edges of the tree. While the compact tree representation is useful in its own right, it is in many cases desirable to convert the results from this abstract representation back to the image, e.g., for further processing. Here, we present an efficient linear time algorithm for performing this conversion.
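For uniformly drawn seeds, the probability that a given tree edge appears as a segmentation boundary has a simple closed form: the edge separates the tree into two components, and it is a boundary exactly when both components receive at least one seed. The sketch below gives this uniform-seed form as an illustration; the exact PDF of the cited method may weight edges differently.

```python
from math import comb

def edge_probability(a, b, k):
    """Probability that a tree edge separating subtrees of a and b
    vertices appears as a watershed boundary when k seeds are drawn
    uniformly without replacement from the a + b vertices. The edge
    fails to be a boundary exactly when all k seeds land on one side."""
    n = a + b
    return 1.0 - (comb(a, k) + comb(b, k)) / comb(n, k)
```

For example, with two seeds on a two-vertex tree the single edge is always a boundary, while splitting four vertices 2/2 with two seeds gives probability 2/3.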

  15. A Structural Texture Approach for Characterising Malignancy Associated Changes in Pap Smears Based on Mean-Shift and the Watershed Transform
    Authors: Andrew Mehnert(1), Ramin Moshavegh(1), K. Sujathan(1), Patrik Malm, Ewert Bengtsson
    (1) Dept. of Signals and Systems, Chalmers University of Technology, Gothenburg, Sweden
    In Proceedings: 22nd International Conference on Pattern Recognition (ICPR), Stockholm, Sweden pages 1189-1193
    Abstract: This paper presents a novel structural approach to quantitatively characterising nuclear chromatin texture in light microscope images of Pap smears. The approach is based on segmenting the chromatin into blob-like primitives and characterising their properties and arrangement. The segmentation approach makes use of multiple focal planes. It comprises two basic steps: (i) mean-shift filtering in the feature space formed by concatenating pixel spatial coordinates and intensity values centred around the best all-in-focus plane, and (ii) hierarchical marker-based watershed segmentation. The paper also presents an empirical evaluation of the approach based on the classification of 43 routine clinical Pap smears. Two variants of the approach were compared to a reference approach (employing extended depth-of-field rather than mean-shift) in a feature selection/classification experiment, involving 138 segmentation-based features, for discriminating normal and abnormal slides. The results demonstrate improved performance over the reference approach. The results of a second feature selection/classification experiment, including additional classes of features from the literature, show that a combination of the proposed structural and conventional features yields a classification performance of 0.919 ± 0.015 (AUC ± Std. Dev.). Overall the results demonstrate the efficacy of the proposed structural approach and confirm that it is indeed possible to detect malignancy associated changes (MACs) in conventional Papanicolaou stain.

  16. Virus Recognition Based on Local Texture
    Authors: Ida-Maria Sintorn(1), Gustaf Kylberg
    (1) Vironova AB, Stockholm, Sweden
    In Proceedings: 22nd International Conference on Pattern Recognition (ICPR), Stockholm, Sweden, pages 3227-3232
    Abstract: To detect and identify viruses in electron microscopy images is crucial in certain clinical emergency situations. It is currently a highly manual task, requiring an expert sitting at the microscope to perform the analysis visually. Here we focus on and investigate one aspect towards automating the virus diagnostic task, namely recognizing the virus type based on texture once possible virus objects have been segmented. We show that by using only local texture descriptors we achieve a classification rate of almost 89% on texture patches from 15 different virus types and a debris (false object) class. We compare and combine 5 different types of local texture descriptors and show that by combining the different types a lower classification error is achieved. We use a Random Forest Classifier and compare two approaches for feature selection.
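As one concrete instance of the kind of local texture descriptor discussed, a basic 8-neighbour local binary pattern (LBP) histogram can be computed as below. The paper combines five descriptor types; LBP is shown here only as a representative example (on a plain list-of-lists gray image), not as the paper's specific descriptor set.

```python
def lbp_histogram(image):
    """8-neighbour local binary pattern histogram of a 2-D gray image
    given as a list of rows. Each interior pixel gets an 8-bit code:
    one bit per neighbour, set when the neighbour is at least as
    bright as the centre. The 256-bin code histogram is the texture
    descriptor for the patch."""
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = len(image), len(image[0])
    hist = [0] * 256
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = image[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offs):
                if image[y + dy][x + dx] >= c:
                    code |= 1 << bit
            hist[code] += 1
    return hist
```

Histograms from many patches, possibly concatenated with other descriptor types, would then feed the Random Forest classifier.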

  17. The Minimum Barrier Distance - Stability to Seed Point Position
    Authors: Robin Strand, Filip Malmberg, Punam Saha(1), Elisabeth Linnér
    (1) University of Iowa, USA
    In Proceedings: 18th IAPR International Conference on Discrete Geometry for Computer Imagery (DGCI), Siena, Italy, Lecture Notes in Computer Science, vol 8668, pages 111-121
    Abstract: Distance and path-cost functions have been used for image segmentation in various forms, e.g., region growing or live-wire boundary tracing using interactive user input. Different approaches are associated with different fundamental advantages as well as difficulties. In this paper, we investigate the stability of segmentation with respect to perturbations in seed point position for a recently introduced pseudo-distance method referred to as the minimum barrier distance. Conditions are sought for which segmentation results are invariant with respect to the position of seed points and a proof of their correctness is presented. A notion of δ-interface is introduced defining the object-background interface at various gradations and its relation to stability of segmentation is examined. Finally, experimental results are presented examining different aspects of stability of segmentation results to seed point position.
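The minimum barrier distance is built on an unusual path cost: not a sum of step costs but the spread of intensities along the path (maximum minus minimum), minimized over all paths between two points. On images this is computed with Dijkstra-like propagation; the sketch below only illustrates the path cost itself, on hand-picked candidate paths.

```python
def barrier(path_values):
    """Barrier cost of a path: max minus min intensity along it."""
    return max(path_values) - min(path_values)

def minimum_barrier(paths):
    """Minimum barrier distance between two points, given a list of
    candidate paths between them (each a list of intensities).
    A real implementation minimizes over ALL paths via propagation."""
    return min(barrier(p) for p in paths)

# two routes between the same endpoints over an intensity landscape:
flat_detour = [3, 3, 4, 3, 3]  # longer, but stays nearly level
steep_short = [3, 9, 3]        # shorter, but crosses a ridge
```

Note that the long, flat detour wins (barrier 1 versus 6): path length is irrelevant, only the intensity span matters, which is what makes the seed-position stability analysed in the paper a non-trivial question.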

  18. Scribal Attribution using a Novel 3-D Quill-Curvature Feature Histogram
    Authors: Fredrik Wahlberg, Anders Brun, Lasse Mårtensson(1)
    (1) University of Gävle, Sweden
    In Proceedings: International Conference on Frontiers in Handwriting Recognition (ICFHR), Crete, Greece
    Abstract: In this paper, we propose a novel pipeline for automated scribal attribution based on the Quill feature: 1) We compensate the Quill feature histogram for pen changes and page warping. 2) We add curvature as a third dimension in the feature histogram, to better separate characteristics like loops and lines. 3) We also investigate the use of several dissimilarity measures between the feature histograms. 4) We propose and evaluate semi-supervised learning for classification, to reduce the need for labeled samples. Our evaluation is performed on 1104 pages from a 15th century Swedish manuscript. It was chosen because it represents a significant part of Swedish manuscripts of said period. Our results show that only a few percent of the material need labelling for average precisions above 95%. Our novel curvature and registration extensions, together with semi-supervised learning, outperformed the current Quill feature.

  19. Knowledge Based Single Building Extraction and Recognition
    Authors: Julia Åhlén(1), Stefan Seipel
    (1) Dept. of Industrial Development, IT and Land Management, University of Gävle, Sweden
    In Proceedings: WSEAS International Conference on Computer Engineering and Applications, pages 29-35
    Abstract: Building façade extraction is the primary step in the recognition process in outdoor scenes. It is also a challenging task since each building can be viewed from different angles or under different lighting conditions. In outdoor imagery, regions such as sky, trees, and pavement cause interference for successful building façade recognition. In this paper we propose a knowledge-based approach to automatically segment out the whole façade, or major parts of it, from an outdoor scene. The found building regions are then subjected to a recognition process. The system is composed of two modules: a façade segmentation module and a façade recognition module. In the segmentation module, color processing and object position coordinates are used. In the recognition module, Chamfer metrics are applied. In a real-time recognition scenario, the image with a building is first analyzed in order to extract the façade region, which is then compared to a database of feature descriptors in order to find a match. The results show that the recognition rate depends on the precision of the building extraction step, which in turn depends on the homogeneity of the façade colors.
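The Chamfer metric used in the recognition module rests on a distance transform of an edge map: each pixel receives (approximately) its distance to the nearest edge, and a template is then scored by averaging the transform values under its edge points. A classic two-pass 3-4 chamfer distance transform in miniature follows; the integer weights 3 and 4 approximate distances 1 and √2 after division by 3. This is a generic sketch of the technique, not the paper's code.

```python
def chamfer_distance_transform(binary):
    """Two-pass 3-4 chamfer distance transform of a binary edge map
    (1 = edge pixel). A forward raster scan propagates distances from
    the top-left, a backward scan from the bottom-right; dividing the
    result by 3 approximates Euclidean distance to the nearest edge."""
    INF = 10 ** 9
    h, w = len(binary), len(binary[0])
    d = [[0 if binary[y][x] else INF for x in range(w)] for y in range(h)]
    fwd = [(-1, -1, 4), (-1, 0, 3), (-1, 1, 4), (0, -1, 3)]
    bwd = [(1, 1, 4), (1, 0, 3), (1, -1, 4), (0, 1, 3)]
    for y in range(h):                       # forward pass
        for x in range(w):
            for dy, dx, c in fwd:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    d[y][x] = min(d[y][x], d[ny][nx] + c)
    for y in range(h - 1, -1, -1):           # backward pass
        for x in range(w - 1, -1, -1):
            for dy, dx, c in bwd:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    d[y][x] = min(d[y][x], d[ny][nx] + c)
    return d
```

Matching a façade template then amounts to sliding it over the transformed image and keeping the position with the lowest mean distance under the template's edge points.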

  20. Time-Space Visualisation of Amur River Channel Changes Due to Flooding Disaster
    Authors: Julia Åhlén(1), Stefan Seipel
    (1) Dept. of Industrial Development, IT and Land Management, University of Gävle, Sweden
    In Proceedings: International Multidisciplinary Scientific GeoScience Conference (SGEM)
    Abstract: The analysis of flooding levels is a highly complex temporal and spatial assessment task that involves estimation of distances between references in geographical space as well as estimation of instances along the time-line that coincide with given spatial locations. This work aims to interactively explore changes of the Amur River boundaries caused by the severe flooding in September 2013. In our analysis of river bank changes we use satellite imagery (Landsat 7) to extract parts belonging to the Amur River, covering the time interval July 2003 to February 2014. Image data is pre-processed using low-level image processing techniques prior to visualization. The purpose of the pre-processing is to extract information about the boundaries of the river and to transform it into a vectorized format, suitable as input to subsequent visualization. We develop visualization tools to explore the spatial and temporal relationships in the change of the river banks. In particular, the visualization shall allow for exploring specific geographic locations and their proximity to the river/floods at arbitrary times. We propose a time-space visualization that emanates from edge detection, morphological operations and boundary statistics on Landsat 2D imagery in order to extract the borders of the Amur River. For the visualization we use the time-space cube metaphor. It is based on a 3D rectilinear context, where the 2D geographical coordinate system is extended with a time-axis pointing along the 3rd Cartesian axis. This visualization facilitates analysis of the channel shape of the Amur River, enabling conclusions regarding the defined problem. As a result we demonstrate our time-space visualization for the river Amur and, using some amount of geographical point data as a reference, we suggest an adequate method of interpolation or imputation that can be employed to estimate a value at a given location and time.
