Seminars at CBA, Spring 2003
Detection of worn pantographs using computerised image analysis
(Master thesis presentation in Swedish)
In this M.Sc. thesis work it is examined whether it is possible, with
computerised image analysis, to automatically decide from an image of a
pantograph whether the pantograph is worn and in need of maintenance. The
problem comes from an automatic image analysis system which generates
images of trains. The thesis work is part of a project whose purpose is to
limit the number of torn-down contact wires. The report describes a method
for doing this and the stability of an implementation of the method.
At the Interface Between Biomedical Optics and Cancer Research
Head of Cancer Imaging Unit, British Columbia Cancer Research Center, Canada
The exploitation of the interaction of light and tissue for early
detection, classification (grading) and treatment (biopsy and
chemoprevention), using the tools of analytical imaging (quantitative
microscopy) and optical techniques (tissue autofluorescence imaging and
spectroscopy), as well as their marriage in in vivo confocal microscopy.
Results will be discussed from ongoing work in the lung and the …
Non-linear modelling of shape
The seminar will take up the non-linearity of shapes as they appear in
natural objects. Do they? We will have a look at subspaces of sampled
shape contours and see that shape variation appears as highly non-linear
foldings of sample clouds. An attempt was made to model such clouds by
introducing a non-linear PCA model formulated in a neural-network manner.
If time allows, a short introduction to kernel PCA will additionally be
given to provide alternative views on non-linear shape modelling.
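As a sketch of the kernel-PCA idea mentioned above, here is a minimal NumPy implementation (not the speaker's code; the RBF kernel and the toy two-circles data are illustrative assumptions):

```python
import numpy as np

def kernel_pca(X, gamma=1.0, n_components=2):
    """Kernel PCA with an RBF kernel (pure-NumPy sketch)."""
    # pairwise squared distances between samples
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    K = np.exp(-gamma * d2)
    # centre the kernel matrix in feature space
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one
    # eigendecomposition; keep the leading components
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    # projections of the training points onto the principal directions
    return vecs * np.sqrt(np.maximum(vals, 0))

# toy "non-linearly folded" data: points on two concentric circles
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 100)
r = np.where(np.arange(100) < 50, 1.0, 3.0)
X = np.c_[r * np.cos(t), r * np.sin(t)]
Y = kernel_pca(X, gamma=0.5)
```

On such data, linear PCA cannot separate the two rings, while the kernelised projection can; this is the kind of alternative view the seminar refers to.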
Using a smart camera for inspection of topography and gloss of shiny objects
(Master thesis presentation in Swedish)
In the manufacturing industry it is often necessary to know that the topography and gloss of manufactured items meet certain requirements. This thesis addresses the problem of inspecting shiny surfaces, which most inspection methods in use today cannot handle.
The goal of the work has been to investigate whether a smart camera, which automatically detects lines, can be used for the task. It is shown that information about the object's shape and gloss can be obtained by observing the mirror image of a linear light source in the object's surface. The shape of the observed line gives the shape of the surface, and its intensity gives the gloss. High-speed inspection is made possible by the smart camera IVP Ranger SAH5, which gives a data reduction by a factor of 512 compared with an ordinary sensor of the same size. The method works best on cylindrical objects; concavities cannot be inspected. A major advantage of the method is that it places very low demands on alignment.
The method has been tested on the problem of inspecting uranium-dioxide fuel pellets, and has been shown to meet many of the inspection requirements.
A tool for filtering and visualization of digital images in the Fourier domain
A software application for image filtering in the 2D and 3D
Fourier domain has been developed. The intent is to use the
application as a learning tool in digital image analysis
courses. An introduction to the discrete Fourier
transform (DFT) is given, with an emphasis on how it is applied
to images. Important properties of the DFT are exemplified, and
it is described how filters are designed in the Fourier
domain to avoid unwanted effects. Butterworth filters and
Gaussian filters are specially considered.
A method for visualizing the phase spectrum
of the Fourier transform is presented. The visualization method
makes use of the HSI color model, and it makes it possible
to display both the amplitude spectrum and the phase spectrum
in the same image to facilitate interpretation.
An overview of the implementation is
given, and it is shown how filters created using the software
can be successfully applied to 2D and 3D images. A user's manual is
included. To allow for future extensions
of this work, literature on quadrature filters and filter
optimization has been studied and is given as references.
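The Butterworth case mentioned above can be sketched in a few lines of NumPy (a minimal illustration of frequency-domain filtering, not the application itself; the cutoff and order values are arbitrary):

```python
import numpy as np

def butterworth_lowpass(shape, cutoff, order=2):
    """Butterworth low-pass transfer function H = 1 / (1 + (D/D0)^(2n)),
    where D is the distance from the DC component in normalised frequency."""
    rows, cols = shape
    u = np.fft.fftfreq(rows)[:, None]
    v = np.fft.fftfreq(cols)[None, :]
    D = np.sqrt(u**2 + v**2)
    return 1.0 / (1.0 + (D / cutoff) ** (2 * order))

def filter_image(img, H):
    """Apply a frequency-domain filter via the 2D DFT."""
    F = np.fft.fft2(img)
    return np.real(np.fft.ifft2(F * H))

img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0                      # a bright square with sharp edges
H = butterworth_lowpass(img.shape, cutoff=0.1, order=2)
smooth = filter_image(img, H)                # edges blurred, mean preserved
```

Because H equals 1 at the DC component, the image mean is preserved, while the smooth roll-off of the Butterworth filter avoids the ringing that an ideal (brick-wall) low-pass filter would produce.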
Computer-aided CT diagnosis of colon cancer -- overview and centre lines
Department of Scientific Computing (TDB), Uppsala University
(Presentation in Swedish)
CTC = CT colonography
CAD = computer-aided diagnosis (sometimes detection)
* A short overview of what "CAD for CTC" is, and why CTC is sometimes
called "Virtual Endoscopy"; goals, problems, solutions
* A study of how fast centre lines can be computed, and what they can be
used for within CTC with/without CAD
* A demonstration of our in-house software
Color image processing
Center of Mathematical Morphology, Fontainebleau, France
The seminar attempts to treat four aspects of digital colour imagery.
- It begins with a critical analysis of the current colour spaces. Many
of these spaces were originally developed for computer graphics
applications, and are not suitable for quantitative image processing (lack
of independence between chromatic and achromatic data, measurements
which do not satisfy the triangle inequality...). Some changes are
proposed for improving the current spaces, and new spaces are introduced.
- The second point, more specific, deals with the hue, i.e., with data
that are defined on the unit circle. The circular structure of the unit
circle may seem contradictory to the suprema and infima required
by lattices. However, one can overcome the contradiction either by
accepting increment-based operators only, by introducing cluster
criteria, or finally by using the hue as a way to label the space.
- The algorithms which are based on the product lattice of the three
colours RGB introduce spurious extra colours. This effect
disappears when the lattice under study is totally ordered (e.g., by a
lexicographic order). To what extent do we need a total order, and if so,
which one? These questions are approached within the scope of filtering.
- The segmentation techniques generally involve gradients. Consequently,
the first part of the lecture is devoted to introducing various colour
gradients and increments, and to comparing their performance. The
second part applies to colour images the segmentation techniques based on
watersheds and on connections. The last part of the lecture concerns
multispectral data, such as those obtained in polarised microscopy.
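The false-colour effect of the marginal (product-lattice) ordering, and how a total order avoids it, can be shown with a toy two-pixel example (my own illustration, not from the lecture):

```python
import numpy as np

# two neighbouring pixels: a red-ish one and a green-ish one
pixels = np.array([[200, 10, 10],
                   [10, 200, 10]])

# marginal (product-lattice) infimum: channel-wise minimum.
# The result [10, 10, 10] is a colour not present in the input: a false colour.
marginal_inf = pixels.min(axis=0)

# lexicographic infimum: compare R first, then G, then B.
# np.lexsort takes the *last* key as the primary one, hence the (B, G, R) order.
order = np.lexsort((pixels[:, 2], pixels[:, 1], pixels[:, 0]))
lex_inf = pixels[order[0]]        # always one of the input pixels
```

Under the lexicographic total order every infimum (and hence every erosion built on it) returns an existing colour, which is exactly why total orders are attractive for colour morphology.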
Tessellating the plane with regular polygons
Tessellations of the plane by regular polygons were constructed already
by Ura-Kaipa* and his contemporaries. The first systematic treatment of
the regularity of such tessellations was presented by Johannes Kepler in 1619,
and is still valid. During the last century much new work was added to
further classify these patterns. I will talk about possible patterns and
the classification of patterns from a geometrical point of view, and will show a
considerable number of patterns.
* Read Heidenstam: Svenskarna och deras hövdingar 1.
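Kepler's starting point, the vertex figures, can be enumerated mechanically: k regular polygons with n_i sides fit exactly around a point iff their interior angles, (1 - 2/n_i)·180°, sum to 360°, i.e. Σ 2/n_i = k - 2. A small Python search (my own sketch, not part of the talk) recovers the classical 17 combinations, of which only 11 extend to edge-to-edge tilings of the whole plane:

```python
from fractions import Fraction

MAX_SIDES = 100   # ample: the largest polygon in any solution is the 42-gon

def vertex_figures():
    """Enumerate multisets of regular polygons that fit exactly around a
    point: k polygons with n_i sides fit iff sum(2/n_i) == k - 2."""
    found = []

    def search(k_left, min_n, remaining, combo):
        if k_left == 0:
            if remaining == 0:
                found.append(tuple(combo))
            return
        for n in range(min_n, MAX_SIDES + 1):
            f = Fraction(2, n)
            if f * k_left < remaining:   # terms only shrink from here on
                break
            if f <= remaining:
                search(k_left - 1, n, remaining - f, combo + [n])

    for k in range(3, 7):                # 3 to 6 polygons can meet at a point
        search(k, 3, Fraction(k - 2), [])
    return found

figs = vertex_figures()   # 17 combinations, e.g. (3, 12, 12) and (4, 8, 8)
```

Exact rational arithmetic (`Fraction`) is used so that the angle condition is tested without floating-point error.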
2 months in Delft trying to understand how to deal with 4D images
A new challenge: from 3D to 4D images! I will talk about what I was working on in Delft during the autumn. It concerns reducing the amount of data in a 4D image (it can also be called skeletonization :-) ). I will describe what we have found out so far and what the difficulties are compared to working with 3D images.
The algorithm will be used in a project which aims to "develop an integrated approach to visualize and quantitatively analyze spatial processes in living cells and tissues" (via time series of 3D images, primarily from confocal scanning microscopy). I will describe how a reduction algorithm can be useful for this purpose.
And I will have to say some nice words about the group I was working in, the Pattern Recognition Group, TU Delft.
Creating synthetic PET images and using the auto-correlation function (ACF) to verify the precision of the reconstruction algorithm
Positron emission tomography (PET) is a medical imaging technique based on molecule tracing. PET is used for tracing and measuring the concentration of biologically active molecules labeled with positron-emitting isotopes, so-called tracers, which are injected into humans or animals or used in a technical set-up in a phantom. The studies mostly concern biomedical processes within living (in vivo) animals or humans, or in laboratory set-ups of tissues/organs (in vitro or ex vivo).
The objectives of PET are mostly concerned with the metabolism, physiology and functionality of certain organs/tissues. An important clinical application is the study of metabolism in tumours.
At the moment there exist four different types of cameras at the PET centre in Uppsala, which are used for examinations on humans (Siemens-CTI HR+ and Scanditronix/GEMS 4096), on monkeys and rats (Hamamatsu SHR7700), and on rats and mice (Concorde Micro-PET). Data acquisition is done in 2D or 3D, with the exception of the Micro-PET, which acquires data only in 3D.
The acquisition of data is based on the camera detecting the emitted positrons (counts) from the organs/tissues, and the different geometries (2D or 3D) result in different sensitivity, resolution and data corrections.
The collected raw data (sinograms) are reconstructed and converted for further visualisation and image analysis. The limited number of counts in individual detectors, combined with properties of the reconstruction (filtered back-projection), generates noise in the images, sometimes up to 25%.
One way of understanding how PET images are created is to create synthetic PET images, so as to have control over what is being analysed and, among other things, how correlated the noise is.
One way to verify the reconstruction algorithm used for creating the PET images is to apply the auto-correlation function to the created PET images in 2D/3D, to see the properties of the filter that is used; in other words, whether the reconstructed image is reliable or not.
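The ACF check can be sketched with the Wiener-Khinchin theorem: the autocorrelation is the inverse DFT of the power spectrum. A minimal NumPy illustration (my own sketch, with synthetic noise standing in for reconstructed PET data):

```python
import numpy as np

def autocorrelation(img):
    """Normalised 2D autocorrelation via Wiener-Khinchin:
    ACF = IFFT(|FFT(img - mean)|^2), with the zero-lag peak scaled to 1."""
    x = img - img.mean()
    F = np.fft.fft2(x)
    acf = np.real(np.fft.ifft2(F * np.conj(F)))
    return acf / acf[0, 0]

rng = np.random.default_rng(1)
white = rng.standard_normal((128, 128))      # uncorrelated noise
acf_white = autocorrelation(white)

# crude stand-in for reconstruction filtering: averaging neighbouring
# pixels introduces correlation between them, visible at lag 1
smooth = 0.5 * (white + np.roll(white, 1, axis=1))
acf_smooth = autocorrelation(smooth)
```

For white noise the ACF is a single spike at zero lag; a filtered image shows a broadened peak, and the width of that peak reflects the reconstruction filter's properties.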
Influence of Bottom Reflectance on a Color Correction Algorithm for Underwater Images
Diminishing the bluishness in digital underwater images caused by negative effects of water column is the aim of a color correction algorithm presented here. I will describe a set of calculations for determining the impact of bottom reflectance on the algorithm's performance. I will also describe the adverse effects of extremely low and high bottom reflectances on the algorithm and how to compensate for these effects.
Image analysis in today's agriculture -- so much more than satellite images
Anna will talk about:
various examples of image analysis applications
milking robots, classification of carcasses, autonomous …
the use of remote sensing
detection of field boundaries from satellite and aerial images
Capturing spatial and spectral information simultaneously - by using WINN
Hamed Hamid Muhammed
Fresh research results will be presented.
Segmentation by Brownian motion
A parameter-free method for segmentation of tree crowns in aerial images
will be presented. The method uses Brownian motion to construct a new image that
is used to produce the final segmentation of the image. Results from two
different image materials will be given in order to show that the result is neither
dependent on scale nor on colour.
Visual tracking in 3-dimensions using single/multiple fixed camera(s) configurations
and some other work being done in image processing at the University of Cape Town
The problem I am working on is that of visual person tracking. I aim to
generate useful information about how people move around inside a room, a
parking lot or a shopping centre given a sequence of images generated by one
or more cameras "looking" at a scene of interest.
Tracking in world coordinates rather than image coordinates offers several
advantages but requires camera calibration information. Obtaining
calibration information manually can be very tedious and so I've been spending some time
working on an automatic camera calibration method. (See attachment).
At the seminar I will present the calibration method, and if we're lucky
some tracking in 3D. I will also speak a bit on what goes on at the University of
Cape Town in the image processing field.
Environmental applications of aquatic remote sensing
Many lakes, coastal zones and oceans are directly or indirectly influenced by human activities. Through the release of vast amounts of substances into the air and water, we are changing the natural conditions at local and global levels. Remote sensing sensors, on satellites or airplanes, can collect image data, providing the user with information about the depicted area, object or phenomenon.
Three different applications are discussed in this thesis. In the first part, we have used a bio-optical model to derive information about water quality parameters from remote sensing data collected over Swedish lakes. In the second part, remote sensing data have been used to locate and map wastewater plumes from pulp and paper industries along the east coast of Sweden. Finally, in the third part, we have investigated to what extent satellite data can be used to monitor coral reefs and detect coral bleaching.
Regardless of application, it is important to understand the limitations of this technique. The available sensors are different and limited in terms of their spatial, spectral, radiometric and temporal resolution. We are also limited with respect to the objects we are monitoring, as the concentration of some substances is too low, or the objects are too small, to be identified from space. However, this technique gives us the possibility to monitor our environment, in this case the aquatic environment, with superior spatial coverage. Other advantages of remote sensing are the possibility of getting updated information, and that the data are collected and distributed in digital form and can therefore be processed using computers.
Research in graphics and visualisation
Abstract is coming soon...
3D Visualisation of Sonar Coverage Data
The aim of this paper is to examine whether it is possible
to display sonar data as volumes and, if possible, to investigate to
what extent the algorithms for displaying the volumes can be extended or
manipulated to show the data from different points of view. In order to
display the sonar data as volumes, the isosurface algorithm, Marching
Cubes, and the direct volume rendering algorithms, 3D texture mapping
and volume ray tracing, are discussed. However, only Marching Cubes and
3D texture mapping have been implemented and compared in a more detailed
way to examine their actual usability for solving the problem. The
results have varied depending on the algorithms used and the purpose of
their usage. The use of Marching Cubes is relatively fast, but it is
more difficult to find usability for this algorithm. On the other hand,
3D texturing is quite slow, but can be used in many different ways. In
conclusion, it is clear that the problem of displaying sonar data as
volumes can be solved.
Grey-level convex hull computation utilized in paper pore analysis
I will present how discrete convex hulls can be computed in 2D
grey-level images, where the grey-level values are interpreted as
heights in a 3D landscape. For these 3D objects, we compute
approximations of their convex hulls using a 3D binary method.
In contrast to other grey-level convex hull algorithms, which produce
results that are convex only in the geometric sense, our convex hull is convex
also in the grey-level sense. I will also discuss an application in which
the structure of paper sheets is analysed through confocal microscopy.
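To give a feel for "convex in the grey-level sense", here is a 1D analogue (my own sketch, not the 3D method from the talk): treat the grey-levels of a scan line as heights and compute the upper convex hull of the profile, i.e. the lowest upper-convex function lying on or above every grey value.

```python
def upper_hull(profile):
    """Grey-level convex hull of a 1D profile: the lowest upper-convex
    function lying on or above the grey-levels, via a monotone-chain
    scan over the (position, height) points."""
    hull = []
    for p in enumerate(profile):
        # pop the last hull point while it lies on or below the chord
        # from the second-to-last hull point to p
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1) * (p[1] - y1) >= (p[0] - x1) * (y2 - y1):
                hull.pop()
            else:
                break
        hull.append(p)
    # interpolate the hull back onto every integer position
    out = []
    for (x1, y1), (x2, y2) in zip(hull, hull[1:]):
        for x in range(x1, x2):
            out.append(y1 + (y2 - y1) * (x - x1) / (x2 - x1))
    out.append(hull[-1][1])
    return out

hull_profile = upper_hull([0, 3, 1, 2, 5, 0])  # lies on or above the input
```

The difference between hull and profile marks the "pores" of the landscape, which is the kind of information the paper-structure analysis exploits.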
Automatic Image Analysis of concrete cracks
Karl Bolin & Johan Helgesson
The objective of the work is to investigate whether image analysis methods
can be used to measure crack changes on large concrete structures.
The thesis is based on a method developed by Prof. Anders Heyden,
Lund University, which calculates the centres of mass of painted objects. With
this method, sub-pixel accuracy of movements can be detected.
A web camera was supposed to be used to acquire pictures of a painted concrete
surface, but images from the camera were not acquired. Simulated pictures with
superimposed white noise were instead used to evaluate the accuracy of the
method. With no noise, the error was 0.008 pixels with an optimal threshold;
with a non-optimal threshold the errors were 0.05-0.2 pixels. With
noise the accuracy deteriorated further, but was still better than 0.2 pixels
at a noise level of 15%. When a pre-processing step with morphology was
implemented, the accuracy was reduced.
The conclusion is that the program is accurate when images with high resolution
and a low level of noise can be retrieved.
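The centre-of-mass idea behind the sub-pixel accuracy can be sketched as follows (a minimal NumPy illustration, not the thesis code; the Gaussian blob and the threshold value are my own choices):

```python
import numpy as np

def centroid(img, threshold):
    """Intensity-weighted centre of mass of above-threshold pixels.
    The weighting is what gives sub-pixel accuracy: the result is a
    real-valued position, not an integer pixel index."""
    w = np.where(img > threshold, img, 0.0)
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    total = w.sum()
    return (ys * w).sum() / total, (xs * w).sum() / total

# synthetic blob centred at the non-integer position (16.3, 24.7)
ys, xs = np.mgrid[0:64, 0:64]
img = np.exp(-((ys - 16.3)**2 + (xs - 24.7)**2) / (2 * 2.0**2))
cy, cx = centroid(img, threshold=0.05)       # recovers ~ (16.3, 24.7)
```

Repeating this with noise added to `img` shows how the recovered position degrades, which is essentially the evaluation performed in the thesis.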
Non-linear filters for multicomponent images + guests
LIS, Grenoble, France
In this seminar, I will present some of the toys I have been playing with during the past few years in the field of image processing. I will more specifically focus on the extension of some non-linear filters (order filters, morphological operators) to the case of multicomponent images. These filters are based on the definition of an ordering relation; therefore their extension to the vector case is not straightforward. I will also present some of the applications I have been working on (seismic data processing, sonar image analysis, satellite remote sensing).
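One classical way around the missing vector ordering is the vector median filter (Astola et al.): pick the sample in the window that minimises the summed distance to all other samples, so the output is always one of the input vectors. A small NumPy sketch (my own illustration, not necessarily one of the speaker's filters):

```python
import numpy as np

def vector_median(vectors):
    """Vector median: the sample whose summed Euclidean distance to all
    other samples is smallest. Because the output is one of the inputs,
    no new (false) colours are ever introduced."""
    V = np.asarray(vectors, dtype=float)
    d = np.linalg.norm(V[:, None, :] - V[None, :, :], axis=2)
    return V[np.argmin(d.sum(axis=1))]

# a 5-pixel window: four red-ish pixels and one green outlier
window = [(255, 0, 0), (250, 5, 5), (245, 0, 10), (0, 255, 0), (255, 5, 0)]
vm = vector_median(window)     # a red-ish pixel; the outlier is rejected
```

This is the impulse-noise-rejecting behaviour that motivates order filters in the first place, carried over to the multicomponent case.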
Plaque burden estimation on whole-body MRA. First steps: registration and segmentation
On Monday I will talk about what has become my main project: estimation of
atherosclerosis burden on whole-body magnetic resonance angiography images.
A clinical study is being conducted at Uppsala University Hospital to follow,
during a five-year period, a rather large population of healthy volunteers
around 70 years of age. The goal is to get a global insight into their
cardiovascular function, and to evaluate the risk factors for related diseases.
One of the many tests they undergo is a contrast-enhanced whole-body magnetic
resonance angiography. Four sub-volumes are acquired very quickly, to minimize
discomfort: head, thorax and abdomen, upper legs, and lower legs. I will talk
about the first two steps required for the processing of these volumes:
- The four sub-volumes present geometric distortions produced by the acquisition procedure. These have to be corrected as a preprocessing step. I will present possible methods to do that, based on registration and deformable models.
- The objects of interest in these images are the arteries. I will refresh your memory on the anatomy of these vessels, and present our solution to the segmentation problem. It involves user interaction, by placing some landmark points in the volume, and a fast-marching algorithm to actually perform the segmentation.
Cell image analysis; past, present and future
This year marks the 30th anniversary of Ewert starting to work on cell image analysis, in the project that was to develop into CBA; CBA itself has existed for half that time. This double anniversary will be featured in one of the invited presentations at this year's SCIA in Göteborg in early July. Ewert has just started preparing the presentation, with the same title as this seminar. Hopefully enough will be prepared by next Monday to give some idea about the contents for those who are not going to SCIA, and a preview for those who are. And also to get some feedback about what should be given more space and what should be left out.
In the past, cell image analysis has very much been dealing with cancer: screening, diagnosis and grading. At present, there is a strong trend towards new quantitative probes that can show how genes are expressed in the control of protein synthesis, and how this determines the development of cells and organs. In the future, Quantitative Image Cytomics (QIC) may very well be the crucial field that makes all the far-reaching promises of the biotechnology revolution come true. Ewert has just completed a proposal to VR, together with Tomas Gustavsson at Chalmers, in which they propose to develop the toolbox needed for QIC.
Shape description through surface parametrization
On Monday I will give an overview of a method for global shape
approximation that was developed in the mid-1990s by Christian Brechbuhler,
ETH, Switzerland. Some recent contributions by myself will also be
presented.
Is it possible that a thermodynamical description of how molten
metal cools and crystallizes may provide an algorithm that can minimize an
objective function E over an enormous discrete configuration space S? The
answer to that question is yes, and that fact was investigated in a project
in algorithmic problem solving during the fall of 2003. By using the
heuristic method of simulated annealing, the speaker was able to solve
"large" instances of the well-known combinatorial problem, the traveling
salesman problem. The empirical results will be presented, together with
some hands-on experience for the audience in solving the related
Hamiltonian cycle problem.
PS. A technical report, together with source code and references, will be
made available.
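The annealing loop itself is short; here is a minimal self-contained sketch (my own illustration, not the project's code) on a toy instance where the optimum is known, namely points on a circle:

```python
import math
import random

def tour_length(tour, pts):
    """Total length of a closed tour through the given points."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def anneal(pts, T0=10.0, cooling=0.999, iters=20000, seed=0):
    """Simulated annealing for the TSP with 2-opt neighbourhood moves:
    accept worse tours with probability exp(-dE/T), cool geometrically."""
    rng = random.Random(seed)
    n = len(pts)
    tour = list(range(n))
    best, best_len = tour[:], tour_length(tour, pts)
    cur_len, T = best_len, T0
    for _ in range(iters):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # 2-opt reversal
        cand_len = tour_length(cand, pts)
        dE = cand_len - cur_len
        if dE < 0 or rng.random() < math.exp(-dE / T):
            tour, cur_len = cand, cand_len
            if cur_len < best_len:
                best, best_len = tour[:], cur_len
        T *= cooling
    return best, best_len

# 12 points on the unit circle: the optimal tour follows the circle
pts = [(math.cos(2 * math.pi * k / 12), math.sin(2 * math.pi * k / 12))
       for k in range(12)]
best, length = anneal(pts)
```

The temperature T plays exactly the role of physical temperature in the metallurgical analogy: at high T almost any move is accepted (the "melt"), and as T decreases the tour "crystallizes" into a low-energy configuration.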
Volume Rendering of Anatomy Data
As marching cubes and volume rendering techniques have been described
earlier, the main purpose of this seminar is to demonstrate how direct
volume rendering can be used to visualise anatomy data in the field of
treatment planning. I will give a short introduction to cancer, radiotherapy
and our treatment planning system. I will also present some
problems related to direct volume rendering on consumer hardware.
Combining Intensity, Edge, and Shape Information for 2D and 3D Segmentation of Cell Nuclei
I will present the work by Carolina Wahlby and myself on cell nuclei
segmentation in 2D and 3D fluorescence microscopy images, conducted during
the spring of 2003. A segmentation method was constructed which
consists of four main steps. Morphological filtering is used to mark small
regions, or seeds, corresponding to the cell nuclei and the background,
respectively. A watershed transformation segments the image into regions,
each containing exactly one seed. Next, regions with weak borders are
merged. Finally, clusters of nuclei are separated based on the shape of
the cluster. Results of the method applied to both 2D and 3D images will
be presented.
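The seeded watershed at the heart of step two can be sketched as an ordered flooding from the seeds (a deliberately minimal pure-NumPy/heapq version without watershed lines, not the authors' implementation):

```python
import heapq
import numpy as np

def seeded_watershed(img, seeds):
    """Minimal seeded watershed by ordered flooding: pixels are claimed
    from the seed markers in increasing grey-level order (priority queue),
    so each resulting region contains exactly one seed.
    `seeds` is an integer label image with 0 meaning unlabelled."""
    labels = seeds.copy()
    rows, cols = img.shape
    heap = []
    for y, x in zip(*np.nonzero(seeds)):
        heapq.heappush(heap, (img[y, x], y, x))
    while heap:
        _, y, x = heapq.heappop(heap)
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < rows and 0 <= nx < cols and labels[ny, nx] == 0:
                labels[ny, nx] = labels[y, x]     # claim for this region
                heapq.heappush(heap, (img[ny, nx], ny, nx))
    return labels

# two dark "nuclei" separated by a bright ridge
img = np.full((7, 7), 5.0)
img[:, 3] = 9.0                     # the ridge, flooded last
img[2, 1] = img[4, 5] = 1.0         # the two seed minima
seeds = np.zeros((7, 7), dtype=int)
seeds[2, 1], seeds[4, 5] = 1, 2
lab = seeded_watershed(img, seeds)  # left half labelled 1, right half 2
```

In the actual method the seeds come from the morphological filtering step, and the grey-level image being flooded is typically a gradient or inverted-intensity image of the nuclei.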
Identifying and locating tool handles with image analysis
The task is partly the classical bin-picking problem, partly an accurate
determination of position. The objects are handsaw handles, which are to be
picked out of a large box of handles lying in a jumble, and then positioned so
exactly that they can be automatically assembled with the saw blade.
Fast Specular Highlights by Modifying the Phong-Blinn Model
The computation of specular highlights is very expensive, since a power
function is involved. Usually a lookup table is used instead, in order to make it
faster. Schlick proposed an alternative formulation which involves a
division. We propose a new formulation which involves only additions and
multiplications. It also involves an if-statement, which has the advantage
that we can decide when there is no highlight on the polygon. Hence the
computations can be skipped entirely.
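For comparison, here are the standard power-based term, Schlick's rational approximation mentioned above, and an illustrative clamped-polynomial variant built only from additions, multiplications and an if-test. Note that the third function is my own guess at the flavour of such a formulation, not the authors' actual formula:

```python
def phong_specular(cos_a, n):
    """Standard specular term cos(alpha)^n: requires a power function."""
    return cos_a ** n

def schlick_specular(cos_a, n):
    """Schlick's approximation x / (n - n*x + x): one division, no power."""
    return cos_a / (n - n * cos_a + cos_a)

def clamped_poly_specular(cos_a, n):
    """Illustrative add/multiply-only variant (an assumption, NOT the
    paper's formula): clamp t = 1 - n*(1 - x)/2 at zero, then square.
    The clamp test doubles as the highlight-existence test: when it
    fires, the highlight is exactly zero and all further specular work
    on the polygon can be skipped."""
    t = 1.0 - 0.5 * n * (1.0 - cos_a)
    if t <= 0.0:          # no highlight on this polygon
        return 0.0
    return t * t
```

All three agree at the highlight centre (cos_a = 1 gives 1) and fall off with the angle; the differences lie in the shape of the falloff and in the per-pixel cost.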
Hamed Hamid Muhammed
Last modified: Tue May 27 12:04:16 MEST 2003