Seminars at CBA, Fall 2002
Illusions and optical delusions
During my continuous search for physiological oddities, I have "collected" a number of awesome and funny items that I would like to share. Very little is actually known, however, about how our brains produce these illusions, and only more or less well-founded explanations can be given for certain effects.
Segmentation of fluorescence-labeled cells
On Monday I will talk about segmentation of fluorescence-labeled cells. I will start with some different approaches to figure-ground segmentation, to find what is background and what is cell, and then move on to watershed segmentation, to split the foreground into individual cells. I also hope to discuss some variations on the "landscape" on which to perform the watershed segmentation.
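The watershed step treats the image as a topographic landscape flooded from markers. A minimal sketch of that idea, as a toy priority-flood implementation (illustrative only, not the code used in the talk):

```python
import heapq
import numpy as np

def watershed(landscape, markers):
    """Toy priority-flood watershed: grow the labelled marker pixels
    outward, always expanding from the lowest-valued pixel on the
    flooding front, so each basin is claimed by its own marker."""
    labels = markers.copy()
    heap = []
    rows, cols = landscape.shape
    for r in range(rows):
        for c in range(cols):
            if labels[r, c]:
                heapq.heappush(heap, (landscape[r, c], r, c))
    while heap:
        _, r, c = heapq.heappop(heap)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and labels[nr, nc] == 0:
                labels[nr, nc] = labels[r, c]  # inherit the basin label
                heapq.heappush(heap, (landscape[nr, nc], nr, nc))
    return labels
```

With a ridge-shaped "landscape" separating two markers, the flood splits the image into one region per marker along the ridge.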
Report from ICPR2002, Quebec City
Gunilla Borgefors and Ingela Nyström
We will give a brief report from the International Conference on Pattern Recognition (ICPR2002), held in Canada earlier this month. We will present some statistics and briefly discuss four papers, one from each conference track.
Analysing the convex deficiency for a 3D object
For a non-convex object, its concavity regions (i.e., its convex deficiency, CD) can be used to analyse the shape of the object. The CD can be decomposed into concavities (dents on the surface of the object), cavities (background components enclosed in the object), and tunnels (background passing through the object). Well-known tools for shape analysis, such as distance transforms, thinning, watersheds, and branch tracing, are used for the decomposition and for analysing the structure of the tunnels.
This project was initiated during my stay in Napoli this summer.
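One part of the decomposition can be illustrated easily: cavities are background components that never reach the image border. A 2D sketch assuming SciPy (the actual work is in 3D, and this is only my illustration of the definition above):

```python
import numpy as np
from scipy import ndimage

def find_cavities(obj):
    """Return labels of background components completely enclosed by the
    object: label the background, then discard every component whose
    label also appears somewhere on the image border."""
    bg_labels, n = ndimage.label(~obj)
    border_labels = set(np.concatenate([bg_labels[0], bg_labels[-1],
                                        bg_labels[:, 0], bg_labels[:, -1]]))
    return [i for i in range(1, n + 1) if i not in border_labels]
```

A square object with one interior hole yields exactly one cavity; a solid square yields none.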
Partial volume effect in PET studies
Positron Emission Tomography (PET) is a medical imaging technique based on molecule tracing. PET is used for tracing and measuring the concentration of biologically active molecules labeled with positron-emitting isotopes, so-called tracers, which are injected into humans or animals, or used in a technical set-up in a phantom. The studies mostly concern biomedical processes within living (in-vivo) animals or humans, or in laboratory set-ups of tissues/organs (in-vitro or ex-vivo).
The objectives of PET are mostly concerned with the metabolism, physiology, and functionality of certain organs/tissues. PET also embraces studies related to the measurement of blood flow, and thereby the functionality of different parts of the brain. An important clinical application is the study of metabolism in tumors.
At the moment there are four different types of cameras at the PET center in Uppsala, used for examinations on humans (Siemens-CTI HR+ and Scanditronix/GEMS 4096), on monkeys and rats (Hamamatsu SHR7700), and on rats and mice (Concorde Micro-PET). Data acquisition is done in both 2D and 3D, with the exception of the Micro-PET, which acquires data only in 3D.
The acquisition of data is based on the camera detecting the emitted positrons (counts) from the organs/tissues, and the different geometries (2D or 3D) result in different sensitivity, resolution, and data corrections. The primary data undergo reconstruction, conversion, and analysis. The limited number of counts reaching individual detectors, combined with properties of the reconstruction (filtered back-projection), generates noise in the images, sometimes of up to 25%. This noise, together with the camera's technical limitations (resolution), is the main factor limiting the ability to determine the exact size of structures in organs/tissues, especially when the structures are small.
These drawbacks need to be taken into consideration for the clinical use of PET, and they need to be defined and quantified. One way to measure the actual size of a structure is the so-called 50% method. In our project we found this method to give correct size estimates. Another aspect was to assess the possibility of measuring the correct radioactivity concentration in small objects. A rule of thumb was checked, and the degradation effect of small object sizes was quantified.
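The idea behind a 50%-of-maximum threshold can be shown in 1D: when a structure is wide compared with the camera's point-spread function, the blurred intensity profile crosses half of its maximum exactly at the true edges. A sketch (my illustration, not the PET implementation):

```python
import numpy as np

# A boxcar "structure" blurred by a Gaussian point-spread function.
true_width = 40
profile = np.zeros(200)
profile[80:80 + true_width] = 1.0

x = np.arange(-15, 16)
psf = np.exp(-x**2 / (2 * 3.0**2))
psf /= psf.sum()  # normalise so the blur preserves amplitude

blurred = np.convolve(profile, psf, mode="same")

# The 50% method: count samples above half of the maximum intensity.
estimated_width = int(np.sum(blurred > 0.5 * blurred.max()))
```

For small structures (width comparable to the PSF) the maximum itself is degraded, which is exactly why the concentration measurements mentioned above become unreliable.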
Using Hyperspectral Reflectance Data for Discrimination Between Healthy and Diseased Plants
Hamed Hamid Muhammed
Two methods will be presented and discussed during the seminar:
1. Feature-Vector Based Analysis (FVBA), based on linear transformations, such as PCA and ICA.
2. A nearest neighbour classifier: The correlation coefficient (COR) and the sum of squared differences (SSD) are used as distance measures (between two vectors).
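A minimal sketch of method 2 with both distance measures (illustrative code, not the seminar's implementation):

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences between two feature vectors."""
    return float(np.sum((a - b) ** 2))

def cor_distance(a, b):
    """Turn the correlation coefficient into a distance: 0 when the two
    vectors are perfectly correlated, 2 when perfectly anti-correlated."""
    return 1.0 - float(np.corrcoef(a, b)[0, 1])

def nearest_neighbour(x, references, labels, dist=ssd):
    """Assign x the label of the closest reference vector."""
    distances = [dist(x, r) for r in references]
    return labels[int(np.argmin(distances))]
```

Usage: with reference spectra for "healthy" and "diseased" plants, a new reflectance vector gets the label of whichever reference is closer under the chosen distance.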
An example of 3D paper void analysis
We compare two different methods for analysing the pore structure in 3D voxel images
of paper samples: (a) a scanning-based approach, developed at PFI, Norway, and (b) a method
based on computing watersheds, slightly adapted to paper analysis. As paper is roughly
50% fibres and 50% pores, the properties of a paper sheet depend very much on the pore
network, which is therefore of interest to study.
Image Analysis of cast-iron
(Master thesis presentation)
This master thesis discusses the possibility of distinguishing salient features of cast iron using a commercial image analysis system.
There are three different types of cast iron: "ductile iron", "grey iron" and "compacted graphite iron". It is the form of the graphite microstructure that determines the type of cast iron.
Cast iron is presently classified in accordance with the SS-EN ISO 945 standard, which is a visual comparison between reference photos and test samples. Since the standard relies on manual visual comparison, there is a risk that the assessment, to a certain extent, depends on the operator. In order to classify the composition of cast iron more objectively, as well as to provide a number of measurable parameters for cast iron, image analysis could be a powerful tool. In this study, it was possible to use image analysis to distinguish the different types and sizes of the graphite particles, but not their distribution.
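One example of a measurable parameter that separates rounded nodular graphite from elongated flakes is the circularity shape factor 4πA/P² (an illustration of what such a parameter could look like; the thesis may use a different parameter set):

```python
import math

def circularity(area, perimeter):
    """Shape factor 4*pi*A/P**2: exactly 1.0 for a perfect circle
    (as in nodular/ductile graphite), approaching 0 for long thin
    flakes (as in grey iron)."""
    return 4 * math.pi * area / perimeter ** 2
```

For a circle of radius 5 the factor is 1.0; for a 10-by-1 rectangle (area 10, perimeter 22) it drops to about 0.26, so a simple threshold already separates the two shape families.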
Bottom reflectance and colour variability in underwater scenes
I will talk about how bottom reflectance and water depth contribute to the
variability of colours observed in images taken underwater. The reflectance
properties of sand, brown algae and other bottom types are measured using
a calibration object with a known reflectance profile, which makes it possible
to estimate the spectral reflectance of each pixel of the imaged surface.
The acquired images are used to predict changes in colour due to water depth.
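The calibration-object approach can be sketched as a per-band ratio: imaging an object of known reflectance under the same illumination and water column cancels the unknown light field (a simplified model of the idea above; it ignores depth-dependent attenuation between object and bottom):

```python
import numpy as np

def estimate_reflectance(pixel_values, calib_pixel_values, calib_reflectance):
    """Per-band ratio estimate: scale the observed pixel values by the
    ratio between the calibration object's known reflectance and its
    observed pixel values in the same image."""
    return pixel_values / calib_pixel_values * calib_reflectance
```

For example, a bottom pixel recording half the signal of a 50%-reflectance calibration target in some band is estimated at 25% reflectance in that band.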
Automatic identification and classification of Cytomegalovirus capsids in electron micrographs
I will talk about how I construct templates for three different capsid
classes and use these to segment and classify capsids in electron
micrographs. The segmentation and classification method is based on
template matching and thresholding. I will also discuss some possible
methods to improve the classification result, and how these can
hopefully be used to remove the manual thresholding step.
Error detecting and error correcting codes
When a telex message is transmitted over a long distance there may be some
interference, and the message may not be received as it was sent. In these
circumstances we need to be able to detect and, if possible, correct errors.
How should these codes be constructed in order to be able to detect and even
correct errors? One answer to this question will be given in the seminar:
the best possible code that can correct one error.
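A classical code of this kind is the Hamming (7,4) code, which corrects any single-bit error using only three parity bits. A sketch (my illustration; the seminar may present the construction differently):

```python
def hamming_encode(d1, d2, d3, d4):
    """Hamming (7,4): three parity bits protect four data bits; the
    codeword layout is p1 p2 d1 p3 d2 d3 d4, with parity bits at
    positions 1, 2 and 4 (each covering the positions whose binary
    representation contains that power of two)."""
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming_decode(codeword):
    """Recompute the three parity checks; the syndrome spells out the
    1-indexed position of a single error (0 means no error). Flip that
    bit and read off the four data bits."""
    c = list(codeword)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    error_pos = s1 + 2 * s2 + 4 * s3
    if error_pos:
        c[error_pos - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]
```

Every 4-bit message survives any single flipped bit in its 7-bit codeword, which is the best possible rate for a one-error-correcting code of this length.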
Evaluation of Swedish lake water quality modeling from remote sensing
A simple bio-optical model has been developed and used together with historical
water quality measurements from Lake Mälaren, Sweden, to construct an algorithm
for retrieval of chlorophyll concentrations from remote sensing data.
The algorithm has previously been applied to CASI data from Lake Mälaren and the
result was promising, but the model needed further validation on data from other
lakes in order to investigate its generality. CASI data was also collected over
the nearby Lake Erken and in this paper the same algorithm was applied to that
data set and evaluated using ground truth measurements.
Implementation and Evaluation of Image Analysis Based Seed Classification and Sorting System
An automatic seed sorting and classification system was implemented: image processing hardware was assembled and software was developed. Sorting divided 850 g seed samples into two fractions, containing typical and non-typical seeds. When analysing a seed sample of a certain species, the aim was to sort seed kernels from other species into the fraction of non-typical seeds. Classification was based on statistical analysis of 22 morphological, colour and texture features extracted from seed kernel images. A decision algorithm, based on linear discriminant analysis and Mahalanobis distances, was used. Seed samples from eleven varieties of rye, barley, triticale, wheat and oats were classified into five classes. Classification into six and eleven classes was also evaluated, and a lower accuracy than with five classes was found. Canonical discriminant analysis was performed, and the features were arranged in order of significance by stepwise discriminant analysis. A misclassification of 2% was found for classification from and into rye, barley, wheat and oats. Misclassification of barley and oats into triticale was 2.5% and 4.1%, respectively. These results were achieved by sorting less than 10% of the sample as non-typical seeds. The goals for the project were met, and the system will be developed into a commercial product.
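The decision rule can be sketched as: assign a kernel to the class whose feature distribution it is nearest to in Mahalanobis distance (a simplified illustration of the algorithm described above; the `class_stats` structure is my own):

```python
import numpy as np

def mahalanobis_sq(x, mean, inv_cov):
    """Squared Mahalanobis distance from feature vector x to a class
    distribution with the given mean and inverse covariance."""
    d = x - mean
    return float(d @ inv_cov @ d)

def classify(x, class_stats):
    """class_stats maps label -> (mean vector, inverse covariance
    matrix); pick the class whose distribution is nearest."""
    return min(class_stats,
               key=lambda label: mahalanobis_sq(x, *class_stats[label]))
```

Unlike plain Euclidean distance, this weights each feature by how much it varies within the class, so a noisy feature does not dominate the decision.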
A short overview on 2 talks at the MIA'2002 conference
At the beginning of September I attended the "Mathematics and Image Analysis"
(MIA'2002) conference in Paris. Tomorrow, I will talk about two presentations
that I thought were particularly interesting.
1. Guillermo Sapiro: "The Art of Geodesics: Theory, Computational Framework, and
2. Olivier Faugeras: "Variational methods for Multimodal Image Matching"
The talks were both one hour long, so I will have trouble even coming close to
explaining the subjects in 20 minutes, but I will at least give you an insight
into the ongoing work on Partial Differential Equations (PDEs) used to tackle
image processing problems.
I will begin my talk by giving some historical and theoretical background,
presenting the heat equation and the relationship between PDEs and scale space
theory. I'll then present the results of applying PDEs to segmentation (Sapiro)
and registration (Faugeras).
I'll end the seminar by showing some very fun images of the latest results
obtained with PDEs in image processing (image restoration, compression, ...).
As usual, I beg you to interrupt me whenever you feel the need for a
clarification, or simply to open a discussion.
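The heat equation mentioned above is easy to demonstrate numerically: iterating an explicit diffusion step on an image produces the Gaussian scale space. A sketch with periodic boundaries (my illustration, not code from the talks):

```python
import numpy as np

def heat_step(u, dt=0.2):
    """One explicit Euler step of the heat equation u_t = laplacian(u),
    using the 5-point stencil and periodic boundaries via np.roll.
    For stability in 2D, dt must be <= 0.25."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
    return u + dt * lap
```

Each step smooths the image while conserving its total intensity; running more steps corresponds to viewing the image at a coarser scale.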
Improved Shadows by Modifying the Shadow Map Technique and the Phong-Blinn Light Model
The human visual system makes use of the information that shadows provide
in order to determine the location of objects. In computer graphics
rendered scenes it can sometimes be difficult to determine whether an
object is floating in the air or is placed on the ground when shadows are
not used. However, shadows are computationally expensive to generate even
with the shadow map technique, which is a quite fast brute force scheme.
Several enhancements (made by others, plus new ideas of our own) that make
shadow mapping both faster and visually more appealing will be discussed.
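At its core, the shadow map technique reduces to a single depth comparison per point: render the scene's depth from the light's viewpoint, then shadow any point that lies farther from the light than the stored depth at its light-space coordinates. A minimal sketch of that test (illustrative; the bias value is an assumption):

```python
def in_shadow(shadow_map, u, v, depth_from_light, bias=1e-3):
    """Basic shadow-map test: the point is shadowed when something
    closer to the light was rendered into the map at the same
    light-space texel. The small bias suppresses the self-shadowing
    artefact known as "shadow acne"."""
    return depth_from_light > shadow_map[v][u] + bias
```

The enhancements discussed in the talk concern how this map is built and sampled (resolution, filtering, bias handling), not the comparison itself.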
Application of Fuzzy Set Theory in Image Segmentation
Application of classical segmentation methods yields hard segmentations,
where the regions are disjoint and have crisp boundaries.
Unfortunately, these techniques do not preserve the inherent
uncertainties associated with real images. They also fail to retain
structural details embedded in the original grey-level distribution
and to overcome noise, blurring, and background variation.
It seems that many of these problems could be solved by using fuzzy,
instead of crisp, segmentation.
The intention of this report is to analyze some of the issues
related to different approaches of applying fuzzy set theory in
the segmentation procedure, i.e., to summarize information about
different ways of obtaining fuzzy segmented images.
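The simplest way to obtain a fuzzy segmented image is to replace a crisp threshold with a membership function, so each pixel gets a degree of belonging to the object instead of a hard 0/1 label. A sketch with one of many possible membership choices (a linear ramp; an illustration, not a method from the report):

```python
import numpy as np

def fuzzy_membership(image, low, high):
    """Linear membership ramp: 0 below `low`, 1 above `high`, and a
    linear degree of belonging in between, preserving the uncertainty
    of pixels near the object boundary."""
    return np.clip((image.astype(float) - low) / (high - low), 0.0, 1.0)
```

A crisp threshold at (low + high) / 2 would force every boundary pixel into one region; the ramp keeps the grey-level information that such pixels are genuinely ambiguous.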
Can coral reefs be monitored from space?
Our coral project is now approaching its end. Next week the final presentation of the results will be given in Stockholm. On Monday, a preview of the coming slide show will show you some results from this research. We have focused on the possibilities and limitations of using remote sensing data for monitoring coral reefs. We have shown that it is possible to separate coral from non-coral bottoms, and that extensive bleaching, as in 1998, can be detected even with medium-resolution satellites like Landsat and SPOT.
How to balance a turbine?
Do you know??? It is your problem if you don't...
A framework for the evaluation of image segmentation algorithms
The Medical Image Processing Group
University of Pennsylvania, Philadelphia, USA
The purpose of this presentation is to describe a framework for
evaluating image segmentation algorithms. Image segmentation consists of
object recognition and delineation. For evaluating segmentation methods,
three factors - precision (reproducibility/reliability), accuracy
(agreement with truth/validity), and efficiency (time taken) - need to
be considered for both recognition and delineation. To assess precision,
we need to choose a figure of merit, repeat segmentation considering all
sources of variation, and determine variations in the figure of merit
via statistical analysis. It is usually impossible to establish true
segmentation. Hence, to assess accuracy, we need to choose a surrogate
of true segmentation and proceed as for precision. In determining
accuracy, it may be important to consider different "landmark" areas of
the structure to be segmented depending on the application. To assess
efficiency, both the computational and the user time required for
algorithm and operator training and for algorithm execution should be
measured and analyzed. Precision, accuracy, and efficiency are
interdependent. It is difficult to improve one factor without affecting
others. Segmentation methods must be compared based on all three
factors. The weight given to each factor depends on application. Some
examples will be given to illustrate how the framework can be used to
assess segmentation methods used in practice.
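A common choice of figure of merit for comparing a segmentation against a surrogate of truth is the Dice overlap (an illustration; the framework itself does not prescribe a specific figure of merit):

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary segmentation masks: 1.0 for
    identical masks, 0.0 for disjoint ones, and twice the intersection
    over the sum of sizes in between."""
    a = a.astype(bool)
    b = b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```

To assess precision, one would repeat the segmentation under all sources of variation and analyze the spread of this value; to assess accuracy, one would compute it against the chosen surrogate of truth.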
All of the following topics
Some aspects of the state of the art in cytometry and brain research
Some impressions from two workshops in California in October 2002
What did Ewert do in California?
Road sign recognition from a moving vehicle
This project aims to survey the current technology for recognising
road signs in real-time from a moving vehicle. The most promising
technology for intelligent vehicle systems is vision sensors and
processing, so this is examined most thoroughly. Different
processing algorithms, and research around the world concerned with
recognition, are investigated. A functioning system has also been
implemented using a standard web camera mounted in a test vehicle.
The system is restricted to speed signs and achieves good performance
thanks to fast but still robust algorithms. Colour information is used
for segmentation, and a model-matching algorithm is responsible for the
recognition. The human-computer interface is a voice saying which sign
has been found.
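The colour segmentation step can be sketched as a per-pixel test that keeps pixels where red clearly dominates, as in the border of a speed sign (the thresholds are illustrative, not the tuned values of the actual system):

```python
import numpy as np

def red_sign_mask(rgb):
    """Crude colour segmentation for red sign borders: keep a pixel
    when its red channel is bright and clearly dominates both the
    green and the blue channels."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    return (r > 100) & (r > 1.5 * g) & (r > 1.5 * b)
```

The resulting binary mask gives candidate regions that the model-matching stage can then verify and classify.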
Hamed Hamid Muhammed
Last modified: Mon Dec 16 14:42:56 MET 2002