Seminars at CBA, Fall 2004
(Please send your e-mail address to me if you wish to join our seminar-reminder e-mail list.)
A Practical Introduction to JPEG 2000
Dr Andrew P. Bradley
Univ of Queensland, Australia
JPEG 2000 is the emerging image compression standard based on wavelet
technology. It has been designed to complement the existing JPEG standard
by providing improved compression performance and offering additional
features such as progressive transmission by resolution, quality, component,
or location; random access; and integrated lossless to lossy compression.
This talk will give an overview of the embedded block coding compression
scheme used in JPEG 2000 and describe how the packet-based code stream
enables true "encode once, decode many" functionality. The talk will then go
on to describe the application of JPEG 2000 to a variety of image sources
(medical, aerial, and compound) and imaging applications (client/server,
database browsing, and region-of-interest coding).
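Progressive transmission by resolution can be illustrated with a toy sketch in the spirit of JPEG 2000's wavelet decomposition (this is not the actual standard or codec): one level of a Haar-like transform splits an image into a half-resolution approximation band (LL) plus detail bands, so transmitting the LL band first already yields a usable low-resolution preview.

```python
# Toy sketch of resolution scalability: the LL band of a Haar-like
# decomposition is simply the 2x2 block averages of the image.

def haar_ll(image):
    """Return the half-resolution approximation band (2x2 block averages)."""
    h, w = len(image), len(image[0])
    return [
        [(image[y][x] + image[y][x + 1] +
          image[y + 1][x] + image[y + 1][x + 1]) / 4.0
         for x in range(0, w, 2)]
        for y in range(0, h, 2)
    ]

image = [
    [10, 10, 20, 20],
    [10, 10, 20, 20],
    [30, 30, 40, 40],
    [30, 30, 40, 40],
]
preview = haar_ll(image)   # the "progressive by resolution" first stage
print(preview)             # [[10.0, 20.0], [30.0, 40.0]]
```

Repeating the transform on the LL band gives successively coarser previews, which is what lets a JPEG 2000 decoder stop at any resolution level.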
Resolution pyramids on the FCC and BCC grids
Partitionings suitable for resolution pyramids are found on the
face-centered cubic (FCC) grid and the body-centered cubic (BCC) grid.
These partitionings have properties similar to the oct-tree partitioning
on the cubic grid, and are therefore well suited for the multiscale-representation
methods developed for the cubic grid.
Multiscale representations of images are constructed using different
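The oct-tree partitioning on the ordinary cubic grid, which the FCC and BCC partitionings are designed to mimic, can be sketched as follows (a minimal illustration, not the talk's construction): each pyramid level merges 2x2x2 blocks of voxels into one parent cell, halving every dimension.

```python
# Minimal oct-tree-style resolution pyramid on the cubic grid:
# each level replaces every 2x2x2 voxel block with its average.

def octree_level(volume):
    """Merge 2x2x2 voxel blocks into their average, halving each dimension."""
    d, h, w = len(volume), len(volume[0]), len(volume[0][0])
    return [
        [
            [sum(volume[z + dz][y + dy][x + dx]
                 for dz in (0, 1) for dy in (0, 1) for dx in (0, 1)) / 8.0
             for x in range(0, w, 2)]
            for y in range(0, h, 2)
        ]
        for z in range(0, d, 2)
    ]

volume = [[[1.0] * 4 for _ in range(4)] for _ in range(4)]  # constant 4x4x4 volume
pyramid = [volume]
while len(pyramid[-1]) > 1:
    pyramid.append(octree_level(pyramid[-1]))
print([len(level) for level in pyramid])  # [4, 2, 1]
```

On the FCC and BCC grids the challenge is to find partitionings with this same parent/child structure, since 2x2x2 blocks are not directly available there.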
An automatic method for acquiring paper cross section images
Master thesis presentation
The Stora Enso research centre in Falun performs research on paper. Small cross sections of paper, about 0.2 mm long, are sequentially viewed and photographed using a scanning electron microscope (SEM). The images from the SEM are stored on the hard drive of the microscope computer and then analyzed. Each paper sample produces about 100 to 150 images. To acquire each image, the operator of the microscope has to perform certain image settings and operations through clicks and scrolling with the computer mouse. A series of about 100 sample images therefore demands a large amount of time, since many operations must be performed per image. In order to reduce the workload for Stora Enso staff, by lessening the need for staff to be present at the SEM during image acquisition, and also to speed up the analysis process, a method to automate the image acquisition process has been created.
To accomplish this task, image analysis and computer communication were used as the main tools. Image analysis acts as a virtual eye, determining characteristic and/or critical points in an SEM image for decision making. Computer communication is used to command the SEM to perform certain actions. Combining these tools, a program that acquires images without human intervention was created and hidden behind a user-friendly interface.
The program was tested on many different kinds of paper. It could be concluded that the total time required to acquire a series of images was drastically reduced. A series of 100 images of any sample type can now be acquired in a little less than an hour from the moment the first scan of an image starts, and no staff need to be present at the SEM during that time.
Registration of tomographic animal volume images, from microPET, CT and MRT
Master thesis presentation
Medical imaging is of great importance in many fields, both in clinical
work and in medical research. Different imaging systems give different
information about the patient, which is why it is valuable to combine, or
register, different images with one another. Such co-registrations allow
precise comparisons of organs, anatomical regions, and pathological
processes between modalities. Many methods and programs have been developed
for registration of human images, mostly of brains, while little work has
been done on full-body animal images. Registration of animal images is of
interest since many medical experiments are performed on rats or monkeys.
This report describes the construction of a program performing
registrations on animal images from three different modalities: PET, CT,
and MRT. The basis of the work was another program, created for
registration of human brain images. Changes and additions have been made
to meet the requirements of this new field of application. Both global
and local registrations have been used.
Three experiments were performed to test the final program. The test
images were from rats and a marmoset monkey. The experiments showed that a
method developed for human brain images can be used for full-body
registration of animal images with satisfactory results, especially when
the images are from the same animal. When the images are from different
individuals the results are somewhat poorer, but still fairly good.
A short introduction to VTK
The Visualization ToolKit (VTK) is one of the most widely used
software libraries for visualization. It is open source and contains
many state-of-the-art visualization algorithms. Volume images are easily
visualized with VTK through, e.g., slice planes, isosurfaces, and volume
rendering. There are also several image processing routines available.
As a VTK user you can choose between C++ programming and scripting in Tcl.
In this talk I will try to give a mini-tutorial on how the visualization
pipeline is implemented in VTK, and how you can use VTK at CBA. I will
also provide you with some (hopefully useful) code examples. More info
is available on http://www.cb.uu.se/~erik/vtk/ (under construction).
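The demand-driven pipeline idea at the heart of VTK can be sketched with a plain-Python toy (this mimics the concept only, not the actual VTK API or its class names): each stage pulls data from its input when updated, so data flows source -> filter -> mapper only on demand.

```python
# Toy model of a demand-driven visualization pipeline in the VTK spirit:
# calling update() on the downstream end pulls data through every stage.

class Source:
    def update(self):
        return [1, 2, 3, 4]               # pretend this reads a volume image

class ThresholdFilter:
    def __init__(self, input_stage, level):
        self.input_stage, self.level = input_stage, level
    def update(self):
        data = self.input_stage.update()  # pull from upstream on demand
        return [v for v in data if v >= self.level]

class Mapper:
    def __init__(self, input_stage):
        self.input_stage = input_stage
    def update(self):
        return "rendering %d cells" % len(self.input_stage.update())

mapper = Mapper(ThresholdFilter(Source(), level=3))
print(mapper.update())  # rendering 2 cells
```

In real VTK the same pattern appears as SetInputConnection/Update calls on sources, filters, and mappers; the talk will show the actual API.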
Applying Multivariate analysis on dynamic and noisy PET data
Positron Emission Tomography (PET) is at the forefront of molecular
imaging for visualization of physiological/functional information about
different pharmaceuticals in a target area, in vivo and in vitro. The
quality of the images is relatively poor because such images are generally
noisy, which makes the analysis difficult. It is essential to understand
the behaviour and properties of the noise, such as its variance (magnitude)
and texture (correlation). Several methods have been employed for
understanding and improving the quality of PET images, and several methods
have also been proposed for the analysis of PET studies, e.g., compartmental
models, Fourier analysis, and multivariate analysis such as PCA.
In general, PCA is a relatively simple method used for data/dimension
reduction and for generating high-contrast images that make feature
identification easier, since it does not include any model-based
restrictions and is independent of any kinetic model. However, PCA is a
data-driven technique that has difficulty separating signal from noise,
especially in noisy PET images. Several pre-normalization methods have been
proposed that provide better results, and these are the focus of this seminar.
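A small pure-Python sketch shows the mechanics of PCA and why pre-normalization matters (toy numbers, not PET data): the leading principal component is found here by power iteration on the sample covariance matrix, and a variable with larger variance dominates that component unless the variables are first rescaled.

```python
# Minimal PCA on a 2-variable data set via power iteration on the
# sample covariance matrix.

def covariance(data):
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    cxx = sum((x - mx) ** 2 for x, _ in data) / n
    cyy = sum((y - my) ** 2 for _, y in data) / n
    cxy = sum((x - mx) * (y - my) for x, y in data) / n
    return [[cxx, cxy], [cxy, cyy]]

def leading_component(cov, iterations=100):
    """Power iteration: repeatedly apply cov and renormalize."""
    v = [1.0, 1.0]
    for _ in range(iterations):
        w = [cov[0][0] * v[0] + cov[0][1] * v[1],
             cov[1][0] * v[0] + cov[1][1] * v[1]]
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = [w[0] / norm, w[1] / norm]
    return v

data = [(1.0, 10.0), (2.0, 20.0), (3.0, 30.0)]   # second variable is 10x the first
v = leading_component(covariance(data))
print(abs(v[1]) > abs(v[0]))   # True: the high-variance variable dominates
```

Dividing each variable by an estimate of its noise standard deviation before PCA equalizes such scale effects, which is the kind of pre-normalization the seminar discusses.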
Investigating an Image Analysis Approach for Characterisation and Differentiation of Fungal Spores
Master thesis presentation
The presentation will be in Swedish.
Fast surface rendering for interactive volume image segmentation in a haptic environment
Master thesis presentation
Segmentation is a very important step when analyzing medical volume
images. The goal of this Master's project is to implement fast
surface rendering for interactive volume image segmentation in a
haptic environment.
The implementation uses a modified Marching Cubes algorithm for the
surface rendering. To make the
implementation efficient the surface rendering has been divided into
two major parts, surface extraction and triangle generation. The
implementation uses surface tracking to extract the iso-surface from
a volume image. This method finds a surface component and extracts
the connected surface by following the connectivity of the surface.
When the surface has been extracted, the shape and location of the surface
are stored for later use by the triangle generation. The triangle
generation uses the stored shape and location of the
surface in conjunction with an efficient caching strategy to make
the creation of a polygon mesh as efficient as possible.
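The surface-tracking step can be sketched in a few lines (a simplified illustration, not the thesis code): starting from one seed voxel on the iso-surface, follow face-connectivity and collect every connected surface voxel, instead of scanning the whole volume as classic Marching Cubes does.

```python
# Sketch of iso-surface tracking: breadth-first traversal over
# face-connected surface voxels starting from a seed.

from collections import deque

NEIGHBOURS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def inside(vol, z, y, x):
    return 0 <= z < len(vol) and 0 <= y < len(vol[0]) and 0 <= x < len(vol[0][0])

def on_surface(vol, iso, z, y, x):
    """A voxel is on the surface if it is >= iso with a below-iso face-neighbour."""
    if vol[z][y][x] < iso:
        return False
    return any(not inside(vol, z + dz, y + dy, x + dx)
               or vol[z + dz][y + dy][x + dx] < iso
               for dz, dy, dx in NEIGHBOURS)

def track_surface(vol, iso, seed):
    """Collect the connected surface component containing seed."""
    surface, queue = {seed}, deque([seed])
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in NEIGHBOURS:
            n = (z + dz, y + dy, x + dx)
            if n not in surface and inside(vol, *n) and on_surface(vol, iso, *n):
                surface.add(n)
                queue.append(n)
    return surface

# 5x5x5 volume containing a solid 3x3x3 block: its surface is the
# 26-voxel shell (everything except the one interior voxel).
vol = [[[0] * 5 for _ in range(5)] for _ in range(5)]
for z in range(1, 4):
    for y in range(1, 4):
        for x in range(1, 4):
            vol[z][y][x] = 1
print(len(track_surface(vol, 1, (1, 1, 1))))  # 26
```

The pay-off is that only surface voxels are ever visited, so the cost scales with the surface area rather than the volume size.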
The implementation of the surface renderer into the haptic
environment makes it possible to use surface rendering and volume
rendering with the aid of a haptic device. Some modelling and
segmentation tools are implemented, e.g., draw, erase, erode, and dilate.
The implementation of the surface renderer proved to be efficient
for arbitrary volume image sizes, and allows interactive
segmentation and modelling for moderate image sizes.
On Colour Reconstruction of Underwater Images Taken in Shallow Waters
Digital cameras are used to an ever-increasing extent for
collecting underwater imagery.
Marine scientists depend on trustworthy and economically defensible tools
for acquiring data under the ocean surface. However, colour images taken with
digital cameras under the water tend to be bluish due to severe absorption
of light at longer wavelengths. In this paper we study the possibilities of
correcting for this colour distortion through image processing. A parameter
that needs to be taken into account is the set of image enhancement
functions built into the camera. These functions, which are kept as trade
secrets by the manufacturers, adapt the camera sensitivity functions to
different light conditions in different colours in order to beautify the image. We
use a spectrometer and underwater images taken by two different digital
cameras to approximate these functions for the appropriate spectral
intervals and are thus able to make a model for the behaviour of the white
balance algorithms in the different digital cameras. This model is used to
pre-process the underwater images so that the red, green, and blue channels
show correct values before the images are corrected for the
effects of the water column through application of Beer's Law. Experimental
results show that the proposed method works well for correcting images
taken at different depths.
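The water-column correction step can be sketched as follows (the attenuation coefficients below are made-up illustration values, not the paper's calibrated ones): by Beer's Law, intensity decays as I = I0 * exp(-c * d) over a path length d, so the correction multiplies each colour channel by exp(+c * d).

```python
# Sketch of per-channel Beer's-Law correction for underwater images.

import math

# Hypothetical per-channel attenuation coefficients (1/m); in water,
# red light is absorbed much faster than green and blue.
ATTENUATION = {"r": 0.60, "g": 0.12, "b": 0.05}

def correct_pixel(pixel, depth_m):
    """Undo Beer's-Law attenuation for a given water path length (metres)."""
    return {ch: min(255.0, value * math.exp(ATTENUATION[ch] * depth_m))
            for ch, value in pixel.items()}

bluish = {"r": 40.0, "g": 90.0, "b": 120.0}     # as recorded at 2 m depth
restored = correct_pixel(bluish, depth_m=2.0)
print(restored["r"] > bluish["r"])  # True: the red channel is boosted most
```

In the paper the crucial extra step is undoing the camera's white-balance processing first, so that the channel values fed into this correction are physically meaningful.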
Algorithms for the Analysis of 3D Magnetic Resonance Angiography Images
Atherosclerosis is a disease of the arterial wall, progressively impairing blood flow as it spreads throughout the body. The heart attacks and strokes that result from this condition cause more deaths than cancer in industrialized countries. Angiography refers to the group of imaging techniques used throughout the diagnosis, treatment planning, and follow-up of atherosclerosis. In recent years, Magnetic Resonance Angiography (MRA) has shown promising abilities to supplant conventional, invasive, X-ray-based angiography. In order to fully benefit from this modality, there is a need for more objective and reproducible methods.
This thesis shows, in two applications, how computerized image analysis can help define and implement these methods. First, by using segmentation to improve visualization of blood-pool contrast enhanced (CE)-MRA, with an additional application in coronary Computerized Tomographic Angiography. We show that, using a limited amount of user interaction and an algorithmic framework borrowed from graph theory and fuzzy logic theory, we can simplify the display of complex 3D structures like vessels. Second, by proposing a methodology to analyze the geometry of arteries in whole-body CE-MRA. The vessel centreline is extracted, and geometrical properties of this 3D curve are measured, to improve interpretation of the angiograms. It represents a more global approach than the conventional evaluation of atherosclerosis, as a first step towards screening for vascular diseases.
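One simple geometric property of the kind measured on an extracted 3D centreline (a hypothetical illustration, not the thesis' actual metric set) is tortuosity: the ratio of the curve's arc length to the straight-line distance between its endpoints.

```python
# Tortuosity of a sampled 3D centreline: arc length / chord length.

import math

def tortuosity(points):
    """points: list of (x, y, z) samples along the centreline."""
    arc = sum(math.dist(points[i], points[i + 1])
              for i in range(len(points) - 1))
    chord = math.dist(points[0], points[-1])
    return arc / chord

straight = [(0, 0, 0), (0, 0, 1), (0, 0, 2)]
bent = [(0, 0, 0), (1, 0, 1), (0, 0, 2)]
print(tortuosity(straight))       # 1.0
print(tortuosity(bent) > 1.0)     # True
```

A straight vessel has tortuosity 1; increasingly winding vessels score higher, which is why such measures are candidates for characterizing diseased arteries.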
We have developed the methods presented in this thesis with clinical practice in mind. However, they have the potential to be useful to other applications of computerized image analysis.
Cognitive vision systems
Hamed Hamid Muhammed
Cognitive vision systems include facilities for "understanding", "knowing"
and "learning". Understanding involves Recognition and Categorization of
objects and events/actions. Interpretation and Reasoning enable
construction of rich semantic models of the environment. Knowing implicitly
specifies a need to consider memory as a common basis for representation
and maintenance of information. Learning involves automatic acquisition of
models and representations that allow the system to operate in an open-ended
fashion beyond its initial specifications.
The above issues can only be addressed in a meaningful manner in the
context of a fully operational system, which implies that it must be
embodied and continuously operating.
Finally, a number of potential applications of cognitive vision are
presented, such as surveillance, industrial inspection, stock-photo
databases, industrial robotics, film, TV and media, and life science.
We will discuss the classical definition of a manifold and see how it can
be transformed to a digital setting. Some of the results in this digital
theory are quite surprising and different from the continuous theory.
No knowledge of topology is assumed.
Segmentation and Classification of Individual Tree Crowns in High Spatial Resolution Aerial Images
By segmentation and classification of individual tree crowns in high spatial resolution aerial images, information about the forest can be extracted automatically. Segmentation is about finding the individual tree crowns and giving each of them a unique label. Classification, on the other hand, is about recognising the species of the tree. Information about each individual tree in the forest increases the knowledge of the forest, which can be useful for management, biodiversity assessments, etc.
Different algorithms for segmenting individual tree crowns are presented and also compared to each other in order to find strengths and weaknesses. All segmentation algorithms developed in this thesis focus on preserving the shape of the tree crown. Regions, representing the segmented tree crowns, grow according to certain rules from seed points. One method starts from many regions for each tree crown and searches for the region that fits the tree crown best. The other methods start from a set of seed points, representing the locations of the tree crowns, to create the regions. The segmentation result varies from 73 to 95% correctly segmented tree crowns, depending on the forest type and method.
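The seed-based region growing underlying these methods can be sketched in 2D (a minimal plain-Python illustration; the thesis' methods additionally constrain the region shape): each labelled seed grows into 4-connected neighbours whose intensity is close enough to the seed's.

```python
# Minimal seeded region growing on a 2D intensity image.

from collections import deque

def grow_regions(image, seeds, tol):
    """seeds: {label: (row, col)}; returns a label image (0 = unlabelled)."""
    h, w = len(image), len(image[0])
    labels = [[0] * w for _ in range(h)]
    queue = deque()
    for label, (r, c) in seeds.items():
        labels[r][c] = label
        queue.append((r, c, label, image[r][c]))
    while queue:
        r, c, label, ref = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and labels[nr][nc] == 0
                    and abs(image[nr][nc] - ref) <= tol):
                labels[nr][nc] = label
                queue.append((nr, nc, label, ref))
    return labels

# Two bright "crowns" separated by a dark gap; one seed per crown.
image = [
    [9, 9, 0, 5, 5],
    [9, 9, 0, 5, 5],
]
labels = grow_regions(image, {1: (0, 0), 2: (0, 3)}, tol=1)
print(labels)  # [[1, 1, 0, 2, 2], [1, 1, 0, 2, 2]]
```

The growth rules (here a simple intensity tolerance) are where shape-preserving constraints can be added, which is the focus of the thesis' algorithms.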
The classification method presented uses shape information from the segments and colour information from the corresponding tree crown in order to determine the species. The method classifies 77% of the species correctly.
Robustness - calculation of spectra
When using multiple-layer color filters it is possible
to approximate spectra by use of the pseudo-inverse. The
seminar is devoted to a discussion of how errors in filtered
multispectral data affect a calculated spectrum. (This sub-project
is a cooperation between Fredrik Bergholm and Hamed H. Muhammed.)
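The pseudo-inverse reconstruction can be sketched with toy numbers (not real filter data): given filter responses A and measurements b, the least-squares spectrum estimate is s = (A^T A)^{-1} A^T b, and perturbing b shows how measurement errors propagate into the calculated spectrum.

```python
# Least-squares spectrum estimate via the normal equations,
# for a tall measurement matrix A with 2 unknowns.

def solve_normal_equations(A, b):
    """Solve min ||A s - b|| for a 2-column matrix A."""
    ata = [[sum(A[i][r] * A[i][c] for i in range(len(A))) for c in (0, 1)]
           for r in (0, 1)]
    atb = [sum(A[i][r] * b[i] for i in range(len(A))) for r in (0, 1)]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    return [(ata[1][1] * atb[0] - ata[0][1] * atb[1]) / det,
            (ata[0][0] * atb[1] - ata[1][0] * atb[0]) / det]

# Three filter measurements of a 2-sample "spectrum".
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [2.0, 3.0, 5.0]                      # noise-free measurements
print(solve_normal_equations(A, b))      # [2.0, 3.0]

noisy = [2.0, 3.0, 5.5]                  # one perturbed measurement
print(solve_normal_equations(A, noisy))  # the error spreads over both samples
```

How strongly such perturbations are amplified depends on the conditioning of A, which is the robustness question the seminar addresses.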
A Short Introduction To ITK
The NLM Insight Segmentation and Registration Toolkit (ITK) is
an extensive open source software library for image analysis. Implemented
in C++ using generic programming, it provides an advanced, yet easy to
use, toolkit containing a large number of image analysis functions. The
toolkit is also wrapped in interpreted languages, such as Tcl, Java, and
Python.
I will give a brief overview of the structure of ITK, and how to use it.
An overview of image analysis in paper science
Paper made of wood fibre is an important material to study. The
network of fibres is a highly complex structure. Until recently, most
studies of paper with image analysis tools have been done on
two-dimensional cross sections.
Visualizing and measuring the internal structure of paper in 3D is
important in many ways, but also a tough problem. Many paper
properties on the macroscopic level correspond directly to properties
of single fibres and how they interact in the network.
Incorporating this information into the manufacturing of paper is not
yet a reality. New ways of describing and exploring paper with image
analysis methods can be valuable for the papermaking industry in refining
the papermaking process.
This presentation will give an overview of the image analysis methods
used in paper science, discuss fundamental problems in imaging paper
samples in 3D, and present some of the most important work in the field.
The discussion will focus on 3D imaging and 3D image analysis of paper
and other fibrous materials.
Hamed Hamid Muhammed
Last modified: Tue Dec 21 12:04:45 MET 2004