Seminars at CBA, Fall 2003
Measuring the Length of the Intersection of a Line and a Digital Volume
This Master's thesis investigates methods for calculating the
intersection length of a line and a digital volume. The problem has its
background in the planning of radiation therapy for cancer tumours, so
the object and rays are real objects simulated in a computer. Two
methods were implemented and analyzed: voxel traversal along a digital
line through a digital volume, and a line-polyhedron intersection test
where the polyhedron is the isosurface extracted from the volume data.
The finite precision of a floating-point system makes it hard to
implement robust geometric algorithms. An extension to floating-point
arithmetic was used to compute exact geometric predicates. The former
method turns out to be more efficient than the latter, but also less
accurate.
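The first of the two methods can be sketched as follows. This is a minimal illustration, not the thesis implementation: it walks a ray through a voxel grid in the style of the classical Amanatides-Woo traversal and sums the lengths of the ray segments falling in nonzero voxels. The function name and the unit-cube voxel convention are assumptions of the sketch.

```python
import math

def intersection_length(volume, origin, direction, t_max):
    """Sum the length of the ray origin + t*direction (0 <= t <= t_max)
    spent inside nonzero voxels of `volume` (a nested 3D list).
    `direction` must be unit length; voxel (i, j, k) occupies the
    unit cube [i, i+1) x [j, j+1) x [k, k+1)."""
    nx, ny, nz = len(volume), len(volume[0]), len(volume[0][0])
    pos = [int(math.floor(o)) for o in origin]   # current voxel index
    step, t_next, t_delta = [], [], []
    for a in range(3):
        d = direction[a]
        if d > 0:
            step.append(1)
            t_next.append((pos[a] + 1 - origin[a]) / d)
            t_delta.append(1.0 / d)
        elif d < 0:
            step.append(-1)
            t_next.append((pos[a] - origin[a]) / d)
            t_delta.append(-1.0 / d)
        else:
            step.append(0)
            t_next.append(float("inf"))
            t_delta.append(float("inf"))
    total, t = 0.0, 0.0
    while t < t_max:
        axis = min(range(3), key=lambda a: t_next[a])
        t_exit = min(t_next[axis], t_max)
        inside = (0 <= pos[0] < nx and 0 <= pos[1] < ny
                  and 0 <= pos[2] < nz and volume[pos[0]][pos[1]][pos[2]])
        if inside:
            total += t_exit - t    # length of the ray segment in this voxel
        t = t_next[axis]
        t_next[axis] += t_delta[axis]
        pos[axis] += step[axis]
    return total

# Example: a 3x3x3 object of ones, ray along the x axis through the middle.
vol = [[[1] * 3 for _ in range(3)] for _ in range(3)]
length = intersection_length(vol, (0.0, 1.5, 1.5), (1.0, 0.0, 0.0), 3.0)
```

The robustness issue mentioned above shows up exactly in traversals like this: comparisons between nearly equal `t_next` values can pick the wrong voxel in plain floating point, which motivates the exact-predicate extension.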
Open Boundary Conditions for Shallow Water Waves Propagation
In oceanography, the spatial domains of the problems are
typically so large that it does not make sense to try to model
the whole region. Instead, one must split the spatial domain
into submodels (subgrids) with artificial water boundaries, so-called
open boundaries. These open boundaries cause tricky numerical problems.
A passive open boundary should be constructed so that propagating waves
leave the model grid with as little artificial reflection as possible.
However, the problem is known to be ill-conditioned,
and most available methods tend to give quite substantial
reflections for shallow water waves. One way to reduce these
unwanted reflections is to use intermittent open boundaries,
that is, open boundaries which are turned on and off periodically.
The seminar explains the principles for such boundaries, which
require some specific tricks (strategies) in order to work properly.
The emphasis in the seminar will be on specifying a dynamic
process (for the three involved variables u, v, h) which generates
negligible unwanted artificial wave reflection at open boundaries.
(Knowledge background in oceanography is not necessary.)
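To illustrate the reflection problem itself (not the intermittent boundaries discussed in the talk), the following sketch propagates a 1D pulse with a leapfrog scheme and closes the domain with a textbook first-order Sommerfeld radiation condition. All grid parameters are invented for the demo.

```python
import numpy as np

# Domain and discretization (wave speed c = 1, dx = 1).
N, c, dx = 200, 1.0, 1.0
dt = 0.9 * dx / c            # Courant number r = 0.9
r = c * dt / dx
x = np.arange(N) * dx

# Rightward-travelling Gaussian pulse: h(x, t) = f(x - c t).
f = lambda s: np.exp(-((s - 100.0) / 10.0) ** 2)
h_prev = f(x + c * dt)       # solution at t = -dt
h = f(x)                     # solution at t = 0

for _ in range(300):
    h_next = np.empty_like(h)
    # Interior: standard leapfrog for the wave equation h_tt = c^2 h_xx.
    h_next[1:-1] = (2 * h[1:-1] - h_prev[1:-1]
                    + r**2 * (h[2:] - 2 * h[1:-1] + h[:-2]))
    # Open boundaries: first-order Sommerfeld radiation conditions
    # h_t + c h_x = 0 (right) and h_t - c h_x = 0 (left).
    h_next[-1] = h[-1] - r * (h[-1] - h[-2])
    h_next[0] = h[0] + r * (h[1] - h[0])
    h_prev, h = h, h_next

residual = np.max(np.abs(h))   # what remains after the pulse has left
```

The residual stays small but is not zero; exactly this leftover reflection, which grows for shallow water waves and more complex dynamics, is what the intermittent boundaries of the talk aim to suppress.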
Volume rendering using graphics and haptics
Real-time visualization of volume images is common today.
By using hardware-accelerated 3D textures it is possible to render
large volumes at rates that allow a user to really interact with
the volume. If the hardware also supports stereoscopic rendering,
the three-dimensional impression becomes very convincing. The visualization
part is very important, but interaction in 3D space requires something
more than just a mouse or a keyboard. What if we had a tool with which
we could touch and feel the objects?
The Reachin Desktop display consists of everything mentioned above,
where the main part is the PHANToM desktop haptic device.
The display is constructed so that the physical location of the haptic device is co-located with the graphics.
The aim of my research is to use the tools described above to develop
software that can facilitate image analysis tasks like, e.g.,
semi-automatic segmentation. In this presentation I will give a brief
introduction to haptics and haptic rendering, a description of the
equipment I use, and demonstrations on how graphics and haptics
can be combined to visualize volume data.
Is Fusion Imaging a better solution?
Fusion imaging, also known as "multimodality imaging", is a combination of two different imaging modalities for creating images with a higher level of precision and better diagnostics. A few fusion imaging tools are available on the market today, e.g. SPECT-CT and PET-CT, the latter being the most commonly used equipment today.
Positron emission tomography (PET) is a tomographic and radiological tool based on molecular tracing, used to image the metabolism, physiology, and functionality of certain organs/tissues; the output images are known as functional images. Computed tomography (CT), on the other hand, is tomographic equipment for depicting the anatomy of the object; its output images are known as structural images. The combination of these two powerful tools has created equipment for obtaining both structural and functional information about the object at the same time, which allows for a more precise diagnosis.
Classification of handwritten numbers
This seminar will be about classification. The NIST database
contains about 800 000 images of handwritten characters and digits
obtained from 3100 people. A smaller subset of about one tenth of the
original collection is used to train non-linear networks for the purpose
of correctly labeling images of digits. The experiments are, at the time
of writing, still in progress, so there is a slight chance that the
content of the seminar will be completely different in case of
catastrophic failure.
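As a stand-in for the experiments (the NIST data is not reproduced here), the following sketch trains a small one-hidden-layer network with plain batch gradient descent on synthetic "digit-like" feature vectors. All dimensions, class prototypes, and hyperparameters are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for digit images: 10 classes, each a noisy
# random prototype in a 64-dimensional "pixel" space (8x8 images).
prototypes = rng.normal(size=(10, 64))
labels = rng.integers(0, 10, size=500)
X = prototypes[labels] + 0.3 * rng.normal(size=(500, 64))
Y = np.eye(10)[labels]                     # one-hot targets

# One hidden layer, tanh nonlinearity, softmax output.
W1 = 0.1 * rng.normal(size=(64, 32)); b1 = np.zeros(32)
W2 = 0.1 * rng.normal(size=(32, 10)); b2 = np.zeros(10)

for _ in range(300):                       # plain batch gradient descent
    H = np.tanh(X @ W1 + b1)
    logits = H @ W2 + b2
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)      # softmax probabilities
    G = (P - Y) / len(X)                   # cross-entropy gradient
    gW2 = H.T @ G; gb2 = G.sum(0)
    GH = (G @ W2.T) * (1 - H**2)           # backprop through tanh
    gW1 = X.T @ GH; gb1 = GH.sum(0)
    for p, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        p -= 0.5 * g

accuracy = (P.argmax(axis=1) == labels).mean()
```

On real digit images the same loop applies unchanged, only with the pixel vectors and labels loaded from the database.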
Sea Surface Correction of Ikonos Images to Improve Bottom Mapping in Near-Shore Environments
The presence of quasi-stochastic sea surface effects in Ikonos images compromises reconnaissance of bottom features. To eliminate most of these wave and glint patterns, the authors use the near-infrared band. The spatial distribution of relative glint intensity is then scaled by the absolute glint intensities of each of the visible bands. The result is subtracted from the visible bands, thus filtering out glint effects. This technique offers the potential to use high-spatial-resolution airborne or satellite images of optically shallow water for mapping substrate features.
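The correction step can be sketched roughly as follows. This is a simplified reconstruction on synthetic data, assuming a per-band least-squares scaling of the relative NIR glint, which may differ from the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic scene: a constant bottom signal per visible band, plus a
# shared quasi-random glint pattern that also dominates the NIR band.
shape = (50, 50)
glint = rng.random(shape)                  # relative glint pattern
nir = 0.8 * glint + 0.02                   # NIR sees almost pure glint
bands = {"blue": 0.30, "green": 0.25, "red": 0.15}
scene, corrected = {}, {}
for name, bottom in bands.items():
    scene[name] = bottom + 0.5 * glint     # each band: bottom + scaled glint

# Relative glint: NIR above its glint-free minimum.
glint_rel = nir - nir.min()
for name, img in scene.items():
    # Scale the relative glint to each band via a least-squares slope
    # (the per-band "absolute glint intensity"), then subtract.
    slope = np.cov(img.ravel(), glint_rel.ravel())[0, 1] / glint_rel.var()
    corrected[name] = img - slope * glint_rel
```

After subtraction, the glint variance in each visible band nearly vanishes, leaving the bottom signal.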
Hamed Hamid Muhammed
The goal of this Monday seminar will be to provide a brief introduction to the area of cognitive computer vision.
The purpose of cognitive systems is to produce a response to appropriate percepts.
The response may be a direct physical action into the environment of the system. Such an action will somehow change the state of the system, which allows us to interchangeably say that percepts shall be related to responses or to states.
A response may be delayed in the form of a reconfiguration of internal models in response to the interpreted context of the system. Or it may be to generate in a subsequent step a generalized symbolic representation, which will allow its intentions of actions to be communicated.
As important as the percepts is the dependence upon context. Contextual properties range from low to high levels and have to be handled transparently together with object recognition.
Systems must be able to autonomously adapt to and learn from the environment.
The central mechanism is the perception-action feedback cycle, where in the learning phase, action precedes perception.
Percepts shall be mapped directly onto states or responses, or onto functions involving these.
Symbolic representation is derived mainly from system states and action states.
Segmentation and Visualisation of Human Brain Structures
The focus of the seminar is mainly on the development of segmentation
techniques for human brain structures and on the visualisation of such
structures. The images are both anatomical images (magnetic resonance
imaging (MRI)) and functional images that show blood flow (functional
magnetic resonance imaging (fMRI), positron emission tomography (PET),
and single photon emission tomography (SPECT)). In anatomical images,
the structures segmented are visible as different parts of the brain,
e.g. the brain cortex, the hippocampus, or the amygdala. In functional
images, it is the activity or the blood flow that can be seen.
Grey-level morphology methods are used in the segmentations to make
tissue in the images more homogeneous and to minimise difficulties with
connections to surrounding structures. A method for automatic histogram
thresholding is also used. Furthermore, binary operations are applied,
such as logical operations between masks and binary morphology
operations.
The visualisation of the segmented structures uses either surface
rendering or volume rendering. For the visualisation of thin structures,
surface rendering is the better choice, since otherwise some voxels
might be missed. It is possible to display activation from a functional
image on the surface of a segmented cortex.
A new method for autoradiographic images has been developed, which uses
registration, background compensation, and automatic thresholding to get
faster and more reliable results than the standard techniques give.
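The automatic histogram thresholding method is not specified in the abstract; as one standard possibility, a minimal Otsu threshold (maximizing between-class variance) could look like this. The function name and bin count are choices of the sketch.

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Automatic histogram threshold maximizing between-class
    variance (Otsu's method)."""
    hist, edges = np.histogram(image, bins=bins)
    hist = hist.astype(float)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)                    # pixels at or below each level
    w1 = w0[-1] - w0                        # pixels above each level
    m0 = np.cumsum(hist * centers)
    mu0 = np.where(w0 > 0, m0 / np.maximum(w0, 1), 0)
    mu1 = np.where(w1 > 0, (m0[-1] - m0) / np.maximum(w1, 1), 0)
    between = w0 * w1 * (mu0 - mu1) ** 2    # between-class variance
    return centers[np.argmax(between)]

# Bimodal test data: two well-separated intensity populations.
rng = np.random.default_rng(2)
img = np.concatenate([rng.normal(0.2, 0.05, 5000),
                      rng.normal(0.8, 0.05, 5000)])
t = otsu_threshold(img)
```

On a bimodal histogram like this, the threshold lands in the valley between the two modes.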
Molecular Analysis of Cancer Cells in Context
National Cancer Institute in Frederick, Maryland, USA
Heterogeneity of cell phenotypes and altered cellular organization are
universal in solid tumors. The underlying molecular mechanisms driving
these processes and eventually leading to metastasis can only be studied
from within cells that are also in their natural cellular environment.
Therefore, we are developing a set of microscope-based technologies to
analyze the genotypic and phenotypic properties of individual cells in
their natural context.
We have developed image segmentation tools for cell nuclei (2D and
3D images) and whole cells based on cell surface labels (2D only). In
addition, we have developed quantitative image analysis tools for
analyzing the 3D organization of cells in tissue (2D and 3D),
enumerating FISH signals in intact nuclei (2D and 3D), analyzing the
spatial organization of FISH signals in intact nuclei (2D and 3D),
quantifying telomere size in interphase nuclei (2D only) and quantifying
the co-localization of two proteins in cells.
Algorithms for Applied Digital Image Cytometry
This seminar will be a pre-presentation of the "popular"
presentation of my thesis (which I will defend at 10.15 on October 31 in
Häggsalen, room 10132, at Ångströmlab). The presentation will be in
Swedish, and my aim is to keep it to less than 15 minutes. It will be
based on my thesis, but I will try to make the presentation easy to
understand also for people without a background in image analysis.
Classification of tree crowns into tree species
The purpose of the seminar is to present a method for classifying individual
tree crown segments into the tree species birch, aspen, spruce, and pine.
The classification is done using four different measures, one for each
species. The measures will be defined and motivated. At the end of the
talk, results from the classification will be presented.
Research Related to Biomaterials, i.e. Implants For Application in Man
Carina B Johansson
Carina B Johansson has been working in the field of biomaterials since
1978. Most of her work has been performed at the University of Göteborg,
Sweden. Since August 2003 she has been employed at the University of Örebro,
Department of Technology / Medical Technology.
Her research focuses on the interaction between "technique and biology".
With a growing and ageing population, there is a need for "spare parts"
that are well accepted by the body (biocompatible) and do not provoke
foreign-body reactions. The aim of this presentation is to give a brief
summary of factors that are important for tissue integration of implants,
as well as to discuss some research methods for studying implant
integration.
Shape signature of fuzzy sets based on distance from centroid
In this seminar, I will present the work I did at CBA during the last
spring, in collaboration with Ingela and Natasa. The subject is to
derive shape descriptors for fuzzy objects, thereby skipping the crisp
segmentation step that usually results in some loss of information. The
first results show that, by extending classical descriptors, more accurate
results can actually be obtained with fuzzy objects than with crisp
sets. The problems we encountered are at two different levels: the first
level is the theoretical extension of classical descriptors to the fuzzy
case. The second level is the discrete implementation of these
descriptors. It turned out that two different approaches, theoretically
leading to similar results in the continuous case, led to different
results once implemented in the discrete case.
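A minimal sketch of the kind of descriptor discussed, assuming membership-weighted moments (the actual descriptors of the work may differ): the centroid and the mean distance from it are computed directly on the fuzzy membership image, with no crisp segmentation.

```python
import numpy as np

def fuzzy_centroid(mu):
    """Centroid of a fuzzy set: coordinates weighted by membership."""
    ys, xs = np.mgrid[0:mu.shape[0], 0:mu.shape[1]]
    m = mu.sum()
    return (ys * mu).sum() / m, (xs * mu).sum() / m

def mean_radius(mu):
    """Membership-weighted mean distance from the centroid - one simple
    component of a distance-from-centroid shape signature."""
    cy, cx = fuzzy_centroid(mu)
    ys, xs = np.mgrid[0:mu.shape[0], 0:mu.shape[1]]
    d = np.hypot(ys - cy, xs - cx)
    return (d * mu).sum() / mu.sum()

# Fuzzy disc of radius R at a non-integer position: the membership is a
# linear ramp across the boundary, approximating pixel coverage.
R, n = 8.3, 32
ys, xs = np.mgrid[0:n, 0:n]
d = np.hypot(ys - 15.7, xs - 15.2)
mu = np.clip(R + 0.5 - d, 0.0, 1.0)
cy, cx = fuzzy_centroid(mu)
r_est = mean_radius(mu)       # continuous disc has mean radius 2R/3
```

For a solid disc the continuous mean distance from the centre is 2R/3; the fuzzy estimate recovers this (and the subpixel centroid) accurately without ever thresholding the object.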
A "review" of vascular segmentation
I will talk about vascular segmentation. It has been an active
field of research for about 10 years, and many attempts have been made
to identify, segment and/or quantify vessels.
I chose to present three articles that I found to provide a significant
contribution. As 20 minutes is quite a short time, I will only talk,
for each article, about two particular aspects of the method. Here are
the references and the two points of focus:
* A. Frangi et al
Model-Based Quantitation of 3-D Magnetic Resonance Angiographic Images
IEEE Tr. Medical Imaging, 10/1999
focus on: vesselness filter and deformable models
* K. Krissian et al
Model-Based Detection of Tubular Structures in 3-D Images
Computer Vision and Image Understanding, 2000
focus on: preprocessing (anisotropic diffusion) and generalized cylinder model
* S. Aylward et al
Initialization, Noise, Singularities, and Scale in Height Ridge Traversal
for Tubular Object Centerline Extraction
Medical Image Analysis, 02/2002
focus on: ridge tracking and the notion of scale
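For intuition about the first point of focus, here is a hedged 2D analogue of the Frangi vesselness idea. The paper works in 3D with Gaussian-derivative Hessians and scale selection; this sketch uses plain finite differences on a synthetic image, and the parameter values are arbitrary.

```python
import numpy as np

def vesselness_2d(img, beta=0.5, c=0.5):
    """Frangi-style vesselness for a 2D image: bright, elongated
    structures get a high response (2D analogue of the 3D filter)."""
    # Hessian via finite differences (a Gaussian-derivative version
    # would add smoothing and scale selection, omitted here).
    gy, gx = np.gradient(img)
    hyy, hyx = np.gradient(gy)
    hxy, hxx = np.gradient(gx)
    # Eigenvalues of the symmetric 2x2 Hessian, ordered |l1| <= |l2|.
    tr = hxx + hyy
    disc = np.sqrt(np.maximum((hxx - hyy) ** 2 + 4 * hxy * hyx, 0))
    l_a, l_b = (tr + disc) / 2, (tr - disc) / 2
    swap = np.abs(l_a) > np.abs(l_b)
    l1 = np.where(swap, l_b, l_a)
    l2 = np.where(swap, l_a, l_b)
    rb = np.abs(l1) / np.maximum(np.abs(l2), 1e-12)  # blob vs line ratio
    s = np.sqrt(l1**2 + l2**2)                       # structure strength
    v = np.exp(-rb**2 / (2 * beta**2)) * (1 - np.exp(-s**2 / (2 * c**2)))
    return np.where(l2 < 0, v, 0.0)                  # bright structures only

# Synthetic test: one bright horizontal line on a dark background.
img = np.zeros((40, 40))
img[20, 5:35] = 1.0
v = vesselness_2d(img)
```

The response is high along the line (strong negative curvature across it, none along it) and exactly zero on the flat background, which is the behaviour the filter is designed for.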
Computer-based Morphometric Assessment of Spiral Ganglion Neurite
Outgrowth in vitro using Image Processing
Henrik Boström and Tomas Lundström
Pioneering research is going on among medical researchers from all
around the world: they want to make deaf people hear again in a natural
way. To achieve this goal, they want to make neuron cells grow inside
the human ear to re-establish the ability to hear.
The research process involves growing a huge number of neurons in the
laboratory and keeping track of the growth rate and growth behaviour of
these cells. There can be thousands of cells to keep track of every week.
To perform these quantitative assessments of the growing cells, the
researchers started measuring the lengths of the neurite outgrowths
growing out from the seed of the cells.
In this Master's thesis we have developed digital image processing
software for extracting and measuring neurite outgrowths in microscope
images. The processing of one such image involves four main digital image
processing steps: thresholding, object classification, morphological
operations, and length measurement by skeletonization.
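The final measuring step reduces to estimating the length of a thin digital curve. A minimal sketch (not the thesis code), using the simplest step-weighting length estimator on an ordered pixel chain:

```python
import math

def chain_length(pixels):
    """Length of a digital curve given as an ordered list of 8-connected
    pixel coordinates: axial steps count 1, diagonal steps sqrt(2)
    (the simplest estimator; weighted variants reduce its bias)."""
    total = 0.0
    for (y0, x0), (y1, x1) in zip(pixels, pixels[1:]):
        total += math.sqrt(2) if (y0 != y1 and x0 != x1) else 1.0
    return total

# A diagonal "neurite" of 10 pixels: 9 diagonal steps.
diag = [(i, i) for i in range(10)]
# An L-shaped path: 5 steps right, then 5 steps down.
bend = [(0, x) for x in range(6)] + [(y, 5) for y in range(1, 6)]
```

In the full pipeline the chain would come from the skeleton of a segmented outgrowth rather than being constructed by hand.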
Work in progress: Segmentation and separation of fluorescent markers
This seminar will present a work that is still very much in progress.
The work deals with two things: segmentation and separation of
fluorescent markers, and its application in "in situ" analysis of the
distribution of mutated and normal mitochondrial genomes (mtDNA). The
status quo of the work will be presented, with the aim of providing fuel
for a fruitful discussion of the method at the end of the seminar.
Super-resolution and Resolution Conversion by Discrete Geometry
National Institute of Informatics, IMIT, Chiba University, Japan
In this paper, we propose an inverse quantization method for binary
digital images in the plane and in space. If a shape is sampled and
expressed as a digital shape, it is impossible to reconstruct the
high-resolution boundary. Since resolution conversion produces
high-resolution digital images from low-resolution ones, it suffices to
register only low-resolution images and objects in the memory of
computers. The resolution-conversion technique therefore enables us to
save memory for data storage. We show our concept for resolution
conversion as an application to digital data archiving.
The inverse quantization of digital terrain data, for the recovery of a
smooth terrain surface and a series of iso-level contours on it, is
solved using variational methods. This is a surface reconstruction
method common in computer vision and aerial data processing. The
expansion and super-resolution of digital binary images are the same
problem because, to achieve these processes, we are required to
construct a smooth boundary curve or surface as an estimate of the
original boundary from digitized objects, which are expressed as
collections of pixels and voxels.
Our resolution-conversion method first estimates smooth boundaries of
objects from discrete objects produced by a sampling procedure. We first
derive a method for the extraction of the boundary as a collection of
edgels or surfels from a 4-connected object in the plane or a
6-connected object in space, respectively. Our method in space is a
parallel version of Herman's surfel extraction method; that is, our
method first extracts surfels slice by slice, perpendicular to the three
orthogonal directions parallel to the axes of the coordinate system, and
then constructs the union of these surfels as the boundary of an object.
Second, we introduce a method for the estimation of a smooth boundary
using a deformation process for these orthogonal polygons and polyhedra.
Finally, using the deformed boundary, we construct a high-resolution
object by resampling the object with the estimated smooth boundary.
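The first step, edgel extraction from a 4-connected object in the plane, can be sketched as follows; this is an illustrative reconstruction, not the authors' implementation.

```python
import numpy as np

def boundary_edgels(obj):
    """Extract the boundary of a 4-connected 2D object as a set of
    edgels: one unit edge between each object pixel and each background
    (or out-of-image) 4-neighbour."""
    edgels = []
    h, w = obj.shape
    for y in range(h):
        for x in range(w):
            if not obj[y, x]:
                continue
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if not (0 <= ny < h and 0 <= nx < w) or not obj[ny, nx]:
                    edgels.append(((y, x), (ny, nx)))  # pixel, outside nbr
    return edgels

# A 3x3 square of object pixels inside a 5x5 image: perimeter = 12 edgels.
obj = np.zeros((5, 5), dtype=bool)
obj[1:4, 1:4] = True
edgels = boundary_edgels(obj)
```

The 3D surfel case works the same way per slice, which is what makes the slice-by-slice parallelization described above possible.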
Computer Graphics in Sweden and Fast Intensity Distribution Functions for Soft and Hard Edged Spotlights
Computer graphics research in Sweden could involve people from other
fields, like numerical analysis and image analysis. This will be discussed
in the seminar. Another topic is "Fast Intensity Distribution Functions
for Soft and Hard Edged Spotlights".
Two fast distribution functions for spotlights will be discussed, and the
terminology used in stage lighting to model these luminaires will be
presented. In OpenGL and other APIs the original Warn model is used, where
the light distribution is computed using a power function. In professional
modeling tools, a linear or a cubic function is often used. We propose the
use of two different quadratic functions instead, which makes the
computation faster than using the power function or a cubic function.
Moreover, it is more flexible than using a linear function.
These functions can be used to model both hard and soft edged spotlights.
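Since the talk's exact quadratic functions are not given in the abstract, the following sketch only contrasts the Warn power-function falloff with one plausible quadratic alternative between an inner and an outer cone angle; the function names and parameters are invented for illustration.

```python
def warn_falloff(cos_angle, exponent=32.0):
    """Warn model as used in OpenGL: intensity = cos(theta)^p
    inside the spotlight cone."""
    return max(cos_angle, 0.0) ** exponent

def quadratic_falloff(cos_angle, cos_outer, cos_inner):
    """A quadratic falloff between the outer (hard edge) and inner
    (full intensity) cone angles - one cheap alternative to the power
    function; the talk's exact quadratics may differ."""
    t = (cos_angle - cos_outer) / (cos_inner - cos_outer)
    t = min(max(t, 0.0), 1.0)
    return t * t   # quadratic ease-in: 0 at the edge, 1 in the hot spot

full = quadratic_falloff(1.0, 0.8, 0.95)     # on the spotlight axis
edge = quadratic_falloff(0.5, 0.8, 0.95)     # outside the cone
mid = quadratic_falloff(0.875, 0.8, 0.95)    # halfway across the penumbra
```

The quadratic form needs only one multiply after the clamp, avoids the expensive power function, and the inner/outer pair gives direct control over both hard and soft edges.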
A demonstration of a program for image analysis written in Java
This Master's thesis is about creating a program to handle image analysis in
general, with watershed segmentation as its main feature. The program is
written in Java and is available for many platforms. It also uses a user
interface that might be easier to use than Imp's for those who are not
used to Unix. Many of the standard image analysis operations are
implemented, and extending the program is quite easy; menus and dialogs are
created from XML files, and new commands can be inserted into a running
program. The program also includes a number of standard features such as
undo/redo, scripting and batch processing. These features and more will be
presented at the seminar.
Digital straight lines in the Khalimsky plane
In this talk I will present a new definition of
digital straight lines in the digital plane equipped
with the Khalimsky topology. A basic requirement for such lines
is that they are connected subsets of the plane. In fact, it turns out
that these digital lines will be homeomorphic to the Khalimsky line.
One consequence of this is that two crossing lines must have
a common intersection point, something that is not true for
the 8-connected curves. (This fact is sometimes called a
"connectivity paradox" of the 8-connected plane). A generalized
version of Rosenfeld's chord property is also discussed and is
then used to classify the subsets of the plane that are digital
straight line segments.
Digital lines - more practically
I am going to talk about my research in digital geometry. I am,
for the moment, most interested in digital lines, so my presentation next
Monday will be related to what Erik Melin presented this Monday. I am
mainly interested in the "construction" of a digital line, that is, pixel
by pixel, "block" by "block", "sequence" by "sequence" (it will all become
clear next Monday), and in how to properly recognize a digital line in a
set of pixels.
I have mainly worked with the Rosenfeld digitization, but most (maybe all)
of my results are also valid for the Khalimsky-Melin case.
Like Erik, I did not use any complicated mathematical tools, only some very
elementary geometry and some extraordinarily elementary number theory (only
the Euclidean algorithm and some simple conclusions following from it).
I hope you will enjoy it and that we will have an interesting discussion.
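As a small taste of the construction, here is a sketch of the Rosenfeld digitization of a line with slope between 0 and 1, together with the run (block) structure alluded to above; the helper names are mine, not the speaker's.

```python
import math

def rosenfeld_line(a, b, n):
    """Rosenfeld digitization of y = a*x + b, 0 <= a <= 1, on x = 0..n-1:
    in each column, pick the pixel whose centre is closest to the line."""
    return [(x, math.floor(a * x + b + 0.5)) for x in range(n)]

def runs(pixels):
    """Lengths of the maximal horizontal runs (blocks) of the line."""
    lengths, count, prev_y = [], 0, pixels[0][1]
    for _, y in pixels:
        if y == prev_y:
            count += 1
        else:
            lengths.append(count)
            count, prev_y = 1, y
    lengths.append(count)
    return lengths

line = rosenfeld_line(0.4, 0.0, 10)
blocks = runs(line)
```

For slope 2/5 the columns step up by at most one pixel at a time, and away from the endpoints the run lengths take at most two consecutive values, which is exactly the block structure that the Euclidean algorithm on the slope explains.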
Last modified: Thu Nov 27 15:35:55 MET 2003