Seminars at CBA, Spring 2004
(Please send your e-mail address to me if you wish to join our seminar-reminder e-mail list.)
The Nest: Managing a complex relational dataset
Architectural Association School of Architecture
Master of Art thesis presentation
The magpie's nest is a highly efficient construction, able to respond to multiple, non-linear loading scenarios through a high redundancy of structural components and the combination of material properties (such as friction, elasticity, and porosity) with a complex 3D geometry. In order to develop an understanding of this intricate configuration, the descriptive potential of contemporary mapping and analysis techniques was investigated. The dual relationship between the requirements of the case study and the repercussions of the digital techniques was highlighted during the research, which operated between modalities and their respective data types in order to describe the entire set of interior and exterior spatial relations at multiple resolution levels while ensuring system efficiency. One surface recording technique (a digitizer arm) and two depth recording techniques (magnetic resonance imaging and computed tomography) were employed, generating vector and voxel information respectively. Due to the convertible character of the voxel data, the high-resolution nest description generated through the medical imaging procedures could be visualised and manipulated in off-the-shelf 3D modelling packages, such as 3D Studio Max.
The face-centered cubic grid and the body-centered cubic grid
The body-centered cubic (bcc) grid and the face-centered cubic (fcc)
grid are the three-dimensional ``equivalents'' of the two-dimensional
hexagonal grid. In the fcc grid and the bcc grid, the voxels are not cubic:
they consist of truncated octahedra and rhombic dodecahedra,
respectively. One property of the voxels in these grids is that they are more
``sphere-like'' than the cube. The outline is roughly:
* the basics: neighbourhoods, connectivity, etc.
* how to generate images on the fcc and bcc grids
(The seminar is a part of the examination of a reading course.)
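The voxel centres of both grids have simple integer characterisations, which the following sketch (my own illustration, not from the seminar) uses to enumerate them inside an n x n x n cube: fcc points have an even coordinate sum, while bcc points have all three coordinates of the same parity.

```python
# fcc grid: integer points whose coordinate sum is even.
def fcc_points(n):
    return [(x, y, z)
            for x in range(n) for y in range(n) for z in range(n)
            if (x + y + z) % 2 == 0]

# bcc grid: integer points whose coordinates all share the same parity.
def bcc_points(n):
    return [(x, y, z)
            for x in range(n) for y in range(n) for z in range(n)
            if x % 2 == y % 2 == z % 2]
```

In a 2x2x2 block this yields four fcc points but only two bcc points, reflecting the different densities of the two lattices.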
Harmonic functions, graphs and clusters
Some types of graphs can be mapped into the Euclidean plane without
self-intersections using harmonic functions. This result can be used for
the construction of bijective mappings from a surface homeomorphic to the
sphere onto the sphere. However, a straightforward procedure will result in
vertex clustering, making the result difficult to use in
applications. Some examples of this will be shown and discussed.
Layer Segmentation in Cross-section Images of Board
Master thesis presentation
This Master's thesis investigates simple methods for layer segmentation of SEM
images of packaging board. Edge detection was used to
find the borders between the different areas of interest, and two methods
were developed to extract the proper segmentation borders: one based on
simple morphological operations, and one based on rejecting suspected
classification errors followed by interpolation to fill the resulting
gaps. The latter proved to be more accurate when enough information was
available to make correct rejections.
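The rejection-and-interpolation idea can be sketched like this (a hypothetical minimal version, assuming the border is stored as one position per image column and rejected entries are marked NaN):

```python
import numpy as np

# Fill rejected (NaN) entries of a per-column border position by linear
# interpolation between the surviving, trusted entries.
def fill_gaps(border):
    border = np.asarray(border, dtype=float)
    ok = ~np.isnan(border)
    return np.interp(np.arange(len(border)), np.flatnonzero(ok), border[ok])
```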
Haptic volume rendering
Many early haptic volume rendering algorithms
have been based on force-fields. Those algorithms
often suffer from instabilities and other limitations.
More recently developed algorithms are constraint-based
and involve a virtual coupling network that guarantees
stability in the rendering.
In this talk, which is mainly based on two different papers,
I will describe this type of haptic volume rendering.
I also intend to show some examples of my own work
and introduce some new ideas for the future.
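As a toy illustration of the constraint-based idea (mine, not taken from the papers): the proxy is the device position projected onto the free space, and the virtual coupling renders a spring force between proxy and device, which vanishes outside the object and always points out of it.

```python
# 1-D half-space example: the surface is at z = 0 and the object occupies
# z < 0.  The proxy is the device point constrained to stay on or above
# the surface; the rendered force is a spring pulling the device toward
# the proxy.
def coupling_force(device_z, k=200.0):
    proxy_z = max(device_z, 0.0)      # constrain the proxy to free space
    return k * (proxy_z - device_z)   # zero outside, pushes outward inside
```

Because the force is derived from the proxy-device displacement rather than from a raw force field, shallow and deep penetrations produce consistent, stabilising forces.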
Introduction to some Pre-Clinical PET methodologies
Positron emission tomography (PET) is a powerful technique with the potential to display functional or biochemical information by recording radio-labelled molecules interacting in a biological system, either in vitro or in vivo. Clinical PET investigations are the ultimate goal in testing and validating new tracers and illustrating their clinical or pharmacological significance. Tracers need to be tested and validated in vitro and in animal models in vivo before conclusions are drawn on their usefulness. Image analysis approaches span the spectrum of the data obtained, from "qualitative" to "quantitative", allowing better understanding and refinement. Clinical PET images and the data generated need to be evaluated and fitted to known biological models in order to provide useful and reliable information.
Some of the most important Pre-clinical PET methodologies will be introduced briefly.
Camera specification issues for underwater imaging
Color restoration of underwater images taken with a commercial
camera can be done only if parameters such as the quantum efficiency, the minimum
illumination of the object, and the size of the CCD sensor are known. In cameras,
the light is split on the chip into the primary colors. When the light
is insufficient to produce color, there are two options: there is no signal,
or the camera's filters try to compensate for it, producing an erroneously
colored image. In underwater environments, where the light is attenuated by
water column properties, knowledge of camera behavior is crucial. The
manufacturer has sensitivity curves of the CCD chip, but little is known
to the public about how these shift under different light conditions.
The calculation of the quantum efficiency of the CCD chip is done in
photometry, the science of light engineering, and the result is compared to
Fuzzy shape analysis
Fuzzy concepts are already incorporated in many image
segmentation techniques, naturally expressing and retaining the
``fuzziness'' of real images.
Even though the segmentation results obtained by binarization
(defuzzification) of the fuzzy segmented image provide a very good
start for any further classical image analysis procedure,
it became clear that the segmentation should not be the only step in
the image analysis process where inaccuracy of the data is considered.
Our intention is to analyze different ways to extend the main shape
analysis tools to fuzzy images. The first step was to get an
overview of the existing approaches and results related to fuzzy shape
analysis; that was the goal of my reading course.
The state of the art is summarized in the course report, and will be
briefly presented during the lecture.
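One of the simplest such extensions (stated here as my own minimal example, not the report's content) replaces crisp pixel counts by sums of membership values, so the area of a fuzzy object becomes the integral of its membership function:

```python
import numpy as np

# Area of a fuzzy object = sum of membership values; a crisp binarisation
# (defuzzification) at threshold 0.5 is shown for comparison.
def fuzzy_area(mu):
    return float(np.sum(mu))

def crisp_area(mu, t=0.5):
    return int(np.sum(np.asarray(mu) >= t))
```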
A preprocessing method for better segmentation
In order to improve the segmentation result, a preprocessing method has been
developed. The old segmentation method based on Brownian motion (BM) can be
seen as a preprocessing step (BM) and a segmentation step. The
BM preprocessing step has been removed and replaced with an improved version
based on the same idea (i.e. the random walk of a particle). This new version
carves the grey-level landscape differently depending on the shape of
the landscape. Another nice property is that the result can also be achieved by
an iterative method. Both methods will be presented and compared.
Development of the fibre orientation analyser SPADES: a system using polarization-axis direction estimation
Master thesis presentation
The properties of a paper sheet are to a large extent dependent on the fiber
orientation in the plane of the sheet. The purpose of this thesis is to
create an off-line measurement system for fiber orientation based on the
polarization effects of paper. The equipment consists of a polarization
analyzer and uses a CCD camera as the light detector. Results show that the
polarization axis of paper at visible wavelengths correlates very well with
the fiber orientation.
The conclusion is that the speed and accuracy of the system make it a very
competitive method for off-line fiber orientation analysis. However, the
low noise levels required make it difficult to implement on-line, and
further development into an on-line system should be put on hold.
Queue-based algorithms in Image Analysis
Many algorithms used in image analysis can be efficiently implemented
with a queue data structure: region growing, morphological operators, watershed,
distance transforms, fast marching, and fuzzy connectedness, to name only a few.
I will describe a general framework where the same algorithmic structure is
able to perform all these operations by changing only a few lines in the code.
I will first describe the queue data structure and one of its most efficient
implementations, the binary heap. I will then show the general structure of
a queue-based algorithm, and present some working examples programmed in Java
as part of the ImageJ package.
As a conclusion, I will talk briefly about the latest numerical algorithms used
to solve this kind of problem, shedding new light on chamfer-like algorithms.
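As a taste of the pattern (a minimal sketch of mine in Python rather than the talk's Java/ImageJ code): seeded region growing where the queue is a binary heap (Python's heapq) keyed on grey value, so the lowest-valued queued pixel is always expanded first. Swapping the queue discipline or the push condition turns the same skeleton into dilation, watershed, or distance-transform variants.

```python
import heapq

# Seeded region growing on a 2-D grey-level image: labelled pixels push
# their unlabelled neighbours onto a priority queue (heapq is Python's
# binary heap), and the queued pixel with the lowest grey value is always
# expanded first.
def region_grow(img, seeds):
    h, w = len(img), len(img[0])
    label = [[0] * w for _ in range(h)]
    heap = []
    for lab, (r, c) in enumerate(seeds, start=1):
        label[r][c] = lab
        heapq.heappush(heap, (img[r][c], r, c))
    while heap:
        _, r, c = heapq.heappop(heap)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and label[rr][cc] == 0:
                label[rr][cc] = label[r][c]   # inherit the seed's label
                heapq.heappush(heap, (img[rr][cc], rr, cc))
    return label
```

On a one-row image with two seeds at the ends, the bright ridge in the middle is reached last, and each half of the row is claimed by its nearer (in grey-level order) seed.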
Jamie Silverla and Javier Protillo,
Breadth-first search and its application to image processing problems,
IEEE Transactions on Image Processing, Vol. 10, No. 8, August 2001.

Marcos Cordeiro d'Ornellas,
A Queue-based Algorithmic Pattern,
Proceedings of the SugarLoafPLoP 2002 conference.

Yen-Hsi Richard Tsai, Li-Tien Cheng, Stanley Osher, and Hong-Kai Zhao,
Fast Sweeping Algorithms for a Class of Hamilton--Jacobi Equations,
SIAM Journal on Numerical Analysis, Vol. 41, No. 2.
Classification of Proteins in Electron Tomography Reconstructions
Master thesis presentation
This master's thesis investigates basic image analysis methods for
classifying proteins in ET density volume reconstructions. A few basic
pattern classification methods were chosen along with appropriate object
features. Proteins were reconstructed from PDB (Protein Data Bank) and
analysed to get an indication of whether it is possible to identify groups
of proteins in reconstructions from electron tomography. The lack of a
priori information came to play a central role in the attempts to
classify the proteins.
Multi-camera arrangement for teat detection in robotic milking
Maria Petterson & Johan Andren Dinerf
Master thesis presentation
This master thesis deals with automatic milking systems.
The teat detection and positioning system used today on the
DeLaval automatic milking system, VMS, comes with a number of
drawbacks that could be solved if it were replaced with a stereo
vision system placed outside the milking robot. This would
decrease the damage to the present camera/laser detection device,
and possibly increase the speed of the robot. This thesis is a
feasibility study to find out whether such a system is possible.
The stereo calculations show that a stereo vision system is very
sensitive. If such a system is to work with high enough accuracy,
it needs to be recalibrated continuously, using reference
points in the VMS. Results show that the average error in absolute
measurements is within the accepted range. The demand is higher
when attaching a teat cup; therefore, relative measurements between
objects are of higher interest. The error in relative measurements
depends on the size of the relative measurement and is 8%. This
means that the error is low when the demand is high.
The image analysis does not detect the teats with high enough
accuracy today, but shows that it is possible in an environment
with appropriate illumination. All teats are seen using two stereo
The final conclusion is that such a system is possible but very
sensitive. A final system needs to be more robust and exact.
Spherical Linear Interpolation of Quaternions
Spherical linear interpolation (slerp) can be done in a very efficient way
by using an equal-angle incremental approach. We have earlier shown that
complex multiplication can be used for this, and that it is even better to use the
Chebyshev recurrence formula. These approaches are fast since no
trigonometric functions are needed in the inner loop. We have used these
approaches for vector interpolation and intensity interpolation for
shading. However, we have not yet fully investigated how they
can be used for slerp of quaternions, which are used in animation. We will
discuss how quaternions are used in animation and also present different
approaches for slerp of quaternions. We will also show that it is possible
to perform non-equal-angle interpolation of quaternions for animation.
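For reference, the standard trigonometric form of slerp looks as follows (my own sketch; the talk's point is precisely that incremental schemes avoid these per-step trigonometric calls):

```python
import math

# Standard trigonometric slerp between unit quaternions q0 and q1
# (tuples (w, x, y, z)), for interpolation parameter t in [0, 1].
def slerp(q0, q1, t):
    dot = sum(a * b for a, b in zip(q0, q1))
    dot = max(-1.0, min(1.0, dot))
    theta = math.acos(dot)              # angle between the quaternions
    if theta < 1e-9:                    # nearly parallel: just return q0
        return q0
    s0 = math.sin((1.0 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return tuple(s0 * a + s1 * b for a, b in zip(q0, q1))
```

Halfway between the identity and a 90-degree rotation about an axis, slerp returns the quaternion of the 45-degree rotation, as expected of a constant-angular-velocity interpolation.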
On modelling nonlinear variation in discrete appearances of objects
With this seminar I will give a presentation of my thesis in a
popular-scientific manner. The thesis will be defended on 19th May, 10.15,
in room 80101 at the Ångström Laboratory. The presentation is supposed to be
short and held in easily understandable terms. It covers the background and main
focus of the thesis.
Barrera Kristiansen AB
Quadratic shading has been proposed as a faster alternative to the slow Phong shading. Quadratic shading requires only two additions in the inner loop, instead of the divisions and square roots of Phong shading, but its setup is instead quite complex. We propose a new quadratic shading technique called X-shading which completely removes all divisions and square roots but still offers near-Phong-quality shading. In X-shading the mid-edge vectors are replaced by approximations found by minimizing a minimal curvature integral.
The resulting approximation is very accurate and simplifies the setup. This setup approach is shown to be about four times faster than ordinary quadratic shading, and no visual quality reduction is observed.
Unique descriptive signatures
Hamed Hamid Muhammed
Comparing spatial and temporal patterns in spectra can provide unique
signatures describing the underlying biological, chemical and/or physical
processes (i.e. the effects of these processes on the spectral
properties of the studied objects).
These signatures can be used for characterizing and estimating
the corresponding properties of the studied objects. This goal can be
accomplished by using a reference data set consisting of spectra and the
corresponding assessments or measurements of the properties of interest.
The spectra, which normally are hyperspectral vectors, are first normalised
into zero-mean and unit-variance vectors by performing various
combinations of spectral- and band-wise normalisations. Then, after
applying the same normalisation procedures to new spectra, a nearest
neighbour classifier is used to classify the new data against the
reference data. Finally, the corresponding signatures are computed using a
linear transformation model. High correlation is obtained between the
classification results and the corresponding field assessments, confirming
the usefulness and efficiency of this approach. The simplicity and low
computational load of this approach make it suitable for real-time applications.
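The normalisation-plus-nearest-neighbour step might look like this (a schematic of my own; the actual procedure combines several spectrum- and band-wise normalisations):

```python
import numpy as np

# Normalise each spectrum to zero mean and unit variance, then classify
# new spectra by the nearest reference spectrum (Euclidean distance).
def normalise(spectra):
    s = np.asarray(spectra, dtype=float)
    s = s - s.mean(axis=1, keepdims=True)
    return s / s.std(axis=1, keepdims=True)

def nn_classify(new, ref, labels):
    d = ((normalise(new)[:, None, :] - normalise(ref)[None, :, :]) ** 2).sum(axis=-1)
    return [labels[i] for i in d.argmin(axis=1)]
```

Because of the normalisation, a spectrum that is a scaled and offset copy of a reference spectrum matches it exactly.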
Continuous digitization in Khalimsky spaces
Consider a function from n-dimensional Euclidean space to the real line,
which is Lipschitz with Lipschitz constant 1 for the supremum metric.
We will show how to find a digital representation, a continuous mapping
from Khalimsky n-space to the Khalimsky line approximating the given
function. Then we will discuss how this digitization can be used to define
digital planes and hyperplanes which are topologically well behaved.
Automatic classification of images detected in Gyrolab(TM)
Master thesis presentation
Gyros AB is a biotechnical company which manufactures a system for protein quantification. Protein concentration is calculated from images produced by fluorescent molecules. It is desirable to automatically classify these images on a scale from poor to good, which indicates the quality of the preceding process and whether the image is suitable for protein quantification.
In this thesis project, a classification system has been designed. Firstly, a set of parameters for each image has been constructed. Secondly, a neural network is used as a classifier. Results show that it is possible, to a reasonable level of accuracy, to distinguish poor images from good ones.
Hierarchical template based segmentation of proteins in TEM-volumes
Sidec Electron Tomography, SET, is a protein imaging method
which produces volumes with a resolution of approximately 2x2x2 nm. The
resulting volumes are segmented by simple thresholding and visual judgment
of the objects of correct size. This segmentation method is tedious, and
proteins touching other proteins or objects will not be found, since the
size of the object will then be too big. Another problem with this method
is that the volumes have a varying background, which affects the result of
the thresholding. I will present an idea for an alternative segmentation
method which I believe can solve these problems, while also reducing the
amount of visual inspection needed.
Image analysis as a tool to characterize layering in stratified paper
Maria Sannes Lande
Master thesis presentation
The dream in the paper industry is to make a paper that is
both lighter and stronger than conventional paper of today. This may
come true if paper has a layered structure where the fibers in different
layers have different properties.
The goal of this project has been to develop tools to evaluate the
quality of multi-layer paper based on image analysis to get information
about the mixing of the layers. Three main questions were posed: 1: How
do the layers mix? 2: How well does the outer layer cover the inner
core? 3: How do flocs (fibers entangled in each other) move in the
thickness direction? These questions have not been fully answered, but I
have developed methods as a step closer to answering them.
The papers have been separated into thin layers, splits. To find out how
the layers mix, a program was made that identifies the fibers coming from
the different layers and calculates the percentage in each split. By
rebuilding the paper digitally in the computer, it is possible to find
out how well the outer layers cover the inner layer. By studying flocs
in the thin layers of the paper it is possible to see how these are
spread in the paper, and this might help to understand how flocs influence
paper properties. A method that identifies flocs has been developed, and
I have looked into the possibility of making a volume image of flocs.
Improvement of stereo matching algorithm
This thesis considers the improvement of an existing stereo matching algorithm by using two filters and two different correlation methods. A stereo matching algorithm is the process of transforming a stereo image pair into an image containing a depth map.
The first section includes a brief introduction to the problems concerning stereo matching and a short description of the original algorithm. The main purpose of the first filter is to extract the depths used in the image and mark them for correlation. The choice to mark a certain depth is based on a fast correlation of a small number of pixels. This reduces the workload for the correlation, which is the most time-consuming part of the process.
Since the correlation is also the most important part for a correct depth map, some alterations have been made to it. The new version changes the shape of the correlation window to match local areas in the images.
The second filter uses information about local surfaces and concentrations of good correlation results to flatten out surfaces and remove noise in the correlation result matrix.
The last part of the algorithm fills in empty areas in the depth map.
The resulting depth maps show that the improved version produces sharper images in less time. A comparison with a commercial product reveals similar final images, but the improved version is by far quicker.
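The core correlation step can be illustrated as follows (a generic window-matching sketch of my own, not the thesis code): for each pixel on a rectified scanline, pick the disparity whose window gives the smallest sum of squared differences.

```python
import numpy as np

# For each pixel x on a rectified scanline, try disparities 0..max_d and
# keep the one minimising the sum of squared differences over a window of
# half-width w between the left patch at x and the right patch at x - d.
def disparity_scanline(left, right, max_d, w=1):
    left, right = np.asarray(left, float), np.asarray(right, float)
    n = len(left)
    disp = [0] * n
    for x in range(w, n - w):
        best_cost, best_d = float("inf"), 0
        for d in range(min(max_d, x - w) + 1):
            diff = left[x - w:x + w + 1] - right[x - d - w:x - d + w + 1]
            cost = float((diff ** 2).sum())
            if cost < best_cost:
                best_cost, best_d = cost, d
        disp[x] = best_d
    return disp
```

With a distinctive intensity peak shifted by two pixels between the two scanlines, the peak pixel is matched at disparity 2.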
Basic Computer Graphics Theory Explained
Computer graphics is a frequent term in today's computer-influenced world. It refers to drawing something using a computer and visualizing this drawing using some device. "Computer graphics" is commonly associated with computer games, but is involved in everything that you see on your screen (e.g. interfaces, animated movies, etc.). This seminar will walk through basic computer graphics in a pipeline fashion, from a 3D object model to a shaded and colored 2D projection onto the screen. I will limit the talk to "on-line" rendered local-illumination computer graphics, and exclude things like global illumination (e.g. ray tracing and radiosity), volume visualization, actual implementation, and graphics APIs.
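One step of that pipeline fits in a couple of lines (a generic illustration, not specific to the seminar): perspective projection of a camera-space point onto an image plane at focal distance f divides the x and y coordinates by depth.

```python
# Perspective-project a 3-D point (camera space, z > 0 in front of the
# camera) onto the image plane at focal distance f.
def project(point, f=1.0):
    x, y, z = point
    return (f * x / z, f * y / z)
```

Doubling the depth of a point halves its projected coordinates, which is the familiar foreshortening effect.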
Some methods and results in temcell tracking
Some results of cell tracking in images will be demonstrated.
The assignment of cell IDs will be introduced.
Some special cases, especially splitting cells, will be analyzed.
Implementation proposal, analysis and investigation of possible methods for reading license plates using Optical Character Recognition (OCR)
Master thesis presentation
In the application of traffic monitoring, an automatic method for identification of the characters on vehicle license plates is needed. It was decided to divide the project work in two, so that part one would find a license plate inside a given image, and part two would work on recognising the characters on the license plate. Optical Character Recognition (OCR) is used to identify the characters on the license plate images as described in this thesis report. There are several OCR software packages on the market. A comparison and investigation of possible methods to read license plates using different types of standard OCR software have been performed. Eight OCR packages were reviewed, and Abbyy FineReader 6.0 Professional, which was one of the best in our tests, correctly recognised 65% of 71 different license plate images. The expected result was at least 97% accuracy, which led us to conclude that the tested OCR packages found on the market could not successfully fulfil our task and would have to be modified to achieve the desired accuracy. Based on this OCR-package analysis, we concluded that a specialised algorithm was needed. A specialised algorithm was developed and implemented. The algorithm consists of two stages: pre-processing of the license plate and letter recognition. In the pre-processing part, the license plate is transformed to black and white, cleared from dirt, and prepared for letter recognition. In the letter recognition part, Principal Component Analysis is used for dimension reduction and small specialised neural networks are used for letter recognition. The neural networks were trained with the Levenberg-Marquardt regularisation algorithm with early stopping to avoid over-fitting and over-training. The mentioned algorithm could successfully detect 97% of the licence plate validation set.
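The PCA dimension-reduction stage can be sketched like this (a schematic of mine; the thesis' feature dimensions and network details are not reproduced): fit the principal components of the training letter images via the SVD, then project both training and test images onto the top k components before they reach the networks.

```python
import numpy as np

# Fit: centre the data and take the top-k right singular vectors as
# principal components.  Transform: project centred data onto them.
def pca_fit(X, k):
    X = np.asarray(X, dtype=float)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def pca_transform(X, mean, components):
    return (np.asarray(X, dtype=float) - mean) @ components.T
```

For data lying on a line, a single component already reconstructs new points on that line exactly, which is the sense in which PCA preserves the dominant variation while shrinking the input dimension.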
Hamed Hamid Muhammed
Last modified: Mon Jun 21 12:29:28 MEST 2004