a) Immediate and retained mode apply to graphics systems and refer to whether the system saves the primitives in a database for automatic regeneration (retained mode) or the application programmer must redraw everything herself (immediate mode).
b) Event mode. Seen in the context of a), however, the question was misleading ("mode" means different things in a and b), so some flexibility in the correction process is required.
c) Scan conversion is the "translation" of primitives from a description in mathematical terms (such as a line from P1 to P2 or a circle with radius r centred at P3) to pixels to display. Scan conversion is necessary because most graphics devices use raster techniques while graphics primitives are defined in mathematical terms.
b) The midpoint line algorithm uses integer arithmetic and incremental updating of a decision variable d, which is the value of the line function at the midpoint between the two pixels we are to choose between. If the slope is between 0 and 1, dy = y1-y0, and dx = x1-x0, the line is described by F(x,y) = dy*x - dx*y + B*dx = 0 (B is the line's intercept with the y axis). Given a selected pixel (xp,yp), d becomes F(xp+1, yp+1/2); if this value is > 0 we choose the NE pixel (xp+1, yp+1), otherwise we choose E (xp+1, yp). A new value of d is then calculated incrementally from the old one.
See Foley et al: figure 3.6, page 73.
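The algorithm in b) can be sketched as follows for integer endpoints with slope in [0, 1]. The decision variable is kept scaled by 2 so it stays an integer; the variable names are illustrative, following the usual textbook presentation:

```python
# Sketch of the midpoint line algorithm for 0 <= slope <= 1.
def midpoint_line(x0, y0, x1, y1):
    """Return the pixels of the line from (x0, y0) to (x1, y1)."""
    dx, dy = x1 - x0, y1 - y0
    d = 2 * dy - dx            # 2 * F(x0 + 1, y0 + 1/2): initial decision variable
    incr_e = 2 * dy            # change in d when E is chosen
    incr_ne = 2 * (dy - dx)    # change in d when NE is chosen
    x, y = x0, y0
    pixels = [(x, y)]
    while x < x1:
        if d > 0:              # midpoint below the line: choose NE
            d += incr_ne
            x += 1
            y += 1
        else:                  # midpoint on or above the line: choose E
            d += incr_e
            x += 1
        pixels.append((x, y))
    return pixels

pts = midpoint_line(0, 0, 5, 2)
# -> [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2), (5, 2)]
```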
The rest of the interface is more flexible with a number of buttons and the usual stuff.
For user-friendliness a minimum requirement is that
the user is always made aware of the "mode" the program is running in. If there are
buttons to select draw/remove/move for instance one can change the appearance of the
cursor to reflect the current mode. It is also important that multiple control points
can be recognized.
a) The number of times per time unit the image is redrawn.
b) In CRTs: the time from removal of excitation to the moment when the phosphorescence has decayed to 10 percent of the initial light output.
c) in CRTs: a metal plate with a lot of holes mounted close to the viewing surface that makes sure that each of the 3 electron beams can hit only one type of phosphor spot (R, G, or B)
d) a mouse is a relative device, a digitizer is an absolute device
a) In homogeneous coordinates, a third (in 2D, a fourth in 3D) coordinate is added to a point. Homogeneous coordinates are used in order to make it possible to express translation as a multiplication and consequently make it possible to concatenate transformations by multiplying transformation matrices.
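As a sketch of the point in a), with homogeneous 2D points written (x, y, 1) a translation becomes a matrix, so a chain of transformations collapses into a single matrix product (all function names here are illustrative):

```python
# Homogeneous 2D transforms as 3x3 matrices (nested lists).
def mat_mul(A, B):
    """Multiply two 3x3 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_vec(A, p):
    """Apply a 3x3 matrix to a homogeneous point (x, y, 1)."""
    return tuple(sum(A[i][k] * p[k] for k in range(3)) for i in range(3))

def T(tx, ty):            # translation expressed as a matrix
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def S(sx, sy):            # scaling about the origin
    return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

# Scale by 2 about the point (1, 1): translate (1, 1) to the origin,
# scale, translate back. The rightmost matrix is applied first.
M = mat_mul(T(1, 1), mat_mul(S(2, 2), T(-1, -1)))
q = mat_vec(M, (2, 3, 1))   # -> (3, 5, 1)
```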
b) If the window is specified by (xmin,ymin) and (xmax,ymax) in world coordinates and the viewport by (umin,vmin) and (umax,vmax) in screen coordinates, the window-to-viewport transformation becomes
MWV = T(umin,vmin) S((umax-umin)/(xmax-xmin), (vmax-vmin)/(ymax-ymin)) T(-xmin,-ymin)
c) Because the last row in the matrix always contains zeros in columns 1 and 2, only four multiplications and four additions are required.
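The composition from b) can be written out as a sketch with plain 3x3 homogeneous matrices (function names and the matrix helpers are illustrative):

```python
# Window-to-viewport transformation MWV = T(umin,vmin) S(sx,sy) T(-xmin,-ymin).
def mat_mul(A, B):
    """Multiply two 3x3 matrices (nested lists)."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_vec(A, p):
    """Apply a 3x3 matrix to a homogeneous point (x, y, 1)."""
    return tuple(sum(A[i][k] * p[k] for k in range(3)) for i in range(3))

def window_to_viewport(xmin, ymin, xmax, ymax, umin, vmin, umax, vmax):
    sx = (umax - umin) / (xmax - xmin)
    sy = (vmax - vmin) / (ymax - ymin)
    T1 = [[1, 0, -xmin], [0, 1, -ymin], [0, 0, 1]]   # T(-xmin, -ymin)
    S  = [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]          # the scaling step
    T2 = [[1, 0, umin], [0, 1, vmin], [0, 0, 1]]      # T(umin, vmin)
    return mat_mul(T2, mat_mul(S, T1))

# Map the window (1,1)-(5,5) onto the viewport (0,0)-(8,8):
M = window_to_viewport(1, 1, 5, 5, 0, 0, 8, 8)
# Window corners map to viewport corners:
# mat_vec(M, (1, 1, 1)) -> (0, 0, 1); mat_vec(M, (5, 5, 1)) -> (8, 8, 1)
```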
a) Because the colour-matching functions of R, G, and B require a negative value of r for some colours: adding a negative amount of red is not an easy thing to do.
Alternatively, one can mark R, G, and B in the CIE chromaticity diagram and note that the triangle created by drawing lines between the three does not encompass all colours.
b) Y in CIE represents intensity.
c) The colour-matching functions are not spectral distributions. They are auxiliary functions used to compute how much of X, Y, and Z should be mixed together to generate a specific colour. Because colours have spectral distributions, the way to generate a specific colour is to compute the integrals of the products of the distribution and the colour-matching functions.
d) See Foley et al: figure 11.12, page 406.
In Motif, the programmer defines functions to be called as a result of events occurring, and the system takes care of dispatching events to the correct function.
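The pattern can be sketched toolkit-independently: handlers are registered against event names and the event loop dispatches. All names below are illustrative, not the actual Motif (Xt) API:

```python
# Minimal callback-registration/dispatch sketch of the Motif-style pattern.
class Dispatcher:
    def __init__(self):
        self.callbacks = {}                      # event name -> list of handlers

    def add_callback(self, event, handler):
        """Register a function to be called when `event` occurs."""
        self.callbacks.setdefault(event, []).append(handler)

    def dispatch(self, event, data=None):
        """The 'system' side: route an event to its registered handlers."""
        for handler in self.callbacks.get(event, []):
            handler(data)

log = []
d = Dispatcher()
d.add_callback("activate", lambda data: log.append(("activated", data)))
d.dispatch("activate", "button1")     # -> log == [("activated", "button1")]
```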
a) Image precision (z-buffer) and Object precision (Painter's algorithm).
b)
z-buffer:
    initialise z-buffer to background value
    for each polygon
        for each pixel in its projection
            if (pixel is closer than current pixel in z-buffer)
                write pixel

Painter's algorithm:
    sort polygons according to their farthest z value
    scan convert each polygon in ascending order (back to front)
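The z-buffer pseudocode can be made concrete as a small sketch. The polygon representation here is a simplification (each "polygon" is given directly as pixel samples of its projection instead of being scan converted) and all names are illustrative:

```python
# Minimal z-buffer sketch on a small grid.
def zbuffer_render(width, height, polygons):
    colour = [["bg"] * width for _ in range(height)]          # background value
    depth = [[float("inf")] * width for _ in range(height)]   # smaller z = closer
    for samples in polygons:                  # "polygon" = list of (x, y, z, colour)
        for x, y, z, c in samples:
            if z < depth[y][x]:               # closer than current z-buffer entry?
                depth[y][x] = z               # write pixel (depth and colour)
                colour[y][x] = c
    return colour

# Two overlapping one-pixel "polygons": the nearer one (z = 1) wins
# regardless of drawing order, which is the point of the z-buffer.
image = zbuffer_render(2, 1, [[(0, 0, 5.0, "blue")], [(0, 0, 1.0, "red")]])
# image == [["red", "bg"]]
```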
In each step two successive vertices are analysed (we assume that the first of the two was treated in the previous step), and zero, one, or two vertices are output: if both are outside, nothing is output; if both are inside, the second vertex is output; if the first is inside and the second outside, the intersection between the clip edge and the line between the two vertices is output; if the first is outside and the second inside, that intersection is output followed by the second vertex.
For a figure, see Foley et al: figure 3.31, page 113.
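This case analysis can be sketched for a single clip edge (here the half-plane x >= xmin); the function name and polygon representation are illustrative, and full Sutherland-Hodgman clipping would repeat the pass for each edge of the clip rectangle:

```python
# Clip a polygon (list of (x, y) vertices) against the half-plane x >= xmin.
def clip_against_left_edge(polygon, xmin):
    output = []
    for i in range(len(polygon)):
        first = polygon[i - 1]                 # previous vertex (wraps around)
        second = polygon[i]
        first_in = first[0] >= xmin
        second_in = second[0] >= xmin
        if first_in != second_in:              # the edge crosses the clip boundary
            t = (xmin - first[0]) / (second[0] - first[0])
            y = first[1] + t * (second[1] - first[1])
            output.append((xmin, y))           # output the intersection point
        if second_in:
            output.append(second)              # in->in and out->in also output the vertex
    return output

triangle = [(-2.0, 0.0), (2.0, 0.0), (2.0, 2.0)]
clipped = clip_against_left_edge(triangle, 0.0)
# -> [(0.0, 1.0), (0.0, 0.0), (2.0, 0.0), (2.0, 2.0)]
```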
Nper = MVV3DV M Sper SHpar T(-PRP) R T(-VRP)
a) The model describes the steps in 3D viewing, and each term is a transformation matrix. Note that the transformations are applied in order from T(-VRP) (first) to MVV3DV (last).
b)
Nper - the transformation for perspective projection of 3D scenes.
MVV3DV - the window to viewport transformation.
M - transformation of the perspective-projection canonical view volume into the parallel
projection canonical view volume with parallel sides.
Sper - scaling the view volume into the canonical view volume
SHpar - shearing so that the centre line of the view volume coincides with the z axis
T(-PRP) - translation so that the projection reference point (PRP) is at the origin
R - rotation of the view reference coordinate system into the WC system
T(-VRP) - translation of the view reference point to the origin
c) The view orientation matrix is R T(-VRP) and the view mapping matrix is M Sper SHpar T(-PRP).
a) log(4) / log(3) ( = ~1.2619 )
b) log(8) / log(4) ( = 1.5 )
c) log(6) / log(6/sqrt(2)) ( = ~1.2398 )
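The similarity dimensions above follow from D = log(N) / log(s), for a shape made of N copies of itself each scaled down by a factor s. A quick numerical check:

```python
import math

# Similarity dimension D = log(N) / log(s):
# N self-similar pieces, each scaled down by factor s.
def similarity_dimension(n_pieces, scale):
    return math.log(n_pieces) / math.log(scale)

a = similarity_dimension(4, 3)                  # ~1.2619
b = similarity_dimension(8, 4)                  # 1.5
c = similarity_dimension(6, 6 / math.sqrt(2))   # ~1.2398
```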
I is the resulting pixel intensity.
Ia is the ambient light
ka is the ambient light reflection coefficient of the material
fatt is the attenuation of the light source (a function of the distance from the light)
Ip is the intensity of a point light
kd is the diffuse-reflection coefficient of the material.
ks is the specular reflection coefficient of the material
N is the (normalized) surface normal
L is the (normalized) direction vector to the light source
R is the (normalized) direction vector of the reflection
V is the (normalized) view direction vector
N · L is the dot product of N and L (the cosine of the angle between them)
R · V is the dot product of R and V (the cosine of the angle between them)
n in (R · V)^n is the specular-reflection exponent (1 to infinity; infinity for a perfect reflector)
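The terms above combine into the single point-light illumination equation I = Ia ka + fatt Ip [kd (N · L) + ks (R · V)^n]. A sketch under that assumption, with vectors as (x, y, z) tuples and parameter names mirroring the glossary:

```python
import math

# I = Ia*ka + fatt * Ip * (kd * (N . L) + ks * (R . V)^n)  for one point light.
def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(vi / length for vi in v)

def phong_intensity(Ia, ka, Ip, fatt, kd, ks, n, N, L, R, V):
    N, L, R, V = map(normalize, (N, L, R, V))
    diffuse = kd * max(dot(N, L), 0.0)           # clamp: no light from behind
    specular = ks * max(dot(R, V), 0.0) ** n
    return Ia * ka + fatt * Ip * (diffuse + specular)

# Light and viewer straight above a horizontal surface:
I = phong_intensity(Ia=0.2, ka=0.5, Ip=1.0, fatt=1.0,
                    kd=0.6, ks=0.3, n=8,
                    N=(0, 0, 1), L=(0, 0, 1), R=(0, 0, 1), V=(0, 0, 1))
# I = 0.2*0.5 + 1.0*(0.6*1 + 0.3*1) = 1.0
```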
a) A voxel is a rectangular cell/volume in space, a cuberille is a cube in space. In other words, a cuberille is a special case of a voxel.
b) The drawbacks parallel those of representing 2D shapes by bitmaps: voxels require more memory and we lose accuracy (these two drawbacks can to some extent be traded off against each other).