Institutionen för teknisk databehandling, Bo Nordin (bosse@cb.uu.se, 018-183470), 961217

Suggested solutions to
Examination, Computer Graphics MN1/DV1 961217


1. CRT-monitorn / The CRT Monitor (2p)

a) The image on the CRT is created by an electron gun that emits electrons toward a phosphor-coated screen. When the electrons hit the screen, the phosphor emits visible light, but because the phosphor has finite persistence the image must be redrawn (refreshed) at regular intervals. If the refresh rate isn't high enough, the image will flicker.

b) In interlaced mode only every second scan line is displayed in a refresh cycle. Thus, if the refresh rate is 60 Hz, the complete image is redrawn only 30 times per second. In non-interlaced mode all lines are displayed in each refresh cycle.

2. Skrivmod / Write Mode (2p)

a) Rubber banding normally uses xor (but invert works just as well), since drawing the same primitive twice in xor mode restores the original image.
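As a minimal sketch of why xor is convenient (the pixel and pattern values below are arbitrary): writing the same value twice in xor mode gives back the original pixel, so the background never has to be saved and redrawn.

#include <stdio.h>

int main(void)
{
    unsigned char pixel = 0x5A;    /* original frame buffer value      */
    unsigned char band  = 0xFF;    /* value written in xor write mode  */

    pixel ^= band;                 /* draw: the rubber band is visible */
    pixel ^= band;                 /* draw again: original restored    */

    printf("restored pixel = 0x%02X\n", (unsigned)pixel);   /* 0x5A */
    return 0;
}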

b) Assuming that we use a visible surface determination technique where we treat polygons in back-to-front order and write pixels into the frame buffer as we go along, transparency can be handled by writing each pixel as a linear combination of the value already in the frame buffer and the value of any transmitting polygon we encounter.
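A sketch of that combination (kt is the transmission coefficient of the polygon; the names are mine, not from any particular system):

/* Back-to-front transparency: the new pixel value is a linear combination
   of the polygon's intensity and the value already in the frame buffer.
   kt = 0 means the polygon is opaque, kt = 1 means fully transparent. */
double blend(double polygon, double framebuffer, double kt)
{
    return (1.0 - kt) * polygon + kt * framebuffer;
}

The frame buffer is simply updated with the returned value as each polygon is processed.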

3. Om interaktion / About Interaction (3p)

a) Pop up a modal dialogue. We use request mode.

b) Use rubberbanding in sample mode. The user first anchors one corner (the lower left one, for example) of the rectangle by pressing a mouse button. Then, as the user moves the cursor with the mouse button pressed, show the rectangle as it changes size, with the current cursor position defining the opposite (upper right in our example) corner. When the user releases the mouse button, freeze the rectangle.

Alternatively (perhaps a better solution in general), one can use event mode: the resizing is then done by a callback function invoked in response to cursor movements, and there is no need to force the user to keep the button pressed during the operation. The reason for requiring the button to be held down in the sample-mode solution is that we want to make sure that the operation is terminated: we don't want the processor to keep on sampling positions if the user forgets about the whole thing and takes a coffee break, for instance.
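A sketch of the sample-mode solution in C-like pseudocode; wait_for_button_press, button_is_down, sample_locator and draw_rectangle are hypothetical device/graphics calls (draw_rectangle is assumed to draw in xor mode), not the API of any particular package.

/* Hypothetical device/graphics calls, for illustration only. */
extern void wait_for_button_press(int *x, int *y);
extern int  button_is_down(void);
extern void sample_locator(int *x, int *y);
extern void draw_rectangle(int x0, int y0, int x1, int y1);  /* xor mode */

void rubber_band_rectangle(void)
{
    int x0, y0;              /* anchored corner              */
    int x, y;                /* corner following the cursor  */
    int old_x, old_y;

    wait_for_button_press(&x0, &y0);          /* anchor the first corner */
    old_x = x0;  old_y = y0;
    draw_rectangle(x0, y0, old_x, old_y);

    while (button_is_down()) {
        sample_locator(&x, &y);               /* sample the cursor position */
        if (x != old_x || y != old_y) {
            draw_rectangle(x0, y0, old_x, old_y);   /* erase old (xor)  */
            draw_rectangle(x0, y0, x, y);           /* draw new size    */
            old_x = x;  old_y = y;
        }
    }
    /* Button released: the rectangle drawn last stays frozen. */
}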

4. 2D-transformationer / 2D Transformations (4p)

Either we translate, rotate, and translate or we translate and mirror (scale).

Translate, rotate, and translate, 3 steps:

1) Translate to (0,0): T(-2,-2):

  1  0 -2
  0  1 -2
  0  0  1
2) Rotate 180 degrees: R(180):

 -1  0 0 
  0 -1 0
  0  0 1
3) Translate: T(5,-2):
  1  0  5
  0  1 -2
  0  0  1
Concatenating the three matrices, we get T(5,-2)R(180)T(-2,-2):
 -1  0  7
  0 -1  0
  0  0  1

Translate and mirror, 2 steps:

1) Translate to (1,2): T(-1,0)

  1  0 -1
  0  1  0
  0  0  1
2) Mirror in the x axis: S(1,-1):
  1  0  0
  0 -1  0
  0  0  1
Concatenating the two matrices, we get S(1,-1)T(-1,0):
  1  0 -1
  0 -1  0
  0  0  1
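
The concatenations are easy to verify mechanically. Below is a small sketch in C that multiplies the homogeneous matrices of the three-step solution and prints T(5,-2)R(180)T(-2,-2); swapping in S(1,-1) and T(-1,0) checks the two-step solution in the same way.

#include <stdio.h>

/* Multiply two 3x3 homogeneous 2D transformation matrices: C = A B. */
void matmul(const double A[3][3], const double B[3][3], double C[3][3])
{
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++) {
            C[i][j] = 0.0;
            for (int k = 0; k < 3; k++)
                C[i][j] += A[i][k] * B[k][j];
        }
}

int main(void)
{
    double T1[3][3] = {{1, 0, -2}, {0, 1, -2}, {0, 0, 1}};   /* T(-2,-2) */
    double R[3][3]  = {{-1, 0, 0}, {0, -1, 0}, {0, 0, 1}};   /* R(180)   */
    double T2[3][3] = {{1, 0, 5},  {0, 1, -2}, {0, 0, 1}};   /* T(5,-2)  */
    double RT1[3][3], M[3][3];

    matmul(R, T1, RT1);          /* R(180) T(-2,-2)          */
    matmul(T2, RT1, M);          /* T(5,-2) R(180) T(-2,-2)  */

    for (int i = 0; i < 3; i++)
        printf("%5.1f %5.1f %5.1f\n", M[i][0], M[i][1], M[i][2]);
    return 0;
}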

5. Färgteori / Colour Theory (4p)

Dominant wavelength: (W2+W3)/2
Luminance: (W4-W1)S1 + (W3-W2)(S2-S1)
Purity: (W3-W2)(S2-S1) / Luminance

The formula for purity is according to Principles of Colour Technology by Billmeyer and Saltzman. However, the expression S2(W3-W2) / [S1((W4-W1)-(W3-W2))] makes perfect sense, too (at least to me).
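As a small numeric illustration of how the formulas are applied (the wavelengths and energy levels below are made up, not read off the exam figure):

#include <stdio.h>

int main(void)
{
    /* Hypothetical spectral distribution: a base level S1 between W1 and
       W4 with a spike of height S2 between W2 and W3 (values made up). */
    double W1 = 400.0, W2 = 600.0, W3 = 650.0, W4 = 700.0;   /* nm     */
    double S1 = 0.2, S2 = 0.8;                               /* energy */

    double dominant  = (W2 + W3) / 2.0;
    double luminance = (W4 - W1) * S1 + (W3 - W2) * (S2 - S1);
    double purity    = (W3 - W2) * (S2 - S1) / luminance;

    printf("dominant wavelength = %.0f nm\n", dominant);
    printf("luminance           = %.1f\n", luminance);
    printf("purity              = %.2f\n", purity);
    return 0;
}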

For the figure to display red (or rather pink), W2 and W3 must be in the vicinity of 600-650 nm.

A purer red results if S1 is decreased towards zero: the closer to zero, the purer the colour.

6. Attribut i X / Attributes in X (2p)

In X, all graphics attributes (which specify how primitives appear: colour, line width, etc.) are defined by the GC (graphics context), a large data structure that is created, initialised, and possibly manipulated by the programmer but stored in the server. This scheme is used in order to minimise network traffic: several GCs can be stored in the server, and the number of arguments that have to be sent when a primitive is drawn is reduced.
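A minimal Xlib sketch (display and window setup omitted, attribute values arbitrary): the attributes are set once on the GC, and each drawing request then only refers to the GC.

#include <X11/Xlib.h>

void draw_with_gc(Display *dpy, Window win)
{
    /* Create a GC in the server and set some attributes on it. */
    GC gc = XCreateGC(dpy, win, 0, NULL);
    XSetForeground(dpy, gc, BlackPixel(dpy, DefaultScreen(dpy)));
    XSetLineAttributes(dpy, gc, 3, LineSolid, CapButt, JoinMiter);

    /* The drawing request only carries the GC, not every attribute. */
    XDrawLine(dpy, win, gc, 10, 10, 100, 100);

    XFreeGC(dpy, gc);
}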

In PHIGS, each attribute is set by a function.

7. Borttagning av trappstegseffekter / Antialiasing (4p)

a) Aliasing occurs when we draw straight lines on a raster graphics terminal simply because we approximate a continuous signal with a discrete one: we can manipulate what we see only down to the pixel resolution.

b) Area sampling exploits the fact that a line is drawn with a width of one pixel: a line that is not aligned with either of the axes thus covers more than the area of one pixel. Unweighted area sampling sets the intensity of each pixel covered by the line in proportion to the area covered. Weighted area sampling uses a filter function, in the simplest case a circular cone, in order to give areas close to the ideal line higher weight.
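As a sketch of the unweighted case (not a production algorithm): the fraction of a pixel covered by a line of width one can be approximated by supersampling the pixel on an n x n grid and counting the subpixel centres that lie within half the line width of the ideal line.

#include <math.h>

/* Approximate, by supersampling on an n x n grid, the fraction of the
   pixel with lower-left corner (px, py) that is covered by a line of
   width 1 through (x1, y1) and (x2, y2). */
double coverage(double px, double py, double x1, double y1,
                double x2, double y2, int n)
{
    double dx = x2 - x1, dy = y2 - y1;
    double len = sqrt(dx * dx + dy * dy);
    int inside = 0;

    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {
            double sx = px + (i + 0.5) / n;   /* subpixel centre */
            double sy = py + (j + 0.5) / n;
            /* perpendicular distance from the subpixel centre to the line */
            double dist = fabs(dy * (sx - x1) - dx * (sy - y1)) / len;
            if (dist <= 0.5)                  /* within half the line width */
                inside++;
        }
    return (double)inside / (n * n);          /* intensity scale factor */
}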

8. Cirkelgenerering / Generation of circles (4p)

Scan conversion is the "translation" of primitives from a description in mathematical terms (such as a line from P1 to P2 or a circle with radius R centred at P3) to pixels to display.

Scan conversion is necessary because most graphics devices use raster techniques while graphics primitives are defined in mathematical terms.

Solving the equation x^2 + y^2 = R^2 for y and looping over x is inefficient because of the multiplication and square root operations involved. The algorithm also produces large gaps for values of x close to R, where the slope approaches infinity.

Most circle generation algorithms exploit the fact that a circle is a very symmetric figure. It is sufficient to display 1/8 of the circle: the rest can be obtained by mirroring.

The midpoint circle algorithm is an efficient algorithm that uses integer arithmetic and incremental methods. If we consider only the second octant, from x = 0 to x = y, each step consists of choosing the next pixel: either E (east) or SE (southeast). By determining whether the ideal curve passes below or above the midpoint between the two candidate pixels, the correct one is chosen. In the process, a decision variable, which is the value of the function F(x,y) = x^2 + y^2 - R^2 at the midpoint, is used. The decision variable is updated incrementally according to the choice of pixel.
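A sketch of the algorithm in C; set_pixel is assumed to be supplied by the graphics system, and the decision variable is initialised to 1 - R, the usual integer approximation of 5/4 - R.

/* set_pixel is assumed to be provided by the graphics system. */
extern void set_pixel(int x, int y);

/* Plot the eight symmetric points obtained from one point in the
   second octant of a circle centred at the origin. */
static void circle_points(int x, int y)
{
    set_pixel( x,  y);  set_pixel( y,  x);
    set_pixel(-x,  y);  set_pixel(-y,  x);
    set_pixel( x, -y);  set_pixel( y, -x);
    set_pixel(-x, -y);  set_pixel(-y, -x);
}

/* Midpoint circle algorithm: scan convert the second octant
   (from x = 0 to x = y) using only integer, incremental arithmetic. */
void midpoint_circle(int R)
{
    int x = 0, y = R;
    int d = 1 - R;              /* decision variable: F at the first midpoint */

    circle_points(x, y);
    while (y > x) {
        if (d < 0)              /* midpoint inside the circle: choose E  */
            d += 2 * x + 3;
        else {                  /* midpoint outside: choose SE           */
            d += 2 * (x - y) + 5;
            y--;
        }
        x++;
        circle_points(x, y);
    }
}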

9. 3D-grafik / 3D viewing (5p)

a) First we need a view plane (projection plane), which is defined by a view reference point (the VRP) and a normal to the plane: the VPN (view-plane normal). In order to define a window in VRC we need two more axes: the v axis, given by the projection of the view-up vector (VUP) onto the view plane, which determines the tilt of the camera, and a third axis (u) which is defined such that the three axes together form a right-handed coordinate system.
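A sketch of how the two remaining axes can be computed from VPN and VUP with cross products (the vec3 type and the function names are mine):

#include <math.h>

typedef struct { double x, y, z; } vec3;

static vec3 cross(vec3 a, vec3 b)
{
    vec3 c = { a.y * b.z - a.z * b.y,
               a.z * b.x - a.x * b.z,
               a.x * b.y - a.y * b.x };
    return c;
}

static vec3 normalize(vec3 a)
{
    double len = sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
    vec3 c = { a.x / len, a.y / len, a.z / len };
    return c;
}

/* Build the right-handed VRC basis (u, v, n) from VPN and VUP:
   n is the view-plane normal, v is the projection of VUP onto the
   view plane, and u completes the right-handed system. */
void vrc_basis(vec3 vpn, vec3 vup, vec3 *u, vec3 *v, vec3 *n)
{
    *n = normalize(vpn);
    *u = normalize(cross(vup, *n));
    *v = cross(*n, *u);
}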

b) The projection reference point (PRP) is given in the VRC system. In perspective projection the PRP is the centre of projection; in parallel projection the direction of projection (DOP) is calculated as the vector from the PRP to the centre of the window.

c) Front and back clipping planes.

10. Bézier-kurvor / Bézier curves (3p)

A Bézier curve segment is defined by four control points: P1-P4. P1 and P4 determine the start and end of the curve and the vectors P1P2 and P3P4 determine the starting and ending tangent vectors. The curve thus interpolates the two end control points and approximates the other two.
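A point on such a segment can be evaluated directly from the Bernstein form; a small sketch in C (the point type is mine):

typedef struct { double x, y; } point;

/* Evaluate a cubic Bezier segment with control points p1..p4 at
   parameter t in [0, 1] using the Bernstein polynomial form:
   Q(t) = (1-t)^3 p1 + 3t(1-t)^2 p2 + 3t^2(1-t) p3 + t^3 p4. */
point bezier(point p1, point p2, point p3, point p4, double t)
{
    double s  = 1.0 - t;
    double b1 = s * s * s;
    double b2 = 3.0 * t * s * s;
    double b3 = 3.0 * t * t * s;
    double b4 = t * t * t;
    point q = { b1 * p1.x + b2 * p2.x + b3 * p3.x + b4 * p4.x,
                b1 * p1.y + b2 * p2.y + b3 * p3.y + b4 * p4.y };
    return q;
}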

Each curve segment is completely contained in the convex hull of the four control points. This property makes it possible to use the convex hull in clipping algorithms for fast trivial accept/reject of curve segments.

Figure 9.15 on page 337 in the textbook shows two Bézier curves with control points and the convex hull.

11. Ljus / Illumination (3p)

The calculations here have not been thoroughly checked - that's what makes this document preliminary for the time being. The cosine of the angle between two vectors u and v can be computed as u.v / sqrt((u.u)(v.v)).

I = Ia*ka + Id*(kd*(N.L) + ks*(R.V)^40) / d =
3 * 0.4 + 2 * (0.8*(10/sqrt(150)) + 0.7*(20/(sqrt(150)*sqrt(180)))^40) / sqrt(150)
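
A small sketch that carries out the arithmetic (values copied from the expression above; note that (R.V)^40 is negligible since R.V is well below 1):

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Values taken from the expression above. */
    double Ia = 3.0, ka = 0.4;
    double Id = 2.0, kd = 0.8, ks = 0.7;
    double NdotL = 10.0 / sqrt(150.0);
    double RdotV = 20.0 / (sqrt(150.0) * sqrt(180.0));
    double n = 40.0;                 /* specular exponent            */
    double d = sqrt(150.0);          /* distance used for attenuation */

    /* The specular term is negligible here since R.V < 1 raised to 40. */
    double I = Ia * ka + Id * (kd * NdotL + ks * pow(RdotV, n)) / d;

    printf("I = %.3f\n", I);
    return 0;
}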

12. Volymvisualisering / Volume Visualisation (2p)

Volume rendering is the process of displaying voxel data, while surface rendering uses surfaces, either given explicitly as polygons, curves, etc., or calculated by thresholding a voxel volume. Volume rendering uses a variety of techniques to calculate pixel values: average intensity along a ray, various types of gradients, etc. Volume rendering can also use the distance to an object, in which case it reduces to a simple type of surface rendering.

13. JPEG/MPEG-komprimering / JPEG/MPEG compression (2p)

The steps in JPEG compression are:

1) Transform the image to a luminance/chrominance colour space and (possibly) subsample the chrominance components.
2) Apply the discrete cosine transform (DCT) to 8x8 pixel blocks.
3) Quantize the DCT coefficients.
4) Entropy-code the quantized coefficients (run-length and Huffman coding).

It is the quantization step after the DCT that makes JPEG compression lossy (if the first step involves chrominance subsampling, information is irrevocably lost there, too).
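A minimal sketch of why quantization is lossy (the coefficient and step values are made up): a DCT coefficient divided by its quantization step and rounded cannot, in general, be reconstructed exactly.

#include <stdio.h>
#include <math.h>

int main(void)
{
    double coefficient = 137.4;   /* hypothetical DCT coefficient   */
    double step = 16.0;           /* hypothetical quantization step */

    /* Quantize (this is what is stored) ... */
    int    q = (int)lround(coefficient / step);
    /* ... and dequantize when decoding. */
    double reconstructed = q * step;

    printf("original %.1f, reconstructed %.1f\n", coefficient, reconstructed);
    return 0;
}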