lecture #1 began here
lecture #2 began here
My past experience with glut has been poor. Also, it is not really part of OpenGL -- various platforms that provide OpenGL do not provide glut. They all provide "gl" and "glu", the OpenGL utilities library. So the text uses glut and we will use UIGUI ("oowee gooey", standing for "UI GUI"). It is fine if you want to learn GLUT as well; go right ahead. We will be learning a lot of "gl" and "glu" in a few weeks.
GLUT | uigui |
---|---|
// #includes int main(int argc, char **argv) { glutInit(&argc,argv); glutInitDisplayMode(GLUT_SINGLE); glutInitWindowSize(640,480); glutInitWindowPosition(100,150); glutCreateWindow("my first attempt"); glutDisplayFunc(myDisplay); glutReshapeFunc(myReshape); glutMouseFunc(myMouse); glutKeyboardFunc(myKeyboard); glutMainLoop(); } |
#include "uigui.h" int main() { WOpen(); GotoRC(10,2); writes("Hello, world"); WEnd(); } |
Generally the operating system and/or window system limit access to the frame buffer. There is a conflict between abstraction and performance. 3D graphics "accelerator" hardware capabilities are increasingly "higher level", to the point where writing to the frame buffer yourself makes less and less sense.
Homework #0 postulated the use of a primitive for setting an individual
dot (pixel, picture element) on the screen, and the previous lecture
described the frame buffer used to store the contents of the screen in
memory. So, how do we implement DrawPixel() ? The following pseudocode
fragments approximate it for a display WIDTH x HEIGHT whose frame buffer
is an array of bytes for which we have a pointer named fb.
Implementation for real hardware would vary depending, e.g., on whether
the processor is "big-endian" or "little-endian", whether a 1 bit
means black or white, etc.
void DrawPixel(int x, int y, int value)
{
   int index = (y * WIDTH + x) / 8;
   int bit   = (y * WIDTH + x) % 8;
   if (value) { /* set the bit on */
      fb[index] |= 1 << bit;
      }
   else { /* set the bit off */
      fb[index] &= ~(1 << bit);
      }
}
void DrawPixel(int x, int y, int value)
{
   int index = (y * WIDTH + x) * 3;
   fb[index++] = value & 255;
   fb[index++] = (value >> 8) & 255;
   fb[index]   = (value >> 16) & 255;
}
lecture #3 began here
Draws a horizontal line from (x1, y) to (x2, y):
while loop | "for"-loop | generator |
---|---|---|
x := x1 while x <= x2 do { DrawPoint(x,y) x := x + 1 } |
every x := x1 to x2 do { DrawPoint(x,y) } |
every DrawPoint(x1 to x2, y) |
# note: use real numbers and round off, to avoid numeric errors procedure DrawLine0(x1,y1,x2,y2) m := real(y2 - y1) / real(x2 - x1) y := real(y1) every x := x1 to x2 do { DrawPoint(x, integer(y + 0.5)) # round off y +:= m } end
procedure DrawLine1(x1,y1,x2,y2) dx := x2 - x1 dy := y2 - y1 d := 2 * dy - dx incrE := 2 * dy incrNE := 2 * (dy - dx) x := x1 y := y1 DrawPoint(x,y) while x < x2 do { if d <= 0 then { d +:= incrE x +:= 1 } else { d +:= incrNE x +:= 1 y +:= 1 } DrawPoint(x,y) } end
A trace of DrawLine1(50, 50, 60, 53), so dx = 10 and dy = 3:

incrE = 6, incrNE = -14
x 50   y 50   d -4
x 51   y 50   d  2
x 52   y 51   d -12
x 53   y 51   d -6
x 54   y 51   d  0
x 55   y 51   d  6
x 56   y 52   d -8
x 57   y 52   d -2
x 58   y 52   d  4
x 59   y 53   d -10
x 60   y 53   d -4
To avoid the discontinuity you can write a loop that steps through small angular increments and uses sin() and cos() to compute x and y.
procedure DrawCircle0(x,y,R) every angle := 0.0 to 2 * &pi by 1.0 / R do { px := x + sin(angle) * R py := y + cos(angle) * R DrawPoint(px,py) } end
procedure CirclePoints(x0, y0, x, y) DrawPoint(x0 + x, y0 + y) DrawPoint(x0 + y, y0 + x) DrawPoint(x0 + y, y0 - x) DrawPoint(x0 + x, y0 - y) DrawPoint(x0 - x, y0 - y) DrawPoint(x0 - y, y0 - x) DrawPoint(x0 - y, y0 + x) DrawPoint(x0 - x, y0 + y) end
procedure DrawCircle1(x0,y0,R) x := 0 y := R d := 5.0 / 4.0 - R CirclePoints(x0, y0, x, y) while (y > x) do { if d < 0 then { d +:= 2.0 * x + 3.0 } else { d +:= 2.0 * (x - y) + 5.0 y -:= 1 } x +:= 1 CirclePoints(x0, y0, x, y) } end
lecture #4 began here
See the wikipedia article on Bresenham's circle algorithm for a better description of that algorithm, using integer-only operations. An example program circ.icn is worth playing with.
The most interesting raster operation is probably XOR. It has the property that XOR'ing a pixel value a second time restores the pixel to its former contents, giving you an easy "undo" for simple graphics. On monochrome displays XOR works like a champion, but on color displays it is less useful, because the act of XOR'ing in a pixel value may produce seemingly random or indistinct changes, especially on color index displays where the bits being changed are not color intensities. If a color display uses mostly a foreground color being drawn on a background color, drawing the XOR of the foreground and background colors in XOR mode works pretty much the way regular XOR does on monochrome displays.
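As a minimal illustration of why the second XOR undoes the first, here is a sketch (not from the text) assuming a hypothetical byte-per-pixel frame buffer fb of the given width:

void XorPixel(unsigned char *fb, int width, int x, int y, unsigned char value)
{
   fb[y * width + x] ^= value;   /* first call draws; an identical second call undoes it */
}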
lecture #5 began here
Icon/Unicon 2D facilities consist of around 45 functions in total. There is one new data type (Window) which has around 30 attributes, manipulated using strings. Most graphics API's introduce a few hundred functions and several dozens of new typedefs and struct types that you must memorize. UIGUI will eventually have around 90 if each of the 45 requires two C versions.
These facilities are mainly described in Chapters 3 and 4 of the Icon Graphics Book. Here are some of my favorite functions you should start with. The main differences between Icon/Unicon and UIGUI: the C functions are not able to take an optional initial parameter, and do not yet support multiple primitives with a single call. This is less useful in C than in Icon anyhow, since C does not have an "apply" operator. An initial "working set" of UIGUI primitives will consist of just under half of the whole 2D API:
The border that defines a region may be 4-connected or 8-connected. 4-connected regions do not consider diagonals to be adjacent; 8-connected regions do. The following algorithms demonstrate brute force filling. The foreground color is presumed to be set to "new".
# for interior-defined regions. old must not == new
procedure floodfill_4(x,y,old,new)
   if Pixel(x,y,1,1) === old then {
      DrawPoint(x,y)
      floodfill_4(x,y-1,old,new)
      floodfill_4(x,y+1,old,new)
      floodfill_4(x-1,y,old,new)
      floodfill_4(x+1,y,old,new)
      }
end

For the example we started with (filling in one of our circles) only slightly more is needed:
# for boundary-defined regions. the new color may be == to the boundary
procedure boundaryfill_4(x, y, boundary, new)
   p := Pixel(x,y,1,1)   # read pixel
   if p ~== boundary & p ~== new then {
      DrawPoint(x,y)
      boundaryfill_4(x,y-1,boundary,new)
      boundaryfill_4(x,y+1,boundary,new)
      boundaryfill_4(x-1,y,boundary,new)
      boundaryfill_4(x+1,y,boundary,new)
      }
end

The main limitation of these brute-force approaches is that the recursion is deep, slow, and heavily redundant. One way to do better is to fill all adjacent pixels along one dimension with a loop, and recurse only on the other dimension.
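A rough sketch of that improvement in C, assuming hypothetical helpers getpix(x,y) and setpix(x,y,color) and a region that stays safely inside the frame buffer: fill the whole horizontal run containing (x,y) with a loop, then recurse only on the rows above and below.

void spanfill(int x, int y, int boundary, int new)
{
   int left = x, right = x, i;
   int p = getpix(x, y);
   if (p == boundary || p == new) return;
   /* grow the run left and right until a boundary or already-filled pixel */
   while (getpix(left - 1, y)  != boundary && getpix(left - 1, y)  != new) left--;
   while (getpix(right + 1, y) != boundary && getpix(right + 1, y) != new) right++;
   for (i = left; i <= right; i++) setpix(i, y, new);      /* fill the run */
   for (i = left; i <= right; i++) {                       /* recurse on the other dimension */
      spanfill(i, y - 1, boundary, new);
      spanfill(i, y + 1, boundary, new);
   }
}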
Pixel(x,y,w,h) asks for all the pixels in a rectangular region with a single network transaction, which will be much faster than reading each pixel individually. In general, client-side "image" structures are not interchangeable with server-side "pixmap" and "window" structures, which is an unfortunate limitation of their design.
Now, how do you store a local (client-side) copy of a window in order to work with it efficiently? There are many ways you can represent an image, but we will start with a couple of brute force representations using lists of lists and tables of tables.
lecture #6 began here
A: 80 columns, 12 rows, in the system default "fixed" font.
Q: function watt() doesn't work, what up?
A: watt() supported most context attributes out of the box, but needed some more code to catch canvas attributes such as size=. Code has been added; new library build will come shortly.
# nonrecursive boundary fill. the new color may be == to the boundary procedure nr_boundaryfill_4(x, y, boundary, new) L := [x, y] while *L > 0 do { x := pop(L) y := pop(L) p := Pixel(x,y,1,1) # read pixel if p ~== boundary & p ~== new then { DrawPoint(x,y) put(L, x, y-1) put(L, x, y+1) put(L, x-1, y) put(L, x+1, y) } } end
Under these various circumstances, window systems may discard part or all of an application's screen contents, and ask the application to redraw the needed portion of the screen later, when needed. A window system tells an application to redraw its screen by sending it an "expose" or "paint" event. Some applications just maintain a data structure containing the screen contents and redraw the screen by traversing this data structure; this is the raster graphics analog of the old "display list" systems used in early vector graphics hardware. But clients are complicated unnecessarily if they have to remember what they have already done. The obvious thing to do, and the thing that was done almost immediately, was to add to the window system the ability to store screen contents in an off-screen bitmap called a backing store, and let it do its own redrawing. Unfortunately, evil software monopolists at AT&T Bell Labs such as Rob Pike patented this technique, despite not having invented it, and despite its being obvious. This leaves us in the position of violating someone's patent whenever we use, for example, the X Window System.
"World coordinates" are coordinates defined using application domain units. Graphics application writers usually find it far easier to write their programs using world-coordinates. Within the "world", the graphics application presents one or more rectangle views, denoted "world-coordinate windows", that thanks to translation, scaling, and rotation, might show the entire world, or might focus on a very tiny area in great detail.
A second set of transformations is applied to the graphics primitives to get to the "physical coordinates", or "viewport" of whatever hardware is rendering the graphics. The viewport's physical coordinates might refer to screen coordinates, printer device coordinates, or the window system's window coordinates.
If the world coordinates and the physical coordinates do not have the same height-width aspect ratio, the window-to-viewport transformation distorts the image. Avoiding distortion requires either that the "world-coordinate window" rectangles match the viewport rectangles' shapes, or that part of the viewport pixels go unused and the image is smaller.
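A sketch of one way to build such a window-to-viewport mapping without distortion: use the same scale factor on both axes (the smaller of the two), letting some viewport pixels go unused. The names here are illustrative, and the y-axis flip that most window systems need is omitted.

void world_to_viewport(double wx, double wy,                   /* world point */
                       double wxmin, double wymin,             /* world-coordinate window */
                       double wwd, double wht,
                       int vx, int vy, int vwd, int vht,       /* viewport, in pixels */
                       int *px, int *py)                       /* resulting device point */
{
   double sx = vwd / wwd, sy = vht / wht;
   double s = (sx < sy) ? sx : sy;       /* one uniform scale preserves the aspect ratio */
   *px = vx + (int)((wx - wxmin) * s);
   *py = vy + (int)((wy - wymin) * s);
}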
Fonts have: height, width, ascent, descent, baseline. Font portability is one of the major remaining "issues" in Icon's graphics facilities. There are four "portable font names": mono, typewriter, sans, serif, but unless/until these become bit-for-bit identical, programs that use text require care and/or adjustment in order to achieve portability. You can "fish for a font" with something like:
Font(("Frutiger"|"Univers"|"Helvetica"|"sans") || ",14")
Proportional width fonts may be substantially more useful, but require substantially more computation to use well than do fixed width fonts. TextWidth(s) tells how many pixels wide string s is in a window's current font. Here is an example that performs text justification.
procedure justify(allwords, x, y, w)
   sofar := 0
   thisline := []
   while word := pop(allwords) do {
      if sofar + TextWidth(word) + (*thisline+1) * TextWidth(" ") > w then {
         setline(x, y, thisline, (w - sofar) / real(*thisline))
         thisline := []
         sofar := 0
         y +:= WAttrib("fheight")
         }
      put(thisline, word)
      sofar +:= TextWidth(word)
      }
   setline(x, y, thisline, TextWidth(" "))   # last (partial) line, normal spacing
end

procedure setline(x, y, L, spacing)
   while word := pop(L) do {
      DrawString(x, y, word)
      x +:= TextWidth(word) + spacing
      }
end
How do you draw smooth curves? You can approximate them with "polylines" and if you make the segments small enough, you will have the desired smoothness property. But how do you calculate the polylines for a smooth curve?
Sewing together curves is a big issue: the "slope" (tracked via endpoint tangent vectors) of both curves must be equal at the join point.
Since we are drawing a smooth curve piecewise, to draw the curve segment between p[i] and p[i+1] you actually need four points: you also need p[i-1] and p[i+2] in order to do the job. The algorithm steps through the i's starting with i = 4 (3 if you were in C, using indices 0..3 as your first 4 points); counting i that way as the point after the two points whose segment we are drawing, each step actually renders the curve between p[i-2] and p[i-1].
The data type used in the Icon implementation is a record Point with fields x and y. Under X11 you could use XPoint and under MS Windows, POINT would denote the same thing (duh).
record Point(x,y)
There are interesting questions at the endpoints! The algorithm always needs some p[i-1] and p[i+2], so at the endpoints one must supply them somehow. Icon, Unicon, and UIGUI use the following rule:
if p[1] == p[N] then add p[N-1] before p[1] and p[2] after p[N]; else duplicate p[1] and p[N] at their respective ends.
Consequently, I have added this to the front of the gencurve() procedure for the lecture notes.
# generate a smooth curve between a list of points
procedure gencurve(p)
   if (p[1].x == p[-1].x) & (p[1].y == p[-1].y) then {   # close
      push(p, copy(p[-2]))
      put(p, copy(p[2]))
      }
   else {                                                 # replicate
      push(p, copy(p[1]))
      put(p, copy(p[-1]))
      }

Now to draw every segment between p[i-2] and p[i-1]. This is a "for" loop:
every i := 4 to *p do {

Build the coefficients ax, ay, bx and b_y, using: (note: b_y, not "by", because by is an Icon reserved word). This part is "magic": you have to look up M_CR and G_Bs in the journal article yourself.
                                   _                _   _      _
     i                          1 | -1   3  -3   1  | |  Pi-3  |
    Q (t) = T * M   * G     =   - |  2  -5   4  -1  | |  Pi-2  |
                 CR    Bs       2 | -1   0   1   0  | |  Pi-1  |
                                  |_ 0   2   0   0 _| |_ Pi   _|

Given the magic, it is clear how the ax/ay/bx/b_y are calculated:
ax := - p[i-3].x + 3 * p[i-2].x - 3 * p[i-1].x + p[i].x
ay := - p[i-3].y + 3 * p[i-2].y - 3 * p[i-1].y + p[i].y
bx := 2 * p[i-3].x - 5 * p[i-2].x + 4 * p[i-1].x - p[i].x
b_y := 2 * p[i-3].y - 5 * p[i-2].y + 4 * p[i-1].y - p[i].y

Calculate the forward differences for the (parametric) function using parametric intervals. These used to be intervals of size 0.1 along the total curve; that wasn't smooth enough for large curves, so they were expanded to max(x or y axis difference). This is a Bug! It is not always big enough! What is the correct number to use??
steps := max(abs(p[i-1].x - p[i-2].x), abs(p[i-1].y - p[i-2].y)) + 10 stepsize := 1.0 / steps stepsize2 := stepsize * stepsize stepsize3 := stepsize * stepsize2 thepoints := [ ]From here on out as far as Dr. J is concerned this is basic calculus and can be understood by analogy to physics. dx/dy are velocities, d2x/d2y are accelerations, and d3x/d3y are the changes in those accelerations... The 0.5's and the applications of cubes and squares are from the "magic matrix"...
x := p[i-2].x
y := p[i-2].y
put(thepoints, x, y)
dx := (stepsize3*0.5)*ax + (stepsize2*0.5)*bx + (stepsize*0.5)*(p[i-1].x-p[i-3].x)
dy := (stepsize3*0.5)*ay + (stepsize2*0.5)*b_y + (stepsize*0.5)*(p[i-1].y-p[i-3].y)
d2x := (stepsize3*3) * ax + stepsize2 * bx
d2y := (stepsize3*3) * ay + stepsize2 * b_y
d3x := (stepsize3*3) * ax
d3y := (stepsize3*3) * ay

The "inner for loop" calculates the points for drawing (this piece of) the curve, broken into steps.
every 1 to steps do {
   x +:= dx
   y +:= dy
   dx +:= d2x
   dy +:= d2y
   d2x +:= d3x
   d2y +:= d3y
   put(thepoints, x, y)
   }

DrawLine is used instead of DrawPoint in order to avoid holes and get working linewidth/linestyle, but this turns out to be a mixed bag...
   DrawLine ! thepoints
   }
end
After my first pass, it was obvious that some pixels were missing, not just from the Icon version, but from the "official" built-in version! Some missing pixels were filled in by adding another line segment to the end of each step:
To fill in more pixels, I traced the actual execution behavior, and saw some pixels in the generated output not showing up! Pixels in green are generated by the algorithm but not shown in the drawn curve:
Ways to fix: make the stepsize smaller? Change from DrawLine to DrawPoint as DrawCurve's underlying primitive? Using lines instead of points was perhaps done in the first place to avoid gaps in drawn output, but some API's (X11?) exclude the endpoints of lines being drawn... However, DrawLine is potentially nicer than DrawPoint from the standpoint that it will use the "linewidth" (handy) and "linestyle".
/* * genCurve - draw a smooth curve through a set of points. * Algorithm from Barry, Phillip J., and Goldman, Ronald N. (1988). * A Recursive Evaluation Algorithm for a class of Catmull-Rom Splines. * Computer Graphics 22(4), 199-204. */ void genCurve(w, p, n, helper) wbp w; XPoint *p; int n; void (*helper) (wbp, XPoint [], int); { int i, j, steps; float ax, ay, bx, by, stepsize, stepsize2, stepsize3; float x, dx, d2x, d3x, y, dy, d2y, d3y; XPoint *thepoints = NULL; long npoints = 0; for (i = 3; i < n; i++) { /* * build the coefficients ax, ay, bx and by, using: * _ _ _ _ * i i 1 | -1 3 -3 1 | | Pi-3 | * Q (t) = T * M * G = - | 2 -5 4 -1 | | Pi-2 | * CR Bs 2 | -1 0 1 0 | | Pi-1 | * |_ 0 2 0 0_| |_Pi _| */ ax = p[i].x - 3 * p[i-1].x + 3 * p[i-2].x - p[i-3].x; ay = p[i].y - 3 * p[i-1].y + 3 * p[i-2].y - p[i-3].y; bx = 2 * p[i-3].x - 5 * p[i-2].x + 4 * p[i-1].x - p[i].x; by = 2 * p[i-3].y - 5 * p[i-2].y + 4 * p[i-1].y - p[i].y; /* * calculate the forward differences for the function using * intervals of size 0.1 */ #ifndef abs #define abs(x) ((x)<0?-(x):(x)) #endif #ifndef max #define max(x,y) ((x>y)?x:y) #endif steps = max(abs(p[i-1].x - p[i-2].x), abs(p[i-1].y - p[i-2].y)) + 10; if (steps+4 > npoints) { if (thepoints != NULL) free(thepoints); thepoints = (XPoint *)malloc((steps+4) * sizeof(XPoint)); npoints = steps+4; } stepsize = 1.0/steps; stepsize2 = stepsize * stepsize; stepsize3 = stepsize * stepsize2; x = thepoints[0].x = p[i-2].x; y = thepoints[0].y = p[i-2].y; dx = (stepsize3*0.5)*ax + (stepsize2*0.5)*bx + (stepsize*0.5)*(p[i-1].x-p[i-3].x); dy = (stepsize3*0.5)*ay + (stepsize2*0.5)*by + (stepsize*0.5)*(p[i-1].y-p[i-3].y); d2x = (stepsize3*3) * ax + stepsize2 * bx; d2y = (stepsize3*3) * ay + stepsize2 * by; d3x = (stepsize3*3) * ax; d3y = (stepsize3*3) * ay; /* calculate the points for drawing the curve */ for (j = 0; j < steps; j++) { x = x + dx; y = y + dy; dx = dx + d2x; dy = dy + d2y; d2x = d2x + d3x; d2y = d2y + d3y; thepoints[j + 1].x = (int)x; thepoints[j + 1].y = (int)y; } helper(w, thepoints, steps + 1); } if (thepoints != NULL) { free(thepoints); thepoints = NULL; } } static void curveHelper(wbp w, XPoint *thepoints, int n) { /* * Could use drawpoints(w, thepoints, n) * but that ignores the linewidth and linestyle attributes... * Might make linestyle work a little better by "compressing" straight * sections produced by genCurve into single drawline points. */ drawlines(w, thepoints, n); } /* * draw a smooth curve through the array of points */ void drawCurve(w, p, n) wbp w; XPoint *p; int n; { genCurve(w, p, n, curveHelper); }
Window system native manipulation starts with off-screen invisible windows that you can draw on and then copy to visible windows. A window opened with "canvas=hidden" in Icon can be used for this purpose; CopyArea(w1,w2,x,y,wd,ht,x2,y2) or WAttrib("canvas=normal") are examples of ways to get hidden graphics onto the screen.
lecture #9 began here
lecture #10 (virtual lecture) began here
Lectures 9 and 10 were taught by Jafar. Any questions?

lecture #11 began here
For 3D we end up with 4x4 matrices, employing the same tricks. For exams (midterm, final, etc.) you will have to know your matrix transforms down cold! Should we do a pencil and paper homework?
Remember: matrices compose nicely, but not all transforms are commutative. Plan ahead for the old translate-to-origin, rotate, and translate-back trick.
lecture #13 began here
Translation is extended from the 2D case:
T(dx,dy,dz) =
   | 1  0  0  dx |
   | 0  1  0  dy |
   | 0  0  1  dz |
   | 0  0  0  1  |
Scaling is similarly extended:
S(sx,sy,sz) =
   | sx  0   0   0 |
   | 0   sy  0   0 |
   | 0   0   sz  0 |
   | 0   0   0   1 |
Rotation is a little more interesting; the rotation around the origin that we did before now becomes rotation around the z-axis with the following matrix:
Rz(θ) =
   | cos θ   -sin θ   0   0 |
   | sin θ    cos θ   0   0 |
   | 0        0       1   0 |
   | 0        0       0   1 |
But there are two more axes one might want to rotate about:
Rx(θ) =
   | 1   0        0        0 |
   | 0   cos θ   -sin θ    0 |
   | 0   sin θ    cos θ    0 |
   | 0   0        0        1 |
Ry(θ) =
   |  cos θ   0   sin θ   0 |
   |  0       1   0       0 |
   | -sin θ   0   cos θ   0 |
   |  0       0   0       1 |
We had some basic introduction to color earlier, namely the RGB color coordinate system commonly used on computer monitors. Computer hardware commonly uses 24 bits to express color information for each pixel, while software may use another coordinate system, such as X11's 48-bit system.
It may surprise you to hear that RGB color coordinates are a relatively new invention, constructed solely as a by-product of the hardware used in color monitors. RGB is not used in traditional color disciplines such as photography or print shops. Interestingly, RGB is not even capable of expressing many colors that humans recognize. We will see aspects of that and other issues in this talk.
The truth is that humans vary a fair amount in their perception, ranging from those who are blind to those who can see details far more precisely than average. As was mentioned in the discussion of gamma correction, human perception of brightness is on a log scale; we perceive ratios of intensity, not absolute values. Having appropriate gamma correction might affect how many intensities are needed in a given application.
For colors formed by filtering (subtracting) wavelengths out of the white light that will be reflected, cyan, magenta, and yellow are the subtractive primary colors; they are complements of red (cyan = "no red"), green (magenta = "no green"), and blue (yellow = "no blue"). CMY coordinates are commonly used on color printers. Adding a fourth color (pure black) typically improves the quality and reduces ink costs by giving blacker blacks than the dull gray you get from using all three CMY inks to produce "black" by regular subtraction.
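The complement relationship is simple enough to show directly; this sketch converts RGB (components in 0..1) to CMY and then pulls out a K component using the simplest form of undercolor removal (real printer drivers do something more sophisticated):

void rgb_to_cmyk(double r, double g, double b,
                 double *c, double *m, double *y, double *k)
{
   *c = 1.0 - r;  *m = 1.0 - g;  *y = 1.0 - b;       /* complements of the primaries */
   *k = (*c < *m) ? (*c < *y ? *c : *y)              /* black = smallest of C, M, Y  */
                  : (*m < *y ? *m : *y);
   *c -= *k;  *m -= *k;  *y -= *k;                   /* simple undercolor removal    */
}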
There are other significant color models in wide use; for example color TV signals use a model (YIQ) that emphasizes intensity but also tacks on hue and saturation using a lower amount of bandwidth. YIQ uses twice as much bandwidth for intensity as for all other color information, and is mapped onto both monochrome and color (RGB) TV sets.
lecture #14 began here.
Hill Sections 3.3 and 4.7 talk about how clipping can be implemented. Let's see how much we can get from first principles. The first clipping algorithm to consider is clipping lines. Suppose we have to implement a function
DrawClipped(x1,y1,x2,y2,x,y,wd,ht)

that draws a line from (x1,y1) to (x2,y2), except skipping anything that is outside of (x,y,wd,ht). How can we write that procedure? Let's first dismiss some trivial cases. Can we tell whether any clipping is needed at all? Yes, it is easy:
procedure DrawClipped(x1,y1,x2,y2,x,y,wd,ht)
   if x <= x1 <= x+wd & x <= x2 <= x+wd &
      y <= y1 <= y+ht & y <= y2 <= y+ht then {
      # draw the whole line, no clipping needed
      return DrawLine(x1,y1,x2,y2)
      }
   # else some clipping is actually needed.
end

Now, how can we tell what clipping is needed? Here is Totally Easy Clipping:
if x1 < x & x2 < x then return
if x1 > x+wd & x2 > x+wd then return
if y1 < y & y2 < y then return
if y1 > y+ht & y2 > y+ht then return

This narrows down the problem to: either one end needs to be clipped and the other doesn't, or both ends need to be clipped. If both ends need to be clipped, there are still cases where the entire line is outside our drawing rectangle and needs to be clipped.
All of this up to now is totally obvious common sense, but it is also the start of the Cohen Sutherland clipping algorithm. It must be that what you do next is what makes Cohen Sutherland interesting. Let's pretend that Dr. J hasn't read or doesn't remember Cohen Sutherland even a bit. What options do I have?
procedure DrawClipped(x1,y1,x2,y2,x,y,wd,ht)
   if x <= x1 <= x+wd & x <= x2 <= x+wd &
      y <= y1 <= y+ht & y <= y2 <= y+ht then {
      # draw the whole line, no clipping needed
      return DrawLine(x1,y1,x2,y2)
      }
   # else some clipping is actually needed.
   if x1 < x & x2 < x then return
   if x1 > x+wd & x2 > x+wd then return
   if y1 < y & y2 < y then return
   if y1 > y+ht & y2 > y+ht then return
   # else: divide and conquer
   DrawClipped(x1,y1,(x1+x2)/2,(y1+y2)/2,x,y,wd,ht)
   DrawClipped(x2,y2,(x1+x2)/2,(y1+y2)/2,x,y,wd,ht)
end

Does it work? If you try this out, you will most likely get a segmentation fault, because we haven't adequately handled the base case (when is DrawClipped handling segments so small that no recursion makes sense?). Cohen-Sutherland differs in that it isn't dividing in half, but rather, looks to chop at the actual intersection points.
procedure clipSegment(x1,y1, x2,y2, x,y,wd,ht)
   repeat {
      p1_inside := (x <= x1 <= x+wd & y <= y1 <= y+ht) | &null
      p2_inside := (x <= x2 <= x+wd & y <= y2 <= y+ht) | &null
      if \p1_inside & \p2_inside then {
         # draw the whole line, no clipping needed
         return DrawLine(x1,y1,x2,y2)
         }
      # else some clipping is actually needed.
      if x1 < x & x2 < x then return
      if x1 > x+wd & x2 > x+wd then return
      if y1 < y & y2 < y then return
      if y1 > y+ht & y2 > y+ht then return
      if /p1_inside then {            # p1 not inside, fix p1
         if x1 < x then {
            # p1 to the left, chop against left edge
            }
         else if x1 > x+wd then {
            # p1 to the right, chop against right edge
            }
         else if y1 < y then {
            # p1 above, chop against top edge
            }
         else if y1 > y+ht then {
            # p1 below, chop against bottom edge
            }
         }
      else {
         # p2 not inside, fix p2
         }
      }
end

In-class exercise: how to do the chopping?
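One possible answer (a sketch, not the only way): slide the outside endpoint along the line until it lands on the offending edge, using similar triangles. Shown here for p1 against the left edge, assuming double coordinates; the other three edges are symmetric.

void chop_left(double *x1, double *y1, double x2, double y2, double xleft)
{
   /* move (x1,y1) along the segment until its x coordinate equals xleft */
   *y1 += (y2 - *y1) * (xleft - *x1) / (x2 - *x1);
   *x1 = xleft;
}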
Check out my Cohen-Sutherland demo.
There is a lot more to say about clipping, especially in 3D. We will revisit this topic as time allows.
OpenGL actually consists of two libraries (GL and GLU), and to use it you must either write window system code yourself (X11 or Win32) or use a third party library such as "glut" to handle window creation and user input.
In order to compile OpenGL programs, you may have to install whatever packages are needed for the GL and GLU libraries and headers (libGL and libGLU in .so or .a forms, and various <GL/*.h> files under some header directory such as /usr/include, /usr/X11R6/include, /usr/openwin/include, or whatever). You generally have to learn your compiler's options for specifying where to look for these libraries and headers (-L and -I).
In addition to these headers and libraries for linking, which you normally specify in a makefile, you may need a LD_LIBRARY_PATH environment variable in order for the system to find one or more of your shared libraries at program load time in order to run it. Note that you may already have an LD_LIBRARY_PATH, and you should just add your OpenGL (e.g. Mesa) directory to your existing path if you have one.
Compared with the earlier Icon/UIGUI 2D interface we have seen, OpenGL has many more features, and more complexity, for 3D programming. The glut library which interfaces OpenGL to the host window system claims to be simple and easy to use, but is limited and restrictive in its capabilities. Last time I taught this course, students complained bitterly about glut. Your options in this course are basically: glut or UIGUI 3D.
UIGUI does this sort of stuff for you under the hood; if you use it, your graphics primitives get saved in a data structure, and the UIGUI library registers its display callback function to walk and redraw that data structure. But for everything to work, you have to read a GUI input function or call pollevent() more or less constantly.
glBegin  glVertex+  glEnd

The glBegin(type_of_object) function specifies what primitive is being depicted, and the glVertex family of functions allow any number of (x,y,z) coordinates in a variety of types and dimensions. This lecture is presenting selected material from the OpenGL Primer, chapter 2.
lecture #15 began here
simple.c is slightly misleading since it uses old-style K&R C, implying glutCreateWindow() returns void. Since the OpenGL functions don't take a window argument, you can expect to find another helper function down the road which sets which window subsequent calls are directed to, stored in some hidden global variable.
Consider now the task of setting the foreground color with which objects are drawn. Although colors are real numbers between 0 and 1, you can use almost any numeric type to supply RGB values; integer values are converted into fractions of the maximum value, so for example many programmers who are used to working with 8 bits each of R, G, and B, can call a function that takes values like (255, 127, 0) and internally this is converted to the equivalent (1.0, 0.5, 0).
So, like the glVertex family, there are many (28) functions in the families that set the foreground color, glColor*. An example call would be: glColor3f(1.0, 0.5, 0.0). Apparently they didn't bother to make 28 functions for setting the background color with glClearColor(), because that operation is far less common.
This discussion of color is well and good, but tragically it all becomes meaningless as you transition from "toy" programs to more realistic ones, because once you introduce lighting into your application, glColor*() has no effect! When lighting is introduced, the color of objects becomes a function of the color of light and the reflective properties of the objects, specified by "materials". We will see lighting in detail a little later.
lecture #16 began here
When you use gluOrtho2D() you are manipulating the projection matrix. Functions like gluOrtho2D modify (i.e., matrix-multiply into) whatever is in the matrix already, and if you want to start from a known position, you need to reset the matrix to the identity matrix with glLoadIdentity(), so the complete code to specify a 2D window on the world in OpenGL looks like
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(x1,x2,y1,y2);

Later on, we will see that you frequently compose these matrices, especially the model view matrix, while drawing a complex scene. From one transformation corresponding to a given "parent" coordinate system, if you have several "child" coordinate systems whose coordinates are relative to the parent, each child will need to save the parent transformation, make changes and then draw their objects, and then restore the parent. When object hierarchies are multiple levels deep, a special type of stack is natural. OpenGL provides a built-in stack mechanism, in which the "push" operation copies the top stack element, to which the child may then apply transformations for their coordinate system relative to the parent coordinates. glPushMatrix() and glPopMatrix() are used in this way, and they operate on whichever current matrix you have specified using glMatrixMode().
glViewport(x, y, w, h) maps the current projection onto a specific region within the selected window.
glNewList(myhandle, GL_COMPILE);
   glPushAttrib(GL_CURRENT_BIT);
   glColor3f(1.0, 0.0, 0.0);
   glRectf(-1.0, -1.0, 0.0, 0.0);
   glPopAttrib();
glEndList();
...
glCallList(myhandle);
lecture #17 began here
Q: my cylinders won't show. what up? A: any number of reasons, but cylinders need a draw style and "normals". here is a typical sequence:
glPushMatrix(); glTranslatef(x, y, z); glRotated(270.0, 1.0, 0.0, 0.0); /* rotate so cylinder points "up" */ qobj = gluNewQuadric(); gluQuadricDrawStyle(qobj, GLU_FILL); gluQuadricNormals(qobj, GLU_SMOOTH); gluCylinder(qobj, radius1, radius2, height, slices, rings);
void mouse(int button, int state, int x, int y)
{
   ...
   glSelectBuffer(SIZE, nameBuffer);
   glGetIntegerv(GL_VIEWPORT, viewport);
   glMatrixMode(GL_PROJECTION);
   glPushMatrix();
   glLoadIdentity();
   gluPickMatrix((GLdouble)x, (GLdouble)(viewport[3]-y), N, N, viewport);
   gluOrtho2D(xmin, xmax, ymin, ymax);
   glRenderMode(GL_SELECT);
   glInitNames();
   glPushName(0);
   draw_objects(GL_SELECT);
   glMatrixMode(GL_PROJECTION);
   glPopMatrix();
   glFlush();
   hits = glRenderMode(GL_RENDER);
   processHits(hits, nameBuffer);
   glutPostRedisplay();
}

The display callback function becomes trivial, because we have moved code to a helper function called to display or to check for hits:
glClear(GL_COLOR_BUFFER_BIT);
draw_objects(GL_RENDER);
glFlush();

The draw_objects function does the graphics functions, but "labels" each selectable object with an integer code:
void draw_objects(GLenum mode)
{
   if (mode == GL_SELECT) glLoadName(1);
   glColor3f(1.0, 0.0, 0.0);
   glRectf(-0.5, -0.5, 1.0, 1.0);
   if (mode == GL_SELECT) glLoadName(2);
   glColor3f(0.0, 0.0, 1.0);
   glRectf(-1.0, -1.0, 0.5, 0.5);
}

The processHits function considers all objects within N pixels (N was specified in gluPickMatrix) of the user click.
void processHits(GLint hits, GLuint buffer[]) { unsigned int i, j; GLuint names, *ptr; printf("hits = %d\n", hits); ptr = (GLuint *) buffer; for (i = 0; i < hits; i++) { names = *ptr; ptr += 3; /* skip over number of names and depths */ for (j = 0; j < names; j++) { if (*ptr == 1) printf("red rectangle\n"); else printf("blue rectangle\n"); ptr ++; } } }
lecture #18 began here
void cube()
{
   glColor3f(1.0, 0.0, 0.0);
   glBegin(GL_POLYGON);
   glVertex3f(-1.0, -1.0, -1.0);
   glVertex3f(-1.0,  1.0, -1.0);
   glVertex3f(-1.0,  1.0,  1.0);
   glVertex3f(-1.0, -1.0,  1.0);
   glEnd();
   /* other 5 faces similar */
}

Generally it will be preferable to store the graphics in a data structure (array, list, tree) and write code that walks the structure.
OpenGL uses z-buffers, or depth buffers, which are extra memory buffers, to track different objects' relative depths. This is built-in, but you have to turn it on to get its benefits. Also, if any of your objects are see-through, you will need to read more details on z-buffering in the OpenGL references.
glutInitDisplayMode(GLUT_RGB|GLUT_DOUBLE|GLUT_DEPTH); glEnable(GL_DEPTH_TEST); ... glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW); glLoadIdentity(); glTranslatef(0.0, 0.0, -1.0); /* move object away/in front of camera*/
glMatrixMode(GL_MODELVIEW); glLoadIdentity(); glTranslatef(0.0, 0.0, -1.0); /* move object 1 */ glutWireTetrahedron(); glLoadIdentity(); glTranslatef(0.0, 0.0, -3.0); /* move object 2, further away */ glutWireCube();
glMatrixMode(GL_MODELVIEW); glLoadIdentity(); glTranslatef(0.0, 0.0, -1.0); /* move object 1 */ glutWireTetrahedron(); glTranslatef(0.0, 0.0, -2.0); /* move object 2, -2 further than object 1 */ glutWireCube();
glMatrixMode(GL_MODELVIEW); glLoadIdentity(); glTranslatef(x, y, z); /* move object back from origin */ glRotatef(angle, dx, dy, dz); /* rotate about axis specified by vector */ glTranslatef(-x, -y, -z); /* move object to origin */
void base()
{
   glPushMatrix();   /* make our local copy */
   glRotatef(-90.0, 1.0, 0.0, 0.0);
   gluCylinder(p, BASE_RADIUS, BASE_RADIUS, BASE_HEIGHT, 5, 5);
   glPopMatrix();
}
void lower_arm()
{
   glPushMatrix();   /* make our local copy */
   glTranslatef(0.0, 0.5*LOWER_ARM_HEIGHT, 0.0);   /* translate to our center */
   glScalef(LOWER_ARM_WIDTH, LOWER_ARM_HEIGHT, LOWER_ARM_WIDTH);
   glutWireCube(1.0);
   glPopMatrix();
}
void upper_arm()
{
   glPushMatrix();   /* make our local copy */
   glTranslatef(0.0, 0.5*UPPER_ARM_HEIGHT, 0.0);   /* translate to our center */
   glScalef(UPPER_ARM_WIDTH, UPPER_ARM_HEIGHT, UPPER_ARM_WIDTH);
   glutWireCube(1.0);
   glPopMatrix();
}
void display()
{
   glClear(GL_COLOR_BUFFER_BIT);
   glMatrixMode(GL_MODELVIEW);
   glLoadIdentity();
   glColor3f(1.0, 0.0, 0.0);   /* isn't this a poor way to say "red" ? */
   glRotatef(theta[0], 0.0, 1.0, 0.0);
   base();
   glTranslatef(0.0, BASE_HEIGHT, 0.0);
   glRotatef(theta[1], 0.0, 0.0, 1.0);
   lower_arm();
   glTranslatef(0.0, LOWER_ARM_HEIGHT, 0.0);
   glRotatef(theta[2], 0.0, 0.0, 1.0);
   upper_arm();
   glutSwapBuffers();
}

What about more complex multi-piece objects such as the running man in Figure 5.10? With the right combination of pushes and pops, code similar to the above example would work... but it's much cooler to do it as a tree traversal:
typedef struct treenode {
   GLfloat m[16];
   void (*f)();
   struct treenode *sibling, *child;
} treenode;

void traverse(treenode *root)
{
   if (root == NULL) return;
   glPushMatrix();
   glMultMatrixf(root->m);
   root->f();
   traverse(root->child);
   glPopMatrix();
   traverse(root->sibling);
}

void display()
{
   glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
   glMatrixMode(GL_MODELVIEW);
   glLoadIdentity();
   traverse(torso_root);
   glutSwapBuffers();
}

where

treenode *torso_root = malloc(sizeof(treenode));
torso_root->f = torso;
glLoadIdentity();
glRotatef(theta[0], 0.0, 1.0, 0.0);
glGetFloatv(GL_MODELVIEW_MATRIX, torso_root->m);
torso_root->sibling = NULL;
torso_root->child = head_node;
... etc.
In OpenGL, light calculations are done on a polygon by polygon basis. To get more realistic shading, break your objects into more polygons.
lecture #19 began here
Types of questions. I can ask you anything I like within the subject domain, but typical Dr. J questions are: short answer; math calculation; write a code fragment; or debug or explain code fragment. Dr. J does not usually give "T" or "F" or "multiple choice" questions, nor proofs.
Topics likely to be selected from the following list. On a 50 minute midterm, Dr. J might typically ask 8-10 questions, depending on their estimated length. He has been known to give fewer, such as 5-6 longer questions.
lecture #20 began here
Lecture 20 went over the midterm results.
lecture #21 began here
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);

Once you enable lighting, all the glColor*() calls you've been doing will be ignored... what you see will depend on materials and light sources. The lighting model also uses a normal vector, which is not autocalculated by OpenGL, but rather set by calling glNormal3*(dx,dy,dz).
To specify a light source, you call glLight*(light, param, value) where light is an integer code (which light) param is what to set, and value is the value for that param. For example, the (x,y,z) location of light 0 would be
GLfloat a[] = {1.0, 2.0, 3.0, 1.0};
glLightfv(GL_LIGHT0, GL_POSITION, a);

To set up a light, you may see a lot of calls to glLight*(); besides position you can set up GL_DIFFUSE, GL_SPECULAR, GL_AMBIENT properties, etc. There are defaults for these values so that it is easy to set up a simple lighting (single source, white, bland) model.
typedef struct material {
   GLfloat ambient[4];
   GLfloat diffuse[4];
   GLfloat specular[4];
   GLfloat shininess;
} material;

material rp = {   /* red plastic */
   {0.3, 0.0, 0.0, 1.0},
   {0.6, 0.0, 0.0, 1.0},
   {0.8, 0.6, 0.6, 1.0},
   32.0
};
...
glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT, rp.ambient);
glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, rp.diffuse);
glMaterialfv(GL_FRONT_AND_BACK, GL_SPECULAR, rp.specular);
glMaterialf (GL_FRONT_AND_BACK, GL_SHININESS, rp.shininess);
glNormal3f(nx, ny, nz);   /* unit normal appropriate for this object */
glBegin(...);
...red plastic object
glEnd();
There is a "raster position", or pixel cursor, set using glRasterPos(), and a function
glBitmap(width, height, x, y, x2, y2, bits)

which draws a bitmap. The main use of this facility is to draw text. The "raster position" is transformed by the model-view and projection matrices to yield a screen coordinate. glBitmap uses the current "raster color", as set by glColor*(), which sets both the raster color and the drawing color.
glDrawPixels(width, height, GL_RGB, GL_UNSIGNED_BYTE, imagebits)

draws a rectangle of pixels at the current raster position. There are several binary formats available besides GL_RGB, and several types besides GL_UNSIGNED_BYTE.
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, imagebits)

performs a semi-inverse operation. Note that if you write an image and read it back in, you will not always get back identical bits; that depends on the actual display hardware and how much is "lost in translation".
glCopyPixels(x, y, w, h, GL_COLOR)

performs a "bit blit" to the current raster position of the rectangle given by (x,y,w,h). Blitting to other buffers besides the frame buffer (GL_COLOR) may be possible. Depending on the hardware, there may be a depth buffer, front and back buffers (a la glutSwapBuffers) and, on higher powered machines, stereo images for 3D eyegoggles, etc.
lecture #22 began here
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 64, 64, 0, GL_RGB, GL_UNSIGNED_BYTE, imagebits);
To setup a texture you execute a sequence of calls similar to
GLuint a[32];
glEnable(GL_TEXTURE_2D);
glGenTextures(n, a);
glBindTexture(GL_TEXTURE_2D, a[0]);
// construct the actual texture image in memory
glTexImage2D(...);

Textures are applied to geometric primitives by defining the mapping between texel coordinates and vertices:
glBegin(GL_QUADS);
   glTexCoord2f(0.0, 0.0); glVertex3f(v[0].x, v[0].y, v[0].z);
   glTexCoord2f(0.0, 1.0); glVertex3f(v[1].x, v[1].y, v[1].z);
   glTexCoord2f(1.0, 1.0); glVertex3f(v[2].x, v[2].y, v[2].z);
   glTexCoord2f(1.0, 0.0); glVertex3f(v[3].x, v[3].y, v[3].z);
glEnd();

From a design point of view, it is a bad flaw that these two sets of separate calls (glTexCoord+glVertex) must be made in lock-step. But the association between vertices and texture coordinates has to be made: how would you rather do it?
Some parameters are required in order to define the semantics of textures fully.
glTexParameter*(target, name, value)

is used to set texture mapping parameters.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

sets things up so the texture "wraps" (repeats).
lecture #23 began here
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);

We will talk about blending.
To check your version, call glGetString(GL_VERSION), which returns the version number as a string.
For maximum portability, you should make your source images' (.jpg, .gif, whatever) widths and heights be a power of 2 in the first place. There is a gluScaleImage() function that you can use to change its dimensions, but my experience is that this is both slow and buggy on some platforms.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_NEAREST);

To build and install a set of texture images based on your original source image, instead of a single-level texture, call gluBuild2DMipmaps() instead of glTexImage2D().
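A minimal sketch of that substitution; imagebits, width and height are assumed to come from whatever image reader you are using:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_NEAREST);
gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, width, height,
                  GL_RGB, GL_UNSIGNED_BYTE, imagebits);   /* builds all the mipmap levels */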
gluQuadricTexture(GLUquadricObj *obj, GLboolean);

There is a "lower level" automatic texture coordinate generation mechanism that will work for arbitrary objects. You can define an equation for computing the texture coordinate from the xyz space coordinates:
GLfloat plane_s[] = {0.5, 0.0, 0.0, 0.5};
GLfloat plane_t[] = {0.0, 0.5, 0.0, 0.5};
...
glEnable(GL_TEXTURE_GEN_S);
glEnable(GL_TEXTURE_GEN_T);
...
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
...
glTexGenfv(GL_S, GL_OBJECT_PLANE, plane_s);
glTexGenfv(GL_T, GL_OBJECT_PLANE, plane_t);
To get a "handle" in which you can place a texture.
glGenTextures(1, &i);
To assign the current texture to handle i:
glBindTexture(target, i);

If i has not been defined yet, subsequent texture calls define it. If i has been previously defined, OpenGL switches to it and uses it for subsequently drawn objects.
For curves, this is done with glMap1f(entity, u0, u1, stride, order, data). A subsequent call to glEvalCoord1*() replaces calls to glVertex*(). Example:
glBegin(GL_LINE_STRIP);
   for (i = 0; i <= 20; i++)
      glEvalCoord1f(u0 + i * (u1-u0) / 20.0);
glEnd();

This approximates the curve with 21 points between u0 and u1. Note that instead of (x,y,z) coordinates, glEvalCoord1f() just takes a value of u and passes it to all enabled evaluators, which call e.g. glVertex.
Evaluators can be used for curves, surfaces, normals, colors, and textures. For any of these to work, you have to enable them, e.g. glEnable(GL_MAP1_VERTEX_3).
For equally spaced values of u (such as the 21 values above) there is special support via:
glMapGrid1f(20, u0, u1);
glEvalMesh1(GL_LINE, 0, 20);
lecture #24 began here
typedef struct xyz { double x, y, z; } vertex, normal;

struct mesh {
   int nfaces;
   struct face *faces;
   int nvertices;
   vertex *vertices;
   int nnormals;
   normal *normals;
};

struct face {
   struct mesh *m;
   int nvertices;
   int *vertices;    // indices into m->vertices
   int nnormals;     // either 1, or nvertices
   int *normals;     // indices into m->normals
};
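As a sketch of how such a mesh might be fed to OpenGL (assuming one normal per face is enough, i.e. flat shading), one can walk the faces and emit a polygon apiece:

void draw_mesh(struct mesh *m)
{
   int i, j;
   for (i = 0; i < m->nfaces; i++) {
      struct face *f = &m->faces[i];
      normal *n = &m->normals[f->normals[0]];     /* one normal for the whole face */
      glBegin(GL_POLYGON);
      glNormal3d(n->x, n->y, n->z);
      for (j = 0; j < f->nvertices; j++) {
         vertex *v = &m->vertices[f->vertices[j]];
         glVertex3d(v->x, v->y, v->z);
      }
      glEnd();
   }
}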
vertex | x | y | z |
---|---|---|---|
0 | 0 | 0 | 0 |
1 | 1 | 0 | 0 |
2 | 1 | 1 | 0 |
3 | 0.5 | 1.5 | 0 |
4 | 0 | 1 | 0 |
5 | 0 | 0 | 1 |
6 | 1 | 0 | 1 |
7 | 1 | 1 | 1 |
8 | 0.5 | 1.5 | 1 |
9 | 0 | 1 | 1 |
normal | nx | ny | nz |
---|---|---|---|
0 | -1 | 0 | 0 |
1 | -0.707 | 0.707 | 0 |
2 | 0.707 | 0.707 | 0 |
3 | 1 | 0 | 0 |
4 | 0 | -1 | 0 |
5 | 0 | 0 | 1 |
6 | 0 | 0 | -1 |
Face | Vertices | Associated Normal |
---|---|---|
0 (left) | 0,5,9,4 | 0,0,0,0 |
1 (roof left) | 3, 4, 9, 8 | 1,1,1,1 |
2 (roof right) | 2, 3, 8, 7 | 2, 2, 2,2 |
3 (right) | 1, 2, 7, 6 | 3, 3, 3, 3 |
4 (bottom) | 0, 1, 6, 5 | 4, 4, 4, 4 |
5 (front) | 5, 6, 7, 8, 9 | 5, 5, 5, 5, 5 |
6 (back) | 0, 4, 3, 2, 1 | 6, 6, 6, 6, 6 |
lecture #25 began here
These compression algorithms are hard enough that normal mortals do not interact with them by opening the files and decompressing them by hand. Rather, the compression standards come with C API's for libraries that are fairly portable and standard.
JPEG is particularly good at compressing things like photos. For simple images (for example, black and white documents, or smallish textures), JPEG's sizes are often much larger than that of simpler algorithms.
Some human-readable, relatively open standards include:
#VRML V1.0 ascii DEF Ez3d_Scene Separator { DEF Ez3d_Viewer Switch { whichChild -3 DEF Title Info { string "" } DEF Viewer Info { string "walk" } DEF BackgroundColor Info { string "0.000000 0.000000 0.000000" } DEF Cameras Switch { whichChild 0 PerspectiveCamera { position 0 3.20041 7.72648 orientation -1 0 0 0.392699 focalDistance 4.18154 } } } DEF Ez3d_Environment Switch { whichChild -3 } DEF Ez3d_Objects Switch { whichChild -3 DEF Cube001 Separator { ShapeHints { vertexOrdering COUNTERCLOCKWISE shapeType UNKNOWN_SHAPE_TYPE creaseAngle 0.523599 } DEF Ez3d_Cube001 Cube { } } } }
xof 0303txt 0032 template VertexDuplicationIndices {DWORD nIndices; DWORD nOriginalVertices; array DWORD indices[nIndices]; } template XSkinMeshHeader { <3cf169ce-ff7c-44ab-93c0-f78f62d172e2> WORD nMaxSkinWeightsPerVertex; WORD nMaxSkinWeightsPerFace; WORD nBones; } template SkinWeights { <6f0d123b-bad2-4167-a0d0-80224f25fabb> STRING transformNodeName; DWORD nWeights; array DWORD vertexIndices[nWeights]; array float weights[nWeights]; Matrix4x4 matrixOffset; } Frame RootFrame { FrameTransformMatrix { 1.000000,0.000000,0.000000,0.000000, 0.000000,1.000000,0.000000,0.000000, 0.000000,0.000000,-1.000000,0.000000, 0.000000,0.000000,0.000000,1.000000;; } Frame Cube { FrameTransformMatrix { 1.000000,0.000000,0.000000,0.000000, 0.000000,1.000000,0.000000,0.000000, 0.000000,0.000000,1.000000,0.000000, 0.000000,0.000000,0.000000,1.000000;; } Mesh { 24; 1.000000; 1.000000; -1.000000;, 1.000000; -1.000000; -1.000000;, -1.000000; -1.000000; -1.000000;, -1.000000; 1.000000; -1.000000;, 1.000000; 1.000000; 1.000000;, -1.000000; 1.000000; 1.000000;, -1.000000; -1.000000; 1.000000;, 1.000000; -1.000000; 1.000000;, 1.000000; 1.000000; -1.000000;, 1.000000; 1.000000; 1.000000;, 1.000000; -1.000000; 1.000000;, 1.000000; -1.000000; -1.000000;, 1.000000; -1.000000; -1.000000;, 1.000000; -1.000000; 1.000000;, -1.000000; -1.000000; 1.000000;, -1.000000; -1.000000; -1.000000;, -1.000000; -1.000000; -1.000000;, -1.000000; -1.000000; 1.000000;, -1.000000; 1.000000; 1.000000;, -1.000000; 1.000000; -1.000000;, 1.000000; 1.000000; 1.000000;, 1.000000; 1.000000; -1.000000;, -1.000000; 1.000000; -1.000000;, -1.000000; 1.000000; 1.000000;; 6; 4; 0, 3, 2, 1;, 4; 4, 7, 6, 5;, 4; 8, 11, 10, 9;, 4; 12, 15, 14, 13;, 4; 16, 19, 18, 17;, 4; 20, 23, 22, 21;; MeshMaterialList { 1; 6; 0, 0, 0, 0, 0, 0;; Material Material { 0.800000; 0.800000; 0.800000;1.0;; 0.500000; 1.000000; 1.000000; 1.000000;; 0.0; 0.0; 0.0;; } //End of Material } //End of MeshMaterialList MeshNormals { 24; 0.577349; 0.577349; -0.577349;, 0.577349; -0.577349; -0.577349;, -0.577349; -0.577349; -0.577349;, -0.577349; 0.577349; -0.577349;, 0.577349; 0.577349; 0.577349;, -0.577349; 0.577349; 0.577349;, -0.577349; -0.577349; 0.577349;, 0.577349; -0.577349; 0.577349;, 0.577349; 0.577349; -0.577349;, 0.577349; 0.577349; 0.577349;, 0.577349; -0.577349; 0.577349;, 0.577349; -0.577349; -0.577349;, 0.577349; -0.577349; -0.577349;, 0.577349; -0.577349; 0.577349;, -0.577349; -0.577349; 0.577349;, -0.577349; -0.577349; -0.577349;, -0.577349; -0.577349; -0.577349;, -0.577349; -0.577349; 0.577349;, -0.577349; 0.577349; 0.577349;, -0.577349; 0.577349; -0.577349;, 0.577349; 0.577349; 0.577349;, 0.577349; 0.577349; -0.577349;, -0.577349; 0.577349; -0.577349;, -0.577349; 0.577349; 0.577349;; 6; 4; 0, 3, 2, 1;, 4; 4, 7, 6, 5;, 4; 8, 11, 10, 9;, 4; 12, 15, 14, 13;, 4; 16, 19, 18, 17;, 4; 20, 23, 22, 21;; } //End of MeshNormals } // End of the Mesh Cube } // SI End of the Object Cube } // End of the Root Frame
// version 103 // numTextures,numTris,numVerts,numParts,1,numLights,numCameras 1,10,8,1,1,0,0 // partList: firstVert,numVerts,firstTri,numTris,"name" 0,8,0,10,"pCube2" // texture list: name pillar2.tga // triList: materialIndex,vertices(index, texX, texY) 0, 2,511.834,0.308228, 1,255.801,255.708, 0,255.831,0.218765 0, 2,511.834,0.308228, 3,511.804,255.798, 1,255.801,255.708 0, 4,196.009,19.0188, 3,255.49,78.5001, 2,196.009,78.5001 0, 4,196.009,19.0188, 5,255.49,19.0188, 3,255.49,78.5001 0, 6,61.9285,79.5219, 5,121.41,255.527, 4,61.9285,255.527 0, 6,61.9285,79.5219, 7,121.41,79.5219, 5,121.41,255.527 0, 3,123.297,79.5219, 7,182.778,255.527, 1,123.297,255.527 0, 3,123.297,79.5219, 5,182.778,79.5219, 7,182.778,255.527 0, 4,184.666,79.5219, 0,244.147,255.527, 6,184.666,255.527 0, 4,184.666,79.5219, 2,244.147,79.5219, 0,244.147,255.527 // vertList: x,y,z -2.26879,-6.65022,-2.2596 2.25041,-6.65022,-2.2596 -2.26879,6.72212,-2.2596 2.25041,6.72212,-2.2596 -2.26879,6.72212,2.2596 2.25041,6.72212,2.2596 -2.26879,-6.65022,2.2596 2.25041,-6.65022,2.2596 // lightList: "name", type, x,y,z, r,g,b, (type-specific info) // cameraList: "name", x,y,z, p,b,h, fov(rad) partTree 1 -1 posOrientList 1 -0.00918928,0.0359506,0, 0,0,0 partUserTextList 1 0
lecture #26 began here
/* * pngread(filename, p) - read png file, setting gf_ globals */ static int pngread(char *filename, int p) { unsigned char header[8]; int bit_depth, color_type; double gamma; png_uint_32 i, rowbytes; png_bytepp row_pointers = NULL; png_color_16p pBackground; png_structp png_ptr = NULL; png_infop info_ptr = NULL; png_infop end_info = NULL; gf_f = NULL; #ifdef MSWindows if ((gf_f = fopen(filename, "rb")) == NULL) { #else /* MSWindows */ if ((gf_f = fopen(filename, "r")) == NULL) { #endif /* MSWindows */ return Failed; } /* read the first n bytes (1-8, 8 used here) and test for png signature */ fread(header, 1, 8, gf_f); if (png_sig_cmp(header, 0, 8)) { return Failed; /* (NOT_PNG) */ } png_ptr = png_create_read_struct(PNG_LIBPNG_VER_STRING, NULL, NULL, NULL); if (!png_ptr) return Failed; info_ptr = png_create_info_struct(png_ptr); if (!info_ptr) { png_destroy_read_struct(&png_ptr, NULL, NULL); return Failed; } end_info = png_create_info_struct(png_ptr); if (!end_info){ png_destroy_read_struct(&png_ptr, &info_ptr, NULL); return Failed; } if (setjmp(png_jmpbuf(png_ptr))) { png_destroy_read_struct(&png_ptr, &info_ptr, &end_info); if (gf_f) { fclose(gf_f); gf_f = NULL; } return Failed; } png_init_io(png_ptr, gf_f); png_set_sig_bytes(png_ptr, 8); png_read_info(png_ptr, info_ptr); { unsigned long mywidth, myheight; png_get_IHDR(png_ptr, info_ptr, &mywidth, &myheight, &bit_depth, &color_type, NULL, NULL, NULL); gf_width = mywidth; gf_height = myheight; } /* * Expand palette images to RGB, low-bit-depth grayscale images to 8 bits, * transparency chunks to full alpha channel; strip 16-bit-per-sample * images to 8 bits per sample; and convert grayscale to RGB[A] */ if (color_type == PNG_COLOR_TYPE_PALETTE) png_set_expand(png_ptr); if (color_type == PNG_COLOR_TYPE_GRAY && bit_depth < 8) png_set_expand(png_ptr); if (png_get_valid(png_ptr, info_ptr, PNG_INFO_tRNS)) png_set_expand(png_ptr); if (bit_depth == 16) png_set_strip_16(png_ptr); if (color_type == PNG_COLOR_TYPE_GRAY || color_type == PNG_COLOR_TYPE_GRAY_ALPHA) png_set_gray_to_rgb(png_ptr); /* * if it doesn't have a file gamma, don't * do any correction */ if (png_get_gAMA(png_ptr, info_ptr, &gamma)) png_set_gamma(png_ptr, GammaCorrection, gamma); /* * All transformations have been registered; now update info_ptr data, * get rowbytes and channels, and allocate image memory. */ png_read_update_info(png_ptr, info_ptr); rowbytes = png_get_rowbytes(png_ptr, info_ptr); /* pChannels = (int)png_get_channels(png_ptr, info_ptr); */ if ((gf_string = (unsigned char *)malloc(rowbytes*gf_height)) == NULL) { png_destroy_read_struct(&png_ptr, &info_ptr, &end_info); return Failed; } if ((row_pointers=(png_bytepp)malloc(gf_height*sizeof(png_bytep))) == NULL){ png_destroy_read_struct(&png_ptr, &info_ptr, &end_info); free(gf_string); gf_string = NULL; return Failed; } /* set the individual row_pointers to point at the correct offsets */ for (i = 0; i < gf_height; ++i) row_pointers[i] = gf_string + i*rowbytes; /* now we can go ahead and just read the whole image */ png_read_image(png_ptr, row_pointers); /* and we're done! (png_read_end() can be omitted if no processing of * post-IDAT text/time/etc. is desired) */ free(row_pointers); row_pointers = NULL; png_read_end(png_ptr, NULL); if (png_ptr && info_ptr) { png_destroy_read_struct(&png_ptr, &info_ptr, &end_info); png_ptr = NULL; info_ptr = NULL; end_info = NULL; } fclose(gf_f); gf_f = NULL; return Succeeded; } #endif /* HAVE_LIBPNG */
lecture #27 began here
lecture #28 began here
Concept:
gluTessCallback(tess, GLU_TESS_BEGIN, tcbBegin); gluTessCallback(tess, GLU_TESS_VERTEX, tcbVertex); gluTessCallback(tess, GLU_TESS_END, tcbEnd); /* plus the following only if you intersect yourself: */ gluTessCallback(tess, GLU_TESS_COMBINE, tcbCombine); /* plus the following, in order to catch errors */ gluTessCallback(tess, GLU_TESS_ERROR, tcbError);
GLdouble data[numVerts][3];

gluTessBeginPolygon(tess, NULL);
gluTessBeginContour(tess);
for (i = 0; i < numVerts; i++)
   gluTessVertex(tess, data[i], data[i]);
gluTessEndContour(tess);
gluTessEndPolygon(tess);
void tcbBegin (GLenum prim) { glBegin(prim); }
void tcbVertex (void *data) { glVertex3dv((GLdouble *)data); }
void tcbEnd () { glEnd(); }
void tcbCombine (GLdouble c[3], void *d[4], GLfloat w[4], void **out)
{
   GLdouble *nv = (GLdouble *) malloc(sizeof(GLdouble) * 3);
   nv[0] = c[0]; nv[1] = c[1]; nv[2] = c[2];
   *out = (void *)nv;
}
gluTessBeginPolygon(tobj, NULL); gluTessBeginContour(tobj); gluTessVertex(tobj, v1, v1); gluTessVertex(tobj, v2, v2); gluTessVertex(tobj, v3, v3); gluTessVertex(tobj, v4, v4); gluTessEndContour(tobj); gluTessBeginContour(tobj); gluTessVertex(tobj, v5, v5); gluTessVertex(tobj, v6, v6); gluTessVertex(tobj, v7, v7); gluTessEndContour(tobj); gluTessEndPolygon(tobj);
lecture #29 began here
By the way, you should have read Chapter 7 (cameras), and we are now talking about selected ideas from Chapter 8. We have already talked about some stuff, like textures; we are filling in stuff we haven't covered. Today: an extra pass on shading and hidden surfaces; Monday: shadows.
OpenGL offers glShadeModel(GL_FLAT) and glShadeModel(GL_SMOOTH), which does Gouraud shading, interpolating colors at each pixel in between vertices for a given triangle.
Lambert's Law: as a face is turned away from the light source, the object appears dimmer because the area shined on gets smaller.
Effects of distance: light gets dimmer as it gets farther away, but it is easy to overdo this.
Equations. Since ambient is everywhere, it is cheapest to calculate. Diffuse light will decrease with Lambert's law. Specular light will depend on shininess and angle between light, object, and eye. See text.
light = ambient + diffuse + specular
light = I_a ρ_a + I_d ρ_d × lambert + I_s ρ_s × phong
Actually, to be more technical, OpenGL implements Gouraud shading, a per-face or per-vertex normal-driven calculation of specular light effects. When normals vary per vertex, trivial interpolation fills in the differences. Full-on Phong shading would calculate normals at every pixel; it seems to be at least 8x slower, and is not built in to OpenGL, but it looks nicer.
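Restating the sum above as code (a sketch only; the I's are light intensities, the ρ's the material's reflection coefficients, f the shininess exponent that the text's equation leaves implicit in the phong term, and lambert/phong the two clamped dot products):

#include <math.h>

double light_intensity(double Ia, double Id, double Is,      /* light intensities      */
                       double pa, double pd, double ps,      /* material coefficients  */
                       double f,                             /* shininess exponent     */
                       double lambert,                       /* max(0, n . l)          */
                       double phong)                         /* max(0, r . v)          */
{
   return Ia * pa + Id * pd * lambert + Is * ps * pow(phong, f);
}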
for every face to be rendered
   for every pixel p at (i,j)
      if depth(p) < zb[i,j] then
         fb[i,j] = p
         zb[i,j] = depth(p)

Good: faces can be sent to OpenGL (rendered) in any order.
It is possible (and on slow platforms it may be critical) to implement your own algorithms to reduce the amount of stuff you send to OpenGL. Dr. J has a great anecdote on this, regarding the time he first scaled his virtual environment software from single-room to whole-floor.
lecture #30 began here
Section 8.5 points out that despite my preoccupation with reading textures in from files, textures can be generated by code ("procedural textures"). The examples, things like checkerboard patterns, tend to resemble the "fill patterns" used in 2D graphics API's such as Xlib/X11. It is important to note that arbitrarily powerful and complicated code can produce rich and complicated textures (think: L-systems and fractals), but since the code gets dumped into a 2D image eventually, it is hard to see what it saves (load times?).
One of the juicier points to note as one maps textures is shown in Figure 8.42 on linear interpolation vs. correct interpolation. There is also a good discussion of environment mapping, a technique that makes for nice shiny reflective surfaces using textures computed using a "surrounding cube".
A more general technique uses a lot of memory to throw at the problem: a shadow buffer. A shadow buffer is a z-buffer rendered with the camera located at the light source. It only has to be (re)computed when the visible objects move relative to the light source, not when the camera moves. During rendering, a vertex V is in shadow if its distance from the light source is larger than the shadow buffer's value for the nearest object in that direction; vertices in shadow are rendered with only ambient light sources.
lecture #31 began here
Virtual lecture brought to you by the college of engineering dean's search.
Chapter 9 is largely about images and pixmaps, material that we have introduced, or material that is not necessary. One important concept that we haven't discussed is antialiasing, or "removing jaggies". There are several approaches discussed (prefiltering, supersampling, postfiltering), and it is not important for you to memorize them all, but it is important for you to understand the general concepts.
The reason jaggies show up at all is because pixels are discrete and rendering continuous entities on them involves approximation and round-off error. Given the original equations (for example, for some line segment) one approach to antialiasing would be to calculate the partially-hit pixels' colors in proportion to how much they were hit.
Supersampling involves taking a higher resolution and computing pixels' values as averages of the surrounding (tinier) datapoints. A refinement on this is to weight the center point more than its neighbor/edgepoints.
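A sketch of the postfiltering step, assuming the scene was rendered into a grayscale buffer super at k times the resolution in each direction; a weighted filter would give the center sample a larger share than this plain average does:

void downsample(const unsigned char *super, unsigned char *screen,
                int width, int height, int k)
{
   int x, y, i, j;
   for (y = 0; y < height; y++)
      for (x = 0; x < width; x++) {
         int sum = 0;
         for (j = 0; j < k; j++)                 /* average the k x k block of samples */
            for (i = 0; i < k; i++)
               sum += super[(y * k + j) * (width * k) + (x * k + i)];
         screen[y * width + x] = sum / (k * k);
      }
}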
lecture #32 began here
Lecture 32 was a guest lecture by Keith Jeffery about Shaders.
lecture #33 began here
I am skipping over chapter 10 on curve design on the grounds that we've done a bunch with curves earlier in the semester, but I reserve the right to come back to it and talk more about b-splines or NURBS if I get a request to do so, or get a personal itch to go at it.
I am glossing over chapter 11 on color at this point; we've talked a fair amount already, but there is one color topic I want to be sure that we covered:
lecture #34 began here
The Volume Problem: you can capture any aspect of program behavior textually, but the resulting log files easily grow to megabytes and beyond, for all but the smallest toy programs.
PV tools use graphics to deal with volume, and must also solve two other hard problems: intrusion (observing some behavior modifies it), and access to program behavior.
lecture #35 began here
Basic idea: provide a more accurate model of the effect of light in the rendering. If you start from all the light sources, almost all of your model effort will be wasted, bouncing light rays on stuff in directions that won't be visible to the camera. We want to approximate that, with a shortcut that computes the visible portion of the overall lighting in the scene.
Idea: "shoot" a ray in reverse from the eye out through each pixel. The color we see for that pixel will go back to whatever light source would bounce off something into our field of vision. This is sort of the opposite of the usual 3D projection-onto-viewport.
Idea: use a parametric equation for the light ray, to walk backwards into the scene, trying to map it back to a light source after bouncing off whatever object it hits.
Idea: how far backwards we are willing to trace the ray should depend on the material surface(s) it hits. Diffuse ones had better be direct hits from some light source, while shiny ones would reflect, and transparent ones would refract.
r(t) = E + D_rc * t

E is the eye point (x,y,z) and D_rc is the direction vector. Note that this is not exactly the camera direction; it is the direction of the eye looking through pixel (r,c), which is non-trivial to compute, but between your geometry skills and the materials in chapters 6, 7 and 12 I expect you could find it.
define objects and light sources
set up the camera
for(int r = 0; r < nRows; r++)
   for(int c = 0; c < nCols; c++) {
      build the rc-th ray
      find ALL intersections of the rc-th ray with the objects in the scene
      identify the intersection closest to the eye
      compute the hit point and normal where the ray hits the object
      find the color of the light the eye receives along the ray
      }
Sphere:

F(x,y,z) = x^2 + y^2 + z^2 - 1 = 0

Cylinder:

F(x,y,z) = x^2 + y^2 - 1 = 0, for 0 < z < 1

Intersection will occur at F(r(t)) = 0, i.e. solve for t in

F(E + D * t) = 0
lecture #36 began here
Practice exercise: where does the ray r(t) = (4,1,3) + (-3,-5,-3)t hit the generic plane? E_z = 3, D_z = -3, so t_hit = -E_z / D_z = 1.
The generic sphere implicit function was F(x,y,z) = x^2 + y^2 + z^2 - 1.
Since the distance of the point from the origin is sqrt(x^2 + y^2 + z^2),
F(x,y,z) = F(P) = |P|^2 - 1.
Substituting (E + Dt) for P, we get
|E + Dt|^2 - 1 = |E|^2 + 2(E • D)t + |D|^2 t^2 - 1 = 0.
Distributing squares and switching terms around, we get
(|D|^2) t^2 + (2(E • D)) t + (|E|^2 - 1) = 0.
As gross as this is, it is a quadratic equation At^2 + Bt + C = 0, where
A = |D|^2, B = 2(E • D), C = |E|^2 - 1, so
t = ( -2(E • D) ± sqrt( (2(E • D))^2 - 4 |D|^2 (|E|^2 - 1) ) ) / ( 2 |D|^2 )
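A small sketch of that solution in C (not from the lecture): it returns how many real roots there are and leaves them smallest-first in t.

#include <math.h>

int hit_generic_sphere(const double E[3], const double D[3], double t[2])
{
   double A = D[0]*D[0] + D[1]*D[1] + D[2]*D[2];           /* |D|^2      */
   double B = 2.0 * (E[0]*D[0] + E[1]*D[1] + E[2]*D[2]);   /* 2 (E . D)  */
   double C = E[0]*E[0] + E[1]*E[1] + E[2]*E[2] - 1.0;     /* |E|^2 - 1  */
   double disc = B*B - 4.0*A*C;
   if (disc < 0.0) return 0;                               /* ray misses the sphere */
   t[0] = (-B - sqrt(disc)) / (2.0*A);
   t[1] = (-B + sqrt(disc)) / (2.0*A);
   return (disc == 0.0) ? 1 : 2;
}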
Example exercise: where does r(t)=(3,2,3)+(-3,-2,-3)t hit the generic sphere?
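One way to work it, using the quadratic above: E = (3,2,3) and D = (-3,-2,-3), so A = |D|^2 = 22, E • D = -22 so B = -44, and C = |E|^2 - 1 = 21. Then t = (44 ± sqrt(1936 - 1848)) / 44 = (44 ± sqrt(88)) / 44 ≈ 1 ± 0.213, so the ray enters the generic sphere at t ≈ 0.787 and exits at t ≈ 1.213. (Sanity check: at t = 1 the ray is at the origin, the center of the sphere, so there must be one hit on each side of t = 1.)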
There were a few class periods devoted to examples, in particular, there were texture and lighting examples. We grabbed a chainmail texture off a random image in a random website, converted it into a PPM and read the PPM and used it in an OpenGL program. And we looked at the "lighting lab" example, which also demonstrated several materials. We looked at a student example which moved the camera point (eye position) and played with orthographic versus perspective projection.