Monday, January 16, 2017

problem fundamentals

Our purpose is rendering from a model. The model defines geometries to render, and to render them we use a camera defined in model space. The camera consists of a focus, a point we call F, and an array of pixels, also defined in model space, each of which can be represented by a point, say the center of the pixel square or one of its corners. If we extend a line from F through a pixel's point and on into the model space, that line will either intersect or miss any given geometry defined in the model. If the line does intersect a geometry that is a surface, we can color the screen pixel corresponding to that camera pixel with the color of that surface.
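The per-pixel scheme above can be sketched as a loop. This is a minimal sketch, not the eventual implementation; `pixel_point` and `intersects` are hypothetical stand-ins for machinery the rest of the problem develops.

```python
def render(pixels_wide, pixels_high, pixel_point, intersects,
           surface_color, background_color):
    """Color each screen pixel by casting a line from F through its
    camera-pixel point and testing it against the model's geometry.

    pixel_point(i, j) -> the pixel's point in model space (hypothetical).
    intersects(p)     -> True if the line from F through p hits the
                         surface (hypothetical).
    """
    image = []
    for j in range(pixels_high):
        row = []
        for i in range(pixels_wide):
            p = pixel_point(i, j)  # pixel center (or corner) in model space
            row.append(surface_color if intersects(p) else background_color)
        image.append(row)
    return image
```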

Let us assume we are given the point F; definitive information about a line we'll call V, the angle of view, or the direction in which our camera is pointed; a value Fd, the distance from our film plane to F; the width and height of the film plane in pixels; the width of the film plane as a multiple of Fd; and, finally, some measure of the rotational orientation of the film plane. This is our required camera information, and from it we can derive the location of each pixel in the model and the formula for a line from F through that pixel. Now let us add to our model some geometry defining a surface. We can then test each pixel's view line for intersection with that geometry, and color each corresponding pixel on the screen the color of the geometry if its view line intersects it, or a background color if it does not.
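One way to derive pixel locations from exactly the camera information listed above is to build two axes spanning the film plane and step along them. This is a sketch under assumptions: square pixels, and an `up` vector (not parallel to V) standing in for "some measure of the rotational orientation."

```python
import math

def pixel_points(F, V, Fd, px_w, px_h, plane_w_mult, up=(0.0, 0.0, 1.0)):
    """Return a px_h x px_w grid of pixel-center points in model space.

    F            : focus point (x, y, z)
    V            : view direction (need not be unit length)
    Fd           : distance from F to the film plane
    px_w, px_h   : film plane size in pixels
    plane_w_mult : film plane width as a multiple of Fd
    up           : assumed stand-in for the plane's rotational orientation
    """
    def norm(v):
        m = math.sqrt(sum(c * c for c in v))
        return tuple(c / m for c in v)

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    v = norm(V)                         # unit view direction
    right = norm(cross(v, up))          # horizontal film-plane axis
    down = cross(v, right)              # vertical axis, pointing down-screen
    pix = (plane_w_mult * Fd) / px_w    # side length of one square pixel
    center = tuple(f + Fd * c for f, c in zip(F, v))  # film plane center
    rows = []
    for j in range(px_h):
        row = []
        for i in range(px_w):
            du = (i + 0.5 - px_w / 2) * pix   # offset along `right`
            dv = (j + 0.5 - px_h / 2) * pix   # offset along `down`
            row.append(tuple(c + du * r + dv * d
                             for c, r, d in zip(center, right, down)))
        rows.append(row)
    return rows
```

Given a pixel point P from this grid, the view line is F + t(P − F) for t ≥ 0.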

Our problem thus branches in two directions. First, given the listed camera information, how do we locate each pixel in the model? Second, how are renderable geometries in the model constructed, and how can we tell whether a point is or is not in a given geometry?
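As a concrete instance of the second question, a sphere is one geometry for which the view-line test has a closed form: substituting the ray F + t·d into |p − center|² = r² yields a quadratic in t, and the line intersects the sphere exactly when the discriminant is non-negative and a root has t ≥ 0. A sketch (the sphere is an illustrative choice, not a geometry the model has defined yet):

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    """True if origin + t*direction (t >= 0) intersects the sphere."""
    oc = tuple(o - c for o, c in zip(origin, center))
    # coefficients of the quadratic a*t^2 + b*t + c = 0
    a = sum(d * d for d in direction)
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return False                      # line misses the sphere entirely
    t_far = (-b + math.sqrt(disc)) / (2.0 * a)
    return t_far >= 0                     # hit lies in front of the origin
```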