Explaining basic 3D theory


Coordinate system

3D is essentially about representing shapes in 3D space and using a coordinate system to calculate their position.

[Image: the coordinate system]

WebGL uses a right-handed coordinate system — the x axis points to the right, the y axis points up, and the z axis points out of the screen, as shown in the image above.
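For instance, expressed with the three.js library (used here purely as an illustrative sketch; any library with 3D vectors would do), a position in this right-handed coordinate system is just an (x, y, z) triple:

```ts
import * as THREE from "three";

// One unit to the right (+x), two units up (+y),
// three units toward the viewer (+z, out of the screen).
const position = new THREE.Vector3(1, 2, 3);

// Moving "into" the screen means decreasing z in a right-handed system.
position.z -= 5;

console.log(position.toArray()); // [1, 2, -2]
```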

Objects

Different types of objects are built using vertices. A vertex is a point in space that has its own 3D position in the coordinate system and, usually, some additional information that defines it. Every vertex is described by the following attributes:

  • Position: Identifies the vertex in 3D space (x, y, z).
  • Color: Holds an RGBA value (R, G, and B for the red, green, and blue channels, alpha for transparency — all values range from 0.0 to 1.0).
  • Normal: A way to describe the direction the vertex is facing.
  • Texture: A 2D image that the vertex can use to decorate the surface it belongs to, instead of a simple color.

You can use this information to build geometry - here is an example cube:

[Image: cube]

Faces of a given shape are planes between vertices. For example, a cube has 8 distinct vertices (points in space) and 6 distinct faces, each made of 4 vertices. A normal defines which way the face is directed. Also, by connecting the points we are creating the edges of the cube. The geometry is built from vertices and faces, while the material is a texture, which uses a color or an image. If we connect the geometry with the material we get a mesh.
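As a rough sketch of those ideas in three.js (assumed here because the article points to the three.js camera discussion below; the sizes and colors are arbitrary), a geometry paired with a material produces a mesh:

```ts
import * as THREE from "three";

// The convenient way: BoxGeometry generates the cube's 8 corner vertices,
// its 6 faces (as 12 triangles), plus normals and texture coordinates.
const geometry = new THREE.BoxGeometry(1, 1, 1);

// The material describes the surface — here, a plain color.
const material = new THREE.MeshStandardMaterial({ color: 0x00aaff });

// Geometry + material = mesh, the object that gets added to a scene.
const cube = new THREE.Mesh(geometry, material);

// The same idea with raw vertex data: each vertex carries a position (x, y, z).
// This BufferGeometry holds a single triangular face built from 3 vertices.
const triangle = new THREE.BufferGeometry();
const positions = new Float32Array([
  -1, -1, 0, // vertex 0
   1, -1, 0, // vertex 1
   0,  1, 0, // vertex 2
]);
triangle.setAttribute("position", new THREE.BufferAttribute(positions, 3));
triangle.computeVertexNormals(); // derive the face normal from the positions
```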

Rendering pipeline

The rendering pipeline is the process of preparing an image and outputting it to the screen. The graphics rendering pipeline takes a 3D object built from primitives described using vertices, applies processing, computes fragments and renders them as pixels on a 2D screen.

[Image: the rendering pipeline]

The terminology used in the above diagram is as follows:

  • Primitives: The input to the pipeline — built from vertices; a primitive can be a triangle, a point, or a line.
  • Fragments: 3D projections of pixels, which have all the same attributes as pixels.
  • Pixels: Points on the screen arranged in a 2D grid, each holding an RGBA color.

Vertex and fragment processing is programmable—you can write your own shaders to manipulate the output.
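For example, a minimal WebGL 1 shader pair could look like the sketch below (the attribute and uniform names are arbitrary placeholders): the vertex shader computes each vertex's clip-space position, and the fragment shader decides the color of each fragment.

```ts
// Vertex shader: runs once per vertex and outputs its clip-space position.
const vertexShaderSource = `
  attribute vec3 aPosition;          // per-vertex position from a buffer
  uniform mat4 uModelViewProjection; // combined model, view and projection matrix
  void main() {
    gl_Position = uModelViewProjection * vec4(aPosition, 1.0);
  }
`;

// Fragment shader: runs once per fragment and outputs its final color.
const fragmentShaderSource = `
  precision mediump float;
  void main() {
    gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0); // opaque orange
  }
`;
```

These sources would still need to be compiled and linked into a program with the WebGL API (or handed to a library such as three.js via a ShaderMaterial) before they affect the output.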

Vertex processing

Vertex processing is the act of combining information about individual vertices into primitives and setting their coordinates in 3D space for the viewer to see. It's like taking a photo of a given landscape that you've prepared - you have to first place the subject, configure the camera, and then shoot.

[Image: vertex processing]

There are four stages to this processing: the first one involves arranging objects in the world, and is known as the model transformation. Then there is the view transformation, which takes care of positioning and orienting the camera in 3D space. The camera has three parameters — position, orientation, and direction — that must be defined for a newly created scene.

[Image: camera]

Then, the projection transformation (also called the perspective transformation) defines the camera settings. It sets what can be seen by the camera — the configuration includes the field of view, the aspect ratio, and the optional near and far planes. Read the camera paragraph in the three.js article to learn about these.

[Image: camera settings]

The final step is the viewport transformation, which involves outputting everything for the next step of the rendering pipeline.
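A hedged sketch of how those four stages map onto three.js calls (three.js is assumed purely for illustration; the sizes and angles are arbitrary):

```ts
import * as THREE from "three";

// Model transformation: place and orient the object in the world.
const cube = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshStandardMaterial({ color: 0x00aaff })
);
cube.position.set(0, 0.5, -2);
cube.rotation.y = Math.PI / 4;

// Projection transformation: field of view, aspect ratio, near and far planes.
const camera = new THREE.PerspectiveCamera(45, 16 / 9, 0.1, 100);

// View transformation: position and orient the camera.
camera.position.set(0, 2, 5);
camera.lookAt(cube.position);

// Viewport transformation: the renderer maps the result onto screen pixels.
const renderer = new THREE.WebGLRenderer();
renderer.setSize(1280, 720);

const scene = new THREE.Scene();
scene.add(cube, new THREE.AmbientLight(0xffffff));
renderer.render(scene, camera);
```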

Rasterization

Rasterization converts a primitive (connected vertices) into a set of fragments.

[Image: rasterization]

These fragments (i.e. 2D projections of 3D pixels) are aligned to the pixel grid, so that eventually they can be printed as pixels on the 2D screen display during the output merging stage.

Fragment processing

Fragment processing focuses on textures and lighting - it calculates the final color based on the given parameters.

[Image: fragment processing]

Textures

Textures are 2D images used in 3D space to make objects look better and more realistic. Textures are assembled from individual texture elements called texels, just as picture elements are assembled from pixels. Applying textures to objects happens during the fragment processing stage of the rendering pipeline, which allows us to adjust them by wrapping and filtering if necessary.

Texture wrapping allows us to repeat a 2D image around a 3D object. Texture filtering is applied when the original resolution of the texture image differs from the displayed fragment — the texture is minified or magnified accordingly.
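A small sketch of wrapping and filtering with three.js (assumed for illustration; the image file name is a placeholder):

```ts
import * as THREE from "three";

// Load a 2D image as a texture (the file name is a placeholder).
const texture = new THREE.TextureLoader().load("crate.png");

// Wrapping: repeat the 2D image around the 3D object, twice in each direction.
texture.wrapS = THREE.RepeatWrapping;
texture.wrapT = THREE.RepeatWrapping;
texture.repeat.set(2, 2);

// Filtering: how texels are sampled when the texture is minified or magnified.
texture.minFilter = THREE.LinearMipmapLinearFilter;
texture.magFilter = THREE.LinearFilter;

// Use the texture to decorate a surface instead of a flat color.
const material = new THREE.MeshStandardMaterial({ map: texture });
```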

Lighting

The colors we see on the screen are the result of the light source interacting with the surface color of the object's material. Light may be absorbed or reflected. The standard Phong lighting model implemented in WebGL has four basic types of lighting (a minimal setup is sketched after this list):

  • Diffuse: A distant directional light, such as the sun.
  • Specular: A point of light, like a light bulb in a room or a flashlight.
  • Ambient: A constant light applied to everything in the scene.
  • Emissive: Light emitted directly by the object.
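A minimal three.js setup that roughly corresponds to these four types (a sketch under the assumption that three.js is used; colors and intensities are arbitrary):

```ts
import * as THREE from "three";

const scene = new THREE.Scene();

// Diffuse: a distant directional light, like the sun.
const sun = new THREE.DirectionalLight(0xffffff, 1.0);
sun.position.set(5, 10, 7);
scene.add(sun);

// Specular highlights show up under a point light, like a light bulb.
const bulb = new THREE.PointLight(0xffe0b0, 0.8);
bulb.position.set(0, 2, 0);
scene.add(bulb);

// Ambient: a constant light applied to everything in the scene.
scene.add(new THREE.AmbientLight(0x404040));

// Emissive: light "emitted" by the object itself, set on its material.
const glowingMaterial = new THREE.MeshPhongMaterial({
  color: 0x222222,
  emissive: 0x00ff66,
});
```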

Output merging

During the output merging stage, all the fragments of the primitives from 3D space are transformed into a 2D grid of pixels, which are then printed on the screen display.

[Image: output merging]

During output merging, some processing is also applied to discard information that is not needed — for example, the parameters of objects that are outside the screen or behind other objects, and therefore not visible, are not calculated.
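For instance, with the raw WebGL API (a sketch only — it assumes a canvas element exists in the page), depth testing and back-face culling are the switches behind this step:

```ts
const canvas = document.querySelector("canvas");
const gl = canvas?.getContext("webgl");

if (gl) {
  // Depth testing discards fragments hidden behind closer ones.
  gl.enable(gl.DEPTH_TEST);
  gl.depthFunc(gl.LEQUAL);

  // Back-face culling skips triangles facing away from the camera.
  gl.enable(gl.CULL_FACE);
  gl.cullFace(gl.BACK);

  // Clear the color and depth buffers before drawing the next frame.
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
}
```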

Original Link: Explaining Basic 3D Theory (mvrlink.com)


Origin: blog.csdn.net/ygtu2018/article/details/132601157