OpenGL Basic Concepts

  • Vertex

    The smallest unit of a 3D model is a point, or vertex.

    Vertices represent points in three-dimensional space and are used to build more complex objects.

    Polygons are made up of vertices, and objects are made up of polygons.

    Although desktop OpenGL supports many kinds of polygons, OpenGL ES only supports triangles, so even a square has to be split into two triangles before it can be drawn, as the sketch below shows.
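
    The following is a minimal sketch of that idea: a square expressed as two triangles. The coordinates use the default normalized range described in the next section, and the flat x, y, z layout is just an illustrative convention, not a format OpenGL requires.

        /* A square as two triangles, one (x, y, z) position per vertex. */
        static const float square_vertices[] = {
            /* first triangle */
            -0.5f, -0.5f, 0.0f,   /* bottom left  */
             0.5f, -0.5f, 0.0f,   /* bottom right */
             0.5f,  0.5f, 0.0f,   /* top right    */
            /* second triangle */
            -0.5f, -0.5f, 0.0f,   /* bottom left  */
             0.5f,  0.5f, 0.0f,   /* top right    */
            -0.5f,  0.5f, 0.0f,   /* top left     */
        };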

  • Coordinates

    By default, the axis origin is the center of the screen.

    x is negative to the left of the origin and positive to the right; y is positive above the origin and negative below it. The positive z-axis points out of the screen toward the viewer, and the negative z-axis points into the screen. By default, the distance from the origin to the edge of the screen is 1.0f, and the units along each axis are an arbitrary scale: they do not represent real-world units such as feet, pixels, or meters. You can choose any scale that makes sense for your program, as long as it is used consistently throughout (not meters in one place and pixels in another). OpenGL simply treats it as a reference unit and guarantees that equal values correspond to equal distances.
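
    As a minimal sketch of how these normalized coordinates reach the screen, the snippet below sets the viewport for an OpenGL ES 2.0 context; the function name and the width/height parameters are placeholders for your windowing setup.

        #include <GLES2/gl2.h>

        /* Map normalized device coordinates onto a rectangle of window
         * pixels: (-1, -1) becomes the lower-left corner of the viewport
         * and (1, 1) the upper-right. */
        void setup_viewport(int width, int height)
        {
            glViewport(0, 0, width, height);
        }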

  • Polygon (Triangle)

    A polygon is a single closed loop made up of vertices and edges. When drawing polygons, you need to pay special attention to the order in which the vertices are drawn, which can be clockwise or counterclockwise. This winding direction determines the orientation of the polygon, that is, which side is the front and which is the back. Skipping the rendering of occluded back faces can noticeably improve performance. By default, faces whose vertices are drawn in counterclockwise order are front faces.
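
    A minimal sketch of the corresponding state setup for an OpenGL ES 2.0 context follows; these are standard GL calls, and the values shown (counterclockwise front faces, culling back faces) are the defaults the text describes.

        #include <GLES2/gl2.h>

        void enable_backface_culling(void)
        {
            glEnable(GL_CULL_FACE); /* skip faces pointing away from the viewer */
            glFrontFace(GL_CCW);    /* counterclockwise winding = front face */
            glCullFace(GL_BACK);    /* discard back faces */
        }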

  • Textures and Texture Maps

    When image data is applied to a geometric primitive, the image is called a texture, and the technique is called texture mapping: providing surface detail on a polygon through an image. A texture has a fixed resolution, so it usually has to be enlarged or reduced when it is used, which distorts the sampled colors; texture sampling filters and mipmaps are used to reduce this distortion. In addition, storing textures as raw image data takes up a lot of space, so texture compression is used to store texture data more compactly.
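
    A minimal sketch of creating a texture with filtering and mipmaps in OpenGL ES 2.0 follows; the function name and the pixels/width/height parameters are placeholders for image data you have already decoded into RGBA form.

        #include <GLES2/gl2.h>

        GLuint create_texture(const void *pixels, int width, int height)
        {
            GLuint tex;
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);

            /* Upload the base image. */
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, pixels);

            /* Build the mipmap chain used when the texture is shrunk.
             * Note: core ES 2.0 requires power-of-two sizes for mipmaps. */
            glGenerateMipmap(GL_TEXTURE_2D);

            /* Sampling filters: trilinear when minified, linear when magnified. */
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                            GL_LINEAR_MIPMAP_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            return tex;
        }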

  • Rendering

    Rendering is the conversion of primitives specified by object coordinates into images in the framebuffer.

    The image that gets drawn and the vertex coordinates are closely related; how the vertices are connected is determined by the drawing mode.

    Commonly used drawing modes are GL_POINTS, GL_LINE_STRIP, GL_LINE_LOOP, GL_LINES, GL_TRIANGLES, GL_TRIANGLE_STRIP, GL_TRIANGLE_FAN.
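
    As a minimal sketch of how the mode changes the interpretation of the same vertices, the calls below draw six vertices two ways; it assumes the positions are already bound to an enabled vertex attribute in an OpenGL ES 2.0 context.

        #include <GLES2/gl2.h>

        void draw_six_vertices(void)
        {
            /* Six vertices as GL_TRIANGLES: two independent triangles. */
            glDrawArrays(GL_TRIANGLES, 0, 6);

            /* The same six vertices as GL_TRIANGLE_STRIP: four triangles,
             * each new vertex reusing the previous two. */
            glDrawArrays(GL_TRIANGLE_STRIP, 0, 6);
        }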

Shaders, the rendering pipeline, rasterization, and other OpenGL concepts

  • In OpenGL, everything is in 3D space, while screens and windows are 2D arrays of pixels, so most of OpenGL's work is about converting 3D coordinates into 2D pixels that fit your screen.

  • The process of converting 3D coordinates to 2D coordinates is managed by the OpenGL graphics rendering pipeline (Graphics Pipeline). The name refers to raw graphics data passing through a sequence of stages, being transformed along the way, and finally appearing on the screen.

  • The graphics rendering pipeline can be divided into two main parts:

    • The first part converts your 3D coordinates to 2D coordinates
    • The second part converts those 2D coordinates into actual colored pixels.
  • The graphics rendering pipeline takes a set of 3D coordinates and turns them into colored 2D pixel output on your screen.

  • Each part of the rendering pipeline:

    • First, the 3D coordinates that represent the shape are passed into the graphics rendering pipeline as an array called Vertex Data, which is a collection of vertices. A vertex (Vertex) is the collection of data associated with a single 3D coordinate.

    • For OpenGL to know what our coordinates and color values represent, you have to specify how the data should be rendered.

    These hints are called primitives, and every drawing call passes a primitive type to OpenGL.

    For example: GL_POINTS, GL_TRIANGLES, GL_LINE_STRIP.

    • The first stage of the graphics rendering pipeline is the Vertex Shader, which takes a single vertex as input. Its main purpose is to transform 3D coordinates into different 3D coordinates, and it also allows us to do some basic processing of vertex attributes.

    • The Primitive Assembly stage takes all the vertices output by the vertex shader as input (with GL_POINTS, that is a single vertex) and assembles them into the shape of the specified primitive.

    • The output of the primitive assembly stage is passed to the Geometry Shader. A geometry shader takes a collection of vertices in the form of a primitive as input, and it can generate other shapes by emitting new vertices to construct new (or different) primitives.

    • The output of the geometry shader is passed to the Rasterization Stage, which maps the primitives to the corresponding pixels on the final screen, producing fragments for the Fragment Shader to use. Clipping is performed before the fragment shader runs; it discards all fragments outside your view to improve performance.

    • A fragment in OpenGL is all the data OpenGL needs to render a pixel.

    • The main purpose of the fragment shader is to calculate the final color of a pixel, and this is where all the advanced OpenGL effects happen. Typically, a fragment shader uses data about the 3D scene (such as lighting, shadows, and light color) to calculate the final pixel color.

    • After all the corresponding color values have been determined, the final object is passed to the last stage, the Alpha Test and Blending stage. This stage checks the fragment's depth (and stencil) values, uses them to determine whether the fragment is in front of or behind other objects, and decides whether it should be discarded. It also checks the alpha value (which defines an object's transparency) and blends objects accordingly. So even though a pixel's output color was calculated in the fragment shader, the final pixel color can be completely different when rendering multiple triangles.
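
    A minimal sketch of enabling the depth test and standard alpha blending in an OpenGL ES 2.0 context follows; these are standard GL calls, shown here only to make the stage concrete.

        #include <GLES2/gl2.h>

        void setup_depth_and_blending(void)
        {
            /* Keep a fragment only if it is closer than what is already
             * stored in the depth buffer. */
            glEnable(GL_DEPTH_TEST);
            glDepthFunc(GL_LESS);

            /* Standard alpha blending: src * alpha + dst * (1 - alpha). */
            glEnable(GL_BLEND);
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        }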

    • In most cases, we only need to configure the vertex and fragment shaders. The geometry shader is optional and is usually left at its default.

    • In OpenGL we have to define at least one vertex shader and one fragment shader of our own (there is no default vertex/fragment shader on the GPU); a minimal pair is sketched below.
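
    The following is a minimal sketch of such a pair for OpenGL ES 2.0, along with the standard compile-and-link sequence; the function names are illustrative and error handling is reduced to a single status check.

        #include <GLES2/gl2.h>
        #include <stdio.h>

        /* Vertex shader: passes the incoming position straight through. */
        static const char *vs_src =
            "attribute vec4 a_position;\n"
            "void main() {\n"
            "    gl_Position = a_position;\n"
            "}\n";

        /* Fragment shader: colors every fragment solid orange. */
        static const char *fs_src =
            "precision mediump float;\n"
            "void main() {\n"
            "    gl_FragColor = vec4(1.0, 0.5, 0.2, 1.0);\n"
            "}\n";

        static GLuint compile_shader(GLenum type, const char *src)
        {
            GLuint shader = glCreateShader(type);
            glShaderSource(shader, 1, &src, NULL);
            glCompileShader(shader);

            GLint ok = 0;
            glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
            if (!ok)
                fprintf(stderr, "shader compile failed\n");
            return shader;
        }

        GLuint build_program(void)
        {
            GLuint vs = compile_shader(GL_VERTEX_SHADER, vs_src);
            GLuint fs = compile_shader(GL_FRAGMENT_SHADER, fs_src);

            GLuint program = glCreateProgram();
            glAttachShader(program, vs);
            glAttachShader(program, fs);
            glLinkProgram(program);
            return program;
        }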
