Computer Graphics and OpenGL C++ Version Study Notes Chapter 9 Sky and Background


For outdoor 3D scenes, you can usually enhance the realism by creating some realistic effects on the horizon. When we look far and wide, past nearby buildings and forests, we are accustomed to seeing large objects in the distance, such as clouds, mountains, or the sun (or the stars and moon in the night sky). However, adding each of these objects to the scene as an individual model can incur prohibitively high performance costs. Skyboxes and skydomes provide an effective and relatively simple way to generate convincing horizon views.

9.1 Skybox

The concept of a skybox is very clever yet simple:

(1) Instantiate a cube object;
(2) Set the texture of the cube to the desired environment;
(3) Place the cube around the camera.
We already know how to complete these steps. But there are a handful of other details to note.

How to texture the horizon?

The cube has 6 faces, and we need to add textures to all of them. One way is to use 6 image files and 6 texture units. Another common (and efficient) way is to use an image containing a 6-sided texture, as shown in Figure 9.1.

Figure 9.1 6-sided skybox texture cube map

The texture cube map in the example above uses a single texture unit to texture all 6 faces. The 6 parts of the cube map correspond to the top, bottom, front, back, left, and right faces of the cube. When the texture is "wrapped" around the cube, it acts as a horizon for a camera placed inside the cube, as shown in Figure 9.2.


Figure 9.2 Cubemap wrapping camera

Texturing a cube using a texture cube map requires specifying the appropriate texture coordinates. Figure 9.3 shows the distribution of texture coordinates, which are then assigned to each vertex of the cube.


Figure 9.3 Cubemap texture coordinates
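To make Figure 9.3 concrete: in the 4×3 "cross" layout, each face occupies one quarter of the texture's width and one third of its height. The helper below is purely illustrative (the face ordering and the function name are our own, not from the book):

```cpp
#include <array>

// Sub-rectangle {sMin, tMin, sMax, tMax} occupied by each face in the
// 4x3 cross layout of Figure 9.3. Face order (our own convention):
// 0=left, 1=front, 2=right, 3=back, 4=bottom, 5=top.
std::array<float, 4> faceRect(int face) {
    switch (face) {
        case 0: return {0.00f, 0.33f, 0.25f, 0.66f}; // left
        case 1: return {0.25f, 0.33f, 0.50f, 0.66f}; // front
        case 2: return {0.50f, 0.33f, 0.75f, 0.66f}; // right
        case 3: return {0.75f, 0.33f, 1.00f, 0.66f}; // back
        case 4: return {0.25f, 0.00f, 0.50f, 0.33f}; // bottom
        case 5: return {0.25f, 0.66f, 0.50f, 1.00f}; // top
        default: return {0.0f, 0.0f, 0.0f, 0.0f};
    }
}
```

These rectangles correspond to the cubeTextureCoord values in Program 9.1; the back face, for instance, spans s from 0.75 to 1.00.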

How to make a skybox look "far away"?

Another important factor in building a skybox is making sure the texture appears to be a distant horizon. At first one might think this requires building a giant skybox. However, that proves undesirable, because a huge skybox would stretch and distort the texture. Instead, you can make the skybox merely appear huge (and therefore far away) with two tricks:

(a) Disable depth testing and render the skybox first (re-enable depth testing when rendering other objects in the scene);
(b) Move the skybox along with the camera (whenever the camera moves).

By drawing the skybox first with depth testing disabled, the depth buffer is left filled with 1.0 (i.e., the farthest possible depth). All other objects in the scene are therefore rendered in full; that is, the skybox never occludes anything. This makes the faces of the skybox appear farther away than every other object, regardless of the skybox's actual size. The skybox cube itself can in fact be quite small, as long as it moves whenever the camera moves. Figure 9.4 shows a simple scene (actually just a brick-textured torus) viewed from inside the skybox.


Figure 9.4 Viewing the scene from inside the skybox

It is worth taking a closer look at how Figure 9.4 relates to the earlier Figure 9.2 and Figure 9.3. Note that the visible part of the skybox in the scene is the rightmost part of the cube map. This is because the camera is in its default orientation, facing the −Z direction, and is therefore looking at the back of the skybox cube (as laid out in Figure 9.3). Also note that the back portion of the cube map appears horizontally flipped when rendered in the scene; this is because that portion has been folded around the camera and is thus viewed from its reverse side, as shown in Figure 9.2.

How to build textured cubemap?

When building a textured cubemap image from artwork or photos, care needs to be taken to avoid "seams" where cube faces meet and to create the correct perspective so that the skybox looks realistic and distortion-free. There are many tools to assist with this: Terragen, Autodesk 3ds Max, Blender, and Adobe Photoshop all have tools for building or manipulating cubemaps. There are also many websites offering a variety of ready-made cube maps, both paid and free.

9.2 Sky Dome

Another way to create a horizon effect is to use a sky dome. The basic idea is the same as a skybox, except that a textured sphere (or hemisphere) is used instead of a textured cube. As with the skybox, we first render the skydome (with depth testing disabled) and keep the camera centered in the skydome (the skydome texture in Figure 9.5 was made using Terragen [TE16]).


Figure 9.5 The sky dome and the camera in it

Sky domes have their own advantages over sky boxes. For example, they are less susceptible to distortions and seams (although spherical distortion at the poles must be taken into account in texture images). One of the disadvantages of sky domes is that a sphere or dome model is more complex than a cube model. A sky dome has more vertices, the number of which depends on the desired accuracy.
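To put rough numbers on that tradeoff: a cube skybox always needs 36 vertices (12 triangles), while a sphere tessellated from slices and stacks grows quadratically with the precision. The formulas below assume a sphere model where precision n yields an (n+1)×(n+1) grid of vertices, as in earlier chapters; they are illustrative only:

```cpp
// Vertex count for a sphere with `prec` slices and stacks:
// a (prec + 1) x (prec + 1) grid of vertices.
int sphereVertexCount(int prec) { return (prec + 1) * (prec + 1); }

// Index count: prec * prec quads, 2 triangles each, 3 indices per triangle.
int sphereIndexCount(int prec) { return prec * prec * 6; }
```

At a precision of 48 (the value used for the torus in this chapter's programs), a sphere skydome would carry 49 × 49 = 2401 vertices, versus 36 for the cube.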

When a sky dome is used to present an outdoor scene, it is usually combined with a ground plane or some kind of terrain. When using a sky dome to represent a cosmic scene (such as a starry sky), it is often more practical to use a sphere as shown in Figure 9.6 (a dashed line is added to the surface of the sphere to clearly visualize it).


Figure 9.6 Starry sky dome using a sphere (star map from [BO01])

9.3 Implement skybox

Despite the many advantages of sky domes, sky boxes are still more common. OpenGL also has better support for skyboxes, which is convenient when doing environment mapping (introduced later in this chapter). For these reasons, we will focus on the skybox implementation.

There are two ways to implement a skybox: build a simple skybox from scratch; or use the cubemap tools in OpenGL. They have their own advantages, so we will introduce them below.

9.3.1 Building a skybox from scratch

We've covered almost everything you need to build a simple skybox. Chapter 4 introduced the cube model; assigning texture coordinates has been shown in Figure 9.3 earlier in this chapter; using the SOIL2 library to read textures and place objects in 3D space has also been explained in previous chapters. Here we will see how to easily enable and disable depth testing (only one line of code is required).

Program 9.1 shows the code structure of a simple skybox, containing only a textured torus in the scene. Texture coordinate allocation and calls to enable/disable depth testing are highlighted.

Program 9.1 Simple Skybox
The standard texture shader is now used for all objects in the scene, including cubemaps:
vertShader.glsl

#version 430

layout (location = 0) in vec3 position;
layout (location = 1) in vec2 tex_coord;
out vec2 tc;

uniform mat4 mv_matrix;
uniform mat4 proj_matrix;
layout (binding = 0) uniform sampler2D s;

void main(void)
{
	tc = tex_coord;
	gl_Position = proj_matrix * mv_matrix * vec4(position,1.0);
} 

fragShader.glsl

#version 430

in vec2 tc;
out vec4 fragColor;

uniform mat4 mv_matrix;
uniform mat4 proj_matrix;
layout (binding = 0) uniform sampler2D s;

void main(void)
{
	fragColor = texture(s,tc);
}

main.cpp

#include <GL\glew.h>
#include <GLFW\glfw3.h>
#include <SOIL2\soil2.h>
#include <string>
#include <iostream>
#include <fstream>
#include <glm\gtc\type_ptr.hpp> // glm::value_ptr
#include <glm\gtc\matrix_transform.hpp> // glm::translate, glm::rotate, glm::scale, glm::perspective
#include "Torus.h"
#include "Utils.h"
using namespace std;

float toRadians(float degrees) { return (degrees * 2.0f * 3.14159f) / 360.0f; }

#define numVAOs 1
#define numVBOs 5

float cameraX, cameraY, cameraZ;
float torLocX, torLocY, torLocZ;
GLuint renderingProgram;
GLuint vao[numVAOs];
GLuint vbo[numVBOs];
GLuint brickTexture, skyboxTexture;
float rotAmt = 0.0f;

// variable allocation for display
GLuint mvLoc, projLoc;
int width, height;
float aspect;
glm::mat4 pMat, vMat, mMat, mvMat;

Torus myTorus(0.5f, 0.2f, 48);
int numTorusVertices, numTorusIndices;

void setupVertices(void) {
	// cube_vertices definition is the same as before
	// skybox cube texture coordinates, as shown in Figure 9.3
	float cubeVertexPositions[108] =
	{
		-1.0f,  1.0f, -1.0f, -1.0f, -1.0f, -1.0f, 1.0f, -1.0f, -1.0f,
		1.0f, -1.0f, -1.0f, 1.0f,  1.0f, -1.0f, -1.0f,  1.0f, -1.0f,
		1.0f, -1.0f, -1.0f, 1.0f, -1.0f,  1.0f, 1.0f,  1.0f, -1.0f,
		1.0f, -1.0f,  1.0f, 1.0f,  1.0f,  1.0f, 1.0f,  1.0f, -1.0f,
		1.0f, -1.0f,  1.0f, -1.0f, -1.0f,  1.0f, 1.0f,  1.0f,  1.0f,
		-1.0f, -1.0f,  1.0f, -1.0f,  1.0f,  1.0f, 1.0f,  1.0f,  1.0f,
		-1.0f, -1.0f,  1.0f, -1.0f, -1.0f, -1.0f, -1.0f,  1.0f,  1.0f,
		-1.0f, -1.0f, -1.0f, -1.0f,  1.0f, -1.0f, -1.0f,  1.0f,  1.0f,
		-1.0f, -1.0f,  1.0f,  1.0f, -1.0f,  1.0f,  1.0f, -1.0f, -1.0f,
		1.0f, -1.0f, -1.0f, -1.0f, -1.0f, -1.0f, -1.0f, -1.0f,  1.0f,
		-1.0f,  1.0f, -1.0f, 1.0f,  1.0f, -1.0f, 1.0f,  1.0f,  1.0f,
		1.0f,  1.0f,  1.0f, -1.0f,  1.0f,  1.0f, -1.0f,  1.0f, -1.0f
	};
	float cubeTextureCoord[72] =
	{
		1.00f, 0.66f, 1.00f, 0.33f, 0.75f, 0.33f,	// back face, lower right
		0.75f, 0.33f, 0.75f, 0.66f, 1.00f, 0.66f,	// back face, upper left
		0.75f, 0.33f, 0.50f, 0.33f, 0.75f, 0.66f,	// right face, lower right
		0.50f, 0.33f, 0.50f, 0.66f, 0.75f, 0.66f,	// right face, upper left
		0.50f, 0.33f, 0.25f, 0.33f, 0.50f, 0.66f,	// front face, lower right
		0.25f, 0.33f, 0.25f, 0.66f, 0.50f, 0.66f,	// front face, upper left
		0.25f, 0.33f, 0.00f, 0.33f, 0.25f, 0.66f,	// left face, lower right
		0.00f, 0.33f, 0.00f, 0.66f, 0.25f, 0.66f,	// left face, upper left
		0.25f, 0.33f, 0.50f, 0.33f, 0.50f, 0.00f,	// bottom face, lower right
		0.50f, 0.00f, 0.25f, 0.00f, 0.25f, 0.33f,	// bottom face, upper left
		0.25f, 1.00f, 0.50f, 1.00f, 0.50f, 0.66f,	// top face, lower right
		0.50f, 0.66f, 0.25f, 0.66f, 0.25f, 1.00f	// top face, upper left
	};
	// set up buffers for the cube and the scene objects as usual
	numTorusVertices = myTorus.getNumVertices();
	numTorusIndices = myTorus.getNumIndices();

	std::vector<int> ind = myTorus.getIndices();
	std::vector<glm::vec3> vert = myTorus.getVertices();
	std::vector<glm::vec2> tex = myTorus.getTexCoords();
	std::vector<glm::vec3> norm = myTorus.getNormals();

	std::vector<float> pvalues;
	std::vector<float> tvalues;
	std::vector<float> nvalues;

	for (int i = 0; i < numTorusVertices; i++) {
		pvalues.push_back(vert[i].x);
		pvalues.push_back(vert[i].y);
		pvalues.push_back(vert[i].z);
		tvalues.push_back(tex[i].s);
		tvalues.push_back(tex[i].t);
		nvalues.push_back(norm[i].x);
		nvalues.push_back(norm[i].y);
		nvalues.push_back(norm[i].z);
	}
	glGenVertexArrays(1, vao);
	glBindVertexArray(vao[0]);
	glGenBuffers(numVBOs, vbo);

	glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
	glBufferData(GL_ARRAY_BUFFER, sizeof(cubeVertexPositions), cubeVertexPositions, GL_STATIC_DRAW);

	glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
	glBufferData(GL_ARRAY_BUFFER, sizeof(cubeTextureCoord), cubeTextureCoord, GL_STATIC_DRAW);

	glBindBuffer(GL_ARRAY_BUFFER, vbo[2]);
	glBufferData(GL_ARRAY_BUFFER, pvalues.size() * 4, &pvalues[0], GL_STATIC_DRAW);

	glBindBuffer(GL_ARRAY_BUFFER, vbo[3]);
	glBufferData(GL_ARRAY_BUFFER, tvalues.size() * 4, &tvalues[0], GL_STATIC_DRAW);

	glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vbo[4]);
	glBufferData(GL_ELEMENT_ARRAY_BUFFER, ind.size() * 4, &ind[0], GL_STATIC_DRAW);
}

void init(GLFWwindow* window) {
	renderingProgram = Utils::createShaderProgram("vertShader.glsl", "fragShader.glsl");

	glfwGetFramebufferSize(window, &width, &height);
	aspect = (float)width / (float)height;
	pMat = glm::perspective(1.0472f, aspect, 0.1f, 1000.0f);

	setupVertices();

	brickTexture = Utils::loadTexture("brick1.jpg");
	skyboxTexture = Utils::loadTexture("alien.jpg");

	torLocX = 0.0f; torLocY = -0.75f; torLocZ = 0.0f;
	cameraX = 0.0f; cameraY = 0.0f; cameraZ = 5.0f;
}

void display(GLFWwindow* window, double currentTime) {
	// clear the color and depth buffers, and build the projection and camera view matrices as before
	glClear(GL_DEPTH_BUFFER_BIT);
	glClear(GL_COLOR_BUFFER_BIT);

	vMat = glm::translate(glm::mat4(1.0f), glm::vec3(-cameraX, -cameraY, -cameraZ));

	// prepare to draw the skybox first; the M matrix places the skybox at the camera location

	glUseProgram(renderingProgram);

	mMat = glm::translate(glm::mat4(1.0f), glm::vec3(cameraX, cameraY, cameraZ));
	// build the MODEL-VIEW matrix
	mvMat = vMat * mMat;
	// put the MV and PROJ matrices into uniforms, as before
	mvLoc = glGetUniformLocation(renderingProgram, "mv_matrix");
	projLoc = glGetUniformLocation(renderingProgram, "proj_matrix");

	glUniformMatrix4fv(mvLoc, 1, GL_FALSE, glm::value_ptr(mvMat));
	glUniformMatrix4fv(projLoc, 1, GL_FALSE, glm::value_ptr(pMat));
	// set up the buffer containing the cube vertices
	glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
	glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
	glEnableVertexAttribArray(0);

	glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
	glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, 0);
	glEnableVertexAttribArray(1);
	// activate the skybox texture
	glActiveTexture(GL_TEXTURE0);
	glBindTexture(GL_TEXTURE_2D, skyboxTexture);

	glEnable(GL_CULL_FACE);
	glFrontFace(GL_CCW);	// the cube's winding order is clockwise, but we are viewing it from the inside, so use GL_CCW
	glDisable(GL_DEPTH_TEST);
	glDrawArrays(GL_TRIANGLES, 0, 36);	// draw the skybox with depth testing disabled
	glEnable(GL_DEPTH_TEST);

	// now draw the objects in the scene as before

	glUseProgram(renderingProgram);

	mvLoc = glGetUniformLocation(renderingProgram, "mv_matrix");
	projLoc = glGetUniformLocation(renderingProgram, "proj_matrix");

	mMat = glm::translate(glm::mat4(1.0f), glm::vec3(torLocX, torLocY, torLocZ));
	mMat = glm::rotate(mMat, toRadians(15.0f), glm::vec3(1.0f, 0.0f, 0.0f));
	mvMat = vMat * mMat;

	glUniformMatrix4fv(mvLoc, 1, GL_FALSE, glm::value_ptr(mvMat));
	glUniformMatrix4fv(projLoc, 1, GL_FALSE, glm::value_ptr(pMat));

	glBindBuffer(GL_ARRAY_BUFFER, vbo[2]);
	glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
	glEnableVertexAttribArray(0);

	glBindBuffer(GL_ARRAY_BUFFER, vbo[3]);
	glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, 0);
	glEnableVertexAttribArray(1);

	glActiveTexture(GL_TEXTURE0);
	glBindTexture(GL_TEXTURE_2D, brickTexture);

	glEnable(GL_CULL_FACE);
	glFrontFace(GL_CCW);
	glEnable(GL_DEPTH_TEST);
	glDepthFunc(GL_LEQUAL);

	glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vbo[4]);
	glDrawElements(GL_TRIANGLES, numTorusIndices, GL_UNSIGNED_INT, 0);
}

void window_size_callback(GLFWwindow* win, int newWidth, int newHeight) {
	aspect = (float)newWidth / (float)newHeight;
	glViewport(0, 0, newWidth, newHeight);
	pMat = glm::perspective(1.0472f, aspect, 0.1f, 1000.0f);
}

int main(void) {
	if (!glfwInit()) { exit(EXIT_FAILURE); }
	glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
	glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
	GLFWwindow* window = glfwCreateWindow(800, 800, "Chapter9 - program1", NULL, NULL);
	glfwMakeContextCurrent(window);
	if (glewInit() != GLEW_OK) { exit(EXIT_FAILURE); }
	glfwSwapInterval(1);

	glfwSetWindowSizeCallback(window, window_size_callback);

	init(window);

	while (!glfwWindowShouldClose(window)) {
		display(window, glfwGetTime());
		glfwSwapBuffers(window);
		glfwPollEvents();
	}

	glfwDestroyWindow(window);
	glfwTerminate();
	exit(EXIT_SUCCESS);
}


Figure 9.7 Simple sky box rendering result

As mentioned before, skyboxes are susceptible to image distortion and seams. Seams are visible lines that sometimes appear where two texture images meet (such as along the edge of a cube). Figure 9.8 shows an example of a seam in the upper half of the image, an artifact that occurs when running Program 9.1. To avoid seams, the cubemap image needs to be carefully constructed and precise texture coordinates assigned. There are tools for reducing seams along image edges (e.g. [GI16]), but this topic is beyond the scope of this book.


Figure 9.8 Skybox “seam” artifact

9.3.2 Using OpenGL cubemaps

Another way to build a skybox is to use an OpenGL texture cube map. OpenGL cubemaps are a little more complex than the simple methods we saw in the previous section. However, using OpenGL cubemaps has its own advantages, such as reduced seams and support for environment maps.

OpenGL texture cube maps are similar to the 3D textures we will study later, in that they are accessed using three texture coordinates (usually labeled (s, t, r)) instead of the two we have used so far. Another unusual feature of OpenGL texture cube maps is that texture coordinate (0, 0, 0) refers to the upper-left corner of the texture image rather than the usual lower-left corner, which is a frequent source of confusion.

The method shown in Program 9.1 textures the cube by reading in a single image, while the loadCubeMap() function shown in Program 9.2 reads in 6 separate cube-face image files. As we learned in Chapter 5, there are many ways to read texture images; here we again choose the SOIL2 library, which is also very handy for instantiating and loading OpenGL cube maps. We first identify the files to be read and then call SOIL_load_OGL_cubemap(), whose parameters include the 6 image files and some other parameters, similar to the SOIL_load_OGL_texture() call we saw in Chapter 5. Note that when using an OpenGL cube map there is no need to flip the textures vertically; OpenGL handles that automatically. The loadCubeMap() function is placed in the "Utils.cpp" file.

The init() function now includes a call to enable GL_TEXTURE_CUBE_MAP_SEAMLESS, which tells OpenGL to attempt to blend adjoining sides of the cube to reduce or eliminate seams. In display(), the cube's vertices are sent down the pipeline as before, but this time there is no need to send the cube's texture coordinates. As we will see, OpenGL texture cube maps typically use the cube's vertex positions as their texture coordinates. We then disable depth testing, draw the cube, and re-enable depth testing for the rest of the scene.

The completed OpenGL texture cube map is referenced by a GLuint identifier. As with shadow mapping, artifacts along the borders can be reduced by setting the texture wrap mode to "clamp to edge", which in this case can also help hide remaining seams. Note that the wrap mode needs to be set for all three texture coordinates: s, t, and r.

The texture is accessed in the fragment shader using a special type of sampler called a samplerCube. In a textured cube map, the value returned from the sampler is the texel "seen" from the origin along the direction vector (s, t, r). As a result, we can usually simply use the interpolated vertex positions as the texture coordinates. In the vertex shader, we assign the cube's vertex positions to the output texture coordinate attribute so that they are interpolated by the time they reach the fragment shader.
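The lookup a samplerCube performs can be sketched on the CPU: the coordinate of (s, t, r) with the largest magnitude selects the face, and its sign picks the positive or negative direction. The sketch below is simplified (it omits the per-face coordinate flips defined in the OpenGL specification), and the face numbering follows the GL_TEXTURE_CUBE_MAP_POSITIVE_X + i convention:

```cpp
#include <cmath>

// Which cube face would a samplerCube select for direction (s, t, r)?
// The axis with the largest magnitude wins; its sign picks +/-.
// Faces numbered as GL_TEXTURE_CUBE_MAP_POSITIVE_X + i:
// 0:+X  1:-X  2:+Y  3:-Y  4:+Z  5:-Z
int cubeFace(float s, float t, float r) {
    float as = std::fabs(s), at = std::fabs(t), ar = std::fabs(r);
    if (as >= at && as >= ar) return s > 0.0f ? 0 : 1;
    if (at >= ar) return t > 0.0f ? 2 : 3;
    return r > 0.0f ? 4 : 5;
}
```

For example, a camera in its default orientation looks down −Z, so its forward direction selects face 5 (negative Z), consistent with the discussion of Figure 9.4.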

Also note that in the vertex shader we convert the incoming view matrix to a 3×3 matrix and then back to a 4×4. This "trick" effectively removes the translation component while preserving the rotation (recall that the translation values occupy the fourth column of a transformation matrix). It fixes the cube map at the camera's position while still allowing the camera to "look around".
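The effect of mat4(mat3(v_matrix)) can be checked numerically: the 3×3 rotation block survives, while the fourth column (translation) collapses to (0, 0, 0, 1). A minimal sketch with column-major arrays, mirroring GLSL's behavior (the helper name is ours):

```cpp
#include <array>

using Mat4 = std::array<float, 16>; // column-major, as in GLSL and GLM

// CPU-side equivalent of GLSL's mat4(mat3(m)): keep the upper-left 3x3
// rotation block, zero the translation column (indices 12..14) and the
// projective row entries (3, 7, 11), and set m[15] back to 1.
Mat4 stripTranslation(Mat4 m) {
    m[12] = 0.0f; m[13] = 0.0f; m[14] = 0.0f; m[15] = 1.0f;
    m[3] = 0.0f;  m[7] = 0.0f;  m[11] = 0.0f;
    return m;
}
```

For a view matrix that is the identity plus a translation of (5, 6, 7), the result is the identity: the camera's rotation is preserved, its position discarded.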

Program 9.2 OpenGL Cubemap Skybox
vertShader.glsl

#version 430

layout (location = 0) in vec3 position;
layout (location = 1) in vec2 tex_coord;
out vec2 tc;

uniform mat4 mv_matrix;
uniform mat4 proj_matrix;
layout (binding = 0) uniform sampler2D s;

void main(void)
{
	tc = tex_coord;
	gl_Position = proj_matrix * mv_matrix * vec4(position,1.0);
}

vertCShader.glsl

#version 430

layout (location = 0) in vec3 position;
out vec3 tc;

uniform mat4 v_matrix;
uniform mat4 p_matrix;
layout (binding = 0) uniform samplerCube samp;

void main(void)
{
	tc = position;	// the texture coordinates are simply the vertex positions
	mat4 v3_matrix = mat4(mat3(v_matrix));	// remove the translation component from the view matrix
	gl_Position = p_matrix * v3_matrix * vec4(position,1.0);
}

fragShader.glsl

#version 430

in vec2 tc;
out vec4 fragColor;

uniform mat4 mv_matrix;
uniform mat4 proj_matrix;
layout (binding = 0) uniform sampler2D s;

void main(void)
{
	fragColor = texture(s,tc);
}

fragCShader.glsl

#version 430

in vec3 tc;
out vec4 fragColor;

uniform mat4 v_matrix;
uniform mat4 p_matrix;
layout (binding = 0) uniform samplerCube samp;

void main(void)
{
	fragColor = texture(samp,tc);
}

main.cpp

#include <GL\glew.h>
#include <GLFW\glfw3.h>
#include <SOIL2\soil2.h>
#include <string>
#include <iostream>
#include <fstream>
#include <glm\gtc\type_ptr.hpp> // glm::value_ptr
#include <glm\gtc\matrix_transform.hpp> // glm::translate, glm::rotate, glm::scale, glm::perspective
#include "Torus.h"
#include "Utils.h"
using namespace std;

float toRadians(float degrees) { return (degrees * 2.0f * 3.14159f) / 360.0f; }

#define numVAOs 1
#define numVBOs 4

float cameraX, cameraY, cameraZ;
float torLocX, torLocY, torLocZ;
GLuint renderingProgram, renderingProgramCubeMap;
GLuint vao[numVAOs];
GLuint vbo[numVBOs];
GLuint brickTexture, skyboxTexture;
float rotAmt = 0.0f;

// variable allocation for display
GLuint vLoc, mvLoc, projLoc;
int width, height;
float aspect;
glm::mat4 pMat, vMat, mMat, mvMat;

Torus myTorus(0.8f, 0.4f, 48);
int numTorusVertices, numTorusIndices;

void setupVertices(void) {
	float cubeVertexPositions[108] =
	{
		-1.0f,  1.0f, -1.0f, -1.0f, -1.0f, -1.0f, 1.0f, -1.0f, -1.0f,
		1.0f, -1.0f, -1.0f, 1.0f,  1.0f, -1.0f, -1.0f,  1.0f, -1.0f,
		1.0f, -1.0f, -1.0f, 1.0f, -1.0f,  1.0f, 1.0f,  1.0f, -1.0f,
		1.0f, -1.0f,  1.0f, 1.0f,  1.0f,  1.0f, 1.0f,  1.0f, -1.0f,
		1.0f, -1.0f,  1.0f, -1.0f, -1.0f,  1.0f, 1.0f,  1.0f,  1.0f,
		-1.0f, -1.0f,  1.0f, -1.0f,  1.0f,  1.0f, 1.0f,  1.0f,  1.0f,
		-1.0f, -1.0f,  1.0f, -1.0f, -1.0f, -1.0f, -1.0f,  1.0f,  1.0f,
		-1.0f, -1.0f, -1.0f, -1.0f,  1.0f, -1.0f, -1.0f,  1.0f,  1.0f,
		-1.0f, -1.0f,  1.0f,  1.0f, -1.0f,  1.0f,  1.0f, -1.0f, -1.0f,
		1.0f, -1.0f, -1.0f, -1.0f, -1.0f, -1.0f, -1.0f, -1.0f,  1.0f,
		-1.0f,  1.0f, -1.0f, 1.0f,  1.0f, -1.0f, 1.0f,  1.0f,  1.0f,
		1.0f,  1.0f,  1.0f, -1.0f,  1.0f,  1.0f, -1.0f,  1.0f, -1.0f
	};

	numTorusVertices = myTorus.getNumVertices();
	numTorusIndices = myTorus.getNumIndices();

	std::vector<int> ind = myTorus.getIndices();
	std::vector<glm::vec3> vert = myTorus.getVertices();
	std::vector<glm::vec2> tex = myTorus.getTexCoords();
	std::vector<glm::vec3> norm = myTorus.getNormals();

	std::vector<float> pvalues;
	std::vector<float> tvalues;
	std::vector<float> nvalues;

	for (int i = 0; i < numTorusVertices; i++) {
		pvalues.push_back(vert[i].x);
		pvalues.push_back(vert[i].y);
		pvalues.push_back(vert[i].z);
		tvalues.push_back(tex[i].s);
		tvalues.push_back(tex[i].t);
		nvalues.push_back(norm[i].x);
		nvalues.push_back(norm[i].y);
		nvalues.push_back(norm[i].z);
	}
	glGenVertexArrays(1, vao);
	glBindVertexArray(vao[0]);
	glGenBuffers(numVBOs, vbo);

	glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
	glBufferData(GL_ARRAY_BUFFER, sizeof(cubeVertexPositions), cubeVertexPositions, GL_STATIC_DRAW);

	glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
	glBufferData(GL_ARRAY_BUFFER, pvalues.size() * 4, &pvalues[0], GL_STATIC_DRAW);

	glBindBuffer(GL_ARRAY_BUFFER, vbo[2]);
	glBufferData(GL_ARRAY_BUFFER, tvalues.size() * 4, &tvalues[0], GL_STATIC_DRAW);

	glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vbo[3]);
	glBufferData(GL_ELEMENT_ARRAY_BUFFER, ind.size() * 4, &ind[0], GL_STATIC_DRAW);
}

void init(GLFWwindow* window) {
	renderingProgram = Utils::createShaderProgram("vertShader.glsl", "fragShader.glsl");
	renderingProgramCubeMap = Utils::createShaderProgram("vertCShader.glsl", "fragCShader.glsl");	// note: a separate program for the cube map

	glfwGetFramebufferSize(window, &width, &height);
	aspect = (float)width / (float)height;
	pMat = glm::perspective(1.0472f, aspect, 0.1f, 1000.0f);

	setupVertices();

	brickTexture = Utils::loadTexture("brick1.jpg");	// for the torus in the scene
	skyboxTexture = Utils::loadCubeMap("cubeMap");	// folder containing the skybox textures
	glEnable(GL_TEXTURE_CUBE_MAP_SEAMLESS);

	torLocX = 0.0f; torLocY = 0.0f; torLocZ = 0.0f;
	cameraX = 0.0f; cameraY = 0.0f; cameraZ = 5.0f;
}

void display(GLFWwindow* window, double currentTime) {
	glClear(GL_DEPTH_BUFFER_BIT);
	glClear(GL_COLOR_BUFFER_BIT);

	vMat = glm::translate(glm::mat4(1.0f), glm::vec3(-cameraX, -cameraY, -cameraZ));

	// draw cube map
	// prepare to draw the skybox first; note that it now uses a different rendering program
	glUseProgram(renderingProgramCubeMap);

	vLoc = glGetUniformLocation(renderingProgramCubeMap, "v_matrix");
	glUniformMatrix4fv(vLoc, 1, GL_FALSE, glm::value_ptr(vMat));

	projLoc = glGetUniformLocation(renderingProgramCubeMap, "p_matrix");
	glUniformMatrix4fv(projLoc, 1, GL_FALSE, glm::value_ptr(pMat));
	// set up the cube's vertex buffer (no texture coordinate buffer is needed here)
	glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
	glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
	glEnableVertexAttribArray(0);
	// activate the cube map texture
	glActiveTexture(GL_TEXTURE0);
	glBindTexture(GL_TEXTURE_CUBE_MAP, skyboxTexture);
	// disable depth testing, then draw the cube map
	glEnable(GL_CULL_FACE);
	glFrontFace(GL_CCW);	// cube is CW, but we are viewing the inside
	glDisable(GL_DEPTH_TEST);
	glDrawArrays(GL_TRIANGLES, 0, 36);
	glEnable(GL_DEPTH_TEST);

	// draw scene (in this case it is just a torus)

	glUseProgram(renderingProgram);

	mvLoc = glGetUniformLocation(renderingProgram, "mv_matrix");
	projLoc = glGetUniformLocation(renderingProgram, "proj_matrix");

	mMat = glm::translate(glm::mat4(1.0f), glm::vec3(torLocX, torLocY, torLocZ));
	mMat = glm::rotate(mMat, toRadians(35.0f), glm::vec3(1.0f, 0.0f, 0.0f));
	mvMat = vMat * mMat;

	glUniformMatrix4fv(mvLoc, 1, GL_FALSE, glm::value_ptr(mvMat));
	glUniformMatrix4fv(projLoc, 1, GL_FALSE, glm::value_ptr(pMat));

	glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
	glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
	glEnableVertexAttribArray(0);

	glBindBuffer(GL_ARRAY_BUFFER, vbo[2]);
	glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, 0);
	glEnableVertexAttribArray(1);

	glActiveTexture(GL_TEXTURE0);
	glBindTexture(GL_TEXTURE_2D, brickTexture);

	glEnable(GL_CULL_FACE);
	glFrontFace(GL_CCW);
	glEnable(GL_DEPTH_TEST);
	glDepthFunc(GL_LEQUAL);

	glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vbo[3]);
	glDrawElements(GL_TRIANGLES, numTorusIndices, GL_UNSIGNED_INT, 0);
}

void window_size_callback(GLFWwindow* win, int newWidth, int newHeight) {
	aspect = (float)newWidth / (float)newHeight;
	glViewport(0, 0, newWidth, newHeight);
	pMat = glm::perspective(1.0472f, aspect, 0.1f, 1000.0f);
}

int main(void) {
	if (!glfwInit()) { exit(EXIT_FAILURE); }
	glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
	glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
	GLFWwindow* window = glfwCreateWindow(800, 800, "Chapter9 - program2", NULL, NULL);
	glfwMakeContextCurrent(window);
	if (glewInit() != GLEW_OK) { exit(EXIT_FAILURE); }
	glfwSwapInterval(1);

	glfwSetWindowSizeCallback(window, window_size_callback);

	init(window);

	while (!glfwWindowShouldClose(window)) {
		display(window, glfwGetTime());
		glfwSwapBuffers(window);
		glfwPollEvents();
	}

	glfwDestroyWindow(window);
	glfwTerminate();
	exit(EXIT_SUCCESS);
}

9.4 Environment maps

In the Lighting and Materials chapter we considered the "shine" of objects. However, we never modeled very shiny objects, such as mirrors or chrome items. While these objects have a small range of specular highlights, they can also reflect mirror images of surrounding objects. When we look at these objects, we see other things in the room and sometimes even our own reflections. The ADS lighting model does not provide a way to simulate this effect.

However, textured cube maps offer a relatively simple way to simulate (at least partially) a reflective surface. The trick is to texture the reflective object itself using the cube map. [1] To make it look convincing, we need to find the texture coordinates corresponding to the part of the surrounding environment that we would see reflected in the object.

Figure 9.9 outlines the strategy: the view vector and the surface normal are combined to compute a reflection vector, which is then used to look up a texel from the cube map. The reflection vector can thus be used to access the texture cube map directly. When a cube map is used in this way, it is called an environment map.

Figure 9.9 Overview of environment maps

We calculated reflection vectors in our earlier study of Blinn-Phong lighting. The concept here is similar, except that now the reflection vector is used to look up a value from a texture map. This technique is called environment mapping or reflection mapping. If the cube map is implemented using the second method we described (Section 9.3.2, using an OpenGL GL_TEXTURE_CUBE_MAP), OpenGL can perform the environment-map lookup with the same machinery used earlier to texture the cube. We use the view vector and the surface normal to compute the reflection of the view vector off the object's surface, and then sample the texture cube map directly with that reflection vector. The lookup is facilitated by the OpenGL samplerCube.

Recall from the previous section that a samplerCube is indexed by a direction vector. Reflection vectors are therefore ideally suited to looking up the desired texel.
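GLSL's reflect(I, N) computes I - 2·dot(N, I)·N, where I is the incident vector and N a unit-length surface normal. A minimal CPU-side sketch (the Vec3 struct and helper names are ours, standing in for GLSL's built-ins):

```cpp
// GLSL-style reflect(): I is the incident vector, N a unit-length normal.
// Returns I - 2 * dot(N, I) * N.
struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

Vec3 reflectVec(Vec3 I, Vec3 N) {
    float d = 2.0f * dot(N, I);
    return { I.x - d * N.x, I.y - d * N.y, I.z - d * N.z };
}
```

An incident vector (0, 0, -1) hitting a normal (0, 0, 1) reflects to (0, 0, 1): the fragment samples the cubemap texel directly behind the viewer, which is exactly the mirror behavior we want.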

Implementing environment mapping requires adding only a relatively small amount of code. Program 9.3 shows the changes to the display() and init() functions and to the associated shaders for rendering a "reflective" torus with an environment map. Notably, if Blinn-Phong lighting is used, much of the code to be added may already exist. The genuinely new code is in the fragment shader's main() function.

At first glance, the code in Program 9.3 does not appear to be new. In fact, we saw almost identical code when we studied lighting. In the current case, however, the normal and reflection vectors serve a completely different purpose: previously they were used to implement the ADS lighting model, whereas here they are used to compute texture coordinates for the environment map. It is worth tracing the normal and reflection-vector calculations to see how they are reused for this new purpose.

The rendered result will show a "chrome" torus using an environment map, as shown in Figure 9.10 (see color inset).


Figure 9.10 Example of an environment map used to create a reflective torus

Program 9.3 Environment Map
vertShader.glsl

#version 430

layout (location = 0) in vec3 position;
layout (location = 1) in vec3 normal;
out vec3 vNormal;
out vec3 vVertPos;

uniform mat4 mv_matrix;
uniform mat4 proj_matrix;
uniform mat4 normalMat;
layout (binding = 0) uniform samplerCube t;

void main(void)
{
	vVertPos = (mv_matrix * vec4(position,1.0)).xyz;
	vNormal = (normalMat * vec4(normal,1.0)).xyz;
	gl_Position = proj_matrix * mv_matrix * vec4(position,1.0);
}

vertCShader.glsl

#version 430

layout (location = 0) in vec3 position;
out vec3 tc;

uniform mat4 v_matrix;
uniform mat4 p_matrix;
layout (binding = 0) uniform samplerCube samp;

void main(void)
{
	tc = position;
	mat4 v3_matrix = mat4(mat3(v_matrix));
	gl_Position = p_matrix * v3_matrix * vec4(position,1.0);
}

fragShader.glsl

#version 430

in vec3 vNormal;
in vec3 vVertPos;
out vec4 fragColor;

uniform mat4 mv_matrix;
uniform mat4 proj_matrix;
uniform mat4 normalMat;
layout (binding = 0) uniform samplerCube t;

void main(void)
{
	vec3 r = -reflect(normalize(-vVertPos), normalize(vNormal));
	fragColor = texture(t,r);
}

fragCShader.glsl

#version 430

in vec3 tc;
out vec4 fragColor;

uniform mat4 v_matrix;
uniform mat4 p_matrix;
layout (binding = 0) uniform samplerCube samp;

void main(void)
{
	fragColor = texture(samp,tc);
}

main.cpp

#include <GL\glew.h>
#include <GLFW\glfw3.h>
#include <SOIL2\soil2.h>
#include <string>
#include <iostream>
#include <fstream>
#include <glm\gtc\type_ptr.hpp> // glm::value_ptr
#include <glm\gtc\matrix_transform.hpp> // glm::translate, glm::rotate, glm::scale, glm::perspective
#include "Torus.h"
#include "Utils.h"
using namespace std;

float toRadians(float degrees) { return (degrees * 2.0f * 3.14159f) / 360.0f; }

#define numVAOs 1
#define numVBOs 4

Utils util = Utils();
float cameraX, cameraY, cameraZ;
float torLocX, torLocY, torLocZ;
GLuint renderingProgram, renderingProgramCubeMap;
GLuint vao[numVAOs];
GLuint vbo[numVBOs];
GLuint skyboxTexture;
float rotAmt = 0.0f;

// variable allocation for display
GLuint vLoc, mvLoc, projLoc, nLoc;
int width, height;
float aspect;
glm::mat4 pMat, vMat, mMat, mvMat, invTrMat;

Torus myTorus(0.8f, 0.4f, 48);
int numTorusVertices, numTorusIndices;

void setupVertices(void) {
	float cubeVertexPositions[108] =
	{
		-1.0f,  1.0f, -1.0f, -1.0f, -1.0f, -1.0f, 1.0f, -1.0f, -1.0f,
		1.0f, -1.0f, -1.0f, 1.0f,  1.0f, -1.0f, -1.0f,  1.0f, -1.0f,
		1.0f, -1.0f, -1.0f, 1.0f, -1.0f,  1.0f, 1.0f,  1.0f, -1.0f,
		1.0f, -1.0f,  1.0f, 1.0f,  1.0f,  1.0f, 1.0f,  1.0f, -1.0f,
		1.0f, -1.0f,  1.0f, -1.0f, -1.0f,  1.0f, 1.0f,  1.0f,  1.0f,
		-1.0f, -1.0f,  1.0f, -1.0f,  1.0f,  1.0f, 1.0f,  1.0f,  1.0f,
		-1.0f, -1.0f,  1.0f, -1.0f, -1.0f, -1.0f, -1.0f,  1.0f,  1.0f,
		-1.0f, -1.0f, -1.0f, -1.0f,  1.0f, -1.0f, -1.0f,  1.0f,  1.0f,
		-1.0f, -1.0f,  1.0f,  1.0f, -1.0f,  1.0f,  1.0f, -1.0f, -1.0f,
		1.0f, -1.0f, -1.0f, -1.0f, -1.0f, -1.0f, -1.0f, -1.0f,  1.0f,
		-1.0f,  1.0f, -1.0f, 1.0f,  1.0f, -1.0f, 1.0f,  1.0f,  1.0f,
		1.0f,  1.0f,  1.0f, -1.0f,  1.0f,  1.0f, -1.0f,  1.0f, -1.0f
	};

	numTorusVertices = myTorus.getNumVertices();
	numTorusIndices = myTorus.getNumIndices();

	std::vector<int> ind = myTorus.getIndices();
	std::vector<glm::vec3> vert = myTorus.getVertices();
	std::vector<glm::vec2> tex = myTorus.getTexCoords();
	std::vector<glm::vec3> norm = myTorus.getNormals();

	std::vector<float> pvalues;
	std::vector<float> tvalues;
	std::vector<float> nvalues;

	for (int i = 0; i < numTorusVertices; i++) {
		pvalues.push_back(vert[i].x);
		pvalues.push_back(vert[i].y);
		pvalues.push_back(vert[i].z);
		tvalues.push_back(tex[i].s);
		tvalues.push_back(tex[i].t);
		nvalues.push_back(norm[i].x);
		nvalues.push_back(norm[i].y);
		nvalues.push_back(norm[i].z);
	}
	glGenVertexArrays(1, vao);
	glBindVertexArray(vao[0]);
	glGenBuffers(numVBOs, vbo);

	glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
	glBufferData(GL_ARRAY_BUFFER, sizeof(cubeVertexPositions), cubeVertexPositions, GL_STATIC_DRAW);

	glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
	glBufferData(GL_ARRAY_BUFFER, pvalues.size() * 4, &pvalues[0], GL_STATIC_DRAW);

	glBindBuffer(GL_ARRAY_BUFFER, vbo[2]);
	glBufferData(GL_ARRAY_BUFFER, nvalues.size() * 4, &nvalues[0], GL_STATIC_DRAW);

	glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vbo[3]);
	glBufferData(GL_ELEMENT_ARRAY_BUFFER, ind.size() * 4, &ind[0], GL_STATIC_DRAW);
}

void init(GLFWwindow* window) {
	renderingProgram = Utils::createShaderProgram("vertShader.glsl", "fragShader.glsl");
	renderingProgramCubeMap = Utils::createShaderProgram("vertCShader.glsl", "fragCShader.glsl");

	glfwGetFramebufferSize(window, &width, &height);
	aspect = (float)width / (float)height;
	pMat = glm::perspective(1.0472f, aspect, 0.1f, 1000.0f);

	setupVertices();

	skyboxTexture = Utils::loadCubeMap("cubeMap"); // expects a folder name
	glEnable(GL_TEXTURE_CUBE_MAP_SEAMLESS);

	torLocX = 0.0f; torLocY = 0.0f; torLocZ = 0.0f;
	cameraX = 0.0f; cameraY = 0.0f; cameraZ = 5.0f;
}

void display(GLFWwindow* window, double currentTime) {
	glClear(GL_DEPTH_BUFFER_BIT);
	glClear(GL_COLOR_BUFFER_BIT);

	vMat = glm::translate(glm::mat4(1.0f), glm::vec3(-cameraX, -cameraY, -cameraZ));

	// draw cube map

	glUseProgram(renderingProgramCubeMap);

	vLoc = glGetUniformLocation(renderingProgramCubeMap, "v_matrix");
	glUniformMatrix4fv(vLoc, 1, GL_FALSE, glm::value_ptr(vMat));

	projLoc = glGetUniformLocation(renderingProgramCubeMap, "p_matrix");
	glUniformMatrix4fv(projLoc, 1, GL_FALSE, glm::value_ptr(pMat));

	glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
	glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
	glEnableVertexAttribArray(0);

	glActiveTexture(GL_TEXTURE0);
	glBindTexture(GL_TEXTURE_CUBE_MAP, skyboxTexture);

	glEnable(GL_CULL_FACE);
	glFrontFace(GL_CCW);	// cube is CW, but we are viewing the inside
	glDisable(GL_DEPTH_TEST);
	glDrawArrays(GL_TRIANGLES, 0, 36);
	glEnable(GL_DEPTH_TEST);

	// draw scene (in this case it is just a torus)

	glUseProgram(renderingProgram);
	// uniform variable locations for the matrix transforms, including the normal-vector transform
	mvLoc = glGetUniformLocation(renderingProgram, "mv_matrix");
	projLoc = glGetUniformLocation(renderingProgram, "proj_matrix");
	nLoc = glGetUniformLocation(renderingProgram, "normalMat");
	// build the MODEL matrix, as before
	rotAmt += 0.01f;
	mMat = glm::translate(glm::mat4(1.0f), glm::vec3(torLocX, torLocY, torLocZ));
	mMat = glm::rotate(mMat, rotAmt, glm::vec3(1.0f, 0.0f, 0.0f));
	
	// build the MODEL-VIEW matrix, as before
	mvMat = vMat * mMat;
	// the normal-vector transform is now also sent in a uniform variable
	invTrMat = glm::transpose(glm::inverse(mvMat));
	glUniformMatrix4fv(mvLoc, 1, GL_FALSE, glm::value_ptr(mvMat));
	glUniformMatrix4fv(projLoc, 1, GL_FALSE, glm::value_ptr(pMat));
	glUniformMatrix4fv(nLoc, 1, GL_FALSE, glm::value_ptr(invTrMat));
	// activate the torus vertex buffer, as before
	glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
	glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
	glEnableVertexAttribArray(0);
	// we now also need to activate the torus normal-vector buffer
	glBindBuffer(GL_ARRAY_BUFFER, vbo[2]);
	glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, 0);
	glEnableVertexAttribArray(1);
	// the torus texture is now a cube map
	glActiveTexture(GL_TEXTURE0);
	glBindTexture(GL_TEXTURE_CUBE_MAP, skyboxTexture);
	// the process of drawing the torus is unchanged
	glClear(GL_DEPTH_BUFFER_BIT);
	glEnable(GL_CULL_FACE);
	glFrontFace(GL_CCW);
	glDepthFunc(GL_LEQUAL);

	glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vbo[3]);
	glDrawElements(GL_TRIANGLES, numTorusIndices, GL_UNSIGNED_INT, 0);
}

void window_size_callback(GLFWwindow* win, int newWidth, int newHeight) {
	aspect = (float)newWidth / (float)newHeight;
	glViewport(0, 0, newWidth, newHeight);
	pMat = glm::perspective(1.0472f, aspect, 0.1f, 1000.0f);
}

int main(void) {
	if (!glfwInit()) { exit(EXIT_FAILURE); }
	glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
	glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
	GLFWwindow* window = glfwCreateWindow(800, 800, "Chapter9 - program2", NULL, NULL);
	glfwMakeContextCurrent(window);
	if (glewInit() != GLEW_OK) { exit(EXIT_FAILURE); }
	glfwSwapInterval(1);

	glfwSetWindowSizeCallback(window, window_size_callback);

	init(window);

	while (!glfwWindowShouldClose(window)) {
		display(window, glfwGetTime());
		glfwSwapBuffers(window);
		glfwPollEvents();
	}

	glfwDestroyWindow(window);
	glfwTerminate();
	exit(EXIT_SUCCESS);
}

Although this scene requires two pairs of shaders, one pair for the cubemap and one pair for the torus, the cubemap shaders (vertCShader.glsl and fragCShader.glsl) are unchanged from Program 9.2; the new code is in the torus shaders. The changes made to Program 9.2 to obtain Program 9.3 are summarized as follows.
In the init() function:
Create the normal-vector buffer for the torus [this actually happens in setupVertices(), which is called by init()]. The texture-coordinate buffer for the torus is no longer needed.

In the display() function:
Build the matrix used to transform the normal vectors (called "norm_matrix" in Chapter 7) and connect it to its associated uniform variable.
Activate the torus normal-vector buffer, and make the texture cube map the texture for the torus (in place of the previous "brick" texture).

In the vertex shader:
Add the incoming normal vector and the norm_matrix uniform; output the transformed vertex position and normal vector in preparation for computing the reflection vector, in a manner similar to that described in the chapter on lighting.

In the fragment shader:
Compute the reflection vector in a similar way as in the lighting chapter; retrieve the output color from the texture (now a cubemap), using the reflection vector instead of texture coordinates for the lookup.

The rendering shown in Figure 9.10 is a great example of how a powerful illusion can be achieved with simple techniques. We gave the object a "chrome" look simply by painting the background onto it, without doing any ADS material modeling at all. Even though no ADS lighting is incorporated into the scene, the technique gives the impression of light reflecting off the object. In this example we even get the impression of a specular highlight on the lower left of the torus, because the cubemap contains the sun's reflection in the water.

Additional information

As was the case when we first studied texturing in Chapter 5, SOIL2 makes it easy to build a cubemap and apply textures to it. A side effect, however, is that it hides some useful OpenGL details from the user. It is entirely possible to instantiate and load an OpenGL cubemap without SOIL2. Although the topic is beyond the scope of this book, the basic steps are as follows:

(1) Use C++ tools to read 6 image files (they must be square);

(2) Use glGenTextures() to create textures and their integer references for the cube map;

(3) Call glBindTexture() and specify the texture ID and GL_TEXTURE_CUBE_MAP;

(4) Use glTexStorage2D() to specify the storage requirements of the cube map;

(5) Call glTexImage2D() or glTexSubImage2D() to assign images to each face of the cube.

For more details on creating OpenGL cubemaps without SOIL2, see the related tutorials available online [dV14], [GE16].
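The steps above can be sketched in code. The following is only an outline under stated assumptions: it requires a current OpenGL 4.3 context, and loadPixels() is a hypothetical placeholder for whatever image-decoding routine step (1) uses; it is not a real API.

```cpp
#include <GL/glew.h>
#include <string>
#include <vector>

// hypothetical helper from step (1): decodes one square RGBA8 image
std::vector<unsigned char> loadPixels(const std::string& file, int size);

GLuint buildCubeMap(const std::string faceFiles[6], int size) {
	GLuint texID;
	glGenTextures(1, &texID);                    // step (2): create the texture ID
	glBindTexture(GL_TEXTURE_CUBE_MAP, texID);   // step (3): bind as GL_TEXTURE_CUBE_MAP
	// step (4): allocate immutable storage covering all six size x size faces
	glTexStorage2D(GL_TEXTURE_CUBE_MAP, 1, GL_RGBA8, size, size);
	// step (5): the six face targets are consecutive enums starting at +X
	for (int i = 0; i < 6; i++) {
		std::vector<unsigned char> pixels = loadPixels(faceFiles[i], size);
		glTexSubImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i,
		                0, 0, 0, size, size,
		                GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
	}
	glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
	glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
	glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
	glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
	return texID;
}
```

Clamping all three wrap axes to edge serves the same purpose as GL_TEXTURE_CUBE_MAP_SEAMLESS in the program above: it reduces visible seams where adjacent cube faces meet.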

As discussed in this chapter, one of the main limitations of environment mapping is that a reflective object shows only the contents of the cubemap: other objects rendered in the scene do not appear in its reflections. Whether this restriction is acceptable depends on the nature of the scene. If other objects must appear in a mirrored or chrome surface, different methods are required. A common approach uses the stencil buffer (mentioned in Chapter 8); it is described in many online tutorials (such as [OV12], [NE14], and [GR16]), although it is beyond the scope of this book.

We did not cover the implementation of skydomes. In some respects they are arguably simpler than skyboxes and less susceptible to distortion, and even implementing environment mapping with them is simpler, at least mathematically. However, OpenGL's built-in support for cubemap textures often makes skyboxes the more practical choice.

Of the topics covered in this book, skyboxes and skydomes are arguably among the simplest conceptually. However, making them look convincing can be time-consuming. We have only briefly touched on some of the issues that can arise (such as seams); depending on the texture image files used, other problems may appear that require additional fixes, especially in animated scenes or when the camera can be moved interactively.

We also only briefly covered how to produce usable, convincing cubemap texture images. There are many excellent tools for this; among the most popular is Terragen [TE16].

All cubemaps in this chapter were created by the author using Terragen (except for the star field map in Figure 9.6).

Origin blog.csdn.net/weixin_44848751/article/details/131169590