OpenGL ES Learning Series -- Lighting Up the World

Copyright notice: This is an original article by the author; reproduction without permission is not allowed. https://blog.csdn.net/sinat_22657459/article/details/89640901

     In this section we build on the previous one by adding lighting. Our target for analysis is the final result of Chapter 13 of OpenGL ES 2 for Android. To download the code, click: Opengl ES Source Code; the lighting module in that Git repository is what we will analyze in this section. First, take a look at the final result.

     Compared with the previous section, the skybox has been replaced with a night skybox and lighting has been added; the three particle shooters now also carry point lights, and you can see that the colors at their emission points are very bright. Before we start analyzing the code, there are a few lighting concepts we need to get straight, or the code analysis later will be very confusing.

     1. Implementing directional light with Lambertian reflectance

     To implement diffuse reflection we can use Lambertian reflectance. A Lambertian surface reflects incoming light in all directions so that it looks the same from every viewpoint; its appearance depends only on its orientation relative to the light source and its distance from it. To implement Lambertian reflectance in code, we first need the cosine of the angle between the surface and the incoming light. To get that cosine we only need the dot product of the vector pointing toward the light and the surface normal: when both vectors are normalized, their dot product equals the cosine of the angle between them, which is exactly what Lambertian reflectance requires. Let's look at an implementation. In the current lighting module, the heightmap vertex shader heightmap_vertex_shader.glsl defines a getDirectionalLighting function, whose source is:

vec3 getDirectionalLighting()
{   
    return materialColor * 0.3 
         * max(dot(eyeSpaceNormal, u_VectorToLight), 0.0);       
}

     materialColor is the material color, eyeSpaceNormal is the normal vector, and u_VectorToLight is the vector toward the directional light. The built-in function dot computes the dot product of two vectors; since both are normalized, the result is the cosine of the angle between them. Multiplying that by materialColor gives the surface's Lambertian reflectance, and the factor 0.3 simply dims the light. With this snippet, it should be clear how we use Lambertian reflectance.
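     As a CPU-side illustration (a minimal sketch of my own, not code from the book or its repository), the same Lambert factor can be computed in plain Java: the diffuse factor is the clamped cosine between the surface normal and the vector toward the light, both assumed normalized.

```java
public class Lambert {
    // Dot product of two 3-component vectors.
    static float dot(float[] a, float[] b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    // Equivalent of the shader's max(dot(eyeSpaceNormal, u_VectorToLight), 0.0);
    // both vectors must already be normalized.
    static float diffuseFactor(float[] normal, float[] toLight) {
        return Math.max(dot(normal, toLight), 0f);
    }

    public static void main(String[] args) {
        float[] normal  = {0f, 1f, 0f};            // surface facing straight up
        float[] light45 = {0f, 0.7071f, 0.7071f};  // light 45 degrees above the horizon
        float[] below   = {0f, -1f, 0f};           // light below the surface

        System.out.println(diffuseFactor(normal, light45)); // ~0.7071, cos(45°)
        System.out.println(diffuseFactor(normal, below));   // clamped to 0
    }
}
```

     The clamp to 0 is what keeps surfaces facing away from the light black instead of negatively lit.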

     2. Understanding point lights and directional lights

     This was already covered in the fifth question of the introduction to this OpenGL ES series; if you still have doubts, go back and work through it first.

     3. Using point lights

     1) We put positions and normals into eye space, where all positions and normals are relative to the camera's position and orientation, so that we can compare the distances and orientations of everything in the same coordinate space. Why use eye space rather than world space? Because specular lighting also depends on the camera position; even though we don't use specular light in this chapter, learning to work in eye space now means we can use it directly later.
     2) To move a position into eye space, we first multiply it by the model matrix to put it into world space, and then multiply it by the view matrix to bring it into eye space. To simplify this, we can multiply the view matrix by the model matrix to get a single matrix, called the model-view matrix, and use it to move our positions into eye space.
     3) This also works for normals, as long as the model-view matrix contains only translations and rotations. But what if we scale an object? If the scale is the same in all directions, we only need to renormalize the normal so its length stays 1; but if the object is squashed in one direction, we must compensate for that direction. To do so, we invert the model-view matrix, then transpose the inverted matrix, multiply the normal by the resulting matrix, and finally normalize the result. This involves quite a lot of matrix math that I haven't fully worked out myself; for now, just know that this is how it's used.
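     The following self-contained Java sketch (my own numerical check, not from the book) shows why the inverse-transpose is needed. It uses a diagonal scale matrix, so the inverse-transpose is simply the reciprocal of each scale factor; a normal transformed naively by the model matrix stops being perpendicular to the surface, while the inverse-transpose version stays perpendicular.

```java
public class NormalMatrixDemo {
    static float dot(float[] a, float[] b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    // Component-wise scale: models multiplying by a diagonal matrix
    // diag(sx, sy, sz). Its inverse-transpose is diag(1/sx, 1/sy, 1/sz).
    static float[] scale(float[] v, float sx, float sy, float sz) {
        return new float[]{v[0] * sx, v[1] * sy, v[2] * sz};
    }

    public static void main(String[] args) {
        float[] normal  = {1f, 1f, 0f};   // normal of a 45-degree plane
        float[] tangent = {1f, -1f, 0f};  // lies in that plane, so dot = 0

        // Non-uniform scale: stretch x by 2. The tangent transforms with
        // the model matrix itself.
        float[] newTangent = scale(tangent, 2f, 1f, 1f);

        // Wrong: transform the normal with the model matrix too.
        float[] wrong = scale(normal, 2f, 1f, 1f);
        // Right: transform it with the inverse-transpose, diag(1/2, 1, 1).
        float[] right = scale(normal, 0.5f, 1f, 1f);

        System.out.println(dot(wrong, newTangent)); // 3.0 -> no longer perpendicular
        System.out.println(dot(right, newTangent)); // 0.0 -> still perpendicular
    }
}
```

     Renormalizing afterward fixes the length, which is why the shader still calls normalize on the result.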

     The text of point 3 above is copied straight from the book. As you can see, to use OpenGL ES fluently you not only need the basic APIs but also some higher mathematics and physics; only then can you understand which principle to apply to achieve a given result — without that, we simply wouldn't know how to write the code. I'm still fuzzy on much of this theory myself.

     With those concepts covered, let's look at the code. As before, we'll focus on the parts that changed. The directory structure for this section is as follows:

     Take a look at this section's vertex shader: it contains much more code than before. Any complex functionality we implement ourselves later will head in this direction — controlling more logic inevitably takes more code. We'll go through the differences in every file of each package.

     The data package is unchanged, so we skip it. The Heightmap class in the objects package differs from before, so let's analyze it again. The modified source is:

public class Heightmap {
    private static final int POSITION_COMPONENT_COUNT = 3;
    private static final int NORMAL_COMPONENT_COUNT = 3;
    private static final int TOTAL_COMPONENT_COUNT =
            POSITION_COMPONENT_COUNT + NORMAL_COMPONENT_COUNT;
    private static final int STRIDE =
            (POSITION_COMPONENT_COUNT + NORMAL_COMPONENT_COUNT) * BYTES_PER_FLOAT;

    private final int width;
    private final int height;
    private final int numElements;

    private final VertexBuffer vertexBuffer;
    private final IndexBuffer indexBuffer;

    public Heightmap(Bitmap bitmap) {
        width = bitmap.getWidth();
        height = bitmap.getHeight();
        if (width * height > 65536) {
            throw new RuntimeException("Heightmap is too large for the index buffer.");
        }
        numElements = calculateNumElements();
        vertexBuffer = new VertexBuffer(loadBitmapData(bitmap));
        indexBuffer = new IndexBuffer(createIndexData());
    }

    /**
     * Copy the heightmap data into a vertex buffer object.
     */
    private float[] loadBitmapData(Bitmap bitmap) {
        final int[] pixels = new int[width * height];
        bitmap.getPixels(pixels, 0, width, 0, 0, width, height);
        bitmap.recycle();

        final float[] heightmapVertices =
                new float[width * height * TOTAL_COMPONENT_COUNT];

        int offset = 0;

        for (int row = 0; row < height; row++) {
            for (int col = 0; col < width; col++) {
                // The heightmap will lie flat on the XZ plane and centered
                // around (0, 0), with the bitmap width mapped to X and the
                // bitmap height mapped to Z, and Y representing the height. We
                // assume the heightmap is grayscale, and use the value of the
                // red color to determine the height.
                final Point point = getPoint(pixels, row, col);

                heightmapVertices[offset++] = point.x;
                heightmapVertices[offset++] = point.y;
                heightmapVertices[offset++] = point.z;

                final Point top = getPoint(pixels, row - 1, col);
                final Point left = getPoint(pixels, row, col - 1);
                final Point right = getPoint(pixels, row, col + 1);
                final Point bottom = getPoint(pixels, row + 1, col);

                final Vector rightToLeft = Geometry.vectorBetween(right, left);
                final Vector topToBottom = Geometry.vectorBetween(top, bottom);
                final Vector normal = rightToLeft.crossProduct(topToBottom).normalize();

                heightmapVertices[offset++] = normal.x;
                heightmapVertices[offset++] = normal.y;
                heightmapVertices[offset++] = normal.z;
            }
        }

        return heightmapVertices;
    }

    /**
     * Returns a point at the expected position given by row and col, but if the
     * position is out of bounds, then it clamps the position and uses the
     * clamped position to read the height. For example, calling with row = -1
     * and col = 5 will set the position as if the point really was at -1 and 5,
     * but the height will be set to the heightmap height at (0, 5), since (-1,
     * 5) is out of bounds. This is useful when we're generating normals, and we
     * need to read the heights of neighbouring points.
     */
    private Point getPoint(int[] pixels, int row, int col) {
        float x = ((float) col / (float) (width - 1)) - 0.5f;
        float z = ((float) row / (float) (height - 1)) - 0.5f;

        // Clamp to the bitmap bounds: rows run 0..height-1, columns 0..width-1.
        row = clamp(row, 0, height - 1);
        col = clamp(col, 0, width - 1);

        float y = (float) Color.red(pixels[(row * width) + col]) / (float) 255;

        return new Point(x, y, z);
    }

    private int clamp(int val, int min, int max) {
        return Math.max(min, Math.min(max, val));
    }

    private int calculateNumElements() {
        // There should be 2 triangles for every group of 4 vertices, so a
        // heightmap of, say, 10x10 pixels would have 9x9 groups, with 2
        // triangles per group and 3 vertices per triangle for a total of (9 x 9
        // x 2 x 3) indices.
        return (width - 1) * (height - 1) * 2 * 3;
    }

    /**
     * Create an index buffer object for the vertices to wrap them together into
     * triangles, creating indices based on the width and height of the
     * heightmap.
     */
    private short[] createIndexData() {
        final short[] indexData = new short[numElements];
        int offset = 0;

        for (int row = 0; row < height - 1; row++) {
            for (int col = 0; col < width - 1; col++) {
                // Note: The (short) cast will end up underflowing the number
                // into the negative range if it doesn't fit, which gives us the
                // right unsigned number for OpenGL due to two's complement.
                // This will work so long as the heightmap contains 65536 pixels
                // or less.
                short topLeftIndexNum = (short) (row * width + col);
                short topRightIndexNum = (short) (row * width + col + 1);
                short bottomLeftIndexNum = (short) ((row + 1) * width + col);
                short bottomRightIndexNum = (short) ((row + 1) * width + col + 1);

                // Write out two triangles.
                indexData[offset++] = topLeftIndexNum;
                indexData[offset++] = bottomLeftIndexNum;
                indexData[offset++] = topRightIndexNum;

                indexData[offset++] = topRightIndexNum;
                indexData[offset++] = bottomLeftIndexNum;
                indexData[offset++] = bottomRightIndexNum;
            }
        }

        return indexData;
    }

    public void bindData(HeightmapShaderProgram heightmapProgram) {
        vertexBuffer.setVertexAttribPointer(0,
                heightmapProgram.getPositionAttributeLocation(),
                POSITION_COMPONENT_COUNT, STRIDE);

        vertexBuffer.setVertexAttribPointer(
                POSITION_COMPONENT_COUNT * BYTES_PER_FLOAT,
                heightmapProgram.getNormalAttributeLocation(),
                NORMAL_COMPONENT_COUNT, STRIDE);
    }

    public void draw() {
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer.getBufferId());
        glDrawElements(GL_TRIANGLES, numElements, GL_UNSIGNED_SHORT, 0);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
    }
}

     Notice the long blocks of comments in the code. This is an excellent habit; in my earlier posts I kept stressing the importance of comments. Open almost any file in the Android source and you'll find detailed comments. Comments greatly improve readability, so we should make this a habit ourselves: not only write comments, but weigh every word so they express things clearly.

     Now let's look at what changed. Three new constants were added: NORMAL_COMPONENT_COUNT, TOTAL_COMPONENT_COUNT, and STRIDE. NORMAL_COMPONENT_COUNT says a normal takes 3 components; TOTAL_COMPONENT_COUNT is the total component count for one vertex; STRIDE, as before, is the stride between vertices. Their values are self-explanatory.

     Next, loadBitmapData: because we added a normal component to the vertex buffer, heightmapVertices needs extra storage for the normals. The two nested for loops still fill the vertex array. The getPoint comment already explains it well: it returns a point at the expected position given by row and col, and its Y component, as in the previous section, is the red channel of the pixel at that point normalized to [0, 1]. The method also clamps out-of-bounds positions, because just outside the heightmap's left, top, right, and bottom edges there are no pixels to read. Keep in mind that all of this is done to obtain the surface normal: we must have the normal to compute the cosine and, from it, the Lambertian reflectance — that is the ultimate goal! To get the normal, we take the left, top, right, and bottom neighbours of the current point (illustrated below) and build two vectors across the surface — note the order, right-to-left and top-to-bottom; we then call the Vector class's crossProduct method to get the cross product and normalize it to obtain the surface normal.
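     To make the neighbour sampling concrete, here is a standalone Java sketch of the cross-product normal generation described above (my own simplified stand-in, using plain float arrays instead of the book's Point, Vector, and Geometry helpers):

```java
public class HeightmapNormal {
    // vectorBetween(from, to) in the book is to - from; here sub(a, b) = a - b.
    static float[] sub(float[] a, float[] b) {
        return new float[]{a[0] - b[0], a[1] - b[1], a[2] - b[2]};
    }

    // Cross product a x b.
    static float[] cross(float[] a, float[] b) {
        return new float[]{
            a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]};
    }

    static float[] normalize(float[] v) {
        float len = (float) Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
        return new float[]{v[0] / len, v[1] / len, v[2] / len};
    }

    public static void main(String[] args) {
        // Neighbour positions around one point of a flat heightmap,
        // 0.1 apart on the XZ plane, all at height 0.5.
        float[] left   = {-0.1f, 0.5f, 0f};
        float[] right  = { 0.1f, 0.5f, 0f};
        float[] top    = { 0f, 0.5f, -0.1f};
        float[] bottom = { 0f, 0.5f,  0.1f};

        // Same order as the book: right-to-left, then top-to-bottom.
        float[] rightToLeft = sub(left, right);
        float[] topToBottom = sub(bottom, top);
        float[] normal = normalize(cross(rightToLeft, topToBottom));

        // A flat surface yields the straight-up normal (0, 1, 0).
        System.out.println(normal[0] + ", " + normal[1] + ", " + normal[2]);
    }
}
```

     Swapping the order of the two vectors would flip the cross product and give a normal pointing down into the terrain, which is why the right-to-left / top-to-bottom order matters.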

     Once the normal vector is computed, it is stored in the vertex array. The clamp method enforces the bounds: if val is less than min it returns min, and if it is greater than max it returns max. Finally, look at bindData: since we added normals, the shader must declare a normal attribute as well. Its values come from the vertex buffer, so the normal is declared as an attribute, not a uniform — after the previous sections this should be easy to follow. And because the vertex array no longer holds positions alone, the stride becomes mandatory, and when calling setVertexAttribPointer for the normal we must also pass the offset, or the values would be read from the wrong place!
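     As a quick sketch of the layout arithmetic behind those stride and offset values (a hypothetical helper class of my own, not part of the project): each vertex packs 3 position floats followed by 3 normal floats, so consecutive vertices are 24 bytes apart and the normal starts 12 bytes into each vertex record.

```java
public class InterleavedLayout {
    static final int BYTES_PER_FLOAT = 4;
    static final int POSITION_COMPONENT_COUNT = 3;
    static final int NORMAL_COMPONENT_COUNT = 3;
    static final int TOTAL_COMPONENT_COUNT =
        POSITION_COMPONENT_COUNT + NORMAL_COMPONENT_COUNT;

    // Byte stride between consecutive vertices, as in Heightmap.
    static final int STRIDE = TOTAL_COMPONENT_COUNT * BYTES_PER_FLOAT;
    // Byte offset of the normal inside one vertex record.
    static final int NORMAL_OFFSET = POSITION_COMPONENT_COUNT * BYTES_PER_FLOAT;

    // Float index where vertex i's normal starts in the packed array.
    static int normalFloatIndex(int i) {
        return i * TOTAL_COMPONENT_COUNT + POSITION_COMPONENT_COUNT;
    }

    public static void main(String[] args) {
        System.out.println("stride = " + STRIDE + " bytes");                   // 24
        System.out.println("normal offset = " + NORMAL_OFFSET + " bytes");     // 12
        System.out.println("vertex 2 normal at float " + normalFloatIndex(2)); // 15
    }
}
```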

     Next, look at the HeightmapShaderProgram class; the source is:

public class HeightmapShaderProgram extends ShaderProgram {           
    private final int uVectorToLightLocation;
    private final int uMVMatrixLocation;
    private final int uIT_MVMatrixLocation;
    private final int uMVPMatrixLocation;
    private final int uPointLightPositionsLocation;
    private final int uPointLightColorsLocation;
    
    private final int aPositionLocation;
    private final int aNormalLocation;

    public HeightmapShaderProgram(Context context) {
        super(context, R.raw.heightmap_vertex_shader,
            R.raw.heightmap_fragment_shader);
                
        uVectorToLightLocation = glGetUniformLocation(program, U_VECTOR_TO_LIGHT);
        uMVMatrixLocation = glGetUniformLocation(program, U_MV_MATRIX);
        uIT_MVMatrixLocation = glGetUniformLocation(program, U_IT_MV_MATRIX);
        uMVPMatrixLocation = glGetUniformLocation(program, U_MVP_MATRIX);
        
        uPointLightPositionsLocation = 
            glGetUniformLocation(program, U_POINT_LIGHT_POSITIONS);
        uPointLightColorsLocation = 
            glGetUniformLocation(program, U_POINT_LIGHT_COLORS);
        aPositionLocation = glGetAttribLocation(program, A_POSITION);
        aNormalLocation = glGetAttribLocation(program, A_NORMAL);
    }

    /*
    public void setUniforms(float[] matrix, Vector vectorToLight) {
        glUniformMatrix4fv(uMatrixLocation, 1, false, matrix, 0);   
        glUniform3f(uVectorToLightLocation, 
            vectorToLight.x, vectorToLight.y, vectorToLight.z);
    }
     */
    
    public void setUniforms(float[] mvMatrix, 
                            float[] it_mvMatrix, 
                            float[] mvpMatrix, 
                            float[] vectorToDirectionalLight, 
                            float[] pointLightPositions,
                            float[] pointLightColors) {          
        glUniformMatrix4fv(uMVMatrixLocation, 1, false, mvMatrix, 0);   
        glUniformMatrix4fv(uIT_MVMatrixLocation, 1, false, it_mvMatrix, 0);   
        glUniformMatrix4fv(uMVPMatrixLocation, 1, false, mvpMatrix, 0);
        glUniform3fv(uVectorToLightLocation, 1, vectorToDirectionalLight, 0);
        
        glUniform4fv(uPointLightPositionsLocation, 3, pointLightPositions, 0);            
        glUniform3fv(uPointLightColorsLocation, 3, pointLightColors, 0);
    }

    public int getPositionAttributeLocation() {
        return aPositionLocation;
    }
    
    public int getNormalAttributeLocation() {
        return aNormalLocation;
    }
}

     This is the modified heightmap shader program, and it is easy to follow: the constructor looks up the locations of all the uniform and attribute variables we defined, for later assignment, and setUniforms takes parameters passed in from the renderer and assigns them to the corresponding variables to control the shader program. One caution: when setting uniform values, make sure to call the API matching the type declared in the shader, or it will fail — that is the most basic requirement.

     Continuing with the ParticlesRenderer class; the source is:

public class ParticlesRenderer implements Renderer {    
    private final Context context;

    private final float[] modelMatrix = new float[16];
    private final float[] viewMatrix = new float[16];
    private final float[] viewMatrixForSkybox = new float[16];
    private final float[] projectionMatrix = new float[16];        
    
    private final float[] tempMatrix = new float[16];
    private final float[] modelViewMatrix = new float[16];
    private final float[] it_modelViewMatrix = new float[16];
    private final float[] modelViewProjectionMatrix = new float[16];

    private HeightmapShaderProgram heightmapProgram;
    private Heightmap heightmap;
    
    /*
    private final Vector vectorToLight = new Vector(0.61f, 0.64f, -0.47f).normalize();
    */ 
    /*
    private final Vector vectorToLight = new Vector(0.30f, 0.35f, -0.89f).normalize();
    */
    final float[] vectorToLight = {0.30f, 0.35f, -0.89f, 0f};
    
    private final float[] pointLightPositions = new float[]
        {-1f, 1f, 0f, 1f,
          0f, 1f, 0f, 1f,
          1f, 1f, 0f, 1f};
    
    private final float[] pointLightColors = new float[]
        {1.00f, 0.20f, 0.02f,
         0.02f, 0.25f, 0.02f, 
         0.02f, 0.20f, 1.00f};
    
    private SkyboxShaderProgram skyboxProgram;
    private Skybox skybox;   
    
    private ParticleShaderProgram particleProgram;      
    private ParticleSystem particleSystem;
    private ParticleShooter redParticleShooter;
    private ParticleShooter greenParticleShooter;
    private ParticleShooter blueParticleShooter;     

    private long globalStartTime;    
    private int particleTexture;
    private int skyboxTexture;
    
    private float xRotation, yRotation;  

    public ParticlesRenderer(Context context) {
        this.context = context;
    }

    public void handleTouchDrag(float deltaX, float deltaY) {
        xRotation += deltaX / 16f;
        yRotation += deltaY / 16f;
        
        if (yRotation < -90) {
            yRotation = -90;
        } else if (yRotation > 90) {
            yRotation = 90;
        } 
        
        // Setup view matrix
        updateViewMatrices();        
    }
    
    private void updateViewMatrices() {        
        setIdentityM(viewMatrix, 0);
        rotateM(viewMatrix, 0, -yRotation, 1f, 0f, 0f);
        rotateM(viewMatrix, 0, -xRotation, 0f, 1f, 0f);
        System.arraycopy(viewMatrix, 0, viewMatrixForSkybox, 0, viewMatrix.length);
        
        // We want the translation to apply to the regular view matrix, and not
        // the skybox.
        translateM(viewMatrix, 0, 0, -1.5f, -5f);

//        // This helps us figure out the vector for the sun or the moon.        
//        final float[] tempVec = {0f, 0f, -1f, 1f};
//        final float[] tempVec2 = new float[4];
//        
//        Matrix.multiplyMV(tempVec2, 0, viewMatrixForSkybox, 0, tempVec, 0);
//        Log.v("Testing", Arrays.toString(tempVec2));
    }  

    @Override
    public void onSurfaceCreated(GL10 glUnused, EGLConfig config) {
        glClearColor(0.0f, 0.0f, 0.0f, 0.0f);  
        glEnable(GL_DEPTH_TEST);
        glEnable(GL_CULL_FACE);
        
        heightmapProgram = new HeightmapShaderProgram(context);
        heightmap = new Heightmap(((BitmapDrawable)context.getResources()
            .getDrawable(R.drawable.heightmap)).getBitmap());
        
        skyboxProgram = new SkyboxShaderProgram(context);
        skybox = new Skybox();      
        
        particleProgram = new ParticleShaderProgram(context);        
        particleSystem = new ParticleSystem(10000);        
        globalStartTime = System.nanoTime();
        
        final Vector particleDirection = new Vector(0f, 0.5f, 0f);              
        final float angleVarianceInDegrees = 5f; 
        final float speedVariance = 1f;
            
        redParticleShooter = new ParticleShooter(
            new Point(-1f, 0f, 0f), 
            particleDirection,                
            Color.rgb(255, 50, 5),            
            angleVarianceInDegrees, 
            speedVariance);
        
        greenParticleShooter = new ParticleShooter(
            new Point(0f, 0f, 0f), 
            particleDirection,
            Color.rgb(25, 255, 25),            
            angleVarianceInDegrees, 
            speedVariance);
        
        blueParticleShooter = new ParticleShooter(
            new Point(1f, 0f, 0f), 
            particleDirection,
            Color.rgb(5, 50, 255),            
            angleVarianceInDegrees, 
            speedVariance); 
                
        particleTexture = TextureHelper.loadTexture(context, R.drawable.particle_texture);
        
//        skyboxTexture = TextureHelper.loadCubeMap(context, 
//            new int[] { R.drawable.left, R.drawable.right,
//                        R.drawable.bottom, R.drawable.top, 
//                        R.drawable.front, R.drawable.back}); 
        skyboxTexture = TextureHelper.loadCubeMap(context, 
        new int[] { R.drawable.night_left, R.drawable.night_right,
                    R.drawable.night_bottom, R.drawable.night_top, 
                    R.drawable.night_front, R.drawable.night_back});
    }

    @Override
    public void onSurfaceChanged(GL10 glUnused, int width, int height) {                
        glViewport(0, 0, width, height);        

        MatrixHelper.perspectiveM(projectionMatrix, 45, (float) width
            / (float) height, 1f, 100f);   
        updateViewMatrices();
    }

    @Override    
    public void onDrawFrame(GL10 glUnused) {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);                
                        
        drawHeightmap();
        drawSkybox();        
        drawParticles();
    }

    private void drawHeightmap() {
        setIdentityM(modelMatrix, 0);  
        
        // Expand the heightmap's dimensions, but don't expand the height as
        // much so that we don't get insanely tall mountains.        
        scaleM(modelMatrix, 0, 100f, 10f, 100f);
        updateMvpMatrix();        
        
        heightmapProgram.useProgram();
        /*
        heightmapProgram.setUniforms(modelViewProjectionMatrix, vectorToLight);
         */
        
        // Put the light positions into eye space.        
        final float[] vectorToLightInEyeSpace = new float[4];
        final float[] pointPositionsInEyeSpace = new float[12];                
        multiplyMV(vectorToLightInEyeSpace, 0, viewMatrix, 0, vectorToLight, 0);
        multiplyMV(pointPositionsInEyeSpace, 0, viewMatrix, 0, pointLightPositions, 0);
        multiplyMV(pointPositionsInEyeSpace, 4, viewMatrix, 0, pointLightPositions, 4);
        multiplyMV(pointPositionsInEyeSpace, 8, viewMatrix, 0, pointLightPositions, 8); 
        
        heightmapProgram.setUniforms(modelViewMatrix, it_modelViewMatrix, 
            modelViewProjectionMatrix, vectorToLightInEyeSpace,
            pointPositionsInEyeSpace, pointLightColors);
        heightmap.bindData(heightmapProgram);
        heightmap.draw(); 
    }
    
    private void drawSkybox() {   
        setIdentityM(modelMatrix, 0);
        updateMvpMatrixForSkybox();
                
        glDepthFunc(GL_LEQUAL); // This avoids problems with the skybox itself getting clipped.
        skyboxProgram.useProgram();
        skyboxProgram.setUniforms(modelViewProjectionMatrix, skyboxTexture);
        skybox.bindData(skyboxProgram);
        skybox.draw();
        glDepthFunc(GL_LESS);
    }
   
    private void drawParticles() {        
        float currentTime = (System.nanoTime() - globalStartTime) / 1000000000f;
        
        redParticleShooter.addParticles(particleSystem, currentTime, 1);
        greenParticleShooter.addParticles(particleSystem, currentTime, 1);              
        blueParticleShooter.addParticles(particleSystem, currentTime, 1);              
        
        setIdentityM(modelMatrix, 0);
        updateMvpMatrix();
        
        glDepthMask(false);
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE);
        
        particleProgram.useProgram();
        particleProgram.setUniforms(modelViewProjectionMatrix, currentTime, particleTexture);
        particleSystem.bindData(particleProgram);
        particleSystem.draw(); 
        
        glDisable(GL_BLEND);
        glDepthMask(true);
    }
    
    /*
    private void updateMvpMatrix() {
        multiplyMM(tempMatrix, 0, viewMatrix, 0, modelMatrix, 0);        
        multiplyMM(modelViewProjectionMatrix, 0, projectionMatrix, 0, tempMatrix, 0);
    }
    */
    
    private void updateMvpMatrix() {
        multiplyMM(modelViewMatrix, 0, viewMatrix, 0, modelMatrix, 0);
        invertM(tempMatrix, 0, modelViewMatrix, 0);
        transposeM(it_modelViewMatrix, 0, tempMatrix, 0);        
        multiplyMM(
            modelViewProjectionMatrix, 0, 
            projectionMatrix, 0, 
            modelViewMatrix, 0);
    }
        
    private void updateMvpMatrixForSkybox() {
        multiplyMM(tempMatrix, 0, viewMatrixForSkybox, 0, modelMatrix, 0);
        multiplyMM(modelViewProjectionMatrix, 0, projectionMatrix, 0, tempMatrix, 0);
    }
}

     Compared with before, this class keeps growing. It now adds two matrices, modelViewMatrix and it_modelViewMatrix, which exist to implement point 3 of the "using point lights" concept from the start of this section. They are assigned in updateMvpMatrix, which we'll analyze later. Next comes vectorToLight, the directional light. Why is it defined with four components? Because of how matrix multiplication works: to move the directional light into eye space we must multiply it by a 4x4 matrix — again, refer back to the third concept described earlier.
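     Here is a plain-Java sketch (my own, mirroring the column-major convention of android.opengl.Matrix.multiplyMV, not code from the project) of why the fourth component matters: the translation part of a 4x4 matrix moves points (w = 1) but leaves directions (w = 0) untouched, which is exactly what a directional light needs.

```java
public class HomogeneousDemo {
    // Column-major 4x4 matrix times a 4-component vector,
    // following the same element layout android.opengl.Matrix uses.
    static float[] multiplyMV(float[] m, float[] v) {
        float[] r = new float[4];
        for (int row = 0; row < 4; row++) {
            r[row] = m[row] * v[0] + m[4 + row] * v[1]
                   + m[8 + row] * v[2] + m[12 + row] * v[3];
        }
        return r;
    }

    public static void main(String[] args) {
        // Identity rotation with a translation of (0, -1.5, -5);
        // in column-major order the translation sits in elements 12..14.
        float[] m = {
            1, 0, 0, 0,
            0, 1, 0, 0,
            0, 0, 1, 0,
            0, -1.5f, -5, 1};

        float[] direction = {0.30f, 0.35f, -0.89f, 0f}; // w = 0: a direction
        float[] position  = {0f, 1f, 0f, 1f};           // w = 1: a point

        System.out.println(java.util.Arrays.toString(multiplyMV(m, direction)));
        // direction unchanged: [0.3, 0.35, -0.89, 0.0]
        System.out.println(java.util.Arrays.toString(multiplyMV(m, position)));
        // position translated: [0.0, -0.5, -5.0, 1.0]
    }
}
```

     This is also why pointLightPositions uses w = 1: point lights have actual positions and must be translated along with the scene.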

     Next, pointLightPositions holds the positions of the point lights added to the three particle shooters; their Y components are 1 higher than the shooters' positions so the lights aren't occluded, and the W component is 1. pointLightColors holds the three lights' colors. The updateViewMatrices method is identical to the previous section; the only difference in onSurfaceCreated from the previous section is the swap to a night skybox; onSurfaceChanged and onDrawFrame are completely unchanged.

     drawHeightmap draws the heightmap. Because the setUniforms we defined in HeightmapShaderProgram now expects these parameters, we must compute them and pass them in. multiplyMV(vectorToLightInEyeSpace, 0, viewMatrix, 0, vectorToLight, 0) multiplies the view matrix by the directional light vector and stores the result in vectorToLightInEyeSpace; the light vector is already defined in world space, so multiplying by the view matrix alone is enough to bring it into eye space. The three multiplyMV calls that follow work exactly the same way, moving the three point lights into eye space; each point light occupies four components, so the offsets step by 4, and all three transformed positions are stored in the pointPositionsInEyeSpace array. Once everything is computed, heightmapProgram.setUniforms hands the parameters to the shader program.

     Finally in this class, look at the implementation of updateMvpMatrix: first multiplyMM(modelViewMatrix, 0, viewMatrix, 0, modelMatrix, 0) multiplies the view matrix by the model matrix, giving the model-view matrix described in the concepts at the top of this section; then invertM(tempMatrix, 0, modelViewMatrix, 0) inverts the model-view matrix; and transposeM(it_modelViewMatrix, 0, tempMatrix, 0) transposes the inverted matrix. These steps follow our earlier concepts exactly, which is why the concepts have to come first — otherwise we simply wouldn't know what to do next.

     Lastly, let's look at the vertex and fragment shaders. The vertex shader's source is:

// uniform mat4 u_Matrix;
uniform mat4 u_MVMatrix;
uniform mat4 u_IT_MVMatrix;
uniform mat4 u_MVPMatrix;


uniform vec3 u_VectorToLight;             // In eye space
/*
uniform vec3 u_VectorToLight;
*/

uniform vec4 u_PointLightPositions[3];    // In eye space
uniform vec3 u_PointLightColors[3];


attribute vec4 a_Position;
attribute vec3 a_Normal;


varying vec3 v_Color;
vec3 materialColor;
vec4 eyeSpacePosition;
vec3 eyeSpaceNormal;

vec3 getAmbientLighting();
vec3 getDirectionalLighting();
vec3 getPointLighting();
void main()                    
{    
    /*
    v_Color = mix(vec3(0.180, 0.467, 0.153),    // A dark green 
                  vec3(0.660, 0.670, 0.680),    // A stony gray 
                  a_Position.y);                      
    
    // Note: The lighting code here doesn't take into account any rotations/
    // translations/etc... that may have been done to the model. In that case, 
    // the combined model-view matrix should be passed in, and the normals 
    // transformed with that matrix (if the matrix contains any skew / scale, 
    // then the normals will need to be multiplied by the inverse transpose 
    // (see http://arcsynthesis.org/gltut/Illumination/Tut09%20Normal%20Transformation.html
    // for more information)).
    
    vec3 scaledNormal = a_Normal;
    scaledNormal.y *= 10.0;
    scaledNormal = normalize(scaledNormal);
    
    float diffuse = max(dot(scaledNormal, u_VectorToLight), 0.0);
    
    diffuse *= 0.3;
    
    v_Color *= diffuse;    
                    
    float ambient = 0.2;
        
    float ambient = 0.1;
    
    v_Color += ambient;
    */                    
    
    materialColor = mix(vec3(0.180, 0.467, 0.153),    // A dark green 
                        vec3(0.660, 0.670, 0.680),    // A stony gray 
                        a_Position.y);
    eyeSpacePosition = u_MVMatrix * a_Position;                             
                             
    // The model normals need to be adjusted as per the transpose 
    // of the inverse of the modelview matrix.    
    eyeSpaceNormal = normalize(vec3(u_IT_MVMatrix * vec4(a_Normal, 0.0)));   
                                                                                  
    v_Color = getAmbientLighting();
    v_Color += getDirectionalLighting();                                                                                                                  
    v_Color += getPointLighting();
        
    gl_Position = u_MVPMatrix * a_Position;
}  

vec3 getAmbientLighting() 
{    
    return materialColor * 0.1;      
}

vec3 getDirectionalLighting()
{   
    return materialColor * 0.3 
         * max(dot(eyeSpaceNormal, u_VectorToLight), 0.0);       
}

vec3 getPointLighting()
{
    vec3 lightingSum = vec3(0.0);
    
    for (int i = 0; i < 3; i++) {                  
        vec3 toPointLight = vec3(u_PointLightPositions[i]) 
                          - vec3(eyeSpacePosition);          
        float distance = length(toPointLight);
        toPointLight = normalize(toPointLight);
        
        float cosine = max(dot(eyeSpaceNormal, toPointLight), 0.0); 
        lightingSum += (materialColor * u_PointLightColors[i] * 5.0 * cosine) 
                       / distance;
    }  
    
    return lightingSum;       
}
  

     At the top, three matrices are declared, all set from the renderer via heightmapProgram.setUniforms: u_MVMatrix is the model-view matrix, u_IT_MVMatrix is the transpose of the inverted model-view matrix, and u_MVPMatrix is the combined model-view-projection matrix, playing exactly the same role as the perspective projection matrix we used before. u_VectorToLight is the directional light; u_PointLightPositions[3] holds the positions of the three point lights and u_PointLightColors[3] their colors; a_Position is the heightmap vertex attribute; a_Normal is the normal, whose values were computed in Heightmap's loadBitmapData when filling the vertex array, as explained in detail above. materialColor is the material color — essentially the heightmap's surface color; eyeSpacePosition is the heightmap vertex position in eye space; and eyeSpaceNormal is the normal in eye space.

     The first statement in main still calls mix to interpolate the heightmap's color from the Y component. Then eyeSpacePosition = u_MVMatrix * a_Position converts the vertex position into eye space — we need this because we must later compute the vector from each point light to the current position. Next, the normal is multiplied by the inverse-transpose matrix and normalized, giving the normal's vector in eye space, all following the concepts explained earlier. Then getAmbientLighting() computes the ambient light, getDirectionalLighting() computes the directional light's Lambertian reflectance, and getPointLighting() computes the three point lights' Lambertian reflectance; the results are summed into the final value of v_Color passed to the fragment shader. Finally, gl_Position = u_MVPMatrix * a_Position assigns the built-in variable gl_Position.

     getAmbientLighting is simple: it just dims the material color; the 0.1 factor is arbitrary and you can change it yourself. getDirectionalLighting computes the directional light's Lambertian reflectance, which we already walked through as the example at the start of this section. getPointLighting computes the point lights' Lambertian reflectance. Why does the for loop run three times? Because we defined three point lights, each of which affects the heightmap, so all three contributions must be included. The calculation again follows the concepts above: for each point light, compute the vector from the surface position to the light, call normalize to normalize it, then take its dot product with the normal to get the cosine — the Lambertian coefficient — and multiply by materialColor to get that light's Lambertian reflectance. Dividing by distance imitates light intensity falling off with distance, and multiplying by 5.0 amplifies the light; the 5.0 is arbitrary too — change it and you'll see the light strength shift noticeably.
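     The per-light computation in getPointLighting() can be mirrored on the CPU. This is my own re-expression in plain Java, not shader code, keeping the shader's arbitrary 5.0 brightness factor and linear 1/distance falloff:

```java
public class PointLightDemo {
    static float dot(float[] a, float[] b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    // Lambert term for one point light: clamped cosine between the
    // surface normal and the direction to the light, scaled by 5.0
    // and attenuated by 1/distance, as in the shader's loop body.
    static float pointLight(float[] normal, float[] surfacePos, float[] lightPos) {
        float[] toLight = {lightPos[0] - surfacePos[0],
                           lightPos[1] - surfacePos[1],
                           lightPos[2] - surfacePos[2]};
        float distance = (float) Math.sqrt(dot(toLight, toLight));
        // Normalize before taking the cosine.
        float[] n = {toLight[0] / distance, toLight[1] / distance, toLight[2] / distance};
        float cosine = Math.max(dot(normal, n), 0f);
        return 5.0f * cosine / distance;
    }

    public static void main(String[] args) {
        float[] up = {0f, 1f, 0f};
        float[] surface = {0f, 0f, 0f};
        // The same light twice as far away contributes half as much:
        // the cosine is 1 in both cases, only 1/distance changes.
        System.out.println(pointLight(up, surface, new float[]{0f, 1f, 0f})); // 5.0
        System.out.println(pointLight(up, surface, new float[]{0f, 2f, 0f})); // 2.5
    }
}
```

     In a scene with colored lights, this scalar would further be multiplied by the light's color and the material color, exactly as the shader does with u_PointLightColors[i] and materialColor.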

     The fragment shader is the same as before and very simple, so we won't go through it.

     Having finished this section's analysis, you should now see that lighting effects ultimately come down to acting on color — just like the velocity we discussed earlier, which was realized through changes in position. These abstract things cannot be expressed directly; we must ground them in actual pixels, and only then can we quantify them precisely.

     This section involved a lot of conceptual knowledge, and understanding those concepts matters a great deal: without them, we couldn't write this ourselves, because we simply wouldn't know how to compute anything. So take the time to master them.

     Class dismissed!!!
