Objects in the real world usually aren't made of a single material but of several. Think of a car: its exterior is highly glossy, the windows partially reflect the surrounding environment, the tires are not glossy at all so they show no specular highlight, and the rims are very shiny.
What we want is an image wrapped around the object that we can index for a separate color value per fragment: a diffuse map, a texture image that represents all of the object's diffuse colors.
This time we store the texture as a sampler2D inside the Material struct, replacing the previously defined vec3 diffuse color vector with the diffuse map. We also remove the ambient material color vector, because in almost all cases the ambient color is equal to the diffuse color, so there is no need to store them separately:
struct Material {
sampler2D diffuse;
vec3 specular;
float shininess;
};
...
in vec2 TexCoords;
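The in vec2 TexCoords input only gets a value if the vertex shader forwards the texture coordinates stored at attribute location 2 of the vertex data (see the third glVertexAttribPointer call in the widget code further down). Here is a minimal sketch of such a vertex shader; the attribute names aPos/aNormal/aTexCoords and the #version line are assumptions on my part, so adapt them to your own shapes.vert, but the outputs match the outNormal, FragPos and TexCoords used by the fragment shader in this post:
#version 330 core
layout (location = 0) in vec3 aPos;       // position (location 0)
layout (location = 1) in vec3 aNormal;    // normal (location 1)
layout (location = 2) in vec2 aTexCoords; // texture coordinates (location 2)
out vec3 FragPos;
out vec3 outNormal;
out vec2 TexCoords;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main()
{
    FragPos = vec3(model * vec4(aPos, 1.0));
    outNormal = mat3(transpose(inverse(model))) * aNormal; // normal matrix keeps normals correct under non-uniform scaling
    TexCoords = aTexCoords; // forward the texture coordinates to the fragment shader
    gl_Position = projection * view * model * vec4(aPos, 1.0);
}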
Next, we simply sample the fragment's diffuse color from the texture:
vec3 diffuseTexColor = vec3(texture(material.diffuse, TexCoords));
// diffuse
vec3 norm = normalize(outNormal);
vec3 lightDir = normalize(lightPos - FragPos);
float diff = max(dot(norm, lightDir), 0.0);
vec3 diffuse = light.diffuse * diff * diffuseTexColor;
Don't forget to also set the ambient material's color to the same value as the diffuse material's color:
// ambient
vec3 ambient = light.ambient * diffuseTexColor;
You may notice that the specular highlight looks a bit off, because our object is mostly wood, and wood shouldn't have such a strong specular highlight. We could set the object's specular material to vec3(0.0) to solve this, but then the steel borders of the container would no longer show any specular highlight either, and steel should be at least somewhat shiny.
We can likewise use a texture map dedicated to specular highlights: a specular map. This means we need a black-and-white texture that defines the specular intensity of each part of the object. Because the container consists mostly of wood, and wood as a material should have no specular highlight, the entire wooden section of the diffuse texture is converted to black in the specular map. The specular intensity of the steel border varies slightly: the steel itself picks up specular highlights fairly easily, while the cracks do not.
Next, update the fragment shader's material properties so that the specular component accepts a sampler2D instead of a vec3:
struct Material {
sampler2D diffuse;
sampler2D specular;
float shininess;
};
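With the struct updated, the specular term samples the specular map instead of using a constant color, so the black wooden texels produce no highlight while the bright steel border does. Here is a minimal sketch of this part of the fragment shader, assuming the viewPos uniform and the norm, lightDir, ambient and diffuse values from the snippets above, plus an output variable named FragColor (these names are assumptions; adjust them to your own shapes.frag):
// specular: read the per-fragment specular intensity from the specular map
vec3 viewDir = normalize(viewPos - FragPos);
vec3 reflectDir = reflect(-lightDir, norm);
float spec = pow(max(dot(viewDir, reflectDir), 0.0), material.shininess);
vec3 specular = light.specular * spec * vec3(texture(material.specular, TexCoords));
// combine all three components into the final fragment color
FragColor = vec4(ambient + diffuse + specular, 1.0);
Below is the complete MyOpenGLWidget implementation; it loads container2.png as the diffuse map and container2_specular.png as the specular map with QOpenGLTexture and binds them to texture units 0 and 1.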
#include "myopenglwidget.h"
#include
#include
#include
#include
#include
float vertices[] = {
// positions // normals // texture coords
-0.5f, -0.5f, -0.5f, 0.0f, 0.0f, -1.0f, 0.0f, 0.0f,
0.5f, -0.5f, -0.5f, 0.0f, 0.0f, -1.0f, 1.0f, 0.0f,
0.5f, 0.5f, -0.5f, 0.0f, 0.0f, -1.0f, 1.0f, 1.0f,
0.5f, 0.5f, -0.5f, 0.0f, 0.0f, -1.0f, 1.0f, 1.0f,
-0.5f, 0.5f, -0.5f, 0.0f, 0.0f, -1.0f, 0.0f, 1.0f,
-0.5f, -0.5f, -0.5f, 0.0f, 0.0f, -1.0f, 0.0f, 0.0f,
-0.5f, -0.5f, 0.5f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f,
0.5f, -0.5f, 0.5f, 0.0f, 0.0f, 1.0f, 1.0f, 0.0f,
0.5f, 0.5f, 0.5f, 0.0f, 0.0f, 1.0f, 1.0f, 1.0f,
0.5f, 0.5f, 0.5f, 0.0f, 0.0f, 1.0f, 1.0f, 1.0f,
-0.5f, 0.5f, 0.5f, 0.0f, 0.0f, 1.0f, 0.0f, 1.0f,
-0.5f, -0.5f, 0.5f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f,
-0.5f, 0.5f, 0.5f, -1.0f, 0.0f, 0.0f, 1.0f, 0.0f,
-0.5f, 0.5f, -0.5f, -1.0f, 0.0f, 0.0f, 1.0f, 1.0f,
-0.5f, -0.5f, -0.5f, -1.0f, 0.0f, 0.0f, 0.0f, 1.0f,
-0.5f, -0.5f, -0.5f, -1.0f, 0.0f, 0.0f, 0.0f, 1.0f,
-0.5f, -0.5f, 0.5f, -1.0f, 0.0f, 0.0f, 0.0f, 0.0f,
-0.5f, 0.5f, 0.5f, -1.0f, 0.0f, 0.0f, 1.0f, 0.0f,
0.5f, 0.5f, 0.5f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f,
0.5f, 0.5f, -0.5f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f,
0.5f, -0.5f, -0.5f, 1.0f, 0.0f, 0.0f, 0.0f, 1.0f,
0.5f, -0.5f, -0.5f, 1.0f, 0.0f, 0.0f, 0.0f, 1.0f,
0.5f, -0.5f, 0.5f, 1.0f, 0.0f, 0.0f, 0.0f, 0.0f,
0.5f, 0.5f, 0.5f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f,
-0.5f, -0.5f, -0.5f, 0.0f, -1.0f, 0.0f, 0.0f, 1.0f,
0.5f, -0.5f, -0.5f, 0.0f, -1.0f, 0.0f, 1.0f, 1.0f,
0.5f, -0.5f, 0.5f, 0.0f, -1.0f, 0.0f, 1.0f, 0.0f,
0.5f, -0.5f, 0.5f, 0.0f, -1.0f, 0.0f, 1.0f, 0.0f,
-0.5f, -0.5f, 0.5f, 0.0f, -1.0f, 0.0f, 0.0f, 0.0f,
-0.5f, -0.5f, -0.5f, 0.0f, -1.0f, 0.0f, 0.0f, 1.0f,
-0.5f, 0.5f, -0.5f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f,
0.5f, 0.5f, -0.5f, 0.0f, 1.0f, 0.0f, 1.0f, 1.0f,
0.5f, 0.5f, 0.5f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f,
0.5f, 0.5f, 0.5f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f,
-0.5f, 0.5f, 0.5f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f,
-0.5f, 0.5f, -0.5f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f
};
GLuint indices[] = {
0, 1, 3,
1, 2, 3
};
GLuint VBO, VAO,EBO,lightVAO;
GLuint shaderProgram;
QVector3D lightPos(1.2f,1.0f,2.0f);
QVector3D lightColor(1.0f,1.0f,1.0f);
QVector3D objectColor(1.0f,0.5f,0.31f);
QTimer *timer;
QTime gtime;
QVector<QVector3D> cubePositions = {
QVector3D( 0.0f, 0.0f, 0.0f),
QVector3D( 2.0f, 5.0f, -15.0f),
QVector3D(-1.5f, -2.2f, -2.5f),
QVector3D(-3.8f, -2.0f, -12.3f),
QVector3D( 2.4f, -0.4f, -3.5f),
QVector3D(-1.7f, 3.0f, -7.5f),
QVector3D( 1.3f, -2.0f, -2.5f),
QVector3D( 1.5f, 2.0f, -2.5f),
QVector3D( 1.5f, 0.2f, -1.5f),
QVector3D(-1.3f, 1.0f, -1.5f)
};
float fov = 45.0f;
MyOpenGLWidget::MyOpenGLWidget(QWidget *parent)
: QOpenGLWidget(parent)
{
cameraPos = QVector3D( 0.0f, 0.0f, 5.0f);// camera position
cameraTarget = QVector3D( 0.0f, 0.0f, 0.0f);// point the camera looks at
cameraDirection = QVector3D(cameraPos - cameraTarget);// camera direction (points from the target towards the camera)
cameraDirection.normalize();
up = QVector3D(0.0f, 1.0f, 0.0f);
cameraRight = QVector3D::crossProduct(up,cameraDirection);// the cross product of two vectors is perpendicular to both, so this gives the vector pointing in the positive x direction
cameraRight.normalize();
cameraUp = QVector3D::crossProduct(cameraDirection,cameraRight);
cameraFront = QVector3D( 0.0f, 0.0f, -1.0f);
timer = new QTimer();
timer->start(50);
gtime.start();
connect(timer,&QTimer::timeout,[=]{
update();
});
setFocusPolicy(Qt::StrongFocus);
//setMouseTracking(true);
}
void MyOpenGLWidget::initializeGL()
{
initializeOpenGLFunctions();
m_program = new QOpenGLShaderProgram();
m_program->addShaderFromSourceFile(QOpenGLShader::Vertex,":/shapes.vert");
m_program->addShaderFromSourceFile(QOpenGLShader::Fragment,":/shapes.frag");
m_program->link();
qDebug()<<m_program->log();
m_lightProgram = new QOpenGLShaderProgram();
m_lightProgram->addShaderFromSourceFile(QOpenGLShader::Vertex,":/light.vert");
m_lightProgram->addShaderFromSourceFile(QOpenGLShader::Fragment,":/light.frag");
m_lightProgram->link();
glGenVertexArrays(1, &VAO);
glGenBuffers(1, &VBO);
glBindVertexArray(VAO);// bind the VAO
glBindBuffer(GL_ARRAY_BUFFER, VBO);// a vertex buffer object's buffer type is GL_ARRAY_BUFFER
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);// copy the vertex data into the buffer; GL_STATIC_DRAW: the data will not (or rarely) change
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(GLfloat), (GLvoid*)0);
glEnableVertexAttribArray(0);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(GLfloat), (GLvoid*)(3*sizeof(GLfloat)));
glEnableVertexAttribArray(1);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 8 * sizeof(GLfloat), (GLvoid*)(6*sizeof(GLfloat)));
glEnableVertexAttribArray(2);
glGenBuffers(1, &EBO);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, EBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
glGenVertexArrays(1, &lightVAO);
glBindVertexArray(lightVAO);// bind the light VAO
glBindBuffer(GL_ARRAY_BUFFER, VBO);// reuse the cube's VBO: the vertex data is already on the GPU, so there is no need to generate and fill a new buffer
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(GLfloat), (GLvoid*)0);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
m_program->bind();
m_program->setUniformValue("lightPos",lightPos);
m_program->setUniformValue("viewPos",cameraPos);
m_program->setUniformValue("material.shininess", 32.0f);
m_diffseTexture = new QOpenGLTexture(QImage(":/container2.png").mirrored());
m_program->setUniformValue("material.diffuse",0);// the material.diffuse sampler reads from texture unit 0
m_specularTexture = new QOpenGLTexture(QImage(":/container2_specular.png").mirrored());
m_program->setUniformValue("material.specular",1);// the material.specular sampler reads from texture unit 1
m_lightProgram->bind();
m_lightProgram->setUniformValue("lightColor",lightColor);
glBindVertexArray(0);// unbind the VAO
}
void MyOpenGLWidget::paintGL()
{
glClearColor(0.2f,0.3f,0.3f,1.0f);
glEnable(GL_DEPTH_TEST);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
QMatrix4x4 model;
QMatrix4x4 view;
float time = gtime.elapsed()/50.0;
//int time = QTime::currentTime().msec();
QMatrix4x4 projection;
projection.perspective(fov,(float)( width())/(height()),0.1,100);
view.lookAt(cameraPos,cameraPos + cameraFront,up);
m_program->bind();
m_program->setUniformValue("projection",projection);
m_program->setUniformValue("view",view);
lightColor.setX(sin(time/100 * 2.0f));
lightColor.setY(sin(time/100 * 0.7f));
lightColor.setZ(sin(time/100 * 1.3f));
QVector3D diffuseColor = QVector3D(0.3f,0.3f,0.3f);
QVector3D ambientColor = QVector3D(0.7f,0.7f,0.7f);
m_program->setUniformValue("light.ambient", ambientColor);
m_program->setUniformValue("light.diffuse", diffuseColor); // dim the light a bit to fit the scene
m_program->setUniformValue("light.specular", QVector3D(1.0,1.0,1.0));
m_diffseTexture->bind(0);// bind the diffuse map to texture unit 0
m_specularTexture->bind(1);// bind the specular map to texture unit 1
glBindVertexArray(VAO);// bind the cube VAO
model.rotate(time,1.0f,1.0f,0.5f);
m_program->setUniformValue("model",model);
glDrawArrays(GL_TRIANGLES,0,36);
m_lightProgram->bind();
m_lightProgram->setUniformValue("projection",projection);
m_lightProgram->setUniformValue("view",view);
//m_lightProgram->setUniformValue("lightColor",lightColor);
model.setToIdentity();
model.translate(lightPos);
model.rotate(1.0f,1.0f,5.0f,0.5f);
model.scale(0.2);
m_lightProgram->setUniformValue("model",model);
glBindVertexArray(lightVAO);// bind the light VAO
glDrawArrays(GL_TRIANGLES,0,36);
// foreach(auto pos , cubePositions)
// {
// model.setToIdentity();
// model.translate(pos);
// //model.rotate(time,1.0f,5.0f,3.0f);
// m_program->setUniformValue("model",model);
// glDrawArrays(GL_TRIANGLES,0,36);
// }
}
void MyOpenGLWidget::resizeGL(int w, int h)
{
}
void MyOpenGLWidget::keyPressEvent(QKeyEvent *event)
{
qDebug()<<event->key();
cameraSpeed = 2.5 * 100 / 1000.0;
switch (event->key()) {
case Qt::Key_W:{
cameraPos += cameraSpeed * cameraFront;
}
break;
case Qt::Key_S:{
cameraPos -= cameraSpeed * cameraFront;
}
break;
case Qt::Key_A:{
cameraPos -= cameraSpeed * cameraRight;
}
break;
case Qt::Key_D:{
cameraPos += cameraSpeed * cameraRight;
}
break;
default:
break;
}
update();
}
float PI = 3.1415926;
QPoint deltaPos;
void MyOpenGLWidget::mouseMoveEvent(QMouseEvent *event)
{
// static float yaw = -90;
// static float pitch = 0;
// static QPoint lastPos(width()/2,height()/2);
// auto currentPos = event->pos();
// deltaPos = currentPos-lastPos;
// lastPos=currentPos;
// float sensitivity = 0.1f;
// deltaPos *= sensitivity;
// yaw += deltaPos.x();
// pitch -= deltaPos.y();
// if(pitch > 89.0f) pitch = 89.0f;
// if(pitch < -89.0f) pitch = -89.0f;
// cameraFront.setX(cos(yaw*PI/180.0) * cos(pitch *PI/180));
// cameraFront.setY(sin(pitch*PI/180));
// cameraFront.setZ(sin(yaw*PI/180) * cos(pitch *PI/180));
// cameraFront.normalize();
// update();
}
void MyOpenGLWidget::wheelEvent(QWheelEvent *event)
{
if(fov >= 1.0f && fov <= 75.0f)
fov -= event->angleDelta().y()/120;
if(fov <= 1.0f)
fov = 1.0f;
if(fov >= 75.0f)
fov = 75.0f;
update();
}