Rasterization in the Graphics Pipeline: Principles and Implementation

    • The problem rasterization solves
    • How rasterization works
      • Testing whether a pixel is inside or outside a triangle
      • Interpolating vertex attributes
        • Barycentric coordinates
        • Interpolating depth
          • Why interpolating Z directly is wrong
          • Deriving the correct depth-interpolation formula
        • Perspective-correct interpolation
    • Implementation

The problem rasterization solves

In the traditional graphics pipeline, the technical challenges fall into two broad categories:

  • Visibility
  • Shading
    The coordinate-system transformations covered in 图形流水线中坐标变换详解:模型矩阵、视角矩阵、投影矩阵 (model, view, and projection matrices) together with the rasterization described in this article are the key techniques for solving visibility.

Shading involves the local and global illumination models that come later in the pipeline; I will write about those once I have studied them properly.

How rasterization works

  • So what is the visibility problem? Simply put, objects in the back should be occluded by objects in front and therefore not show up in the rendered result (for opaque objects, that is; transparent objects blend their colors instead).

  • As described in 图形流水线中坐标变换详解:模型矩阵、视角矩阵、投影矩阵, after the model, view, and projection transforms plus the perspective divide, every triangle vertex ends up in NDC space (x, y, z ∈ [-1, 1]). Given the output resolution, the window (viewport) transform then maps x and y to window coordinates (x ∈ [0, width], y ∈ [0, height], z unchanged), i.e. x_screen = (x_ndc + 1) * width / 2 and y_screen = (1 - y_ndc) * height / 2, which is exactly the transform used later in main.cpp. At this point all triangles have been projected into raster space.
    [Figure 1]

  • The job of rasterization is to determine which pixels are covered by (lie inside) each triangle, and to interpolate the attributes of the triangle's interior points (depth, color, normal, texture coordinates, and so on).

[Figure 2]

  • How rasterization solves visibility: with a Z-Buffer and a FrameBuffer (a minimal C++ sketch follows the pseudocode below).

    • The Z-Buffer is a 2D array the same size as the raster image; it records the depth value of each pixel.
    • Every Z-Buffer entry is initialized to infinity. When the Z value of a triangle point P is smaller than the value recorded in the Z-Buffer, the corresponding entry is updated to P's Z value; otherwise it is left unchanged.
    • The FrameBuffer is likewise a 2D array the same size as the raster image; it records the color of each pixel.
    • Whenever a Z-Buffer entry is updated, the corresponding FrameBuffer color is updated as well.
  • Rasterization loop:

//iterate over all triangles
FOR each triangle already transformed to window coordinates (raster space):
  //two nested loops over all pixels
  FOR row in raster space:
    FOR column in raster space:
      pixelCoord = vec2(row, column)
      IF pixelCoord is inside the triangle:
        interpolate the Z value at pixelCoord
        run a depth test against the Z-Buffer
        IF the depth test passes:
          update the Z-Buffer entry with the Z value
          interpolate the remaining vertex attributes at pixelCoord (color, normal, uv)
          write the pixel's color into the FrameBuffer
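As promised above, here is a minimal, compilable C++ sketch of this loop. The Triangle struct and the three helper functions are placeholders invented for the sketch; their real counterparts are the edge function, 1/z depth interpolation, and perspective-correct color interpolation developed below and implemented in main.cpp.

#include <cstdint>
#include <limits>
#include <vector>

struct RGB { std::uint8_t r = 0, g = 0, b = 0; };
struct Triangle { /* screen-space vertices, view-space depths, vertex colors */ };

// placeholder stubs; the real versions are developed in the rest of the article
static bool  insideTriangle(float, float, const Triangle&)   { return false; }
static float interpolateDepth(float, float, const Triangle&) { return 0.0f; }
static RGB   interpolateColor(float, float, const Triangle&) { return {}; }

void rasterize(const std::vector<Triangle>& triangles, int width, int height,
               std::vector<float>& zBuffer, std::vector<RGB>& frameBuffer) {
    zBuffer.assign(width * height, std::numeric_limits<float>::max()); // "infinity"
    frameBuffer.assign(width * height, RGB{});                         // black
    for (const Triangle& tri : triangles)
        for (int row = 0; row < height; ++row)
            for (int col = 0; col < width; ++col) {
                float px = col + 0.5f, py = row + 0.5f;               // sample at the pixel center
                if (!insideTriangle(px, py, tri)) continue;           // coverage test
                float z = interpolateDepth(px, py, tri);              // interpolated depth
                int idx = row * width + col;
                if (z < zBuffer[idx]) {                               // depth test
                    zBuffer[idx] = z;                                 // update the Z-Buffer
                    frameBuffer[idx] = interpolateColor(px, py, tri); // write the pixel color
                }
            }
}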

The rasterization steps above raise two key questions:

  1. How do we decide whether a pixel is inside a triangle?
  2. How do we interpolate vertex attributes correctly?

Testing whether a pixel is inside or outside a triangle

Precondition: all triangles have already been projected into raster space.

  • The problem reduces to deciding, in a 2D plane, whether a point lies inside a triangle. If you have studied the convex-hull problem in computational geometry, you may remember the TO-LEFT test: extending a vector into a line splits the plane into two halves, its left side and its right side.
    [Figure 3]
    Given a winding order, the three edges of the triangle define three vectors. If a point P lies on the same side of all three vectors (all on the left side or on an edge, or all on the right side or on an edge), then P is inside the triangle. The possible cases are illustrated below:
    [Figure 4]
  • The 2D cross product tells us whether P lies to the left of a vector, to the right of it, or on its supporting line:
    [Figure 5]
    So it is enough to check whether the three cross products all have the same sign (or are equal to 0) to decide whether the point lies inside the triangle.
  • Pseudocode for the inside/outside test (a worked numeric example follows the code):

//edge function
bool edgeFunction(vec2 p, vec2 a, vec2 b, vec2 c){
  //the three vertices a, b, c form the three edge vectors ab, bc, ca
  vec2 ab = b - a;
  vec2 bc = c - b;
  vec2 ca = a - c;

  //the point p to be tested forms the three vectors ap, bp, cp with a, b, c
  vec2 ap = p - a;
  vec2 bp = p - b;
  vec2 cp = p - c;
  //the three 2D cross products ('*' denotes the 2D cross product here)
  float result1 = ab * ap;
  float result2 = bc * bp;
  float result3 = ca * cp;
  //check whether they all have the same sign
  float threshold = 1e-5;
  if(result1 > -threshold && result2 > -threshold && result3 > -threshold)//all non-negative
    return true;
  if(result1 < threshold && result2 < threshold && result3 < threshold)//all non-positive
    return true;
  return false;
}
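As a concrete check, take the triangle a = (0, 0), b = (4, 0), c = (0, 4) and the point p = (1, 1), writing the 2D cross product as cross(u, v) = u.x * v.y - u.y * v.x (which particular sign convention is used does not matter, only that all three results are compared consistently):

  ab × ap = 4 * 1 - 0 * 1 = 4
  bc × bp = (-4) * 1 - 4 * (-3) = 8
  ca × cp = 0 * (-3) - (-4) * 1 = 4

All three results are positive, so p is inside the triangle. A point such as (4, 4) instead gives 16, -16, 16: the signs disagree, so it is rejected.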

Interpolating vertex attributes

In the pipeline we only assign attribute values at the triangle vertices; rasterization has to interpolate the attributes of interior points from the three vertices.

Barycentric coordinates

A point P inside the triangle can be expressed uniquely in terms of the vertices V0, V1, V2:
P = λ0 * V0 + λ1 * V1 + λ2 * V2, with λ0 >= 0, λ1 >= 0, λ2 >= 0 and λ0 + λ1 + λ2 = 1.
(λ0, λ1, λ2) are the barycentric coordinates of P with respect to the triangle V0V1V2.

The same weights can be used to interpolate any vertex attribute:
P_attribute = λ0 * V0_attribute + λ1 * V1_attribute + λ2 * V2_attribute
[Figure 6]
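For example, the centroid of the triangle has barycentric coordinates (1/3, 1/3, 1/3), so any attribute at the centroid is simply the average of the three vertex values, while a point on the edge V0V1 has λ2 = 0 and its attributes depend only on V0 and V1.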

  • Since barycentric coordinates let us interpolate every vertex attribute, how do we compute them in the first place?
    [Figure 7]
    As the figure suggests, the barycentric coordinates are determined by the areas of the three sub-triangles V0V1P, V1V2P and V2V0P.
    [Figure 8]
    By the geometric meaning of the cross product, the magnitude of the 2D cross product (a determinant) equals the area of the parallelogram spanned by the two vectors.
    [Figure 9]
    Combining this with the inside/outside test above, we can compute P's barycentric coordinates at the same time as we test whether P lies inside the triangle, essentially for free (a worked example follows the code). Quite elegant!

//improved edge function: while testing whether point p lies inside triangle abc,
//also return its barycentric coordinates
bool edgeFunction(vec2 p, vec2 a, vec2 b, vec2 c, vector<float>& barycentricCoord ){
  //the three vertices a, b, c form the three edge vectors ab, bc, ca
  vec2 ab = b - a;
  vec2 bc = c - b;
  vec2 ca = a - c;
  //twice the triangle's area
  float triangleArea = abs(ab * bc);
  //the point p to be tested forms the three vectors ap, bp, cp with a, b, c
  vec2 ap = p - a;
  vec2 bp = p - b;
  vec2 cp = p - c;
  //the three 2D cross products
  float result1 = ab * ap;
  float result2 = bc * bp;
  float result3 = ca * cp;
  //compute the barycentric coordinates
  //(λ0 uses the sub-triangle opposite V0, i.e. the one spanned by b, c and p, and so on)
  barycentricCoord.push_back(abs(result2) / triangleArea);
  barycentricCoord.push_back(abs(result3) / triangleArea);
  barycentricCoord.push_back(abs(result1) / triangleArea);
  //check whether they all have the same sign
  float threshold = 1e-5;
  if(result1 > -threshold && result2 > -threshold && result3 > -threshold)//all non-negative
    return true;
  if(result1 < threshold && result2 < threshold && result3 < threshold)//all non-positive
    return true;
  return false;
}
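Continuing the example from the previous section (a = (0, 0), b = (4, 0), c = (0, 4), p = (1, 1)): twice the triangle's area is |ab × bc| = |4 * 4 - 0 * (-4)| = 16, so the barycentric coordinates come out as λ0 = 8 / 16 = 0.5, λ1 = 4 / 16 = 0.25, λ2 = 4 / 16 = 0.25. As a sanity check, 0.5 * a + 0.25 * b + 0.25 * c = (1, 1) = p.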

Interpolating depth

Why interpolating Z directly is wrong

From the conclusion above, the Z value of an interior point P could apparently be interpolated as
P_z = λ0 * V0_z + λ1 * V1_z + λ2 * V2_z
but using this formula directly is wrong, because after the projective transform Z no longer varies linearly across the triangle. The figure below shows the error clearly:
[Figure 10]
Before projection, P lies two thirds of the way from V0 to V1, so its Z value is
P.z = V0.z * (1 - 0.666) + V1.z * 0.666 = 4.001
After projection, the projected point P' sits about 83% of the way between the projected vertices, and interpolating Z linearly with that screen-space weight gives
P'.z = V0.z * (1 - 0.8333) + V1.z * 0.8333 = 4.499
which does not match P's true depth of about 4.0 computed before projection.

Deriving the correct depth-interpolation formula

[Figure 11]
[Figure 12]
The correct depth-interpolation formula is therefore:
1 / P_z = λ0 * 1 / V0_z + λ1 * 1 / V1_z + λ2 * 1 / V2_z
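The two figures above contain the full derivation; a brief sketch: projection maps a camera-space point (x, y, z) to screen coordinates proportional to x / -z and y / -z, so a point that moves linearly along a 3D edge does not move linearly on screen. What the screen-space barycentric weights do interpolate linearly are quantities that have already been divided by z, in particular 1/z itself, which is why depth has to be interpolated through its reciprocal.

Checking this against the numbers above (they correspond to V0.z = 2 and V1.z = 5): with the screen-space weight 0.8333,
1 / P.z = (1 - 0.8333) * 1/2 + 0.8333 * 1/5 = 0.0833 + 0.1667 = 0.25
so P.z = 4.0, matching the true camera-space depth instead of the incorrect 4.499.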

Perspective-correct interpolation

The same issue applies when interpolating the other vertex attributes (color, normal, texture coordinates): using the screen-space barycentric coordinates directly is also incorrect.
The correct approach relies on the depth value we have already interpolated correctly: it is attribute / Z, not the attribute itself, that varies linearly in screen space.
[Figure 13]
[Figure 14]
The correct attribute-interpolation formula is therefore:
Attr = Z * [Attr0 / V0_z * λ0 + Attr1 / V1_z * λ1 + Attr2 / V2_z * λ2]
The image below shows the color error of naive attribute interpolation compared with the perspective-correct version.
[Figure 15]
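Applying the formula to the same two-vertex example as before (V0.z = 2, V1.z = 5, screen-space weight 0.8333, correctly interpolated depth Z = 4.0), suppose an attribute equals 0 at V0 and 1 at V1. Then
Attr = 4.0 * [0 / 2 * (1 - 0.8333) + 1 / 5 * 0.8333] = 4.0 * 0.1667 = 0.667
which is exactly the camera-space weight 0.666, i.e. the value the attribute should take at P, whereas naive screen-space interpolation would give 0.8333.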

Implementation

The implementation consists of the files listed in full below: Vector2.h, Vector3.h, Matrix4.h, Matrix4.cpp, Camera.h, Camera.cpp, and main.cpp. The program renders two triangles and writes the result to output.ppm.

Vector2.h

//Vector2.h
#pragma once
#include <iostream>
#include <cmath>

using namespace std;

template <typename T>
class Vector2 {
public:
	T x, y;
	float z;   //used in main.cpp to carry the view-space depth of a screen-space vertex

public:
	//Vector(){}
	~Vector2() {}
	Vector2(T xx = 0, T yy = 0, float zz = 0) :x(xx), y(yy), z(zz) {}
	Vector2(Vector2& t) { x = t.x; y = t.y; z = t.z; }


	//multiply / divide by a scalar
	Vector2 multiplayByScalar(float a, Vector2<T>& result) {
		result.x = x * a;
		result.y = y * a;
		return result;
	}

	Vector2 divideByScalar(float a, Vector2<T>& result) {
		result.x = x / a;
		result.y = y / a;
		return result;
	}

	//vector operations
	Vector2<T> add(Vector2& t, Vector2<T>& result) {
		result.x = x + t.x;
		result.y = y + t.y;

		return result;
	}

	Vector2<T> divide(Vector2& t, Vector2<T>& result) {   //note: despite the name, this is component-wise subtraction
		result.x = x - t.x;
		result.y = y - t.y;

		return result;
	}

	//2D cross product (note the sign convention used here: y * t.x - x * t.y)
	float cross(Vector2& t) {
		return y * t.x - x * t.y;
	}


	//length (magnitude)
	float length() {
		return sqrt(x * x + y * y);
	}

	//normalize
	void normalize() {
		float l = this->length();
		if (l < 1e-5) {
			cout << "向量模长为0.0,不能归一化" << endl;
			return;
		}
		x /= l;
		y /= l;
	}

	//stream output
	friend ostream& operator<<(ostream& os, const Vector2<T>& t) {
		os << "(x, y, z):" << "(" << t.x << ", " << t.y << ", " << t.z << ")" << endl;
		return os;
	}

};

Vector3.h

//Vector3.h
#pragma once
#include <iostream>
#include <cmath>

using namespace std;

template <typename T>
class Vector3 {
public:
	T x, y, z;
	float w;   //homogeneous coordinate

public:
	//Vector(){}
	~Vector3() {}
	Vector3(T xx = 0, T yy = 0, T zz = 0, float ww = 1.0) :x(xx), y(yy), z(zz), w(ww) {}
	Vector3(Vector3& t) { x = t.x; y = t.y; z = t.z; w = t.w; }
	

	//multiply / divide by a scalar
	Vector3 multiplayByScalar(float a, Vector3<T>& result) {
		result.x = x * a;
		result.y = y * a;
		result.z = z * a;
		return result;
	}

	Vector3 divideByScalar(float a, Vector3<T>& result) {
		result.x = x / a;
		result.y = y / a;
		result.z = z / a;
		return result;
	}
	
	//vector operations
	Vector3<T> add(Vector3& t, Vector3<T>& result) {
		result.x = x + t.x;
		result.y = y + t.y;
		result.z = z + t.z;

		return result;
	}

	Vector3<T> divide(Vector3& t, Vector3<T>& result) {   //note: despite the name, this is component-wise subtraction
		result.x = x - t.x;
		result.y = y - t.y;
		result.z = z - t.z;

		return result;
	}

	Vector3<T> cross(Vector3& t, Vector3<T>& result) {
		result.x = y * t.z - z * t.y;
		result.y = z * t.x - x * t.z;
		result.z = x * t.y - y * t.x;

		return result;
	}

	//comparison
	bool equal(Vector3<T>& t) {
		float threshold = 1e-5;
		if (abs(x - t.x) < threshold && abs(y - t.y) < threshold && abs(z - t.z) < threshold)
			return true;
		return false;
	}

	//perspective divide
	void perspectiveDivision() {
		x = x / T(w);
		y = y / T(w);
		z = z / T(w);
		w = 1.0f;
	}

	//length (magnitude)
	float length() {
		return sqrt(x * x + y * y + z * z);
	}

	//normalize
	void normalize() {
		float l = this->length();
		if (l < 1e-5) {
			cout << "向量模长为0.0,不能归一化" << endl;
			return ;
		}
		x /= l;
		y /= l;
		z /= l;
	}

	//stream output
	friend ostream& operator<<(ostream& os, const Vector3<T>& t) {
		os << "(x, y, z, w):" << "(" << t.x << ", " << t.y << ", " << t.z << ", " << t.w << ")" << endl;
		return os;
	}

};

Matrix4.h

//Matrix4.h
#pragma once
#include "Vector3.h"

class Matrix4
{
public:
	Matrix4();
	~Matrix4();

	Matrix4(float a0, float a1, float a2, float a3,
		float a4, float a5, float a6, float a7,
		float a8, float a9, float a10, float a11,
		float a12, float a13, float a14, float a15);

	Matrix4(Matrix4& t);

	//reset to the identity matrix
	void setIdentityMatrix();
	//determinant of the matrix
	float getDeterminant();
	//inverse of the matrix
	Matrix4 invert();
	//multiply the matrix by a vector/point
	Vector3<float> multiplyByVector(Vector3<float> v);
	//multiply by another matrix
	Matrix4 multiplyByMatrix4(Matrix4& m);

	//swap two rows of the matrix
	void swapRow(int row1, int row2);

	//print the matrix
	friend ostream& operator<<(ostream& os, Matrix4& t);


public:
	float m[16];   //row-major storage

};


Matrix4.cpp

//Matrix4.cpp
#include "Matrix4.h"


Matrix4::Matrix4()
{
	m[0] = m[5] = m[10] = m[15] = 1.0f;
	m[1] = m[2] = m[3] = m[4] = m[6] = m[7] = m[8] = m[9] = m[11] = m[12] = m[13] = m[14] = 0.0f;
}

Matrix4::~Matrix4(){}

Matrix4::Matrix4(float a0, float a1, float a2, float a3,
	float a4, float a5, float a6, float a7,
	float a8, float a9, float a10, float a11,
	float a12, float a13, float a14, float a15) {
	m[0] = a0; m[1] = a1; m[2] = a2; m[3] = a3;
	m[4] = a4; m[5] = a5; m[6] = a6; m[7] = a7;
	m[8] = a8; m[9] = a9; m[10] = a10; m[11] = a11;
	m[12] = a12; m[13] = a13; m[14] = a14; m[15] = a15;
}

Matrix4::Matrix4(Matrix4& t) {
	for (int i = 0; i < 16; i++)
		m[i] = t.m[i];
}

//reset to the identity matrix
void Matrix4::setIdentityMatrix() {
	m[0] = m[5] = m[10] = m[15] = 1.0f;
	m[1] = m[2] = m[3] = m[4] = m[6] = m[7] = m[8] = m[9] = m[11] = m[12] = m[13] = m[14] = 0.0f;
}

//determinant of the matrix (Laplace/cofactor expansion along the first row)
float Matrix4::getDeterminant() {
	auto minor3 = [this](int a, int b, int c, int d, int e, int f, int g, int h, int i) {
		return m[a] * (m[e] * m[i] - m[f] * m[h]) - m[b] * (m[d] * m[i] - m[f] * m[g]) + m[c] * (m[d] * m[h] - m[e] * m[g]);
	};
	return m[0] * minor3(5, 6, 7, 9, 10, 11, 13, 14, 15) - m[1] * minor3(4, 6, 7, 8, 10, 11, 12, 14, 15)
		+ m[2] * minor3(4, 5, 7, 8, 9, 11, 12, 13, 15) - m[3] * minor3(4, 5, 6, 8, 9, 10, 12, 13, 14);
}

//inverse of the matrix
/* Matrix inversion in C++ using the Gauss-Jordan elimination method
** Idea: apply row operations to the matrix and, in parallel, to an identity matrix;
** when the matrix has been reduced to the identity, the identity has become the inverse.
** Row operations: 1. swap any two rows  2. add/subtract a multiple of another row to/from a row  3. scale a row by a number

** Steps:
** 1. Swap rows so that every diagonal entry becomes non-zero
** 2. Column by column (left to right), use row operations to zero out the off-diagonal entries
** 3. Scale each row so that the diagonal entries become 1
*/
Matrix4 Matrix4::invert() {
	Matrix4 invertM = Matrix4();
	int row = 0, column = 0;
	float tempM[16];
	for (int i = 0; i < 16; i++)
		tempM[i] = m[i];
	int matrixSize = 4;
	float threshold = 1e-5f;
	//1. swap rows so that every diagonal entry becomes non-zero
	for (column = 0; column < matrixSize; column++) {
		if (abs(tempM[column * matrixSize + column]) <= threshold) {
			//the diagonal entry is 0, so a row swap is needed
			int swapRow = column;
			for (int i = 0; i < matrixSize; i++) {
				if (abs(tempM[i * matrixSize + column]) > threshold && abs(tempM[column * matrixSize + i]) > threshold)
					swapRow = i;
			}
			
			if (swapRow == column) {
				cout << "该矩阵没有逆矩阵" << endl;
				return invertM;
			}
			else {
				//swap the two rows
				for (int i = 0; i < matrixSize; i++) {
					float temp = tempM[column * matrixSize + i];
					tempM[column * matrixSize + i] = tempM[swapRow * matrixSize + i];
					tempM[swapRow * matrixSize + i] = temp;
				}
				invertM.swapRow(column, swapRow);
			}

		}

		//2. column by column (left to right), use row operations to zero out the off-diagonal entries
		//remember the pivot (diagonal) value
		float pivotsValue = tempM[column * matrixSize + column];

		for (row = 0; row < matrixSize; row++) {
			//skip the diagonal element itself
			if (column == row)
				continue;
			//otherwise zero out this column's entry in the current row
			float coeff = tempM[row * matrixSize + column] / pivotsValue;

			//
			for (int i = 0; i < matrixSize; i++) {
				tempM[row * matrixSize + i] -= coeff * tempM[column * matrixSize + i];
				invertM.m[row * matrixSize + i] -= coeff * invertM.m[column * matrixSize + i];
			}

			tempM[row * matrixSize + column] = 0.0f;
		}


	}

	//3. scale each row so that the diagonal entries become 1
	for (row = 0; row < matrixSize; row++) {
		float coeff = 1.0f / tempM[row * matrixSize + row];
		for (int i = 0; i < matrixSize; i++) {
			tempM[row * matrixSize + i] *= coeff;
			invertM.m[row * matrixSize + i] *= coeff;
		}
	}

	return invertM;
}

//multiply the matrix by a vector/point
/* the matrix is stored row-major and multiplies from the left: this * v
**
*/

Vector3<float> Matrix4::multiplyByVector(Vector3<float> v) {
	Vector3<float> reusltV;
	reusltV.x = float(m[0] * v.x + m[1] * v.y + m[2] * v.z + m[3] * v.w);
	reusltV.y = float(m[4] * v.x + m[5] * v.y + m[6] * v.z + m[7] * v.w);
	reusltV.z = float(m[8] * v.x + m[9] * v.y + m[10] * v.z + m[11] * v.w);
	reusltV.w = float(m[12] * v.x + m[13] * v.y + m[14] * v.z + m[15] * v.w);

	return reusltV;
}


//matrix-matrix multiplication: this * M
Matrix4 Matrix4::multiplyByMatrix4(Matrix4& M) {
	Matrix4 resultM;
	resultM.m[0] = m[0] * M.m[0] + m[1] * M.m[4] + m[2] * M.m[8] + m[3] * M.m[12];
	resultM.m[1] = m[0] * M.m[1] + m[1] * M.m[5] + m[2] * M.m[9] + m[3] * M.m[13];
	resultM.m[2] = m[0] * M.m[2] + m[1] * M.m[6] + m[2] * M.m[10] + m[3] * M.m[14];
	resultM.m[3] = m[0] * M.m[3] + m[1] * M.m[7] + m[2] * M.m[11] + m[3] * M.m[15];

	resultM.m[4] = m[4] * M.m[0] + m[5] * M.m[4] + m[6] * M.m[8] + m[7] * M.m[12];
	resultM.m[5] = m[4] * M.m[1] + m[5] * M.m[5] + m[6] * M.m[9] + m[7] * M.m[13];
	resultM.m[6] = m[4] * M.m[2] + m[5] * M.m[6] + m[6] * M.m[10] + m[7] * M.m[14];
	resultM.m[7] = m[4] * M.m[3] + m[5] * M.m[7] + m[6] * M.m[11] + m[7] * M.m[15];

	resultM.m[8] = m[8] * M.m[0] + m[9] * M.m[4] + m[10] * M.m[8] + m[11] * M.m[12];
	resultM.m[9] = m[8] * M.m[1] + m[9] * M.m[5] + m[10] * M.m[9] + m[11] * M.m[13];
	resultM.m[10] = m[8] * M.m[2] + m[9] * M.m[6] + m[10] * M.m[10] + m[11] * M.m[14];
	resultM.m[11] = m[8] * M.m[3] + m[9] * M.m[7] + m[10] * M.m[11] + m[11] * M.m[15];

	resultM.m[12] = m[12] * M.m[0] + m[13] * M.m[4] + m[14] * M.m[8] + m[15] * M.m[12];
	resultM.m[13] = m[12] * M.m[1] + m[13] * M.m[5] + m[14] * M.m[9] + m[15] * M.m[13];
	resultM.m[14] = m[12] * M.m[2] + m[13] * M.m[6] + m[14] * M.m[10] + m[15] * M.m[14];
	resultM.m[15] = m[12] * M.m[3] + m[13] * M.m[7] + m[14] * M.m[11] + m[15] * M.m[15];

	return resultM;
}


//swap two rows of the matrix
void Matrix4::swapRow(int row1, int row2) {
	int matrixSize = 4;

	for (int i = 0; i < matrixSize; i++) {
		float temp = m[row1 * matrixSize + i];
		m[row1 * matrixSize + i] = m[row2 * matrixSize + i];
		m[row2 * matrixSize + i] = temp;
	}
}

//print the matrix
ostream& operator<<(ostream& os, Matrix4& t) {
	os << "[" << t.m[0] << ",\t" << t.m[1] << ",\t" << t.m[2] << ",\t" << t.m[3] << "]" << endl;
	os << "[" << t.m[4] << ",\t" << t.m[5] << ",\t" << t.m[6] << ",\t" << t.m[7] << "]" << endl;
	os << "[" << t.m[8] << ",\t" << t.m[9] << ",\t" << t.m[10] << ",\t" << t.m[11] << "]" << endl;
	os << "[" << t.m[12] << ",\t" << t.m[13] << ",\t" << t.m[14] << ",\t" << t.m[15] << "]" << endl;
	return os;
}

Camera.h

//Camera.h
#pragma once

#include "Vector3.h"
#include "Matrix4.h"
const float PI = 3.14159f;

class Camera {
public :
	Vector3<float> position;
	Vector3<float> direction;
	Vector3<float> up;

	float nearClippingPlane;
	float farClippingPlane;
	float horizonFov;
	float verticalFov;

	Matrix4 cameraCoord;

public:
	Camera();
	~Camera();
	Camera(Vector3<float>& p);

	//set direction and up
	void lookAt(Vector3<float> d, Vector3<float> u);

	//set the near and far clipping planes
	void setClippingPlane(float n, float f);

	//set the horizontal and vertical fields of view
	void setHFandVF(float hf, float vf);
	
	//view-matrix related functions
	Matrix4 getCameraMatrix();
	Matrix4 getViewMatrix();

	//projection-matrix related functions
	//Matrix4 getProjectionMatrix_rasterSpace();
	Matrix4 getProjectionMatrix_NDCSpace();


};

Camera.cpp

//Camera.cpp
#include "Camera.h"
Camera::Camera() {
	position = Vector3<float>(0.0f, 0.0f, 0.0f);
	direction = Vector3<float>(0.0f, 0.0f, -1.0f, 0.0f);
	up = Vector3<float>(0.0f, 1.0f, 0.0f, 0.0f);
	nearClippingPlane = 1.0f;
	farClippingPlane = 100.0f;
	horizonFov = verticalFov = 90.0f;
}

Camera::~Camera() {}

Camera::Camera(Vector3<float>& p) {
	position = Vector3<float>(p);
	direction = Vector3<float>(0.0f, 0.0f, -1.0f, 0.0f);
	up = Vector3<float>(0.0f, 1.0f, 0.0f, 0.0f);
	nearClippingPlane = 1.0f;
	farClippingPlane = 100.0f;
	horizonFov = verticalFov = 90.0f;
}

//set direction and up
void Camera::lookAt(Vector3<float> d, Vector3<float> u) {
	direction = Vector3<float>(d);
	up = Vector3<float>(u);
}

//set the near and far clipping planes
void Camera::setClippingPlane(float n, float f) {
	nearClippingPlane = n;
	farClippingPlane = f;
}

//set the horizontal and vertical fields of view
void  Camera::setHFandVF(float hf, float vf) {
	horizonFov = hf;
	verticalFov = vf;
}

//compute the camera coordinate frame
/* returns the matrix formed by the camera's basis vectors
** the camera Z axis points opposite to the viewing direction:  W = -normalize(direction)
** the camera X axis:  U = up x W
** the camera Y axis:  V = W x U
*/
Matrix4 Camera::getCameraMatrix() {
	Vector3<float> W(1.0, 1.0, 1.0, 0.0);

	direction.multiplayByScalar(-1.0, W);
	W.normalize();
	up.normalize();

	Vector3<float> U(1.0, 1.0, 1.0, 0.0);
	up.cross(W, U);
	U.normalize();

	Vector3<float> V(1.0, 1.0, 1.0, 0.0);
	W.cross(U, V);
	V.normalize();

	Matrix4 cameraCoord = Matrix4();
	/*
	 [U.x, V.x, W.x, camera.x ]
	 [U.y, V.y, W.y, camera.y ]
	 [U.z, V.z, W.z, camera.z ]
	 [0.0, 0.0, 0.0, 1.0      ]
	*/
	cameraCoord.m[0] = U.x;
	cameraCoord.m[4] = U.y;
	cameraCoord.m[8] = U.z;
	cameraCoord.m[12] = 0.0;

	cameraCoord.m[1] = V.x;
	cameraCoord.m[5] = V.y;
	cameraCoord.m[9] = V.z;
	cameraCoord.m[13] = 0.0;

	cameraCoord.m[2] = W.x;
	cameraCoord.m[6] = W.y;
	cameraCoord.m[10] = W.z;
	cameraCoord.m[14] = 0.0;

	cameraCoord.m[3] = position.x;
	cameraCoord.m[7] = position.y;
	cameraCoord.m[11] = position.z;
	cameraCoord.m[15] = 1.0;

	this->cameraCoord = cameraCoord;
	return cameraCoord;
}

//view matrix of the pipeline (world space ---> view space)
Matrix4 Camera::getViewMatrix() {
	Matrix4 cameraCoord = this->getCameraMatrix();
	return cameraCoord.invert();
}

////projection matrix (view space ---> raster space), not implemented
//Matrix4 Camera::getProjectionMatrix_rasterSpace() {
//	
//}

//projection matrix of the pipeline (view space ---> NDC space)
Matrix4 Camera::getProjectionMatrix_NDCSpace() {
	/*
	** for the full derivation of this projection matrix see https://blog.csdn.net/qq_27161673
	** [ 2N / (r - l), 0           ,  (r + l)/(r - l),  0 ]
	** [ 0           , 2N / (t - b),  (t + b)/(t - b),  0 ]
	** [ 0           , 0           ,  (N + F)/(N - F),  2NF / (N - F)   ]
	** [ 0           , 0           ,  -1             ,  0               ]
	*/

	//degrees to radians
	float horizonFov_radian = horizonFov * PI / 180.0f;
	float verticalFov_radian = verticalFov * PI / 180.0f;

	/*
	** r = N * tan(horizonFov_radian / 2);
	** l = -r;
	** t = N * tan(verticalFov_radian / 2);
	** b = -t;
	*/
	float r = nearClippingPlane * tan(horizonFov_radian / 2.0f);
	float l = -r;
	float t = nearClippingPlane * tan(verticalFov_radian / 2.0f);
	float b = -t;

	Matrix4 projectionMatrix = Matrix4();

	projectionMatrix.m[0] = nearClippingPlane / r;
	projectionMatrix.m[1] = 0.0f;
	projectionMatrix.m[2] = 0.0f;
	projectionMatrix.m[3] = 0.0f;

	projectionMatrix.m[4] = 0.0f;
	projectionMatrix.m[5] = nearClippingPlane / t;
	projectionMatrix.m[6] = 0.0f;
	projectionMatrix.m[7] = 0.0f;

	projectionMatrix.m[8] = 0.0f;
	projectionMatrix.m[9] = 0.0f;
	projectionMatrix.m[10] = (nearClippingPlane + farClippingPlane) / (nearClippingPlane - farClippingPlane);
	projectionMatrix.m[11] = 2.0f * nearClippingPlane * farClippingPlane / (nearClippingPlane - farClippingPlane);

	projectionMatrix.m[12] = 0.0f;
	projectionMatrix.m[13] = 0.0f;
	projectionMatrix.m[14] = -1.0f;
	projectionMatrix.m[15] = 0.0f;

	return projectionMatrix;
}

main.cpp

/************************************************************************/
/* 2019-12-29  a software rasterizer written from scratch               */
/************************************************************************/

/*
** The program has two stages:
** 1. the coordinate transforms of the pipeline: model, view and projection matrices
** 2. rasterization: hidden-surface removal and vertex-attribute interpolation
**
** It is built from the classes above: Vector2, Vector3, Matrix4 and Camera.
**
*/

#include "Camera.h"
#include "Vector2.h"
#include <vector>
#include <fstream>
#include <cfloat>

bool edgeFunction(Vector2<float> p, Vector2<float>a, Vector2<float>b, Vector2<float>c, vector<float>& barycentricCoord);

int main() {
	/************************************************************************/
	/* set the window size                                                  */
	/************************************************************************/
	int screen_width = 400;
	int screen_height = 400;

	/************************************************************************/
	/* set up the camera                                                    */
	/************************************************************************/
	// camera position
	Vector3<float> cameraPosition = Vector3<float>(0.0f, 0.0f, 3.0f, 1.0f);
	Camera camera = Camera(cameraPosition);
	//set the camera viewing direction and up vector
	camera.lookAt(Vector3<float>(0.0f, 0.0f, -1.0f, 0.0f), Vector3<float>(0.0f, 1.0f, 0.0f, 0.0f));
	//set the near/far clipping planes
	camera.setClippingPlane(1.0f, 10.0f);
	//set the horizontal/vertical field of view (degrees)
	camera.setHFandVF(90.0f, 90.0f);

	/************************************************************************/
	/* triangle data (given in world space)                                 */
	/************************************************************************/
	Vector3<float> A_world = Vector3<float>(-2.0f, -1.0f, 0.0f, 1.0f);
	Vector3<float> B_world = Vector3<float>(2.0f, -1.0f, 1.0f, 1.0f);
	Vector3<float> C_world = Vector3<float>(-1.0f, 1.0f, -3.0f, 1.0f);
	cout << "A_world" << A_world << endl;
	cout << "B_world" << B_world << endl;
	cout << "C_world" << C_world << endl;
	Vector3<float> D_world = Vector3<float>(-2.0f, -1.0f, 1.0f, 1.0f);
	Vector3<float> E_world = Vector3<float>(2.0f, -1.0f, 0.0f, 1.0f);
	Vector3<float> F_world = Vector3<float>(1.0f, 1.0f, -3.0f, 1.0f);

	/************************************************************************/
	/* transform to view space: p_view = viewMatrix * p_world               */
	/************************************************************************/
	Matrix4 viewMatrix = camera.getViewMatrix();
	Vector3<float> A_view = viewMatrix.multiplyByVector(A_world);
	Vector3<float> B_view = viewMatrix.multiplyByVector(B_world);
	Vector3<float> C_view = viewMatrix.multiplyByVector(C_world);
	cout << "A_view" << A_view << endl;
	cout << "B_view" << B_view << endl;
	cout << "C_view" << C_view << endl;
	Vector3<float> D_view = viewMatrix.multiplyByVector(D_world);
	Vector3<float> E_view = viewMatrix.multiplyByVector(E_world);
	Vector3<float> F_view = viewMatrix.multiplyByVector(F_world);

	/************************************************************************/
	/* transform to NDC space: p_pro = projectionMatrix * p_view            */
	/************************************************************************/
	Matrix4 projectionMatrix = camera.getProjectionMatrix_NDCSpace();
	Vector3<float> A_pro = projectionMatrix.multiplyByVector(A_view);
	Vector3<float> B_pro = projectionMatrix.multiplyByVector(B_view);
	Vector3<float> C_pro = projectionMatrix.multiplyByVector(C_view);
	cout << "透视除法前 A_pro" << A_pro << endl;
	cout << "透视除法前 B_pro" << B_pro << endl;
	cout << "透视除法前 C_pro" << C_pro << endl;
	//perspective divide
	A_pro.perspectiveDivision();
	B_pro.perspectiveDivision();
	C_pro.perspectiveDivision();
	cout << "A_pro" << A_pro << endl;
	cout << "B_pro" << B_pro << endl;
	cout << "C_pro" << C_pro << endl;
	Vector3<float> D_pro = projectionMatrix.multiplyByVector(D_view);
	Vector3<float> E_pro = projectionMatrix.multiplyByVector(E_view);
	Vector3<float> F_pro = projectionMatrix.multiplyByVector(F_view);
	D_pro.perspectiveDivision();
	E_pro.perspectiveDivision();
	F_pro.perspectiveDivision();

	/************************************************************************/
	/* transform to screen space: viewport transform using width/height     */
	/************************************************************************/
	float half_screen_width = screen_width / 2.0f;
	float half_screen_height = screen_height / 2.0f;
	Vector2<float> A_screen = Vector2<float>(float((A_pro.x + 1.0f) * half_screen_width), float((1.0f - A_pro.y) * half_screen_height), A_view.z);
	Vector2<float> B_screen = Vector2<float>(float((B_pro.x + 1.0f) * half_screen_width), float((1.0f - B_pro.y) * half_screen_height), B_view.z);
	Vector2<float> C_screen = Vector2<float>(float((C_pro.x + 1.0f) * half_screen_width), float((1.0f - C_pro.y) * half_screen_height), C_view.z);
	cout << "A_screen" << A_screen << endl;
	cout << "B_screen" << B_screen << endl;
	cout << "C_screen" << C_screen << endl;
	Vector2<float> D_screen = Vector2<float>(float((D_pro.x + 1.0f) * half_screen_width), float((1.0f - D_pro.y) * half_screen_height), D_view.z);
	Vector2<float> E_screen = Vector2<float>(float((E_pro.x + 1.0f) * half_screen_width), float((1.0f - E_pro.y) * half_screen_height), E_view.z);
	Vector2<float> F_screen = Vector2<float>(float((F_pro.x + 1.0f) * half_screen_width), float((1.0f - F_pro.y) * half_screen_height), F_view.z);

	//vertex array
	vector<Vector2<float>> vertice = vector<Vector2<float>>(6);
	vertice[0] = (A_screen);
	vertice[1] = (B_screen);
	vertice[2] = (C_screen);
	vertice[3] = (D_screen);
	vertice[4] = (E_screen);
	vertice[5] = (F_screen);

	//index array
	vector<unsigned int> indices = vector<unsigned int>();
	indices.push_back(0);
	indices.push_back(1);
	indices.push_back(2);
	indices.push_back(3);
	indices.push_back(4);
	indices.push_back(5);

	//vertex colors (the RGB values, not the variable names, define the actual colors)
	Vector3<unsigned char> red = Vector3<unsigned char>(255, 0,0);
	Vector3<unsigned char> green = Vector3<unsigned char>(255, 255, 0);
	Vector3<unsigned char> blue = Vector3<unsigned char>(255, 0, 255);
	Vector3<unsigned char> red1 = Vector3<unsigned char>(0, 0, 255);
	Vector3<unsigned char> green1 = Vector3<unsigned char>(0, 255, 255);
	Vector3<unsigned char> blue1 = Vector3<unsigned char>(255, 0, 255);
	vector<Vector3<unsigned char>> colors = vector<Vector3<unsigned char>>(6);
	colors[0] = (red);
	colors[1] = (green);
	colors[2] = (blue);
	colors[3] = (red1);
	colors[4] = (green1);
	colors[5] = (blue1);

	/************************************************************************/
	/* Rasterization
	** 1. test whether each pixel lies inside a triangle in screen space
	** 2. if so, compute its barycentric coordinates and interpolate the vertex attributes
	** 3. write the color into the frame buffer
	**
	*/
	/************************************************************************/

	//create the Z-Buffer and the FrameBuffer
	//the Z-Buffer records the depth of each pixel in the FrameBuffer
	int bufferSize = screen_width * screen_height;
	float* zBuffer = new float[bufferSize];
	//create the FrameBuffer
	vector<vector<float>> framebuffer(bufferSize); 
	//initialize the Z-Buffer to the maximum float value and the FrameBuffer to black
	for (int i = 0; i < bufferSize; i++) {
		zBuffer[i] = FLT_MAX;
		framebuffer[i] = vector<float>(4);
		for (int j = 0; j < 4; j++)
			framebuffer[i][j] = 0.0f;
	}

	
	

	//iterate over all pixels
	for (int row = 0; row < screen_height; row++) {
		for (int column = 0; column < screen_width; column++) {
			
			//current pixel coordinates (sampled at the pixel center)
			Vector2<float> pixelCoord = Vector2<float>(float(column) + 0.5f, float(row) + 0.5f);

			//iterate over all triangles
			int triangleNum = indices.size() / 3;
			for (int triangleIndex = 0; triangleIndex < triangleNum ; triangleIndex++) {
				//the three triangle vertices (in screen space), assumed counter-clockwise
				Vector2<float> a = vertice[indices[triangleIndex * 3]];
				Vector2<float> b = vertice[indices[triangleIndex * 3 + 1]];
				Vector2<float> c = vertice[indices[triangleIndex * 3 + 2]];

				//the colors of the three vertices
				Vector3<unsigned char> a_color = colors[indices[triangleIndex * 3]];
				Vector3<unsigned char> b_color = colors[indices[triangleIndex * 3 + 1]];
				Vector3<unsigned char> c_color = colors[indices[triangleIndex * 3 + 2]];

				//barycentric coordinates of the pixel
				vector<float> barycentricCoord = vector<float>();


				//edge Function
				if (edgeFunction(pixelCoord, a, b, c, barycentricCoord) == false)
					continue;
				else {

					//interpolate the pixel's Z value
					// 1 / z = λ0 / z0 + λ1 / z1 + λ2 / z2
					pixelCoord.z = 1.0f / (barycentricCoord[0] / a.z + barycentricCoord[1] / b.z + barycentricCoord[2] / c.z);

					//depth test: only geometry with negative view-space Z (in front of the camera) can be seen
					if (pixelCoord.z < 1e-5 && abs(pixelCoord.z) < zBuffer[row * screen_width + column] && (-pixelCoord.z) >= camera.nearClippingPlane && (-pixelCoord.z) <= camera.farClippingPlane) {
						zBuffer[row * screen_width + column] = abs(pixelCoord.z);
	
						//without perspective correction:
						/*float R = (barycentricCoord[0] * a_color.x + barycentricCoord[1] * b_color.x  + barycentricCoord[2] * c_color.x );
						float G = (barycentricCoord[0] * a_color.y + barycentricCoord[1] * b_color.y  + barycentricCoord[2] * c_color.y );
						float B = (barycentricCoord[0] * a_color.z + barycentricCoord[1] * b_color.z  + barycentricCoord[2] * c_color.z );*/
						
						//interpolate the vertex attributes at the pixel (currently only color), with perspective correction
						// attribute = z * [attribute0 / z0 * λ0 + attribute1 / z1 * λ1 + attribute2 / z2 * λ2]
						float R = pixelCoord.z * (barycentricCoord[0] * float(a_color.x) / a.z + barycentricCoord[1] * float(b_color.x) / b.z + barycentricCoord[2] * float(c_color.x) / c.z);
						float G = pixelCoord.z * (barycentricCoord[0] * float(a_color.y) / a.z + barycentricCoord[1] * float(b_color.y) / b.z + barycentricCoord[2] * float(c_color.y )/ c.z);
						float B = pixelCoord.z * (barycentricCoord[0] * float(a_color.z) / a.z + barycentricCoord[1] * float(b_color.z) / b.z + barycentricCoord[2] * float(c_color.z) / c.z);

			
						//write the color into the frame buffer
						vector<float> piexlColor = vector<float>(4);
						piexlColor[0] = R / 255.0f; piexlColor[1] = G / 255.0f; piexlColor[2] = B / 255.0f; piexlColor[3] = 1.0f;
						framebuffer[row * screen_width + column] = piexlColor;

					}
					
				}

			}

		}
	}

	//write the framebuffer to a PPM image file
	std::ofstream ofs;
	ofs.open("./output.ppm");
	ofs << "P3\n" << screen_width << ' ' << screen_height << "\n255\n";
	for (int i = 0; i < screen_height; i++)
	{
		for (int j = 0; j < screen_width; j++)
		{
			float r = framebuffer[i * screen_width + j][0];
			float g = framebuffer[i * screen_width + j][1];
			float b = framebuffer[i * screen_width + j][2];
			int ir = int(255.99 * r);
			int ig = int(255.99 * g);
			int ib = int(255.99 * b);
			ofs << ir << ' ' << ig << ' ' << ib << '\n';
		}
	}
	ofs.close();
	delete[] zBuffer;

	return 0;
}

/************************************************************************/
/* test whether a point lies inside a triangle                          */
/************************************************************************/
bool edgeFunction(Vector2<float> p, Vector2<float>a, Vector2<float>b, Vector2<float>c, vector<float>& barycentricCoord) {
	//build the three edge vectors from the vertices, following the winding order
	Vector2<float> ab = Vector2<float>(b.x - a.x, b.y - a.y);
	Vector2<float> bc = Vector2<float>(c.x - b.x, c.y - b.y);
	Vector2<float> ca = Vector2<float>(a.x - c.x, a.y - c.y);

	Vector2<float> ap = Vector2<float>(p.x - a.x, p.y - a.y);
	Vector2<float> bp = Vector2<float>(p.x - b.x, p.y - b.y);
	Vector2<float> cp = Vector2<float>(p.x - c.x, p.y - c.y);

	float result1 = ab.cross(ap);
	float result2 = bc.cross(bp);
	float result3 = ca.cross(cp);
	float yuzhi = .1f;   //tolerance ("yuzhi" means threshold)
	if ((result1 > -yuzhi && result2 > -yuzhi && result3 > -yuzhi) || (result1 < yuzhi && result2 < yuzhi && result3 < yuzhi) ) {
		//compute the barycentric coordinates
		float tirangle_area = abs(ab.cross(bc));
		/*cout << tirangle_area << endl;*/
		barycentricCoord.push_back(abs(result2 / tirangle_area));
		barycentricCoord.push_back(abs(result3 / tirangle_area));
		barycentricCoord.push_back(abs(result1 / tirangle_area));
		return true;
	}

	return false;

}
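To try the program, one possible way (assuming g++ and the seven files above placed in one directory) is:

g++ -std=c++17 main.cpp Matrix4.cpp Camera.cpp -o rasterizer
./rasterizer

The program prints the intermediate coordinates of the first triangle to the console and writes the rendered 400x400 image to output.ppm, which can be opened with any viewer that understands the PPM format. (C++17 is suggested because the copy constructors above take non-const references, which older standards reject when copying temporaries.)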
