Table of Contents
1. Mask Operations on Matrices
1.1 Getting a pointer to image pixels
1.2 Handling the pixel value range
1.3 Mask operations on matrices
Mask operation API --- the filter2D() function
1.4 Code demo
2. The Mat Object
2.1 Mat constructors and common methods
2.2 Using Mat objects
2.3 Creating Mat objects
2.4 Defining small arrays
2.5 Code demo
3. Image Operations
3.1 Reading and writing pixels
3.2 Modifying pixel values
3.3 Code demo
4. Image Blending
4.1 Theory --- linear blending
4.2 Related API --- addWeighted
4.3 Code demo
5. Adjusting Image Brightness and Contrast
5.1 Theory
5.2 Related API
5.3 Code demo
6. Drawing Shapes and Text
6.1 Using cv::Point and cv::Scalar
6.2 Drawing lines, circles, rectangles, ellipses and other basic shapes
6.3 Code demo
7. Image Blurring I
7.1 Blurring principle
7.2 Related API
blur() --- mean blur function
7.3 Code demo
8. Image Blurring II
8.1 Median filtering
8.2 Gaussian bilateral filtering
8.3 Related API
8.4 Code demo
9. Dilation and Erosion
9.1 Morphological operation -- dilation
9.2 Morphological operation -- erosion
9.3 Related API
9.4 Dynamically adjusting the structuring element size
9.5 Code demo
10. Morphological Operations
10.1 Opening -- open
10.2 Closing --- close
10.3 Related API
10.4 Morphological gradient --- Morphological Gradient
10.5 Top hat operation --- top hat
10.6 Black hat operation (bottom hat) --- BLACKHAT
10.7 Code demo
11. Applying Morphology --- Extracting Horizontal and Vertical Lines
11.1 Principle
11.2 Extraction steps
11.3 Converting to a binary image --- adaptiveThreshold
11.4 Code demo
12. Image Pyramids --- Upsampling and Downsampling
12.1 Image pyramid concept
12.1.1 Image pyramid concept --- Gaussian pyramid
12.2 Sampling-related API
12.3 Code demo
13. Basic Thresholding Operations
13.1 Image thresholding
13.2 Threshold types
13.2.1 Threshold binary (threshold binary)
13.2.2 Threshold binary inverted (threshold binary inverted)
13.2.3 Truncate (truncate)
13.2.4 Threshold to zero (threshold to zero)
13.2.5 Threshold to zero inverted (threshold to zero inverted)
13.3 Code demo
14. Custom Linear Filtering
14.1 Convolution concept
14.2 Common operators
14.3 Custom convolution blur
14.4 Code demo
15. Handling Image Borders
15.1 The convolution border problem
15.2 Handling borders
15.3 API --- adding a border to an image
15.4 Code demo
16. Sobel Operator
16.1 Convolution application --- image edge extraction
16.2 Sobel operator
16.3 API
16.4 Code demo
17. Laplacian Operator
17.1 Theory
17.2 API
17.3 Code demo
18. Canny Edge Detection
18.1 Introduction to the Canny algorithm
18.2 API
18.3 Code demo
19. Hough Transform --- Line Detection
19.1 Introduction to the Hough line transform
19.2 API
19.3 Code demo
20. Hough Circle Detection
20.1 Principle of Hough circle detection
20.2 API
20.3 Code demo
21. Pixel Remapping
21.1 Pixel remapping
21.2 API
21.3 Code demo
22. Histogram Equalization
22.1 Histogram
22.2 Histogram equalization
22.3 Histogram equalization API
22.4 Code demo
23. Histogram Calculation
23.1 Histogram concept
23.2 API
23.3 Code demo
24. Histogram Comparison
24.1 Histogram comparison methods
24.2 API
24.3 Code demo
25. Histogram Back Projection (Back Projection)
25.1 Back projection
25.2 Steps and API
25.3 Code demo
26. Template Matching (Template Match)
26.1 Introduction to template matching
26.2 API
26.3 Code demo
27. Finding Contours
27.1 Finding contours
27.2 API
27.3 Code demo
28. Convex Hull
28.1 Convex hull concept
28.2 API
28.3 Code demo
29. Drawing Bounding Rectangles and Circles Around Contours
29.1 API
29.2 Code demo
30. Image Moments
30.1 The concept of moments
30.2 API
30.2.1 Computing moments: the moments() function
30.2.2 Computing contour area: the contourArea() function
30.2.3 Computing contour length: the arcLength() function
30.3 Code demo
31. Point-in-Polygon Test
31.1 Concept
31.2 API
31.3 Code demo
32. Image Segmentation Based on Distance Transform and Watershed
32.1 Image segmentation
32.2 Distance transform and watershed
32.3 API
32.4 Code demo
Mat.ptr<uchar>(int i = 0) returns a pointer to row i of the pixel matrix (rows are counted from 0).
Get the pointer to the current row: const uchar* current = myimage.ptr<uchar>(row);
Read the pixel value at P(row, col): p(row, col) = current[col];
saturate_cast<uchar> handles the pixel value range:
saturate_cast<uchar>(v) returns 0 when v is negative,
returns 255 when v is greater than 255, and returns v unchanged when 0 <= v <= 255.
This function ensures the RGB values stay within the range 0 - 255.
Recompute each pixel value according to a mask (a mask is also called a kernel).
Mask operations can adjust image contrast: they sharpen the image and increase its contrast.
The red cell is the center pixel; applying the same operation to every pixel, from top to bottom and left to right, produces the output Mat with increased contrast.
The formula is: I(i,j) = 5 * I(i,j) - [I(i - 1,j) + I(i + 1,j) + I(i,j - 1) + I(i,j + 1)]
In essence this is the following kernel matrix (center weight 5, the four neighbors -1):
0 -1 0
-1 5 -1
0 -1 0
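As a quick worked example (added for illustration): if a pixel has value 10 and its four neighbors are 8, 9, 11 and 12, the sharpened value is 5 * 10 - (8 + 9 + 11 + 12) = 50 - 40 = 10; in a perfectly flat region the value is unchanged (5v - 4v = v), while a pixel that differs from its neighbors is pushed further away from them, which is what increases the contrast.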
Purpose: convolve the image with a kernel.
Function prototype:
void filter2D( InputArray src, OutputArray dst, int ddepth,
InputArray kernel, Point anchor=Point(-1,-1),
double delta=0, int borderType=BORDER_DEFAULT );
Parameters:
InputArray src : input image
OutputArray dst : output image, with the same size and number of channels as the input
int ddepth : desired depth of the output image; when ddepth is -1 the output keeps the same depth as the source
InputArray kernel : convolution kernel (more precisely, a correlation kernel), a single-channel floating-point matrix. To apply different kernels to different channels, split the image with split() first.
Point anchor : anchor point of the kernel; the default (-1,-1) places the anchor at the kernel center. The anchor is the kernel cell aligned with the pixel currently being processed.
double delta : optional value added to the pixels before they are stored in dst; default 0
int borderType : pixel extrapolation method at the image border; default BORDER_DEFAULT
Defining the mask:
eg:
Mat kern = (Mat_<float>(3, 3) << 0, -1, 0,
-1, 5, -1,
0, -1, 0);
getTickCount(): returns the number of clock ticks elapsed since the operating system was started.
getTickFrequency(): returns the tick frequency, i.e. the number of ticks per second.
So: number of ticks / ticks per second = elapsed time in seconds.
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;
int main(int argc, char**argv)
{
Mat src, dst;
src = imread("E:/技能学习/opencv基础/fruits.jpg");
if (!src.data)
{
cout << "could not load image!" << endl;
return -1;
}
namedWindow("input image", WINDOW_AUTOSIZE);
imshow("input image", src);
//Method 1: the traditional way, via row pointers
//int cols = (src.cols - 1) * src.channels(); //number of columns in bytes (3 channels)
//int offsetx = src.channels(); //number of channels
//int rows = src.rows; //number of rows
//dst = Mat::zeros(src.size(), src.type());
//for (int row = 1; row < (rows - 1); row++)
//{
//	const uchar* current = src.ptr<uchar>(row); //pointer to the current row
//	const uchar* previous = src.ptr<uchar>(row - 1); //pointer to the previous row
//	const uchar* next = src.ptr<uchar>(row + 1); //pointer to the next row
//	uchar* output = dst.ptr<uchar>(row);
//	for (int col = offsetx; col < cols; col++)
//	{
//		output[col] = saturate_cast<uchar>(5 * current[col] -
//			(current[col - offsetx] + current[col + offsetx] + previous[col] + next[col]));
//	}
//}
//Method 2: using the OpenCV API function
double t = getTickCount();
Mat kernel = (Mat_<float>(3, 3) << 0, -1, 0,
-1, 5, -1,
0, -1, 0);
filter2D(src, dst, -1, kernel);
double timeconsume = (getTickCount() - t) / getTickFrequency();
cout << "时间消耗"<< timeconsume << endl;
namedWindow("output image", WINDOW_AUTOSIZE);
imshow("output image", dst);
waitKey(0);
destroyAllWindows();
return 0;
}
Four key points about using Mat objects:
The memory for output images is allocated automatically;
With the OpenCV C++ interface you do not need to manage memory allocation yourself;
The assignment operator and the copy constructor copy only the header (the pixel data is shared);
Use the clone() and copyTo() functions to make a complete copy of the data.
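A minimal sketch (added; not part of the original notes) illustrating the last two points with a small test matrix:
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;
int main(int argc, char** argv)
{
Mat a = Mat::zeros(2, 2, CV_8UC1);
Mat b = a;         //assignment copies only the header: b shares a's pixel data
Mat c = a.clone(); //clone makes a full copy: c owns its own pixel data
a.at<uchar>(0, 0) = 255;
cout << "b(0,0) = " << (int)b.at<uchar>(0, 0) << endl; //255, because b shares data with a
cout << "c(0,0) = " << (int)c.at<uchar>(0, 0) << endl; //0, the deep copy is unaffected
return 0;
}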
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;
int main(int argc, char**argv)
{
Mat src;
src = imread("E:/技能学习/opencv基础/fruits.jpg");
if (src.empty())
{
cout << "could not load image!" << endl;
return -1;
}
namedWindow("input", WINDOW_AUTOSIZE);
imshow("input", src);
//Matrix initialization and assignment
/*Mat dst;
dst = Mat(src.size(), src.type());
dst = Scalar(127, 0, 255);
namedWindow("output", WINDOW_AUTOSIZE);
imshow("output", dst);*/
//Clone
/*Mat dst = src.clone();
namedWindow("output", WINDOW_AUTOSIZE);
imshow("output", dst);*/
//Copy
/*Mat dst;
src.copyTo(dst);
namedWindow("output", WINDOW_AUTOSIZE);
imshow("output", dst); */
//Color space conversion
Mat dst;
cvtColor(src, dst, COLOR_BGR2GRAY);
namedWindow("output", WINDOW_AUTOSIZE);
imshow("output", dst);
//Print the number of channels of the input and output images
cout << "input image channels:" << src.channels() << endl;
cout << "output image channels:" << dst.channels() << endl;
const uchar* firstrow = dst.ptr<uchar>(0);
//printf("first pixel value:%d\n", *firstrow);
cout << "first pixel value: " << (int)firstrow[0] << endl; //equivalent to the printf above
int row = dst.rows;
int col = dst.cols;
cout << "row = " << row << " col = " << col << endl;
//Create a Mat object
Mat m(3, 3, CV_8UC3, Scalar(0, 0, 255));
cout << "m = " << m << endl;
//Create a Mat with create()
Mat m1;
m1.create(src.size(), src.type());
m1 = Scalar(0, 0, 255);
imshow("output", m1);
//Define a small kernel matrix
Mat kernel = (Mat_<float>(3, 3) << 0, -1, 0, -1, 5, -1, 0, -1, 0);
cout << " kernel = " << endl << " " << kernel << endl;
waitKey(0);
system("pause");
destroyAllWindows();
return 0;
}
eg:
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;
int main(int argc, char**argv)
{
Mat src, gray_src;
src = imread("E:/技能学习/opencv基础/fruits.jpg");
if (src.empty())
{
cout << "could not load image!" << endl;
return -1;
}
namedWindow("input", WINDOW_AUTOSIZE);
imshow("input", src);
cvtColor(src, gray_src, COLOR_BGR2GRAY);
int height = gray_src.rows;
int width = gray_src.cols;
/*namedWindow("gray invert", WINDOW_AUTOSIZE);
imshow("gray invert", gray_src);*/
Mat dst;
dst.create(src.size(), src.type());
height = src.rows;
width = src.cols;
int channel = src.channels();
for (int row = 0; row < height; row++)
{
for (int col = 0; col < width; col++)
{
if (channel == 1) //single channel: read and modify the pixel value
{
int gray = src.at<uchar>(row, col);
dst.at<uchar>(row, col) = 255 - gray;
}
else if (channel == 3) //three channels: read and modify the pixel values
{
//read
int b = src.at<Vec3b>(row, col)[0];
int g = src.at<Vec3b>(row, col)[1];
int r = src.at<Vec3b>(row, col)[2];
//write
dst.at<Vec3b>(row, col)[0] = 255 - b;
dst.at<Vec3b>(row, col)[1] = 255 - g;
dst.at<Vec3b>(row, col)[2] = 255 - r;
gray_src.at<uchar>(row, col) = min(r, max(g, b)); //store min(r, max(g, b)) in gray_src as a custom gray-like value
}
}
}
}
namedWindow("output", WINDOW_AUTOSIZE);
//imshow("output", dst);
imshow("output", gray_src);
waitKey(0);
system("pause");
destroyAllWindows();
return 0;
}
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;
int main(int argc, char**argv)
{
Mat src1, src2,dst;
src1 = imread("E:/技能学习/opencv基础/lena.jpg");
src2 = imread("E:/技能学习/opencv基础/baboon.jpg");
if (src1.empty())
{
cout << "could not load image lena!" << endl;
return -1;
}
if (src2.empty())
{
cout << "could not load image baboon!" << endl;
return -1;
}
double alpha = 0.5;
if (src1.size() == src2.size() && src1.type() == src2.type())
{
addWeighted(src1, alpha, src2, (1 - alpha), 0.0, dst); //linear blend
//add(src1, src2, dst); //direct addition of the two images
//multiply(src1, src2, dst); //direct multiplication of the two images
imshow("input image1", src1);
imshow("input image2", src2);
imshow("blend demo", dst);
}
else
{
cout << "could not blend image!" << endl;
return -1;
}
waitKey(0);
system("pause");
destroyAllWindows();
return 0;
}
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;
int main(int argc, char** argv)
{
Mat src, dst;
src = imread("E:/技能学习/opencv基础/lena.jpg");
if (src.empty())
{
cout << "could not load image" << endl;
return -1;
}
char input_win[] = "input image";
namedWindow(input_win, WINDOW_AUTOSIZE);
imshow(input_win, src);
//contrast and brightness change demo
int height = src.rows;
int width = src.cols;
dst = Mat::zeros(src.size(), src.type());
float alpha = 1.2;
float beta = 30;
for (int row = 0; row < height; row++)
{
for (int col = 0; col < width; col++)
{
if (src.channels() == 1)
{
float v = src.at<uchar>(row, col);
dst.at<uchar>(row, col) = saturate_cast<uchar>(v * alpha + beta);
}
else if (src.channels() == 3)
{
float b = src.at<Vec3b>(row, col)[0];
float g = src.at<Vec3b>(row, col)[1];
float r = src.at<Vec3b>(row, col)[2];
dst.at<Vec3b>(row, col)[0] = saturate_cast<uchar>(b * alpha + beta);
dst.at<Vec3b>(row, col)[1] = saturate_cast<uchar>(g * alpha + beta);
dst.at<Vec3b>(row, col)[2] = saturate_cast<uchar>(r * alpha + beta);
}
}
}
char output_title[] = "contrast and brightness change demo";
namedWindow(output_title, WINDOW_AUTOSIZE);
imshow(output_title, dst);
waitKey(0);
destroyAllWindows();
return 0;
}
Drawing text: putText()
Function prototype:
void cv::putText(
cv::Mat& img, // image to draw on
const string& text, // text to draw
cv::Point origin, // bottom-left corner of the text
int fontFace, // font face (e.g. cv::FONT_HERSHEY_PLAIN)
double fontScale, // scale factor; larger values give larger text
cv::Scalar color, // text color (note: BGR order)
int thickness = 1, // line thickness
int lineType = 8, // line type (4- or 8-connected, default 8)
bool bottomLeftOrigin = false // true = origin at lower left
);
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;
Mat bgImage;
const char drawdemo_win[] = "draw shapes and test demo";
//Function declarations
void Mylines();
void MyRectangle();
void MyEllipse();
void MyCircle();
void MyPolygon();
void RandomLineDemo();
int main(int argc, char** argv)
{
bgImage = imread("E:/技能学习/opencv基础/lena.jpg");
if (bgImage.empty())
{
cout << "could not load image" << endl;
return -1;
}
//Mylines();
//MyRectangle();
//MyEllipse();
//MyCircle();
//MyPolygon();
//Draw text
//putText(bgImage, "Hello World", Point(300, 300), FONT_HERSHEY_COMPLEX, 1.0, Scalar(0, 255, 255), 1, 8);
//namedWindow(drawdemo_win, WINDOW_AUTOSIZE);
//imshow(drawdemo_win, bgImage);
RandomLineDemo();
waitKey(0);
destroyAllWindows();
return 0;
}
//Function definitions
void Mylines()
{
Point p1 = Point(20, 30);
Point p2;
p2.x = 300;
p2.y = 300;
Scalar color = Scalar(0, 0, 255);
//Draw a line
line(bgImage, p1, p2, color, 1, LINE_AA);
}
void MyRectangle()
{
Rect rect = Rect(200, 100, 300, 300); //rectangle: top-left corner (200, 100), width 300, height 300
Scalar color = Scalar(0, 255, 0);
//Draw the rectangle
rectangle(bgImage, rect, color, 2, 8);
}
void MyEllipse()
{
Scalar color = Scalar(255, 0, 0);
//Draw an ellipse
ellipse(bgImage, Point(bgImage.cols / 2, bgImage.rows / 2),
Point(bgImage.cols / 4, bgImage.rows / 8),
90, 0, 360, color, 1, 8);
}
void MyCircle()
{
Scalar color = Scalar(0, 255, 255);
//Draw a circle
circle(bgImage, Point(bgImage.cols / 2, bgImage.rows / 2), 150, color, 2, 8);
}
void MyPolygon()
{
Point p1(100, 100);
Point p2(100, 200);
Point p3(200, 200);
Point p4(200, 100);
Point p5(100, 100);
vector<vector<Point>> pts(1);
pts[0].push_back(p1);
pts[0].push_back(p2);
pts[0].push_back(p3);
pts[0].push_back(p4);
pts[0].push_back(p5);
//Fill the polygon
fillPoly(bgImage, pts, Scalar(255, 255, 0), 8, 0);
}
void RandomLineDemo()
{
//Random number generator
RNG rng(123456);
Point pt1;
Point pt2;
Mat bg = Mat::zeros(bgImage.size(), bgImage.type());
namedWindow("random line demo", WINDOW_AUTOSIZE);
for (int i = 0; i < 10000; i++)
{
//uniform() returns a random number within the given range
pt1.x = rng.uniform(0, bgImage.cols);
pt2.x = rng.uniform(0, bgImage.cols);
pt1.y = rng.uniform(0, bgImage.rows);
pt2.y = rng.uniform(0, bgImage.rows);
//Generate a random color
Scalar color = Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255));
if (waitKey(50) > 0)
{
break;
}
line(bg, pt1, pt2, color, 1, 8);
imshow("random line demo", bg);
}
}
Purpose: apply a mean (box) filter to the input image src and write the result to dst.
Function prototype:
void blur(InputArray src, OutputArray dst, Size ksize,
Point anchor=Point(-1,-1), int borderType=BORDER_DEFAULT )
Parameters:
src: the input (source) image, a Mat. Channels are processed independently and any number of channels is supported, but the image depth must be one of CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.
dst: the output image, with the same size and type as the source. You can, for example, initialize it from the source with Mat::clone.
ksize: the kernel size, written as Size(w, h) where w is the width in pixels and h the height. Size(3,3) means a 3x3 kernel and Size(5,5) a 5x5 kernel. All coefficients are equal (a normalized box kernel), which is what makes this a mean filter.
anchor: the anchor point (the pixel being smoothed). The default is Point(-1,-1); negative coordinates mean the anchor is taken at the kernel center.
borderType: how the image border is handled. The default BORDER_DEFAULT is usually fine.
Notes (see the sketch below):
A 10x1 kernel is a one-dimensional convolution in the horizontal direction.
A 1x10 kernel is a one-dimensional convolution in the vertical direction.
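A minimal sketch (added; it reuses the lena.jpg path from the demos below, so adjust the path as needed) showing the two one-dimensional cases:
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;
int main(int argc, char** argv)
{
Mat src = imread("E:/技能学习/opencv基础/lena.jpg");
if (src.empty())
{
cout << "could not load image" << endl;
return -1;
}
Mat hblur, vblur;
blur(src, hblur, Size(10, 1), Point(-1, -1)); //10x1 kernel: horizontal 1-D mean blur
blur(src, vblur, Size(1, 10), Point(-1, -1)); //1x10 kernel: vertical 1-D mean blur
imshow("horizontal blur", hblur);
imshow("vertical blur", vblur);
waitKey(0);
return 0;
}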
GaussianBlur() --- Gaussian blur
Function prototype:
void GaussianBlur(InputArray src, OutputArray dst,
Size ksize, double sigmaX, double sigmaY=0,
int borderType=BORDER_DEFAULT )
Parameters:
InputArray src: input image, e.g. a Mat; the depth must be CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.
OutputArray dst: output image, with the same type and size as the input.
Size ksize: Gaussian kernel size. ksize.width and ksize.height may differ, but both must be positive odd numbers; if they are zero they are computed from sigma.
double sigmaX: standard deviation of the Gaussian in the X direction.
double sigmaY: standard deviation in the Y direction. If sigmaY is 0 it is set equal to sigmaX; if both are 0 they are computed from ksize.width and ksize.height (see getGaussianKernel() for details). It is recommended to specify ksize, sigmaX and sigmaY explicitly.
int borderType=BORDER_DEFAULT: the extrapolation mode for pixels outside the image; the default BORDER_DEFAULT is fine unless you have special needs (see borderInterpolate()).
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;
int main(int argc, char** argv)
{
Mat src,dst;
src = imread("E:/技能学习/opencv基础/lena.jpg");
if (src.empty())
{
cout << "could not load image" << endl;
return -1;
}
namedWindow("input image", WINDOW_AUTOSIZE);
namedWindow("output image", WINDOW_AUTOSIZE);
imshow("input image", src);
//Mean blur
blur(src, dst, Size(11, 11), Point(-1, -1));
imshow("output image", dst);
//Gaussian blur
Mat gblur;
GaussianBlur(src, gblur, Size(11,11), 11, 11);
imshow("GaussianBlur image", gblur);
waitKey(0);
destroyAllWindows();
return 0;
}
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;
int main(int argc, char** argv)
{
Mat src, dst1,dst2;
src = imread("E:/技能学习/opencv基础/jiaoyan.png");
if (src.empty())
{
cout << "could not load image" << endl;
return -1;
}
namedWindow("input image", WINDOW_AUTOSIZE);
imshow("input image", src);
//Median filter
medianBlur(src, dst1, 3);
namedWindow("median Filter output image", WINDOW_AUTOSIZE);
imshow("median Filter output image", dst1);
//Bilateral filter
bilateralFilter(src, dst2, 15, 150, 3);
namedWindow("bilateralFilter output image", WINDOW_AUTOSIZE);
imshow("bilateralFilter output image", dst2);
//Gaussian blur (for comparison)
Mat gblur;
GaussianBlur(src, gblur, Size(15, 15), 3, 3);
namedWindow("GaussianBlur output image", WINDOW_AUTOSIZE);
imshow("GaussianBlur output image", gblur);
waitKey(0);
destroyAllWindows();
return 0;
}
The getStructuringElement() function:
Constructs a structuring element of a given size and shape, which can be passed to erode, dilate or morphologyEx for morphological processing.
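Its prototype (added here for reference, from the standard OpenCV API) is:
Mat getStructuringElement(int shape, Size ksize, Point anchor = Point(-1, -1));
where shape is MORPH_RECT, MORPH_CROSS or MORPH_ELLIPSE, ksize is the element size, and anchor defaults to the element center.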
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;
Mat src, dst;
int element_size = 3;
int max_size = 21;
void CallBack_Demo(int, void*);
int main(int argc, char** argv)
{
src = imread("E:/技能学习/opencv基础/jiaoyan.png");
if (src.empty())
{
cout << "could not load image" << endl;
return -1;
}
namedWindow("input image", WINDOW_AUTOSIZE);
imshow("input image", src);
namedWindow("output image", WINDOW_AUTOSIZE);
//Create a trackbar
createTrackbar("Element Size:", "output image", &element_size, max_size, CallBack_Demo);
CallBack_Demo(0, 0);
waitKey(0);
destroyAllWindows();
return 0;
}
void CallBack_Demo(int, void*)
{
int s = element_size * 2 + 1;
Mat structuringElement = getStructuringElement(MORPH_RECT, Size(s, s), Point(-1, -1));
//Dilation
//dilate(src, dst, structuringElement, Point(-1, -1),1);
//Erosion
erode(src, dst, structuringElement, Point(-1, -1), 1);
imshow("output image", dst);
}
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;
int main(int argc, char** argv)
{
Mat src1, src2,dst;
src1 = imread("E:/技能学习/opencv基础/164.jpg");
src2 = imread("E:/技能学习/opencv基础/HappyFish.jpg");
if (src1.empty())
{
cout << "could not load image" << endl;
return -1;
}
if (src2.empty())
{
cout << "could not load image" << endl;
return -1;
}
namedWindow("input image1", WINDOW_AUTOSIZE);
imshow("input image1", src1);
namedWindow("input image2", WINDOW_AUTOSIZE);
imshow("input image2", src2);
namedWindow("morphology demo", WINDOW_AUTOSIZE);
Mat kernel = getStructuringElement(MORPH_RECT, Size(3, 3), Point(-1, -1));
//Opening MORPH_OPEN
morphologyEx(src1, dst, MORPH_OPEN, kernel);
//imshow("morphology demo", dst);
//Closing MORPH_CLOSE
morphologyEx(src1, dst, MORPH_CLOSE, kernel);
//imshow("morphology demo", dst);
//Morphological gradient MORPH_GRADIENT
morphologyEx(src2, dst, MORPH_GRADIENT, kernel);
imshow("morphology demo", dst);
//Top hat MORPH_TOPHAT
morphologyEx(src1, dst, MORPH_TOPHAT, kernel);
imshow("morphology demo", dst);
//Black hat MORPH_BLACKHAT
morphologyEx(src1, dst, MORPH_BLACKHAT, kernel);
imshow("morphology demo", dst);
waitKey(0);
destroyAllWindows();
return 0;
}
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;
int main(int argc, char** argv)
{
Mat src, dst;
//src = imread("E:/技能学习/opencv基础/bin1.jpg");
src = imread("E:/技能学习/opencv基础/chars.png");
if (src.empty())
{
cout << "could not load image" << endl;
return -1;
}
namedWindow("input image", WINDOW_AUTOSIZE);
imshow("input image", src);
Mat gray_src;
//Convert the BGR image to grayscale
cvtColor(src, gray_src, COLOR_BGR2GRAY);
imshow("gray image", gray_src);
Mat binImg;
//Convert the grayscale image to a binary image
adaptiveThreshold(gray_src, binImg, 255, ADAPTIVE_THRESH_GAUSSIAN_C, THRESH_BINARY, 15, -2);
//Same call on the inverted grayscale image (~gray_src), so the characters become white (this overwrites the result above)
adaptiveThreshold(~gray_src, binImg, 255, ADAPTIVE_THRESH_GAUSSIAN_C, THRESH_BINARY, 15, -2);
imshow("binary image", binImg);
//Horizontal structuring element
Mat hline = getStructuringElement(MORPH_RECT, Size(src.cols / 16, 1), Point(-1, -1));
//Vertical structuring element
Mat vline = getStructuringElement(MORPH_RECT, Size(1, src.rows / 16), Point(-1, -1));
//Rectangular structuring element
Mat kernel = getStructuringElement(MORPH_RECT, Size(3, 3), Point(-1, -1));
//Extract the horizontal lines
//Mat temp;
//Erode
//erode(binImg, temp, hline);
//Dilate
//dilate(temp, dst, hline);
//bitwise_not(dst, dst);
//imshow("Final Result", dst);
//Extract the vertical lines
//Mat temp;
//Erode
//erode(binImg, temp, vline);
//Dilate
/*dilate(temp, dst, vline);
bitwise_not(dst, dst);
imshow("Final Result", dst); */
Mat temp;
//Erode
erode(binImg, temp, kernel);
//Dilate
dilate(temp, dst, kernel);
bitwise_not(dst, dst);
imshow("Final Result", dst);
waitKey(0);
destroyAllWindows();
return 0;
}
An image pyramid keeps the image's key features present at every scale.
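For reference (added here; these are the standard OpenCV signatures), the sampling functions used in the demo below are:
void pyrUp(InputArray src, OutputArray dst, const Size& dstsize = Size(), int borderType = BORDER_DEFAULT); //upsample: each dimension is doubled
void pyrDown(InputArray src, OutputArray dst, const Size& dstsize = Size(), int borderType = BORDER_DEFAULT); //downsample: each dimension is halved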
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;
int main(int argc, char** argv)
{
Mat src, dst;
src = imread("E:/技能学习/opencv基础/HappyFish.jpg");
if (src.empty())
{
cout << "could not load image" << endl;
return -1;
}
namedWindow("input image", WINDOW_AUTOSIZE);
imshow("input image", src);
namedWindow("output image", WINDOW_AUTOSIZE);
//Upsample
pyrUp(src, dst, Size(src.cols * 2, src.rows * 2));
imshow("output image", dst);
//Downsample
Mat s_down;
pyrDown(src, s_down, Size(src.cols / 2, src.rows / 2));
imshow("sample down", s_down);
//Difference of Gaussians (DoG)
Mat gray_src, g1, g2, dogImg;
cvtColor(src, gray_src, COLOR_BGR2GRAY); //color conversion
GaussianBlur(gray_src, g1, Size(5, 5), 0, 0); //Gaussian blur
GaussianBlur(g1, g2, Size(5, 5), 0, 0); //blur again so that g1 and g2 differ
subtract(g1, g2, dogImg, Mat());
//Normalize for display
normalize(dogImg, dogImg, 2, 255, NORM_MINMAX);
imshow("DOG Image", dogImg);
waitKey(0);
destroyAllWindows();
return 0;
}
Image binarization --- the threshold() function
Function prototype:
double threshold( InputArray src, OutputArray dst,
double thresh, double maxval, int type );
Parameters:
src: source image; an 8-bit image or a 32-bit floating-point image
dst: output image
thresh: the threshold value
maxval: the maximum value, usually 255
type: the threshold type, one of the following:
enum ThresholdTypes {
THRESH_BINARY = 0,
THRESH_BINARY_INV = 1,
THRESH_TRUNC = 2,
THRESH_TOZERO = 3,
THRESH_TOZERO_INV = 4,
THRESH_MASK = 7,
THRESH_OTSU = 8,
THRESH_TRIANGLE = 16
};
Notes:
THRESH_OTSU and THRESH_TRIANGLE are optimization flags combined with THRESH_BINARY, THRESH_BINARY_INV, THRESH_TRUNC, THRESH_TOZERO or THRESH_TOZERO_INV; they compute the threshold automatically.
When the THRESH_OTSU or THRESH_TRIANGLE flag is used, the input image must be 8-bit single-channel.
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;
Mat src, gray_src, dst;
int threshold_value = 127; //threshold value
int threshold_max = 255;
int type_value = 2;
int type_max = 4;
const char* output_title = "binary image";
void Threshold_Demo(int, void*);
int main(int argc, char** argv)
{
src = imread("E:/技能学习/opencv基础/HappyFish.jpg");
if (src.empty())
{
cout << "could not load image" << endl;
return -1;
}
namedWindow("input image", WINDOW_AUTOSIZE);
namedWindow(output_title, WINDOW_AUTOSIZE);
imshow("input image", src);
//Convert to grayscale
cvtColor(src, gray_src, COLOR_BGR2GRAY);
//Create the trackbars
createTrackbar("Threshold Value", output_title, &threshold_value, threshold_max, Threshold_Demo);
createTrackbar("Type Value", output_title, &type_value, type_max, Threshold_Demo);
Threshold_Demo(0, 0);
waitKey(0);
destroyAllWindows();
return 0;
}
void Threshold_Demo(int, void*)
{
cvtColor(src, gray_src, COLOR_BGR2GRAY);
//threshold(gray_src, dst, threshold_value, threshold_max, type_value);
//threshold(gray_src, dst, 0, 255, THRESH_OTSU | type_value); //THRESH_OTSU computes the threshold automatically
threshold(gray_src, dst, 0, 255, THRESH_TRIANGLE | type_value); //THRESH_TRIANGLE also computes the threshold automatically
imshow(output_title, dst);
}
What convolution is used for:
blurring images, extracting edges, image enhancement (sharpening)
A convolution kernel is also called an operator.
Rules for convolution kernels (filters):
The kernel size should be odd so that it has a center, e.g. 3x3, 5x5 or 7x7. Having a center also gives a radius; for example a 5x5 kernel has a radius of 2.
The kernel elements should sum to 1 so that the image brightness is unchanged after convolution; this is not a hard requirement, though.
If the elements sum to more than 1 the convolved image is brighter than the original, and if they sum to less than 1 it is darker. If the sum is 0 the image does not turn completely black, but it will be very dark.
The convolution result may contain negative values or values above 255; in that case simply clamp them to the range 0 - 255 (negative values can also be replaced by their absolute value).
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;
const char* output_title = "Robert X";
int main(int argc, char** argv)
{
Mat src, dst;
src = imread("E:/技能学习/opencv基础/HappyFish.jpg");
if (src.empty())
{
cout << "could not load image" << endl;
return -1;
}
namedWindow("input image", WINDOW_AUTOSIZE);
namedWindow(output_title, WINDOW_AUTOSIZE);
imshow("input image", src);
//Robert operator, x direction
/*Mat kernel_x = (Mat_<float>(2, 2) << 1, 0, 0, -1);
filter2D(src, dst, -1, kernel_x, Point(-1, -1), 0.0);
imshow(output_title, dst);*/
//Robert operator, y direction
/*Mat yImg;
Mat kernel_y = (Mat_<float>(2, 2) << 0, 1, -1, 0);
filter2D(src, yImg, -1, kernel_y, Point(-1, -1), 0.0);
imshow("Robert Y", yImg);*/
//Sobel operator, x direction
/*Mat SImgx;
Mat kernel_x = (Mat_<float>(3, 3) << -1, 0, 1, -2, 0, 2, -1, 0, 1);
filter2D(src, SImgx, -1, kernel_x, Point(-1, -1), 0.0);
imshow("Sobel X", SImgx);*/
//Sobel operator, y direction
/*Mat SImgy;
Mat kernel_y = (Mat_<float>(3, 3) << -1, -2, -1, 0, 0, 0, 1, 2, 1);
filter2D(src, SImgy, -1, kernel_y, Point(-1, -1), 0.0);
imshow("Sobel Y", SImgy);*/
//Laplacian operator
/*Mat lImgy;
Mat kernel = (Mat_<float>(3, 3) << 0, -1, 0, -1, 4, -1, 0, -1, 0);
filter2D(src, lImgy, -1, kernel, Point(-1, -1), 0.0);
imshow("Laplacian", lImgy);*/
//Custom convolution blur
int c = 0;
int index = 0;
int ksize = 0;
while (true)
{
c = waitKey(500);
if (c == 27)
{
break;
}
ksize = 4 + (index % 5) * 2 + 1;
Mat kernel = Mat::ones(Size(ksize, ksize), CV_32F) / (float)((ksize * ksize));
filter2D(src, dst, -1, kernel, Point(-1, -1));
index++;
imshow(output_title, dst);
}
//waitKey(0);
destroyAllWindows();
return 0;
}
Border pixels cannot be covered by the convolution because the kernel does not fully overlap the image there, so a 3x3 filter leaves a 1-pixel border unprocessed and a 5x5 filter leaves a 2-pixel border unprocessed.
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;
const char* output_title = "output image";
int main(int argc, char** argv)
{
Mat src, dst;
src = imread("E:/技能学习/opencv基础/baboon.jpg");
if (src.empty())
{
cout << "could not load image" << endl;
return -1;
}
namedWindow("input image", WINDOW_AUTOSIZE);
namedWindow(output_title, WINDOW_AUTOSIZE);
imshow("input image", src);
/*int top = (int)(0.05 * src.rows);
int bottom = (int)(0.05 * src.rows);
int right = (int)(0.05 * src.cols);
int left = (int)(0.05 * src.cols);
RNG rng(12345);
int borderType = BORDER_DEFAULT;
int c = 0;
while (true)
{
c = waitKey(500);
if (c == 27)
{
break;
}
if ((char)c == 'r')
{
borderType = BORDER_REPLICATE;
}
else if ((char)c == 'w')
{
borderType = BORDER_WRAP;
}
else if ((char)c == 'c')
{
borderType = BORDER_CONSTANT;
}
Scalar color = Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0,255));
copyMakeBorder(src, dst, top, bottom, left, right, borderType, color);
imshow(output_title, dst);
}*/
//Gaussian blur
GaussianBlur(src, dst, Size(5, 5), 0, 0, BORDER_DEFAULT);
imshow(output_title, dst);
waitKey(0);
destroyAllWindows();
return 0;
}
The Sobel operator is sensitive to noise and easily disturbed by it, so the image should be denoised before applying it.
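As an aside (added; not part of the original notes): OpenCV also provides Scharr(), a more accurate 3x3 derivative filter with the same calling convention as Sobel(); the demo below uses it after first showing the plain Sobel calls. A minimal standalone sketch, reusing the lena.jpg path from the other demos:
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;
int main(int argc, char** argv)
{
Mat src = imread("E:/技能学习/opencv基础/lena.jpg", IMREAD_GRAYSCALE);
if (src.empty())
{
cout << "could not load image" << endl;
return -1;
}
GaussianBlur(src, src, Size(3, 3), 0, 0); //denoise first, as noted above
Mat xgrad, ygrad, grad;
Scharr(src, xgrad, CV_16S, 1, 0); //x gradient with the Scharr kernel
Scharr(src, ygrad, CV_16S, 0, 1); //y gradient with the Scharr kernel
convertScaleAbs(xgrad, xgrad);
convertScaleAbs(ygrad, ygrad);
addWeighted(xgrad, 0.5, ygrad, 0.5, 0, grad); //combine the two gradient images
imshow("Scharr gradient", grad);
waitKey(0);
return 0;
}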
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;
int main(int argc, char** argv)
{
Mat src, dst;
src = imread("E:/技能学习/opencv基础/lena.jpg");
if (src.empty())
{
cout << "could not load image" << endl;
return -1;
}
namedWindow("input image", WINDOW_AUTOSIZE);
imshow("input image", src);
Mat gray_src;
//Step 1: Gaussian smoothing
GaussianBlur(src, dst, Size(3, 3), 0, 0);
//Step 2: convert the smoothed image to grayscale
cvtColor(dst, gray_src, COLOR_BGR2GRAY);
imshow("gray image", gray_src);
//Step 3: compute the X and Y gradients
Mat xgrad, ygrad;
//Sobel operator
//Sobel(gray_src, xgrad, CV_16S, 1, 0, 3);
//Sobel(gray_src, ygrad, CV_16S, 0, 1, 3);
//Scharr operator (a more accurate 3x3 gradient)
Scharr(gray_src, xgrad, CV_16S, 1, 0);
Scharr(gray_src, ygrad, CV_16S, 0, 1);
convertScaleAbs(xgrad, xgrad); //convertScaleAbs() takes the absolute value of every element and converts back to 8-bit
convertScaleAbs(ygrad, ygrad);
imshow("xgrad image", xgrad);
imshow("ygrad image", ygrad);
//Step 4: combine into the gradient magnitude image
Mat xygrad = Mat(xgrad.size(), xgrad.type());
/*cout << "type : " << xygrad.type() << endl;
int width = xgrad.cols;
int height = ygrad.rows;
for (int row = 0; row < height; row++)
{
for (int col = 0; col < width; col++)
{
int xg = xgrad.at<uchar>(row, col); //after convertScaleAbs the gradients are 8-bit unsigned (0 - 255)
int yg = ygrad.at<uchar>(row, col);
int xy = xg + yg;
xygrad.at<uchar>(row, col) = saturate_cast<uchar>(xy);
}
}*/
add(xgrad, ygrad, xygrad); //equivalent to the commented loop above
imshow("Final result", xygrad);
waitKey(0);
destroyAllWindows();
return 0;
}
Function prototype:
Laplacian( src_gray, dst, ddepth, kernel_size,
scale, delta, BORDER_DEFAULT );
Parameters:
src_gray: the input image
dst: the result of the Laplace operation
ddepth: output image depth; since the input is usually CV_8U, set this to CV_16S to avoid overflow
kernel_size: the size of the filter mask; our mask is 3x3, so pass 3
scale, delta, BORDER_DEFAULT: the defaults are fine
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;
int main(int argc, char** argv)
{
Mat src, dst;
src = imread("E:/技能学习/opencv基础/lena.jpg");
if (src.empty())
{
cout << "could not load image" << endl;
return -1;
}
namedWindow("input image", WINDOW_AUTOSIZE);
imshow("input image", src);
Mat gray_src, edg_image;
//Gaussian blur
GaussianBlur(src, dst, Size(3, 3), 0, 0);
//Convert to grayscale
cvtColor(dst, gray_src, COLOR_BGR2GRAY);
//Laplacian
Laplacian(gray_src, edg_image, CV_16S, 3);
//Take the absolute value and convert back to 8-bit
convertScaleAbs(edg_image, edg_image);
//Binarize
threshold(edg_image, edg_image, 0, 255, THRESH_OTSU);
imshow("output image", edg_image);
waitKey(0);
destroyAllWindows();
return 0;
}
The input image must be a grayscale image.
By default the L1 gradient norm is used, i.e. the L2gradient parameter is set to false.
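For reference (added; this is the standard OpenCV signature), the Canny() call used in the demo below has the form:
void Canny(InputArray image, OutputArray edges, double threshold1, double threshold2, int apertureSize = 3, bool L2gradient = false);
threshold1 and threshold2 are the low and high hysteresis thresholds (a high:low ratio between 2:1 and 3:1 is commonly recommended), apertureSize is the Sobel aperture, and L2gradient selects the L2 norm instead of the default L1.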
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;
Mat src, dst;
Mat gray_src;
int t1_value = 50;
int max_value = 255;
void Canny_Demo(int, void*);
int main(int argc, char** argv)
{
src = imread("E:/技能学习/opencv基础/lena.jpg");
if (src.empty())
{
cout << "could not load image" << endl;
return -1;
}
namedWindow("input image", WINDOW_AUTOSIZE);
namedWindow("output image", WINDOW_AUTOSIZE);
imshow("input image", src);
cvtColor(src, gray_src, COLOR_BGR2GRAY);
createTrackbar("Threshold Value: ", "output image", &t1_value, max_value, Canny_Demo);
Canny_Demo(0, 0);
waitKey(0);
destroyAllWindows();
return 0;
}
void Canny_Demo(int, void*)
{
Mat edg_output;
blur(gray_src, gray_src, Size(3, 3), Point(-1, -1)); //mean blur to reduce noise
Canny(gray_src, edg_output, t1_value, t1_value * 2, 3, false);
//src.copyTo(dst, mask) copies src to dst only where the mask is non-zero;
//pixels where the mask (the edge map) is 0 stay black, so the original colors appear along the detected edges.
/*dst.create(src.size(), src.type());
src.copyTo(dst, edg_output);*/
imshow("output image", edg_output);
}
Steps for detecting straight lines with the Hough transform:
1. Convert the color image to grayscale
2. Denoise with a Gaussian blur
3. Extract edges (gradient operators, Laplacian, Canny, Sobel)
4. Binarize (a pixel is an edge point if its gray value == 255)
5. Map the edge points into Hough space (keep two containers: one to visualize the Hough space, and an accumulator array to store the votes, because the vote counts at peaks can exceed the threshold by thousands and cannot be stored in a grayscale image)
6. Take local maxima above a threshold to filter out spurious lines
7. Draw the lines and mark the corner points
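The demo below uses the probabilistic variant HoughLinesP(), which returns line segments directly. For comparison, here is a minimal added sketch of the standard HoughLines(), which returns lines in (rho, theta) form and is closer to the accumulator description in step 5 (it reuses the bin1.jpg path from the demo):
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;
int main(int argc, char** argv)
{
Mat src = imread("E:/技能学习/opencv基础/bin1.jpg");
if (src.empty()) { cout << "could not load image" << endl; return -1; }
Mat edges, show;
Canny(src, edges, 100, 200);
cvtColor(edges, show, COLOR_GRAY2BGR);
vector<Vec2f> lines;
HoughLines(edges, lines, 1, CV_PI / 180.0, 150); //each detected line is (rho, theta)
for (size_t i = 0; i < lines.size(); i++)
{
float rho = lines[i][0], theta = lines[i][1];
double a = cos(theta), b = sin(theta);
double x0 = a * rho, y0 = b * rho;
//take two far-away points along the line direction for drawing
Point pt1(cvRound(x0 + 1000 * (-b)), cvRound(y0 + 1000 * a));
Point pt2(cvRound(x0 - 1000 * (-b)), cvRound(y0 - 1000 * a));
line(show, pt1, pt2, Scalar(0, 0, 255), 1, LINE_AA);
}
imshow("HoughLines demo", show);
waitKey(0);
return 0;
}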
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;
int main(int argc, char** argv)
{
Mat src, gray_src, dst;
src = imread("E:/技能学习/opencv基础/bin1.jpg");
if (src.empty())
{
cout << "could not load image" << endl;
return -1;
}
namedWindow("input image", WINDOW_AUTOSIZE);
namedWindow("output image", WINDOW_AUTOSIZE);
imshow("input image", src);
//Edge detection
Canny(src, gray_src, 100, 200);
cvtColor(gray_src, dst, COLOR_GRAY2BGR);
imshow("edge image", gray_src);
//HoughLinesP returns each detected segment as a 4-element vector
//(x1, y1, x2, y2): the two end points of the line segment.
vector<Vec4f> pline;
HoughLinesP(gray_src, pline, 1, CV_PI / 180.0, 10, 0, 10);
Scalar color = Scalar(0, 0, 255);
for (size_t i = 0; i < pline.size(); i++) //size_t is an unsigned integer type used for sizes
{
Vec4f hline = pline[i];
line(dst, Point(hline[0], hline[1]), Point(hline[2], hline[3]), color, 3, 8);
}
imshow("output image", dst);
waitKey(0);
destroyAllWindows();
return 0;
}
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;
int hough_value = 30;
Mat src, dst;
Mat moutput;
void hough_change(int, void*)
{
//Hough circle detection
vector<Vec3f> pcircles;
HoughCircles(moutput, pcircles, HOUGH_GRADIENT, 1, 30, 110, hough_value, 0, 0);
src.copyTo(dst);
for (size_t i = 0; i < pcircles.size(); i++)
{
Vec3f cc = pcircles[i];
//Draw the circle
circle(dst, Point(cc[0], cc[1]), cc[2], Scalar(0, 0, 255), 2, LINE_AA);
//Mark the circle center
circle(dst, Point(cc[0], cc[1]), 2, Scalar(198, 23, 255), 2, LINE_AA);
}
imshow("Hough circle image", dst);
}
int main(int argc, char** argv)
{
src = imread("E:/技能学习/opencv基础/test1.jpg");
if (src.empty())
{
cout << "could not load image" << endl;
return -1;
}
namedWindow("input image", WINDOW_AUTOSIZE);
namedWindow("Hough circle image", WINDOW_AUTOSIZE);
imshow("input image", src);
//Median blur
medianBlur(src, moutput, 3);
//Convert to grayscale
cvtColor(moutput, moutput, COLOR_BGR2GRAY);
createTrackbar("hough value", "Hough circle image", &hough_value, 200, hough_change);
hough_change(0, 0);
waitKey(0);
destroyAllWindows();
return 0;
}
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;
Mat src, dst, map_x, map_y;
int index = 0;
void updata_map(int, void*)
{
for (int row = 0; row < src.rows; row++)
{
for (int col = 0; col < src.cols; col++)
{
switch (index)
{
case 0:
if (col >= (src.cols * 0.25) && col <= (src.cols * 0.75) && row >= (src.rows * 0.25) && row <= (src.rows * 0.75))
{
map_x.at<float>(row, col) = 2 * (col - (src.cols * 0.25) + 0.5);
map_y.at<float>(row, col) = 2 * (row - (src.rows * 0.25) + 0.5);
}
else
{
map_x.at<float>(row, col) = 0;
map_y.at<float>(row, col) = 0;
}
break;
case 1: //flip horizontally (rows unchanged, columns mirrored)
{
map_x.at<float>(row, col) = (src.cols - col - 1);
map_y.at<float>(row, col) = row;
}
break;
case 2: //flip vertically (columns unchanged, rows mirrored)
{
map_x.at<float>(row, col) = col;
map_y.at<float>(row, col) = (src.rows - row - 1);
}
break;
case 3: //flip both rows and columns
{
map_x.at<float>(row, col) = (src.cols - col - 1);
map_y.at<float>(row, col) = (src.rows - row - 1);
}
break;
}
}
}
}
int main(int argc, char** argv)
{
src = imread("E:/技能学习/opencv基础/lena.jpg");
if (src.empty())
{
cout << "could not load image" << endl;
return -1;
}
namedWindow("input image", WINDOW_AUTOSIZE);
namedWindow("remap image", WINDOW_AUTOSIZE);
imshow("input image", src);
map_x.create(src.size(), CV_32FC1);
map_y.create(src.size(), CV_32FC1);
while (true)
{
int c = waitKey(500);
index = c % 4;
if (c == 27)
{
break;
}
updata_map(0, 0);
remap(src, dst, map_x, map_y, INTER_LINEAR, BORDER_CONSTANT, Scalar(0, 255, 255));
imshow("remap image", dst);
}
waitKey(0);
destroyAllWindows();
return 0;
}
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;
int main(int argc, char** argv)
{
Mat src, dst;
src = imread("E:/技能学习/opencv基础/lena.jpg");
if (src.empty())
{
cout << "could not load image" << endl;
return -1;
}
namedWindow("input image", WINDOW_AUTOSIZE);
imshow("input image", src);
cvtColor(src, src, COLOR_BGR2GRAY);
equalizeHist(src, dst);
imshow("equalizeHist image", dst);
waitKey(0);
destroyAllWindows();
return 0;
}
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;
int main(int argc, char** argv)
{
Mat src, dst;
src = imread("E:/技能学习/opencv基础/lena.jpg");
if (src.empty())
{
cout << "could not load image" << endl;
return -1;
}
namedWindow("input image", WINDOW_AUTOSIZE);
imshow("input image", src);
vector<Mat> bgr_planes;
//Split the image into its B, G, R channels
split(src, bgr_planes);
int histSize = 256;
float range[] = { 0, 256 }; //half-open range [0, 256)
const float *histRanges = { range };
Mat b_hist, g_hist, r_hist;
//Compute the histograms
calcHist(&bgr_planes[0], 1, 0, Mat(), b_hist, 1, &histSize, &histRanges, true, false);
calcHist(&bgr_planes[1], 1, 0, Mat(), g_hist, 1, &histSize, &histRanges, true, false);
calcHist(&bgr_planes[2], 1, 0, Mat(), r_hist, 1, &histSize, &histRanges, true, false);
int hist_h = 400;
int hist_w = 512;
int bin_w = hist_w / histSize;
//Normalize
normalize(b_hist, b_hist, 0, hist_h, NORM_MINMAX, -1, Mat()); //scale the bin counts into the range 0 - hist_h
normalize(g_hist, g_hist, 0, hist_h, NORM_MINMAX, -1, Mat());
normalize(r_hist, r_hist, 0, hist_h, NORM_MINMAX, -1, Mat());
//Draw the histograms
Mat histImage(hist_h, hist_w, CV_8UC3, Scalar(0, 0, 0));
for (int i = 1; i < histSize; i++)
{
line(histImage, Point((i - 1) * bin_w, hist_h - cvRound(b_hist.at<float>(i - 1))),
Point((i)* bin_w, hist_h - cvRound(b_hist.at<float>(i))), Scalar(255, 0, 0), 2, LINE_AA);
line(histImage, Point((i - 1) * bin_w, hist_h - cvRound(g_hist.at<float>(i - 1))),
Point((i)* bin_w, hist_h - cvRound(g_hist.at<float>(i))), Scalar(0, 255, 0), 2, LINE_AA);
line(histImage, Point((i - 1) * bin_w, hist_h - cvRound(r_hist.at<float>(i - 1))),
Point((i)* bin_w, hist_h - cvRound(r_hist.at<float>(i))), Scalar(0, 0, 255), 2, LINE_AA);
}
//Show the result
imshow("histogram", histImage);
waitKey(0);
destroyAllWindows();
return 0;
}
compareHist returns a double; here the double is converted to a string so it can be drawn on the images with putText.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <sstream>
using namespace std;
using namespace cv;
string convertToString(double d)
{
ostringstream os;
if (os << d)
{
return os.str();
}
return "invalid conversion";
}
int main(int argc, char** argv)
{
Mat base, test1, test2;
base = imread("E:/技能学习/opencv基础/test.jpg");
if (base.empty())
{
cout << "could not load image" << endl;
return -1;
}
test1 = imread("E:/技能学习/opencv基础/lena.jpg");
test2 = imread("E:/技能学习/opencv基础/lenanoise.jpg");
//Convert BGR to HSV
cvtColor(base, base, COLOR_BGR2HSV);
cvtColor(test1, test1, COLOR_BGR2HSV);
cvtColor(test2, test2, COLOR_BGR2HSV);
int h_bins = 50;
int s_bins = 60;
int histSize[] = { h_bins , s_bins };
//H ranges over 0 - 180, S over 0 - 256
float h_ranges[] = { 0, 180 };
float s_ranges[] = { 0, 256 };
const float *ranges[] = { h_ranges, s_ranges };
int channels[] = { 0, 1 };
//The computed histogram is multi-dimensional, so MatND is used
MatND hist_base; //ND stands for N-dimensional
MatND hist_test1;
MatND hist_test2;
//Compute the histograms and normalize them
calcHist(&base, 1, channels, Mat(), hist_base, 2, histSize, ranges, true, false);
normalize(hist_base, hist_base, 0, 1, NORM_MINMAX, -1, Mat()); //normalize the values into the range 0 - 1
calcHist(&test1, 1, channels, Mat(), hist_test1, 2, histSize, ranges, true, false);
normalize(hist_test1, hist_test1, 0, 1, NORM_MINMAX, -1, Mat()); //normalize the values into the range 0 - 1
calcHist(&test2, 1, channels, Mat(), hist_test2, 2, histSize, ranges, true, false);
normalize(hist_test2, hist_test2, 0, 1, NORM_MINMAX, -1, Mat()); //normalize the values into the range 0 - 1
//Compare the histograms
double basebase = compareHist(hist_base, hist_base, HISTCMP_CORREL); //base vs base, correlation
double basetest1 = compareHist(hist_base, hist_test1, HISTCMP_CORREL); //base vs test1, correlation
double basetest2 = compareHist(hist_base, hist_test2, HISTCMP_CORREL); //base vs test2, correlation
double test1test2 = compareHist(hist_test1, hist_test2, HISTCMP_CORREL); //test1 vs test2, correlation
//putText draws the comparison score onto each image
putText(base, convertToString(basebase), Point(50, 50), FONT_HERSHEY_COMPLEX, 1, Scalar(0, 0, 255), 2, LINE_AA);
putText(test1, convertToString(basetest1), Point(50, 50), FONT_HERSHEY_COMPLEX, 1, Scalar(0, 0, 255), 2, LINE_AA);
putText(test2, convertToString(basetest2), Point(50, 50), FONT_HERSHEY_COMPLEX, 1, Scalar(0, 0, 255), 2, LINE_AA);
//Show the results
imshow("base", base);
imshow("test1", test1);
imshow("test2", test2);
waitKey(0);
destroyAllWindows();
return 0;
}
The calcBackProject() function:
void calcBackProject( const Mat* images, int nimages,
const int* channels, InputArray hist,
OutputArray backProject, const float** ranges,
double scale = 1, bool uniform = true );
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;
Mat src, hsv, hue;
int bins = 12;
void Hist_And_Backprojection(int, void*);
int main(int argc, char** argv)
{
src = imread("E:/技能学习/opencv基础/t1.jpg");
if (src.empty())
{
cout << "could not load image" << endl;
return -1;
}
imshow("input image", src);
//Convert BGR to HSV
cvtColor(src, hsv, COLOR_BGR2HSV);
//Separate out the hue channel
hue.create(hsv.size(), hsv.depth());
int nchannels[] = { 0,0 };
mixChannels(&hsv, 1, &hue, 1, nchannels, 1);
createTrackbar("Histogram bins", "input image", &bins, 180, Hist_And_Backprojection);
Hist_And_Backprojection(0, 0);
waitKey(0);
destroyAllWindows();
return 0;
}
void Hist_And_Backprojection(int, void*)
{
float range[] = { 0,180 };
const float *HistRanges = { range };
Mat h_hist;
//Compute the hue histogram
calcHist(&hue, 1, 0, Mat(), h_hist, 1, &bins, &HistRanges, true, false);
//Normalize
normalize(h_hist, h_hist, 0, 255, NORM_MINMAX, -1, Mat());
//Compute the back projection image
Mat backPrjImage;
calcBackProject(&hue, 1, 0, h_hist, backPrjImage, &HistRanges, 1, true);
//Show the back projection
imshow("BackPro", backPrjImage);
//Draw the histogram
int hist_h = 400;
int hist_w = 512;
Mat histImage(hist_h, hist_w, CV_8UC3, Scalar(0, 0, 0));
int bin_w = (hist_w / bins);
for (int i = 1; i < bins; i++)
{
rectangle(histImage,
Point((i - 1) * bin_w, hist_h - cvRound(h_hist.at<float>(i - 1) * (400 / 255))),
//Point((i)* bin_w, hist_h - cvRound(h_hist.at<float>(i) * (400 / 255))),
Point(i * bin_w, hist_h),
Scalar(0, 0, 255), -1);
}
imshow("histogram", histImage);
return;
}
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;
Mat src, temp, dst;
int match_method = TM_SQDIFF;
int max_track = 5;
void max_demo(int, void*)
{
int width = src.cols - temp.cols + 1;
int heigh = src.rows - temp.rows + 1;
Mat result(heigh, width, CV_32FC1);
//Template matching
matchTemplate(src, temp, result, match_method, Mat());
//Normalize the result
normalize(result, result, 0, 1, NORM_MINMAX, -1, Mat());
Point minLoc;
Point maxLoc;
double min, max;
src.copyTo(dst);
Point temLoc;
//Find the minimum and maximum values and their locations
minMaxLoc(result, &min, &max, &minLoc, &maxLoc, Mat());
if (match_method == TM_SQDIFF || match_method == TM_SQDIFF_NORMED)
{
temLoc = minLoc;
}
else
{
temLoc = maxLoc;
}
//Draw a rectangle at the best-match location
rectangle(dst, Rect(temLoc.x, temLoc.y, temp.cols, temp.rows), Scalar(0, 0, 255), 2, 8);
rectangle(result, Rect(temLoc.x, temLoc.y, temp.cols, temp.rows), Scalar(0, 0, 255), 2, 8);
//Show the results
imshow("output image", dst);
imshow("result image", result);
}
int main(int argc, char** argv)
{
//Image to search in
src = imread("E:/技能学习/opencv基础/baboon.jpg");
//Template image
temp = imread("E:/技能学习/opencv基础/sample.jpg");
if (src.empty() || temp.empty())
{
cout << "could not load image" << endl;
return -1;
}
namedWindow("output image", WINDOW_AUTOSIZE);
imshow("input image", src);
imshow("input sample image",temp);
createTrackbar("匹配算法类型", "output image", &match_method, max_track, max_demo);
max_demo(0, 0);
waitKey(0);
destroyAllWindows();
return 0;
}
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;
Mat src, dst;
int threshold_value = 100;
int threshold_max = 255;
RNG rng(12345);
void Demo_Contours(int, void*)
{
Mat canny_output;
vector<vector<Point>> contoursPoints;
vector<Vec4i> hierachy; //each Vec4i stores the 4 hierarchy indices for one contour
//Edge detection
Canny(src, canny_output, threshold_value, threshold_value * 2, 3, false);
//Find contours
findContours(canny_output, contoursPoints, hierachy, RETR_TREE, CHAIN_APPROX_SIMPLE, Point(0, 0));
dst = Mat::zeros(src.size(), CV_8UC3);
for (size_t i = 0; i < contoursPoints.size(); i++)
{
//Generate a random color
Scalar color = Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255));
//Draw the contour
drawContours(dst, contoursPoints, i, color, 2, 8, hierachy, 0, Point(0, 0));
}
imshow("output image", dst);
}
int main(int argc, char** argv)
{
src = imread("E:/技能学习/opencv基础/HappyFish.jpg");
if (src.empty())
{
cout << "could not load image" << endl;
return -1;
}
namedWindow("input image", WINDOW_AUTOSIZE);
namedWindow("output image", WINDOW_AUTOSIZE);
imshow("input image", src);
//Convert to grayscale
cvtColor(src, src, COLOR_BGR2GRAY);
createTrackbar("Threshold_Value", "output image", &threshold_value, threshold_max, Demo_Contours);
Demo_Contours(0, 0);
waitKey(0);
destroyAllWindows();
return 0;
}
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;
Mat src, src_gray, dst;
int threshold_value = 100;
int threshold_max = 255;
RNG rng(12345);
void Threshold_Callback(int, void*)
{
Mat bin_output;
vector<vector<Point>> contoursPoints;
vector<Vec4i> hierachy; //each Vec4i stores the 4 hierarchy indices for one contour
//Convert to a binary image
threshold(src_gray, bin_output, threshold_value, threshold_max, THRESH_BINARY);
//Find contours
findContours(bin_output, contoursPoints, hierachy, RETR_TREE, CHAIN_APPROX_SIMPLE, Point(0, 0));
vector<vector<Point>> convexs(contoursPoints.size());
for (size_t i = 0; i < contoursPoints.size(); i++)
{
//Compute the convex hull of each contour
convexHull(contoursPoints[i], convexs[i], false, true);
}
dst = Mat::zeros(src.size(), CV_8UC3);
vector<Vec4i> empty(0);
for (size_t k = 0; k < contoursPoints.size(); k++)
{
//Draw the results
//Generate a random color
Scalar color = Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255));
//Draw the contour and its convex hull
drawContours(dst, contoursPoints, k, color, 2, LINE_8, hierachy, 0, Point(0, 0));
drawContours(dst, convexs, k, color, 2, LINE_8, empty, 0, Point(0, 0));
}
imshow("convex hull demo", dst);
return;
}
int main(int argc, char** argv)
{
src = imread("E:/技能学习/opencv基础/HappyFish.jpg");
if (src.empty())
{
cout << "could not load image" << endl;
return -1;
}
namedWindow("input image", WINDOW_AUTOSIZE);
namedWindow("convex hull demo ", WINDOW_AUTOSIZE);
imshow("input image", src);
//Convert to grayscale
cvtColor(src, src_gray, COLOR_BGR2GRAY);
//Denoise
blur(src_gray, src_gray, Size(3, 3), Point(-1, -1), BORDER_DEFAULT);
//Create the trackbar
createTrackbar("Threshold", "convex hull demo ", &threshold_value, threshold_max, Threshold_Callback);
Threshold_Callback(0, 0);
waitKey(0);
destroyAllWindows();
return 0;
}
First parameter, InputArray curve: the input point set.
Second parameter, OutputArray approxCurve: the output point set, the smallest set of points that approximates the input; drawn, it forms a polygon.
Third parameter, double epsilon: the approximation accuracy, i.e. the maximum allowed distance between the original curve and its approximation.
Fourth parameter, bool closed: if true the approximated curve is closed; if false it is left open.
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;
Mat src, gray_src, drawImg;
int threshold_v = 170;
int threshold_max = 255;
RNG rng(12345);
void Contours_Callback(int, void*)
{
Mat bin_output;
vector<vector<Point>> contours;
vector<Vec4i> hierachy; //each Vec4i stores the 4 hierarchy indices for one contour
//Convert to a binary image
threshold(gray_src, bin_output, threshold_v, threshold_max, THRESH_BINARY);
imshow("binary image", bin_output);
//Find contours
findContours(bin_output, contours, hierachy, RETR_TREE, CHAIN_APPROX_SIMPLE, Point(-1, -1));
vector<vector<Point>> contours_ploy(contours.size());
vector<Rect> ploy_rects(contours.size());
vector<Point2f> ccs(contours.size()); //Point2f stores x and y as float
vector<float> radius(contours.size());
vector<RotatedRect> minRects(contours.size());
vector<RotatedRect> myellipse(contours.size());
for (size_t i = 0; i < contours.size(); i++)
{
approxPolyDP((contours[i]), contours_ploy[i], 3, true);
ploy_rects[i] = boundingRect(contours_ploy[i]); //smallest upright bounding rectangle
minEnclosingCircle(contours_ploy[i], ccs[i], radius[i]); //minimum enclosing circle
if (contours_ploy[i].size() > 5)
{
myellipse[i] = fitEllipse(contours_ploy[i]); //best-fit ellipse
minRects[i] = minAreaRect(contours_ploy[i]); //minimum-area rotated rectangle
}
}
//src.copyTo(drawImg);
drawImg = Mat::zeros(src.size(), src.type());
Point2f pts[4];
for (size_t t = 0; t < contours.size(); t++)
{
//Generate a random color
Scalar color = Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255));
//Draw the results
//rectangle(drawImg, ploy_rects[t], color, 2, 8);
//circle(drawImg, ccs[t], radius[t], color, 2, 8);
if (contours_ploy[t].size() > 5)
{
ellipse(drawImg, myellipse[t], color, 1, 8);
minRects[t].points(pts); //copy the four corners of the rotated rectangle into pts
for (int r = 0; r < 4; r++)
{
line(drawImg, pts[r], pts[(r + 1) % 4], color, 1, 8); //modulo 4 connects the last corner back to the first
}
}
}
imshow("rectangle demo ", drawImg);
return;
}
int main(int argc, char** argv)
{
src = imread("E:/技能学习/opencv基础/hotball.jpg");
if (src.empty())
{
cout << "could not load image" << endl;
return -1;
}
namedWindow("input image", WINDOW_AUTOSIZE);
namedWindow("rectangle demo ", WINDOW_AUTOSIZE);
imshow("input image", src);
//Convert to grayscale
cvtColor(src, gray_src, COLOR_BGR2GRAY);
//Denoise
blur(gray_src, gray_src, Size(3, 3), Point(-1, -1), BORDER_DEFAULT);
//Create the trackbar
createTrackbar("Threshold", "rectangle demo ", &threshold_v, threshold_max, Contours_Callback,nullptr);
Contours_Callback(0, 0);
waitKey(0);
destroyAllWindows();
return 0;
}
The moments() function computes all moments up to the third order of a polygon or rasterized shape. The moments are used to compute the center of mass, area, principal axes and other shape features.
Function prototype:
Moments moments( InputArray array, //input image or point set
bool binaryImage = false ); //treat the image as binary; default false; if true, all non-zero pixels count as 1
Return value: a Moments object (a class holding the computed moments)
contourArea() computes the area of a whole contour or of part of a contour.
Function prototype:
double contourArea( InputArray contour, //input contour points
bool oriented = false ); //default false: return the absolute value of the area
arcLength() computes the perimeter of a closed contour or the length of a curve.
Function prototype:
double arcLength( InputArray curve, //input curve points
bool closed ); //whether the curve is closed
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;
Mat src, gray_src;
int threshold_value = 80;
int threshold_max = 255;
RNG rng(12345);
void Demo_Moment(int, void*)
{
Mat canny_output;
vector<vector<Point>> contours;
vector<Vec4i> hierachy;
//Edge detection
Canny(gray_src, canny_output, threshold_value, threshold_value * 2, 3, false);
//Find contours
findContours(canny_output, contours, hierachy, RETR_TREE, CHAIN_APPROX_SIMPLE, Point(0, 0));
vector<Moments> contours_moments(contours.size());
vector<Point2f> ccs(contours.size()); //Point2f stores x and y as float
for (size_t i = 0; i < contours.size(); i++)
{
//Compute the moments of each contour
contours_moments[i] = moments(contours[i]);
//Compute the center of mass from the moments
ccs[i] = Point2f(static_cast<float>(contours_moments[i].m10 / contours_moments[i].m00),
static_cast<float>(contours_moments[i].m01 / contours_moments[i].m00));
}
Mat drawImg;
src.copyTo(drawImg);
for (size_t i = 0; i < contours.size(); i++)
{
Scalar color = Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255));
cout << "center point x = " << ccs[i].x << " y = " << ccs[i].y << endl;
cout << i << " contours area:" << contourArea(contours[i]) << " arc length"
<< arcLength(contours[i],true) << endl;
drawContours(drawImg, contours, i, color, 2, 8, hierachy, 0, Point(0, 0));
circle(drawImg, ccs[i], 2, color, 2, 8);
}
imshow("image moment demo", drawImg);
}
int main(int argc, char** argv)
{
src = imread("E:/技能学习/opencv基础/hotball.jpg");
if (src.empty())
{
cout << "could not load image" << endl;
return -1;
}
namedWindow("input image", WINDOW_AUTOSIZE);
namedWindow("image moment demo", WINDOW_AUTOSIZE);
imshow("input image", src);
//Convert BGR to grayscale
cvtColor(src, gray_src, COLOR_BGR2GRAY);
//Gaussian blur
GaussianBlur(gray_src, gray_src, Size(3, 3), 0, 0);
//Create the trackbar
createTrackbar("threshold value: ", "image moment demo", &threshold_value, threshold_max, Demo_Moment);
Demo_Moment(0, 0);
waitKey(0);
destroyAllWindows();
return 0;
}
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;
int main(int argc, char** argv)
{
const int r = 100;
Mat src = Mat::zeros(r * 4, r * 4, CV_8UC1); //create a blank image
//The six vertices of a hexagon
vector<Point> vert(6);
vert[0] = Point(3 * r / 2, static_cast<int>(1.34 * r));
vert[1] = Point(1 * r, 2 * r);
vert[2] = Point(3 * r / 2, static_cast<int>(2.866 * r));
vert[3] = Point(5 * r / 2, static_cast<int>(2.866 * r));
vert[4] = Point(3 * r, 2 * r);
vert[5] = Point(5 * r / 2, static_cast<int>(1.34 * r));
for (int i = 0; i < 6; i++)
{
line(src, vert[i], vert[(i + 1) % 6], Scalar(255), 3, 8, 0); //draw the hexagon
}
namedWindow("input image", WINDOW_AUTOSIZE);
namedWindow("point polygon test demo", WINDOW_AUTOSIZE);
imshow("input image", src);
vector<vector<Point>> contours;
vector<Vec4i> hierachy;
Mat csrc;
src.copyTo(csrc);
//Find contours
findContours(csrc, contours, hierachy, RETR_TREE, CHAIN_APPROX_SIMPLE, Point(0, 0));
Mat raw_dist = Mat::zeros(csrc.size(), CV_32FC1);
for (int row = 0; row < raw_dist.rows; row++)
{
for (int col = 0; col < raw_dist.cols; col++)
{
double dist = pointPolygonTest(contours[0], Point2f(static_cast<float>(col), static_cast<float>(row)), true);
//contours[0]: the contour to test against
//Point2f(col, row): the test point
//true: return the signed distance; with false the result is only +1 (inside), 0 (on the edge) or -1 (outside)
raw_dist.at<float>(row, col) = static_cast<float>(dist); //store the distance for every pixel
}
}
double minvalue, maxvalue;
//minMaxLoc finds the global minimum and maximum in an array; it cannot be used on multi-channel arrays
minMaxLoc(raw_dist, &minvalue, &maxvalue, 0, 0, Mat());
Mat drawImg = Mat::zeros(src.size(), CV_8UC3);
for (int row = 0; row < drawImg.rows; row++)
{
for (int col = 0; col < drawImg.cols; col++)
{
float dist = raw_dist.at<float>(row, col); //read the distance of this pixel to the contour
if (dist > 0) //the point is inside the contour
{
drawImg.at<Vec3b>(row, col)[0] = (uchar)(abs(dist / maxvalue) * 255); //abs takes the absolute value
//blue: the farther inside the contour, the deeper the blue
}
else if (dist < 0) //the point is outside the contour
{
drawImg.at<Vec3b>(row, col)[2] = (uchar)(abs(dist / minvalue) * 255);
//red: the farther outside the contour, the deeper the red
}
else //the point lies on the contour (dist = 0)
{
drawImg.at<Vec3b>(row, col)[0] = (uchar)(abs(255 - dist));
drawImg.at<Vec3b>(row, col)[1] = (uchar)(abs(255 - dist));
drawImg.at<Vec3b>(row, col)[2] = (uchar)(abs(255 - dist));
// (255, 255, 255): white
}
}
}
imshow("point polygon test demo", drawImg);
waitKey(0);
destroyAllWindows();
return 0;
}
Definition of the distance transform: for each pixel, compute the distance to the nearest zero pixel, i.e. the shortest distance to a zero-valued pixel.
Method: first binarize the image, then assign to every pixel its distance (Manhattan or Euclidean) to the nearest background pixel, giving a distance map; points farther from the boundary appear brighter.
Common applications of the distance transform:
Common watershed algorithm:
- Based on the flooding analogy: treat the pixel values as terrain elevation, which forms peaks and valleys. Water is poured into the valleys until distinct watershed lines form between the peaks; this is the basic idea of the watershed algorithm.
//Image segmentation based on the distance transform and watershed (image segmentation)
//The goal of image segmentation is to divide the pixels into N clusters according to some rule, each cluster containing one class of pixels.
//Steps: 1. Turn the white background black, to prepare for the later transforms
//2. Sharpen the image (raise the contrast) with filter2D and a Laplacian kernel
//3. Convert to a binary image with threshold
//4. Distance transform
//5. Normalize the distance transform result to [0, 1]
//6. Threshold again to obtain the markers
//7. Erode so that each peak becomes a separate marker - erode
//8. Find contours - findContours
//9. Draw the contours - drawContours
//10. Watershed transform - watershed
//11. Color each segmented region and output the result
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;
int main(int argc, char** argv)
{
Mat src = imread("E:/技能学习/opencv基础/stars.png");
if (src.empty())
{
cout << "could not load image" << endl;
return -1;
}
namedWindow("input image", WINDOW_AUTOSIZE);
namedWindow("demo", WINDOW_AUTOSIZE);
//Show the input image
imshow("input image", src);
//1. Turn the white background of the input image black
for (int row = 0; row < src.rows; row++)
{
for (int col = 0; col < src.cols; col++)
{
if (src.at<Vec3b>(row, col)[0] > 200 && src.at<Vec3b>(row, col)[1] > 200
&& src.at<Vec3b>(row, col)[2] > 200)
{
src.at<Vec3b>(row, col)[0] = 0;
src.at<Vec3b>(row, col)[1] = 0;
src.at<Vec3b>(row, col)[2] = 0;
}
}
}
namedWindow("black background", WINDOW_AUTOSIZE);
imshow("black background", src);
//2. Sharpen the image (raise the contrast) with filter2D and a Laplacian kernel
Mat kernel = (Mat_<float>(3, 3) << 1, 1, 1, 1, -8, 1, 1, 1, 1);
Mat imgLaplance;
Mat sharpenImg = src;
/*CV_32F is used because the Laplacian produces floating-point values, both positive and negative, which can fall outside the 0 - 255 range*/
filter2D(src, imgLaplance, CV_32F, kernel, Point(-1, -1), 0, BORDER_DEFAULT);
src.convertTo(sharpenImg, CV_32F);
Mat resultImg = sharpenImg - imgLaplance;
resultImg.convertTo(resultImg, CV_8UC3);
imgLaplance.convertTo(imgLaplance, CV_8UC3);
imshow("sharpen image", resultImg);
src = resultImg; // copy back
//3. Convert to a binary image with threshold
Mat binaryImg;
//Convert to grayscale first
cvtColor(src, binaryImg, COLOR_BGR2GRAY);
threshold(binaryImg, binaryImg, 40, 255, THRESH_OTSU | THRESH_BINARY);
imshow("binary image", binaryImg);
//4. Distance transform
Mat distImg;
distanceTransform(binaryImg, distImg, DIST_L1, 3, 5);
//5. Normalize the distance transform result to the range [0, 1]
normalize(distImg, distImg, 0, 1, NORM_MINMAX);
imshow("distance result", distImg);
//6. Threshold again to obtain the markers
threshold(distImg, distImg, .2, 1, THRESH_BINARY);
imshow("distance binary result", distImg);
//7. Erode so that each peak becomes a separate marker
Mat k1 = Mat::ones(5, 5, CV_8UC1);
erode(distImg, distImg, k1, Point(-1, -1));
imshow("distance binary erode image", distImg);
//8. Find contours - findContours requires CV_8UC1 input, so convert the image first
Mat dist_8u;
distImg.convertTo(dist_8u, CV_8U);
vector<vector<Point>> contours;
findContours(dist_8u, contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE, Point(0, 0));
//9. Draw the contours - drawContours
Mat markers = Mat::zeros(src.size(), CV_8UC1);
for (size_t i = 0; i < contours.size(); i++)
{
drawContours(markers, contours, static_cast<int>(i), Scalar::all(static_cast<int>(i) + 1), -1); //thickness -1 fills the contour
}
circle(markers, Point(5, 5), 3, Scalar(255, 255, 255), -1);
imshow("markers", markers * 1000); //因为makers的值很低很低,所以这里乘1000
markers.convertTo(markers, CV_32SC1); // 如果使用CV_8UC1 ,watershed 函数会报错
//因为masker最后的边缘存储是-1,所以必须使用有符号的
//10.分水岭变换 watershed
watershed(src, markers);
Mat mark = Mat::zeros(markers.size(), CV_8UC1);
markers.convertTo(mark, CV_8UC1);
bitwise_not(mark, mark); //invert
imshow("watershed image", mark);
//Generate a random color for each contour
vector<Vec3b> colors;
for (size_t i = 0; i < contours.size(); i++) {
int r = theRNG().uniform(0, 255); //theRNG() is OpenCV's built-in random number generator
int g = theRNG().uniform(0, 255);
int b = theRNG().uniform(0, 255);
colors.push_back(Vec3b((uchar)b, (uchar)g, (uchar)r));
}
//11. Color each segmented region and output the result
Mat dst = Mat::zeros(markers.size(), CV_8UC3);
for (int row = 0; row < markers.rows; row++)
{
for (int col = 0; col < markers.cols; col++) {
int index = markers.at<int>(row, col);
if (index > 0 && index <= static_cast<int>(contours.size())) {
dst.at<Vec3b>(row, col) = colors[index - 1];
}
else {
dst.at<Vec3b>(row, col) = Vec3b(0, 0, 0);
}
}
}
imshow("Final Result", dst);
waitKey(0);
destroyAllWindows();
return 0;
}