This article introduces image contrast equalization based on intensity transformation algorithms. Contrast equalization suppresses uninformative content in an image and converts it into a form that is easier for a computer or a human to analyze, improving both its visual quality and its practical value. Specifically, this article shows how to perform contrast equalization with the intensity_transform module in OpenCV contrib. For the underlying theory, see Chapter 3 of the fourth edition of Gonzalez's classic textbook, Digital Image Processing.
This article requires the OpenCV contrib library. For how to build and install OpenCV contrib, see:
OpenCV_contrib库在windows下编译使用指南 (guide to building and using OpenCV_contrib on Windows)
All code for this article is available at:
OpenCV-Practical-Exercise
Image intensity refers to the value of a pixel in a single-channel image. In a grayscale image, the intensity is simply the gray level. In the RGB color space, it can be understood as the pixel value of each of the R, G and B channels, i.e. an RGB image contains three intensity channels. The same idea applies to other color spaces.
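As a quick illustration (this snippet is not from the original article and the file path sample.jpg is just a placeholder), the per-channel intensities of a color image can be inspected directly with OpenCV:

Python
import cv2

# Read a color image; the path is a placeholder
img = cv2.imread("sample.jpg")
# Split into three single-channel intensity images (OpenCV stores channels as B, G, R)
b, g, r = cv2.split(img)
print(img.shape, b.shape)  # e.g. (H, W, 3) for the color image, (H, W) per channel
# The intensity of pixel (0, 0) in each of the B, G and R channels
print("pixel (0, 0) BGR intensities:", img[0, 0])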
Contrast is the difference in brightness or color between objects in an image; it is what makes one object distinguishable from the others in the same field of view. The larger the contrast, the greater the color differences between objects in the image, and the more vivid the image looks.
This is shown in the figure below. Clearly, the left image has lower contrast: compared with the right image, it is much harder to make out the details it contains.
A real-life example is a sunny day versus a foggy day. On a sunny day everything looks clear and vivid, whereas in fog everything appears to have nearly the same intensity (dull and grayish). A sunny-day image therefore represents high contrast, and a foggy-day image represents low contrast.
A more reliable way to check whether an image has low or high contrast is to plot its histogram. The histograms of the two images above are shown below:
From the histogram of the left image it is clear that its intensity values are squeezed into a narrow range. Since nearly identical intensity values are hard to tell apart, the left image has low contrast. If this is not intuitive, look at the grayscale range figure below: the wider the range of gray levels, the easier they are to distinguish visually. Therefore, for high contrast, the image histogram should span the full dynamic range.
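As an aside (this snippet is not part of the original demo; it assumes Matplotlib is available and uses a placeholder file name), a grayscale histogram like the ones above can be plotted with just a few lines:

Python
import cv2
from matplotlib import pyplot as plt

# Read the image and convert it to grayscale; the path is a placeholder
img = cv2.imread("sample.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# 256-bin histogram over the full [0, 255] intensity range
hist = cv2.calcHist([gray], [0], None, [256], [0, 256])
# A narrow spike means low contrast; a spread over the whole range means high contrast
plt.plot(hist)
plt.xlabel("Intensity")
plt.ylabel("Pixel count")
plt.show()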
So far we have discussed contrast but not what causes low contrast. A low-contrast image can result from poor illumination, an imaging sensor with limited dynamic range, or even a wrong lens aperture setting during acquisition. This is why low-contrast images need enhancement.
The intensity_transform module in OpenCV contrib contains contrast-enhancement algorithms based on image intensity. The main algorithms are: autoscaling, log transformation, gamma correction, contrast stretching, and BIMEF.
The official code repository of the intensity_transform module is: intensity_transform
BIMEF is a C++ re-implementation of the original MATLAB algorithm. Compared with the original code, this implementation is slightly slower and does not produce identical results; in particular, the enhancement quality degrades in bright regions under certain conditions. In addition, OpenCV must be built with the Eigen library to run BIMEF, so this article does not cover it.
For a more detailed discussion of image enhancement, see: 图像增强综述 (an overview of image enhancement)
This article covers the four intensity-enhancement algorithms in the intensity_transform module of OpenCV contrib; all of them live in the intensity_transform module itself. Both C++ and Python implementations are provided. The calling interfaces of the different algorithms are as follows:
C++
// Apply intensity transformations
Mat imgAutoscaled, imgLog;
// Autoscaling
autoscaling(g_image, imgAutoscaled);
// Gamma correction
gammaCorrection(g_image, g_imgGamma, g_gamma / 100.0f);
// Log transformation
logTransform(g_image, imgLog);
// Contrast stretching
contrastStretching(g_image, g_contrastStretch, g_r1, g_s1, g_r2, g_s2);
Python
# Apply intensity transformations
# Autoscaling
imgAutoscaled = np.zeros(g_image.shape, np.uint8)
cv2.intensity_transform.autoscaling(g_image, imgAutoscaled)
# Gamma correction
g_imgGamma = np.zeros(g_image.shape, np.uint8)
cv2.intensity_transform.gammaCorrection(g_image, g_imgGamma, g_gamma / 100.0)
# Log transformation
imgLog = np.zeros(g_image.shape, np.uint8)
cv2.intensity_transform.logTransform(g_image, imgLog)
# Contrast stretching
g_contrastStretch = np.zeros(g_image.shape, np.uint8)
cv2.intensity_transform.contrastStretching(g_image, g_contrastStretch, g_r1, g_s1, g_r2, g_s2)
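For reference, the point operations behind these four calls are the standard intensity transformations from Chapter 3 of Gonzalez's Digital Image Processing, with $r$ the input intensity and $s$ the output intensity in $[0, 255]$. These are the textbook forms, not copied from the module's source, so the exact scaling constants used inside intensity_transform may differ slightly:

$$
\begin{aligned}
\text{Autoscaling:}\quad & s = 255\cdot\frac{r - r_{\min}}{r_{\max} - r_{\min}} \\
\text{Log transformation:}\quad & s = \frac{255}{\log(1 + r_{\max})}\cdot\log(1 + r) \\
\text{Gamma correction:}\quad & s = 255\cdot\left(\frac{r}{255}\right)^{\gamma} \\
\text{Contrast stretching:}\quad & s =
\begin{cases}
\dfrac{s_1}{r_1}\,r, & 0 \le r < r_1 \\[6pt]
\dfrac{s_2 - s_1}{r_2 - r_1}\,(r - r_1) + s_1, & r_1 \le r < r_2 \\[6pt]
\dfrac{255 - s_2}{255 - r_2}\,(r - r_2) + s_2, & r_2 \le r \le 255
\end{cases}
\end{aligned}
$$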
The parameters to set differ between methods:
- autoscaling and logTransform take no parameters beyond the input and output images;
- gammaCorrection takes the gamma exponent (in the demo below, the trackbar value g_gamma divided by 100);
- contrastStretching takes the two control points (r1, s1) and (r2, s2) of its piecewise-linear mapping.
In addition, to compare the effect of the different intensity-enhancement methods, a contrast metric is computed for each result.
The metric used is RMS contrast, taken from How to calculate the contrast of an image?
The idea is simple: convert the image to grayscale and compute the standard deviation of its pixel values.
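Written out, for a grayscale image with $N$ pixels, intensities $I_i$, and mean intensity $\bar{I}$, the RMS contrast is simply the standard deviation of the intensities:

$$
C_{\mathrm{RMS}} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(I_i - \bar{I}\right)^{2}}, \qquad \bar{I} = \frac{1}{N}\sum_{i=1}^{N} I_i
$$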
The demo program is straightforward: it reads an input image and applies the different enhancement algorithms to it. For the algorithms with tunable parameters, trackbars are created to adjust the inputs interactively. Note that the input image must be a three-channel color image. The C++ and Python code follows:
C++
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/intensity_transform.hpp>
using namespace std;
using namespace cv;
using namespace cv::intensity_transform;
// Compute RMS contrast
double rmsContrast(Mat srcImg)
{
    Mat dstImg, dstImg_mean, dstImg_std;
    // Convert to grayscale
    cvtColor(srcImg, dstImg, COLOR_BGR2GRAY);
    // Compute the mean and standard deviation of the image
    meanStdDev(dstImg, dstImg_mean, dstImg_std);
    // The standard deviation is the RMS contrast
    double contrast = dstImg_std.at<double>(0, 0);
    return contrast;
}
// Wrap the globals in an anonymous namespace to avoid polluting user variables
namespace
{
    // Global variables
    Mat g_image;

    // Gamma correction variables
    int g_gamma = 40;
    const int g_gammaMax = 500;
    Mat g_imgGamma;
    const std::string g_gammaWinName = "Gamma Correction";

    // Contrast stretching variables
    Mat g_contrastStretch;
    int g_r1 = 70;
    int g_s1 = 15;
    int g_r2 = 120;
    int g_s2 = 240;
    const std::string g_contrastWinName = "Contrast Stretching";

    // Gamma correction trackbar callback
    static void onTrackbarGamma(int, void*)
    {
        float gamma = g_gamma / 100.0f;
        gammaCorrection(g_image, g_imgGamma, gamma);
        imshow(g_gammaWinName, g_imgGamma);
        cout << g_gammaWinName << ": " << rmsContrast(g_imgGamma) << endl;
    }

    // Contrast stretching trackbar callbacks
    static void onTrackbarContrastR1(int, void*)
    {
        contrastStretching(g_image, g_contrastStretch, g_r1, g_s1, g_r2, g_s2);
        imshow(g_contrastWinName, g_contrastStretch);
        cout << g_contrastWinName << ": " << rmsContrast(g_contrastStretch) << endl;
    }

    static void onTrackbarContrastS1(int, void*)
    {
        contrastStretching(g_image, g_contrastStretch, g_r1, g_s1, g_r2, g_s2);
        imshow(g_contrastWinName, g_contrastStretch);
        cout << g_contrastWinName << ": " << rmsContrast(g_contrastStretch) << endl;
    }

    static void onTrackbarContrastR2(int, void*)
    {
        contrastStretching(g_image, g_contrastStretch, g_r1, g_s1, g_r2, g_s2);
        imshow(g_contrastWinName, g_contrastStretch);
        cout << g_contrastWinName << ": " << rmsContrast(g_contrastStretch) << endl;
    }

    static void onTrackbarContrastS2(int, void*)
    {
        contrastStretching(g_image, g_contrastStretch, g_r1, g_s1, g_r2, g_s2);
        imshow(g_contrastWinName, g_contrastStretch);
        cout << g_contrastWinName << ": " << rmsContrast(g_contrastStretch) << endl;
    }
}
int main()
{
    // Image path
    const std::string inputFilename = "./image/tree.jpg";

    // Read the input image
    g_image = imread(inputFilename);
    if (g_image.empty())
    {
        printf("image is empty");
        return 0;
    }

    // Create trackbars
    namedWindow(g_gammaWinName);
    // Gamma correction trackbar
    createTrackbar("Gamma value", g_gammaWinName, &g_gamma, g_gammaMax, onTrackbarGamma);

    // Contrast stretching trackbars
    namedWindow(g_contrastWinName);
    createTrackbar("Contrast R1", g_contrastWinName, &g_r1, 256, onTrackbarContrastR1);
    createTrackbar("Contrast S1", g_contrastWinName, &g_s1, 256, onTrackbarContrastS1);
    createTrackbar("Contrast R2", g_contrastWinName, &g_r2, 256, onTrackbarContrastR2);
    createTrackbar("Contrast S2", g_contrastWinName, &g_s2, 256, onTrackbarContrastS2);

    // Apply intensity transformations
    Mat imgAutoscaled, imgLog;
    // Autoscaling
    autoscaling(g_image, imgAutoscaled);
    // Gamma correction
    gammaCorrection(g_image, g_imgGamma, g_gamma / 100.0f);
    // Log transformation
    logTransform(g_image, imgLog);
    // Contrast stretching
    contrastStretching(g_image, g_contrastStretch, g_r1, g_s1, g_r2, g_s2);

    // Display intensity transformation results
    imshow("Original Image", g_image);
    cout << "Original Image: " << rmsContrast(g_image) << endl;
    imshow("Autoscale", imgAutoscaled);
    cout << "Autoscale: " << rmsContrast(imgAutoscaled) << endl;
    imshow(g_gammaWinName, g_imgGamma);
    cout << g_gammaWinName << ": " << rmsContrast(g_imgGamma) << endl;
    imshow("Log Transformation", imgLog);
    cout << "Log Transformation: " << rmsContrast(imgLog) << endl;
    imshow(g_contrastWinName, g_contrastStretch);
    cout << g_contrastWinName << ": " << rmsContrast(g_contrastStretch) << endl;

    waitKey(0);
    return 0;
}
Python
# -*- coding: utf-8 -*-
"""
Created on Thu Sep 10 18:48:56 2020
@author: luohenyueji
"""
import cv2
import numpy as np
# ----- Global variables
# Input image
g_image = np.zeros((3, 3, 3), np.uint8)
# Gamma correction variables
g_gamma = 40
g_gammaMax = 500
g_gammaWinName = "Gamma Correction"
# Contrast stretching variables
g_r1 = 70
g_s1 = 15
g_r2 = 120
g_s2 = 240
g_contrastWinName = "Contrast Stretching"
# Gamma correction trackbar callback
def onTrackbarGamma(x):
    # Update the global gamma value from the trackbar position
    global g_gamma
    g_gamma = x
    gamma = g_gamma / 100.0
    g_imgGamma = np.zeros(g_image.shape, np.uint8)
    cv2.intensity_transform.gammaCorrection(g_image, g_imgGamma, gamma)
    cv2.imshow(g_gammaWinName, g_imgGamma)
    print(g_gammaWinName + ": " + str(rmsContrast(g_imgGamma)))
# Contrast stretching trackbar callbacks
def onTrackbarContrastR1(x):
    global g_r1
    g_r1 = x
    g_contrastStretch = np.zeros(g_image.shape, np.uint8)
    cv2.intensity_transform.contrastStretching(g_image, g_contrastStretch, g_r1, g_s1, g_r2, g_s2)
    cv2.imshow(g_contrastWinName, g_contrastStretch)
    print(g_contrastWinName + ": " + str(rmsContrast(g_contrastStretch)))


def onTrackbarContrastS1(x):
    global g_s1
    g_s1 = x
    g_contrastStretch = np.zeros(g_image.shape, np.uint8)
    cv2.intensity_transform.contrastStretching(g_image, g_contrastStretch, g_r1, g_s1, g_r2, g_s2)
    cv2.imshow(g_contrastWinName, g_contrastStretch)
    print(g_contrastWinName + ": " + str(rmsContrast(g_contrastStretch)))


def onTrackbarContrastR2(x):
    global g_r2
    g_r2 = x
    g_contrastStretch = np.zeros(g_image.shape, np.uint8)
    cv2.intensity_transform.contrastStretching(g_image, g_contrastStretch, g_r1, g_s1, g_r2, g_s2)
    cv2.imshow(g_contrastWinName, g_contrastStretch)
    print(g_contrastWinName + ": " + str(rmsContrast(g_contrastStretch)))


def onTrackbarContrastS2(x):
    global g_s2
    g_s2 = x
    g_contrastStretch = np.zeros(g_image.shape, np.uint8)
    cv2.intensity_transform.contrastStretching(g_image, g_contrastStretch, g_r1, g_s1, g_r2, g_s2)
    cv2.imshow(g_contrastWinName, g_contrastStretch)
    print(g_contrastWinName + ": " + str(rmsContrast(g_contrastStretch)))
# Compute RMS contrast: the standard deviation of the grayscale image
def rmsContrast(srcImg):
    dstImg = cv2.cvtColor(srcImg, cv2.COLOR_BGR2GRAY)
    contrast = dstImg.std()
    return contrast
def main():
    # Image path
    inputFilename = "./image/car.png"

    # Read the input image
    global g_image
    g_image = cv2.imread(inputFilename)
    if g_image is None:
        print("image is empty")
        return

    # Create trackbars
    cv2.namedWindow(g_gammaWinName)
    # Gamma correction trackbar
    cv2.createTrackbar("Gamma value", g_gammaWinName, g_gamma, g_gammaMax, onTrackbarGamma)

    # Contrast stretching trackbars
    cv2.namedWindow(g_contrastWinName)
    cv2.createTrackbar("Contrast R1", g_contrastWinName, g_r1, 256, onTrackbarContrastR1)
    cv2.createTrackbar("Contrast S1", g_contrastWinName, g_s1, 256, onTrackbarContrastS1)
    cv2.createTrackbar("Contrast R2", g_contrastWinName, g_r2, 256, onTrackbarContrastR2)
    cv2.createTrackbar("Contrast S2", g_contrastWinName, g_s2, 256, onTrackbarContrastS2)

    # Apply intensity transformations
    # Autoscaling
    imgAutoscaled = np.zeros(g_image.shape, np.uint8)
    cv2.intensity_transform.autoscaling(g_image, imgAutoscaled)
    # Gamma correction
    g_imgGamma = np.zeros(g_image.shape, np.uint8)
    cv2.intensity_transform.gammaCorrection(g_image, g_imgGamma, g_gamma / 100.0)
    # Log transformation
    imgLog = np.zeros(g_image.shape, np.uint8)
    cv2.intensity_transform.logTransform(g_image, imgLog)
    # Contrast stretching
    g_contrastStretch = np.zeros(g_image.shape, np.uint8)
    cv2.intensity_transform.contrastStretching(g_image, g_contrastStretch, g_r1, g_s1, g_r2, g_s2)

    # Display results
    cv2.imshow("Original Image", g_image)
    print("Original Image: " + str(rmsContrast(g_image)))
    cv2.imshow("Autoscale", imgAutoscaled)
    print("Autoscale: " + str(rmsContrast(imgAutoscaled)))
    cv2.imshow(g_gammaWinName, g_imgGamma)
    print(g_gammaWinName + ": " + str(rmsContrast(g_imgGamma)))
    cv2.imshow("Log Transformation", imgLog)
    print("Log Transformation: " + str(rmsContrast(imgLog)))
    cv2.imshow(g_contrastWinName, g_contrastStretch)
    print(g_contrastWinName + ": " + str(rmsContrast(g_contrastStretch)))

    cv2.waitKey(0)
if __name__ == '__main__':
    main()
The test images come partly from intensity_transformations. Four scenes were tested; for Gamma Correction and Contrast Stretching, the results shown are the ones I found best after manually tuning the parameters. The results are as follows:
Scene 1: car
Type | Result |
---|---|
Original | |
Autoscaling | |
Gamma Correction | |
Contrast Stretching | |
Log Transformation | |
Scene 2: tree
Type | Result |
---|---|
Original | |
Autoscaling | |
Gamma Correction | |
Contrast Stretching | |
Log Transformation | |
Scene 3: xray
Type | Result |
---|---|
Original | |
Autoscaling | |
Gamma Correction | |
Contrast Stretching | |
Log Transformation | |
Scene 4: indicator
Type | Result |
---|---|
Original | |
Autoscaling | |
Gamma Correction | |
Contrast Stretching | |
Log Transformation | |
To summarize how the different algorithms behave across the four scenes: if contrast is not critical, or the pipeline must run automatically, autoscaling is sufficient for the vast majority of cases, and in practice it is also the most widely used option. When image contrast really matters, the usual recipe is automatic parameter search + Contrast Stretching + a contrast metric: process the image with Contrast Stretching under different parameter settings, score each result (for example with RMS contrast), and keep the highest-scoring one as the final output. This costs some processing time, but it is a solid solution. In practical scenarios, combining autoscaling with such a parameter search over Contrast Stretching and picking the result with the best contrast is usually enough.
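As an illustration of that last idea (this sketch is not from the original article; the function names, step size, and search ranges are arbitrary choices), one could grid-search the contrastStretching control points and keep the result with the highest RMS contrast:

Python
import cv2
import numpy as np


def rms_contrast(img):
    # RMS contrast: standard deviation of the grayscale image
    return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).std()


def best_contrast_stretching(image, step=32):
    # Grid-search (r1, s1, r2, s2) and keep the highest-RMS-contrast result;
    # the step size and ranges are arbitrary and only meant as an example
    best_score, best_result, best_params = -1.0, None, None
    out = np.zeros(image.shape, np.uint8)
    for r1 in range(step, 256 - step, step):
        for r2 in range(r1 + step, 256, step):
            for s1 in range(0, 128, step):
                for s2 in range(128, 256, step):
                    cv2.intensity_transform.contrastStretching(image, out, r1, s1, r2, s2)
                    score = rms_contrast(out)
                    if score > best_score:
                        best_score, best_result, best_params = score, out.copy(), (r1, s1, r2, s2)
    return best_result, best_params, best_score


if __name__ == "__main__":
    img = cv2.imread("./image/car.png")  # path is a placeholder
    result, params, score = best_contrast_stretching(img)
    print("best (r1, s1, r2, s2):", params, "RMS contrast:", score)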