The Evolution of Image Segmentation
Image Segmentation in the Deep Learning Era
I have recently been reading a book on medical image processing that devotes an entire chapter to medical image segmentation techniques.
Compared with my two earlier articles, it narrows the focus to the medical domain. Medical images follow the same general principles as images from other fields,
but they also have their own characteristics, so I want to write a series of articles specifically on medical image segmentation, in the hope that it will help colleagues working on medical algorithms.
The series consists of the following four parts:
**Overview and Basic Theory of Medical Image Segmentation**
Medical Image Segmentation Based on Fuzzy Clustering
Medical Image Segmentation Based on Deformable Models
Medical Image Segmentation Based on Neural Networks
This article covers the first part: an overview and the basic theory of medical image segmentation.
1 Overview:
Image segmentation separates regions of interest from the background. In the medical domain it is used to delineate anatomical structures and tissue classes, for example organ segmentation and lesion segmentation; it is also used for visualization and image compression, making it a very important area of image processing.
Image segmentation can essentially be approached in two ways: one identifies all the pixels belonging to the region of interest, the other first identifies the boundary of the region of interest. The first approach relies mainly on pixel intensity, although other pixel attributes, such as texture, are also used; boundary-based methods rely mainly on the image gradient.
2 Threshold-Based Segmentation:
There are many threshold-based segmentation techniques. Some are based on the image histogram, for example choosing a threshold from the histogram of the whole image;
this is called global thresholding. Others are based on local image properties, such as the local mean and variance
or the local gradient; this is called local thresholding.
2.1 Global Thresholding:
Global thresholding assumes that the image histogram is bimodal, so that a single threshold splits the image into two classes.
For example, with a threshold T, each pixel f(x, y) is assigned to g(x, y) = 1 if f(x, y) ≥ T and g(x, y) = 0 otherwise.
Below is an example of global threshold segmentation.
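As a minimal sketch of this rule (the file name input.png and the threshold value 128 are placeholder assumptions), OpenCV's cv::threshold applies a fixed global threshold directly:

#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>

int main()
{
    // Load the image as 8-bit grayscale (input.png is a placeholder file name).
    cv::Mat src = cv::imread("input.png", cv::IMREAD_GRAYSCALE);
    if (src.empty())
        return -1;

    // THRESH_BINARY implements g(x, y) = 255 if f(x, y) > T, 0 otherwise, here with T = 128.
    cv::Mat dst;
    cv::threshold(src, dst, 128, 255, cv::THRESH_BINARY);

    cv::imwrite("global_threshold.png", dst);
    return 0;
}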
Besides choosing the threshold from the histogram, there are many other ways to compute it. One of them is a classification model that minimizes the probability of misclassification; the following paper describes MR brain segmentation based on this approach:
MR quantification of cerebral ventricular volume using a semiautomated algorithm
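The paper's exact criterion is not reproduced here, but as a rough illustration of automatic threshold selection, Otsu's method (a related criterion that minimizes intra-class variance rather than the misclassification probability directly) is built into OpenCV through the THRESH_OTSU flag. A minimal sketch, assuming an 8-bit single-channel input:

#include <opencv2/imgproc.hpp>

// Returns the automatically selected threshold and writes the binary result into dst.
// src is assumed to be a single-channel 8-bit grayscale image.
double otsuThreshold(const cv::Mat& src, cv::Mat& dst)
{
    // With THRESH_OTSU the threshold argument (0 here) is ignored and chosen automatically.
    return cv::threshold(src, dst, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
}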
The drawbacks of global thresholding are also obvious: it requires good contrast between the object and the background. If the contrast is low or the image is noisy, the result is unsatisfactory, and in practice few images have a strictly bimodal histogram.
2.2 Local Thresholding:
When a global threshold is not suitable, a local threshold can be used instead. A local threshold can be obtained as follows (a code sketch is given after the two steps):
1. Divide the whole image into several sub-images and compute a threshold for each sub-image separately.
2. In the simplest case, take the mean of all the sub-image thresholds as the segmentation threshold for the whole image.
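A minimal sketch of these two steps (the 64x64 block size and the use of Otsu's criterion inside each block are assumptions made for illustration):

#include <opencv2/imgproc.hpp>
#include <algorithm>
#include <vector>

// Step 1: compute one threshold per block; step 2: average the block thresholds.
// src is assumed to be a single-channel 8-bit grayscale image.
double blockAveragedThreshold(const cv::Mat& src, int blockSize = 64)
{
    std::vector<double> blockThresholds;
    for (int y = 0; y < src.rows; y += blockSize)
    {
        for (int x = 0; x < src.cols; x += blockSize)
        {
            // Clamp each block to the image border.
            cv::Rect roi(x, y, std::min(blockSize, src.cols - x), std::min(blockSize, src.rows - y));
            cv::Mat block = src(roi), ignored;
            // Per-block threshold via Otsu's criterion (any per-block rule could be substituted).
            blockThresholds.push_back(
                cv::threshold(block, ignored, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU));
        }
    }
    double sum = 0.0;
    for (double t : blockThresholds) sum += t;
    return blockThresholds.empty() ? 0.0 : sum / blockThresholds.size();
}

OpenCV also ships cv::adaptiveThreshold, which goes one step further and computes a per-pixel threshold from each pixel's local neighborhood.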
2.3 Image Preprocessing:
If the image has low contrast and blurred boundaries, preprocessing can be used to improve the histogram. Median filtering, for example, can bring out a bimodal shape in the histogram; besides median filtering there are other methods, which are not listed one by one here.
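A minimal sketch of this preprocessing step (the kernel size of 5 is an assumed value), using cv::medianBlur and then recomputing the histogram:

#include <opencv2/imgproc.hpp>

// Median-filter the image and return the histogram of the filtered result.
// src is assumed to be a single-channel 8-bit grayscale image.
cv::Mat medianFilteredHistogram(const cv::Mat& src)
{
    cv::Mat denoised;
    cv::medianBlur(src, denoised, 5); // remove impulse noise and sharpen the histogram peaks

    cv::Mat hist;
    int histSize = 256;               // 256 bins over [0, 256)
    float range[] = { 0, 256 };
    const float* histRange = { range };
    cv::calcHist(&denoised, 1, 0, cv::Mat(), hist, 1, &histSize, &histRange);
    return hist;
}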
OpenCV implements the basic threshold-based segmentation techniques, as the following demo shows.
#include "opencv2/imgproc.hpp"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui.hpp"
#include <iostream>
using namespace cv;
using std::cout;
/// Global variables
int threshold_value = 0;
int threshold_type = 3;
int const max_value = 255;
int const max_type = 4;
int const max_binary_value = 255;
Mat src, src_gray, dst;
const char* window_name = "Threshold Demo";
const char* trackbar_type = "Type: \n 0: Binary \n 1: Binary Inverted \n 2: Truncate \n 3: To Zero \n 4: To Zero Inverted";
const char* trackbar_value = "Value";
//![Threshold_Demo]
/**
* @function Threshold_Demo
*/
static void Threshold_Demo( int, void* )
{
/* 0: Binary
1: Binary Inverted
2: Threshold Truncated
3: Threshold to Zero
4: Threshold to Zero Inverted
*/
threshold( src_gray, dst, threshold_value, max_binary_value, threshold_type ); // use the type selected by the trackbar
imshow( window_name, dst );
}
//![Threshold_Demo]
/**
* @function main
*/
int main( int argc, char** argv )
{
//! [load]
String imageName("D:\\basketball2.png"); // by default
if (argc > 1)
{
imageName = argv[1];
}
src = imread( samples::findFile( imageName ), IMREAD_COLOR ); // Load an image
if (src.empty())
{
cout << "Cannot read the image: " << imageName << std::endl;
return -1;
}
cvtColor( src, src_gray, COLOR_BGR2GRAY ); // Convert the image to Gray
//cvtColor( src, src_gray, COLOR_BGR2HSV); // Convert the image to Gray
//! [load]
//! [window]
namedWindow( window_name, WINDOW_AUTOSIZE ); // Create a window to display results
//! [window]
//! [trackbar]
createTrackbar( trackbar_type,
window_name, &threshold_type,
max_type, Threshold_Demo ); // Create a Trackbar to choose type of Threshold
createTrackbar( trackbar_value,
window_name, &threshold_value,
max_value, Threshold_Demo ); // Create a Trackbar to choose Threshold value
//! [trackbar]
Threshold_Demo( 0, 0 ); // Call the function to initialize
/// Wait until the user finishes the program
waitKey();
return 0;
}
3 Region Growing
Region growing, also called region merging, looks for groups of pixels with similar gray values. A seed point is set manually or automatically, and the neighboring pixels are examined repeatedly: if a neighbor is similar to the seed, it is absorbed into the region, and the search continues until no more similar pixels can be found. The similarity measure is crucial in this process; it can compare single pixel gray values, or compare the mean of a small region centered on each pixel.
The example below shows a simple region-growing implementation built with OpenCV.
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/imgcodecs.hpp"
#include <iostream>
#include "math.h"
using namespace cv;
using namespace std;
Point recent_Point, recent_Point1;
Mat RegionGrow(Mat src, Point2i pt, int th)
{
Point2i ptGrowing; // position of the candidate point to grow into
int nGrowLable = 0; // flag: has this point already been grown?
int nSrcValue = 0; // gray value of the seed point
int nCurValue = 0; // gray value of the current candidate point
Mat matDst = Mat::zeros(src.size(), CV_8UC1); // blank (black) output mask
// growth directions (8-connected neighborhood)
int DIR[8][2] = { { -1, -1 }, { 0, -1 }, { 1, -1 }, { 1, 0 }, { 1, 1 }, { 0, 1 }, { -1, 1 }, { -1, 0 } };
vector<Point2i> vcGrowPt; // stack of growth points
vcGrowPt.push_back(pt); // push the seed point onto the stack
matDst.at<uchar>(pt.y, pt.x) = 255; // mark the seed point
nSrcValue = src.at<uchar>(pt.y, pt.x); // record the seed point's gray value
while (!vcGrowPt.empty()) // keep growing while the stack is not empty
{
pt = vcGrowPt.back(); // take one growth point off the stack
vcGrowPt.pop_back();
// try to grow into each of the 8 neighboring directions (DIR has exactly 8 entries)
for (int i = 0; i < 8; ++i)
{
ptGrowing.x = pt.x + DIR[i][0];
ptGrowing.y = pt.y + DIR[i][1];
// skip neighbors that fall outside the image
if (ptGrowing.x < 0 || ptGrowing.y < 0 || ptGrowing.x >(src.cols - 1) || (ptGrowing.y > src.rows - 1))
continue;
nGrowLable = matDst.at<uchar>(ptGrowing.y, ptGrowing.x); // mask value at the candidate point
if (nGrowLable == 0) // only consider points that have not been grown yet
{
nCurValue = src.at<uchar>(ptGrowing.y, ptGrowing.x);
if (abs(nSrcValue - nCurValue) < th) // grow if the difference is within the threshold
{
matDst.at<uchar>(ptGrowing.y, ptGrowing.x) = 255; // mark the point as part of the region (white)
vcGrowPt.push_back(ptGrowing); // push the new point onto the growth stack
}
}
}
}
return matDst.clone();
}
void On_mouse(int event, int x, int y, int flags, void*) // on every left click, record the clicked position as the seed point
{
if (event == EVENT_LBUTTONDOWN)
{
recent_Point = Point(x, y);
cout << "img_x" << " " << recent_Point.x << " " << "img_y" << " " << recent_Point.y << endl;
//circle(srcimg, recent_Point, 2, Scalar(0, 0, 255), -1);
//imshow("srcImg", srcimg);
}
}
int main() //区域生长
{
Mat binaryimg, greyimg;
Mat regiongrow, regiongrow1, regiongrow2;
Mat dst;
int th = 10;
Mat src = imread("1.jpg");
cvtColor(src, greyimg, COLOR_BGR2GRAY); // convert to grayscale
Mat temp_regiongrow = Mat::zeros(src.size(), CV_8UC1); // blank (black) image holding the previous result
// convert to a binary image
threshold(greyimg, binaryimg, 200, 255, THRESH_BINARY);
namedWindow("srcImg", 0);
imshow("srcImg", src);
namedWindow("binaryImg", 0);
imshow("binaryImg", binaryimg);
cout << "select one point in binaryImg" << endl;
setMouseCallback("binaryImg", On_mouse);
for (int i = 0;i < 1;i++) {
char c = (char)waitKey(0);
cout << "select one point in binaryImg" << endl;
setMouseCallback("binaryImg", On_mouse);
if (c == 'b') {
regiongrow1 = RegionGrow(binaryimg, recent_Point, th);
bitwise_or(regiongrow1, temp_regiongrow, regiongrow); // OR with the previously segmented mask
temp_regiongrow = regiongrow1.clone(); // keep the current mask for the next round
}
if (!regiongrow.empty()) { // display only if a region has actually been grown
bitwise_and(greyimg, regiongrow, dst); // AND the mask with the original gray image
namedWindow("dstimg", 0);
imshow("dstimg", dst);
imwrite("region_growing.jpg", dst);
}
}
waitKey(0);
return 0;
}
4 Watershed
The watershed algorithm is a region-based segmentation technique rooted in image topology. It requires seed points both inside the target regions and in the background; the seeds can be set manually or computed. Think of the bright areas of the image as peaks and the dark areas as valleys: water is poured into each valley from its bottom, and the water from different valleys must not merge. To prevent merging, dams are built, and these dams are exactly the boundaries of the objects to be segmented.
OpenCV provides the watershed algorithm (cv::watershed); the demo below shows how to use it with user-drawn markers.
#include <opencv2/core/utility.hpp>
#include "opencv2/imgproc.hpp"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui.hpp"
#include <cstdio>
#include <iostream>
using namespace cv;
using namespace std;
static void help(char** argv)
{
cout << "\nThis program demonstrates the famous watershed segmentation algorithm in OpenCV: watershed()\n"
"Usage:\n" << argv[0] <<" [image_name -- default is fruits.jpg]\n" << endl;
cout << "Hot keys: \n"
"\tESC - quit the program\n"
"\tr - restore the original image\n"
"\tw or SPACE - run watershed segmentation algorithm\n"
"\t\t(before running it, *roughly* mark the areas to segment on the image)\n"
"\t (before that, roughly outline several markers on the image)\n";
}
Mat markerMask, img;
Point prevPt(-1, -1);
static void onMouse( int event, int x, int y, int flags, void* )
{
if( x < 0 || x >= img.cols || y < 0 || y >= img.rows )
return;
if( event == EVENT_LBUTTONUP || !(flags & EVENT_FLAG_LBUTTON) )
prevPt = Point(-1,-1);
else if( event == EVENT_LBUTTONDOWN )
prevPt = Point(x,y);
else if( event == EVENT_MOUSEMOVE && (flags & EVENT_FLAG_LBUTTON) )
{
Point pt(x, y);
if( prevPt.x < 0 )
prevPt = pt;
line( markerMask, prevPt, pt, Scalar::all(255), 5, 8, 0 );
line( img, prevPt, pt, Scalar::all(255), 5, 8, 0 );
prevPt = pt;
imshow("image", img);
}
}
int main( int argc, char** argv )
{
cv::CommandLineParser parser(argc, argv, "{help h | | }{ @input | fruits.jpg | }");
if (parser.has("help"))
{
help(argv);
return 0;
}
string filename = samples::findFile(parser.get<string>("@input"));
Mat img0 = imread(filename, 1), imgGray;
if( img0.empty() )
{
cout << "Couldn't open image ";
help(argv);
return 0;
}
help(argv);
namedWindow( "image", 1 );
img0.copyTo(img);
cvtColor(img, markerMask, COLOR_BGR2GRAY);
cvtColor(markerMask, imgGray, COLOR_GRAY2BGR);
markerMask = Scalar::all(0);
imshow( "image", img );
setMouseCallback( "image", onMouse, 0 );
for(;;)
{
char c = (char)waitKey(0);
if( c == 27 )
break;
if( c == 'r' )
{
markerMask = Scalar::all(0);
img0.copyTo(img);
imshow( "image", img );
}
if( c == 'w' || c == ' ' )
{
int i, j, compCount = 0;
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
findContours(markerMask, contours, hierarchy, RETR_CCOMP, CHAIN_APPROX_SIMPLE);
if( contours.empty() )
continue;
Mat markers(markerMask.size(), CV_32S);
markers = Scalar::all(0);
int idx = 0;
for( ; idx >= 0; idx = hierarchy[idx][0], compCount++ )
drawContours(markers, contours, idx, Scalar::all(compCount+1), -1, 8, hierarchy, INT_MAX);
if( compCount == 0 )
continue;
vector<Vec3b> colorTab;
for( i = 0; i < compCount; i++ )
{
int b = theRNG().uniform(0, 255);
int g = theRNG().uniform(0, 255);
int r = theRNG().uniform(0, 255);
colorTab.push_back(Vec3b((uchar)b, (uchar)g, (uchar)r));
}
double t = (double)getTickCount();
watershed( img0, markers );
t = (double)getTickCount() - t;
printf( "execution time = %gms\n", t*1000./getTickFrequency() );
Mat wshed(markers.size(), CV_8UC3);
// paint the watershed image
for( i = 0; i < markers.rows; i++ )
for( j = 0; j < markers.cols; j++ )
{
int index = markers.at<int>(i,j);
if( index == -1 )
wshed.at<Vec3b>(i,j) = Vec3b(255,255,255);
else if( index <= 0 || index > compCount )
wshed.at<Vec3b>(i,j) = Vec3b(0,0,0);
else
wshed.at<Vec3b>(i,j) = colorTab[index - 1];
}
wshed = wshed*0.5 + imgGray*0.5;
imshow( "watershed transform", wshed );
}
}
return 0;
}
5 Edge-Based Image Segmentation
Edge-based segmentation is easy to understand: edges are found from the image gradient and then used to delimit the regions of interest. OpenCV provides the Sobel operator and the Canny edge detector, both demonstrated below.
Sobel edge detection:
#include "opencv2/imgproc.hpp"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui.hpp"
#include <iostream>
using namespace cv;
using namespace std;
/**
* @function main
*/
int main( int argc, char** argv )
{
cv::CommandLineParser parser(argc, argv,
"{@input |lena.jpg|input image}"
"{ksize k|1|ksize (hit 'K' to increase its value at run time)}"
"{scale s|1|scale (hit 'S' to increase its value at run time)}"
"{delta d|0|delta (hit 'D' to increase its value at run time)}"
"{help h|false|show help message}");
cout << "The sample uses Sobel or Scharr OpenCV functions for edge detection\n\n";
parser.printMessage();
cout << "\nPress 'ESC' to exit program.\nPress 'R' to reset values ( ksize will be -1 equal to Scharr function )";
//![variables]
// First we declare the variables we are going to use
Mat image,src, src_gray;
Mat grad;
const String window_name = "Sobel Demo - Simple Edge Detector";
int ksize = parser.get<int>("ksize");
int scale = parser.get<int>("scale");
int delta = parser.get<int>("delta");
int ddepth = CV_16S;
//![variables]
//![load]
String imageName = parser.get<String>("@input");
// As usual we load our source image (src)
image = imread( samples::findFile( imageName ), IMREAD_COLOR ); // Load an image
// Check if image is loaded fine
if( image.empty() )
{
printf("Error opening image: %s\n", imageName.c_str());
return EXIT_FAILURE;
}
//![load]
for (;;)
{
//![reduce_noise]
// Remove noise by blurring with a Gaussian filter ( kernel size = 3 )
GaussianBlur(image, src, Size(3, 3), 0, 0, BORDER_DEFAULT);
//![reduce_noise]
//![convert_to_gray]
// Convert the image to grayscale
cvtColor(src, src_gray, COLOR_BGR2GRAY);
//![convert_to_gray]
//![sobel]
/// Generate grad_x and grad_y
Mat grad_x, grad_y;
Mat abs_grad_x, abs_grad_y;
/// Gradient X
Sobel(src_gray, grad_x, ddepth, 1, 0, ksize, scale, delta, BORDER_DEFAULT);
/// Gradient Y
Sobel(src_gray, grad_y, ddepth, 0, 1, ksize, scale, delta, BORDER_DEFAULT);
//![sobel]
//![convert]
// converting back to CV_8U
convertScaleAbs(grad_x, abs_grad_x);
convertScaleAbs(grad_y, abs_grad_y);
//![convert]
//![blend]
/// Total Gradient (approximate)
addWeighted(abs_grad_x, 0.5, abs_grad_y, 0.5, 0, grad);
//![blend]
//![display]
imshow(window_name, grad);
char key = (char)waitKey(0);
//![display]
if(key == 27)
{
return EXIT_SUCCESS;
}
if (key == 'k' || key == 'K')
{
ksize = ksize < 30 ? ksize+2 : -1;
}
if (key == 's' || key == 'S')
{
scale++;
}
if (key == 'd' || key == 'D')
{
delta++;
}
if (key == 'r' || key == 'R')
{
scale = 1;
ksize = -1;
delta = 0;
}
}
return EXIT_SUCCESS;
}
Canny edge detection:
#include "opencv2/imgproc.hpp"
#include "opencv2/highgui.hpp"
#include <iostream>
using namespace cv;
//![variables]
Mat src, src_gray;
Mat dst, detected_edges;
int lowThreshold = 0;
const int max_lowThreshold = 100;
const int ratio = 3;
const int kernel_size = 3;
const char* window_name = "Edge Map";
//![variables]
/**
* @function CannyThreshold
* @brief Trackbar callback - Canny thresholds input with a ratio 1:3
*/
static void CannyThreshold(int, void*)
{
//![reduce_noise]
/// Reduce noise with a kernel 3x3
blur( src_gray, detected_edges, Size(3,3) );
//![reduce_noise]
//![canny]
/// Canny detector
Canny( detected_edges, detected_edges, lowThreshold, lowThreshold*ratio, kernel_size );
dst = Scalar::all(0);
//![fill]
//![copyto]
src.copyTo( dst, detected_edges);
//![copyto]
//![display]
imshow( window_name, dst );
//![display]
}
/**
* @function main
*/
int main( int argc, char** argv )
{
//![load]
CommandLineParser parser( argc, argv, "{@input | fruits.jpg | input image}" );
src = imread( samples::findFile( parser.get<String>( "@input" ) ), IMREAD_COLOR ); // Load an image
if( src.empty() )
{
std::cout << "Could not open or find the image!\n" << std::endl;
std::cout << "Usage: " << argv[0] << " " << std::endl;
return -1;
}
//![load]
//![create_mat]
/// Create a matrix of the same type and size as src (for dst)
dst.create( src.size(), src.type() );
//![create_mat]
//![convert_to_gray]
cvtColor( src, src_gray, COLOR_BGR2GRAY );
//![convert_to_gray]
//![create_window]
namedWindow( window_name, WINDOW_AUTOSIZE );
//![create_window]
//![create_trackbar]
/// Create a Trackbar for user to enter threshold
createTrackbar( "Min Threshold:", window_name, &lowThreshold, max_lowThreshold, CannyThreshold );
//![create_trackbar]
/// Show the image
CannyThreshold(0, 0);
/// Wait until user exit program by pressing a key
waitKey(0);
return 0;
}
6 Multispectral Image Segmentation
6.1 Multi-modality
Multi-modality here refers to image data acquired with different imaging devices,
such as CT, MR, ultrasound, and PET. Because the imaging principles differ between devices, the same anatomical region looks different in each modality,
and multi-modal image segmentation exploits exactly these differences to separate structures.
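As a toy sketch of this idea (the file names ct.png and mr.png are placeholders, the two images are assumed to be registered to each other, and clustering with k-means into 3 classes is only one of many possible classifiers):

#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>

int main()
{
    cv::Mat ct = cv::imread("ct.png", cv::IMREAD_GRAYSCALE);
    cv::Mat mr = cv::imread("mr.png", cv::IMREAD_GRAYSCALE);
    if (ct.empty() || mr.empty() || ct.size() != mr.size())
        return -1;

    // One row per pixel, one column per modality: each pixel becomes a 2-D feature vector.
    cv::Mat samples((int)ct.total(), 2, CV_32F);
    for (int i = 0; i < ct.rows; ++i)
        for (int j = 0; j < ct.cols; ++j)
        {
            samples.at<float>(i * ct.cols + j, 0) = ct.at<uchar>(i, j);
            samples.at<float>(i * ct.cols + j, 1) = mr.at<uchar>(i, j);
        }

    // Cluster the feature vectors into 3 classes (3 is an arbitrary choice for illustration).
    cv::Mat labels, centers;
    cv::kmeans(samples, 3, labels,
               cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 10, 1.0),
               3, cv::KMEANS_PP_CENTERS, centers);

    // Reshape the label vector back into an image and scale the labels for display.
    cv::Mat labelImg = labels.reshape(1, ct.rows);
    labelImg.convertTo(labelImg, CV_8U, 127);
    cv::imwrite("multimodal_kmeans.png", labelImg);
    return 0;
}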
6.2 Multi-temporal
Multi-temporal means dynamically acquired images: the same anatomical location, scanned with the same device, at different points in time. For example, in DSA (digital subtraction angiography) a baseline image is acquired at the start of the examination, another image is acquired after the contrast agent has been injected, and subtracting the baseline image from the contrast-enhanced image yields the subtraction image, which shows the vessels.
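A minimal sketch of that subtraction step (the file names are placeholders; real DSA pipelines typically also include registration or motion correction before the subtraction):

#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>

int main()
{
    // Placeholder file names: the baseline (mask) frame and the contrast-enhanced frame.
    cv::Mat baseline = cv::imread("baseline.png", cv::IMREAD_GRAYSCALE);
    cv::Mat enhanced = cv::imread("enhanced.png", cv::IMREAD_GRAYSCALE);
    if (baseline.empty() || enhanced.empty())
        return -1;

    // Subtracting the baseline from the enhanced frame leaves mainly the injected contrast, i.e. the vessels.
    cv::Mat subtraction;
    cv::absdiff(enhanced, baseline, subtraction);

    cv::imwrite("dsa_subtraction.png", subtraction);
    return 0;
}

These are the techniques commonly used in medical image segmentation.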