Face Image Enhancement Behind Truck Windshields (Box Selection | CLAHE | Super-Resolution Reconstruction)

Goal: enhance images of faces seen through vehicle windshields, which are degraded by reflections and other factors.

Author's code: https://github.com/Xingxiangrui/view_behind_window_enhancement

Contents

1. Image reading and box selection

1.1 Image reading

1.2 Mouse box selection

1.3 Direct box selection

2. Histogram equalization within the box

2.1 Histogram equalization

2.2 Results

3. Contrast-limited histogram equalization

3.1 The CLAHE algorithm

3.2 Code

3.3 Results

4. Follow-up: super-resolution reconstruction and reflection removal

5. Partial results

5.1 CLAHE on better-quality data

6. Super-resolution reconstruction

6.1 Initial run

6.2 Saving the bbox crops first

6.3 Super-resolution reconstruction


1. Image reading and box selection

1.1 Image reading

Reference: https://www.cnblogs.com/denny402/p/5096001.html

from PIL import Image

# open and display the source photo
img = Image.open('photos/20190614082427627.jpg')
img.show()
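The rest of the project uses OpenCV rather than PIL. One pitfall when mixing the two (a quick sketch, using the same photo path as above): cv2.imread returns channels in BGR order, while PIL works in RGB, so a channel swap is needed when converting.

import cv2
from PIL import Image

img_bgr = cv2.imread('photos/20190614082427627.jpg')  # OpenCV loads BGR
img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)    # swap to RGB for PIL
Image.fromarray(img_rgb).show()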

1.2 Mouse box selection

For now, a mouse-drawn box stands in for the detection stage; later, an algorithm will be added to locate the windshield region automatically.

Reference: https://blog.csdn.net/akadiao/article/details/80312254

import cv2

def draw_rectangle(event, x, y, flags, param):
    global ix, iy
    if event == cv2.EVENT_LBUTTONDOWN:    # drag start: record the first corner
        ix, iy = x, y
        print("point1:=", x, y)
    elif event == cv2.EVENT_LBUTTONUP:    # drag end: draw the rectangle
        print("point2:=", x, y)
        print("width=", x - ix)
        print("height=", y - iy)
        cv2.rectangle(img, (ix, iy), (x, y), (0, 255, 0), 2)

img = cv2.imread("max.png")  # load image
cv2.namedWindow('image')
cv2.setMouseCallback('image', draw_rectangle)
while(1):
    cv2.imshow('image', img)
    if cv2.waitKey(20) & 0xFF == 27:  # exit on Esc
        break
cv2.destroyAllWindows()

This was not stable when run from the terminal, so the version below polls more slowly and exits on the space key instead:

import cv2

def draw_rectangle(event, x, y, flags, param):
    global ix, iy
    if event == cv2.EVENT_LBUTTONDOWN:
        ix, iy = x, y
        print("point1:=", x, y)
    elif event == cv2.EVENT_LBUTTONUP:
        print("point2:=", x, y)
        print("width=", x - ix)
        print("height=", y - iy)
        cv2.rectangle(img, (ix, iy), (x, y), (0, 255, 0), 2)

img = cv2.imread('photos/20190614082427627.jpg')

cv2.namedWindow('image')
cv2.setMouseCallback('image', draw_rectangle)

while(1):
    cv2.imshow('image', img)
    if cv2.waitKey(100) & 0xFF == ord(' '):  # exit on space
        break

cv2.destroyAllWindows()

1.3 Direct box selection

Crop the region directly from fixed coordinates:

        # crop the boxed region; numpy slicing is [y1:y2, x1:x2]
        cut_img = img[self.window_left_top[1]:self.window_right_down[1], self.window_left_top[0]:self.window_right_down[0]]

Merge back and save:

        if self.if_save_enhanced_img:
            enhanced_img_path = self.img_path.replace(".jpg", "enhanced.jpg")
            # paste the enhanced block back into the full frame, then write it out
            img[self.window_left_top[1]:self.window_right_down[1], self.window_left_top[0]:self.window_right_down[0]] = result
            cv2.imwrite(enhanced_img_path, img)
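Putting the two snippets together, a minimal end-to-end sketch of this step (the box coordinates here are made up for illustration; the enhancement is the per-channel equalization from section 2):

import cv2

# hypothetical box coordinates, (x, y) of the two corners
window_left_top = (420, 260)
window_right_down = (780, 620)

img_path = 'photos/20190614082427627.jpg'
img = cv2.imread(img_path)

# crop the boxed region; numpy slicing is [y1:y2, x1:x2]
cut_img = img[window_left_top[1]:window_right_down[1], window_left_top[0]:window_right_down[0]]

# enhance the crop (per-channel histogram equalization, see section 2)
b, g, r = cv2.split(cut_img)
result = cv2.merge((cv2.equalizeHist(b), cv2.equalizeHist(g), cv2.equalizeHist(r)))

# paste the enhanced block back into the frame and save
img[window_left_top[1]:window_right_down[1], window_left_top[0]:window_right_down[0]] = result
cv2.imwrite(img_path.replace(".jpg", "enhanced.jpg"), img)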


2. Histogram equalization within the box

2.1 Histogram equalization

        # equalize each channel separately, then merge the results
        (b, g, r) = cv2.split(cut_img)
        bH = cv2.equalizeHist(b)
        gH = cv2.equalizeHist(g)
        rH = cv2.equalizeHist(r)
        result = cv2.merge((bH, gH, rH))
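Equalizing B, G, and R independently can shift colors, since each channel gets a different mapping. A common alternative, sketched below rather than taken from the project code, equalizes only the luma channel in YCrCb:

import cv2

img = cv2.imread('photos/20190614082427627.jpg')

# equalize only the luma (Y) channel, leaving chroma untouched
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
y, cr, cb = cv2.split(ycrcb)
ycrcb = cv2.merge((cv2.equalizeHist(y), cr, cb))
result = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)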

2.2 Results

[Figures 1-6: crops before and after per-channel histogram equalization]

3. Contrast-limited histogram equalization

3.1 The CLAHE algorithm

Reference: https://www.cnblogs.com/jsxyhelu/p/6435601.html?utm_source=debugrun&utm_medium=referral

CLAHE differs from AHE in its contrast clipping, introduced to overcome AHE's over-amplification of noise.

① Let the sliding window of adaptive histogram equalization have size $M \times M$. The local mapping function is

$$f(i) = 255 \cdot \mathrm{CDF}(i)$$

where $\mathrm{CDF}(i)$ is the cumulative distribution function of the local histogram inside the window.

② The derivative of $\mathrm{CDF}(i)$ is the histogram, so the slope $S$ of the local mapping function is

$$S = \frac{\mathrm{d}f}{\mathrm{d}i} = \frac{255}{M^2}\,\mathrm{Hist}(i)$$

Limiting the histogram height is therefore equivalent to limiting the slope of the local mapping function, which in turn limits the contrast amplification.

③ If the maximum allowed slope is $S_{\max}$, the maximum allowed histogram height is

$$H_{\max} = \frac{M^2}{255}\,S_{\max}$$

④ Bins higher than $H_{\max}$ should have the excess clipped off. In practice the histogram is truncated at a threshold $T$ (rather than at $H_{\max}$ itself), and the clipped mass is redistributed uniformly over the whole gray-level range so that the total histogram area is preserved. If this raises the whole histogram by $L$, then

$$T + L = H_{\max}$$

⑤ The final modified histogram is

$$\mathrm{Hist}'(i) = \begin{cases} H_{\max}, & \mathrm{Hist}(i) > T \\ \mathrm{Hist}(i) + L, & \mathrm{Hist}(i) \le T \end{cases}$$

In summary, varying the maximum mapping slope $S_{\max}$ (and the corresponding maximum histogram height $H_{\max}$) yields different degrees of enhancement. CLAHE limits the growth of local contrast by limiting the height of the local histogram, and thereby limits noise amplification and local over-enhancement.
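A minimal NumPy sketch of steps ④ and ⑤, clipping one tile's histogram and redistributing the excess (a simplified illustration only; OpenCV's CLAHE additionally interpolates the mappings between neighboring tiles):

import numpy as np

def clip_histogram(hist, T):
    # total mass clipped above the threshold T
    excess = np.sum(np.maximum(hist - T, 0))
    # redistribute it uniformly: every bin rises by L, so H_max = T + L
    L = excess / hist.size
    return np.minimum(hist, T) + L

# example: the histogram of one 8x8 tile of a uint8 image
tile = (np.random.rand(8, 8) * 256).astype(np.uint8)
hist, _ = np.histogram(tile, bins=256, range=(0, 256))
clipped = clip_histogram(hist.astype(np.float64), T=4)

# the equalization mapping then uses the CDF of the clipped histogram
cdf = np.cumsum(clipped) / np.sum(clipped)
mapping = np.round(255 * cdf).astype(np.uint8)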

3.2 Code

Reference: http://cpuwdl.com/archives/17/

        # equalize each channel, either globally or with CLAHE
        (b, g, r) = cv2.split(cut_img)

        if self.equal_hist_or_adapt_hist == 0:    # plain histogram equalization
            bH = cv2.equalizeHist(b)
            gH = cv2.equalizeHist(g)
            rH = cv2.equalizeHist(r)
        elif self.equal_hist_or_adapt_hist == 1:  # contrast-limited (CLAHE)
            clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
            bH = clahe.apply(b)
            gH = clahe.apply(g)
            rH = clahe.apply(r)
        result = cv2.merge((bH, gH, rH))
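In createCLAHE, clipLimit plays the role of the threshold T above and tileGridSize sets how many local windows the image is split into. As in section 2.1, per-BGR-channel processing can shift colors slightly; a variant worth trying (a sketch, not the project's code) applies CLAHE to the lightness channel of Lab only:

import cv2

img = cv2.imread('photos/20190614082427627.jpg')

# apply CLAHE to lightness only, keeping the color channels intact
lab = cv2.cvtColor(img, cv2.COLOR_BGR2Lab)
l, a, b = cv2.split(lab)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
lab = cv2.merge((clahe.apply(l), a, b))
result = cv2.cvtColor(lab, cv2.COLOR_Lab2BGR)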

3.3 Results

Clearly better than plain histogram equalization.

[Figures 7-9: CLAHE results on the same crops]

4. Follow-up: super-resolution reconstruction and reflection removal

The face-enhancement product from 洪森科技 (Hillsuntec) may be worth considering:

http://hillsuntec.com/zh/2109.html

[Figures 10-11]

5. Partial results

5.1 CLAHE on better-quality data

After CLAHE the visual quality improves, but blurry inputs remain poor. The first two images are the before/after histogram-equalization comparison; the third is the super-resolved result.

[Figures 12-13: before/after comparison and super-resolved result]

A few of the better examples:

[Figures 14-21: selected better results]

6. Super-resolution reconstruction

The DBPN method is used.

GitHub source: https://github.com/alterzero/DBPN-Pytorch

The project is the official implementation of the CVPR2018 paper "Deep Back-Projection Networks for Super-Resolution" (winner of NTIRE2018 and PIRM2018): https://alterzero.github.io/projects/…

6.1 Initial run

xxr@smartdsp3:/home/ww/DBPN-Pytorch-master$ python3 eval1.py
Namespace(chop_forward=False, gpu_mode=True, gpus=2, input_dir='Input', model='weights_i/smartdsp3IBP_itpami_residual_filter8_epoch_799.pth', model_type='IBP_i', output='Results/', residual=False, seed=123, self_ensemble=False, testBatchSize=1, test_dataset='Set5_LR_x4', threads=1, upscale_factor=4)
===> Loading datasets
===> Building model
Pre-trained SR model is loaded.
===> Processing: butterfly_x4.png || Timer: 0.1000 sec.
===> Processing: woman_x4.png || Timer: 0.0627 sec.
===> Processing: head_x4.png || Timer: 0.0577 sec.
===> Processing: baby_x4.png || Timer: 0.0719 sec.
===> Processing: bird_x4.png || Timer: 0.0682 sec.

6.2 Saving the bbox crops first

To simplify processing, the bbox crops are first saved out in batch.

Reference: https://blog.csdn.net/weixin_36474809/article/details/89109823 (Python project examples II: dataset preparation, label editing and batch image processing)

The program below crops out each box and saves it, for the later super-resolution step:

"""
created by xingxiangrui on 2019.7.11
    this program selects out the bbox area of each image and saves it
"""

import cv2
import os


# ---------------------- crop bbox regions and save ------------------
class bbox_and_save():
    def __init__(self):
        self.input_dir_path = "/home/xxr/trunk_enhancement/photos/"
        self.save_dir_path = "/home/xxr/trunk_enhancement/photos/bbox_img/"

    def run_bbox_and_save(self):

        # create the output directory if needed
        if not os.path.isdir(self.save_dir_path):
            os.makedirs(self.save_dir_path)

        source_file_list = os.listdir(self.input_dir_path)

        # each .txt file holds the bbox(es) for the image of the same name
        for source_txt_name in source_file_list:
            if '.txt' in source_txt_name:
                print(source_txt_name)

                # read the matching image
                source_img_name = source_txt_name.replace(".txt", ".jpg")
                path_source_img = os.path.join(self.input_dir_path, source_img_name)
                src_img = cv2.imread(path_source_img)

                # read the bbox lines and crop every box listed in the file
                with open(self.input_dir_path + source_txt_name, 'r') as txt_file:
                    lines = txt_file.readlines()
                for idx, line in enumerate(lines):
                    line = line.split(' ')
                    # numpy slicing is [y1:y2, x1:x2]
                    cut_img = src_img[int(line[1]):int(line[3]), int(line[0]):int(line[2])]

                    # save the cropped image
                    cut_img_name = source_txt_name.replace(".txt", "_cut.jpg")
                    if idx > 0:  # avoid overwriting when a file holds several boxes
                        cut_img_name = cut_img_name.replace(".jpg", "_%d.jpg" % idx)
                    path_cut_img = self.save_dir_path + cut_img_name
                    cv2.imwrite(path_cut_img, cut_img)


if __name__ == '__main__':

    bbox_and_save().run_bbox_and_save()
    print("program done!")
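For reference, judging from the indexing above, each label file is assumed to hold one box per line in "x1 y1 x2 y2" order, for example:

420 260 780 620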

6.3 Super-resolution reconstruction

Run the script directly to perform the super-resolution reconstruction.

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created by Xing xiangrui on 2019.7.11
    This program loads a pretrained model and evaluates it on the dataset.
"""

from __future__ import print_function
import argparse

import os
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
from torch.utils.data import DataLoader
from dbpn import Net as DBPN
from dbpn_v1 import Net as DBPNLL
from dbpn_iterative import Net as DBPNITER
from IBP_i import IBPNet as IBP_i
from IBPN1 import IBPNet as IBPN1 

from data import get_eval_set
from functools import reduce

import time
import cv2

os.environ['CUDA_VISIBLE_DEVICES'] = '1,3'
# Training settings
parser = argparse.ArgumentParser(description='PyTorch Super Res Example')
parser.add_argument('--upscale_factor', type=int, default=4, help="super resolution upscale factor")
parser.add_argument('--testBatchSize', type=int, default=1, help='testing batch size')
parser.add_argument('--gpu_mode', type=bool, default=True)
parser.add_argument('--self_ensemble', type=bool, default=False)
parser.add_argument('--chop_forward', type=bool, default=False)
parser.add_argument('--threads', type=int, default=1, help='number of threads for data loader to use')
parser.add_argument('--seed', type=int, default=123, help='random seed to use. Default=123')
parser.add_argument('--gpus', default=2, type=int, help='number of gpu')
parser.add_argument('--input_dir', type=str, default='/home/xxr/trunk_enhancement/photos/')
parser.add_argument('--output', default='/home/xxr/trunk_enhancement/photos/bbox_img/', help='Location to save checkpoint models or save output results')
parser.add_argument('--test_dataset', type=str, default='bbox_img')
parser.add_argument('--model_type', type=str, default='IBP_i')
parser.add_argument('--residual', type=bool, default=False)
parser.add_argument('--model', default='weights_i/smartdsp3IBP_itpami_residual_filter8_epoch_799.pth', help='sr pretrained base model')

opt = parser.parse_args()

gpus_list=range(opt.gpus)
print(opt)

cuda = opt.gpu_mode
if cuda and not torch.cuda.is_available():
    raise Exception("No GPU found, please run without --cuda")


torch.manual_seed(opt.seed)
if cuda:
    torch.cuda.manual_seed(opt.seed)

print('===> Loading datasets')
test_set = get_eval_set(os.path.join(opt.input_dir,opt.test_dataset), opt.upscale_factor)
testing_data_loader = DataLoader(dataset=test_set, num_workers=opt.threads, batch_size=opt.testBatchSize, shuffle=False)

print('===> Building model')
if opt.model_type == 'DBPNLL':
    model = DBPNLL(num_channels=3, base_filter=64,  feat = 256, num_stages=10, scale_factor=opt.upscale_factor) ###D-DBPN
elif opt.model_type == 'DBPN-RES-MR64-3':
    model = DBPNITER(num_channels=3, base_filter=64,  feat = 256, num_stages=3, scale_factor=opt.upscale_factor) ###D-DBPN
elif opt.model_type == 'IBP_i':
    model = IBP_i(num_channels=3, base_filter=80, feat=256, num_stages=3, scale_factor=opt.upscale_factor)
else:
    model = DBPN(num_channels=3, base_filter=64,  feat = 256, num_stages=7, scale_factor=opt.upscale_factor) ###D-DBPN
    
if cuda:
    model = torch.nn.DataParallel(model, device_ids=gpus_list)

model.load_state_dict(torch.load(opt.model, map_location=lambda storage, loc: storage))
print('Pre-trained SR model is loaded.')

if cuda:
    model = model.cuda(gpus_list[0])

def eval():
    model.eval()
    for batch in testing_data_loader:
        with torch.no_grad():
            input, bicubic, name = Variable(batch[0]), Variable(batch[1]), batch[2]
        if cuda:
            input = input.cuda(gpus_list[0])
            bicubic = bicubic.cuda(gpus_list[0])

        t0 = time.time()
        if opt.chop_forward:
            with torch.no_grad():
                prediction = chop_forward(input, model, opt.upscale_factor)
        else:
            if opt.self_ensemble:
                with torch.no_grad():
                    prediction = x8_forward(input, model)
            else:
                with torch.no_grad():
                    prediction = model(input)
                
        if opt.residual:
            prediction = prediction + bicubic

        t1 = time.time()
        print("===> Processing: %s || Timer: %.4f sec." % (name[0], (t1 - t0)))
        save_img(prediction.cpu().data, name[0])

def save_img(img, img_name):
    # CHW tensor in [0,1] -> HWC array
    save_img = img.squeeze().clamp(0, 1).numpy().transpose(1,2,0)
    # save img
    save_dir=os.path.join(opt.output,opt.test_dataset)
    if not os.path.exists(save_dir):
        os.makedirs(save_dir)
        
    save_fn = save_dir +'/'+ img_name
    # network output is RGB; cast to uint8 and swap to BGR for cv2.imwrite
    cv2.imwrite(save_fn, cv2.cvtColor((save_img*255).astype('uint8'), cv2.COLOR_RGB2BGR),  [cv2.IMWRITE_PNG_COMPRESSION, 0])

# geometric self-ensemble: average the predictions over 8 flip/transpose variants
def x8_forward(img, model, precision='single'):
    def _transform(v, op):
        if precision != 'single': v = v.float()

        v2np = v.data.cpu().numpy()
        if op == 'vflip':
            tfnp = v2np[:, :, :, ::-1].copy()
        elif op == 'hflip':
            tfnp = v2np[:, :, ::-1, :].copy()
        elif op == 'transpose':
            tfnp = v2np.transpose((0, 1, 3, 2)).copy()
        
        ret = torch.Tensor(tfnp).cuda()

        if precision == 'half':
            ret = ret.half()
        elif precision == 'double':
            ret = ret.double()

        with torch.no_grad():
            ret = Variable(ret)

        return ret

    inputlist = [img]
    for tf in 'vflip', 'hflip', 'transpose':
        inputlist.extend([_transform(t, tf) for t in inputlist])

    outputlist = [model(aug) for aug in inputlist]
    for i in range(len(outputlist)):
        if i > 3:
            outputlist[i] = _transform(outputlist[i], 'transpose')
        if i % 4 > 1:
            outputlist[i] = _transform(outputlist[i], 'hflip')
        if (i % 4) % 2 == 1:
            outputlist[i] = _transform(outputlist[i], 'vflip')
    
    output = reduce((lambda x, y: x + y), outputlist) / len(outputlist)

    return output
    
# split the input into four overlapping quadrants and super-resolve each,
# recursing until a quadrant is small enough to process in one forward pass
def chop_forward(x, model, scale, shave=8, min_size=80000, nGPUs=opt.gpus):
    b, c, h, w = x.size()
    h_half, w_half = h // 2, w // 2
    h_size, w_size = h_half + shave, w_half + shave
    inputlist = [
        x[:, :, 0:h_size, 0:w_size],
        x[:, :, 0:h_size, (w - w_size):w],
        x[:, :, (h - h_size):h, 0:w_size],
        x[:, :, (h - h_size):h, (w - w_size):w]]

    if w_size * h_size < min_size:
        outputlist = []
        for i in range(0, 4, nGPUs):
            with torch.no_grad():
                input_batch = torch.cat(inputlist[i:(i + nGPUs)], dim=0)
            if opt.self_ensemble:
                with torch.no_grad():
                    output_batch = x8_forward(input_batch, model)
            else:
                with torch.no_grad():
                    output_batch = model(input_batch)
            outputlist.extend(output_batch.chunk(nGPUs, dim=0))
    else:
        outputlist = [
            chop_forward(patch, model, scale, shave, min_size, nGPUs) \
            for patch in inputlist]

    h, w = scale * h, scale * w
    h_half, w_half = scale * h_half, scale * w_half
    h_size, w_size = scale * h_size, scale * w_size
    shave *= scale

    with torch.no_grad():
        output = Variable(x.data.new(b, c, h, w))

    output[:, :, 0:h_half, 0:w_half] \
        = outputlist[0][:, :, 0:h_half, 0:w_half]
    output[:, :, 0:h_half, w_half:w] \
        = outputlist[1][:, :, 0:h_half, (w_size - w + w_half):w_size]
    output[:, :, h_half:h, 0:w_half] \
        = outputlist[2][:, :, (h_size - h + h_half):h_size, 0:w_half]
    output[:, :, h_half:h, w_half:w] \
        = outputlist[3][:, :, (h_size - h + h_half):h_size, (w_size - w + w_half):w_size]

    return output

## run evaluation
eval()
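To super-resolve the crops saved in section 6.2, the script can be invoked with the paths used above (a sketch; all flags are the argparse options defined at the top of the script, and most of these values are already the defaults):

python3 eval1.py --input_dir /home/xxr/trunk_enhancement/photos/ --test_dataset bbox_img --model weights_i/smartdsp3IBP_itpami_residual_filter8_epoch_799.pth --model_type IBP_i --upscale_factor 4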
