Building a Prisma-like App on iOS (Part 3): Core ML

Preface


The previous two articles covered how to use TensorFlow and Metal to apply artistic styles to images, describing the neural network architecture, the training procedure, and the underlying principles in detail:

  1. Building a Prisma-like App on iOS
  2. Building a Prisma-like App on iOS (Part 2)

Building on that foundation, this article brings in Core ML, which Apple introduced at WWDC 2017, and presents a new implementation approach that greatly reduces the amount of iOS-side code and lowers the integration effort.

Result screenshot

Converting the Model to Core ML


Apple provides converters for turning several third-party model formats into the Core ML model format. Since our trained model is TensorFlow-based, I used the TensorFlow converter (tfcoreml); see Apple's official documentation for details on how to use the converter tools. Based on the converter, I wrote a small conversion script:

from matplotlib import pyplot
from matplotlib.pyplot import imshow
from PIL import Image
import tfcoreml
import numpy as np
import image_utils
import os
import tensorflow as tf
from coremltools.proto import FeatureTypes_pb2 as _FeatureTypes_pb2
import coremltools

tf_model_path = '../protobuf/frozen.pb'
mlmodel = tfcoreml.convert(
        tf_model_path = tf_model_path,
        mlmodel_path = '../mlmodel/stylize.mlmodel',
        output_feature_names = ['transformer/expand/conv3/conv/Sigmoid:0'],
        input_name_shape_dict = {'input:0':[1,512,512,3], 'style:0':[32]})

# Test the converted model on a sample image, selecting style 0 via a one-hot vector
newstyle = np.zeros([32], dtype=np.float32)
newstyle[0] = 1
newImage = np.expand_dims(image_utils.load_np_image(os.path.expanduser("../sample.jpg")), 0)
newImage = newImage.reshape((512,512,3))
imshow(newImage)
pyplot.show()

coreml_image_input = np.transpose(newImage, (2,0,1))
# coreml_image_input = Image.open("../sample.jpg")
# imshow(coreml_image_input)
# pyplot.show()
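# Expand the one-hot style vector to the rank-5 shape (32,1,1,1,1) that the converted model expects.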
coreml_style_index = newstyle[:,np.newaxis,np.newaxis,np.newaxis,np.newaxis]
coreml_input = {'input__0': coreml_image_input, 'style__0': coreml_style_index}
coreml_out = mlmodel.predict(coreml_input, useCPUOnly = True)['transformer__expand__conv3__conv__Sigmoid__0']
coreml_out = np.transpose(coreml_out, (1,2,0))
imshow(coreml_out)
pyplot.show()

Building a Prisma-like App on iOS (Part 2) described the network structure, and when the graph was frozen we defined the names of the input nodes (input & style) and the output node (transformer/expand/conv3/conv/Sigmoid). Those names can therefore be passed straight to tfcoreml.convert, which generates and saves stylize.mlmodel.

Using the Core ML Model in iOS


Integrating the mlmodel is simple: just add it to the project, and Xcode automatically generates an interface for the model that is convenient to call.


Xcode project screenshot
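
For reference, the interface Xcode generates for stylize.mlmodel looks roughly like the sketch below (simplified; the exact header depends on the Xcode version, but the class and property names match what the prediction code later in this article uses):

#import <CoreML/CoreML.h>

/// Model input: a 3x512x512 image tensor plus a 32x1x1x1x1 one-hot style vector.
API_AVAILABLE(ios(11.0))
@interface stylizeInput : NSObject <MLFeatureProvider>
@property (readwrite, nonatomic, strong) MLMultiArray *input__0;
@property (readwrite, nonatomic, strong) MLMultiArray *style__0;
- (instancetype)initWithStyle__0:(MLMultiArray *)style__0 input__0:(MLMultiArray *)input__0;
@end

/// Model output: the stylized image tensor from the Sigmoid node.
API_AVAILABLE(ios(11.0))
@interface stylizeOutput : NSObject <MLFeatureProvider>
@property (readwrite, nonatomic, strong) MLMultiArray *transformer__expand__conv3__conv__Sigmoid__0;
@end

/// Wrapper around the compiled model; exposes a typed prediction method.
API_AVAILABLE(ios(11.0))
@interface stylize : NSObject
@property (readonly, nonatomic, nullable) MLModel *model;
- (nullable stylizeOutput *)predictionFromFeatures:(stylizeInput *)input error:(NSError **)error;
@end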

Initializing the model takes just a few lines:

#import "stylize.h"
@interface HomeViewController ()
{
    stylize *styleModel;
}
@end

@implementation HomeViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    if (@available(iOS 12.0, *)) {
        styleModel = [[stylize alloc] init];
    } else {
        NSLog(@"Need Run iOS 12.0+");
    }
}

Calling the prediction function requires building the input and output in the format the model expects:

- (void)createStyleImage:(UIImage *)source
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        MLMultiArray *styleArray = [[MLMultiArray alloc] initWithShape:@[@32,@1,@1,@1,@1] dataType:MLMultiArrayDataTypeDouble error:nil];
        for (int i = 0; i < styleArray.count; i++) {
            [styleArray setObject:@0 atIndexedSubscript:i];
        }
        [styleArray setObject:@1 atIndexedSubscript:self->currentStyle];
        
        stylizeInput *input = [[stylizeInput alloc] initWithStyle__0:styleArray input__0:[self getImagePixel:source]];
        stylizeOutput *output = [self->styleModel predictionFromFeatures:input error:nil];
        dispatch_async(dispatch_get_main_queue(), ^{
            self->_styleImageView.image = [self createImage:output.transformer__expand__conv3__conv__Sigmoid__0];
            self->isDone = true;
        });
    });
}

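// Model input dimensions; these must match the shape used during conversion
// (input:0 = [1,512,512,3]). Defined here because the excerpt relies on them.
static const int wanted_input_width = 512;
static const int wanted_input_height = 512;
static const int wanted_input_channels = 3;
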
- (MLMultiArray *)getImagePixel:(UIImage *)image
{
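    // NOTE: this assumes the source image is already 512x512, matching the
    // model's input size; larger or smaller images would need resizing first.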
    int width = image.size.width;
    int height = image.size.height;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = (unsigned char*) calloc(height * width * 4, sizeof(unsigned char));
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image.CGImage);
    CGImageRef ogCGImage = CGBitmapContextCreateImage(context);
    UIImage *ogImg = [UIImage imageWithCGImage:ogCGImage];
    CGImageRelease(ogCGImage); // avoid leaking the CGImage created from the context
    dispatch_async(dispatch_get_main_queue(), ^{
        self->_ogImageView.image = ogImg;
    });
    
    CGContextRelease(context);
    MLMultiArray *tmpArray = [[MLMultiArray alloc] initWithShape:@[@(wanted_input_channels), @(wanted_input_height), @(wanted_input_width)]
                                                        dataType:MLMultiArrayDataTypeDouble
                                                           error:nil];
    for (int y = 0; y < wanted_input_height; ++y) {
        for (int x = 0; x < wanted_input_width; ++x) {
            unsigned char *in_pixel =
                rawData + (y * bytesPerRow) + (x * bytesPerPixel);
            for (int c = 0; c < wanted_input_channels; ++c) {
                [tmpArray setObject:[NSNumber numberWithUnsignedChar:in_pixel[c]] atIndexedSubscript:c*wanted_input_height*wanted_input_width+y*wanted_input_width+x];
            }
        }
    }
    free(rawData);
    return tmpArray;
}

- (UIImage *)createImage:(MLMultiArray *)pixels API_AVAILABLE(ios(11.0))
{
    unsigned char *rawData = (unsigned char*) calloc(wanted_input_height * wanted_input_width * 4, sizeof(unsigned char));
    for (int y = 0; y < wanted_input_height; ++y) {
        unsigned char *out_row = rawData + (y * wanted_input_width * 4);
        for (int x = 0; x < wanted_input_width; ++x) {
            int index = x * wanted_input_width + y;
            unsigned char *out_pixel = out_row + (x * 4);
            for (int c = 0; c < wanted_input_channels; ++c) {
                out_pixel[c] = [[pixels objectAtIndexedSubscript:c*wanted_input_height*wanted_input_width+index] floatValue] * 255;
            }
            out_pixel[3] = UINT8_MAX;
        }
    }
    
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * wanted_input_width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, wanted_input_width, wanted_input_height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGImageRef outCGImage = CGBitmapContextCreateImage(context);
    UIImage *retImg = [UIImage imageWithCGImage:outCGImage];
    CGImageRelease(outCGImage); // avoid leaking the CGImage created from the context
    CGContextRelease(context);
    free(rawData);
    
    return [UIImage imageWithCGImage:retImg.CGImage scale:1 orientation:UIImageOrientationLeftMirrored];
}

Because we did not declare an image input when converting the Core ML model, all data has to be handled as MLMultiArray. If that feels cumbersome, the model can instead be converted as follows:

mlmodel = tfcoreml.convert(
         tf_model_path = tf_model_path,
         mlmodel_path = '../mlmodel/stylize.mlmodel',
         output_feature_names = ['transformer/expand/conv3/conv/Sigmoid:0'],
         input_name_shape_dict = {'input:0':[1,512,512,3], 'style:0':[32]},
         image_input_names=['input:0'])

#spec = mlmodel.get_spec()
#output = spec.description.output[0]
#output.type.imageType.colorSpace = _FeatureTypes_pb2.ImageFeatureType.ColorSpace.Value('RGB')
#output.type.imageType.width = 512
#output.type.imageType.height = 512
#coremltools.models.utils.save_spec(spec, '../mlmodel/stylize.mlmodel')

Here, the input node is declared as an image via image_input_names=['input:0'] when calling tfcoreml.convert. The output node can also be converted to an image; that does not work with the model we saved, although it does work with the official pb file.
Once the model takes an image input, we can wrap the input in a CVPixelBufferRef, with no need to convert the image to bytes and transpose the matrix ourselves.
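
A minimal sketch of that path is below (the helper is illustrative and not part of the repo; for the image-input model, the generated stylizeInput initializer would accept this buffer in place of the MLMultiArray):

// Sketch: render a UIImage into a 512x512 BGRA CVPixelBufferRef,
// the format Core ML accepts for image inputs.
- (CVPixelBufferRef)pixelBufferFromImage:(UIImage *)image
{
    NSDictionary *attrs = @{(id)kCVPixelBufferCGImageCompatibilityKey : @YES,
                            (id)kCVPixelBufferCGBitmapContextCompatibilityKey : @YES};
    CVPixelBufferRef pixelBuffer = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault, 512, 512, kCVPixelFormatType_32BGRA,
                        (__bridge CFDictionaryRef)attrs, &pixelBuffer);

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pixelBuffer),
                                                 512, 512, 8,
                                                 CVPixelBufferGetBytesPerRow(pixelBuffer),
                                                 colorSpace,
                                                 kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little);
    CGContextDrawImage(context, CGRectMake(0, 0, 512, 512), image.CGImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    return pixelBuffer; // caller releases with CVPixelBufferRelease
}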

Results


Straight to the screenshots:

Result screenshots

Source code: https://github.com/JiaoLiu/style-image/tree/master
