iOS Development: Gaussian Blur for Images

When developing for iOS you sometimes need to blur an image, or remove the blur in response to a tap or pull-down gesture; all of this makes an app more pleasant to use. Since iOS 7, translucent blur effects have been used widely, and many apps today apply a blur to part of an image. There are three common ways to implement a Gaussian blur: Core Image, GPUImage (a third-party open-source library), and vImage. I have not used GPUImage much, so this article covers the other two: Core Image and vImage.

Core Image

Before we start writing code, here is the effect we are going for:

(Figure 1: screenshot of the blur effect)

The Core Image API has been available since iOS 5.0 and lives in CoreImage.framework. On both iOS and OS X, Core Image provides a large number of filters: more than 120 on OS X and more than 90 on iOS. First, let's extend UIImage with a class method:
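As a quick sanity check, you can enumerate the filters available at runtime (the exact count varies by OS version). A minimal sketch, assuming CoreImage is imported:

```objectivec
#import <CoreImage/CoreImage.h>

// List every built-in Core Image filter name on the current OS.
NSArray<NSString *> *names = [CIFilter filterNamesInCategory:kCICategoryBuiltIn];
NSLog(@"%lu built-in filters", (unsigned long)names.count);
for (NSString *name in names) {
    NSLog(@"%@", name);
}
```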


+ (UIImage *)coreBlurImage:(UIImage *)image withBlurNumber:(CGFloat)blur {
    // 博客园-FlyElephant
    CIContext *context = [CIContext contextWithOptions:nil];
    CIImage *inputImage = [CIImage imageWithCGImage:image.CGImage];
    // Set up the filter
    CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"];
    [filter setValue:inputImage forKey:kCIInputImageKey];
    [filter setValue:@(blur) forKey:@"inputRadius"];
    // Render the blurred image
    CIImage *result = [filter valueForKey:kCIOutputImageKey];
    CGImageRef outImage = [context createCGImage:result fromRect:[result extent]];
    UIImage *blurImage = [UIImage imageWithCGImage:outImage];
    CGImageRelease(outImage);
    return blurImage;
}

The filter name here selects the Gaussian blur:

(Figure 2: the CIGaussianBlur filter in Apple's filter reference)

The vImage Approach

vImage is part of Accelerate.framework, so you need to import the Accelerate header. Accelerate is a library for digital signal processing and for the vector and matrix math used in image processing. Since an image can be viewed as a matrix of pixel data, the efficient math APIs in Accelerate lend themselves naturally to all kinds of image processing. The blur here uses the vImageBoxConvolve_ARGB8888 function, which performs a box convolution (a fast approximation of a Gaussian blur).


+ (UIImage *)boxblurImage:(UIImage *)image withBlurNumber:(CGFloat)blur {
    if (blur < 0.f || blur > 1.f) {
        blur = 0.5f;
    }
    // The convolution kernel size must be odd
    int boxSize = (int)(blur * 40);
    boxSize = boxSize - (boxSize % 2) + 1;

    CGImageRef img = image.CGImage;

    vImage_Buffer inBuffer, outBuffer;
    vImage_Error error;
    void *pixelBuffer;

    // Copy the pixel data out of the CGImage
    CGDataProviderRef inProvider = CGImageGetDataProvider(img);
    CFDataRef inBitmapData = CGDataProviderCopyData(inProvider);
    // Describe the source buffer
    inBuffer.width = CGImageGetWidth(img);
    inBuffer.height = CGImageGetHeight(img);
    inBuffer.rowBytes = CGImageGetBytesPerRow(img);
    inBuffer.data = (void *)CFDataGetBytePtr(inBitmapData);

    pixelBuffer = malloc(CGImageGetBytesPerRow(img) * CGImageGetHeight(img));
    if (pixelBuffer == NULL) {
        NSLog(@"No pixelbuffer");
        CFRelease(inBitmapData);
        return image;
    }

    outBuffer.data = pixelBuffer;
    outBuffer.width = CGImageGetWidth(img);
    outBuffer.height = CGImageGetHeight(img);
    outBuffer.rowBytes = CGImageGetBytesPerRow(img);

    // kvImageEdgeExtend repeats edge pixels, so no fringe appears at the borders
    error = vImageBoxConvolve_ARGB8888(&inBuffer, &outBuffer, NULL, 0, 0,
                                       boxSize, boxSize, NULL, kvImageEdgeExtend);
    if (error) {
        NSLog(@"error from convolution %ld", error);
    }

    // Wrap the output buffer back into a UIImage
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(outBuffer.data,
                                             outBuffer.width,
                                             outBuffer.height,
                                             8,
                                             outBuffer.rowBytes,
                                             colorSpace,
                                             kCGImageAlphaNoneSkipLast);
    CGImageRef imageRef = CGBitmapContextCreateImage(ctx);
    UIImage *returnImage = [UIImage imageWithCGImage:imageRef];

    // Clean up
    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
    free(pixelBuffer);
    CFRelease(inBitmapData);
    CGImageRelease(imageRef);

    return returnImage;
}

Calling the blur method:


self.imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 300, SCREENWIDTH, 100)];
self.imageView.contentMode = UIViewContentModeScaleAspectFill;
self.imageView.image = [UIImage boxblurImage:self.image withBlurNumber:0.5];
self.imageView.clipsToBounds = YES;
[self.view addSubview:self.imageView];
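Both blur methods are synchronous and CPU-intensive, so for large images it is worth dispatching the work off the main thread. A sketch using GCD (assuming the category above is in scope):

```objectivec
// Blur on a background queue, then update the UI on the main queue.
dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
    UIImage *blurred = [UIImage boxblurImage:self.image withBlurNumber:0.5];
    dispatch_async(dispatch_get_main_queue(), ^{
        self.imageView.image = blurred;
    });
});
```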

Choosing Between the Two Approaches

Effect: the Core Image blur produces a white fringe around the edges of the image, because the filter samples transparent pixels beyond the image bounds; the vImage result has no such artifact.

Performance: blurring an image is a heavy computation, and most image-blur code chooses vImage, which generally performs best (I have not benchmarked this myself; feel free to test it).
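One common workaround for the Core Image white-edge issue (not part of the original code above) is to clamp the image's edge pixels outward before blurring, then crop the result back to the original extent. A hedged sketch, assuming iOS 8+ for imageByClampingToExtent:

```objectivec
// Variant of the Core Image blur that avoids the edge fringe.
CIContext *context = [CIContext contextWithOptions:nil];
CIImage *inputImage = [CIImage imageWithCGImage:image.CGImage];
// Extend the edge pixels to infinity so the blur samples real pixels
// instead of transparent ones (equivalent to the CIAffineClamp filter).
CIImage *clamped = [inputImage imageByClampingToExtent];
CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"];
[filter setValue:clamped forKey:kCIInputImageKey];
[filter setValue:@(blur) forKey:kCIInputRadiusKey];
// Crop the (now infinite) output back to the original image's extent.
CIImage *result = [[filter valueForKey:kCIOutputImageKey]
                   imageByCroppingToRect:inputImage.extent];
CGImageRef outImage = [context createCGImage:result fromRect:result.extent];
```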

Project address: https://github.com/SmallElephant/iOS-UIImageBoxBlur

Reference: https://developer.apple.com/library/ios/documentation/GraphicsImaging/Reference/CoreImageFilterReference/index.html#//apple_ref/doc/filter/ci/CIGaussianBlur
