Please credit the source when reposting: http://blog.csdn.net/dangxw_/article/details/25063673
A while back I came across an image-processing project on GitHub (I can no longer find the link). It is fairly comprehensive, so I'm sharing it here now that I have some spare time. Thanks to the author for open-sourcing it.
I won't cover flipping, skewing, scaling, and the like; there are plenty of articles about those online, and most just use the ready-made Matrix methods.
Original image:
1: Rounded corners
Effect:
Code:
public static Bitmap getRoundedCornerBitmap(Bitmap bitmap, float roundPx) {
    Bitmap output = Bitmap.createBitmap(bitmap.getWidth(), bitmap.getHeight(), Config.ARGB_8888);
    Canvas canvas = new Canvas(output);
    final int color = 0xff424242;
    final Paint paint = new Paint();
    final Rect rect = new Rect(0, 0, bitmap.getWidth(), bitmap.getHeight());
    final RectF rectF = new RectF(rect);
    paint.setAntiAlias(true);
    canvas.drawARGB(0, 0, 0, 0);
    paint.setColor(color);
    // Draw the rounded rectangle first as a mask...
    canvas.drawRoundRect(rectF, roundPx, roundPx, paint);
    // ...then composite the bitmap with SRC_IN so only the area
    // covered by the mask keeps the image.
    paint.setXfermode(new PorterDuffXfermode(Mode.SRC_IN));
    canvas.drawBitmap(bitmap, rect, rect, paint);
    return output;
}
Explanation:
This one is simple: it effectively paints a rounded-corner mask over the original image. When I first saw paint.setXfermode(new PorterDuffXfermode(Mode.SRC_IN)) I only half understood it. Mode.SRC_IN is a compositing mode: only the intersection of the two layers is shown, and within that intersection only the upper layer is drawn. In effect, a rounded rectangle is drawn first as a stencil, which fixes the shape, and then its interior is filled with the image. There are eighteen such modes in total:
They select different compositing behaviours; this blog explains them clearly: http://www.cnblogs.com/sank615/archive/2013/03/12/2955675.html. I haven't written a demo myself, so I won't go on about it.
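To make the SRC_IN rule concrete, here is a minimal plain-Java sketch of what Porter-Duff SRC_IN computes per pixel (my own illustration, not the author's code and not Android's actual compositor): the result is the source pixel scaled by the destination's alpha, so where the mask is transparent the image vanishes.

```java
// Porter-Duff SRC_IN on a single premultiplied pixel {a, r, g, b},
// values in [0, 1]: result = source * destinationAlpha.
public class SrcInDemo {
    static float[] srcIn(float[] src, float[] dst) {
        float da = dst[0]; // destination (mask) alpha
        return new float[] { src[0] * da, src[1] * da, src[2] * da, src[3] * da };
    }

    public static void main(String[] args) {
        float[] src = { 1f, 0.8f, 0.4f, 0.2f };   // opaque source pixel (the photo)
        float[] inside = { 1f, 0.26f, 0.26f, 0.26f }; // inside the rounded rect: alpha 1
        float[] outside = { 0f, 0f, 0f, 0f };      // outside the rect: alpha 0
        float[] kept = srcIn(src, inside);    // source survives unchanged
        float[] dropped = srcIn(src, outside); // fully transparent
        System.out.println(kept[0] + " " + dropped[0]); // 1.0 0.0
    }
}
```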
2: Grayscale
Effect:
Code:
public static Bitmap toGrayscale(Bitmap bmpOriginal) {
    int width = bmpOriginal.getWidth();
    int height = bmpOriginal.getHeight();
    Bitmap bmpGrayscale = Bitmap.createBitmap(width, height, Bitmap.Config.RGB_565);
    Canvas c = new Canvas(bmpGrayscale);
    Paint paint = new Paint();
    // Saturation 0 collapses every colour to its luminance
    ColorMatrix cm = new ColorMatrix();
    cm.setSaturation(0);
    ColorMatrixColorFilter f = new ColorMatrixColorFilter(cm);
    paint.setColorFilter(f);
    c.drawBitmap(bmpOriginal, 0, 0, paint);
    return bmpGrayscale;
}

Explanation:
Not much to say here either: it just uses ColorMatrix's built-in setSaturation() method. Under the hood, though, that method is implemented as a color-matrix multiplication; matrix multiplication on colors comes up again below.
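What setSaturation(0) amounts to per pixel is replacing every channel with the same luminance value. A minimal sketch of that arithmetic, assuming the Rec. 709 weights (0.213, 0.715, 0.072) that Android's ColorMatrix uses; this is my own illustration, not the platform code:

```java
// Grayscale as a weighted sum of the channels: green dominates perceived
// brightness, blue contributes very little.
public class GrayscaleDemo {
    static int toGray(int r, int g, int b) {
        return (int) (0.213f * r + 0.715f * g + 0.072f * b);
    }

    public static void main(String[] args) {
        // a pure-green pixel keeps most of its brightness, pure blue almost none
        System.out.println(toGray(0, 255, 0)); // 182
        System.out.println(toGray(0, 0, 255)); // 18
    }
}
```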
3: Black and white
Effect:
Code:
public static Bitmap toHeibai(Bitmap mBitmap) {
    int mBitmapWidth = mBitmap.getWidth();
    int mBitmapHeight = mBitmap.getHeight();
    Bitmap bmpReturn = Bitmap.createBitmap(mBitmapWidth, mBitmapHeight, Bitmap.Config.ARGB_8888);
    for (int i = 0; i < mBitmapWidth; i++) {
        for (int j = 0; j < mBitmapHeight; j++) {
            int curr_color = mBitmap.getPixel(i, j);
            // Average the three channels and snap to pure black or pure white
            int avg = (Color.red(curr_color) + Color.green(curr_color)
                    + Color.blue(curr_color)) / 3;
            int iPixel = (avg >= 100) ? 255 : 0;
            int modif_color = Color.argb(255, iPixel, iPixel, iPixel);
            bmpReturn.setPixel(i, j, modif_color);
        }
    }
    return bmpReturn;
}
Explanation:
You can see from the result that this differs from the grayscale image: grayscale removes the colors but keeps the full range of light and dark, while this image has no tonal levels at all, just two sharply separated values, black and white. The algorithm is equally simple: average each pixel's RGB values, and count it as white if the average is at least 100, black otherwise. The threshold of 100 feels too low to me, though; it leaves too much of the image white, and raising it a little would probably look better. (The author's method names are in Chinese pinyin, so it turns out the author is Chinese. I honestly can't remember whether it was GitHub; I respect the original work, but I really have forgotten the download link. Also, I swapped out the author's sample image... ahem.)
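The per-pixel thresholding described above can be sketched in a few lines of plain Java (the cutoff parameter stands in for the hard-coded 100 in the post's code):

```java
// Black-and-white thresholding: average the channels, then snap to
// pure white (255) or pure black (0) around a cutoff.
public class ThresholdDemo {
    static int threshold(int r, int g, int b, int cutoff) {
        int avg = (r + g + b) / 3;
        return avg >= cutoff ? 255 : 0;
    }

    public static void main(String[] args) {
        System.out.println(threshold(90, 120, 110, 100)); // avg 106 -> 255
        System.out.println(threshold(30, 40, 50, 100));   // avg 40  -> 0
    }
}
```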
4: Reflection
Effect:
Code:
public static Bitmap createReflectionImageWithOrigin(Bitmap bitmap) {
    final int reflectionGap = 4;
    int width = bitmap.getWidth();
    int height = bitmap.getHeight();
    // Flip the bottom half of the image vertically
    Matrix matrix = new Matrix();
    matrix.preScale(1, -1);
    Bitmap reflectionImage = Bitmap.createBitmap(bitmap, 0, height / 2,
            width, height / 2, matrix, false);
    Bitmap bitmapWithReflection = Bitmap.createBitmap(width,
            height + height / 2, Config.ARGB_8888);
    Canvas canvas = new Canvas(bitmapWithReflection);
    canvas.drawBitmap(bitmap, 0, 0, null);
    Paint defaultPaint = new Paint();
    canvas.drawRect(0, height, width, height + reflectionGap, defaultPaint);
    canvas.drawBitmap(reflectionImage, 0, height + reflectionGap, null);
    // Fade the reflection out with a vertical gradient
    Paint paint = new Paint();
    LinearGradient shader = new LinearGradient(0, bitmap.getHeight(), 0,
            bitmapWithReflection.getHeight() + reflectionGap,
            0x70ffffff, 0x00ffffff, TileMode.CLAMP);
    paint.setShader(shader);
    // Set the transfer mode to Porter-Duff destination-in
    paint.setXfermode(new PorterDuffXfermode(Mode.DST_IN));
    // Draw a rectangle using the paint with our linear gradient
    canvas.drawRect(0, height, width,
            bitmapWithReflection.getHeight() + reflectionGap, paint);
    return bitmapWithReflection;
}
Explanation:
I remember building a reflection effect for a Gallery when I was starting out with Android last year, and it looked lovely. I figured it just needed a flip and a tilt, and that is essentially what this does: flip the original image, adjust its color so it reads as a reflection, and stitch the two images together. It would look even better if the flip also applied some tilt through the Matrix, but then the height of the combined image would have to be computed proportionally rather than by simple addition, or the bitmap wouldn't be large enough to hold all the pixels.
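The flip-and-stitch geometry reduces to simple index arithmetic. A sketch of my own (one column of pixels standing in for the bitmap, with no Android dependencies): the reflection is the bottom half of the image read in reverse order.

```java
// The reflection as index arithmetic: row y of the mirror image reads
// row (length - 1 - y) of the source, i.e. the bottom half flipped.
public class ReflectionDemo {
    static int[] reflectBottomHalf(int[] column) {
        int half = column.length / 2;
        int[] out = new int[half];
        for (int y = 0; y < half; y++) {
            out[y] = column[column.length - 1 - y]; // flipped bottom half
        }
        return out;
    }

    public static void main(String[] args) {
        int[] col = { 1, 2, 3, 4, 5, 6 };
        int[] r = reflectBottomHalf(col);
        System.out.println(r[0] + "," + r[1] + "," + r[2]); // 6,5,4
    }
}
```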
5: Aged-photo effect
Effect:
Code:
public static Bitmap testBitmap(Bitmap bitmap) {
    Bitmap output = Bitmap.createBitmap(bitmap.getWidth(), bitmap.getHeight(), Config.RGB_565);
    Canvas canvas = new Canvas(output);
    Paint paint = new Paint();
    // Add 50 to the red and green channels; red plus green reads as yellow
    ColorMatrix cm = new ColorMatrix();
    float[] array = {
        1, 0, 0, 0, 50,
        0, 1, 0, 0, 50,
        0, 0, 1, 0, 0,
        0, 0, 0, 1, 0,
    };
    cm.set(array);
    paint.setColorFilter(new ColorMatrixColorFilter(cm));
    canvas.drawBitmap(bitmap, 0, 0, paint);
    return output;
}
Explanation:
An image is stored as the RGBA value of each pixel, and for processing those four values are extended to a five-row column vector whose last entry is 1. This makes the arithmetic very convenient: a single matrix multiplication can scale any or all of the components, or add a constant to them. For example, take an image whose every pixel has RGBA {100, 100, 100, 255} (a uniform gray), and suppose we want to double the red component and add 10 to everything else. Treat the pixel as the five-row vector {100, 100, 100, 255, 1} and multiply it by the matrix {2,0,0,0,0 / 0,1,0,0,10 / 0,0,1,0,10 / 0,0,0,1,10}. The result is {200, 110, 110, 265}, with the alpha clamped back down to 255. The yellowed-photo algorithm here simply adds 50 to each pixel's red and green values; red and green mixed give yellow.
For more detail see: another author's blog post.
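The worked example above can be checked directly. Here is a small plain-Java sketch of the 4x5 color-matrix multiply (my own illustration of the math, not Android's ColorMatrixColorFilter): each output channel is the dot product of one matrix row with the extended pixel vector {R, G, B, A, 1}, clamped to [0, 255].

```java
// 4x5 color-matrix applied to one pixel, with clamping.
public class ColorMatrixDemo {
    static int[] apply(float[] m, int r, int g, int b, int a) {
        float[] v = { r, g, b, a, 1 };
        int[] out = new int[4];
        for (int row = 0; row < 4; row++) {
            float sum = 0;
            for (int col = 0; col < 5; col++) {
                sum += m[row * 5 + col] * v[col];
            }
            out[row] = Math.min(255, Math.max(0, (int) sum));
        }
        return out;
    }

    public static void main(String[] args) {
        // the matrix from the text: double red, add 10 to the other channels
        float[] m = {
            2, 0, 0, 0, 0,
            0, 1, 0, 0, 10,
            0, 0, 1, 0, 10,
            0, 0, 0, 1, 10,
        };
        int[] out = apply(m, 100, 100, 100, 255);
        System.out.println(out[0] + "," + out[1] + "," + out[2] + "," + out[3]);
        // -> 200,110,110,255 (the raw alpha 265 clamps to 255)
    }
}
```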
6: Distorting mirror
Effect:
Code:
jintArray Java_com_spore_meitu_jni_ImageUtilEngine_toHahajing
        (JNIEnv* env, jobject thiz, jintArray buf, jint width, jint height,
         jint centerX, jint centerY, jint radius, jfloat multiple) {
    jint* cbuf = (*env)->GetIntArrayElements(env, buf, 0);
    int newSize = width * height;
    jint rbuf[newSize];
    float xishu = multiple;
    int real_radius = (int) (radius / xishu);
    int i = 0, j = 0;
    for (i = 0; i < width; i++) {
        for (j = 0; j < height; j++) {
            int curr_color = cbuf[j * width + i];
            int newR = red(curr_color);
            int newG = green(curr_color);
            int newB = blue(curr_color);
            int newA = alpha(curr_color);
            int distance = (int) ((centerX - i) * (centerX - i)
                    + (centerY - j) * (centerY - j));
            if (distance < radius * radius) {
                /* inside the circle: sample from a point pulled toward the
                   centre, with the pull weakening toward the rim */
                int src_x = (int) ((float) (i - centerX) / xishu);
                int src_y = (int) ((float) (j - centerY) / xishu);
                src_x = (int) (src_x * (sqrt(distance) / real_radius));
                src_y = (int) (src_y * (sqrt(distance) / real_radius));
                src_x = src_x + centerX;
                src_y = src_y + centerY;
                int src_color = cbuf[src_y * width + src_x];
                newR = red(src_color);
                newG = green(src_color);
                newB = blue(src_color);
                newA = alpha(src_color);
            }
            newR = min(255, max(0, newR));
            newG = min(255, max(0, newG));
            newB = min(255, max(0, newB));
            newA = min(255, max(0, newA));
            rbuf[j * width + i] = ARGB(newA, newR, newG, newB);
        }
    }
    jintArray result = (*env)->NewIntArray(env, newSize);
    (*env)->SetIntArrayRegion(env, result, 0, newSize, rbuf);
    (*env)->ReleaseIntArrayElements(env, buf, cbuf, 0);
    return result;
}
Explanation:
I don't understand why the author pushed a simple image filter down to the JNI layer. How much speed does that really gain? It feels like a naked taunt at our skills. I can't be bothered to read raw C right now; I'll read it when I'm in the mood and update this post. My guess at the principle: based on the mirror's radius, with the center point as the origin, each pixel's coordinates are displaced and stretched, with pixels closer to the center stretched the most.
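The coordinate mapping in the JNI code can be lifted into plain Java to see what it does (my own transcription of the sampling logic, ignoring the pixel copying): inside the circle, each destination pixel samples from a source point pulled toward the centre, scaled by how far out it sits.

```java
// Distorting-mirror sampling: where does destination pixel (x, y)
// read its colour from?
public class HahaMirrorDemo {
    static int[] srcCoord(int x, int y, int cx, int cy, int radius, float multiple) {
        int distSq = (cx - x) * (cx - x) + (cy - y) * (cy - y);
        if (distSq >= radius * radius) {
            return new int[] { x, y }; // outside the circle: unchanged
        }
        int realRadius = (int) (radius / multiple);
        int sx = (int) ((x - cx) / multiple);
        int sy = (int) ((y - cy) / multiple);
        // the pull toward the centre weakens near the rim
        sx = (int) (sx * (Math.sqrt(distSq) / realRadius)) + cx;
        sy = (int) (sy * (Math.sqrt(distSq) / realRadius)) + cy;
        return new int[] { sx, sy };
    }

    public static void main(String[] args) {
        // at the centre every pixel samples the centre itself
        int[] c = srcCoord(100, 100, 100, 100, 50, 2.0f);
        System.out.println(c[0] + "," + c[1]); // 100,100
    }
}
```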
7: Magnifier
Straight to the code:
jintArray Java_com_spore_meitu_jni_ImageUtilEngine_toFangdajing
        (JNIEnv* env, jobject thiz, jintArray buf, jint width, jint height,
         jint centerX, jint centerY, jint radius, jfloat multiple) {
    jint* cbuf = (*env)->GetIntArrayElements(env, buf, 0);
    int newSize = width * height;
    jint rbuf[newSize]; /* new image pixel buffer */
    float xishu = multiple;
    int i = 0, j = 0;
    for (i = 0; i < width; i++) {
        for (j = 0; j < height; j++) {
            int curr_color = cbuf[j * width + i];
            int newR = red(curr_color);
            int newG = green(curr_color);
            int newB = blue(curr_color);
            int newA = alpha(curr_color);
            int distance = (int) ((centerX - i) * (centerX - i)
                    + (centerY - j) * (centerY - j));
            if (distance < radius * radius) {
                /* magnification: sample from a point closer to the centre */
                int src_x = (int) ((float) (i - centerX) / xishu + centerX);
                int src_y = (int) ((float) (j - centerY) / xishu + centerY);
                int src_color = cbuf[src_y * width + src_x];
                newR = red(src_color);
                newG = green(src_color);
                newB = blue(src_color);
                newA = alpha(src_color);
            }
            newR = min(255, max(0, newR));
            newG = min(255, max(0, newG));
            newB = min(255, max(0, newB));
            newA = min(255, max(0, newA));
            rbuf[j * width + i] = ARGB(newA, newR, newG, newB);
        }
    }
    jintArray result = (*env)->NewIntArray(env, newSize);
    (*env)->SetIntArrayRegion(env, result, 0, newSize, rbuf);
    (*env)->ReleaseIntArrayElements(env, buf, cbuf, 0);
    return result;
}
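The magnifier is the distorting mirror without the radial falloff. Its coordinate mapping, transcribed into plain Java as my own sketch: inside the lens, each destination pixel samples from a point only 1/multiple of the way out from the centre, so the central region gets stretched across the whole circle.

```java
// Magnifier sampling: where does destination pixel (x, y) read from?
public class MagnifierDemo {
    static int[] srcCoord(int x, int y, int cx, int cy, int radius, float multiple) {
        int distSq = (cx - x) * (cx - x) + (cy - y) * (cy - y);
        if (distSq >= radius * radius) {
            return new int[] { x, y }; // outside the lens: unchanged
        }
        int sx = (int) ((x - cx) / multiple + cx);
        int sy = (int) ((y - cy) / multiple + cy);
        return new int[] { sx, sy };
    }

    public static void main(String[] args) {
        // a pixel 20px right of the centre samples from only 10px right: 2x zoom
        int[] s = srcCoord(120, 100, 100, 100, 50, 2.0f);
        System.out.println(s[0] + "," + s[1]); // 110,100
    }
}
```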
8: Emboss
Effect:
Code:
public static Bitmap toFuDiao(Bitmap mBitmap) {
    int mBitmapWidth = mBitmap.getWidth();
    int mBitmapHeight = mBitmap.getHeight();
    Bitmap bmpReturn = Bitmap.createBitmap(mBitmapWidth, mBitmapHeight, Bitmap.Config.RGB_565);
    int preColor = mBitmap.getPixel(0, 0);
    int prepreColor = 0;
    for (int i = 0; i < mBitmapWidth; i++) {
        for (int j = 0; j < mBitmapHeight; j++) {
            int curr_color = mBitmap.getPixel(i, j);
            // Difference against the pixel two steps back, offset to grey.
            // (The original source compared mismatched channels, red against
            // green and green against blue, which looks like a bug; here each
            // channel is compared with itself.)
            int r = Color.red(curr_color) - Color.red(prepreColor) + 127;
            int g = Color.green(curr_color) - Color.green(prepreColor) + 127;
            int b = Color.blue(curr_color) - Color.blue(prepreColor) + 127;
            int a = Color.alpha(curr_color);
            bmpReturn.setPixel(i, j, Color.argb(a, r, g, b));
            prepreColor = preColor;
            preColor = curr_color;
        }
    }
    // Finally desaturate so the relief reads as stone grey
    Canvas c = new Canvas(bmpReturn);
    Paint paint = new Paint();
    ColorMatrix cm = new ColorMatrix();
    cm.setSaturation(0);
    paint.setColorFilter(new ColorMatrixColorFilter(cm));
    c.drawBitmap(bmpReturn, 0, 0, paint);
    return bmpReturn;
}
Explanation:
Look at a real relief carving and the pattern is obvious: wherever the color jumps, a groove gets carved. (127, 127, 127) is a deep gray close to the color of stone, so it serves as the base color here. The algorithm subtracts an earlier pixel's RGB values from the current pixel's and adds 127 to get the current pixel's color.
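The per-channel arithmetic is worth seeing in isolation. A minimal sketch of my own (with clamping added, which the original code omits): flat regions settle at mid-grey, and only edges deviate from it.

```java
// Emboss on a single channel: difference against a previous pixel,
// offset to grey, clamped to the valid range.
public class EmbossDemo {
    static int emboss(int curr, int prev) {
        return Math.min(255, Math.max(0, curr - prev + 127));
    }

    public static void main(String[] args) {
        System.out.println(emboss(100, 100)); // flat area -> 127 (grey)
        System.out.println(emboss(200, 100)); // rising edge -> 227 (bright)
        System.out.println(emboss(100, 200)); // falling edge -> 27 (dark)
    }
}
```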
9: Film negative
Effect:
Code:
jintArray Java_com_spore_meitu_jni_ImageUtilEngine_toDipian
        (JNIEnv* env, jobject thiz, jintArray buf, jint width, jint height) {
    jint* cbuf = (*env)->GetIntArrayElements(env, buf, 0);
    LOGE("Bitmap Buffer %d %d", cbuf[0], cbuf[1]);
    int newSize = width * height;
    jint rbuf[newSize];
    int i = 0, j = 0;
    for (i = 0; i < width; i++) {
        for (j = 0; j < height; j++) {
            int curr_color = cbuf[j * width + i];
            /* invert each colour channel, keep alpha */
            int r = 255 - red(curr_color);
            int g = 255 - green(curr_color);
            int b = 255 - blue(curr_color);
            int a = alpha(curr_color);
            rbuf[j * width + i] = ARGB(a, r, g, b);
        }
    }
    jintArray result = (*env)->NewIntArray(env, newSize);
    (*env)->SetIntArrayRegion(env, result, 0, newSize, rbuf);
    (*env)->ReleaseIntArrayElements(env, buf, cbuf, 0);
    return result;
}

Explanation:
The implementation takes each pixel's RGB values as their difference from 255, and the result really does look like a film negative, though I haven't worked out why this computation produces one. I'll update when I do.
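The inversion is easy to play with on its own. A tiny sketch of my own; one property worth noting is that inverting twice restores the original value, which matches how printing a photo from a negative re-inverts it.

```java
// Film negative on one channel: the complement with respect to 255.
public class NegativeDemo {
    static int invert(int c) {
        return 255 - c;
    }

    public static void main(String[] args) {
        System.out.println(invert(0));           // 255: black becomes white
        System.out.println(invert(invert(180))); // 180: inverting twice restores
    }
}
```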
10: Oil painting
Effect:
Code:
public static Bitmap toYouHua(Bitmap bmpSource) {
    Bitmap bmpReturn = Bitmap.createBitmap(bmpSource.getWidth(),
            bmpSource.getHeight(), Bitmap.Config.RGB_565);
    int width = bmpSource.getWidth();
    int height = bmpSource.getHeight();
    Random rnd = new Random();
    int iModel = 10;
    for (int i = width - iModel; i > 1; i--) {
        for (int j = height - iModel; j > 1; j--) {
            // Copy the colour of a randomly offset neighbour
            int iPos = rnd.nextInt(100000) % iModel;
            int color = bmpSource.getPixel(i + iPos, j + iPos);
            bmpReturn.setPixel(i, j, color);
        }
    }
    return bmpReturn;
}

Explanation:
Hats off to this algorithm, or rather, shame on me: when I saw the effect I tried to guess the principle first, and this is the one I never figured out. An oil painting is made with brush strokes, and a brush isn't precise; it drags color that belongs at one point over to a neighboring point. The implementation picks a random number within a small range, and each point takes the color of the pixel at the coordinate shifted by that random amount.
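The random-neighbour copy can be demonstrated without bitmaps at all. A sketch of my own on a 1-D array of "pixels" (the window parameter plays the role of iModel in the post, and the seed is only there to keep runs reproducible):

```java
import java.util.Random;

// Oil-painting smear in miniature: each pixel copies the colour of a
// randomly offset neighbour within a small window.
public class OilPaintDemo {
    static int[] smear(int[] pixels, int window, long seed) {
        Random rnd = new Random(seed);
        int[] out = pixels.clone();
        for (int i = 0; i < pixels.length - window; i++) {
            int offset = rnd.nextInt(window);
            out[i] = pixels[i + offset]; // drag a nearby colour onto this pixel
        }
        return out;
    }

    public static void main(String[] args) {
        int[] src = { 10, 20, 30, 40, 50, 60, 70, 80 };
        int[] dst = smear(src, 3, 42L);
        // every output value still comes from the source within the window
        System.out.println(dst.length); // 8
    }
}
```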
11: Blur
Effect:
Code:
public static Bitmap toMohu(Bitmap bmpSource, int Blur) {
    Bitmap bmpReturn = Bitmap.createBitmap(bmpSource.getWidth(),
            bmpSource.getHeight(), Bitmap.Config.ARGB_8888);
    int pixels[] = new int[bmpSource.getWidth() * bmpSource.getHeight()];
    int pixelsRawSource[] = new int[pixels.length * 3];
    int pixelsRawNew[] = new int[pixels.length * 3];
    bmpSource.getPixels(pixels, 0, bmpSource.getWidth(), 0, 0,
            bmpSource.getWidth(), bmpSource.getHeight());
    for (int k = 1; k <= Blur; k++) {
        // Split each pixel into its r, g, b components
        for (int i = 0; i < pixels.length; i++) {
            pixelsRawSource[i * 3 + 0] = Color.red(pixels[i]);
            pixelsRawSource[i * 3 + 1] = Color.green(pixels[i]);
            pixelsRawSource[i * 3 + 2] = Color.blue(pixels[i]);
        }
        // Each component becomes the average of its four neighbours
        // (one row up, one row down, one pixel left, one pixel right)
        int CurrentPixel = bmpSource.getWidth() * 3 + 3;
        for (int i = 0; i < bmpSource.getHeight() - 3; i++) {
            for (int j = 0; j < bmpSource.getWidth() * 3; j++) {
                CurrentPixel += 1;
                int sumColor = pixelsRawSource[CurrentPixel - bmpSource.getWidth() * 3]
                        + pixelsRawSource[CurrentPixel - 3]
                        + pixelsRawSource[CurrentPixel + 3]
                        + pixelsRawSource[CurrentPixel + bmpSource.getWidth() * 3];
                pixelsRawNew[CurrentPixel] = sumColor / 4;
            }
        }
        for (int i = 0; i < pixels.length; i++) {
            pixels[i] = Color.rgb(pixelsRawNew[i * 3 + 0],
                    pixelsRawNew[i * 3 + 1], pixelsRawNew[i * 3 + 2]);
        }
    }
    bmpReturn.setPixels(pixels, 0, bmpSource.getWidth(), 0, 0,
            bmpSource.getWidth(), bmpSource.getHeight());
    return bmpReturn;
}

Explanation:
The implementation replaces each pixel with the average of its neighbours (the four pixels above, below, left and right), which makes the image look blurred. If the window were larger, and a weighted average were used instead of a plain one, the result would certainly look better. But it is far too slow as it stands. An app like Muzei not only blurs much faster, it even animates a gradual transition into the blur, so they are clearly not using this algorithm. I'm still guessing at how they do it and will update once I've reproduced it.
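Stripped of the flattened-array bookkeeping, the kernel is just a four-neighbour average. A sketch of my own on a single channel, using a 2-D array so the indexing is readable:

```java
// Box-style blur on one channel: each interior pixel becomes the average
// of its four neighbours (up, down, left, right). Border pixels are left 0
// here for simplicity.
public class BlurDemo {
    static int[][] blurOnce(int[][] src) {
        int h = src.length, w = src[0].length;
        int[][] out = new int[h][w];
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                out[y][x] = (src[y - 1][x] + src[y + 1][x]
                           + src[y][x - 1] + src[y][x + 1]) / 4;
            }
        }
        return out;
    }

    public static void main(String[] args) {
        int[][] img = {
            { 0, 0, 0 },
            { 100, 100, 100 },
            { 200, 200, 200 },
        };
        // a smooth vertical gradient is a fixed point of the averaging:
        // (0 + 200 + 100 + 100) / 4 = 100
        System.out.println(blurOnce(img)[1][1]); // 100
    }
}
```

Repeating the pass (the Blur parameter in the post) spreads each pixel's influence one step further per iteration, which is why higher values look blurrier but cost proportionally more time.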