OCR and OpenCV: anyone who has worked on image recognition is probably familiar with these two terms.
OCR (Optical Character Recognition) refers to the process in which an electronic device (such as a scanner or digital camera) examines characters printed on paper, determines their shapes by detecting patterns of dark and bright areas, and then translates those shapes into computer text using character recognition methods. This gives us a programming interface for reading the text in an image, for example turning a document photographed with a phone directly into a Word file, or recognizing ID cards and bank cards.
So what is OpenCV?
OpenCV stands for Open Source Computer Vision Library. It is a cross-platform computer vision library released under the BSD license (open source) that runs on Linux, Windows, and Mac OS. It is lightweight and efficient, built from a set of C functions and a small number of C++ classes, provides interfaces for languages such as Python, Ruby, and MATLAB, and implements many general-purpose algorithms for image processing and computer vision.
That is the definition given by Baidu Baike; in plain terms, it is simply a library we can program against.
If you want to use OCR on Android, you can use Google's open-source project tesseract-ocr.
GitHub download address: https://github.com/justin/tesseract-ocr
Today I won't cover how to compile tesseract-ocr itself. Instead, I'll mainly walk through integrating the QR code scanning project with tesseract-ocr into a flow that recognizes ID card numbers. Later I will compile them into libraries for everyone to use.
The OCR recognition logic has already been wrapped in a simple class, OCR:
package com.dynamsoft.tessocr;

import android.content.Context;
import android.content.res.AssetManager;
import android.graphics.Bitmap;
import android.os.Environment;

import com.googlecode.tesseract.android.TessBaseAPI;

import java.io.File;

/**
 * Created by CYL on 2016/3/26.
 * email:[email protected]
 * This class is the interface for calling OCR.
 * Recognition is a time-consuming operation; run it on a worker thread.
 */
public class OCR {

    private TessBaseAPI mTess;
    private boolean flag;
    private Context context;
    private AssetManager assetManager;

    public OCR() {
        // TODO Auto-generated constructor stub
        mTess = new TessBaseAPI();
        String datapath = Environment.getExternalStorageDirectory() + "/tesseract/";
        String language = "eng";
        // Put your language pack here: in tessdata/ under tesseract/ on the SD card
        File dir = new File(datapath + "tessdata/");
        if (!dir.exists())
            dir.mkdirs();
        flag = mTess.init(datapath, language);
    }

    /**
     * Recognize the text on a bitmap.
     * @param bitmap the image to recognize
     * @return the recognized text
     */
    public String getOCRResult(Bitmap bitmap) {
        String result = "dismiss langues";
        if (flag) {
            mTess.setImage(bitmap);
            result = mTess.getUTF8Text();
        }
        return result;
    }

    public void onDestroy() {
        if (mTess != null)
            mTess.end();
    }
}
Create an instance and call getOCRResult. Note that recognition is time-consuming, so run it on a worker thread to avoid ANR problems.
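For reference, here is a minimal usage sketch. The helper class OcrHelper, the method name recognizeAsync, and the TextView used to display the result are illustrative choices of mine, not part of the original project; only the OCR class and getOCRResult come from the code above.

import android.graphics.Bitmap;
import android.os.Handler;
import android.os.Looper;
import android.widget.TextView;

import com.dynamsoft.tessocr.OCR;

public class OcrHelper {

    /**
     * Runs OCR on a worker thread and posts the result back to the UI thread.
     * The Bitmap and TextView are supplied by the caller.
     */
    public static void recognizeAsync(final Bitmap bitmap, final TextView output) {
        final Handler uiHandler = new Handler(Looper.getMainLooper());
        new Thread(new Runnable() {
            @Override
            public void run() {
                OCR ocr = new OCR();                          // initializes tesseract with the "eng" data
                final String text = ocr.getOCRResult(bitmap); // time-consuming, must stay off the UI thread
                ocr.onDestroy();                              // release native resources
                uiHandler.post(new Runnable() {
                    @Override
                    public void run() {
                        output.setText(text);                 // back on the UI thread
                    }
                });
            }
        }).start();
    }
}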
Next we need to integrate the recognition into the QR code scanning flow.
The following article gives a fairly detailed introduction to the QR code scanning project, including the main classes of the ZXing library and what each of them does:
http://www.cnblogs.com/weixing/archive/2013/08/28/3287120.html
Thinking about image recognition at a high level: we first need to obtain an image before we can recognize it, and once recognition succeeds we should return the data and let the user know that recognition is complete.
So first, how do we obtain the image, i.e. the bitmap? Judging from the project's main classes, we should look in CaptureActivityHandler's decode handling. Whether we are recognizing a QR code, a picture, or an ID card, in the end we are always recognizing a bitmap, so this is where we can get hold of the image captured by the camera.
DecodeHandler
/*
 * Copyright (C) 2010 ZXing authors
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package com.sj.app.decoding;

import android.graphics.Bitmap;
import android.os.Bundle;
import android.os.Handler;
import android.os.Looper;
import android.os.Message;
import android.util.Log;

import com.dynamsoft.tessocr.OCR;
import com.google.zxing.BinaryBitmap;
import com.google.zxing.DecodeHintType;
import com.google.zxing.MultiFormatReader;
import com.google.zxing.ReaderException;
import com.google.zxing.Result;
import com.google.zxing.common.HybridBinarizer;
import com.sj.app.camera.CameraManager;
import com.sj.app.camera.PlanarYUVLuminanceSource;
import com.sj.app.utils.IdMatch;
import com.sj.erweima.MipcaActivityCapture;
import com.sj.erweima.R;

import java.util.Hashtable;
import java.util.List;

final class DecodeHandler extends Handler {

    private static final String TAG = DecodeHandler.class.getSimpleName();

    private final MipcaActivityCapture activity;
    private final MultiFormatReader multiFormatReader;

    DecodeHandler(MipcaActivityCapture activity, Hashtable<DecodeHintType, Object> hints) {
        multiFormatReader = new MultiFormatReader();
        multiFormatReader.setHints(hints);
        this.activity = activity;
    }

    @Override
    public void handleMessage(Message message) {
        switch (message.what) {
            case R.id.decode:
                // Log.d(TAG, "Got decode message");
                decode((byte[]) message.obj, message.arg1, message.arg2);
                break;
            case R.id.quit:
                Looper.myLooper().quit();
                break;
        }
    }

    /**
     * Decode the data within the viewfinder rectangle, and time how long it
     * took. For efficiency, reuse the same reader objects from one decode to
     * the next.
     *
     * @param data   The YUV preview frame.
     * @param width  The width of the preview frame.
     * @param height The height of the preview frame.
     */
    private void decode(byte[] data, int width, int height) {
        long start = System.currentTimeMillis();
        Result rawResult = null;
        // modify here
        byte[] rotatedData = new byte[data.length];
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++)
                rotatedData[x * height + height - y - 1] = data[x + y * width];
        }
        int tmp = width; // Here we are swapping, that's the difference to #11
        width = height;
        height = tmp;

        PlanarYUVLuminanceSource source = CameraManager.get()
                .buildLuminanceSource(rotatedData, width, height);
        BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));
        try {
            // the image captured by the camera
            Bitmap image = source.renderCroppedGreyscaleBitmap();
            doorc(source);
            rawResult = multiFormatReader.decodeWithState(bitmap);
        } catch (ReaderException re) {
            // continue
        } finally {
            multiFormatReader.reset();
        }

        if (rawResult != null) {
            long end = System.currentTimeMillis();
            Log.d(TAG, "Found barcode (" + (end - start) + " ms):\n" + rawResult.toString());
            Message message = Message.obtain(activity.getHandler(), R.id.decode_succeeded, rawResult);
            Bundle bundle = new Bundle();
            bundle.putParcelable(DecodeThread.BARCODE_BITMAP, source.renderCroppedGreyscaleBitmap());
            message.setData(bundle);
            // Log.d(TAG, "Sending decode succeeded message...");
            message.sendToTarget();
        } else {
            Message message = Message.obtain(activity.getHandler(), R.id.decode_failed);
            message.sendToTarget();
        }
    }

    private Handler handler = new Handler() {
        public void handleMessage(Message msg) {
            CardId cardId = (CardId) msg.obj;
            if (cardId != null) {
                Message message = Message.obtain(activity.getHandler(), R.id.decode_succeeded, cardId.id);
                Bundle bundle = new Bundle();
                bundle.putParcelable(DecodeThread.BARCODE_BITMAP, cardId.bitmap);
                message.setData(bundle);
                // Log.d(TAG, "Sending decode succeeded message...");
                message.sendToTarget();
            }
        };
    };

    private void doorc(final PlanarYUVLuminanceSource source) {
        new Thread(new Runnable() {
            @Override
            public void run() {
                Bitmap bitmap = source.renderCroppedGreyscaleBitmap();
                String id = new OCR().getOCRResult(bitmap);
                if (id != null) {
                    List<String> list = IdMatch.machId(id);
                    if (list != null && list.size() > 0) {
                        String cardId = list.get(0);
                        if (cardId != null) {
                            Message msg = Message.obtain();
                            CardId cardId2 = new CardId(cardId, bitmap);
                            msg.obj = cardId2;
                            handler.sendMessage(msg);
                        }
                    }
                }
            }
        }).start();
    }

    public class CardId {
        private String id;
        private Bitmap bitmap;

        public CardId(String id, Bitmap bitmap) {
            super();
            this.id = id;
            this.bitmap = bitmap;
        }

        public String getId() {
            return id;
        }

        public void setId(String id) {
            this.id = id;
        }

        public Bitmap getBitmap() {
            return bitmap;
        }

        public void setBitmap(Bitmap bitmap) {
            this.bitmap = bitmap;
        }
    }
}
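The IdMatch.machId helper called in doorc above is not shown in this post; its job is simply to pull ID-card-number-like strings out of the raw OCR text. A minimal sketch under that assumption (the class name, package, and method signature match the call site, but the regex and implementation are my own) could be:

package com.sj.app.utils;

import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class IdMatch {

    // 18-character mainland ID number: 17 digits followed by a digit or X/x.
    // This pattern is an assumption about what machId() is meant to match.
    private static final Pattern ID_PATTERN = Pattern.compile("\\d{17}[\\dXx]");

    /** Returns every ID-number-like substring found in the OCR output. */
    public static List<String> machId(String ocrText) {
        List<String> result = new ArrayList<String>();
        if (ocrText == null) {
            return result;
        }
        Matcher matcher = ID_PATTERN.matcher(ocrText);
        while (matcher.find()) {
            result.add(matcher.group());
        }
        return result;
    }
}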
When recognition succeeds, the result is posted back to the UI thread through the handler. As for the scanning frame, we can adjust it to suit our needs.
The CameraManager class controls the size of the scanning frame:
public Rect getFramingRect() {
    Point screenResolution = configManager.getScreenResolution();
    if (framingRect == null) {
        if (camera == null) {
            return null;
        }
        int width = screenResolution.x * 7 / 8;
        if (width < MIN_FRAME_WIDTH) {
            width = MIN_FRAME_WIDTH;
        } else if (width > MAX_FRAME_WIDTH) {
            // width = MAX_FRAME_WIDTH;
        }
        int height = screenResolution.y * 3 / 4;
        if (height < MIN_FRAME_HEIGHT) {
            height = MIN_FRAME_HEIGHT;
        } else if (height > MAX_FRAME_HEIGHT) {
            height = MAX_FRAME_HEIGHT;
        }
        int leftOffset = (screenResolution.x - width) / 2;
        int topOffset = (screenResolution.y - height) / 2;
        framingRect = new Rect(leftOffset, topOffset, leftOffset + width, topOffset + height);
        Log.d(TAG, "Calculated framing rect: " + framingRect);
    }
    return framingRect;
}
Modifying this method changes the size of the scanning frame.
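For example, to get a wider, flatter frame that better matches the proportions of an ID card, you could change the two factors used above (7/8 of the screen width and 3/4 of the screen height). The 9/10 and 1/3 values below are illustrative choices of mine, not values from the original project:

// Inside CameraManager.getFramingRect(): a wider, flatter frame for ID cards.
// The 9/10 and 1/3 factors are illustrative, not from the original project.
int width = screenResolution.x * 9 / 10;
if (width < MIN_FRAME_WIDTH) {
    width = MIN_FRAME_WIDTH;
}
int height = screenResolution.y / 3;
if (height < MIN_FRAME_HEIGHT) {
    height = MIN_FRAME_HEIGHT;
}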