References:
https://blog.csdn.net/u010097644/article/details/56849758 (environment setup)
https://blog.csdn.net/linshuhe1/article/details/51208745# (example demo)
Environment:
Android Studio: 3.2.0
Android SDK Tools: 26.1.1
Android NDK: 16.1.4479499
Windows 10, 64-bit
OpenCV for Android: 3.3.0, downloaded as the pack file for this version from the official OpenCV site
Once installed, the NDK sits in the sdk/ndk-bundle folder of the Android SDK.
Open Android Studio and create a new project; you must check "Include C++ Support".
For Minimum SDK, choose API 21 or above.
Select Empty Activity.
Keep the default Activity Name and Layout Name.
Then check these two options as shown in the figure below:
Copy the OpenCV-android-sdk folder, extracted from the OpenCV for Android package you downloaded, into the project at the same level as the app folder.
Then open the CMakeLists.txt file under app and edit it, adding the following code (the lines I added are marked "# added"):
The complete content of the CMakeLists.txt file is as follows:
cmake_minimum_required(VERSION 3.4.1)
#set(OpenCV_DIR C:/Android/OpenCV-android-sdk/sdk/native/jni) # added
set(OpenCV_DIR ${CMAKE_SOURCE_DIR}/../OpenCV-android-sdk/sdk/native/jni) # added
find_package(OpenCV REQUIRED) # added
include_directories(${CMAKE_SOURCE_DIR}/../OpenCV-android-sdk/sdk/native/jni/include) # added
add_library( # Sets the name of the library.
             native-lib
             # Sets the library as a shared library.
             SHARED
             # Provides a relative path to your source file(s).
             src/main/cpp/native-lib.cpp)

find_library( # Sets the name of the path variable.
              log-lib
              # Specifies the name of the NDK library that
              # you want CMake to locate.
              log)

target_link_libraries( # Specifies the target library.
                       native-lib
                       # Links the target library to the log library
                       # included in the NDK.
                       ${log-lib}
                       ${OpenCV_LIBS}) # added
Then add the following inside android { defaultConfig { } } in the app module's build.gradle:
ndk {
    abiFilters 'armeabi-v7a'
}
The complete content of the app module's build.gradle is as follows:
apply plugin: 'com.android.application'

android {
    compileSdkVersion 28
    defaultConfig {
        applicationId "com.frdc.opencv_androiddemo"
        minSdkVersion 21
        targetSdkVersion 28
        versionCode 1
        versionName "1.0"
        testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
        externalNativeBuild {
            cmake {
                cppFlags "-frtti -fexceptions"
            }
        }
        ndk {
            abiFilters 'armeabi-v7a'
        }
        sourceSets.main {
            jniLibs.srcDir 'src/main/libs' // set the .so file directory to libs (where the JNI libraries live)
            jni.srcDirs = [] // disable the automatic ndk-build call
        }
    }
    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
        }
    }
    externalNativeBuild {
        cmake {
            path "CMakeLists.txt"
        }
    }
}

dependencies {
    implementation fileTree(include: ['*.jar'], dir: 'libs')
    implementation 'com.android.support:appcompat-v7:28.0.0-alpha1'
    implementation 'com.android.support.constraint:constraint-layout:1.1.0'
    testImplementation 'junit:junit:4.12'
    androidTestImplementation 'com.android.support.test:runner:1.0.2'
    androidTestImplementation 'com.android.support.test.espresso:espresso-core:3.0.2'
    implementation project(':openCVLibrary330')
}
The abiFilters entry acts as a filter; choose the ABIs according to the CPU type of your own device (see the example snippet after the list below).
*CPU (ABI) types of Android devices:
armeabi-v7a: 7th-generation and newer ARM processors. Most Android devices produced since 2011 use it.
arm64-v8a: 8th-generation, 64-bit ARM processors; few devices at the time, the Samsung Galaxy S6 being one of them.
armeabi: 5th- and 6th-generation ARM processors, used mostly by early phones.
x86: mostly tablets and emulators.
x86_64: 64-bit tablets.
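For example, if you want the APK to carry both 32-bit and 64-bit ARM libraries (a sketch only; it assumes you also copy the matching OpenCV .so file for each ABI as described later), the filter could be written as:

ndk {
    abiFilters 'armeabi-v7a', 'arm64-v8a'
}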
At this point, as long as you include the OpenCV headers you can happily use OpenCV in the native layer. Just be sure to put your functions inside an extern "C" {} block, so that this code is compiled and linked with C-style rather than C++ linkage and can therefore be found and called from C-style code such as the JNI runtime. If you skip this, the app crashes on launch and logcat reports "No implementation found for 'your native method name'".
Also, you now only need to declare the native-layer function in the Java layer with the native keyword, then press ALT+ENTER and pick the first suggestion to have the function header generated automatically in the cpp file; there is no longer any need to run javah to obtain the correct function signature as in the old workflow.
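As a minimal sketch of that workflow (the method name stringFromJNI and the Mat usage are illustrative only, not taken from the original post), the Java declaration and the matching stub in src/main/cpp/native-lib.cpp would look roughly like this:

// MainActivity.java: declare the method, then ALT+ENTER to generate the C++ stub
// (the "Include C++ Support" template already loads the library via System.loadLibrary("native-lib"))
public native String stringFromJNI();

// src/main/cpp/native-lib.cpp
#include <jni.h>
#include <string>
#include <opencv2/core.hpp>

extern "C" {

JNIEXPORT jstring JNICALL
Java_com_frdc_opencv_1androiddemo_MainActivity_stringFromJNI(JNIEnv *env, jobject /* this */) {
    // Trivial use of OpenCV in the native layer: build a small Mat and report whether it is empty
    cv::Mat m = cv::Mat::zeros(3, 3, CV_8UC1);
    std::string msg = m.empty() ? "Mat is empty" : "OpenCV Mat created in native code";
    return env->NewStringUTF(msg.c_str());
}

} // extern "C"

Note that the underscore in the package name opencv_androiddemo becomes _1 in the JNI function name.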
Next, import the java folder from the OpenCV for Android directory as a module. The steps are File -> New -> Import Module, then choose the path; since I already copied the OpenCV-android-sdk folder into the project, the Source directory path is typically: yourpath\OpenCV-android-sdk\sdk\java
It will suggest a Module name of openCVLibrary330 (the default name depends on the OpenCV version); you can see the module named openCVLibrary330 that I imported in the project structure screenshot above. Leave the remaining options unchanged and click Finish. After the import, change the relevant version numbers in the build.gradle under openCVLibrary330 so they match those in the app module's build.gradle.
For reference, the top-level (project) build.gradle, where the Gradle plugin version is set, looks like this:
// Top-level build file where you can add configuration options common to all sub-projects/modules.
buildscript {
    repositories {
        google()
        jcenter()
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:3.2.0'
        // NOTE: Do not place your application dependencies here; they belong
        // in the individual module build.gradle files
    }
}

allprojects {
    repositories {
        google()
        jcenter()
    }
}

task clean(type: Delete) {
    delete rootProject.buildDir
}
Next click File -> Project Structure, then follow the steps in the screenshot: click app, Dependencies, the add button, and Module dependency; the openCVLibrary330 module you just imported is suggested by default. Click OK to add it.
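Adding the dependency through this dialog simply records it in the Gradle files; assuming the default layout Android Studio generates, the result should be equivalent to the implementation project(':openCVLibrary330') line already shown in the app build.gradle above, together with an entry along these lines in settings.gradle:

include ':app', ':openCVLibrary330'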
At this point, OpenCV autocompletion works in the code. The referenced post says the configuration is now complete and OpenCV can be used from the Java layer, but in my testing that is not enough: the build succeeds, an APK is generated and deploys fine, yet the app crashes on launch when you run it. So further configuration is needed: you still have to load libopencv_java3.so (for 3.x) or libopencv_java.so (for 2.4) in the Java layer.
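In the demo below this loading is done by OpenCVLoader.initDebug() in onResume(), which looks for the library inside the APK. If you prefer to load it explicitly, a static initializer like the following (a sketch; the name "opencv_java3" corresponds to libopencv_java3.so) is another common way:

static {
    // Load libopencv_java3.so packaged under jniLibs; an alternative to OpenCVLoader.initDebug()
    System.loadLibrary("opencv_java3");
}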
Create a libs folder under app/src/main, and inside it create a subdirectory for each target ABI. Then go to the libs directory of your local OpenCV for Android SDK, pick the matching ABI folder, e.g. OpenCV-android-sdk\sdk\native\libs\armeabi-v7a, and copy the libopencv_java3.so file from it into the folder you just created.
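With the armeabi-v7a ABI used above, the resulting layout (matching the jniLibs.srcDir 'src/main/libs' line in build.gradle) should look like this:

app/src/main/libs/
    armeabi-v7a/
        libopencv_java3.so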
Then sync the project. The referenced post says a jniLibs directory then appears under app in the Android view; I did not see one after doing this, but it does not seem to matter.
What the project does: grayscale conversion, Canny edge detection, Hist histogram calculation, Sobel edge detection, SEPIA (tone transform), ZOOM (magnifier), and PIXELIZE (pixelation).
A button is placed on the screen; each click switches to the next mode, cycling through them:
The activity_main.xml file:
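The original post shows this file only as a screenshot. A minimal layout consistent with the ids used in MainActivity below (camera_view for the camera preview and deal_btn for the button) might look like this; treat it as a sketch rather than the author's exact file:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">

    <!-- OpenCV camera preview; frames arrive in onCameraFrame() -->
    <org.opencv.android.JavaCameraView
        android:id="@+id/camera_view"
        android:layout_width="match_parent"
        android:layout_height="0dp"
        android:layout_weight="1" />

    <!-- Button that cycles through the processing modes -->
    <Button
        android:id="@+id/deal_btn"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:text="Switch mode" />

</LinearLayout>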
The content of AndroidManifest.xml is as follows:
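This file is also shown only as a screenshot in the original post. The essential parts are the camera permission and features that the OpenCV camera view needs; the sketch below uses a hypothetical but conventional structure:

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.frdc.opencv_androiddemo">

    <!-- Required so CameraBridgeViewBase can open the camera -->
    <uses-permission android:name="android.permission.CAMERA" />
    <uses-feature android:name="android.hardware.camera" android:required="false" />
    <uses-feature android:name="android.hardware.camera.autofocus" android:required="false" />

    <application
        android:allowBackup="true"
        android:icon="@mipmap/ic_launcher"
        android:label="OpenCV_AndroidDemo"
        android:theme="@style/AppTheme">
        <!-- Landscape orientation is typical for OpenCV camera samples -->
        <activity
            android:name=".MainActivity"
            android:screenOrientation="landscape">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>

</manifest>

Note that with targetSdkVersion 28 the CAMERA permission must also be granted at runtime (or manually in the app's settings); the MainActivity below does not request it itself.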
The content of MainActivity.java is as follows:
package com.frdc.opencv_androiddemo;

import android.app.Activity;
import android.os.Bundle;
import android.util.Log;
import android.view.View;
import android.view.View.OnClickListener;
import android.widget.Button;

import org.opencv.android.BaseLoaderCallback;
import org.opencv.android.CameraBridgeViewBase;
import org.opencv.android.CameraBridgeViewBase.CvCameraViewFrame;
import org.opencv.android.CameraBridgeViewBase.CvCameraViewListener2;
import org.opencv.android.LoaderCallbackInterface;
import org.opencv.android.OpenCVLoader;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfFloat;
import org.opencv.core.MatOfInt;
import org.opencv.core.Point;
import org.opencv.core.Scalar;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

import java.util.Arrays;
public class MainActivity extends Activity implements CvCameraViewListener2 {

    private String TAG = "OpenCV_Test";
    // OpenCV camera view interface
    private CameraBridgeViewBase mCVCamera;
    // Mats caching the data of each camera frame
    private Mat mRgba, mTmp;
    // Button widget
    private Button mButton;
    // Current processing mode
    private static int Cur_State = 0;

    private Size mSize0;
    private Mat mIntermediateMat;
    private MatOfInt mChannels[];
    private MatOfInt mHistSize;
    private int mHistSizeNum = 25;
    private Mat mMat0;
    private float[] mBuff;
    private MatOfFloat mRanges;
    private Point mP1;
    private Point mP2;
    private Scalar mColorsRGB[];
    private Scalar mColorsHue[];
    private Scalar mWhilte;
    private Mat mSepiaKernel;

    /**
     * Initialize OpenCV asynchronously via the OpenCV Manager Android service
     */
    BaseLoaderCallback mLoaderCallback = new BaseLoaderCallback(this) {
        @Override
        public void onManagerConnected(int status) {
            switch (status) {
                case LoaderCallbackInterface.SUCCESS:
                    Log.i(TAG, "OpenCV loaded successfully");
                    mCVCamera.enableView();
                    break;
                default:
                    break;
            }
        }
    };
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        mCVCamera = (CameraBridgeViewBase) findViewById(R.id.camera_view);
        mCVCamera.setCvCameraViewListener(this);
        mButton = (Button) findViewById(R.id.deal_btn);
        mButton.setOnClickListener(new OnClickListener() {
            @Override
            public void onClick(View v) {
                if (Cur_State < 8) {
                    // switch to the next mode
                    Cur_State++;
                } else {
                    // reset to the initial mode
                    Cur_State = 0;
                }
            }
        });
    }

    @Override
    public void onResume() {
        super.onResume();
        if (!OpenCVLoader.initDebug()) {
            Log.d(TAG, "OpenCV library not found!");
        } else {
            Log.d(TAG, "OpenCV library found inside package. Using it!");
            mLoaderCallback.onManagerConnected(LoaderCallbackInterface.SUCCESS);
        }
    }

    @Override
    public void onDestroy() {
        super.onDestroy();
        if (mCVCamera != null) {
            mCVCamera.disableView();
        }
    }
    @Override
    public void onCameraViewStarted(int width, int height) {
        mRgba = new Mat(height, width, CvType.CV_8UC4);
        mTmp = new Mat(height, width, CvType.CV_8UC4);
        mIntermediateMat = new Mat();
        mSize0 = new Size();
        mChannels = new MatOfInt[] { new MatOfInt(0), new MatOfInt(1), new MatOfInt(2) };
        mBuff = new float[mHistSizeNum];
        mHistSize = new MatOfInt(mHistSizeNum);
        mRanges = new MatOfFloat(0f, 256f);
        mMat0 = new Mat();
        mColorsRGB = new Scalar[] { new Scalar(200, 0, 0, 255), new Scalar(0, 200, 0, 255), new Scalar(0, 0, 200, 255) };
        mColorsHue = new Scalar[] {
                new Scalar(255, 0, 0, 255), new Scalar(255, 60, 0, 255), new Scalar(255, 120, 0, 255), new Scalar(255, 180, 0, 255), new Scalar(255, 240, 0, 255),
                new Scalar(215, 213, 0, 255), new Scalar(150, 255, 0, 255), new Scalar(85, 255, 0, 255), new Scalar(20, 255, 0, 255), new Scalar(0, 255, 30, 255),
                new Scalar(0, 255, 85, 255), new Scalar(0, 255, 150, 255), new Scalar(0, 255, 215, 255), new Scalar(0, 234, 255, 255), new Scalar(0, 170, 255, 255),
                new Scalar(0, 120, 255, 255), new Scalar(0, 60, 255, 255), new Scalar(0, 0, 255, 255), new Scalar(64, 0, 255, 255), new Scalar(120, 0, 255, 255),
                new Scalar(180, 0, 255, 255), new Scalar(255, 0, 255, 255), new Scalar(255, 0, 215, 255), new Scalar(255, 0, 85, 255), new Scalar(255, 0, 0, 255)
        };
        mWhilte = Scalar.all(255);
        mP1 = new Point();
        mP2 = new Point();

        // Fill sepia kernel
        mSepiaKernel = new Mat(4, 4, CvType.CV_32F);
        mSepiaKernel.put(0, 0, /* R */0.189f, 0.769f, 0.393f, 0f);
        mSepiaKernel.put(1, 0, /* G */0.168f, 0.686f, 0.349f, 0f);
        mSepiaKernel.put(2, 0, /* B */0.131f, 0.534f, 0.272f, 0f);
        mSepiaKernel.put(3, 0, /* A */0.000f, 0.000f, 0.000f, 1f);
    }

    @Override
    public void onCameraViewStopped() {
        mRgba.release();
        mTmp.release();
    }
    /**
     * All image processing is done here
     */
    @Override
    public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
        mRgba = inputFrame.rgba();
        Size sizeRgba = mRgba.size();
        int rows = (int) sizeRgba.height;
        int cols = (int) sizeRgba.width;
        Mat rgbaInnerWindow;
        int left = cols / 8;
        int top = rows / 8;
        int width = cols * 3 / 4;
        int height = rows * 3 / 4;
        switch (Cur_State) {
            case 1:
                // grayscale
                Imgproc.cvtColor(inputFrame.gray(), mRgba, Imgproc.COLOR_GRAY2RGBA, 4);
                break;
            case 2:
                // Canny edge detection
                mRgba = inputFrame.rgba();
                Imgproc.Canny(inputFrame.gray(), mTmp, 80, 100);
                Imgproc.cvtColor(mTmp, mRgba, Imgproc.COLOR_GRAY2RGBA, 4);
                break;
            case 3:
                // Hist histogram calculation
                Mat hist = new Mat();
                int thikness = (int) (sizeRgba.width / (mHistSizeNum + 10) / 5);
                if (thikness > 5) thikness = 5;
                int offset = (int) ((sizeRgba.width - (5 * mHistSizeNum + 4 * 10) * thikness) / 2);
                // RGB
                for (int c = 0; c < 3; c++) {
                    Imgproc.calcHist(Arrays.asList(mRgba), mChannels[c], mMat0, hist, mHistSize, mRanges);
                    Core.normalize(hist, hist, sizeRgba.height / 2, 0, Core.NORM_INF);
                    hist.get(0, 0, mBuff);
                    for (int h = 0; h