I have recently been learning how to use the Android camera for previewing and capturing still images. While going through various resources I found Google's Camera2 demo on GitHub, and this post is my personal walkthrough of that demo.
The sample project contains two implementations, one written in Java and one in Kotlin; I will go through them in turn.
1. The Java sample
The project structure is very simple: a single Activity, a single Fragment, and one custom View.
The Activity does nothing but host the Fragment, so it needs no further explanation.
Let's start by looking at the custom View.
/**
* A {@link TextureView} that can be adjusted to a specified aspect ratio.
*/
public class AutoFitTextureView extends TextureView {
private int mRatioWidth = 0;
private int mRatioHeight = 0;
public AutoFitTextureView(Context context) {
this(context, null);
}
public AutoFitTextureView(Context context, AttributeSet attrs) {
this(context, attrs, 0);
}
public AutoFitTextureView(Context context, AttributeSet attrs, int defStyle) {
super(context, attrs, defStyle);
}
/**
* Sets the aspect ratio for this view. The size of the view will be measured based on the ratio
* calculated from the parameters. Note that the actual sizes of parameters don't matter, that
* is, calling setAspectRatio(2, 3) and setAspectRatio(4, 6) make the same result.
*
* @param width Relative horizontal size
* @param height Relative vertical size
*/
public void setAspectRatio(int width, int height) {
if (width < 0 || height < 0) {
throw new IllegalArgumentException("Size cannot be negative.");
}
mRatioWidth = width;
mRatioHeight = height;
requestLayout();
}
@Override
protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
super.onMeasure(widthMeasureSpec, heightMeasureSpec);
int width = MeasureSpec.getSize(widthMeasureSpec);
int height = MeasureSpec.getSize(heightMeasureSpec);
if (0 == mRatioWidth || 0 == mRatioHeight) {
setMeasuredDimension(width, height);
} else {
if (width < height * mRatioWidth / mRatioHeight) {
setMeasuredDimension(width, width * mRatioHeight / mRatioWidth);
} else {
setMeasuredDimension(height * mRatioWidth / mRatioHeight, height);
}
}
}
}
As you can see, this is a custom TextureView. When working with the camera, the TextureView is the surface on which the camera output is rendered, i.e. the widget we use for display.
This custom TextureView also overrides onMeasure, the method responsible for measuring the view's size. mRatioWidth and mRatioHeight hold an aspect ratio supplied from outside via setAspectRatio. When the ratio of the available width and height differs from that ratio, the size is recalculated: whichever dimension is the constraint is used in full, and the other is derived from the ratio. The goal is to keep the view's aspect ratio identical to the camera output's, otherwise the preview would look stretched. Each time a new ratio is set, requestLayout() is called, which triggers a re-measure so the display stays correct.
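As a quick illustration of how this view is meant to be used (a minimal sketch, not the demo's code; it assumes it lives inside the fragment and that a preview Size has already been chosen), the ratio is simply handed over once the preview size is known:
// Hypothetical helper: apply a chosen preview size to the AutoFitTextureView
// so the preview is rendered without stretching.
private void applyPreviewAspectRatio(AutoFitTextureView textureView, Size previewSize) {
    int orientation = getResources().getConfiguration().orientation;
    if (orientation == Configuration.ORIENTATION_LANDSCAPE) {
        // Landscape: the view's axes match the sensor's axes.
        textureView.setAspectRatio(previewSize.getWidth(), previewSize.getHeight());
    } else {
        // Portrait: swap width and height, since camera sizes are reported
        // in sensor (landscape) coordinates.
        textureView.setAspectRatio(previewSize.getHeight(), previewSize.getWidth());
    }
}
The demo does exactly this in setUpCameraOutputs, as we will see later.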
Before looking at how the Fragment uses Camera2, let's briefly go over the basic Camera2 workflow.
1. Camera2 needs a TextureView as the display target (a SurfaceView works too), so we start with a TextureView.
2. Once the TextureView is created, on Android 6.0 and later we first have to request and verify the camera permission (Manifest.permission.CAMERA). With the permission granted, we obtain a CameraManager via activity.getSystemService(Context.CAMERA_SERVICE) and use it to open the camera by calling openCamera(@NonNull String cameraId, @NonNull CameraDevice.StateCallback callback, @Nullable Handler handler).
The first parameter is the camera id (conventionally "0" is the back camera and "1" the front camera), the second is a callback for camera state changes (analyzed in detail later), and the third is the Handler on which the callback is delivered, i.e. which thread it runs on; if it is null, the current thread's Looper is used. The javadoc is quoted below, followed by a minimal sketch:
* @param cameraId
* The unique identifier of the camera device to open
* @param callback
* The callback which is invoked once the camera is opened
* @param handler
* The handler on which the callback should be invoked, or
* {@code null} to use the current thread's {@link android.os.Looper looper}.
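To make step 2 concrete, here is a minimal sketch (the method and variable names are illustrative, not the demo's):
// Illustrative sketch of step 2: check the runtime permission, grab the
// CameraManager and open the requested camera.
private void openCameraById(Activity activity, String cameraId,
                            CameraDevice.StateCallback stateCallback, Handler handler) {
    if (ContextCompat.checkSelfPermission(activity, Manifest.permission.CAMERA)
            != PackageManager.PERMISSION_GRANTED) {
        return; // the permission has to be requested first
    }
    CameraManager manager =
            (CameraManager) activity.getSystemService(Context.CAMERA_SERVICE);
    try {
        // The StateCallback is delivered on the handler's thread
        // (or on the caller's Looper thread when handler is null).
        manager.openCamera(cameraId, stateCallback, handler);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}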
3. Step 2 requires a CameraDevice.StateCallback object, so we create one next. This abstract class has three methods to implement: onOpened, onDisconnected and onError, whose names speak for themselves.
4. When onOpened is called, the camera has been opened successfully. At this point we still need to create a CameraCaptureSession by calling CameraDevice's createCaptureSession method; its signature and part of its javadoc are shown below.
/**
* Create a new camera capture session by providing the target output set of Surfaces to the
* camera device.
* @param outputs The new set of Surfaces that should be made available as
* targets for captured image data.
* @param callback The callback to notify about the status of the new capture session.
* @param handler The handler on which the callback should be invoked, or {@code null} to use
* the current thread's {@link android.os.Looper looper}.
*
* @throws IllegalArgumentException if the set of output Surfaces do not meet the requirements,
* the callback is null, or the handler is null but the current
* thread has no looper.
* @throws CameraAccessException if the camera device is no longer connected or has
* encountered a fatal error
* @throws IllegalStateException if the camera device has been closed
*/
public abstract void createCaptureSession(@NonNull List<Surface> outputs,
@NonNull CameraCaptureSession.StateCallback callback, @Nullable Handler handler)
throws CameraAccessException;
The first parameter is a List of Surfaces: every Surface in the list becomes a target that the camera output is delivered to. Here we need two Surfaces, one for displaying the preview on screen and one for saving the picture when a photo is taken. The display Surface is obtained from the TextureView introduced earlier:
SurfaceTexture texture = mTextureView.getSurfaceTexture();
assert texture != null;
// We configure the size of default buffer to be the size of camera preview we want.
texture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());
// This is the output Surface we need to start preview.
Surface surface = new Surface(texture);
For taking pictures we additionally need an ImageReader, and its Surface is passed into the same list (see the sketch after this paragraph).
The second parameter is a CameraCaptureSession.StateCallback; as the name suggests, it reports the state of the session.
The third parameter is the handler, with the same role as before, so it needs no further explanation.
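Putting step 4 together, here is a rough sketch of wiring both the preview Surface and the ImageReader Surface into one session (the lower-case names stand in for the fragment's fields):
// Rough sketch of step 4: one Surface for the on-screen preview and one from an
// ImageReader for JPEG stills, both handed to the same capture session.
ImageReader imageReader = ImageReader.newInstance(
        stillWidth, stillHeight, ImageFormat.JPEG, /*maxImages*/ 2);
imageReader.setOnImageAvailableListener(onImageAvailableListener, backgroundHandler);

SurfaceTexture texture = textureView.getSurfaceTexture();
texture.setDefaultBufferSize(previewWidth, previewHeight);
Surface previewSurface = new Surface(texture);

try {
    cameraDevice.createCaptureSession(
            Arrays.asList(previewSurface, imageReader.getSurface()),
            sessionStateCallback, // a CameraCaptureSession.StateCallback
            backgroundHandler);
} catch (CameraAccessException e) {
    e.printStackTrace();
}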
5. In the onConfigured callback of CameraCaptureSession.StateCallback we receive the CameraCaptureSession instance.
Next we need to call the session's setRepeatingRequest method; part of its javadoc follows.
/**
* Request endlessly repeating capture of images by this capture session.
*
* With this method, the camera device will continually capture images
* using the settings in the provided {@link CaptureRequest}, at the maximum
* rate possible.
*
* @param request the request to repeat indefinitely
* @param listener The callback object to notify every time the
* request finishes processing. If null, no metadata will be
* produced for this stream of requests, although image data will
* still be produced.
* @param handler the handler on which the listener should be invoked, or
* {@code null} to use the current thread's {@link android.os.Looper
* looper}.
*
* @return int A unique capture sequence ID used by
* {@link CaptureCallback#onCaptureSequenceCompleted}.
*
* @throws CameraAccessException if the camera device is no longer connected or has
* encountered a fatal error
* @throws IllegalStateException if this session is no longer active, either because the session
* was explicitly closed, a new session has been created
* or the camera device has been closed.
* @throws IllegalArgumentException If the request references no Surfaces or references Surfaces
* that are not currently configured as outputs; or the request
* is a reprocess capture request; or the capture targets a
* Surface in the middle of being {@link #prepare prepared}; or
* the handler is null, the listener is not null, and the
* calling thread has no looper; or no requests were passed in.
*
* @see #capture
* @see #captureBurst
* @see #setRepeatingBurst
* @see #stopRepeating
* @see #abortCaptures
*/
public abstract int setRepeatingRequest(@NonNull CaptureRequest request,
@Nullable CaptureCallback listener, @Nullable Handler handler)
throws CameraAccessException;
As the javadoc explains, this method keeps the camera delivering frames continuously. It needs a CaptureRequest, which carries the capture settings. We obtain a CaptureRequest.Builder from CameraDevice's createCaptureRequest method, call addTarget on the builder with the Surface obtained from the TextureView, set whatever properties are needed (covered in more detail later), and finally call the builder's build method to create the CaptureRequest instance. The second parameter is nullable; it is a callback that observes each preview capture, and when no special handling is needed it can simply be null.
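In code, step 5 boils down to something like this sketch (again, the lower-case names are placeholders for the fragment's fields):
// Sketch of step 5: build a preview request that targets the preview Surface
// and ask the session to repeat it indefinitely.
try {
    CaptureRequest.Builder builder =
            cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
    builder.addTarget(previewSurface);
    // Continuous autofocus is the usual choice for a live preview.
    builder.set(CaptureRequest.CONTROL_AF_MODE,
            CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
    captureSession.setRepeatingRequest(builder.build(), /*listener*/ null, backgroundHandler);
} catch (CameraAccessException e) {
    e.printStackTrace();
}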
At this point the camera preview is shown on screen. Let's recap what the classes we just used are for:
CameraManager: the camera system service, used to enumerate and open cameras.
CameraDevice: represents the opened camera device; once the camera is open, this object is what we operate on.
CameraCaptureSession: the capture session that keeps us connected to the camera; its CaptureCallback continuously hands us the capture results so we can use or modify them (for example to apply beauty filters).
CaptureRequest: every request we send to the camera is a CaptureRequest; it carries all the settings such as focus and exposure.
ImageReader: lets us grab image frames, used here for saving the photo. We pass the ImageReader's Surface into CameraDevice's createCaptureSession; every Surface passed to that method receives the frames coming back from the camera.
Now that we have a rough idea of how Camera2 is used, let's walk through Google's sample source code to understand it in more depth.
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container,
Bundle savedInstanceState) {
return inflater.inflate(R.layout.fragment_camera2_basic, container, false);
}
@Override
public void onViewCreated(final View view, Bundle savedInstanceState) {
view.findViewById(R.id.picture).setOnClickListener(this);
view.findViewById(R.id.info).setOnClickListener(this);
mTextureView = (AutoFitTextureView) view.findViewById(R.id.texture);
}
@Override
public void onActivityCreated(Bundle savedInstanceState) {
super.onActivityCreated(savedInstanceState);
mFile = new File(getActivity().getExternalFilesDir(null), "pic.jpg");
}
@Override
public void onResume() {
super.onResume();
startBackgroundThread();
// When the screen is turned off and turned back on, the SurfaceTexture is already
// available, and "onSurfaceTextureAvailable" will not be called. In that case, we can open
// a camera and start preview from here (otherwise, we wait until the surface is ready in
// the SurfaceTextureListener).
if (mTextureView.isAvailable()) {
openCamera(mTextureView.getWidth(), mTextureView.getHeight());
} else {
mTextureView.setSurfaceTextureListener(mSurfaceTextureListener);
}
}
This is the Fragment's setup flow: some data is initialized, and in onResume, if the TextureView is already available we call openCamera directly, otherwise we register a SurfaceTextureListener and wait for the surface to become ready. onResume also calls startBackgroundThread; the background-thread helpers are sketched right below.
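The mBackgroundHandler used throughout the rest of the code comes from a dedicated HandlerThread. These helper methods are not quoted elsewhere in this post; they look roughly like this:
private void startBackgroundThread() {
    // A background thread so camera callbacks do not block the UI thread.
    mBackgroundThread = new HandlerThread("CameraBackground");
    mBackgroundThread.start();
    mBackgroundHandler = new Handler(mBackgroundThread.getLooper());
}

private void stopBackgroundThread() {
    // Called from onPause, after the camera has been closed.
    mBackgroundThread.quitSafely();
    try {
        mBackgroundThread.join();
        mBackgroundThread = null;
        mBackgroundHandler = null;
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}
Next, the SurfaceTextureListener that onResume registers: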
/**
* {@link TextureView.SurfaceTextureListener} handles several lifecycle events on a
* {@link TextureView}.
*/
private final TextureView.SurfaceTextureListener mSurfaceTextureListener
= new TextureView.SurfaceTextureListener() {
@Override
public void onSurfaceTextureAvailable(SurfaceTexture texture, int width, int height) {
openCamera(width, height);
}
@Override
public void onSurfaceTextureSizeChanged(SurfaceTexture texture, int width, int height) {
configureTransform(width, height);
}
@Override
public boolean onSurfaceTextureDestroyed(SurfaceTexture texture) {
return true;
}
@Override
public void onSurfaceTextureUpdated(SurfaceTexture texture) {
}
};
When the TextureView's size changes, configureTransform is called to keep the preview transform in sync; this method appears again later and will be analyzed there. Once the surface is available, openCamera is called, so let's look at that next.
/**
* Opens the camera specified by {@link Camera2BasicFragment#mCameraId}.
*/
private void openCamera(int width, int height) {
if (ContextCompat.checkSelfPermission(getActivity(), Manifest.permission.CAMERA)
!= PackageManager.PERMISSION_GRANTED) {
requestCameraPermission();
return;
}
setUpCameraOutputs(width, height);
configureTransform(width, height);
Activity activity = getActivity();
CameraManager manager = (CameraManager) activity.getSystemService(Context.CAMERA_SERVICE);
try {
if (!mCameraOpenCloseLock.tryAcquire(2500, TimeUnit.MILLISECONDS)) {
throw new RuntimeException("Time out waiting to lock camera opening.");
}
manager.openCamera(mCameraId, mStateCallback, mBackgroundHandler);
} catch (CameraAccessException e) {
e.printStackTrace();
} catch (InterruptedException e) {
throw new RuntimeException("Interrupted while trying to lock camera opening.", e);
}
}
The method starts with a permission check. I won't go into detail here; if you are curious, look up Android 6.0's runtime permission model. A minimal sketch of the request flow is shown below.
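This is only a sketch of the permission round-trip (the demo additionally shows a rationale dialog before requesting, and the request code constant is arbitrary):
// Minimal sketch of requesting the camera permission at runtime.
private static final int REQUEST_CAMERA_PERMISSION = 1;

private void requestCameraPermission() {
    requestPermissions(new String[]{Manifest.permission.CAMERA}, REQUEST_CAMERA_PERMISSION);
}

@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions,
                                       @NonNull int[] grantResults) {
    if (requestCode == REQUEST_CAMERA_PERMISSION) {
        if (grantResults.length != 1 || grantResults[0] != PackageManager.PERMISSION_GRANTED) {
            // Permission denied: the preview cannot start; tell the user why it is needed.
        }
    } else {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults);
    }
}
With the permission granted, openCamera proceeds to setUpCameraOutputs, which is the key part: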
/**
* Sets up member variables related to camera.
*
* @param width The width of available size for camera preview
* @param height The height of available size for camera preview
*/
@SuppressWarnings("SuspiciousNameCombination")
private void setUpCameraOutputs(int width, int height) {
Activity activity = getActivity();
CameraManager manager = (CameraManager) activity.getSystemService(Context.CAMERA_SERVICE);
try {
for (String cameraId : manager.getCameraIdList()) {
CameraCharacteristics characteristics
= manager.getCameraCharacteristics(cameraId);
// We don't use a front facing camera in this sample.
Integer facing = characteristics.get(CameraCharacteristics.LENS_FACING);
if (facing != null && facing == CameraCharacteristics.LENS_FACING_FRONT) {
continue;
}
StreamConfigurationMap map = characteristics.get(
CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
if (map == null) {
continue;
}
// For still image captures, we use the largest available size.
Size largest = Collections.max(
Arrays.asList(map.getOutputSizes(ImageFormat.JPEG)),
new CompareSizesByArea());
mImageReader = ImageReader.newInstance(largest.getWidth(), largest.getHeight(),
ImageFormat.JPEG, /*maxImages*/2);
mImageReader.setOnImageAvailableListener(
mOnImageAvailableListener, mBackgroundHandler);
// Find out if we need to swap dimension to get the preview size relative to sensor
// coordinate.
int displayRotation = activity.getWindowManager().getDefaultDisplay().getRotation();
//noinspection ConstantConditions
mSensorOrientation = characteristics.get(CameraCharacteristics.SENSOR_ORIENTATION);
boolean swappedDimensions = false;
switch (displayRotation) {
case Surface.ROTATION_0:
case Surface.ROTATION_180:
if (mSensorOrientation == 90 || mSensorOrientation == 270) {
swappedDimensions = true;
}
break;
case Surface.ROTATION_90:
case Surface.ROTATION_270:
if (mSensorOrientation == 0 || mSensorOrientation == 180) {
swappedDimensions = true;
}
break;
default:
Log.e(TAG, "Display rotation is invalid: " + displayRotation);
}
Point displaySize = new Point();
activity.getWindowManager().getDefaultDisplay().getSize(displaySize);
int rotatedPreviewWidth = width;
int rotatedPreviewHeight = height;
int maxPreviewWidth = displaySize.x;
int maxPreviewHeight = displaySize.y;
if (swappedDimensions) {
rotatedPreviewWidth = height;
rotatedPreviewHeight = width;
maxPreviewWidth = displaySize.y;
maxPreviewHeight = displaySize.x;
}
if (maxPreviewWidth > MAX_PREVIEW_WIDTH) {
maxPreviewWidth = MAX_PREVIEW_WIDTH;
}
if (maxPreviewHeight > MAX_PREVIEW_HEIGHT) {
maxPreviewHeight = MAX_PREVIEW_HEIGHT;
}
// Danger, W.R.! Attempting to use too large a preview size could exceed the camera
// bus' bandwidth limitation, resulting in gorgeous previews but the storage of
// garbage capture data.
mPreviewSize = chooseOptimalSize(map.getOutputSizes(SurfaceTexture.class),
rotatedPreviewWidth, rotatedPreviewHeight, maxPreviewWidth,
maxPreviewHeight, largest);
// We fit the aspect ratio of TextureView to the size of preview we picked.
int orientation = getResources().getConfiguration().orientation;
if (orientation == Configuration.ORIENTATION_LANDSCAPE) {
mTextureView.setAspectRatio(
mPreviewSize.getWidth(), mPreviewSize.getHeight());
} else {
mTextureView.setAspectRatio(
mPreviewSize.getHeight(), mPreviewSize.getWidth());
}
// Check if the flash is supported.
Boolean available = characteristics.get(CameraCharacteristics.FLASH_INFO_AVAILABLE);
mFlashSupported = available == null ? false : available;
mCameraId = cameraId;
return;
}
} catch (CameraAccessException e) {
e.printStackTrace();
} catch (NullPointerException e) {
// Currently an NPE is thrown when the Camera2API is used but not supported on the
// device this code runs.
ErrorDialog.newInstance(getString(R.string.camera_error))
.show(getChildFragmentManager(), FRAGMENT_DIALOG);
}
}
In this method we first obtain the CameraManager and iterate over manager.getCameraIdList(), i.e. the list of all available cameras. Here we meet a new class, CameraCharacteristics, which exposes the static properties of a camera; we can query whatever we need to check, such as the lens facing (LENS_FACING) used right away.
The API documentation for CameraCharacteristics: CameraCharacteristics - Android SDK | Android Developers
Since this demo only uses the back camera, whenever LENS_FACING equals LENS_FACING_FRONT (a front camera is one facing the same direction as the screen), the loop simply continues with the next id.
Next it reads SCALER_STREAM_CONFIGURATION_MAP. The API documentation describes the value of this key as:
The available stream configurations that this camera device supports; also includes the minimum frame durations and the stall durations for each format/size combination.
In other words, it lists the output streams the camera supports, along with timing data for each format/size combination.
Then the output sizes for the JPEG format are queried, the largest one is picked via Collections.max for the best still-image quality, a JPEG ImageReader of that size is created, and finally a listener is registered on the ImageReader. That listener is what saves the picture; it is analyzed later.
Next it reads displayRotation and mSensorOrientation, the rotation of the display and the orientation of the camera sensor. If the two disagree, swappedDimensions is set to indicate that width and height must be swapped.
Then the sizes are clamped to sensible maximums, and chooseOptimalSize picks the final preview size: it prefers the smallest size with the matching aspect ratio that is at least as large as the view, and falls back to the largest matching size that is smaller than the view (in both cases the aspect ratio has to match).
/**
* Given {@code choices} of {@code Size}s supported by a camera, choose the smallest one that
* is at least as large as the respective texture view size, and that is at most as large as the
* respective max size, and whose aspect ratio matches with the specified value. If such size
* doesn't exist, choose the largest one that is at most as large as the respective max size,
* and whose aspect ratio matches with the specified value.
*
* @param choices The list of sizes that the camera supports for the intended output
* class
* @param textureViewWidth The width of the texture view relative to sensor coordinate
* @param textureViewHeight The height of the texture view relative to sensor coordinate
* @param maxWidth The maximum width that can be chosen
* @param maxHeight The maximum height that can be chosen
* @param aspectRatio The aspect ratio
* @return The optimal {@code Size}, or an arbitrary one if none were big enough
*/
private static Size chooseOptimalSize(Size[] choices, int textureViewWidth,
int textureViewHeight, int maxWidth, int maxHeight, Size aspectRatio) {
// Collect the supported resolutions that are at least as big as the preview Surface
List<Size> bigEnough = new ArrayList<>();
// Collect the supported resolutions that are smaller than the preview Surface
List<Size> notBigEnough = new ArrayList<>();
int w = aspectRatio.getWidth();
int h = aspectRatio.getHeight();
for (Size option : choices) {
if (option.getWidth() <= maxWidth && option.getHeight() <= maxHeight &&
option.getHeight() == option.getWidth() * h / w) {
if (option.getWidth() >= textureViewWidth &&
option.getHeight() >= textureViewHeight) {
bigEnough.add(option);
} else {
notBigEnough.add(option);
}
}
}
// Pick the smallest of those big enough. If there is no one big enough, pick the
// largest of those not big enough.
if (bigEnough.size() > 0) {
return Collections.min(bigEnough, new CompareSizesByArea());
} else if (notBigEnough.size() > 0) {
return Collections.max(notBigEnough, new CompareSizesByArea());
} else {
Log.e(TAG, "Couldn't find any suitable preview size");
return choices[0];
}
}
Once the actual display size has been determined, it is handed to the custom AutoFitTextureView we saw earlier, which adapts itself to that size.
Because this is a demo, only one camera is handled: once the setup succeeds, the method returns instead of continuing the loop.
Back in openCamera, the next call is configureTransform, the same method used in the TextureView's listener.
/**
* Configures the necessary {@link android.graphics.Matrix} transformation to `mTextureView`.
* This method should be called after the camera preview size is determined in
* setUpCameraOutputs and also the size of `mTextureView` is fixed.
*
* @param viewWidth The width of `mTextureView`
* @param viewHeight The height of `mTextureView`
*/
private void configureTransform(int viewWidth, int viewHeight) {
Activity activity = getActivity();
if (null == mTextureView || null == mPreviewSize || null == activity) {
return;
}
int rotation = activity.getWindowManager().getDefaultDisplay().getRotation();
Matrix matrix = new Matrix();
RectF viewRect = new RectF(0, 0, viewWidth, viewHeight);
RectF bufferRect = new RectF(0, 0, mPreviewSize.getHeight(), mPreviewSize.getWidth());
float centerX = viewRect.centerX();
float centerY = viewRect.centerY();
if (Surface.ROTATION_90 == rotation || Surface.ROTATION_270 == rotation) {
bufferRect.offset(centerX - bufferRect.centerX(), centerY - bufferRect.centerY());
matrix.setRectToRect(viewRect, bufferRect, Matrix.ScaleToFit.FILL);
float scale = Math.max(
(float) viewHeight / mPreviewSize.getHeight(),
(float) viewWidth / mPreviewSize.getWidth());
matrix.postScale(scale, scale, centerX, centerY);
matrix.postRotate(90 * (rotation - 2), centerX, centerY);
} else if (Surface.ROTATION_180 == rotation) {
matrix.postRotate(180, centerX, centerY);
}
mTextureView.setTransform(matrix);
}
This method builds a Matrix that scales and rotates the preview so it fits the view correctly for the current display rotation (for ROTATION_90/270 the buffer rectangle is mapped onto the view and then scaled and rotated; for ROTATION_180 it is simply rotated by 180 degrees) and applies it to the TextureView via setTransform. With the TextureView sizing out of the way, openCamera finally opens the camera. Next comes the camera state callback:
/**
* {@link CameraDevice.StateCallback} is called when {@link CameraDevice} changes its state.
*/
private final CameraDevice.StateCallback mStateCallback = new CameraDevice.StateCallback() {
@Override
public void onOpened(@NonNull CameraDevice cameraDevice) {
// This method is called when the camera is opened. We start camera preview here.
mCameraOpenCloseLock.release();
mCameraDevice = cameraDevice;
createCameraPreviewSession();
}
@Override
public void onDisconnected(@NonNull CameraDevice cameraDevice) {
mCameraOpenCloseLock.release();
cameraDevice.close();
mCameraDevice = null;
}
@Override
public void onError(@NonNull CameraDevice cameraDevice, int error) {
mCameraOpenCloseLock.release();
cameraDevice.close();
mCameraDevice = null;
Activity activity = getActivity();
if (null != activity) {
activity.finish();
}
}
};
The important part here is the createCameraPreviewSession method, so let's continue with that.
/**
* Creates a new {@link CameraCaptureSession} for camera preview.
*/
private void createCameraPreviewSession() {
try {
SurfaceTexture texture = mTextureView.getSurfaceTexture();
assert texture != null;
// We configure the size of default buffer to be the size of camera preview we want.
texture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());
// This is the output Surface we need to start preview.
Surface surface = new Surface(texture);
// We set up a CaptureRequest.Builder with the output Surface.
mPreviewRequestBuilder
= mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
mPreviewRequestBuilder.addTarget(surface);
// Here, we create a CameraCaptureSession for camera preview.
mCameraDevice.createCaptureSession(Arrays.asList(surface, mImageReader.getSurface()),
new CameraCaptureSession.StateCallback() {
@Override
public void onConfigured(@NonNull CameraCaptureSession cameraCaptureSession) {
// The camera is already closed
if (null == mCameraDevice) {
return;
}
// When the session is ready, we start displaying the preview.
mCaptureSession = cameraCaptureSession;
try {
// Auto focus should be continuous for camera preview.
mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AF_MODE,
CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
// Flash is automatically enabled when necessary.
setAutoFlash(mPreviewRequestBuilder);
// Finally, we start displaying the camera preview.
mPreviewRequest = mPreviewRequestBuilder.build();
mCaptureSession.setRepeatingRequest(mPreviewRequest,
mCaptureCallback, mBackgroundHandler);
} catch (CameraAccessException e) {
e.printStackTrace();
}
}
@Override
public void onConfigureFailed(
@NonNull CameraCaptureSession cameraCaptureSession) {
showToast("Failed");
}
}, null
);
} catch (CameraAccessException e) {
e.printStackTrace();
}
}
It obtains the Surface, initializes the CaptureRequest builder, and creates the CameraCaptureSession; in the session's onConfigured callback the request is finalized and submitted as a repeating request, which brings the preview to the screen. For the available CaptureRequest properties I still recommend the API documentation:
CaptureRequest - Android SDK | Android Developers
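As a taste of what the builder can configure, here are a few commonly used keys (a non-exhaustive sketch; builder stands for any CaptureRequest.Builder):
// A few commonly used CaptureRequest keys (non-exhaustive sketch).
builder.set(CaptureRequest.CONTROL_AF_MODE,
        CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE); // continuous autofocus
builder.set(CaptureRequest.CONTROL_AE_MODE,
        CaptureRequest.CONTROL_AE_MODE_ON_AUTO_FLASH);      // auto exposure with auto flash
builder.set(CaptureRequest.JPEG_ORIENTATION, 90);           // rotation of saved JPEGs, in degrees
builder.set(CaptureRequest.JPEG_QUALITY, (byte) 95);        // JPEG compression quality, 1-100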
Now let's see how the CameraCaptureSession's capture callback is written:
/**
* A {@link CameraCaptureSession.CaptureCallback} that handles events related to JPEG capture.
*/
private CameraCaptureSession.CaptureCallback mCaptureCallback
= new CameraCaptureSession.CaptureCallback() {
private void process(CaptureResult result) {
switch (mState) {
case STATE_PREVIEW: {
// We have nothing to do when the camera preview is working normally.
break;
}
case STATE_WAITING_LOCK: {
Integer afState = result.get(CaptureResult.CONTROL_AF_STATE);
if (afState == null) {
captureStillPicture();
} else if (CaptureResult.CONTROL_AF_STATE_FOCUSED_LOCKED == afState ||
CaptureResult.CONTROL_AF_STATE_NOT_FOCUSED_LOCKED == afState) {
// CONTROL_AE_STATE can be null on some devices
Integer aeState = result.get(CaptureResult.CONTROL_AE_STATE);
if (aeState == null ||
aeState == CaptureResult.CONTROL_AE_STATE_CONVERGED) {
mState = STATE_PICTURE_TAKEN;
captureStillPicture();
} else {
runPrecaptureSequence();
}
}
break;
}
case STATE_WAITING_PRECAPTURE: {
// CONTROL_AE_STATE can be null on some devices
Integer aeState = result.get(CaptureResult.CONTROL_AE_STATE);
if (aeState == null ||
aeState == CaptureResult.CONTROL_AE_STATE_PRECAPTURE ||
aeState == CaptureRequest.CONTROL_AE_STATE_FLASH_REQUIRED) {
mState = STATE_WAITING_NON_PRECAPTURE;
}
break;
}
case STATE_WAITING_NON_PRECAPTURE: {
// CONTROL_AE_STATE can be null on some devices
Integer aeState = result.get(CaptureResult.CONTROL_AE_STATE);
if (aeState == null || aeState != CaptureResult.CONTROL_AE_STATE_PRECAPTURE) {
mState = STATE_PICTURE_TAKEN;
captureStillPicture();
}
break;
}
}
}
@Override
public void onCaptureProgressed(@NonNull CameraCaptureSession session,
@NonNull CaptureRequest request,
@NonNull CaptureResult partialResult) {
process(partialResult);
}
@Override
public void onCaptureCompleted(@NonNull CameraCaptureSession session,
@NonNull CaptureRequest request,
@NonNull TotalCaptureResult result) {
process(result);
}
};
onCaptureProgressed is called with partial results while a capture is still in progress, and onCaptureCompleted is called with the complete result once it has finished; the partial results give us a chance to inspect or react to the data before the capture completes, but since this demo only takes a still picture, nothing extra is done. Both callbacks forward the result to process().
Taking a picture is simply another request; the code is as follows:
/**
* Lock the focus as the first step for a still image capture.
*/
private void lockFocus() {
try {
// This is how to tell the camera to lock focus.
mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AF_TRIGGER,
CameraMetadata.CONTROL_AF_TRIGGER_START);
// Tell #mCaptureCallback to wait for the lock.
mState = STATE_WAITING_LOCK;
mCaptureSession.capture(mPreviewRequestBuilder.build(), mCaptureCallback,
mBackgroundHandler);
} catch (CameraAccessException e) {
e.printStackTrace();
}
}
After issuing this request, mState is set to STATE_WAITING_LOCK, which takes us to the corresponding branch of process() above. The constants involved are:
CaptureResult.CONTROL_AF_STATE: the current state of the auto-focus (AF) algorithm
CaptureResult.CONTROL_AF_STATE_FOCUSED_LOCKED: AF focused successfully and has locked the focus
CaptureResult.CONTROL_AF_STATE_NOT_FOCUSED_LOCKED: AF failed to focus but has locked anyway
CaptureResult.CONTROL_AE_STATE: the current state of the auto-exposure (AE) algorithm
CaptureResult.CONTROL_AE_STATE_CONVERGED: AE has settled on good exposure values for the current scene
Once focus is locked and exposure has converged, captureStillPicture is executed; if the exposure has not yet converged, the demo first runs a precapture sequence.
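runPrecaptureSequence is not quoted elsewhere in this post; it looks roughly like this:
private void runPrecaptureSequence() {
    try {
        // Tell the camera to trigger auto-exposure precapture metering.
        mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AE_PRECAPTURE_TRIGGER,
                CaptureRequest.CONTROL_AE_PRECAPTURE_TRIGGER_START);
        // Tell mCaptureCallback to wait for the precapture sequence to be set.
        mState = STATE_WAITING_PRECAPTURE;
        mCaptureSession.capture(mPreviewRequestBuilder.build(), mCaptureCallback,
                mBackgroundHandler);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}
Once the precapture sequence has settled (handled by the STATE_WAITING_PRECAPTURE and STATE_WAITING_NON_PRECAPTURE branches above), captureStillPicture runs: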
/**
* Capture a still picture. This method should be called when we get a response in
* {@link #mCaptureCallback} from both {@link #lockFocus()}.
*/
private void captureStillPicture() {
try {
final Activity activity = getActivity();
if (null == activity || null == mCameraDevice) {
return;
}
// This is the CaptureRequest.Builder that we use to take a picture.
final CaptureRequest.Builder captureBuilder =
mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
captureBuilder.addTarget(mImageReader.getSurface());
// Use the same AE and AF modes as the preview.
captureBuilder.set(CaptureRequest.CONTROL_AF_MODE,
CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
setAutoFlash(captureBuilder);
// Orientation
int rotation = activity.getWindowManager().getDefaultDisplay().getRotation();
captureBuilder.set(CaptureRequest.JPEG_ORIENTATION, getOrientation(rotation));
CameraCaptureSession.CaptureCallback CaptureCallback
= new CameraCaptureSession.CaptureCallback() {
@Override
public void onCaptureCompleted(@NonNull CameraCaptureSession session,
@NonNull CaptureRequest request,
@NonNull TotalCaptureResult result) {
showToast("Saved: " + mFile);
Log.d(TAG, mFile.toString());
unlockFocus();
}
};
mCaptureSession.stopRepeating();
mCaptureSession.abortCaptures();
mCaptureSession.capture(captureBuilder.build(), CaptureCallback, null);
} catch (CameraAccessException e) {
e.printStackTrace();
}
}
Again a CaptureRequest is built, this time from TEMPLATE_STILL_CAPTURE, the repeating preview request is stopped, and a one-shot capture is issued on the existing session via capture(). The key point is that addTarget is given the ImageReader's Surface, so this capture triggers the ImageReader's callback.
/**
* This a callback object for the {@link ImageReader}. "onImageAvailable" will be called when a
* still image is ready to be saved.
*/
private final ImageReader.OnImageAvailableListener mOnImageAvailableListener
= new ImageReader.OnImageAvailableListener() {
@Override
public void onImageAvailable(ImageReader reader) {
mBackgroundHandler.post(new ImageSaver(reader.acquireNextImage(), mFile));
}
};
ImageSaver is simply a Runnable whose job is to write the image to disk; the code is very straightforward.
/**
* Saves a JPEG {@link Image} into the specified {@link File}.
*/
private static class ImageSaver implements Runnable {
/**
* The JPEG image
*/
private final Image mImage;
/**
* The file we save the image into.
*/
private final File mFile;
ImageSaver(Image image, File file) {
mImage = image;
mFile = file;
}
@Override
public void run() {
ByteBuffer buffer = mImage.getPlanes()[0].getBuffer();
byte[] bytes = new byte[buffer.remaining()];
buffer.get(bytes);
FileOutputStream output = null;
try {
output = new FileOutputStream(mFile);
output.write(bytes);
} catch (IOException e) {
e.printStackTrace();
} finally {
mImage.close();
if (null != output) {
try {
output.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
}
}
After the picture has been saved, unlockFocus is called to return to the preview.
/**
* Unlock the focus. This method should be called when still image capture sequence is
* finished.
*/
private void unlockFocus() {
try {
// Reset the auto-focus trigger
mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AF_TRIGGER,
CameraMetadata.CONTROL_AF_TRIGGER_CANCEL);
setAutoFlash(mPreviewRequestBuilder);
mCaptureSession.capture(mPreviewRequestBuilder.build(), mCaptureCallback,
mBackgroundHandler);
// After this, the camera will go back to the normal state of preview.
mState = STATE_PREVIEW;
mCaptureSession.setRepeatingRequest(mPreviewRequest, mCaptureCallback,
mBackgroundHandler);
} catch (CameraAccessException e) {
e.printStackTrace();
}
}
That is the whole Camera2 workflow: once initialization is done, every operation first builds a request and then submits it through the capture session.
2. The Kotlin sample
The business logic of the Kotlin version is exactly the same as the Java one; Google simply also provides a sample written in Kotlin. Compared with the Java code it differs only in syntax, and reading real-world Kotlin like this is very helpful when learning the language, so I won't go through it in detail here; if you are interested, read the Kotlin sample's source yourself.
Finally, here is the GitHub address of the code covered above: Google's Camera2 demo.