ARFrame

A video image, with position-tracking information, captured as part of an AR session.


Overview

A running AR session continuously captures video frames from the device camera. For each frame, ARKit analyzes the image together with data from the device's motion sensing hardware to estimate the device's real-world position. ARKit delivers this tracking information and imaging parameters in the form of an ARFrame object.


Accessing Captured Video Frames


capturedImage
A pixel buffer containing the image captured by the camera.

Discussion

ARKit captures pixel buffers in a planar YCbCr format (also known as YUV). To render these images on a device display, you'll need to access the luma and chroma planes of the pixel buffer and convert YCbCr values to an RGB format according to the ITU-R 601-4 standard.

The following matrix (shown in Metal shader syntax) performs this conversion when multiplied by a 4-element vector (containing Y', Cb, Cr values and an "alpha" value of 1.0):
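
(In the original post the matrix appeared as an image. The reconstruction below uses the coefficients of the standard BT.601 full-range conversion as they appear in Apple's Metal-based AR sample code; the constant name itself is illustrative.)

    const float4x4 ycbcrToRGBTransform = float4x4(
        float4(+1.0000f, +1.0000f, +1.0000f, +0.0000f),
        float4(+0.0000f, -0.3441f, +1.7720f, +0.0000f),
        float4(+1.4020f, -0.7141f, +0.0000f, +0.0000f),
        float4(-0.7010f, +0.5291f, -0.8860f, +1.0000f)
    );

    // rgb = ycbcrToRGBTransform * float4(y, cb, cr, 1.0)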


For more details, see Displaying an AR Experience with Metal, or use the Metal variant of the AR app template when creating a new project in Xcode.


timestamp
The time at which the frame was captured.

capturedDepthData
The depth map, if any, captured along with the video frame.

Discussion

Face-based AR (see ARFaceTrackingConfiguration) uses the front-facing, depth-sensing camera on compatible devices. When running such a configuration, frames vended by the session contain a depth map captured by the depth camera in addition to the color pixel buffer (see capturedImage) captured by the color camera. This property’s value is always nil when running other AR configurations.

The depth-sensing camera provides data at a different frame rate than the color camera, so this property’s value can also be nil if no depth data was captured at the same time as the current color image.
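
A minimal Swift sketch of guarding against frames that carry no depth data, written as an ARSessionDelegate callback (the delegate class itself is illustrative):

    import ARKit

    class DepthWatcher: NSObject, ARSessionDelegate {
        func session(_ session: ARSession, didUpdate frame: ARFrame) {
            // Depth data is only vended by face-tracking configurations, and even
            // then not on every frame, because the depth camera runs at its own rate.
            guard let depthData = frame.capturedDepthData else { return }
            let depthMap = depthData.depthDataMap
            print("Depth map:", CVPixelBufferGetWidth(depthMap), "x", CVPixelBufferGetHeight(depthMap))
        }
    }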


capturedDepthDataTimestamp
The time at which depth data for the frame (if any) was captured.

Discussion

Face-based AR (see ARFaceTrackingConfiguration) uses the front-facing, depth-sensing camera on compatible devices. When running such a configuration, frames vended by the session contain a depth map captured by the depth camera in addition to the color pixel buffer (see capturedImage) captured by the color camera. This property’s value is always zero when running other AR configurations.

The depth-sensing camera provides data at a different frame rate than the color camera, so this property’s value may not exactly match the timestamp property for the image captured by the color camera, and can also be zero if no depth data was captured at the same time as the current color image.
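
As a hedged illustration, one way to treat the depth map as stale when its capture time drifts too far from the color image (the tolerance value here is an assumption, not an ARKit constant):

    import ARKit

    // Returns true if the frame carries depth data captured close enough in time
    // to the color image to be used together with it.
    func depthIsFresh(in frame: ARFrame, tolerance: TimeInterval = 1.0 / 30.0) -> Bool {
        guard frame.capturedDepthData != nil else { return false }
        return abs(frame.timestamp - frame.capturedDepthDataTimestamp) <= tolerance
    }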


Examining Scene Parameters


camera
Information about the camera position, orientation, and imaging parameters used to capture the frame.

lightEstimate
An estimate of lighting conditions based on the camera image.

Discussion

If you render your own overlay graphics for the AR scene, you can use this information in shading algorithms to help make those graphics match the real-world lighting conditions of the scene captured by the camera. (The ARSCNView class automatically uses this information to configure SceneKit lighting.)

This property's value is nil if the lightEstimationEnabled property of the session configuration that captured this frame is NO.
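
A short Swift sketch for a custom renderer that manages its own SceneKit light (ARSCNView users don't need this, as noted above):

    import ARKit
    import SceneKit

    func updateLight(_ light: SCNLight, from frame: ARFrame) {
        guard let estimate = frame.lightEstimate else { return }  // nil when light estimation is disabled
        light.intensity = estimate.ambientIntensity               // around 1000 corresponds to neutral lighting
        light.temperature = estimate.ambientColorTemperature      // in degrees Kelvin
    }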


- displayTransformForOrientation:viewportSize:
Returns an affine transform for converting between normalized image coordinates and a coordinate space appropriate for rendering the camera image onscreen.

Parameters

orientation

The orientation intended for presenting the view. 

viewportSize

The size, in points, of the view intended for rendering the camera image.

Return Value

A transform matrix that converts from normalized image coordinates in the captured image to normalized image coordinates that account for the specified parameters.

Discussion

Normalized image coordinates range from (0,0) in the upper left corner of the image to (1,1) in the lower right corner.

This method creates an affine transform representing the rotation and aspect-fill crop operations necessary to adapt the camera image to the specified orientation and to the aspect ratio of the specified viewport. The affine transform does not scale to the viewport's pixel size.

The capturedImage pixel buffer is the original image captured by the device camera, and thus not adjusted for device orientation or view aspect ratio.
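
A Swift sketch of using the transform to map a normalized point in the captured image into view coordinates; the portrait orientation and the viewBounds parameter are assumptions for the example:

    import ARKit
    import UIKit

    func viewPoint(forImagePoint imagePoint: CGPoint, frame: ARFrame, viewBounds: CGRect) -> CGPoint {
        let transform = frame.displayTransform(for: .portrait, viewportSize: viewBounds.size)
        let normalized = imagePoint.applying(transform)        // still in 0...1 coordinates
        return CGPoint(x: normalized.x * viewBounds.width,     // scale manually, because the transform
                       y: normalized.y * viewBounds.height)    // does not include the viewport's pixel size
    }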


Tracking and Finding Objects


anchors
The list of anchors representing positions tracked or objects detected in the scene.

Discussion

You can manually add or remove anchors to track locations in the scene using the ARSession class. Depending on session configuration, ARKit may also add anchors, such as the origin of the world coordinate system or automatically detected planes.
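
A Swift sketch of both uses: reading the frame's anchor list and manually adding a new anchor one meter in front of the camera (the placement offset is only illustrative):

    import ARKit

    func inspectAnchors(in frame: ARFrame, session: ARSession) {
        let planeCount = frame.anchors.filter { $0 is ARPlaneAnchor }.count
        print("Plane anchors so far:", planeCount)

        var offset = matrix_identity_float4x4
        offset.columns.3.z = -1.0                                 // 1 m in front of the camera
        let transform = simd_mul(frame.camera.transform, offset)
        session.add(anchor: ARAnchor(transform: transform))
    }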


- hitTest:types:
Searches for real-world objects or AR anchors in the captured camera image.

Parameters

point

A point in normalized image coordinate space. (The point (0,0) represents the top left corner of the image, and the point (1,1) represents the bottom right corner.)

types

The types of hit-test result to search for.

Return Value

A list of results, sorted from nearest to farthest (in distance from the camera).


Discussion

Hit testing searches for real-world objects or surfaces detected through the AR session's processing of the camera image. A 2D point in the image coordinates can refer to any point along a 3D line that starts at the device camera and extends in a direction determined by the device orientation and camera projection. This method searches along that line, returning all objects that intersect it in order of distance from the camera.

Note

If you use ARKit with a SceneKit or SpriteKit view, the ARSCNView hitTest:types: or ARSKView hitTest:types: method lets you specify a search point in view coordinates.
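
A Swift sketch of hit-testing the center of the captured image against detected planes and anchoring content at the nearest result (the choice of result types is illustrative):

    import ARKit

    func placeAnchorAtImageCenter(of frame: ARFrame, in session: ARSession) {
        let center = CGPoint(x: 0.5, y: 0.5)   // normalized image coordinates
        let results = frame.hitTest(center, types: [.existingPlaneUsingExtent, .estimatedHorizontalPlane])
        guard let nearest = results.first else { return }   // results are sorted nearest-first
        session.add(anchor: ARAnchor(transform: nearest.worldTransform))
    }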


Debugging Scene Detection


rawFeaturePoints
The current intermediate results of the scene analysis ARKit uses to perform world tracking.

Discussion

These points represent notable features detected in the camera image. Their positions in 3D world coordinate space are extrapolated as part of the image analysis that ARKit performs in order to accurately track the device's position, orientation, and movement. Taken together, these points loosely correlate to the contours of real-world objects in view of the camera.

ARKit does not guarantee that the number and arrangement of raw feature points will remain stable between software releases, or even between subsequent frames in the same session. Regardless, the point cloud can sometimes prove useful when debugging your app's placement of virtual objects into the real-world scene.

If you display AR content with SceneKit using the ARSCNView class, you can display this point cloud with the ARSCNDebugOptionShowFeaturePoints debug option.

Feature point detection requires an ARWorldTrackingConfiguration session.
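
A Swift sketch of inspecting the point cloud and enabling the SceneKit debug overlay mentioned above (sceneView is assumed to be an existing ARSCNView):

    import ARKit

    func debugFeaturePoints(frame: ARFrame, sceneView: ARSCNView) {
        if let cloud = frame.rawFeaturePoints {
            print("Feature points in this frame:", cloud.count)
        }
        sceneView.debugOptions.insert(.showFeaturePoints)   // Swift spelling of ARSCNDebugOptionShowFeaturePoints
    }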


ARPointCloud
A collection of points in the world coordinate space of the AR session.

Overview

Use the ARFrame rawFeaturePoints property to obtain a point cloud representing intermediate results of the scene analysis ARKit uses to perform world tracking.


Identifying Feature Points

count
The number of points in the point cloud.

points
The list of detected points.

identifiers
A list of unique identifiers corresponding to detected feature points.

Discussion

Each identifier in this list corresponds to the point vector at the same index in the points array.
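
A minimal Swift sketch of walking the two parallel arrays together:

    import ARKit

    func logPoints(in cloud: ARPointCloud) {
        // points[i] and identifiers[i] describe the same detected feature point.
        for (position, identifier) in zip(cloud.points, cloud.identifiers) {
            print("Point \(identifier): (\(position.x), \(position.y), \(position.z))")
        }
    }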


ARPointCloud inherits from NSObject and conforms to NSSecureCoding.

ARFrame inherits from NSObject and conforms to NSCopying.
