Reposting my article from GitHub Pages here; it has not been translated yet, sorry.
In ARKit 1, we have:
In ARKit 2, we have:
ARWorldMap

ARKit maintains a mapping between the real world and the coordinate system. In ARKit 1, however, this mapping is not exposed to developers. ARKit 2 exposes it to developers as ARWorldMap. An ARWorldMap contains:
We can use the map in two different ways:
SwiftShot is a multiuser AR game experience:
and the following is a short clip from the demo:
In order to share or restore the map, we need to get a good one first. A good map should be:
* Multiple points of view: If we record the mapping from one point of view, and try to restore the coordinate system from another point of view, it will fail.
* Static, well-textured environment.
* Dense feature points on the map.
We can use the WorldMappingStatus status from ARFrame to decide whether the current map is good enough for sharing or storing:
public enum WorldMappingStatus : Int {
    case notAvailable   // no world map is available yet
    case limited        // the area around the current position has not been sufficiently mapped
    case extending      // recently visited areas are mapped, but mapping around the current position continues
    case mapped         // the visible area is adequately mapped
}
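For example, a minimal sketch (assuming an existing ARSession and a destination file URL, both names chosen here for illustration) that stores the map only once the status is good enough:

// Sketch: archive the current ARWorldMap to disk once ARKit reports
// the map is good enough (requires `import ARKit`).
func saveWorldMapIfReady(session: ARSession, to url: URL) {
    guard let frame = session.currentFrame,
          frame.worldMappingStatus == .mapped else { return }

    session.getCurrentWorldMap { worldMap, error in
        guard let worldMap = worldMap else {
            print("World map unavailable: \(String(describing: error))")
            return
        }
        do {
            let data = try NSKeyedArchiver.archivedData(withRootObject: worldMap,
                                                        requiringSecureCoding: true)
            try data.write(to: url, options: .atomic)
        } catch {
            print("Saving world map failed: \(error)")
        }
    }
}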
With the help of environment texturing, virtual objects in the AR scene can reflect the surrounding environment on their surfaces, just like this:
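Environment texturing is opted into on the session configuration; a rough sketch, assuming an existing ARSession named `session`:

// Sketch: enable automatic environment texturing in ARKit 2.
let configuration = ARWorldTrackingConfiguration()
configuration.environmentTexturing = .automatic  // ARKit generates environment probes automatically
session.run(configuration)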
Moving objects cannot be positioned in ARKit 1. In ARKit 2, specified images can be tracked in the AR scene.
The classes in ARKit 2 for image tracking are:
The detected ARImageAnchors have properties like:
open class ARImageAnchor : ARAnchor, ARTrackable {
public var isTracked: Bool { get }
open var transform: simd_float4x4 { get }
open var referenceImage: ARReferenceImage { get }
}
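A rough setup sketch, assuming the images to track are stored in an asset catalog resource group named "AR Resources" (a hypothetical name) and `session` is an existing ARSession:

// Sketch: track a known image with ARImageTrackingConfiguration.
guard let trackingImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                            bundle: nil) else {
    fatalError("Missing expected asset catalog resource group")
}

let configuration = ARImageTrackingConfiguration()
configuration.trackingImages = trackingImages
configuration.maximumNumberOfTrackedImages = 1
session.run(configuration)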
The specified image should:
The inputs of the cat picture frame demo are:
The video is played at the position of the specified picture frame, with the same orientation as the picture frame.
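A sketch of how that could be done with SceneKit, assuming an ARSCNViewDelegate and an AVPlayer named `videoPlayer` prepared elsewhere (both assumptions, not part of the original demo code):

// Sketch: attach a video plane to the detected picture frame (ARSCNViewDelegate).
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }

    // Plane with the same physical size as the detected picture frame.
    let plane = SCNPlane(width: imageAnchor.referenceImage.physicalSize.width,
                         height: imageAnchor.referenceImage.physicalSize.height)
    plane.firstMaterial?.diffuse.contents = videoPlayer  // AVPlayer as the plane's texture
    videoPlayer.play()

    let planeNode = SCNNode(geometry: plane)
    planeNode.eulerAngles.x = -.pi / 2  // lay the plane onto the image anchor's plane
    node.addChildNode(planeNode)
}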
The 3D object detection workflow is:
The ARObjectAnchor contains properties like:
open class ARObjectAnchor : ARAnchor {
open var transform: simd_float4x4 { get }
open var referenceObject: ARReferenceObject { get }
}
and ARReferenceObject is the scanned 3D object:
open class ARReferenceObject : NSObject, NSCopying, NSSecureCoding {
open var name: String?
open var center: simd_float3 { get }
open var extent: simd_float3 { get }
open var rawFeaturePoints: ARPointCloud { get }
}
An ARReferenceObject contains only the spatial feature information needed for ARKit to recognize the real-world object, and is not a displayable 3D reconstruction of that object.
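Assuming the scanned reference objects are stored in an asset catalog resource group named "gallery" (a hypothetical name) and `session` is an existing ARSession, detection can be enabled roughly like this:

// Sketch: detect scanned reference objects in a world-tracking session.
guard let referenceObjects = ARReferenceObject.referenceObjects(inGroupNamed: "gallery",
                                                                bundle: nil) else {
    fatalError("Missing expected asset catalog resource group")
}

let configuration = ARWorldTrackingConfiguration()
configuration.detectionObjects = referenceObjects
session.run(configuration)

// ARKit then reports each detection as an ARObjectAnchor (ARSessionDelegate):
func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    for case let objectAnchor as ARObjectAnchor in anchors {
        print("Detected object: \(objectAnchor.referenceObject.name ?? "unnamed")")
    }
}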
In order to get the ARReferenceObject, we should scan the real object and store the result as a file (.arobject) or in an Xcode asset catalog for ARKit to use. Fortunately, Apple supplies a demo for scanning a 3D object to get the ARReferenceObject. Refer to Scanning and Detecting 3D Objects for details. The rough steps of object scanning are:
For a scanned object in the real world, we can dynamically add some info around it (a museum is a good use case), like the demo does:
With face tracking, we can place something on or around the detected face.
Enhancements in ARKit 2:
Gaze and tongue can be used as input to the AR app.
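A rough sketch of reading both signals from the face anchor, assuming an ARSessionDelegate running an ARFaceTrackingConfiguration:

// Sketch: use gaze direction and the tongue blend shape as game input.
func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
    for case let faceAnchor as ARFaceAnchor in anchors {
        // Point in face coordinate space that the eyes are looking at.
        let gaze = faceAnchor.lookAtPoint

        // 0.0 ... 1.0 coefficient for how far the tongue is stuck out.
        let tongueOut = faceAnchor.blendShapes[.tongueOut]?.floatValue ?? 0

        if tongueOut > 0.5 {
            // e.g. trigger an action in the game, aimed using `gaze`.
            print("Tongue out, looking at \(gaze)")
        }
    }
}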
New changes in one screenshot:
This document demos an app for basic usage of ARKit.
Make your AR experience more robust by handling ARCamera.TrackingState, ARCamera.TrackingState.Reason.relocalizing, and ARWorldMap.
This post describes how to render virtual objects, how to interact with virtual objects, and how to handle interruptions. It is for UX.
This document describes the best practices for visual feedback, gesture interactions, and realistic rendering in AR experiences, and a demo app is supplied.
This WWDC session explains the detail of the tracking process used in ARKit 2.
This document demos an app on how to transmit ARKit world-mapping data between nearby devices with the MultipeerConnectivity framework (introduced in iOS 7.0) to create a shared basis for AR experiences. MultipeerConnectivity supports peer-to-peer connectivity and the discovery of nearby devices. With MultipeerConnectivity, you can share not only the ARWorldMap but also game actions, which makes multiuser AR games possible.
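A simplified sketch of both directions, assuming an already-connected MCSession named `mcSession` and received map data named `receivedData` (both hypothetical names, with `session` an existing ARSession):

// Sketch: send the current world map to nearby peers ...
session.getCurrentWorldMap { worldMap, error in
    guard let worldMap = worldMap,
          let data = try? NSKeyedArchiver.archivedData(withRootObject: worldMap,
                                                       requiringSecureCoding: true)
    else { return }
    try? mcSession.send(data, toPeers: mcSession.connectedPeers, with: .reliable)
}

// ... and, on a receiving device, relocalize into the shared map.
if let worldMap = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                          from: receivedData) {
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = worldMap
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}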
However:
This document demos the SwiftShot game shown on WWDC 2018, including: