In the legacy Camera API/HAL framework, camera parameters were delivered through Camera.setParameters(). In the newer Camera API2/HAL3 architecture, parameters are delivered in the form of Camera Metadata instead.
With Camera API2, the Java layer sets the parameters directly and packages them into a CaptureRequest.
For API1 compatibility, the conversion happens inside API1's setParameters() path, and the parameters are ultimately passed down as metadata in Camera2Client.cpp.
Camera Metadata stores all camera parameters as an ordered set of entries in a single contiguous block of memory, which can then be handed across processes as shared memory.
Camera Metadata is defined mainly under /system/media/camera/.
As the Android.bp shows, it is built into the libcamera_metadata.so library.
# system/media/camera/Android.bp
11 cc_library_shared {
12 name: "libcamera_metadata",
13 vendor_available: true,
14 product_available: true,
15 // TODO(b/153609531): remove when no longer needed.
16 native_bridge_supported: true,
17 host_supported: true,
18 vndk: {
19 enabled: true,
20 },
21 double_loadable: true,
22 srcs: ["src/camera_metadata.c"],
23
24 include_dirs: ["system/media/private/camera/include"],
25 local_include_dirs: ["include"],
26 export_include_dirs: ["include"],
27
28 header_libs: [
29 "libcutils_headers",
30 ],
31
32 export_header_lib_headers: [
33 "libcutils_headers",
34 ],
35
36 shared_libs: [
37 "liblog",
38 ],
39
40 cflags: [
41 "-Wall",
42 "-Wextra",
43 "-Werror",
44 "-fvisibility=hidden",
45 "-std=c11",
46 ],
47
48 product_variables: {
49 eng: {
50 // Enable assert()
51 cflags: [
52 "-UNDEBUG",
53 "-DLOG_NDEBUG=1",
54 ],
55 },
56 },
57 }
The Camera Metadata headers and sources are laid out as follows:
Tag hierarchy (sections) and tag enum definitions: /system/media/camera/include/system/camera_metadata_tags.h
Type enums, entry structures and the common C API declarations: /system/media/camera/include/system/camera_metadata.h
Vendor tag operation structures: /system/media/camera/include/system/camera_vendor_tags.h
Binding of the tag enums to their string names and types: /system/media/camera/src/camera_metadata_tag_info.c
Core implementation: /system/media/camera/src/camera_metadata.c
Paths of libcamera_metadata.so on a device:
libcamera_metadata.so used through the VNDK (vendor/NDK side):
./apex/com.android.vndk.v31/lib64/libcamera_metadata.so
./apex/com.android.vndk.v31/lib/libcamera_metadata.so
./apex/com.android.vndk.v31@1/lib64/libcamera_metadata.so
./apex/com.android.vndk.v31@1/lib/libcamera_metadata.so
libcamera_metadata.so used by framework/API callers:
./system/lib/libcamera_metadata.so
./system/lib64/libcamera_metadata.so
In camera_metadata.c there is a memory-layout diagram showing that the Camera Metadata data structure is one contiguous block of memory.
Its regions are laid out as follows:
Region 1: the camera_metadata_t header structure itself
Region 2: reserved for future expansion
Region 3: the entry structures of the tags already added: TAG[0], TAG[1], ..., TAG[entry_count-1]
Region 4: free space reserved for the remaining (entry_capacity - entry_count) entry structures
Region 5: the metadata values (data) referenced by the entries
Region 6: free data space for the remaining (data_capacity - data_count) bytes
# system/media/camera/src/camera_metadata.c
59 /**
60 * A packet of metadata. This is a list of entries, each of which may point to
61 * its values stored at an offset in data.
62 *
63 * It is assumed by the utility functions that the memory layout of the packet
64 * is as follows:
65 *
66 * |-----------------------------------------------|
67 * | camera_metadata_t | Region 1: the camera_metadata_t header struct
68 * | |
69 * |-----------------------------------------------|
70 * | reserved for future expansion | Region 2: reserved for future use
71 * |-----------------------------------------------|
72 * | camera_metadata_buffer_entry_t #0 | Region 3: the entry structs of all tags in use,
73 * |-----------------------------------------------| TAG[0], TAG[1], ..., TAG[entry_count-1]
74 * | .... |
75 * |-----------------------------------------------|
76 * | camera_metadata_buffer_entry_t #entry_count-1 |
77 * |-----------------------------------------------|
78 * | free space for | Region 4: space reserved for entry structs not yet used,
79 * | (entry_capacity-entry_count) entries | i.e. (entry_capacity - entry_count) entries
80 * |-----------------------------------------------|
81 * | start of camera_metadata.data | Region 5: the metadata values referenced by the entries
82 * | |
83 * |-----------------------------------------------|
84 * | free space for | Region 6: free space in the data area
85 * | (data_capacity-data_count) bytes |
86 * |-----------------------------------------------|
87 *
88 * With the total length of the whole packet being camera_metadata.size bytes.
89 *
90 * In short, the entries and data are contiguous in memory after the metadata
91 * header.
92 */
93 #define METADATA_ALIGNMENT ((size_t) 4)
94 struct camera_metadata {
95 metadata_size_t size; // total size of the metadata packet, in bytes
96 uint32_t version; //version
97 uint32_t flags;
98 metadata_size_t entry_count; // number of entries (tags) already stored in the buffer
99 metadata_size_t entry_capacity; // maximum number of entries the buffer can hold
100 metadata_uptrdiff_t entries_start; // offset of the entry array from the start of camera_metadata
101 metadata_size_t data_count; // bytes of the data section already in use
102 metadata_size_t data_capacity; // total size of the data section, in bytes
103 metadata_uptrdiff_t data_start; // offset of the data section from the start of camera_metadata
104 uint32_t padding; // padding to 8 bytes boundary
105 metadata_vendor_id_t vendor_id; // vendor id
106 };
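To make these bookkeeping fields concrete, here is a small standalone sketch (my own illustration, not AOSP code) that allocates a buffer through the public API and prints the counters via the accessor functions declared in camera_metadata.h; the capacities chosen are arbitrary.
# example: inspecting the header fields (illustrative sketch, not AOSP source)
#include <cstdio>
#include <system/camera_metadata.h>

int main() {
    // Room for 8 entries and 128 bytes of out-of-line data (arbitrary sizes).
    camera_metadata_t *m = allocate_camera_metadata(/*entry_capacity*/ 8,
                                                    /*data_capacity*/ 128);
    if (m == nullptr) return 1;

    printf("size        : %zu bytes\n", get_camera_metadata_size(m));
    printf("entry_count : %zu of %zu\n", get_camera_metadata_entry_count(m),
                                         get_camera_metadata_entry_capacity(m));
    printf("data_count  : %zu of %zu\n", get_camera_metadata_data_count(m),
                                         get_camera_metadata_data_capacity(m));

    free_camera_metadata(m);
    return 0;
}
Right after allocation, entry_count and data_count are 0, while size already accounts for the header plus the reserved entry and data regions.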
The data for each tag is stored using the camera_metadata_data_t union below. Its largest members (int64_t, double and the rational) are 8 bytes, which is why DATA_ALIGNMENT is 8: each entry's out-of-line data is aligned so that any of these types can be read in place.
108 /**
109 * A datum of metadata. This corresponds to camera_metadata_entry_t::data
110 * with the difference that each element is not a pointer. We need to have a
111 * non-pointer type description in order to figure out the largest alignment
112 * requirement for data (DATA_ALIGNMENT).
113 */
114 #define DATA_ALIGNMENT ((size_t) 8)
115 typedef union camera_metadata_data {
116 uint8_t u8;
117 int32_t i32;
118 float f;
119 int64_t i64;
120 double d;
121 camera_metadata_rational_t r;
122 } camera_metadata_data_t;
All of the Camera Metadata tags are defined in camera_metadata_tags.h. As the listing shows, the platform defines 32 metadata sections by default (Android 13):
# /system/media/camera/include/system/camera_metadata_tags.h
37 typedef enum camera_metadata_section {
38 ANDROID_COLOR_CORRECTION,
39 ANDROID_CONTROL,
40 ANDROID_DEMOSAIC,
41 ANDROID_EDGE,
42 ANDROID_FLASH,
43 ANDROID_FLASH_INFO,
44 ANDROID_HOT_PIXEL,
45 ANDROID_JPEG,
46 ANDROID_LENS,
47 ANDROID_LENS_INFO,
48 ANDROID_NOISE_REDUCTION,
49 ANDROID_QUIRKS,
50 ANDROID_REQUEST,
51 ANDROID_SCALER,
52 ANDROID_SENSOR,
53 ANDROID_SENSOR_INFO,
54 ANDROID_SHADING,
55 ANDROID_STATISTICS,
56 ANDROID_STATISTICS_INFO,
57 ANDROID_TONEMAP,
58 ANDROID_LED,
59 ANDROID_INFO,
60 ANDROID_BLACK_LEVEL,
61 ANDROID_SYNC,
62 ANDROID_REPROCESS,
63 ANDROID_DEPTH,
64 ANDROID_LOGICAL_MULTI_CAMERA,
65 ANDROID_DISTORTION_CORRECTION,
66 ANDROID_HEIC,
67 ANDROID_HEIC_INFO,
68 ANDROID_AUTOMOTIVE,
69 ANDROID_AUTOMOTIVE_LENS,
70 ANDROID_SECTION_COUNT,
71
72 VENDOR_SECTION = 0x8000
73 } camera_metadata_section_t;
Because the tags are stored as ordered entries, each section is assigned a starting value in the tag enum space; the section id occupies the high 16 bits of every tag:
# /system/media/camera/include/system/camera_metadata_tags.h
75 /**
76 * Hierarchy positions in enum space. All vendor extension tags must be
77 * defined with tag >= VENDOR_SECTION_START
78 */
79 typedef enum camera_metadata_section_start {
80 ANDROID_COLOR_CORRECTION_START = ANDROID_COLOR_CORRECTION << 16,
81 ANDROID_CONTROL_START = ANDROID_CONTROL << 16,
82 ANDROID_DEMOSAIC_START = ANDROID_DEMOSAIC << 16,
83 ANDROID_EDGE_START = ANDROID_EDGE << 16,
84 ANDROID_FLASH_START = ANDROID_FLASH << 16,
85 ANDROID_FLASH_INFO_START = ANDROID_FLASH_INFO << 16,
86 ANDROID_HOT_PIXEL_START = ANDROID_HOT_PIXEL << 16,
87 ANDROID_JPEG_START = ANDROID_JPEG << 16,
88 ANDROID_LENS_START = ANDROID_LENS << 16,
89 ANDROID_LENS_INFO_START = ANDROID_LENS_INFO << 16,
90 ANDROID_NOISE_REDUCTION_START = ANDROID_NOISE_REDUCTION << 16,
91 ANDROID_QUIRKS_START = ANDROID_QUIRKS << 16,
92 ANDROID_REQUEST_START = ANDROID_REQUEST << 16,
93 ANDROID_SCALER_START = ANDROID_SCALER << 16,
94 ANDROID_SENSOR_START = ANDROID_SENSOR << 16,
95 ANDROID_SENSOR_INFO_START = ANDROID_SENSOR_INFO << 16,
96 ANDROID_SHADING_START = ANDROID_SHADING << 16,
97 ANDROID_STATISTICS_START = ANDROID_STATISTICS << 16,
98 ANDROID_STATISTICS_INFO_START = ANDROID_STATISTICS_INFO << 16,
99 ANDROID_TONEMAP_START = ANDROID_TONEMAP << 16,
100 ANDROID_LED_START = ANDROID_LED << 16,
101 ANDROID_INFO_START = ANDROID_INFO << 16,
102 ANDROID_BLACK_LEVEL_START = ANDROID_BLACK_LEVEL << 16,
103 ANDROID_SYNC_START = ANDROID_SYNC << 16,
104 ANDROID_REPROCESS_START = ANDROID_REPROCESS << 16,
105 ANDROID_DEPTH_START = ANDROID_DEPTH << 16,
106 ANDROID_LOGICAL_MULTI_CAMERA_START
107 = ANDROID_LOGICAL_MULTI_CAMERA
108 << 16,
109 ANDROID_DISTORTION_CORRECTION_START
110 = ANDROID_DISTORTION_CORRECTION
111 << 16,
112 ANDROID_HEIC_START = ANDROID_HEIC << 16,
113 ANDROID_HEIC_INFO_START = ANDROID_HEIC_INFO << 16,
114 ANDROID_AUTOMOTIVE_START = ANDROID_AUTOMOTIVE << 16,
115 ANDROID_AUTOMOTIVE_LENS_START = ANDROID_AUTOMOTIVE_LENS << 16,
116 VENDOR_SECTION_START = VENDOR_SECTION << 16
117 } camera_metadata_section_start_t;
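Since the section id sits in the high 16 bits of every tag value, a tag can be decomposed with simple shifts. The helper below is my own illustration, not part of the AOSP headers:
# example: decomposing a tag value (illustrative sketch, not AOSP source)
#include <cstdint>
#include <cstdio>
#include <system/camera_metadata_tags.h>

// Split a tag into its section id (high 16 bits) and its offset within
// the section (low 16 bits).
static void print_tag_position(uint32_t tag) {
    printf("tag 0x%08x -> section %u, offset %u\n",
           (unsigned)tag, (unsigned)(tag >> 16), (unsigned)(tag & 0xFFFF));
}

int main() {
    print_tag_position(ANDROID_CONTROL_AE_MODE); // section ANDROID_CONTROL (1)
    print_tag_position(ANDROID_FLASH_MODE);      // section ANDROID_FLASH (4)
    return 0;
}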
Next come the detailed tags of each section; every section's tags begin at ANDROID_<SECTION>_START and are terminated by an ANDROID_<SECTION>_END marker.
# /system/media/camera/include/system/camera_metadata_tags.h
119 /**
120 * Main enum for defining camera metadata tags. New entries must always go
121 * before the section _END tag to preserve existing enumeration values. In
122 * addition, the name and type of the tag needs to be added to
123 * system/media/camera/src/camera_metadata_tag_info.c
124 */
125 typedef enum camera_metadata_tag {
126 ANDROID_COLOR_CORRECTION_MODE = // enum | public | HIDL v3.2
127 ANDROID_COLOR_CORRECTION_START,
128 ANDROID_COLOR_CORRECTION_TRANSFORM, // rational[] | public | HIDL v3.2
129 ANDROID_COLOR_CORRECTION_GAINS, // float[] | public | HIDL v3.2
130 ANDROID_COLOR_CORRECTION_ABERRATION_MODE, // enum | public | HIDL v3.2
131 ANDROID_COLOR_CORRECTION_AVAILABLE_ABERRATION_MODES,
132 // byte[] | public | HIDL v3.2
133 ANDROID_COLOR_CORRECTION_END,
134
135 ANDROID_CONTROL_AE_ANTIBANDING_MODE = // enum | public | HIDL v3.2
136 ANDROID_CONTROL_START,
137 ANDROID_CONTROL_AE_EXPOSURE_COMPENSATION, // int32 | public | HIDL v3.2
138 ANDROID_CONTROL_AE_LOCK, // enum | public | HIDL v3.2
139 ANDROID_CONTROL_AE_MODE, // enum | public | HIDL v3.2
......
193 ANDROID_CONTROL_END,
......
205 ANDROID_FLASH_FIRING_POWER = // byte | system | HIDL v3.2
206 ANDROID_FLASH_START,
207 ANDROID_FLASH_FIRING_TIME, // int64 | system | HIDL v3.2
208 ANDROID_FLASH_MODE, // enum | public | HIDL v3.2
209 ANDROID_FLASH_COLOR_TEMPERATURE, // byte | system | HIDL v3.2
210 ANDROID_FLASH_MAX_ENERGY, // byte | system | HIDL v3.2
211 ANDROID_FLASH_STATE, // enum | public | HIDL v3.2
212 ANDROID_FLASH_END,
# /system/media/camera/include/system/camera_metadata.h
37 #include "camera_metadata_tags.h"
38
// Two arrays indexed by section: per-section tag bounds and section name strings.
42 ANDROID_API
43 extern unsigned int camera_metadata_section_bounds[ANDROID_SECTION_COUNT][2];
44 ANDROID_API
45 extern const char *camera_metadata_section_names[ANDROID_SECTION_COUNT];
73 /**
74 * A reference to a metadata entry in a buffer.
75 *
76 * The data union pointers point to the real data in the buffer, and can be
77 * modified in-place if the count does not need to change. The count is the
78 * number of entries in data of the entry's type, not a count of bytes.
79 */
// The entry structure handed back to callers for each tag
80 typedef struct camera_metadata_entry {
81 size_t index;
82 uint32_t tag;
83 uint8_t type;
84 size_t count;
85 union {
86 uint8_t *u8;
87 int32_t *i32;
88 float *f;
89 int64_t *i64;
90 double *d;
91 camera_metadata_rational_t *r;
92 } data;
93 } camera_metadata_entry_t;
The same header then declares the commonly used C API functions:
# /system/media/camera/include/system/camera_metadata.h
ANDROID_API
camera_metadata_t *allocate_copy_camera_metadata_checked(
const camera_metadata_t *src,
size_t src_size);
ANDROID_API
camera_metadata_t *place_camera_metadata(void *dst, size_t dst_size,
size_t entry_capacity,
size_t data_capacity);
ANDROID_API
void free_camera_metadata(camera_metadata_t *metadata);
ANDROID_API
size_t calculate_camera_metadata_size(size_t entry_count,
size_t data_count);
ANDROID_API
camera_metadata_t *copy_camera_metadata(void *dst, size_t dst_size,
const camera_metadata_t *src);
ANDROID_API
int add_camera_metadata_entry(camera_metadata_t *dst,
uint32_t tag,
const void *data,
size_t data_count);
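A minimal end-to-end sketch of these functions (my own example, with an arbitrary tag and arbitrary capacities): allocate a buffer, add one entry, look it up again and release it.
# example: basic use of the C API (illustrative sketch, not AOSP source)
#include <cstdint>
#include <cstdio>
#include <system/camera_metadata.h>

int main() {
    camera_metadata_t *m = allocate_camera_metadata(/*entry_capacity*/ 4,
                                                    /*data_capacity*/ 64);
    if (m == nullptr) return 1;

    // ANDROID_CONTROL_AE_MODE is a single byte-sized enum value.
    uint8_t aeMode = ANDROID_CONTROL_AE_MODE_ON;
    if (add_camera_metadata_entry(m, ANDROID_CONTROL_AE_MODE, &aeMode, 1) != 0) {
        fprintf(stderr, "add_camera_metadata_entry failed\n");
    }

    camera_metadata_entry_t entry;
    if (find_camera_metadata_entry(m, ANDROID_CONTROL_AE_MODE, &entry) == 0) {
        printf("tag 0x%x, type %d, count %zu, value %d\n",
               (unsigned)entry.tag, (int)entry.type, entry.count, (int)entry.data.u8[0]);
    }

    free_camera_metadata(m);
    return 0;
}
copy_camera_metadata() and place_camera_metadata() follow the same pattern but write into memory supplied by the caller, which is what allows the packet to be placed directly into a shared-memory region.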
The following header defines the structures through which vendors register their own metadata tags and the methods used to query them.
# /system/media/camera/include/system/camera_vendor_tags.h
38 typedef struct vendor_tag_ops vendor_tag_ops_t;
39 struct vendor_tag_ops {
45 int (*get_tag_count)(const vendor_tag_ops_t *v);
53 void (*get_all_tags)(const vendor_tag_ops_t *v, uint32_t *tag_array);
72 const char *(*get_section_name)(const vendor_tag_ops_t *v, uint32_t tag);
82 const char *(*get_tag_name)(const vendor_tag_ops_t *v, uint32_t tag);
90 int (*get_tag_type)(const vendor_tag_ops_t *v, uint32_t tag);
93 void* reserved[8];
94 };
95
96 struct vendor_tag_cache_ops {
102 int (*get_tag_count)(metadata_vendor_id_t id);
110 void (*get_all_tags)(uint32_t *tag_array, metadata_vendor_id_t id);
129 const char *(*get_section_name)(uint32_t tag, metadata_vendor_id_t id);
139 const char *(*get_tag_name)(uint32_t tag, metadata_vendor_id_t id);
147 int (*get_tag_type)(uint32_t tag, metadata_vendor_id_t id);
150 void* reserved[8];
151 };
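The sketch below shows how a vendor might fill in vendor_tag_ops for a single custom tag. Everything here (the tag id, section name and tag name) is hypothetical and only meant to illustrate the callback shapes.
# example: a minimal vendor_tag_ops table (illustrative sketch; the vendor tag is hypothetical)
#include <cstdint>
#include <system/camera_metadata.h>
#include <system/camera_vendor_tags.h>

// Hypothetical single vendor tag, placed at the start of the vendor section.
static const uint32_t kMyVendorTag = VENDOR_SECTION_START;

static int my_get_tag_count(const vendor_tag_ops_t *) { return 1; }
static void my_get_all_tags(const vendor_tag_ops_t *, uint32_t *tag_array) {
    tag_array[0] = kMyVendorTag;
}
static const char *my_get_section_name(const vendor_tag_ops_t *, uint32_t tag) {
    return (tag == kMyVendorTag) ? "com.example.camera" : nullptr;
}
static const char *my_get_tag_name(const vendor_tag_ops_t *, uint32_t tag) {
    return (tag == kMyVendorTag) ? "exampleMode" : nullptr;
}
static int my_get_tag_type(const vendor_tag_ops_t *, uint32_t tag) {
    return (tag == kMyVendorTag) ? TYPE_BYTE : -1;
}

vendor_tag_ops_t my_vendor_tag_ops;  // zero-initialized; filled in below

void init_my_vendor_tag_ops() {
    my_vendor_tag_ops.get_tag_count    = my_get_tag_count;
    my_vendor_tag_ops.get_all_tags     = my_get_all_tags;
    my_vendor_tag_ops.get_section_name = my_get_section_name;
    my_vendor_tag_ops.get_tag_name     = my_get_tag_name;
    my_vendor_tag_ops.get_tag_type     = my_get_tag_type;
}
The framework consumes such a table through set_camera_metadata_vendor_ops(), which is shown later in camera_metadata.c.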
# system/media/camera/src/camera_metadata_tag_info.c
33 const char *camera_metadata_section_names[ANDROID_SECTION_COUNT] = {
34 [ANDROID_COLOR_CORRECTION] = "android.colorCorrection",
35 [ANDROID_CONTROL] = "android.control",
36 [ANDROID_DEMOSAIC] = "android.demosaic",
37 [ANDROID_EDGE] = "android.edge",
38 [ANDROID_FLASH] = "android.flash",
39 [ANDROID_FLASH_INFO] = "android.flash.info",
40 [ANDROID_HOT_PIXEL] = "android.hotPixel",
41 [ANDROID_JPEG] = "android.jpeg",
42 [ANDROID_LENS] = "android.lens",
43 [ANDROID_LENS_INFO] = "android.lens.info",
44 [ANDROID_NOISE_REDUCTION] = "android.noiseReduction",
45 [ANDROID_QUIRKS] = "android.quirks",
46 [ANDROID_REQUEST] = "android.request",
47 [ANDROID_SCALER] = "android.scaler",
48 [ANDROID_SENSOR] = "android.sensor",
49 [ANDROID_SENSOR_INFO] = "android.sensor.info",
50 [ANDROID_SHADING] = "android.shading",
51 [ANDROID_STATISTICS] = "android.statistics",
52 [ANDROID_STATISTICS_INFO] = "android.statistics.info",
53 [ANDROID_TONEMAP] = "android.tonemap",
54 [ANDROID_LED] = "android.led",
55 [ANDROID_INFO] = "android.info",
56 [ANDROID_BLACK_LEVEL] = "android.blackLevel",
57 [ANDROID_SYNC] = "android.sync",
58 [ANDROID_REPROCESS] = "android.reprocess",
59 [ANDROID_DEPTH] = "android.depth",
60 [ANDROID_LOGICAL_MULTI_CAMERA] = "android.logicalMultiCamera",
61 [ANDROID_DISTORTION_CORRECTION]
62 = "android.distortionCorrection",
63 [ANDROID_HEIC] = "android.heic",
64 [ANDROID_HEIC_INFO] = "android.heic.info",
65 [ANDROID_AUTOMOTIVE] = "android.automotive",
66 [ANDROID_AUTOMOTIVE_LENS] = "android.automotive.lens",
67 };
282 static tag_info_t android_flash[ANDROID_FLASH_END -
283 ANDROID_FLASH_START] = {
284 [ ANDROID_FLASH_FIRING_POWER - ANDROID_FLASH_START ] =
285 { "firingPower", TYPE_BYTE },
286 [ ANDROID_FLASH_FIRING_TIME - ANDROID_FLASH_START ] =
287 { "firingTime", TYPE_INT64 },
288 [ ANDROID_FLASH_MODE - ANDROID_FLASH_START ] =
289 { "mode", TYPE_BYTE },
290 [ ANDROID_FLASH_COLOR_TEMPERATURE - ANDROID_FLASH_START ] =
291 { "colorTemperature", TYPE_BYTE },
292 [ ANDROID_FLASH_MAX_ENERGY - ANDROID_FLASH_START ] =
293 { "maxEnergy", TYPE_BYTE },
294 [ ANDROID_FLASH_STATE - ANDROID_FLASH_START ] =
295 { "state", TYPE_BYTE },
296 };
297
Having covered the memory layout, the macro definitions and the operations, let's move into the C code and look at the core implementation.
# system/media/camera/src/camera_metadata.c
#define LOG_TAG "camera_metadata"
#include <system/camera_metadata.h>
#include <camera_metadata_hidden.h>
// Get a pointer to the entry array
static camera_metadata_buffer_entry_t *get_entries( const camera_metadata_t *metadata) {
return (camera_metadata_buffer_entry_t*) ((uint8_t*)metadata + metadata->entries_start);
}
// Get a pointer to the data section
static uint8_t *get_data(const camera_metadata_t *metadata) {
return (uint8_t*)metadata + metadata->data_start;
}
// Allocate a camera_metadata buffer on the heap
camera_metadata_t *allocate_camera_metadata(size_t entry_capacity,size_t data_capacity) {
size_t memory_needed = calculate_camera_metadata_size(entry_capacity,data_capacity);
void *buffer = calloc(1, memory_needed);
camera_metadata_t *metadata = place_camera_metadata( buffer, memory_needed, entry_capacity, data_capacity);
return metadata;
}
// Initialize a camera_metadata structure inside a caller-provided buffer
camera_metadata_t *place_camera_metadata(void *dst, size_t dst_size, size_t entry_capacity, size_t data_capacity) {
size_t memory_needed = calculate_camera_metadata_size(entry_capacity, data_capacity);
if (memory_needed > dst_size) return NULL;
camera_metadata_t *metadata = (camera_metadata_t*)dst;
metadata->version = CURRENT_METADATA_VERSION;
metadata->flags = 0;
metadata->entry_count = 0;
metadata->entry_capacity = entry_capacity;
metadata->entries_start = ALIGN_TO(sizeof(camera_metadata_t), ENTRY_ALIGNMENT);
metadata->data_count = 0;
metadata->data_capacity = data_capacity;
metadata->size = memory_needed;
size_t data_unaligned = (uint8_t*)(get_entries(metadata) + metadata->entry_capacity) - (uint8_t*)metadata;
metadata->data_start = ALIGN_TO(data_unaligned, DATA_ALIGNMENT);
metadata->vendor_id = CAMERA_METADATA_INVALID_VENDOR_ID;
assert(validate_camera_metadata_structure(metadata, NULL) == OK);
return metadata;
}
void free_camera_metadata(camera_metadata_t *metadata) {
free(metadata);
}
// Copy a metadata buffer into a caller-provided destination
camera_metadata_t* copy_camera_metadata(void *dst, size_t dst_size,const camera_metadata_t *src) {
size_t memory_needed = get_camera_metadata_compact_size(src);
camera_metadata_t *metadata = place_camera_metadata(dst, dst_size, src->entry_count, src->data_count);
metadata->flags = src->flags;
metadata->entry_count = src->entry_count;
metadata->data_count = src->data_count;
metadata->vendor_id = src->vendor_id;
memcpy(get_entries(metadata), get_entries(src), sizeof(camera_metadata_buffer_entry_t[metadata->entry_count]));
memcpy(get_data(metadata), get_data(src), sizeof(uint8_t[metadata->data_count]));
assert(validate_camera_metadata_structure(metadata, NULL) == OK);
return metadata;
}
int add_camera_metadata_entry(camera_metadata_t *dst, uint32_t tag, const void *data, size_t data_count) {
int type = get_local_camera_metadata_tag_type(tag, dst);
return add_camera_metadata_entry_raw(dst, tag, type, data, data_count);
}
int find_camera_metadata_entry(camera_metadata_t *src, uint32_t tag, camera_metadata_entry_t *entry) {
if (src == NULL) return ERROR;
uint32_t index;
if (src->flags & FLAG_SORTED) {
// Sorted entries, do a binary search
camera_metadata_buffer_entry_t *search_entry = NULL;
camera_metadata_buffer_entry_t key;
key.tag = tag;
search_entry = bsearch(&key, get_entries(src), src->entry_count,
sizeof(camera_metadata_buffer_entry_t), compare_entry_tags);
if (search_entry == NULL) return NOT_FOUND;
index = search_entry - get_entries(src);
} else {
// Not sorted, linear search
camera_metadata_buffer_entry_t *search_entry = get_entries(src);
for (index = 0; index < src->entry_count; index++, search_entry++) {
if (search_entry->tag == tag) {
break;
}
}
if (index == src->entry_count) return NOT_FOUND;
}
return get_camera_metadata_entry(src, index, entry);
}
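// --- Illustrative sketch, not part of camera_metadata.c: the bsearch() path
// above is only taken when FLAG_SORTED is set, so code that performs many
// lookups typically calls sort_camera_metadata() (declared in
// camera_metadata.h) once after bulk insertion. Assumes <cstdint> and
// <system/camera_metadata.h> are included; tag and sizes are arbitrary.
static void sorted_lookup_demo() {
    camera_metadata_t *m = allocate_camera_metadata(/*entries*/ 32, /*data*/ 512);
    uint8_t aeMode = ANDROID_CONTROL_AE_MODE_ON;
    add_camera_metadata_entry(m, ANDROID_CONTROL_AE_MODE, &aeMode, 1);
    // ... add the remaining entries ...

    sort_camera_metadata(m);  // sorts the entry array and sets FLAG_SORTED

    camera_metadata_entry_t entry;
    find_camera_metadata_entry(m, ANDROID_CONTROL_AE_MODE, &entry);  // binary search
    free_camera_metadata(m);
}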
int delete_camera_metadata_entry(camera_metadata_t *dst, size_t index) {
camera_metadata_buffer_entry_t *entry = get_entries(dst) + index;
size_t data_bytes = calculate_camera_metadata_entry_data_size(entry->type, entry->count);
if (data_bytes > 0) {
// Shift data buffer to overwrite deleted data
uint8_t *start = get_data(dst) + entry->data.offset;
uint8_t *end = start + data_bytes;
size_t length = dst->data_count - entry->data.offset - data_bytes;
memmove(start, end, length);
// Update all entry indices to account for shift
camera_metadata_buffer_entry_t *e = get_entries(dst);
size_t i;
for (i = 0; i < dst->entry_count; i++) {
if (calculate_camera_metadata_entry_data_size( e->type, e->count) > 0 &&
e->data.offset > entry->data.offset) {
e->data.offset -= data_bytes;
}
++e;
}
dst->data_count -= data_bytes;
}
// Shift entry array
memmove(entry, entry + 1, sizeof(camera_metadata_buffer_entry_t) *(dst->entry_count - index - 1) );
dst->entry_count -= 1;
assert(validate_camera_metadata_structure(dst, NULL) == OK);
return OK;
}
int update_camera_metadata_entry(camera_metadata_t *dst,size_t index, const void *data,size_t data_count,
camera_metadata_entry_t *updated_entry) {
camera_metadata_buffer_entry_t *entry = get_entries(dst) + index;
size_t data_bytes =calculate_camera_metadata_entry_data_size(entry->type, data_count);
size_t data_payload_bytes =data_count * camera_metadata_type_size[entry->type];
size_t entry_bytes = calculate_camera_metadata_entry_data_size(entry->type, entry->count);
if (data_bytes != entry_bytes) {
// May need to shift/add to data array
if (dst->data_capacity < dst->data_count + data_bytes - entry_bytes) {
// No room
return ERROR;
}
if (entry_bytes != 0) {
// Remove old data
uint8_t *start = get_data(dst) + entry->data.offset;
uint8_t *end = start + entry_bytes;
size_t length = dst->data_count - entry->data.offset - entry_bytes;
memmove(start, end, length);
dst->data_count -= entry_bytes;
// Update all entry indices to account for shift
camera_metadata_buffer_entry_t *e = get_entries(dst);
size_t i;
for (i = 0; i < dst->entry_count; i++) {
if (calculate_camera_metadata_entry_data_size( e->type, e->count) > 0 && e->data.offset > entry->data.offset) {
e->data.offset -= entry_bytes;
}
++e;
}
}
if (data_bytes != 0) {
// Append new data
entry->data.offset = dst->data_count;
memcpy(get_data(dst) + entry->data.offset, data, data_payload_bytes);
dst->data_count += data_bytes;
}
} else if (data_bytes != 0) {
// data size unchanged, reuse same data location
memcpy(get_data(dst) + entry->data.offset, data, data_payload_bytes);
}
if (data_bytes == 0) {
// Data fits into entry
memcpy(entry->data.value, data, data_payload_bytes);
}
entry->count = data_count;
if (updated_entry != NULL) {
get_camera_metadata_entry(dst, index, updated_entry);
}
assert(validate_camera_metadata_structure(dst, NULL) == OK);
return OK;
}
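To drive this path directly, the caller first locates the entry (to get its index) and then updates it with the new payload. Note that update_camera_metadata_entry() never reallocates: if the new data is larger, the buffer must already have enough spare data_capacity. A short sketch under those assumptions, using an arbitrary tag:
# example: updating an existing entry in place (illustrative sketch, not AOSP source)
#include <cstdint>
#include <system/camera_metadata.h>

// Replace the value of an entry that was added earlier. Returns 0 on success.
static int set_ae_mode(camera_metadata_t *m, uint8_t newMode) {
    camera_metadata_entry_t entry;
    int res = find_camera_metadata_entry(m, ANDROID_CONTROL_AE_MODE, &entry);
    if (res != 0) return res;  // e.g. NOT_FOUND
    return update_camera_metadata_entry(m, entry.index, &newMode,
                                        /*data_count*/ 1, /*updated_entry*/ nullptr);
}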
Through the vendor ops, a vendor can define its own metadata tags together with the operations used to query them; the tables are registered with set_camera_metadata_vendor_ops() and set_camera_metadata_vendor_cache_ops().
# system/media/camera/src/camera_metadata.c
static const vendor_tag_ops_t *vendor_tag_ops = NULL;
static const struct vendor_tag_cache_ops *vendor_cache_ops = NULL;
// Declared in system/media/private/camera/include/camera_metadata_hidden.h
int set_camera_metadata_vendor_ops(const vendor_tag_ops_t* ops) {
vendor_tag_ops = ops;
return OK;
}
// Declared in system/media/private/camera/include/camera_metadata_hidden.h
int set_camera_metadata_vendor_cache_ops( const struct vendor_tag_cache_ops *query_cache_ops) {
vendor_cache_ops = query_cache_ops;
return OK;
}
static void print_data(int fd, const uint8_t *data_ptr, uint32_t tag, int type, int count, int indentation);
void dump_camera_metadata(const camera_metadata_t *metadata, int fd, int verbosity) {
dump_indented_camera_metadata(metadata, fd, verbosity, 0);
}
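Below is a sketch of how the hypothetical ops table from the earlier example could be registered, and of dumping a buffer for debugging. set_camera_metadata_vendor_ops() is declared in the private header camera_metadata_hidden.h, so this assumes the caller builds with system/media/private/camera/include on its include path.
# example: registering vendor ops and dumping a buffer (illustrative sketch, not AOSP source)
#include <unistd.h>                     // STDERR_FILENO
#include <system/camera_metadata.h>
#include <system/camera_vendor_tags.h>
#include <camera_metadata_hidden.h>     // private header with the vendor-ops setters

// Defined in the earlier vendor_tag_ops sketch (hypothetical).
extern vendor_tag_ops_t my_vendor_tag_ops;
extern void init_my_vendor_tag_ops();

static void register_and_dump(const camera_metadata_t *m) {
    init_my_vendor_tag_ops();
    set_camera_metadata_vendor_ops(&my_vendor_tag_ops);

    // Dump the whole packet to stderr; higher verbosity prints more of each
    // entry's values.
    dump_camera_metadata(m, STDERR_FILENO, /*verbosity*/ 2);
}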
The framework-side Camera Metadata code lives mainly in frameworks/av/camera/CameraMetadata.cpp.
As the Android.bp below shows, CameraMetadata.cpp is compiled together with the camera client code into the libcamera_client.so library.
# /frameworks/av/camera/Android.bp
61 srcs: [
62 // AIDL files for camera interfaces
63 // The headers for these interfaces will be available to any modules that
64 // include libcamera_client, at the path "aidl/package/path/BnFoo.h"
65 ":libcamera_client_aidl",
66
67 // Source for camera interface parcelables, and manually-written interfaces
68 "Camera.cpp",
69 "CameraMetadata.cpp",
70 "CameraParameters.cpp",
71 "CaptureResult.cpp",
72 "CameraParameters2.cpp",
73 "CameraSessionStats.cpp",
74 "ICamera.cpp",
75 "ICameraClient.cpp",
76 "ICameraRecordingProxy.cpp",
77 "camera2/CaptureRequest.cpp",
78 "camera2/ConcurrentCamera.cpp",
79 "camera2/OutputConfiguration.cpp",
80 "camera2/SessionConfiguration.cpp",
81 "camera2/SubmitInfo.cpp",
82 "CameraBase.cpp",
83 "CameraUtils.cpp",
84 "VendorTagDescriptor.cpp",
85 ],
87 shared_libs: [
88 "libbase",
89 "libcutils",
90 "libutils",
91 "liblog",
92 "libbinder",
93 "libgui",
94 "libcamera_metadata", // 使用 system 中的 libcamera_metadata.so 共享库
95 "libnativewindow",
96 ],
Take the code in frameworks/av/services/camera/libcameraservice/CameraFlashlight.cpp as an example.
As it shows, using CameraMetadata boils down to the following steps:
① Create the mMetadata object.
② Obtain the default metadata for the CAMERA3_TEMPLATE_PREVIEW template.
③ Call mMetadata->update() to change individual settings.
④ Call setStreamingRequest() to submit the settings.
# frameworks/av/services/camera/libcameraservice/CameraFlashlight.cpp
status_t CameraDeviceClientFlashControl::submitTorchEnabledRequest() {
status_t res;
if (mMetadata == NULL) {
// 1. Create the mMetadata object
mMetadata = new CameraMetadata();
// 2. Obtain the default metadata for the CAMERA3_TEMPLATE_PREVIEW template
res = mDevice->createDefaultRequest( CAMERA3_TEMPLATE_PREVIEW, mMetadata);
}
// 3. Call mMetadata->update() to change individual settings
uint8_t torchOn = ANDROID_FLASH_MODE_TORCH;
mMetadata->update(ANDROID_FLASH_MODE, &torchOn, 1);
mMetadata->update(ANDROID_REQUEST_OUTPUT_STREAMS, &mStreamId, 1);
uint8_t aeMode = ANDROID_CONTROL_AE_MODE_ON;
mMetadata->update(ANDROID_CONTROL_AE_MODE, &aeMode, 1);
int32_t requestId = 0;
mMetadata->update(ANDROID_REQUEST_ID, &requestId, 1);
if (mStreaming) {
// 4. Submit the settings via setStreamingRequest()
res = mDevice->setStreamingRequest(*mMetadata);
======================>
+ @ frameworks/av/services/camera/libcameraservice/device3/Camera3Device.cpp
+ List<const CameraMetadata> requests;
+ requests.push_back(request);
+ return setStreamingRequestList(requests, /*lastFrameNumber*/NULL);
+ =======>
+ return submitRequestsHelper(requests, /*repeating*/true, lastFrameNumber);
<======================
} else {
res = mDevice->capture(*mMetadata);
}
return res;
}
As you can see, the request ends up being submitted in Camera3Device.cpp: it is placed on the request queue and then handled by Camera3Device::RequestThread.
# frameworks/av/services/camera/libcameraservice/device3/Camera3Device.cpp
status_t Camera3Device::submitRequestsHelper(
const List<const CameraMetadata> &requests, bool repeating, /*out*/ int64_t *lastFrameNumber) {
RequestList requestList;
res = convertMetadataListToRequestListLocked(requests, /*out*/&requestList);
if (repeating) {
res = mRequestThread->setRepeatingRequests(requestList, lastFrameNumber);
} else {
res = mRequestThread->queueRequestList(requestList, lastFrameNumber);
}
if (res == OK) {
waitUntilStateThenRelock(/*active*/true, kActiveTimeout);
if (res != OK) {
SET_ERR_L("Can't transition to active in %f seconds!", kActiveTimeout/1e9);
}
ALOGV("Camera %d: Capture request %" PRId32 " enqueued", mId,
(*(requestList.begin()))->mResultExtras.requestId);
}
return res;
}
Let's look at the concrete implementation of Camera3Device::RequestThread::threadLoop():
① Wait for the next batch of requests and store them in mNextRequests.
② Read the latest request ID from the settings (built here from the CAMERA3_TEMPLATE_PREVIEW template).
③ Call the HAL's process_capture_request() to handle each request.
# frameworks/av/services/camera/libcameraservice/device3/Camera3Device.cpp
bool Camera3Device::RequestThread::threadLoop() {
// 1. Wait for the next batch of requests and store them in mNextRequests.
// Wait for the next batch of requests.
waitForNextRequestBatch();
===========>
+ additionalRequest.captureRequest = waitForNextRequestLocked();
+ mNextRequests.add(additionalRequest);
<===========
if (mNextRequests.size() == 0) {
return true;
}
// 2. Read the latest request ID from the ANDROID_REQUEST_ID entry of the settings
// Get the latest request ID, if any
int latestRequestId;
camera_metadata_entry_t requestIdEntry = mNextRequests[mNextRequests.size() - 1].
captureRequest->mSettings.find(ANDROID_REQUEST_ID);
if (requestIdEntry.count > 0) {
latestRequestId = requestIdEntry.data.i32[0];
}
// Prepare a batch of HAL requests and output buffers.
res = prepareHalRequests();
=============>
+ status_t res = insertTriggers(captureRequest);
+ ------------->
+ mTriggerRemovedMap.add(tag, trigger);
+ res = metadata.update(tag, &entryValue, /*count*/1);
+ <-------------
+ mPrevRequest = captureRequest;
<=============
mLatestRequestId = latestRequestId;
mLatestRequestSignal.signal();
// 3. Call the HAL's process_capture_request() to handle each request
ALOGVV("%s: %d: submitting %zu requests in a batch.", __FUNCTION__, __LINE__, mNextRequests.size());
for (auto& nextRequest : mNextRequests) {
// Submit request and block until ready for next one
ATRACE_ASYNC_BEGIN("frame capture", nextRequest.halRequest.frame_number);
ATRACE_BEGIN("camera3->process_capture_request");
res = mHal3Device->ops->process_capture_request(mHal3Device, &nextRequest.halRequest);
============>
+ # hardware/qcom/camera/QCamera2/HAL3/QCamera3HWI.cpp
+ QCamera3HardwareInterface *hw = reinterpret_cast<QCamera3HardwareInterface *>(device->priv);
+ int rc = hw->orchestrateRequest(request);
+
<============
// Mark that the request has be submitted successfully.
nextRequest.submitted = true;
// Update the latest request sent to HAL
if (nextRequest.halRequest.settings != NULL) { // Don't update if they were unchanged
Mutex::Autolock al(mLatestRequestMutex);
camera_metadata_t* cloned = clone_camera_metadata(nextRequest.halRequest.settings);
mLatestRequest.acquire(cloned);
sp<Camera3Device> parent = mParent.promote();
if (parent != NULL) {
parent->monitorMetadata(TagMonitor::REQUEST, nextRequest.halRequest.frame_number,
0, mLatestRequest);
}
}
// Remove the triggers queued for this request
// Remove any previously queued triggers (after unlock)
res = removeTriggers(mPrevRequest);
}
mNextRequests.clear();
return true;
}
From here on, handling the request is vendor code, which we will not analyze here.
# frameworks/av/include/camera/CameraMetadata.h
class CameraMetadata: public Parcelable {
public:
/** Creates an empty object; best used when expecting to acquire contents from elsewhere */
CameraMetadata();
/** Creates an object with space for entryCapacity entries, with dataCapacity extra storage */
CameraMetadata(size_t entryCapacity, size_t dataCapacity = 10);
/** Takes ownership of passed-in buffer */
CameraMetadata(camera_metadata_t *buffer);
/** Clones the metadata */
CameraMetadata(const CameraMetadata &other);
/* Update metadata entry. Will create entry if it doesn't exist already, and
* will reallocate the buffer if insufficient space exists. Overloaded for
* the various types of valid data. */
status_t update(uint32_t tag, const uint8_t *data, size_t data_count);
status_t update(uint32_t tag, const int32_t *data, size_t data_count);
status_t update(uint32_t tag, const float *data, size_t data_count);
status_t update(uint32_t tag, const int64_t *data, size_t data_count);
status_t update(uint32_t tag, const double *data, size_t data_count);
status_t update(uint32_t tag, const camera_metadata_rational_t *data, size_t data_count);
status_t update(uint32_t tag, const String8 &string);
status_t update(const camera_metadata_ro_entry &entry);
template<typename T>
status_t update(uint32_t tag, Vector<T> data) {
return update(tag, data.array(), data.size());
}
// Metadata object is unchanged when reading from parcel fails.
virtual status_t readFromParcel(const Parcel *parcel) override;
virtual status_t writeToParcel(Parcel *parcel) const override;
/* Caller becomes the owner of the new metadata
* 'const Parcel' doesnt prevent us from calling the read functions.
* which is interesting since it changes the internal state
*
* NULL can be returned when no metadata was sent, OR if there was an issue
* unpacking the serialized data (i.e. bad parcel or invalid structure).*/
static status_t readFromParcel(const Parcel &parcel, camera_metadata_t** out);
/* Caller retains ownership of metadata
* - Write 2 (int32 + blob) args in the current position */
static status_t writeToParcel(Parcel &parcel, const camera_metadata_t* metadata);
private:
camera_metadata_t *mBuffer;
# frameworks/av/camera/CameraMetadata.cpp
203 status_t CameraMetadata::update(uint32_t tag,
204 const int32_t *data, size_t data_count) {
205 status_t res;
206 if (mLocked) {
207 ALOGE("%s: CameraMetadata is locked", __FUNCTION__);
208 return INVALID_OPERATION;
209 }
210 if ( (res = checkType(tag, TYPE_INT32)) != OK) {
211 return res;
212 }
213 return updateImpl(tag, (const void*)data, data_count);
214 }
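Before diving into updateImpl(), here is a sketch of typical standalone use of the wrapper, mirroring the CameraFlashlight flow above (my own example; the capacities are arbitrary):
# example: using the CameraMetadata wrapper (illustrative sketch, not AOSP source)
#include <cstdint>
#include <camera/CameraMetadata.h>
#include <system/camera_metadata_tags.h>
#include <utils/Errors.h>

using android::CameraMetadata;
using android::status_t;

// Build a small request-style settings packet.
static status_t buildTorchSettings(CameraMetadata *out) {
    CameraMetadata settings(/*entryCapacity*/ 10, /*dataCapacity*/ 100);

    uint8_t torchOn = ANDROID_FLASH_MODE_TORCH;
    status_t res = settings.update(ANDROID_FLASH_MODE, &torchOn, 1);
    if (res != android::OK) return res;

    uint8_t aeMode = ANDROID_CONTROL_AE_MODE_ON;
    res = settings.update(ANDROID_CONTROL_AE_MODE, &aeMode, 1);
    if (res != android::OK) return res;

    // find() returns an entry whose data pointers reference the internal buffer.
    camera_metadata_entry_t e = settings.find(ANDROID_FLASH_MODE);
    if (e.count == 1 && e.data.u8[0] == ANDROID_FLASH_MODE_TORCH) {
        *out = settings;  // copy-assignment clones the underlying buffer
    }
    return android::OK;
}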
As you can see, all of the typed update() overloads end up in CameraMetadata::updateImpl(); let's look at its implementation.
If the entry already exists, its data is updated in place; if it does not, a new entry is added.
At this point the metadata only lives in this process's buffer; nothing has been submitted yet, so the new settings cannot take effect until the request is sent down.
# frameworks/av/camera/CameraMetadata.cpp
status_t CameraMetadata::updateImpl(uint32_t tag, const void *data, size_t data_count) {
int type = get_camera_metadata_tag_type(tag); // look up the tag's type, needed to compute the data size below
// Safety check - ensure that data isn't pointing to this metadata, since
// that would get invalidated if a resize is needed
size_t bufferSize = get_camera_metadata_size(mBuffer);
uintptr_t bufAddr = reinterpret_cast<uintptr_t>(mBuffer);
uintptr_t dataAddr = reinterpret_cast<uintptr_t>(data);
size_t data_size = calculate_camera_metadata_entry_data_size(type, data_count);
res = resizeIfNeeded(1, data_size);
if (res == OK) {
camera_metadata_entry_t entry;
res = find_camera_metadata_entry(mBuffer, tag, &entry);
if (res == NAME_NOT_FOUND) {
res = add_camera_metadata_entry(mBuffer,tag, data, data_count);
} else if (res == OK) {
res = update_camera_metadata_entry(mBuffer, entry.index, data, data_count, NULL);
}
}
return res;
}
int update_camera_metadata_entry(camera_metadata_t *dst, size_t index, const void *data,
size_t data_count, camera_metadata_entry_t *updated_entry) {
camera_metadata_buffer_entry_t *entry = get_entries(dst) + index;
size_t data_bytes = calculate_camera_metadata_entry_data_size(entry->type, data_count);
size_t data_payload_bytes = data_count * camera_metadata_type_size[entry->type];
size_t entry_bytes = calculate_camera_metadata_entry_data_size(entry->type, entry->count);
if (data_bytes != entry_bytes) {
if (entry_bytes != 0) {
// Remove old data
uint8_t *start = get_data(dst) + entry->data.offset;
uint8_t *end = start + entry_bytes;
size_t length = dst->data_count - entry->data.offset - entry_bytes;
memmove(start, end, length);
dst->data_count -= entry_bytes;
// Update all entry indices to account for shift
camera_metadata_buffer_entry_t *e = get_entries(dst);
size_t i;
for (i = 0; i < dst->entry_count; i++) {
if (calculate_camera_metadata_entry_data_size(
e->type, e->count) > 0 &&
e->data.offset > entry->data.offset) {
e->data.offset -= entry_bytes;
}
++e;
}
}
if (data_bytes != 0) {
// Append new data
entry->data.offset = dst->data_count;
memcpy(get_data(dst) + entry->data.offset, data, data_payload_bytes);
dst->data_count += data_bytes;
}
} else if (data_bytes != 0) {
// data size unchanged, reuse same data location
memcpy(get_data(dst) + entry->data.offset, data, data_payload_bytes);
}
if (data_bytes == 0) {
// Data fits into entry
memcpy(entry->data.value, data,data_payload_bytes);
}
entry->count = data_count;
if (updated_entry != NULL) {
get_camera_metadata_entry(dst, index, updated_entry);
}
assert(validate_camera_metadata_structure(dst, NULL) == OK);
return OK;
}
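One practical note that follows from resizeIfNeeded(): an update() may reallocate the underlying buffer, so any camera_metadata_entry_t obtained from find() before the update must not be used afterwards; look the entry up again instead. A brief sketch (the tag and value are arbitrary):
# example: re-find entries after an update (illustrative sketch, not AOSP source)
#include <cstdint>
#include <camera/CameraMetadata.h>
#include <system/camera_metadata_tags.h>

using android::CameraMetadata;

static void setRequestId(CameraMetadata &settings) {
    int32_t requestId = 42;  // arbitrary value
    settings.update(ANDROID_REQUEST_ID, &requestId, 1);

    // Entries found before update() may point into a freed buffer if a
    // resize happened, so re-find after the update.
    camera_metadata_entry_t e = settings.find(ANDROID_REQUEST_ID);
    if (e.count == 1 && e.data.i32[0] == requestId) {
        // The setting is now stored in the local buffer; it still has to be
        // submitted in a capture request before it takes effect.
    }
}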
5. References
Copyright notice: this is an original article by the CSDN blogger 程序员Android, licensed under CC 4.0 BY-SA; please include the original source link and this notice when reposting.
Original link: https://blog.csdn.net/wjky2014/article/details/120480345