[TI mmWave Radar Notes] mmWave Radar Chip Architecture Analysis: Reading the out_of_box Demo Code (Using the IWR6843AOP as an Example)
blog.csdn.net/weixin_53403301/article/details/132522364
The out-of-box project for the IWR6843AOP is built around the IWR6843AOPEVM evaluation board.
The project puts both of the IWR6843AOP's UARTs to use. Its functionality has two main aspects:
Parameter configuration and a handshake protocol over the 115200-baud UART
Radar data output over the 115200*8 (921600) baud UART
This project is meant to be used together with TI's official host GUI, mmWave_Demo_Visualizer_3.6.0.
Once the serial ports are connected, the visualizer automates the whole sequence and visualizes the radar data.
The radar parameter configuration lives in the SDK under the mmw\profiles directory.
In short, you can change the radar parameters simply by editing the files in that directory.
This approach, however, does not lend itself to direct changes: the parameters are fixed for every run driven by the visualizer (and running the visualizer requires the SDK environment). The alternative is to hard-code the parameters in the application, which is the direction this article explores.
First, locate the project in the industrial toolbox directory:
C:\ti\mmwave_industrial_toolbox_4_12_0\labs\Out_Of_Box_Demo\src\xwr6843AOP
Importing it with CCS's import project function completes the environment setup.
The SDK used here is the latest version, 3.6.
The following is taken from the official documentation and can be skipped.
The demo consists of the following (SYSBIOS) tasks:
MmwDemo_initTask. This task is created/launched by main and is a one-time active task whose main functionality is to initialize drivers (<driver>_init), MMWave module (MMWave_init), DPM module (DPM_init), open UART and data path related drivers (EDMA, HWA), and create/launch the following tasks (the CLI_task is launched indirectly by calling CLI_open).
CLI_task. This command line interface task provides a simplified 'shell' interface which allows the configuration of the BSS via the mmWave interface (MMWave_config). It parses input CLI configuration commands like chirp profile and GUI configuration. When the sensor start CLI command is parsed, all actions related to starting the sensor and starting the processing of the data path are taken. When the sensor stop CLI command is parsed, all actions related to stopping the sensor and stopping the processing of the data path are taken.
MmwDemo_mmWaveCtrlTask. This task is used to provide an execution context for the mmWave control, it calls in an endless loop the MMWave_execute API.
MmwDemo_DPC_ObjectDetection_dpmTask. This task is used to provide an execution context for DPM (Data Path Manager) execution; it calls the DPM_execute API in an endless loop. All of the registered object detection DPC (Data Path Chain) APIs like configuration, control and execute take place in this task's context. When the DPC's execute API produces the detected objects and other results, they are transmitted out of the UART port for display using the visualizer.
Top Level Data Path Processing Chain
Top Level Data Path Timing
The data path processing consists of taking ADC samples as input and producing detected objects (point-cloud and other information) to be shipped out of UART port to the PC. The algorithm processing is realized using the DPM registered Object Detection DPC. The details of the processing in DPC can be seen from the following doxygen documentation:
ti/datapath/dpc/objectdetection/objdethwa/docs/doxygen/html/index.html
Output packets with the detection information are sent out every frame through the UART. Each packet consists of the header MmwDemo_output_message_header_t and the number of TLV items containing various data information with types enumerated in MmwDemo_output_message_type_e. The numerical values of the types can be found in mmw_output.h. Each TLV item consists of type, length (MmwDemo_output_message_tl_t) and payload information. The structure of the output packet is illustrated in the following figure. Since the length of the packet depends on the number of detected objects it can vary from frame to frame. The end of the packet is padded so that the total packet length is always multiple of 32 Bytes.
Output packet structure sent to UART
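Based on the layout described above, the per-frame header and the TLV header can be sketched as plain C structs. The field list follows mmw_output.h of the 3.x SDK as I understand it, but should be verified against the copy in your SDK version:

```c
#include <stdint.h>

/* Sketch of the per-frame output header described above.
 * Verify the field list against mmw_output.h in your SDK version. */
typedef struct MmwDemo_output_message_header_t_
{
    uint16_t magicWord[4];   /* sync pattern: 0x0102, 0x0304, 0x0506, 0x0708 */
    uint32_t version;
    uint32_t totalPacketLen; /* padded so it is always a multiple of 32 bytes */
    uint32_t platform;
    uint32_t frameNumber;
    uint32_t timeCpuCycles;
    uint32_t numDetectedObj;
    uint32_t numTLVs;
    uint32_t subFrameNumber;
} MmwDemo_output_message_header;

/* Each TLV item: type + length, followed by the payload bytes. */
typedef struct MmwDemo_output_message_tl_t_
{
    uint32_t type;
    uint32_t length;
} MmwDemo_output_message_tl;

/* Helper matching the padding rule: total packet length is rounded
 * up to the next multiple of 32 bytes. */
static inline uint32_t MmwDemo_padTo32(uint32_t len)
{
    return (len + 31u) & ~31u;
}
```

A host-side parser would first scan for the magic word, read the header, then iterate numTLVs times over (type, length, payload) records.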
The following subsections describe the structure of each TLV.
Type: (MMWDEMO_OUTPUT_MSG_DETECTED_POINTS)
Length: (Number of detected objects) x (size of DPIF_PointCloudCartesian_t)
Value: Array of detected objects. The information of each detected object is as per the structure DPIF_PointCloudCartesian_t. When the number of detected objects is zero, this TLV item is not sent. The maximum number of objects that can be detected in a sub-frame/frame is DPC_OBJDET_MAX_NUM_OBJECTS.
The orientation of x,y and z axes relative to the sensor is as per the following figure. (Note: The antenna arrangement in the figure is shown for standard EVM (see gAntDef_default) as an example but the figure is applicable for any antenna arrangement.)
Coordinate Geometry
The whole detected objects TLV structure is illustrated in figure below.
Detected objects TLV
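The payload element type DPIF_PointCloudCartesian_t referenced above is, in the 3.x SDK, a simple four-float record; this sketch should be checked against dpif_pointcloud.h:

```c
/* Sketch of one detected point, per DPIF_PointCloudCartesian_t
 * (see dpif_pointcloud.h in the SDK). Units: meters and m/s. */
typedef struct DPIF_PointCloudCartesian_t_
{
    float x;        /* lateral position, meters */
    float y;        /* longitudinal (boresight) position, meters */
    float z;        /* height, meters */
    float velocity; /* radial velocity, m/s */
} DPIF_PointCloudCartesian;
```

So the TLV length is simply 16 bytes times the number of detected objects.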
Type: (MMWDEMO_OUTPUT_MSG_RANGE_PROFILE)
Length: (Range FFT size) x (size of uint16_t)
Value: Array of profile points at 0th Doppler (stationary objects). The points represent the sum of log2 magnitudes of received antennas expressed in Q9 format.
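Since the points are log2 magnitudes in Q9 (9 fractional bits), a host-side conversion to decibels scales by 20*log10(2) ≈ 6.0206 dB per log2 unit. A minimal sketch (rangeProfileQ9ToDb is a hypothetical helper name, not a demo function):

```c
#include <stdint.h>

/* Convert one Q9 range-profile point (sum of log2 magnitudes over the
 * Rx antennas) to dB. Q9 means the log2 value is q9 / 512.0, and one
 * log2 unit corresponds to 20*log10(2) ~= 6.0206 dB. */
static inline float rangeProfileQ9ToDb(uint16_t q9)
{
    return ((float)q9 / 512.0f) * 6.0206f;
}
```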
Noise floor profile
Type: (MMWDEMO_OUTPUT_MSG_NOISE_PROFILE)
Length: (Range FFT size) x (size of uint16_t)
Value: This is the same format as range profile but the profile is at the maximum Doppler bin (maximum speed objects). In general for stationary scene, there would be no objects or clutter at maximum speed so the range profile at such speed represents the receiver noise floor.
Type: (MMWDEMO_OUTPUT_MSG_AZIMUT_STATIC_HEAT_MAP)
Length: (Range FFT size) x (Number of "azimuth" virtual antennas) x (size of cmplx16ImRe_t_)
Value: Array DPU_AoAProcHWA_HW_Resources::azimuthStaticHeatMap. The antenna data are complex symbols, with imaginary first and real second in the following order:
Imag(ant 0, range 0), Real(ant 0, range 0),...,Imag(ant N-1, range 0),Real(ant N-1, range 0)
...
Imag(ant 0, range R-1), Real(ant 0, range R-1),...,Imag(ant N-1, range R-1),Real(ant N-1, range R-1)
Note that the number of virtual antennas is equal to the number of “azimuth” virtual antennas. The antenna symbols are arranged in the order as they occur at the input to azimuth FFT. Based on this data the static azimuth heat map could be constructed by the GUI running on the host.
Type: (MMWDEMO_OUTPUT_MSG_AZIMUT_ELEVATION_STATIC_HEAT_MAP)
Length: (Range FFT size) x (Number of all virtual antennas) x (size of cmplx16ImRe_t_)
Value: Array DPU_AoAProcHWA_HW_Resources::azimuthStaticHeatMap. The antenna data are complex symbols, with imaginary first and real second in the following order:
Imag(ant 0, range 0), Real(ant 0, range 0),...,Imag(ant N-1, range 0),Real(ant N-1, range 0)
...
Imag(ant 0, range R-1), Real(ant 0, range R-1),...,Imag(ant N-1, range R-1),Real(ant N-1, range R-1)
Note that the number of virtual antennas is equal to the total number of active virtual antennas. The antenna symbols are arranged in the order as they occur in the radar cube matrix. This TLV is sent by AOP version of MMW demo, that uses AOA2D DPU. Based on this data the static azimuth or elevation heat map could be constructed by the GUI running on the host.
Type: (MMWDEMO_OUTPUT_MSG_RANGE_DOPPLER_HEAT_MAP)
Length: (Range FFT size) x (Doppler FFT size) x (size of uint16_t)
Value: Detection matrix DPIF_DetMatrix::data. The order is :
X(range bin 0, Doppler bin 0),...,X(range bin 0, Doppler bin D-1),
...
X(range bin R-1, Doppler bin 0),...,X(range bin R-1, Doppler bin D-1)
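The range-major layout listed above can be captured in a small index helper (a sketch; detMatrixIndex is not a demo function):

```c
#include <stdint.h>

/* The detection matrix is laid out range-major, as listed above:
 * X(range r, Doppler d) sits at linear index r * numDopplerBins + d. */
static inline uint32_t detMatrixIndex(uint32_t r, uint32_t d,
                                      uint32_t numDopplerBins)
{
    return r * numDopplerBins + d;
}
```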
Type: (MMWDEMO_OUTPUT_MSG_STATS )
Length: (size of MmwDemo_output_message_stats_t)
Value: Timing information as per MmwDemo_output_message_stats_t. See timing diagram below related to the stats.
Processing timing
Note:
The MmwDemo_output_message_stats_t::interChirpProcessingMargin is not computed (it is always set to 0). This is because there is no CPU involvement in the 1D processing (only HWA and EDMA are involved), and it is not possible to know how much margin is there in chirp processing without CPU being notified at every chirp when processing begins (chirp event) and when the HWA-EDMA computation ends. The CPU is intentionally kept free during 1D processing because a real application may use this time for doing some post-processing algorithm execution.
While the MmwDemo_output_message_stats_t::interFrameProcessingTime reported will be of the current sub-frame/frame, the MmwDemo_output_message_stats_t::interFrameProcessingMargin and MmwDemo_output_message_stats_t::transmitOutputTime will be of the previous sub-frame (of the same MmwDemo_output_message_header_t::subFrameNumber as that of the current sub-frame) or of the previous frame.
The MmwDemo_output_message_stats_t::interFrameProcessingMargin excludes the UART transmission time (available as MmwDemo_output_message_stats_t::transmitOutputTime). This is done intentionally to inform the user of a genuine inter-frame processing margin without being influenced by a slow transport like UART, this transport time can be significantly longer for example when streaming out debug information like heat maps. Also, in a real product deployment, higher speed interfaces (e.g LVDS) are likely to be used instead of UART. User can calculate the margin that includes transport overhead (say to determine the max frame rate that a particular demo configuration will allow) using the stats because they also contain the UART transmission time.
The CLI command "guiMonitor" specifies which TLV elements will be sent out within the output packet. The arguments of the CLI command are stored in the structure MmwDemo_GuiMonSel_t.
Type: (MMWDEMO_OUTPUT_MSG_DETECTED_POINTS_SIDE_INFO)
Length: (Number of detected objects) x (size of DPIF_PointCloudSideInfo_t)
Value: Array of detected objects side information. The side information of each detected object is as per the structure DPIF_PointCloudSideInfo_t). When the number of detected objects is zero, this TLV item is not sent.
Type: (MMWDEMO_OUTPUT_MSG_TEMPERATURE_STATS)
Length: (size of MmwDemo_temperatureStats_t)
Value: Structure of detailed temperature report as obtained from the radar front end. MmwDemo_temperatureStats_t::tempReportValid is set to the return value of rlRfGetTemperatureReport. If MmwDemo_temperatureStats_t::tempReportValid is 0, the values in MmwDemo_temperatureStats_t::temperatureReport are valid; otherwise they should be ignored. This TLV is sent along with the Stats TLV described in Stats information.
Because of imperfections in antenna layouts on the board, RF delays in SOC, etc, there is need to calibrate the sensor to compensate for bias in the range estimation and receive channel gain and phase imperfections. The following figure illustrates the calibration procedure.
Calibration procedure ladder diagram
The calibration procedure includes the following steps:
Set a strong target like corner reflector at the distance of X meter (X less than 50 cm is not recommended) at boresight.
Set the following command in the configuration profile in .../profiles/profile_calibration.cfg to reflect the position X as follows, where D (in meters) is the extent of the window around X within which the peak will be searched. The purpose of the search window is to avoid overly constraining the test environment, since it may not be possible to clear it of all reflectors stronger than the one used for calibration. The window size is recommended to be at least the distance equivalent of a few range bins; one range bin for the calibration profile (profile_calibration.cfg) is about 5 cm. The first argument "1" enables the measurement. The stated configuration profile (.cfg) must be used, otherwise the calibration may not work as expected (among other things, this profile ensures all transmit and receive antennas are engaged, which is needed for calibration).
measureRangeBiasAndRxChanPhase 1 X D
Start the sensor with the configuration file.
In the configuration file, the measurement is enabled, so the DPC will be configured to perform the measurement and generate the measurement result (DPU_AoAProc_compRxChannelBiasCfg_t) in its result structure (DPC_ObjectDetection_ExecuteResult_t::compRxChanBiasMeasurement). The measurement results are written out on the CLI port (MmwDemo_measurementResultOutput) in the format below. For details of how the DPC performs the measurement, see the DPC documentation.
compRangeBiasAndRxChanPhase <rangeBias> <Re(0,0)> <Im(0,0)> <Re(0,1)> <Im(0,1)> ... <Re(0,R-1)> <Im(0,R-1)> <Re(1,0)> <Im(1,0)> ... <Re(T-1,R-1)> <Im(T-1,R-1)>
The command printed out on the CLI can now be copied and pasted into any configuration file for correction purposes. This configuration will be passed to the DPC so that compensation is applied during angle computation; the details can be seen in the DPC documentation. If compensation is not desired, the following command should be given instead (depending on the EVM and antenna arrangement). It sets the range bias to 0 and the phase coefficients to unity so that no correction is applied. Note that the two commands must always be given in any configuration file; typically the measure command is disabled when the correction command is the desired one.
For ISK EVM:
compRangeBiasAndRxChanPhase 0.0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0
For AOP EVM:
compRangeBiasAndRxChanPhase 0.0 1 0 -1 0 1 0 -1 0 1 0 -1 0 1 0 -1 0 1 0 -1 0 1 0 -1 0
The LVDS streaming feature enables the streaming of HW data (a combination of ADC/CP/CQ data) and/or user specific SW data through LVDS interface. The streaming is done mostly by the CBUFF and EDMA peripherals with minimal CPU intervention. The streaming is configured through the MmwDemo_LvdsStreamCfg_t CLI command which allows control of HSI header, enable/disable of HW and SW data and data format choice for the HW data. The choices for data formats for HW data are:
MMW_DEMO_LVDS_STREAM_CFG_DATAFMT_DISABLED
MMW_DEMO_LVDS_STREAM_CFG_DATAFMT_ADC
MMW_DEMO_LVDS_STREAM_CFG_DATAFMT_CP_ADC_CQ
In order to see the high-level data format details corresponding to the above data format configurations, refer to the corresponding slides in ti\drivers\cbuff\docs\CBUFF_Transfers.pptx
When HW data LVDS streaming is enabled, the ADC/CP/CQ data is streamed per chirp on every chirp event. When SW data streaming is enabled, it is streamed during inter-frame period after the list of detected objects for that frame is computed. The SW data streamed every frame/sub-frame is composed of the following in time:
HSI header (HSIHeader_t): refer to HSI module for details.
User data header: MmwDemo_LVDSUserDataHeader
User data payloads:
Point-cloud information as a list : DPIF_PointCloudCartesian_t x number of detected objects
Point-cloud side information as a list : DPIF_PointCloudSideInfo_t x number of detected objects
The format of the SW data streamed is shown in the following figure:
LVDS SW Data format
Note:
Only single-chirp formats are allowed, multi-chirp is not supported.
When number of objects detected in frame/sub-frame is 0, there is no transmission beyond the user data header.
For HW data, the inter-chirp duration should be sufficient to stream out the desired amount of data. For example, if the HW data-format is ADC and HSI header is enabled, then the total amount of data generated per chirp is:
(numAdcSamples * numRxChannels * 4 (size of complex sample) + 52 [sizeof(HSIDataCardHeader_t) + sizeof(HSISDKHeader_t)] ) rounded up to multiples of 256 [=sizeof(HSIHeader_t)] bytes.
The chirp time Tc in us = idle time + ramp end time in the profile configuration. For n-lane LVDS with each lane at a maximum of B Mbps,
maximum number of bytes that can be send per chirp = Tc * n * B / 8 which should be greater than the total amount of data generated per chirp i.e
Tc * n * B / 8 >= round-up(numAdcSamples * numRxChannels * 4 + 52, 256).
E.g. if n = 2, B = 600 Mbps, idle time = 7 us, ramp end time = 44 us, numAdcSamples = 512 and numRxChannels = 4, then 7650 >= 8448 is violated, so this configuration will not work. If the idle time is doubled in the above example, then we have 8700 >= 8448, so this configuration will work.
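The inequality above can be checked mechanically. The following helper is a sketch (the function names are mine, not from the demo) that reproduces the worked example:

```c
#include <stdint.h>

/* HW-session LVDS budget check derived above:
 *   Tc * n * B / 8  >=  round-up(numAdcSamples * numRxChannels * 4 + 52, 256)
 * Tc in microseconds (idle time + ramp end time), B in Mbps per lane.
 * 52 = sizeof(HSIDataCardHeader_t) + sizeof(HSISDKHeader_t); the total is
 * rounded up to multiples of sizeof(HSIHeader_t) = 256 bytes. */
static uint32_t lvdsHwBytesPerChirp(uint32_t numAdcSamples, uint32_t numRxChannels)
{
    uint32_t raw = numAdcSamples * numRxChannels * 4u + 52u;
    return (raw + 255u) & ~255u;  /* round up to a multiple of 256 */
}

static int lvdsHwConfigFits(uint32_t tcUs, uint32_t lanes, uint32_t mbpsPerLane,
                            uint32_t numAdcSamples, uint32_t numRxChannels)
{
    uint32_t budgetBytes = tcUs * lanes * mbpsPerLane / 8u; /* bytes per chirp */
    return budgetBytes >= lvdsHwBytesPerChirp(numAdcSamples, numRxChannels);
}
```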
For SW data, the number of bytes to transmit each sub-frame/frame is:
52 [sizeof(HSIDataCardHeader_t) + sizeof(HSISDKHeader_t)] + sizeof(MmwDemo_LVDSUserDataHeader_t) [=8] +
number of detected objects (Nd) * { sizeof(DPIF_PointCloudCartesian_t) [=16] + sizeof(DPIF_PointCloudSideInfo_t) [=4] } rounded up to multiples of 256 [=sizeof(HSIHeader_t)] bytes.
or X = round-up(60 + Nd * 20, 256). So the time to transmit this data will be
X * 8 / (n*B) us. The maximum number of objects (Ndmax) that can be detected is defined in the DPC (DPC_OBJDET_MAX_NUM_OBJECTS). So if Ndmax = 500, then time to transmit SW data is 68 us. Because we parallelize this transmission with the much slower UART transmission, and because UART transmission is also sending at least the same amount of information as the LVDS, the LVDS transmission time will not add any burdens on the processing budget beyond the overhead of reconfiguring and activating the CBUFF session (this overhead is likely bigger than the time to transmit).
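The SW-session arithmetic above can be sketched the same way (helper names are mine, not from the demo); with Nd = 500, n = 2 and B = 600 Mbps it reproduces the roughly 68 us figure:

```c
#include <stdint.h>

/* SW-session packet size per frame, as derived above:
 *   X = round-up(60 + Nd * 20, 256) bytes
 * where 60 = 52 (HSI/SDK headers) + 8 (MmwDemo_LVDSUserDataHeader_t), and
 * 20 bytes per object = sizeof(DPIF_PointCloudCartesian_t) [16]
 *                     + sizeof(DPIF_PointCloudSideInfo_t)  [4]. */
static uint32_t lvdsSwBytesPerFrame(uint32_t numDetectedObjs)
{
    uint32_t raw = 60u + numDetectedObjs * 20u;
    return (raw + 255u) & ~255u;
}

/* Transmit time in microseconds over n lanes at B Mbps per lane. */
static float lvdsSwTransmitTimeUs(uint32_t bytes, uint32_t lanes,
                                  uint32_t mbpsPerLane)
{
    return (float)bytes * 8.0f / (float)(lanes * mbpsPerLane);
}
```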
The total amount of data to be transmitted in a HW or SW packet must be greater than the minimum required by CBUFF, which is 64 bytes or 32 CBUFF Units (this is the definition CBUFF_MIN_TRANSFER_SIZE_CBUFF_UNITS in the CBUFF driver implementation). If this threshold condition is violated, the CBUFF driver will return an error during configuration and the demo will generate a fatal exception as a result. When HSI header is enabled, the total transfer size is ensured to be at least 256 bytes, which satisfies the minimum. If HSI header is disabled, for the HW session, this means that numAdcSamples * numRxChannels * 4 >= 64. Although mmwavelink allows minimum number of ADC samples to be 2, the demo is supported for numAdcSamples >= 64. So HSI header is not required to be enabled for HW only case. But if SW session is enabled, without the HSI header, the bytes in each packet will be 8 + Nd * 20. So for frames/sub-frames where Nd < 3, the demo will generate exception. Therefore HSI header must be enabled if SW is enabled, this is checked in the CLI command validation.
The LVDS implementation is mostly present in mmw_lvds_stream.h and mmw_lvds_stream.c with calls in mss_main.c. Additionally HSI clock initialization is done at first time sensor start using MmwDemo_mssSetHsiClk.
EDMA channel resources for CBUFF/LVDS are in the global resource file (mmw_res.h, see Hardware Resource Allocation) along with other EDMA resource allocation. The user data header and two user payloads are configured as three user buffers in the CBUFF driver. Hence SW allocation for EDMA provides for three sets of EDMA resources as seen in the SW part (swSessionEDMAChannelTable[.]) of MmwDemo_LVDSStream_EDMAInit. The maximum number of HW EDMA resources are needed for the data-format MMW_DEMO_LVDS_STREAM_CFG_DATAFMT_CP_ADC_CQ, which as seen in the corresponding slide in ti\drivers\cbuff\docs\CBUFF_Transfers.pptx is 12 channels (+ shadows) including the 1st special CBUFF EDMA event channel which the CBUFF IP generates to the EDMA; hence the HW part (hwSessionEDMAChannelTable[.]) of MmwDemo_LVDSStream_EDMAInit has 11 table entries.
Although the CBUFF driver is configured for two sessions (hw and sw), at any time only one can be active. So depending on the LVDS CLI configuration and whether advanced frame or not, there is logic to activate/deactivate HW and SW sessions as necessary.
The CBUFF session (HW/SW) configure-create and delete depends on whether or not re-configuration is required after the first time configuration.
For HW session, re-configuration is done during sub-frame switching to re-configure for the next sub-frame but when there is no advanced frame (number of sub-frames = 1), the HW configuration does not need to change so HW session does not need to be re-created.
For SW session, even though the user buffer start addresses and sizes of headers remains same, the number of detected objects which determines the sizes of some user buffers changes from one sub-frame/frame to another sub-frame/frame. Therefore SW session needs to be recreated every sub-frame/frame.
User may modify the application software to transmit different information than point-cloud in the SW data e.g radar cube data (output of range DPU). However the CBUFF also has a maximum link list entry size limit of 0x3FFF CBUFF units or 32766 bytes. This means it is the limit for each user buffer entry [there are maximum of 3 entries -1st used for user data header, 2nd for point-cloud and 3rd for point-cloud side information]. During session creation, if this limit is exceeded, the CBUFF will return an error (and demo will in turn generate an exception). A single physical buffer of say size 50000 bytes may be split across two user buffers by providing one user buffer with (address, size) = (start address, 25000) and 2nd user buffer with (address, size) = (start address + 25000, 25000), beyond this two (or three if user data header is also replaced) limit, the user will need to create and activate (and wait for completion) the SW session multiple times to accomplish the transmission.
The following figure shows a timing diagram for the LVDS streaming (the figure is not to scale as actual durations will vary based on configuration).
Re-implement the file mmw_cli.c as follows:
MmwDemo_CLIInit should just create a task with input taskPriority. Let's say the task is called "MmwDemo_sensorConfig_task".
All other functions are not needed
Implement the MmwDemo_sensorConfig_task as follows:
Fill gMmwMCB.cfg.openCfg
Fill gMmwMCB.cfg.ctrlCfg
Add profiles and chirps using MMWave_addProfile and MMWave_addChirp functions
Call MmwDemo_CfgUpdate for every offset in the "Offsets for storing CLI configuration" group (MMWDEMO_xxx_OFFSET in mmw.h)
Fill gMmwMCB.dataPathObj.objDetCommonCfg.preStartCommonCfg
Call MmwDemo_openSensor
Call MmwDemo_startSensor (One can use helper function MmwDemo_isAllCfgInPendingState to know if all dynamic config was provided)
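The steps above can be sketched roughly as follows. This is only an outline: gMmwMCB, MMWave_addProfile, MMWave_addChirp, MmwDemo_CfgUpdate, MmwDemo_openSensor and MmwDemo_startSensor all come from the demo/SDK, but the local cfg variables, their contents and all error handling are assumptions to be filled in from a working .cfg file and checked against the actual headers:

```c
/* Sketch only -- not compilable without the mmWave SDK headers.
 * profileCfg / chirpCfg / adcBufCfg contents are placeholders (assumptions). */
static void MmwDemo_sensorConfig_task(UArg arg0, UArg arg1)
{
    int32_t              errCode;
    MMWave_ProfileHandle profileHandle;

    /* 1. Fill gMmwMCB.cfg.openCfg (frequency limits, async-event config, ...) */
    /* 2. Fill gMmwMCB.cfg.ctrlCfg (frame / advanced-frame configuration) */

    /* 3. Add the profile and its chirps, mirroring the profileCfg /
     *    chirpCfg lines of the .cfg file */
    profileHandle = MMWave_addProfile(gMmwMCB.ctrlHandle, &profileCfg, &errCode);
    (void)MMWave_addChirp(profileHandle, &chirpCfg, &errCode);

    /* 4. Push each stored CLI-style config block into the demo at its
     *    MMWDEMO_xxx_OFFSET (see mmw.h), e.g. the ADC buffer config */
    MmwDemo_CfgUpdate((void*)&adcBufCfg, MMWDEMO_ADCBUFCFG_OFFSET,
                      sizeof(adcBufCfg), subFrameNum);

    /* 5. Fill gMmwMCB.dataPathObj.objDetCommonCfg.preStartCommonCfg */

    /* 6. Open, then start the sensor (MmwDemo_isAllCfgInPendingState can be
     *    used to verify that all dynamic config has been supplied) */
    MmwDemo_openSensor(true);
    MmwDemo_startSensor();
}
```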
The Object Detection DPC needs to configure the DPUs hardware resources (HWA, EDMA). Even though the hardware resources currently are only required to be allocated for this one and only DPC in the system, the resource partitioning is shown to be in the ownership of the demo. This is to illustrate the general case of resource allocation across more than one DPCs and/or demo's own processing that is post-DPC processing. This partitioning can be seen in the mmw_res.h file. This file is passed as a compiler command line define
--define=APP_RESOURCE_FILE="<ti/demo/xwr64xx/mmw/mmw_res.h>"
in mmw.mak when building the DPC sources as part of building the demo application and is referred in object detection DPC sources where needed as
#include APP_RESOURCE_FILE
Below I describe the overall code structure in terms of the main function, the three main tasks, and one secondary task.
Apart from MmwDemo_initTask, which has no configured priority (since it is the first task launched), the other three tasks each have a priority:

#define MMWDEMO_CLI_TASK_PRIORITY 3
#define MMWDEMO_DPC_OBJDET_DPM_TASK_PRIORITY 4
#define MMWDEMO_MMWAVE_CTRL_TASK_PRIORITY 5

The larger the number, the higher the priority (the opposite of NVIC numbering).
A characteristic of TI's code is that all handles and configuration parameters live in one global structure; in this project it is gMmwMCB.
The configuration flow follows the same pattern: parameters are first written into the structure, and the structure's fields are then passed on to the individual configuration functions.
For example, these two functions:
/* Platform specific configuration */
MmwDemo_platformInit(&gMmwMCB.cfg.platformCfg);
/* Initialize the Data Path: */
MmwDemo_dataPathInit(&gMmwMCB.dataPathObj);
The UART configuration parameters are written inside the first function, after which UART_open is called:
/* Setup the default UART Parameters */
UART_Params_init(&uartParams);
uartParams.clockFrequency = gMmwMCB.cfg.platformCfg.sysClockFrequency;
uartParams.baudRate = gMmwMCB.cfg.platformCfg.commandBaudRate;
uartParams.isPinMuxDone = 1;
/* Open the UART Instance */
gMmwMCB.commandUartHandle = UART_open(0, &uartParams);
The peripherals and modules involved:

- BSS (the mmWave radar front-end subsystem); its companion control module is MMWaveLink
- Mailbox: used for communication between the BSS and the MSS/DSS, mainly to carry radar data and parameters
- ADCBuf: the peripheral that holds the raw radar data (ADC samples)
- CBUFF (Common Buffer Controller): responsible for transferring data from multiple sources (such as ADCBUFF, chirp parameters (CP), chirp quality (CQ), or any other source) to the LVDS Tx or CSI2 module
- DPM: provides a well-defined IPC mechanism that allows the application and the Data Path Chain (DPC) to communicate; the application and the DPC may sit on the same subsystem or on different subsystems
- LVDS (Low-Voltage Differential Signaling) streaming: a signal-processing output solution inside the demo code, not an on-chip peripheral resource
- Data path processing
- Cycleprofiler: used to obtain timestamps and the like
- EDMA (Enhanced DMA): used for data movement
- HWA (Hardware Accelerator): used for hardware-accelerated computation
The SOC is initialized first. This is a standard boilerplate sequence, which also powers up the BSS:
SOC_waitBSSPowerUp(socHandle, &errCode)
Then the main task MmwDemo_initTask is created:
Task_create(MmwDemo_initTask, &taskParams, NULL);
MmwDemo_mmWaveCtrlTask is created during mmWave initialization, after the radar has been opened and synchronized; it must only be started after MMWave_init and MMWave_sync have been called.
This task loops forever calling MMWave_execute, which has to be invoked in the mmWave control context; in plain terms, this is boilerplate usage.
while (1)
{
/* Execute the mmWave control module: */
if (MMWave_execute (gMmwMCB.ctrlHandle, &errCode) < 0)
{
}
}
The first of the main tasks to be invoked is MmwDemo_initTask, which carries out the bulk of the initialization.
First:

UART_init();
GPIO_init();
Mailbox_init(MAILBOX_TYPE_MSS);

These three essential functions must be called up front. The first two need no explanation; Mailbox_init brings up the mailbox used for communication with the BSS, and if it is not called, MMWave_init will report an error.
Next come the pinmux configuration and the UART setup:

MmwDemo_platformInit(&gMmwMCB.cfg.platformCfg);

The two UARTs run at 115200 and 115200*8 baud; both are opened with UART_open, as instances 0 and 1.
The function MmwDemo_dataPathInit(&gMmwMCB.dataPathObj); initializes the HWA and EDMA (power-up only, not configuration, analogous to UART_init();). Power-up functions of this kind must be called before the corresponding peripheral configuration functions; ADCBuf_init and others follow the same pattern.
Then MMWave_init and MMWave_sync are called.
The demo runs entirely in MSS standalone mode.
Event callback: MmwDemo_eventCallbackFxn
The EDMA, HWA and ADCBuf peripherals are also opened here (their _open functions).
After this initialization, the MmwDemo_mmWaveCtrlTask task is created, and DPM, LVDS, DPC and so on are initialized.
DPC report callback: MmwDemo_DPC_ObjectDetection_reportFxn
DPC function callbacks: gDPC_ObjectDetectionCfg (covered further below)
Here the DPC creates a new task, MmwDemo_DPC_ObjectDetection_dpmTask.
The function MmwDemo_calibInit retrieves the calibration parameters; in addition, the user-level QSPI is initialized.
Finally the CLI task CLI_task is created.
The CLI turns the SoC into a shell-like device: it can be configured and operated via commands received over the serial port, and it can also emit log output. The CLI configuration designates UART 0 for the CLI function:

cliCfg.cliUartHandle = gMmwMCB.commandUartHandle;

The function MmwDemo_CLIInit defines a number of commands, for example:
cliCfg.tableEntry[cnt].cmd = "sensorStart";
cliCfg.tableEntry[cnt].helpString = "[doReconfig(optional, default:enabled)]";
cliCfg.tableEntry[cnt].cmdHandlerFxn = MmwDemo_CLISensorStart;
When the string "sensorStart" is received, the MmwDemo_CLISensorStart function is executed; helpString describes the parameters to be passed.
Another example:
cliCfg.tableEntry[cnt].cmd = "adcbufCfg";
cliCfg.tableEntry[cnt].helpString = " " ;
cliCfg.tableEntry[cnt].cmdHandlerFxn = MmwDemo_CLIADCBufCfg;
This configures the ADCBuf; the arguments are passed as indicated by helpString, separated by spaces.
Overview of the command handler functions:
static int32_t MmwDemo_CLICfarCfg (int32_t argc, char* argv[]);
static int32_t MmwDemo_CLIMultiObjBeamForming (int32_t argc, char* argv[]);
static int32_t MmwDemo_CLICalibDcRangeSig (int32_t argc, char* argv[]);
static int32_t MmwDemo_CLIClutterRemoval (int32_t argc, char* argv[]);
static int32_t MmwDemo_CLISensorStart (int32_t argc, char* argv[]);
static int32_t MmwDemo_CLISensorStop (int32_t argc, char* argv[]);
static int32_t MmwDemo_CLIGuiMonSel (int32_t argc, char* argv[]);
static int32_t MmwDemo_CLIADCBufCfg (int32_t argc, char* argv[]);
static int32_t MmwDemo_CLICompRangeBiasAndRxChanPhaseCfg (int32_t argc, char* argv[]);
static int32_t MmwDemo_CLIMeasureRangeBiasAndRxChanPhaseCfg (int32_t argc, char* argv[]);
static int32_t MmwDemo_CLICfarFovCfg (int32_t argc, char* argv[]);
static int32_t MmwDemo_CLIAoAFovCfg (int32_t argc, char* argv[]);
static int32_t MmwDemo_CLIExtendedMaxVelocity (int32_t argc, char* argv[]);
static int32_t MmwDemo_CLIChirpQualityRxSatMonCfg (int32_t argc, char* argv[]);
static int32_t MmwDemo_CLIChirpQualitySigImgMonCfg (int32_t argc, char* argv[]);
static int32_t MmwDemo_CLIAnalogMonitorCfg (int32_t argc, char* argv[]);
static int32_t MmwDemo_CLILvdsStreamCfg (int32_t argc, char* argv[]);
static int32_t MmwDemo_CLIConfigDataPort (int32_t argc, char* argv[]);
Starting the radar invokes the MmwDemo_CLISensorStart function.
Some of the configuration commands are already defined in the SDK itself:
/* CLI Command Functions */
static int32_t CLI_MMWaveVersion (int32_t argc, char* argv[]);
static int32_t CLI_MMWaveFlushCfg (int32_t argc, char* argv[]);
static int32_t CLI_MMWaveDataOutputMode (int32_t argc, char* argv[]);
static int32_t CLI_MMWaveChannelCfg (int32_t argc, char* argv[]);
static int32_t CLI_MMWaveADCCfg (int32_t argc, char* argv[]);
static int32_t CLI_MMWaveProfileCfg (int32_t argc, char* argv[]);
static int32_t CLI_MMWaveChirpCfg (int32_t argc, char* argv[]);
static int32_t CLI_MMWaveFrameCfg (int32_t argc, char* argv[]);
static int32_t CLI_MMWaveAdvFrameCfg (int32_t argc, char* argv[]);
static int32_t CLI_MMWaveSubFrameCfg (int32_t argc, char* argv[]);
static int32_t CLI_MMWaveLowPowerCfg (int32_t argc, char* argv[]);
static int32_t CLI_MMWaveContModeCfg (int32_t argc, char* argv[]);
static int32_t CLI_MMWaveBPMCfgAdvanced (int32_t argc, char* argv[]);
When the CLI receives the command, it calls MmwDemo_CLISensorStart, which maps to MmwDemo_openSensor. Here MMWave_open uses fixed parameters: default calibration, with the frequency range set to 60-64 GHz.
/* Setup the calibration frequency */
gMmwMCB.cfg.openCfg.freqLimitLow = 600U;
gMmwMCB.cfg.openCfg.freqLimitHigh = 640U;
/* start/stop async events */
gMmwMCB.cfg.openCfg.disableFrameStartAsyncEvent = false;
gMmwMCB.cfg.openCfg.disableFrameStopAsyncEvent = false;
/* No custom calibration: */
gMmwMCB.cfg.openCfg.useCustomCalibration = false;
gMmwMCB.cfg.openCfg.customCalibrationEnableMask = 0x0;
/* calibration monitoring base time unit
* setting it to one frame duration as the demo doesnt support any
* monitoring related functionality
*/
gMmwMCB.cfg.openCfg.calibMonTimeUnit = 1;
MmwDemo_configSensor is likewise called from within MmwDemo_CLISensorStart; its ctrlCfg parameter is the configuration sent over the serial port via the CLI.
In the MmwDemo_DPC_ObjectDetection_dpmTask task, DPM_ioctl is called, which triggers the corresponding callback, DPC_ObjectDetection_ioctl.
When the CLI receives the command, it calls MmwDemo_CLISensorStart, which maps to MmwDemo_startSensor. That function calls MmwDemo_dataPathStart (DPM_start) to start the DPM, and processing then begins. The callback here is DPC_ObjectDetection_start.
MmwDemo_CLISensorStart completes the remaining configuration, in particular reconciling the DPC parameters with the radar parameters. It calls MmwDemo_openSensor and MmwDemo_configSensor (which perform MMWave_open and MMWave_config), and finally starts the sensor with MmwDemo_startSensor.
While completing the parameters, the following functions are also called:
CLI_getMMWaveExtensionOpenConfig (&openCfg);
CLI_getMMWaveExtensionOpenConfig (&gMmwMCB.cfg.openCfg);
CLI_getMMWaveExtensionConfig (&ctrlCfg);
CLI_getMMWaveExtensionConfig (&gMmwMCB.cfg.ctrlCfg);
These functions copy a CLI-internal configuration structure into the structure you specify (a plain data copy, i.e. memcpy). Because they are defined in the SDK, the runtime configuration operates on CLI-internal structures rather than on user-defined ones; the frame, chirp and profile configurations all live there. The corresponding structure types are MMWave_OpenCfg and MMWave_CtrlCfg.
Pay particular attention to this when re-implementing the CLI.
Frame event: callback DPC_ObjectDetection_frameStart
Chirp event: callback DPC_ObjectDetection_execute (this is where the HWA is invoked)
This task is also an endless loop whose main job is executing the DPC and its computations. While it runs, it is synchronized with the CLI task (in fact it is being called continuously, in step with the radar capture; that part is simply controlled from the CLI).
After MmwDemo_CLISensorStart is invoked from the CLI, MmwDemo_startSensor is called, which executes MmwDemo_dataPathStart and thereby DPM_start; only after that does this task perform useful work.
The DPM is driven by DPM_execute which, like MMWave_execute, must be called in the DPC context; it assigns the various parameters and configures the ADCBuf, LVDS, EDMA and so on.
DPM_execute takes the argument resultBuffer to receive the result. Only when

if (resultBuffer.size[0] == sizeof(DPC_ObjectDetection_ExecuteResult))

holds is the data valid and processing allowed to continue. This if has no else branch, so while the data is invalid, DPM_execute simply keeps being called until valid data arrives.
If nothing has yet been configured via the CLI, this part effectively runs as an empty loop.
Timestamps are obtained from the Cycleprofiler peripheral via the Cycleprofiler_getTimeStamp function, which can also be used to read the current time, start time, etc. within the cycle:
currentInterFrameProcessingEndTime = Cycleprofiler_getTimeStamp();
startTime = Cycleprofiler_getTimeStamp();
Next the LVDS stream, DPC and related functions execute, mostly in the form of the callbacks in gDPC_ObjectDetectionCfg; the callback part invokes the HWA and other engines (see below).
MmwDemo_transmitProcessedOutput is then called to output the data over UART 1. Inside this function, the structure type DPIF_PointCloudCartesian is the point-cloud data; it belongs to the DPIF part of the DPC module (see below).
The task then waits for the current frame's processing to complete, re-reads the timestamp, and reconfigures the ADCBuf and related peripherals in preparation for the next frame.
DPM_ioctl is called to trigger the DPC-related callbacks.
On the next frame event the above repeats, until a stop command is received, at which point the related functions are shut down.
The data output function MmwDemo_transmitProcessedOutput takes three parameters:
UART_Handle uartHandle,
DPC_ObjectDetection_ExecuteResult *result,
MmwDemo_output_message_stats *timingInfo
The first is the UART handle; the second, result, has the type DPC_ObjectDetection_ExecuteResult; the third carries timing data (the processing time of each stage; see the Stats information section of the documentation).
/** @brief Transmits detection data over UART
*
* The following data is transmitted:
* 1. Header (size = 32bytes), including "Magic word", (size = 8 bytes)
* and including the number of TLV items
* TLV Items:
* 2. If detectedObjects flag is 1 or 2, DPIF_PointCloudCartesian structure containing
* X,Y,Z location and velocity for detected objects,
* size = sizeof(DPIF_PointCloudCartesian) * number of detected objects
* 3. If detectedObjects flag is 1, DPIF_PointCloudSideInfo structure containing SNR
* and noise for detected objects,
* size = sizeof(DPIF_PointCloudSideInfo) * number of detected objects
* 4. If logMagRange flag is set, rangeProfile,
* size = number of range bins * sizeof(uint16_t)
* 5. If noiseProfile flag is set, noiseProfile,
* size = number of range bins * sizeof(uint16_t)
* 6. If rangeAzimuthHeatMap flag is set, the zero Doppler column of the
* range cubed matrix, size = number of Rx Azimuth virtual antennas *
* number of chirps per frame * sizeof(uint32_t)
* 7. If rangeDopplerHeatMap flag is set, the log magnitude range-Doppler matrix,
* size = number of range bins * number of Doppler bins * sizeof(uint16_t)
* 8. If statsInfo flag is set, the stats information
* @param[in] uartHandle UART driver handle
* @param[in] result Pointer to result from object detection DPC processing
* @param[in] timingInfo Pointer to sub-frame object stats that contains timing info
*/
When the condition is satisfied, the valid result can be read:
typedef struct DPC_ObjectDetection_ExecuteResult_t
{
    /*! @brief Sub-frame index, this is in the range [0..numSubFrames - 1] */
    uint8_t subFrameIdx;

    /*! @brief Number of detected objects */
    uint32_t numObjOut;

    /*! @brief Detected objects output list of @ref numObjOut elements */
    DPIF_PointCloudCartesian *objOut;

    /*! @brief Radar Cube structure */
    DPIF_RadarCube radarCube;

    /*! @brief Detection Matrix structure */
    DPIF_DetMatrix detMatrix;

    /*! @brief Detected objects side information (snr + noise) output list,
     *         of @ref numObjOut elements */
    DPIF_PointCloudSideInfo *objOutSideInfo;

    /*! @brief Pointer to range-azimuth static heat map, this is a 2D FFT
     *         array in range direction (cmplx16ImRe_t x[numRangeBins][numVirtualAntAzim]),
     *         at doppler index 0 */
    cmplx16ImRe_t *azimuthStaticHeatMap;

    /*! @brief Number of elements of @ref azimuthStaticHeatMap, this will be
     *         @ref DPC_ObjectDetection_StaticCfg_t::numVirtualAntAzim *
     *         @ref DPC_ObjectDetection_StaticCfg_t::numRangeBins */
    uint32_t azimuthStaticHeatMapSize;

    /*! @brief Pointer to DPC stats structure */
    DPC_ObjectDetection_Stats *stats;

    /*! @brief Pointer to Range Bias and rx channel gain/phase compensation measurement
     *         result. Note the contents of this pointer are independent of sub-frame
     *         i.e all sub-frames will report the same result although it is
     *         expected that when measurement is enabled,
     *         the number of sub-frames will be 1 (i.e advanced frame
     *         feature will be disabled). If measurement
     *         was not enabled, then this pointer will be NULL. */
    DPU_AoAProc_compRxChannelBiasCfg *compRxChanBiasMeasurement;
} DPC_ObjectDetection_ExecuteResult;
Here numObjOut gives the number of detected targets and determines the sizes of the arrays that follow; subFrameIdx is the index of the current sub-frame; cmplx16ImRe_t holds the real and imaginary parts of the 2D FFT output.
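To illustrate how numObjOut bounds the output arrays, here is a minimal standalone sketch. The struct definitions are stripped-down stand-ins for the SDK types (the real definitions live in the mmWave SDK datapath interface headers), and the helper name is illustrative, not from the demo:

```c
#include <stdint.h>
#include <stdio.h>

/* Stripped-down stand-ins for the SDK types, for illustration only. */
typedef struct { float x, y, z, velocity; } DPIF_PointCloudCartesian;
typedef struct { int16_t snr, noise; } DPIF_PointCloudSideInfo;

typedef struct
{
    uint8_t                   subFrameIdx;
    uint32_t                  numObjOut;      /* bounds both arrays below */
    DPIF_PointCloudCartesian *objOut;
    DPIF_PointCloudSideInfo  *objOutSideInfo; /* may be NULL */
} ExecuteResultSketch;

/* Walk the detected-object list; only the first numObjOut entries of
 * objOut (and objOutSideInfo, when present) are valid. Returns the
 * number of objects printed. */
uint32_t print_detected_objects(const ExecuteResultSketch *result)
{
    for (uint32_t i = 0; i < result->numObjOut; i++)
    {
        printf("obj %u: x=%.2f y=%.2f z=%.2f v=%.2f m/s\n",
               (unsigned)i, result->objOut[i].x, result->objOut[i].y,
               result->objOut[i].z, result->objOut[i].velocity);
    }
    return result->numObjOut;
}
```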
The timing parameters are found in the DPC_ObjectDetection_Stats structure:
typedef struct DPC_ObjectDetection_Stats_t
{
    /*! @brief interChirpProcess margin in CPU cycles */
    uint32_t interChirpProcessingMargin;

    /*! @brief Counter which tracks the number of frame start interrupt */
    uint32_t frameStartIntCounter;

    /*! @brief Frame start CPU time stamp */
    uint32_t frameStartTimeStamp;

    /*! @brief Inter-frame start CPU time stamp */
    uint32_t interFrameStartTimeStamp;

    /*! @brief Inter-frame end CPU time stamp */
    uint32_t interFrameEndTimeStamp;

    /*! @brief Sub frame preparation cycles. Note when this is reported as part of
     *         the process result reporting, then it indicates the cycles that took
     *         place in the previous sub-frame/frame for preparing to switch to
     *         the sub-frame that is being reported because switching happens
     *         in the processing of DPC_OBJDET_IOCTL__DYNAMIC_EXECUTE_RESULT_EXPORTED,
     *         which is after the DPC process. */
    uint32_t subFramePreparationCycles;
} DPC_ObjectDetection_Stats;
From these fields the relevant time stamps can be obtained.
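Since the time stamps are CPU cycle counts, converting a pair of them into elapsed time only needs the CPU clock frequency. A small sketch, assuming a 200 MHz clock for the R4F core on IWR6843-class devices (adjust CPU_CLOCK_MHZ to your actual clock configuration):

```c
#include <stdint.h>

/* Assumed CPU clock in MHz; verify against your device configuration. */
#define CPU_CLOCK_MHZ 200U

/* Convert two CPU cycle-count time stamps (as reported in
 * DPC_ObjectDetection_Stats) into elapsed microseconds. */
uint32_t cycles_to_us(uint32_t startTimeStamp, uint32_t endTimeStamp)
{
    /* Unsigned subtraction also handles a single 32-bit counter wrap-around. */
    return (endTimeStamp - startTimeStamp) / CPU_CLOCK_MHZ;
}
```

For example, the inter-frame processing time would be cycles_to_us(interFrameStartTimeStamp, interFrameEndTimeStamp).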
Two structures deserve particular attention here: when the detectedObjects flag is 1 or 2, the DPIF_PointCloudCartesian data is valid, and when the flag is 1 the DPIF_PointCloudSideInfo data is valid as well. The former holds the point cloud; the latter holds SNR and noise.
typedef struct DPIF_PointCloudCartesian_t
{
    /*! @brief x - coordinate in meters. This axis is parallel to the sensor plane
     *         and makes the azimuth plane with y-axis. Positive x-direction is rightward
     *         in the azimuth plane when observed from the sensor towards the scene
     *         and negative is the opposite direction. */
    float x;

    /*! @brief y - coordinate in meters. This axis is perpendicular to the
     *         sensor plane with positive direction from the sensor towards the scene */
    float y;

    /*! @brief z - coordinate in meters. This axis is parallel to the sensor plane
     *         and makes the elevation plane with the y-axis. Positive z direction
     *         is above the sensor and negative below the sensor */
    float z;

    /*! @brief Doppler velocity estimate in m/s. Positive velocity means target
     *         is moving away from the sensor and negative velocity means target
     *         is moving towards the sensor. */
    float velocity;
} DPIF_PointCloudCartesian;
typedef struct DPIF_PointCloudSideInfo_t
{
    /*! @brief snr - CFAR cell to side noise ratio in dB expressed in 0.1 steps of dB */
    int16_t snr;

    /*! @brief noise - CFAR noise level of the side of the detected cell in dB expressed
     *         in 0.1 steps of dB */
    int16_t noise;
} DPIF_PointCloudSideInfo;
The point cloud data here thus contains the x, y, z coordinates and a velocity value.
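Note that snr and noise are fixed-point values in 0.1 dB steps, so they must be scaled by 0.1 to get dB. A one-line helper (the function name is illustrative):

```c
#include <stdint.h>

/* Convert a DPIF_PointCloudSideInfo field (snr or noise), expressed in
 * 0.1 dB steps, to a floating-point value in dB. */
float sideinfo_to_db(int16_t tenthsOfDb)
{
    return tenthsOfDb * 0.1f;
}
```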
In the objectdetection.c file:
DPM_ProcChainCfg gDPC_ObjectDetectionCfg =
{
    DPC_ObjectDetection_init,        /* Initialization Function:   */
    DPC_ObjectDetection_start,       /* Start Function:            */
    DPC_ObjectDetection_execute,     /* Execute Function:          */
    DPC_ObjectDetection_ioctl,       /* Configuration Function:    */
    DPC_ObjectDetection_stop,        /* Stop Function:             */
    DPC_ObjectDetection_deinit,      /* Deinitialization Function: */
    NULL,                            /* Inject Data Function:      */
    NULL,                            /* Chirp Available Function:  */
    DPC_ObjectDetection_frameStart   /* Frame Start Function:      */
};
These functions are mostly invoked as callbacks and run as part of the MmwDemo_DPC_ObjectDetection_dpmTask thread.
The stop function calls MmwDemo_MMWave_stop, which uses MMWave_stop to shut down the radar; in this demo the stop-sensor function is never actually invoked.
Sleep is implemented with the WFI (Wait For Interrupt) assembly instruction:
asm(" WFI ");
Any interrupt can wake the CPU from it; this is not called either.