Using TCL Scripts

  • 1 Overview
  • 2 BD Script
    • 2.1 Main Flow
    • 2.2 Top Level
    • 2.3 BRAM Hierarchy
  • 3 General Example

This article introduces how to use TCL scripts through several examples. TCL stands for Tool Command Language; with TCL scripts you can quickly create a project and complete its compilation and implementation.

1 Overview

Individual TCL commands can be copied into Vivado's Tcl Console and executed, and operations performed through the Vivado GUI also print their corresponding TCL commands in the Tcl Console. A TCL script can be selected and run from Tools→Run Tcl Script in Vivado; on Linux, it can also be run from a terminal with vivado -mode tcl -source <script>.tcl. Going a step further, you can create a shell script with the content below and run it directly in a terminal to create and compile the project.

# Set up the Vivado environment
source /opt/Xilinx/Vitis/2021.1/settings64.sh
# Remove the old project directory and log files
rm -rf proj *.jou *.log
# Run the TCL script
vivado -mode tcl -source ./run.tcl
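
Vivado can also be launched in batch mode, which exits automatically when the script finishes, and arguments can be passed to the script with -tclargs. A minimal sketch (the argument values here are hypothetical; the script is assumed to read them via argv):

# Run a script in batch mode and pass it two arguments
vivado -mode batch -source ./run.tcl -tclargs proj_a xcvu23p-vsva1365-2-e
# Inside the script the arguments are available through $argc / $argv, e.g.:
# set projName [lindex $argv 0]
# set projPart [lindex $argv 1]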

2 BD Script

The TCL script for a Block Design can be generated by Vivado itself: open a BD that was created in the GUI and click File→Export→Export Block Design in the Vivado menu to generate the TCL script that recreates that BD. This section walks through the important parts of the TCL script that creates the BD shown in the figure below.
[Figure 1: the Block Design created by the script]
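
The same export can also be done from the Tcl Console with the write_bd_tcl command; a minimal sketch, assuming the BD is currently open (the output file name is arbitrary):

# Export the currently open Block Design to a TCL script that can recreate it
write_bd_tcl ./xdma_soc_bd.tcl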

2.1 Main Flow

create_root_design ""
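
In a complete exported script this call sits at the very end; before it the script typically records the Vivado version it was exported from and creates the empty Block Design that create_root_design then populates. A minimal sketch of that surrounding code, not copied verbatim from the exported file:

# Record the Vivado version the script was exported with and warn on mismatch
set scripts_vivado_version 2021.1
set current_vivado_version [version -short]
if { [string first $scripts_vivado_version $current_vivado_version] == -1 } {
   puts "WARNING: this script was exported from Vivado $scripts_vivado_version"
}

# Create the empty Block Design and make it the current design
set design_name xdma_soc
create_bd_design $design_name
current_bd_design $design_name

# Build the design contents starting at the BD root
create_root_design ""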

2.2 Top Level
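
The create_root_design proc below builds the top level of the BD: it creates the external differential clock, PCIe and reset ports, instantiates the bram hierarchy, a util_ds_buf clock buffer and the XDMA IP, then connects them and assigns the address segments.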

proc create_root_design { parentCell } {

  variable script_folder
  variable design_name

  if { $parentCell eq "" } {
     set parentCell [get_bd_cells /]
  }

  # Get object for parentCell
  set parentObj [get_bd_cells $parentCell]
  if { $parentObj == "" } {
     catch {common::send_gid_msg -ssname BD::TCL -id 2090 -severity "ERROR" "Unable to find parent cell <$parentCell>!"}
     return
  }

  # Make sure parentObj is hier blk
  set parentType [get_property TYPE $parentObj]
  if { $parentType ne "hier" } {
     catch {common::send_gid_msg -ssname BD::TCL -id 2091 -severity "ERROR" "Parent <$parentObj> has TYPE = <$parentType>. Expected to be <hier>."}
     return
  }

  # Save current instance; Restore later
  set oldCurInst [current_bd_instance .]

  # Set parent object as current
  current_bd_instance $parentObj

  # Create interface ports
  set host_clk [ create_bd_intf_port -mode Slave -vlnv xilinx.com:interface:diff_clock_rtl:1.0 host_clk ]
  set pcie_mgt_host [ create_bd_intf_port -mode Master -vlnv xilinx.com:interface:pcie_7x_mgt_rtl:1.0 pcie_mgt_host ]

  # Create ports
  set host_rstn [ create_bd_port -dir I -type rst host_rstn ]

  # Create instance: bram
  create_hier_cell_bram [current_bd_instance .] bram

  # Create instance: buf_hostclk, and set properties
  set buf_hostclk [ create_bd_cell -type ip -vlnv xilinx.com:ip:util_ds_buf:2.1 buf_hostclk ]
  set_property -dict [ list \
   CONFIG.C_BUF_TYPE {IBUFDSGTE} \
 ] $buf_hostclk

  # Create instance: xdma_host, and set properties
  set xdma_host [ create_bd_cell -type ip -vlnv xilinx.com:ip:xdma:4.1 xdma_host ]
  set_property -dict [ list \
   CONFIG.PF0_DEVICE_ID_mqdma {9048} \
   CONFIG.PF0_SRIOV_VF_DEVICE_ID {A03F} \
   CONFIG.PF1_SRIOV_VF_DEVICE_ID {0000} \
   CONFIG.PF2_DEVICE_ID_mqdma {9048} \
   CONFIG.PF2_SRIOV_VF_DEVICE_ID {0000} \
   CONFIG.PF3_DEVICE_ID_mqdma {9048} \
   CONFIG.PF3_SRIOV_VF_DEVICE_ID {0000} \
   CONFIG.axi_bypass_64bit_en {false} \
   CONFIG.axi_data_width {512_bit} \
   CONFIG.axil_master_64bit_en {true} \
   CONFIG.axil_master_prefetchable {true} \
   CONFIG.axilite_master_en {true} \
   CONFIG.axilite_master_scale {Kilobytes} \
   CONFIG.axilite_master_size {64} \
   CONFIG.axist_bypass_en {false} \
   CONFIG.axisten_freq {250} \
   CONFIG.cfg_mgmt_if {false} \
   CONFIG.coreclk_freq {500} \
   CONFIG.disable_gt_loc {true} \
   CONFIG.en_gt_selection {true} \
   CONFIG.mode_selection {Basic} \
   CONFIG.pcie_blk_locn {PCIE4C_X0Y0} \
   CONFIG.pf0_device_id {9048} \
   CONFIG.pf0_msix_cap_pba_bir {BAR_3:2} \
   CONFIG.pf0_msix_cap_table_bir {BAR_3:2} \
   CONFIG.pl_link_cap_max_link_speed {16.0_GT/s} \
   CONFIG.pl_link_cap_max_link_width {X8} \
   CONFIG.plltype {QPLL0} \
   CONFIG.select_quad {GTY_Quad_226} \
   CONFIG.xdma_pcie_64bit_en {true} \
   CONFIG.xdma_pcie_prefetchable {true} \
 ] $xdma_host

  # Create interface connections
  connect_bd_intf_net -intf_net CLK_IN_D_0_1 [get_bd_intf_ports host_clk] [get_bd_intf_pins buf_hostclk/CLK_IN_D]
  connect_bd_intf_net -intf_net xdma_host_M_AXI [get_bd_intf_pins bram/S_AXI] [get_bd_intf_pins xdma_host/M_AXI]
  connect_bd_intf_net -intf_net xdma_host_M_AXI_LITE [get_bd_intf_pins bram/S_AXI1] [get_bd_intf_pins xdma_host/M_AXI_LITE]
  connect_bd_intf_net -intf_net xdma_host_pcie_mgt [get_bd_intf_ports pcie_mgt_host] [get_bd_intf_pins xdma_host/pcie_mgt]

  # Create port connections
  connect_bd_net -net buf_hostclk_IBUF_DS_ODIV2 [get_bd_pins buf_hostclk/IBUF_DS_ODIV2] [get_bd_pins xdma_host/sys_clk]
  connect_bd_net -net buf_hostclk_IBUF_OUT [get_bd_pins buf_hostclk/IBUF_OUT] [get_bd_pins xdma_host/sys_clk_gt]
  connect_bd_net -net sys_rst_n_0_1 [get_bd_ports host_rstn] [get_bd_pins xdma_host/sys_rst_n]
  connect_bd_net -net xdma_host_axi_aclk [get_bd_pins bram/s_axi_aclk] [get_bd_pins xdma_host/axi_aclk]
  connect_bd_net -net xdma_host_axi_aresetn [get_bd_pins bram/s_axi_aresetn] [get_bd_pins xdma_host/axi_aresetn]

  # Create address segments
  assign_bd_address -offset 0x00000000 -range 0x00200000 -target_address_space [get_bd_addr_spaces xdma_host/M_AXI] [get_bd_addr_segs bram/bram_ctrl_axi/S_AXI/Mem0] -force
  assign_bd_address -offset 0x00000000 -range 0x00010000 -target_address_space [get_bd_addr_spaces xdma_host/M_AXI_LITE] [get_bd_addr_segs bram/bram_ctrl_axil/S_AXI/Mem0] -force

  # Restore current instance
  current_bd_instance $oldCurInst

  save_bd_design
}

2.3 BRAM Hierarchy

In section 2.2, the call create_hier_cell_bram [current_bd_instance .] bram invokes the following proc to create the BRAM hierarchy.

proc create_hier_cell_bram { parentCell nameHier } {

  variable script_folder

  if { $parentCell eq "" || $nameHier eq "" } {
     catch {common::send_gid_msg -ssname BD::TCL -id 2092 -severity "ERROR" "create_hier_cell_bram() - Empty argument(s)!"}
     return
  }

  # Get object for parentCell
  set parentObj [get_bd_cells $parentCell]
  if { $parentObj == "" } {
     catch {common::send_gid_msg -ssname BD::TCL -id 2090 -severity "ERROR" "Unable to find parent cell <$parentCell>!"}
     return
  }

  # Make sure parentObj is hier blk
  set parentType [get_property TYPE $parentObj]
  if { $parentType ne "hier" } {
     catch {common::send_gid_msg -ssname BD::TCL -id 2091 -severity "ERROR" "Parent <$parentObj> has TYPE = <$parentType>. Expected to be <hier>."}
     return
  }

  # Save current instance; Restore later
  set oldCurInst [current_bd_instance .]

  # Set parent object as current
  current_bd_instance $parentObj

  # Create cell and set as current instance
  set hier_obj [create_bd_cell -type hier $nameHier]
  current_bd_instance $hier_obj

  # Create interface pins
  create_bd_intf_pin -mode Slave -vlnv xilinx.com:interface:aximm_rtl:1.0 S_AXI
  create_bd_intf_pin -mode Slave -vlnv xilinx.com:interface:aximm_rtl:1.0 S_AXI1

  # Create pins
  create_bd_pin -dir I -type clk s_axi_aclk
  create_bd_pin -dir I -type rst s_axi_aresetn

  # Create instance: bram_axi, and set properties
  set bram_axi [ create_bd_cell -type ip -vlnv xilinx.com:ip:blk_mem_gen:8.4 bram_axi ]
  set_property -dict [ list \
   CONFIG.EN_SAFETY_CKT {false} \
 ] $bram_axi

  # Create instance: bram_axil, and set properties
  set bram_axil [ create_bd_cell -type ip -vlnv xilinx.com:ip:blk_mem_gen:8.4 bram_axil ]
  set_property -dict [ list \
   CONFIG.EN_SAFETY_CKT {false} \
 ] $bram_axil

  # Create instance: bram_ctrl_axi, and set properties
  set bram_ctrl_axi [ create_bd_cell -type ip -vlnv xilinx.com:ip:axi_bram_ctrl:4.1 bram_ctrl_axi ]
  set_property -dict [ list \
   CONFIG.DATA_WIDTH {512} \
   CONFIG.ECC_TYPE {0} \
   CONFIG.SINGLE_PORT_BRAM {1} \
 ] $bram_ctrl_axi

  # Create instance: bram_ctrl_axil, and set properties
  set bram_ctrl_axil [ create_bd_cell -type ip -vlnv xilinx.com:ip:axi_bram_ctrl:4.1 bram_ctrl_axil ]
  set_property -dict [ list \
   CONFIG.ECC_TYPE {0} \
   CONFIG.PROTOCOL {AXI4LITE} \
   CONFIG.SINGLE_PORT_BRAM {1} \
 ] $bram_ctrl_axil

  # Create interface connections
  connect_bd_intf_net -intf_net axi_bram_ctrl_0_BRAM_PORTA [get_bd_intf_pins bram_axi/BRAM_PORTA] [get_bd_intf_pins bram_ctrl_axi/BRAM_PORTA]
  connect_bd_intf_net -intf_net bram_ctrl_axil_BRAM_PORTA [get_bd_intf_pins bram_axil/BRAM_PORTA] [get_bd_intf_pins bram_ctrl_axil/BRAM_PORTA]
  connect_bd_intf_net -intf_net xdma_host_M_AXI [get_bd_intf_pins S_AXI] [get_bd_intf_pins bram_ctrl_axi/S_AXI]
  connect_bd_intf_net -intf_net xdma_host_M_AXI_LITE [get_bd_intf_pins S_AXI1] [get_bd_intf_pins bram_ctrl_axil/S_AXI]

  # Create port connections
  connect_bd_net -net xdma_host_axi_aclk [get_bd_pins s_axi_aclk] [get_bd_pins bram_ctrl_axi/s_axi_aclk] [get_bd_pins bram_ctrl_axil/s_axi_aclk]
  connect_bd_net -net xdma_host_axi_aresetn [get_bd_pins s_axi_aresetn] [get_bd_pins bram_ctrl_axi/s_axi_aresetn] [get_bd_pins bram_ctrl_axil/s_axi_aresetn]

  # Restore current instance
  current_bd_instance $oldCurInst
}

3 General Example

The script below quickly creates a project, adds source files and constraint files, runs synthesis and implementation, and finally generates the Bitstream.

## Project setup ##
# Get the directory of the current TCL script
set launchDir [file dirname [file normalize [info script]]]
# Set the source and constraint file directories
set srcDir ${launchDir}/src
set xdcDir ${launchDir}/xdc
# Set the project name and directory
set projName "proj"
set projDir "./$projName"
# Set the device part to xcvu23p-vsva1365-2-e
set projPart "xcvu23p-vsva1365-2-e"
# Create the project
create_project ${projName} ${projDir} -part $projPart

## Adding files ##
# To add several source files in one call, enclose them in {} separated by spaces
# Add RTL source files; the files are not copied into the project directory
add_files -norecurse ${srcDir}/rtl/cmac_1.v
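# For example, several files can be added in one call (hypothetical file names,
# kept commented out so the script above still runs unchanged):
# add_files -norecurse {./src/rtl/module_a.v ./src/rtl/module_b.v}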
# Add an already generated IP file; the IP file is copied into the project directory
import_files -norecurse ${srcDir}/ip/clkwiz/clk_wiz_freerun.xci
# Use another TCL script to create a Block Design
# Set the Block Design name to xdma_soc
set bd_name xdma_soc
# Run the TCL script that creates the Block Design
source ${srcDir}/tcl/xdma_soc.tcl
# Generate the wrapper for the Block Design
make_wrapper -files [get_files ${projDir}/${projName}.srcs/sources_1/bd/${bd_name}/${bd_name}.bd] -top
# Add the generated wrapper to the project
add_files -norecurse ${projDir}/${projName}.gen/sources_1/bd/${bd_name}/hdl/${bd_name}_wrapper.v
# Add constraint files; the files are not copied into the project directory
add_files -fileset constrs_1 -norecurse ${xdcDir}/cmac.xdc

## Project compilation ##
# Run synthesis; -jobs sets the number of CPU cores to use
launch_runs synth_1 -jobs 20
wait_on_run synth_1
# Run implementation
launch_runs impl_1 -jobs 20
wait_on_run impl_1
# Generate the Bitstream
launch_runs impl_1 -to_step write_bitstream -jobs 20
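# Optionally (not in the original script) wait for bitstream generation to finish
# before running any later commands that depend on the generated bitstream:
# wait_on_run impl_1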

## Other commands ##
# Remove a source file from the project
remove_files -norecurse ${srcDir}/rtl/cmac_1.v
# Delete a source file from disk
file delete -force ${srcDir}/usr/cmac_1.v
# Reset synthesis; required before running synthesis again after a completed run
reset_run synth_1
# Reset implementation
reset_run impl_1
# Generate the debug probes file
open_run impl_1
write_debug_probes <filename>.ltx
