iSCSI card hack and development


1         Introduction

 

This document is the feasibility study report for implementing our own software on the Bridgeworks iSCSI-to-SAS card. It is the primary deliverable of project Dublin, carried out by the BDT Zhuhai software team to understand the possibility of, and the effort required for, building BDT's own iSCSI-to-SAS solution for the FlexStor product family.

 


2         Background and Approach

 

The Bridgeworks iSCSI-to-SAS adapter is used in FlexStor to provide iSCSI connectivity to systems on the network. It runs a customized version of Linux with a Web based management interface. The core iSCSI module is Bridgeworks' own product and is said to contain patented technology that gives better I/O performance.

 

We believe it is possible for BDT to replace the Bridgeworks software on the adapter with an open source based solution that provides features similar to those of the original Bridgeworks software. The stability and performance differences between the open source solution and the Bridgeworks solution are unknown and need to be studied.

 

The adapter is based on an Intel XScale-V3 revision 9 processor, with 256MB DRAM, 16/32MB flash, a dual-GigE interface and a SAS interface. By populating a serial connector on the adapter, we can access the adapter's u-boot console, which allows us to upload firmware to the adapter to replace Bridgeworks' software without any hardware modifications.

 

Therefore, the approach for this project is:

  1. Understand the iSCSI protocol.
  2. Look for open source iSCSI implementations and identify the one with the best stability and performance.
  3. Build a Linux operating system package, with the selected iSCSI package integrated, that can be loaded and run on the Bridgeworks adapter.
  4. Build a basic management Web UI for demo purposes.
  5. Run stability and performance tests with both the Bridgeworks software and our own software to understand the differences.
  6. Based on the work above, evaluate whether a full implementation is feasible and the effort it would require.

 

3         iSCSI Overview

 

The iSCSI protocol is a mapping of the SCSI remote procedure invocation model (see [SAM2]) over the TCP protocol.  SCSI commands are carried by iSCSI requests and SCSI responses and status are carried by iSCSI responses.  iSCSI also uses the request response mechanism for iSCSI protocol mechanisms.  – RFC3720

 

 

 

The set of initiators, targets, and the connecting network makes up an iSCSI storage area network (SAN).

 

3.1      Initiator

 

The initiator is the iSCSI client in the iSCSI architecture. Each host has one or more initiators. The initiator has the following responsibilities:

 

l        The SCSI layer generates a CDB (Command Descriptor Block) and passes it to the iSCSI protocol layer.

l        The iSCSI layer creates an iSCSI PDU containing the CDB and sends it to the target over the IP network. iSCSI does not interpret the content of the CDB; it simply encapsulates the whole CDB in the PDU.

 

3.2      Target

 

The target is the iSCSI server in the iSCSI architecture. In general, there are multiple storage devices at the target endpoint. The target can be a server enclosure or an iSCSI HBA device. The target has the following responsibilities:

 

l        The iSCSI layer receives PDUs sent by the initiator and passes them to the SCSI layer.

l        The SCSI layer interprets the meaning of the CDB and sends a response when necessary.

 

3.3      Security Considerations

 

The entities involved in iSCSI security are the initiator, target, and the IP communication end points.  iSCSI scenarios in which multiple initiators or targets share a single communication end point are expected. To accommodate such scenarios, iSCSI uses two separate security mechanisms: In-band authentication between the initiator and the target at the iSCSI connection level (carried out by exchange of iSCSI Login PDUs), and packet protection (integrity, authentication, and confidentiality) by IPsec at the IP level.  The two security mechanisms complement each other.  The in-band authentication provides end-to-end trust (at login time) between the iSCSI initiator and the target while IPsec provides a secure channel between the IP communication end points. – RFC3720

 

3.3.1      Authentication Methods

 

The authentication methods that can be used (i.e., that may appear in the list-of-values) are either those listed in the following table or vendor-unique methods:

 

Name    Description
KRB5    Kerberos V5, defined in [RFC1510]
SPKM1   Simple Public-Key GSS-API Mechanism, defined in [RFC2025]
SPKM2   Simple Public-Key GSS-API Mechanism, defined in [RFC2025]
SRP     Secure Remote Password, defined in [RFC2945]
CHAP    Challenge Handshake Authentication Protocol, defined in [RFC1994]
None    No authentication

 

Most iSCSI products implement the None, SRP and CHAP authentication methods. Therefore, this feasibility study focuses on these three methods.

 

CHAP

 

CHAP is an authentication protocol used to authenticate iSCSI initiators at target login and at random intervals during a connection. Note that CHAP provides authentication, not encryption; using CHAP does not alter the data being transmitted.

 

CHAP sends a 128-bit MD5 hash derived from the CHAP secret instead of the plain-text password. After the session has been established, the target periodically challenges the initiator to make sure an impostor has not inserted itself in place of the initiator. If a periodic challenge fails, the session is dropped. A configuration sketch follows.
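
For reference, here is a minimal sketch of enabling CHAP in an IET-style target configuration; SCST's iSCSI daemon (iscsi-scstd) inherits this format. The target name and secrets are made-up examples; secrets should be 12 to 16 characters for compatibility with the Microsoft initiator.

devhost# cat >> /etc/iscsi-scstd.conf <<'EOF'
Target iqn.2009-01.com.bdt:flexstor.tape0
    IncomingUser chapuser secret123456
    OutgoingUser chapmutual secret654321
EOF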

 

3.3.2      Data Encryption

 

If the iSCSI traffic flows over a public network and users are concerned about the data being intercepted, they can choose to encrypt it with the IPsec protocol. The iSCSI HBA does not need to implement IPsec itself, since the user can route the traffic through an IPsec gateway.

 

 

3.4      Session and Connection

 

iSCSI allows one or more TCP connections between the initiator and target. The TCP connections carry control messages, SCSI commands, parameters, and data within iSCSI Protocol Data Units. The group of TCP connections that link an initiator with a target forms a session. TCP connections can be added to and removed from a session.
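
As an illustration of session establishment, a Linux host running the open-iscsi initiator can discover a target and log in as follows (the portal address and target name below are assumptions):

initiator# iscsiadm -m discovery -t sendtargets -p 192.168.123.88:3260
initiator# iscsiadm -m node -T iqn.2009-01.com.bdt:flexstor.tape0 -p 192.168.123.88:3260 --login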

 

 

 

4         Bridgeworks iSCSI-to-SAS Bridge Analysis

 

The iSCSI card plugs into a slot in the tape library, which supplies power to the card and the attached SAS tape drives. The tape library's medium changer should also be exported by the iSCSI card as a changer device.

 

We probed the Bridgeworks iSCSI card by two methods: the system log and the serial console. The system log reveals a lot of information about the hardware and software. With the serial console, we can debug our own kernel and filesystem instead of booting the normal Bridgeworks kernel.

 

4.1      Hardware Components

 

By reviewing the system log and the circuit layout, we summarize the hardware in the following table:

 

Name                 Description                                                         Comment
CPU                  Intel 81348 I/O processor (XScale-V3 based, revision 9, ARMv5TE)    Dual core, 1200 MHz
Memory               SDRAM                                                               256 MB
FLASH                NOR flash                                                           16/32 MB
Network interfaces   Intel(R) PRO/1000 Network Driver - version 7.3.20-k2-NAPI           Dual GigE ports
SAS controller       Intel ISC813x SAS controller (on chip)
LED                  4 LEDs                                                              GPIO control
Serial               8250/16550

 

There are many resources available for the Intel 81348 I/O processor on the Intel site, including:

 

l        Intel81348_Design Guide

l        Intel81348_Design Manual

l        Intel_ I_O Processors Linux Installation Application Note

l        External Storage Design Customer Reference Board Manual for 8134x I_O Processors

 

The Linux Installation Application Note is very important for building the kernel for the iSCSI card.

 

The Intel 81348 I/O processor has dual-core Intel XScale technology and a dual-interface architecture. One of the cores is dedicated to running the SAS transport firmware and is called the “Transport Core”. The other core is named the “Application Core” and is where customer BSPs and OSes execute. The flash regions reserved for the Transport Core are as follows:

 

l        Common boot (128K)

l        Transport Core (1920K)

 

The customer-written (Application Core) boot loader can only be burned at flash offset 0x00200000 or above. Currently, Bridgeworks uses u-boot as the boot loader.

 

Transport Core (Core0) Owns

UART0

I2C_Bus0

SRAM

TPMI (except for interface to SLI on TPMI0)

SAS PHYs

 

Application Core (Core1) Owns

UART1

I2C_Bus1 and I2C_Bus2

SDRAM

 

This information is useful for writing and understanding the BSP code for the iSCSI card.

 

4.2      Software Components

 

We cannot log in to the Bridgeworks system as the super user, so detailed information about the running system is unavailable. Based on our observations, the software components are summarized as follows:

 

Name          Description
bwmanager     Contains modules: Events, serial, ui, template, corelink, scsi, Socket, Configuration, base
clmanager     Registers application `bwmanager'
bwcore        Protocol-Neutral (dell_v0_00_15 Sep 5 2008 17:40:55); patented GB2380642, patents pending
boxconfig     Box configuration
httpd         Web GUI
iscsit        iSCSI target daemon
isns          iSNS management
network       Network management for multipath
ntp           Network Time Protocol
persistent    Persistent LUN module
responder     Unknown
sasi          SAS initiator
sscl          Unknown
system        Unknown

 

In order to debug our kernel on the system, we can upload our kernel image into SDRAM and mount our root file system over NFS. Note that both the entry point and load address of the kernel image should be 0x00008000. You can upload the kernel image to SDRAM address 0x01008000 or another location, but make sure it does not overwrite the running boot loader in SDRAM. An upload sketch follows.
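
For example, assuming the card's u-boot provides the standard download commands, the image can be received over the serial line (loadb, kermit protocol) or fetched from a TFTP server (tftpboot, which requires the ipaddr and serverip environment variables to be set):

u-boot# loadb 0x01008000
u-boot# tftpboot 0x01008000 uImage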

 

The kernel image built on the host system must also be wrapped for u-boot with the following command:

 

mkimage -A arm -O linux -T kernel -C none -a 0x00008000 -e 0x00008000 -n "Linux-iop13xx" -d linux-2.6.x/arch/arm/boot/zImage uImage

 

 

4.3      Remarkable Features

 

The iSCSI card is a stand-alone device in the FlexStor library. It transports SCSI commands over the IP network instead of a conventional SCSI bus, which makes it possible to build an IP SAN with the iSCSI card. The remarkable features of the Bridgeworks iSCSI card are as follows:

 

l        Internet Storage Name Service (iSNS)

 

iSNS allows an iSCSI initiator to discover all the iSCSI cards (targets) that have been registered with the iSNS server.

 

l        Multipath

 

We have not done much research on the multipath feature, but the following description from its online help serves as an introduction:

 

Multipath allows an iSCSI Initiator to connect to an iSCSI Target a number of times in the same iSCSI Session. The connections can be on the same physical network cable or on different network cables. The connections can either be on the same port or different ports (depending on configuration). Multipath allows the iSCSI Initiator to either send data down each connection, therefore increasing the available bandwidth, or to use the additional connections as a fail-over. – Online help

5         Open Source iSCSI Target Implementations

 

There are many iSCSI target implementations on the Linux platform. When choosing a suitable iSCSI package for the iSCSI card, the following issues should be considered:

 

1)        Is the source free for commercial usage? We should make sure that it is legal to use the source.

2)        Is the product stable enough? Can we use it for a stable product? If not, what is the effort to make it stable enough?

3)        Does the source package support tape drive and medium changer?

4)        Is the performance of the candidate package good enough?

 

5.1      TGT

 

The TGT package is currently the choice of the Fedora distribution, and it is supported by the mainline Linux kernel, so no kernel source patching is required. TGT's key design philosophy is to implement a significant portion of the target in user space. The kernel option “SCSI target support” must be enabled. A runtime configuration sketch follows.
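
For illustration, a TGT target is created at runtime through tgtadm; a minimal sketch (the IQN and backing device are made-up examples):

devhost# tgtd
devhost# tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2009-01.com.bdt:tgt1
devhost# tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/sdb
devhost# tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL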

 

5.2      IET

 

The IET package divides the system into two layers: kernel space and user space. The kernel module can be built separately, and since kernel 2.6.22 no kernel source patching is required. Because a lot of the work is implemented in the kernel module, context switches between kernel and user space are reduced, resulting in high performance.

 

5.3      SCST

 

SCST started as a fork of IET, but its framework has been greatly improved by the authors, who added a device abstraction layer on top of the IET design. The kernel layout is as follows:

 

l        SCST core

l        Device handlers, including tape, changer, cdrom, RAID, disk and vdisk, etc.

l        iSCSI protocol

 

The SCST project is planning to merge its code into the mainline Linux kernel.

 

5.4      Target Comparison

 

Point            TGT                      IET                SCST
License          GPL                      GPL                GPL
Tape drive       Supported (test failed)  No                 Supported
Medium changer   No                       No                 Supported
Stability        Production               Beta               Production
Version          0.9.2                    0.4.17             1.0.1
Performance      Write: 77 MB/s           Write: 108 MB/s    Write: 110 MB/s
                 Read: 69 MB/s            Read: 62 MB/s      Read: 56 MB/s

 

All performance tests are based on a gigabit smart network switch. The test environment is as follows:

 

Target host

CPU:            Intel(R) Pentium(R) 4 CPU 3.40GHz (dual core)

Memory:      1 GB

Storage:       RAID0 (4 disks, over 200 MB/s)

OS:              Fedora 10

Initiator host

CPU:            Intel(R) Pentium(R) 4 CPU 3.0GHz (dual core)

Memory:      1 GB

OS:              CentOS 4.4

 

Therefore, we think the SCST package is the best choice as it supports all devices we use.

 

 

6         SCST Features and Limitations

 

SCST is a mature and stable iSCSI implementation. The SCST project consists of a set of subprojects: the SCST core itself with a set of device handlers, plus target drivers and user-space utilities.

 

6.1      iSCSI Protocol Compliance

 

SCST supports the following protocols and functionality:

 

l        iSCSI core

 

l        Necessary functionality (task attributes, etc.) as specified in SAM-2, SPC-2, SAM-3, SPC-3 and other SCSI standards

 

l        Fibre Channel

 

l        RDMA (SRP)

SCSI RDMA Protocol

 

l        Parallel (Wide) SCSI

 

6.2      Runtime Characteristics and Limitations

 

SCST has four outstanding characteristics in contrast to other open source iSCSI implementations:

 

1)        The device handler (plugin) architecture provides extra flexibility by allowing various I/O modes in backstorage handling. For example, pass-through device handlers allow using real SCSI hardware, and the vdisk device handler allows using files as virtual disks.

 

2)        Advanced per-initiator device visibility management (LUN masking), which allows different initiators to see different sets of devices with different access permissions. For instance, initiator A could see devices X and Y exported from target T as read-writable, while initiator B could see device Y from the same target T as read-only and device Z as read-writable.

 

3)        Pass-through mode with one-to-many relationships, i.e. multiple initiators can connect to the exported pass-through devices, for virtually all SCSI device types: disks (type 0), tapes (type 1), processors (type 3), CDROMs (type 5), MO disks (type 7), medium changers (type 8) and RAID controllers (type 0xC).

 

4)        Local access to emulated backstorage devices. The SCSI devices emulated by the target host can be accessed locally; for instance, you can mount an ISO image as a SCSI CDROM device directly on the target host.

 

However, some features are not yet implemented, for understandable reasons:

 

l        SAS

Under development

 

l        FCoE (Fibre Channel over Ethernet)

Under development

 

l        iSER

The iSER target driver has a long-known (since February 2008) data corruption problem whose cause has not yet been localized and might be in the STGT core. See

http://lists.berlios.de/pipermail/stgt-devel/2008-February/001367.html and 

http://lists.wpkg.org/pipermail/stgt/2009-February/002630.html

 

l        Multiple connections per session

Currently, SCST only supports one connection per session.

 

l        ALUA (Asymmetric logical unit access)

 

Asymmetric logical unit access occurs when the access characteristics of one port may differ from those of another port. SCSI target devices with target ports implemented in separate physical units may need to designate differing levels of access for the target ports associated with each logical unit. While commands and task management functions (see SAM-3) may be routed to a logical unit through any target port, the performance may not be optimal, and the allowable command set may be less complete than when the same commands and task management functions are routed through a different target port. When a failure on the path to one target port is detected, the SCSI target device may perform automatic internal reconfiguration to make a logical unit accessible from a different set of target ports or may be instructed by the application client to make a logical unit accessible from a different set of target ports. –< SPC-3r23 5.8.2>

 

 

 

7         iSCSI-to-SAS Bridge Prototyping with SCST

 

We decided to build the prototype with uClinux. Because we cannot modify the flash content of the iSCSI card, we debug the kernel and applications from RAM or over the network. The debug procedure is as follows:

 

1)        Upload the kernel image through the serial console to the specified address in SDRAM.

2)        Modify the kernel parameters at the u-boot command line so that the kernel mounts its root filesystem over NFS.

3)        Boot the kernel image at the address specified in step 1).

 

In this feasibility study, we planned and finished the following tasks for the iSCSI prototype based on the Bridgeworks hardware.

 

7.1      Development Platform based on uClinux

 

We use uClinux to build the overall software for the iSCSI prototype. To meet our requirements, we modified some of the uClinux source as follows:

 

1)        Add an IOP13xx directory to the vendors. We copied the original source from smdk2410, then changed the IOP13xx Makefile and the arch file (vendors/config/arm/xscale.arch) in the build script.

 

2)        Replace the kernel source of uClinux with the Intel IOP patched kernel 2.6.24. We also need to modify the Makefile of the Linux kernel, since it has a problem generating module dependencies.

 

3)        Integrate OpenSSL into the lib directory of uClinux. A customized makefile is needed for cross compiling:

 

#
# Makefile for openssl
#

CONF_OPTS = linux-elf-arm --prefix=/usr/local/arm-linux
CLEAN_FILES = config.log config.status

LIB_NAME = libcrypto.so.0.9.7

.PHONY: romfs clean distclean

all: Makefile
	$(MAKE) -f Makefile
	$(MAKE) -f Makefile build-shared

Makefile:
	sh ./Configure $(CONF_OPTS)
	patch -p1 < ./opensslForARM.Makefile.patch

romfs:
	. $(ROOTDIR)/uClibc/.config; \
	if [ "$$HAVE_SHARED" = "y" -a -f $(LIB_NAME) ]; then \
		$(ROMFSINST) $(LIB_NAME) /lib/ ; \
		$(ROMFSINST) -s $(LIB_NAME) /lib/libcrypto.so; \
		$(ROMFSINST) -s $(LIB_NAME) /lib/libcrypto.so.0; \
	fi

distclean:
clean:
	-$(MAKE) -f Makefile $@
	-rm -f a.out
	-rm -f Makefile
	-rm -rf $(CLEAN_FILES)

 

 

We also need to write a set of similar makefiles for module-init-tools, mtx, scst, etc.

 

4)        Integrate SCST into uClinux. This involves modifying the following configuration files in uClinux (a sketch follows the list):

 

l        user/Kconfig

l        user/Makefile
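
A sketch of the hooks, assuming the usual uClinux-dist conventions (the config symbol name is our own choice, not from the tree):

devhost# cat >> user/Kconfig <<'EOF'
config USER_SCST
        bool "scst"
        help
          Build the SCST target and its user-space tools.
EOF
devhost# echo 'dir_$(CONFIG_USER_SCST) += scst' >> user/Makefile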

 

5)        Configure the kernel and applications for iSCSI in uClinux.

 

6)        Fix some problems found while building the software image:

 

l        There is no fork function in uClibc, since uClibc was not configured with __ARCH_USE_MMU__.

 

l        Comment out the getenv function in bash (user/bash/lib/sh/getenv.c).

 

7.2      Linux Kernel

 

The iSCSI card uses the Intel 81348 I/O processor based on the IOP13xx architecture, and Intel has released a Linux kernel for it on SourceForge. Therefore, building the Linux kernel for the hardware is not a problem, but we still need to customize the kernel options to meet our requirements.

 

1)        Download the IOP 2.6 kernel patches from the SourceForge site, then apply them to the standard Linux kernel with the ‘quilt push -a’ command. The commands are as follows:

 

 

# tar xjvf linux-{kernel_version}.tar.bz2

# cd linux-{kernel_version}

# tar xzvf ../patches-2.6.22.1-iop1.tar.gz

# cp patches/series.xscale patches/series

# quilt push -a

 

 

 

2)        Create BSP source code in the Linux kernel for the iSCSI hardware. The source file is “bdt8138mc.c” under linux-{version}/arch/arm/mach-iop13xx. We need to register a new machine ID for the iSCSI card, since it is a new machine type for the Linux kernel; the machine type ID is 1611.
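
Assuming the card's u-boot honors the standard ARM machid environment variable, the new machine type must also be passed to the kernel at boot (1611 decimal is 0x64B):

u-boot# setenv machid 64b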

 

3)        The following kernel options are mandatory:

 

Vendor/Product Selection:

 

l        Vendor – Intel

l        Intel Products – IOP13xx

 

Kernel/Library/Defaults Selection:

 

l        Libc Version (uClibc)

 

General setup:

 

l        Prompt for development and/or incomplete code/drivers

l        Support for paging of anonymous memory (swap)

l        System V IPC

l        POSIX Message Queues

l        Initial RAM filesystem and RAM disk (initramfs/initrd) support

l        Choose SLAB allocator (SLAB)

 

Enable the block layer:

l        Support for Large Block Devices

l        Block layer SG support V4

 

System Type:

 

l        ARM system type (IOP13xx-based)

l        Support Thumb user binaries

 

Floating point emulation:

 

l        NWFPE math emulation

 

Userspace binary formats:

 

l        Kernel support for ELF binaries

l        Kernel support for a.out and ECOFF binaries

 

Networking options:

 

l        Packet socket

l        Packet socket: mmapped IO

l        Unix domain sockets

l        TCP/IP networking

n        IP: multicasting

n        IP: kernel level autoconfiguration

u      IP: DHCP support

u      IP: BOOTP support

 

Device Drivers:

 

l        Memory Technology Device (MTD) support

n        MTD partitioning support

u      Command line partition table parsing

u      ARM Firmware Suite partition parsing

n        Direct char device access to MTD devices

n        Common interface to block layer for MTD ‘translation layers’

n        Caching block device access to MTD devices

n        RAM/ROM/Flash chip drivers

u      Detect flash chips by Common Flash Interface (CFI) probe

u      Flash chip driver advanced configuration options

u      Support for Intel/Sharp flash chips

 

n        Mapping drivers for chip access

u      CFI Flash device in physical memory map

 

l        Block devices

n        RAM disk support

u      Default number of RAM disks (1)

u      Default RAM disk size (kbytes) (8192)

u      Default RAM disk block size (bytes) (1024)

 

l        Misc devices

l        SCSI device support

n        SCSI device support

n        SCSI target support

n        Legacy /proc/scsi/ support

n        SCSI generic support

n        Probe all LUNs on each SCSI device

n        SCSI Transports

u      Parallel SCSI (SPI) Transport Attributes

u      iSCSI Transport Attributes

u      SAS Transport Attributes

u      SAS Domain Transport Attributes

n        SCSI low-level drivers

u      Intel ISC813xx SAS/SATA support

l        Support the local bus controller

 

l        Network device support

n        Ethernet (1000 Mbit)

u      Intel® PRO/1000 Gigabit Ethernet support

u      Use Rx polling

l        I2C support

 

File systems:

 

l        Second extended fs support

l        Network File Systems

n        NFS file system support

u      Provide NFSv3 client support

n        Root file system on NFS

 

Cryptographic API:

 

l        MD5 digest algorithm

l        CRC32c CRC algorithm

l        Hardware crypto devices

 

7.2.1      Kernel Debug Procedure

 

When booting the Linux kernel from an NFS root FS or a RAM disk, we need to perform the following steps:

 

1)        Enter the u-boot console and modify the bootargs according to your filesystem:

 

a.       NFS root FS

 

u-boot# setenv bootargs console=ttyS0,115200 iop13xx_init_atu=e noinitrd init=/linuxrc root=/dev/nfs nfsroot=192.168.123.157:/opt/nfsrootfs/ ip=dhcp mtdparts=physmap-flash.0:128K(c),1920K(t),512K(b),2M(p),1M(k),1M(mp),512K(m),6656K(fs),1536K(f),640K(l),384K(o)

 

b.      RAM disk

 

u-boot# setenv bootargs console=ttyS0,115200 iop13xx_init_atu=e root=/dev/ram mtdparts=physmap-flash.0:128K(c),1920K(t),512K(b),2M(p),1M(k),1M(mp),512K(m),6656K(fs),1536K(f),640K(l),384K(o)

 

2)        Upload the uImage made by the u-boot utility ‘mkimage’ to address 0x01008000. The uImage can be made with the command:

 

devhost# mkimage -A arm -O linux -T kernel -C none -a 0x00008000 -e 0x00008000 -n "Linux-iop13xx" -d linux-2.6.x/arch/arm/boot/zImage uImage

 

3)        If booting from a RAM disk, we also need to upload the ramdisk image (made by the u-boot utility ‘mkimage’) to address 0x02000000.

 

4)        Boot the system with command ‘bootm 0x01008000’ or ‘bootm 0x01008000 0x02000000’ (RAM disk).

 

 

7.3      Root File System

 

As described in the previous section, we can debug the Linux kernel by uploading the image with u-boot. After the kernel boots, it mounts the root filesystem we choose via NFS or a RAM disk, so we need a customized root filesystem for the iSCSI system. With uClinux, you can freely choose what to include in the iSCSI system.

 

The following uClinux configuration options are mandatory:

 

Library Configuration:

 

l        Build OPENSSL

l        Build libnet

 

Core Applications:

 

l        Enable console shell

l        Shell Program (bash)

 

Filesystem Applications:

 

l        mount

 

Network Applications:

l        thttpd

l        ifconfig

 

Miscellaneous Applications:

l        mtx

l        tinytcl

n        build static libtcl and extensions

 

BusyBox:

Choose basic settings

 

tinylogin:

l        adduser

l        addgroup

l        deluser

l        delgroup

l        login

l        su

l        sulogin

l        passwd

l        getty

 

ISCSI Target Function:

l        scst

 

module-init-tools:

l        module-init-tools

 

After the build finishes, the ‘romfs’ directory is generated in the uClinux root directory. A target bring-up sketch follows.
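
Once the image boots, the target stack can be brought up from a startup script. The following is a hedged sketch using SCST 1.0.x module names and its /proc interface; the SCSI address 1:0:0:0 of the tape drive is an assumption, and the iSCSI target itself is defined in /etc/iscsi-scstd.conf:

devhost# modprobe scst
devhost# modprobe scst_tape
devhost# modprobe scst_changer
devhost# modprobe iscsi-scst
devhost# iscsi-scstd
devhost# echo "assign 1:0:0:0 dev_tape" > /proc/scsi_tgt/scsi_tgt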

 

7.4      Web UI

 

The Web UI provides the management interface for the user. For the iSCSI card it should be simple and easy to use, so we built the Web UI with thttpd plus CGI, where the CGI scripts can be written in tinytcl.

 

In this feasibility study, we only set up a prototype of the iSCSI system to verify this structure. We finished the following core configuration pages for the iSCSI card:

 

1)        Main page (screenshot omitted)

2)        iSCSI Target configuration page (screenshot omitted)

3)        iSCSI Sessions page (screenshot omitted)

4)        Device Management page (screenshot omitted)

 

 

thttpd is a simple, high-performance web server for embedded systems; its page-loading latency is on the order of microseconds. Because thttpd is written in C, it occupies less memory than other lightweight HTTP servers such as mini-perl.
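
For example, a single line is enough to serve the UI with CGI enabled (the document root and CGI pattern below are assumptions):

devhost# thttpd -d /home/httpd -c '/cgi-bin/*' -p 80 -u root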

 

tinytcl is a rommable, minimal Tcl for embedded applications. The available version in uClinux is 6.8.

 

7.5      Test Environment

 

All tests are based on a gigabit smart network switch. The test environment is as follows:

 

Target host

CPU:            Intel 81348 I/O processor (XScale-V3 based, revision 9, ARMv5TE)

Memory:      256 MB

Storage:       SAS tape drive

OS:              Linux kernel IOP 2.6.24 + uClinux + SCST

Initiator host

CPU:            Intel(R) Pentium(R) 4 CPU 3.4GHz

Memory:      1 GB

OS:              CentOS 4.4 and Windows XP

Software:      Microsoft iSCSI Initiator

Test utility:   HP StorageWorks Library and Tape Tools 4.7

 

7.6      Test Report

 

This is the overall feature test we ran against the BDT iSCSI card:

 

Test: LTO Drive Assessment test
Description: This test checks the functionality of your LTO drive and ensures that it is working correctly. Load a data cartridge (preferably a new cartridge) and set the 'Allow overwrite' test option to true before starting the test.
Result: Test passed with warnings

Test: LTO Media Assessment test
Description: This test checks the functionality of your LTO data cartridge and ensures that it is working correctly. Load the required cartridge before starting the test, and ensure that the drive's operation has first been checked using the LTO Drive Assessment test.
Result: Test failed (Bridgeworks passes). The failure occurs because SCST does not support SCSI operation code 0xAB.

Test: Device Analysis
Description: This test performs analysis on your product's internal data logs. If it finds problems, it advises on how best to solve them.
Result: Device Analysis passed

Test: Data compression test
Description: This test checks the hardware compression capability of your tape drive and requires a writable tape. A file designed to compress at approximately 2:1 is written to the tape; anything less than 2:1 means that hardware compression is not working properly.
Result: Data Compression Test - PASSED

Test: Connectivity Test
Description: This test writes data to the internal buffers in your product to verify that the physical connection between your host and the storage device is operational. No data is written to any media present in the drive.
Result: Connectivity Test - PASSED

Test: LTO Stuck Tape Test
Description: This test tries to determine whether the cartridge in the drive is physically stuck or can be recovered/unloaded. It should only be run if you believe the currently loaded cartridge is stuck.
Result: Test failed (Bridgeworks also fails in this case)

Test: LTO Cooling Check
Description: This test monitors the internal temperature of your Ultrium drive and measures any temperature change after writing data for approximately 30 minutes. If the cooling is found to be inadequate, the test outputs recommendations.
Result: Test passed

Test: LTO4 Encryption Test
Description: This test checks the encryption operation of your LTO drive and ensures that it is working correctly. Load a writable data cartridge and set the 'Allow overwrite' test option to true before starting the test.
Result: Test aborted; see the analysis results in the test operations log (Bridgeworks also fails in this case)

Test: Read/Write Test
Description: This test verifies the ability to read and write data to and from the removable media in your storage device. NOTE: this test is destructive and will overwrite data on the media present in your product when the test is started.
Result: Read/Write Test - PASSED

Test: Device Self Test
Description: This test performs a power-on self test to check the internal electronics of your product.
Result: Passed

 

Refer to the full test document for the details.

 

7.7      Remaining Issues Analysis

7.7.1      Medium Changer

 

Currently, the Linux kernel cannot recognize the medium changer behind the SAS controller; it only discovers the SAS tape drive, plus a duplicate of the same tape drive. After some research, we found that the INQUIRY command returns a ‘MultiPort’ indicator, which may explain the duplicate. We still need to study the SAM-3 and SPC-3 specifications in more depth. An inspection example follows.
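
For reference, the INQUIRY data can be inspected from a Linux host with sg3_utils, assuming the device is exposed as a SCSI generic node; the standard INQUIRY response includes the MultiP bit mentioned above:

devhost# sg_inq /dev/sg1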

 

7.7.2      Boot Loader

 

We can choose u-boot or RedBoot as the boot loader. I suggest we use RedBoot, since Intel provides a RedBoot solution for the IOP architecture.

 

7.7.3      Flash Partition

 

We can pass the flash partitions in the boot parameters, as described earlier. The final flash partitioning can be decided during the product design phase.

 

7.7.4      Console Management

 

We have not implemented a prototype of the console management found in the Bridgeworks product, as it is only straightforward programming work.

 

7.7.5      Others

 

7.7.5.1  Multipath

 

Multipath has been implemented by Bridgeworks. I think a simple solution could be implemented with network bonding, which we are familiar with; a sketch follows.
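
A minimal sketch of bonding the two gigabit ports with the Linux bonding driver (the mode, addresses, and interface names are assumptions):

devhost# modprobe bonding mode=balance-rr miimon=100
devhost# ifconfig bond0 192.168.123.88 netmask 255.255.255.0 up
devhost# ifenslave bond0 eth0 eth1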

 

7.7.5.2  WWN

 

The WWN is the unique ID of the device. We have not researched this further yet, but there should be an interface through which we can read the device's WWN.

 

7.7.5.3  Persistent SCSI Address

 

In some scenarios, users would like the device's SCSI address to stay persistent even when extra devices are plugged in. This mechanism can be implemented with udev or other modules; it needs deeper research.

 

 

8         Bridgeworks Solution and Open Source Solution Differences

 

8.1      Stability

 

As the SCST authors declare, the SCST implementation is mature and stable:

 

At least 3 companies already have designed and are selling products

with SCST-based engines (for instance, http://www.mellanox.com/content/pages.php?pg=products_dyn&product_family=35&menu_section=34; unfortunately, I don't have rights to name others) and, at least, 5

companies are developing their SCST-based products in areas of HA

solutions, VTLs, FC/iSCSI bridges, thin provisioning appliances, regular

SANs, etc. How many are there commercially sold STGT-based products?

Also you can compare yourself how widely and production-wise used SCST

and STGT by simply looking at their mail lists archives:

http://sourceforge.net/mailarchive/forum.php?forum_name=scst-devel for

SCST and http://lists.wpkg.org/pipermail/stgt. In the SCST's archive you

can find many questions from people actively using it in production or

developing own SCST-based products. In the STGT's archive you can find

that people implementing basic things, like management interface, and

fixing basic problems, like crash when there are more than 40 LUNs per

target (SCST successfully tested with >4000 LUNs). (STGT developers, no

personal offense here, please.) - http://lkml.org/lkml/2008/12/10/245

 

Because of the project schedule, we had no more time to verify the stability of SCST ourselves.

 

8.2      Compatibility

 

We have provided the tape drive test of iSCSI card in section 7.6.

 

We have not completely tested the compatibility of SCST. The following comparison table is derived from the SCST documentation.

 

 

Feature                            Bridgeworks    SCST
RDMA (SRP)                         No             Yes
Multiple connections per session   Yes            No
Fibre Channel                      No             Yes
Direct SAS                         Unsure         Under development
LUN masking                        No             Yes

 

Note:

LUN masking is advanced per-initiator device visibility management, which allows different initiators to see different sets of devices with different access permissions.

 

8.3      Performance

 

After some performance tests, we found that the Bridgeworks iSCSI card can reach 80 MB/s, while our iSCSI card only reaches 60 MB/s. We think the following factors affect the performance of the BDT iSCSI card:

 

1)        We still need to improve the network tuning. So far we cannot enable full jumbo frames for the Ethernet interfaces on the iSCSI card; we can only set the MTU up to 9260 instead of over 9500.

 

2)        We noticed that there is a SCSI management layer in the Bridgeworks software, and they hold a patent on it. We suspect they implemented the SCSI management entirely by themselves.

 

3)        In the performance test, CPU usage only reaches about 50%, sometimes up to 55%. Therefore, the bottleneck does not appear to be the CPU.

 

4)        Various kernel options affect system performance. We have tried many kernel options but still have not found an effective way to improve performance, so this is probably not the cause of the poor performance.

 

5)        If we configure SCST to use the performance device handler (pass-through to the SCST tape driver), the throughput reaches 86 MB/s. However, we expected over 100 MB/s, since no actual tape drive I/O is involved; if the tape I/O itself could reach 80 MB/s, the end-to-end speed should also reach 80 MB/s. We therefore suspect that the tape drive I/O speed cannot reach 80 MB/s in the BDT iSCSI card.

 

The performance test chart of the BDT iSCSI card (omitted).

The performance test chart of the Bridgeworks iSCSI card (omitted).

 

 

 

 

9         Full Implementation Analysis

 

9.1      Main Components and Enhancements

 

A full implementation of the iSCSI card will have the following main components and features:

 

l        Bootloader

l        Firmware upgrade mechanism

l        iSCSI Target (using SCST)

l        Linux Kernel and Root File system (based on uClinux)

l        Web Management (thttpd + tinytcl)

 

To bring the prototype into production we still need to complete the following major enhancements:

l        Develop a bootloader based on RedBoot

l        Improve SCST iSCSI target performance

l        Improve Linux kernel SCSI layer to work with media changer

l        Complete the Web management interface

l        Intensive test to prove compatibility, stability and performance

 

9.2      Workload and Risks

We estimate it will take 2 development engineers and 1 tester 4 months to fully productize the solution.

 

The main risks are:

l        It may take a very big effort to bring the performance of the card to the same level as the Bridgeworks card.

l        The stability of the SCST package may turn out to be unsatisfactory.

 

10   Conclusion

 

We think it is technically feasible for BDT to fully implement an iSCSI-to-SAS bridge product similar to the Bridgeworks product.

11   Appendix. Related Documents

 

Document Name            Description                                                          Link
RFC3720-iSCSI-protocol   iSCSI RFC
SCST's home page         SCST site                                                            http://scst.sourceforge.net/
SCST Document            SCST's submission of its source code to the Linux kernel mainline    http://lkml.org/lkml/2008/12/10/245
CPU technical documents  Intel 81348 (IOP348) I/O processor technical documents               http://www.intel.com/design/iio/docs/iop348.htm

 

 

 

 


 

<End of Document>
