Architecture: B/S (browser/server)
Server side: a pre-packaged Linux distribution
Client: a web browser
A hardware RAID card is still better than OpenFiler's own software RAID.
If you want to put a hardware RAID card behind an iSCSI target, consider building it on a standard Linux distribution such as CentOS, where driver support is much more complete.
Now let's configure the shared device. First, turn the unformatted free space into an extended partition; it must be an extended partition:
There are two cache write policies: write-through and write-back.
Write-through: on every update, the data is written to both the cache and the backing store. The advantage is simplicity; the drawback is that writes are slow, because each one must also reach the backing store.
Write-back: on an update, the data is written only to the cache; modified cache data is written to the backing store only when it is evicted from the cache. The advantage is fast writes, since the backing store is skipped; the drawback is that if the system loses power before updated data has been flushed to the backing store, that data cannot be recovered.
For writes, the target location may not be present in the cache (a write miss). There are two ways to handle this:
Write allocate: load the target location into the cache first, then proceed as a write hit. A write miss is thus handled much like a read miss.
No-write allocate: do not load the target location into the cache; write the data directly to the backing store. Under this policy, only reads populate the cache.
Either write policy can be combined with either miss policy, but write-back is usually paired with write allocate and write-through with no-write allocate: when the same cache location is written repeatedly, write allocate lets write-back absorb those writes in the cache and improves performance, whereas it does not help write-through (every write still goes to the backing store anyway).
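The practical gap between the two write policies can be glimpsed on Linux by comparing a buffered write (the page cache acts like a write-back cache) with an O_DIRECT write that bypasses the cache. A minimal sketch; the file names and the 64 MB size are arbitrary:

```shell
# Buffered write: data lands in the page cache first (write-back style);
# conv=fdatasync makes dd flush to disk before reporting, for a fair number.
dd if=/dev/zero of=./wb_test.img bs=1M count=64 conv=fdatasync

# Direct write: O_DIRECT bypasses the page cache, closer in spirit to
# write-through. Some filesystems (e.g. tmpfs) do not support O_DIRECT.
dd if=/dev/zero of=./wt_test.img bs=1M count=64 oflag=direct || true
```

On most hardware the buffered run reports a noticeably higher rate until the data set outgrows RAM.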
The web UI cannot create the PV and VG; you have to do it by hand on the command line. At least that is how it is for the leftover partition when the system has only one disk; with two disks it should work fine.
fdisk /dev/sda
n          # new partition (accept the defaults at the other prompts)
+100G      # last sector: +100G makes the partition 100 GB
t          # change the partition type
8e         # 8e = Linux LVM
w          # write the partition table and exit
partprobe /dev/sda           # re-read the partition table without a reboot
pvcreate /dev/sda5           # initialize the new partition as an LVM physical volume
vgcreate vg_data /dev/sda5   # create volume group vg_data on it
After that, the VG shows up in the web UI.
But when setting the network ACL on this standalone machine, the ESXi hosts could only connect to the LUN with the ACL set to 0.0.0.0; when I specified a 172.x or 192.x subnet, only hosts outside this cluster could connect to the LUN.
At the bottom, it's seeing 15TB of unused space, and the starting cylinder begins right after the end cylinder for /dev/sdb1. When I click on the Create button to make a new partition that uses all of the available space, the browser waits a moment and then simply reloads this page. Nothing is created.
I am not sure if this is the case, but there were some issues with GPT in the earlier 2.99.x releases.
Perhaps do the following:
conary update conary
conary updateall
conary update openfiler
Followed by a reboot, and see if you have any better success...
I think I have the same problem. I am running 2.99.2, and have run the update commands above and rebooted. When I look at the partition, it is 70% free: https://onedrive.liv...hint=photo,.jpg. When I then click on "create", nothing happens... https://onedrive.liv...hint=photo,.jpg (sorry, can't seem to put line breaks in here).
There is a huge bug in Openfiler. I just installed a brand new system. I try to create a new physical volume on either /dev/sda (100 GB) or /dev/sdb (7.8 TB) and I click "Create". Nothing happens. The "Reset" button is not clickable; it shows "In Use". Where do I go from here?
Now I want to create a Physical Volume, but after I click the Create button the page reloads and nothing happens.
I've read somewhere that on one logical drive I can only make 3 partitions. Is that true, or am I doing something wrong?
Otherwise I need to make two logical drives, but then some disk space will be wasted...
The problem
My OpenFiler is version 2.99 and could not add a physical volume: clicking Create would not save anything. After a frantic search I finally found the workaround: nudge the start value up a bit and the end value down a bit, and Create works and the volume saves.
After installing, I was surprised to find that with the start and end disk positions the system auto-detects, there is no way to add a new physical volume; clicking Create does nothing at all.
I went looking on the official OpenFiler forums and found other users reporting the same problem. One of them mentioned fixing it by editing the start and end disk positions. Figuring it was worth a try, I increased the start a little and decreased the end a little, and Create actually succeeded! It wasted a few dozen GB of my disk space, but after the physical volume was created everything else worked more or less normally.
Openfiler Open Source Edition (OSE) is perfect for cost-constrained budgets and delivers block-level (basic iSCSI target) and file-level storage export protocols.
Openfiler Commercial Edition (CE) builds upon Openfiler OSE to provide key features such as iSCSI target for virtualization, Fibre Channel target support, block level replication and High Availability that are the mainstay of any business critical storage environment.
OpenFiler performance issues
Testing on OpenFiler, I found NFS performs much better than iSCSI.
A Linux VM on NFS can reach 60 MB/s write throughput.
On iSCSI, FreeBSD and Linux only reach about 7 MB/s and Windows stays under 5 MB/s; the exception is Solaris 10 with ZFS, which reaches about 30 MB/s.
I haven't had much time to test with IOMETER yet.
Writing a 2 GB file with dd on UNIX usually lands around 7 MB/s; testing with other tools on Windows gives a little over 4 MB/s.
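The dd write test described above can be reproduced along these lines (a sketch; the file name is arbitrary, and count is reduced to 256 MB here for a quick check; raise it to 2048 for the full 2 GB run):

```shell
# Sequential write test. conv=fdatasync forces the data to disk before dd
# exits, so the reported rate is not inflated by the page cache.
dd if=/dev/zero of=./ddtest.img bs=1M count=256 conv=fdatasync
```

dd prints the elapsed time and throughput on stderr when it finishes.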
While testing, I watched I/O and bandwidth usage with esxtop.
My OpenFiler box uses five 1 TB disks in RAID-5; on the OpenFiler itself, reads reach 200 MB/s and writes exceed 70 MB/s. The motherboard is P4-era and limited by the ICH's 1.5 Gb/s SATA bandwidth, but that is still plenty.
In fact, after reading a lot of material, I found that with many VMs, NFS really can be much faster than even FC-SAN. Anyone planning FC-SAN storage for VIC may want to reconsider.
Shouldn't iSCSI be faster than NFS? With one LUN per VM, iSCSI may well be faster, but with multiple VMs, NFS definitely wins.
This is caused by the fileio vs. blockio setting:
iSCSI usually defaults to blockio, which writes straight through to the device;
NFS usually defaults to fileio, which uses memory as a write-back cache.
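On OpenFiler this corresponds to the per-LUN Type option of the iSCSI Enterprise Target. A sketch of the relevant /etc/ietd.conf fragment; the target IQN and volume path are illustrative, and OpenFiler normally manages this file through its web UI:

```
# /etc/ietd.conf (iSCSI Enterprise Target); names below are illustrative.
Target iqn.2012-01.com.example:vg_data.lv1
        # blockio: requests go straight to the block device, no page cache
        Lun 0 Path=/dev/vg_data/lv1,Type=blockio
        # fileio: requests go through the page cache (write-back), which can
        # look much faster until the cache fills, or power is lost
        #Lun 0 Path=/dev/vg_data/lv1,Type=fileio
```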