Subject: Re: [vserver] Re: [OT] [vserver] hybrid zfs pools as iSCSI targets for vserver
From: Adrian Reyer <are@lihas.de>
Date: Sun, 7 Aug 2011 11:10:29 +0200

On Sat, Aug 06, 2011 at 09:07:00PM -0400, John A. Sullivan III wrote:
> It's not acking every packet, it's acking every block.  At least that is
> what we have been told.  I don't recall if I put a protocol analyzer on
> the line to confirm that.  So it's not a transport layer ACK; it's a
> data layer ACK.
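
If you want to confirm that on the wire, a capture of the iSCSI TCP
stream should show one SCSI Response PDU coming back per command. A
rough sketch with tshark (eth0 and the default port 3260 are assumed;
opcode 0x21 is the SCSI Response PDU):

# show only the per-command status PDUs returned by the target
tshark -i eth0 -f "tcp port 3260" -R "iscsi.opcode == 0x21"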

iSCSI is SCSI, so you should be able to use tagged command queuing
with reasonably deep queues there.
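
Whether the initiator actually negotiated a deep queue you can see in
sysfs; a quick check (sdb assumed to be the iSCSI disk):

# number of commands the device will keep in flight
cat /sys/block/sdb/device/queue_depth
# raise it if the target can take more
echo 128 > /sys/block/sdb/device/queue_depth
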
I dislike SANs and think iSCSI is a slow mess, but I currently export
disks via iSCSI from one Linux host to another to use them as backup
disks for a Bacula (backup) server.
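
If you set this up with open-iscsi on the initiator, it is roughly
(the portal address is a placeholder):

# discover the targets the storage host offers
iscsiadm -m discovery -t sendtargets -p storage.example:3260
# log in; the exported disk then appears as /dev/sdX
iscsiadm -m node --login
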
I have played with the schedulers extensively and tuned them for my
specific load (1 GB files, mostly sequential access) through sysfs,
using sysfsutils:

# documentation states some schedulers keep the heads where they are
# after a read request for a short while, to see if further requests
# to that location come in. As this is a) a SAN and b) a RAID system,
# head locations are not transparent in any way, so deadline it is
block/sdb/queue/scheduler = deadline
# 0: complete requests on whatever CPU takes the interrupt instead of
# migrating completions back to the submitting CPU
block/sdb/queue/rq_affinity = 0
block/sdb/queue/nr_requests = 1024
# quite a high read expiry because of my specific workload; you will
# want a smaller one
block/sdb/queue/iosched/read_expire = 5000
# the write expiry should be fine; with ext4 my mount options are:
# rw,noatime,data=writeback,journal_async_commit,delalloc
block/sdb/queue/iosched/write_expire = 2000
block/sdb/device/queue_depth = 1024
# again: I use big files on a backup server, jobs are migrated from
# storage to tape, so 1 MB readahead; the default is 128 kB
block/sdb/queue/read_ahead_kb = 1024
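
sysfsutils applies these from /etc/sysfs.conf at boot; for testing you
can write the same values directly, e.g.:

echo deadline > /sys/block/sdb/queue/scheduler
echo 1024 > /sys/block/sdb/queue/nr_requests
echo 5000 > /sys/block/sdb/queue/iosched/read_expire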

Regards,
	_are_
-- 
LiHAS - Adrian Reyer - Hessenwiesenstraße 10 - D-70565 Stuttgart
Fon: +49 (7 11) 78 28 50 90 - Fax:  +49 (7 11) 78 28 50 91
Mail: lihas@lihas.de - Web: http://lihas.de
Linux, Netzwerke, Consulting & Support - USt-ID: DE 227 816 626 Stuttgart