Subject: Re: [vserver] Request: Help to a 2.6.22.14+vs2.2.05 and the libata patches which make PMP work
From: Corey Wright <undefined@pobox.com>
Date: Fri, 28 Dec 2007 21:57:31 -0600
On Fri, 28 Dec 2007 23:07:14 +1030
"Mike O'Connor" <vserver@pineview.net> wrote:

> These patches add proper support for handling a number of SATA soft
> errors plus add support for Port Multiplier and Hotplug.

you mean like:

BEGIN SYSLOG

Dec 20 13:59:30 host kernel: ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x2 frozen
Dec 20 13:59:30 host kernel: ata1.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 0 cdb 0x0 data 0
Dec 20 13:59:30 host kernel:          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Dec 20 13:59:30 host kernel: ata1: soft resetting port
Dec 20 13:59:31 host kernel: ata1.00: configured for UDMA/100
Dec 20 13:59:31 host kernel: ata1: EH complete
Dec 20 13:59:31 host kernel: sd 0:0:0:0: [sda] 625142448 512-byte hardware sectors (320073 MB)
Dec 20 13:59:31 host kernel: sd 0:0:0:0: [sda] Write Protect is off
Dec 20 13:59:31 host kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
Dec 20 13:59:31 host kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA

END SYSLOG

i get about two errors (ATA bus error, timeout, device error) a
week, spread evenly between two identical drives in a raid1 array.

i never saw this with 2.6.12 or 2.6.15, but since 2.6.18 the problem has
persisted.  with 2.6.18 it was so bad that the kernel would weekly mistake
the affected drive for a failed one and kick it out of the raid1 array,
degrading the array (and giving me a very uneasy feeling until i had
verified the drive with badblocks and added it back to the array).  with
2.6.20: errors every other day, but a failed drive and degraded array only
about once a month.  with 2.6.22: errors about twice a week, and no failed
drives or resulting degraded arrays.
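
for reference, the verify-and-re-add routine is roughly the following
(the device and array names are only examples, adjust to your setup):

  badblocks -sv /dev/sda1           # read-only surface scan of the kicked partition
  mdadm /dev/md0 --add /dev/sda1    # put it back into the raid1 array
  cat /proc/mdstat                  # watch the resync progress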

this has persisted for the last year or so and gets better with each kernel
release, so i tend to believe the "problem" is with linux and not the
drives.

i would be happy to get rid of those occasional errors with a kernel patch,
if only for my peace of mind.  and if not, i need the exercise in c (my
native language is c++, and i mess with c so infrequently that the kernel
source resembles assembly to me, while my c++ reads more like java).

> I have been able to get a 2.6.22.1, vs2.2.05 and a patch from this site
> to compile and it worked.

i'm currently running 2.6.22.14 + vs2.2.0.5.

> It's great on the drive side of things: I now have 500G WD drives
> working that just would not format before, and PMP and Hotplug are
> just cool.

interesting: my problematic drives are also WD (but 320 GB).

> I've not used it long enough to know if there are any issues, but I
> would feel much happier if I were using the 2.6.22.14 kernel.

peace-of-mind.  i understand.

> A 2.6.22.14 plus vs2.2.05 did not patch with the libata patch for
> 2.6.22.1.

nah, it patches pretty well, considering the large size of the patch and
the small number of rejects.  try applying the vserver patch to an ubuntu
kernel.
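
if you want to see the damage before touching the tree, patch has a
dry-run mode; the filenames below are only examples of the vserver and
libata patches, not the exact names:

  cd linux-source-2.6.22.14
  patch -p1 --dry-run < ../patch-2.6.22.14-vs2.2.0.5.diff   # vserver first
  patch -p1 --dry-run < ../libata-pmp-2.6.22.1.patch        # then the libata/pmp patch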

> I just do not have the skills to adjust the patch, but I do feel
> strongly that reliable support for SATA drives in a current and stable
> vserver patched kernel is needed.

it wasn't that difficult (patching is usually more time-consuming than
difficult), as most of the rejects exist because bits of the patch (mainly
new hardware support) have already been applied upstream.
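
checking a reject mostly means eyeballing the .rej against the tree to
see whether the lines it wanted to add are already there; something like:

  cat drivers/ata/ahci.c.rej    # what the hunk wanted to change
  less drivers/ata/ahci.c       # if the change is already in the file, ignore the reject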

here's my patch log (ie an abbreviated log of the patch failures and my
analysis of each reject):

BEGIN LOG

patching file drivers/ata/ahci.c
Hunk #18 FAILED at 442.
***** hunk already applied upstream *****
1 out of 48 hunks FAILED -- saving rejects to file drivers/ata/ahci.c.rej

patching file drivers/ata/ata_piix.c
Hunk #1 FAILED at 200.
***** hunk already applied upstream *****
1 out of 12 hunks FAILED -- saving rejects to file drivers/ata/ata_piix.c.rej

patching file drivers/ata/libata-core.c
Hunk #70 FAILED at 3804.
***** hunk partially applied upstream; create patch *****
Hunk #71 FAILED at 3822.
***** hunk (and more) already applied upstream *****
Hunk #72 FAILED at 3838.
***** hunk (and more) already applied upstream *****
3 out of 114 hunks FAILED -- saving rejects to file drivers/ata/libata-core.c.rej

patching file drivers/ata/libata-sff.c
Hunk #2 FAILED at 211.
***** hunk already applied upstream *****
1 out of 13 hunks FAILED -- saving rejects to file drivers/ata/libata-sff.c.rej

patching file drivers/ata/pata_atiixp.c
Hunk #4 FAILED at 286.
***** hunk already applied upstream *****
1 out of 4 hunks FAILED -- saving rejects to file drivers/ata/pata_atiixp.c.rej

patching file drivers/ata/pata_scc.c
Hunk #2 FAILED at 363.
***** hunk already applied upstream *****
1 out of 7 hunks FAILED -- saving rejects to file drivers/ata/pata_scc.c.rej

END LOG

be sure to still apply the original patch to ata_piix.c even though patch
complains about detecting a previously applied patch (ie answer "Assume -R?"
with NO and "Apply anyway?" with YES).  the only true reject is fixed with
the attached patch.

> Any help appreciated
> Mike

hth,
corey
-- 
undefined@pobox.com


diff -urNpd linux-source-2.6.22.14-vs2.2.0.5/drivers/ata/libata-core.c linux-source-2.6.22.14-vs2.2.0.5-libata/drivers/ata/libata-core.c
--- linux-source-2.6.22.14-vs2.2.0.5/drivers/ata/libata-core.c	2007-12-28 19:33:26.000000000 -0600
+++ linux-source-2.6.22.14-vs2.2.0.5-libata/drivers/ata/libata-core.c	2007-12-28 19:12:57.000000000 -0600
@@ -3806,6 +3806,9 @@ static const struct ata_blacklist_entry 
 	{ "IOMEGA  ZIP 250       ATAPI", NULL,	ATA_HORKAGE_NODMA }, /* temporary fix */
 	{ "IOMEGA  ZIP 250       ATAPI       Floppy",
 				NULL,		ATA_HORKAGE_NODMA },
+	/* Odd clown on sil3726/4726 PMPs */
+	{ "Config  Disk",	NULL,		ATA_HORKAGE_NODMA |
+						ATA_HORKAGE_SKIP_PM },
 
 	/* Weird ATAPI devices */
 	{ "TORiSAN DVD-ROM DRD-N216", NULL,	ATA_HORKAGE_MAX_SEC_128 },