Subject: Re: [vserver] Again: [vserver] Linux vServer: general protection fault with apache2 and kernel 2.6.38.6
From: Herbert Poetzl <herbert@13thfloor.at>
Date: Sat, 6 Aug 2011 14:38:16 +0200

On Sat, Aug 06, 2011 at 02:13:37PM +0200, Urban Loesch wrote:

> Hi Herbert,
>> I just tried to pinpoint the location based on my
>> 2.6.38.8-vs2.3.0.37-rc17 kernel and I suspect that
>> task_rq(p) is causing this (for certain p), but
>> I was wondering why your task_rq_lock() is 0xa0
>> bytes in size, where mine is just 0x65 bytes ...

>> especially as the task_rq_lock function is quite
>> compact ...

>> could you upload the output of the following commands
>> for me (executed in the build directory of your
>> kernel or with the vmlinux object file)

>> # objdump -t vmlinux | grep task_rq_lock

> gives me only one line:

that's fine, nothing more was expected :)

> /usr/src/linux-2.6.38.8 # objdump -t vmlinux | grep task_rq_lock
> ffffffff8104ec60 l     F .text	000000000000009c task_rq_lock

>> # objdump -d vmlinux --start-address=0x`objdump -t vmlinux | sed -n 
>> '/task_rq_lock/ {s/ .*//; p}'` | sed '/task>:/ Q'

> more lines :-)
> You can download it at http://www.enas.net/objdump.txt

I copied the interesting part here:

ffffffff8104ec60 <task_rq_lock>:
ffffffff8104ec60:	55                   	push   %rbp
ffffffff8104ec61:	48 89 e5             	mov    %rsp,%rbp
ffffffff8104ec64:	48 83 ec 20          	sub    $0x20,%rsp
ffffffff8104ec68:	48 89 1c 24          	mov    %rbx,(%rsp)
ffffffff8104ec6c:	4c 89 64 24 08       	mov    %r12,0x8(%rsp)
ffffffff8104ec71:	4c 89 6c 24 10       	mov    %r13,0x10(%rsp)
ffffffff8104ec76:	4c 89 74 24 18       	mov    %r14,0x18(%rsp)
ffffffff8104ec7b:	e8 40 d0 fb ff       	callq  ffffffff8100bcc0 <mcount>

this suggests that you have FTRACE enabled in your
kernel, and probably a bunch of related TRACERS as
well; the mcount call is what gcc's -pg instrumentation
inserts for the function tracer, which also explains
why your task_rq_lock() is larger than mine ...
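
you can double-check with something like this (just a
sketch, assuming your .config is still sitting in the
build directory; the option names are from mainline
2.6.38):

# grep -E 'CONFIG_(FTRACE|FUNCTION_TRACER|DYNAMIC_FTRACE)=' .config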

ffffffff8104ec80:	48 c7 c3 40 3c 01 00 	mov    $0x13c40,%rbx
ffffffff8104ec87:	49 89 fc             	mov    %rdi,%r12
ffffffff8104ec8a:	49 89 f5             	mov    %rsi,%r13
ffffffff8104ec8d:	ff 14 25 00 34 a1 81 	callq  *0xffffffff81a13400
ffffffff8104ec94:	48 89 c2             	mov    %rax,%rdx
ffffffff8104ec97:	ff 14 25 10 34 a1 81 	callq  *0xffffffff81a13410
ffffffff8104ec9e:	49 89 55 00          	mov    %rdx,0x0(%r13)
ffffffff8104eca2:	49 8b 44 24 08       	mov    0x8(%r12),%rax
ffffffff8104eca7:	49 89 de             	mov    %rbx,%r14
ffffffff8104ecaa:	8b 40 18             	mov    0x18(%rax),%eax
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

this is where your problem happens. together with the
information from the original post (which you removed),
this looks like task_rq() looking for a task at
0x9066669066666605, which is rather unlikely to be a
valid address (the byte pattern, all those 0x66/0x90,
even looks more like x86 nop padding than like a
pointer). so to me this looks like some kind of memory
corruption, maybe caused by a completely different
subsystem ...
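
for reference, here is roughly what task_rq_lock() looks
like in mainline 2.6.38 (kernel/sched.c); a sketch only,
modulo config and patch differences, with the offsets
from your disassembly annotated (on x86_64, 0x8 is
task_struct->stack and 0x18 is thread_info->cpu):

static struct rq *task_rq_lock(struct task_struct *p, unsigned long *flags)
{
	struct rq *rq;

	for (;;) {
		local_irq_save(*flags);
		/* task_rq(p) is cpu_rq(task_cpu(p)), i.e.
		 * &per_cpu(runqueues, task_thread_info(p)->cpu);
		 * the marked mov 0x18(%rax),%eax loads ->cpu, with
		 * %rax just loaded from 0x8(%r12) = p->stack */
		rq = task_rq(p);
		raw_spin_lock(&rq->lock);
		if (likely(rq == task_rq(p)))
			return rq;
		raw_spin_unlock_irqrestore(&rq->lock, *flags);
	}
}

so presumably it was p->stack (the task's thread_info
pointer) which held the bogus value when the fault hit ...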

ffffffff8104ecad:	4c 03 34 c5 00 d4 ab 	add    -0x7e542c00(,%rax,8),%r14
ffffffff8104ecb4:	81 
ffffffff8104ecb5:	4c 89 f7             	mov    %r14,%rdi
ffffffff8104ecb8:	e8 43 14 54 00       	callq  ffffffff81590100 <_raw_spin_lock>
ffffffff8104ecbd:	49 8b 44 24 08       	mov    0x8(%r12),%rax
ffffffff8104ecc2:	8b 40 18             	mov    0x18(%rax),%eax
ffffffff8104ecc5:	48 8b 14 c5 00 d4 ab 	mov    -0x7e542c00(,%rax,8),%rdx
ffffffff8104eccc:	81 
ffffffff8104eccd:	48 8d 04 13          	lea    (%rbx,%rdx,1),%rax
ffffffff8104ecd1:	49 39 c6             	cmp    %rax,%r14
ffffffff8104ecd4:	75 18                	jne    ffffffff8104ecee <task_rq_lock+0x8e>
ffffffff8104ecd6:	4c 89 f0             	mov    %r14,%rax
ffffffff8104ecd9:	48 8b 1c 24          	mov    (%rsp),%rbx
ffffffff8104ecdd:	4c 8b 64 24 08       	mov    0x8(%rsp),%r12
ffffffff8104ece2:	4c 8b 6c 24 10       	mov    0x10(%rsp),%r13
ffffffff8104ece7:	4c 8b 74 24 18       	mov    0x18(%rsp),%r14
ffffffff8104ecec:	c9                   	leaveq 
ffffffff8104eced:	c3                   	retq   
ffffffff8104ecee:	49 8b 75 00          	mov    0x0(%r13),%rsi
ffffffff8104ecf2:	4c 89 f7             	mov    %r14,%rdi
ffffffff8104ecf5:	e8 86 14 54 00       	callq  ffffffff81590180 <_raw_spin_unlock_irqrestore>
ffffffff8104ecfa:	eb 91                	jmp    ffffffff8104ec8d <task_rq_lock+0x2d>
ffffffff8104ecfc:	0f 1f 40 00          	nopl   0x0(%rax)

but that's just a wild guess and some hand waving. to
get better information and more debug data, we would
first need to find a way to trigger this issue within
a short amount of time, let's say a few minutes up to
an hour or so ...
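
maybe hammering the apache in the guest already does
the trick, along these lines (just an example; ab comes
with apache2-utils, and <guest> is a placeholder for
your guest's address):

# while true; do ab -n 10000 -c 100 http://<guest>/; done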

btw, does it happen on more than one system or is it
always the same system?

best,
Herbert

> thanks,
> Urban