Opened 13 years ago
Closed 8 years ago
#9644 closed defect (obsolete)
segfault in VBoxHeadless 4.1.2 while doing clonehd
Reported by: | kerlerm | Owned by: | |
---|---|---|---|
Component: | VMM | Version: | VirtualBox 4.1.4 |
Keywords: | crash clonehd segfault 4.1.2 | Cc: | |
Guest type: | other | Host type: | Linux |
Description
Hi!
I was executing a "clonehd" command. While the system was under that load, a segfault occurred in VBoxHeadless. The host is an FC15 x64 system with several guests on it. Linux guests showed ATA bus resets when the error occurred, and some Windows guests were aborted.
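For context, the clone was started with a VBoxManage command of roughly the following form (the paths here are placeholders, not the actual ones from my setup):

  VBoxManage clonehd /vmstore/source.vdi /vmstore/source-clone.vdi --format VDI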
I wonder if this line in the logs might be connected with that:
EXT4-fs (dm-2): Unaligned AIO/DIO on inode 12845094 by VBoxHeadless; performance will be poor.
The same configuration worked flawlessly with FC12 - VBox 3.2.12
Any hints / workarounds? Thanks a lot!
Cheers Martin
Attachments (4)
Change History (20)
by , 13 years ago
Attachment: | backtrace_vbox_4_1_2_clonehd.txt added |
---|
comment:1 by , 13 years ago
Today I updated to VBox 4.1.4. The same problem persists: guests get randomly aborted, not only while the server is under high load. This seems to be connected to ticket #9661: https://www.virtualbox.org/ticket/9661
comment:2 by , 13 years ago
Version: | VirtualBox 4.1.2 → VirtualBox 4.1.4 |
---|
comment:3 by , 13 years ago
The problem persists with version 4.1.6 installed. Guests get randomly aborted with ATA errors:
00:01:21.746 PCNet#0: Init: ss32=1 GCRDRA=0x01a46420[64] GCTDRA=0x01a46020[64]
00:01:28.332 PIT: mode=2 count=0x4ad (1197) - 996.81 Hz (ch=0)
00:01:58.016 AHCI#0: Canceled write at offset 3803667968 (512 bytes left) returned rc=VINF_SUCCESS
00:02:11.307 AHCI#0: Canceled read at offset 9197748224 (1024 bytes left) returned rc=VINF_SUCCESS
comment:4 by , 13 years ago
The problem still persists with 4.1.8; guests get randomly aborted:
00:49:12.629 AHCI#0: Canceled read at offset 9866681856 (2048 bytes left) returned rc=VINF_SUCCESS
00:49:12.638 AHCI#0: Canceled read at offset 9866677760 (2048 bytes left) returned rc=VINF_SUCCESS
Can I do anything to fix this bug?
comment:5 by , 13 years ago
Description: | modified (diff) |
---|
Are you still able to reproduce this bug? How easy is it for you? We fixed a bug which could be related to your problem. Would you be willing to try a test build?
comment:6 by , 12 years ago
Hi Frank!
Sorry for the late answer, but I'm just back from holiday.
I'm still able to reproduce this bug with version 4.1.16 installed. It usually happens under heavy load and occurs less often once my server has been running for several weeks. Reproducing it should be easy for me.
Is the test build available in RPM format for Fedora 15 64-bit?
Best Regards
comment:7 by , 12 years ago
Yesterday I tried to limit the I/O bandwidth of the guests to 6 MB/s. Things got even worse: guests died every few minutes. Is there a problem with I/O scheduling?
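For reference, such a per-disk limit is configured through VBoxManage bandwidth groups, roughly as sketched below. The VM name, controller name, and disk path are placeholders, and the syntax shown is the newer (4.2+) form; the 4.1 flags differed slightly.

  VBoxManage bandwidthctl "MyGuest" add DiskLimit --type disk --limit 6M
  VBoxManage storageattach "MyGuest" --storagectl "SATA" --port 0 --device 0 --type hdd --medium /vmstore/guest.vdi --bandwidthgroup DiskLimit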
comment:9 by , 12 years ago
Is this bug fixed in 4.1.18?
The changelog for VBox 4.1.18 says: "AHCI: fixed a rare bug which can cause a guest memory corruption after the guest storage controller has been reset"
I'll have a try...
comment:12 by , 12 years ago
I'm attaching another log, 120713_VBox.log.
Console shows the following when the guests are crashing:
Jul 13 16:47:30 kernel: [1815837.189720] VBoxHeadless[14075]: segfault at 2b0 ip 00000000000002b0 sp 00007fcc3479aa68 error 14 in VBoxHeadless[400000+6000]
Jul 13 16:57:50 kernel: [1816457.629478] VBoxHeadless[16115] trap int3 ip:7f016c9020ef sp:7f0136db6a70 error:0
Jul 13 17:01:51 kernel: [1816698.664984] VBoxHeadless[17039]: segfault at 1b78000140 ip 00007fbe9ebca7b4 sp 00007fbe917d29c0 error 4 in libc-2.14.1.so[7fbe9eb52000+190000]
comment:13 by , 12 years ago
Any news on this?
Is this connected to https://www.virtualbox.org/ticket/9975 ?
There's definitely something fishy in the VBox AHCI code, and I'm looking forward to a final solution soon. Can I assist in any way?
comment:16 by , 8 years ago
Resolution: | → obsolete |
---|---|
Status: | new → closed |
Please reopen if still relevant with a recent VirtualBox release.