#9878 closed defect (duplicate)
bug when use as "vboxsf + loop device + large RAM" -> fixed as of Jan 5 2012
| Reported by: | mchen | Owned by: | |
|---|---|---|---|
| Component: | guest additions | Version: | VirtualBox 4.1.6 |
| Keywords: | vboxsf loop largeRAM | Cc: | |
| Guest type: | Linux | Host type: | Windows |
Description
I am running VirtualBox on WinXP with an RHEL 5.7 guest. I mount a directory with vboxsf, then use a loop device to mount a file from that share elsewhere in my workspace.
Steps:

    mount -t vboxsf cfs /cfs
    mount -t ext2 -o loop /cfs/test /test
Results:

    VM_RAM = 500MB: OK
    VM_RAM = 800MB: some machines OK, others failed (same OS, but different CPUs)
    VM_RAM = 1100MB: all failed
Info:

    00:00:08.609 Guest Log: vboxguest: major 0, IRQ 20, I/O port d020, MMIO at 00000000f0000000 (size 0x400000)
    00:00:10.762 Guest Log: VbglR0HGCMInternalCall: vbglR0HGCMInternalPreprocessCall failed. rc=-2
    00:00:10.763 Guest Log: VBoxGuestCommonIOCtl: HGCM_CALL: 64 Failed. rc=-2.
    00:00:10.765 Guest Log: VbglR0HGCMInternalCall: vbglR0HGCMInternalPreprocessCall failed. rc=-2
    00:00:10.766 Guest Log: VBoxGuestCommonIOCtl: HGCM_CALL: 64 Failed. rc=-2.
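For anyone reproducing this, a minimal sketch of how such a share is typically defined on the host before the guest-side mounts above (the VM name "rhel57" and the host path are placeholders, not from the report):

    rem run on the Windows host; VM name and path are examples
    VBoxManage sharedfolder add "rhel57" --name cfs --hostpath "C:\cfs"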
Attachments (2)
Change History (15)
comment:1 by , 13 years ago
comment:2 by , 13 years ago
Perhaps I have reproduced it. In dmesg I see the following:
    SELinux: initialized (dev vboxsf, type vboxsf), not configured for labeling
    VbglR0HGCMInternalCall: vbglR0HGCMInternalPreprocessCall failed. rc=-2
    VBoxGuestCommonIOCtl: HGCM_CALL: 64 Failed. rc=-2.
    EXT2-fs: unable to read superblock
Do you have something similar?
comment:3 by , 13 years ago
Is this reproducible if you disable PAE in the virtual machine CPU settings?
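For reference, PAE can be toggled in the GUI (Settings > System > Processor) or from the host command line; a sketch with a placeholder VM name:

    VBoxManage modifyvm "rhel57" --pae off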
comment:4 by , 13 years ago
I had already disabled SELinux before my tests, and I tried again this morning: with PAE turned off, nothing changed.
btw: with PAE off and RAM reduced to 800MB, I can mount the file, but I soon get a kernel crash in vboxsf (sorry, I can't find a log file; Linux hangs and needs a forced reboot).
comment:5 by , 13 years ago
I am having the same problem with a Win7 host, Debian Squeeze guest, and 4.1.6. It worked with 4.0.something (IIRC 4.0.4 or .6).
I am trying to mount an ISO image from a shared folder. With 2GB guest RAM, I am seeing this in dmesg:
    [ 73.163337] VbglR0HGCMInternalCall: vbglR0HGCMInternalPreprocessCall failed. rc=-2
    [ 73.163549] VBoxGuestCommonIOCtl: HGCM_CALL: 64 Failed. rc=-2.
    [ 73.163993] isofs_fill_super: bread failed, dev=loop0, iso_blknum=16, block=32
    [ 73.170372] VbglR0HGCMInternalCall: vbglR0HGCMInternalPreprocessCall failed. rc=-2
With 500MB it works.
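For the record, a minimal sketch of what I am doing (share name, mount points, and ISO name are examples):

    mkdir -p /mnt/share /mnt/iso
    mount -t vboxsf share /mnt/share
    mount -t iso9660 -o loop /mnt/share/image.iso /mnt/iso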
comment:6 by , 13 years ago
I am trying to loop-mount (-o loop) 8 ISO images, all via shared folders; see the sketch after the results below.
With 893MB RAM, I can mount all 8.
With 894MB RAM, I can mount 7 ISO files; the 8th fails with the same error.
With 910MB RAM, I can mount all 8.
With 911MB RAM, it worked once and failed once.
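For reference, a sketch of the kind of batch mount involved (mount directories and image names are hypothetical):

    # loop-mount 8 ISOs from a vboxsf share
    for i in 1 2 3 4 5 6 7 8; do
        mkdir -p /mnt/iso$i
        mount -t iso9660 -o loop /mnt/share/disc$i.iso /mnt/iso$i
    done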
comment:7 by , 13 years ago
I can reproduce this locally. I think it may be due to our code having problems locking so-called high memory managed by 32-bit Linux kernels.
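For context: a default 32-bit x86 kernel maps only roughly the first 896MB of RAM as low memory, which would fit the ~894MB threshold reported above. How much high memory a guest kernel manages can be checked with a standard /proc query (nothing VirtualBox-specific):

    # HighTotal/HighFree only appear on 32-bit kernels with highmem enabled
    grep High /proc/meminfo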
comment:8 by , 13 years ago
#10061 might be a duplicate of this. I hope to have this fixed soon, but it has taken a while, as I was not very familiar with Linux in-kernel memory management.
comment:9 by , 13 years ago
You might want to give this pre-release 4.1 Additions build (usual disclaimer applies) a try:
https://www.virtualbox.org/download/testcase/VBoxGuestAdditions-r75555.iso
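If it helps, the usual way to install a test Additions build inside a Linux guest once the ISO is attached (the mount point is an example):

    mkdir -p /media/cdrom
    mount /dev/cdrom /media/cdrom
    sh /media/cdrom/VBoxLinuxAdditions.run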
comment:10 by , 13 years ago
If this build fixes the problem then the ticket is probably a duplicate of #9719.
comment:12 by , 13 years ago
| Resolution: | → duplicate |
|---|---|
| Status: | new → closed |
| Summary: | bug when use as "vboxsf + loop device + large RAM" → bug when use as "vboxsf + loop device + large RAM" -> duplicate of #9719 |
Thanks for the confirmation. Closing this as a duplicate. Ticket #10061 doesn't seem to be a duplicate after all by the way.
comment:13 by , 13 years ago
| Summary: | bug when use as "vboxsf + loop device + large RAM" -> duplicate of #9719 → bug when use as "vboxsf + loop device + large RAM" -> fixed as of Jan 5 2012 |
|---|---|
Neither is ticket #9719, it turns out.
I couldn't reproduce this here with a 32-bit CentOS 5.5 guest with 1100MB RAM. Can you reproduce this with a freshly created VM? And with other guest types and different loop device files? If you copy the loop device file into the VM and try to mount it there, does that work?
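A sketch of that last test, reusing the paths from the original report:

    cp /cfs/test /tmp/test
    mount -t ext2 -o loop /tmp/test /test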