- Timestamp: Jul 29, 2021 8:11:06 AM (4 years ago)
- svn:sync-xref-src-repo-rev: 145976
- File: 1 edited
trunk/src/VBox/VMM/VMMAll/PDMAllCritSect.cpp
r90381 → r90390:

@@ -217,6 +217,24 @@
                spurious wakeups. */
     RT_NOREF(rcBusy);
-    /** @todo eliminate this and return rcBusy instead.  Guru if
-     *        rcBusy is VINF_SUCCESS. */
+    /** @todo eliminate this and return rcBusy instead.  Guru if rcBusy is
+     *        VINF_SUCCESS.
+     *
+     * Update: If we use cmpxchg to carefully decrement cLockers, we can avoid the
+     * race and the spurious wakeup.  The race in question is between the two
+     * decrement operations: if we lose out to the PDMCritSectLeave CPU, it will
+     * signal the semaphore and leave it signalled while cLockers is zero.  If we
+     * use cmpxchg to make sure this won't happen and repeat the loop should
+     * cLockers reach zero (i.e. we're the only one around and the semaphore is
+     * or will soon be signalled), we can make this work.
+     *
+     * The ring-0 RTSemEventWaitEx code never returns VERR_INTERRUPTED for an
+     * already signalled event; however, we're racing the signal call here, so it
+     * may not yet be signalled when we call RTSemEventWaitEx again...  Maybe do a
+     * non-interruptible wait for a short while?  Or put a max loop count on this?
+     * There is always the possibility that the thread is in user mode and will be
+     * killed before it gets to waking up the next waiting thread...  We probably
+     * need a general timeout here for ring-0 waits and return rcBusy/guru if we
+     * get stuck here for too long...
+     */
     PVMCPUCC pVCpu = VMMGetCpu(pVM); AssertPtr(pVCpu);
     rc = VMMRZCallRing3(pVM, pVCpu, VMMCALLRING3_VM_R0_PREEMPT, NULL);
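The cmpxchg idea in the new @todo can be pictured with a small, self-contained sketch. This is a hypothetical C11 illustration under assumed counter semantics (cLockers is -1 when the section is free, 0 when owned with no waiters, positive when there are waiters); the counter g_cLockers and the helper pdmR0CritSectTryBackOutSketch are invented stand-ins for the real cLockers field and back-out path, and the actual code would use the IPRT atomic helpers rather than <stdatomic.h>:

    #include <stdatomic.h>
    #include <stdbool.h>

    /* Hypothetical stand-in for the critical section's cLockers field:
       -1 = free, 0 = owned with no waiters, > 0 = owned with waiters. */
    static atomic_int g_cLockers = -1;

    /*
     * Sketch of the back-out path after an interrupted ring-0 wait.
     *
     * Returns true if we removed ourselves from the waiter count and the
     * caller may bail out with rcBusy; returns false if the count already
     * reached zero, i.e. the leaving CPU has signalled (or is about to
     * signal) the semaphore for us, so the caller should loop and wait
     * again instead of decrementing and leaving a stray signal behind.
     */
    static bool pdmR0CritSectTryBackOutSketch(void)
    {
        int cLockers = atomic_load(&g_cLockers);
        for (;;)
        {
            if (cLockers <= 0)
                return false;       /* the pending wakeup is ours; wait again */
            /* Decrement only if nobody raced us; on failure the current
               value is reloaded into cLockers and we retry. */
            if (atomic_compare_exchange_weak(&g_cLockers, &cLockers, cLockers - 1))
                return true;        /* backed out without eating a signal */
        }
    }

The second half of the comment (a short non-interruptible wait, a retry cap, or a general ring-0 timeout falling back to rcBusy or a guru meditation) would sit in the caller's wait loop around a helper like this; the changeset leaves that part as an open question, so it is not sketched here.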