VirtualBox

source: vbox/trunk/src/VBox/VMM/VMMR3/TM.cpp@ 37517

Last change on this file since 37517 was 37517, checked in by vboxsync, 13 years ago

TM: Simplified the virtual sync timers by requiring changes to be done while holding the virtual sync lock. This means we can skip all the pending states and move timers on and off the active list immediately, avoiding the problems with timers being on the pending-scheduling list. Also made u64VirtualSync keep track of the last time stamp all the time (when under the lock) and thus really making sure time does not jump backwards.

/* $Id: TM.cpp 37517 2011-06-16 19:24:00Z vboxsync $ */
/** @file
 * TM - Time Manager.
 */

/*
 * Copyright (C) 2006-2010 Oracle Corporation
 *
 * This file is part of VirtualBox Open Source Edition (OSE), as
 * available from http://www.virtualbox.org. This file is free software;
 * you can redistribute it and/or modify it under the terms of the GNU
 * General Public License (GPL) as published by the Free Software
 * Foundation, in version 2 as it comes in the "COPYING" file of the
 * VirtualBox OSE distribution. VirtualBox OSE is distributed in the
 * hope that it will be useful, but WITHOUT ANY WARRANTY of any kind.
 */

/** @page pg_tm        TM - The Time Manager
 *
 * The Time Manager abstracts the CPU clocks and manages the timers used by the
 * VMM, devices and drivers.
 *
 * @see grp_tm
 *
 *
 * @section sec_tm_clocks   Clocks
 *
 * There are currently 4 clocks:
 *   - Virtual (guest).
 *   - Synchronous virtual (guest).
 *   - CPU Tick (TSC) (guest). Its only current use is rdtsc emulation. Usually
 *     a function of the virtual clock.
 *   - Real (host). This is only used for display updates atm.
 *
 * The first three clocks are the most important ones, and of these the second
 * is the most interesting.
 *
 *
 * The synchronous virtual clock is tied to the virtual clock except that it
 * will take into account timer delivery lag caused by host scheduling. It will
 * normally never advance beyond the head timer, and when lagging too far behind
 * it will gradually speed up to catch up with the virtual clock. All devices
 * implementing time sources accessible to and used by the guest use this
 * clock (for timers and other things). This ensures consistency between the
 * time sources.
 *
 * The virtual clock is implemented as an offset to a monotonic, high
 * resolution, wall clock. The current time source is using the RTTimeNanoTS()
 * machinery based upon the Global Info Pages (GIP), that is, we're using TSC
 * deltas to fill the gaps between GIP updates (usually every 10 ms). The
 * result is a fairly high res clock that works in all contexts and on all
 * hosts. The virtual clock is paused when the VM isn't in the running state.
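 *
 * As a rough illustration of the offset scheme (names simplified here, not
 * the exact member names used by the implementation):
 * @code
 *    // While running: virtual time = monotonic wall clock - offset.
 *    uint64_t u64Now = RTTimeNanoTS() - u64VirtualOffset;
 *    // On pause the current reading is latched; on resume the offset is
 *    // recomputed so the clock continues from the latched value:
 *    u64VirtualOffset = RTTimeNanoTS() - u64LatchedVirtualTime;
 * @endcode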
 *
 * The CPU tick (TSC) is normally virtualized as a function of the synchronous
 * virtual clock, where the frequency defaults to the host cpu frequency (as we
 * measure it). In this mode it is possible to configure the frequency. Another
 * (non-default) option is to use the raw unmodified host TSC values. And yet
 * another, to tie it to time spent executing guest code. All these things are
 * configurable should non-default behavior be desirable.
 *
 * The real clock is a monotonic clock (when available) with relatively low
 * resolution, though this is a bit host specific. Note that we're currently
 * not servicing timers using the real clock when the VM is not running; this
 * is simply because it has not been needed yet and therefore not implemented.
 *
 *
 * @subsection subsec_tm_timesync Guest Time Sync / UTC time
 *
 * Guest time syncing is primarily taken care of by the VMM device. The
 * principle is very simple: the guest additions periodically ask the VMM
 * device what the current UTC time is and make adjustments accordingly.
 *
 * A complicating factor is that the synchronous virtual clock might be doing
 * catchups and the guest's perception is currently a little bit behind the
 * world, but it will (hopefully) be catching up soon as we're feeding timer
 * interrupts at a slightly higher rate. Adjusting the guest clock to the
 * current wall time in the real world would be a bad idea then, because the
 * guest will be advancing too fast and run ahead of world time (if the catchup
 * works out). To solve this problem TM provides the VMM device with a UTC time
 * source that gets adjusted with the current lag, so that when the guest
 * eventually catches up the lag it will be showing correct real world time.
 *
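 * As a worked example of the catch-up logic (using the default period table
 * configured in TMR3Init below): if the virtual sync clock has fallen 30 ms
 * behind, catch-up period #3 applies and the clock runs 50% faster, i.e. one
 * second of virtual time ticks by in roughly 667 ms of real time, until the
 * lag drops below the CatchUpStopThreshold (0.5 ms by default).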
 *
 * @section sec_tm_timers   Timers
 *
 * The timers can use any of the TM clocks described in the previous section.
 * Each clock has its own scheduling facility, or timer queue if you like.
 * There are a few factors which make it a bit complex. First, there is the
 * usual R0 vs R3 vs. RC thing. Then there are multiple threads, and then there
 * is the timer thread that periodically checks whether any timers have expired
 * without EMT noticing. On the API level, all but the create and save APIs
 * must be multithreaded. EMT will always run the timers.
 *
 * The design is using a doubly linked list of active timers which is ordered
 * by expire date. This list is only modified by the EMT thread. Updates to
 * the list are batched in a singly linked list, which is then processed by the
 * EMT thread at the first opportunity (immediately, next time EMT modifies a
 * timer on that clock, or next timer timeout). Both lists are offset based and
 * all the elements are therefore allocated from the hyper heap.
 *
 * For figuring out when there is need to schedule and run timers TM will:
 *   - Poll whenever somebody queries the virtual clock.
 *   - Poll the virtual clocks from the EM and REM loops.
 *   - Poll the virtual clocks from the trap exit path.
 *   - Poll the virtual clocks and calculate first timeout from the halt loop.
 *   - Employ a thread which periodically (100Hz) polls all the timer queues.
 *
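 * As a minimal sketch of the creation side (mirroring the internal-timer
 * usage in TMR3InitFinalize below; device and driver code goes through the
 * corresponding PDM helpers instead):
 * @code
 *    PTMTIMER pTimer;
 *    int rc = TMR3TimerCreateInternal(pVM, TMCLOCK_REAL, tmR3CpuLoadTimer,
 *                                     NULL /*pvUser*/, "CPU Load Timer", &pTimer);
 *    if (RT_SUCCESS(rc))
 *        rc = TMTimerSetMillies(pTimer, 1000); // first expiry in ~1s (real clock)
 * @endcode
 *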
 * @image html TMTIMER-Statechart-Diagram.gif
 *
 * @section sec_tm_timer    Logging
 *
 * Level 2: Logs most of the timer state transitions and queue servicing.
 * Level 3: Logs a few oddments.
 * Level 4: Logs TMCLOCK_VIRTUAL_SYNC catch-up events.
 *
 */

/*******************************************************************************
*   Header Files                                                               *
*******************************************************************************/
#define LOG_GROUP LOG_GROUP_TM
#include <VBox/vmm/tm.h>
#include <iprt/asm-amd64-x86.h> /* for SUPGetCpuHzFromGIP from sup.h */
#include <VBox/vmm/vmm.h>
#include <VBox/vmm/mm.h>
#include <VBox/vmm/ssm.h>
#include <VBox/vmm/dbgf.h>
#include <VBox/vmm/dbgftrace.h>
#include <VBox/vmm/rem.h>
#include <VBox/vmm/pdmapi.h>
#include <VBox/vmm/iom.h>
#include "TMInternal.h"
#include <VBox/vmm/vm.h>

#include <VBox/vmm/pdmdev.h>
#include <VBox/param.h>
#include <VBox/err.h>

#include <VBox/log.h>
#include <iprt/asm.h>
#include <iprt/asm-math.h>
#include <iprt/assert.h>
#include <iprt/thread.h>
#include <iprt/time.h>
#include <iprt/timer.h>
#include <iprt/semaphore.h>
#include <iprt/string.h>
#include <iprt/env.h>

#include "TMInline.h"


/*******************************************************************************
*   Defined Constants And Macros                                               *
*******************************************************************************/
/** The current saved state version. */
#define TM_SAVED_STATE_VERSION  3


/*******************************************************************************
*   Internal Functions                                                         *
*******************************************************************************/
static bool                 tmR3HasFixedTSC(PVM pVM);
static uint64_t             tmR3CalibrateTSC(PVM pVM);
static DECLCALLBACK(int)    tmR3Save(PVM pVM, PSSMHANDLE pSSM);
static DECLCALLBACK(int)    tmR3Load(PVM pVM, PSSMHANDLE pSSM, uint32_t uVersion, uint32_t uPass);
static DECLCALLBACK(void)   tmR3TimerCallback(PRTTIMER pTimer, void *pvUser, uint64_t iTick);
static void                 tmR3TimerQueueRun(PVM pVM, PTMTIMERQUEUE pQueue);
static void                 tmR3TimerQueueRunVirtualSync(PVM pVM);
static DECLCALLBACK(int)    tmR3SetWarpDrive(PVM pVM, uint32_t u32Percent);
#ifndef VBOX_WITHOUT_NS_ACCOUNTING
static DECLCALLBACK(void)   tmR3CpuLoadTimer(PVM pVM, PTMTIMER pTimer, void *pvUser);
#endif
static DECLCALLBACK(void)   tmR3TimerInfo(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
static DECLCALLBACK(void)   tmR3TimerInfoActive(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
static DECLCALLBACK(void)   tmR3InfoClocks(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);


/**
 * Initializes the TM.
 *
 * @returns VBox status code.
 * @param   pVM         The VM to operate on.
 */
VMM_INT_DECL(int) TMR3Init(PVM pVM)
{
    LogFlow(("TMR3Init:\n"));

    /*
     * Assert alignment and sizes.
     */
    AssertCompileMemberAlignment(VM, tm.s, 32);
    AssertCompile(sizeof(pVM->tm.s) <= sizeof(pVM->tm.padding));
    AssertCompileMemberAlignment(TM, TimerCritSect, 8);
    AssertCompileMemberAlignment(TM, VirtualSyncLock, 8);

    /*
     * Init the structure.
     */
    void *pv;
    int rc = MMHyperAlloc(pVM, sizeof(pVM->tm.s.paTimerQueuesR3[0]) * TMCLOCK_MAX, 0, MM_TAG_TM, &pv);
    AssertRCReturn(rc, rc);
    pVM->tm.s.paTimerQueuesR3 = (PTMTIMERQUEUE)pv;
    pVM->tm.s.paTimerQueuesR0 = MMHyperR3ToR0(pVM, pv);
    pVM->tm.s.paTimerQueuesRC = MMHyperR3ToRC(pVM, pv);

    pVM->tm.s.offVM = RT_OFFSETOF(VM, tm.s);
    pVM->tm.s.idTimerCpu = pVM->cCpus - 1; /* The last CPU. */
    pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL].enmClock       = TMCLOCK_VIRTUAL;
    pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL].u64Expire      = INT64_MAX;
    pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL_SYNC].enmClock  = TMCLOCK_VIRTUAL_SYNC;
    pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL_SYNC].u64Expire = INT64_MAX;
    pVM->tm.s.paTimerQueuesR3[TMCLOCK_REAL].enmClock          = TMCLOCK_REAL;
    pVM->tm.s.paTimerQueuesR3[TMCLOCK_REAL].u64Expire         = INT64_MAX;
    pVM->tm.s.paTimerQueuesR3[TMCLOCK_TSC].enmClock           = TMCLOCK_TSC;
    pVM->tm.s.paTimerQueuesR3[TMCLOCK_TSC].u64Expire          = INT64_MAX;


    /*
     * We directly use the GIP to calculate the virtual time. We map the
     * GIP into the guest context so we can do this calculation there as
     * well and save costly world switches.
     */
    pVM->tm.s.pvGIPR3 = (void *)g_pSUPGlobalInfoPage;
    AssertMsgReturn(pVM->tm.s.pvGIPR3, ("GIP support is now required!\n"), VERR_INTERNAL_ERROR);
    AssertMsgReturn((g_pSUPGlobalInfoPage->u32Version >> 16) == (SUPGLOBALINFOPAGE_VERSION >> 16),
                    ("Unsupported GIP version!\n"), VERR_INTERNAL_ERROR);

    RTHCPHYS HCPhysGIP;
    rc = SUPR3GipGetPhys(&HCPhysGIP);
    AssertMsgRCReturn(rc, ("Failed to get GIP physical address!\n"), rc);

    RTGCPTR GCPtr;
#ifdef SUP_WITH_LOTS_OF_CPUS
    rc = MMR3HyperMapHCPhys(pVM, pVM->tm.s.pvGIPR3, NIL_RTR0PTR, HCPhysGIP, (size_t)g_pSUPGlobalInfoPage->cPages * PAGE_SIZE,
                            "GIP", &GCPtr);
#else
    rc = MMR3HyperMapHCPhys(pVM, pVM->tm.s.pvGIPR3, NIL_RTR0PTR, HCPhysGIP, PAGE_SIZE, "GIP", &GCPtr);
#endif
    if (RT_FAILURE(rc))
    {
        AssertMsgFailed(("Failed to map GIP into GC, rc=%Rrc!\n", rc));
        return rc;
    }
    pVM->tm.s.pvGIPRC = GCPtr;
    LogFlow(("TMR3Init: HCPhysGIP=%RHp at %RRv\n", HCPhysGIP, pVM->tm.s.pvGIPRC));
    MMR3HyperReserve(pVM, PAGE_SIZE, "fence", NULL);

    /* Check assumptions made in TMAllVirtual.cpp about the GIP update interval. */
    if (    g_pSUPGlobalInfoPage->u32Magic == SUPGLOBALINFOPAGE_MAGIC
        &&  g_pSUPGlobalInfoPage->u32UpdateIntervalNS >= 250000000 /* 0.25s */)
        return VMSetError(pVM, VERR_INTERNAL_ERROR, RT_SRC_POS,
                          N_("The GIP update interval is too big. u32UpdateIntervalNS=%RU32 (u32UpdateHz=%RU32)"),
                          g_pSUPGlobalInfoPage->u32UpdateIntervalNS, g_pSUPGlobalInfoPage->u32UpdateHz);
    LogRel(("TM: GIP - u32Mode=%d (%s) u32UpdateHz=%u\n", g_pSUPGlobalInfoPage->u32Mode,
            g_pSUPGlobalInfoPage->u32Mode == SUPGIPMODE_SYNC_TSC ? "SyncTSC"
            : g_pSUPGlobalInfoPage->u32Mode == SUPGIPMODE_ASYNC_TSC ? "AsyncTSC" : "Unknown",
            g_pSUPGlobalInfoPage->u32UpdateHz));

    /*
     * Setup the VirtualGetRaw backend.
     */
    pVM->tm.s.VirtualGetRawDataR3.pu64Prev = &pVM->tm.s.u64VirtualRawPrev;
    pVM->tm.s.VirtualGetRawDataR3.pfnBad = tmVirtualNanoTSBad;
    pVM->tm.s.VirtualGetRawDataR3.pfnRediscover = tmVirtualNanoTSRediscover;
    if (ASMCpuId_EDX(1) & X86_CPUID_FEATURE_EDX_SSE2)
    {
        if (g_pSUPGlobalInfoPage->u32Mode == SUPGIPMODE_SYNC_TSC)
            pVM->tm.s.pfnVirtualGetRawR3 = RTTimeNanoTSLFenceSync;
        else
            pVM->tm.s.pfnVirtualGetRawR3 = RTTimeNanoTSLFenceAsync;
    }
    else
    {
        if (g_pSUPGlobalInfoPage->u32Mode == SUPGIPMODE_SYNC_TSC)
            pVM->tm.s.pfnVirtualGetRawR3 = RTTimeNanoTSLegacySync;
        else
            pVM->tm.s.pfnVirtualGetRawR3 = RTTimeNanoTSLegacyAsync;
    }

    pVM->tm.s.VirtualGetRawDataRC.pu64Prev = MMHyperR3ToRC(pVM, (void *)&pVM->tm.s.u64VirtualRawPrev);
    pVM->tm.s.VirtualGetRawDataR0.pu64Prev = MMHyperR3ToR0(pVM, (void *)&pVM->tm.s.u64VirtualRawPrev);
    AssertReturn(pVM->tm.s.VirtualGetRawDataR0.pu64Prev, VERR_INTERNAL_ERROR);
    /* The rest is done in TMR3InitFinalize since it's too early to call PDM. */

    /*
     * Init the locks.
     */
    rc = PDMR3CritSectInit(pVM, &pVM->tm.s.TimerCritSect, RT_SRC_POS, "TM Timer Lock");
    if (RT_FAILURE(rc))
        return rc;
    rc = PDMR3CritSectInit(pVM, &pVM->tm.s.VirtualSyncLock, RT_SRC_POS, "TM VirtualSync Lock");
    if (RT_FAILURE(rc))
        return rc;

    /*
     * Get our CFGM node, create it if necessary.
     */
    PCFGMNODE pCfgHandle = CFGMR3GetChild(CFGMR3GetRoot(pVM), "TM");
    if (!pCfgHandle)
    {
        rc = CFGMR3InsertNode(CFGMR3GetRoot(pVM), "TM", &pCfgHandle);
        AssertRCReturn(rc, rc);
    }

    /*
     * Determine the TSC configuration and frequency.
     */
    /* mode */
    /** @cfgm{/TM/TSCVirtualized,bool,true}
     * Use a virtualized TSC, i.e. trap all TSC access. */
    rc = CFGMR3QueryBool(pCfgHandle, "TSCVirtualized", &pVM->tm.s.fTSCVirtualized);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        pVM->tm.s.fTSCVirtualized = true; /* trap rdtsc */
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query bool value \"TSCVirtualized\""));

    /* source */
    /** @cfgm{/TM/UseRealTSC,bool,false}
     * Use the real TSC as time source for the TSC instead of the synchronous
     * virtual clock (false, default). */
    rc = CFGMR3QueryBool(pCfgHandle, "UseRealTSC", &pVM->tm.s.fTSCUseRealTSC);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        pVM->tm.s.fTSCUseRealTSC = false; /* use virtual time */
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query bool value \"UseRealTSC\""));
    if (!pVM->tm.s.fTSCUseRealTSC)
        pVM->tm.s.fTSCVirtualized = true;

    /* TSC reliability */
    /** @cfgm{/TM/MaybeUseOffsettedHostTSC,bool,detect}
     * Whether the CPU has a fixed TSC rate and may be used in offsetted mode with
     * VT-x/AMD-V execution. This is autodetected in a very restrictive way by
     * default. */
    rc = CFGMR3QueryBool(pCfgHandle, "MaybeUseOffsettedHostTSC", &pVM->tm.s.fMaybeUseOffsettedHostTSC);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
    {
        if (!pVM->tm.s.fTSCUseRealTSC)
            pVM->tm.s.fMaybeUseOffsettedHostTSC = tmR3HasFixedTSC(pVM);
        else
            pVM->tm.s.fMaybeUseOffsettedHostTSC = true;
    }

    /** @cfgm{TM/TSCTicksPerSecond, uint32_t, Current TSC frequency from GIP}
     * The number of TSC ticks per second (i.e. the TSC frequency). This will
     * override UseRealTSC, TSCVirtualized and MaybeUseOffsettedHostTSC.
     */
    rc = CFGMR3QueryU64(pCfgHandle, "TSCTicksPerSecond", &pVM->tm.s.cTSCTicksPerSecond);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
    {
        pVM->tm.s.cTSCTicksPerSecond = tmR3CalibrateTSC(pVM);
        if (    !pVM->tm.s.fTSCUseRealTSC
            &&  pVM->tm.s.cTSCTicksPerSecond >= _4G)
        {
            pVM->tm.s.cTSCTicksPerSecond = _4G - 1; /* (A limitation of our math code) */
            pVM->tm.s.fMaybeUseOffsettedHostTSC = false;
        }
    }
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query uint64_t value \"TSCTicksPerSecond\""));
    else if (   pVM->tm.s.cTSCTicksPerSecond < _1M
             || pVM->tm.s.cTSCTicksPerSecond >= _4G)
        return VMSetError(pVM, VERR_INVALID_PARAMETER, RT_SRC_POS,
                          N_("Configuration error: \"TSCTicksPerSecond\" = %RI64 is not in the range 1MHz..4GHz-1"),
                          pVM->tm.s.cTSCTicksPerSecond);
    else
    {
        pVM->tm.s.fTSCUseRealTSC = pVM->tm.s.fMaybeUseOffsettedHostTSC = false;
        pVM->tm.s.fTSCVirtualized = true;
    }

    /** @cfgm{TM/TSCTiedToExecution, bool, false}
     * Whether the TSC should be tied to execution. This will exclude most of the
     * virtualization overhead, but will by default include the time spent in the
     * halt state (see TM/TSCNotTiedToHalt). This setting will override all other
     * TSC settings except for TSCTicksPerSecond and TSCNotTiedToHalt, which should
     * be avoided or used with great care. Note that this will only work right
     * together with VT-x or AMD-V, and with a single virtual CPU. */
    rc = CFGMR3QueryBoolDef(pCfgHandle, "TSCTiedToExecution", &pVM->tm.s.fTSCTiedToExecution, false);
    if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query bool value \"TSCTiedToExecution\""));
    if (pVM->tm.s.fTSCTiedToExecution)
    {
        /* tied to execution, override all other settings. */
        pVM->tm.s.fTSCVirtualized = true;
        pVM->tm.s.fTSCUseRealTSC = true;
        pVM->tm.s.fMaybeUseOffsettedHostTSC = false;
    }

    /** @cfgm{TM/TSCNotTiedToHalt, bool, false}
     * For overriding the default of TM/TSCTiedToExecution, i.e. set this to false
     * to make the TSC freeze during HLT. */
    rc = CFGMR3QueryBoolDef(pCfgHandle, "TSCNotTiedToHalt", &pVM->tm.s.fTSCNotTiedToHalt, false);
    if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query bool value \"TSCNotTiedToHalt\""));

    /* setup and report */
    if (pVM->tm.s.fTSCVirtualized)
        CPUMR3SetCR4Feature(pVM, X86_CR4_TSD, ~X86_CR4_TSD);
    else
        CPUMR3SetCR4Feature(pVM, 0, ~X86_CR4_TSD);
    LogRel(("TM: cTSCTicksPerSecond=%#RX64 (%'RU64) fTSCVirtualized=%RTbool fTSCUseRealTSC=%RTbool\n"
            "TM: fMaybeUseOffsettedHostTSC=%RTbool TSCTiedToExecution=%RTbool TSCNotTiedToHalt=%RTbool\n",
            pVM->tm.s.cTSCTicksPerSecond, pVM->tm.s.cTSCTicksPerSecond, pVM->tm.s.fTSCVirtualized, pVM->tm.s.fTSCUseRealTSC,
            pVM->tm.s.fMaybeUseOffsettedHostTSC, pVM->tm.s.fTSCTiedToExecution, pVM->tm.s.fTSCNotTiedToHalt));

    /*
     * Configure the timer synchronous virtual time.
     */
    /** @cfgm{TM/ScheduleSlack, uint32_t, ns, 0, UINT32_MAX, 100000}
     * Scheduling slack when processing timers. */
    rc = CFGMR3QueryU32(pCfgHandle, "ScheduleSlack", &pVM->tm.s.u32VirtualSyncScheduleSlack);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        pVM->tm.s.u32VirtualSyncScheduleSlack = 100000; /* 0.100ms (ASSUMES virtual time is nanoseconds) */
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query 32-bit integer value \"ScheduleSlack\""));

    /** @cfgm{TM/CatchUpStopThreshold, uint64_t, ns, 0, UINT64_MAX, 500000}
     * When to stop a catch-up, considering it successful. */
    rc = CFGMR3QueryU64(pCfgHandle, "CatchUpStopThreshold", &pVM->tm.s.u64VirtualSyncCatchUpStopThreshold);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        pVM->tm.s.u64VirtualSyncCatchUpStopThreshold = 500000; /* 0.5ms */
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query 64-bit integer value \"CatchUpStopThreshold\""));

    /** @cfgm{TM/CatchUpGiveUpThreshold, uint64_t, ns, 0, UINT64_MAX, 60000000000}
     * When to give up a catch-up attempt. */
    rc = CFGMR3QueryU64(pCfgHandle, "CatchUpGiveUpThreshold", &pVM->tm.s.u64VirtualSyncCatchUpGiveUpThreshold);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        pVM->tm.s.u64VirtualSyncCatchUpGiveUpThreshold = UINT64_C(60000000000); /* 60 sec */
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query 64-bit integer value \"CatchUpGiveUpThreshold\""));


    /** @cfgm{TM/CatchUpPrecentage[0..9], uint32_t, %, 1, 2000, various}
     * The catch-up percent for a given period. */
    /** @cfgm{TM/CatchUpStartThreshold[0..9], uint64_t, ns, 0, UINT64_MAX,
     * The catch-up period threshold, or if you like, when a period starts. */
#define TM_CFG_PERIOD(iPeriod, DefStart, DefPct) \
    do \
    { \
        uint64_t u64; \
        rc = CFGMR3QueryU64(pCfgHandle, "CatchUpStartThreshold" #iPeriod, &u64); \
        if (rc == VERR_CFGM_VALUE_NOT_FOUND) \
            u64 = UINT64_C(DefStart); \
        else if (RT_FAILURE(rc)) \
            return VMSetError(pVM, rc, RT_SRC_POS, N_("Configuration error: Failed to query 64-bit integer value \"CatchUpStartThreshold" #iPeriod "\"")); \
        if (    (iPeriod > 0 && u64 <= pVM->tm.s.aVirtualSyncCatchUpPeriods[iPeriod - 1].u64Start) \
            ||  u64 >= pVM->tm.s.u64VirtualSyncCatchUpGiveUpThreshold) \
            return VMSetError(pVM, VERR_INVALID_PARAMETER, RT_SRC_POS, N_("Configuration error: Invalid start of period #" #iPeriod ": %'RU64"), u64); \
        pVM->tm.s.aVirtualSyncCatchUpPeriods[iPeriod].u64Start = u64; \
        rc = CFGMR3QueryU32(pCfgHandle, "CatchUpPrecentage" #iPeriod, &pVM->tm.s.aVirtualSyncCatchUpPeriods[iPeriod].u32Percentage); \
        if (rc == VERR_CFGM_VALUE_NOT_FOUND) \
            pVM->tm.s.aVirtualSyncCatchUpPeriods[iPeriod].u32Percentage = (DefPct); \
        else if (RT_FAILURE(rc)) \
            return VMSetError(pVM, rc, RT_SRC_POS, N_("Configuration error: Failed to query 32-bit integer value \"CatchUpPrecentage" #iPeriod "\"")); \
    } while (0)
    /* This needs more tuning. Not sure if we really need so many periods and need to be so gentle. */
    TM_CFG_PERIOD(0,     750000,   5); /* 0.75ms at 1.05x */
    TM_CFG_PERIOD(1,    1500000,  10); /* 1.50ms at 1.10x */
    TM_CFG_PERIOD(2,    8000000,  25); /* 8ms at 1.25x */
    TM_CFG_PERIOD(3,   30000000,  50); /* 30ms at 1.50x */
    TM_CFG_PERIOD(4,   75000000,  75); /* 75ms at 1.75x */
    TM_CFG_PERIOD(5,  175000000, 100); /* 175ms at 2x */
    TM_CFG_PERIOD(6,  500000000, 200); /* 500ms at 3x */
    TM_CFG_PERIOD(7, 3000000000, 300); /* 3s at 4x */
    TM_CFG_PERIOD(8,30000000000, 400); /* 30s at 5x */
    TM_CFG_PERIOD(9,55000000000, 500); /* 55s at 6x */
    AssertCompile(RT_ELEMENTS(pVM->tm.s.aVirtualSyncCatchUpPeriods) == 10);
#undef TM_CFG_PERIOD
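    /* Reading the table above: e.g. TM_CFG_PERIOD(2, 8000000, 25) means that
       once the virtual sync clock lags 8 ms or more, it runs 25% faster
       (1.25x) until a higher period applies or the lag drops below the
       CatchUpStopThreshold configured above. */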

    /*
     * Configure real world time (UTC).
     */
    /** @cfgm{TM/UTCOffset, int64_t, ns, INT64_MIN, INT64_MAX, 0}
     * The UTC offset. This is used to put the guest back or forwards in time. */
    rc = CFGMR3QueryS64(pCfgHandle, "UTCOffset", &pVM->tm.s.offUTC);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        pVM->tm.s.offUTC = 0; /* ns */
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query 64-bit integer value \"UTCOffset\""));

    /*
     * Setup the warp drive.
     */
    /** @cfgm{TM/WarpDrivePercentage, uint32_t, %, 2, 20000, 100}
     * The warp drive percentage, 100% is normal speed. This is used to speed up
     * or slow down the virtual clock, which can be useful for fast forwarding
     * boring periods during tests. */
    rc = CFGMR3QueryU32(pCfgHandle, "WarpDrivePercentage", &pVM->tm.s.u32VirtualWarpDrivePercentage);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        rc = CFGMR3QueryU32(CFGMR3GetRoot(pVM), "WarpDrivePercentage", &pVM->tm.s.u32VirtualWarpDrivePercentage); /* legacy */
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        pVM->tm.s.u32VirtualWarpDrivePercentage = 100;
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query uint32_t value \"WarpDrivePercentage\""));
    else if (   pVM->tm.s.u32VirtualWarpDrivePercentage < 2
             || pVM->tm.s.u32VirtualWarpDrivePercentage > 20000)
        return VMSetError(pVM, VERR_INVALID_PARAMETER, RT_SRC_POS,
                          N_("Configuration error: \"WarpDrivePercentage\" = %RI32 is not in the range 2..20000"),
                          pVM->tm.s.u32VirtualWarpDrivePercentage);
    pVM->tm.s.fVirtualWarpDrive = pVM->tm.s.u32VirtualWarpDrivePercentage != 100;
    if (pVM->tm.s.fVirtualWarpDrive)
        LogRel(("TM: u32VirtualWarpDrivePercentage=%RI32\n", pVM->tm.s.u32VirtualWarpDrivePercentage));

    /*
     * Gather the Host Hz configuration values.
     */
    rc = CFGMR3QueryU32Def(pCfgHandle, "HostHzMax", &pVM->tm.s.cHostHzMax, 20000);
    if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query uint32_t value \"HostHzMax\""));

    rc = CFGMR3QueryU32Def(pCfgHandle, "HostHzFudgeFactorTimerCpu", &pVM->tm.s.cPctHostHzFudgeFactorTimerCpu, 111);
    if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query uint32_t value \"HostHzFudgeFactorTimerCpu\""));

    rc = CFGMR3QueryU32Def(pCfgHandle, "HostHzFudgeFactorOtherCpu", &pVM->tm.s.cPctHostHzFudgeFactorOtherCpu, 110);
    if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query uint32_t value \"HostHzFudgeFactorOtherCpu\""));

    rc = CFGMR3QueryU32Def(pCfgHandle, "HostHzFudgeFactorCatchUp100", &pVM->tm.s.cPctHostHzFudgeFactorCatchUp100, 300);
    if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query uint32_t value \"HostHzFudgeFactorCatchUp100\""));

    rc = CFGMR3QueryU32Def(pCfgHandle, "HostHzFudgeFactorCatchUp200", &pVM->tm.s.cPctHostHzFudgeFactorCatchUp200, 250);
    if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query uint32_t value \"HostHzFudgeFactorCatchUp200\""));

    rc = CFGMR3QueryU32Def(pCfgHandle, "HostHzFudgeFactorCatchUp400", &pVM->tm.s.cPctHostHzFudgeFactorCatchUp400, 200);
    if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query uint32_t value \"HostHzFudgeFactorCatchUp400\""));

    /*
     * Start the timer (guard against REM not yielding).
     */
    /** @cfgm{TM/TimerMillies, uint32_t, ms, 1, 1000, 10}
     * The watchdog timer interval. */
    uint32_t u32Millies;
    rc = CFGMR3QueryU32(pCfgHandle, "TimerMillies", &u32Millies);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        u32Millies = 10;
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query uint32_t value \"TimerMillies\""));
    rc = RTTimerCreate(&pVM->tm.s.pTimer, u32Millies, tmR3TimerCallback, pVM);
    if (RT_FAILURE(rc))
    {
        AssertMsgFailed(("Failed to create timer, u32Millies=%d rc=%Rrc.\n", u32Millies, rc));
        return rc;
    }
    Log(("TM: Created timer %p firing every %d milliseconds\n", pVM->tm.s.pTimer, u32Millies));
    pVM->tm.s.u32TimerMillies = u32Millies;

    /*
     * Register saved state.
     */
    rc = SSMR3RegisterInternal(pVM, "tm", 1, TM_SAVED_STATE_VERSION, sizeof(uint64_t) * 8,
                               NULL, NULL, NULL,
                               NULL, tmR3Save, NULL,
                               NULL, tmR3Load, NULL);
    if (RT_FAILURE(rc))
        return rc;

    /*
     * Register statistics.
     */
    STAM_REL_REG_USED(pVM,(void*)&pVM->tm.s.VirtualGetRawDataR3.c1nsSteps,STAMTYPE_U32, "/TM/R3/1nsSteps", STAMUNIT_OCCURENCES, "Virtual time 1ns steps (due to TSC / GIP variations).");
    STAM_REL_REG_USED(pVM,(void*)&pVM->tm.s.VirtualGetRawDataR3.cBadPrev, STAMTYPE_U32, "/TM/R3/cBadPrev", STAMUNIT_OCCURENCES, "Times the previous virtual time was considered erratic (shouldn't ever happen).");
    STAM_REL_REG_USED(pVM,(void*)&pVM->tm.s.VirtualGetRawDataR0.c1nsSteps,STAMTYPE_U32, "/TM/R0/1nsSteps", STAMUNIT_OCCURENCES, "Virtual time 1ns steps (due to TSC / GIP variations).");
    STAM_REL_REG_USED(pVM,(void*)&pVM->tm.s.VirtualGetRawDataR0.cBadPrev, STAMTYPE_U32, "/TM/R0/cBadPrev", STAMUNIT_OCCURENCES, "Times the previous virtual time was considered erratic (shouldn't ever happen).");
    STAM_REL_REG_USED(pVM,(void*)&pVM->tm.s.VirtualGetRawDataRC.c1nsSteps,STAMTYPE_U32, "/TM/RC/1nsSteps", STAMUNIT_OCCURENCES, "Virtual time 1ns steps (due to TSC / GIP variations).");
    STAM_REL_REG_USED(pVM,(void*)&pVM->tm.s.VirtualGetRawDataRC.cBadPrev, STAMTYPE_U32, "/TM/RC/cBadPrev", STAMUNIT_OCCURENCES, "Times the previous virtual time was considered erratic (shouldn't ever happen).");
    STAM_REL_REG(     pVM,(void*)&pVM->tm.s.offVirtualSync,               STAMTYPE_U64, "/TM/VirtualSync/CurrentOffset", STAMUNIT_NS, "The current offset. (subtract GivenUp to get the lag)");
    STAM_REL_REG_USED(pVM,(void*)&pVM->tm.s.offVirtualSyncGivenUp,        STAMTYPE_U64, "/TM/VirtualSync/GivenUp", STAMUNIT_NS, "Nanoseconds of the 'CurrentOffset' that's been given up and won't ever be attempted caught up with.");
    STAM_REL_REG(     pVM,(void*)&pVM->tm.s.uMaxHzHint,                   STAMTYPE_U32, "/TM/MaxHzHint", STAMUNIT_HZ, "Max guest timer frequency hint.");

#ifdef VBOX_WITH_STATISTICS
    STAM_REG_USED(pVM,(void *)&pVM->tm.s.VirtualGetRawDataR3.cExpired,    STAMTYPE_U32, "/TM/R3/cExpired", STAMUNIT_OCCURENCES, "Times the TSC interval expired (overlaps 1ns steps).");
    STAM_REG_USED(pVM,(void *)&pVM->tm.s.VirtualGetRawDataR3.cUpdateRaces,STAMTYPE_U32, "/TM/R3/cUpdateRaces", STAMUNIT_OCCURENCES, "Thread races when updating the previous timestamp.");
    STAM_REG_USED(pVM,(void *)&pVM->tm.s.VirtualGetRawDataR0.cExpired,    STAMTYPE_U32, "/TM/R0/cExpired", STAMUNIT_OCCURENCES, "Times the TSC interval expired (overlaps 1ns steps).");
    STAM_REG_USED(pVM,(void *)&pVM->tm.s.VirtualGetRawDataR0.cUpdateRaces,STAMTYPE_U32, "/TM/R0/cUpdateRaces", STAMUNIT_OCCURENCES, "Thread races when updating the previous timestamp.");
    STAM_REG_USED(pVM,(void *)&pVM->tm.s.VirtualGetRawDataRC.cExpired,    STAMTYPE_U32, "/TM/RC/cExpired", STAMUNIT_OCCURENCES, "Times the TSC interval expired (overlaps 1ns steps).");
    STAM_REG_USED(pVM,(void *)&pVM->tm.s.VirtualGetRawDataRC.cUpdateRaces,STAMTYPE_U32, "/TM/RC/cUpdateRaces", STAMUNIT_OCCURENCES, "Thread races when updating the previous timestamp.");
    STAM_REG(pVM, &pVM->tm.s.StatDoQueues,                        STAMTYPE_PROFILE,     "/TM/DoQueues", STAMUNIT_TICKS_PER_CALL, "Profiling timer TMR3TimerQueuesDo.");
    STAM_REG(pVM, &pVM->tm.s.aStatDoQueues[TMCLOCK_VIRTUAL],      STAMTYPE_PROFILE_ADV, "/TM/DoQueues/Virtual", STAMUNIT_TICKS_PER_CALL, "Time spent on the virtual clock queue.");
    STAM_REG(pVM, &pVM->tm.s.aStatDoQueues[TMCLOCK_VIRTUAL_SYNC], STAMTYPE_PROFILE_ADV, "/TM/DoQueues/VirtualSync", STAMUNIT_TICKS_PER_CALL, "Time spent on the virtual sync clock queue.");
    STAM_REG(pVM, &pVM->tm.s.aStatDoQueues[TMCLOCK_REAL],         STAMTYPE_PROFILE_ADV, "/TM/DoQueues/Real", STAMUNIT_TICKS_PER_CALL, "Time spent on the real clock queue.");

    STAM_REG(pVM, &pVM->tm.s.StatPoll,                       STAMTYPE_COUNTER, "/TM/Poll", STAMUNIT_OCCURENCES, "TMTimerPoll calls.");
    STAM_REG(pVM, &pVM->tm.s.StatPollAlreadySet,             STAMTYPE_COUNTER, "/TM/Poll/AlreadySet", STAMUNIT_OCCURENCES, "TMTimerPoll calls where the FF was already set.");
    STAM_REG(pVM, &pVM->tm.s.StatPollELoop,                  STAMTYPE_COUNTER, "/TM/Poll/ELoop", STAMUNIT_OCCURENCES, "Times TMTimerPoll has given up getting a consistent virtual sync data set.");
    STAM_REG(pVM, &pVM->tm.s.StatPollMiss,                   STAMTYPE_COUNTER, "/TM/Poll/Miss", STAMUNIT_OCCURENCES, "TMTimerPoll calls where nothing had expired.");
    STAM_REG(pVM, &pVM->tm.s.StatPollRunning,                STAMTYPE_COUNTER, "/TM/Poll/Running", STAMUNIT_OCCURENCES, "TMTimerPoll calls where the queues were being run.");
    STAM_REG(pVM, &pVM->tm.s.StatPollSimple,                 STAMTYPE_COUNTER, "/TM/Poll/Simple", STAMUNIT_OCCURENCES, "TMTimerPoll calls where we could take the simple path.");
    STAM_REG(pVM, &pVM->tm.s.StatPollVirtual,                STAMTYPE_COUNTER, "/TM/Poll/HitsVirtual", STAMUNIT_OCCURENCES, "The number of times TMTimerPoll found an expired TMCLOCK_VIRTUAL queue.");
    STAM_REG(pVM, &pVM->tm.s.StatPollVirtualSync,            STAMTYPE_COUNTER, "/TM/Poll/HitsVirtualSync", STAMUNIT_OCCURENCES, "The number of times TMTimerPoll found an expired TMCLOCK_VIRTUAL_SYNC queue.");

    STAM_REG(pVM, &pVM->tm.s.StatPostponedR3,                STAMTYPE_COUNTER, "/TM/PostponedR3", STAMUNIT_OCCURENCES, "Postponed due to unschedulable state, in ring-3.");
    STAM_REG(pVM, &pVM->tm.s.StatPostponedRZ,                STAMTYPE_COUNTER, "/TM/PostponedRZ", STAMUNIT_OCCURENCES, "Postponed due to unschedulable state, in ring-0 / RC.");

    STAM_REG(pVM, &pVM->tm.s.StatScheduleOneR3,              STAMTYPE_PROFILE, "/TM/ScheduleOneR3", STAMUNIT_TICKS_PER_CALL, "Profiling the scheduling of one queue during a TMTimer* call in EMT.");
    STAM_REG(pVM, &pVM->tm.s.StatScheduleOneRZ,              STAMTYPE_PROFILE, "/TM/ScheduleOneRZ", STAMUNIT_TICKS_PER_CALL, "Profiling the scheduling of one queue during a TMTimer* call in EMT.");
    STAM_REG(pVM, &pVM->tm.s.StatScheduleSetFF,              STAMTYPE_COUNTER, "/TM/ScheduleSetFF", STAMUNIT_OCCURENCES, "The number of times the timer FF was set instead of doing scheduling.");

    STAM_REG(pVM, &pVM->tm.s.StatTimerSet,                   STAMTYPE_COUNTER, "/TM/TimerSet", STAMUNIT_OCCURENCES, "Calls, except virtual sync timers");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetOpt,                STAMTYPE_COUNTER, "/TM/TimerSet/Opt", STAMUNIT_OCCURENCES, "Optimized path taken.");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetR3,                 STAMTYPE_PROFILE, "/TM/TimerSet/R3", STAMUNIT_TICKS_PER_CALL, "Profiling TMTimerSet calls made in ring-3.");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRZ,                 STAMTYPE_PROFILE, "/TM/TimerSet/RZ", STAMUNIT_TICKS_PER_CALL, "Profiling TMTimerSet calls made in ring-0 / RC.");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetStActive,           STAMTYPE_COUNTER, "/TM/TimerSet/StActive", STAMUNIT_OCCURENCES, "ACTIVE");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetStExpDeliver,       STAMTYPE_COUNTER, "/TM/TimerSet/StExpDeliver", STAMUNIT_OCCURENCES, "EXPIRED_DELIVER");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetStOther,            STAMTYPE_COUNTER, "/TM/TimerSet/StOther", STAMUNIT_OCCURENCES, "Other states");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetStPendStop,         STAMTYPE_COUNTER, "/TM/TimerSet/StPendStop", STAMUNIT_OCCURENCES, "PENDING_STOP");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetStPendStopSched,    STAMTYPE_COUNTER, "/TM/TimerSet/StPendStopSched", STAMUNIT_OCCURENCES, "PENDING_STOP_SCHEDULE");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetStPendSched,        STAMTYPE_COUNTER, "/TM/TimerSet/StPendSched", STAMUNIT_OCCURENCES, "PENDING_SCHEDULE");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetStPendResched,      STAMTYPE_COUNTER, "/TM/TimerSet/StPendResched", STAMUNIT_OCCURENCES, "PENDING_RESCHEDULE");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetStStopped,          STAMTYPE_COUNTER, "/TM/TimerSet/StStopped", STAMUNIT_OCCURENCES, "STOPPED");

    STAM_REG(pVM, &pVM->tm.s.StatTimerSetVs,                 STAMTYPE_COUNTER, "/TM/TimerSetVs", STAMUNIT_OCCURENCES, "TMTimerSet calls on virtual sync timers");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetVsR3,               STAMTYPE_PROFILE, "/TM/TimerSetVs/R3", STAMUNIT_TICKS_PER_CALL, "Profiling TMTimerSet calls made in ring-3 on virtual sync timers.");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetVsRZ,               STAMTYPE_PROFILE, "/TM/TimerSetVs/RZ", STAMUNIT_TICKS_PER_CALL, "Profiling TMTimerSet calls made in ring-0 / RC on virtual sync timers.");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetVsStActive,         STAMTYPE_COUNTER, "/TM/TimerSetVs/StActive", STAMUNIT_OCCURENCES, "ACTIVE");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetVsStExpDeliver,     STAMTYPE_COUNTER, "/TM/TimerSetVs/StExpDeliver", STAMUNIT_OCCURENCES, "EXPIRED_DELIVER");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetVsStStopped,        STAMTYPE_COUNTER, "/TM/TimerSetVs/StStopped", STAMUNIT_OCCURENCES, "STOPPED");

    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelative,           STAMTYPE_COUNTER, "/TM/TimerSetRelative", STAMUNIT_OCCURENCES, "Calls, except virtual sync timers");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeOpt,        STAMTYPE_COUNTER, "/TM/TimerSetRelative/Opt", STAMUNIT_OCCURENCES, "Optimized path taken.");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeR3,         STAMTYPE_PROFILE, "/TM/TimerSetRelative/R3", STAMUNIT_TICKS_PER_CALL, "Profiling TMTimerSetRelative calls made in ring-3 (sans virtual sync).");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeRZ,         STAMTYPE_PROFILE, "/TM/TimerSetRelative/RZ", STAMUNIT_TICKS_PER_CALL, "Profiling TMTimerSetRelative calls made in ring-0 / RC (sans virtual sync).");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeStActive,   STAMTYPE_COUNTER, "/TM/TimerSetRelative/StActive", STAMUNIT_OCCURENCES, "ACTIVE");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeStExpDeliver,    STAMTYPE_COUNTER, "/TM/TimerSetRelative/StExpDeliver", STAMUNIT_OCCURENCES, "EXPIRED_DELIVER");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeStOther,         STAMTYPE_COUNTER, "/TM/TimerSetRelative/StOther", STAMUNIT_OCCURENCES, "Other states");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeStPendStop,      STAMTYPE_COUNTER, "/TM/TimerSetRelative/StPendStop", STAMUNIT_OCCURENCES, "PENDING_STOP");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeStPendStopSched, STAMTYPE_COUNTER, "/TM/TimerSetRelative/StPendStopSched",STAMUNIT_OCCURENCES, "PENDING_STOP_SCHEDULE");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeStPendSched,     STAMTYPE_COUNTER, "/TM/TimerSetRelative/StPendSched", STAMUNIT_OCCURENCES, "PENDING_SCHEDULE");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeStPendResched,   STAMTYPE_COUNTER, "/TM/TimerSetRelative/StPendResched", STAMUNIT_OCCURENCES, "PENDING_RESCHEDULE");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeStStopped,       STAMTYPE_COUNTER, "/TM/TimerSetRelative/StStopped", STAMUNIT_OCCURENCES, "STOPPED");

    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeVs,              STAMTYPE_COUNTER, "/TM/TimerSetRelativeVs", STAMUNIT_OCCURENCES, "TMTimerSetRelative calls on virtual sync timers");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeVsR3,            STAMTYPE_PROFILE, "/TM/TimerSetRelativeVs/R3", STAMUNIT_TICKS_PER_CALL, "Profiling TMTimerSetRelative calls made in ring-3 on virtual sync timers.");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeVsRZ,            STAMTYPE_PROFILE, "/TM/TimerSetRelativeVs/RZ", STAMUNIT_TICKS_PER_CALL, "Profiling TMTimerSetRelative calls made in ring-0 / RC on virtual sync timers.");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeVsStActive,      STAMTYPE_COUNTER, "/TM/TimerSetRelativeVs/StActive", STAMUNIT_OCCURENCES, "ACTIVE");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeVsStExpDeliver,  STAMTYPE_COUNTER, "/TM/TimerSetRelativeVs/StExpDeliver", STAMUNIT_OCCURENCES, "EXPIRED_DELIVER");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeVsStStopped,     STAMTYPE_COUNTER, "/TM/TimerSetRelativeVs/StStopped", STAMUNIT_OCCURENCES, "STOPPED");

    STAM_REG(pVM, &pVM->tm.s.StatTimerStopR3,                STAMTYPE_PROFILE, "/TM/TimerStopR3", STAMUNIT_TICKS_PER_CALL, "Profiling TMTimerStop calls made in ring-3.");
    STAM_REG(pVM, &pVM->tm.s.StatTimerStopRZ,                STAMTYPE_PROFILE, "/TM/TimerStopRZ", STAMUNIT_TICKS_PER_CALL, "Profiling TMTimerStop calls made in ring-0 / RC.");

    STAM_REG(pVM, &pVM->tm.s.StatVirtualGet,                 STAMTYPE_COUNTER, "/TM/VirtualGet", STAMUNIT_OCCURENCES, "The number of times TMTimerGet was called when the clock was running.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualGetSetFF,            STAMTYPE_COUNTER, "/TM/VirtualGetSetFF", STAMUNIT_OCCURENCES, "Times we set the FF when calling TMTimerGet.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncGet,             STAMTYPE_COUNTER, "/TM/VirtualSyncGet", STAMUNIT_OCCURENCES, "The number of times tmVirtualSyncGetEx was called.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncGetAdjLast,      STAMTYPE_COUNTER, "/TM/VirtualSyncGet/AdjLast", STAMUNIT_OCCURENCES, "Times we've adjusted against the last returned time stamp.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncGetELoop,        STAMTYPE_COUNTER, "/TM/VirtualSyncGet/ELoop", STAMUNIT_OCCURENCES, "Times tmVirtualSyncGetEx has given up getting a consistent virtual sync data set.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncGetExpired,      STAMTYPE_COUNTER, "/TM/VirtualSyncGet/Expired", STAMUNIT_OCCURENCES, "Times tmVirtualSyncGetEx encountered an expired timer stopping the clock.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncGetLocked,       STAMTYPE_COUNTER, "/TM/VirtualSyncGet/Locked", STAMUNIT_OCCURENCES, "Times we successfully acquired the lock in tmVirtualSyncGetEx.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncGetLockless,     STAMTYPE_COUNTER, "/TM/VirtualSyncGet/Lockless", STAMUNIT_OCCURENCES, "Times tmVirtualSyncGetEx returned without needing to take the lock.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncGetSetFF,        STAMTYPE_COUNTER, "/TM/VirtualSyncGet/SetFF", STAMUNIT_OCCURENCES, "Times we set the FF when calling tmVirtualSyncGetEx.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualPause,               STAMTYPE_COUNTER, "/TM/VirtualPause", STAMUNIT_OCCURENCES, "The number of times TMR3TimerPause was called.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualResume,              STAMTYPE_COUNTER, "/TM/VirtualResume", STAMUNIT_OCCURENCES, "The number of times TMR3TimerResume was called.");

    STAM_REG(pVM, &pVM->tm.s.StatTimerCallbackSetFF,         STAMTYPE_COUNTER, "/TM/CallbackSetFF", STAMUNIT_OCCURENCES, "The number of times the timer callback set FF.");

    STAM_REG(pVM, &pVM->tm.s.StatTSCCatchupLE010,            STAMTYPE_COUNTER, "/TM/TSC/Intercept/CatchupLE010", STAMUNIT_OCCURENCES, "In catch-up mode, 10% or lower.");
    STAM_REG(pVM, &pVM->tm.s.StatTSCCatchupLE025,            STAMTYPE_COUNTER, "/TM/TSC/Intercept/CatchupLE025", STAMUNIT_OCCURENCES, "In catch-up mode, 25%-11%.");
    STAM_REG(pVM, &pVM->tm.s.StatTSCCatchupLE100,            STAMTYPE_COUNTER, "/TM/TSC/Intercept/CatchupLE100", STAMUNIT_OCCURENCES, "In catch-up mode, 100%-26%.");
    STAM_REG(pVM, &pVM->tm.s.StatTSCCatchupOther,            STAMTYPE_COUNTER, "/TM/TSC/Intercept/CatchupOther", STAMUNIT_OCCURENCES, "In catch-up mode, > 100%.");
    STAM_REG(pVM, &pVM->tm.s.StatTSCNotFixed,                STAMTYPE_COUNTER, "/TM/TSC/Intercept/NotFixed", STAMUNIT_OCCURENCES, "TSC is not fixed, it may run at variable speed.");
    STAM_REG(pVM, &pVM->tm.s.StatTSCNotTicking,              STAMTYPE_COUNTER, "/TM/TSC/Intercept/NotTicking", STAMUNIT_OCCURENCES, "TSC is not ticking.");
    STAM_REG(pVM, &pVM->tm.s.StatTSCSyncNotTicking,          STAMTYPE_COUNTER, "/TM/TSC/Intercept/SyncNotTicking", STAMUNIT_OCCURENCES, "VirtualSync isn't ticking.");
    STAM_REG(pVM, &pVM->tm.s.StatTSCWarp,                    STAMTYPE_COUNTER, "/TM/TSC/Intercept/Warp", STAMUNIT_OCCURENCES, "Warpdrive is active.");
    STAM_REG(pVM, &pVM->tm.s.StatTSCSet,                     STAMTYPE_COUNTER, "/TM/TSC/Sets", STAMUNIT_OCCURENCES, "Calls to TMCpuTickSet.");
    STAM_REG(pVM, &pVM->tm.s.StatTSCUnderflow,               STAMTYPE_COUNTER, "/TM/TSC/Underflow", STAMUNIT_OCCURENCES, "TSC underflow; corrected with last seen value.");
#endif /* VBOX_WITH_STATISTICS */

    for (VMCPUID i = 0; i < pVM->cCpus; i++)
    {
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.offTSCRawSrc, STAMTYPE_U64, STAMVISIBILITY_ALWAYS, STAMUNIT_TICKS, "TSC offset relative to the raw source", "/TM/TSC/offCPU%u", i);
#ifndef VBOX_WITHOUT_NS_ACCOUNTING
# if defined(VBOX_WITH_STATISTICS) || defined(VBOX_WITH_NS_ACCOUNTING_STATS)
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.StatNsTotal,     STAMTYPE_COUNTER, STAMVISIBILITY_ALWAYS, STAMUNIT_NS, "Resettable: Total CPU run time.", "/TM/CPU/%02u", i);
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.StatNsExecuting, STAMTYPE_PROFILE, STAMVISIBILITY_ALWAYS, STAMUNIT_NS_PER_OCCURENCE, "Resettable: Time spent executing guest code.", "/TM/CPU/%02u/PrfExecuting", i);
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.StatNsExecLong,  STAMTYPE_PROFILE, STAMVISIBILITY_ALWAYS, STAMUNIT_NS_PER_OCCURENCE, "Resettable: Time spent executing guest code - long hauls.", "/TM/CPU/%02u/PrfExecLong", i);
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.StatNsExecShort, STAMTYPE_PROFILE, STAMVISIBILITY_ALWAYS, STAMUNIT_NS_PER_OCCURENCE, "Resettable: Time spent executing guest code - short stretches.", "/TM/CPU/%02u/PrfExecShort", i);
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.StatNsExecTiny,  STAMTYPE_PROFILE, STAMVISIBILITY_ALWAYS, STAMUNIT_NS_PER_OCCURENCE, "Resettable: Time spent executing guest code - tiny bits.", "/TM/CPU/%02u/PrfExecTiny", i);
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.StatNsHalted,    STAMTYPE_PROFILE, STAMVISIBILITY_ALWAYS, STAMUNIT_NS_PER_OCCURENCE, "Resettable: Time spent halted.", "/TM/CPU/%02u/PrfHalted", i);
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.StatNsOther,     STAMTYPE_PROFILE, STAMVISIBILITY_ALWAYS, STAMUNIT_NS_PER_OCCURENCE, "Resettable: Time spent in the VMM or preempted.", "/TM/CPU/%02u/PrfOther", i);
# endif
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.cNsTotal,            STAMTYPE_U64, STAMVISIBILITY_ALWAYS, STAMUNIT_NS, "Total CPU run time.", "/TM/CPU/%02u/cNsTotal", i);
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.cNsExecuting,        STAMTYPE_U64, STAMVISIBILITY_ALWAYS, STAMUNIT_NS, "Time spent executing guest code.", "/TM/CPU/%02u/cNsExecuting", i);
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.cNsHalted,           STAMTYPE_U64, STAMVISIBILITY_ALWAYS, STAMUNIT_NS, "Time spent halted.", "/TM/CPU/%02u/cNsHalted", i);
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.cNsOther,            STAMTYPE_U64, STAMVISIBILITY_ALWAYS, STAMUNIT_NS, "Time spent in the VMM or preempted.", "/TM/CPU/%02u/cNsOther", i);
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.cPeriodsExecuting,   STAMTYPE_U64, STAMVISIBILITY_ALWAYS, STAMUNIT_COUNT, "Times executed guest code.", "/TM/CPU/%02u/cPeriodsExecuting", i);
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.cPeriodsHalted,      STAMTYPE_U64, STAMVISIBILITY_ALWAYS, STAMUNIT_COUNT, "Times halted.", "/TM/CPU/%02u/cPeriodsHalted", i);
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.CpuLoad.cPctExecuting, STAMTYPE_U8, STAMVISIBILITY_ALWAYS, STAMUNIT_PCT, "Time spent executing guest code recently.", "/TM/CPU/%02u/pctExecuting", i);
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.CpuLoad.cPctHalted,    STAMTYPE_U8, STAMVISIBILITY_ALWAYS, STAMUNIT_PCT, "Time spent halted recently.", "/TM/CPU/%02u/pctHalted", i);
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.CpuLoad.cPctOther,     STAMTYPE_U8, STAMVISIBILITY_ALWAYS, STAMUNIT_PCT, "Time spent in the VMM or preempted recently.", "/TM/CPU/%02u/pctOther", i);
#endif
    }
#ifndef VBOX_WITHOUT_NS_ACCOUNTING
    STAMR3RegisterF(pVM, &pVM->tm.s.CpuLoad.cPctExecuting, STAMTYPE_U8, STAMVISIBILITY_ALWAYS, STAMUNIT_PCT, "Time spent executing guest code recently.", "/TM/CPU/pctExecuting");
    STAMR3RegisterF(pVM, &pVM->tm.s.CpuLoad.cPctHalted,    STAMTYPE_U8, STAMVISIBILITY_ALWAYS, STAMUNIT_PCT, "Time spent halted recently.", "/TM/CPU/pctHalted");
    STAMR3RegisterF(pVM, &pVM->tm.s.CpuLoad.cPctOther,     STAMTYPE_U8, STAMVISIBILITY_ALWAYS, STAMUNIT_PCT, "Time spent in the VMM or preempted recently.", "/TM/CPU/pctOther");
#endif

#ifdef VBOX_WITH_STATISTICS
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncCatchup,              STAMTYPE_PROFILE_ADV, "/TM/VirtualSync/CatchUp", STAMUNIT_TICKS_PER_OCCURENCE, "Counting and measuring the times spent catching up.");
    STAM_REG(pVM, (void *)&pVM->tm.s.fVirtualSyncCatchUp,                  STAMTYPE_U8, "/TM/VirtualSync/CatchUpActive", STAMUNIT_NONE, "Catch-Up active indicator.");
    STAM_REG(pVM, (void *)&pVM->tm.s.u32VirtualSyncCatchUpPercentage,     STAMTYPE_U32, "/TM/VirtualSync/CatchUpPercentage", STAMUNIT_PCT, "The catch-up percentage. (+100/100 to get clock multiplier)");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncFF,                       STAMTYPE_PROFILE, "/TM/VirtualSync/FF", STAMUNIT_TICKS_PER_OCCURENCE, "Time spent in TMR3VirtualSyncFF by all but the dedicated timer EMT.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncGiveUp,                   STAMTYPE_COUNTER, "/TM/VirtualSync/GiveUp", STAMUNIT_OCCURENCES, "Times the catch-up was abandoned.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncGiveUpBeforeStarting,     STAMTYPE_COUNTER, "/TM/VirtualSync/GiveUpBeforeStarting",STAMUNIT_OCCURENCES, "Times the catch-up was abandoned before even starting. (Typically debugging++.)");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncRun,                      STAMTYPE_COUNTER, "/TM/VirtualSync/Run", STAMUNIT_OCCURENCES, "Times the virtual sync timer queue was considered.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncRunRestart,               STAMTYPE_COUNTER, "/TM/VirtualSync/Run/Restarts", STAMUNIT_OCCURENCES, "Times the clock was restarted after a run.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncRunStop,                  STAMTYPE_COUNTER, "/TM/VirtualSync/Run/Stop", STAMUNIT_OCCURENCES, "Times the clock was stopped when calculating the current time before examining the timers.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncRunStoppedAlready,        STAMTYPE_COUNTER, "/TM/VirtualSync/Run/StoppedAlready", STAMUNIT_OCCURENCES, "Times the clock was already stopped elsewhere (TMVirtualSyncGet).");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncRunSlack,                 STAMTYPE_PROFILE, "/TM/VirtualSync/Run/Slack", STAMUNIT_NS_PER_OCCURENCE, "The scheduling slack. (Catch-up handed out when running timers.)");
    for (unsigned i = 0; i < RT_ELEMENTS(pVM->tm.s.aVirtualSyncCatchUpPeriods); i++)
    {
        STAMR3RegisterF(pVM, &pVM->tm.s.aVirtualSyncCatchUpPeriods[i].u32Percentage, STAMTYPE_U32, STAMVISIBILITY_ALWAYS, STAMUNIT_PCT, "The catch-up percentage.", "/TM/VirtualSync/Periods/%u", i);
        STAMR3RegisterF(pVM, &pVM->tm.s.aStatVirtualSyncCatchupAdjust[i],        STAMTYPE_COUNTER, STAMVISIBILITY_ALWAYS, STAMUNIT_OCCURENCES, "Times adjusted to this period.", "/TM/VirtualSync/Periods/%u/Adjust", i);
        STAMR3RegisterF(pVM, &pVM->tm.s.aStatVirtualSyncCatchupInitial[i],       STAMTYPE_COUNTER, STAMVISIBILITY_ALWAYS, STAMUNIT_OCCURENCES, "Times started in this period.", "/TM/VirtualSync/Periods/%u/Initial", i);
        STAMR3RegisterF(pVM, &pVM->tm.s.aVirtualSyncCatchUpPeriods[i].u64Start,      STAMTYPE_U64, STAMVISIBILITY_ALWAYS, STAMUNIT_NS, "Start of this period (lag).", "/TM/VirtualSync/Periods/%u/Start", i);
    }
#endif /* VBOX_WITH_STATISTICS */

    /*
     * Register info handlers.
     */
    DBGFR3InfoRegisterInternalEx(pVM, "timers",       "Dumps all timers. No arguments.",        tmR3TimerInfo,       DBGFINFO_FLAGS_RUN_ON_EMT);
    DBGFR3InfoRegisterInternalEx(pVM, "activetimers", "Dumps all active timers. No arguments.", tmR3TimerInfoActive, DBGFINFO_FLAGS_RUN_ON_EMT);
    DBGFR3InfoRegisterInternalEx(pVM, "clocks",       "Display the time of the various clocks.", tmR3InfoClocks,     DBGFINFO_FLAGS_RUN_ON_EMT);

    return VINF_SUCCESS;
}


/**
 * Checks if the host CPU has a fixed TSC frequency.
 *
 * @returns true if it has, false if it hasn't.
 *
 * @remark  This test doesn't bother with very old CPUs that don't do power
 *          management or any other stuff that might influence the TSC rate.
 *          This isn't currently relevant.
 */
static bool tmR3HasFixedTSC(PVM pVM)
{
    if (ASMHasCpuId())
    {
        uint32_t uEAX, uEBX, uECX, uEDX;

        if (CPUMGetHostCpuVendor(pVM) == CPUMCPUVENDOR_AMD)
        {
            /*
             * AuthenticAMD - Check for APM support and that TscInvariant is set.
             *
             * This test isn't correct with respect to fixed/non-fixed TSC and
             * older models, but this isn't relevant since the result is currently
             * only used for making a decision on AMD-V models.
             */
            ASMCpuId(0x80000000, &uEAX, &uEBX, &uECX, &uEDX);
            if (uEAX >= 0x80000007)
            {
                PSUPGLOBALINFOPAGE pGip = g_pSUPGlobalInfoPage;

                ASMCpuId(0x80000007, &uEAX, &uEBX, &uECX, &uEDX);
                if (    (uEDX & X86_CPUID_AMD_ADVPOWER_EDX_TSCINVAR) /* TscInvariant */
                    &&  pGip->u32Mode == SUPGIPMODE_SYNC_TSC /* no fixed tsc if the gip timer is in async mode */)
                    return true;
            }
        }
        else if (CPUMGetHostCpuVendor(pVM) == CPUMCPUVENDOR_INTEL)
        {
            /*
             * GenuineIntel - Check the model number.
             *
             * This test is lacking in the same way and for the same reasons
             * as the AMD test above.
             */
            ASMCpuId(1, &uEAX, &uEBX, &uECX, &uEDX);
            unsigned uModel  = (uEAX >> 4) & 0x0f;
            unsigned uFamily = (uEAX >> 8) & 0x0f;
            if (uFamily == 0x0f)
                uFamily += (uEAX >> 20) & 0xff;
            if (uFamily >= 0x06)
                uModel += ((uEAX >> 16) & 0x0f) << 4;
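            /* Illustrative reading (example value, not from the original source):
               leaf 1 EAX=0x000006fb gives family 0x6, model 0xf, extended model
               0x0, so uModel = 0x0f (a Core 2), which passes the 0x0e test below. */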
            if (    (uFamily == 0x0f /*P4*/    && uModel >= 0x03)
                ||  (uFamily == 0x06 /*P2/P3*/ && uModel >= 0x0e))
                return true;
        }
    }
    return false;
}


/**
 * Calibrate the CPU tick.
 *
 * @returns Number of ticks per second.
 */
static uint64_t tmR3CalibrateTSC(PVM pVM)
{
    /*
     * Use the GIP when present.
     */
    uint64_t u64Hz = SUPGetCpuHzFromGIP(g_pSUPGlobalInfoPage);
    if (u64Hz != UINT64_MAX)
    {
        if (tmR3HasFixedTSC(pVM))
            /* Sleep a bit to get a more reliable CpuHz value. */
            RTThreadSleep(32);
        else
        {
            /* Spin for 40ms to try to push up the CPU frequency and get a more reliable CpuHz value. */
            const uint64_t u64 = RTTimeMilliTS();
            while ((RTTimeMilliTS() - u64) < 40 /*ms*/)
                /* nothing */;
        }

        u64Hz = SUPGetCpuHzFromGIP(g_pSUPGlobalInfoPage);
        if (u64Hz != UINT64_MAX)
            return u64Hz;
    }

    /* Call this once first to make sure it's initialized. */
    RTTimeNanoTS();

    /*
     * Yield the CPU to increase our chances of getting
     * a correct value.
     */
    RTThreadYield(); /* Try to avoid interruptions between TSC and NanoTS samplings. */
    static const unsigned s_auSleep[5] = { 50, 30, 30, 40, 40 };
    uint64_t au64Samples[5];
    unsigned i;
    for (i = 0; i < RT_ELEMENTS(au64Samples); i++)
    {
        RTMSINTERVAL cMillies;
        int cTries = 5;
        uint64_t u64Start = ASMReadTSC();
        uint64_t u64End;
        uint64_t StartTS = RTTimeNanoTS();
        uint64_t EndTS;
        do
        {
            RTThreadSleep(s_auSleep[i]);
            u64End = ASMReadTSC();
            EndTS  = RTTimeNanoTS();
            cMillies = (RTMSINTERVAL)((EndTS - StartTS + 500000) / 1000000);
        } while (   cMillies == 0 /* the sleep may be interrupted... */
                 || (cMillies < 20 && --cTries > 0));
        uint64_t u64Diff = u64End - u64Start;

        au64Samples[i] = (u64Diff * 1000) / cMillies;
        AssertMsg(cTries > 0, ("cMillies=%d i=%d\n", cMillies, i));
    }

    /*
     * Discard the highest and lowest results and calculate the average.
     */
    unsigned iHigh = 0;
    unsigned iLow  = 0;
    for (i = 1; i < RT_ELEMENTS(au64Samples); i++)
    {
        if (au64Samples[i] < au64Samples[iLow])
            iLow = i;
        if (au64Samples[i] > au64Samples[iHigh])
            iHigh = i;
    }
    au64Samples[iLow]  = 0;
    au64Samples[iHigh] = 0;
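
    /* Trimmed mean, a minimal illustration (values hypothetical): samples of
       2394, 2399, 2401, 2403 and 3120 MHz zero out 2394 and 3120 above, so the
       sum of the remaining three divided by 5 - 2 gives ~2401 MHz. */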

    u64Hz = au64Samples[0];
    for (i = 1; i < RT_ELEMENTS(au64Samples); i++)
        u64Hz += au64Samples[i];
    u64Hz /= RT_ELEMENTS(au64Samples) - 2;

    return u64Hz;
}


/**
 * Finalizes the TM initialization.
 *
 * @returns VBox status code.
 * @param   pVM         The VM to operate on.
 */
VMM_INT_DECL(int) TMR3InitFinalize(PVM pVM)
{
    int rc;

    /*
     * Resolve symbols.
     */
    rc = PDMR3LdrGetSymbolRC(pVM, NULL, "tmVirtualNanoTSBad", &pVM->tm.s.VirtualGetRawDataRC.pfnBad);
    AssertRCReturn(rc, rc);
    rc = PDMR3LdrGetSymbolRC(pVM, NULL, "tmVirtualNanoTSRediscover", &pVM->tm.s.VirtualGetRawDataRC.pfnRediscover);
    AssertRCReturn(rc, rc);
    if (pVM->tm.s.pfnVirtualGetRawR3 == RTTimeNanoTSLFenceSync)
        rc = PDMR3LdrGetSymbolRC(pVM, NULL, "RTTimeNanoTSLFenceSync", &pVM->tm.s.pfnVirtualGetRawRC);
    else if (pVM->tm.s.pfnVirtualGetRawR3 == RTTimeNanoTSLFenceAsync)
        rc = PDMR3LdrGetSymbolRC(pVM, NULL, "RTTimeNanoTSLFenceAsync", &pVM->tm.s.pfnVirtualGetRawRC);
    else if (pVM->tm.s.pfnVirtualGetRawR3 == RTTimeNanoTSLegacySync)
        rc = PDMR3LdrGetSymbolRC(pVM, NULL, "RTTimeNanoTSLegacySync", &pVM->tm.s.pfnVirtualGetRawRC);
    else if (pVM->tm.s.pfnVirtualGetRawR3 == RTTimeNanoTSLegacyAsync)
        rc = PDMR3LdrGetSymbolRC(pVM, NULL, "RTTimeNanoTSLegacyAsync", &pVM->tm.s.pfnVirtualGetRawRC);
    else
        AssertFatalFailed();
    AssertRCReturn(rc, rc);

    rc = PDMR3LdrGetSymbolR0(pVM, NULL, "tmVirtualNanoTSBad", &pVM->tm.s.VirtualGetRawDataR0.pfnBad);
    AssertRCReturn(rc, rc);
    rc = PDMR3LdrGetSymbolR0(pVM, NULL, "tmVirtualNanoTSRediscover", &pVM->tm.s.VirtualGetRawDataR0.pfnRediscover);
    AssertRCReturn(rc, rc);
    if (pVM->tm.s.pfnVirtualGetRawR3 == RTTimeNanoTSLFenceSync)
        rc = PDMR3LdrGetSymbolR0(pVM, NULL, "RTTimeNanoTSLFenceSync", &pVM->tm.s.pfnVirtualGetRawR0);
    else if (pVM->tm.s.pfnVirtualGetRawR3 == RTTimeNanoTSLFenceAsync)
        rc = PDMR3LdrGetSymbolR0(pVM, NULL, "RTTimeNanoTSLFenceAsync", &pVM->tm.s.pfnVirtualGetRawR0);
    else if (pVM->tm.s.pfnVirtualGetRawR3 == RTTimeNanoTSLegacySync)
        rc = PDMR3LdrGetSymbolR0(pVM, NULL, "RTTimeNanoTSLegacySync", &pVM->tm.s.pfnVirtualGetRawR0);
    else if (pVM->tm.s.pfnVirtualGetRawR3 == RTTimeNanoTSLegacyAsync)
        rc = PDMR3LdrGetSymbolR0(pVM, NULL, "RTTimeNanoTSLegacyAsync", &pVM->tm.s.pfnVirtualGetRawR0);
    else
        AssertFatalFailed();
    AssertRCReturn(rc, rc);

#ifndef VBOX_WITHOUT_NS_ACCOUNTING
    /*
     * Create a timer for refreshing the CPU load stats.
     */
    PTMTIMER pTimer;
    rc = TMR3TimerCreateInternal(pVM, TMCLOCK_REAL, tmR3CpuLoadTimer, NULL, "CPU Load Timer", &pTimer);
    if (RT_SUCCESS(rc))
        rc = TMTimerSetMillies(pTimer, 1000);
#endif

    return rc;
}


957/**
958 * Applies relocations to data and code managed by this
959 * component. This function will be called at init and
960 * whenever the VMM needs to relocate itself inside the GC.
961 *
962 * @param pVM The VM.
963 * @param offDelta Relocation delta relative to old location.
964 */
965VMM_INT_DECL(void) TMR3Relocate(PVM pVM, RTGCINTPTR offDelta)
966{
967 int rc;
968 LogFlow(("TMR3Relocate\n"));
969
970 pVM->tm.s.pvGIPRC = MMHyperR3ToRC(pVM, pVM->tm.s.pvGIPR3);
971 pVM->tm.s.paTimerQueuesRC = MMHyperR3ToRC(pVM, pVM->tm.s.paTimerQueuesR3);
972 pVM->tm.s.paTimerQueuesR0 = MMHyperR3ToR0(pVM, pVM->tm.s.paTimerQueuesR3);
973
974 pVM->tm.s.VirtualGetRawDataRC.pu64Prev = MMHyperR3ToRC(pVM, (void *)&pVM->tm.s.u64VirtualRawPrev);
975 AssertFatal(pVM->tm.s.VirtualGetRawDataRC.pu64Prev);
976 rc = PDMR3LdrGetSymbolRC(pVM, NULL, "tmVirtualNanoTSBad", &pVM->tm.s.VirtualGetRawDataRC.pfnBad);
977 AssertFatalRC(rc);
978 rc = PDMR3LdrGetSymbolRC(pVM, NULL, "tmVirtualNanoTSRediscover", &pVM->tm.s.VirtualGetRawDataRC.pfnRediscover);
979 AssertFatalRC(rc);
980
981 if (pVM->tm.s.pfnVirtualGetRawR3 == RTTimeNanoTSLFenceSync)
982 rc = PDMR3LdrGetSymbolRC(pVM, NULL, "RTTimeNanoTSLFenceSync", &pVM->tm.s.pfnVirtualGetRawRC);
983 else if (pVM->tm.s.pfnVirtualGetRawR3 == RTTimeNanoTSLFenceAsync)
984 rc = PDMR3LdrGetSymbolRC(pVM, NULL, "RTTimeNanoTSLFenceAsync", &pVM->tm.s.pfnVirtualGetRawRC);
985 else if (pVM->tm.s.pfnVirtualGetRawR3 == RTTimeNanoTSLegacySync)
986 rc = PDMR3LdrGetSymbolRC(pVM, NULL, "RTTimeNanoTSLegacySync", &pVM->tm.s.pfnVirtualGetRawRC);
987 else if (pVM->tm.s.pfnVirtualGetRawR3 == RTTimeNanoTSLegacyAsync)
988 rc = PDMR3LdrGetSymbolRC(pVM, NULL, "RTTimeNanoTSLegacyAsync", &pVM->tm.s.pfnVirtualGetRawRC);
989 else
990 AssertFatalFailed();
991 AssertFatalRC(rc);
992
993 /*
994 * Iterate the timers updating the pVMRC pointers.
995 */
996 for (PTMTIMER pTimer = pVM->tm.s.pCreated; pTimer; pTimer = pTimer->pBigNext)
997 {
998 pTimer->pVMRC = pVM->pVMRC;
999 pTimer->pVMR0 = pVM->pVMR0;
1000 }
1001}
1002
1003
1004/**
1005 * Terminates the TM.
1006 *
1007 * Termination means cleaning up and freeing all resources;
1008 * the VM itself is at this point powered off or suspended.
1009 *
1010 * @returns VBox status code.
1011 * @param pVM The VM to operate on.
1012 */
1013VMM_INT_DECL(int) TMR3Term(PVM pVM)
1014{
1015 AssertMsg(pVM->tm.s.offVM, ("bad init order!\n"));
1016 if (pVM->tm.s.pTimer)
1017 {
1018 int rc = RTTimerDestroy(pVM->tm.s.pTimer);
1019 AssertRC(rc);
1020 pVM->tm.s.pTimer = NULL;
1021 }
1022
1023 return VINF_SUCCESS;
1024}
1025
1026
1027/**
1028 * The VM is being reset.
1029 *
1030 * For the TM component this means that a rescheduling is performed,
1031 * the FF is cleared, but without running the queues. We'll have to
1032 * check if this makes sense or not, but it seems like a good idea now....
1033 *
1034 * @param pVM VM handle.
1035 */
1036VMM_INT_DECL(void) TMR3Reset(PVM pVM)
1037{
1038 LogFlow(("TMR3Reset:\n"));
1039 VM_ASSERT_EMT(pVM);
1040 tmTimerLock(pVM);
1041
1042 /*
1043 * Abort any pending catch up.
1044 * This isn't perfect...
1045 */
1046 if (pVM->tm.s.fVirtualSyncCatchUp)
1047 {
1048 const uint64_t offVirtualNow = TMVirtualGetNoCheck(pVM);
1049 const uint64_t offVirtualSyncNow = TMVirtualSyncGetNoCheck(pVM);
1050 if (pVM->tm.s.fVirtualSyncCatchUp)
1051 {
1052 STAM_PROFILE_ADV_STOP(&pVM->tm.s.StatVirtualSyncCatchup, c);
1053
1054 const uint64_t offOld = pVM->tm.s.offVirtualSyncGivenUp;
1055 const uint64_t offNew = offVirtualNow - offVirtualSyncNow;
1056 Assert(offOld <= offNew);
1057 ASMAtomicWriteU64((uint64_t volatile *)&pVM->tm.s.offVirtualSyncGivenUp, offNew);
1058 ASMAtomicWriteU64((uint64_t volatile *)&pVM->tm.s.offVirtualSync, offNew);
1059 ASMAtomicWriteBool(&pVM->tm.s.fVirtualSyncCatchUp, false);
1060 LogRel(("TM: Aborting catch-up attempt on reset with a %'RU64 ns lag on reset; new total: %'RU64 ns\n", offNew - offOld, offNew));
1061 }
1062 }
1063
1064 /*
1065 * Process the queues.
1066 */
1067 for (int i = 0; i < TMCLOCK_MAX; i++)
1068 tmTimerQueueSchedule(pVM, &pVM->tm.s.paTimerQueuesR3[i]);
1069#ifdef VBOX_STRICT
1070 tmTimerQueuesSanityChecks(pVM, "TMR3Reset");
1071#endif
1072
1073 PVMCPU pVCpuDst = &pVM->aCpus[pVM->tm.s.idTimerCpu];
1074 VMCPU_FF_CLEAR(pVCpuDst, VMCPU_FF_TIMER); /** @todo FIXME: this isn't right. */
1075 tmTimerUnlock(pVM);
1076}
1077
1078
1079/**
1080 * Resolve a builtin RC symbol.
1081 * Called by PDM when loading or relocating GC modules.
1082 *
1083 * @returns VBox status
1084 * @param pVM VM Handle.
1085 * @param pszSymbol Symbol to resolve.
1086 * @param pRCPtrValue Where to store the symbol value.
1087 * @remark This has to work before TMR3Relocate() is called.
1088 */
1089VMM_INT_DECL(int) TMR3GetImportRC(PVM pVM, const char *pszSymbol, PRTRCPTR pRCPtrValue)
1090{
1091 if (!strcmp(pszSymbol, "g_pSUPGlobalInfoPage"))
1092 *pRCPtrValue = MMHyperR3ToRC(pVM, &pVM->tm.s.pvGIPRC);
1093 //else if (..)
1094 else
1095 return VERR_SYMBOL_NOT_FOUND;
1096 return VINF_SUCCESS;
1097}
1098
1099
1100/**
1101 * Execute state save operation.
1102 *
1103 * @returns VBox status code.
1104 * @param pVM VM Handle.
1105 * @param pSSM SSM operation handle.
1106 */
1107static DECLCALLBACK(int) tmR3Save(PVM pVM, PSSMHANDLE pSSM)
1108{
1109 LogFlow(("tmR3Save:\n"));
1110#ifdef VBOX_STRICT
1111 for (VMCPUID i = 0; i < pVM->cCpus; i++)
1112 {
1113 PVMCPU pVCpu = &pVM->aCpus[i];
1114 Assert(!pVCpu->tm.s.fTSCTicking);
1115 }
1116 Assert(!pVM->tm.s.cVirtualTicking);
1117 Assert(!pVM->tm.s.fVirtualSyncTicking);
1118#endif
1119
1120 /*
1121 * Save the virtual clocks.
1122 */
1123 /* the virtual clock. */
1124 SSMR3PutU64(pSSM, TMCLOCK_FREQ_VIRTUAL);
1125 SSMR3PutU64(pSSM, pVM->tm.s.u64Virtual);
1126
1127 /* the virtual timer synchronous clock. */
1128 SSMR3PutU64(pSSM, pVM->tm.s.u64VirtualSync);
1129 SSMR3PutU64(pSSM, pVM->tm.s.offVirtualSync);
1130 SSMR3PutU64(pSSM, pVM->tm.s.offVirtualSyncGivenUp);
1131 SSMR3PutU64(pSSM, pVM->tm.s.u64VirtualSyncCatchUpPrev);
1132 SSMR3PutBool(pSSM, pVM->tm.s.fVirtualSyncCatchUp);
1133
1134 /* real time clock */
1135 SSMR3PutU64(pSSM, TMCLOCK_FREQ_REAL);
1136
1137 /* the cpu tick clock. */
1138 for (VMCPUID i = 0; i < pVM->cCpus; i++)
1139 {
1140 PVMCPU pVCpu = &pVM->aCpus[i];
1141 SSMR3PutU64(pSSM, TMCpuTickGet(pVCpu));
1142 }
1143 return SSMR3PutU64(pSSM, pVM->tm.s.cTSCTicksPerSecond);
1144}
1145
1146
1147/**
1148 * Execute state load operation.
1149 *
1150 * @returns VBox status code.
1151 * @param pVM VM Handle.
1152 * @param pSSM SSM operation handle.
1153 * @param uVersion Data layout version.
1154 * @param uPass The data pass.
1155 */
1156static DECLCALLBACK(int) tmR3Load(PVM pVM, PSSMHANDLE pSSM, uint32_t uVersion, uint32_t uPass)
1157{
1158 LogFlow(("tmR3Load:\n"));
1159
1160 Assert(uPass == SSM_PASS_FINAL); NOREF(uPass);
1161#ifdef VBOX_STRICT
1162 for (VMCPUID i = 0; i < pVM->cCpus; i++)
1163 {
1164 PVMCPU pVCpu = &pVM->aCpus[i];
1165 Assert(!pVCpu->tm.s.fTSCTicking);
1166 }
1167 Assert(!pVM->tm.s.cVirtualTicking);
1168 Assert(!pVM->tm.s.fVirtualSyncTicking);
1169#endif
1170
1171 /*
1172 * Validate version.
1173 */
1174 if (uVersion != TM_SAVED_STATE_VERSION)
1175 {
1176 AssertMsgFailed(("tmR3Load: Invalid version uVersion=%d!\n", uVersion));
1177 return VERR_SSM_UNSUPPORTED_DATA_UNIT_VERSION;
1178 }
1179
1180 /*
1181 * Load the virtual clock.
1182 */
1183 pVM->tm.s.cVirtualTicking = 0;
1184 /* the virtual clock. */
1185 uint64_t u64Hz;
1186 int rc = SSMR3GetU64(pSSM, &u64Hz);
1187 if (RT_FAILURE(rc))
1188 return rc;
1189 if (u64Hz != TMCLOCK_FREQ_VIRTUAL)
1190 {
1191 AssertMsgFailed(("The virtual clock frequency differs! Saved: %'RU64 Binary: %'RU64\n",
1192 u64Hz, TMCLOCK_FREQ_VIRTUAL));
1193 return VERR_SSM_VIRTUAL_CLOCK_HZ;
1194 }
1195 SSMR3GetU64(pSSM, &pVM->tm.s.u64Virtual);
1196 pVM->tm.s.u64VirtualOffset = 0;
1197
1198 /* the virtual timer synchronous clock. */
1199 pVM->tm.s.fVirtualSyncTicking = false;
1200 uint64_t u64;
1201 SSMR3GetU64(pSSM, &u64);
1202 pVM->tm.s.u64VirtualSync = u64;
1203 SSMR3GetU64(pSSM, &u64);
1204 pVM->tm.s.offVirtualSync = u64;
1205 SSMR3GetU64(pSSM, &u64);
1206 pVM->tm.s.offVirtualSyncGivenUp = u64;
1207 SSMR3GetU64(pSSM, &u64);
1208 pVM->tm.s.u64VirtualSyncCatchUpPrev = u64;
1209 bool f;
1210 SSMR3GetBool(pSSM, &f);
1211 pVM->tm.s.fVirtualSyncCatchUp = f;
1212
1213 /* the real clock */
1214 rc = SSMR3GetU64(pSSM, &u64Hz);
1215 if (RT_FAILURE(rc))
1216 return rc;
1217 if (u64Hz != TMCLOCK_FREQ_REAL)
1218 {
1219 AssertMsgFailed(("The real clock frequency differs! Saved: %'RU64 Binary: %'RU64\n",
1220 u64Hz, TMCLOCK_FREQ_REAL));
1221 return VERR_SSM_VIRTUAL_CLOCK_HZ; /* misleading... */
1222 }
1223
1224 /* the cpu tick clock. */
1225 for (VMCPUID i = 0; i < pVM->cCpus; i++)
1226 {
1227 PVMCPU pVCpu = &pVM->aCpus[i];
1228
1229 pVCpu->tm.s.fTSCTicking = false;
1230 SSMR3GetU64(pSSM, &pVCpu->tm.s.u64TSC);
1231
1232 if (pVM->tm.s.fTSCUseRealTSC)
1233 pVCpu->tm.s.offTSCRawSrc = 0; /** @todo TSC restore stuff and HWACC. */
1234 }
1235
1236 rc = SSMR3GetU64(pSSM, &u64Hz);
1237 if (RT_FAILURE(rc))
1238 return rc;
1239 if (!pVM->tm.s.fTSCUseRealTSC)
1240 pVM->tm.s.cTSCTicksPerSecond = u64Hz;
1241
1242 LogRel(("TM: cTSCTicksPerSecond=%#RX64 (%'RU64) fTSCVirtualized=%RTbool fTSCUseRealTSC=%RTbool (state load)\n",
1243 pVM->tm.s.cTSCTicksPerSecond, pVM->tm.s.cTSCTicksPerSecond, pVM->tm.s.fTSCVirtualized, pVM->tm.s.fTSCUseRealTSC));
1244
1245 /*
1246 * Make sure timers get rescheduled immediately.
1247 */
1248 PVMCPU pVCpuDst = &pVM->aCpus[pVM->tm.s.idTimerCpu];
1249 VMCPU_FF_SET(pVCpuDst, VMCPU_FF_TIMER);
1250
1251 return VINF_SUCCESS;
1252}
1253
1254
1255/**
1256 * Internal TMR3TimerCreate worker.
1257 *
1258 * @returns VBox status code.
1259 * @param pVM The VM handle.
1260 * @param enmClock The timer clock.
1261 * @param pszDesc The timer description.
1262 * @param ppTimer Where to store the timer pointer on success.
1263 */
1264static int tmr3TimerCreate(PVM pVM, TMCLOCK enmClock, const char *pszDesc, PPTMTIMERR3 ppTimer)
1265{
1266 VM_ASSERT_EMT(pVM);
1267
1268 /*
1269 * Allocate the timer.
1270 */
1271 PTMTIMERR3 pTimer = NULL;
1272 if (pVM->tm.s.pFree && VM_IS_EMT(pVM))
1273 {
1274 pTimer = pVM->tm.s.pFree;
1275 pVM->tm.s.pFree = pTimer->pBigNext;
1276 Log3(("TM: Recycling timer %p, new free head %p.\n", pTimer, pTimer->pBigNext));
1277 }
1278
1279 if (!pTimer)
1280 {
1281 int rc = MMHyperAlloc(pVM, sizeof(*pTimer), 0, MM_TAG_TM, (void **)&pTimer);
1282 if (RT_FAILURE(rc))
1283 return rc;
1284 Log3(("TM: Allocated new timer %p\n", pTimer));
1285 }
1286
1287 /*
1288 * Initialize it.
1289 */
1290 pTimer->u64Expire = 0;
1291 pTimer->enmClock = enmClock;
1292 pTimer->pVMR3 = pVM;
1293 pTimer->pVMR0 = pVM->pVMR0;
1294 pTimer->pVMRC = pVM->pVMRC;
1295 pTimer->enmState = TMTIMERSTATE_STOPPED;
1296 pTimer->offScheduleNext = 0;
1297 pTimer->offNext = 0;
1298 pTimer->offPrev = 0;
1299 pTimer->pvUser = NULL;
1300 pTimer->pCritSect = NULL;
1301 pTimer->pszDesc = pszDesc;
1302
1303 /* insert into the list of created timers. */
1304 tmTimerLock(pVM);
1305 pTimer->pBigPrev = NULL;
1306 pTimer->pBigNext = pVM->tm.s.pCreated;
1307 pVM->tm.s.pCreated = pTimer;
1308 if (pTimer->pBigNext)
1309 pTimer->pBigNext->pBigPrev = pTimer;
1310#ifdef VBOX_STRICT
1311 tmTimerQueuesSanityChecks(pVM, "tmR3TimerCreate");
1312#endif
1313 tmTimerUnlock(pVM);
1314
1315 *ppTimer = pTimer;
1316 return VINF_SUCCESS;
1317}
1318
1319
1320/**
1321 * Creates a device timer.
1322 *
1323 * @returns VBox status.
1324 * @param pVM The VM to create the timer in.
1325 * @param pDevIns Device instance.
1326 * @param enmClock The clock to use on this timer.
1327 * @param pfnCallback Callback function.
1328 * @param pvUser The user argument to the callback.
1329 * @param fFlags Timer creation flags, see grp_tm_timer_flags.
1330 * @param pszDesc Pointer to description string which must stay around
1331 * until the timer is fully destroyed (i.e. a bit after TMTimerDestroy()).
1332 * @param ppTimer Where to store the timer on success.
1333 */
1334VMM_INT_DECL(int) TMR3TimerCreateDevice(PVM pVM, PPDMDEVINS pDevIns, TMCLOCK enmClock,
1335 PFNTMTIMERDEV pfnCallback, void *pvUser,
1336 uint32_t fFlags, const char *pszDesc, PPTMTIMERR3 ppTimer)
1337{
1338 AssertReturn(!(fFlags & ~(TMTIMER_FLAGS_NO_CRIT_SECT)), VERR_INVALID_PARAMETER);
1339
1340 /*
1341 * Allocate and init stuff.
1342 */
1343 int rc = tmr3TimerCreate(pVM, enmClock, pszDesc, ppTimer);
1344 if (RT_SUCCESS(rc))
1345 {
1346 (*ppTimer)->enmType = TMTIMERTYPE_DEV;
1347 (*ppTimer)->u.Dev.pfnTimer = pfnCallback;
1348 (*ppTimer)->u.Dev.pDevIns = pDevIns;
1349 (*ppTimer)->pvUser = pvUser;
1350 if (!(fFlags & TMTIMER_FLAGS_NO_CRIT_SECT))
1351 (*ppTimer)->pCritSect = PDMR3DevGetCritSect(pVM, pDevIns);
1352 Log(("TM: Created device timer %p clock %d callback %p '%s'\n", (*ppTimer), enmClock, pfnCallback, pszDesc));
1353 }
1354
1355 return rc;
1356}
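
/* Illustrative usage sketch (not part of this file): a device would
 * typically create and arm its timer from the constructor. The state
 * structure (PEXAMPLEDEVSTATE), the callback and the 10 ms period below
 * are hypothetical. */
#if 0
static DECLCALLBACK(void) exampleDevTimerCb(PPDMDEVINS pDevIns, PTMTIMER pTimer, void *pvUser)
{
    PEXAMPLEDEVSTATE pThis = (PEXAMPLEDEVSTATE)pvUser;
    /* ... guest-visible work ... */
    TMTimerSetMillies(pTimer, 10); /* re-arm for another 10 ms */
}

static int exampleDevCreateTimer(PVM pVM, PPDMDEVINS pDevIns, PEXAMPLEDEVSTATE pThis)
{
    int rc = TMR3TimerCreateDevice(pVM, pDevIns, TMCLOCK_VIRTUAL_SYNC, exampleDevTimerCb,
                                   pThis, 0 /* fFlags */, "Example device timer", &pThis->pTimer);
    if (RT_SUCCESS(rc))
        rc = TMTimerSetMillies(pThis->pTimer, 10);
    return rc;
}
#endif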
1357
1358
1359
1360
1361/**
1362 * Creates a USB device timer.
1363 *
1364 * @returns VBox status.
1365 * @param pVM The VM to create the timer in.
1366 * @param pUsbIns The USB device instance.
1367 * @param enmClock The clock to use on this timer.
1368 * @param pfnCallback Callback function.
1369 * @param pvUser The user argument to the callback.
1370 * @param fFlags Timer creation flags, see grp_tm_timer_flags.
1371 * @param pszDesc Pointer to description string which must stay around
1372 * until the timer is fully destroyed (i.e. a bit after TMTimerDestroy()).
1373 * @param ppTimer Where to store the timer on success.
1374 */
1375VMM_INT_DECL(int) TMR3TimerCreateUsb(PVM pVM, PPDMUSBINS pUsbIns, TMCLOCK enmClock,
1376 PFNTMTIMERUSB pfnCallback, void *pvUser,
1377 uint32_t fFlags, const char *pszDesc, PPTMTIMERR3 ppTimer)
1378{
1379 AssertReturn(!(fFlags & ~(TMTIMER_FLAGS_NO_CRIT_SECT)), VERR_INVALID_PARAMETER);
1380
1381 /*
1382 * Allocate and init stuff.
1383 */
1384 int rc = tmr3TimerCreate(pVM, enmClock, pszDesc, ppTimer);
1385 if (RT_SUCCESS(rc))
1386 {
1387 (*ppTimer)->enmType = TMTIMERTYPE_USB;
1388 (*ppTimer)->u.Usb.pfnTimer = pfnCallback;
1389 (*ppTimer)->u.Usb.pUsbIns = pUsbIns;
1390 (*ppTimer)->pvUser = pvUser;
1391 //if (!(fFlags & TMTIMER_FLAGS_NO_CRIT_SECT))
1392 //{
1393 // if (pDevIns->pCritSectR3)
1394 // (*ppTimer)->pCritSect = pUsbIns->pCritSectR3;
1395 // else
1396 // (*ppTimer)->pCritSect = IOMR3GetCritSect(pVM);
1397 //}
1398 Log(("TM: Created USB device timer %p clock %d callback %p '%s'\n", (*ppTimer), enmClock, pfnCallback, pszDesc));
1399 }
1400
1401 return rc;
1402}
1403
1404
1405/**
1406 * Creates a driver timer.
1407 *
1408 * @returns VBox status.
1409 * @param pVM The VM to create the timer in.
1410 * @param pDrvIns Driver instance.
1411 * @param enmClock The clock to use on this timer.
1412 * @param pfnCallback Callback function.
1413 * @param pvUser The user argument to the callback.
1414 * @param fFlags Timer creation flags, see grp_tm_timer_flags.
1415 * @param pszDesc Pointer to description string which must stay around
1416 * until the timer is fully destroyed (i.e. a bit after TMTimerDestroy()).
1417 * @param ppTimer Where to store the timer on success.
1418 */
1419VMM_INT_DECL(int) TMR3TimerCreateDriver(PVM pVM, PPDMDRVINS pDrvIns, TMCLOCK enmClock, PFNTMTIMERDRV pfnCallback, void *pvUser,
1420 uint32_t fFlags, const char *pszDesc, PPTMTIMERR3 ppTimer)
1421{
1422 AssertReturn(!(fFlags & ~(TMTIMER_FLAGS_NO_CRIT_SECT)), VERR_INVALID_PARAMETER);
1423
1424 /*
1425 * Allocate and init stuff.
1426 */
1427 int rc = tmr3TimerCreate(pVM, enmClock, pszDesc, ppTimer);
1428 if (RT_SUCCESS(rc))
1429 {
1430 (*ppTimer)->enmType = TMTIMERTYPE_DRV;
1431 (*ppTimer)->u.Drv.pfnTimer = pfnCallback;
1432 (*ppTimer)->u.Drv.pDrvIns = pDrvIns;
1433 (*ppTimer)->pvUser = pvUser;
1434 Log(("TM: Created driver timer %p clock %d callback %p '%s'\n", (*ppTimer), enmClock, pfnCallback, pszDesc));
1435 }
1436
1437 return rc;
1438}
1439
1440
1441/**
1442 * Creates an internal timer.
1443 *
1444 * @returns VBox status.
1445 * @param pVM The VM to create the timer in.
1446 * @param enmClock The clock to use on this timer.
1447 * @param pfnCallback Callback function.
1448 * @param pvUser User argument to be passed to the callback.
1449 * @param pszDesc Pointer to description string which must stay around
1450 * until the timer is fully destroyed (i.e. a bit after TMTimerDestroy()).
1451 * @param ppTimer Where to store the timer on success.
1452 */
1453VMMR3DECL(int) TMR3TimerCreateInternal(PVM pVM, TMCLOCK enmClock, PFNTMTIMERINT pfnCallback, void *pvUser, const char *pszDesc, PPTMTIMERR3 ppTimer)
1454{
1455 /*
1456 * Allocate and init stuff.
1457 */
1458 PTMTIMER pTimer;
1459 int rc = tmr3TimerCreate(pVM, enmClock, pszDesc, &pTimer);
1460 if (RT_SUCCESS(rc))
1461 {
1462 pTimer->enmType = TMTIMERTYPE_INTERNAL;
1463 pTimer->u.Internal.pfnTimer = pfnCallback;
1464 pTimer->pvUser = pvUser;
1465 *ppTimer = pTimer;
1466 Log(("TM: Created internal timer %p clock %d callback %p '%s'\n", pTimer, enmClock, pfnCallback, pszDesc));
1467 }
1468
1469 return rc;
1470}
1471
1472/**
1473 * Creates an external timer.
1474 *
1475 * @returns Timer handle on success.
1476 * @returns NULL on failure.
1477 * @param pVM The VM to create the timer in.
1478 * @param enmClock The clock to use on this timer.
1479 * @param pfnCallback Callback function.
1480 * @param pvUser User argument.
1481 * @param pszDesc Pointer to description string which must stay around
1482 * until the timer is fully destroyed (i.e. a bit after TMTimerDestroy()).
1483 */
1484VMMR3DECL(PTMTIMERR3) TMR3TimerCreateExternal(PVM pVM, TMCLOCK enmClock, PFNTMTIMEREXT pfnCallback, void *pvUser, const char *pszDesc)
1485{
1486 /*
1487 * Allocate and init stuff.
1488 */
1489 PTMTIMERR3 pTimer;
1490 int rc = tmr3TimerCreate(pVM, enmClock, pszDesc, &pTimer);
1491 if (RT_SUCCESS(rc))
1492 {
1493 pTimer->enmType = TMTIMERTYPE_EXTERNAL;
1494 pTimer->u.External.pfnTimer = pfnCallback;
1495 pTimer->pvUser = pvUser;
1496 Log(("TM: Created external timer %p clock %d callback %p '%s'\n", pTimer, enmClock, pfnCallback, pszDesc));
1497 return pTimer;
1498 }
1499
1500 return NULL;
1501}
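
/* Illustrative usage sketch (not part of this file): unlike the other
 * creators, the external variant returns the handle directly, so callers
 * must check for NULL. The callback and wrapper below are hypothetical. */
#if 0
static DECLCALLBACK(void) exampleExtTimerCb(void *pvUser)
{
    /* ... */
}

static void exampleCreateExternalTimer(PVM pVM)
{
    PTMTIMERR3 pTimer = TMR3TimerCreateExternal(pVM, TMCLOCK_REAL, exampleExtTimerCb,
                                                NULL /* pvUser */, "Example external timer");
    if (!pTimer)
        LogRel(("Example: failed to create the external timer\n"));
}
#endif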
1502
1503
1504/**
1505 * Destroy a timer
1506 *
1507 * @returns VBox status.
1508 * @param pTimer Timer handle as returned by one of the create functions.
1509 */
1510VMMR3DECL(int) TMR3TimerDestroy(PTMTIMER pTimer)
1511{
1512 /*
1513 * Be extra careful here.
1514 */
1515 if (!pTimer)
1516 return VINF_SUCCESS;
1517 AssertPtr(pTimer);
1518 Assert((unsigned)pTimer->enmClock < (unsigned)TMCLOCK_MAX);
1519
1520 PVM pVM = pTimer->CTX_SUFF(pVM);
1521 PTMTIMERQUEUE pQueue = &pVM->tm.s.CTX_SUFF(paTimerQueues)[pTimer->enmClock];
1522 bool fActive = false;
1523 bool fPending = false;
1524
1525 AssertMsg( !pTimer->pCritSect
1526 || VMR3GetState(pVM) != VMSTATE_RUNNING
1527 || PDMCritSectIsOwner(pTimer->pCritSect), ("%s\n", pTimer->pszDesc));
1528
1529 /*
1530 * The rest of the game happens behind the lock, just
1531 * like create does. All the work is done here.
1532 */
1533 tmTimerLock(pVM);
1534 for (int cRetries = 1000;; cRetries--)
1535 {
1536 /*
1537 * Change to the DESTROY state.
1538 */
1539 TMTIMERSTATE enmState = pTimer->enmState;
1540 TMTIMERSTATE enmNewState = enmState;
1541 Log2(("TMTimerDestroy: %p:{.enmState=%s, .pszDesc='%s'} cRetries=%d\n",
1542 pTimer, tmTimerState(enmState), R3STRING(pTimer->pszDesc), cRetries));
1543 switch (enmState)
1544 {
1545 case TMTIMERSTATE_STOPPED:
1546 case TMTIMERSTATE_EXPIRED_DELIVER:
1547 break;
1548
1549 case TMTIMERSTATE_ACTIVE:
1550 fActive = true;
1551 break;
1552
1553 case TMTIMERSTATE_PENDING_STOP:
1554 case TMTIMERSTATE_PENDING_STOP_SCHEDULE:
1555 case TMTIMERSTATE_PENDING_RESCHEDULE:
1556 fActive = true;
1557 fPending = true;
1558 break;
1559
1560 case TMTIMERSTATE_PENDING_SCHEDULE:
1561 fPending = true;
1562 break;
1563
1564 /*
1565 * This shouldn't happen as the caller should make sure there are no races.
1566 */
1567 case TMTIMERSTATE_EXPIRED_GET_UNLINK:
1568 case TMTIMERSTATE_PENDING_SCHEDULE_SET_EXPIRE:
1569 case TMTIMERSTATE_PENDING_RESCHEDULE_SET_EXPIRE:
1570 AssertMsgFailed(("%p:.enmState=%s %s\n", pTimer, tmTimerState(enmState), pTimer->pszDesc));
1571 tmTimerUnlock(pVM);
1572 if (!RTThreadYield())
1573 RTThreadSleep(1);
1574 AssertMsgReturn(cRetries > 0, ("Failed waiting for stable state. state=%d (%s)\n", pTimer->enmState, pTimer->pszDesc),
1575 VERR_TM_UNSTABLE_STATE);
1576 tmTimerLock(pVM);
1577 continue;
1578
1579 /*
1580 * Invalid states.
1581 */
1582 case TMTIMERSTATE_FREE:
1583 case TMTIMERSTATE_DESTROY:
1584 tmTimerUnlock(pVM);
1585 AssertLogRelMsgFailedReturn(("pTimer=%p %s\n", pTimer, tmTimerState(enmState)), VERR_TM_INVALID_STATE);
1586
1587 default:
1588 AssertMsgFailed(("Unknown timer state %d (%s)\n", enmState, R3STRING(pTimer->pszDesc)));
1589 tmTimerUnlock(pVM);
1590 return VERR_TM_UNKNOWN_STATE;
1591 }
1592
1593 /*
1594 * Try switch to the destroy state.
1595 * This should always succeed as the caller should make sure there are no races.
1596 */
1597 bool fRc;
1598 TM_TRY_SET_STATE(pTimer, TMTIMERSTATE_DESTROY, enmState, fRc);
1599 if (fRc)
1600 break;
1601 AssertMsgFailed(("%p:.enmState=%s %s\n", pTimer, tmTimerState(enmState), pTimer->pszDesc));
1602 tmTimerUnlock(pVM);
1603 AssertMsgReturn(cRetries > 0, ("Failed waiting for stable state. state=%d (%s)\n", pTimer->enmState, pTimer->pszDesc),
1604 VERR_TM_UNSTABLE_STATE);
1605 tmTimerLock(pVM);
1606 }
1607
1608 /*
1609 * Unlink from the active list.
1610 */
1611 if (fActive)
1612 {
1613 const PTMTIMER pPrev = TMTIMER_GET_PREV(pTimer);
1614 const PTMTIMER pNext = TMTIMER_GET_NEXT(pTimer);
1615 if (pPrev)
1616 TMTIMER_SET_NEXT(pPrev, pNext);
1617 else
1618 {
1619 TMTIMER_SET_HEAD(pQueue, pNext);
1620 pQueue->u64Expire = pNext ? pNext->u64Expire : INT64_MAX;
1621 }
1622 if (pNext)
1623 TMTIMER_SET_PREV(pNext, pPrev);
1624 pTimer->offNext = 0;
1625 pTimer->offPrev = 0;
1626 }
1627
1628 /*
1629 * Unlink from the schedule list by running it.
1630 */
1631 if (fPending)
1632 {
1633 Log3(("TMR3TimerDestroy: tmTimerQueueSchedule\n"));
1634 STAM_PROFILE_START(&pVM->tm.s.CTX_SUFF_Z(StatScheduleOne), a);
1635 Assert(pQueue->offSchedule);
1636 tmTimerQueueSchedule(pVM, pQueue);
1637 }
1638
1639 /*
1640 * Ready to move the timer off the created list and onto the free list.
1641 */
1642 Assert(!pTimer->offNext); Assert(!pTimer->offPrev); Assert(!pTimer->offScheduleNext);
1643
1644 /* unlink from created list */
1645 if (pTimer->pBigPrev)
1646 pTimer->pBigPrev->pBigNext = pTimer->pBigNext;
1647 else
1648 pVM->tm.s.pCreated = pTimer->pBigNext;
1649 if (pTimer->pBigNext)
1650 pTimer->pBigNext->pBigPrev = pTimer->pBigPrev;
1651 pTimer->pBigNext = 0;
1652 pTimer->pBigPrev = 0;
1653
1654 /* free */
1655 Log2(("TM: Inserting %p into the free list ahead of %p!\n", pTimer, pVM->tm.s.pFree));
1656 TM_SET_STATE(pTimer, TMTIMERSTATE_FREE);
1657 pTimer->pBigNext = pVM->tm.s.pFree;
1658 pVM->tm.s.pFree = pTimer;
1659
1660#ifdef VBOX_STRICT
1661 tmTimerQueuesSanityChecks(pVM, "TMR3TimerDestroy");
1662#endif
1663 tmTimerUnlock(pVM);
1664 return VINF_SUCCESS;
1665}
1666
1667
1668/**
1669 * Destroy all timers owned by a device.
1670 *
1671 * @returns VBox status.
1672 * @param pVM VM handle.
1673 * @param pDevIns Device which timers should be destroyed.
1674 */
1675VMM_INT_DECL(int) TMR3TimerDestroyDevice(PVM pVM, PPDMDEVINS pDevIns)
1676{
1677 LogFlow(("TMR3TimerDestroyDevice: pDevIns=%p\n", pDevIns));
1678 if (!pDevIns)
1679 return VERR_INVALID_PARAMETER;
1680
1681 tmTimerLock(pVM);
1682 PTMTIMER pCur = pVM->tm.s.pCreated;
1683 while (pCur)
1684 {
1685 PTMTIMER pDestroy = pCur;
1686 pCur = pDestroy->pBigNext;
1687 if ( pDestroy->enmType == TMTIMERTYPE_DEV
1688 && pDestroy->u.Dev.pDevIns == pDevIns)
1689 {
1690 int rc = TMR3TimerDestroy(pDestroy);
1691 AssertRC(rc);
1692 }
1693 }
1694 tmTimerUnlock(pVM);
1695
1696 LogFlow(("TMR3TimerDestroyDevice: returns VINF_SUCCESS\n"));
1697 return VINF_SUCCESS;
1698}
1699
1700
1701/**
1702 * Destroy all timers owned by a USB device.
1703 *
1704 * @returns VBox status.
1705 * @param pVM VM handle.
1706 * @param pUsbIns USB device which timers should be destroyed.
1707 */
1708VMM_INT_DECL(int) TMR3TimerDestroyUsb(PVM pVM, PPDMUSBINS pUsbIns)
1709{
1710 LogFlow(("TMR3TimerDestroyUsb: pUsbIns=%p\n", pUsbIns));
1711 if (!pUsbIns)
1712 return VERR_INVALID_PARAMETER;
1713
1714 tmTimerLock(pVM);
1715 PTMTIMER pCur = pVM->tm.s.pCreated;
1716 while (pCur)
1717 {
1718 PTMTIMER pDestroy = pCur;
1719 pCur = pDestroy->pBigNext;
1720 if ( pDestroy->enmType == TMTIMERTYPE_USB
1721 && pDestroy->u.Usb.pUsbIns == pUsbIns)
1722 {
1723 int rc = TMR3TimerDestroy(pDestroy);
1724 AssertRC(rc);
1725 }
1726 }
1727 tmTimerUnlock(pVM);
1728
1729 LogFlow(("TMR3TimerDestroyUsb: returns VINF_SUCCESS\n"));
1730 return VINF_SUCCESS;
1731}
1732
1733
1734/**
1735 * Destroy all timers owned by a driver.
1736 *
1737 * @returns VBox status.
1738 * @param pVM VM handle.
1739 * @param pDrvIns Driver which timers should be destroyed.
1740 */
1741VMM_INT_DECL(int) TMR3TimerDestroyDriver(PVM pVM, PPDMDRVINS pDrvIns)
1742{
1743 LogFlow(("TMR3TimerDestroyDriver: pDrvIns=%p\n", pDrvIns));
1744 if (!pDrvIns)
1745 return VERR_INVALID_PARAMETER;
1746
1747 tmTimerLock(pVM);
1748 PTMTIMER pCur = pVM->tm.s.pCreated;
1749 while (pCur)
1750 {
1751 PTMTIMER pDestroy = pCur;
1752 pCur = pDestroy->pBigNext;
1753 if ( pDestroy->enmType == TMTIMERTYPE_DRV
1754 && pDestroy->u.Drv.pDrvIns == pDrvIns)
1755 {
1756 int rc = TMR3TimerDestroy(pDestroy);
1757 AssertRC(rc);
1758 }
1759 }
1760 tmTimerUnlock(pVM);
1761
1762 LogFlow(("TMR3TimerDestroyDriver: returns VINF_SUCCESS\n"));
1763 return VINF_SUCCESS;
1764}
1765
1766
1767/**
1768 * Internal function for getting the clock time.
1769 *
1770 * @returns clock time.
1771 * @param pVM The VM handle.
1772 * @param enmClock The clock.
1773 */
1774DECLINLINE(uint64_t) tmClock(PVM pVM, TMCLOCK enmClock)
1775{
1776 switch (enmClock)
1777 {
1778 case TMCLOCK_VIRTUAL: return TMVirtualGet(pVM);
1779 case TMCLOCK_VIRTUAL_SYNC: return TMVirtualSyncGet(pVM);
1780 case TMCLOCK_REAL: return TMRealGet(pVM);
1781 case TMCLOCK_TSC: return TMCpuTickGet(&pVM->aCpus[0] /* just take VCPU 0 */);
1782 default:
1783 AssertMsgFailed(("enmClock=%d\n", enmClock));
1784 return ~(uint64_t)0;
1785 }
1786}
1787
1788
1789/**
1790 * Checks if the given clock's queue has one or more expired timers.
1791 *
1792 * @returns true / false.
1793 *
1794 * @param pVM The VM handle.
1795 * @param enmClock The queue.
1796 */
1797DECLINLINE(bool) tmR3HasExpiredTimer(PVM pVM, TMCLOCK enmClock)
1798{
1799 const uint64_t u64Expire = pVM->tm.s.CTX_SUFF(paTimerQueues)[enmClock].u64Expire;
1800 return u64Expire != INT64_MAX && u64Expire <= tmClock(pVM, enmClock);
1801}
1802
1803
1804/**
1805 * Checks for expired timers in all the queues.
1806 *
1807 * @returns true / false.
1808 * @param pVM The VM handle.
1809 */
1810DECLINLINE(bool) tmR3AnyExpiredTimers(PVM pVM)
1811{
1812 /*
1813 * Combine the time calculation for the first two since we're not on EMT;
1814 * TMVirtualSyncGet only permits EMT.
1815 */
1816 uint64_t u64Now = TMVirtualGetNoCheck(pVM);
1817 if (pVM->tm.s.CTX_SUFF(paTimerQueues)[TMCLOCK_VIRTUAL].u64Expire <= u64Now)
1818 return true;
1819 u64Now = pVM->tm.s.fVirtualSyncTicking
1820 ? u64Now - pVM->tm.s.offVirtualSync
1821 : pVM->tm.s.u64VirtualSync;
1822 if (pVM->tm.s.CTX_SUFF(paTimerQueues)[TMCLOCK_VIRTUAL_SYNC].u64Expire <= u64Now)
1823 return true;
1824
1825 /*
1826 * The remaining timers.
1827 */
1828 if (tmR3HasExpiredTimer(pVM, TMCLOCK_REAL))
1829 return true;
1830 if (tmR3HasExpiredTimer(pVM, TMCLOCK_TSC))
1831 return true;
1832 return false;
1833}
1834
1835
1836/**
1837 * Schedule timer callback.
1838 *
1839 * @param pTimer Timer handle.
1840 * @param pvUser VM handle.
1841 * @thread Timer thread.
1842 *
1843 * @remark We cannot do the scheduling and queue running from a timer handler
1844 * since it's not executing in EMT, and even if it were it would be async
1845 * and we wouldn't know the state of affairs.
1846 * So, we'll just raise the timer FF and force any REM execution to exit.
1847 */
1848static DECLCALLBACK(void) tmR3TimerCallback(PRTTIMER pTimer, void *pvUser, uint64_t /*iTick*/)
1849{
1850 PVM pVM = (PVM)pvUser;
1851 PVMCPU pVCpuDst = &pVM->aCpus[pVM->tm.s.idTimerCpu];
1852
1853 AssertCompile(TMCLOCK_MAX == 4);
1854#ifdef DEBUG_Sander /* very annoying, keep it private. */
1855 if (VMCPU_FF_ISSET(pVCpuDst, VMCPU_FF_TIMER))
1856 Log(("tmR3TimerCallback: timer event still pending!!\n"));
1857#endif
1858 if ( !VMCPU_FF_ISSET(pVCpuDst, VMCPU_FF_TIMER)
1859 && ( pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL_SYNC].offSchedule /** @todo FIXME - reconsider offSchedule as a reason for running the timer queues. */
1860 || pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL].offSchedule
1861 || pVM->tm.s.paTimerQueuesR3[TMCLOCK_REAL].offSchedule
1862 || pVM->tm.s.paTimerQueuesR3[TMCLOCK_TSC].offSchedule
1863 || tmR3AnyExpiredTimers(pVM)
1864 )
1865 && !VMCPU_FF_ISSET(pVCpuDst, VMCPU_FF_TIMER)
1866 && !pVM->tm.s.fRunningQueues
1867 )
1868 {
1869 Log5(("TM(%u): FF: 0 -> 1\n", __LINE__));
1870 VMCPU_FF_SET(pVCpuDst, VMCPU_FF_TIMER);
1871 REMR3NotifyTimerPending(pVM, pVCpuDst);
1872 VMR3NotifyCpuFFU(pVCpuDst->pUVCpu, VMNOTIFYFF_FLAGS_DONE_REM /** @todo | VMNOTIFYFF_FLAGS_POKE ?*/);
1873 STAM_COUNTER_INC(&pVM->tm.s.StatTimerCallbackSetFF);
1874 }
1875}
1876
1877
1878/**
1879 * Schedules and runs any pending timers.
1880 *
1881 * This is normally called from a forced action handler in EMT.
1882 *
1883 * @param pVM The VM to run the timers for.
1884 *
1885 * @thread EMT (actually EMT0, but we fend off the others)
1886 */
1887VMMR3DECL(void) TMR3TimerQueuesDo(PVM pVM)
1888{
1889 /*
1890 * Only the dedicated timer EMT should do stuff here.
1891 * (fRunningQueues is only used as an indicator.)
1892 */
1893 Assert(pVM->tm.s.idTimerCpu < pVM->cCpus);
1894 PVMCPU pVCpuDst = &pVM->aCpus[pVM->tm.s.idTimerCpu];
1895 if (VMMGetCpu(pVM) != pVCpuDst)
1896 {
1897 Assert(pVM->cCpus > 1);
1898 return;
1899 }
1900 STAM_PROFILE_START(&pVM->tm.s.StatDoQueues, a);
1901 Log2(("TMR3TimerQueuesDo:\n"));
1902 Assert(!pVM->tm.s.fRunningQueues);
1903 ASMAtomicWriteBool(&pVM->tm.s.fRunningQueues, true);
1904 tmTimerLock(pVM);
1905
1906 /*
1907 * Process the queues.
1908 */
1909 AssertCompile(TMCLOCK_MAX == 4);
1910
1911 /* TMCLOCK_VIRTUAL_SYNC (see also TMR3VirtualSyncFF) */
1912 STAM_PROFILE_ADV_START(&pVM->tm.s.aStatDoQueues[TMCLOCK_VIRTUAL_SYNC], s1);
1913 tmVirtualSyncLock(pVM);
1914 ASMAtomicWriteBool(&pVM->tm.s.fRunningVirtualSyncQueue, true);
1915 VMCPU_FF_CLEAR(pVCpuDst, VMCPU_FF_TIMER); /* Clear the FF once we started working for real. */
1916
1917 Assert(!pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL_SYNC].offSchedule);
1918 tmR3TimerQueueRunVirtualSync(pVM);
1919 if (pVM->tm.s.fVirtualSyncTicking) /** @todo move into tmR3TimerQueueRunVirtualSync - FIXME */
1920 VM_FF_CLEAR(pVM, VM_FF_TM_VIRTUAL_SYNC);
1921
1922 ASMAtomicWriteBool(&pVM->tm.s.fRunningVirtualSyncQueue, false);
1923 tmVirtualSyncUnlock(pVM);
1924 STAM_PROFILE_ADV_STOP(&pVM->tm.s.aStatDoQueues[TMCLOCK_VIRTUAL_SYNC], s1);
1925
1926 /* TMCLOCK_VIRTUAL */
1927 STAM_PROFILE_ADV_START(&pVM->tm.s.aStatDoQueues[TMCLOCK_VIRTUAL], s2);
1928 if (pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL].offSchedule)
1929 tmTimerQueueSchedule(pVM, &pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL]);
1930 tmR3TimerQueueRun(pVM, &pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL]);
1931 STAM_PROFILE_ADV_STOP(&pVM->tm.s.aStatDoQueues[TMCLOCK_VIRTUAL], s2);
1932
1933 /* TMCLOCK_TSC */
1934 Assert(!pVM->tm.s.paTimerQueuesR3[TMCLOCK_TSC].offActive); /* not used */
1935
1936 /* TMCLOCK_REAL */
1937 STAM_PROFILE_ADV_START(&pVM->tm.s.aStatDoQueues[TMCLOCK_REAL], s3);
1938 if (pVM->tm.s.paTimerQueuesR3[TMCLOCK_REAL].offSchedule)
1939 tmTimerQueueSchedule(pVM, &pVM->tm.s.paTimerQueuesR3[TMCLOCK_REAL]);
1940 tmR3TimerQueueRun(pVM, &pVM->tm.s.paTimerQueuesR3[TMCLOCK_REAL]);
1941 STAM_PROFILE_ADV_STOP(&pVM->tm.s.aStatDoQueues[TMCLOCK_REAL], s3);
1942
1943#ifdef VBOX_STRICT
1944 /* check that we didn't screw up. */
1945 tmTimerQueuesSanityChecks(pVM, "TMR3TimerQueuesDo");
1946#endif
1947
1948 /* done */
1949 Log2(("TMR3TimerQueuesDo: returns void\n"));
1950 ASMAtomicWriteBool(&pVM->tm.s.fRunningQueues, false);
1951 tmTimerUnlock(pVM);
1952 STAM_PROFILE_STOP(&pVM->tm.s.StatDoQueues, a);
1953}
1954
1955//RT_C_DECLS_BEGIN
1956//int iomLock(PVM pVM);
1957//void iomUnlock(PVM pVM);
1958//RT_C_DECLS_END
1959
1960
1961/**
1962 * Schedules and runs any pending timers in the specified queue.
1963 *
1964 * This is normally called from a forced action handler in EMT.
1965 *
1966 * @param pVM The VM to run the timers for.
1967 * @param pQueue The queue to run.
1968 */
1969static void tmR3TimerQueueRun(PVM pVM, PTMTIMERQUEUE pQueue)
1970{
1971 VM_ASSERT_EMT(pVM);
1972
1973 /*
1974 * Run timers.
1975 *
1976 * We check the clock once and run all timers which are ACTIVE
1977 * and have an expire time less than or equal to the time we read.
1978 *
1979 * N.B. A generic unlink must be applied since other threads
1980 * are allowed to mess with any active timer at any time.
1981 * However, we only allow EMT to handle EXPIRED_PENDING
1982 * timers, thus enabling the timer handler function to
1983 * arm the timer again.
1984 */
1985 PTMTIMER pNext = TMTIMER_GET_HEAD(pQueue);
1986 if (!pNext)
1987 return;
1988 const uint64_t u64Now = tmClock(pVM, pQueue->enmClock);
1989 while (pNext && pNext->u64Expire <= u64Now)
1990 {
1991 PTMTIMER pTimer = pNext;
1992 pNext = TMTIMER_GET_NEXT(pTimer);
1993 PPDMCRITSECT pCritSect = pTimer->pCritSect;
1994 if (pCritSect)
1995 PDMCritSectEnter(pCritSect, VERR_INTERNAL_ERROR);
1996 Log2(("tmR3TimerQueueRun: %p:{.enmState=%s, .enmClock=%d, .enmType=%d, u64Expire=%llx (now=%llx) .pszDesc=%s}\n",
1997 pTimer, tmTimerState(pTimer->enmState), pTimer->enmClock, pTimer->enmType, pTimer->u64Expire, u64Now, pTimer->pszDesc));
1998 bool fRc;
1999 TM_TRY_SET_STATE(pTimer, TMTIMERSTATE_EXPIRED_GET_UNLINK, TMTIMERSTATE_ACTIVE, fRc);
2000 if (fRc)
2001 {
2002 Assert(!pTimer->offScheduleNext); /* this can trigger falsely */
2003
2004 /* unlink */
2005 const PTMTIMER pPrev = TMTIMER_GET_PREV(pTimer);
2006 if (pPrev)
2007 TMTIMER_SET_NEXT(pPrev, pNext);
2008 else
2009 {
2010 TMTIMER_SET_HEAD(pQueue, pNext);
2011 pQueue->u64Expire = pNext ? pNext->u64Expire : INT64_MAX;
2012 }
2013 if (pNext)
2014 TMTIMER_SET_PREV(pNext, pPrev);
2015 pTimer->offNext = 0;
2016 pTimer->offPrev = 0;
2017
2018 /* fire */
2019 TM_SET_STATE(pTimer, TMTIMERSTATE_EXPIRED_DELIVER);
2020 switch (pTimer->enmType)
2021 {
2022 case TMTIMERTYPE_DEV: pTimer->u.Dev.pfnTimer(pTimer->u.Dev.pDevIns, pTimer, pTimer->pvUser); break;
2023 case TMTIMERTYPE_USB: pTimer->u.Usb.pfnTimer(pTimer->u.Usb.pUsbIns, pTimer, pTimer->pvUser); break;
2024 case TMTIMERTYPE_DRV: pTimer->u.Drv.pfnTimer(pTimer->u.Drv.pDrvIns, pTimer, pTimer->pvUser); break;
2025 case TMTIMERTYPE_INTERNAL: pTimer->u.Internal.pfnTimer(pVM, pTimer, pTimer->pvUser); break;
2026 case TMTIMERTYPE_EXTERNAL: pTimer->u.External.pfnTimer(pTimer->pvUser); break;
2027 default:
2028 AssertMsgFailed(("Invalid timer type %d (%s)\n", pTimer->enmType, pTimer->pszDesc));
2029 break;
2030 }
2031
2032 /* change the state if it wasn't changed already in the handler. */
2033 TM_TRY_SET_STATE(pTimer, TMTIMERSTATE_STOPPED, TMTIMERSTATE_EXPIRED_DELIVER, fRc);
2034 Log2(("tmR3TimerQueueRun: new state %s\n", tmTimerState(pTimer->enmState)));
2035 }
2036 if (pCritSect)
2037 PDMCritSectLeave(pCritSect);
2038 } /* run loop */
2039}
2040
2041
2042/**
2043 * Schedules and runs any pending timers in the timer queue for the
2044 * synchronous virtual clock.
2045 *
2046 * This scheduling is a bit different from the other queues as it needs
2047 * to implement the special requirements of the timer synchronous virtual
2048 * clock, thus this 2nd queue run function.
2049 *
2050 * @param pVM The VM to run the timers for.
2051 *
2052 * @remarks The caller must own the Virtual Sync lock. Owning the TM lock is no
2053 * longer important.
2054 */
2055static void tmR3TimerQueueRunVirtualSync(PVM pVM)
2056{
2057 PTMTIMERQUEUE const pQueue = &pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL_SYNC];
2058 VM_ASSERT_EMT(pVM);
2059 Assert(PDMCritSectIsOwner(&pVM->tm.s.VirtualSyncLock));
2060
2061 /*
2062 * Any timers?
2063 */
2064 PTMTIMER pNext = TMTIMER_GET_HEAD(pQueue);
2065 if (RT_UNLIKELY(!pNext))
2066 {
2067 Assert(pVM->tm.s.fVirtualSyncTicking || !pVM->tm.s.cVirtualTicking);
2068 return;
2069 }
2070 STAM_COUNTER_INC(&pVM->tm.s.StatVirtualSyncRun);
2071
2072 /*
2073 * Calculate the time frame for which we will dispatch timers.
2074 *
2075 * We use a time frame ranging from the current sync time (which is most likely the
2076 * same as the head timer) plus some configurable period (100000 ns) up towards the
2077 * current virtual time. This period might also need to be restricted by the catch-up
2078 * rate so frequent calls to this function won't accelerate the time too much; however,
2079 * this will be implemented at a later point if necessary.
2080 *
2081 * Without this frame we would 1) have to run timers much more frequently
2082 * and 2) lag behind at a steady rate.
2083 */
2084 const uint64_t u64VirtualNow = TMVirtualGetNoCheck(pVM);
2085 uint64_t const offSyncGivenUp = pVM->tm.s.offVirtualSyncGivenUp;
2086 uint64_t u64Now;
2087 if (!pVM->tm.s.fVirtualSyncTicking)
2088 {
2089 STAM_COUNTER_INC(&pVM->tm.s.StatVirtualSyncRunStoppedAlready);
2090 u64Now = pVM->tm.s.u64VirtualSync;
2091#ifdef DEBUG_bird
2092 Assert(u64Now <= pNext->u64Expire);
2093#endif
2094 }
2095 else
2096 {
2097 /* Calc 'now'. */
2098 bool fStopCatchup = false;
2099 bool fUpdateStuff = false;
2100 uint64_t off = pVM->tm.s.offVirtualSync;
2101 if (pVM->tm.s.fVirtualSyncCatchUp)
2102 {
2103 uint64_t u64Delta = u64VirtualNow - pVM->tm.s.u64VirtualSyncCatchUpPrev;
2104 if (RT_LIKELY(!(u64Delta >> 32)))
2105 {
2106 uint64_t u64Sub = ASMMultU64ByU32DivByU32(u64Delta, pVM->tm.s.u32VirtualSyncCatchUpPercentage, 100);
2107 if (off > u64Sub + offSyncGivenUp)
2108 {
2109 off -= u64Sub;
2110 Log4(("TM: %'RU64/-%'8RU64: sub %'RU64 [tmR3TimerQueueRunVirtualSync]\n", u64VirtualNow - off, off - offSyncGivenUp, u64Sub));
2111 }
2112 else
2113 {
2114 STAM_PROFILE_ADV_STOP(&pVM->tm.s.StatVirtualSyncCatchup, c);
2115 fStopCatchup = true;
2116 off = offSyncGivenUp;
2117 }
2118 fUpdateStuff = true;
2119 }
2120 }
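        /* Worked example of the catch-up math above (illustrative numbers):
           with u32VirtualSyncCatchUpPercentage = 25 and 4 ms of virtual
           clock delta since the previous call, u64Sub = 4 ms * 25 / 100
           = 1 ms is shaved off the offset, i.e. the virtual sync clock
           effectively runs at 125% of the virtual clock until it has
           caught up. */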
2121 u64Now = u64VirtualNow - off;
2122
2123 /* Adjust against last returned time. */
2124 uint64_t u64Last = ASMAtomicUoReadU64(&pVM->tm.s.u64VirtualSync);
2125 if (u64Last > u64Now)
2126 {
2127 u64Now = u64Last + 1;
2128 STAM_COUNTER_INC(&pVM->tm.s.StatVirtualSyncGetAdjLast);
2129 }
2130
2131 /* Check if stopped by expired timer. */
2132 uint64_t u64Expire = pNext->u64Expire;
2133 if (u64Now >= pNext->u64Expire)
2134 {
2135 STAM_COUNTER_INC(&pVM->tm.s.StatVirtualSyncRunStop);
2136 u64Now = pNext->u64Expire;
2137 ASMAtomicWriteU64(&pVM->tm.s.u64VirtualSync, u64Now);
2138 ASMAtomicWriteBool(&pVM->tm.s.fVirtualSyncTicking, false);
2139 Log4(("TM: %'RU64/-%'8RU64: exp tmr [tmR3TimerQueueRunVirtualSync]\n", u64Now, u64VirtualNow - u64Now - offSyncGivenUp));
2140 }
2141 else
2142 {
2143 ASMAtomicWriteU64(&pVM->tm.s.u64VirtualSync, u64Now);
2144 if (fUpdateStuff)
2145 {
2146 ASMAtomicWriteU64(&pVM->tm.s.offVirtualSync, off);
2147 ASMAtomicWriteU64(&pVM->tm.s.u64VirtualSyncCatchUpPrev, u64VirtualNow);
2148 ASMAtomicWriteU64(&pVM->tm.s.u64VirtualSync, u64Now);
2149 if (fStopCatchup)
2150 {
2151 ASMAtomicWriteBool(&pVM->tm.s.fVirtualSyncCatchUp, false);
2152 Log4(("TM: %'RU64/0: caught up [tmR3TimerQueueRunVirtualSync]\n", u64VirtualNow));
2153 }
2154 }
2155 }
2156 }
2157
2158 /* calc end of frame. */
2159 uint64_t u64Max = u64Now + pVM->tm.s.u32VirtualSyncScheduleSlack;
2160 if (u64Max > u64VirtualNow - offSyncGivenUp)
2161 u64Max = u64VirtualNow - offSyncGivenUp;
2162
2163 /* assert sanity */
2164#ifdef DEBUG_bird
2165 Assert(u64Now <= u64VirtualNow - offSyncGivenUp);
2166 Assert(u64Max <= u64VirtualNow - offSyncGivenUp);
2167 Assert(u64Now <= u64Max);
2168 Assert(offSyncGivenUp == pVM->tm.s.offVirtualSyncGivenUp);
2169#endif
2170
2171 /*
2172 * Process the expired timers moving the clock along as we progress.
2173 */
2174#ifdef DEBUG_bird
2175#ifdef VBOX_STRICT
2176 uint64_t u64Prev = u64Now; NOREF(u64Prev);
2177#endif
2178#endif
2179 while (pNext && pNext->u64Expire <= u64Max)
2180 {
2181 /* Advance */
2182 PTMTIMER pTimer = pNext;
2183 pNext = TMTIMER_GET_NEXT(pTimer);
2184
2185 /* Take the associated lock. */
2186 PPDMCRITSECT pCritSect = pTimer->pCritSect;
2187 if (pCritSect)
2188 PDMCritSectEnter(pCritSect, VERR_INTERNAL_ERROR);
2189
2190 Log2(("tmR3TimerQueueRun: %p:{.enmState=%s, .enmClock=%d, .enmType=%d, u64Expire=%llx (now=%llx) .pszDesc=%s}\n",
2191 pTimer, tmTimerState(pTimer->enmState), pTimer->enmClock, pTimer->enmType, pTimer->u64Expire, u64Now, pTimer->pszDesc));
2192
2193 /* Advance the clock - don't permit timers to be out of order or armed
2194 in the 'past'. */
2195#ifdef DEBUG_bird
2196#ifdef VBOX_STRICT
2197 AssertMsg(pTimer->u64Expire >= u64Prev, ("%'RU64 < %'RU64 %s\n", pTimer->u64Expire, u64Prev, pTimer->pszDesc));
2198 u64Prev = pTimer->u64Expire;
2199#endif
2200#endif
2201 ASMAtomicWriteU64(&pVM->tm.s.u64VirtualSync, pTimer->u64Expire);
2202 ASMAtomicWriteBool(&pVM->tm.s.fVirtualSyncTicking, false);
2203
2204 /* Unlink it, change the state and do the callout. */
2205 tmTimerQueueUnlinkActive(pQueue, pTimer);
2206 TM_SET_STATE(pTimer, TMTIMERSTATE_EXPIRED_DELIVER);
2207 switch (pTimer->enmType)
2208 {
2209 case TMTIMERTYPE_DEV: pTimer->u.Dev.pfnTimer(pTimer->u.Dev.pDevIns, pTimer, pTimer->pvUser); break;
2210 case TMTIMERTYPE_USB: pTimer->u.Usb.pfnTimer(pTimer->u.Usb.pUsbIns, pTimer, pTimer->pvUser); break;
2211 case TMTIMERTYPE_DRV: pTimer->u.Drv.pfnTimer(pTimer->u.Drv.pDrvIns, pTimer, pTimer->pvUser); break;
2212 case TMTIMERTYPE_INTERNAL: pTimer->u.Internal.pfnTimer(pVM, pTimer, pTimer->pvUser); break;
2213 case TMTIMERTYPE_EXTERNAL: pTimer->u.External.pfnTimer(pTimer->pvUser); break;
2214 default:
2215 AssertMsgFailed(("Invalid timer type %d (%s)\n", pTimer->enmType, pTimer->pszDesc));
2216 break;
2217 }
2218
2219 /* Change the state if it wasn't changed already in the handler.
2220 Reset the Hz hint too since this is the same as TMTimerStop. */
2221 bool fRc;
2222 TM_TRY_SET_STATE(pTimer, TMTIMERSTATE_STOPPED, TMTIMERSTATE_EXPIRED_DELIVER, fRc);
2223 if (fRc && pTimer->uHzHint)
2224 {
2225 if (pTimer->uHzHint >= pVM->tm.s.uMaxHzHint)
2226 ASMAtomicWriteBool(&pVM->tm.s.fHzHintNeedsUpdating, true);
2227 pTimer->uHzHint = 0;
2228 }
2229 Log2(("tmR3TimerQueueRun: new state %s\n", tmTimerState(pTimer->enmState)));
2230
2231 /* Leave the associated lock. */
2232 if (pCritSect)
2233 PDMCritSectLeave(pCritSect);
2234 } /* run loop */
2235
2236
2237 /*
2238 * Restart the clock if it was stopped to serve any timers,
2239 * and start/adjust catch-up if necessary.
2240 */
2241 if ( !pVM->tm.s.fVirtualSyncTicking
2242 && pVM->tm.s.cVirtualTicking)
2243 {
2244 STAM_COUNTER_INC(&pVM->tm.s.StatVirtualSyncRunRestart);
2245
2246 /* calc the slack we've handed out. */
2247 const uint64_t u64VirtualNow2 = TMVirtualGetNoCheck(pVM);
2248 Assert(u64VirtualNow2 >= u64VirtualNow);
2249#ifdef DEBUG_bird
2250 AssertMsg(pVM->tm.s.u64VirtualSync >= u64Now, ("%'RU64 < %'RU64\n", pVM->tm.s.u64VirtualSync, u64Now));
2251#endif
2252 const uint64_t offSlack = pVM->tm.s.u64VirtualSync - u64Now;
2253 STAM_STATS({
2254 if (offSlack)
2255 {
2256 PSTAMPROFILE p = &pVM->tm.s.StatVirtualSyncRunSlack;
2257 p->cPeriods++;
2258 p->cTicks += offSlack;
2259 if (p->cTicksMax < offSlack) p->cTicksMax = offSlack;
2260 if (p->cTicksMin > offSlack) p->cTicksMin = offSlack;
2261 }
2262 });
2263
2264 /* Let the time run a little bit while we were busy running timers(?). */
2265 uint64_t u64Elapsed;
2266#define MAX_ELAPSED 30000U /* ns */
2267 if (offSlack > MAX_ELAPSED)
2268 u64Elapsed = 0;
2269 else
2270 {
2271 u64Elapsed = u64VirtualNow2 - u64VirtualNow;
2272 if (u64Elapsed > MAX_ELAPSED)
2273 u64Elapsed = MAX_ELAPSED;
2274 u64Elapsed = u64Elapsed > offSlack ? u64Elapsed - offSlack : 0;
2275 }
2276#undef MAX_ELAPSED
2277
2278 /* Calc the current offset. */
2279 uint64_t offNew = u64VirtualNow2 - pVM->tm.s.u64VirtualSync - u64Elapsed;
2280 Assert(!(offNew & RT_BIT_64(63)));
2281 uint64_t offLag = offNew - pVM->tm.s.offVirtualSyncGivenUp;
2282 Assert(!(offLag & RT_BIT_64(63)));
2283
2284 /*
2285 * Deal with starting, adjusting and stopping catchup.
2286 */
2287 if (pVM->tm.s.fVirtualSyncCatchUp)
2288 {
2289 if (offLag <= pVM->tm.s.u64VirtualSyncCatchUpStopThreshold)
2290 {
2291 /* stop */
2292 STAM_PROFILE_ADV_STOP(&pVM->tm.s.StatVirtualSyncCatchup, c);
2293 ASMAtomicWriteBool(&pVM->tm.s.fVirtualSyncCatchUp, false);
2294 Log4(("TM: %'RU64/-%'8RU64: caught up [pt]\n", u64VirtualNow2 - offNew, offLag));
2295 }
2296 else if (offLag <= pVM->tm.s.u64VirtualSyncCatchUpGiveUpThreshold)
2297 {
2298 /* adjust */
2299 unsigned i = 0;
2300 while ( i + 1 < RT_ELEMENTS(pVM->tm.s.aVirtualSyncCatchUpPeriods)
2301 && offLag >= pVM->tm.s.aVirtualSyncCatchUpPeriods[i + 1].u64Start)
2302 i++;
2303 if (pVM->tm.s.u32VirtualSyncCatchUpPercentage < pVM->tm.s.aVirtualSyncCatchUpPeriods[i].u32Percentage)
2304 {
2305 STAM_COUNTER_INC(&pVM->tm.s.aStatVirtualSyncCatchupAdjust[i]);
2306 ASMAtomicWriteU32(&pVM->tm.s.u32VirtualSyncCatchUpPercentage, pVM->tm.s.aVirtualSyncCatchUpPeriods[i].u32Percentage);
2307 Log4(("TM: %'RU64/%'8RU64: adj %u%%\n", u64VirtualNow2 - offNew, offLag, pVM->tm.s.u32VirtualSyncCatchUpPercentage));
2308 }
2309 pVM->tm.s.u64VirtualSyncCatchUpPrev = u64VirtualNow2;
2310 }
2311 else
2312 {
2313 /* give up */
2314 STAM_COUNTER_INC(&pVM->tm.s.StatVirtualSyncGiveUp);
2315 STAM_PROFILE_ADV_STOP(&pVM->tm.s.StatVirtualSyncCatchup, c);
2316 ASMAtomicWriteU64((uint64_t volatile *)&pVM->tm.s.offVirtualSyncGivenUp, offNew);
2317 ASMAtomicWriteBool(&pVM->tm.s.fVirtualSyncCatchUp, false);
2318 Log4(("TM: %'RU64/%'8RU64: give up %u%%\n", u64VirtualNow2 - offNew, offLag, pVM->tm.s.u32VirtualSyncCatchUpPercentage));
2319 LogRel(("TM: Giving up catch-up attempt at a %'RU64 ns lag; new total: %'RU64 ns\n", offLag, offNew));
2320 }
2321 }
2322 else if (offLag >= pVM->tm.s.aVirtualSyncCatchUpPeriods[0].u64Start)
2323 {
2324 if (offLag <= pVM->tm.s.u64VirtualSyncCatchUpGiveUpThreshold)
2325 {
2326 /* start */
2327 STAM_PROFILE_ADV_START(&pVM->tm.s.StatVirtualSyncCatchup, c);
2328 unsigned i = 0;
2329 while ( i + 1 < RT_ELEMENTS(pVM->tm.s.aVirtualSyncCatchUpPeriods)
2330 && offLag >= pVM->tm.s.aVirtualSyncCatchUpPeriods[i + 1].u64Start)
2331 i++;
2332 STAM_COUNTER_INC(&pVM->tm.s.aStatVirtualSyncCatchupInitial[i]);
2333 ASMAtomicWriteU32(&pVM->tm.s.u32VirtualSyncCatchUpPercentage, pVM->tm.s.aVirtualSyncCatchUpPeriods[i].u32Percentage);
2334 ASMAtomicWriteBool(&pVM->tm.s.fVirtualSyncCatchUp, true);
2335 Log4(("TM: %'RU64/%'8RU64: catch-up %u%%\n", u64VirtualNow2 - offNew, offLag, pVM->tm.s.u32VirtualSyncCatchUpPercentage));
2336 }
2337 else
2338 {
2339 /* don't bother */
2340 STAM_COUNTER_INC(&pVM->tm.s.StatVirtualSyncGiveUpBeforeStarting);
2341 ASMAtomicWriteU64((uint64_t volatile *)&pVM->tm.s.offVirtualSyncGivenUp, offNew);
2342 Log4(("TM: %'RU64/%'8RU64: give up\n", u64VirtualNow2 - offNew, offLag));
2343 LogRel(("TM: Not bothering to attempt catching up a %'RU64 ns lag; new total: %'RU64\n", offLag, offNew));
2344 }
2345 }
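
        /* Summary of the decisions above: lag at or below the stop threshold
           ends catch-up; lag up to the give-up threshold selects a percentage
           from aVirtualSyncCatchUpPeriods (bigger lag, more aggressive rate);
           anything beyond that is written off via offVirtualSyncGivenUp, so
           the guest clock simply loses that time instead of chasing it. */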
2346
2347 /*
2348 * Update the offset and restart the clock.
2349 */
2350 Assert(!(offNew & RT_BIT_64(63)));
2351 ASMAtomicWriteU64(&pVM->tm.s.offVirtualSync, offNew);
2352 ASMAtomicWriteBool(&pVM->tm.s.fVirtualSyncTicking, true);
2353 }
2354}
2355
2356
2357/**
2358 * Deals with stopped Virtual Sync clock.
2359 *
2360 * This is called by the forced action flag handling code in EM when it
2361 * encounters the VM_FF_TM_VIRTUAL_SYNC flag. It is called by all VCPUs and they
2362 * will block on the VirtualSyncLock until the pending timers have been executed
2363 * and the clock restarted.
2364 *
2365 * @param pVM The VM to run the timers for.
2366 * @param pVCpu The virtual CPU we're running on.
2367 *
2368 * @thread EMTs
2369 */
2370VMMR3_INT_DECL(void) TMR3VirtualSyncFF(PVM pVM, PVMCPU pVCpu)
2371{
2372 Log2(("TMR3VirtualSyncFF:\n"));
2373
2374 /*
2375 * The EMT doing the timers is diverted to them.
2376 */
2377 if (pVCpu->idCpu == pVM->tm.s.idTimerCpu)
2378 TMR3TimerQueuesDo(pVM);
2379 /*
2380 * The other EMTs will block on the virtual sync lock and the first owner
2381 * will run the queue and thus restarting the clock.
2382 *
2383 * Note! This is very suboptimal code wrt resuming execution when there
2384 * are more than two Virtual CPUs, since they will all have to enter
2385 * the critical section one by one. But it's a very simple solution
2386 * which will have to do the job for now.
2387 */
2388 else
2389 {
2390 STAM_PROFILE_START(&pVM->tm.s.StatVirtualSyncFF, a);
2391 tmVirtualSyncLock(pVM);
2392 if (pVM->tm.s.fVirtualSyncTicking)
2393 {
2394 STAM_PROFILE_STOP(&pVM->tm.s.StatVirtualSyncFF, a); /* before the unlock! */
2395 tmVirtualSyncUnlock(pVM);
2396 Log2(("TMR3VirtualSyncFF: ticking\n"));
2397 }
2398 else
2399 {
2400 tmVirtualSyncUnlock(pVM);
2401
2402 /* try run it. */
2403 tmTimerLock(pVM);
2404 tmVirtualSyncLock(pVM);
2405 if (pVM->tm.s.fVirtualSyncTicking)
2406 Log2(("TMR3VirtualSyncFF: ticking (2)\n"));
2407 else
2408 {
2409 ASMAtomicWriteBool(&pVM->tm.s.fRunningVirtualSyncQueue, true);
2410 Log2(("TMR3VirtualSyncFF: running queue\n"));
2411
2412 Assert(!pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL_SYNC].offSchedule);
2413 tmR3TimerQueueRunVirtualSync(pVM);
2414 if (pVM->tm.s.fVirtualSyncTicking) /** @todo move into tmR3TimerQueueRunVirtualSync - FIXME */
2415 VM_FF_CLEAR(pVM, VM_FF_TM_VIRTUAL_SYNC);
2416
2417 ASMAtomicWriteBool(&pVM->tm.s.fRunningVirtualSyncQueue, false);
2418 }
2419 STAM_PROFILE_STOP(&pVM->tm.s.StatVirtualSyncFF, a); /* before the unlock! */
2420 tmVirtualSyncUnlock(pVM);
2421 tmTimerUnlock(pVM);
2422 }
2423 }
2424}
2425
2426
2427/** @name Saved state values
2428 * @{ */
2429#define TMTIMERSTATE_SAVED_PENDING_STOP 4
2430#define TMTIMERSTATE_SAVED_PENDING_SCHEDULE 7
2431/** @} */
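
/* Per-timer saved state layout implied by TMR3TimerSave and TMR3TimerLoad
 * below (a sketch for orientation; the functions are authoritative):
 *   uint8_t  u8State;   - TMTIMERSTATE_SAVED_PENDING_STOP or
 *                         TMTIMERSTATE_SAVED_PENDING_SCHEDULE
 *   uint64_t u64Expire; - only present in the PENDING_SCHEDULE case
 */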
2432
2433
2434/**
2435 * Saves the state of a timer to a saved state.
2436 *
2437 * @returns VBox status.
2438 * @param pTimer Timer to save.
2439 * @param pSSM Save State Manager handle.
2440 */
2441VMMR3DECL(int) TMR3TimerSave(PTMTIMERR3 pTimer, PSSMHANDLE pSSM)
2442{
2443 LogFlow(("TMR3TimerSave: %p:{enmState=%s, .pszDesc={%s}} pSSM=%p\n", pTimer, tmTimerState(pTimer->enmState), pTimer->pszDesc, pSSM));
2444 switch (pTimer->enmState)
2445 {
2446 case TMTIMERSTATE_STOPPED:
2447 case TMTIMERSTATE_PENDING_STOP:
2448 case TMTIMERSTATE_PENDING_STOP_SCHEDULE:
2449 return SSMR3PutU8(pSSM, TMTIMERSTATE_SAVED_PENDING_STOP);
2450
2451 case TMTIMERSTATE_PENDING_SCHEDULE_SET_EXPIRE:
2452 case TMTIMERSTATE_PENDING_RESCHEDULE_SET_EXPIRE:
2453 AssertMsgFailed(("u64Expire is being updated! (%s)\n", pTimer->pszDesc));
2454 if (!RTThreadYield())
2455 RTThreadSleep(1);
2456 /* fall thru */
2457 case TMTIMERSTATE_ACTIVE:
2458 case TMTIMERSTATE_PENDING_SCHEDULE:
2459 case TMTIMERSTATE_PENDING_RESCHEDULE:
2460 SSMR3PutU8(pSSM, TMTIMERSTATE_SAVED_PENDING_SCHEDULE);
2461 return SSMR3PutU64(pSSM, pTimer->u64Expire);
2462
2463 case TMTIMERSTATE_EXPIRED_GET_UNLINK:
2464 case TMTIMERSTATE_EXPIRED_DELIVER:
2465 case TMTIMERSTATE_DESTROY:
2466 case TMTIMERSTATE_FREE:
2467 AssertMsgFailed(("Invalid timer state %d %s (%s)\n", pTimer->enmState, tmTimerState(pTimer->enmState), pTimer->pszDesc));
2468 return SSMR3HandleSetStatus(pSSM, VERR_TM_INVALID_STATE);
2469 }
2470
2471 AssertMsgFailed(("Unknown timer state %d (%s)\n", pTimer->enmState, pTimer->pszDesc));
2472 return SSMR3HandleSetStatus(pSSM, VERR_TM_UNKNOWN_STATE);
2473}
2474
2475
2476/**
2477 * Loads the state of a timer from a saved state.
2478 *
2479 * @returns VBox status.
2480 * @param pTimer Timer to restore.
2481 * @param pSSM Save State Manager handle.
2482 */
2483VMMR3DECL(int) TMR3TimerLoad(PTMTIMERR3 pTimer, PSSMHANDLE pSSM)
2484{
2485 Assert(pTimer); Assert(pSSM); VM_ASSERT_EMT(pTimer->pVMR3);
2486 LogFlow(("TMR3TimerLoad: %p:{enmState=%s, .pszDesc={%s}} pSSM=%p\n", pTimer, tmTimerState(pTimer->enmState), pTimer->pszDesc, pSSM));
2487
2488 /*
2489 * Load the state and validate it.
2490 */
2491 uint8_t u8State;
2492 int rc = SSMR3GetU8(pSSM, &u8State);
2493 if (RT_FAILURE(rc))
2494 return rc;
2495#if 1 /* Workaround for accidental state shift in r47786 (2009-05-26 19:12:12). */ /** @todo remove this in a few weeks! */
2496 if ( u8State == TMTIMERSTATE_SAVED_PENDING_STOP + 1
2497 || u8State == TMTIMERSTATE_SAVED_PENDING_SCHEDULE + 1)
2498 u8State--;
2499#endif
2500 if ( u8State != TMTIMERSTATE_SAVED_PENDING_STOP
2501 && u8State != TMTIMERSTATE_SAVED_PENDING_SCHEDULE)
2502 {
2503 AssertLogRelMsgFailed(("u8State=%d\n", u8State));
2504 return SSMR3HandleSetStatus(pSSM, VERR_TM_LOAD_STATE);
2505 }
2506
2507 /* Enter the critical section to make TMTimerSet/Stop happy. */
2508 PPDMCRITSECT pCritSect = pTimer->pCritSect;
2509 if (pCritSect)
2510 PDMCritSectEnter(pCritSect, VERR_INTERNAL_ERROR);
2511
2512 if (u8State == TMTIMERSTATE_SAVED_PENDING_SCHEDULE)
2513 {
2514 /*
2515 * Load the expire time.
2516 */
2517 uint64_t u64Expire;
2518 rc = SSMR3GetU64(pSSM, &u64Expire);
2519 if (RT_FAILURE(rc))
2520 return rc;
2521
2522 /*
2523 * Set it.
2524 */
2525 Log(("u8State=%d u64Expire=%llu\n", u8State, u64Expire));
2526 rc = TMTimerSet(pTimer, u64Expire);
2527 }
2528 else
2529 {
2530 /*
2531 * Stop it.
2532 */
2533 Log(("u8State=%d\n", u8State));
2534 rc = TMTimerStop(pTimer);
2535 }
2536
2537 if (pCritSect)
2538 PDMCritSectLeave(pCritSect);
2539
2540 /*
2541 * On failure set SSM status.
2542 */
2543 if (RT_FAILURE(rc))
2544 rc = SSMR3HandleSetStatus(pSSM, rc);
2545 return rc;
2546}
2547
2548
2549/**
2550 * Associates a critical section with a timer.
2551 *
2552 * The critical section will be entered prior to doing the timer callback, thus
2553 * avoiding potential races between the timer thread and other threads trying to
2554 * stop or adjust the timer expiration while it's being delivered. The timer
2555 * thread will leave the critical section when the timer callback returns.
2556 *
2557 * In strict builds, ownership of the critical section will be asserted by
2558 * TMTimerSet, TMTimerStop, TMTimerGetExpire and TMTimerDestroy (when called at
2559 * runtime).
2560 *
2561 * @retval VINF_SUCCESS on success.
2562 * @retval VERR_INVALID_HANDLE if the timer handle is NULL or invalid
2563 * (asserted).
2564 * @retval VERR_INVALID_PARAMETER if pCritSect is NULL or has an invalid magic
2565 * (asserted).
2566 * @retval VERR_ALREADY_EXISTS if a critical section was already associated
2567 * with the timer (asserted).
2568 * @retval VERR_INVALID_STATE if the timer isn't stopped.
2569 *
2570 * @param pTimer The timer handle.
2571 * @param pCritSect The critical section. The caller must make sure this
2572 * is around for the life time of the timer.
2573 *
2574 * @thread Any, but the caller is responsible for making sure the timer is not
2575 * active.
2576 */
2577VMMR3DECL(int) TMR3TimerSetCritSect(PTMTIMERR3 pTimer, PPDMCRITSECT pCritSect)
2578{
2579 AssertPtrReturn(pTimer, VERR_INVALID_HANDLE);
2580 AssertPtrReturn(pCritSect, VERR_INVALID_PARAMETER);
2581 const char *pszName = PDMR3CritSectName(pCritSect); /* exploited for validation */
2582 AssertReturn(pszName, VERR_INVALID_PARAMETER);
2583 AssertReturn(!pTimer->pCritSect, VERR_ALREADY_EXISTS);
2584 AssertReturn(pTimer->enmState == TMTIMERSTATE_STOPPED, VERR_INVALID_STATE);
2585 LogFlow(("pTimer=%p (%s) pCritSect=%p (%s)\n", pTimer, pTimer->pszDesc, pCritSect, pszName));
2586
2587 pTimer->pCritSect = pCritSect;
2588 return VINF_SUCCESS;
2589}
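
/* Illustrative pairing of TMTIMER_FLAGS_NO_CRIT_SECT with an explicit
 * association (a sketch, not part of this file; pThis, its pTimer and
 * MyCritSect members, and exampleDevTimerCb are hypothetical): */
#if 0
    rc = TMR3TimerCreateDevice(pVM, pDevIns, TMCLOCK_VIRTUAL, exampleDevTimerCb, pThis,
                               TMTIMER_FLAGS_NO_CRIT_SECT, "Example timer", &pThis->pTimer);
    if (RT_SUCCESS(rc))
        rc = TMR3TimerSetCritSect(pThis->pTimer, &pThis->MyCritSect);
#endif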
2590
2591
2592/**
2593 * Get the real world UTC time adjusted for VM lag.
2594 *
2595 * @returns pTime.
2596 * @param pVM The VM instance.
2597 * @param pTime Where to store the time.
2598 */
2599VMMR3_INT_DECL(PRTTIMESPEC) TMR3UtcNow(PVM pVM, PRTTIMESPEC pTime)
2600{
2601 RTTimeNow(pTime);
2602 RTTimeSpecSubNano(pTime, ASMAtomicReadU64(&pVM->tm.s.offVirtualSync) - ASMAtomicReadU64((uint64_t volatile *)&pVM->tm.s.offVirtualSyncGivenUp));
2603 RTTimeSpecAddNano(pTime, pVM->tm.s.offUTC);
2604 return pTime;
2605}


/**
 * Pauses all clocks except TMCLOCK_REAL.
 *
 * @returns VBox status code, all errors are asserted.
 * @param   pVM     The VM handle.
 * @param   pVCpu   The virtual CPU handle.
 * @thread  EMT corresponding to the virtual CPU handle.
 */
VMMR3DECL(int) TMR3NotifySuspend(PVM pVM, PVMCPU pVCpu)
{
    VMCPU_ASSERT_EMT(pVCpu);

    /*
     * The shared virtual clock (includes virtual sync which is tied to it).
     */
    tmTimerLock(pVM); /* Paranoia: Exploiting the timer lock here. */
    int rc = tmVirtualPauseLocked(pVM);
    tmTimerUnlock(pVM);
    if (RT_FAILURE(rc))
        return rc;

    /*
     * Pause the TSC last since it is normally linked to the virtual sync
     * clock, so the above code may actually stop both clocks.
     */
    rc = tmCpuTickPause(pVM, pVCpu);
    if (RT_FAILURE(rc))
        return rc;

#ifndef VBOX_WITHOUT_NS_ACCOUNTING
    /*
     * Update cNsTotal.
     */
    uint32_t uGen = ASMAtomicIncU32(&pVCpu->tm.s.uTimesGen); Assert(uGen & 1);
    pVCpu->tm.s.cNsTotal = RTTimeNanoTS() - pVCpu->tm.s.u64NsTsStartTotal;
    pVCpu->tm.s.cNsOther = pVCpu->tm.s.cNsTotal - pVCpu->tm.s.cNsExecuting - pVCpu->tm.s.cNsHalted;
    ASMAtomicWriteU32(&pVCpu->tm.s.uTimesGen, (uGen | 1) + 1);
#endif

    return VINF_SUCCESS;
}
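
/*
 * The uTimesGen sequence above is the writer side of a seqlock-style
 * protocol: the counter is made odd before the statistics are touched and
 * bumped to the next even value afterwards.  A reader therefore knows its
 * sample is consistent when the counter was even and unchanged across the
 * reads, roughly:
 *
 *      do
 *      {
 *          uGen     = ASMAtomicReadU32(&pVCpu->tm.s.uTimesGen);
 *          cNsTotal = pVCpu->tm.s.cNsTotal;        (...and friends)
 *      } while (   (uGen & 1)                      (update in progress)
 *               || uGen != ASMAtomicReadU32(&pVCpu->tm.s.uTimesGen));
 *
 * This is exactly what TMR3GetCpuLoadTimes and tmR3CpuLoadTimer below do.
 */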


/**
 * Resumes all clocks except TMCLOCK_REAL.
 *
 * @returns VBox status code, all errors are asserted.
 * @param   pVM     The VM handle.
 * @param   pVCpu   The virtual CPU handle.
 * @thread  EMT corresponding to the virtual CPU handle.
 */
VMMR3DECL(int) TMR3NotifyResume(PVM pVM, PVMCPU pVCpu)
{
    VMCPU_ASSERT_EMT(pVCpu);
    int rc;

#ifndef VBOX_WITHOUT_NS_ACCOUNTING
    /*
     * Set u64NsTsStartTotal. There is no need to back this out if either of
     * the two calls below fails.
     */
    pVCpu->tm.s.u64NsTsStartTotal = RTTimeNanoTS() - pVCpu->tm.s.cNsTotal;
#endif

    /*
     * Resume the TSC first since it is normally linked to the virtual sync
     * clock, so it may actually not be resumed until we've executed the code
     * below.
     */
    if (!pVM->tm.s.fTSCTiedToExecution)
    {
        rc = tmCpuTickResume(pVM, pVCpu);
        if (RT_FAILURE(rc))
            return rc;
    }

    /*
     * The shared virtual clock (includes virtual sync which is tied to it).
     */
    tmTimerLock(pVM); /* Paranoia: Exploiting the timer lock here. */
    rc = tmVirtualResumeLocked(pVM);
    tmTimerUnlock(pVM);

    return rc;
}


/**
 * Sets the warp drive percent of the virtual time.
 *
 * @returns VBox status code.
 * @param   pVM         The VM handle.
 * @param   u32Percent  The new percentage. 100 means normal operation.
 */
VMMDECL(int) TMR3SetWarpDrive(PVM pVM, uint32_t u32Percent)
{
    return VMR3ReqCallWait(pVM, VMCPUID_ANY, (PFNRT)tmR3SetWarpDrive, 2, pVM, u32Percent);
}


/**
 * EMT worker for TMR3SetWarpDrive.
 *
 * @returns VBox status code.
 * @param   pVM         The VM handle.
 * @param   u32Percent  See TMR3SetWarpDrive().
 * @internal
 */
static DECLCALLBACK(int) tmR3SetWarpDrive(PVM pVM, uint32_t u32Percent)
{
    PVMCPU pVCpu = VMMGetCpu(pVM);

    /*
     * Validate it.
     */
    AssertMsgReturn(u32Percent >= 2 && u32Percent <= 20000,
                    ("%RX32 is not between 2 and 20000 (inclusive).\n", u32Percent),
                    VERR_INVALID_PARAMETER);

/** @todo This isn't a feature specific to virtual time, move the variables to
 * TM level and make it affect TMR3UTCNow as well! */

    /*
     * If the time is running we'll have to pause it before we can change
     * the warp drive settings.
     */
    tmTimerLock(pVM); /* Paranoia: Exploiting the timer lock here. */
    bool fPaused = !!pVM->tm.s.cVirtualTicking;
    if (fPaused) /** @todo this isn't really working, but wtf. */
        TMR3NotifySuspend(pVM, pVCpu);

    pVM->tm.s.u32VirtualWarpDrivePercentage = u32Percent;
    pVM->tm.s.fVirtualWarpDrive = u32Percent != 100;
    LogRel(("TM: u32VirtualWarpDrivePercentage=%RI32 fVirtualWarpDrive=%RTbool\n",
            pVM->tm.s.u32VirtualWarpDrivePercentage, pVM->tm.s.fVirtualWarpDrive));

    if (fPaused)
        TMR3NotifyResume(pVM, pVCpu);
    tmTimerUnlock(pVM);
    return VINF_SUCCESS;
}
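
/*
 * A quick worked example of the range validated above: u32Percent=100 is
 * real time; 200 makes the virtual clocks run twice as fast as real time
 * (10 s of wall-clock time becomes 20 s of guest time), while 50 runs them
 * at half speed.  The accepted 2..20000 range thus spans 0.02x to 200x.
 */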


/**
 * Gets the performance information for one virtual CPU as seen by the VMM.
 *
 * The returned times cover the period where the VM is running and will be
 * reset when restoring a previous VM state (at least for the time being).
 *
 * @retval  VINF_SUCCESS on success.
 * @retval  VERR_NOT_IMPLEMENTED if not compiled in.
 * @retval  VERR_INVALID_STATE if the VM handle is bad.
 * @retval  VERR_INVALID_PARAMETER if idCpu is out of range.
 *
 * @param   pVM             The VM handle.
 * @param   idCpu           The ID of the virtual CPU whose times to get.
 * @param   pcNsTotal       Where to store the total run time (nanoseconds) of
 *                          the CPU, i.e. the sum of the three other returns.
 *                          Optional.
 * @param   pcNsExecuting   Where to store the time (nanoseconds) spent
 *                          executing guest code. Optional.
 * @param   pcNsHalted      Where to store the time (nanoseconds) spent
 *                          halted. Optional.
 * @param   pcNsOther       Where to store the time (nanoseconds) spent
 *                          preempted by the host scheduler, on virtualization
 *                          overhead and on other tasks.
 */
VMMR3DECL(int) TMR3GetCpuLoadTimes(PVM pVM, VMCPUID idCpu, uint64_t *pcNsTotal, uint64_t *pcNsExecuting,
                                   uint64_t *pcNsHalted, uint64_t *pcNsOther)
{
    VM_ASSERT_VALID_EXT_RETURN(pVM, VERR_INVALID_STATE);
    AssertReturn(idCpu < pVM->cCpus, VERR_INVALID_PARAMETER);

#ifndef VBOX_WITHOUT_NS_ACCOUNTING
    /*
     * Get a stable result set.
     * This should be way quicker than an EMT request.
     */
    PVMCPU      pVCpu        = &pVM->aCpus[idCpu];
    uint32_t    uTimesGen    = ASMAtomicReadU32(&pVCpu->tm.s.uTimesGen);
    uint64_t    cNsTotal     = pVCpu->tm.s.cNsTotal;
    uint64_t    cNsExecuting = pVCpu->tm.s.cNsExecuting;
    uint64_t    cNsHalted    = pVCpu->tm.s.cNsHalted;
    uint64_t    cNsOther     = pVCpu->tm.s.cNsOther;
    while (   (uTimesGen & 1) /* update in progress */
           || uTimesGen != ASMAtomicReadU32(&pVCpu->tm.s.uTimesGen))
    {
        RTThreadYield();
        uTimesGen    = ASMAtomicReadU32(&pVCpu->tm.s.uTimesGen);
        cNsTotal     = pVCpu->tm.s.cNsTotal;
        cNsExecuting = pVCpu->tm.s.cNsExecuting;
        cNsHalted    = pVCpu->tm.s.cNsHalted;
        cNsOther     = pVCpu->tm.s.cNsOther;
    }

    /*
     * Fill in the return values.
     */
    if (pcNsTotal)
        *pcNsTotal = cNsTotal;
    if (pcNsExecuting)
        *pcNsExecuting = cNsExecuting;
    if (pcNsHalted)
        *pcNsHalted = cNsHalted;
    if (pcNsOther)
        *pcNsOther = cNsOther;

    return VINF_SUCCESS;

#else
    return VERR_NOT_IMPLEMENTED;
#endif
}
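
/*
 * Illustrative caller sketch, not from this file: deriving an "executing"
 * percentage for VCPU 0 from two samples taken one second apart.  The local
 * variables are hypothetical; only TMR3GetCpuLoadTimes and the IPRT calls
 * are real.
 *
 *      uint64_t cNsTotal1, cNsExec1, cNsTotal2, cNsExec2;
 *      TMR3GetCpuLoadTimes(pVM, 0, &cNsTotal1, &cNsExec1, NULL, NULL);
 *      RTThreadSleep(1000);
 *      TMR3GetCpuLoadTimes(pVM, 0, &cNsTotal2, &cNsExec2, NULL, NULL);
 *      uint64_t cPctExec = (cNsExec2 - cNsExec1) * 100
 *                        / RT_MAX(cNsTotal2 - cNsTotal1, 1);
 */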

#ifndef VBOX_WITHOUT_NS_ACCOUNTING

/**
 * Helper for tmR3CpuLoadTimer: computes the deltas since the previous
 * invocation and turns them into percentages.
 *
 * @param   pState          The state to update.
 * @param   cNsTotal        Total time.
 * @param   cNsExecuting    Time executing.
 * @param   cNsHalted       Time halted.
 */
DECLINLINE(void) tmR3CpuLoadTimerMakeUpdate(PTMCPULOADSTATE pState,
                                            uint64_t cNsTotal,
                                            uint64_t cNsExecuting,
                                            uint64_t cNsHalted)
{
    /* Calc deltas */
    uint64_t cNsTotalDelta = cNsTotal - pState->cNsPrevTotal;
    pState->cNsPrevTotal = cNsTotal;

    uint64_t cNsExecutingDelta = cNsExecuting - pState->cNsPrevExecuting;
    pState->cNsPrevExecuting = cNsExecuting;

    uint64_t cNsHaltedDelta = cNsHalted - pState->cNsPrevHalted;
    pState->cNsPrevHalted = cNsHalted;

    /* Calc pcts. */
    if (!cNsTotalDelta)
    {
        pState->cPctExecuting = 0;
        pState->cPctHalted    = 100;
        pState->cPctOther     = 0;
    }
    else if (cNsTotalDelta < UINT64_MAX / 4)
    {
        pState->cPctExecuting = (uint8_t)(cNsExecutingDelta * 100 / cNsTotalDelta);
        pState->cPctHalted    = (uint8_t)(cNsHaltedDelta    * 100 / cNsTotalDelta);
        pState->cPctOther     = (uint8_t)((cNsTotalDelta - cNsExecutingDelta - cNsHaltedDelta) * 100 / cNsTotalDelta);
    }
    else
    {
        pState->cPctExecuting = 0;
        pState->cPctHalted    = 100;
        pState->cPctOther     = 0;
    }
}
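
/*
 * Worked example with invented numbers: over a 1 second sampling interval
 * cNsTotalDelta = 1e9; with cNsExecutingDelta = 250e6 and cNsHaltedDelta =
 * 600e6 the percentages come out as 25% executing, 60% halted and 15%
 * other.  The UINT64_MAX / 4 guard catches wrapped-around (effectively
 * negative) deltas, e.g. right after the previous-value fields have been
 * reset, which would otherwise produce nonsense percentages.
 */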


/**
 * Timer callback that calculates the CPU load since the last time it was
 * called.
 *
 * @param   pVM     The VM handle.
 * @param   pTimer  The timer.
 * @param   pvUser  NULL, unused.
 */
static DECLCALLBACK(void) tmR3CpuLoadTimer(PVM pVM, PTMTIMER pTimer, void *pvUser)
{
    /*
     * Re-arm the timer first.
     */
    int rc = TMTimerSetMillies(pTimer, 1000);
    AssertLogRelRC(rc);
    NOREF(pvUser);

    /*
     * Update the values for each CPU.
     */
    uint64_t cNsTotalAll     = 0;
    uint64_t cNsExecutingAll = 0;
    uint64_t cNsHaltedAll    = 0;
    for (VMCPUID iCpu = 0; iCpu < pVM->cCpus; iCpu++)
    {
        PVMCPU pVCpu = &pVM->aCpus[iCpu];

        /* Try to get a stable data set. */
        uint32_t cTries       = 3;
        uint32_t uTimesGen    = ASMAtomicReadU32(&pVCpu->tm.s.uTimesGen);
        uint64_t cNsTotal     = pVCpu->tm.s.cNsTotal;
        uint64_t cNsExecuting = pVCpu->tm.s.cNsExecuting;
        uint64_t cNsHalted    = pVCpu->tm.s.cNsHalted;
        while (RT_UNLIKELY(   (uTimesGen & 1) /* update in progress */
                           || uTimesGen != ASMAtomicReadU32(&pVCpu->tm.s.uTimesGen)))
        {
            if (!--cTries)
                break;
            ASMNopPause();
            uTimesGen    = ASMAtomicReadU32(&pVCpu->tm.s.uTimesGen);
            cNsTotal     = pVCpu->tm.s.cNsTotal;
            cNsExecuting = pVCpu->tm.s.cNsExecuting;
            cNsHalted    = pVCpu->tm.s.cNsHalted;
        }

        /* Totals */
        cNsTotalAll     += cNsTotal;
        cNsExecutingAll += cNsExecuting;
        cNsHaltedAll    += cNsHalted;

        /* Calc the PCTs and update the state. */
        tmR3CpuLoadTimerMakeUpdate(&pVCpu->tm.s.CpuLoad, cNsTotal, cNsExecuting, cNsHalted);
    }

    /*
     * Update the value for all the CPUs.
     */
    tmR3CpuLoadTimerMakeUpdate(&pVM->tm.s.CpuLoad, cNsTotalAll, cNsExecutingAll, cNsHaltedAll);

    /** @todo Try adding 1, 5 and 15 min load stats. */
}
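
/*
 * Note the self-rearming pattern above: the callback's first action is to
 * re-arm the timer for another second, so a slow pass delays the next tick
 * rather than letting ticks pile up.  A plausible creation-side sketch (the
 * actual registration is outside this excerpt; the variable name is
 * illustrative):
 *
 *      PTMTIMER pTimer;
 *      rc = TMR3TimerCreateInternal(pVM, TMCLOCK_REAL, tmR3CpuLoadTimer,
 *                                   NULL, "CPU Load Timer", &pTimer);
 *      if (RT_SUCCESS(rc))
 *          rc = TMTimerSetMillies(pTimer, 1000);
 */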

#endif /* !VBOX_WITHOUT_NS_ACCOUNTING */

/**
 * Gets the 5 char clock name for the info tables.
 *
 * @returns The name.
 * @param   enmClock    The clock.
 */
DECLINLINE(const char *) tmR3Get5CharClockName(TMCLOCK enmClock)
{
    switch (enmClock)
    {
        case TMCLOCK_REAL:         return "Real ";
        case TMCLOCK_VIRTUAL:      return "Virt ";
        case TMCLOCK_VIRTUAL_SYNC: return "VrSy ";
        case TMCLOCK_TSC:          return "TSC  ";
        default:                   return "Bad  ";
    }
}


/**
 * Display all timers.
 *
 * @param   pVM         VM Handle.
 * @param   pHlp        The info helpers.
 * @param   pszArgs     Arguments, ignored.
 */
static DECLCALLBACK(void) tmR3TimerInfo(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
{
    NOREF(pszArgs);
    pHlp->pfnPrintf(pHlp,
                    "Timers (pVM=%p)\n"
                    "%.*s %.*s %.*s %.*s Clock %18s %18s %6s %-25s Description\n",
                    pVM,
                    sizeof(RTR3PTR) * 2, "pTimerR3 ",
                    sizeof(int32_t) * 2, "offNext  ",
                    sizeof(int32_t) * 2, "offPrev  ",
                    sizeof(int32_t) * 2, "offSched ",
                    "Time",
                    "Expire",
                    "HzHint",
                    "State");
    tmTimerLock(pVM);
    for (PTMTIMERR3 pTimer = pVM->tm.s.pCreated; pTimer; pTimer = pTimer->pBigNext)
    {
        pHlp->pfnPrintf(pHlp,
                        "%p %08RX32 %08RX32 %08RX32 %s %18RU64 %18RU64 %6RU32 %-25s %s\n",
                        pTimer,
                        pTimer->offNext,
                        pTimer->offPrev,
                        pTimer->offScheduleNext,
                        tmR3Get5CharClockName(pTimer->enmClock),
                        TMTimerGet(pTimer),
                        pTimer->u64Expire,
                        pTimer->uHzHint,
                        tmTimerState(pTimer->enmState),
                        pTimer->pszDesc);
    }
    tmTimerUnlock(pVM);
}


/**
 * Display all active timers.
 *
 * @param   pVM         VM Handle.
 * @param   pHlp        The info helpers.
 * @param   pszArgs     Arguments, ignored.
 */
static DECLCALLBACK(void) tmR3TimerInfoActive(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
{
    NOREF(pszArgs);
    pHlp->pfnPrintf(pHlp,
                    "Active Timers (pVM=%p)\n"
                    "%.*s %.*s %.*s %.*s Clock %18s %18s %6s %-25s Description\n",
                    pVM,
                    sizeof(RTR3PTR) * 2, "pTimerR3 ",
                    sizeof(int32_t) * 2, "offNext  ",
                    sizeof(int32_t) * 2, "offPrev  ",
                    sizeof(int32_t) * 2, "offSched ",
                    "Time",
                    "Expire",
                    "HzHint",
                    "State");
    for (unsigned iQueue = 0; iQueue < TMCLOCK_MAX; iQueue++)
    {
        tmTimerLock(pVM);
        for (PTMTIMERR3 pTimer = TMTIMER_GET_HEAD(&pVM->tm.s.paTimerQueuesR3[iQueue]);
             pTimer;
             pTimer = TMTIMER_GET_NEXT(pTimer))
        {
            pHlp->pfnPrintf(pHlp,
                            "%p %08RX32 %08RX32 %08RX32 %s %18RU64 %18RU64 %6RU32 %-25s %s\n",
                            pTimer,
                            pTimer->offNext,
                            pTimer->offPrev,
                            pTimer->offScheduleNext,
                            tmR3Get5CharClockName(pTimer->enmClock),
                            TMTimerGet(pTimer),
                            pTimer->u64Expire,
                            pTimer->uHzHint,
                            tmTimerState(pTimer->enmState),
                            pTimer->pszDesc);
        }
        tmTimerUnlock(pVM);
    }
}


/**
 * Display all clocks.
 *
 * @param   pVM         VM Handle.
 * @param   pHlp        The info helpers.
 * @param   pszArgs     Arguments, ignored.
 */
static DECLCALLBACK(void) tmR3InfoClocks(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
{
    NOREF(pszArgs);

    /*
     * Read the times first to avoid more time variation than necessary.
     */
    const uint64_t u64Virtual     = TMVirtualGet(pVM);
    const uint64_t u64VirtualSync = TMVirtualSyncGet(pVM);
    const uint64_t u64Real        = TMRealGet(pVM);

    for (VMCPUID i = 0; i < pVM->cCpus; i++)
    {
        PVMCPU   pVCpu  = &pVM->aCpus[i];
        uint64_t u64TSC = TMCpuTickGet(pVCpu);

        /*
         * TSC
         */
        pHlp->pfnPrintf(pHlp,
                        "Cpu Tick: %18RU64 (%#016RX64) %RU64Hz %s%s",
                        u64TSC, u64TSC, TMCpuTicksPerSecond(pVM),
                        pVCpu->tm.s.fTSCTicking ? "ticking" : "paused",
                        pVM->tm.s.fTSCVirtualized ? " - virtualized" : "");
        if (pVM->tm.s.fTSCUseRealTSC)
        {
            pHlp->pfnPrintf(pHlp, " - real tsc");
            if (pVCpu->tm.s.offTSCRawSrc)
                pHlp->pfnPrintf(pHlp, "\n          offset %RU64", pVCpu->tm.s.offTSCRawSrc);
        }
        else
            pHlp->pfnPrintf(pHlp, " - virtual clock");
        pHlp->pfnPrintf(pHlp, "\n");
    }

    /*
     * virtual
     */
    pHlp->pfnPrintf(pHlp,
                    " Virtual: %18RU64 (%#016RX64) %RU64Hz %s",
                    u64Virtual, u64Virtual, TMVirtualGetFreq(pVM),
                    pVM->tm.s.cVirtualTicking ? "ticking" : "paused");
    if (pVM->tm.s.fVirtualWarpDrive)
        pHlp->pfnPrintf(pHlp, " WarpDrive %RU32 %%", pVM->tm.s.u32VirtualWarpDrivePercentage);
    pHlp->pfnPrintf(pHlp, "\n");

    /*
     * virtual sync
     */
    pHlp->pfnPrintf(pHlp,
                    "VirtSync: %18RU64 (%#016RX64) %s%s",
                    u64VirtualSync, u64VirtualSync,
                    pVM->tm.s.fVirtualSyncTicking ? "ticking" : "paused",
                    pVM->tm.s.fVirtualSyncCatchUp ? " - catchup" : "");
    if (pVM->tm.s.offVirtualSync)
    {
        pHlp->pfnPrintf(pHlp, "\n          offset %RU64", pVM->tm.s.offVirtualSync);
        if (pVM->tm.s.u32VirtualSyncCatchUpPercentage)
            pHlp->pfnPrintf(pHlp, "  catch-up rate %u %%", pVM->tm.s.u32VirtualSyncCatchUpPercentage);
    }
    pHlp->pfnPrintf(pHlp, "\n");

    /*
     * real
     */
    pHlp->pfnPrintf(pHlp,
                    "    Real: %18RU64 (%#016RX64) %RU64Hz\n",
                    u64Real, u64Real, TMRealGetFreq(pVM));
}
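
/*
 * Usage note (an assumption based on the usual DBGF pattern; the actual
 * registration is outside this excerpt): info handlers like the three above
 * are typically registered with the debugger facility under short names and
 * can then be triggered programmatically, e.g.:
 *
 *      DBGFR3Info(pVM, "clocks", NULL, NULL);
 */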