VirtualBox

source: vbox/trunk/src/VBox/VMM/TM.cpp@ 34326

Last change on this file since 34326 was 34326, checked in by vboxsync, 14 years ago

VMM: Removed the XXXInitCPU and XXXTermCPU methods since all but the HWACCM ones were stubs and the XXXTermCPU bits were not called in all expected paths. The HWACCMR3InitCPU was hooked up as a VMINITCOMPLETED_RING3 hook, essentially leaving its position in the order of things unchanged, while the HWACCMR3TermCPU call was made static without changing its position at the end of HWACCMR3Term.

/* $Id: TM.cpp 34326 2010-11-24 14:03:55Z vboxsync $ */
/** @file
 * TM - Time Manager.
 */

/*
 * Copyright (C) 2006-2007 Oracle Corporation
 *
 * This file is part of VirtualBox Open Source Edition (OSE), as
 * available from http://www.virtualbox.org. This file is free software;
 * you can redistribute it and/or modify it under the terms of the GNU
 * General Public License (GPL) as published by the Free Software
 * Foundation, in version 2 as it comes in the "COPYING" file of the
 * VirtualBox OSE distribution. VirtualBox OSE is distributed in the
 * hope that it will be useful, but WITHOUT ANY WARRANTY of any kind.
 */

/** @page pg_tm     TM - The Time Manager
 *
 * The Time Manager abstracts the CPU clocks and manages timers used by the
 * VMM, devices and drivers.
 *
 * @see grp_tm
 *
 *
 * @section sec_tm_clocks   Clocks
 *
 * There are currently 4 clocks:
 *      - Virtual (guest).
 *      - Synchronous virtual (guest).
 *      - CPU Tick (TSC) (guest). Only current use is rdtsc emulation. Usually a
 *        function of the virtual clock.
 *      - Real (host). This is only used for display updates at the moment.
 *
 * The most important clocks are the first three, and of these the second is
 * the most interesting.
 *
 *
 * The synchronous virtual clock is tied to the virtual clock except that it
 * will take into account timer delivery lag caused by host scheduling. It will
 * normally never advance beyond the head timer, and when lagging too far behind
 * it will gradually speed up to catch up with the virtual clock. All devices
 * implementing time sources accessible to and used by the guest use this
 * clock (for timers and other things). This ensures consistency between the
 * time sources.
 *
 * The virtual clock is implemented as an offset to a monotonic, high
 * resolution, wall clock. The current time source uses the RTTimeNanoTS()
 * machinery based upon the Global Info Pages (GIP), that is, we're using TSC
 * deltas (usually 10 ms) to fill the gaps between GIP updates. The result is
 * a fairly high res clock that works in all contexts and on all hosts. The
 * virtual clock is paused when the VM isn't in the running state.
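 *
 * A minimal sketch of the offset idea (simplified: the real code in
 * TMAllVirtual.cpp also handles pausing, warp drive and lockless readers,
 * and the helper and field names here are illustrative):
 * @code
 *      static uint64_t tmSketchVirtualGet(PVM pVM)
 *      {
 *          // Virtual time = monotonic host time minus the offset captured
 *          // when the clock was started or resumed.
 *          return RTTimeNanoTS() - pVM->tm.s.u64VirtualOffset;
 *      }
 * @endcode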
 *
 * The CPU tick (TSC) is normally virtualized as a function of the synchronous
 * virtual clock, where the frequency defaults to the host CPU frequency (as we
 * measure it). In this mode it is possible to configure the frequency. Another
 * (non-default) option is to use the raw unmodified host TSC values. And yet
 * another, to tie it to time spent executing guest code. All these things are
 * configurable should non-default behavior be desirable.
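 *
 * In the default mode the guest TSC is, roughly, the virtual sync time scaled
 * to the configured tick frequency, minus a per-VCPU offset. A sketch using
 * real helpers from iprt/asm-math.h (the actual conversion is done by the
 * TMCpuTick* code, with the appropriate locked/lockless clock accessors):
 * @code
 *      // ns on the virtual sync clock -> guest TSC ticks.
 *      uint64_t u64Ticks = ASMMultU64ByU32DivByU32(TMVirtualSyncGet(pVM),
 *                                                  pVM->tm.s.cTSCTicksPerSecond,
 *                                                  TMCLOCK_FREQ_VIRTUAL);
 *      u64Ticks -= pVCpu->tm.s.offTSCRawSrc; // offset relative the raw source
 * @endcode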
 *
 * The real clock is a monotonic clock (when available) with relatively low
 * resolution, though this is a bit host specific. Note that we're currently
 * not servicing timers using the real clock when the VM is not running; this
 * is simply because it has not been needed yet and therefore not implemented.
 *
 *
 * @subsection subsec_tm_timesync Guest Time Sync / UTC time
 *
 * Guest time syncing is primarily taken care of by the VMM device. The
 * principle is very simple: the guest additions periodically ask the VMM
 * device what the current UTC time is and make adjustments accordingly.
 *
 * A complicating factor is that the synchronous virtual clock might be doing
 * catch-ups, so the guest perception is currently a little bit behind the
 * world but it will (hopefully) be catching up soon as we're feeding timer
 * interrupts at a slightly higher rate. Adjusting the guest clock to the
 * current wall time in the real world would be a bad idea then because the
 * guest will be advancing too fast and run ahead of world time (if the
 * catch-up works out). To solve this problem TM provides the VMM device with
 * a UTC time source that gets adjusted with the current lag, so that when the
 * guest eventually catches up the lag it will be showing correct real world
 * time.
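 *
 * A sketch of that lag-adjusted UTC source (this is essentially what
 * TMR3UtcNow does, using the iprt RTTimeSpec helpers):
 * @code
 *      RTTimeNow(pTime);                           // host UTC now
 *      RTTimeSpecAddNano(pTime, pVM->tm.s.offUTC); // configured offset
 *      // Subtract only the part of the lag we still intend to catch up.
 *      RTTimeSpecSubNano(pTime, pVM->tm.s.offVirtualSync
 *                             - pVM->tm.s.offVirtualSyncGivenUp);
 * @endcode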
 *
 *
 * @section sec_tm_timers   Timers
 *
 * The timers can use any of the TM clocks described in the previous section.
 * Each clock has its own scheduling facility, or timer queue if you like.
 * There are a few factors which make it a bit complex. First, there is the
 * usual R0 vs R3 vs. RC thing. Then there are multiple threads, and then
 * there is the timer thread that periodically checks whether any timers have
 * expired without EMT noticing. On the API level, all but the create and save
 * APIs must be multithreaded. EMT will always run the timers.
 *
 * The design uses a doubly linked list of active timers which is ordered
 * by expire date. This list is only modified by the EMT thread. Updates to
 * the list are batched in a singly linked list, which is then processed by
 * the EMT thread at the first opportunity (immediately, next time EMT
 * modifies a timer on that clock, or next timer timeout). Both lists are
 * offset based and all the elements are therefore allocated from the hyper
 * heap.
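 *
 * The batching itself is the classic lock-free push (a sketch, not the exact
 * TM code; tmTimerToOffset() is a hypothetical stand-in for the real
 * pointer-to-offset conversion):
 * @code
 *      // Non-EMT callers never touch the active list; they push the timer
 *      // onto the schedule list with a CAS loop and EMT drains it later.
 *      int32_t offHead;
 *      do
 *      {
 *          offHead = pQueue->offSchedule;
 *          pTimer->offScheduleNext = offHead;
 *      } while (!ASMAtomicCmpXchgS32(&pQueue->offSchedule,
 *                                    tmTimerToOffset(pVM, pTimer), offHead));
 * @endcode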
 *
 * For figuring out when there is a need to schedule and run timers TM will:
 *      - Poll whenever somebody queries the virtual clock.
 *      - Poll the virtual clocks from the EM and REM loops.
 *      - Poll the virtual clocks from the trap exit path.
 *      - Poll the virtual clocks and calculate first timeout from the halt loop.
 *      - Employ a thread which periodically (100Hz) polls all the timer queues.
 *
 *
 * @image html TMTIMER-Statechart-Diagram.gif
 *
 * @section sec_tm_timer    Logging
 *
 * Level 2: Logs most of the timer state transitions and queue servicing.
 * Level 3: Logs a few oddments.
 * Level 4: Logs TMCLOCK_VIRTUAL_SYNC catch-up events.
 *
 */

/*******************************************************************************
*   Header Files                                                               *
*******************************************************************************/
#define LOG_GROUP LOG_GROUP_TM
#include <VBox/tm.h>
#include <VBox/vmm.h>
#include <VBox/mm.h>
#include <VBox/ssm.h>
#include <VBox/dbgf.h>
#include <VBox/rem.h>
#include <VBox/pdmapi.h>
#include <VBox/iom.h>
#include "TMInternal.h"
#include <VBox/vm.h>

#include <VBox/pdmdev.h>
#include <VBox/param.h>
#include <VBox/err.h>

#include <VBox/log.h>
#include <iprt/asm.h>
#include <iprt/asm-math.h>
#include <iprt/asm-amd64-x86.h>
#include <iprt/assert.h>
#include <iprt/thread.h>
#include <iprt/time.h>
#include <iprt/timer.h>
#include <iprt/semaphore.h>
#include <iprt/string.h>
#include <iprt/env.h>


/*******************************************************************************
*   Defined Constants And Macros                                               *
*******************************************************************************/
/** The current saved state version. */
#define TM_SAVED_STATE_VERSION  3


/*******************************************************************************
*   Internal Functions                                                         *
*******************************************************************************/
static bool                 tmR3HasFixedTSC(PVM pVM);
static uint64_t             tmR3CalibrateTSC(PVM pVM);
static DECLCALLBACK(int)    tmR3Save(PVM pVM, PSSMHANDLE pSSM);
static DECLCALLBACK(int)    tmR3Load(PVM pVM, PSSMHANDLE pSSM, uint32_t uVersion, uint32_t uPass);
static DECLCALLBACK(void)   tmR3TimerCallback(PRTTIMER pTimer, void *pvUser, uint64_t iTick);
static void                 tmR3TimerQueueRun(PVM pVM, PTMTIMERQUEUE pQueue);
static void                 tmR3TimerQueueRunVirtualSync(PVM pVM);
static DECLCALLBACK(int)    tmR3SetWarpDrive(PVM pVM, uint32_t u32Percent);
#ifndef VBOX_WITHOUT_NS_ACCOUNTING
static DECLCALLBACK(void)   tmR3CpuLoadTimer(PVM pVM, PTMTIMER pTimer, void *pvUser);
#endif
static DECLCALLBACK(void)   tmR3TimerInfo(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
static DECLCALLBACK(void)   tmR3TimerInfoActive(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
static DECLCALLBACK(void)   tmR3InfoClocks(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);


/**
 * Initializes the TM.
 *
 * @returns VBox status code.
 * @param   pVM         The VM to operate on.
 */
VMM_INT_DECL(int) TMR3Init(PVM pVM)
{
    LogFlow(("TMR3Init:\n"));

    /*
     * Assert alignment and sizes.
     */
    AssertCompileMemberAlignment(VM, tm.s, 32);
    AssertCompile(sizeof(pVM->tm.s) <= sizeof(pVM->tm.padding));
    AssertCompileMemberAlignment(TM, TimerCritSect, 8);
    AssertCompileMemberAlignment(TM, VirtualSyncLock, 8);

    /*
     * Init the structure.
     */
    void *pv;
    int rc = MMHyperAlloc(pVM, sizeof(pVM->tm.s.paTimerQueuesR3[0]) * TMCLOCK_MAX, 0, MM_TAG_TM, &pv);
    AssertRCReturn(rc, rc);
    pVM->tm.s.paTimerQueuesR3 = (PTMTIMERQUEUE)pv;
    pVM->tm.s.paTimerQueuesR0 = MMHyperR3ToR0(pVM, pv);
    pVM->tm.s.paTimerQueuesRC = MMHyperR3ToRC(pVM, pv);

    pVM->tm.s.offVM = RT_OFFSETOF(VM, tm.s);
    pVM->tm.s.idTimerCpu = pVM->cCpus - 1; /* The last CPU. */
    pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL].enmClock        = TMCLOCK_VIRTUAL;
    pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL].u64Expire       = INT64_MAX;
    pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL_SYNC].enmClock   = TMCLOCK_VIRTUAL_SYNC;
    pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL_SYNC].u64Expire  = INT64_MAX;
    pVM->tm.s.paTimerQueuesR3[TMCLOCK_REAL].enmClock           = TMCLOCK_REAL;
    pVM->tm.s.paTimerQueuesR3[TMCLOCK_REAL].u64Expire          = INT64_MAX;
    pVM->tm.s.paTimerQueuesR3[TMCLOCK_TSC].enmClock            = TMCLOCK_TSC;
    pVM->tm.s.paTimerQueuesR3[TMCLOCK_TSC].u64Expire           = INT64_MAX;


    /*
     * We directly use the GIP to calculate the virtual time. We map the
     * GIP into the guest context so we can do this calculation there
     * as well and save costly world switches.
     */
    pVM->tm.s.pvGIPR3 = (void *)g_pSUPGlobalInfoPage;
    AssertMsgReturn(pVM->tm.s.pvGIPR3, ("GIP support is now required!\n"), VERR_INTERNAL_ERROR);
    RTHCPHYS HCPhysGIP;
    rc = SUPR3GipGetPhys(&HCPhysGIP);
    AssertMsgRCReturn(rc, ("Failed to get GIP physical address!\n"), rc);

    RTGCPTR GCPtr;
    rc = MMR3HyperMapHCPhys(pVM, pVM->tm.s.pvGIPR3, NIL_RTR0PTR, HCPhysGIP, PAGE_SIZE, "GIP", &GCPtr);
    if (RT_FAILURE(rc))
    {
        AssertMsgFailed(("Failed to map GIP into GC, rc=%Rrc!\n", rc));
        return rc;
    }
    pVM->tm.s.pvGIPRC = GCPtr;
    LogFlow(("TMR3Init: HCPhysGIP=%RHp at %RRv\n", HCPhysGIP, pVM->tm.s.pvGIPRC));
    MMR3HyperReserve(pVM, PAGE_SIZE, "fence", NULL);

    /* Check assumptions made in TMAllVirtual.cpp about the GIP update interval. */
    if (    g_pSUPGlobalInfoPage->u32Magic == SUPGLOBALINFOPAGE_MAGIC
        &&  g_pSUPGlobalInfoPage->u32UpdateIntervalNS >= 250000000 /* 0.25s */)
        return VMSetError(pVM, VERR_INTERNAL_ERROR, RT_SRC_POS,
                          N_("The GIP update interval is too big. u32UpdateIntervalNS=%RU32 (u32UpdateHz=%RU32)"),
                          g_pSUPGlobalInfoPage->u32UpdateIntervalNS, g_pSUPGlobalInfoPage->u32UpdateHz);
    LogRel(("TM: GIP - u32Mode=%d (%s) u32UpdateHz=%u\n", g_pSUPGlobalInfoPage->u32Mode,
            g_pSUPGlobalInfoPage->u32Mode == SUPGIPMODE_SYNC_TSC ? "SyncTSC"
            : g_pSUPGlobalInfoPage->u32Mode == SUPGIPMODE_ASYNC_TSC ? "AsyncTSC" : "Unknown",
            g_pSUPGlobalInfoPage->u32UpdateHz));

    /*
     * Setup the VirtualGetRaw backend.
     */
    pVM->tm.s.VirtualGetRawDataR3.pu64Prev = &pVM->tm.s.u64VirtualRawPrev;
    pVM->tm.s.VirtualGetRawDataR3.pfnBad = tmVirtualNanoTSBad;
    pVM->tm.s.VirtualGetRawDataR3.pfnRediscover = tmVirtualNanoTSRediscover;
    if (ASMCpuId_EDX(1) & X86_CPUID_FEATURE_EDX_SSE2)
    {
        if (g_pSUPGlobalInfoPage->u32Mode == SUPGIPMODE_SYNC_TSC)
            pVM->tm.s.pfnVirtualGetRawR3 = RTTimeNanoTSLFenceSync;
        else
            pVM->tm.s.pfnVirtualGetRawR3 = RTTimeNanoTSLFenceAsync;
    }
    else
    {
        if (g_pSUPGlobalInfoPage->u32Mode == SUPGIPMODE_SYNC_TSC)
            pVM->tm.s.pfnVirtualGetRawR3 = RTTimeNanoTSLegacySync;
        else
            pVM->tm.s.pfnVirtualGetRawR3 = RTTimeNanoTSLegacyAsync;
    }

    pVM->tm.s.VirtualGetRawDataRC.pu64Prev = MMHyperR3ToRC(pVM, (void *)&pVM->tm.s.u64VirtualRawPrev);
    pVM->tm.s.VirtualGetRawDataR0.pu64Prev = MMHyperR3ToR0(pVM, (void *)&pVM->tm.s.u64VirtualRawPrev);
    AssertReturn(pVM->tm.s.VirtualGetRawDataR0.pu64Prev, VERR_INTERNAL_ERROR);
    /* The rest is done in TMR3InitFinalize since it's too early to call PDM. */

    /*
     * Init the locks.
     */
    rc = PDMR3CritSectInit(pVM, &pVM->tm.s.TimerCritSect, RT_SRC_POS, "TM Timer Lock");
    if (RT_FAILURE(rc))
        return rc;
    rc = PDMR3CritSectInit(pVM, &pVM->tm.s.VirtualSyncLock, RT_SRC_POS, "TM VirtualSync Lock");
    if (RT_FAILURE(rc))
        return rc;

    /*
     * Get our CFGM node, create it if necessary.
     */
    PCFGMNODE pCfgHandle = CFGMR3GetChild(CFGMR3GetRoot(pVM), "TM");
    if (!pCfgHandle)
    {
        rc = CFGMR3InsertNode(CFGMR3GetRoot(pVM), "TM", &pCfgHandle);
        AssertRCReturn(rc, rc);
    }

    /*
     * Determine the TSC configuration and frequency.
     */
    /* mode */
    /** @cfgm{/TM/TSCVirtualized,bool,true}
     * Use a virtualized TSC, i.e. trap all TSC access. */
    rc = CFGMR3QueryBool(pCfgHandle, "TSCVirtualized", &pVM->tm.s.fTSCVirtualized);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        pVM->tm.s.fTSCVirtualized = true; /* trap rdtsc */
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query bool value \"TSCVirtualized\""));

    /* source */
    /** @cfgm{/TM/UseRealTSC,bool,false}
     * Use the real TSC as time source for the TSC instead of the synchronous
     * virtual clock (false, default). */
    rc = CFGMR3QueryBool(pCfgHandle, "UseRealTSC", &pVM->tm.s.fTSCUseRealTSC);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        pVM->tm.s.fTSCUseRealTSC = false; /* use virtual time */
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query bool value \"UseRealTSC\""));
    if (!pVM->tm.s.fTSCUseRealTSC)
        pVM->tm.s.fTSCVirtualized = true;

    /* TSC reliability */
    /** @cfgm{/TM/MaybeUseOffsettedHostTSC,bool,detect}
     * Whether the CPU has a fixed TSC rate and may be used in offsetted mode with
     * VT-x/AMD-V execution. This is autodetected in a very restrictive way by
     * default. */
    rc = CFGMR3QueryBool(pCfgHandle, "MaybeUseOffsettedHostTSC", &pVM->tm.s.fMaybeUseOffsettedHostTSC);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
    {
        if (!pVM->tm.s.fTSCUseRealTSC)
            pVM->tm.s.fMaybeUseOffsettedHostTSC = tmR3HasFixedTSC(pVM);
        else
            pVM->tm.s.fMaybeUseOffsettedHostTSC = true;
    }

    /** @cfgm{TM/TSCTicksPerSecond, uint64_t, Current TSC frequency from GIP}
     * The number of TSC ticks per second (i.e. the TSC frequency). This will
     * override TSCUseRealTSC, TSCVirtualized and MaybeUseOffsettedHostTSC.
     */
    rc = CFGMR3QueryU64(pCfgHandle, "TSCTicksPerSecond", &pVM->tm.s.cTSCTicksPerSecond);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
    {
        pVM->tm.s.cTSCTicksPerSecond = tmR3CalibrateTSC(pVM);
        if (    !pVM->tm.s.fTSCUseRealTSC
            &&  pVM->tm.s.cTSCTicksPerSecond >= _4G)
        {
            pVM->tm.s.cTSCTicksPerSecond = _4G - 1; /* (A limitation of our math code) */
            pVM->tm.s.fMaybeUseOffsettedHostTSC = false;
        }
    }
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query uint64_t value \"TSCTicksPerSecond\""));
    else if (    pVM->tm.s.cTSCTicksPerSecond < _1M
             ||  pVM->tm.s.cTSCTicksPerSecond >= _4G)
        return VMSetError(pVM, VERR_INVALID_PARAMETER, RT_SRC_POS,
                          N_("Configuration error: \"TSCTicksPerSecond\" = %RU64 is not in the range 1MHz..4GHz-1"),
                          pVM->tm.s.cTSCTicksPerSecond);
    else
    {
        pVM->tm.s.fTSCUseRealTSC = pVM->tm.s.fMaybeUseOffsettedHostTSC = false;
        pVM->tm.s.fTSCVirtualized = true;
    }

    /** @cfgm{TM/TSCTiedToExecution, bool, false}
     * Whether the TSC should be tied to execution. This will exclude most of the
     * virtualization overhead, but will by default include the time spent in the
     * halt state (see TM/TSCNotTiedToHalt). This setting will override all other
     * TSC settings except for TSCTicksPerSecond and TSCNotTiedToHalt, and should
     * be avoided or used with great care. Note that this will only work right
     * together with VT-x or AMD-V, and with a single virtual CPU. */
    rc = CFGMR3QueryBoolDef(pCfgHandle, "TSCTiedToExecution", &pVM->tm.s.fTSCTiedToExecution, false);
    if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query bool value \"TSCTiedToExecution\""));
    if (pVM->tm.s.fTSCTiedToExecution)
    {
        /* tied to execution, override all other settings. */
        pVM->tm.s.fTSCVirtualized = true;
        pVM->tm.s.fTSCUseRealTSC = true;
        pVM->tm.s.fMaybeUseOffsettedHostTSC = false;
    }

    /** @cfgm{TM/TSCNotTiedToHalt, bool, false}
     * For overriding the default of TM/TSCTiedToExecution, i.e. set this to true
     * to make the TSC freeze during HLT. */
    rc = CFGMR3QueryBoolDef(pCfgHandle, "TSCNotTiedToHalt", &pVM->tm.s.fTSCNotTiedToHalt, false);
    if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query bool value \"TSCNotTiedToHalt\""));

    /* setup and report */
    if (pVM->tm.s.fTSCVirtualized)
        CPUMR3SetCR4Feature(pVM, X86_CR4_TSD, ~X86_CR4_TSD);
    else
        CPUMR3SetCR4Feature(pVM, 0, ~X86_CR4_TSD);
    LogRel(("TM: cTSCTicksPerSecond=%#RX64 (%'RU64) fTSCVirtualized=%RTbool fTSCUseRealTSC=%RTbool\n"
            "TM: fMaybeUseOffsettedHostTSC=%RTbool TSCTiedToExecution=%RTbool TSCNotTiedToHalt=%RTbool\n",
            pVM->tm.s.cTSCTicksPerSecond, pVM->tm.s.cTSCTicksPerSecond, pVM->tm.s.fTSCVirtualized, pVM->tm.s.fTSCUseRealTSC,
            pVM->tm.s.fMaybeUseOffsettedHostTSC, pVM->tm.s.fTSCTiedToExecution, pVM->tm.s.fTSCNotTiedToHalt));

    /*
     * Configure the timer synchronous virtual time.
     */
    /** @cfgm{TM/ScheduleSlack, uint32_t, ns, 0, UINT32_MAX, 100000}
     * Scheduling slack when processing timers. */
    rc = CFGMR3QueryU32(pCfgHandle, "ScheduleSlack", &pVM->tm.s.u32VirtualSyncScheduleSlack);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        pVM->tm.s.u32VirtualSyncScheduleSlack = 100000; /* 0.100ms (ASSUMES virtual time is nanoseconds) */
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query 32-bit integer value \"ScheduleSlack\""));

    /** @cfgm{TM/CatchUpStopThreshold, uint64_t, ns, 0, UINT64_MAX, 500000}
     * When to stop a catch-up, considering it successful. */
    rc = CFGMR3QueryU64(pCfgHandle, "CatchUpStopThreshold", &pVM->tm.s.u64VirtualSyncCatchUpStopThreshold);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        pVM->tm.s.u64VirtualSyncCatchUpStopThreshold = 500000; /* 0.5ms */
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query 64-bit integer value \"CatchUpStopThreshold\""));

    /** @cfgm{TM/CatchUpGiveUpThreshold, uint64_t, ns, 0, UINT64_MAX, 60000000000}
     * When to give up a catch-up attempt. */
    rc = CFGMR3QueryU64(pCfgHandle, "CatchUpGiveUpThreshold", &pVM->tm.s.u64VirtualSyncCatchUpGiveUpThreshold);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        pVM->tm.s.u64VirtualSyncCatchUpGiveUpThreshold = UINT64_C(60000000000); /* 60 sec */
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query 64-bit integer value \"CatchUpGiveUpThreshold\""));


    /** @cfgm{TM/CatchUpPrecentage[0..9], uint32_t, %, 1, 2000, various}
     * The catch-up percent for a given period. */
    /** @cfgm{TM/CatchUpStartThreshold[0..9], uint64_t, ns, 0, UINT64_MAX, various}
     * The catch-up period threshold, or if you like, when a period starts. */
#define TM_CFG_PERIOD(iPeriod, DefStart, DefPct) \
    do \
    { \
        uint64_t u64; \
        rc = CFGMR3QueryU64(pCfgHandle, "CatchUpStartThreshold" #iPeriod, &u64); \
        if (rc == VERR_CFGM_VALUE_NOT_FOUND) \
            u64 = UINT64_C(DefStart); \
        else if (RT_FAILURE(rc)) \
            return VMSetError(pVM, rc, RT_SRC_POS, N_("Configuration error: Failed to query 64-bit integer value \"CatchUpStartThreshold" #iPeriod "\"")); \
        if (    (iPeriod > 0 && u64 <= pVM->tm.s.aVirtualSyncCatchUpPeriods[iPeriod - 1].u64Start) \
            ||  u64 >= pVM->tm.s.u64VirtualSyncCatchUpGiveUpThreshold) \
            return VMSetError(pVM, VERR_INVALID_PARAMETER, RT_SRC_POS, N_("Configuration error: Invalid start of period #" #iPeriod ": %'RU64"), u64); \
        pVM->tm.s.aVirtualSyncCatchUpPeriods[iPeriod].u64Start = u64; \
        rc = CFGMR3QueryU32(pCfgHandle, "CatchUpPrecentage" #iPeriod, &pVM->tm.s.aVirtualSyncCatchUpPeriods[iPeriod].u32Percentage); \
        if (rc == VERR_CFGM_VALUE_NOT_FOUND) \
            pVM->tm.s.aVirtualSyncCatchUpPeriods[iPeriod].u32Percentage = (DefPct); \
        else if (RT_FAILURE(rc)) \
            return VMSetError(pVM, rc, RT_SRC_POS, N_("Configuration error: Failed to query 32-bit integer value \"CatchUpPrecentage" #iPeriod "\"")); \
    } while (0)
    /* This needs more tuning. Not sure if we really need so many periods and need to be so gentle. */
    TM_CFG_PERIOD(0,     750000,   5); /* 0.75ms at 1.05x */
    TM_CFG_PERIOD(1,    1500000,  10); /* 1.50ms at 1.10x */
    TM_CFG_PERIOD(2,    8000000,  25); /* 8ms at 1.25x */
    TM_CFG_PERIOD(3,   30000000,  50); /* 30ms at 1.50x */
    TM_CFG_PERIOD(4,   75000000,  75); /* 75ms at 1.75x */
    TM_CFG_PERIOD(5,  175000000, 100); /* 175ms at 2x */
    TM_CFG_PERIOD(6,  500000000, 200); /* 500ms at 3x */
    TM_CFG_PERIOD(7, 3000000000, 300); /* 3s at 4x */
    TM_CFG_PERIOD(8,30000000000, 400); /* 30s at 5x */
    TM_CFG_PERIOD(9,55000000000, 500); /* 55s at 6x */
    AssertCompile(RT_ELEMENTS(pVM->tm.s.aVirtualSyncCatchUpPeriods) == 10);
#undef TM_CFG_PERIOD
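    /* Reading the table above (an illustrative example): a lag of 10ms falls
       in period #2 (start threshold 8ms), so the virtual sync clock runs at
       100% + 25% = 1.25x the speed of the virtual clock until the lag drops
       below CatchUpStopThreshold or grows into the next period. */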

    /*
     * Configure real world time (UTC).
     */
    /** @cfgm{TM/UTCOffset, int64_t, ns, INT64_MIN, INT64_MAX, 0}
     * The UTC offset. This is used to put the guest back or forwards in time. */
    rc = CFGMR3QueryS64(pCfgHandle, "UTCOffset", &pVM->tm.s.offUTC);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        pVM->tm.s.offUTC = 0; /* ns */
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query 64-bit integer value \"UTCOffset\""));

    /*
     * Setup the warp drive.
     */
    /** @cfgm{TM/WarpDrivePercentage, uint32_t, %, 2, 20000, 100}
     * The warp drive percentage, 100% is normal speed. This is used to speed up
     * or slow down the virtual clock, which can be useful for fast forwarding
     * boring periods during tests. */
    rc = CFGMR3QueryU32(pCfgHandle, "WarpDrivePercentage", &pVM->tm.s.u32VirtualWarpDrivePercentage);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        rc = CFGMR3QueryU32(CFGMR3GetRoot(pVM), "WarpDrivePercentage", &pVM->tm.s.u32VirtualWarpDrivePercentage); /* legacy */
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        pVM->tm.s.u32VirtualWarpDrivePercentage = 100;
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query uint32_t value \"WarpDrivePercentage\""));
    else if (    pVM->tm.s.u32VirtualWarpDrivePercentage < 2
             ||  pVM->tm.s.u32VirtualWarpDrivePercentage > 20000)
        return VMSetError(pVM, VERR_INVALID_PARAMETER, RT_SRC_POS,
                          N_("Configuration error: \"WarpDrivePercentage\" = %RU32 is not in the range 2..20000"),
                          pVM->tm.s.u32VirtualWarpDrivePercentage);
    pVM->tm.s.fVirtualWarpDrive = pVM->tm.s.u32VirtualWarpDrivePercentage != 100;
    if (pVM->tm.s.fVirtualWarpDrive)
        LogRel(("TM: u32VirtualWarpDrivePercentage=%RU32\n", pVM->tm.s.u32VirtualWarpDrivePercentage));

    /*
     * Gather the Host Hz configuration values.
     */
    rc = CFGMR3QueryU32Def(pCfgHandle, "HostHzMax", &pVM->tm.s.cHostHzMax, 20000);
    if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query uint32_t value \"HostHzMax\""));

    rc = CFGMR3QueryU32Def(pCfgHandle, "HostHzFudgeFactorTimerCpu", &pVM->tm.s.cPctHostHzFudgeFactorTimerCpu, 111);
    if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query uint32_t value \"HostHzFudgeFactorTimerCpu\""));

    rc = CFGMR3QueryU32Def(pCfgHandle, "HostHzFudgeFactorOtherCpu", &pVM->tm.s.cPctHostHzFudgeFactorOtherCpu, 110);
    if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query uint32_t value \"HostHzFudgeFactorOtherCpu\""));

    rc = CFGMR3QueryU32Def(pCfgHandle, "HostHzFudgeFactorCatchUp100", &pVM->tm.s.cPctHostHzFudgeFactorCatchUp100, 300);
    if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query uint32_t value \"HostHzFudgeFactorCatchUp100\""));

    rc = CFGMR3QueryU32Def(pCfgHandle, "HostHzFudgeFactorCatchUp200", &pVM->tm.s.cPctHostHzFudgeFactorCatchUp200, 250);
    if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query uint32_t value \"HostHzFudgeFactorCatchUp200\""));

    rc = CFGMR3QueryU32Def(pCfgHandle, "HostHzFudgeFactorCatchUp400", &pVM->tm.s.cPctHostHzFudgeFactorCatchUp400, 200);
    if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query uint32_t value \"HostHzFudgeFactorCatchUp400\""));

    /*
     * Start the timer (guard against REM not yielding).
     */
    /** @cfgm{TM/TimerMillies, uint32_t, ms, 1, 1000, 10}
     * The watchdog timer interval. */
    uint32_t u32Millies;
    rc = CFGMR3QueryU32(pCfgHandle, "TimerMillies", &u32Millies);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        u32Millies = 10;
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query uint32_t value \"TimerMillies\""));
    rc = RTTimerCreate(&pVM->tm.s.pTimer, u32Millies, tmR3TimerCallback, pVM);
    if (RT_FAILURE(rc))
    {
        AssertMsgFailed(("Failed to create timer, u32Millies=%d rc=%Rrc.\n", u32Millies, rc));
        return rc;
    }
    Log(("TM: Created timer %p firing every %d milliseconds\n", pVM->tm.s.pTimer, u32Millies));
    pVM->tm.s.u32TimerMillies = u32Millies;

    /*
     * Register saved state.
     */
    rc = SSMR3RegisterInternal(pVM, "tm", 1, TM_SAVED_STATE_VERSION, sizeof(uint64_t) * 8,
                               NULL, NULL, NULL,
                               NULL, tmR3Save, NULL,
                               NULL, tmR3Load, NULL);
    if (RT_FAILURE(rc))
        return rc;

    /*
     * Register statistics.
     */
    STAM_REL_REG_USED(pVM,(void*)&pVM->tm.s.VirtualGetRawDataR3.c1nsSteps,STAMTYPE_U32, "/TM/R3/1nsSteps", STAMUNIT_OCCURENCES, "Virtual time 1ns steps (due to TSC / GIP variations).");
    STAM_REL_REG_USED(pVM,(void*)&pVM->tm.s.VirtualGetRawDataR3.cBadPrev, STAMTYPE_U32, "/TM/R3/cBadPrev", STAMUNIT_OCCURENCES, "Times the previous virtual time was considered erratic (shouldn't ever happen).");
    STAM_REL_REG_USED(pVM,(void*)&pVM->tm.s.VirtualGetRawDataR0.c1nsSteps,STAMTYPE_U32, "/TM/R0/1nsSteps", STAMUNIT_OCCURENCES, "Virtual time 1ns steps (due to TSC / GIP variations).");
    STAM_REL_REG_USED(pVM,(void*)&pVM->tm.s.VirtualGetRawDataR0.cBadPrev, STAMTYPE_U32, "/TM/R0/cBadPrev", STAMUNIT_OCCURENCES, "Times the previous virtual time was considered erratic (shouldn't ever happen).");
    STAM_REL_REG_USED(pVM,(void*)&pVM->tm.s.VirtualGetRawDataRC.c1nsSteps,STAMTYPE_U32, "/TM/RC/1nsSteps", STAMUNIT_OCCURENCES, "Virtual time 1ns steps (due to TSC / GIP variations).");
    STAM_REL_REG_USED(pVM,(void*)&pVM->tm.s.VirtualGetRawDataRC.cBadPrev, STAMTYPE_U32, "/TM/RC/cBadPrev", STAMUNIT_OCCURENCES, "Times the previous virtual time was considered erratic (shouldn't ever happen).");
    STAM_REL_REG( pVM,(void*)&pVM->tm.s.offVirtualSync, STAMTYPE_U64, "/TM/VirtualSync/CurrentOffset", STAMUNIT_NS, "The current offset. (subtract GivenUp to get the lag)");
    STAM_REL_REG_USED(pVM,(void*)&pVM->tm.s.offVirtualSyncGivenUp, STAMTYPE_U64, "/TM/VirtualSync/GivenUp", STAMUNIT_NS, "Nanoseconds of the 'CurrentOffset' that's been given up and won't ever be caught up with.");
    STAM_REL_REG( pVM,(void*)&pVM->tm.s.uMaxHzHint, STAMTYPE_U32, "/TM/MaxHzHint", STAMUNIT_HZ, "Max guest timer frequency hint.");

#ifdef VBOX_WITH_STATISTICS
    STAM_REG_USED(pVM,(void *)&pVM->tm.s.VirtualGetRawDataR3.cExpired, STAMTYPE_U32, "/TM/R3/cExpired", STAMUNIT_OCCURENCES, "Times the TSC interval expired (overlaps 1ns steps).");
    STAM_REG_USED(pVM,(void *)&pVM->tm.s.VirtualGetRawDataR3.cUpdateRaces,STAMTYPE_U32, "/TM/R3/cUpdateRaces", STAMUNIT_OCCURENCES, "Thread races when updating the previous timestamp.");
    STAM_REG_USED(pVM,(void *)&pVM->tm.s.VirtualGetRawDataR0.cExpired, STAMTYPE_U32, "/TM/R0/cExpired", STAMUNIT_OCCURENCES, "Times the TSC interval expired (overlaps 1ns steps).");
    STAM_REG_USED(pVM,(void *)&pVM->tm.s.VirtualGetRawDataR0.cUpdateRaces,STAMTYPE_U32, "/TM/R0/cUpdateRaces", STAMUNIT_OCCURENCES, "Thread races when updating the previous timestamp.");
    STAM_REG_USED(pVM,(void *)&pVM->tm.s.VirtualGetRawDataRC.cExpired, STAMTYPE_U32, "/TM/RC/cExpired", STAMUNIT_OCCURENCES, "Times the TSC interval expired (overlaps 1ns steps).");
    STAM_REG_USED(pVM,(void *)&pVM->tm.s.VirtualGetRawDataRC.cUpdateRaces,STAMTYPE_U32, "/TM/RC/cUpdateRaces", STAMUNIT_OCCURENCES, "Thread races when updating the previous timestamp.");
    STAM_REG(pVM, &pVM->tm.s.StatDoQueues, STAMTYPE_PROFILE, "/TM/DoQueues", STAMUNIT_TICKS_PER_CALL, "Profiling timer TMR3TimerQueuesDo.");
    STAM_REG(pVM, &pVM->tm.s.aStatDoQueues[TMCLOCK_VIRTUAL], STAMTYPE_PROFILE_ADV, "/TM/DoQueues/Virtual", STAMUNIT_TICKS_PER_CALL, "Time spent on the virtual clock queue.");
    STAM_REG(pVM, &pVM->tm.s.aStatDoQueues[TMCLOCK_VIRTUAL_SYNC], STAMTYPE_PROFILE_ADV, "/TM/DoQueues/VirtualSync", STAMUNIT_TICKS_PER_CALL, "Time spent on the virtual sync clock queue.");
    STAM_REG(pVM, &pVM->tm.s.aStatDoQueues[TMCLOCK_REAL], STAMTYPE_PROFILE_ADV, "/TM/DoQueues/Real", STAMUNIT_TICKS_PER_CALL, "Time spent on the real clock queue.");

    STAM_REG(pVM, &pVM->tm.s.StatPoll, STAMTYPE_COUNTER, "/TM/Poll", STAMUNIT_OCCURENCES, "TMTimerPoll calls.");
    STAM_REG(pVM, &pVM->tm.s.StatPollAlreadySet, STAMTYPE_COUNTER, "/TM/Poll/AlreadySet", STAMUNIT_OCCURENCES, "TMTimerPoll calls where the FF was already set.");
    STAM_REG(pVM, &pVM->tm.s.StatPollELoop, STAMTYPE_COUNTER, "/TM/Poll/ELoop", STAMUNIT_OCCURENCES, "Times TMTimerPoll has given up getting a consistent virtual sync data set.");
    STAM_REG(pVM, &pVM->tm.s.StatPollMiss, STAMTYPE_COUNTER, "/TM/Poll/Miss", STAMUNIT_OCCURENCES, "TMTimerPoll calls where nothing had expired.");
    STAM_REG(pVM, &pVM->tm.s.StatPollRunning, STAMTYPE_COUNTER, "/TM/Poll/Running", STAMUNIT_OCCURENCES, "TMTimerPoll calls where the queues were being run.");
    STAM_REG(pVM, &pVM->tm.s.StatPollSimple, STAMTYPE_COUNTER, "/TM/Poll/Simple", STAMUNIT_OCCURENCES, "TMTimerPoll calls where we could take the simple path.");
    STAM_REG(pVM, &pVM->tm.s.StatPollVirtual, STAMTYPE_COUNTER, "/TM/Poll/HitsVirtual", STAMUNIT_OCCURENCES, "The number of times TMTimerPoll found an expired TMCLOCK_VIRTUAL queue.");
    STAM_REG(pVM, &pVM->tm.s.StatPollVirtualSync, STAMTYPE_COUNTER, "/TM/Poll/HitsVirtualSync", STAMUNIT_OCCURENCES, "The number of times TMTimerPoll found an expired TMCLOCK_VIRTUAL_SYNC queue.");

    STAM_REG(pVM, &pVM->tm.s.StatPostponedR3, STAMTYPE_COUNTER, "/TM/PostponedR3", STAMUNIT_OCCURENCES, "Postponed due to unschedulable state, in ring-3.");
    STAM_REG(pVM, &pVM->tm.s.StatPostponedRZ, STAMTYPE_COUNTER, "/TM/PostponedRZ", STAMUNIT_OCCURENCES, "Postponed due to unschedulable state, in ring-0 / RC.");

    STAM_REG(pVM, &pVM->tm.s.StatScheduleOneR3, STAMTYPE_PROFILE, "/TM/ScheduleOneR3", STAMUNIT_TICKS_PER_CALL, "Profiling the scheduling of one queue during a TMTimer* call in EMT.");
    STAM_REG(pVM, &pVM->tm.s.StatScheduleOneRZ, STAMTYPE_PROFILE, "/TM/ScheduleOneRZ", STAMUNIT_TICKS_PER_CALL, "Profiling the scheduling of one queue during a TMTimer* call in EMT.");
    STAM_REG(pVM, &pVM->tm.s.StatScheduleSetFF, STAMTYPE_COUNTER, "/TM/ScheduleSetFF", STAMUNIT_OCCURENCES, "The number of times the timer FF was set instead of doing scheduling.");

    STAM_REG(pVM, &pVM->tm.s.StatTimerSet, STAMTYPE_COUNTER, "/TM/TimerSet", STAMUNIT_OCCURENCES, "Calls");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetOpt, STAMTYPE_COUNTER, "/TM/TimerSet/Opt", STAMUNIT_OCCURENCES, "Optimized path taken.");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetR3, STAMTYPE_PROFILE, "/TM/TimerSet/R3", STAMUNIT_TICKS_PER_CALL, "Profiling TMTimerSet calls made in ring-3.");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRZ, STAMTYPE_PROFILE, "/TM/TimerSet/RZ", STAMUNIT_TICKS_PER_CALL, "Profiling TMTimerSet calls made in ring-0 / RC.");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetStActive, STAMTYPE_COUNTER, "/TM/TimerSet/StActive", STAMUNIT_OCCURENCES, "ACTIVE");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetStExpDeliver, STAMTYPE_COUNTER, "/TM/TimerSet/StExpDeliver", STAMUNIT_OCCURENCES, "EXPIRED_DELIVER");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetStOther, STAMTYPE_COUNTER, "/TM/TimerSet/StOther", STAMUNIT_OCCURENCES, "Other states");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetStPendStop, STAMTYPE_COUNTER, "/TM/TimerSet/StPendStop", STAMUNIT_OCCURENCES, "PENDING_STOP");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetStPendStopSched, STAMTYPE_COUNTER, "/TM/TimerSet/StPendStopSched", STAMUNIT_OCCURENCES, "PENDING_STOP_SCHEDULE");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetStPendSched, STAMTYPE_COUNTER, "/TM/TimerSet/StPendSched", STAMUNIT_OCCURENCES, "PENDING_SCHEDULE");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetStPendResched, STAMTYPE_COUNTER, "/TM/TimerSet/StPendResched", STAMUNIT_OCCURENCES, "PENDING_RESCHEDULE");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetStStopped, STAMTYPE_COUNTER, "/TM/TimerSet/StStopped", STAMUNIT_OCCURENCES, "STOPPED");

    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelative, STAMTYPE_COUNTER, "/TM/TimerSetRelative", STAMUNIT_OCCURENCES, "Calls");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeOpt, STAMTYPE_COUNTER, "/TM/TimerSetRelative/Opt", STAMUNIT_OCCURENCES, "Optimized path taken.");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeR3, STAMTYPE_PROFILE, "/TM/TimerSetRelative/R3", STAMUNIT_TICKS_PER_CALL, "Profiling TMTimerSetRelative calls made in ring-3.");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeRZ, STAMTYPE_PROFILE, "/TM/TimerSetRelative/RZ", STAMUNIT_TICKS_PER_CALL, "Profiling TMTimerSetRelative calls made in ring-0 / RC.");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeRacyVirtSync, STAMTYPE_COUNTER, "/TM/TimerSetRelative/RacyVirtSync", STAMUNIT_OCCURENCES, "Potentially racy virtual sync timer update.");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeStActive, STAMTYPE_COUNTER, "/TM/TimerSetRelative/StActive", STAMUNIT_OCCURENCES, "ACTIVE");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeStExpDeliver, STAMTYPE_COUNTER, "/TM/TimerSetRelative/StExpDeliver", STAMUNIT_OCCURENCES, "EXPIRED_DELIVER");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeStOther, STAMTYPE_COUNTER, "/TM/TimerSetRelative/StOther", STAMUNIT_OCCURENCES, "Other states");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeStPendStop, STAMTYPE_COUNTER, "/TM/TimerSetRelative/StPendStop", STAMUNIT_OCCURENCES, "PENDING_STOP");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeStPendStopSched, STAMTYPE_COUNTER, "/TM/TimerSetRelative/StPendStopSched",STAMUNIT_OCCURENCES, "PENDING_STOP_SCHEDULE");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeStPendSched, STAMTYPE_COUNTER, "/TM/TimerSetRelative/StPendSched", STAMUNIT_OCCURENCES, "PENDING_SCHEDULE");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeStPendResched, STAMTYPE_COUNTER, "/TM/TimerSetRelative/StPendResched", STAMUNIT_OCCURENCES, "PENDING_RESCHEDULE");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeStStopped, STAMTYPE_COUNTER, "/TM/TimerSetRelative/StStopped", STAMUNIT_OCCURENCES, "STOPPED");

    STAM_REG(pVM, &pVM->tm.s.StatTimerStopR3, STAMTYPE_PROFILE, "/TM/TimerStopR3", STAMUNIT_TICKS_PER_CALL, "Profiling TMTimerStop calls made in ring-3.");
    STAM_REG(pVM, &pVM->tm.s.StatTimerStopRZ, STAMTYPE_PROFILE, "/TM/TimerStopRZ", STAMUNIT_TICKS_PER_CALL, "Profiling TMTimerStop calls made in ring-0 / RC.");

    STAM_REG(pVM, &pVM->tm.s.StatVirtualGet, STAMTYPE_COUNTER, "/TM/VirtualGet", STAMUNIT_OCCURENCES, "The number of times TMTimerGet was called when the clock was running.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualGetSetFF, STAMTYPE_COUNTER, "/TM/VirtualGetSetFF", STAMUNIT_OCCURENCES, "Times we set the FF when calling TMTimerGet.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncGet, STAMTYPE_COUNTER, "/TM/VirtualSyncGet", STAMUNIT_OCCURENCES, "The number of times tmVirtualSyncGetEx was called.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncGetELoop, STAMTYPE_COUNTER, "/TM/VirtualSyncGet/ELoop", STAMUNIT_OCCURENCES, "Times tmVirtualSyncGetEx has given up getting a consistent virtual sync data set.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncGetExpired, STAMTYPE_COUNTER, "/TM/VirtualSyncGet/Expired", STAMUNIT_OCCURENCES, "Times tmVirtualSyncGetEx encountered an expired timer stopping the clock.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncGetLocked, STAMTYPE_COUNTER, "/TM/VirtualSyncGet/Locked", STAMUNIT_OCCURENCES, "Times we successfully acquired the lock in tmVirtualSyncGetEx.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncGetLockless, STAMTYPE_COUNTER, "/TM/VirtualSyncGet/Lockless", STAMUNIT_OCCURENCES, "Times tmVirtualSyncGetEx returned without needing to take the lock.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncGetSetFF, STAMTYPE_COUNTER, "/TM/VirtualSyncGet/SetFF", STAMUNIT_OCCURENCES, "Times we set the FF when calling tmVirtualSyncGetEx.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualPause, STAMTYPE_COUNTER, "/TM/VirtualPause", STAMUNIT_OCCURENCES, "The number of times TMR3TimerPause was called.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualResume, STAMTYPE_COUNTER, "/TM/VirtualResume", STAMUNIT_OCCURENCES, "The number of times TMR3TimerResume was called.");

    STAM_REG(pVM, &pVM->tm.s.StatTimerCallbackSetFF, STAMTYPE_COUNTER, "/TM/CallbackSetFF", STAMUNIT_OCCURENCES, "The number of times the timer callback set FF.");

    STAM_REG(pVM, &pVM->tm.s.StatTSCCatchupLE010, STAMTYPE_COUNTER, "/TM/TSC/Intercept/CatchupLE010", STAMUNIT_OCCURENCES, "In catch-up mode, 10% or lower.");
    STAM_REG(pVM, &pVM->tm.s.StatTSCCatchupLE025, STAMTYPE_COUNTER, "/TM/TSC/Intercept/CatchupLE025", STAMUNIT_OCCURENCES, "In catch-up mode, 25%-11%.");
    STAM_REG(pVM, &pVM->tm.s.StatTSCCatchupLE100, STAMTYPE_COUNTER, "/TM/TSC/Intercept/CatchupLE100", STAMUNIT_OCCURENCES, "In catch-up mode, 100%-26%.");
    STAM_REG(pVM, &pVM->tm.s.StatTSCCatchupOther, STAMTYPE_COUNTER, "/TM/TSC/Intercept/CatchupOther", STAMUNIT_OCCURENCES, "In catch-up mode, > 100%.");
    STAM_REG(pVM, &pVM->tm.s.StatTSCNotFixed, STAMTYPE_COUNTER, "/TM/TSC/Intercept/NotFixed", STAMUNIT_OCCURENCES, "TSC is not fixed, it may run at variable speed.");
    STAM_REG(pVM, &pVM->tm.s.StatTSCNotTicking, STAMTYPE_COUNTER, "/TM/TSC/Intercept/NotTicking", STAMUNIT_OCCURENCES, "TSC is not ticking.");
    STAM_REG(pVM, &pVM->tm.s.StatTSCSyncNotTicking, STAMTYPE_COUNTER, "/TM/TSC/Intercept/SyncNotTicking", STAMUNIT_OCCURENCES, "VirtualSync isn't ticking.");
    STAM_REG(pVM, &pVM->tm.s.StatTSCWarp, STAMTYPE_COUNTER, "/TM/TSC/Intercept/Warp", STAMUNIT_OCCURENCES, "Warpdrive is active.");
    STAM_REG(pVM, &pVM->tm.s.StatTSCSet, STAMTYPE_COUNTER, "/TM/TSC/Sets", STAMUNIT_OCCURENCES, "Calls to TMCpuTickSet.");
    STAM_REG(pVM, &pVM->tm.s.StatTSCUnderflow, STAMTYPE_COUNTER, "/TM/TSC/Underflow", STAMUNIT_OCCURENCES, "TSC underflow; corrected with last seen value.");
#endif /* VBOX_WITH_STATISTICS */

    for (VMCPUID i = 0; i < pVM->cCpus; i++)
    {
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.offTSCRawSrc, STAMTYPE_U64, STAMVISIBILITY_ALWAYS, STAMUNIT_TICKS, "TSC offset relative the raw source", "/TM/TSC/offCPU%u", i);
#ifndef VBOX_WITHOUT_NS_ACCOUNTING
# if defined(VBOX_WITH_STATISTICS) || defined(VBOX_WITH_NS_ACCOUNTING_STATS)
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.StatNsTotal, STAMTYPE_COUNTER, STAMVISIBILITY_ALWAYS, STAMUNIT_NS, "Resettable: Total CPU run time.", "/TM/CPU/%02u", i);
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.StatNsExecuting, STAMTYPE_PROFILE, STAMVISIBILITY_ALWAYS, STAMUNIT_NS_PER_OCCURENCE, "Resettable: Time spent executing guest code.", "/TM/CPU/%02u/PrfExecuting", i);
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.StatNsExecLong, STAMTYPE_PROFILE, STAMVISIBILITY_ALWAYS, STAMUNIT_NS_PER_OCCURENCE, "Resettable: Time spent executing guest code - long hauls.", "/TM/CPU/%02u/PrfExecLong", i);
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.StatNsExecShort, STAMTYPE_PROFILE, STAMVISIBILITY_ALWAYS, STAMUNIT_NS_PER_OCCURENCE, "Resettable: Time spent executing guest code - short stretches.", "/TM/CPU/%02u/PrfExecShort", i);
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.StatNsExecTiny, STAMTYPE_PROFILE, STAMVISIBILITY_ALWAYS, STAMUNIT_NS_PER_OCCURENCE, "Resettable: Time spent executing guest code - tiny bits.", "/TM/CPU/%02u/PrfExecTiny", i);
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.StatNsHalted, STAMTYPE_PROFILE, STAMVISIBILITY_ALWAYS, STAMUNIT_NS_PER_OCCURENCE, "Resettable: Time spent halted.", "/TM/CPU/%02u/PrfHalted", i);
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.StatNsOther, STAMTYPE_PROFILE, STAMVISIBILITY_ALWAYS, STAMUNIT_NS_PER_OCCURENCE, "Resettable: Time spent in the VMM or preempted.", "/TM/CPU/%02u/PrfOther", i);
# endif
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.cNsTotal, STAMTYPE_U64, STAMVISIBILITY_ALWAYS, STAMUNIT_NS, "Total CPU run time.", "/TM/CPU/%02u/cNsTotal", i);
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.cNsExecuting, STAMTYPE_U64, STAMVISIBILITY_ALWAYS, STAMUNIT_NS, "Time spent executing guest code.", "/TM/CPU/%02u/cNsExecuting", i);
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.cNsHalted, STAMTYPE_U64, STAMVISIBILITY_ALWAYS, STAMUNIT_NS, "Time spent halted.", "/TM/CPU/%02u/cNsHalted", i);
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.cNsOther, STAMTYPE_U64, STAMVISIBILITY_ALWAYS, STAMUNIT_NS, "Time spent in the VMM or preempted.", "/TM/CPU/%02u/cNsOther", i);
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.cPeriodsExecuting, STAMTYPE_U64, STAMVISIBILITY_ALWAYS, STAMUNIT_COUNT, "Times executed guest code.", "/TM/CPU/%02u/cPeriodsExecuting", i);
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.cPeriodsHalted, STAMTYPE_U64, STAMVISIBILITY_ALWAYS, STAMUNIT_COUNT, "Times halted.", "/TM/CPU/%02u/cPeriodsHalted", i);
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.CpuLoad.cPctExecuting, STAMTYPE_U8, STAMVISIBILITY_ALWAYS, STAMUNIT_PCT, "Time spent executing guest code recently.", "/TM/CPU/%02u/pctExecuting", i);
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.CpuLoad.cPctHalted, STAMTYPE_U8, STAMVISIBILITY_ALWAYS, STAMUNIT_PCT, "Time spent halted recently.", "/TM/CPU/%02u/pctHalted", i);
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.CpuLoad.cPctOther, STAMTYPE_U8, STAMVISIBILITY_ALWAYS, STAMUNIT_PCT, "Time spent in the VMM or preempted recently.", "/TM/CPU/%02u/pctOther", i);
#endif
    }
#ifndef VBOX_WITHOUT_NS_ACCOUNTING
    STAMR3RegisterF(pVM, &pVM->tm.s.CpuLoad.cPctExecuting, STAMTYPE_U8, STAMVISIBILITY_ALWAYS, STAMUNIT_PCT, "Time spent executing guest code recently.", "/TM/CPU/pctExecuting");
    STAMR3RegisterF(pVM, &pVM->tm.s.CpuLoad.cPctHalted, STAMTYPE_U8, STAMVISIBILITY_ALWAYS, STAMUNIT_PCT, "Time spent halted recently.", "/TM/CPU/pctHalted");
    STAMR3RegisterF(pVM, &pVM->tm.s.CpuLoad.cPctOther, STAMTYPE_U8, STAMVISIBILITY_ALWAYS, STAMUNIT_PCT, "Time spent in the VMM or preempted recently.", "/TM/CPU/pctOther");
#endif

#ifdef VBOX_WITH_STATISTICS
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncCatchup, STAMTYPE_PROFILE_ADV, "/TM/VirtualSync/CatchUp", STAMUNIT_TICKS_PER_OCCURENCE, "Counting and measuring the times spent catching up.");
    STAM_REG(pVM, (void *)&pVM->tm.s.fVirtualSyncCatchUp, STAMTYPE_U8, "/TM/VirtualSync/CatchUpActive", STAMUNIT_NONE, "Catch-Up active indicator.");
    STAM_REG(pVM, (void *)&pVM->tm.s.u32VirtualSyncCatchUpPercentage, STAMTYPE_U32, "/TM/VirtualSync/CatchUpPercentage", STAMUNIT_PCT, "The catch-up percentage. (+100/100 to get clock multiplier)");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncFF, STAMTYPE_PROFILE, "/TM/VirtualSync/FF", STAMUNIT_TICKS_PER_OCCURENCE, "Time spent in TMR3VirtualSyncFF by all but the dedicated timer EMT.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncGiveUp, STAMTYPE_COUNTER, "/TM/VirtualSync/GiveUp", STAMUNIT_OCCURENCES, "Times the catch-up was abandoned.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncGiveUpBeforeStarting, STAMTYPE_COUNTER, "/TM/VirtualSync/GiveUpBeforeStarting",STAMUNIT_OCCURENCES, "Times the catch-up was abandoned before even starting. (Typically debugging++.)");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncRun, STAMTYPE_COUNTER, "/TM/VirtualSync/Run", STAMUNIT_OCCURENCES, "Times the virtual sync timer queue was considered.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncRunRestart, STAMTYPE_COUNTER, "/TM/VirtualSync/Run/Restarts", STAMUNIT_OCCURENCES, "Times the clock was restarted after a run.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncRunStop, STAMTYPE_COUNTER, "/TM/VirtualSync/Run/Stop", STAMUNIT_OCCURENCES, "Times the clock was stopped when calculating the current time before examining the timers.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncRunStoppedAlready, STAMTYPE_COUNTER, "/TM/VirtualSync/Run/StoppedAlready", STAMUNIT_OCCURENCES, "Times the clock was already stopped elsewhere (TMVirtualSyncGet).");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncRunSlack, STAMTYPE_PROFILE, "/TM/VirtualSync/Run/Slack", STAMUNIT_NS_PER_OCCURENCE, "The scheduling slack. (Catch-up handed out when running timers.)");
    for (unsigned i = 0; i < RT_ELEMENTS(pVM->tm.s.aVirtualSyncCatchUpPeriods); i++)
    {
        STAMR3RegisterF(pVM, &pVM->tm.s.aVirtualSyncCatchUpPeriods[i].u32Percentage, STAMTYPE_U32, STAMVISIBILITY_ALWAYS, STAMUNIT_PCT, "The catch-up percentage.", "/TM/VirtualSync/Periods/%u", i);
        STAMR3RegisterF(pVM, &pVM->tm.s.aStatVirtualSyncCatchupAdjust[i], STAMTYPE_COUNTER, STAMVISIBILITY_ALWAYS, STAMUNIT_OCCURENCES, "Times adjusted to this period.", "/TM/VirtualSync/Periods/%u/Adjust", i);
        STAMR3RegisterF(pVM, &pVM->tm.s.aStatVirtualSyncCatchupInitial[i], STAMTYPE_COUNTER, STAMVISIBILITY_ALWAYS, STAMUNIT_OCCURENCES, "Times started in this period.", "/TM/VirtualSync/Periods/%u/Initial", i);
        STAMR3RegisterF(pVM, &pVM->tm.s.aVirtualSyncCatchUpPeriods[i].u64Start, STAMTYPE_U64, STAMVISIBILITY_ALWAYS, STAMUNIT_NS, "Start of this period (lag).", "/TM/VirtualSync/Periods/%u/Start", i);
    }
#endif /* VBOX_WITH_STATISTICS */

    /*
     * Register info handlers.
     */
    DBGFR3InfoRegisterInternalEx(pVM, "timers",       "Dumps all timers. No arguments.",         tmR3TimerInfo,       DBGFINFO_FLAGS_RUN_ON_EMT);
    DBGFR3InfoRegisterInternalEx(pVM, "activetimers", "Dumps all active timers. No arguments.",  tmR3TimerInfoActive, DBGFINFO_FLAGS_RUN_ON_EMT);
    DBGFR3InfoRegisterInternalEx(pVM, "clocks",       "Display the time of the various clocks.", tmR3InfoClocks,      DBGFINFO_FLAGS_RUN_ON_EMT);
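    /* The handlers registered above can be invoked at runtime from the
       debugger console or programmatically; e.g. (an illustrative call)
       DBGFR3Info(pVM, "clocks", NULL /*pszArgs*/, NULL /*pHlp = default*/)
       dumps the clock state to the log. */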

    return VINF_SUCCESS;
}


/**
 * Checks if the host CPU has a fixed TSC frequency.
 *
 * @returns true if it has, false if it hasn't.
 * @param   pVM     The VM handle.
 *
 * @remark  This test doesn't bother with very old CPUs that don't do power
 *          management or any other stuff that might influence the TSC rate.
 *          This isn't currently relevant.
 */
static bool tmR3HasFixedTSC(PVM pVM)
{
    if (ASMHasCpuId())
    {
        uint32_t uEAX, uEBX, uECX, uEDX;

        if (CPUMGetHostCpuVendor(pVM) == CPUMCPUVENDOR_AMD)
        {
            /*
             * AuthenticAMD - Check for APM support and that TscInvariant is set.
             *
             * This test isn't correct with respect to fixed/non-fixed TSC and
             * older models, but this isn't relevant since the result is currently
             * only used for making a decision on AMD-V models.
             */
            ASMCpuId(0x80000000, &uEAX, &uEBX, &uECX, &uEDX);
            if (uEAX >= 0x80000007)
            {
                PSUPGLOBALINFOPAGE pGip = g_pSUPGlobalInfoPage;

                ASMCpuId(0x80000007, &uEAX, &uEBX, &uECX, &uEDX);
                if (    (uEDX & X86_CPUID_AMD_ADVPOWER_EDX_TSCINVAR) /* TscInvariant */
                    &&  pGip->u32Mode == SUPGIPMODE_SYNC_TSC /* no fixed tsc if the gip timer is in async mode */)
                    return true;
            }
        }
        else if (CPUMGetHostCpuVendor(pVM) == CPUMCPUVENDOR_INTEL)
        {
            /*
             * GenuineIntel - Check the model number.
             *
             * This test is lacking in the same way and for the same reasons
             * as the AMD test above.
             */
            ASMCpuId(1, &uEAX, &uEBX, &uECX, &uEDX);
            unsigned uModel  = (uEAX >> 4) & 0x0f;
            unsigned uFamily = (uEAX >> 8) & 0x0f;
            if (uFamily == 0x0f)
                uFamily += (uEAX >> 20) & 0xff;
            if (uFamily >= 0x06)
                uModel += ((uEAX >> 16) & 0x0f) << 4;
            if (    (uFamily == 0x0f /*P4*/    && uModel >= 0x03)
                ||  (uFamily == 0x06 /*P2/P3*/ && uModel >= 0x0e))
                return true;
        }
    }
    return false;
}


/**
 * Calibrate the CPU tick.
 *
 * @returns Number of ticks per second.
 * @param   pVM     The VM handle.
 */
static uint64_t tmR3CalibrateTSC(PVM pVM)
{
    /*
     * Use the GIP when present.
     */
    uint64_t u64Hz;
    PSUPGLOBALINFOPAGE pGip = g_pSUPGlobalInfoPage;
    if (    pGip
        &&  pGip->u32Magic == SUPGLOBALINFOPAGE_MAGIC)
    {
        unsigned iCpu = pGip->u32Mode != SUPGIPMODE_ASYNC_TSC ? 0 : ASMGetApicId();
        if (iCpu >= RT_ELEMENTS(pGip->aCPUs))
            AssertReleaseMsgFailed(("iCpu=%d - the ApicId is too high. send VBox.log and hardware specs!\n", iCpu));
        else
        {
            if (tmR3HasFixedTSC(pVM))
                /* Sleep a bit to get a more reliable CpuHz value. */
                RTThreadSleep(32);
            else
            {
                /* Spin for 40ms to try push up the CPU frequency and get a more reliable CpuHz value. */
                const uint64_t u64 = RTTimeMilliTS();
                while ((RTTimeMilliTS() - u64) < 40 /*ms*/)
                    /* nothing */;
            }

            pGip = g_pSUPGlobalInfoPage;
            if (    pGip
                &&  pGip->u32Magic == SUPGLOBALINFOPAGE_MAGIC
                &&  (u64Hz = pGip->aCPUs[iCpu].u64CpuHz)
                &&  u64Hz != ~(uint64_t)0)
                return u64Hz;
        }
    }

    /* Call this once first to make sure it's initialized. */
    RTTimeNanoTS();

    /*
     * Yield the CPU to increase our chances of getting a correct value.
     */
    RTThreadYield();    /* Try to avoid interruptions between TSC and NanoTS samplings. */
    static const unsigned s_auSleep[5] = { 50, 30, 30, 40, 40 };
    uint64_t au64Samples[5];
    unsigned i;
    for (i = 0; i < RT_ELEMENTS(au64Samples); i++)
    {
        RTMSINTERVAL cMillies;
        int cTries = 5;
        uint64_t u64Start = ASMReadTSC();
        uint64_t u64End;
        uint64_t StartTS = RTTimeNanoTS();
        uint64_t EndTS;
        do
        {
            RTThreadSleep(s_auSleep[i]);
            u64End = ASMReadTSC();
            EndTS  = RTTimeNanoTS();
            cMillies = (RTMSINTERVAL)((EndTS - StartTS + 500000) / 1000000);
        } while (   cMillies == 0   /* the sleep may be interrupted... */
                 || (cMillies < 20 && --cTries > 0));
        uint64_t u64Diff = u64End - u64Start;

        au64Samples[i] = (u64Diff * 1000) / cMillies;
        AssertMsg(cTries > 0, ("cMillies=%d i=%d\n", cMillies, i));
    }

    /*
     * Discard the highest and lowest results and calculate the average.
     */
    unsigned iHigh = 0;
    unsigned iLow  = 0;
    for (i = 1; i < RT_ELEMENTS(au64Samples); i++)
    {
        if (au64Samples[i] < au64Samples[iLow])
            iLow = i;
        if (au64Samples[i] > au64Samples[iHigh])
            iHigh = i;
    }
    au64Samples[iLow]  = 0;
    au64Samples[iHigh] = 0;

    u64Hz = au64Samples[0];
    for (i = 1; i < RT_ELEMENTS(au64Samples); i++)
        u64Hz += au64Samples[i];
    u64Hz /= RT_ELEMENTS(au64Samples) - 2;

    return u64Hz;
}


/**
 * Finalizes the TM initialization.
 *
 * @returns VBox status code.
 * @param   pVM         The VM to operate on.
 */
VMM_INT_DECL(int) TMR3InitFinalize(PVM pVM)
{
    int rc;

    /*
     * Resolve symbols.
     */
    rc = PDMR3LdrGetSymbolRC(pVM, NULL, "tmVirtualNanoTSBad", &pVM->tm.s.VirtualGetRawDataRC.pfnBad);
    AssertRCReturn(rc, rc);
    rc = PDMR3LdrGetSymbolRC(pVM, NULL, "tmVirtualNanoTSRediscover", &pVM->tm.s.VirtualGetRawDataRC.pfnRediscover);
    AssertRCReturn(rc, rc);
    if (pVM->tm.s.pfnVirtualGetRawR3 == RTTimeNanoTSLFenceSync)
        rc = PDMR3LdrGetSymbolRC(pVM, NULL, "RTTimeNanoTSLFenceSync", &pVM->tm.s.pfnVirtualGetRawRC);
    else if (pVM->tm.s.pfnVirtualGetRawR3 == RTTimeNanoTSLFenceAsync)
        rc = PDMR3LdrGetSymbolRC(pVM, NULL, "RTTimeNanoTSLFenceAsync", &pVM->tm.s.pfnVirtualGetRawRC);
    else if (pVM->tm.s.pfnVirtualGetRawR3 == RTTimeNanoTSLegacySync)
        rc = PDMR3LdrGetSymbolRC(pVM, NULL, "RTTimeNanoTSLegacySync", &pVM->tm.s.pfnVirtualGetRawRC);
    else if (pVM->tm.s.pfnVirtualGetRawR3 == RTTimeNanoTSLegacyAsync)
        rc = PDMR3LdrGetSymbolRC(pVM, NULL, "RTTimeNanoTSLegacyAsync", &pVM->tm.s.pfnVirtualGetRawRC);
    else
        AssertFatalFailed();
    AssertRCReturn(rc, rc);

    rc = PDMR3LdrGetSymbolR0(pVM, NULL, "tmVirtualNanoTSBad", &pVM->tm.s.VirtualGetRawDataR0.pfnBad);
    AssertRCReturn(rc, rc);
    rc = PDMR3LdrGetSymbolR0(pVM, NULL, "tmVirtualNanoTSRediscover", &pVM->tm.s.VirtualGetRawDataR0.pfnRediscover);
    AssertRCReturn(rc, rc);
    if (pVM->tm.s.pfnVirtualGetRawR3 == RTTimeNanoTSLFenceSync)
        rc = PDMR3LdrGetSymbolR0(pVM, NULL, "RTTimeNanoTSLFenceSync", &pVM->tm.s.pfnVirtualGetRawR0);
    else if (pVM->tm.s.pfnVirtualGetRawR3 == RTTimeNanoTSLFenceAsync)
        rc = PDMR3LdrGetSymbolR0(pVM, NULL, "RTTimeNanoTSLFenceAsync", &pVM->tm.s.pfnVirtualGetRawR0);
    else if (pVM->tm.s.pfnVirtualGetRawR3 == RTTimeNanoTSLegacySync)
        rc = PDMR3LdrGetSymbolR0(pVM, NULL, "RTTimeNanoTSLegacySync", &pVM->tm.s.pfnVirtualGetRawR0);
    else if (pVM->tm.s.pfnVirtualGetRawR3 == RTTimeNanoTSLegacyAsync)
        rc = PDMR3LdrGetSymbolR0(pVM, NULL, "RTTimeNanoTSLegacyAsync", &pVM->tm.s.pfnVirtualGetRawR0);
    else
        AssertFatalFailed();
    AssertRCReturn(rc, rc);

#ifndef VBOX_WITHOUT_NS_ACCOUNTING
    /*
     * Create a timer for refreshing the CPU load stats.
     */
    PTMTIMER pTimer;
    rc = TMR3TimerCreateInternal(pVM, TMCLOCK_REAL, tmR3CpuLoadTimer, NULL, "CPU Load Timer", &pTimer);
    if (RT_SUCCESS(rc))
        rc = TMTimerSetMillies(pTimer, 1000);
#endif

    return rc;
}


/**
 * Applies relocations to data and code managed by this
 * component. This function will be called at init and
 * whenever the VMM needs to relocate itself inside the GC.
 *
 * @param   pVM         The VM.
 * @param   offDelta    Relocation delta relative to old location.
 */
VMM_INT_DECL(void) TMR3Relocate(PVM pVM, RTGCINTPTR offDelta)
{
    int rc;
    LogFlow(("TMR3Relocate\n"));

    pVM->tm.s.pvGIPRC           = MMHyperR3ToRC(pVM, pVM->tm.s.pvGIPR3);
    pVM->tm.s.paTimerQueuesRC   = MMHyperR3ToRC(pVM, pVM->tm.s.paTimerQueuesR3);
    pVM->tm.s.paTimerQueuesR0   = MMHyperR3ToR0(pVM, pVM->tm.s.paTimerQueuesR3);

    pVM->tm.s.VirtualGetRawDataRC.pu64Prev = MMHyperR3ToRC(pVM, (void *)&pVM->tm.s.u64VirtualRawPrev);
    AssertFatal(pVM->tm.s.VirtualGetRawDataRC.pu64Prev);
    rc = PDMR3LdrGetSymbolRC(pVM, NULL, "tmVirtualNanoTSBad", &pVM->tm.s.VirtualGetRawDataRC.pfnBad);
    AssertFatalRC(rc);
    rc = PDMR3LdrGetSymbolRC(pVM, NULL, "tmVirtualNanoTSRediscover", &pVM->tm.s.VirtualGetRawDataRC.pfnRediscover);
    AssertFatalRC(rc);

    if (pVM->tm.s.pfnVirtualGetRawR3 == RTTimeNanoTSLFenceSync)
        rc = PDMR3LdrGetSymbolRC(pVM, NULL, "RTTimeNanoTSLFenceSync", &pVM->tm.s.pfnVirtualGetRawRC);
    else if (pVM->tm.s.pfnVirtualGetRawR3 == RTTimeNanoTSLFenceAsync)
        rc = PDMR3LdrGetSymbolRC(pVM, NULL, "RTTimeNanoTSLFenceAsync", &pVM->tm.s.pfnVirtualGetRawRC);
    else if (pVM->tm.s.pfnVirtualGetRawR3 == RTTimeNanoTSLegacySync)
        rc = PDMR3LdrGetSymbolRC(pVM, NULL, "RTTimeNanoTSLegacySync", &pVM->tm.s.pfnVirtualGetRawRC);
    else if (pVM->tm.s.pfnVirtualGetRawR3 == RTTimeNanoTSLegacyAsync)
        rc = PDMR3LdrGetSymbolRC(pVM, NULL, "RTTimeNanoTSLegacyAsync", &pVM->tm.s.pfnVirtualGetRawRC);
    else
        AssertFatalFailed();
    AssertFatalRC(rc);

    /*
     * Iterate the timers updating the pVMRC pointers.
     */
    for (PTMTIMER pTimer = pVM->tm.s.pCreated; pTimer; pTimer = pTimer->pBigNext)
    {
        pTimer->pVMRC = pVM->pVMRC;
        pTimer->pVMR0 = pVM->pVMR0;
    }
}


990/**
991 * Terminates the TM.
992 *
993 * Termination means cleaning up and freeing all resources;
994 * the VM itself is at this point powered off or suspended.
995 *
996 * @returns VBox status code.
997 * @param pVM The VM to operate on.
998 */
999VMM_INT_DECL(int) TMR3Term(PVM pVM)
1000{
1001 AssertMsg(pVM->tm.s.offVM, ("bad init order!\n"));
1002 if (pVM->tm.s.pTimer)
1003 {
1004 int rc = RTTimerDestroy(pVM->tm.s.pTimer);
1005 AssertRC(rc);
1006 pVM->tm.s.pTimer = NULL;
1007 }
1008
1009 return VINF_SUCCESS;
1010}
1011
1012
1013/**
1014 * The VM is being reset.
1015 *
1016 * For the TM component this means that a rescheduling is performed:
1017 * the FF is cleared and the queues are rescheduled, but not run. We'll have to
1018 * check whether this makes sense or not, but it seems like a good idea for now....
1019 *
1020 * @param pVM VM handle.
1021 */
1022VMM_INT_DECL(void) TMR3Reset(PVM pVM)
1023{
1024 LogFlow(("TMR3Reset:\n"));
1025 VM_ASSERT_EMT(pVM);
1026 tmTimerLock(pVM);
1027
1028 /*
1029 * Abort any pending catch-up.
1030 * This isn't perfect...
1031 */
1032 if (pVM->tm.s.fVirtualSyncCatchUp)
1033 {
1034 const uint64_t offVirtualNow = TMVirtualGetNoCheck(pVM);
1035 const uint64_t offVirtualSyncNow = TMVirtualSyncGetNoCheck(pVM);
1036 if (pVM->tm.s.fVirtualSyncCatchUp)
1037 {
1038 STAM_PROFILE_ADV_STOP(&pVM->tm.s.StatVirtualSyncCatchup, c);
1039
1040 const uint64_t offOld = pVM->tm.s.offVirtualSyncGivenUp;
1041 const uint64_t offNew = offVirtualNow - offVirtualSyncNow;
1042 Assert(offOld <= offNew);
1043 ASMAtomicWriteU64((uint64_t volatile *)&pVM->tm.s.offVirtualSyncGivenUp, offNew);
1044 ASMAtomicWriteU64((uint64_t volatile *)&pVM->tm.s.offVirtualSync, offNew);
1045 ASMAtomicWriteBool(&pVM->tm.s.fVirtualSyncCatchUp, false);
1046 LogRel(("TM: Aborting catch-up attempt on reset with a %'RU64 ns lag; new total: %'RU64 ns\n", offNew - offOld, offNew));
1047 }
1048 }
1049
1050 /*
1051 * Process the queues.
1052 */
1053 for (int i = 0; i < TMCLOCK_MAX; i++)
1054 tmTimerQueueSchedule(pVM, &pVM->tm.s.paTimerQueuesR3[i]);
1055#ifdef VBOX_STRICT
1056 tmTimerQueuesSanityChecks(pVM, "TMR3Reset");
1057#endif
1058
1059 PVMCPU pVCpuDst = &pVM->aCpus[pVM->tm.s.idTimerCpu];
1060 VMCPU_FF_CLEAR(pVCpuDst, VMCPU_FF_TIMER); /** @todo FIXME: this isn't right. */
1061 tmTimerUnlock(pVM);
1062}
1063
1064
1065/**
1066 * Resolve a builtin RC symbol.
1067 * Called by PDM when loading or relocating GC modules.
1068 *
1069 * @returns VBox status
1070 * @param pVM VM Handle.
1071 * @param pszSymbol Symbol to resolve.
1072 * @param pRCPtrValue Where to store the symbol value.
1073 * @remark This has to work before TMR3Relocate() is called.
1074 */
1075VMM_INT_DECL(int) TMR3GetImportRC(PVM pVM, const char *pszSymbol, PRTRCPTR pRCPtrValue)
1076{
1077 if (!strcmp(pszSymbol, "g_pSUPGlobalInfoPage"))
1078 *pRCPtrValue = MMHyperR3ToRC(pVM, &pVM->tm.s.pvGIPRC);
1079 //else if (..)
1080 else
1081 return VERR_SYMBOL_NOT_FOUND;
1082 return VINF_SUCCESS;
1083}
1084
1085
1086/**
1087 * Execute state save operation.
1088 *
1089 * @returns VBox status code.
1090 * @param pVM VM Handle.
1091 * @param pSSM SSM operation handle.
1092 */
1093static DECLCALLBACK(int) tmR3Save(PVM pVM, PSSMHANDLE pSSM)
1094{
1095 LogFlow(("tmR3Save:\n"));
1096#ifdef VBOX_STRICT
1097 for (VMCPUID i = 0; i < pVM->cCpus; i++)
1098 {
1099 PVMCPU pVCpu = &pVM->aCpus[i];
1100 Assert(!pVCpu->tm.s.fTSCTicking);
1101 }
1102 Assert(!pVM->tm.s.cVirtualTicking);
1103 Assert(!pVM->tm.s.fVirtualSyncTicking);
1104#endif
1105
1106 /*
1107 * Save the virtual clocks.
1108 */
1109 /* the virtual clock. */
1110 SSMR3PutU64(pSSM, TMCLOCK_FREQ_VIRTUAL);
1111 SSMR3PutU64(pSSM, pVM->tm.s.u64Virtual);
1112
1113 /* the virtual timer synchronous clock. */
1114 SSMR3PutU64(pSSM, pVM->tm.s.u64VirtualSync);
1115 SSMR3PutU64(pSSM, pVM->tm.s.offVirtualSync);
1116 SSMR3PutU64(pSSM, pVM->tm.s.offVirtualSyncGivenUp);
1117 SSMR3PutU64(pSSM, pVM->tm.s.u64VirtualSyncCatchUpPrev);
1118 SSMR3PutBool(pSSM, pVM->tm.s.fVirtualSyncCatchUp);
1119
1120 /* real time clock */
1121 SSMR3PutU64(pSSM, TMCLOCK_FREQ_REAL);
1122
1123 /* the cpu tick clock. */
1124 for (VMCPUID i = 0; i < pVM->cCpus; i++)
1125 {
1126 PVMCPU pVCpu = &pVM->aCpus[i];
1127 SSMR3PutU64(pSSM, TMCpuTickGet(pVCpu));
1128 }
1129 return SSMR3PutU64(pSSM, pVM->tm.s.cTSCTicksPerSecond);
1130}
1131
1132
1133/**
1134 * Execute state load operation.
1135 *
1136 * @returns VBox status code.
1137 * @param pVM VM Handle.
1138 * @param pSSM SSM operation handle.
1139 * @param uVersion Data layout version.
1140 * @param uPass The data pass.
1141 */
1142static DECLCALLBACK(int) tmR3Load(PVM pVM, PSSMHANDLE pSSM, uint32_t uVersion, uint32_t uPass)
1143{
1144 LogFlow(("tmR3Load:\n"));
1145
1146 Assert(uPass == SSM_PASS_FINAL); NOREF(uPass);
1147#ifdef VBOX_STRICT
1148 for (VMCPUID i = 0; i < pVM->cCpus; i++)
1149 {
1150 PVMCPU pVCpu = &pVM->aCpus[i];
1151 Assert(!pVCpu->tm.s.fTSCTicking);
1152 }
1153 Assert(!pVM->tm.s.cVirtualTicking);
1154 Assert(!pVM->tm.s.fVirtualSyncTicking);
1155#endif
1156
1157 /*
1158 * Validate version.
1159 */
1160 if (uVersion != TM_SAVED_STATE_VERSION)
1161 {
1162 AssertMsgFailed(("tmR3Load: Invalid version uVersion=%d!\n", uVersion));
1163 return VERR_SSM_UNSUPPORTED_DATA_UNIT_VERSION;
1164 }
1165
1166 /*
1167 * Load the virtual clock.
1168 */
1169 pVM->tm.s.cVirtualTicking = 0;
1170 /* the virtual clock. */
1171 uint64_t u64Hz;
1172 int rc = SSMR3GetU64(pSSM, &u64Hz);
1173 if (RT_FAILURE(rc))
1174 return rc;
1175 if (u64Hz != TMCLOCK_FREQ_VIRTUAL)
1176 {
1177 AssertMsgFailed(("The virtual clock frequency differs! Saved: %'RU64 Binary: %'RU64\n",
1178 u64Hz, TMCLOCK_FREQ_VIRTUAL));
1179 return VERR_SSM_VIRTUAL_CLOCK_HZ;
1180 }
1181 SSMR3GetU64(pSSM, &pVM->tm.s.u64Virtual);
1182 pVM->tm.s.u64VirtualOffset = 0;
1183
1184 /* the virtual timer synchronous clock. */
1185 pVM->tm.s.fVirtualSyncTicking = false;
1186 uint64_t u64;
1187 SSMR3GetU64(pSSM, &u64);
1188 pVM->tm.s.u64VirtualSync = u64;
1189 SSMR3GetU64(pSSM, &u64);
1190 pVM->tm.s.offVirtualSync = u64;
1191 SSMR3GetU64(pSSM, &u64);
1192 pVM->tm.s.offVirtualSyncGivenUp = u64;
1193 SSMR3GetU64(pSSM, &u64);
1194 pVM->tm.s.u64VirtualSyncCatchUpPrev = u64;
1195 bool f;
1196 SSMR3GetBool(pSSM, &f);
1197 pVM->tm.s.fVirtualSyncCatchUp = f;
1198
1199 /* the real clock */
1200 rc = SSMR3GetU64(pSSM, &u64Hz);
1201 if (RT_FAILURE(rc))
1202 return rc;
1203 if (u64Hz != TMCLOCK_FREQ_REAL)
1204 {
1205 AssertMsgFailed(("The real clock frequency differs! Saved: %'RU64 Binary: %'RU64\n",
1206 u64Hz, TMCLOCK_FREQ_REAL));
1207 return VERR_SSM_VIRTUAL_CLOCK_HZ; /* misleading... */
1208 }
1209
1210 /* the cpu tick clock. */
1211 for (VMCPUID i = 0; i < pVM->cCpus; i++)
1212 {
1213 PVMCPU pVCpu = &pVM->aCpus[i];
1214
1215 pVCpu->tm.s.fTSCTicking = false;
1216 SSMR3GetU64(pSSM, &pVCpu->tm.s.u64TSC);
1217
1218 if (pVM->tm.s.fTSCUseRealTSC)
1219 pVCpu->tm.s.offTSCRawSrc = 0; /** @todo TSC restore stuff and HWACC. */
1220 }
1221
1222 rc = SSMR3GetU64(pSSM, &u64Hz);
1223 if (RT_FAILURE(rc))
1224 return rc;
1225 if (!pVM->tm.s.fTSCUseRealTSC)
1226 pVM->tm.s.cTSCTicksPerSecond = u64Hz;
1227
1228 LogRel(("TM: cTSCTicksPerSecond=%#RX64 (%'RU64) fTSCVirtualized=%RTbool fTSCUseRealTSC=%RTbool (state load)\n",
1229 pVM->tm.s.cTSCTicksPerSecond, pVM->tm.s.cTSCTicksPerSecond, pVM->tm.s.fTSCVirtualized, pVM->tm.s.fTSCUseRealTSC));
1230
1231 /*
1232 * Make sure timers get rescheduled immediately.
1233 */
1234 PVMCPU pVCpuDst = &pVM->aCpus[pVM->tm.s.idTimerCpu];
1235 VMCPU_FF_SET(pVCpuDst, VMCPU_FF_TIMER);
1236
1237 return VINF_SUCCESS;
1238}
1239
1240
1241/**
1242 * Internal TMR3TimerCreate worker.
1243 *
1244 * @returns VBox status code.
1245 * @param pVM The VM handle.
1246 * @param enmClock The timer clock.
1247 * @param pszDesc The timer description.
1248 * @param ppTimer Where to store the timer pointer on success.
1249 */
1250static int tmr3TimerCreate(PVM pVM, TMCLOCK enmClock, const char *pszDesc, PPTMTIMERR3 ppTimer)
1251{
1252 VM_ASSERT_EMT(pVM);
1253
1254 /*
1255 * Allocate the timer.
1256 */
1257 PTMTIMERR3 pTimer = NULL;
1258 if (pVM->tm.s.pFree && VM_IS_EMT(pVM))
1259 {
1260 pTimer = pVM->tm.s.pFree;
1261 pVM->tm.s.pFree = pTimer->pBigNext;
1262 Log3(("TM: Recycling timer %p, new free head %p.\n", pTimer, pTimer->pBigNext));
1263 }
1264
1265 if (!pTimer)
1266 {
1267 int rc = MMHyperAlloc(pVM, sizeof(*pTimer), 0, MM_TAG_TM, (void **)&pTimer);
1268 if (RT_FAILURE(rc))
1269 return rc;
1270 Log3(("TM: Allocated new timer %p\n", pTimer));
1271 }
1272
1273 /*
1274 * Initialize it.
1275 */
1276 pTimer->u64Expire = 0;
1277 pTimer->enmClock = enmClock;
1278 pTimer->pVMR3 = pVM;
1279 pTimer->pVMR0 = pVM->pVMR0;
1280 pTimer->pVMRC = pVM->pVMRC;
1281 pTimer->enmState = TMTIMERSTATE_STOPPED;
1282 pTimer->offScheduleNext = 0;
1283 pTimer->offNext = 0;
1284 pTimer->offPrev = 0;
1285 pTimer->pvUser = NULL;
1286 pTimer->pCritSect = NULL;
1287 pTimer->pszDesc = pszDesc;
1288
1289 /* insert into the list of created timers. */
1290 tmTimerLock(pVM);
1291 pTimer->pBigPrev = NULL;
1292 pTimer->pBigNext = pVM->tm.s.pCreated;
1293 pVM->tm.s.pCreated = pTimer;
1294 if (pTimer->pBigNext)
1295 pTimer->pBigNext->pBigPrev = pTimer;
1296#ifdef VBOX_STRICT
1297 tmTimerQueuesSanityChecks(pVM, "tmR3TimerCreate");
1298#endif
1299 tmTimerUnlock(pVM);
1300
1301 *ppTimer = pTimer;
1302 return VINF_SUCCESS;
1303}
1304
1305
1306/**
1307 * Creates a device timer.
1308 *
1309 * @returns VBox status.
1310 * @param pVM The VM to create the timer in.
1311 * @param pDevIns Device instance.
1312 * @param enmClock The clock to use on this timer.
1313 * @param pfnCallback Callback function.
1314 * @param pvUser The user argument to the callback.
1315 * @param fFlags Timer creation flags, see grp_tm_timer_flags.
1316 * @param pszDesc Pointer to description string which must stay around
1317 * until the timer is fully destroyed (i.e. a bit after TMTimerDestroy()).
1318 * @param ppTimer Where to store the timer on success.
1319 */
1320VMM_INT_DECL(int) TMR3TimerCreateDevice(PVM pVM, PPDMDEVINS pDevIns, TMCLOCK enmClock, PFNTMTIMERDEV pfnCallback, void *pvUser, uint32_t fFlags, const char *pszDesc, PPTMTIMERR3 ppTimer)
1321{
1322 AssertReturn(!(fFlags & ~(TMTIMER_FLAGS_NO_CRIT_SECT)), VERR_INVALID_PARAMETER);
1323
1324 /*
1325 * Allocate and init stuff.
1326 */
1327 int rc = tmr3TimerCreate(pVM, enmClock, pszDesc, ppTimer);
1328 if (RT_SUCCESS(rc))
1329 {
1330 (*ppTimer)->enmType = TMTIMERTYPE_DEV;
1331 (*ppTimer)->u.Dev.pfnTimer = pfnCallback;
1332 (*ppTimer)->u.Dev.pDevIns = pDevIns;
1333 (*ppTimer)->pvUser = pvUser;
1334 if (fFlags & TMTIMER_FLAGS_DEFAULT_CRIT_SECT)
1335 {
1336 if (pDevIns->pCritSectR3)
1337 (*ppTimer)->pCritSect = pDevIns->pCritSectR3;
1338 else
1339 (*ppTimer)->pCritSect = IOMR3GetCritSect(pVM);
1340 }
1341 Log(("TM: Created device timer %p clock %d callback %p '%s'\n", (*ppTimer), enmClock, pfnCallback, pszDesc));
1342 }
1343
1344 return rc;
1345}
1346
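/*
 * Example (illustrative sketch, not part of this module): a device creating a
 * timer on the virtual sync clock from its constructor and re-arming it from
 * the callback. The device structure, callback and description string are
 * hypothetical; TMTIMER_FLAGS_DEFAULT_CRIT_SECT selects the default critical
 * section behavior described above.
 *
 *      static DECLCALLBACK(void) devFooTimer(PPDMDEVINS pDevIns, PTMTIMER pTimer, void *pvUser)
 *      {
 *          // The associated critical section has already been entered here.
 *          TMTimerSetMillies(pTimer, 10); // re-arm 10 ms out
 *      }
 *      ...
 *      rc = TMR3TimerCreateDevice(pVM, pDevIns, TMCLOCK_VIRTUAL_SYNC, devFooTimer, pThis,
 *                                 TMTIMER_FLAGS_DEFAULT_CRIT_SECT, "Foo Timer", &pThis->pTimerR3);
 */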
1347
1348/**
1349 * Creates a driver timer.
1350 *
1351 * @returns VBox status.
1352 * @param pVM The VM to create the timer in.
1353 * @param pDrvIns Driver instance.
1354 * @param enmClock The clock to use on this timer.
1355 * @param pfnCallback Callback function.
1356 * @param pvUser The user argument to the callback.
1357 * @param fFlags Timer creation flags, see grp_tm_timer_flags.
1358 * @param pszDesc Pointer to description string which must stay around
1359 * until the timer is fully destroyed (i.e. a bit after TMTimerDestroy()).
1360 * @param ppTimer Where to store the timer on success.
1361 */
1362VMM_INT_DECL(int) TMR3TimerCreateDriver(PVM pVM, PPDMDRVINS pDrvIns, TMCLOCK enmClock, PFNTMTIMERDRV pfnCallback, void *pvUser,
1363 uint32_t fFlags, const char *pszDesc, PPTMTIMERR3 ppTimer)
1364{
1365 AssertReturn(!(fFlags & ~(TMTIMER_FLAGS_NO_CRIT_SECT)), VERR_INVALID_PARAMETER);
1366
1367 /*
1368 * Allocate and init stuff.
1369 */
1370 int rc = tmr3TimerCreate(pVM, enmClock, pszDesc, ppTimer);
1371 if (RT_SUCCESS(rc))
1372 {
1373 (*ppTimer)->enmType = TMTIMERTYPE_DRV;
1374 (*ppTimer)->u.Drv.pfnTimer = pfnCallback;
1375 (*ppTimer)->u.Drv.pDrvIns = pDrvIns;
1376 (*ppTimer)->pvUser = pvUser;
1377 Log(("TM: Created driver timer %p clock %d callback %p '%s'\n", (*ppTimer), enmClock, pfnCallback, pszDesc));
1378 }
1379
1380 return rc;
1381}
1382
1383
1384/**
1385 * Creates an internal timer.
1386 *
1387 * @returns VBox status.
1388 * @param pVM The VM to create the timer in.
1389 * @param enmClock The clock to use on this timer.
1390 * @param pfnCallback Callback function.
1391 * @param pvUser User argument to be passed to the callback.
1392 * @param pszDesc Pointer to description string which must stay around
1393 * until the timer is fully destroyed (i.e. a bit after TMTimerDestroy()).
1394 * @param ppTimer Where to store the timer on success.
1395 */
1396VMMR3DECL(int) TMR3TimerCreateInternal(PVM pVM, TMCLOCK enmClock, PFNTMTIMERINT pfnCallback, void *pvUser, const char *pszDesc, PPTMTIMERR3 ppTimer)
1397{
1398 /*
1399 * Allocate and init stuff.
1400 */
1401 PTMTIMER pTimer;
1402 int rc = tmr3TimerCreate(pVM, enmClock, pszDesc, &pTimer);
1403 if (RT_SUCCESS(rc))
1404 {
1405 pTimer->enmType = TMTIMERTYPE_INTERNAL;
1406 pTimer->u.Internal.pfnTimer = pfnCallback;
1407 pTimer->pvUser = pvUser;
1408 *ppTimer = pTimer;
1409 Log(("TM: Created internal timer %p clock %d callback %p '%s'\n", pTimer, enmClock, pfnCallback, pszDesc));
1410 }
1411
1412 return rc;
1413}
1414
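/*
 * Example: this is exactly the pattern TM itself uses above for the CPU load
 * statistics - create an internal timer on the real clock and arm it one
 * second out.
 *
 *      PTMTIMER pTimer;
 *      int rc = TMR3TimerCreateInternal(pVM, TMCLOCK_REAL, tmR3CpuLoadTimer, NULL, "CPU Load Timer", &pTimer);
 *      if (RT_SUCCESS(rc))
 *          rc = TMTimerSetMillies(pTimer, 1000);
 */
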
1415/**
1416 * Creates an external timer.
1417 *
1418 * @returns Timer handle on success.
1419 * @returns NULL on failure.
1420 * @param pVM The VM to create the timer in.
1421 * @param enmClock The clock to use on this timer.
1422 * @param pfnCallback Callback function.
1423 * @param pvUser User argument.
1424 * @param pszDesc Pointer to description string which must stay around
1425 * until the timer is fully destroyed (i.e. a bit after TMTimerDestroy()).
1426 */
1427VMMR3DECL(PTMTIMERR3) TMR3TimerCreateExternal(PVM pVM, TMCLOCK enmClock, PFNTMTIMEREXT pfnCallback, void *pvUser, const char *pszDesc)
1428{
1429 /*
1430 * Allocate and init stuff.
1431 */
1432 PTMTIMERR3 pTimer;
1433 int rc = tmr3TimerCreate(pVM, enmClock, pszDesc, &pTimer);
1434 if (RT_SUCCESS(rc))
1435 {
1436 pTimer->enmType = TMTIMERTYPE_EXTERNAL;
1437 pTimer->u.External.pfnTimer = pfnCallback;
1438 pTimer->pvUser = pvUser;
1439 Log(("TM: Created external timer %p clock %d callback %p '%s'\n", pTimer, enmClock, pfnCallback, pszDesc));
1440 return pTimer;
1441 }
1442
1443 return NULL;
1444}
1445
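/*
 * Example (illustrative sketch): unlike the other create functions, failure is
 * reported by returning NULL rather than a status code. The callback and
 * context names are hypothetical.
 *
 *      PTMTIMERR3 pTimer = TMR3TimerCreateExternal(pVM, TMCLOCK_REAL, myExtCallback, pvCtx, "My Timer");
 *      if (!pTimer)
 *          return VERR_NO_MEMORY; // or whatever status suits the caller
 */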
1446
1447/**
1448 * Destroy a timer
1449 *
1450 * @returns VBox status.
1451 * @param pTimer Timer handle as returned by one of the create functions.
1452 */
1453VMMR3DECL(int) TMR3TimerDestroy(PTMTIMER pTimer)
1454{
1455 /*
1456 * Be extra careful here.
1457 */
1458 if (!pTimer)
1459 return VINF_SUCCESS;
1460 AssertPtr(pTimer);
1461 Assert((unsigned)pTimer->enmClock < (unsigned)TMCLOCK_MAX);
1462
1463 PVM pVM = pTimer->CTX_SUFF(pVM);
1464 PTMTIMERQUEUE pQueue = &pVM->tm.s.CTX_SUFF(paTimerQueues)[pTimer->enmClock];
1465 bool fActive = false;
1466 bool fPending = false;
1467
1468 AssertMsg( !pTimer->pCritSect
1469 || VMR3GetState(pVM) != VMSTATE_RUNNING
1470 || PDMCritSectIsOwner(pTimer->pCritSect), ("%s\n", pTimer->pszDesc));
1471
1472 /*
1473 * The rest of the game happens behind the lock, just
1474 * like create does. All the work is done here.
1475 */
1476 tmTimerLock(pVM);
1477 for (int cRetries = 1000;; cRetries--)
1478 {
1479 /*
1480 * Change to the DESTROY state.
1481 */
1482 TMTIMERSTATE enmState = pTimer->enmState;
1483 TMTIMERSTATE enmNewState = enmState;
1484 Log2(("TMTimerDestroy: %p:{.enmState=%s, .pszDesc='%s'} cRetries=%d\n",
1485 pTimer, tmTimerState(enmState), R3STRING(pTimer->pszDesc), cRetries));
1486 switch (enmState)
1487 {
1488 case TMTIMERSTATE_STOPPED:
1489 case TMTIMERSTATE_EXPIRED_DELIVER:
1490 break;
1491
1492 case TMTIMERSTATE_ACTIVE:
1493 fActive = true;
1494 break;
1495
1496 case TMTIMERSTATE_PENDING_STOP:
1497 case TMTIMERSTATE_PENDING_STOP_SCHEDULE:
1498 case TMTIMERSTATE_PENDING_RESCHEDULE:
1499 fActive = true;
1500 fPending = true;
1501 break;
1502
1503 case TMTIMERSTATE_PENDING_SCHEDULE:
1504 fPending = true;
1505 break;
1506
1507 /*
1508 * This shouldn't happen as the caller should make sure there are no races.
1509 */
1510 case TMTIMERSTATE_EXPIRED_GET_UNLINK:
1511 case TMTIMERSTATE_PENDING_SCHEDULE_SET_EXPIRE:
1512 case TMTIMERSTATE_PENDING_RESCHEDULE_SET_EXPIRE:
1513 AssertMsgFailed(("%p:.enmState=%s %s\n", pTimer, tmTimerState(enmState), pTimer->pszDesc));
1514 tmTimerUnlock(pVM);
1515 if (!RTThreadYield())
1516 RTThreadSleep(1);
1517 AssertMsgReturn(cRetries > 0, ("Failed waiting for stable state. state=%d (%s)\n", pTimer->enmState, pTimer->pszDesc),
1518 VERR_TM_UNSTABLE_STATE);
1519 tmTimerLock(pVM);
1520 continue;
1521
1522 /*
1523 * Invalid states.
1524 */
1525 case TMTIMERSTATE_FREE:
1526 case TMTIMERSTATE_DESTROY:
1527 tmTimerUnlock(pVM);
1528 AssertLogRelMsgFailedReturn(("pTimer=%p %s\n", pTimer, tmTimerState(enmState)), VERR_TM_INVALID_STATE);
1529
1530 default:
1531 AssertMsgFailed(("Unknown timer state %d (%s)\n", enmState, R3STRING(pTimer->pszDesc)));
1532 tmTimerUnlock(pVM);
1533 return VERR_TM_UNKNOWN_STATE;
1534 }
1535
1536 /*
1537 * Try to switch to the destroy state.
1538 * This should always succeed as the caller should make sure there are no races.
1539 */
1540 bool fRc;
1541 TM_TRY_SET_STATE(pTimer, TMTIMERSTATE_DESTROY, enmState, fRc);
1542 if (fRc)
1543 break;
1544 AssertMsgFailed(("%p:.enmState=%s %s\n", pTimer, tmTimerState(enmState), pTimer->pszDesc));
1545 tmTimerUnlock(pVM);
1546 AssertMsgReturn(cRetries > 0, ("Failed waiting for stable state. state=%d (%s)\n", pTimer->enmState, pTimer->pszDesc),
1547 VERR_TM_UNSTABLE_STATE);
1548 tmTimerLock(pVM);
1549 }
1550
1551 /*
1552 * Unlink from the active list.
1553 */
1554 if (fActive)
1555 {
1556 const PTMTIMER pPrev = TMTIMER_GET_PREV(pTimer);
1557 const PTMTIMER pNext = TMTIMER_GET_NEXT(pTimer);
1558 if (pPrev)
1559 TMTIMER_SET_NEXT(pPrev, pNext);
1560 else
1561 {
1562 TMTIMER_SET_HEAD(pQueue, pNext);
1563 pQueue->u64Expire = pNext ? pNext->u64Expire : INT64_MAX;
1564 }
1565 if (pNext)
1566 TMTIMER_SET_PREV(pNext, pPrev);
1567 pTimer->offNext = 0;
1568 pTimer->offPrev = 0;
1569 }
1570
1571 /*
1572 * Unlink from the schedule list by running it.
1573 */
1574 if (fPending)
1575 {
1576 Log3(("TMR3TimerDestroy: tmTimerQueueSchedule\n"));
1577 STAM_PROFILE_START(&pVM->tm.s.CTX_SUFF_Z(StatScheduleOne), a);
1578 Assert(pQueue->offSchedule);
1579 tmTimerQueueSchedule(pVM, pQueue);
1580 }
1581
1582 /*
1583 * Ready to move the timer from the created list onto the free list.
1584 */
1585 Assert(!pTimer->offNext); Assert(!pTimer->offPrev); Assert(!pTimer->offScheduleNext);
1586
1587 /* unlink from created list */
1588 if (pTimer->pBigPrev)
1589 pTimer->pBigPrev->pBigNext = pTimer->pBigNext;
1590 else
1591 pVM->tm.s.pCreated = pTimer->pBigNext;
1592 if (pTimer->pBigNext)
1593 pTimer->pBigNext->pBigPrev = pTimer->pBigPrev;
1594 pTimer->pBigNext = 0;
1595 pTimer->pBigPrev = 0;
1596
1597 /* free */
1598 Log2(("TM: Inserting %p into the free list ahead of %p!\n", pTimer, pVM->tm.s.pFree));
1599 TM_SET_STATE(pTimer, TMTIMERSTATE_FREE);
1600 pTimer->pBigNext = pVM->tm.s.pFree;
1601 pVM->tm.s.pFree = pTimer;
1602
1603#ifdef VBOX_STRICT
1604 tmTimerQueuesSanityChecks(pVM, "TMR3TimerDestroy");
1605#endif
1606 tmTimerUnlock(pVM);
1607 return VINF_SUCCESS;
1608}
1609
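/*
 * Example (illustrative): typical owner-side cleanup, mirroring what
 * TMR3TimerDestroyDevice/Driver below do for each matching timer.
 *
 *      int rc = TMR3TimerDestroy(pThis->pTimer);
 *      AssertRC(rc);
 *      pThis->pTimer = NULL;
 */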
1610
1611/**
1612 * Destroy all timers owned by a device.
1613 *
1614 * @returns VBox status.
1615 * @param pVM VM handle.
1616 * @param pDevIns Device which timers should be destroyed.
1617 */
1618VMM_INT_DECL(int) TMR3TimerDestroyDevice(PVM pVM, PPDMDEVINS pDevIns)
1619{
1620 LogFlow(("TMR3TimerDestroyDevice: pDevIns=%p\n", pDevIns));
1621 if (!pDevIns)
1622 return VERR_INVALID_PARAMETER;
1623
1624 tmTimerLock(pVM);
1625 PTMTIMER pCur = pVM->tm.s.pCreated;
1626 while (pCur)
1627 {
1628 PTMTIMER pDestroy = pCur;
1629 pCur = pDestroy->pBigNext;
1630 if ( pDestroy->enmType == TMTIMERTYPE_DEV
1631 && pDestroy->u.Dev.pDevIns == pDevIns)
1632 {
1633 int rc = TMR3TimerDestroy(pDestroy);
1634 AssertRC(rc);
1635 }
1636 }
1637 tmTimerUnlock(pVM);
1638
1639 LogFlow(("TMR3TimerDestroyDevice: returns VINF_SUCCESS\n"));
1640 return VINF_SUCCESS;
1641}
1642
1643
1644/**
1645 * Destroy all timers owned by a driver.
1646 *
1647 * @returns VBox status.
1648 * @param pVM VM handle.
1649 * @param pDrvIns Driver which timers should be destroyed.
1650 */
1651VMM_INT_DECL(int) TMR3TimerDestroyDriver(PVM pVM, PPDMDRVINS pDrvIns)
1652{
1653 LogFlow(("TMR3TimerDestroyDriver: pDrvIns=%p\n", pDrvIns));
1654 if (!pDrvIns)
1655 return VERR_INVALID_PARAMETER;
1656
1657 tmTimerLock(pVM);
1658 PTMTIMER pCur = pVM->tm.s.pCreated;
1659 while (pCur)
1660 {
1661 PTMTIMER pDestroy = pCur;
1662 pCur = pDestroy->pBigNext;
1663 if ( pDestroy->enmType == TMTIMERTYPE_DRV
1664 && pDestroy->u.Drv.pDrvIns == pDrvIns)
1665 {
1666 int rc = TMR3TimerDestroy(pDestroy);
1667 AssertRC(rc);
1668 }
1669 }
1670 tmTimerUnlock(pVM);
1671
1672 LogFlow(("TMR3TimerDestroyDriver: returns VINF_SUCCESS\n"));
1673 return VINF_SUCCESS;
1674}
1675
1676
1677/**
1678 * Internal function for getting the clock time.
1679 *
1680 * @returns clock time.
1681 * @param pVM The VM handle.
1682 * @param enmClock The clock.
1683 */
1684DECLINLINE(uint64_t) tmClock(PVM pVM, TMCLOCK enmClock)
1685{
1686 switch (enmClock)
1687 {
1688 case TMCLOCK_VIRTUAL: return TMVirtualGet(pVM);
1689 case TMCLOCK_VIRTUAL_SYNC: return TMVirtualSyncGet(pVM);
1690 case TMCLOCK_REAL: return TMRealGet(pVM);
1691 case TMCLOCK_TSC: return TMCpuTickGet(&pVM->aCpus[0] /* just take VCPU 0 */);
1692 default:
1693 AssertMsgFailed(("enmClock=%d\n", enmClock));
1694 return ~(uint64_t)0;
1695 }
1696}
1697
1698
1699/**
1700 * Checks if the sync queue has one or more expired timers.
1701 *
1702 * @returns true / false.
1703 *
1704 * @param pVM The VM handle.
1705 * @param enmClock The queue.
1706 */
1707DECLINLINE(bool) tmR3HasExpiredTimer(PVM pVM, TMCLOCK enmClock)
1708{
1709 const uint64_t u64Expire = pVM->tm.s.CTX_SUFF(paTimerQueues)[enmClock].u64Expire;
1710 return u64Expire != INT64_MAX && u64Expire <= tmClock(pVM, enmClock);
1711}
1712
1713
1714/**
1715 * Checks for expired timers in all the queues.
1716 *
1717 * @returns true / false.
1718 * @param pVM The VM handle.
1719 */
1720DECLINLINE(bool) tmR3AnyExpiredTimers(PVM pVM)
1721{
1722 /*
1723 * Combine the time calculation for the first two since we're not on EMT;
1724 * TMVirtualSyncGet only permits EMT.
1725 */
1726 uint64_t u64Now = TMVirtualGetNoCheck(pVM);
1727 if (pVM->tm.s.CTX_SUFF(paTimerQueues)[TMCLOCK_VIRTUAL].u64Expire <= u64Now)
1728 return true;
1729 u64Now = pVM->tm.s.fVirtualSyncTicking
1730 ? u64Now - pVM->tm.s.offVirtualSync
1731 : pVM->tm.s.u64VirtualSync;
1732 if (pVM->tm.s.CTX_SUFF(paTimerQueues)[TMCLOCK_VIRTUAL_SYNC].u64Expire <= u64Now)
1733 return true;
1734
1735 /*
1736 * The remaining timers.
1737 */
1738 if (tmR3HasExpiredTimer(pVM, TMCLOCK_REAL))
1739 return true;
1740 if (tmR3HasExpiredTimer(pVM, TMCLOCK_TSC))
1741 return true;
1742 return false;
1743}
1744
1745
1746/**
1747 * Schedule timer callback.
1748 *
1749 * @param pTimer Timer handle.
1750 * @param pvUser VM handle.
1751 * @thread Timer thread.
1752 *
1753 * @remark We cannot do the scheduling and queue running from a timer handler
1754 *         since it's not executing in EMT, and even if it were it would be async
1755 *         and we wouldn't know the state of affairs.
1756 * So, we'll just raise the timer FF and force any REM execution to exit.
1757 */
1758static DECLCALLBACK(void) tmR3TimerCallback(PRTTIMER pTimer, void *pvUser, uint64_t /*iTick*/)
1759{
1760 PVM pVM = (PVM)pvUser;
1761 PVMCPU pVCpuDst = &pVM->aCpus[pVM->tm.s.idTimerCpu];
1762
1763 AssertCompile(TMCLOCK_MAX == 4);
1764#ifdef DEBUG_Sander /* very annoying, keep it private. */
1765 if (VMCPU_FF_ISSET(pVCpuDst, VMCPU_FF_TIMER))
1766 Log(("tmR3TimerCallback: timer event still pending!!\n"));
1767#endif
1768 if ( !VMCPU_FF_ISSET(pVCpuDst, VMCPU_FF_TIMER)
1769 && ( pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL_SYNC].offSchedule /** @todo FIXME - reconsider offSchedule as a reason for running the timer queues. */
1770 || pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL].offSchedule
1771 || pVM->tm.s.paTimerQueuesR3[TMCLOCK_REAL].offSchedule
1772 || pVM->tm.s.paTimerQueuesR3[TMCLOCK_TSC].offSchedule
1773 || tmR3AnyExpiredTimers(pVM)
1774 )
1775 && !VMCPU_FF_ISSET(pVCpuDst, VMCPU_FF_TIMER)
1776 && !pVM->tm.s.fRunningQueues
1777 )
1778 {
1779 Log5(("TM(%u): FF: 0 -> 1\n", __LINE__));
1780 VMCPU_FF_SET(pVCpuDst, VMCPU_FF_TIMER);
1781 REMR3NotifyTimerPending(pVM, pVCpuDst);
1782 VMR3NotifyCpuFFU(pVCpuDst->pUVCpu, VMNOTIFYFF_FLAGS_DONE_REM /** @todo | VMNOTIFYFF_FLAGS_POKE ?*/);
1783 STAM_COUNTER_INC(&pVM->tm.s.StatTimerCallbackSetFF);
1784 }
1785}
1786
1787
1788/**
1789 * Schedules and runs any pending timers.
1790 *
1791 * This is normally called from a forced action handler in EMT.
1792 *
1793 * @param pVM The VM to run the timers for.
1794 *
1795 * @thread EMT (actually EMT0, but we fend off the others)
1796 */
1797VMMR3DECL(void) TMR3TimerQueuesDo(PVM pVM)
1798{
1799 /*
1800 * Only the dedicated timer EMT should do stuff here.
1801 * (fRunningQueues is only used as an indicator.)
1802 */
1803 Assert(pVM->tm.s.idTimerCpu < pVM->cCpus);
1804 PVMCPU pVCpuDst = &pVM->aCpus[pVM->tm.s.idTimerCpu];
1805 if (VMMGetCpu(pVM) != pVCpuDst)
1806 {
1807 Assert(pVM->cCpus > 1);
1808 return;
1809 }
1810 STAM_PROFILE_START(&pVM->tm.s.StatDoQueues, a);
1811 Log2(("TMR3TimerQueuesDo:\n"));
1812 Assert(!pVM->tm.s.fRunningQueues);
1813 ASMAtomicWriteBool(&pVM->tm.s.fRunningQueues, true);
1814 tmTimerLock(pVM);
1815
1816 /*
1817 * Process the queues.
1818 */
1819 AssertCompile(TMCLOCK_MAX == 4);
1820
1821 /* TMCLOCK_VIRTUAL_SYNC (see also TMR3VirtualSyncFF) */
1822 STAM_PROFILE_ADV_START(&pVM->tm.s.aStatDoQueues[TMCLOCK_VIRTUAL_SYNC], s1);
1823 tmVirtualSyncLock(pVM);
1824 ASMAtomicWriteBool(&pVM->tm.s.fRunningVirtualSyncQueue, true);
1825 VMCPU_FF_CLEAR(pVCpuDst, VMCPU_FF_TIMER); /* Clear the FF once we started working for real. */
1826
1827 if (pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL_SYNC].offSchedule)
1828 tmTimerQueueSchedule(pVM, &pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL_SYNC]);
1829 tmR3TimerQueueRunVirtualSync(pVM);
1830 if (pVM->tm.s.fVirtualSyncTicking) /** @todo move into tmR3TimerQueueRunVirtualSync - FIXME */
1831 VM_FF_CLEAR(pVM, VM_FF_TM_VIRTUAL_SYNC);
1832
1833 ASMAtomicWriteBool(&pVM->tm.s.fRunningVirtualSyncQueue, false);
1834 tmVirtualSyncUnlock(pVM);
1835 STAM_PROFILE_ADV_STOP(&pVM->tm.s.aStatDoQueues[TMCLOCK_VIRTUAL_SYNC], s1);
1836
1837 /* TMCLOCK_VIRTUAL */
1838 STAM_PROFILE_ADV_START(&pVM->tm.s.aStatDoQueues[TMCLOCK_VIRTUAL], s2);
1839 if (pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL].offSchedule)
1840 tmTimerQueueSchedule(pVM, &pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL]);
1841 tmR3TimerQueueRun(pVM, &pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL]);
1842 STAM_PROFILE_ADV_STOP(&pVM->tm.s.aStatDoQueues[TMCLOCK_VIRTUAL], s2);
1843
1844 /* TMCLOCK_TSC */
1845 Assert(!pVM->tm.s.paTimerQueuesR3[TMCLOCK_TSC].offActive); /* not used */
1846
1847 /* TMCLOCK_REAL */
1848 STAM_PROFILE_ADV_START(&pVM->tm.s.aStatDoQueues[TMCLOCK_REAL], s3);
1849 if (pVM->tm.s.paTimerQueuesR3[TMCLOCK_REAL].offSchedule)
1850 tmTimerQueueSchedule(pVM, &pVM->tm.s.paTimerQueuesR3[TMCLOCK_REAL]);
1851 tmR3TimerQueueRun(pVM, &pVM->tm.s.paTimerQueuesR3[TMCLOCK_REAL]);
1852 STAM_PROFILE_ADV_STOP(&pVM->tm.s.aStatDoQueues[TMCLOCK_REAL], s3);
1853
1854#ifdef VBOX_STRICT
1855 /* check that we didn't screw up. */
1856 tmTimerQueuesSanityChecks(pVM, "TMR3TimerQueuesDo");
1857#endif
1858
1859 /* done */
1860 Log2(("TMR3TimerQueuesDo: returns void\n"));
1861 ASMAtomicWriteBool(&pVM->tm.s.fRunningQueues, false);
1862 tmTimerUnlock(pVM);
1863 STAM_PROFILE_STOP(&pVM->tm.s.StatDoQueues, a);
1864}
1865
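/*
 * Example (illustrative sketch): how the forced action raised by
 * tmR3TimerCallback above is typically serviced from EMT.
 *
 *      if (VMCPU_FF_ISSET(pVCpu, VMCPU_FF_TIMER))
 *          TMR3TimerQueuesDo(pVM);
 */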
1870
1871
1872/**
1873 * Schedules and runs any pending timers in the specified queue.
1874 *
1875 * This is normally called from a forced action handler in EMT.
1876 *
1877 * @param pVM The VM to run the timers for.
1878 * @param pQueue The queue to run.
1879 */
1880static void tmR3TimerQueueRun(PVM pVM, PTMTIMERQUEUE pQueue)
1881{
1882 VM_ASSERT_EMT(pVM);
1883
1884 /*
1885 * Run timers.
1886 *
1887 * We check the clock once and run all timers which are ACTIVE
1888 * and have an expire time less than or equal to the time we read.
1889 *
1890 * N.B. A generic unlink must be applied since other threads
1891 * are allowed to mess with any active timer at any time.
1892 * However, we only allow EMT to handle EXPIRED_PENDING
1893 * timers, thus enabling the timer handler function to
1894 * arm the timer again.
1895 */
1896 PTMTIMER pNext = TMTIMER_GET_HEAD(pQueue);
1897 if (!pNext)
1898 return;
1899 const uint64_t u64Now = tmClock(pVM, pQueue->enmClock);
1900 while (pNext && pNext->u64Expire <= u64Now)
1901 {
1902 PTMTIMER pTimer = pNext;
1903 pNext = TMTIMER_GET_NEXT(pTimer);
1904 PPDMCRITSECT pCritSect = pTimer->pCritSect;
1905 if (pCritSect)
1906 PDMCritSectEnter(pCritSect, VERR_INTERNAL_ERROR);
1907 Log2(("tmR3TimerQueueRun: %p:{.enmState=%s, .enmClock=%d, .enmType=%d, u64Expire=%llx (now=%llx) .pszDesc=%s}\n",
1908 pTimer, tmTimerState(pTimer->enmState), pTimer->enmClock, pTimer->enmType, pTimer->u64Expire, u64Now, pTimer->pszDesc));
1909 bool fRc;
1910 TM_TRY_SET_STATE(pTimer, TMTIMERSTATE_EXPIRED_GET_UNLINK, TMTIMERSTATE_ACTIVE, fRc);
1911 if (fRc)
1912 {
1913 Assert(!pTimer->offScheduleNext); /* this can trigger falsely */
1914
1915 /* unlink */
1916 const PTMTIMER pPrev = TMTIMER_GET_PREV(pTimer);
1917 if (pPrev)
1918 TMTIMER_SET_NEXT(pPrev, pNext);
1919 else
1920 {
1921 TMTIMER_SET_HEAD(pQueue, pNext);
1922 pQueue->u64Expire = pNext ? pNext->u64Expire : INT64_MAX;
1923 }
1924 if (pNext)
1925 TMTIMER_SET_PREV(pNext, pPrev);
1926 pTimer->offNext = 0;
1927 pTimer->offPrev = 0;
1928
1929 /* fire */
1930 TM_SET_STATE(pTimer, TMTIMERSTATE_EXPIRED_DELIVER);
1931 switch (pTimer->enmType)
1932 {
1933 case TMTIMERTYPE_DEV: pTimer->u.Dev.pfnTimer(pTimer->u.Dev.pDevIns, pTimer, pTimer->pvUser); break;
1934 case TMTIMERTYPE_DRV: pTimer->u.Drv.pfnTimer(pTimer->u.Drv.pDrvIns, pTimer, pTimer->pvUser); break;
1935 case TMTIMERTYPE_INTERNAL: pTimer->u.Internal.pfnTimer(pVM, pTimer, pTimer->pvUser); break;
1936 case TMTIMERTYPE_EXTERNAL: pTimer->u.External.pfnTimer(pTimer->pvUser); break;
1937 default:
1938 AssertMsgFailed(("Invalid timer type %d (%s)\n", pTimer->enmType, pTimer->pszDesc));
1939 break;
1940 }
1941
1942 /* change the state if it wasn't changed already in the handler. */
1943 TM_TRY_SET_STATE(pTimer, TMTIMERSTATE_STOPPED, TMTIMERSTATE_EXPIRED_DELIVER, fRc);
1944 Log2(("tmR3TimerQueueRun: new state %s\n", tmTimerState(pTimer->enmState)));
1945 }
1946 if (pCritSect)
1947 PDMCritSectLeave(pCritSect);
1948 } /* run loop */
1949}
1950
1951
1952/**
1953 * Schedules and runs any pending timers in the timer queue for the
1954 * synchronous virtual clock.
1955 *
1956 * This scheduling is a bit different from the other queues as it needs to
1957 * implement the special requirements of the timer synchronous virtual clock,
1958 * hence this second queue-run function.
1959 *
1960 * @param pVM The VM to run the timers for.
1961 *
1962 * @remarks The caller must own both the TM/EMT and the Virtual Sync locks.
1963 */
1964static void tmR3TimerQueueRunVirtualSync(PVM pVM)
1965{
1966 PTMTIMERQUEUE const pQueue = &pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL_SYNC];
1967 VM_ASSERT_EMT(pVM);
1968
1969 /*
1970 * Any timers?
1971 */
1972 PTMTIMER pNext = TMTIMER_GET_HEAD(pQueue);
1973 if (RT_UNLIKELY(!pNext))
1974 {
1975 Assert(pVM->tm.s.fVirtualSyncTicking || !pVM->tm.s.cVirtualTicking);
1976 return;
1977 }
1978 STAM_COUNTER_INC(&pVM->tm.s.StatVirtualSyncRun);
1979
1980 /*
1981 * Calculate the time frame for which we will dispatch timers.
1982 *
1983 * We use a time frame starting at the current sync time (which is most likely
1984 * the same as the head timer) and extending a configurable period (100000 ns)
1985 * up towards the current virtual time. This period might also need to be
1986 * restricted by the catch-up rate so that frequent calls to this function
1987 * won't accelerate the time too much; however, that will be implemented at a
1988 * later point if necessary.
1989 *
1990 * Without this frame we would 1) have to run timers much more frequently
1990 * and 2) lag behind at a steady rate.
1991 */
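    /*
     * Worked example of the catch-up logic below (illustrative numbers): with
     * u32VirtualSyncCatchUpPercentage = 25 and 1,000,000 ns of virtual time
     * elapsed since the previous round, the sync clock offset is reduced by
     * u64Sub = 1,000,000 * 25 / 100 = 250,000 ns, i.e. the sync clock runs at
     * 125% speed until the lag has been worked off.
     */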
1992 const uint64_t u64VirtualNow = TMVirtualGetNoCheck(pVM);
1993 uint64_t const offSyncGivenUp = pVM->tm.s.offVirtualSyncGivenUp;
1994 uint64_t u64Now;
1995 if (!pVM->tm.s.fVirtualSyncTicking)
1996 {
1997 STAM_COUNTER_INC(&pVM->tm.s.StatVirtualSyncRunStoppedAlready);
1998 u64Now = pVM->tm.s.u64VirtualSync;
1999#ifdef DEBUG_bird
2000 Assert(u64Now <= pNext->u64Expire);
2001#endif
2002 }
2003 else
2004 {
2005 /* Calc 'now'. */
2006 bool fStopCatchup = false;
2007 bool fUpdateStuff = false;
2008 uint64_t off = pVM->tm.s.offVirtualSync;
2009 if (pVM->tm.s.fVirtualSyncCatchUp)
2010 {
2011 uint64_t u64Delta = u64VirtualNow - pVM->tm.s.u64VirtualSyncCatchUpPrev;
2012 if (RT_LIKELY(!(u64Delta >> 32)))
2013 {
2014 uint64_t u64Sub = ASMMultU64ByU32DivByU32(u64Delta, pVM->tm.s.u32VirtualSyncCatchUpPercentage, 100);
2015 if (off > u64Sub + offSyncGivenUp)
2016 {
2017 off -= u64Sub;
2018 Log4(("TM: %'RU64/-%'8RU64: sub %'RU64 [tmR3TimerQueueRunVirtualSync]\n", u64VirtualNow - off, off - offSyncGivenUp, u64Sub));
2019 }
2020 else
2021 {
2022 STAM_PROFILE_ADV_STOP(&pVM->tm.s.StatVirtualSyncCatchup, c);
2023 fStopCatchup = true;
2024 off = offSyncGivenUp;
2025 }
2026 fUpdateStuff = true;
2027 }
2028 }
2029 u64Now = u64VirtualNow - off;
2030
2031 /* Check if stopped by expired timer. */
2032 uint64_t u64Expire = pNext->u64Expire;
2033 if (u64Now >= u64Expire)
2034 {
2035 STAM_COUNTER_INC(&pVM->tm.s.StatVirtualSyncRunStop);
2036 u64Now = u64Expire;
2037 ASMAtomicWriteU64(&pVM->tm.s.u64VirtualSync, u64Now);
2038 ASMAtomicWriteBool(&pVM->tm.s.fVirtualSyncTicking, false);
2039 Log4(("TM: %'RU64/-%'8RU64: exp tmr [tmR3TimerQueueRunVirtualSync]\n", u64Now, u64VirtualNow - u64Now - offSyncGivenUp));
2040 }
2041 else if (fUpdateStuff)
2042 {
2043 ASMAtomicWriteU64(&pVM->tm.s.offVirtualSync, off);
2044 ASMAtomicWriteU64(&pVM->tm.s.u64VirtualSyncCatchUpPrev, u64VirtualNow);
2045 if (fStopCatchup)
2046 {
2047 ASMAtomicWriteBool(&pVM->tm.s.fVirtualSyncCatchUp, false);
2048 Log4(("TM: %'RU64/0: caught up [tmR3TimerQueueRunVirtualSync]\n", u64VirtualNow));
2049 }
2050 }
2051 }
2052
2053 /* calc end of frame. */
2054 uint64_t u64Max = u64Now + pVM->tm.s.u32VirtualSyncScheduleSlack;
2055 if (u64Max > u64VirtualNow - offSyncGivenUp)
2056 u64Max = u64VirtualNow - offSyncGivenUp;
2057
2058 /* assert sanity */
2059#ifdef DEBUG_bird
2060 Assert(u64Now <= u64VirtualNow - offSyncGivenUp);
2061 Assert(u64Max <= u64VirtualNow - offSyncGivenUp);
2062 Assert(u64Now <= u64Max);
2063 Assert(offSyncGivenUp == pVM->tm.s.offVirtualSyncGivenUp);
2064#endif
2065
2066 /*
2067 * Process the expired timers moving the clock along as we progress.
2068 */
2069#ifdef DEBUG_bird
2070#ifdef VBOX_STRICT
2071 uint64_t u64Prev = u64Now; NOREF(u64Prev);
2072#endif
2073#endif
2074 while (pNext && pNext->u64Expire <= u64Max)
2075 {
2076 PTMTIMER pTimer = pNext;
2077 pNext = TMTIMER_GET_NEXT(pTimer);
2078 PPDMCRITSECT pCritSect = pTimer->pCritSect;
2079 if (pCritSect)
2080 PDMCritSectEnter(pCritSect, VERR_INTERNAL_ERROR);
2081 Log2(("tmR3TimerQueueRun: %p:{.enmState=%s, .enmClock=%d, .enmType=%d, u64Expire=%llx (now=%llx) .pszDesc=%s}\n",
2082 pTimer, tmTimerState(pTimer->enmState), pTimer->enmClock, pTimer->enmType, pTimer->u64Expire, u64Now, pTimer->pszDesc));
2083 bool fRc;
2084 TM_TRY_SET_STATE(pTimer, TMTIMERSTATE_EXPIRED_GET_UNLINK, TMTIMERSTATE_ACTIVE, fRc);
2085 if (fRc)
2086 {
2087 /* unlink */
2088 const PTMTIMER pPrev = TMTIMER_GET_PREV(pTimer);
2089 if (pPrev)
2090 TMTIMER_SET_NEXT(pPrev, pNext);
2091 else
2092 {
2093 TMTIMER_SET_HEAD(pQueue, pNext);
2094 pQueue->u64Expire = pNext ? pNext->u64Expire : INT64_MAX;
2095 }
2096 if (pNext)
2097 TMTIMER_SET_PREV(pNext, pPrev);
2098 pTimer->offNext = 0;
2099 pTimer->offPrev = 0;
2100
2101 /* advance the clock - don't permit timers to be out of order or armed in the 'past'. */
2102#ifdef DEBUG_bird
2103#ifdef VBOX_STRICT
2104 AssertMsg(pTimer->u64Expire >= u64Prev, ("%'RU64 < %'RU64 %s\n", pTimer->u64Expire, u64Prev, pTimer->pszDesc));
2105 u64Prev = pTimer->u64Expire;
2106#endif
2107#endif
2108 ASMAtomicWriteU64(&pVM->tm.s.u64VirtualSync, pTimer->u64Expire);
2109 ASMAtomicWriteBool(&pVM->tm.s.fVirtualSyncTicking, false);
2110
2111 /* fire */
2112 TM_SET_STATE(pTimer, TMTIMERSTATE_EXPIRED_DELIVER);
2113 switch (pTimer->enmType)
2114 {
2115 case TMTIMERTYPE_DEV: pTimer->u.Dev.pfnTimer(pTimer->u.Dev.pDevIns, pTimer, pTimer->pvUser); break;
2116 case TMTIMERTYPE_DRV: pTimer->u.Drv.pfnTimer(pTimer->u.Drv.pDrvIns, pTimer, pTimer->pvUser); break;
2117 case TMTIMERTYPE_INTERNAL: pTimer->u.Internal.pfnTimer(pVM, pTimer, pTimer->pvUser); break;
2118 case TMTIMERTYPE_EXTERNAL: pTimer->u.External.pfnTimer(pTimer->pvUser); break;
2119 default:
2120 AssertMsgFailed(("Invalid timer type %d (%s)\n", pTimer->enmType, pTimer->pszDesc));
2121 break;
2122 }
2123
2124 /* Change the state if it wasn't changed already in the handler.
2125 Reset the Hz hint too since this is the same as TMTimerStop. */
2126 TM_TRY_SET_STATE(pTimer, TMTIMERSTATE_STOPPED, TMTIMERSTATE_EXPIRED_DELIVER, fRc);
2127 if (fRc && pTimer->uHzHint)
2128 {
2129 if (pTimer->uHzHint >= pVM->tm.s.uMaxHzHint)
2130 ASMAtomicWriteBool(&pVM->tm.s.fHzHintNeedsUpdating, true);
2131 pTimer->uHzHint = 0;
2132 }
2133 Log2(("tmR3TimerQueueRun: new state %s\n", tmTimerState(pTimer->enmState)));
2134 }
2135 if (pCritSect)
2136 PDMCritSectLeave(pCritSect);
2137 } /* run loop */
2138
2139 /*
2140 * Restart the clock if it was stopped to serve any timers,
2141 * and start/adjust catch-up if necessary.
2142 */
2143 if ( !pVM->tm.s.fVirtualSyncTicking
2144 && pVM->tm.s.cVirtualTicking)
2145 {
2146 STAM_COUNTER_INC(&pVM->tm.s.StatVirtualSyncRunRestart);
2147
2148 /* calc the slack we've handed out. */
2149 const uint64_t u64VirtualNow2 = TMVirtualGetNoCheck(pVM);
2150 Assert(u64VirtualNow2 >= u64VirtualNow);
2151#ifdef DEBUG_bird
2152 AssertMsg(pVM->tm.s.u64VirtualSync >= u64Now, ("%'RU64 < %'RU64\n", pVM->tm.s.u64VirtualSync, u64Now));
2153#endif
2154 const uint64_t offSlack = pVM->tm.s.u64VirtualSync - u64Now;
2155 STAM_STATS({
2156 if (offSlack)
2157 {
2158 PSTAMPROFILE p = &pVM->tm.s.StatVirtualSyncRunSlack;
2159 p->cPeriods++;
2160 p->cTicks += offSlack;
2161 if (p->cTicksMax < offSlack) p->cTicksMax = offSlack;
2162 if (p->cTicksMin > offSlack) p->cTicksMin = offSlack;
2163 }
2164 });
2165
2166 /* Let the time run a little bit while we were busy running timers(?). */
2167 uint64_t u64Elapsed;
2168#define MAX_ELAPSED 30000U /* ns */
2169 if (offSlack > MAX_ELAPSED)
2170 u64Elapsed = 0;
2171 else
2172 {
2173 u64Elapsed = u64VirtualNow2 - u64VirtualNow;
2174 if (u64Elapsed > MAX_ELAPSED)
2175 u64Elapsed = MAX_ELAPSED;
2176 u64Elapsed = u64Elapsed > offSlack ? u64Elapsed - offSlack : 0;
2177 }
2178#undef MAX_ELAPSED
2179
2180 /* Calc the current offset. */
2181 uint64_t offNew = u64VirtualNow2 - pVM->tm.s.u64VirtualSync - u64Elapsed;
2182 Assert(!(offNew & RT_BIT_64(63)));
2183 uint64_t offLag = offNew - pVM->tm.s.offVirtualSyncGivenUp;
2184 Assert(!(offLag & RT_BIT_64(63)));
2185
2186 /*
2187 * Deal with starting, adjusting and stopping catchup.
2188 */
2189 if (pVM->tm.s.fVirtualSyncCatchUp)
2190 {
2191 if (offLag <= pVM->tm.s.u64VirtualSyncCatchUpStopThreshold)
2192 {
2193 /* stop */
2194 STAM_PROFILE_ADV_STOP(&pVM->tm.s.StatVirtualSyncCatchup, c);
2195 ASMAtomicWriteBool(&pVM->tm.s.fVirtualSyncCatchUp, false);
2196 Log4(("TM: %'RU64/-%'8RU64: caught up [pt]\n", u64VirtualNow2 - offNew, offLag));
2197 }
2198 else if (offLag <= pVM->tm.s.u64VirtualSyncCatchUpGiveUpThreshold)
2199 {
2200 /* adjust */
2201 unsigned i = 0;
2202 while ( i + 1 < RT_ELEMENTS(pVM->tm.s.aVirtualSyncCatchUpPeriods)
2203 && offLag >= pVM->tm.s.aVirtualSyncCatchUpPeriods[i + 1].u64Start)
2204 i++;
2205 if (pVM->tm.s.u32VirtualSyncCatchUpPercentage < pVM->tm.s.aVirtualSyncCatchUpPeriods[i].u32Percentage)
2206 {
2207 STAM_COUNTER_INC(&pVM->tm.s.aStatVirtualSyncCatchupAdjust[i]);
2208 ASMAtomicWriteU32(&pVM->tm.s.u32VirtualSyncCatchUpPercentage, pVM->tm.s.aVirtualSyncCatchUpPeriods[i].u32Percentage);
2209 Log4(("TM: %'RU64/%'8RU64: adj %u%%\n", u64VirtualNow2 - offNew, offLag, pVM->tm.s.u32VirtualSyncCatchUpPercentage));
2210 }
2211 pVM->tm.s.u64VirtualSyncCatchUpPrev = u64VirtualNow2;
2212 }
2213 else
2214 {
2215 /* give up */
2216 STAM_COUNTER_INC(&pVM->tm.s.StatVirtualSyncGiveUp);
2217 STAM_PROFILE_ADV_STOP(&pVM->tm.s.StatVirtualSyncCatchup, c);
2218 ASMAtomicWriteU64((uint64_t volatile *)&pVM->tm.s.offVirtualSyncGivenUp, offNew);
2219 ASMAtomicWriteBool(&pVM->tm.s.fVirtualSyncCatchUp, false);
2220 Log4(("TM: %'RU64/%'8RU64: give up %u%%\n", u64VirtualNow2 - offNew, offLag, pVM->tm.s.u32VirtualSyncCatchUpPercentage));
2221 LogRel(("TM: Giving up catch-up attempt at a %'RU64 ns lag; new total: %'RU64 ns\n", offLag, offNew));
2222 }
2223 }
2224 else if (offLag >= pVM->tm.s.aVirtualSyncCatchUpPeriods[0].u64Start)
2225 {
2226 if (offLag <= pVM->tm.s.u64VirtualSyncCatchUpGiveUpThreshold)
2227 {
2228 /* start */
2229 STAM_PROFILE_ADV_START(&pVM->tm.s.StatVirtualSyncCatchup, c);
2230 unsigned i = 0;
2231 while ( i + 1 < RT_ELEMENTS(pVM->tm.s.aVirtualSyncCatchUpPeriods)
2232 && offLag >= pVM->tm.s.aVirtualSyncCatchUpPeriods[i + 1].u64Start)
2233 i++;
2234 STAM_COUNTER_INC(&pVM->tm.s.aStatVirtualSyncCatchupInitial[i]);
2235 ASMAtomicWriteU32(&pVM->tm.s.u32VirtualSyncCatchUpPercentage, pVM->tm.s.aVirtualSyncCatchUpPeriods[i].u32Percentage);
2236 ASMAtomicWriteBool(&pVM->tm.s.fVirtualSyncCatchUp, true);
2237 Log4(("TM: %'RU64/%'8RU64: catch-up %u%%\n", u64VirtualNow2 - offNew, offLag, pVM->tm.s.u32VirtualSyncCatchUpPercentage));
2238 }
2239 else
2240 {
2241 /* don't bother */
2242 STAM_COUNTER_INC(&pVM->tm.s.StatVirtualSyncGiveUpBeforeStarting);
2243 ASMAtomicWriteU64((uint64_t volatile *)&pVM->tm.s.offVirtualSyncGivenUp, offNew);
2244 Log4(("TM: %'RU64/%'8RU64: give up\n", u64VirtualNow2 - offNew, offLag));
2245 LogRel(("TM: Not bothering to attempt catching up a %'RU64 ns lag; new total: %'RU64\n", offLag, offNew));
2246 }
2247 }
2248
2249 /*
2250 * Update the offset and restart the clock.
2251 */
2252 Assert(!(offNew & RT_BIT_64(63)));
2253 ASMAtomicWriteU64(&pVM->tm.s.offVirtualSync, offNew);
2254 ASMAtomicWriteBool(&pVM->tm.s.fVirtualSyncTicking, true);
2255 }
2256}
2257
2258
2259/**
2260 * Deals with stopped Virtual Sync clock.
2261 *
2262 * This is called by the forced action flag handling code in EM when it
2263 * encounters the VM_FF_TM_VIRTUAL_SYNC flag. It is called by all VCPUs and they
2264 * will block on the VirtualSyncLock until the pending timers have been executed
2265 * and the clock restarted.
2266 *
2267 * @param pVM The VM to run the timers for.
2268 * @param pVCpu The virtual CPU we're running on.
2269 *
2270 * @thread EMTs
2271 */
2272VMM_INT_DECL(void) TMR3VirtualSyncFF(PVM pVM, PVMCPU pVCpu)
2273{
2274 Log2(("TMR3VirtualSyncFF:\n"));
2275
2276 /*
2277 * The EMT doing the timers is diverted to them.
2278 */
2279 if (pVCpu->idCpu == pVM->tm.s.idTimerCpu)
2280 TMR3TimerQueuesDo(pVM);
2281 /*
2282 * The other EMTs will block on the virtual sync lock and the first owner
2283 * will run the queue and thus restart the clock.
2284 *
2285 * Note! This is very suboptimal code wrt resuming execution when there
2286 * are more than two Virtual CPUs, since they will all have to enter
2287 * the critical section one by one. But it's a very simple solution
2288 * which will have to do the job for now.
2289 */
2290 else
2291 {
2292 STAM_PROFILE_START(&pVM->tm.s.StatVirtualSyncFF, a);
2293 tmVirtualSyncLock(pVM);
2294 if (pVM->tm.s.fVirtualSyncTicking)
2295 {
2296 STAM_PROFILE_STOP(&pVM->tm.s.StatVirtualSyncFF, a); /* before the unlock! */
2297 tmVirtualSyncUnlock(pVM);
2298 Log2(("TMR3VirtualSyncFF: ticking\n"));
2299 }
2300 else
2301 {
2302 tmVirtualSyncUnlock(pVM);
2303
2304 /* try run it. */
2305 tmTimerLock(pVM);
2306 tmVirtualSyncLock(pVM);
2307 if (pVM->tm.s.fVirtualSyncTicking)
2308 Log2(("TMR3VirtualSyncFF: ticking (2)\n"));
2309 else
2310 {
2311 ASMAtomicWriteBool(&pVM->tm.s.fRunningVirtualSyncQueue, true);
2312 Log2(("TMR3VirtualSyncFF: running queue\n"));
2313
2314 if (pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL_SYNC].offSchedule)
2315 tmTimerQueueSchedule(pVM, &pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL_SYNC]);
2316 tmR3TimerQueueRunVirtualSync(pVM);
2317 if (pVM->tm.s.fVirtualSyncTicking) /** @todo move into tmR3TimerQueueRunVirtualSync - FIXME */
2318 VM_FF_CLEAR(pVM, VM_FF_TM_VIRTUAL_SYNC);
2319
2320 ASMAtomicWriteBool(&pVM->tm.s.fRunningVirtualSyncQueue, false);
2321 }
2322 STAM_PROFILE_STOP(&pVM->tm.s.StatVirtualSyncFF, a); /* before the unlock! */
2323 tmVirtualSyncUnlock(pVM);
2324 tmTimerUnlock(pVM);
2325 }
2326 }
2327}
2328
2329
2330/** @name Saved state values
2331 * @{ */
2332#define TMTIMERSTATE_SAVED_PENDING_STOP 4
2333#define TMTIMERSTATE_SAVED_PENDING_SCHEDULE 7
2334/** @} */
2335
2336
2337/**
2338 * Saves the state of a timer to a saved state.
2339 *
2340 * @returns VBox status.
2341 * @param pTimer Timer to save.
2342 * @param pSSM Save State Manager handle.
2343 */
2344VMMR3DECL(int) TMR3TimerSave(PTMTIMERR3 pTimer, PSSMHANDLE pSSM)
2345{
2346 LogFlow(("TMR3TimerSave: %p:{enmState=%s, .pszDesc={%s}} pSSM=%p\n", pTimer, tmTimerState(pTimer->enmState), pTimer->pszDesc, pSSM));
2347 switch (pTimer->enmState)
2348 {
2349 case TMTIMERSTATE_STOPPED:
2350 case TMTIMERSTATE_PENDING_STOP:
2351 case TMTIMERSTATE_PENDING_STOP_SCHEDULE:
2352 return SSMR3PutU8(pSSM, TMTIMERSTATE_SAVED_PENDING_STOP);
2353
2354 case TMTIMERSTATE_PENDING_SCHEDULE_SET_EXPIRE:
2355 case TMTIMERSTATE_PENDING_RESCHEDULE_SET_EXPIRE:
2356 AssertMsgFailed(("u64Expire is being updated! (%s)\n", pTimer->pszDesc));
2357 if (!RTThreadYield())
2358 RTThreadSleep(1);
2359 /* fall thru */
2360 case TMTIMERSTATE_ACTIVE:
2361 case TMTIMERSTATE_PENDING_SCHEDULE:
2362 case TMTIMERSTATE_PENDING_RESCHEDULE:
2363 SSMR3PutU8(pSSM, TMTIMERSTATE_SAVED_PENDING_SCHEDULE);
2364 return SSMR3PutU64(pSSM, pTimer->u64Expire);
2365
2366 case TMTIMERSTATE_EXPIRED_GET_UNLINK:
2367 case TMTIMERSTATE_EXPIRED_DELIVER:
2368 case TMTIMERSTATE_DESTROY:
2369 case TMTIMERSTATE_FREE:
2370 AssertMsgFailed(("Invalid timer state %d %s (%s)\n", pTimer->enmState, tmTimerState(pTimer->enmState), pTimer->pszDesc));
2371 return SSMR3HandleSetStatus(pSSM, VERR_TM_INVALID_STATE);
2372 }
2373
2374 AssertMsgFailed(("Unknown timer state %d (%s)\n", pTimer->enmState, pTimer->pszDesc));
2375 return SSMR3HandleSetStatus(pSSM, VERR_TM_UNKNOWN_STATE);
2376}
2377
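/*
 * Example (illustrative sketch): a device saving its timer state from its SSM
 * save-exec callback; the device structure and field names are hypothetical.
 *
 *      static DECLCALLBACK(int) devFooSaveExec(PPDMDEVINS pDevIns, PSSMHANDLE pSSM)
 *      {
 *          PDEVFOO pThis = PDMINS_2_DATA(pDevIns, PDEVFOO);
 *          return TMR3TimerSave(pThis->pTimerR3, pSSM);
 *      }
 */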
2378
2379/**
2380 * Loads the state of a timer from a saved state.
2381 *
2382 * @returns VBox status.
2383 * @param pTimer Timer to restore.
2384 * @param pSSM Save State Manager handle.
2385 */
2386VMMR3DECL(int) TMR3TimerLoad(PTMTIMERR3 pTimer, PSSMHANDLE pSSM)
2387{
2388 Assert(pTimer); Assert(pSSM); VM_ASSERT_EMT(pTimer->pVMR3);
2389 LogFlow(("TMR3TimerLoad: %p:{enmState=%s, .pszDesc={%s}} pSSM=%p\n", pTimer, tmTimerState(pTimer->enmState), pTimer->pszDesc, pSSM));
2390
2391 /*
2392 * Load the state and validate it.
2393 */
2394 uint8_t u8State;
2395 int rc = SSMR3GetU8(pSSM, &u8State);
2396 if (RT_FAILURE(rc))
2397 return rc;
2398#if 1 /* Workaround for accidental state shift in r47786 (2009-05-26 19:12:12). */ /** @todo remove this in a few weeks! */
2399 if ( u8State == TMTIMERSTATE_SAVED_PENDING_STOP + 1
2400 || u8State == TMTIMERSTATE_SAVED_PENDING_SCHEDULE + 1)
2401 u8State--;
2402#endif
2403 if ( u8State != TMTIMERSTATE_SAVED_PENDING_STOP
2404 && u8State != TMTIMERSTATE_SAVED_PENDING_SCHEDULE)
2405 {
2406 AssertLogRelMsgFailed(("u8State=%d\n", u8State));
2407 return SSMR3HandleSetStatus(pSSM, VERR_TM_LOAD_STATE);
2408 }
2409
2410 /* Enter the critical section to make TMTimerSet/Stop happy. */
2411 PPDMCRITSECT pCritSect = pTimer->pCritSect;
2412 if (pCritSect)
2413 PDMCritSectEnter(pCritSect, VERR_INTERNAL_ERROR);
2414
2415 if (u8State == TMTIMERSTATE_SAVED_PENDING_SCHEDULE)
2416 {
2417 /*
2418 * Load the expire time.
2419 */
2420 uint64_t u64Expire;
2421 rc = SSMR3GetU64(pSSM, &u64Expire);
2422 if (RT_FAILURE(rc))
2423 { if (pCritSect) PDMCritSectLeave(pCritSect); return rc; /* don't leak the critsect on a broken stream */ }
2424
2425 /*
2426 * Set it.
2427 */
2428 Log(("u8State=%d u64Expire=%llu\n", u8State, u64Expire));
2429 rc = TMTimerSet(pTimer, u64Expire);
2430 }
2431 else
2432 {
2433 /*
2434 * Stop it.
2435 */
2436 Log(("u8State=%d\n", u8State));
2437 rc = TMTimerStop(pTimer);
2438 }
2439
2440 if (pCritSect)
2441 PDMCritSectLeave(pCritSect);
2442
2443 /*
2444 * On failure set SSM status.
2445 */
2446 if (RT_FAILURE(rc))
2447 rc = SSMR3HandleSetStatus(pSSM, rc);
2448 return rc;
2449}
2450
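/*
 * Example (illustrative sketch): the load-exec counterpart of the save example
 * above, for the same hypothetical device.
 *
 *      static DECLCALLBACK(int) devFooLoadExec(PPDMDEVINS pDevIns, PSSMHANDLE pSSM, uint32_t uVersion, uint32_t uPass)
 *      {
 *          PDEVFOO pThis = PDMINS_2_DATA(pDevIns, PDEVFOO);
 *          return TMR3TimerLoad(pThis->pTimerR3, pSSM);
 *      }
 */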
2451
2452/**
2453 * Associates a critical section with a timer.
2454 *
2455 * The critical section will be entered prior to doing the timer call back, thus
2456 * avoiding potential races between the timer thread and other threads trying to
2457 * stop or adjust the timer expiration while it's being delivered. The timer
2458 * thread will leave the critical section when the timer callback returns.
2459 *
2460 * In strict builds, ownership of the critical section will be asserted by
2461 * TMTimerSet, TMTimerStop, TMTimerGetExpire and TMTimerDestroy (when called at
2462 * runtime).
2463 *
2464 * @retval VINF_SUCCESS on success.
2465 * @retval VERR_INVALID_HANDLE if the timer handle is NULL or invalid
2466 * (asserted).
2467 * @retval VERR_INVALID_PARAMETER if pCritSect is NULL or has an invalid magic
2468 * (asserted).
2469 * @retval VERR_ALREADY_EXISTS if a critical section was already associated
2470 * with the timer (asserted).
2471 * @retval VERR_INVALID_STATE if the timer isn't stopped.
2472 *
2473 * @param pTimer The timer handle.
2474 * @param pCritSect The critical section. The caller must make sure this
2475 * is around for the life time of the timer.
2476 *
2477 * @thread Any, but the caller is responsible for making sure the timer is not
2478 * active.
2479 */
2480VMMR3DECL(int) TMR3TimerSetCritSect(PTMTIMERR3 pTimer, PPDMCRITSECT pCritSect)
2481{
2482 AssertPtrReturn(pTimer, VERR_INVALID_HANDLE);
2483 AssertPtrReturn(pCritSect, VERR_INVALID_PARAMETER);
2484 const char *pszName = PDMR3CritSectName(pCritSect); /* exploited for validation */
2485 AssertReturn(pszName, VERR_INVALID_PARAMETER);
2486 AssertReturn(!pTimer->pCritSect, VERR_ALREADY_EXISTS);
2487 AssertReturn(pTimer->enmState == TMTIMERSTATE_STOPPED, VERR_INVALID_STATE);
2488 LogFlow(("pTimer=%p (%s) pCritSect=%p (%s)\n", pTimer, pTimer->pszDesc, pCritSect, pszName));
2489
2490 pTimer->pCritSect = pCritSect;
2491 return VINF_SUCCESS;
2492}
2493
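/*
 * Example (illustrative sketch): creating a timer without any critical section
 * and then associating the device's own one, instead of using
 * TMTIMER_FLAGS_DEFAULT_CRIT_SECT at creation time. Names are hypothetical.
 *
 *      rc = TMR3TimerCreateDevice(pVM, pDevIns, TMCLOCK_VIRTUAL, devFooTimer, pThis,
 *                                 TMTIMER_FLAGS_NO_CRIT_SECT, "Foo Timer", &pThis->pTimerR3);
 *      if (RT_SUCCESS(rc))
 *          rc = TMR3TimerSetCritSect(pThis->pTimerR3, &pThis->CritSect);
 */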
2494
2495/**
2496 * Get the real world UTC time adjusted for VM lag.
2497 *
2498 * @returns pTime.
2499 * @param pVM The VM instance.
2500 * @param pTime Where to store the time.
2501 */
2502VMM_INT_DECL(PRTTIMESPEC) TMR3UtcNow(PVM pVM, PRTTIMESPEC pTime)
2503{
2504 RTTimeNow(pTime);
2505 RTTimeSpecSubNano(pTime, ASMAtomicReadU64(&pVM->tm.s.offVirtualSync) - ASMAtomicReadU64((uint64_t volatile *)&pVM->tm.s.offVirtualSyncGivenUp));
2506 RTTimeSpecAddNano(pTime, pVM->tm.s.offUTC);
2507 return pTime;
2508}
2509
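/*
 * Worked example (illustrative numbers): if the virtual sync clock currently
 * lags 2,000,000 ns behind the virtual clock (offVirtualSync) and 500,000 ns
 * of that lag has been given up on (offVirtualSyncGivenUp), the returned UTC
 * time is pushed back by the remaining 1,500,000 ns before the offUTC
 * adjustment is added.
 */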
2510
2511/**
2512 * Pauses all clocks except TMCLOCK_REAL.
2513 *
2514 * @returns VBox status code, all errors are asserted.
2515 * @param pVM The VM handle.
2516 * @param pVCpu The virtual CPU handle.
2517 * @thread EMT corresponding to the virtual CPU handle.
2518 */
2519VMMR3DECL(int) TMR3NotifySuspend(PVM pVM, PVMCPU pVCpu)
2520{
2521 VMCPU_ASSERT_EMT(pVCpu);
2522
2523 /*
2524 * The shared virtual clock (includes virtual sync which is tied to it).
2525 */
2526 tmTimerLock(pVM); /* Paranoia: Exploiting the timer lock here. */
2527 int rc = tmVirtualPauseLocked(pVM);
2528 tmTimerUnlock(pVM);
2529 if (RT_FAILURE(rc))
2530 return rc;
2531
2532 /*
2533 * Pause the TSC last since it is normally linked to the virtual
2534 * sync clock, so the above code may actually stop both clocks.
2535 */
2536 rc = tmCpuTickPause(pVM, pVCpu);
2537 if (RT_FAILURE(rc))
2538 return rc;
2539
2540#ifndef VBOX_WITHOUT_NS_ACCOUNTING
2541 /*
2542 * Update cNsTotal.
2543 */
2544 uint32_t uGen = ASMAtomicIncU32(&pVCpu->tm.s.uTimesGen); Assert(uGen & 1);
2545 pVCpu->tm.s.cNsTotal = RTTimeNanoTS() - pVCpu->tm.s.u64NsTsStartTotal;
2546 pVCpu->tm.s.cNsOther = pVCpu->tm.s.cNsTotal - pVCpu->tm.s.cNsExecuting - pVCpu->tm.s.cNsHalted;
2547 ASMAtomicWriteU32(&pVCpu->tm.s.uTimesGen, (uGen | 1) + 1);
2548#endif
2549
2550 return VINF_SUCCESS;
2551}
2552
2553
2554/**
2555 * Resumes all clocks except TMCLOCK_REAL.
2556 *
2557 * @returns VBox status code, all errors are asserted.
2558 * @param pVM The VM handle.
2559 * @param pVCpu The virtual CPU handle.
2560 * @thread EMT corresponding to the virtual CPU handle.
2561 */
2562VMMR3DECL(int) TMR3NotifyResume(PVM pVM, PVMCPU pVCpu)
2563{
2564 VMCPU_ASSERT_EMT(pVCpu);
2565 int rc;
2566
2567#ifndef VBOX_WITHOUT_NS_ACCOUNTING
2568 /*
2569 * Set u64NsTsStartTotal. There is no need to back this out if either of
2570 * the two calls below fails.
2571 */
2572 pVCpu->tm.s.u64NsTsStartTotal = RTTimeNanoTS() - pVCpu->tm.s.cNsTotal;
2573#endif
2574
2575 /*
2576 * Resume the TSC first since it is normally linked to the virtual sync
2577 * clock, so it may actually not be resumed until we've executed the code
2578 * below.
2579 */
2580 if (!pVM->tm.s.fTSCTiedToExecution)
2581 {
2582 rc = tmCpuTickResume(pVM, pVCpu);
2583 if (RT_FAILURE(rc))
2584 return rc;
2585 }
2586
2587 /*
2588 * The shared virtual clock (includes virtual sync which is tied to it).
2589 */
2590 tmTimerLock(pVM); /* Paranoia: Exploiting the timer lock here. */
2591 rc = tmVirtualResumeLocked(pVM);
2592 tmTimerUnlock(pVM);
2593
2594 return rc;
2595}
2596
2597
2598/**
2599 * Sets the warp drive percent of the virtual time.
2600 *
2601 * @returns VBox status code.
2602 * @param pVM The VM handle.
2603 * @param u32Percent The new percentage. 100 means normal operation.
2604 *
2605 * @todo Move to Ring-3!
2606 */
2607VMMDECL(int) TMR3SetWarpDrive(PVM pVM, uint32_t u32Percent)
2608{
2609 return VMR3ReqCallWait(pVM, VMCPUID_ANY, (PFNRT)tmR3SetWarpDrive, 2, pVM, u32Percent);
2610}
2611
2612
2613/**
2614 * EMT worker for TMR3SetWarpDrive.
2615 *
2616 * @returns VBox status code.
2617 * @param pVM The VM handle.
2618 * @param u32Percent See TMR3SetWarpDrive().
2619 * @internal
2620 */
2621static DECLCALLBACK(int) tmR3SetWarpDrive(PVM pVM, uint32_t u32Percent)
2622{
2623 PVMCPU pVCpu = VMMGetCpu(pVM);
2624
2625 /*
2626 * Validate it.
2627 */
2628 AssertMsgReturn(u32Percent >= 2 && u32Percent <= 20000,
2629 ("%RX32 is not between 2 and 20000 (inclusive).\n", u32Percent),
2630 VERR_INVALID_PARAMETER);
2631
2632/** @todo This isn't a feature specific to virtual time, move the variables to
2633 * TM level and make it affect TMR3UtcNow as well! */
2634
2635 /*
2636 * If the time is running we'll have to pause it before we can change
2637 * the warp drive settings.
2638 */
2639 tmTimerLock(pVM); /* Paranoia: Exploiting the timer lock here. */
2640 bool fPaused = !!pVM->tm.s.cVirtualTicking;
2641 if (fPaused) /** @todo this isn't really working, but wtf. */
2642 TMR3NotifySuspend(pVM, pVCpu);
2643
2644 pVM->tm.s.u32VirtualWarpDrivePercentage = u32Percent;
2645 pVM->tm.s.fVirtualWarpDrive = u32Percent != 100;
2646 LogRel(("TM: u32VirtualWarpDrivePercentage=%RI32 fVirtualWarpDrive=%RTbool\n",
2647 pVM->tm.s.u32VirtualWarpDrivePercentage, pVM->tm.s.fVirtualWarpDrive));
2648
2649 if (fPaused)
2650 TMR3NotifyResume(pVM, pVCpu);
2651 tmTimerUnlock(pVM);
2652 return VINF_SUCCESS;
2653}
2654
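/*
 * Example (illustrative): running guest time at double speed and switching
 * back to normal operation.
 *
 *      int rc = TMR3SetWarpDrive(pVM, 200);    // 200% - twice as fast
 *      AssertRC(rc);
 *      ...
 *      rc = TMR3SetWarpDrive(pVM, 100);        // back to normal speed
 */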
2655
2656/**
2657 * Gets the performance information for one virtual CPU as seen by the VMM.
2658 *
2659 * The returned times cover the period where the VM is running and will be
2660 * reset when restoring a previous VM state (at least for the time being).
2661 *
2662 * @retval VINF_SUCCESS on success.
2663 * @retval VERR_NOT_IMPLEMENTED if not compiled in.
2664 * @retval VERR_INVALID_STATE if the VM handle is bad.
2665 * @retval VERR_INVALID_PARAMETER if idCpu is out of range.
2666 *
2667 * @param pVM The VM handle.
2668 * @param idCpu The ID of the virtual CPU which times to get.
2669 * @param pcNsTotal Where to store the total run time (nanoseconds) of
2670 * the CPU, i.e. the sum of the three other returns.
2671 * Optional.
2672 * @param pcNsExecuting Where to store the time (nanoseconds) spent
2673 * executing guest code. Optional.
2674 * @param pcNsHalted Where to store the time (nanoseconds) spent
2675 * halted. Optional.
2676 * @param pcNsOther Where to store the time (nanoseconds) spent
2677 * preempted by the host scheduler, on virtualization
2678 * overhead and on other tasks. Optional.
2679 */
2680VMMR3DECL(int) TMR3GetCpuLoadTimes(PVM pVM, VMCPUID idCpu, uint64_t *pcNsTotal, uint64_t *pcNsExecuting,
2681 uint64_t *pcNsHalted, uint64_t *pcNsOther)
2682{
2683 VM_ASSERT_VALID_EXT_RETURN(pVM, VERR_INVALID_STATE);
2684 AssertReturn(idCpu < pVM->cCpus, VERR_INVALID_PARAMETER);
2685
2686#ifndef VBOX_WITHOUT_NS_ACCOUNTING
2687 /*
2688 * Get a stable result set.
2689 * This should be way quicker than an EMT request.
2690 */
2691 PVMCPU pVCpu = &pVM->aCpus[idCpu];
2692 uint32_t uTimesGen = ASMAtomicReadU32(&pVCpu->tm.s.uTimesGen);
2693 uint64_t cNsTotal = pVCpu->tm.s.cNsTotal;
2694 uint64_t cNsExecuting = pVCpu->tm.s.cNsExecuting;
2695 uint64_t cNsHalted = pVCpu->tm.s.cNsHalted;
2696 uint64_t cNsOther = pVCpu->tm.s.cNsOther;
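    /* uTimesGen works like a sequence lock: the EMT makes it odd while it is
       updating the cNs* fields and even again when done, so an odd value, or a
       value that changed since the first read, means the snapshot is torn. */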
2697 while ( (uTimesGen & 1) /* update in progress */
2698 || uTimesGen != ASMAtomicReadU32(&pVCpu->tm.s.uTimesGen))
2699 {
2700 RTThreadYield();
2701 uTimesGen = ASMAtomicReadU32(&pVCpu->tm.s.uTimesGen);
2702 cNsTotal = pVCpu->tm.s.cNsTotal;
2703 cNsExecuting = pVCpu->tm.s.cNsExecuting;
2704 cNsHalted = pVCpu->tm.s.cNsHalted;
2705 cNsOther = pVCpu->tm.s.cNsOther;
2706 }
2707
2708 /*
2709 * Fill in the return values.
2710 */
2711 if (pcNsTotal)
2712 *pcNsTotal = cNsTotal;
2713 if (pcNsExecuting)
2714 *pcNsExecuting = cNsExecuting;
2715 if (pcNsHalted)
2716 *pcNsHalted = cNsHalted;
2717 if (pcNsOther)
2718 *pcNsOther = cNsOther;
2719
2720 return VINF_SUCCESS;
2721
2722#else
2723 return VERR_NOT_IMPLEMENTED;
2724#endif
2725}
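
/*
 * A minimal usage sketch (hypothetical caller, not from the original file;
 * pVM and idCpu are assumed valid): sample the times and derive a rough
 * guest-execution percentage.
 *
 *     uint64_t cNsTotal, cNsExecuting, cNsHalted, cNsOther;
 *     int rc = TMR3GetCpuLoadTimes(pVM, idCpu, &cNsTotal, &cNsExecuting,
 *                                  &cNsHalted, &cNsOther);
 *     if (RT_SUCCESS(rc) && cNsTotal)
 *         LogRel(("CPU%u: %u%% executing\n", idCpu,
 *                 (unsigned)(cNsExecuting * 100 / cNsTotal)));
 */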
2726
2727#ifndef VBOX_WITHOUT_NS_ACCOUNTING
2728
2729/**
2730 * Helper for tmR3CpuLoadTimer.
2731 *
2732 * @param pState The state to update.
2733 * @param cNsTotal The cumulative total run time (nanoseconds).
2734 * @param cNsExecuting The cumulative time spent executing (nanoseconds).
2735 * @param cNsHalted The cumulative time spent halted (nanoseconds).
2736 */
2737DECLINLINE(void) tmR3CpuLoadTimerMakeUpdate(PTMCPULOADSTATE pState,
2738 uint64_t cNsTotal,
2739 uint64_t cNsExecuting,
2740 uint64_t cNsHalted)
2741{
2742 /* Calc deltas */
2743 uint64_t cNsTotalDelta = cNsTotal - pState->cNsPrevTotal;
2744 pState->cNsPrevTotal = cNsTotal;
2745
2746 uint64_t cNsExecutingDelta = cNsExecuting - pState->cNsPrevExecuting;
2747 pState->cNsPrevExecuting = cNsExecuting;
2748
2749 uint64_t cNsHaltedDelta = cNsHalted - pState->cNsPrevHalted;
2750 pState->cNsPrevHalted = cNsHalted;
2751
2752 /* Calc pcts. */
2753 if (!cNsTotalDelta)
2754 {
2755 pState->cPctExecuting = 0;
2756 pState->cPctHalted = 100;
2757 pState->cPctOther = 0;
2758 }
2759 else if (cNsTotalDelta < UINT64_MAX / 4)
2760 {
2761 pState->cPctExecuting = (uint8_t)(cNsExecutingDelta * 100 / cNsTotalDelta);
2762 pState->cPctHalted = (uint8_t)(cNsHaltedDelta * 100 / cNsTotalDelta);
2763 pState->cPctOther = (uint8_t)((cNsTotalDelta - cNsExecutingDelta - cNsHaltedDelta) * 100 / cNsTotalDelta);
2764 }
2765 else
2766 {
2767 pState->cPctExecuting = 0;
2768 pState->cPctHalted = 100;
2769 pState->cPctOther = 0;
2770 }
2771}
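
/*
 * A worked example of the percentage calculation above (illustrative numbers
 * only): with cNsTotalDelta = 1 000 000 000, cNsExecutingDelta = 400 000 000
 * and cNsHaltedDelta = 500 000 000 over a one second interval, the state ends
 * up as cPctExecuting = 40, cPctHalted = 50 and cPctOther = 10, the last
 * being the host-preemption and virtualization-overhead remainder.
 */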
2772
2773
2774/**
2775 * Timer callback that calculates the CPU load since the last time it was
2776 * called.
2777 *
2778 * @param pVM The VM handle.
2779 * @param pTimer The timer.
2780 * @param pvUser NULL, unused.
2781 */
2782static DECLCALLBACK(void) tmR3CpuLoadTimer(PVM pVM, PTMTIMER pTimer, void *pvUser)
2783{
2784 /*
2785 * Re-arm the timer first.
2786 */
2787 int rc = TMTimerSetMillies(pTimer, 1000);
2788 AssertLogRelRC(rc);
2789 NOREF(pvUser);
2790
2791 /*
2792 * Update the values for each CPU.
2793 */
2794 uint64_t cNsTotalAll = 0;
2795 uint64_t cNsExecutingAll = 0;
2796 uint64_t cNsHaltedAll = 0;
2797 for (VMCPUID iCpu = 0; iCpu < pVM->cCpus; iCpu++)
2798 {
2799 PVMCPU pVCpu = &pVM->aCpus[iCpu];
2800
2801 /* Try to get a stable data set. */
2802 uint32_t cTries = 3;
2803 uint32_t uTimesGen = ASMAtomicReadU32(&pVCpu->tm.s.uTimesGen);
2804 uint64_t cNsTotal = pVCpu->tm.s.cNsTotal;
2805 uint64_t cNsExecuting = pVCpu->tm.s.cNsExecuting;
2806 uint64_t cNsHalted = pVCpu->tm.s.cNsHalted;
2807 while (RT_UNLIKELY( (uTimesGen & 1) /* update in progress */
2808 || uTimesGen != ASMAtomicReadU32(&pVCpu->tm.s.uTimesGen)))
2809 {
2810 if (!--cTries)
2811 break;
2812 ASMNopPause();
2813 uTimesGen = ASMAtomicReadU32(&pVCpu->tm.s.uTimesGen);
2814 cNsTotal = pVCpu->tm.s.cNsTotal;
2815 cNsExecuting = pVCpu->tm.s.cNsExecuting;
2816 cNsHalted = pVCpu->tm.s.cNsHalted;
2817 }
2818
2819 /* Totals */
2820 cNsTotalAll += cNsTotal;
2821 cNsExecutingAll += cNsExecuting;
2822 cNsHaltedAll += cNsHalted;
2823
2824 /* Calc the PCTs and update the state. */
2825 tmR3CpuLoadTimerMakeUpdate(&pVCpu->tm.s.CpuLoad, cNsTotal, cNsExecuting, cNsHalted);
2826 }
2827
2828 /*
2829 * Update the value for all the CPUs.
2830 */
2831 tmR3CpuLoadTimerMakeUpdate(&pVM->tm.s.CpuLoad, cNsTotalAll, cNsExecutingAll, cNsHaltedAll);
2832
2833 /** @todo Try adding 1, 5 and 15 min load stats. */
2834
2835}
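
/*
 * A hedged sketch of how a callback like this is wired up during TM init (the
 * actual call site is outside this excerpt and the TMR3TimerCreateInternal
 * signature is assumed here): create a TMCLOCK_REAL timer and give it the
 * initial one second arming that the callback then keeps renewing.
 *
 *     PTMTIMER pTimer;
 *     int rc = TMR3TimerCreateInternal(pVM, TMCLOCK_REAL, tmR3CpuLoadTimer,
 *                                      NULL, "CPU Load Timer", &pTimer);
 *     if (RT_SUCCESS(rc))
 *         rc = TMTimerSetMillies(pTimer, 1000);
 */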
2836
2837#endif /* !VBOX_WITHOUT_NS_ACCOUNTING */
2838
2839/**
2840 * Gets the 5 char clock name for the info tables.
2841 *
2842 * @returns The name.
2843 * @param enmClock The clock.
2844 */
2845DECLINLINE(const char *) tmR3Get5CharClockName(TMCLOCK enmClock)
2846{
2847 switch (enmClock)
2848 {
2849 case TMCLOCK_REAL: return "Real ";
2850 case TMCLOCK_VIRTUAL: return "Virt ";
2851 case TMCLOCK_VIRTUAL_SYNC: return "VrSy ";
2852 case TMCLOCK_TSC: return "TSC ";
2853 default: return "Bad ";
2854 }
2855}
2856
2857
2858/**
2859 * Display all timers.
2860 *
2861 * @param pVM The VM handle.
2862 * @param pHlp The info helpers.
2863 * @param pszArgs Arguments, ignored.
2864 */
2865static DECLCALLBACK(void) tmR3TimerInfo(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
2866{
2867 NOREF(pszArgs);
2868 pHlp->pfnPrintf(pHlp,
2869 "Timers (pVM=%p)\n"
2870 "%.*s %.*s %.*s %.*s Clock %18s %18s %6s %-25s Description\n",
2871 pVM,
2872 sizeof(RTR3PTR) * 2, "pTimerR3 ",
2873 sizeof(int32_t) * 2, "offNext ",
2874 sizeof(int32_t) * 2, "offPrev ",
2875 sizeof(int32_t) * 2, "offSched ",
2876 "Time",
2877 "Expire",
2878 "HzHint",
2879 "State");
2880 tmTimerLock(pVM);
2881 for (PTMTIMERR3 pTimer = pVM->tm.s.pCreated; pTimer; pTimer = pTimer->pBigNext)
2882 {
2883 pHlp->pfnPrintf(pHlp,
2884 "%p %08RX32 %08RX32 %08RX32 %s %18RU64 %18RU64 %6RU32 %-25s %s\n",
2885 pTimer,
2886 pTimer->offNext,
2887 pTimer->offPrev,
2888 pTimer->offScheduleNext,
2889 tmR3Get5CharClockName(pTimer->enmClock),
2890 TMTimerGet(pTimer),
2891 pTimer->u64Expire,
2892 pTimer->uHzHint,
2893 tmTimerState(pTimer->enmState),
2894 pTimer->pszDesc);
2895 }
2896 tmTimerUnlock(pVM);
2897}
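
/*
 * A hedged sketch of how this and the other info handlers below are typically
 * registered with DBGF during TM init (assumed API use; the actual call site
 * is outside this excerpt). Once registered, they are reachable from the
 * debugger console as "info timers", "info timersactive" and "info clocks".
 *
 *     DBGFR3InfoRegisterInternalEx(pVM, "timers",
 *                                  "Dumps all timers. No arguments.",
 *                                  tmR3TimerInfo, DBGFINFO_FLAGS_RUN_ON_EMT);
 */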
2898
2899
2900/**
2901 * Display all active timers.
2902 *
2903 * @param pVM The VM handle.
2904 * @param pHlp The info helpers.
2905 * @param pszArgs Arguments, ignored.
2906 */
2907static DECLCALLBACK(void) tmR3TimerInfoActive(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
2908{
2909 NOREF(pszArgs);
2910 pHlp->pfnPrintf(pHlp,
2911 "Active Timers (pVM=%p)\n"
2912 "%.*s %.*s %.*s %.*s Clock %18s %18s %6s %-25s Description\n",
2913 pVM,
2914 sizeof(RTR3PTR) * 2, "pTimerR3 ",
2915 sizeof(int32_t) * 2, "offNext ",
2916 sizeof(int32_t) * 2, "offPrev ",
2917 sizeof(int32_t) * 2, "offSched ",
2918 "Time",
2919 "Expire",
2920 "HzHint",
2921 "State");
2922 for (unsigned iQueue = 0; iQueue < TMCLOCK_MAX; iQueue++)
2923 {
2924 tmTimerLock(pVM);
2925 for (PTMTIMERR3 pTimer = TMTIMER_GET_HEAD(&pVM->tm.s.paTimerQueuesR3[iQueue]);
2926 pTimer;
2927 pTimer = TMTIMER_GET_NEXT(pTimer))
2928 {
2929 pHlp->pfnPrintf(pHlp,
2930 "%p %08RX32 %08RX32 %08RX32 %s %18RU64 %18RU64 %6RU32 %-25s %s\n",
2931 pTimer,
2932 pTimer->offNext,
2933 pTimer->offPrev,
2934 pTimer->offScheduleNext,
2935 tmR3Get5CharClockName(pTimer->enmClock),
2936 TMTimerGet(pTimer),
2937 pTimer->u64Expire,
2938 pTimer->uHzHint,
2939 tmTimerState(pTimer->enmState),
2940 pTimer->pszDesc);
2941 }
2942 tmTimerUnlock(pVM);
2943 }
2944}
2945
2946
2947/**
2948 * Display all clocks.
2949 *
2950 * @param pVM The VM handle.
2951 * @param pHlp The info helpers.
2952 * @param pszArgs Arguments, ignored.
2953 */
2954static DECLCALLBACK(void) tmR3InfoClocks(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
2955{
2956 NOREF(pszArgs);
2957
2958 /*
2959 * Read the times first to avoid more time variation than necessary.
2960 */
2961 const uint64_t u64Virtual = TMVirtualGet(pVM);
2962 const uint64_t u64VirtualSync = TMVirtualSyncGet(pVM);
2963 const uint64_t u64Real = TMRealGet(pVM);
2964
2965 for (VMCPUID i = 0; i < pVM->cCpus; i++)
2966 {
2967 PVMCPU pVCpu = &pVM->aCpus[i];
2968 uint64_t u64TSC = TMCpuTickGet(pVCpu);
2969
2970 /*
2971 * TSC
2972 */
2973 pHlp->pfnPrintf(pHlp,
2974 "Cpu Tick: %18RU64 (%#016RX64) %RU64Hz %s%s",
2975 u64TSC, u64TSC, TMCpuTicksPerSecond(pVM),
2976 pVCpu->tm.s.fTSCTicking ? "ticking" : "paused",
2977 pVM->tm.s.fTSCVirtualized ? " - virtualized" : "");
2978 if (pVM->tm.s.fTSCUseRealTSC)
2979 {
2980 pHlp->pfnPrintf(pHlp, " - real tsc");
2981 if (pVCpu->tm.s.offTSCRawSrc)
2982 pHlp->pfnPrintf(pHlp, "\n offset %RU64", pVCpu->tm.s.offTSCRawSrc);
2983 }
2984 else
2985 pHlp->pfnPrintf(pHlp, " - virtual clock");
2986 pHlp->pfnPrintf(pHlp, "\n");
2987 }
2988
2989 /*
2990 * virtual
2991 */
2992 pHlp->pfnPrintf(pHlp,
2993 " Virtual: %18RU64 (%#016RX64) %RU64Hz %s",
2994 u64Virtual, u64Virtual, TMVirtualGetFreq(pVM),
2995 pVM->tm.s.cVirtualTicking ? "ticking" : "paused");
2996 if (pVM->tm.s.fVirtualWarpDrive)
2997 pHlp->pfnPrintf(pHlp, " WarpDrive %RU32 %%", pVM->tm.s.u32VirtualWarpDrivePercentage);
2998 pHlp->pfnPrintf(pHlp, "\n");
2999
3000 /*
3001 * virtual sync
3002 */
3003 pHlp->pfnPrintf(pHlp,
3004 "VirtSync: %18RU64 (%#016RX64) %s%s",
3005 u64VirtualSync, u64VirtualSync,
3006 pVM->tm.s.fVirtualSyncTicking ? "ticking" : "paused",
3007 pVM->tm.s.fVirtualSyncCatchUp ? " - catchup" : "");
3008 if (pVM->tm.s.offVirtualSync)
3009 {
3010 pHlp->pfnPrintf(pHlp, "\n offset %RU64", pVM->tm.s.offVirtualSync);
3011 if (pVM->tm.s.u32VirtualSyncCatchUpPercentage)
3012 pHlp->pfnPrintf(pHlp, " catch-up rate %u %%", pVM->tm.s.u32VirtualSyncCatchUpPercentage);
3013 }
3014 pHlp->pfnPrintf(pHlp, "\n");
3015
3016 /*
3017 * real
3018 */
3019 pHlp->pfnPrintf(pHlp,
3020 " Real: %18RU64 (%#016RX64) %RU64Hz\n",
3021 u64Real, u64Real, TMRealGetFreq(pVM));
3022}
3023