VirtualBox

source: vbox/trunk/src/VBox/VMM/VMMR3/TM.cpp @ 37503

Last change on this file since 37503 was 37466, checked in by vboxsync, 14 years ago

VMM,Devices: Automatically use a per-device lock instead of the giant IOM lock. With the exception of the PIC, APIC, IOAPIC and PCI buses, which are all using the PDM crit sect, there should be no calls between devices. So this change should be relatively safe.

/* $Id: TM.cpp 37466 2011-06-15 12:44:16Z vboxsync $ */
/** @file
 * TM - Time Manager.
 */

/*
 * Copyright (C) 2006-2010 Oracle Corporation
 *
 * This file is part of VirtualBox Open Source Edition (OSE), as
 * available from http://www.virtualbox.org. This file is free software;
 * you can redistribute it and/or modify it under the terms of the GNU
 * General Public License (GPL) as published by the Free Software
 * Foundation, in version 2 as it comes in the "COPYING" file of the
 * VirtualBox OSE distribution. VirtualBox OSE is distributed in the
 * hope that it will be useful, but WITHOUT ANY WARRANTY of any kind.
 */
/** @page pg_tm        TM - The Time Manager
 *
 * The Time Manager abstracts the CPU clocks and manages timers used by the VMM,
 * devices and drivers.
 *
 * @see grp_tm
 *
 *
 * @section sec_tm_clocks   Clocks
 *
 * There are currently 4 clocks:
 *      - Virtual (guest).
 *      - Synchronous virtual (guest).
 *      - CPU Tick (TSC) (guest).  Only current use is rdtsc emulation.  Usually a
 *        function of the virtual clock.
 *      - Real (host).  This is only used for display updates atm.
 *
 * The most important clocks are the first three, and of these the second is
 * the most interesting.
 *
 *
 * The synchronous virtual clock is tied to the virtual clock except that it
 * will take into account timer delivery lag caused by host scheduling.  It will
 * normally never advance beyond the head timer, and when lagging too far behind
 * it will gradually speed up to catch up with the virtual clock.  All devices
 * implementing time sources accessible to and used by the guest use this clock
 * (for timers and other things).  This ensures consistency between the time
 * sources.
 *
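 * To make the catch-up behaviour concrete, here is a minimal sketch (a
 * hypothetical helper, not the actual TM code) of how a catch-up percentage
 * translates into clock speed:
 * @code
 *      // Given nanoseconds elapsed on the virtual clock and a catch-up
 *      // percentage (e.g. 25 for 1.25x), compute how far the synchronous
 *      // virtual clock advances while catching up.
 *      static uint64_t tmExampleCatchUpAdvance(uint64_t cNsElapsed, uint32_t uPct)
 *      {
 *          return cNsElapsed + cNsElapsed * uPct / 100;
 *      }
 * @endcode
 *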
 * The virtual clock is implemented as an offset to a monotonic, high
 * resolution, wall clock.  The current time source is using the RTTimeNanoTS()
 * machinery based upon the Global Info Pages (GIP), that is, we're using TSC
 * deltas to fill the gaps between GIP updates (usually 10 ms apart).  The
 * result is a fairly high res clock that works in all contexts and on all
 * hosts.  The virtual clock is paused when the VM isn't in the running state.
 *
 * The CPU tick (TSC) is normally virtualized as a function of the synchronous
 * virtual clock, where the frequency defaults to the host CPU frequency (as we
 * measure it).  In this mode it is possible to configure the frequency.  Another
 * (non-default) option is to use the raw unmodified host TSC values.  And yet
 * another, to tie it to time spent executing guest code.  All these things are
 * configurable should non-default behavior be desirable.
 *
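 * When the host TSC can be used directly with a fixed per-VCPU offset, the
 * virtualized TSC essentially reduces to the following (illustrative sketch
 * only; offTSCRawSrc is the per-VCPU offset TM maintains):
 * @code
 *      uint64_t tscGuest = ASMReadTSC() + pVCpu->tm.s.offTSCRawSrc;
 * @endcode
 *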
 * The real clock is a monotonic clock (when available) with relatively low
 * resolution, though this is a bit host specific.  Note that we're currently
 * not servicing timers using the real clock when the VM is not running; this
 * is simply because it has not been needed yet and therefore has not been
 * implemented.
 *
 *
 * @subsection subsec_tm_timesync Guest Time Sync / UTC time
 *
 * Guest time syncing is primarily taken care of by the VMM device.  The
 * principle is very simple: the guest additions periodically ask the VMM
 * device what the current UTC time is and make adjustments accordingly.
 *
 * A complicating factor is that the synchronous virtual clock might be doing
 * catchups and the guest perception is currently a little bit behind the world
 * but it will (hopefully) be catching up soon as we're feeding timer interrupts
 * at a slightly higher rate.  Adjusting the guest clock to the current wall
 * time in the real world would be a bad idea then because the guest will be
 * advancing too fast and run ahead of world time (if the catchup works out).
 * To solve this problem TM provides the VMM device with a UTC time source that
 * gets adjusted with the current lag, so that when the guest eventually catches
 * up the lag it will be showing correct real world time.
 *
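 * A rough illustration of that adjustment (hypothetical variable names; the
 * real implementation lives elsewhere in TM):
 * @code
 *      // Hold the reported UTC time back by the part of the virtual sync lag
 *      // we still intend to catch up, so the guest shows correct wall time
 *      // once the catch-up completes.
 *      uint64_t nsUtcForGuest = nsUtcNow - (offVirtualSync - offVirtualSyncGivenUp);
 * @endcode
 *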
 *
 * @section sec_tm_timers   Timers
 *
 * The timers can use any of the TM clocks described in the previous section.
 * Each clock has its own scheduling facility, or timer queue if you like.
 * There are a few factors which make it a bit complex.  First, there is the
 * usual R0 vs R3 vs. RC thing.  Then there are multiple threads, and then there
 * is the timer thread that periodically checks whether any timers have expired
 * without EMT noticing.  On the API level, all but the create and save APIs
 * must be multithreaded.  EMT will always run the timers.
 *
 * The design is using a doubly linked list of active timers which is ordered
 * by expire date.  This list is only modified by the EMT thread.  Updates to
 * the list are batched in a singly linked list, which is then processed by the
 * EMT thread at the first opportunity (immediately, next time EMT modifies a
 * timer on that clock, or next timer timeout).  Both lists are offset based and
 * all the elements are therefore allocated from the hyper heap.
 *
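 * As an illustrative sketch only (the real TMTIMER uses offset based links and
 * atomic state transitions), finding the insertion point in an expire-ordered
 * doubly linked list looks roughly like this:
 * @code
 *      // Walk forward until we find a timer expiring later, insert before it.
 *      PTMTIMER pCur = pQueue->pHead;      // hypothetical member names
 *      while (pCur && pCur->u64Expire <= pTimer->u64Expire)
 *          pCur = pCur->pNext;
 *      // ... standard doubly linked list insertion before pCur ...
 * @endcode
 *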
 * For figuring out when there is a need to schedule and run timers TM will
 * (see the sketch below):
 *      - Poll whenever somebody queries the virtual clock.
 *      - Poll the virtual clocks from the EM and REM loops.
 *      - Poll the virtual clocks from the trap exit path.
 *      - Poll the virtual clocks and calculate first timeout from the halt loop.
 *      - Employ a thread which periodically (100Hz) polls all the timer queues.
 *
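 * The cheap part of all that polling boils down to comparing the current time
 * against the expire time cached at the head of a queue, roughly (illustrative
 * fragment, not the actual polling code):
 * @code
 *      if (u64Now >= pQueue->u64Expire)            // head timer has expired
 *          VMCPU_FF_SET(pVCpuDst, VMCPU_FF_TIMER); // tell EMT to run timers
 * @endcode
 *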
 *
 * @image html TMTIMER-Statechart-Diagram.gif
 *
 * @section sec_tm_timer    Logging
 *
 * Level 2: Logs most of the timer state transitions and queue servicing.
 * Level 3: Logs a few oddments.
 * Level 4: Logs TMCLOCK_VIRTUAL_SYNC catch-up events.
 *
 */

/*******************************************************************************
*   Header Files                                                               *
*******************************************************************************/
#define LOG_GROUP LOG_GROUP_TM
#include <VBox/vmm/tm.h>
#include <iprt/asm-amd64-x86.h> /* for SUPGetCpuHzFromGIP from sup.h */
#include <VBox/vmm/vmm.h>
#include <VBox/vmm/mm.h>
#include <VBox/vmm/ssm.h>
#include <VBox/vmm/dbgf.h>
#include <VBox/vmm/rem.h>
#include <VBox/vmm/pdmapi.h>
#include <VBox/vmm/iom.h>
#include "TMInternal.h"
#include <VBox/vmm/vm.h>

#include <VBox/vmm/pdmdev.h>
#include <VBox/param.h>
#include <VBox/err.h>

#include <VBox/log.h>
#include <iprt/asm.h>
#include <iprt/asm-math.h>
#include <iprt/assert.h>
#include <iprt/thread.h>
#include <iprt/time.h>
#include <iprt/timer.h>
#include <iprt/semaphore.h>
#include <iprt/string.h>
#include <iprt/env.h>


/*******************************************************************************
*   Defined Constants And Macros                                               *
*******************************************************************************/
/** The current saved state version. */
#define TM_SAVED_STATE_VERSION  3


/*******************************************************************************
*   Internal Functions                                                         *
*******************************************************************************/
static bool                 tmR3HasFixedTSC(PVM pVM);
static uint64_t             tmR3CalibrateTSC(PVM pVM);
static DECLCALLBACK(int)    tmR3Save(PVM pVM, PSSMHANDLE pSSM);
static DECLCALLBACK(int)    tmR3Load(PVM pVM, PSSMHANDLE pSSM, uint32_t uVersion, uint32_t uPass);
static DECLCALLBACK(void)   tmR3TimerCallback(PRTTIMER pTimer, void *pvUser, uint64_t iTick);
static void                 tmR3TimerQueueRun(PVM pVM, PTMTIMERQUEUE pQueue);
static void                 tmR3TimerQueueRunVirtualSync(PVM pVM);
static DECLCALLBACK(int)    tmR3SetWarpDrive(PVM pVM, uint32_t u32Percent);
#ifndef VBOX_WITHOUT_NS_ACCOUNTING
static DECLCALLBACK(void)   tmR3CpuLoadTimer(PVM pVM, PTMTIMER pTimer, void *pvUser);
#endif
static DECLCALLBACK(void)   tmR3TimerInfo(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
static DECLCALLBACK(void)   tmR3TimerInfoActive(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
static DECLCALLBACK(void)   tmR3InfoClocks(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);


/**
 * Initializes the TM.
 *
 * @returns VBox status code.
 * @param   pVM         The VM to operate on.
 */
VMM_INT_DECL(int) TMR3Init(PVM pVM)
{
    LogFlow(("TMR3Init:\n"));

    /*
     * Assert alignment and sizes.
     */
    AssertCompileMemberAlignment(VM, tm.s, 32);
    AssertCompile(sizeof(pVM->tm.s) <= sizeof(pVM->tm.padding));
    AssertCompileMemberAlignment(TM, TimerCritSect, 8);
    AssertCompileMemberAlignment(TM, VirtualSyncLock, 8);

    /*
     * Init the structure.
     */
    void *pv;
    int rc = MMHyperAlloc(pVM, sizeof(pVM->tm.s.paTimerQueuesR3[0]) * TMCLOCK_MAX, 0, MM_TAG_TM, &pv);
    AssertRCReturn(rc, rc);
    pVM->tm.s.paTimerQueuesR3 = (PTMTIMERQUEUE)pv;
    pVM->tm.s.paTimerQueuesR0 = MMHyperR3ToR0(pVM, pv);
    pVM->tm.s.paTimerQueuesRC = MMHyperR3ToRC(pVM, pv);

    pVM->tm.s.offVM = RT_OFFSETOF(VM, tm.s);
    pVM->tm.s.idTimerCpu = pVM->cCpus - 1; /* The last CPU. */
    pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL].enmClock         = TMCLOCK_VIRTUAL;
    pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL].u64Expire        = INT64_MAX;
    pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL_SYNC].enmClock    = TMCLOCK_VIRTUAL_SYNC;
    pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL_SYNC].u64Expire   = INT64_MAX;
    pVM->tm.s.paTimerQueuesR3[TMCLOCK_REAL].enmClock            = TMCLOCK_REAL;
    pVM->tm.s.paTimerQueuesR3[TMCLOCK_REAL].u64Expire           = INT64_MAX;
    pVM->tm.s.paTimerQueuesR3[TMCLOCK_TSC].enmClock             = TMCLOCK_TSC;
    pVM->tm.s.paTimerQueuesR3[TMCLOCK_TSC].u64Expire            = INT64_MAX;


    /*
     * We directly use the GIP to calculate the virtual time.  We map the
     * GIP into the guest context so we can do this calculation there as
     * well and save costly world switches.
     */
    pVM->tm.s.pvGIPR3 = (void *)g_pSUPGlobalInfoPage;
    AssertMsgReturn(pVM->tm.s.pvGIPR3, ("GIP support is now required!\n"), VERR_INTERNAL_ERROR);
    AssertMsgReturn((g_pSUPGlobalInfoPage->u32Version >> 16) == (SUPGLOBALINFOPAGE_VERSION >> 16),
                    ("Unsupported GIP version!\n"), VERR_INTERNAL_ERROR);

    RTHCPHYS HCPhysGIP;
    rc = SUPR3GipGetPhys(&HCPhysGIP);
    AssertMsgRCReturn(rc, ("Failed to get GIP physical address!\n"), rc);

    RTGCPTR GCPtr;
#ifdef SUP_WITH_LOTS_OF_CPUS
    rc = MMR3HyperMapHCPhys(pVM, pVM->tm.s.pvGIPR3, NIL_RTR0PTR, HCPhysGIP, (size_t)g_pSUPGlobalInfoPage->cPages * PAGE_SIZE,
                            "GIP", &GCPtr);
#else
    rc = MMR3HyperMapHCPhys(pVM, pVM->tm.s.pvGIPR3, NIL_RTR0PTR, HCPhysGIP, PAGE_SIZE, "GIP", &GCPtr);
#endif
    if (RT_FAILURE(rc))
    {
        AssertMsgFailed(("Failed to map GIP into GC, rc=%Rrc!\n", rc));
        return rc;
    }
    pVM->tm.s.pvGIPRC = GCPtr;
    LogFlow(("TMR3Init: HCPhysGIP=%RHp at %RRv\n", HCPhysGIP, pVM->tm.s.pvGIPRC));
    MMR3HyperReserve(pVM, PAGE_SIZE, "fence", NULL);

    /* Check assumptions made in TMAllVirtual.cpp about the GIP update interval. */
    if (    g_pSUPGlobalInfoPage->u32Magic == SUPGLOBALINFOPAGE_MAGIC
        &&  g_pSUPGlobalInfoPage->u32UpdateIntervalNS >= 250000000 /* 0.25s */)
        return VMSetError(pVM, VERR_INTERNAL_ERROR, RT_SRC_POS,
                          N_("The GIP update interval is too big. u32UpdateIntervalNS=%RU32 (u32UpdateHz=%RU32)"),
                          g_pSUPGlobalInfoPage->u32UpdateIntervalNS, g_pSUPGlobalInfoPage->u32UpdateHz);
    LogRel(("TM: GIP - u32Mode=%d (%s) u32UpdateHz=%u\n", g_pSUPGlobalInfoPage->u32Mode,
            g_pSUPGlobalInfoPage->u32Mode == SUPGIPMODE_SYNC_TSC ? "SyncTSC"
            : g_pSUPGlobalInfoPage->u32Mode == SUPGIPMODE_ASYNC_TSC ? "AsyncTSC" : "Unknown",
            g_pSUPGlobalInfoPage->u32UpdateHz));

    /*
     * Setup the VirtualGetRaw backend.
     */
    pVM->tm.s.VirtualGetRawDataR3.pu64Prev = &pVM->tm.s.u64VirtualRawPrev;
    pVM->tm.s.VirtualGetRawDataR3.pfnBad = tmVirtualNanoTSBad;
    pVM->tm.s.VirtualGetRawDataR3.pfnRediscover = tmVirtualNanoTSRediscover;
    if (ASMCpuId_EDX(1) & X86_CPUID_FEATURE_EDX_SSE2)
    {
        if (g_pSUPGlobalInfoPage->u32Mode == SUPGIPMODE_SYNC_TSC)
            pVM->tm.s.pfnVirtualGetRawR3 = RTTimeNanoTSLFenceSync;
        else
            pVM->tm.s.pfnVirtualGetRawR3 = RTTimeNanoTSLFenceAsync;
    }
    else
    {
        if (g_pSUPGlobalInfoPage->u32Mode == SUPGIPMODE_SYNC_TSC)
            pVM->tm.s.pfnVirtualGetRawR3 = RTTimeNanoTSLegacySync;
        else
            pVM->tm.s.pfnVirtualGetRawR3 = RTTimeNanoTSLegacyAsync;
    }

    pVM->tm.s.VirtualGetRawDataRC.pu64Prev = MMHyperR3ToRC(pVM, (void *)&pVM->tm.s.u64VirtualRawPrev);
    pVM->tm.s.VirtualGetRawDataR0.pu64Prev = MMHyperR3ToR0(pVM, (void *)&pVM->tm.s.u64VirtualRawPrev);
    AssertReturn(pVM->tm.s.VirtualGetRawDataR0.pu64Prev, VERR_INTERNAL_ERROR);
    /* The rest is done in TMR3InitFinalize since it's too early to call PDM. */

    /*
     * Init the locks.
     */
    rc = PDMR3CritSectInit(pVM, &pVM->tm.s.TimerCritSect, RT_SRC_POS, "TM Timer Lock");
    if (RT_FAILURE(rc))
        return rc;
    rc = PDMR3CritSectInit(pVM, &pVM->tm.s.VirtualSyncLock, RT_SRC_POS, "TM VirtualSync Lock");
    if (RT_FAILURE(rc))
        return rc;

    /*
     * Get our CFGM node, create it if necessary.
     */
    PCFGMNODE pCfgHandle = CFGMR3GetChild(CFGMR3GetRoot(pVM), "TM");
    if (!pCfgHandle)
    {
        rc = CFGMR3InsertNode(CFGMR3GetRoot(pVM), "TM", &pCfgHandle);
        AssertRCReturn(rc, rc);
    }

    /*
     * Determine the TSC configuration and frequency.
     */
    /* mode */
    /** @cfgm{/TM/TSCVirtualized,bool,true}
     * Use a virtualized TSC, i.e. trap all TSC access. */
    rc = CFGMR3QueryBool(pCfgHandle, "TSCVirtualized", &pVM->tm.s.fTSCVirtualized);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        pVM->tm.s.fTSCVirtualized = true; /* trap rdtsc */
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query bool value \"TSCVirtualized\""));

    /* source */
    /** @cfgm{/TM/UseRealTSC,bool,false}
     * Use the real TSC as time source for the TSC instead of the synchronous
     * virtual clock (false, default). */
    rc = CFGMR3QueryBool(pCfgHandle, "UseRealTSC", &pVM->tm.s.fTSCUseRealTSC);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        pVM->tm.s.fTSCUseRealTSC = false; /* use virtual time */
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query bool value \"UseRealTSC\""));
    if (!pVM->tm.s.fTSCUseRealTSC)
        pVM->tm.s.fTSCVirtualized = true;

    /* TSC reliability */
    /** @cfgm{/TM/MaybeUseOffsettedHostTSC,bool,detect}
     * Whether the CPU has a fixed TSC rate and may be used in offsetted mode with
     * VT-x/AMD-V execution.  This is autodetected in a very restrictive way by
     * default. */
    rc = CFGMR3QueryBool(pCfgHandle, "MaybeUseOffsettedHostTSC", &pVM->tm.s.fMaybeUseOffsettedHostTSC);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
    {
        if (!pVM->tm.s.fTSCUseRealTSC)
            pVM->tm.s.fMaybeUseOffsettedHostTSC = tmR3HasFixedTSC(pVM);
        else
            pVM->tm.s.fMaybeUseOffsettedHostTSC = true;
    }

    /** @cfgm{TM/TSCTicksPerSecond, uint64_t, Current TSC frequency from GIP}
     * The number of TSC ticks per second (i.e. the TSC frequency).  This will
     * override TSCUseRealTSC, TSCVirtualized and MaybeUseOffsettedHostTSC.
     */
    rc = CFGMR3QueryU64(pCfgHandle, "TSCTicksPerSecond", &pVM->tm.s.cTSCTicksPerSecond);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
    {
        pVM->tm.s.cTSCTicksPerSecond = tmR3CalibrateTSC(pVM);
        if (    !pVM->tm.s.fTSCUseRealTSC
            &&  pVM->tm.s.cTSCTicksPerSecond >= _4G)
        {
            pVM->tm.s.cTSCTicksPerSecond = _4G - 1; /* (A limitation of our math code) */
            pVM->tm.s.fMaybeUseOffsettedHostTSC = false;
        }
    }
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query uint64_t value \"TSCTicksPerSecond\""));
    else if (   pVM->tm.s.cTSCTicksPerSecond < _1M
             || pVM->tm.s.cTSCTicksPerSecond >= _4G)
        return VMSetError(pVM, VERR_INVALID_PARAMETER, RT_SRC_POS,
                          N_("Configuration error: \"TSCTicksPerSecond\" = %RI64 is not in the range 1MHz..4GHz-1"),
                          pVM->tm.s.cTSCTicksPerSecond);
    else
    {
        pVM->tm.s.fTSCUseRealTSC = pVM->tm.s.fMaybeUseOffsettedHostTSC = false;
        pVM->tm.s.fTSCVirtualized = true;
    }

    /** @cfgm{TM/TSCTiedToExecution, bool, false}
     * Whether the TSC should be tied to execution.  This will exclude most of the
     * virtualization overhead, but will by default include the time spent in the
     * halt state (see TM/TSCNotTiedToHalt).  This setting will override all other
     * TSC settings except for TSCTicksPerSecond and TSCNotTiedToHalt, which should
     * be avoided or used with great care.  Note that this will only work right
     * together with VT-x or AMD-V, and with a single virtual CPU. */
    rc = CFGMR3QueryBoolDef(pCfgHandle, "TSCTiedToExecution", &pVM->tm.s.fTSCTiedToExecution, false);
    if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query bool value \"TSCTiedToExecution\""));
    if (pVM->tm.s.fTSCTiedToExecution)
    {
        /* tied to execution, override all other settings. */
        pVM->tm.s.fTSCVirtualized = true;
        pVM->tm.s.fTSCUseRealTSC = true;
        pVM->tm.s.fMaybeUseOffsettedHostTSC = false;
    }

    /** @cfgm{TM/TSCNotTiedToHalt, bool, false}
     * For overriding the default of TM/TSCTiedToExecution, i.e. set this to true
     * to make the TSC freeze during HLT. */
    rc = CFGMR3QueryBoolDef(pCfgHandle, "TSCNotTiedToHalt", &pVM->tm.s.fTSCNotTiedToHalt, false);
    if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query bool value \"TSCNotTiedToHalt\""));

    /* setup and report */
    if (pVM->tm.s.fTSCVirtualized)
        CPUMR3SetCR4Feature(pVM, X86_CR4_TSD, ~X86_CR4_TSD);
    else
        CPUMR3SetCR4Feature(pVM, 0, ~X86_CR4_TSD);
    LogRel(("TM: cTSCTicksPerSecond=%#RX64 (%'RU64) fTSCVirtualized=%RTbool fTSCUseRealTSC=%RTbool\n"
            "TM: fMaybeUseOffsettedHostTSC=%RTbool TSCTiedToExecution=%RTbool TSCNotTiedToHalt=%RTbool\n",
            pVM->tm.s.cTSCTicksPerSecond, pVM->tm.s.cTSCTicksPerSecond, pVM->tm.s.fTSCVirtualized, pVM->tm.s.fTSCUseRealTSC,
            pVM->tm.s.fMaybeUseOffsettedHostTSC, pVM->tm.s.fTSCTiedToExecution, pVM->tm.s.fTSCNotTiedToHalt));

    /*
     * Configure the timer synchronous virtual time.
     */
    /** @cfgm{TM/ScheduleSlack, uint32_t, ns, 0, UINT32_MAX, 100000}
     * Scheduling slack when processing timers. */
    rc = CFGMR3QueryU32(pCfgHandle, "ScheduleSlack", &pVM->tm.s.u32VirtualSyncScheduleSlack);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        pVM->tm.s.u32VirtualSyncScheduleSlack = 100000; /* 0.100ms (ASSUMES virtual time is nanoseconds) */
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query 32-bit integer value \"ScheduleSlack\""));

    /** @cfgm{TM/CatchUpStopThreshold, uint64_t, ns, 0, UINT64_MAX, 500000}
     * When to stop a catch-up, considering it successful. */
    rc = CFGMR3QueryU64(pCfgHandle, "CatchUpStopThreshold", &pVM->tm.s.u64VirtualSyncCatchUpStopThreshold);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        pVM->tm.s.u64VirtualSyncCatchUpStopThreshold = 500000; /* 0.5ms */
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query 64-bit integer value \"CatchUpStopThreshold\""));

    /** @cfgm{TM/CatchUpGiveUpThreshold, uint64_t, ns, 0, UINT64_MAX, 60000000000}
     * When to give up a catch-up attempt. */
    rc = CFGMR3QueryU64(pCfgHandle, "CatchUpGiveUpThreshold", &pVM->tm.s.u64VirtualSyncCatchUpGiveUpThreshold);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        pVM->tm.s.u64VirtualSyncCatchUpGiveUpThreshold = UINT64_C(60000000000); /* 60 sec */
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query 64-bit integer value \"CatchUpGiveUpThreshold\""));


    /** @cfgm{TM/CatchUpPrecentage[0..9], uint32_t, %, 1, 2000, various}
     * The catch-up percent for a given period. */
    /** @cfgm{TM/CatchUpStartThreshold[0..9], uint64_t, ns, 0, UINT64_MAX, various}
     * The catch-up period threshold, or if you like, when a period starts. */
#define TM_CFG_PERIOD(iPeriod, DefStart, DefPct) \
    do \
    { \
        uint64_t u64; \
        rc = CFGMR3QueryU64(pCfgHandle, "CatchUpStartThreshold" #iPeriod, &u64); \
        if (rc == VERR_CFGM_VALUE_NOT_FOUND) \
            u64 = UINT64_C(DefStart); \
        else if (RT_FAILURE(rc)) \
            return VMSetError(pVM, rc, RT_SRC_POS, N_("Configuration error: Failed to query 64-bit integer value \"CatchUpStartThreshold" #iPeriod "\"")); \
        if (    (iPeriod > 0 && u64 <= pVM->tm.s.aVirtualSyncCatchUpPeriods[iPeriod - 1].u64Start) \
            ||  u64 >= pVM->tm.s.u64VirtualSyncCatchUpGiveUpThreshold) \
            return VMSetError(pVM, VERR_INVALID_PARAMETER, RT_SRC_POS, N_("Configuration error: Invalid start of period #" #iPeriod ": %'RU64"), u64); \
        pVM->tm.s.aVirtualSyncCatchUpPeriods[iPeriod].u64Start = u64; \
        rc = CFGMR3QueryU32(pCfgHandle, "CatchUpPrecentage" #iPeriod, &pVM->tm.s.aVirtualSyncCatchUpPeriods[iPeriod].u32Percentage); \
        if (rc == VERR_CFGM_VALUE_NOT_FOUND) \
            pVM->tm.s.aVirtualSyncCatchUpPeriods[iPeriod].u32Percentage = (DefPct); \
        else if (RT_FAILURE(rc)) \
            return VMSetError(pVM, rc, RT_SRC_POS, N_("Configuration error: Failed to query 32-bit integer value \"CatchUpPrecentage" #iPeriod "\"")); \
    } while (0)
    /* This needs more tuning.  Not sure if we really need so many periods and to be so gentle. */
    TM_CFG_PERIOD(0,     750000,   5); /* 0.75ms at 1.05x */
    TM_CFG_PERIOD(1,    1500000,  10); /* 1.50ms at 1.10x */
    TM_CFG_PERIOD(2,    8000000,  25); /*    8ms at 1.25x */
    TM_CFG_PERIOD(3,   30000000,  50); /*   30ms at 1.50x */
    TM_CFG_PERIOD(4,   75000000,  75); /*   75ms at 1.75x */
    TM_CFG_PERIOD(5,  175000000, 100); /*  175ms at 2x */
    TM_CFG_PERIOD(6,  500000000, 200); /*  500ms at 3x */
    TM_CFG_PERIOD(7, 3000000000, 300); /*    3s  at 4x */
    TM_CFG_PERIOD(8,30000000000, 400); /*   30s  at 5x */
    TM_CFG_PERIOD(9,55000000000, 500); /*   55s  at 6x */
    AssertCompile(RT_ELEMENTS(pVM->tm.s.aVirtualSyncCatchUpPeriods) == 10);
#undef TM_CFG_PERIOD

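    /* Illustrative note: a catch-up percentage p makes the virtual sync clock
       run at (100 + p) / 100 times normal speed.  E.g. period 2 above kicks in
       at an 8 ms lag and runs the clock at 1.25x, so each elapsed virtual
       millisecond works off an extra 0.25 ms of lag. */
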
    /*
     * Configure real world time (UTC).
     */
    /** @cfgm{TM/UTCOffset, int64_t, ns, INT64_MIN, INT64_MAX, 0}
     * The UTC offset.  This is used to put the guest back or forwards in time. */
    rc = CFGMR3QueryS64(pCfgHandle, "UTCOffset", &pVM->tm.s.offUTC);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        pVM->tm.s.offUTC = 0; /* ns */
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query 64-bit integer value \"UTCOffset\""));

    /*
     * Setup the warp drive.
     */
    /** @cfgm{TM/WarpDrivePercentage, uint32_t, %, 0, 20000, 100}
     * The warp drive percentage, 100% is normal speed.  This is used to speed up
     * or slow down the virtual clock, which can be useful for fast forwarding
     * boring periods during tests. */
    rc = CFGMR3QueryU32(pCfgHandle, "WarpDrivePercentage", &pVM->tm.s.u32VirtualWarpDrivePercentage);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        rc = CFGMR3QueryU32(CFGMR3GetRoot(pVM), "WarpDrivePercentage", &pVM->tm.s.u32VirtualWarpDrivePercentage); /* legacy */
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        pVM->tm.s.u32VirtualWarpDrivePercentage = 100;
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query uint32_t value \"WarpDrivePercentage\""));
    else if (   pVM->tm.s.u32VirtualWarpDrivePercentage < 2
             || pVM->tm.s.u32VirtualWarpDrivePercentage > 20000)
        return VMSetError(pVM, VERR_INVALID_PARAMETER, RT_SRC_POS,
                          N_("Configuration error: \"WarpDrivePercentage\" = %RI32 is not in the range 2..20000"),
                          pVM->tm.s.u32VirtualWarpDrivePercentage);
    pVM->tm.s.fVirtualWarpDrive = pVM->tm.s.u32VirtualWarpDrivePercentage != 100;
    if (pVM->tm.s.fVirtualWarpDrive)
        LogRel(("TM: u32VirtualWarpDrivePercentage=%RI32\n", pVM->tm.s.u32VirtualWarpDrivePercentage));

    /*
     * Gather the Host Hz configuration values.
     */
    rc = CFGMR3QueryU32Def(pCfgHandle, "HostHzMax", &pVM->tm.s.cHostHzMax, 20000);
    if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query uint32_t value \"HostHzMax\""));

    rc = CFGMR3QueryU32Def(pCfgHandle, "HostHzFudgeFactorTimerCpu", &pVM->tm.s.cPctHostHzFudgeFactorTimerCpu, 111);
    if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query uint32_t value \"HostHzFudgeFactorTimerCpu\""));

    rc = CFGMR3QueryU32Def(pCfgHandle, "HostHzFudgeFactorOtherCpu", &pVM->tm.s.cPctHostHzFudgeFactorOtherCpu, 110);
    if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query uint32_t value \"HostHzFudgeFactorOtherCpu\""));

    rc = CFGMR3QueryU32Def(pCfgHandle, "HostHzFudgeFactorCatchUp100", &pVM->tm.s.cPctHostHzFudgeFactorCatchUp100, 300);
    if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query uint32_t value \"HostHzFudgeFactorCatchUp100\""));

    rc = CFGMR3QueryU32Def(pCfgHandle, "HostHzFudgeFactorCatchUp200", &pVM->tm.s.cPctHostHzFudgeFactorCatchUp200, 250);
    if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query uint32_t value \"HostHzFudgeFactorCatchUp200\""));

    rc = CFGMR3QueryU32Def(pCfgHandle, "HostHzFudgeFactorCatchUp400", &pVM->tm.s.cPctHostHzFudgeFactorCatchUp400, 200);
    if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query uint32_t value \"HostHzFudgeFactorCatchUp400\""));

    /*
     * Start the timer (guard against REM not yielding).
     */
    /** @cfgm{TM/TimerMillies, uint32_t, ms, 1, 1000, 10}
     * The watchdog timer interval. */
    uint32_t u32Millies;
    rc = CFGMR3QueryU32(pCfgHandle, "TimerMillies", &u32Millies);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        u32Millies = 10;
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query uint32_t value \"TimerMillies\""));
    rc = RTTimerCreate(&pVM->tm.s.pTimer, u32Millies, tmR3TimerCallback, pVM);
    if (RT_FAILURE(rc))
    {
        AssertMsgFailed(("Failed to create timer, u32Millies=%d rc=%Rrc.\n", u32Millies, rc));
        return rc;
    }
    Log(("TM: Created timer %p firing every %d milliseconds\n", pVM->tm.s.pTimer, u32Millies));
    pVM->tm.s.u32TimerMillies = u32Millies;

    /*
     * Register saved state.
     */
    rc = SSMR3RegisterInternal(pVM, "tm", 1, TM_SAVED_STATE_VERSION, sizeof(uint64_t) * 8,
                               NULL, NULL, NULL,
                               NULL, tmR3Save, NULL,
                               NULL, tmR3Load, NULL);
    if (RT_FAILURE(rc))
        return rc;

    /*
     * Register statistics.
     */
    STAM_REL_REG_USED(pVM,(void*)&pVM->tm.s.VirtualGetRawDataR3.c1nsSteps,STAMTYPE_U32, "/TM/R3/1nsSteps", STAMUNIT_OCCURENCES, "Virtual time 1ns steps (due to TSC / GIP variations).");
    STAM_REL_REG_USED(pVM,(void*)&pVM->tm.s.VirtualGetRawDataR3.cBadPrev, STAMTYPE_U32, "/TM/R3/cBadPrev", STAMUNIT_OCCURENCES, "Times the previous virtual time was considered erratic (shouldn't ever happen).");
    STAM_REL_REG_USED(pVM,(void*)&pVM->tm.s.VirtualGetRawDataR0.c1nsSteps,STAMTYPE_U32, "/TM/R0/1nsSteps", STAMUNIT_OCCURENCES, "Virtual time 1ns steps (due to TSC / GIP variations).");
    STAM_REL_REG_USED(pVM,(void*)&pVM->tm.s.VirtualGetRawDataR0.cBadPrev, STAMTYPE_U32, "/TM/R0/cBadPrev", STAMUNIT_OCCURENCES, "Times the previous virtual time was considered erratic (shouldn't ever happen).");
    STAM_REL_REG_USED(pVM,(void*)&pVM->tm.s.VirtualGetRawDataRC.c1nsSteps,STAMTYPE_U32, "/TM/RC/1nsSteps", STAMUNIT_OCCURENCES, "Virtual time 1ns steps (due to TSC / GIP variations).");
    STAM_REL_REG_USED(pVM,(void*)&pVM->tm.s.VirtualGetRawDataRC.cBadPrev, STAMTYPE_U32, "/TM/RC/cBadPrev", STAMUNIT_OCCURENCES, "Times the previous virtual time was considered erratic (shouldn't ever happen).");
    STAM_REL_REG(     pVM,(void*)&pVM->tm.s.offVirtualSync,        STAMTYPE_U64, "/TM/VirtualSync/CurrentOffset", STAMUNIT_NS, "The current offset. (subtract GivenUp to get the lag)");
    STAM_REL_REG_USED(pVM,(void*)&pVM->tm.s.offVirtualSyncGivenUp, STAMTYPE_U64, "/TM/VirtualSync/GivenUp",       STAMUNIT_NS, "Nanoseconds of the 'CurrentOffset' that's been given up and won't ever be attempted caught up with.");
    STAM_REL_REG(     pVM,(void*)&pVM->tm.s.uMaxHzHint,            STAMTYPE_U32, "/TM/MaxHzHint",                 STAMUNIT_HZ, "Max guest timer frequency hint.");

#ifdef VBOX_WITH_STATISTICS
    STAM_REG_USED(pVM,(void *)&pVM->tm.s.VirtualGetRawDataR3.cExpired,    STAMTYPE_U32, "/TM/R3/cExpired",     STAMUNIT_OCCURENCES, "Times the TSC interval expired (overlaps 1ns steps).");
    STAM_REG_USED(pVM,(void *)&pVM->tm.s.VirtualGetRawDataR3.cUpdateRaces,STAMTYPE_U32, "/TM/R3/cUpdateRaces", STAMUNIT_OCCURENCES, "Thread races when updating the previous timestamp.");
    STAM_REG_USED(pVM,(void *)&pVM->tm.s.VirtualGetRawDataR0.cExpired,    STAMTYPE_U32, "/TM/R0/cExpired",     STAMUNIT_OCCURENCES, "Times the TSC interval expired (overlaps 1ns steps).");
    STAM_REG_USED(pVM,(void *)&pVM->tm.s.VirtualGetRawDataR0.cUpdateRaces,STAMTYPE_U32, "/TM/R0/cUpdateRaces", STAMUNIT_OCCURENCES, "Thread races when updating the previous timestamp.");
    STAM_REG_USED(pVM,(void *)&pVM->tm.s.VirtualGetRawDataRC.cExpired,    STAMTYPE_U32, "/TM/RC/cExpired",     STAMUNIT_OCCURENCES, "Times the TSC interval expired (overlaps 1ns steps).");
    STAM_REG_USED(pVM,(void *)&pVM->tm.s.VirtualGetRawDataRC.cUpdateRaces,STAMTYPE_U32, "/TM/RC/cUpdateRaces", STAMUNIT_OCCURENCES, "Thread races when updating the previous timestamp.");
    STAM_REG(pVM, &pVM->tm.s.StatDoQueues,                        STAMTYPE_PROFILE,     "/TM/DoQueues",             STAMUNIT_TICKS_PER_CALL, "Profiling timer TMR3TimerQueuesDo.");
    STAM_REG(pVM, &pVM->tm.s.aStatDoQueues[TMCLOCK_VIRTUAL],      STAMTYPE_PROFILE_ADV, "/TM/DoQueues/Virtual",     STAMUNIT_TICKS_PER_CALL, "Time spent on the virtual clock queue.");
    STAM_REG(pVM, &pVM->tm.s.aStatDoQueues[TMCLOCK_VIRTUAL_SYNC], STAMTYPE_PROFILE_ADV, "/TM/DoQueues/VirtualSync", STAMUNIT_TICKS_PER_CALL, "Time spent on the virtual sync clock queue.");
    STAM_REG(pVM, &pVM->tm.s.aStatDoQueues[TMCLOCK_REAL],         STAMTYPE_PROFILE_ADV, "/TM/DoQueues/Real",        STAMUNIT_TICKS_PER_CALL, "Time spent on the real clock queue.");

    STAM_REG(pVM, &pVM->tm.s.StatPoll,                       STAMTYPE_COUNTER, "/TM/Poll",                 STAMUNIT_OCCURENCES, "TMTimerPoll calls.");
    STAM_REG(pVM, &pVM->tm.s.StatPollAlreadySet,             STAMTYPE_COUNTER, "/TM/Poll/AlreadySet",      STAMUNIT_OCCURENCES, "TMTimerPoll calls where the FF was already set.");
    STAM_REG(pVM, &pVM->tm.s.StatPollELoop,                  STAMTYPE_COUNTER, "/TM/Poll/ELoop",           STAMUNIT_OCCURENCES, "Times TMTimerPoll has given up getting a consistent virtual sync data set.");
    STAM_REG(pVM, &pVM->tm.s.StatPollMiss,                   STAMTYPE_COUNTER, "/TM/Poll/Miss",            STAMUNIT_OCCURENCES, "TMTimerPoll calls where nothing had expired.");
    STAM_REG(pVM, &pVM->tm.s.StatPollRunning,                STAMTYPE_COUNTER, "/TM/Poll/Running",         STAMUNIT_OCCURENCES, "TMTimerPoll calls where the queues were being run.");
    STAM_REG(pVM, &pVM->tm.s.StatPollSimple,                 STAMTYPE_COUNTER, "/TM/Poll/Simple",          STAMUNIT_OCCURENCES, "TMTimerPoll calls where we could take the simple path.");
    STAM_REG(pVM, &pVM->tm.s.StatPollVirtual,                STAMTYPE_COUNTER, "/TM/Poll/HitsVirtual",     STAMUNIT_OCCURENCES, "The number of times TMTimerPoll found an expired TMCLOCK_VIRTUAL queue.");
    STAM_REG(pVM, &pVM->tm.s.StatPollVirtualSync,            STAMTYPE_COUNTER, "/TM/Poll/HitsVirtualSync", STAMUNIT_OCCURENCES, "The number of times TMTimerPoll found an expired TMCLOCK_VIRTUAL_SYNC queue.");

    STAM_REG(pVM, &pVM->tm.s.StatPostponedR3,                STAMTYPE_COUNTER, "/TM/PostponedR3",          STAMUNIT_OCCURENCES, "Postponed due to unschedulable state, in ring-3.");
    STAM_REG(pVM, &pVM->tm.s.StatPostponedRZ,                STAMTYPE_COUNTER, "/TM/PostponedRZ",          STAMUNIT_OCCURENCES, "Postponed due to unschedulable state, in ring-0 / RC.");

    STAM_REG(pVM, &pVM->tm.s.StatScheduleOneR3,              STAMTYPE_PROFILE, "/TM/ScheduleOneR3",        STAMUNIT_TICKS_PER_CALL, "Profiling the scheduling of one queue during a TMTimer* call in EMT.");
    STAM_REG(pVM, &pVM->tm.s.StatScheduleOneRZ,              STAMTYPE_PROFILE, "/TM/ScheduleOneRZ",        STAMUNIT_TICKS_PER_CALL, "Profiling the scheduling of one queue during a TMTimer* call in EMT.");
    STAM_REG(pVM, &pVM->tm.s.StatScheduleSetFF,              STAMTYPE_COUNTER, "/TM/ScheduleSetFF",        STAMUNIT_OCCURENCES, "The number of times the timer FF was set instead of doing scheduling.");

    STAM_REG(pVM, &pVM->tm.s.StatTimerSet,                   STAMTYPE_COUNTER, "/TM/TimerSet",                 STAMUNIT_OCCURENCES, "Calls");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetOpt,                STAMTYPE_COUNTER, "/TM/TimerSet/Opt",             STAMUNIT_OCCURENCES, "Optimized path taken.");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetR3,                 STAMTYPE_PROFILE, "/TM/TimerSet/R3",              STAMUNIT_TICKS_PER_CALL, "Profiling TMTimerSet calls made in ring-3.");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRZ,                 STAMTYPE_PROFILE, "/TM/TimerSet/RZ",              STAMUNIT_TICKS_PER_CALL, "Profiling TMTimerSet calls made in ring-0 / RC.");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetStActive,           STAMTYPE_COUNTER, "/TM/TimerSet/StActive",        STAMUNIT_OCCURENCES, "ACTIVE");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetStExpDeliver,       STAMTYPE_COUNTER, "/TM/TimerSet/StExpDeliver",    STAMUNIT_OCCURENCES, "EXPIRED_DELIVER");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetStOther,            STAMTYPE_COUNTER, "/TM/TimerSet/StOther",         STAMUNIT_OCCURENCES, "Other states");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetStPendStop,         STAMTYPE_COUNTER, "/TM/TimerSet/StPendStop",      STAMUNIT_OCCURENCES, "PENDING_STOP");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetStPendStopSched,    STAMTYPE_COUNTER, "/TM/TimerSet/StPendStopSched", STAMUNIT_OCCURENCES, "PENDING_STOP_SCHEDULE");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetStPendSched,        STAMTYPE_COUNTER, "/TM/TimerSet/StPendSched",     STAMUNIT_OCCURENCES, "PENDING_SCHEDULE");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetStPendResched,      STAMTYPE_COUNTER, "/TM/TimerSet/StPendResched",   STAMUNIT_OCCURENCES, "PENDING_RESCHEDULE");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetStStopped,          STAMTYPE_COUNTER, "/TM/TimerSet/StStopped",       STAMUNIT_OCCURENCES, "STOPPED");

    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelative,                STAMTYPE_COUNTER, "/TM/TimerSetRelative",                 STAMUNIT_OCCURENCES, "Calls");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeOpt,             STAMTYPE_COUNTER, "/TM/TimerSetRelative/Opt",             STAMUNIT_OCCURENCES, "Optimized path taken.");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeR3,              STAMTYPE_PROFILE, "/TM/TimerSetRelative/R3",              STAMUNIT_TICKS_PER_CALL, "Profiling TMTimerSetRelative calls made in ring-3.");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeRZ,              STAMTYPE_PROFILE, "/TM/TimerSetRelative/RZ",              STAMUNIT_TICKS_PER_CALL, "Profiling TMTimerSetRelative calls made in ring-0 / RC.");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeRacyVirtSync,    STAMTYPE_COUNTER, "/TM/TimerSetRelative/RacyVirtSync",    STAMUNIT_OCCURENCES, "Potentially racy virtual sync timer update.");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeStActive,        STAMTYPE_COUNTER, "/TM/TimerSetRelative/StActive",        STAMUNIT_OCCURENCES, "ACTIVE");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeStExpDeliver,    STAMTYPE_COUNTER, "/TM/TimerSetRelative/StExpDeliver",    STAMUNIT_OCCURENCES, "EXPIRED_DELIVER");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeStOther,         STAMTYPE_COUNTER, "/TM/TimerSetRelative/StOther",         STAMUNIT_OCCURENCES, "Other states");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeStPendStop,      STAMTYPE_COUNTER, "/TM/TimerSetRelative/StPendStop",      STAMUNIT_OCCURENCES, "PENDING_STOP");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeStPendStopSched, STAMTYPE_COUNTER, "/TM/TimerSetRelative/StPendStopSched", STAMUNIT_OCCURENCES, "PENDING_STOP_SCHEDULE");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeStPendSched,     STAMTYPE_COUNTER, "/TM/TimerSetRelative/StPendSched",     STAMUNIT_OCCURENCES, "PENDING_SCHEDULE");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeStPendResched,   STAMTYPE_COUNTER, "/TM/TimerSetRelative/StPendResched",   STAMUNIT_OCCURENCES, "PENDING_RESCHEDULE");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeStStopped,       STAMTYPE_COUNTER, "/TM/TimerSetRelative/StStopped",       STAMUNIT_OCCURENCES, "STOPPED");

    STAM_REG(pVM, &pVM->tm.s.StatTimerStopR3,                STAMTYPE_PROFILE, "/TM/TimerStopR3",            STAMUNIT_TICKS_PER_CALL, "Profiling TMTimerStop calls made in ring-3.");
    STAM_REG(pVM, &pVM->tm.s.StatTimerStopRZ,                STAMTYPE_PROFILE, "/TM/TimerStopRZ",            STAMUNIT_TICKS_PER_CALL, "Profiling TMTimerStop calls made in ring-0 / RC.");

    STAM_REG(pVM, &pVM->tm.s.StatVirtualGet,                 STAMTYPE_COUNTER, "/TM/VirtualGet",             STAMUNIT_OCCURENCES, "The number of times TMTimerGet was called when the clock was running.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualGetSetFF,            STAMTYPE_COUNTER, "/TM/VirtualGetSetFF",        STAMUNIT_OCCURENCES, "Times we set the FF when calling TMTimerGet.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncGet,             STAMTYPE_COUNTER, "/TM/VirtualSyncGet",         STAMUNIT_OCCURENCES, "The number of times tmVirtualSyncGetEx was called.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncGetELoop,        STAMTYPE_COUNTER, "/TM/VirtualSyncGet/ELoop",   STAMUNIT_OCCURENCES, "Times tmVirtualSyncGetEx has given up getting a consistent virtual sync data set.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncGetExpired,      STAMTYPE_COUNTER, "/TM/VirtualSyncGet/Expired", STAMUNIT_OCCURENCES, "Times tmVirtualSyncGetEx encountered an expired timer stopping the clock.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncGetLocked,       STAMTYPE_COUNTER, "/TM/VirtualSyncGet/Locked",  STAMUNIT_OCCURENCES, "Times we successfully acquired the lock in tmVirtualSyncGetEx.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncGetLockless,     STAMTYPE_COUNTER, "/TM/VirtualSyncGet/Lockless",STAMUNIT_OCCURENCES, "Times tmVirtualSyncGetEx returned without needing to take the lock.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncGetSetFF,        STAMTYPE_COUNTER, "/TM/VirtualSyncGet/SetFF",   STAMUNIT_OCCURENCES, "Times we set the FF when calling tmVirtualSyncGetEx.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualPause,               STAMTYPE_COUNTER, "/TM/VirtualPause",           STAMUNIT_OCCURENCES, "The number of times TMR3TimerPause was called.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualResume,              STAMTYPE_COUNTER, "/TM/VirtualResume",          STAMUNIT_OCCURENCES, "The number of times TMR3TimerResume was called.");

    STAM_REG(pVM, &pVM->tm.s.StatTimerCallbackSetFF,         STAMTYPE_COUNTER, "/TM/CallbackSetFF",          STAMUNIT_OCCURENCES, "The number of times the timer callback set FF.");

    STAM_REG(pVM, &pVM->tm.s.StatTSCCatchupLE010,            STAMTYPE_COUNTER, "/TM/TSC/Intercept/CatchupLE010",   STAMUNIT_OCCURENCES, "In catch-up mode, 10% or lower.");
    STAM_REG(pVM, &pVM->tm.s.StatTSCCatchupLE025,            STAMTYPE_COUNTER, "/TM/TSC/Intercept/CatchupLE025",   STAMUNIT_OCCURENCES, "In catch-up mode, 25%-11%.");
    STAM_REG(pVM, &pVM->tm.s.StatTSCCatchupLE100,            STAMTYPE_COUNTER, "/TM/TSC/Intercept/CatchupLE100",   STAMUNIT_OCCURENCES, "In catch-up mode, 100%-26%.");
    STAM_REG(pVM, &pVM->tm.s.StatTSCCatchupOther,            STAMTYPE_COUNTER, "/TM/TSC/Intercept/CatchupOther",   STAMUNIT_OCCURENCES, "In catch-up mode, > 100%.");
    STAM_REG(pVM, &pVM->tm.s.StatTSCNotFixed,                STAMTYPE_COUNTER, "/TM/TSC/Intercept/NotFixed",       STAMUNIT_OCCURENCES, "TSC is not fixed, it may run at variable speed.");
    STAM_REG(pVM, &pVM->tm.s.StatTSCNotTicking,              STAMTYPE_COUNTER, "/TM/TSC/Intercept/NotTicking",     STAMUNIT_OCCURENCES, "TSC is not ticking.");
    STAM_REG(pVM, &pVM->tm.s.StatTSCSyncNotTicking,          STAMTYPE_COUNTER, "/TM/TSC/Intercept/SyncNotTicking", STAMUNIT_OCCURENCES, "VirtualSync isn't ticking.");
    STAM_REG(pVM, &pVM->tm.s.StatTSCWarp,                    STAMTYPE_COUNTER, "/TM/TSC/Intercept/Warp",           STAMUNIT_OCCURENCES, "Warpdrive is active.");
    STAM_REG(pVM, &pVM->tm.s.StatTSCSet,                     STAMTYPE_COUNTER, "/TM/TSC/Sets",                     STAMUNIT_OCCURENCES, "Calls to TMCpuTickSet.");
    STAM_REG(pVM, &pVM->tm.s.StatTSCUnderflow,               STAMTYPE_COUNTER, "/TM/TSC/Underflow",                STAMUNIT_OCCURENCES, "TSC underflow; corrected with last seen value.");
#endif /* VBOX_WITH_STATISTICS */

    for (VMCPUID i = 0; i < pVM->cCpus; i++)
    {
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.offTSCRawSrc, STAMTYPE_U64, STAMVISIBILITY_ALWAYS, STAMUNIT_TICKS, "TSC offset relative to the raw source", "/TM/TSC/offCPU%u", i);
#ifndef VBOX_WITHOUT_NS_ACCOUNTING
# if defined(VBOX_WITH_STATISTICS) || defined(VBOX_WITH_NS_ACCOUNTING_STATS)
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.StatNsTotal,       STAMTYPE_COUNTER, STAMVISIBILITY_ALWAYS, STAMUNIT_NS, "Resettable: Total CPU run time.", "/TM/CPU/%02u", i);
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.StatNsExecuting,   STAMTYPE_PROFILE, STAMVISIBILITY_ALWAYS, STAMUNIT_NS_PER_OCCURENCE, "Resettable: Time spent executing guest code.", "/TM/CPU/%02u/PrfExecuting", i);
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.StatNsExecLong,    STAMTYPE_PROFILE, STAMVISIBILITY_ALWAYS, STAMUNIT_NS_PER_OCCURENCE, "Resettable: Time spent executing guest code - long hauls.", "/TM/CPU/%02u/PrfExecLong", i);
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.StatNsExecShort,   STAMTYPE_PROFILE, STAMVISIBILITY_ALWAYS, STAMUNIT_NS_PER_OCCURENCE, "Resettable: Time spent executing guest code - short stretches.", "/TM/CPU/%02u/PrfExecShort", i);
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.StatNsExecTiny,    STAMTYPE_PROFILE, STAMVISIBILITY_ALWAYS, STAMUNIT_NS_PER_OCCURENCE, "Resettable: Time spent executing guest code - tiny bits.", "/TM/CPU/%02u/PrfExecTiny", i);
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.StatNsHalted,      STAMTYPE_PROFILE, STAMVISIBILITY_ALWAYS, STAMUNIT_NS_PER_OCCURENCE, "Resettable: Time spent halted.", "/TM/CPU/%02u/PrfHalted", i);
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.StatNsOther,       STAMTYPE_PROFILE, STAMVISIBILITY_ALWAYS, STAMUNIT_NS_PER_OCCURENCE, "Resettable: Time spent in the VMM or preempted.", "/TM/CPU/%02u/PrfOther", i);
# endif
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.cNsTotal,          STAMTYPE_U64, STAMVISIBILITY_ALWAYS, STAMUNIT_NS, "Total CPU run time.", "/TM/CPU/%02u/cNsTotal", i);
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.cNsExecuting,      STAMTYPE_U64, STAMVISIBILITY_ALWAYS, STAMUNIT_NS, "Time spent executing guest code.", "/TM/CPU/%02u/cNsExecuting", i);
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.cNsHalted,         STAMTYPE_U64, STAMVISIBILITY_ALWAYS, STAMUNIT_NS, "Time spent halted.", "/TM/CPU/%02u/cNsHalted", i);
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.cNsOther,          STAMTYPE_U64, STAMVISIBILITY_ALWAYS, STAMUNIT_NS, "Time spent in the VMM or preempted.", "/TM/CPU/%02u/cNsOther", i);
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.cPeriodsExecuting, STAMTYPE_U64, STAMVISIBILITY_ALWAYS, STAMUNIT_COUNT, "Times executed guest code.", "/TM/CPU/%02u/cPeriodsExecuting", i);
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.cPeriodsHalted,    STAMTYPE_U64, STAMVISIBILITY_ALWAYS, STAMUNIT_COUNT, "Times halted.", "/TM/CPU/%02u/cPeriodsHalted", i);
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.CpuLoad.cPctExecuting, STAMTYPE_U8, STAMVISIBILITY_ALWAYS, STAMUNIT_PCT, "Time spent executing guest code recently.", "/TM/CPU/%02u/pctExecuting", i);
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.CpuLoad.cPctHalted,    STAMTYPE_U8, STAMVISIBILITY_ALWAYS, STAMUNIT_PCT, "Time spent halted recently.", "/TM/CPU/%02u/pctHalted", i);
        STAMR3RegisterF(pVM, &pVM->aCpus[i].tm.s.CpuLoad.cPctOther,     STAMTYPE_U8, STAMVISIBILITY_ALWAYS, STAMUNIT_PCT, "Time spent in the VMM or preempted recently.", "/TM/CPU/%02u/pctOther", i);
#endif
    }
#ifndef VBOX_WITHOUT_NS_ACCOUNTING
    STAMR3RegisterF(pVM, &pVM->tm.s.CpuLoad.cPctExecuting, STAMTYPE_U8, STAMVISIBILITY_ALWAYS, STAMUNIT_PCT, "Time spent executing guest code recently.", "/TM/CPU/pctExecuting");
    STAMR3RegisterF(pVM, &pVM->tm.s.CpuLoad.cPctHalted,    STAMTYPE_U8, STAMVISIBILITY_ALWAYS, STAMUNIT_PCT, "Time spent halted recently.", "/TM/CPU/pctHalted");
    STAMR3RegisterF(pVM, &pVM->tm.s.CpuLoad.cPctOther,     STAMTYPE_U8, STAMVISIBILITY_ALWAYS, STAMUNIT_PCT, "Time spent in the VMM or preempted recently.", "/TM/CPU/pctOther");
#endif

#ifdef VBOX_WITH_STATISTICS
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncCatchup,              STAMTYPE_PROFILE_ADV, "/TM/VirtualSync/CatchUp",              STAMUNIT_TICKS_PER_OCCURENCE, "Counting and measuring the times spent catching up.");
    STAM_REG(pVM, (void *)&pVM->tm.s.fVirtualSyncCatchUp,         STAMTYPE_U8,          "/TM/VirtualSync/CatchUpActive",        STAMUNIT_NONE, "Catch-Up active indicator.");
    STAM_REG(pVM, (void *)&pVM->tm.s.u32VirtualSyncCatchUpPercentage, STAMTYPE_U32,     "/TM/VirtualSync/CatchUpPercentage",    STAMUNIT_PCT, "The catch-up percentage. (+100/100 to get clock multiplier)");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncFF,                   STAMTYPE_PROFILE,     "/TM/VirtualSync/FF",                   STAMUNIT_TICKS_PER_OCCURENCE, "Time spent in TMR3VirtualSyncFF by all but the dedicated timer EMT.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncGiveUp,               STAMTYPE_COUNTER,     "/TM/VirtualSync/GiveUp",               STAMUNIT_OCCURENCES, "Times the catch-up was abandoned.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncGiveUpBeforeStarting, STAMTYPE_COUNTER,     "/TM/VirtualSync/GiveUpBeforeStarting", STAMUNIT_OCCURENCES, "Times the catch-up was abandoned before even starting. (Typically debugging++.)");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncRun,                  STAMTYPE_COUNTER,     "/TM/VirtualSync/Run",                  STAMUNIT_OCCURENCES, "Times the virtual sync timer queue was considered.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncRunRestart,           STAMTYPE_COUNTER,     "/TM/VirtualSync/Run/Restarts",         STAMUNIT_OCCURENCES, "Times the clock was restarted after a run.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncRunStop,              STAMTYPE_COUNTER,     "/TM/VirtualSync/Run/Stop",             STAMUNIT_OCCURENCES, "Times the clock was stopped when calculating the current time before examining the timers.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncRunStoppedAlready,    STAMTYPE_COUNTER,     "/TM/VirtualSync/Run/StoppedAlready",   STAMUNIT_OCCURENCES, "Times the clock was already stopped elsewhere (TMVirtualSyncGet).");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncRunSlack,             STAMTYPE_PROFILE,     "/TM/VirtualSync/Run/Slack",            STAMUNIT_NS_PER_OCCURENCE, "The scheduling slack. (Catch-up handed out when running timers.)");
    for (unsigned i = 0; i < RT_ELEMENTS(pVM->tm.s.aVirtualSyncCatchUpPeriods); i++)
    {
        STAMR3RegisterF(pVM, &pVM->tm.s.aVirtualSyncCatchUpPeriods[i].u32Percentage, STAMTYPE_U32, STAMVISIBILITY_ALWAYS, STAMUNIT_PCT,        "The catch-up percentage.",       "/TM/VirtualSync/Periods/%u", i);
        STAMR3RegisterF(pVM, &pVM->tm.s.aStatVirtualSyncCatchupAdjust[i],            STAMTYPE_COUNTER, STAMVISIBILITY_ALWAYS, STAMUNIT_OCCURENCES, "Times adjusted to this period.", "/TM/VirtualSync/Periods/%u/Adjust", i);
        STAMR3RegisterF(pVM, &pVM->tm.s.aStatVirtualSyncCatchupInitial[i],           STAMTYPE_COUNTER, STAMVISIBILITY_ALWAYS, STAMUNIT_OCCURENCES, "Times started in this period.",  "/TM/VirtualSync/Periods/%u/Initial", i);
        STAMR3RegisterF(pVM, &pVM->tm.s.aVirtualSyncCatchUpPeriods[i].u64Start,      STAMTYPE_U64, STAMVISIBILITY_ALWAYS, STAMUNIT_NS,         "Start of this period (lag).",    "/TM/VirtualSync/Periods/%u/Start", i);
    }
#endif /* VBOX_WITH_STATISTICS */

    /*
     * Register info handlers.
     */
    DBGFR3InfoRegisterInternalEx(pVM, "timers",       "Dumps all timers. No arguments.",         tmR3TimerInfo,       DBGFINFO_FLAGS_RUN_ON_EMT);
    DBGFR3InfoRegisterInternalEx(pVM, "activetimers", "Dumps all active timers. No arguments.",  tmR3TimerInfoActive, DBGFINFO_FLAGS_RUN_ON_EMT);
    DBGFR3InfoRegisterInternalEx(pVM, "clocks",       "Display the time of the various clocks.", tmR3InfoClocks,      DBGFINFO_FLAGS_RUN_ON_EMT);

    return VINF_SUCCESS;
}



/**
 * Checks if the host CPU has a fixed TSC frequency.
 *
 * @returns true if it has, false if it hasn't.
 *
 * @param   pVM     The VM handle.
 *
 * @remark  This test doesn't bother with very old CPUs that don't do power
 *          management or any other stuff that might influence the TSC rate.
 *          This isn't currently relevant.
 */
static bool tmR3HasFixedTSC(PVM pVM)
{
    if (ASMHasCpuId())
    {
        uint32_t uEAX, uEBX, uECX, uEDX;

        if (CPUMGetHostCpuVendor(pVM) == CPUMCPUVENDOR_AMD)
        {
            /*
             * AuthenticAMD - Check for APM support and that TscInvariant is set.
             *
             * This test isn't correct with respect to fixed/non-fixed TSC and
             * older models, but this isn't relevant since the result is currently
             * only used for making a decision on AMD-V models.
             */
            ASMCpuId(0x80000000, &uEAX, &uEBX, &uECX, &uEDX);
            if (uEAX >= 0x80000007)
            {
                PSUPGLOBALINFOPAGE pGip = g_pSUPGlobalInfoPage;

                ASMCpuId(0x80000007, &uEAX, &uEBX, &uECX, &uEDX);
                if (    (uEDX & X86_CPUID_AMD_ADVPOWER_EDX_TSCINVAR) /* TscInvariant */
                    &&  pGip->u32Mode == SUPGIPMODE_SYNC_TSC /* no fixed tsc if the gip timer is in async mode */)
                    return true;
            }
        }
        else if (CPUMGetHostCpuVendor(pVM) == CPUMCPUVENDOR_INTEL)
        {
            /*
             * GenuineIntel - Check the model number.
             *
             * This test is lacking in the same way and for the same reasons
             * as the AMD test above.
             */
            ASMCpuId(1, &uEAX, &uEBX, &uECX, &uEDX);
            unsigned uModel  = (uEAX >> 4) & 0x0f;
            unsigned uFamily = (uEAX >> 8) & 0x0f;
            if (uFamily == 0x0f)
                uFamily += (uEAX >> 20) & 0xff;
            if (uFamily >= 0x06)
                uModel += ((uEAX >> 16) & 0x0f) << 4;
            if (    (uFamily == 0x0f /*P4*/     && uModel >= 0x03)
                ||  (uFamily == 0x06 /*P2/P3*/  && uModel >= 0x0e))
                return true;
        }
    }
    return false;
}
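
/* Example decode for the Intel check above: CPUID(1) returning
   uEAX = 0x000106E5 (a real-world Nehalem value, given purely for
   illustration) has base family 6 and base model 0xE; since the family is
   >= 6 the extended model bits are added, giving
   uModel = 0xE + (0x1 << 4) = 0x1E, which satisfies the family 0x06 branch. */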


/**
 * Calibrate the CPU tick.
 *
 * @returns Number of ticks per second.
 * @param   pVM     The VM handle.
 */
static uint64_t tmR3CalibrateTSC(PVM pVM)
{
    /*
     * Use GIP when present.
     */
    uint64_t u64Hz = SUPGetCpuHzFromGIP(g_pSUPGlobalInfoPage);
    if (u64Hz != UINT64_MAX)
    {
        if (tmR3HasFixedTSC(pVM))
            /* Sleep a bit to get a more reliable CpuHz value. */
            RTThreadSleep(32);
        else
        {
            /* Spin for 40ms to try push up the CPU frequency and get a more reliable CpuHz value. */
            const uint64_t u64 = RTTimeMilliTS();
            while ((RTTimeMilliTS() - u64) < 40 /*ms*/)
                /* nothing */;
        }

        u64Hz = SUPGetCpuHzFromGIP(g_pSUPGlobalInfoPage);
        if (u64Hz != UINT64_MAX)
            return u64Hz;
    }

    /* Call this once first to make sure it's initialized. */
    RTTimeNanoTS();

    /*
     * Yield the CPU to increase our chances of getting a correct value.
     */
    RTThreadYield();    /* Try to avoid interruptions between TSC and NanoTS samplings. */
    static const unsigned s_auSleep[5] = { 50, 30, 30, 40, 40 };
    uint64_t au64Samples[5];
    unsigned i;
    for (i = 0; i < RT_ELEMENTS(au64Samples); i++)
    {
        RTMSINTERVAL cMillies;
        int cTries = 5;
        uint64_t u64Start = ASMReadTSC();
        uint64_t u64End;
        uint64_t StartTS = RTTimeNanoTS();
        uint64_t EndTS;
        do
        {
            RTThreadSleep(s_auSleep[i]);
            u64End = ASMReadTSC();
            EndTS = RTTimeNanoTS();
            cMillies = (RTMSINTERVAL)((EndTS - StartTS + 500000) / 1000000);
        } while (   cMillies == 0       /* the sleep may be interrupted... */
                 || (cMillies < 20 && --cTries > 0));
        uint64_t u64Diff = u64End - u64Start;

        au64Samples[i] = (u64Diff * 1000) / cMillies;
        AssertMsg(cTries > 0, ("cMillies=%d i=%d\n", cMillies, i));
    }

    /*
     * Discard the highest and lowest results and calculate the average.
     */
    unsigned iHigh = 0;
    unsigned iLow = 0;
    for (i = 1; i < RT_ELEMENTS(au64Samples); i++)
    {
        if (au64Samples[i] < au64Samples[iLow])
            iLow = i;
        if (au64Samples[i] > au64Samples[iHigh])
            iHigh = i;
    }
    au64Samples[iLow] = 0;
    au64Samples[iHigh] = 0;

    u64Hz = au64Samples[0];
    for (i = 1; i < RT_ELEMENTS(au64Samples); i++)
        u64Hz += au64Samples[i];
    u64Hz /= RT_ELEMENTS(au64Samples) - 2;

    return u64Hz;
}
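
/* Worked example for the averaging above (made-up samples): 2394, 2397, 2401,
   2396 and 2380 MHz - the highest (2401) and lowest (2380) samples are zeroed
   out and the remaining three averaged: (2394 + 2397 + 2396) / 3 = ~2396 MHz. */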


/**
 * Finalizes the TM initialization.
 *
 * @returns VBox status code.
 * @param   pVM         The VM to operate on.
 */
VMM_INT_DECL(int) TMR3InitFinalize(PVM pVM)
{
    int rc;

    /*
     * Resolve symbols.
     */
    rc = PDMR3LdrGetSymbolRC(pVM, NULL, "tmVirtualNanoTSBad",           &pVM->tm.s.VirtualGetRawDataRC.pfnBad);
    AssertRCReturn(rc, rc);
    rc = PDMR3LdrGetSymbolRC(pVM, NULL, "tmVirtualNanoTSRediscover",    &pVM->tm.s.VirtualGetRawDataRC.pfnRediscover);
    AssertRCReturn(rc, rc);
    if (pVM->tm.s.pfnVirtualGetRawR3       == RTTimeNanoTSLFenceSync)
        rc = PDMR3LdrGetSymbolRC(pVM, NULL, "RTTimeNanoTSLFenceSync",   &pVM->tm.s.pfnVirtualGetRawRC);
    else if (pVM->tm.s.pfnVirtualGetRawR3  == RTTimeNanoTSLFenceAsync)
        rc = PDMR3LdrGetSymbolRC(pVM, NULL, "RTTimeNanoTSLFenceAsync",  &pVM->tm.s.pfnVirtualGetRawRC);
    else if (pVM->tm.s.pfnVirtualGetRawR3  == RTTimeNanoTSLegacySync)
        rc = PDMR3LdrGetSymbolRC(pVM, NULL, "RTTimeNanoTSLegacySync",   &pVM->tm.s.pfnVirtualGetRawRC);
    else if (pVM->tm.s.pfnVirtualGetRawR3  == RTTimeNanoTSLegacyAsync)
        rc = PDMR3LdrGetSymbolRC(pVM, NULL, "RTTimeNanoTSLegacyAsync",  &pVM->tm.s.pfnVirtualGetRawRC);
    else
        AssertFatalFailed();
    AssertRCReturn(rc, rc);

    rc = PDMR3LdrGetSymbolR0(pVM, NULL, "tmVirtualNanoTSBad",           &pVM->tm.s.VirtualGetRawDataR0.pfnBad);
    AssertRCReturn(rc, rc);
    rc = PDMR3LdrGetSymbolR0(pVM, NULL, "tmVirtualNanoTSRediscover",    &pVM->tm.s.VirtualGetRawDataR0.pfnRediscover);
    AssertRCReturn(rc, rc);
    if (pVM->tm.s.pfnVirtualGetRawR3       == RTTimeNanoTSLFenceSync)
        rc = PDMR3LdrGetSymbolR0(pVM, NULL, "RTTimeNanoTSLFenceSync",   &pVM->tm.s.pfnVirtualGetRawR0);
    else if (pVM->tm.s.pfnVirtualGetRawR3  == RTTimeNanoTSLFenceAsync)
        rc = PDMR3LdrGetSymbolR0(pVM, NULL, "RTTimeNanoTSLFenceAsync",  &pVM->tm.s.pfnVirtualGetRawR0);
    else if (pVM->tm.s.pfnVirtualGetRawR3  == RTTimeNanoTSLegacySync)
        rc = PDMR3LdrGetSymbolR0(pVM, NULL, "RTTimeNanoTSLegacySync",   &pVM->tm.s.pfnVirtualGetRawR0);
    else if (pVM->tm.s.pfnVirtualGetRawR3  == RTTimeNanoTSLegacyAsync)
        rc = PDMR3LdrGetSymbolR0(pVM, NULL, "RTTimeNanoTSLegacyAsync",  &pVM->tm.s.pfnVirtualGetRawR0);
    else
        AssertFatalFailed();
    AssertRCReturn(rc, rc);

#ifndef VBOX_WITHOUT_NS_ACCOUNTING
    /*
     * Create a timer for refreshing the CPU load stats.
     */
    PTMTIMER pTimer;
    rc = TMR3TimerCreateInternal(pVM, TMCLOCK_REAL, tmR3CpuLoadTimer, NULL, "CPU Load Timer", &pTimer);
    if (RT_SUCCESS(rc))
        rc = TMTimerSetMillies(pTimer, 1000);
#endif

    return rc;
}


/**
 * Applies relocations to data and code managed by this
 * component.  This function will be called at init and
 * whenever the VMM needs to relocate itself inside the GC.
 *
 * @param   pVM         The VM.
 * @param   offDelta    Relocation delta relative to old location.
 */
VMM_INT_DECL(void) TMR3Relocate(PVM pVM, RTGCINTPTR offDelta)
{
    int rc;
    LogFlow(("TMR3Relocate\n"));

    pVM->tm.s.pvGIPRC           = MMHyperR3ToRC(pVM, pVM->tm.s.pvGIPR3);
    pVM->tm.s.paTimerQueuesRC   = MMHyperR3ToRC(pVM, pVM->tm.s.paTimerQueuesR3);
    pVM->tm.s.paTimerQueuesR0   = MMHyperR3ToR0(pVM, pVM->tm.s.paTimerQueuesR3);

    pVM->tm.s.VirtualGetRawDataRC.pu64Prev = MMHyperR3ToRC(pVM, (void *)&pVM->tm.s.u64VirtualRawPrev);
    AssertFatal(pVM->tm.s.VirtualGetRawDataRC.pu64Prev);
    rc = PDMR3LdrGetSymbolRC(pVM, NULL, "tmVirtualNanoTSBad",           &pVM->tm.s.VirtualGetRawDataRC.pfnBad);
    AssertFatalRC(rc);
    rc = PDMR3LdrGetSymbolRC(pVM, NULL, "tmVirtualNanoTSRediscover",    &pVM->tm.s.VirtualGetRawDataRC.pfnRediscover);
    AssertFatalRC(rc);

    if (pVM->tm.s.pfnVirtualGetRawR3       == RTTimeNanoTSLFenceSync)
        rc = PDMR3LdrGetSymbolRC(pVM, NULL, "RTTimeNanoTSLFenceSync",   &pVM->tm.s.pfnVirtualGetRawRC);
    else if (pVM->tm.s.pfnVirtualGetRawR3  == RTTimeNanoTSLFenceAsync)
        rc = PDMR3LdrGetSymbolRC(pVM, NULL, "RTTimeNanoTSLFenceAsync",  &pVM->tm.s.pfnVirtualGetRawRC);
    else if (pVM->tm.s.pfnVirtualGetRawR3  == RTTimeNanoTSLegacySync)
        rc = PDMR3LdrGetSymbolRC(pVM, NULL, "RTTimeNanoTSLegacySync",   &pVM->tm.s.pfnVirtualGetRawRC);
    else if (pVM->tm.s.pfnVirtualGetRawR3  == RTTimeNanoTSLegacyAsync)
        rc = PDMR3LdrGetSymbolRC(pVM, NULL, "RTTimeNanoTSLegacyAsync",  &pVM->tm.s.pfnVirtualGetRawRC);
    else
        AssertFatalFailed();
    AssertFatalRC(rc);

    /*
     * Iterate the timers updating the pVMRC pointers.
     */
    for (PTMTIMER pTimer = pVM->tm.s.pCreated; pTimer; pTimer = pTimer->pBigNext)
    {
        pTimer->pVMRC = pVM->pVMRC;
        pTimer->pVMR0 = pVM->pVMR0;
    }
}


987/**
988 * Terminates the TM.
989 *
990 * Termination means cleaning up and freeing all resources; the VM
991 * itself is at this point powered off or suspended.
992 *
993 * @returns VBox status code.
994 * @param pVM The VM to operate on.
995 */
996VMM_INT_DECL(int) TMR3Term(PVM pVM)
997{
998 AssertMsg(pVM->tm.s.offVM, ("bad init order!\n"));
999 if (pVM->tm.s.pTimer)
1000 {
1001 int rc = RTTimerDestroy(pVM->tm.s.pTimer);
1002 AssertRC(rc);
1003 pVM->tm.s.pTimer = NULL;
1004 }
1005
1006 return VINF_SUCCESS;
1007}
1008
1009
1010/**
1011 * The VM is being reset.
1012 *
1013 * For the TM component this means that a rescheduling is performed,
1014 * the FF is cleared, but the queues are not run. We'll have to
1015 * check whether this makes sense or not, but it seems like a good idea now....
1016 *
1017 * @param pVM VM handle.
1018 */
1019VMM_INT_DECL(void) TMR3Reset(PVM pVM)
1020{
1021 LogFlow(("TMR3Reset:\n"));
1022 VM_ASSERT_EMT(pVM);
1023 tmTimerLock(pVM);
1024
1025 /*
1026 * Abort any pending catch up.
1027 * This isn't perfect...
1028 */
1029 if (pVM->tm.s.fVirtualSyncCatchUp)
1030 {
1031 const uint64_t offVirtualNow = TMVirtualGetNoCheck(pVM);
1032 const uint64_t offVirtualSyncNow = TMVirtualSyncGetNoCheck(pVM);
1033 if (pVM->tm.s.fVirtualSyncCatchUp)
1034 {
1035 STAM_PROFILE_ADV_STOP(&pVM->tm.s.StatVirtualSyncCatchup, c);
1036
1037 const uint64_t offOld = pVM->tm.s.offVirtualSyncGivenUp;
1038 const uint64_t offNew = offVirtualNow - offVirtualSyncNow;
1039 Assert(offOld <= offNew);
1040 ASMAtomicWriteU64((uint64_t volatile *)&pVM->tm.s.offVirtualSyncGivenUp, offNew);
1041 ASMAtomicWriteU64((uint64_t volatile *)&pVM->tm.s.offVirtualSync, offNew);
1042 ASMAtomicWriteBool(&pVM->tm.s.fVirtualSyncCatchUp, false);
1043            LogRel(("TM: Aborting catch-up attempt with a %'RU64 ns lag on reset; new total: %'RU64 ns\n", offNew - offOld, offNew));
1044 }
1045 }
1046
1047 /*
1048 * Process the queues.
1049 */
1050 for (int i = 0; i < TMCLOCK_MAX; i++)
1051 tmTimerQueueSchedule(pVM, &pVM->tm.s.paTimerQueuesR3[i]);
1052#ifdef VBOX_STRICT
1053 tmTimerQueuesSanityChecks(pVM, "TMR3Reset");
1054#endif
1055
1056 PVMCPU pVCpuDst = &pVM->aCpus[pVM->tm.s.idTimerCpu];
1057 VMCPU_FF_CLEAR(pVCpuDst, VMCPU_FF_TIMER); /** @todo FIXME: this isn't right. */
1058 tmTimerUnlock(pVM);
1059}
1060
1061
1062/**
1063 * Resolve a builtin RC symbol.
1064 * Called by PDM when loading or relocating GC modules.
1065 *
1066 * @returns VBox status
1067 * @param pVM VM Handle.
1068 * @param pszSymbol Symbol to resolve.
1069 * @param pRCPtrValue Where to store the symbol value.
1070 * @remark This has to work before TMR3Relocate() is called.
1071 */
1072VMM_INT_DECL(int) TMR3GetImportRC(PVM pVM, const char *pszSymbol, PRTRCPTR pRCPtrValue)
1073{
1074 if (!strcmp(pszSymbol, "g_pSUPGlobalInfoPage"))
1075 *pRCPtrValue = MMHyperR3ToRC(pVM, &pVM->tm.s.pvGIPRC);
1076 //else if (..)
1077 else
1078 return VERR_SYMBOL_NOT_FOUND;
1079 return VINF_SUCCESS;
1080}
1081
1082
1083/**
1084 * Execute state save operation.
1085 *
1086 * @returns VBox status code.
1087 * @param pVM VM Handle.
1088 * @param pSSM SSM operation handle.
1089 */
1090static DECLCALLBACK(int) tmR3Save(PVM pVM, PSSMHANDLE pSSM)
1091{
1092 LogFlow(("tmR3Save:\n"));
1093#ifdef VBOX_STRICT
1094 for (VMCPUID i = 0; i < pVM->cCpus; i++)
1095 {
1096 PVMCPU pVCpu = &pVM->aCpus[i];
1097 Assert(!pVCpu->tm.s.fTSCTicking);
1098 }
1099 Assert(!pVM->tm.s.cVirtualTicking);
1100 Assert(!pVM->tm.s.fVirtualSyncTicking);
1101#endif
1102
1103 /*
1104 * Save the virtual clocks.
1105 */
1106 /* the virtual clock. */
1107 SSMR3PutU64(pSSM, TMCLOCK_FREQ_VIRTUAL);
1108 SSMR3PutU64(pSSM, pVM->tm.s.u64Virtual);
1109
1110 /* the virtual timer synchronous clock. */
1111 SSMR3PutU64(pSSM, pVM->tm.s.u64VirtualSync);
1112 SSMR3PutU64(pSSM, pVM->tm.s.offVirtualSync);
1113 SSMR3PutU64(pSSM, pVM->tm.s.offVirtualSyncGivenUp);
1114 SSMR3PutU64(pSSM, pVM->tm.s.u64VirtualSyncCatchUpPrev);
1115 SSMR3PutBool(pSSM, pVM->tm.s.fVirtualSyncCatchUp);
1116
1117 /* real time clock */
1118 SSMR3PutU64(pSSM, TMCLOCK_FREQ_REAL);
1119
1120 /* the cpu tick clock. */
1121 for (VMCPUID i = 0; i < pVM->cCpus; i++)
1122 {
1123 PVMCPU pVCpu = &pVM->aCpus[i];
1124 SSMR3PutU64(pSSM, TMCpuTickGet(pVCpu));
1125 }
1126 return SSMR3PutU64(pSSM, pVM->tm.s.cTSCTicksPerSecond);
1127}
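
/*
 * For reference, the saved state unit written above is, in order:
 *      u64     TMCLOCK_FREQ_VIRTUAL
 *      u64     u64Virtual
 *      u64     u64VirtualSync
 *      u64     offVirtualSync
 *      u64     offVirtualSyncGivenUp
 *      u64     u64VirtualSyncCatchUpPrev
 *      bool    fVirtualSyncCatchUp
 *      u64     TMCLOCK_FREQ_REAL
 *      u64     TSC value of each virtual CPU (cCpus entries)
 *      u64     cTSCTicksPerSecond
 * tmR3Load below consumes the fields in exactly this order.
 */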
1128
1129
1130/**
1131 * Execute state load operation.
1132 *
1133 * @returns VBox status code.
1134 * @param pVM VM Handle.
1135 * @param pSSM SSM operation handle.
1136 * @param uVersion Data layout version.
1137 * @param uPass The data pass.
1138 */
1139static DECLCALLBACK(int) tmR3Load(PVM pVM, PSSMHANDLE pSSM, uint32_t uVersion, uint32_t uPass)
1140{
1141 LogFlow(("tmR3Load:\n"));
1142
1143 Assert(uPass == SSM_PASS_FINAL); NOREF(uPass);
1144#ifdef VBOX_STRICT
1145 for (VMCPUID i = 0; i < pVM->cCpus; i++)
1146 {
1147 PVMCPU pVCpu = &pVM->aCpus[i];
1148 Assert(!pVCpu->tm.s.fTSCTicking);
1149 }
1150 Assert(!pVM->tm.s.cVirtualTicking);
1151 Assert(!pVM->tm.s.fVirtualSyncTicking);
1152#endif
1153
1154 /*
1155 * Validate version.
1156 */
1157 if (uVersion != TM_SAVED_STATE_VERSION)
1158 {
1159 AssertMsgFailed(("tmR3Load: Invalid version uVersion=%d!\n", uVersion));
1160 return VERR_SSM_UNSUPPORTED_DATA_UNIT_VERSION;
1161 }
1162
1163 /*
1164 * Load the virtual clock.
1165 */
1166 pVM->tm.s.cVirtualTicking = 0;
1167 /* the virtual clock. */
1168 uint64_t u64Hz;
1169 int rc = SSMR3GetU64(pSSM, &u64Hz);
1170 if (RT_FAILURE(rc))
1171 return rc;
1172 if (u64Hz != TMCLOCK_FREQ_VIRTUAL)
1173 {
1174 AssertMsgFailed(("The virtual clock frequency differs! Saved: %'RU64 Binary: %'RU64\n",
1175 u64Hz, TMCLOCK_FREQ_VIRTUAL));
1176 return VERR_SSM_VIRTUAL_CLOCK_HZ;
1177 }
1178 SSMR3GetU64(pSSM, &pVM->tm.s.u64Virtual);
1179 pVM->tm.s.u64VirtualOffset = 0;
1180
1181 /* the virtual timer synchronous clock. */
1182 pVM->tm.s.fVirtualSyncTicking = false;
1183 uint64_t u64;
1184 SSMR3GetU64(pSSM, &u64);
1185 pVM->tm.s.u64VirtualSync = u64;
1186 SSMR3GetU64(pSSM, &u64);
1187 pVM->tm.s.offVirtualSync = u64;
1188 SSMR3GetU64(pSSM, &u64);
1189 pVM->tm.s.offVirtualSyncGivenUp = u64;
1190 SSMR3GetU64(pSSM, &u64);
1191 pVM->tm.s.u64VirtualSyncCatchUpPrev = u64;
1192 bool f;
1193 SSMR3GetBool(pSSM, &f);
1194 pVM->tm.s.fVirtualSyncCatchUp = f;
1195
1196 /* the real clock */
1197 rc = SSMR3GetU64(pSSM, &u64Hz);
1198 if (RT_FAILURE(rc))
1199 return rc;
1200 if (u64Hz != TMCLOCK_FREQ_REAL)
1201 {
1202 AssertMsgFailed(("The real clock frequency differs! Saved: %'RU64 Binary: %'RU64\n",
1203 u64Hz, TMCLOCK_FREQ_REAL));
1204 return VERR_SSM_VIRTUAL_CLOCK_HZ; /* misleading... */
1205 }
1206
1207 /* the cpu tick clock. */
1208 for (VMCPUID i = 0; i < pVM->cCpus; i++)
1209 {
1210 PVMCPU pVCpu = &pVM->aCpus[i];
1211
1212 pVCpu->tm.s.fTSCTicking = false;
1213 SSMR3GetU64(pSSM, &pVCpu->tm.s.u64TSC);
1214
1215 if (pVM->tm.s.fTSCUseRealTSC)
1216 pVCpu->tm.s.offTSCRawSrc = 0; /** @todo TSC restore stuff and HWACC. */
1217 }
1218
1219 rc = SSMR3GetU64(pSSM, &u64Hz);
1220 if (RT_FAILURE(rc))
1221 return rc;
1222 if (!pVM->tm.s.fTSCUseRealTSC)
1223 pVM->tm.s.cTSCTicksPerSecond = u64Hz;
1224
1225 LogRel(("TM: cTSCTicksPerSecond=%#RX64 (%'RU64) fTSCVirtualized=%RTbool fTSCUseRealTSC=%RTbool (state load)\n",
1226 pVM->tm.s.cTSCTicksPerSecond, pVM->tm.s.cTSCTicksPerSecond, pVM->tm.s.fTSCVirtualized, pVM->tm.s.fTSCUseRealTSC));
1227
1228 /*
1229 * Make sure timers get rescheduled immediately.
1230 */
1231 PVMCPU pVCpuDst = &pVM->aCpus[pVM->tm.s.idTimerCpu];
1232 VMCPU_FF_SET(pVCpuDst, VMCPU_FF_TIMER);
1233
1234 return VINF_SUCCESS;
1235}
1236
1237
1238/**
1239 * Internal TMR3TimerCreate worker.
1240 *
1241 * @returns VBox status code.
1242 * @param pVM The VM handle.
1243 * @param enmClock The timer clock.
1244 * @param pszDesc The timer description.
1245 * @param ppTimer Where to store the timer pointer on success.
1246 */
1247static int tmr3TimerCreate(PVM pVM, TMCLOCK enmClock, const char *pszDesc, PPTMTIMERR3 ppTimer)
1248{
1249 VM_ASSERT_EMT(pVM);
1250
1251 /*
1252 * Allocate the timer.
1253 */
1254 PTMTIMERR3 pTimer = NULL;
1255 if (pVM->tm.s.pFree && VM_IS_EMT(pVM))
1256 {
1257 pTimer = pVM->tm.s.pFree;
1258 pVM->tm.s.pFree = pTimer->pBigNext;
1259 Log3(("TM: Recycling timer %p, new free head %p.\n", pTimer, pTimer->pBigNext));
1260 }
1261
1262 if (!pTimer)
1263 {
1264 int rc = MMHyperAlloc(pVM, sizeof(*pTimer), 0, MM_TAG_TM, (void **)&pTimer);
1265 if (RT_FAILURE(rc))
1266 return rc;
1267 Log3(("TM: Allocated new timer %p\n", pTimer));
1268 }
1269
1270 /*
1271 * Initialize it.
1272 */
1273 pTimer->u64Expire = 0;
1274 pTimer->enmClock = enmClock;
1275 pTimer->pVMR3 = pVM;
1276 pTimer->pVMR0 = pVM->pVMR0;
1277 pTimer->pVMRC = pVM->pVMRC;
1278 pTimer->enmState = TMTIMERSTATE_STOPPED;
1279 pTimer->offScheduleNext = 0;
1280 pTimer->offNext = 0;
1281 pTimer->offPrev = 0;
1282 pTimer->pvUser = NULL;
1283 pTimer->pCritSect = NULL;
1284 pTimer->pszDesc = pszDesc;
1285
1286 /* insert into the list of created timers. */
1287 tmTimerLock(pVM);
1288 pTimer->pBigPrev = NULL;
1289 pTimer->pBigNext = pVM->tm.s.pCreated;
1290 pVM->tm.s.pCreated = pTimer;
1291 if (pTimer->pBigNext)
1292 pTimer->pBigNext->pBigPrev = pTimer;
1293#ifdef VBOX_STRICT
1294 tmTimerQueuesSanityChecks(pVM, "tmR3TimerCreate");
1295#endif
1296 tmTimerUnlock(pVM);
1297
1298 *ppTimer = pTimer;
1299 return VINF_SUCCESS;
1300}
1301
1302
1303/**
1304 * Creates a device timer.
1305 *
1306 * @returns VBox status.
1307 * @param pVM The VM to create the timer in.
1308 * @param pDevIns Device instance.
1309 * @param enmClock The clock to use on this timer.
1310 * @param pfnCallback Callback function.
1311 * @param pvUser The user argument to the callback.
1312 * @param fFlags Timer creation flags, see grp_tm_timer_flags.
1313 * @param pszDesc Pointer to description string which must stay around
1314 * until the timer is fully destroyed (i.e. a bit after TMTimerDestroy()).
1315 * @param ppTimer Where to store the timer on success.
1316 */
1317VMM_INT_DECL(int) TMR3TimerCreateDevice(PVM pVM, PPDMDEVINS pDevIns, TMCLOCK enmClock,
1318 PFNTMTIMERDEV pfnCallback, void *pvUser,
1319 uint32_t fFlags, const char *pszDesc, PPTMTIMERR3 ppTimer)
1320{
1321 AssertReturn(!(fFlags & ~(TMTIMER_FLAGS_NO_CRIT_SECT)), VERR_INVALID_PARAMETER);
1322
1323 /*
1324 * Allocate and init stuff.
1325 */
1326 int rc = tmr3TimerCreate(pVM, enmClock, pszDesc, ppTimer);
1327 if (RT_SUCCESS(rc))
1328 {
1329 (*ppTimer)->enmType = TMTIMERTYPE_DEV;
1330 (*ppTimer)->u.Dev.pfnTimer = pfnCallback;
1331 (*ppTimer)->u.Dev.pDevIns = pDevIns;
1332 (*ppTimer)->pvUser = pvUser;
1333 if (!(fFlags & TMTIMER_FLAGS_NO_CRIT_SECT))
1334 (*ppTimer)->pCritSect = PDMR3DevGetCritSect(pVM, pDevIns);
1335 Log(("TM: Created device timer %p clock %d callback %p '%s'\n", (*ppTimer), enmClock, pfnCallback, pszDesc));
1336 }
1337
1338 return rc;
1339}
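
/*
 * A typical call from a device constructor looks along these lines (sketch;
 * the callback and description names are illustrative only):
 *
 *  static DECLCALLBACK(void) devFooTimer(PPDMDEVINS pDevIns, PTMTIMER pTimer, void *pvUser)
 *  {
 *      // Handle the tick, re-arming via TMTimerSetMillies(pTimer, ...) if periodic.
 *  }
 *  ...
 *  PTMTIMERR3 pTimer;
 *  rc = TMR3TimerCreateDevice(pVM, pDevIns, TMCLOCK_VIRTUAL_SYNC, devFooTimer,
 *                             NULL, 0, "Foo Timer", &pTimer); // pvUser=NULL, fFlags=0
 *
 * Not passing TMTIMER_FLAGS_NO_CRIT_SECT means the callback will be invoked
 * with the device's own critical section entered, as set up above.
 */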
1340
1341
1342
1343
1344/**
1345 * Creates a USB device timer.
1346 *
1347 * @returns VBox status.
1348 * @param pVM The VM to create the timer in.
1349 * @param pUsbIns The USB device instance.
1350 * @param enmClock The clock to use on this timer.
1351 * @param pfnCallback Callback function.
1352 * @param pvUser The user argument to the callback.
1353 * @param fFlags Timer creation flags, see grp_tm_timer_flags.
1354 * @param pszDesc Pointer to description string which must stay around
1355 * until the timer is fully destroyed (i.e. a bit after TMTimerDestroy()).
1356 * @param ppTimer Where to store the timer on success.
1357 */
1358VMM_INT_DECL(int) TMR3TimerCreateUsb(PVM pVM, PPDMUSBINS pUsbIns, TMCLOCK enmClock,
1359 PFNTMTIMERUSB pfnCallback, void *pvUser,
1360 uint32_t fFlags, const char *pszDesc, PPTMTIMERR3 ppTimer)
1361{
1362 AssertReturn(!(fFlags & ~(TMTIMER_FLAGS_NO_CRIT_SECT)), VERR_INVALID_PARAMETER);
1363
1364 /*
1365 * Allocate and init stuff.
1366 */
1367 int rc = tmr3TimerCreate(pVM, enmClock, pszDesc, ppTimer);
1368 if (RT_SUCCESS(rc))
1369 {
1370 (*ppTimer)->enmType = TMTIMERTYPE_USB;
1371 (*ppTimer)->u.Usb.pfnTimer = pfnCallback;
1372 (*ppTimer)->u.Usb.pUsbIns = pUsbIns;
1373 (*ppTimer)->pvUser = pvUser;
1374 //if (!(fFlags & TMTIMER_FLAGS_NO_CRIT_SECT))
1375 //{
1376        //    if (pUsbIns->pCritSectR3)
1377 // (*ppTimer)->pCritSect = pUsbIns->pCritSectR3;
1378 // else
1379 // (*ppTimer)->pCritSect = IOMR3GetCritSect(pVM);
1380 //}
1381 Log(("TM: Created USB device timer %p clock %d callback %p '%s'\n", (*ppTimer), enmClock, pfnCallback, pszDesc));
1382 }
1383
1384 return rc;
1385}
1386
1387
1388/**
1389 * Creates a driver timer.
1390 *
1391 * @returns VBox status.
1392 * @param pVM The VM to create the timer in.
1393 * @param pDrvIns Driver instance.
1394 * @param enmClock The clock to use on this timer.
1395 * @param pfnCallback Callback function.
1396 * @param pvUser The user argument to the callback.
1397 * @param fFlags Timer creation flags, see grp_tm_timer_flags.
1398 * @param pszDesc Pointer to description string which must stay around
1399 * until the timer is fully destroyed (i.e. a bit after TMTimerDestroy()).
1400 * @param ppTimer Where to store the timer on success.
1401 */
1402VMM_INT_DECL(int) TMR3TimerCreateDriver(PVM pVM, PPDMDRVINS pDrvIns, TMCLOCK enmClock, PFNTMTIMERDRV pfnCallback, void *pvUser,
1403 uint32_t fFlags, const char *pszDesc, PPTMTIMERR3 ppTimer)
1404{
1405 AssertReturn(!(fFlags & ~(TMTIMER_FLAGS_NO_CRIT_SECT)), VERR_INVALID_PARAMETER);
1406
1407 /*
1408 * Allocate and init stuff.
1409 */
1410 int rc = tmr3TimerCreate(pVM, enmClock, pszDesc, ppTimer);
1411 if (RT_SUCCESS(rc))
1412 {
1413 (*ppTimer)->enmType = TMTIMERTYPE_DRV;
1414 (*ppTimer)->u.Drv.pfnTimer = pfnCallback;
1415 (*ppTimer)->u.Drv.pDrvIns = pDrvIns;
1416 (*ppTimer)->pvUser = pvUser;
1417        Log(("TM: Created driver timer %p clock %d callback %p '%s'\n", (*ppTimer), enmClock, pfnCallback, pszDesc));
1418 }
1419
1420 return rc;
1421}
1422
1423
1424/**
1425 * Creates an internal timer.
1426 *
1427 * @returns VBox status.
1428 * @param pVM The VM to create the timer in.
1429 * @param enmClock The clock to use on this timer.
1430 * @param pfnCallback Callback function.
1431 * @param pvUser User argument to be passed to the callback.
1432 * @param pszDesc Pointer to description string which must stay around
1433 * until the timer is fully destroyed (i.e. a bit after TMTimerDestroy()).
1434 * @param ppTimer Where to store the timer on success.
1435 */
1436VMMR3DECL(int) TMR3TimerCreateInternal(PVM pVM, TMCLOCK enmClock, PFNTMTIMERINT pfnCallback, void *pvUser, const char *pszDesc, PPTMTIMERR3 ppTimer)
1437{
1438 /*
1439 * Allocate and init stuff.
1440 */
1441 PTMTIMER pTimer;
1442 int rc = tmr3TimerCreate(pVM, enmClock, pszDesc, &pTimer);
1443 if (RT_SUCCESS(rc))
1444 {
1445 pTimer->enmType = TMTIMERTYPE_INTERNAL;
1446 pTimer->u.Internal.pfnTimer = pfnCallback;
1447 pTimer->pvUser = pvUser;
1448 *ppTimer = pTimer;
1449 Log(("TM: Created internal timer %p clock %d callback %p '%s'\n", pTimer, enmClock, pfnCallback, pszDesc));
1450 }
1451
1452 return rc;
1453}
1454
1455/**
1456 * Creates an external timer.
1457 *
1458 * @returns Timer handle on success.
1459 * @returns NULL on failure.
1460 * @param pVM The VM to create the timer in.
1461 * @param enmClock The clock to use on this timer.
1462 * @param pfnCallback Callback function.
1463 * @param pvUser User argument.
1464 * @param pszDesc Pointer to description string which must stay around
1465 * until the timer is fully destroyed (i.e. a bit after TMTimerDestroy()).
1466 */
1467VMMR3DECL(PTMTIMERR3) TMR3TimerCreateExternal(PVM pVM, TMCLOCK enmClock, PFNTMTIMEREXT pfnCallback, void *pvUser, const char *pszDesc)
1468{
1469 /*
1470 * Allocate and init stuff.
1471 */
1472 PTMTIMERR3 pTimer;
1473 int rc = tmr3TimerCreate(pVM, enmClock, pszDesc, &pTimer);
1474 if (RT_SUCCESS(rc))
1475 {
1476 pTimer->enmType = TMTIMERTYPE_EXTERNAL;
1477 pTimer->u.External.pfnTimer = pfnCallback;
1478 pTimer->pvUser = pvUser;
1479 Log(("TM: Created external timer %p clock %d callback %p '%s'\n", pTimer, enmClock, pfnCallback, pszDesc));
1480 return pTimer;
1481 }
1482
1483 return NULL;
1484}
1485
1486
1487/**
1488 * Destroy a timer
1489 *
1490 * @returns VBox status.
1491 * @param pTimer Timer handle as returned by one of the create functions.
1492 */
1493VMMR3DECL(int) TMR3TimerDestroy(PTMTIMER pTimer)
1494{
1495 /*
1496 * Be extra careful here.
1497 */
1498 if (!pTimer)
1499 return VINF_SUCCESS;
1500 AssertPtr(pTimer);
1501 Assert((unsigned)pTimer->enmClock < (unsigned)TMCLOCK_MAX);
1502
1503 PVM pVM = pTimer->CTX_SUFF(pVM);
1504 PTMTIMERQUEUE pQueue = &pVM->tm.s.CTX_SUFF(paTimerQueues)[pTimer->enmClock];
1505 bool fActive = false;
1506 bool fPending = false;
1507
1508 AssertMsg( !pTimer->pCritSect
1509 || VMR3GetState(pVM) != VMSTATE_RUNNING
1510 || PDMCritSectIsOwner(pTimer->pCritSect), ("%s\n", pTimer->pszDesc));
1511
1512 /*
1513 * The rest of the game happens behind the lock, just
1514 * like create does. All the work is done here.
1515 */
1516 tmTimerLock(pVM);
1517 for (int cRetries = 1000;; cRetries--)
1518 {
1519 /*
1520 * Change to the DESTROY state.
1521 */
1522 TMTIMERSTATE enmState = pTimer->enmState;
1523 TMTIMERSTATE enmNewState = enmState;
1524 Log2(("TMTimerDestroy: %p:{.enmState=%s, .pszDesc='%s'} cRetries=%d\n",
1525 pTimer, tmTimerState(enmState), R3STRING(pTimer->pszDesc), cRetries));
1526 switch (enmState)
1527 {
1528 case TMTIMERSTATE_STOPPED:
1529 case TMTIMERSTATE_EXPIRED_DELIVER:
1530 break;
1531
1532 case TMTIMERSTATE_ACTIVE:
1533 fActive = true;
1534 break;
1535
1536 case TMTIMERSTATE_PENDING_STOP:
1537 case TMTIMERSTATE_PENDING_STOP_SCHEDULE:
1538 case TMTIMERSTATE_PENDING_RESCHEDULE:
1539 fActive = true;
1540 fPending = true;
1541 break;
1542
1543 case TMTIMERSTATE_PENDING_SCHEDULE:
1544 fPending = true;
1545 break;
1546
1547 /*
1548 * This shouldn't happen as the caller should make sure there are no races.
1549 */
1550 case TMTIMERSTATE_EXPIRED_GET_UNLINK:
1551 case TMTIMERSTATE_PENDING_SCHEDULE_SET_EXPIRE:
1552 case TMTIMERSTATE_PENDING_RESCHEDULE_SET_EXPIRE:
1553 AssertMsgFailed(("%p:.enmState=%s %s\n", pTimer, tmTimerState(enmState), pTimer->pszDesc));
1554 tmTimerUnlock(pVM);
1555 if (!RTThreadYield())
1556 RTThreadSleep(1);
1557 AssertMsgReturn(cRetries > 0, ("Failed waiting for stable state. state=%d (%s)\n", pTimer->enmState, pTimer->pszDesc),
1558 VERR_TM_UNSTABLE_STATE);
1559 tmTimerLock(pVM);
1560 continue;
1561
1562 /*
1563 * Invalid states.
1564 */
1565 case TMTIMERSTATE_FREE:
1566 case TMTIMERSTATE_DESTROY:
1567 tmTimerUnlock(pVM);
1568 AssertLogRelMsgFailedReturn(("pTimer=%p %s\n", pTimer, tmTimerState(enmState)), VERR_TM_INVALID_STATE);
1569
1570 default:
1571 AssertMsgFailed(("Unknown timer state %d (%s)\n", enmState, R3STRING(pTimer->pszDesc)));
1572 tmTimerUnlock(pVM);
1573 return VERR_TM_UNKNOWN_STATE;
1574 }
1575
1576 /*
1577 * Try switch to the destroy state.
1578     * This should always succeed as the caller should make sure there are no races.
1579 */
1580 bool fRc;
1581 TM_TRY_SET_STATE(pTimer, TMTIMERSTATE_DESTROY, enmState, fRc);
1582 if (fRc)
1583 break;
1584 AssertMsgFailed(("%p:.enmState=%s %s\n", pTimer, tmTimerState(enmState), pTimer->pszDesc));
1585 tmTimerUnlock(pVM);
1586 AssertMsgReturn(cRetries > 0, ("Failed waiting for stable state. state=%d (%s)\n", pTimer->enmState, pTimer->pszDesc),
1587 VERR_TM_UNSTABLE_STATE);
1588 tmTimerLock(pVM);
1589 }
1590
1591 /*
1592 * Unlink from the active list.
1593 */
1594 if (fActive)
1595 {
1596 const PTMTIMER pPrev = TMTIMER_GET_PREV(pTimer);
1597 const PTMTIMER pNext = TMTIMER_GET_NEXT(pTimer);
1598 if (pPrev)
1599 TMTIMER_SET_NEXT(pPrev, pNext);
1600 else
1601 {
1602 TMTIMER_SET_HEAD(pQueue, pNext);
1603 pQueue->u64Expire = pNext ? pNext->u64Expire : INT64_MAX;
1604 }
1605 if (pNext)
1606 TMTIMER_SET_PREV(pNext, pPrev);
1607 pTimer->offNext = 0;
1608 pTimer->offPrev = 0;
1609 }
1610
1611 /*
1612 * Unlink from the schedule list by running it.
1613 */
1614 if (fPending)
1615 {
1616 Log3(("TMR3TimerDestroy: tmTimerQueueSchedule\n"));
1617 STAM_PROFILE_START(&pVM->tm.s.CTX_SUFF_Z(StatScheduleOne), a);
1618 Assert(pQueue->offSchedule);
1619 tmTimerQueueSchedule(pVM, pQueue);
1620 }
1621
1622 /*
1623     * Ready to move the timer from the created list and onto the free list.
1624 */
1625 Assert(!pTimer->offNext); Assert(!pTimer->offPrev); Assert(!pTimer->offScheduleNext);
1626
1627 /* unlink from created list */
1628 if (pTimer->pBigPrev)
1629 pTimer->pBigPrev->pBigNext = pTimer->pBigNext;
1630 else
1631 pVM->tm.s.pCreated = pTimer->pBigNext;
1632 if (pTimer->pBigNext)
1633 pTimer->pBigNext->pBigPrev = pTimer->pBigPrev;
1634 pTimer->pBigNext = 0;
1635 pTimer->pBigPrev = 0;
1636
1637 /* free */
1638 Log2(("TM: Inserting %p into the free list ahead of %p!\n", pTimer, pVM->tm.s.pFree));
1639 TM_SET_STATE(pTimer, TMTIMERSTATE_FREE);
1640 pTimer->pBigNext = pVM->tm.s.pFree;
1641 pVM->tm.s.pFree = pTimer;
1642
1643#ifdef VBOX_STRICT
1644 tmTimerQueuesSanityChecks(pVM, "TMR3TimerDestroy");
1645#endif
1646 tmTimerUnlock(pVM);
1647 return VINF_SUCCESS;
1648}
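
/*
 * TM_TRY_SET_STATE above is, in essence, an atomic compare-and-exchange on
 * the timer state; simplified, the macro boils down to something like:
 *
 *  fRc = ASMAtomicCmpXchgU32((uint32_t volatile *)&pTimer->enmState,
 *                            TMTIMERSTATE_DESTROY, enmState);
 *
 * (Sketch; see the TM internal header for the real definition.) This is why
 * the retry loop is needed: another thread may legally change the state
 * between reading it and the exchange attempt.
 */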
1649
1650
1651/**
1652 * Destroy all timers owned by a device.
1653 *
1654 * @returns VBox status.
1655 * @param pVM VM handle.
1656 * @param   pDevIns         Device whose timers should be destroyed.
1657 */
1658VMM_INT_DECL(int) TMR3TimerDestroyDevice(PVM pVM, PPDMDEVINS pDevIns)
1659{
1660 LogFlow(("TMR3TimerDestroyDevice: pDevIns=%p\n", pDevIns));
1661 if (!pDevIns)
1662 return VERR_INVALID_PARAMETER;
1663
1664 tmTimerLock(pVM);
1665 PTMTIMER pCur = pVM->tm.s.pCreated;
1666 while (pCur)
1667 {
1668 PTMTIMER pDestroy = pCur;
1669 pCur = pDestroy->pBigNext;
1670 if ( pDestroy->enmType == TMTIMERTYPE_DEV
1671 && pDestroy->u.Dev.pDevIns == pDevIns)
1672 {
1673 int rc = TMR3TimerDestroy(pDestroy);
1674 AssertRC(rc);
1675 }
1676 }
1677 tmTimerUnlock(pVM);
1678
1679 LogFlow(("TMR3TimerDestroyDevice: returns VINF_SUCCESS\n"));
1680 return VINF_SUCCESS;
1681}
1682
1683
1684/**
1685 * Destroy all timers owned by a USB device.
1686 *
1687 * @returns VBox status.
1688 * @param pVM VM handle.
1689 * @param   pUsbIns         USB device whose timers should be destroyed.
1690 */
1691VMM_INT_DECL(int) TMR3TimerDestroyUsb(PVM pVM, PPDMUSBINS pUsbIns)
1692{
1693 LogFlow(("TMR3TimerDestroyUsb: pUsbIns=%p\n", pUsbIns));
1694 if (!pUsbIns)
1695 return VERR_INVALID_PARAMETER;
1696
1697 tmTimerLock(pVM);
1698 PTMTIMER pCur = pVM->tm.s.pCreated;
1699 while (pCur)
1700 {
1701 PTMTIMER pDestroy = pCur;
1702 pCur = pDestroy->pBigNext;
1703 if ( pDestroy->enmType == TMTIMERTYPE_USB
1704 && pDestroy->u.Usb.pUsbIns == pUsbIns)
1705 {
1706 int rc = TMR3TimerDestroy(pDestroy);
1707 AssertRC(rc);
1708 }
1709 }
1710 tmTimerUnlock(pVM);
1711
1712 LogFlow(("TMR3TimerDestroyUsb: returns VINF_SUCCESS\n"));
1713 return VINF_SUCCESS;
1714}
1715
1716
1717/**
1718 * Destroy all timers owned by a driver.
1719 *
1720 * @returns VBox status.
1721 * @param pVM VM handle.
1722 * @param   pDrvIns         Driver whose timers should be destroyed.
1723 */
1724VMM_INT_DECL(int) TMR3TimerDestroyDriver(PVM pVM, PPDMDRVINS pDrvIns)
1725{
1726 LogFlow(("TMR3TimerDestroyDriver: pDrvIns=%p\n", pDrvIns));
1727 if (!pDrvIns)
1728 return VERR_INVALID_PARAMETER;
1729
1730 tmTimerLock(pVM);
1731 PTMTIMER pCur = pVM->tm.s.pCreated;
1732 while (pCur)
1733 {
1734 PTMTIMER pDestroy = pCur;
1735 pCur = pDestroy->pBigNext;
1736 if ( pDestroy->enmType == TMTIMERTYPE_DRV
1737 && pDestroy->u.Drv.pDrvIns == pDrvIns)
1738 {
1739 int rc = TMR3TimerDestroy(pDestroy);
1740 AssertRC(rc);
1741 }
1742 }
1743 tmTimerUnlock(pVM);
1744
1745 LogFlow(("TMR3TimerDestroyDriver: returns VINF_SUCCESS\n"));
1746 return VINF_SUCCESS;
1747}
1748
1749
1750/**
1751 * Internal function for getting the clock time.
1752 *
1753 * @returns clock time.
1754 * @param pVM The VM handle.
1755 * @param enmClock The clock.
1756 */
1757DECLINLINE(uint64_t) tmClock(PVM pVM, TMCLOCK enmClock)
1758{
1759 switch (enmClock)
1760 {
1761 case TMCLOCK_VIRTUAL: return TMVirtualGet(pVM);
1762 case TMCLOCK_VIRTUAL_SYNC: return TMVirtualSyncGet(pVM);
1763 case TMCLOCK_REAL: return TMRealGet(pVM);
1764 case TMCLOCK_TSC: return TMCpuTickGet(&pVM->aCpus[0] /* just take VCPU 0 */);
1765 default:
1766 AssertMsgFailed(("enmClock=%d\n", enmClock));
1767 return ~(uint64_t)0;
1768 }
1769}
1770
1771
1772/**
1773 * Checks if the given clock's queue has one or more expired timers.
1774 *
1775 * @returns true / false.
1776 *
1777 * @param pVM The VM handle.
1778 * @param enmClock The queue.
1779 */
1780DECLINLINE(bool) tmR3HasExpiredTimer(PVM pVM, TMCLOCK enmClock)
1781{
1782 const uint64_t u64Expire = pVM->tm.s.CTX_SUFF(paTimerQueues)[enmClock].u64Expire;
1783 return u64Expire != INT64_MAX && u64Expire <= tmClock(pVM, enmClock);
1784}
1785
1786
1787/**
1788 * Checks for expired timers in all the queues.
1789 *
1790 * @returns true / false.
1791 * @param pVM The VM handle.
1792 */
1793DECLINLINE(bool) tmR3AnyExpiredTimers(PVM pVM)
1794{
1795 /*
1796     * Combine the time calculation for the first two since we're not on EMT;
1797     * TMVirtualSyncGet only permits EMT.
1798 */
1799 uint64_t u64Now = TMVirtualGetNoCheck(pVM);
1800 if (pVM->tm.s.CTX_SUFF(paTimerQueues)[TMCLOCK_VIRTUAL].u64Expire <= u64Now)
1801 return true;
1802 u64Now = pVM->tm.s.fVirtualSyncTicking
1803 ? u64Now - pVM->tm.s.offVirtualSync
1804 : pVM->tm.s.u64VirtualSync;
1805 if (pVM->tm.s.CTX_SUFF(paTimerQueues)[TMCLOCK_VIRTUAL_SYNC].u64Expire <= u64Now)
1806 return true;
1807
1808 /*
1809 * The remaining timers.
1810 */
1811 if (tmR3HasExpiredTimer(pVM, TMCLOCK_REAL))
1812 return true;
1813 if (tmR3HasExpiredTimer(pVM, TMCLOCK_TSC))
1814 return true;
1815 return false;
1816}
1817
1818
1819/**
1820 * Schedule timer callback.
1821 *
1822 * @param pTimer Timer handle.
1823 * @param pvUser VM handle.
1824 * @thread Timer thread.
1825 *
1826 * @remark We cannot do the scheduling and queue running from a timer handler
1827 *         since it's not executing in EMT, and even if it was it would be async
1828 *         and we wouldn't know the state of affairs.
1829 * So, we'll just raise the timer FF and force any REM execution to exit.
1830 */
1831static DECLCALLBACK(void) tmR3TimerCallback(PRTTIMER pTimer, void *pvUser, uint64_t /*iTick*/)
1832{
1833 PVM pVM = (PVM)pvUser;
1834 PVMCPU pVCpuDst = &pVM->aCpus[pVM->tm.s.idTimerCpu];
1835
1836 AssertCompile(TMCLOCK_MAX == 4);
1837#ifdef DEBUG_Sander /* very annoying, keep it private. */
1838 if (VMCPU_FF_ISSET(pVCpuDst, VMCPU_FF_TIMER))
1839 Log(("tmR3TimerCallback: timer event still pending!!\n"));
1840#endif
1841 if ( !VMCPU_FF_ISSET(pVCpuDst, VMCPU_FF_TIMER)
1842 && ( pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL_SYNC].offSchedule /** @todo FIXME - reconsider offSchedule as a reason for running the timer queues. */
1843 || pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL].offSchedule
1844 || pVM->tm.s.paTimerQueuesR3[TMCLOCK_REAL].offSchedule
1845 || pVM->tm.s.paTimerQueuesR3[TMCLOCK_TSC].offSchedule
1846 || tmR3AnyExpiredTimers(pVM)
1847 )
1848        && !pVM->tm.s.fRunningQueues
1850 )
1851 {
1852 Log5(("TM(%u): FF: 0 -> 1\n", __LINE__));
1853 VMCPU_FF_SET(pVCpuDst, VMCPU_FF_TIMER);
1854 REMR3NotifyTimerPending(pVM, pVCpuDst);
1855 VMR3NotifyCpuFFU(pVCpuDst->pUVCpu, VMNOTIFYFF_FLAGS_DONE_REM /** @todo | VMNOTIFYFF_FLAGS_POKE ?*/);
1856 STAM_COUNTER_INC(&pVM->tm.s.StatTimerCallbackSetFF);
1857 }
1858}
1859
1860
1861/**
1862 * Schedules and runs any pending timers.
1863 *
1864 * This is normally called from a forced action handler in EMT.
1865 *
1866 * @param pVM The VM to run the timers for.
1867 *
1868 * @thread EMT (actually EMT0, but we fend off the others)
1869 */
1870VMMR3DECL(void) TMR3TimerQueuesDo(PVM pVM)
1871{
1872 /*
1873 * Only the dedicated timer EMT should do stuff here.
1874 * (fRunningQueues is only used as an indicator.)
1875 */
1876 Assert(pVM->tm.s.idTimerCpu < pVM->cCpus);
1877 PVMCPU pVCpuDst = &pVM->aCpus[pVM->tm.s.idTimerCpu];
1878 if (VMMGetCpu(pVM) != pVCpuDst)
1879 {
1880 Assert(pVM->cCpus > 1);
1881 return;
1882 }
1883 STAM_PROFILE_START(&pVM->tm.s.StatDoQueues, a);
1884 Log2(("TMR3TimerQueuesDo:\n"));
1885 Assert(!pVM->tm.s.fRunningQueues);
1886 ASMAtomicWriteBool(&pVM->tm.s.fRunningQueues, true);
1887 tmTimerLock(pVM);
1888
1889 /*
1890 * Process the queues.
1891 */
1892 AssertCompile(TMCLOCK_MAX == 4);
1893
1894 /* TMCLOCK_VIRTUAL_SYNC (see also TMR3VirtualSyncFF) */
1895 STAM_PROFILE_ADV_START(&pVM->tm.s.aStatDoQueues[TMCLOCK_VIRTUAL_SYNC], s1);
1896 tmVirtualSyncLock(pVM);
1897 ASMAtomicWriteBool(&pVM->tm.s.fRunningVirtualSyncQueue, true);
1898 VMCPU_FF_CLEAR(pVCpuDst, VMCPU_FF_TIMER); /* Clear the FF once we started working for real. */
1899
1900 if (pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL_SYNC].offSchedule)
1901 tmTimerQueueSchedule(pVM, &pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL_SYNC]);
1902 tmR3TimerQueueRunVirtualSync(pVM);
1903 if (pVM->tm.s.fVirtualSyncTicking) /** @todo move into tmR3TimerQueueRunVirtualSync - FIXME */
1904 VM_FF_CLEAR(pVM, VM_FF_TM_VIRTUAL_SYNC);
1905
1906 ASMAtomicWriteBool(&pVM->tm.s.fRunningVirtualSyncQueue, false);
1907 tmVirtualSyncUnlock(pVM);
1908 STAM_PROFILE_ADV_STOP(&pVM->tm.s.aStatDoQueues[TMCLOCK_VIRTUAL_SYNC], s1);
1909
1910 /* TMCLOCK_VIRTUAL */
1911 STAM_PROFILE_ADV_START(&pVM->tm.s.aStatDoQueues[TMCLOCK_VIRTUAL], s2);
1912 if (pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL].offSchedule)
1913 tmTimerQueueSchedule(pVM, &pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL]);
1914 tmR3TimerQueueRun(pVM, &pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL]);
1915 STAM_PROFILE_ADV_STOP(&pVM->tm.s.aStatDoQueues[TMCLOCK_VIRTUAL], s2);
1916
1917 /* TMCLOCK_TSC */
1918 Assert(!pVM->tm.s.paTimerQueuesR3[TMCLOCK_TSC].offActive); /* not used */
1919
1920 /* TMCLOCK_REAL */
1921 STAM_PROFILE_ADV_START(&pVM->tm.s.aStatDoQueues[TMCLOCK_REAL], s3);
1922 if (pVM->tm.s.paTimerQueuesR3[TMCLOCK_REAL].offSchedule)
1923 tmTimerQueueSchedule(pVM, &pVM->tm.s.paTimerQueuesR3[TMCLOCK_REAL]);
1924 tmR3TimerQueueRun(pVM, &pVM->tm.s.paTimerQueuesR3[TMCLOCK_REAL]);
1925 STAM_PROFILE_ADV_STOP(&pVM->tm.s.aStatDoQueues[TMCLOCK_REAL], s3);
1926
1927#ifdef VBOX_STRICT
1928 /* check that we didn't screw up. */
1929 tmTimerQueuesSanityChecks(pVM, "TMR3TimerQueuesDo");
1930#endif
1931
1932 /* done */
1933 Log2(("TMR3TimerQueuesDo: returns void\n"));
1934 ASMAtomicWriteBool(&pVM->tm.s.fRunningQueues, false);
1935 tmTimerUnlock(pVM);
1936 STAM_PROFILE_STOP(&pVM->tm.s.StatDoQueues, a);
1937}
1938
1944
1945/**
1946 * Schedules and runs any pending timers in the specified queue.
1947 *
1948 * This is normally called from a forced action handler in EMT.
1949 *
1950 * @param pVM The VM to run the timers for.
1951 * @param pQueue The queue to run.
1952 */
1953static void tmR3TimerQueueRun(PVM pVM, PTMTIMERQUEUE pQueue)
1954{
1955 VM_ASSERT_EMT(pVM);
1956
1957 /*
1958 * Run timers.
1959 *
1960 * We check the clock once and run all timers which are ACTIVE
1961 * and have an expire time less or equal to the time we read.
1962 *
1963 * N.B. A generic unlink must be applied since other threads
1964 * are allowed to mess with any active timer at any time.
1965 * However, we only allow EMT to handle EXPIRED_PENDING
1966 * timers, thus enabling the timer handler function to
1967 * arm the timer again.
1968 */
1969 PTMTIMER pNext = TMTIMER_GET_HEAD(pQueue);
1970 if (!pNext)
1971 return;
1972 const uint64_t u64Now = tmClock(pVM, pQueue->enmClock);
1973 while (pNext && pNext->u64Expire <= u64Now)
1974 {
1975 PTMTIMER pTimer = pNext;
1976 pNext = TMTIMER_GET_NEXT(pTimer);
1977 PPDMCRITSECT pCritSect = pTimer->pCritSect;
1978 if (pCritSect)
1979 PDMCritSectEnter(pCritSect, VERR_INTERNAL_ERROR);
1980 Log2(("tmR3TimerQueueRun: %p:{.enmState=%s, .enmClock=%d, .enmType=%d, u64Expire=%llx (now=%llx) .pszDesc=%s}\n",
1981 pTimer, tmTimerState(pTimer->enmState), pTimer->enmClock, pTimer->enmType, pTimer->u64Expire, u64Now, pTimer->pszDesc));
1982 bool fRc;
1983 TM_TRY_SET_STATE(pTimer, TMTIMERSTATE_EXPIRED_GET_UNLINK, TMTIMERSTATE_ACTIVE, fRc);
1984 if (fRc)
1985 {
1986 Assert(!pTimer->offScheduleNext); /* this can trigger falsely */
1987
1988 /* unlink */
1989 const PTMTIMER pPrev = TMTIMER_GET_PREV(pTimer);
1990 if (pPrev)
1991 TMTIMER_SET_NEXT(pPrev, pNext);
1992 else
1993 {
1994 TMTIMER_SET_HEAD(pQueue, pNext);
1995 pQueue->u64Expire = pNext ? pNext->u64Expire : INT64_MAX;
1996 }
1997 if (pNext)
1998 TMTIMER_SET_PREV(pNext, pPrev);
1999 pTimer->offNext = 0;
2000 pTimer->offPrev = 0;
2001
2002 /* fire */
2003 TM_SET_STATE(pTimer, TMTIMERSTATE_EXPIRED_DELIVER);
2004 switch (pTimer->enmType)
2005 {
2006 case TMTIMERTYPE_DEV: pTimer->u.Dev.pfnTimer(pTimer->u.Dev.pDevIns, pTimer, pTimer->pvUser); break;
2007 case TMTIMERTYPE_USB: pTimer->u.Usb.pfnTimer(pTimer->u.Usb.pUsbIns, pTimer, pTimer->pvUser); break;
2008 case TMTIMERTYPE_DRV: pTimer->u.Drv.pfnTimer(pTimer->u.Drv.pDrvIns, pTimer, pTimer->pvUser); break;
2009 case TMTIMERTYPE_INTERNAL: pTimer->u.Internal.pfnTimer(pVM, pTimer, pTimer->pvUser); break;
2010 case TMTIMERTYPE_EXTERNAL: pTimer->u.External.pfnTimer(pTimer->pvUser); break;
2011 default:
2012 AssertMsgFailed(("Invalid timer type %d (%s)\n", pTimer->enmType, pTimer->pszDesc));
2013 break;
2014 }
2015
2016 /* change the state if it wasn't changed already in the handler. */
2017 TM_TRY_SET_STATE(pTimer, TMTIMERSTATE_STOPPED, TMTIMERSTATE_EXPIRED_DELIVER, fRc);
2018 Log2(("tmR3TimerQueueRun: new state %s\n", tmTimerState(pTimer->enmState)));
2019 }
2020 if (pCritSect)
2021 PDMCritSectLeave(pCritSect);
2022 } /* run loop */
2023}
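
/*
 * Note on the linking macros used above: the timer queues use self-relative
 * offsets (offNext/offPrev) rather than pointers so the very same structures
 * can be traversed from ring-3, ring-0 and raw-mode context alike.
 * Conceptually (sketch, not the literal macro definitions):
 *
 *  // TMTIMER_GET_NEXT(pTimer):
 *  pNext = pTimer->offNext ? (PTMTIMER)((intptr_t)pTimer + pTimer->offNext) : NULL;
 *  // TMTIMER_SET_NEXT(pTimer, pNext):
 *  pTimer->offNext = pNext ? (intptr_t)pNext - (intptr_t)pTimer : 0;
 */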
2024
2025
2026/**
2027 * Schedules and runs any pending timers in the timer queue for the
2028 * synchronous virtual clock.
2029 *
2030 * This scheduling is a bit different from the other queues as it needs
2031 * to implement the special requirements of the timer synchronous virtual
2032 * clock, thus this 2nd queue run function.
2033 *
2034 * @param pVM The VM to run the timers for.
2035 *
2036 * @remarks The caller must own both the TM/EMT and the Virtual Sync locks.
2037 */
2038static void tmR3TimerQueueRunVirtualSync(PVM pVM)
2039{
2040 PTMTIMERQUEUE const pQueue = &pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL_SYNC];
2041 VM_ASSERT_EMT(pVM);
2042
2043 /*
2044 * Any timers?
2045 */
2046 PTMTIMER pNext = TMTIMER_GET_HEAD(pQueue);
2047 if (RT_UNLIKELY(!pNext))
2048 {
2049 Assert(pVM->tm.s.fVirtualSyncTicking || !pVM->tm.s.cVirtualTicking);
2050 return;
2051 }
2052 STAM_COUNTER_INC(&pVM->tm.s.StatVirtualSyncRun);
2053
2054 /*
2055 * Calculate the time frame for which we will dispatch timers.
2056 *
2057 * We use a time frame ranging from the current sync time (which is most likely the
2058 * same as the head timer) and some configurable period (100000ns) up towards the
2059 * current virtual time. This period might also need to be restricted by the catch-up
2060     * rate so frequent calls to this function won't accelerate the time too much;
2061     * however, this will be implemented at a later point if necessary.
2062 *
2063     * Without this frame we would 1) have to run timers much more frequently
2064 * and 2) lag behind at a steady rate.
2065 */
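    /*
     * Catch-up arithmetic by example (illustrative numbers only): with
     * u32VirtualSyncCatchUpPercentage at 25 and 1 000 000 ns of virtual time
     * elapsed since the previous call, u64Sub below becomes 250 000 ns.
     * Shaving that off offVirtualSync makes the virtual sync clock advance
     * 1.25 ms for every 1 ms of virtual time, i.e. it catches up at 25% over
     * normal speed. The actual percentage comes from the configured
     * catch-up periods.
     */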
2066 const uint64_t u64VirtualNow = TMVirtualGetNoCheck(pVM);
2067 uint64_t const offSyncGivenUp = pVM->tm.s.offVirtualSyncGivenUp;
2068 uint64_t u64Now;
2069 if (!pVM->tm.s.fVirtualSyncTicking)
2070 {
2071 STAM_COUNTER_INC(&pVM->tm.s.StatVirtualSyncRunStoppedAlready);
2072 u64Now = pVM->tm.s.u64VirtualSync;
2073#ifdef DEBUG_bird
2074 Assert(u64Now <= pNext->u64Expire);
2075#endif
2076 }
2077 else
2078 {
2079 /* Calc 'now'. */
2080 bool fStopCatchup = false;
2081 bool fUpdateStuff = false;
2082 uint64_t off = pVM->tm.s.offVirtualSync;
2083 if (pVM->tm.s.fVirtualSyncCatchUp)
2084 {
2085 uint64_t u64Delta = u64VirtualNow - pVM->tm.s.u64VirtualSyncCatchUpPrev;
2086 if (RT_LIKELY(!(u64Delta >> 32)))
2087 {
2088 uint64_t u64Sub = ASMMultU64ByU32DivByU32(u64Delta, pVM->tm.s.u32VirtualSyncCatchUpPercentage, 100);
2089 if (off > u64Sub + offSyncGivenUp)
2090 {
2091 off -= u64Sub;
2092 Log4(("TM: %'RU64/-%'8RU64: sub %'RU64 [tmR3TimerQueueRunVirtualSync]\n", u64VirtualNow - off, off - offSyncGivenUp, u64Sub));
2093 }
2094 else
2095 {
2096 STAM_PROFILE_ADV_STOP(&pVM->tm.s.StatVirtualSyncCatchup, c);
2097 fStopCatchup = true;
2098 off = offSyncGivenUp;
2099 }
2100 fUpdateStuff = true;
2101 }
2102 }
2103 u64Now = u64VirtualNow - off;
2104
2105 /* Check if stopped by expired timer. */
2106        uint64_t u64Expire = pNext->u64Expire;
2107        if (u64Now >= u64Expire)
2108        {
2109            STAM_COUNTER_INC(&pVM->tm.s.StatVirtualSyncRunStop);
2110            u64Now = u64Expire;
2111 ASMAtomicWriteU64(&pVM->tm.s.u64VirtualSync, u64Now);
2112 ASMAtomicWriteBool(&pVM->tm.s.fVirtualSyncTicking, false);
2113 Log4(("TM: %'RU64/-%'8RU64: exp tmr [tmR3TimerQueueRunVirtualSync]\n", u64Now, u64VirtualNow - u64Now - offSyncGivenUp));
2114 }
2115 else if (fUpdateStuff)
2116 {
2117 ASMAtomicWriteU64(&pVM->tm.s.offVirtualSync, off);
2118 ASMAtomicWriteU64(&pVM->tm.s.u64VirtualSyncCatchUpPrev, u64VirtualNow);
2119 if (fStopCatchup)
2120 {
2121 ASMAtomicWriteBool(&pVM->tm.s.fVirtualSyncCatchUp, false);
2122 Log4(("TM: %'RU64/0: caught up [tmR3TimerQueueRunVirtualSync]\n", u64VirtualNow));
2123 }
2124 }
2125 }
2126
2127 /* calc end of frame. */
2128 uint64_t u64Max = u64Now + pVM->tm.s.u32VirtualSyncScheduleSlack;
2129 if (u64Max > u64VirtualNow - offSyncGivenUp)
2130 u64Max = u64VirtualNow - offSyncGivenUp;
2131
2132 /* assert sanity */
2133#ifdef DEBUG_bird
2134 Assert(u64Now <= u64VirtualNow - offSyncGivenUp);
2135 Assert(u64Max <= u64VirtualNow - offSyncGivenUp);
2136 Assert(u64Now <= u64Max);
2137 Assert(offSyncGivenUp == pVM->tm.s.offVirtualSyncGivenUp);
2138#endif
2139
2140 /*
2141 * Process the expired timers moving the clock along as we progress.
2142 */
2143#ifdef DEBUG_bird
2144#ifdef VBOX_STRICT
2145 uint64_t u64Prev = u64Now; NOREF(u64Prev);
2146#endif
2147#endif
2148 while (pNext && pNext->u64Expire <= u64Max)
2149 {
2150 PTMTIMER pTimer = pNext;
2151 pNext = TMTIMER_GET_NEXT(pTimer);
2152 PPDMCRITSECT pCritSect = pTimer->pCritSect;
2153 if (pCritSect)
2154 PDMCritSectEnter(pCritSect, VERR_INTERNAL_ERROR);
2155 Log2(("tmR3TimerQueueRun: %p:{.enmState=%s, .enmClock=%d, .enmType=%d, u64Expire=%llx (now=%llx) .pszDesc=%s}\n",
2156 pTimer, tmTimerState(pTimer->enmState), pTimer->enmClock, pTimer->enmType, pTimer->u64Expire, u64Now, pTimer->pszDesc));
2157 bool fRc;
2158 TM_TRY_SET_STATE(pTimer, TMTIMERSTATE_EXPIRED_GET_UNLINK, TMTIMERSTATE_ACTIVE, fRc);
2159 if (fRc)
2160 {
2161 /* unlink */
2162 const PTMTIMER pPrev = TMTIMER_GET_PREV(pTimer);
2163 if (pPrev)
2164 TMTIMER_SET_NEXT(pPrev, pNext);
2165 else
2166 {
2167 TMTIMER_SET_HEAD(pQueue, pNext);
2168 pQueue->u64Expire = pNext ? pNext->u64Expire : INT64_MAX;
2169 }
2170 if (pNext)
2171 TMTIMER_SET_PREV(pNext, pPrev);
2172 pTimer->offNext = 0;
2173 pTimer->offPrev = 0;
2174
2175 /* advance the clock - don't permit timers to be out of order or armed in the 'past'. */
2176#ifdef DEBUG_bird
2177#ifdef VBOX_STRICT
2178 AssertMsg(pTimer->u64Expire >= u64Prev, ("%'RU64 < %'RU64 %s\n", pTimer->u64Expire, u64Prev, pTimer->pszDesc));
2179 u64Prev = pTimer->u64Expire;
2180#endif
2181#endif
2182 ASMAtomicWriteU64(&pVM->tm.s.u64VirtualSync, pTimer->u64Expire);
2183 ASMAtomicWriteBool(&pVM->tm.s.fVirtualSyncTicking, false);
2184
2185 /* fire */
2186 TM_SET_STATE(pTimer, TMTIMERSTATE_EXPIRED_DELIVER);
2187 switch (pTimer->enmType)
2188 {
2189 case TMTIMERTYPE_DEV: pTimer->u.Dev.pfnTimer(pTimer->u.Dev.pDevIns, pTimer, pTimer->pvUser); break;
2190 case TMTIMERTYPE_USB: pTimer->u.Usb.pfnTimer(pTimer->u.Usb.pUsbIns, pTimer, pTimer->pvUser); break;
2191 case TMTIMERTYPE_DRV: pTimer->u.Drv.pfnTimer(pTimer->u.Drv.pDrvIns, pTimer, pTimer->pvUser); break;
2192 case TMTIMERTYPE_INTERNAL: pTimer->u.Internal.pfnTimer(pVM, pTimer, pTimer->pvUser); break;
2193 case TMTIMERTYPE_EXTERNAL: pTimer->u.External.pfnTimer(pTimer->pvUser); break;
2194 default:
2195 AssertMsgFailed(("Invalid timer type %d (%s)\n", pTimer->enmType, pTimer->pszDesc));
2196 break;
2197 }
2198
2199 /* Change the state if it wasn't changed already in the handler.
2200 Reset the Hz hint too since this is the same as TMTimerStop. */
2201 TM_TRY_SET_STATE(pTimer, TMTIMERSTATE_STOPPED, TMTIMERSTATE_EXPIRED_DELIVER, fRc);
2202 if (fRc && pTimer->uHzHint)
2203 {
2204 if (pTimer->uHzHint >= pVM->tm.s.uMaxHzHint)
2205 ASMAtomicWriteBool(&pVM->tm.s.fHzHintNeedsUpdating, true);
2206 pTimer->uHzHint = 0;
2207 }
2208 Log2(("tmR3TimerQueueRun: new state %s\n", tmTimerState(pTimer->enmState)));
2209 }
2210 if (pCritSect)
2211 PDMCritSectLeave(pCritSect);
2212 } /* run loop */
2213
2214 /*
2215 * Restart the clock if it was stopped to serve any timers,
2216 * and start/adjust catch-up if necessary.
2217 */
2218 if ( !pVM->tm.s.fVirtualSyncTicking
2219 && pVM->tm.s.cVirtualTicking)
2220 {
2221 STAM_COUNTER_INC(&pVM->tm.s.StatVirtualSyncRunRestart);
2222
2223 /* calc the slack we've handed out. */
2224 const uint64_t u64VirtualNow2 = TMVirtualGetNoCheck(pVM);
2225 Assert(u64VirtualNow2 >= u64VirtualNow);
2226#ifdef DEBUG_bird
2227 AssertMsg(pVM->tm.s.u64VirtualSync >= u64Now, ("%'RU64 < %'RU64\n", pVM->tm.s.u64VirtualSync, u64Now));
2228#endif
2229 const uint64_t offSlack = pVM->tm.s.u64VirtualSync - u64Now;
2230 STAM_STATS({
2231 if (offSlack)
2232 {
2233 PSTAMPROFILE p = &pVM->tm.s.StatVirtualSyncRunSlack;
2234 p->cPeriods++;
2235 p->cTicks += offSlack;
2236 if (p->cTicksMax < offSlack) p->cTicksMax = offSlack;
2237 if (p->cTicksMin > offSlack) p->cTicksMin = offSlack;
2238 }
2239 });
2240
2241 /* Let the time run a little bit while we were busy running timers(?). */
2242 uint64_t u64Elapsed;
2243#define MAX_ELAPSED 30000U /* ns */
2244 if (offSlack > MAX_ELAPSED)
2245 u64Elapsed = 0;
2246 else
2247 {
2248 u64Elapsed = u64VirtualNow2 - u64VirtualNow;
2249 if (u64Elapsed > MAX_ELAPSED)
2250 u64Elapsed = MAX_ELAPSED;
2251 u64Elapsed = u64Elapsed > offSlack ? u64Elapsed - offSlack : 0;
2252 }
2253#undef MAX_ELAPSED
2254
2255 /* Calc the current offset. */
2256 uint64_t offNew = u64VirtualNow2 - pVM->tm.s.u64VirtualSync - u64Elapsed;
2257 Assert(!(offNew & RT_BIT_64(63)));
2258 uint64_t offLag = offNew - pVM->tm.s.offVirtualSyncGivenUp;
2259 Assert(!(offLag & RT_BIT_64(63)));
2260
2261 /*
2262 * Deal with starting, adjusting and stopping catchup.
2263 */
2264 if (pVM->tm.s.fVirtualSyncCatchUp)
2265 {
2266 if (offLag <= pVM->tm.s.u64VirtualSyncCatchUpStopThreshold)
2267 {
2268 /* stop */
2269 STAM_PROFILE_ADV_STOP(&pVM->tm.s.StatVirtualSyncCatchup, c);
2270 ASMAtomicWriteBool(&pVM->tm.s.fVirtualSyncCatchUp, false);
2271 Log4(("TM: %'RU64/-%'8RU64: caught up [pt]\n", u64VirtualNow2 - offNew, offLag));
2272 }
2273 else if (offLag <= pVM->tm.s.u64VirtualSyncCatchUpGiveUpThreshold)
2274 {
2275 /* adjust */
2276 unsigned i = 0;
2277 while ( i + 1 < RT_ELEMENTS(pVM->tm.s.aVirtualSyncCatchUpPeriods)
2278 && offLag >= pVM->tm.s.aVirtualSyncCatchUpPeriods[i + 1].u64Start)
2279 i++;
2280 if (pVM->tm.s.u32VirtualSyncCatchUpPercentage < pVM->tm.s.aVirtualSyncCatchUpPeriods[i].u32Percentage)
2281 {
2282 STAM_COUNTER_INC(&pVM->tm.s.aStatVirtualSyncCatchupAdjust[i]);
2283 ASMAtomicWriteU32(&pVM->tm.s.u32VirtualSyncCatchUpPercentage, pVM->tm.s.aVirtualSyncCatchUpPeriods[i].u32Percentage);
2284 Log4(("TM: %'RU64/%'8RU64: adj %u%%\n", u64VirtualNow2 - offNew, offLag, pVM->tm.s.u32VirtualSyncCatchUpPercentage));
2285 }
2286 pVM->tm.s.u64VirtualSyncCatchUpPrev = u64VirtualNow2;
2287 }
2288 else
2289 {
2290 /* give up */
2291 STAM_COUNTER_INC(&pVM->tm.s.StatVirtualSyncGiveUp);
2292 STAM_PROFILE_ADV_STOP(&pVM->tm.s.StatVirtualSyncCatchup, c);
2293 ASMAtomicWriteU64((uint64_t volatile *)&pVM->tm.s.offVirtualSyncGivenUp, offNew);
2294 ASMAtomicWriteBool(&pVM->tm.s.fVirtualSyncCatchUp, false);
2295 Log4(("TM: %'RU64/%'8RU64: give up %u%%\n", u64VirtualNow2 - offNew, offLag, pVM->tm.s.u32VirtualSyncCatchUpPercentage));
2296 LogRel(("TM: Giving up catch-up attempt at a %'RU64 ns lag; new total: %'RU64 ns\n", offLag, offNew));
2297 }
2298 }
2299 else if (offLag >= pVM->tm.s.aVirtualSyncCatchUpPeriods[0].u64Start)
2300 {
2301 if (offLag <= pVM->tm.s.u64VirtualSyncCatchUpGiveUpThreshold)
2302 {
2303 /* start */
2304 STAM_PROFILE_ADV_START(&pVM->tm.s.StatVirtualSyncCatchup, c);
2305 unsigned i = 0;
2306 while ( i + 1 < RT_ELEMENTS(pVM->tm.s.aVirtualSyncCatchUpPeriods)
2307 && offLag >= pVM->tm.s.aVirtualSyncCatchUpPeriods[i + 1].u64Start)
2308 i++;
2309 STAM_COUNTER_INC(&pVM->tm.s.aStatVirtualSyncCatchupInitial[i]);
2310 ASMAtomicWriteU32(&pVM->tm.s.u32VirtualSyncCatchUpPercentage, pVM->tm.s.aVirtualSyncCatchUpPeriods[i].u32Percentage);
2311 ASMAtomicWriteBool(&pVM->tm.s.fVirtualSyncCatchUp, true);
2312 Log4(("TM: %'RU64/%'8RU64: catch-up %u%%\n", u64VirtualNow2 - offNew, offLag, pVM->tm.s.u32VirtualSyncCatchUpPercentage));
2313 }
2314 else
2315 {
2316 /* don't bother */
2317 STAM_COUNTER_INC(&pVM->tm.s.StatVirtualSyncGiveUpBeforeStarting);
2318 ASMAtomicWriteU64((uint64_t volatile *)&pVM->tm.s.offVirtualSyncGivenUp, offNew);
2319 Log4(("TM: %'RU64/%'8RU64: give up\n", u64VirtualNow2 - offNew, offLag));
2320 LogRel(("TM: Not bothering to attempt catching up a %'RU64 ns lag; new total: %'RU64\n", offLag, offNew));
2321 }
2322 }
2323
2324 /*
2325 * Update the offset and restart the clock.
2326 */
2327 Assert(!(offNew & RT_BIT_64(63)));
2328 ASMAtomicWriteU64(&pVM->tm.s.offVirtualSync, offNew);
2329 ASMAtomicWriteBool(&pVM->tm.s.fVirtualSyncTicking, true);
2330 }
2331}
2332
2333
2334/**
2335 * Deals with stopped Virtual Sync clock.
2336 *
2337 * This is called by the forced action flag handling code in EM when it
2338 * encounters the VM_FF_TM_VIRTUAL_SYNC flag. It is called by all VCPUs and they
2339 * will block on the VirtualSyncLock until the pending timers have been executed
2340 * and the clock restarted.
2341 *
2342 * @param pVM The VM to run the timers for.
2343 * @param   pVCpu       The virtual CPU we're running on.
2344 *
2345 * @thread EMTs
2346 */
2347VMMR3_INT_DECL(void) TMR3VirtualSyncFF(PVM pVM, PVMCPU pVCpu)
2348{
2349 Log2(("TMR3VirtualSyncFF:\n"));
2350
2351 /*
2352 * The EMT doing the timers is diverted to them.
2353 */
2354 if (pVCpu->idCpu == pVM->tm.s.idTimerCpu)
2355 TMR3TimerQueuesDo(pVM);
2356 /*
2357 * The other EMTs will block on the virtual sync lock and the first owner
2358 * will run the queue and thus restarting the clock.
2359 *
2360     * Note! This is very suboptimal code wrt resuming execution when there
2361 * are more than two Virtual CPUs, since they will all have to enter
2362 * the critical section one by one. But it's a very simple solution
2363 * which will have to do the job for now.
2364 */
2365 else
2366 {
2367 STAM_PROFILE_START(&pVM->tm.s.StatVirtualSyncFF, a);
2368 tmVirtualSyncLock(pVM);
2369 if (pVM->tm.s.fVirtualSyncTicking)
2370 {
2371 STAM_PROFILE_STOP(&pVM->tm.s.StatVirtualSyncFF, a); /* before the unlock! */
2372 tmVirtualSyncUnlock(pVM);
2373 Log2(("TMR3VirtualSyncFF: ticking\n"));
2374 }
2375 else
2376 {
2377 tmVirtualSyncUnlock(pVM);
2378
2379 /* try run it. */
2380 tmTimerLock(pVM);
2381 tmVirtualSyncLock(pVM);
2382 if (pVM->tm.s.fVirtualSyncTicking)
2383 Log2(("TMR3VirtualSyncFF: ticking (2)\n"));
2384 else
2385 {
2386 ASMAtomicWriteBool(&pVM->tm.s.fRunningVirtualSyncQueue, true);
2387 Log2(("TMR3VirtualSyncFF: running queue\n"));
2388
2389 if (pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL_SYNC].offSchedule)
2390 tmTimerQueueSchedule(pVM, &pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL_SYNC]);
2391 tmR3TimerQueueRunVirtualSync(pVM);
2392 if (pVM->tm.s.fVirtualSyncTicking) /** @todo move into tmR3TimerQueueRunVirtualSync - FIXME */
2393 VM_FF_CLEAR(pVM, VM_FF_TM_VIRTUAL_SYNC);
2394
2395 ASMAtomicWriteBool(&pVM->tm.s.fRunningVirtualSyncQueue, false);
2396 }
2397 STAM_PROFILE_STOP(&pVM->tm.s.StatVirtualSyncFF, a); /* before the unlock! */
2398 tmVirtualSyncUnlock(pVM);
2399 tmTimerUnlock(pVM);
2400 }
2401 }
2402}
2403
2404
2405/** @name Saved state values
2406 * @{ */
2407#define TMTIMERSTATE_SAVED_PENDING_STOP 4
2408#define TMTIMERSTATE_SAVED_PENDING_SCHEDULE 7
2409/** @} */
2410
2411
2412/**
2413 * Saves the state of a timer to a saved state.
2414 *
2415 * @returns VBox status.
2416 * @param pTimer Timer to save.
2417 * @param pSSM Save State Manager handle.
2418 */
2419VMMR3DECL(int) TMR3TimerSave(PTMTIMERR3 pTimer, PSSMHANDLE pSSM)
2420{
2421 LogFlow(("TMR3TimerSave: %p:{enmState=%s, .pszDesc={%s}} pSSM=%p\n", pTimer, tmTimerState(pTimer->enmState), pTimer->pszDesc, pSSM));
2422 switch (pTimer->enmState)
2423 {
2424 case TMTIMERSTATE_STOPPED:
2425 case TMTIMERSTATE_PENDING_STOP:
2426 case TMTIMERSTATE_PENDING_STOP_SCHEDULE:
2427 return SSMR3PutU8(pSSM, TMTIMERSTATE_SAVED_PENDING_STOP);
2428
2429 case TMTIMERSTATE_PENDING_SCHEDULE_SET_EXPIRE:
2430 case TMTIMERSTATE_PENDING_RESCHEDULE_SET_EXPIRE:
2431 AssertMsgFailed(("u64Expire is being updated! (%s)\n", pTimer->pszDesc));
2432 if (!RTThreadYield())
2433 RTThreadSleep(1);
2434 /* fall thru */
2435 case TMTIMERSTATE_ACTIVE:
2436 case TMTIMERSTATE_PENDING_SCHEDULE:
2437 case TMTIMERSTATE_PENDING_RESCHEDULE:
2438 SSMR3PutU8(pSSM, TMTIMERSTATE_SAVED_PENDING_SCHEDULE);
2439 return SSMR3PutU64(pSSM, pTimer->u64Expire);
2440
2441 case TMTIMERSTATE_EXPIRED_GET_UNLINK:
2442 case TMTIMERSTATE_EXPIRED_DELIVER:
2443 case TMTIMERSTATE_DESTROY:
2444 case TMTIMERSTATE_FREE:
2445 AssertMsgFailed(("Invalid timer state %d %s (%s)\n", pTimer->enmState, tmTimerState(pTimer->enmState), pTimer->pszDesc));
2446 return SSMR3HandleSetStatus(pSSM, VERR_TM_INVALID_STATE);
2447 }
2448
2449 AssertMsgFailed(("Unknown timer state %d (%s)\n", pTimer->enmState, pTimer->pszDesc));
2450 return SSMR3HandleSetStatus(pSSM, VERR_TM_UNKNOWN_STATE);
2451}
2452
2453
2454/**
2455 * Loads the state of a timer from a saved state.
2456 *
2457 * @returns VBox status.
2458 * @param pTimer Timer to restore.
2459 * @param pSSM Save State Manager handle.
2460 */
2461VMMR3DECL(int) TMR3TimerLoad(PTMTIMERR3 pTimer, PSSMHANDLE pSSM)
2462{
2463 Assert(pTimer); Assert(pSSM); VM_ASSERT_EMT(pTimer->pVMR3);
2464 LogFlow(("TMR3TimerLoad: %p:{enmState=%s, .pszDesc={%s}} pSSM=%p\n", pTimer, tmTimerState(pTimer->enmState), pTimer->pszDesc, pSSM));
2465
2466 /*
2467 * Load the state and validate it.
2468 */
2469 uint8_t u8State;
2470 int rc = SSMR3GetU8(pSSM, &u8State);
2471 if (RT_FAILURE(rc))
2472 return rc;
2473#if 1 /* Workaround for accidental state shift in r47786 (2009-05-26 19:12:12). */ /** @todo remove this in a few weeks! */
2474 if ( u8State == TMTIMERSTATE_SAVED_PENDING_STOP + 1
2475 || u8State == TMTIMERSTATE_SAVED_PENDING_SCHEDULE + 1)
2476 u8State--;
2477#endif
2478 if ( u8State != TMTIMERSTATE_SAVED_PENDING_STOP
2479 && u8State != TMTIMERSTATE_SAVED_PENDING_SCHEDULE)
2480 {
2481 AssertLogRelMsgFailed(("u8State=%d\n", u8State));
2482 return SSMR3HandleSetStatus(pSSM, VERR_TM_LOAD_STATE);
2483 }
2484
2485 /* Enter the critical section to make TMTimerSet/Stop happy. */
2486 PPDMCRITSECT pCritSect = pTimer->pCritSect;
2487 if (pCritSect)
2488 PDMCritSectEnter(pCritSect, VERR_INTERNAL_ERROR);
2489
2490 if (u8State == TMTIMERSTATE_SAVED_PENDING_SCHEDULE)
2491 {
2492 /*
2493 * Load the expire time.
2494 */
2495 uint64_t u64Expire;
2496        rc = SSMR3GetU64(pSSM, &u64Expire);
2497        if (RT_FAILURE(rc))
2498        {
                if (pCritSect)
                    PDMCritSectLeave(pCritSect);
                return rc;
            }
2499
2500 /*
2501 * Set it.
2502 */
2503 Log(("u8State=%d u64Expire=%llu\n", u8State, u64Expire));
2504 rc = TMTimerSet(pTimer, u64Expire);
2505 }
2506 else
2507 {
2508 /*
2509 * Stop it.
2510 */
2511 Log(("u8State=%d\n", u8State));
2512 rc = TMTimerStop(pTimer);
2513 }
2514
2515 if (pCritSect)
2516 PDMCritSectLeave(pCritSect);
2517
2518 /*
2519 * On failure set SSM status.
2520 */
2521 if (RT_FAILURE(rc))
2522 rc = SSMR3HandleSetStatus(pSSM, rc);
2523 return rc;
2524}
2525
2526
2527/**
2528 * Associates a critical section with a timer.
2529 *
2530 * The critical section will be entered prior to doing the timer callback, thus
2531 * avoiding potential races between the timer thread and other threads trying to
2532 * stop or adjust the timer expiration while it's being delivered. The timer
2533 * thread will leave the critical section when the timer callback returns.
2534 *
2535 * In strict builds, ownership of the critical section will be asserted by
2536 * TMTimerSet, TMTimerStop, TMTimerGetExpire and TMTimerDestroy (when called at
2537 * runtime).
2538 *
2539 * @retval VINF_SUCCESS on success.
2540 * @retval VERR_INVALID_HANDLE if the timer handle is NULL or invalid
2541 * (asserted).
2542 * @retval VERR_INVALID_PARAMETER if pCritSect is NULL or has an invalid magic
2543 * (asserted).
2544 * @retval VERR_ALREADY_EXISTS if a critical section was already associated
2545 * with the timer (asserted).
2546 * @retval VERR_INVALID_STATE if the timer isn't stopped.
2547 *
2548 * @param pTimer The timer handle.
2549 * @param pCritSect The critical section. The caller must make sure this
2550 *                      is around for the lifetime of the timer.
2551 *
2552 * @thread Any, but the caller is responsible for making sure the timer is not
2553 * active.
2554 */
2555VMMR3DECL(int) TMR3TimerSetCritSect(PTMTIMERR3 pTimer, PPDMCRITSECT pCritSect)
2556{
2557 AssertPtrReturn(pTimer, VERR_INVALID_HANDLE);
2558 AssertPtrReturn(pCritSect, VERR_INVALID_PARAMETER);
2559 const char *pszName = PDMR3CritSectName(pCritSect); /* exploited for validation */
2560 AssertReturn(pszName, VERR_INVALID_PARAMETER);
2561 AssertReturn(!pTimer->pCritSect, VERR_ALREADY_EXISTS);
2562 AssertReturn(pTimer->enmState == TMTIMERSTATE_STOPPED, VERR_INVALID_STATE);
2563 LogFlow(("pTimer=%p (%s) pCritSect=%p (%s)\n", pTimer, pTimer->pszDesc, pCritSect, pszName));
2564
2565 pTimer->pCritSect = pCritSect;
2566 return VINF_SUCCESS;
2567}
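
/*
 * Typical usage (sketch; the callback and member names are illustrative):
 * create the timer without the implicit device critical section and
 * associate a private one instead:
 *
 *  rc = TMR3TimerCreateDevice(pVM, pDevIns, TMCLOCK_VIRTUAL, devFooTimer,
 *                             NULL, TMTIMER_FLAGS_NO_CRIT_SECT, "Foo", &pTimer);
 *  if (RT_SUCCESS(rc))
 *      rc = TMR3TimerSetCritSect(pTimer, &pThis->TimerCritSect);
 */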
2568
2569
2570/**
2571 * Get the real world UTC time adjusted for VM lag.
2572 *
2573 * @returns pTime.
2574 * @param pVM The VM instance.
2575 * @param pTime Where to store the time.
2576 */
2577VMMR3_INT_DECL(PRTTIMESPEC) TMR3UtcNow(PVM pVM, PRTTIMESPEC pTime)
2578{
2579 RTTimeNow(pTime);
2580 RTTimeSpecSubNano(pTime, ASMAtomicReadU64(&pVM->tm.s.offVirtualSync) - ASMAtomicReadU64((uint64_t volatile *)&pVM->tm.s.offVirtualSyncGivenUp));
2581 RTTimeSpecAddNano(pTime, pVM->tm.s.offUTC);
2582 return pTime;
2583}
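
/*
 * The subtraction above removes the current virtual sync lag
 * (offVirtualSync - offVirtualSyncGivenUp), so a caller such as an RTC
 * device sees a wall clock that is consistent with the guest's possibly
 * lagging virtual sync clock. Usage is simply:
 *
 *  RTTIMESPEC Now;
 *  TMR3UtcNow(pVM, &Now);
 */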
2584
2585
2586/**
2587 * Pauses all clocks except TMCLOCK_REAL.
2588 *
2589 * @returns VBox status code, all errors are asserted.
2590 * @param pVM The VM handle.
2591 * @param pVCpu The virtual CPU handle.
2592 * @thread EMT corresponding to the virtual CPU handle.
2593 */
2594VMMR3DECL(int) TMR3NotifySuspend(PVM pVM, PVMCPU pVCpu)
2595{
2596 VMCPU_ASSERT_EMT(pVCpu);
2597
2598 /*
2599 * The shared virtual clock (includes virtual sync which is tied to it).
2600 */
2601 tmTimerLock(pVM); /* Paranoia: Exploiting the timer lock here. */
2602 int rc = tmVirtualPauseLocked(pVM);
2603 tmTimerUnlock(pVM);
2604 if (RT_FAILURE(rc))
2605 return rc;
2606
2607 /*
2608 * Pause the TSC last since it is normally linked to the virtual
2609     * sync clock, so the above code may actually stop both clocks.
2610 */
2611 rc = tmCpuTickPause(pVM, pVCpu);
2612 if (RT_FAILURE(rc))
2613 return rc;
2614
2615#ifndef VBOX_WITHOUT_NS_ACCOUNTING
2616 /*
2617 * Update cNsTotal.
2618 */
2619 uint32_t uGen = ASMAtomicIncU32(&pVCpu->tm.s.uTimesGen); Assert(uGen & 1);
2620 pVCpu->tm.s.cNsTotal = RTTimeNanoTS() - pVCpu->tm.s.u64NsTsStartTotal;
2621 pVCpu->tm.s.cNsOther = pVCpu->tm.s.cNsTotal - pVCpu->tm.s.cNsExecuting - pVCpu->tm.s.cNsHalted;
2622 ASMAtomicWriteU32(&pVCpu->tm.s.uTimesGen, (uGen | 1) + 1);
2623#endif
2624
2625 return VINF_SUCCESS;
2626}
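
/*
 * The uTimesGen handling above is a sequence lock: the generation count is
 * odd while an update is in progress. A reader of cNsTotal and friends
 * would do something along these lines (sketch):
 *
 *  uint32_t uGen;
 *  uint64_t cNsTotal;
 *  do
 *  {
 *      uGen     = ASMAtomicReadU32(&pVCpu->tm.s.uTimesGen);
 *      cNsTotal = pVCpu->tm.s.cNsTotal;
 *  } while (   (uGen & 1)                 // writer active
 *           || uGen != ASMAtomicReadU32(&pVCpu->tm.s.uTimesGen));
 */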


/**
 * Resumes all clocks except TMCLOCK_REAL.
 *
 * @returns VBox status code, all errors are asserted.
 * @param   pVM         The VM handle.
 * @param   pVCpu       The virtual CPU handle.
 * @thread  EMT corresponding to the virtual CPU handle.
 */
VMMR3DECL(int) TMR3NotifyResume(PVM pVM, PVMCPU pVCpu)
{
    VMCPU_ASSERT_EMT(pVCpu);
    int rc;

#ifndef VBOX_WITHOUT_NS_ACCOUNTING
    /*
     * Set u64NsTsStartTotal.  There is no need to back this out if either of
     * the two calls below fails.
     */
    pVCpu->tm.s.u64NsTsStartTotal = RTTimeNanoTS() - pVCpu->tm.s.cNsTotal;
#endif

    /*
     * Resume the TSC first since it is normally linked to the virtual sync
     * clock, so it may actually not be resumed until we've executed the code
     * below.
     */
    if (!pVM->tm.s.fTSCTiedToExecution)
    {
        rc = tmCpuTickResume(pVM, pVCpu);
        if (RT_FAILURE(rc))
            return rc;
    }

    /*
     * The shared virtual clock (includes virtual sync which is tied to it).
     */
    tmTimerLock(pVM);                           /* Paranoia: Exploiting the timer lock here. */
    rc = tmVirtualResumeLocked(pVM);
    tmTimerUnlock(pVM);

    return rc;
}
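/* Illustration only -- not part of the original file.  The notifications are
 * expected in suspend/resume pairs from the EMT owning the virtual CPU, e.g.
 * around a VM state change.  A hedged sketch with a hypothetical caller: */
#if 0
static void sketchPauseResume(PVM pVM, PVMCPU pVCpu)
{
    int rc = TMR3NotifySuspend(pVM, pVCpu);     /* stop the guest clocks */
    AssertRC(rc);
    /* ... save state or reconfigure while guest time stands still ... */
    rc = TMR3NotifyResume(pVM, pVCpu);          /* restart the guest clocks */
    AssertRC(rc);
}
#endif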


/**
 * Sets the warp drive percent of the virtual time.
 *
 * @returns VBox status code.
 * @param   pVM         The VM handle.
 * @param   u32Percent  The new percentage. 100 means normal operation.
 */
VMMDECL(int) TMR3SetWarpDrive(PVM pVM, uint32_t u32Percent)
{
    return VMR3ReqCallWait(pVM, VMCPUID_ANY, (PFNRT)tmR3SetWarpDrive, 2, pVM, u32Percent);
}


/**
 * EMT worker for TMR3SetWarpDrive.
 *
 * @returns VBox status code.
 * @param   pVM         The VM handle.
 * @param   u32Percent  See TMR3SetWarpDrive().
 * @internal
 */
static DECLCALLBACK(int) tmR3SetWarpDrive(PVM pVM, uint32_t u32Percent)
{
    PVMCPU pVCpu = VMMGetCpu(pVM);

    /*
     * Validate it.
     */
    AssertMsgReturn(u32Percent >= 2 && u32Percent <= 20000,
                    ("%RX32 is not between 2 and 20000 (inclusive).\n", u32Percent),
                    VERR_INVALID_PARAMETER);

/** @todo This isn't a feature specific to virtual time, move the variables to
 *        TM level and make it affect TMR3UTCNow as well! */

    /*
     * If the time is running we'll have to pause it before we can change
     * the warp drive settings.
     */
    tmTimerLock(pVM);                           /* Paranoia: Exploiting the timer lock here. */
    bool fPaused = !!pVM->tm.s.cVirtualTicking;
    if (fPaused) /** @todo this isn't really working, but wtf. */
        TMR3NotifySuspend(pVM, pVCpu);

    pVM->tm.s.u32VirtualWarpDrivePercentage = u32Percent;
    pVM->tm.s.fVirtualWarpDrive = u32Percent != 100;
    LogRel(("TM: u32VirtualWarpDrivePercentage=%RI32 fVirtualWarpDrive=%RTbool\n",
            pVM->tm.s.u32VirtualWarpDrivePercentage, pVM->tm.s.fVirtualWarpDrive));

    if (fPaused)
        TMR3NotifyResume(pVM, pVCpu);
    tmTimerUnlock(pVM);
    return VINF_SUCCESS;
}
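/* Illustration only -- not part of the original file.  100 means real time,
 * 200 runs the guest's virtual clocks at twice host speed, 50 at half; the
 * worker above rejects anything outside [2, 20000].  Hypothetical caller: */
#if 0
static void sketchDoubleSpeed(PVM pVM)
{
    int rc = TMR3SetWarpDrive(pVM, 200);        /* guest time at 2x host time */
    AssertRC(rc);
}
#endif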


/**
 * Gets the performance information for one virtual CPU as seen by the VMM.
 *
 * The returned times cover the period where the VM is running and will be
 * reset when restoring a previous VM state (at least for the time being).
 *
 * @retval  VINF_SUCCESS on success.
 * @retval  VERR_NOT_IMPLEMENTED if not compiled in.
 * @retval  VERR_INVALID_STATE if the VM handle is bad.
 * @retval  VERR_INVALID_PARAMETER if idCpu is out of range.
 *
 * @param   pVM             The VM handle.
 * @param   idCpu           The ID of the virtual CPU whose times to get.
 * @param   pcNsTotal       Where to store the total run time (nano seconds) of
 *                          the CPU, i.e. the sum of the three other returns.
 *                          Optional.
 * @param   pcNsExecuting   Where to store the time (nano seconds) spent
 *                          executing guest code.  Optional.
 * @param   pcNsHalted      Where to store the time (nano seconds) spent
 *                          halted.  Optional.
 * @param   pcNsOther       Where to store the time (nano seconds) spent
 *                          preempted by the host scheduler, on virtualization
 *                          overhead and on other tasks.  Optional.
 */
VMMR3DECL(int) TMR3GetCpuLoadTimes(PVM pVM, VMCPUID idCpu, uint64_t *pcNsTotal, uint64_t *pcNsExecuting,
                                   uint64_t *pcNsHalted, uint64_t *pcNsOther)
{
    VM_ASSERT_VALID_EXT_RETURN(pVM, VERR_INVALID_STATE);
    AssertReturn(idCpu < pVM->cCpus, VERR_INVALID_PARAMETER);

#ifndef VBOX_WITHOUT_NS_ACCOUNTING
    /*
     * Get a stable result set.
     * This should be way quicker than an EMT request.
     */
    PVMCPU      pVCpu        = &pVM->aCpus[idCpu];
    uint32_t    uTimesGen    = ASMAtomicReadU32(&pVCpu->tm.s.uTimesGen);
    uint64_t    cNsTotal     = pVCpu->tm.s.cNsTotal;
    uint64_t    cNsExecuting = pVCpu->tm.s.cNsExecuting;
    uint64_t    cNsHalted    = pVCpu->tm.s.cNsHalted;
    uint64_t    cNsOther     = pVCpu->tm.s.cNsOther;
    while (   (uTimesGen & 1) /* update in progress */
           || uTimesGen != ASMAtomicReadU32(&pVCpu->tm.s.uTimesGen))
    {
        RTThreadYield();
        uTimesGen    = ASMAtomicReadU32(&pVCpu->tm.s.uTimesGen);
        cNsTotal     = pVCpu->tm.s.cNsTotal;
        cNsExecuting = pVCpu->tm.s.cNsExecuting;
        cNsHalted    = pVCpu->tm.s.cNsHalted;
        cNsOther     = pVCpu->tm.s.cNsOther;
    }

    /*
     * Fill in the return values.
     */
    if (pcNsTotal)
        *pcNsTotal = cNsTotal;
    if (pcNsExecuting)
        *pcNsExecuting = cNsExecuting;
    if (pcNsHalted)
        *pcNsHalted = cNsHalted;
    if (pcNsOther)
        *pcNsOther = cNsOther;

    return VINF_SUCCESS;

#else
    return VERR_NOT_IMPLEMENTED;
#endif
}
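/* Note on the retry loop above: uTimesGen is a generation counter which EMT
 * bumps to an odd value before updating the accounting fields and to an even
 * value afterwards (see TMR3NotifySuspend), so readers spin while it is odd
 * or has changed under them -- a seqlock-style stable read.
 *
 * Illustration only -- a hypothetical caller computing a rough load figure: */
#if 0
static void sketchLogCpu0Load(PVM pVM)
{
    uint64_t cNsTotal, cNsExecuting;
    int rc = TMR3GetCpuLoadTimes(pVM, 0 /*idCpu*/, &cNsTotal, &cNsExecuting,
                                 NULL /*pcNsHalted*/, NULL /*pcNsOther*/);
    if (RT_SUCCESS(rc) && cNsTotal)
        LogRel(("CPU0 spent %RU64%% of its run time executing guest code\n",
                cNsExecuting * 100 / cNsTotal));
}
#endif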

#ifndef VBOX_WITHOUT_NS_ACCOUNTING

/**
 * Helper for tmR3CpuLoadTimer.
 *
 * @param   pState          The state to update.
 * @param   cNsTotal        The cumulative total run time.
 * @param   cNsExecuting    The cumulative time spent executing.
 * @param   cNsHalted       The cumulative time spent halted.
 */
DECLINLINE(void) tmR3CpuLoadTimerMakeUpdate(PTMCPULOADSTATE pState,
                                            uint64_t cNsTotal,
                                            uint64_t cNsExecuting,
                                            uint64_t cNsHalted)
{
    /* Calc deltas */
    uint64_t cNsTotalDelta = cNsTotal - pState->cNsPrevTotal;
    pState->cNsPrevTotal = cNsTotal;

    uint64_t cNsExecutingDelta = cNsExecuting - pState->cNsPrevExecuting;
    pState->cNsPrevExecuting = cNsExecuting;

    uint64_t cNsHaltedDelta = cNsHalted - pState->cNsPrevHalted;
    pState->cNsPrevHalted = cNsHalted;

    /* Calc pcts. */
    if (!cNsTotalDelta)
    {
        pState->cPctExecuting = 0;
        pState->cPctHalted    = 100;
        pState->cPctOther     = 0;
    }
    else if (cNsTotalDelta < UINT64_MAX / 4)
    {
        pState->cPctExecuting = (uint8_t)(cNsExecutingDelta * 100 / cNsTotalDelta);
        pState->cPctHalted    = (uint8_t)(cNsHaltedDelta    * 100 / cNsTotalDelta);
        pState->cPctOther     = (uint8_t)((cNsTotalDelta - cNsExecutingDelta - cNsHaltedDelta) * 100 / cNsTotalDelta);
    }
    else
    {
        pState->cPctExecuting = 0;
        pState->cPctHalted    = 100;
        pState->cPctOther     = 0;
    }
}
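/* Worked example with illustrative numbers: with one second between updates,
 * cNsTotalDelta = 1 000 000 000.  If the CPU spent 250ms executing guest code
 * and 500ms halted, the results are cPctExecuting = 25, cPctHalted = 50 and
 * cPctOther = 25.  The UINT64_MAX / 4 guard presumably catches deltas that
 * have wrapped around (counters going backwards), falling back to the
 * all-halted defaults instead of computing nonsense percentages. */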


/**
 * Timer callback that calculates the CPU load since the last time it was
 * called.
 *
 * @param   pVM         The VM handle.
 * @param   pTimer      The timer.
 * @param   pvUser      NULL, unused.
 */
static DECLCALLBACK(void) tmR3CpuLoadTimer(PVM pVM, PTMTIMER pTimer, void *pvUser)
{
    /*
     * Re-arm the timer first.
     */
    int rc = TMTimerSetMillies(pTimer, 1000);
    AssertLogRelRC(rc);
    NOREF(pvUser);

    /*
     * Update the values for each CPU.
     */
    uint64_t cNsTotalAll     = 0;
    uint64_t cNsExecutingAll = 0;
    uint64_t cNsHaltedAll    = 0;
    for (VMCPUID iCpu = 0; iCpu < pVM->cCpus; iCpu++)
    {
        PVMCPU pVCpu = &pVM->aCpus[iCpu];

        /* Try get a stable data set. */
        uint32_t cTries       = 3;
        uint32_t uTimesGen    = ASMAtomicReadU32(&pVCpu->tm.s.uTimesGen);
        uint64_t cNsTotal     = pVCpu->tm.s.cNsTotal;
        uint64_t cNsExecuting = pVCpu->tm.s.cNsExecuting;
        uint64_t cNsHalted    = pVCpu->tm.s.cNsHalted;
        while (RT_UNLIKELY(   (uTimesGen & 1) /* update in progress */
                           || uTimesGen != ASMAtomicReadU32(&pVCpu->tm.s.uTimesGen)))
        {
            if (!--cTries)
                break;
            ASMNopPause();
            uTimesGen    = ASMAtomicReadU32(&pVCpu->tm.s.uTimesGen);
            cNsTotal     = pVCpu->tm.s.cNsTotal;
            cNsExecuting = pVCpu->tm.s.cNsExecuting;
            cNsHalted    = pVCpu->tm.s.cNsHalted;
        }

        /* Totals */
        cNsTotalAll     += cNsTotal;
        cNsExecutingAll += cNsExecuting;
        cNsHaltedAll    += cNsHalted;

        /* Calc the PCTs and update the state. */
        tmR3CpuLoadTimerMakeUpdate(&pVCpu->tm.s.CpuLoad, cNsTotal, cNsExecuting, cNsHalted);
    }

    /*
     * Update the value for all the CPUs.
     */
    tmR3CpuLoadTimerMakeUpdate(&pVM->tm.s.CpuLoad, cNsTotalAll, cNsExecutingAll, cNsHaltedAll);

    /** @todo Try add 1, 5 and 15 min load stats. */

}

#endif /* !VBOX_WITHOUT_NS_ACCOUNTING */

/**
 * Gets the 5 char clock name for the info tables.
 *
 * @returns The name.
 * @param   enmClock    The clock.
 */
DECLINLINE(const char *) tmR3Get5CharClockName(TMCLOCK enmClock)
{
    switch (enmClock)
    {
        case TMCLOCK_REAL:         return "Real ";
        case TMCLOCK_VIRTUAL:      return "Virt ";
        case TMCLOCK_VIRTUAL_SYNC: return "VrSy ";
        case TMCLOCK_TSC:          return "TSC  ";
        default:                   return "Bad  ";
    }
}


/**
 * Display all timers.
 *
 * @param   pVM         VM Handle.
 * @param   pHlp        The info helpers.
 * @param   pszArgs     Arguments, ignored.
 */
static DECLCALLBACK(void) tmR3TimerInfo(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
{
    NOREF(pszArgs);
    pHlp->pfnPrintf(pHlp,
                    "Timers (pVM=%p)\n"
                    "%.*s %.*s %.*s %.*s Clock %18s %18s %6s %-25s Description\n",
                    pVM,
                    sizeof(RTR3PTR) * 2, "pTimerR3        ",
                    sizeof(int32_t) * 2, "offNext ",
                    sizeof(int32_t) * 2, "offPrev ",
                    sizeof(int32_t) * 2, "offSched",
                    "Time",
                    "Expire",
                    "HzHint",
                    "State");
    tmTimerLock(pVM);
    for (PTMTIMERR3 pTimer = pVM->tm.s.pCreated; pTimer; pTimer = pTimer->pBigNext)
    {
        pHlp->pfnPrintf(pHlp,
                        "%p %08RX32 %08RX32 %08RX32 %s %18RU64 %18RU64 %6RU32 %-25s %s\n",
                        pTimer,
                        pTimer->offNext,
                        pTimer->offPrev,
                        pTimer->offScheduleNext,
                        tmR3Get5CharClockName(pTimer->enmClock),
                        TMTimerGet(pTimer),
                        pTimer->u64Expire,
                        pTimer->uHzHint,
                        tmTimerState(pTimer->enmState),
                        pTimer->pszDesc);
    }
    tmTimerUnlock(pVM);
}


/**
 * Display all active timers.
 *
 * @param   pVM         VM Handle.
 * @param   pHlp        The info helpers.
 * @param   pszArgs     Arguments, ignored.
 */
static DECLCALLBACK(void) tmR3TimerInfoActive(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
{
    NOREF(pszArgs);
    pHlp->pfnPrintf(pHlp,
                    "Active Timers (pVM=%p)\n"
                    "%.*s %.*s %.*s %.*s Clock %18s %18s %6s %-25s Description\n",
                    pVM,
                    sizeof(RTR3PTR) * 2, "pTimerR3        ",
                    sizeof(int32_t) * 2, "offNext ",
                    sizeof(int32_t) * 2, "offPrev ",
                    sizeof(int32_t) * 2, "offSched",
                    "Time",
                    "Expire",
                    "HzHint",
                    "State");
    for (unsigned iQueue = 0; iQueue < TMCLOCK_MAX; iQueue++)
    {
        tmTimerLock(pVM);
        for (PTMTIMERR3 pTimer = TMTIMER_GET_HEAD(&pVM->tm.s.paTimerQueuesR3[iQueue]);
             pTimer;
             pTimer = TMTIMER_GET_NEXT(pTimer))
        {
            pHlp->pfnPrintf(pHlp,
                            "%p %08RX32 %08RX32 %08RX32 %s %18RU64 %18RU64 %6RU32 %-25s %s\n",
                            pTimer,
                            pTimer->offNext,
                            pTimer->offPrev,
                            pTimer->offScheduleNext,
                            tmR3Get5CharClockName(pTimer->enmClock),
                            TMTimerGet(pTimer),
                            pTimer->u64Expire,
                            pTimer->uHzHint,
                            tmTimerState(pTimer->enmState),
                            pTimer->pszDesc);
        }
        tmTimerUnlock(pVM);
    }
}


/**
 * Display all clocks.
 *
 * @param   pVM         VM Handle.
 * @param   pHlp        The info helpers.
 * @param   pszArgs     Arguments, ignored.
 */
static DECLCALLBACK(void) tmR3InfoClocks(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
{
    NOREF(pszArgs);

    /*
     * Read the times first to avoid more than necessary time variation.
     */
    const uint64_t u64Virtual     = TMVirtualGet(pVM);
    const uint64_t u64VirtualSync = TMVirtualSyncGet(pVM);
    const uint64_t u64Real        = TMRealGet(pVM);

    for (VMCPUID i = 0; i < pVM->cCpus; i++)
    {
        PVMCPU   pVCpu  = &pVM->aCpus[i];
        uint64_t u64TSC = TMCpuTickGet(pVCpu);

        /*
         * TSC
         */
        pHlp->pfnPrintf(pHlp,
                        "Cpu Tick: %18RU64 (%#016RX64) %RU64Hz %s%s",
                        u64TSC, u64TSC, TMCpuTicksPerSecond(pVM),
                        pVCpu->tm.s.fTSCTicking ? "ticking" : "paused",
                        pVM->tm.s.fTSCVirtualized ? " - virtualized" : "");
        if (pVM->tm.s.fTSCUseRealTSC)
        {
            pHlp->pfnPrintf(pHlp, " - real tsc");
            if (pVCpu->tm.s.offTSCRawSrc)
                pHlp->pfnPrintf(pHlp, "\n offset %RU64", pVCpu->tm.s.offTSCRawSrc);
        }
        else
            pHlp->pfnPrintf(pHlp, " - virtual clock");
        pHlp->pfnPrintf(pHlp, "\n");
    }

    /*
     * virtual
     */
    pHlp->pfnPrintf(pHlp,
                    " Virtual: %18RU64 (%#016RX64) %RU64Hz %s",
                    u64Virtual, u64Virtual, TMVirtualGetFreq(pVM),
                    pVM->tm.s.cVirtualTicking ? "ticking" : "paused");
    if (pVM->tm.s.fVirtualWarpDrive)
        pHlp->pfnPrintf(pHlp, " WarpDrive %RU32 %%", pVM->tm.s.u32VirtualWarpDrivePercentage);
    pHlp->pfnPrintf(pHlp, "\n");

    /*
     * virtual sync
     */
    pHlp->pfnPrintf(pHlp,
                    "VirtSync: %18RU64 (%#016RX64) %s%s",
                    u64VirtualSync, u64VirtualSync,
                    pVM->tm.s.fVirtualSyncTicking ? "ticking" : "paused",
                    pVM->tm.s.fVirtualSyncCatchUp ? " - catchup" : "");
    if (pVM->tm.s.offVirtualSync)
    {
        pHlp->pfnPrintf(pHlp, "\n offset %RU64", pVM->tm.s.offVirtualSync);
        if (pVM->tm.s.u32VirtualSyncCatchUpPercentage)
            pHlp->pfnPrintf(pHlp, " catch-up rate %u %%", pVM->tm.s.u32VirtualSyncCatchUpPercentage);
    }
    pHlp->pfnPrintf(pHlp, "\n");

    /*
     * real
     */
    pHlp->pfnPrintf(pHlp,
                    "    Real: %18RU64 (%#016RX64) %RU64Hz\n",
                    u64Real, u64Real, TMRealGetFreq(pVM));
}