VirtualBox

source: vbox/trunk/src/VBox/VMM/TM.cpp@14683

Last change on this file since 14683 was 14597, checked in by vboxsync, 16 years ago

Added R0 address to MMR3HyperMapHCPhys and made the MMHyperXToR0 use pvR0 for HCPhys and Locked more strictly.

  • Property svn:eol-style set to native
  • Property svn:keywords set to Id
File size: 90.6 KB
/* $Id: TM.cpp 14597 2008-11-25 20:41:40Z vboxsync $ */
/** @file
 * TM - Time Manager.
 */

/*
 * Copyright (C) 2006-2007 Sun Microsystems, Inc.
 *
 * This file is part of VirtualBox Open Source Edition (OSE), as
 * available from http://www.virtualbox.org. This file is free software;
 * you can redistribute it and/or modify it under the terms of the GNU
 * General Public License (GPL) as published by the Free Software
 * Foundation, in version 2 as it comes in the "COPYING" file of the
 * VirtualBox OSE distribution. VirtualBox OSE is distributed in the
 * hope that it will be useful, but WITHOUT ANY WARRANTY of any kind.
 *
 * Please contact Sun Microsystems, Inc., 4150 Network Circle, Santa
 * Clara, CA 95054 USA or visit http://www.sun.com if you need
 * additional information or have any questions.
 */

/** @page pg_tm        TM - The Time Manager
 *
 * The Time Manager abstracts the CPU clocks and manages timers used by the VMM,
 * devices and drivers.
 *
 * @see grp_tm
 *
 *
 * @section sec_tm_clocks   Clocks
 *
 * There are currently 4 clocks:
 *      - Virtual (guest).
 *      - Synchronous virtual (guest).
 *      - CPU Tick (TSC) (guest). Only current use is rdtsc emulation. Usually a
 *        function of the virtual clock.
 *      - Real (host). This is only used for display updates atm.
 *
 * The most important clocks are the first three, and of these the second is
 * the most interesting.
 *
 *
 * The synchronous virtual clock is tied to the virtual clock except that it
 * will take into account timer delivery lag caused by host scheduling. It will
 * normally never advance beyond the head timer, and when lagging too far behind
 * it will gradually speed up to catch up with the virtual clock. All devices
 * implementing time sources accessible to and used by the guest use this
 * clock (for timers and other things). This ensures consistency between the
 * time sources.
 *
 * The virtual clock is implemented as an offset to a monotonic, high
 * resolution, wall clock. The current time source is using the RTTimeNanoTS()
 * machinery based upon the Global Info Pages (GIP), that is, we're using TSC
 * deltas (usually 10 ms) to fill the gaps between GIP updates. The result is
 * a fairly high res clock that works in all contexts and on all hosts. The
 * virtual clock is paused when the VM isn't in the running state.
 *
 * The CPU tick (TSC) is normally virtualized as a function of the synchronous
 * virtual clock, where the frequency defaults to the host CPU frequency (as we
 * measure it). In this mode it is possible to configure the frequency. Another
 * (non-default) option is to use the raw unmodified host TSC values. And yet
 * another, to tie it to time spent executing guest code. All these things are
 * configurable should non-default behavior be desirable.
 *
 * The real clock is a monotonic clock (when available) with relatively low
 * resolution, though this is a bit host specific. Note that we're currently
 * not servicing timers using the real clock when the VM is not running; this
 * is simply because it hasn't been needed yet and is therefore not implemented.
 *
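 * A minimal sketch of reading the four clocks from ring-3 (the same getters
 * that tmClock() near the end of this file dispatches to; the variable names
 * are illustrative only):
 * @code
 *      uint64_t u64Virtual     = TMVirtualGet(pVM);        // TMCLOCK_VIRTUAL
 *      uint64_t u64VirtualSync = TMVirtualSyncGet(pVM);    // TMCLOCK_VIRTUAL_SYNC
 *      uint64_t u64Real        = TMRealGet(pVM);           // TMCLOCK_REAL
 *      uint64_t u64Tsc         = TMCpuTickGet(pVM);        // TMCLOCK_TSC
 * @endcode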
 *
 * @subsection subsec_tm_timesync Guest Time Sync / UTC time
 *
 * Guest time syncing is primarily taken care of by the VMM device. The
 * principle is very simple: the guest additions periodically ask the VMM
 * device what the current UTC time is and make adjustments accordingly.
 *
 * A complicating factor is that the synchronous virtual clock might be doing
 * catch-ups, so the guest's perception is currently a little behind the real
 * world, but it will (hopefully) be catching up soon as we're feeding timer
 * interrupts at a slightly higher rate. Adjusting the guest clock to the
 * current wall time in the real world would then be a bad idea, because the
 * guest will be advancing too fast and run ahead of world time (if the
 * catch-up works out). To solve this problem TM provides the VMM device with
 * a UTC time source that gets adjusted with the current lag, so that when the
 * guest eventually catches up the lag it will be showing correct real world
 * time.
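 *
 * A hedged sketch of deriving such a lag-adjusted UTC timestamp with the IPRT
 * time APIs (offLag stands in for the current virtual sync lag and is not a
 * real field name):
 * @code
 *      RTTIMESPEC Time;
 *      RTTimeNow(&Time);                            // host UTC now
 *      RTTimeSpecAddNano(&Time, pVM->tm.s.offUTC);  // apply configured guest offset
 *      RTTimeSpecSubNano(&Time, offLag);            // back off by the catch-up lag
 * @endcode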
 *
 *
 * @section sec_tm_timers   Timers
 *
 * The timers can use any of the TM clocks described in the previous section.
 * Each clock has its own scheduling facility, or timer queue if you like.
 * There are a few factors which make it a bit complex. First, there is the
 * usual R0 vs R3 vs. RC thing. Then there are multiple threads, and then there
 * is the timer thread that periodically checks whether any timers have expired
 * without EMT noticing. On the API level, all but the create and save APIs
 * must be multithreaded. EMT will always run the timers. (See the usage sketch
 * after the list below.)
 *
 * The design is using a doubly linked list of active timers which is ordered
 * by expire date. This list is only modified by the EMT thread. Updates to
 * the list are batched in a singly linked list, which is then processed by the
 * EMT thread at the first opportunity (immediately, next time EMT modifies a
 * timer on that clock, or next timer timeout). Both lists are offset based and
 * all the elements are therefore allocated from the hyper heap.
 *
 * For figuring out when there is need to schedule and run timers TM will:
 *      - Poll whenever somebody queries the virtual clock.
 *      - Poll the virtual clocks from the EM and REM loops.
 *      - Poll the virtual clocks from the trap exit path.
 *      - Poll the virtual clocks and calculate first timeout from the halt loop.
 *      - Employ a thread which periodically (100Hz) polls all the timer queues.
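 *
 * A minimal usage sketch for an internal timer; TMR3TimerCreateInternal() is
 * defined further down in this file, while the callback signature and the
 * TMTimerSetMillies() arming call are assumptions about the wider TM API:
 * @code
 *      static DECLCALLBACK(void) tmExampleTick(PVM pVM, PTMTIMER pTimer, void *pvUser);
 *      ...
 *      PTMTIMERR3 pTimer;
 *      int rc = TMR3TimerCreateInternal(pVM, TMCLOCK_VIRTUAL, tmExampleTick,
 *                                       NULL /*pvUser*/, "Example", &pTimer);
 *      if (RT_SUCCESS(rc))
 *          rc = TMTimerSetMillies(pTimer, 10); // fire in ~10 ms
 * @endcode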
 *
 *
 * @section sec_tm_timer    Logging
 *
 * Level 2: Logs most of the timer state transitions and queue servicing.
 * Level 3: Logs a few oddments.
 * Level 4: Logs TMCLOCK_VIRTUAL_SYNC catch-up events.
 *
 */

/*******************************************************************************
*   Header Files                                                               *
*******************************************************************************/
#define LOG_GROUP LOG_GROUP_TM
#include <VBox/tm.h>
#include <VBox/vmm.h>
#include <VBox/mm.h>
#include <VBox/ssm.h>
#include <VBox/dbgf.h>
#include <VBox/rem.h>
#include <VBox/pdm.h>
#include "TMInternal.h"
#include <VBox/vm.h>

#include <VBox/param.h>
#include <VBox/err.h>

#include <VBox/log.h>
#include <iprt/asm.h>
#include <iprt/assert.h>
#include <iprt/thread.h>
#include <iprt/time.h>
#include <iprt/timer.h>
#include <iprt/semaphore.h>
#include <iprt/string.h>
#include <iprt/env.h>


/*******************************************************************************
*   Defined Constants And Macros                                               *
*******************************************************************************/
/** The current saved state version. */
#define TM_SAVED_STATE_VERSION  3


/*******************************************************************************
*   Internal Functions                                                         *
*******************************************************************************/
static bool                 tmR3HasFixedTSC(PVM pVM);
static uint64_t             tmR3CalibrateTSC(PVM pVM);
static DECLCALLBACK(int)    tmR3Save(PVM pVM, PSSMHANDLE pSSM);
static DECLCALLBACK(int)    tmR3Load(PVM pVM, PSSMHANDLE pSSM, uint32_t u32Version);
static DECLCALLBACK(void)   tmR3TimerCallback(PRTTIMER pTimer, void *pvUser, uint64_t iTick);
static void                 tmR3TimerQueueRun(PVM pVM, PTMTIMERQUEUE pQueue);
static void                 tmR3TimerQueueRunVirtualSync(PVM pVM);
static DECLCALLBACK(void)   tmR3TimerInfo(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
static DECLCALLBACK(void)   tmR3TimerInfoActive(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
static DECLCALLBACK(void)   tmR3InfoClocks(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);


/**
 * Initializes the TM.
 *
 * @returns VBox status code.
 * @param   pVM         The VM to operate on.
 */
VMMR3DECL(int) TMR3Init(PVM pVM)
{
    LogFlow(("TMR3Init:\n"));

    /*
     * Assert alignment and sizes.
     */
    AssertRelease(!(RT_OFFSETOF(VM, tm.s) & 31));
    AssertRelease(sizeof(pVM->tm.s) <= sizeof(pVM->tm.padding));

    /*
     * Init the structure.
     */
    void *pv;
    int rc = MMHyperAlloc(pVM, sizeof(pVM->tm.s.paTimerQueuesR3[0]) * TMCLOCK_MAX, 0, MM_TAG_TM, &pv);
    AssertRCReturn(rc, rc);
    pVM->tm.s.paTimerQueuesR3 = (PTMTIMERQUEUE)pv;
    pVM->tm.s.paTimerQueuesR0 = MMHyperR3ToR0(pVM, pv);
    pVM->tm.s.paTimerQueuesRC = MMHyperR3ToRC(pVM, pv);

    pVM->tm.s.offVM = RT_OFFSETOF(VM, tm.s);
    pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL].enmClock        = TMCLOCK_VIRTUAL;
    pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL].u64Expire       = INT64_MAX;
    pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL_SYNC].enmClock   = TMCLOCK_VIRTUAL_SYNC;
    pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL_SYNC].u64Expire  = INT64_MAX;
    pVM->tm.s.paTimerQueuesR3[TMCLOCK_REAL].enmClock           = TMCLOCK_REAL;
    pVM->tm.s.paTimerQueuesR3[TMCLOCK_REAL].u64Expire          = INT64_MAX;
    pVM->tm.s.paTimerQueuesR3[TMCLOCK_TSC].enmClock            = TMCLOCK_TSC;
    pVM->tm.s.paTimerQueuesR3[TMCLOCK_TSC].u64Expire           = INT64_MAX;

    /*
     * We directly use the GIP to calculate the virtual time. We map the
     * GIP into the guest context so we can do this calculation there
     * as well and save costly world switches.
     */
    pVM->tm.s.pvGIPR3 = (void *)g_pSUPGlobalInfoPage;
    AssertMsgReturn(pVM->tm.s.pvGIPR3, ("GIP support is now required!\n"), VERR_INTERNAL_ERROR);
    RTHCPHYS HCPhysGIP;
    rc = SUPGipGetPhys(&HCPhysGIP);
    AssertMsgRCReturn(rc, ("Failed to get GIP physical address!\n"), rc);

    RTGCPTR GCPtr;
    rc = MMR3HyperMapHCPhys(pVM, pVM->tm.s.pvGIPR3, NIL_RTR0PTR, HCPhysGIP, PAGE_SIZE, "GIP", &GCPtr);
    if (RT_FAILURE(rc))
    {
        AssertMsgFailed(("Failed to map GIP into GC, rc=%Rrc!\n", rc));
        return rc;
    }
    pVM->tm.s.pvGIPRC = GCPtr;
    LogFlow(("TMR3Init: HCPhysGIP=%RHp at %RRv\n", HCPhysGIP, pVM->tm.s.pvGIPRC));
    MMR3HyperReserve(pVM, PAGE_SIZE, "fence", NULL);

    /* Check assumptions made in TMAllVirtual.cpp about the GIP update interval. */
    if (    g_pSUPGlobalInfoPage->u32Magic == SUPGLOBALINFOPAGE_MAGIC
        &&  g_pSUPGlobalInfoPage->u32UpdateIntervalNS >= 250000000 /* 0.25s */)
        return VMSetError(pVM, VERR_INTERNAL_ERROR, RT_SRC_POS,
                          N_("The GIP update interval is too big. u32UpdateIntervalNS=%RU32 (u32UpdateHz=%RU32)"),
                          g_pSUPGlobalInfoPage->u32UpdateIntervalNS, g_pSUPGlobalInfoPage->u32UpdateHz);

    /*
     * Setup the VirtualGetRaw backend.
     */
    pVM->tm.s.VirtualGetRawDataR3.pu64Prev = &pVM->tm.s.u64VirtualRawPrev;
    pVM->tm.s.VirtualGetRawDataR3.pfnBad = tmVirtualNanoTSBad;
    pVM->tm.s.VirtualGetRawDataR3.pfnRediscover = tmVirtualNanoTSRediscover;
    if (ASMCpuId_EDX(1) & X86_CPUID_FEATURE_EDX_SSE2)
    {
        if (g_pSUPGlobalInfoPage->u32Mode == SUPGIPMODE_SYNC_TSC)
            pVM->tm.s.pfnVirtualGetRawR3 = RTTimeNanoTSLFenceSync;
        else
            pVM->tm.s.pfnVirtualGetRawR3 = RTTimeNanoTSLFenceAsync;
    }
    else
    {
        if (g_pSUPGlobalInfoPage->u32Mode == SUPGIPMODE_SYNC_TSC)
            pVM->tm.s.pfnVirtualGetRawR3 = RTTimeNanoTSLegacySync;
        else
            pVM->tm.s.pfnVirtualGetRawR3 = RTTimeNanoTSLegacyAsync;
    }

    pVM->tm.s.VirtualGetRawDataRC.pu64Prev = MMHyperR3ToRC(pVM, (void *)&pVM->tm.s.u64VirtualRawPrev);
    pVM->tm.s.VirtualGetRawDataR0.pu64Prev = MMHyperR3ToR0(pVM, (void *)&pVM->tm.s.u64VirtualRawPrev);
    AssertReturn(pVM->tm.s.VirtualGetRawDataR0.pu64Prev, VERR_INTERNAL_ERROR);
    /* The rest is done in TMR3InitFinalize since it's too early to call PDM. */


    /*
     * Get our CFGM node, create it if necessary.
     */
    PCFGMNODE pCfgHandle = CFGMR3GetChild(CFGMR3GetRoot(pVM), "TM");
    if (!pCfgHandle)
    {
        rc = CFGMR3InsertNode(CFGMR3GetRoot(pVM), "TM", &pCfgHandle);
        AssertRCReturn(rc, rc);
    }

    /*
     * Determine the TSC configuration and frequency.
     */
    /* mode */
    /** @cfgm{/TM/TSCVirtualized,bool,true}
     * Use a virtualized TSC, i.e. trap all TSC access. */
    rc = CFGMR3QueryBool(pCfgHandle, "TSCVirtualized", &pVM->tm.s.fTSCVirtualized);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        pVM->tm.s.fTSCVirtualized = true; /* trap rdtsc */
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query bool value \"TSCVirtualized\""));

    /* source */
    /** @cfgm{/TM/UseRealTSC,bool,false}
     * Use the real TSC as time source for the TSC instead of the synchronous
     * virtual clock (false, default). */
    rc = CFGMR3QueryBool(pCfgHandle, "UseRealTSC", &pVM->tm.s.fTSCUseRealTSC);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        pVM->tm.s.fTSCUseRealTSC = false; /* use virtual time */
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query bool value \"UseRealTSC\""));
    if (!pVM->tm.s.fTSCUseRealTSC)
        pVM->tm.s.fTSCVirtualized = true;

    /* TSC reliability */
    /** @cfgm{/TM/MaybeUseOffsettedHostTSC,bool,detect}
     * Whether the CPU has a fixed TSC rate and may be used in offsetted mode with
     * VT-x/AMD-V execution. This is autodetected in a very restrictive way by
     * default. */
    rc = CFGMR3QueryBool(pCfgHandle, "MaybeUseOffsettedHostTSC", &pVM->tm.s.fMaybeUseOffsettedHostTSC);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
    {
        if (!pVM->tm.s.fTSCUseRealTSC)
            pVM->tm.s.fMaybeUseOffsettedHostTSC = tmR3HasFixedTSC(pVM);
        else
            pVM->tm.s.fMaybeUseOffsettedHostTSC = true;
    }

    /** @cfgm{TM/TSCTicksPerSecond, uint32_t, Current TSC frequency from GIP}
     * The number of TSC ticks per second (i.e. the TSC frequency). This will
     * override TSCUseRealTSC, TSCVirtualized and MaybeUseOffsettedHostTSC.
     */
    rc = CFGMR3QueryU64(pCfgHandle, "TSCTicksPerSecond", &pVM->tm.s.cTSCTicksPerSecond);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
    {
        pVM->tm.s.cTSCTicksPerSecond = tmR3CalibrateTSC(pVM);
        if (    !pVM->tm.s.fTSCUseRealTSC
            &&  pVM->tm.s.cTSCTicksPerSecond >= _4G)
        {
            pVM->tm.s.cTSCTicksPerSecond = _4G - 1; /* (A limitation of our math code) */
            pVM->tm.s.fMaybeUseOffsettedHostTSC = false;
        }
    }
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query uint64_t value \"TSCTicksPerSecond\""));
    else if (    pVM->tm.s.cTSCTicksPerSecond < _1M
             ||  pVM->tm.s.cTSCTicksPerSecond >= _4G)
        return VMSetError(pVM, VERR_INVALID_PARAMETER, RT_SRC_POS,
                          N_("Configuration error: \"TSCTicksPerSecond\" = %RI64 is not in the range 1MHz..4GHz-1"),
                          pVM->tm.s.cTSCTicksPerSecond);
    else
    {
        pVM->tm.s.fTSCUseRealTSC = pVM->tm.s.fMaybeUseOffsettedHostTSC = false;
        pVM->tm.s.fTSCVirtualized = true;
    }
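    /* Net effect of the above, in summary: an explicit TSCTicksPerSecond forces
       a virtualized TSC at the given rate; otherwise UseRealTSC selects raw host
       TSC values and TSCVirtualized decides whether rdtsc traps at all. */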

    /** @cfgm{TM/TSCTiedToExecution, bool, false}
     * Whether the TSC should be tied to execution. This will exclude most of the
     * virtualization overhead, but will by default include the time spent in the
     * halt state (see TM/TSCNotTiedToHalt). This setting will override all other
     * TSC settings except for TSCTicksPerSecond and TSCNotTiedToHalt, which should
     * be avoided or used with great care. Note that this will only work right
     * together with VT-x or AMD-V, and with a single virtual CPU. */
    rc = CFGMR3QueryBoolDef(pCfgHandle, "TSCTiedToExecution", &pVM->tm.s.fTSCTiedToExecution, false);
    if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query bool value \"TSCTiedToExecution\""));
    if (pVM->tm.s.fTSCTiedToExecution)
    {
        /* tied to execution, override all other settings. */
        pVM->tm.s.fTSCVirtualized = true;
        pVM->tm.s.fTSCUseRealTSC = true;
        pVM->tm.s.fMaybeUseOffsettedHostTSC = false;
    }

    /** @cfgm{TM/TSCNotTiedToHalt, bool, false}
     * For overriding the default of TM/TSCTiedToExecution, i.e. set this to true
     * to make the TSC freeze during HLT. */
    rc = CFGMR3QueryBoolDef(pCfgHandle, "TSCNotTiedToHalt", &pVM->tm.s.fTSCNotTiedToHalt, false);
    if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query bool value \"TSCNotTiedToHalt\""));

    /* setup and report */
    if (pVM->tm.s.fTSCVirtualized)
        CPUMR3SetCR4Feature(pVM, X86_CR4_TSD, ~X86_CR4_TSD);
    else
        CPUMR3SetCR4Feature(pVM, 0, ~X86_CR4_TSD);
    LogRel(("TM: cTSCTicksPerSecond=%#RX64 (%RU64) fTSCVirtualized=%RTbool fTSCUseRealTSC=%RTbool\n"
            "TM: fMaybeUseOffsettedHostTSC=%RTbool TSCTiedToExecution=%RTbool TSCNotTiedToHalt=%RTbool\n",
            pVM->tm.s.cTSCTicksPerSecond, pVM->tm.s.cTSCTicksPerSecond, pVM->tm.s.fTSCVirtualized, pVM->tm.s.fTSCUseRealTSC,
            pVM->tm.s.fMaybeUseOffsettedHostTSC, pVM->tm.s.fTSCTiedToExecution, pVM->tm.s.fTSCNotTiedToHalt));

    /*
     * Configure the timer synchronous virtual time.
     */
    /** @cfgm{TM/ScheduleSlack, uint32_t, ns, 0, UINT32_MAX, 100000}
     * Scheduling slack when processing timers. */
    rc = CFGMR3QueryU32(pCfgHandle, "ScheduleSlack", &pVM->tm.s.u32VirtualSyncScheduleSlack);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        pVM->tm.s.u32VirtualSyncScheduleSlack = 100000; /* 0.100ms (ASSUMES virtual time is nanoseconds) */
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query 32-bit integer value \"ScheduleSlack\""));

    /** @cfgm{TM/CatchUpStopThreshold, uint64_t, ns, 0, UINT64_MAX, 500000}
     * When to stop a catch-up, considering it successful. */
    rc = CFGMR3QueryU64(pCfgHandle, "CatchUpStopThreshold", &pVM->tm.s.u64VirtualSyncCatchUpStopThreshold);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        pVM->tm.s.u64VirtualSyncCatchUpStopThreshold = 500000; /* 0.5ms */
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query 64-bit integer value \"CatchUpStopThreshold\""));

    /** @cfgm{TM/CatchUpGiveUpThreshold, uint64_t, ns, 0, UINT64_MAX, 60000000000}
     * When to give up a catch-up attempt. */
    rc = CFGMR3QueryU64(pCfgHandle, "CatchUpGiveUpThreshold", &pVM->tm.s.u64VirtualSyncCatchUpGiveUpThreshold);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        pVM->tm.s.u64VirtualSyncCatchUpGiveUpThreshold = UINT64_C(60000000000); /* 60 sec */
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query 64-bit integer value \"CatchUpGiveUpThreshold\""));


    /** @cfgm{TM/CatchUpPrecentage[0..9], uint32_t, %, 1, 2000, various}
     * The catch-up percent for a given period. */
    /** @cfgm{TM/CatchUpStartThreshold[0..9], uint64_t, ns, 0, UINT64_MAX, various}
     * The catch-up period threshold, or if you like, when a period starts. */
#define TM_CFG_PERIOD(iPeriod, DefStart, DefPct) \
    do \
    { \
        uint64_t u64; \
        rc = CFGMR3QueryU64(pCfgHandle, "CatchUpStartThreshold" #iPeriod, &u64); \
        if (rc == VERR_CFGM_VALUE_NOT_FOUND) \
            u64 = UINT64_C(DefStart); \
        else if (RT_FAILURE(rc)) \
            return VMSetError(pVM, rc, RT_SRC_POS, N_("Configuration error: Failed to query 64-bit integer value \"CatchUpStartThreshold" #iPeriod "\"")); \
        if (    (iPeriod > 0 && u64 <= pVM->tm.s.aVirtualSyncCatchUpPeriods[iPeriod - 1].u64Start) \
            ||  u64 >= pVM->tm.s.u64VirtualSyncCatchUpGiveUpThreshold) \
            return VMSetError(pVM, VERR_INVALID_PARAMETER, RT_SRC_POS, N_("Configuration error: Invalid start of period #" #iPeriod ": %RU64"), u64); \
        pVM->tm.s.aVirtualSyncCatchUpPeriods[iPeriod].u64Start = u64; \
        rc = CFGMR3QueryU32(pCfgHandle, "CatchUpPrecentage" #iPeriod, &pVM->tm.s.aVirtualSyncCatchUpPeriods[iPeriod].u32Percentage); \
        if (rc == VERR_CFGM_VALUE_NOT_FOUND) \
            pVM->tm.s.aVirtualSyncCatchUpPeriods[iPeriod].u32Percentage = (DefPct); \
        else if (RT_FAILURE(rc)) \
            return VMSetError(pVM, rc, RT_SRC_POS, N_("Configuration error: Failed to query 32-bit integer value \"CatchUpPrecentage" #iPeriod "\"")); \
    } while (0)
    /* This needs more tuning. Not sure if we really need so many periods or need to be this gentle. */
    TM_CFG_PERIOD(0,      750000,   5); /*  0.75ms at 1.05x */
    TM_CFG_PERIOD(1,     1500000,  10); /*  1.50ms at 1.10x */
    TM_CFG_PERIOD(2,     8000000,  25); /*     8ms at 1.25x */
    TM_CFG_PERIOD(3,    30000000,  50); /*    30ms at 1.50x */
    TM_CFG_PERIOD(4,    75000000,  75); /*    75ms at 1.75x */
    TM_CFG_PERIOD(5,   175000000, 100); /*   175ms at 2x */
    TM_CFG_PERIOD(6,   500000000, 200); /*   500ms at 3x */
    TM_CFG_PERIOD(7,  3000000000, 300); /*     3s at 4x */
    TM_CFG_PERIOD(8, 30000000000, 400); /*    30s at 5x */
    TM_CFG_PERIOD(9, 55000000000, 500); /*    55s at 6x */
    AssertCompile(RT_ELEMENTS(pVM->tm.s.aVirtualSyncCatchUpPeriods) == 10);
#undef TM_CFG_PERIOD
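    /* A hedged example of overriding the first catch-up period from VM
       configuration code with the standard CFGM setter; pCfg stands in for
       the /TM CFGM node, and note that "Precentage" is the actual key
       spelling used above:
            CFGMR3InsertInteger(pCfg, "CatchUpStartThreshold0", UINT64_C(1000000));
            CFGMR3InsertInteger(pCfg, "CatchUpPrecentage0", 10);
     */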

    /*
     * Configure real world time (UTC).
     */
    /** @cfgm{TM/UTCOffset, int64_t, ns, INT64_MIN, INT64_MAX, 0}
     * The UTC offset. This is used to put the guest back or forwards in time. */
    rc = CFGMR3QueryS64(pCfgHandle, "UTCOffset", &pVM->tm.s.offUTC);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        pVM->tm.s.offUTC = 0; /* ns */
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query 64-bit integer value \"UTCOffset\""));

    /*
     * Setup the warp drive.
     */
    /** @cfgm{TM/WarpDrivePercentage, uint32_t, %, 0, 20000, 100}
     * The warp drive percentage, 100% is normal speed. This is used to speed up
     * or slow down the virtual clock, which can be useful for fast-forwarding
     * boring periods during tests. */
    rc = CFGMR3QueryU32(pCfgHandle, "WarpDrivePercentage", &pVM->tm.s.u32VirtualWarpDrivePercentage);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        rc = CFGMR3QueryU32(CFGMR3GetRoot(pVM), "WarpDrivePercentage", &pVM->tm.s.u32VirtualWarpDrivePercentage); /* legacy */
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        pVM->tm.s.u32VirtualWarpDrivePercentage = 100;
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query uint32_t value \"WarpDrivePercentage\""));
    else if (    pVM->tm.s.u32VirtualWarpDrivePercentage < 2
             ||  pVM->tm.s.u32VirtualWarpDrivePercentage > 20000)
        return VMSetError(pVM, VERR_INVALID_PARAMETER, RT_SRC_POS,
                          N_("Configuration error: \"WarpDrivePercentage\" = %RI32 is not in the range 2..20000"),
                          pVM->tm.s.u32VirtualWarpDrivePercentage);
    pVM->tm.s.fVirtualWarpDrive = pVM->tm.s.u32VirtualWarpDrivePercentage != 100;
    if (pVM->tm.s.fVirtualWarpDrive)
        LogRel(("TM: u32VirtualWarpDrivePercentage=%RI32\n", pVM->tm.s.u32VirtualWarpDrivePercentage));

    /*
     * Start the timer (guard against REM not yielding).
     */
    /** @cfgm{TM/TimerMillies, uint32_t, ms, 1, 1000, 10}
     * The watchdog timer interval. */
    uint32_t u32Millies;
    rc = CFGMR3QueryU32(pCfgHandle, "TimerMillies", &u32Millies);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        u32Millies = 10;
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query uint32_t value \"TimerMillies\""));
    rc = RTTimerCreate(&pVM->tm.s.pTimer, u32Millies, tmR3TimerCallback, pVM);
    if (RT_FAILURE(rc))
    {
        AssertMsgFailed(("Failed to create timer, u32Millies=%d rc=%Rrc.\n", u32Millies, rc));
        return rc;
    }
    Log(("TM: Created timer %p firing every %d milliseconds\n", pVM->tm.s.pTimer, u32Millies));
    pVM->tm.s.u32TimerMillies = u32Millies;

    /*
     * Register saved state.
     */
    rc = SSMR3RegisterInternal(pVM, "tm", 1, TM_SAVED_STATE_VERSION, sizeof(uint64_t) * 8,
                               NULL, tmR3Save, NULL,
                               NULL, tmR3Load, NULL);
    if (RT_FAILURE(rc))
        return rc;

    /*
     * Register statistics.
     */
    STAM_REL_REG_USED(pVM,(void*)&pVM->tm.s.VirtualGetRawDataR3.c1nsSteps,STAMTYPE_U32, "/TM/R3/1nsSteps", STAMUNIT_OCCURENCES, "Virtual time 1ns steps (due to TSC / GIP variations).");
    STAM_REL_REG_USED(pVM,(void*)&pVM->tm.s.VirtualGetRawDataR3.cBadPrev, STAMTYPE_U32, "/TM/R3/cBadPrev", STAMUNIT_OCCURENCES, "Times the previous virtual time was considered erratic (shouldn't ever happen).");
    STAM_REL_REG_USED(pVM,(void*)&pVM->tm.s.VirtualGetRawDataR0.c1nsSteps,STAMTYPE_U32, "/TM/R0/1nsSteps", STAMUNIT_OCCURENCES, "Virtual time 1ns steps (due to TSC / GIP variations).");
    STAM_REL_REG_USED(pVM,(void*)&pVM->tm.s.VirtualGetRawDataR0.cBadPrev, STAMTYPE_U32, "/TM/R0/cBadPrev", STAMUNIT_OCCURENCES, "Times the previous virtual time was considered erratic (shouldn't ever happen).");
    STAM_REL_REG_USED(pVM,(void*)&pVM->tm.s.VirtualGetRawDataRC.c1nsSteps,STAMTYPE_U32, "/TM/GC/1nsSteps", STAMUNIT_OCCURENCES, "Virtual time 1ns steps (due to TSC / GIP variations).");
    STAM_REL_REG_USED(pVM,(void*)&pVM->tm.s.VirtualGetRawDataRC.cBadPrev, STAMTYPE_U32, "/TM/GC/cBadPrev", STAMUNIT_OCCURENCES, "Times the previous virtual time was considered erratic (shouldn't ever happen).");
    STAM_REL_REG(     pVM,(void*)&pVM->tm.s.offVirtualSync,               STAMTYPE_U64, "/TM/VirtualSync/CurrentOffset", STAMUNIT_NS, "The current offset. (subtract GivenUp to get the lag)");
    STAM_REL_REG_USED(pVM,(void*)&pVM->tm.s.offVirtualSyncGivenUp,        STAMTYPE_U64, "/TM/VirtualSync/GivenUp", STAMUNIT_NS, "Nanoseconds of the 'CurrentOffset' that's been given up and won't ever be attempted caught up with.");

#ifdef VBOX_WITH_STATISTICS
    STAM_REG_USED(pVM,(void *)&pVM->tm.s.VirtualGetRawDataR3.cExpired,    STAMTYPE_U32, "/TM/R3/cExpired", STAMUNIT_OCCURENCES, "Times the TSC interval expired (overlaps 1ns steps).");
    STAM_REG_USED(pVM,(void *)&pVM->tm.s.VirtualGetRawDataR3.cUpdateRaces,STAMTYPE_U32, "/TM/R3/cUpdateRaces", STAMUNIT_OCCURENCES, "Thread races when updating the previous timestamp.");
    STAM_REG_USED(pVM,(void *)&pVM->tm.s.VirtualGetRawDataR0.cExpired,    STAMTYPE_U32, "/TM/R0/cExpired", STAMUNIT_OCCURENCES, "Times the TSC interval expired (overlaps 1ns steps).");
    STAM_REG_USED(pVM,(void *)&pVM->tm.s.VirtualGetRawDataR0.cUpdateRaces,STAMTYPE_U32, "/TM/R0/cUpdateRaces", STAMUNIT_OCCURENCES, "Thread races when updating the previous timestamp.");
    STAM_REG_USED(pVM,(void *)&pVM->tm.s.VirtualGetRawDataRC.cExpired,    STAMTYPE_U32, "/TM/GC/cExpired", STAMUNIT_OCCURENCES, "Times the TSC interval expired (overlaps 1ns steps).");
    STAM_REG_USED(pVM,(void *)&pVM->tm.s.VirtualGetRawDataRC.cUpdateRaces,STAMTYPE_U32, "/TM/GC/cUpdateRaces", STAMUNIT_OCCURENCES, "Thread races when updating the previous timestamp.");
    STAM_REG(pVM, &pVM->tm.s.StatDoQueues,                                STAMTYPE_PROFILE, "/TM/DoQueues", STAMUNIT_TICKS_PER_CALL, "Profiling timer TMR3TimerQueuesDo.");
    STAM_REG(pVM, &pVM->tm.s.StatDoQueuesSchedule,                        STAMTYPE_PROFILE_ADV, "/TM/DoQueues/Schedule", STAMUNIT_TICKS_PER_CALL, "The scheduling part.");
    STAM_REG(pVM, &pVM->tm.s.StatDoQueuesRun,                             STAMTYPE_PROFILE_ADV, "/TM/DoQueues/Run", STAMUNIT_TICKS_PER_CALL, "The run part.");

    STAM_REG(pVM, &pVM->tm.s.StatPollAlreadySet,                          STAMTYPE_COUNTER, "/TM/PollAlreadySet", STAMUNIT_OCCURENCES, "TMTimerPoll calls where the FF was already set.");
    STAM_REG(pVM, &pVM->tm.s.StatPollVirtual,                             STAMTYPE_COUNTER, "/TM/PollHitsVirtual", STAMUNIT_OCCURENCES, "The number of times TMTimerPoll found an expired TMCLOCK_VIRTUAL queue.");
    STAM_REG(pVM, &pVM->tm.s.StatPollVirtualSync,                         STAMTYPE_COUNTER, "/TM/PollHitsVirtualSync", STAMUNIT_OCCURENCES, "The number of times TMTimerPoll found an expired TMCLOCK_VIRTUAL_SYNC queue.");
    STAM_REG(pVM, &pVM->tm.s.StatPollMiss,                                STAMTYPE_COUNTER, "/TM/PollMiss", STAMUNIT_OCCURENCES, "TMTimerPoll calls where nothing had expired.");

    STAM_REG(pVM, &pVM->tm.s.StatPostponedR3,                             STAMTYPE_COUNTER, "/TM/PostponedR3", STAMUNIT_OCCURENCES, "Postponed due to unschedulable state, in ring-3.");
    STAM_REG(pVM, &pVM->tm.s.StatPostponedRZ,                             STAMTYPE_COUNTER, "/TM/PostponedRZ", STAMUNIT_OCCURENCES, "Postponed due to unschedulable state, in ring-0 / RC.");

    STAM_REG(pVM, &pVM->tm.s.StatScheduleOneR3,                           STAMTYPE_PROFILE, "/TM/ScheduleOneR3", STAMUNIT_TICKS_PER_CALL, "Profiling the scheduling of one queue during a TMTimer* call in EMT.");
    STAM_REG(pVM, &pVM->tm.s.StatScheduleOneRZ,                           STAMTYPE_PROFILE, "/TM/ScheduleOneRZ", STAMUNIT_TICKS_PER_CALL, "Profiling the scheduling of one queue during a TMTimer* call in EMT.");
    STAM_REG(pVM, &pVM->tm.s.StatScheduleSetFF,                           STAMTYPE_COUNTER, "/TM/ScheduleSetFF", STAMUNIT_OCCURENCES, "The number of times the timer FF was set instead of doing scheduling.");

    STAM_REG(pVM, &pVM->tm.s.StatTimerSetR3,                              STAMTYPE_PROFILE, "/TM/TimerSetR3", STAMUNIT_TICKS_PER_CALL, "Profiling TMTimerSet calls made in ring-3.");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRZ,                              STAMTYPE_PROFILE, "/TM/TimerSetRZ", STAMUNIT_TICKS_PER_CALL, "Profiling TMTimerSet calls made in ring-0 / RC.");

    STAM_REG(pVM, &pVM->tm.s.StatTimerStopR3,                             STAMTYPE_PROFILE, "/TM/TimerStopR3", STAMUNIT_TICKS_PER_CALL, "Profiling TMTimerStop calls made in ring-3.");
    STAM_REG(pVM, &pVM->tm.s.StatTimerStopRZ,                             STAMTYPE_PROFILE, "/TM/TimerStopRZ", STAMUNIT_TICKS_PER_CALL, "Profiling TMTimerStop calls made in ring-0 / RC.");

    STAM_REG(pVM, &pVM->tm.s.StatVirtualGet,                              STAMTYPE_COUNTER, "/TM/VirtualGet", STAMUNIT_OCCURENCES, "The number of times TMTimerGet was called when the clock was running.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualGetSetFF,                         STAMTYPE_COUNTER, "/TM/VirtualGetSetFF", STAMUNIT_OCCURENCES, "Times we set the FF when calling TMTimerGet.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualGetSync,                          STAMTYPE_COUNTER, "/TM/VirtualGetSync", STAMUNIT_OCCURENCES, "The number of times TMTimerGetSync was called when the clock was running.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualGetSyncSetFF,                     STAMTYPE_COUNTER, "/TM/VirtualGetSyncSetFF", STAMUNIT_OCCURENCES, "Times we set the FF when calling TMTimerGetSync.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualPause,                            STAMTYPE_COUNTER, "/TM/VirtualPause", STAMUNIT_OCCURENCES, "The number of times TMR3TimerPause was called.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualResume,                           STAMTYPE_COUNTER, "/TM/VirtualResume", STAMUNIT_OCCURENCES, "The number of times TMR3TimerResume was called.");

    STAM_REG(pVM, &pVM->tm.s.StatTimerCallbackSetFF,                      STAMTYPE_COUNTER, "/TM/CallbackSetFF", STAMUNIT_OCCURENCES, "The number of times the timer callback set FF.");

    STAM_REG(pVM, &pVM->tm.s.StatTSCCatchupLE010,                         STAMTYPE_COUNTER, "/TM/TSC/Intercept/CatchupLE010", STAMUNIT_OCCURENCES, "In catch-up mode, 10% or lower.");
    STAM_REG(pVM, &pVM->tm.s.StatTSCCatchupLE025,                         STAMTYPE_COUNTER, "/TM/TSC/Intercept/CatchupLE025", STAMUNIT_OCCURENCES, "In catch-up mode, 25%-11%.");
    STAM_REG(pVM, &pVM->tm.s.StatTSCCatchupLE100,                         STAMTYPE_COUNTER, "/TM/TSC/Intercept/CatchupLE100", STAMUNIT_OCCURENCES, "In catch-up mode, 100%-26%.");
    STAM_REG(pVM, &pVM->tm.s.StatTSCCatchupOther,                         STAMTYPE_COUNTER, "/TM/TSC/Intercept/CatchupOther", STAMUNIT_OCCURENCES, "In catch-up mode, > 100%.");
    STAM_REG(pVM, &pVM->tm.s.StatTSCNotFixed,                             STAMTYPE_COUNTER, "/TM/TSC/Intercept/NotFixed", STAMUNIT_OCCURENCES, "TSC is not fixed, it may run at variable speed.");
    STAM_REG(pVM, &pVM->tm.s.StatTSCNotTicking,                           STAMTYPE_COUNTER, "/TM/TSC/Intercept/NotTicking", STAMUNIT_OCCURENCES, "TSC is not ticking.");
    STAM_REG(pVM, &pVM->tm.s.StatTSCSyncNotTicking,                       STAMTYPE_COUNTER, "/TM/TSC/Intercept/SyncNotTicking", STAMUNIT_OCCURENCES, "VirtualSync isn't ticking.");
    STAM_REG(pVM, &pVM->tm.s.StatTSCWarp,                                 STAMTYPE_COUNTER, "/TM/TSC/Intercept/Warp", STAMUNIT_OCCURENCES, "Warpdrive is active.");


    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncCatchup,                      STAMTYPE_PROFILE_ADV, "/TM/VirtualSync/CatchUp", STAMUNIT_TICKS_PER_OCCURENCE, "Counting and measuring the times spent catching up.");
    STAM_REG(pVM, (void *)&pVM->tm.s.fVirtualSyncCatchUp,                 STAMTYPE_U8, "/TM/VirtualSync/CatchUpActive", STAMUNIT_NONE, "Catch-Up active indicator.");
    STAM_REG(pVM, (void *)&pVM->tm.s.u32VirtualSyncCatchUpPercentage,     STAMTYPE_U32, "/TM/VirtualSync/CatchUpPercentage", STAMUNIT_PCT, "The catch-up percentage. (+100/100 to get clock multiplier)");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncGiveUp,                       STAMTYPE_COUNTER, "/TM/VirtualSync/GiveUp", STAMUNIT_OCCURENCES, "Times the catch-up was abandoned.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncGiveUpBeforeStarting,         STAMTYPE_COUNTER, "/TM/VirtualSync/GiveUpBeforeStarting",STAMUNIT_OCCURENCES, "Times the catch-up was abandoned before even starting. (Typically debugging++.)");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncRun,                          STAMTYPE_COUNTER, "/TM/VirtualSync/Run", STAMUNIT_OCCURENCES, "Times the virtual sync timer queue was considered.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncRunRestart,                   STAMTYPE_COUNTER, "/TM/VirtualSync/Run/Restarts", STAMUNIT_OCCURENCES, "Times the clock was restarted after a run.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncRunStop,                      STAMTYPE_COUNTER, "/TM/VirtualSync/Run/Stop", STAMUNIT_OCCURENCES, "Times the clock was stopped when calculating the current time before examining the timers.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncRunStoppedAlready,            STAMTYPE_COUNTER, "/TM/VirtualSync/Run/StoppedAlready", STAMUNIT_OCCURENCES, "Times the clock was already stopped elsewhere (TMVirtualSyncGet).");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncRunSlack,                     STAMTYPE_PROFILE, "/TM/VirtualSync/Run/Slack", STAMUNIT_NS_PER_OCCURENCE, "The scheduling slack. (Catch-up handed out when running timers.)");
    for (unsigned i = 0; i < RT_ELEMENTS(pVM->tm.s.aVirtualSyncCatchUpPeriods); i++)
    {
        STAMR3RegisterF(pVM, &pVM->tm.s.aVirtualSyncCatchUpPeriods[i].u32Percentage, STAMTYPE_U32, STAMVISIBILITY_ALWAYS, STAMUNIT_PCT, "The catch-up percentage.", "/TM/VirtualSync/Periods/%u", i);
        STAMR3RegisterF(pVM, &pVM->tm.s.aStatVirtualSyncCatchupAdjust[i], STAMTYPE_COUNTER, STAMVISIBILITY_ALWAYS, STAMUNIT_OCCURENCES, "Times adjusted to this period.", "/TM/VirtualSync/Periods/%u/Adjust", i);
        STAMR3RegisterF(pVM, &pVM->tm.s.aStatVirtualSyncCatchupInitial[i], STAMTYPE_COUNTER, STAMVISIBILITY_ALWAYS, STAMUNIT_OCCURENCES, "Times started in this period.", "/TM/VirtualSync/Periods/%u/Initial", i);
        STAMR3RegisterF(pVM, &pVM->tm.s.aVirtualSyncCatchUpPeriods[i].u64Start, STAMTYPE_U64, STAMVISIBILITY_ALWAYS, STAMUNIT_NS, "Start of this period (lag).", "/TM/VirtualSync/Periods/%u/Start", i);
    }

#endif /* VBOX_WITH_STATISTICS */

    /*
     * Register info handlers.
     */
    DBGFR3InfoRegisterInternalEx(pVM, "timers",       "Dumps all timers. No arguments.",          tmR3TimerInfo, DBGFINFO_FLAGS_RUN_ON_EMT);
    DBGFR3InfoRegisterInternalEx(pVM, "activetimers", "Dumps all active timers. No arguments.",   tmR3TimerInfoActive, DBGFINFO_FLAGS_RUN_ON_EMT);
    DBGFR3InfoRegisterInternalEx(pVM, "clocks",       "Display the time of the various clocks.",  tmR3InfoClocks, DBGFINFO_FLAGS_RUN_ON_EMT);

    return VINF_SUCCESS;
}


/**
 * Initializes the per-VCPU TM.
 *
 * @returns VBox status code.
 * @param   pVM         The VM to operate on.
 */
VMMR3DECL(int) TMR3InitCPU(PVM pVM)
{
    LogFlow(("TMR3InitCPU\n"));
    return VINF_SUCCESS;
}


/**
 * Checks if the host CPU has a fixed TSC frequency.
 *
 * @returns true if it has, false if it hasn't.
 *
 * @remark  This test doesn't bother with very old CPUs that don't do power
 *          management or any other stuff that might influence the TSC rate.
 *          This isn't currently relevant.
 */
static bool tmR3HasFixedTSC(PVM pVM)
{
    if (ASMHasCpuId())
    {
        uint32_t uEAX, uEBX, uECX, uEDX;

        if (CPUMGetCPUVendor(pVM) == CPUMCPUVENDOR_AMD)
        {
            /*
             * AuthenticAMD - Check for APM support and that TscInvariant is set.
             *
             * This test isn't correct with respect to fixed/non-fixed TSC and
             * older models, but this isn't relevant since the result is currently
             * only used for making a decision on AMD-V models.
             */
            ASMCpuId(0x80000000, &uEAX, &uEBX, &uECX, &uEDX);
            if (uEAX >= 0x80000007)
            {
                PSUPGLOBALINFOPAGE pGip = g_pSUPGlobalInfoPage;

                ASMCpuId(0x80000007, &uEAX, &uEBX, &uECX, &uEDX);
                if (    (uEDX & X86_CPUID_AMD_ADVPOWER_EDX_TSCINVAR) /* TscInvariant */
                    &&  pGip->u32Mode == SUPGIPMODE_SYNC_TSC /* no fixed tsc if the gip timer is in async mode */)
                    return true;
            }
        }
        else if (CPUMGetCPUVendor(pVM) == CPUMCPUVENDOR_INTEL)
        {
            /*
             * GenuineIntel - Check the model number.
             *
             * This test is lacking in the same way and for the same reasons
             * as the AMD test above.
             */
            ASMCpuId(1, &uEAX, &uEBX, &uECX, &uEDX);
            unsigned uModel  = (uEAX >> 4) & 0x0f;
            unsigned uFamily = (uEAX >> 8) & 0x0f;
            if (uFamily == 0x0f)
                uFamily += (uEAX >> 20) & 0xff;
            if (uFamily >= 0x06)
                uModel += ((uEAX >> 16) & 0x0f) << 4;
            if (    (uFamily == 0x0f /*P4*/     && uModel >= 0x03)
                ||  (uFamily == 0x06 /*P2/P3*/  && uModel >= 0x0e))
                return true;
        }
    }
    return false;
}
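/* Worked example for the Intel branch above (a typical Core 2 leaf-1 value):
   CPUID(1).EAX = 0x000006F6 decodes as uModel = 0xF and uFamily = 0x6; the
   extended model bits (EAX[19:16]) are zero, so uModel stays 0xF, and since
   uFamily == 0x06 and uModel >= 0x0e the TSC is considered fixed. */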


/**
 * Calibrate the CPU tick.
 *
 * @returns Number of ticks per second.
 */
static uint64_t tmR3CalibrateTSC(PVM pVM)
{
    /*
     * Use the GIP when present.
     */
    uint64_t u64Hz;
    PSUPGLOBALINFOPAGE pGip = g_pSUPGlobalInfoPage;
    if (    pGip
        &&  pGip->u32Magic == SUPGLOBALINFOPAGE_MAGIC)
    {
        unsigned iCpu = pGip->u32Mode != SUPGIPMODE_ASYNC_TSC ? 0 : ASMGetApicId();
        if (iCpu >= RT_ELEMENTS(pGip->aCPUs))
            AssertReleaseMsgFailed(("iCpu=%d - the ApicId is too high. send VBox.log and hardware specs!\n", iCpu));
        else
        {
            if (tmR3HasFixedTSC(pVM))
                /* Sleep a bit to get a more reliable CpuHz value. */
                RTThreadSleep(32);
            else
            {
                /* Spin for 40ms to try push up the CPU frequency and get a more reliable CpuHz value. */
                const uint64_t u64 = RTTimeMilliTS();
                while ((RTTimeMilliTS() - u64) < 40 /*ms*/)
                    /* nothing */;
            }

            pGip = g_pSUPGlobalInfoPage;
            if (    pGip
                &&  pGip->u32Magic == SUPGLOBALINFOPAGE_MAGIC
                &&  (u64Hz = pGip->aCPUs[iCpu].u64CpuHz)
                &&  u64Hz != ~(uint64_t)0)
                return u64Hz;
        }
    }

    /* Call this once first to make sure it's initialized. */
    RTTimeNanoTS();

    /*
     * Yield the CPU to increase our chances of getting
     * a correct value.
     */
    RTThreadYield();                    /* Try avoid interruptions between TSC and NanoTS samplings. */
    static const unsigned s_auSleep[5] = { 50, 30, 30, 40, 40 };
    uint64_t au64Samples[5];
    unsigned i;
    for (i = 0; i < RT_ELEMENTS(au64Samples); i++)
    {
        unsigned cMillies;
        int cTries = 5;
        uint64_t u64Start = ASMReadTSC();
        uint64_t u64End;
        uint64_t StartTS = RTTimeNanoTS();
        uint64_t EndTS;
        do
        {
            RTThreadSleep(s_auSleep[i]);
            u64End = ASMReadTSC();
            EndTS = RTTimeNanoTS();
            cMillies = (unsigned)((EndTS - StartTS + 500000) / 1000000);
        } while (   cMillies == 0       /* the sleep may be interrupted... */
                 || (cMillies < 20 && --cTries > 0));
        uint64_t u64Diff = u64End - u64Start;

        au64Samples[i] = (u64Diff * 1000) / cMillies;
        AssertMsg(cTries > 0, ("cMillies=%d i=%d\n", cMillies, i));
    }

    /*
     * Discard the highest and lowest results and calculate the average.
     */
    unsigned iHigh = 0;
    unsigned iLow = 0;
    for (i = 1; i < RT_ELEMENTS(au64Samples); i++)
    {
        if (au64Samples[i] < au64Samples[iLow])
            iLow = i;
        if (au64Samples[i] > au64Samples[iHigh])
            iHigh = i;
    }
    au64Samples[iLow] = 0;
    au64Samples[iHigh] = 0;

    u64Hz = au64Samples[0];
    for (i = 1; i < RT_ELEMENTS(au64Samples); i++)
        u64Hz += au64Samples[i];
    u64Hz /= RT_ELEMENTS(au64Samples) - 2;

    return u64Hz;
}

/**
 * Finalizes the TM initialization.
 *
 * @returns VBox status code.
 * @param   pVM         The VM to operate on.
 */
VMMR3DECL(int) TMR3InitFinalize(PVM pVM)
{
    int rc;

    rc = PDMR3LdrGetSymbolRCLazy(pVM, NULL, "tmVirtualNanoTSBad", &pVM->tm.s.VirtualGetRawDataRC.pfnBad);
    AssertRCReturn(rc, rc);
    rc = PDMR3LdrGetSymbolRCLazy(pVM, NULL, "tmVirtualNanoTSRediscover", &pVM->tm.s.VirtualGetRawDataRC.pfnRediscover);
    AssertRCReturn(rc, rc);
    if (pVM->tm.s.pfnVirtualGetRawR3 == RTTimeNanoTSLFenceSync)
        rc = PDMR3LdrGetSymbolRCLazy(pVM, NULL, "RTTimeNanoTSLFenceSync", &pVM->tm.s.pfnVirtualGetRawRC);
    else if (pVM->tm.s.pfnVirtualGetRawR3 == RTTimeNanoTSLFenceAsync)
        rc = PDMR3LdrGetSymbolRCLazy(pVM, NULL, "RTTimeNanoTSLFenceAsync", &pVM->tm.s.pfnVirtualGetRawRC);
    else if (pVM->tm.s.pfnVirtualGetRawR3 == RTTimeNanoTSLegacySync)
        rc = PDMR3LdrGetSymbolRCLazy(pVM, NULL, "RTTimeNanoTSLegacySync", &pVM->tm.s.pfnVirtualGetRawRC);
    else if (pVM->tm.s.pfnVirtualGetRawR3 == RTTimeNanoTSLegacyAsync)
        rc = PDMR3LdrGetSymbolRCLazy(pVM, NULL, "RTTimeNanoTSLegacyAsync", &pVM->tm.s.pfnVirtualGetRawRC);
    else
        AssertFatalFailed();
    AssertRCReturn(rc, rc);

    rc = PDMR3LdrGetSymbolR0Lazy(pVM, NULL, "tmVirtualNanoTSBad", &pVM->tm.s.VirtualGetRawDataR0.pfnBad);
    AssertRCReturn(rc, rc);
    rc = PDMR3LdrGetSymbolR0Lazy(pVM, NULL, "tmVirtualNanoTSRediscover", &pVM->tm.s.VirtualGetRawDataR0.pfnRediscover);
    AssertRCReturn(rc, rc);
    if (pVM->tm.s.pfnVirtualGetRawR3 == RTTimeNanoTSLFenceSync)
        rc = PDMR3LdrGetSymbolR0Lazy(pVM, NULL, "RTTimeNanoTSLFenceSync", &pVM->tm.s.pfnVirtualGetRawR0);
    else if (pVM->tm.s.pfnVirtualGetRawR3 == RTTimeNanoTSLFenceAsync)
        rc = PDMR3LdrGetSymbolR0Lazy(pVM, NULL, "RTTimeNanoTSLFenceAsync", &pVM->tm.s.pfnVirtualGetRawR0);
    else if (pVM->tm.s.pfnVirtualGetRawR3 == RTTimeNanoTSLegacySync)
        rc = PDMR3LdrGetSymbolR0Lazy(pVM, NULL, "RTTimeNanoTSLegacySync", &pVM->tm.s.pfnVirtualGetRawR0);
    else if (pVM->tm.s.pfnVirtualGetRawR3 == RTTimeNanoTSLegacyAsync)
        rc = PDMR3LdrGetSymbolR0Lazy(pVM, NULL, "RTTimeNanoTSLegacyAsync", &pVM->tm.s.pfnVirtualGetRawR0);
    else
        AssertFatalFailed();
    AssertRCReturn(rc, rc);

    return VINF_SUCCESS;
}


/**
 * Applies relocations to data and code managed by this
 * component. This function will be called at init and
 * whenever the VMM needs to relocate itself inside the GC.
 *
 * @param   pVM         The VM.
 * @param   offDelta    Relocation delta relative to old location.
 */
VMMR3DECL(void) TMR3Relocate(PVM pVM, RTGCINTPTR offDelta)
{
    int rc;
    LogFlow(("TMR3Relocate\n"));

    pVM->tm.s.pvGIPRC = MMHyperR3ToRC(pVM, pVM->tm.s.pvGIPR3);
    pVM->tm.s.paTimerQueuesRC = MMHyperR3ToRC(pVM, pVM->tm.s.paTimerQueuesR3);
    pVM->tm.s.paTimerQueuesR0 = MMHyperR3ToR0(pVM, pVM->tm.s.paTimerQueuesR3);

    pVM->tm.s.VirtualGetRawDataRC.pu64Prev = MMHyperR3ToRC(pVM, (void *)&pVM->tm.s.u64VirtualRawPrev);
    AssertFatal(pVM->tm.s.VirtualGetRawDataRC.pu64Prev);
    rc = PDMR3LdrGetSymbolRCLazy(pVM, NULL, "tmVirtualNanoTSBad", &pVM->tm.s.VirtualGetRawDataRC.pfnBad);
    AssertFatalRC(rc);
    rc = PDMR3LdrGetSymbolRCLazy(pVM, NULL, "tmVirtualNanoTSRediscover", &pVM->tm.s.VirtualGetRawDataRC.pfnRediscover);
    AssertFatalRC(rc);

    if (pVM->tm.s.pfnVirtualGetRawR3 == RTTimeNanoTSLFenceSync)
        rc = PDMR3LdrGetSymbolRCLazy(pVM, NULL, "RTTimeNanoTSLFenceSync", &pVM->tm.s.pfnVirtualGetRawRC);
    else if (pVM->tm.s.pfnVirtualGetRawR3 == RTTimeNanoTSLFenceAsync)
        rc = PDMR3LdrGetSymbolRCLazy(pVM, NULL, "RTTimeNanoTSLFenceAsync", &pVM->tm.s.pfnVirtualGetRawRC);
    else if (pVM->tm.s.pfnVirtualGetRawR3 == RTTimeNanoTSLegacySync)
        rc = PDMR3LdrGetSymbolRCLazy(pVM, NULL, "RTTimeNanoTSLegacySync", &pVM->tm.s.pfnVirtualGetRawRC);
    else if (pVM->tm.s.pfnVirtualGetRawR3 == RTTimeNanoTSLegacyAsync)
        rc = PDMR3LdrGetSymbolRCLazy(pVM, NULL, "RTTimeNanoTSLegacyAsync", &pVM->tm.s.pfnVirtualGetRawRC);
    else
        AssertFatalFailed();
    AssertFatalRC(rc);

    /*
     * Iterate the timers updating the pVMRC pointers.
     */
    for (PTMTIMER pTimer = pVM->tm.s.pCreated; pTimer; pTimer = pTimer->pBigNext)
    {
        pTimer->pVMRC = pVM->pVMRC;
        pTimer->pVMR0 = pVM->pVMR0;
    }
}


/**
 * Terminates the TM.
 *
 * Termination means cleaning up and freeing all resources;
 * the VM itself is, at this point, powered off or suspended.
 *
 * @returns VBox status code.
 * @param   pVM         The VM to operate on.
 */
VMMR3DECL(int) TMR3Term(PVM pVM)
{
    AssertMsg(pVM->tm.s.offVM, ("bad init order!\n"));
    if (pVM->tm.s.pTimer)
    {
        int rc = RTTimerDestroy(pVM->tm.s.pTimer);
        AssertRC(rc);
        pVM->tm.s.pTimer = NULL;
    }

    return VINF_SUCCESS;
}


/**
 * Terminates the per-VCPU TM.
 *
 * Termination means cleaning up and freeing all resources;
 * the VM itself is, at this point, powered off or suspended.
 *
 * @returns VBox status code.
 * @param   pVM         The VM to operate on.
 */
VMMR3DECL(int) TMR3TermCPU(PVM pVM)
{
    return 0;
}


/**
 * The VM is being reset.
 *
 * For the TM component this means that a rescheduling is performed and
 * the FF is cleared, but without running the queues. We'll have to
 * check if this makes sense or not, but it seems like a good idea now....
 *
 * @param   pVM     VM handle.
 */
VMMR3DECL(void) TMR3Reset(PVM pVM)
{
    LogFlow(("TMR3Reset:\n"));
    VM_ASSERT_EMT(pVM);

    /*
     * Abort any pending catch up.
     * This isn't perfect.
     */
    if (pVM->tm.s.fVirtualSyncCatchUp)
    {
        const uint64_t offVirtualNow = TMVirtualGetEx(pVM, false /* don't check timers */);
        const uint64_t offVirtualSyncNow = TMVirtualSyncGetEx(pVM, false /* don't check timers */);
        if (pVM->tm.s.fVirtualSyncCatchUp)
        {
            STAM_PROFILE_ADV_STOP(&pVM->tm.s.StatVirtualSyncCatchup, c);

            const uint64_t offOld = pVM->tm.s.offVirtualSyncGivenUp;
            const uint64_t offNew = offVirtualNow - offVirtualSyncNow;
            Assert(offOld <= offNew);
            ASMAtomicXchgU64((uint64_t volatile *)&pVM->tm.s.offVirtualSyncGivenUp, offNew);
            ASMAtomicXchgU64((uint64_t volatile *)&pVM->tm.s.offVirtualSync, offNew);
            ASMAtomicXchgBool(&pVM->tm.s.fVirtualSyncCatchUp, false);
            LogRel(("TM: Aborting catch-up attempt on reset with a %RU64 ns lag; new total: %RU64 ns\n", offNew - offOld, offNew));
        }
    }

    /*
     * Process the queues.
     */
    for (int i = 0; i < TMCLOCK_MAX; i++)
        tmTimerQueueSchedule(pVM, &pVM->tm.s.paTimerQueuesR3[i]);
#ifdef VBOX_STRICT
    tmTimerQueuesSanityChecks(pVM, "TMR3Reset");
#endif
    VM_FF_CLEAR(pVM, VM_FF_TIMER);
}


/**
 * Resolve a builtin RC symbol.
 * Called by PDM when loading or relocating GC modules.
 *
 * @returns VBox status
 * @param   pVM             VM Handle.
 * @param   pszSymbol       Symbol to resolve.
 * @param   pRCPtrValue     Where to store the symbol value.
 * @remark  This has to work before TMR3Relocate() is called.
 */
VMMR3DECL(int) TMR3GetImportRC(PVM pVM, const char *pszSymbol, PRTRCPTR pRCPtrValue)
{
    if (!strcmp(pszSymbol, "g_pSUPGlobalInfoPage"))
        *pRCPtrValue = MMHyperR3ToRC(pVM, &pVM->tm.s.pvGIPRC);
    //else if (..)
    else
        return VERR_SYMBOL_NOT_FOUND;
    return VINF_SUCCESS;
}


/**
 * Execute state save operation.
 *
 * @returns VBox status code.
 * @param   pVM     VM Handle.
 * @param   pSSM    SSM operation handle.
 */
static DECLCALLBACK(int) tmR3Save(PVM pVM, PSSMHANDLE pSSM)
{
    LogFlow(("tmR3Save:\n"));
    Assert(!pVM->tm.s.fTSCTicking);
    Assert(!pVM->tm.s.fVirtualTicking);
    Assert(!pVM->tm.s.fVirtualSyncTicking);

    /*
     * Save the virtual clocks.
     */
    /* the virtual clock. */
    SSMR3PutU64(pSSM, TMCLOCK_FREQ_VIRTUAL);
    SSMR3PutU64(pSSM, pVM->tm.s.u64Virtual);

    /* the virtual timer synchronous clock. */
    SSMR3PutU64(pSSM, pVM->tm.s.u64VirtualSync);
    SSMR3PutU64(pSSM, pVM->tm.s.offVirtualSync);
    SSMR3PutU64(pSSM, pVM->tm.s.offVirtualSyncGivenUp);
    SSMR3PutU64(pSSM, pVM->tm.s.u64VirtualSyncCatchUpPrev);
    SSMR3PutBool(pSSM, pVM->tm.s.fVirtualSyncCatchUp);

    /* real time clock */
    SSMR3PutU64(pSSM, TMCLOCK_FREQ_REAL);

    /* the cpu tick clock. */
    SSMR3PutU64(pSSM, TMCpuTickGet(pVM));
    return SSMR3PutU64(pSSM, pVM->tm.s.cTSCTicksPerSecond);
}
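/* Resulting version 3 record layout, in the order written above: u64 virtual
   clock frequency, u64 virtual clock, u64 virtual sync clock, u64 virtual
   sync offset, u64 given-up offset, u64 previous catch-up timestamp, bool
   catch-up flag, u64 real clock frequency, u64 TSC value, and u64 TSC ticks
   per second. tmR3Load() below reads the fields back in the same order. */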


/**
 * Execute state load operation.
 *
 * @returns VBox status code.
 * @param   pVM         VM Handle.
 * @param   pSSM        SSM operation handle.
 * @param   u32Version  Data layout version.
 */
static DECLCALLBACK(int) tmR3Load(PVM pVM, PSSMHANDLE pSSM, uint32_t u32Version)
{
    LogFlow(("tmR3Load:\n"));
    Assert(!pVM->tm.s.fTSCTicking);
    Assert(!pVM->tm.s.fVirtualTicking);
    Assert(!pVM->tm.s.fVirtualSyncTicking);

    /*
     * Validate version.
     */
    if (u32Version != TM_SAVED_STATE_VERSION)
    {
        AssertMsgFailed(("tmR3Load: Invalid version u32Version=%d!\n", u32Version));
        return VERR_SSM_UNSUPPORTED_DATA_UNIT_VERSION;
    }

    /*
     * Load the virtual clock.
     */
    pVM->tm.s.fVirtualTicking = false;
    /* the virtual clock. */
    uint64_t u64Hz;
    int rc = SSMR3GetU64(pSSM, &u64Hz);
    if (RT_FAILURE(rc))
        return rc;
    if (u64Hz != TMCLOCK_FREQ_VIRTUAL)
    {
        AssertMsgFailed(("The virtual clock frequency differs! Saved: %RU64 Binary: %RU64\n",
                         u64Hz, TMCLOCK_FREQ_VIRTUAL));
        return VERR_SSM_VIRTUAL_CLOCK_HZ;
    }
    SSMR3GetU64(pSSM, &pVM->tm.s.u64Virtual);
    pVM->tm.s.u64VirtualOffset = 0;

    /* the virtual timer synchronous clock. */
    pVM->tm.s.fVirtualSyncTicking = false;
    uint64_t u64;
    SSMR3GetU64(pSSM, &u64);
    pVM->tm.s.u64VirtualSync = u64;
    SSMR3GetU64(pSSM, &u64);
    pVM->tm.s.offVirtualSync = u64;
    SSMR3GetU64(pSSM, &u64);
    pVM->tm.s.offVirtualSyncGivenUp = u64;
    SSMR3GetU64(pSSM, &u64);
    pVM->tm.s.u64VirtualSyncCatchUpPrev = u64;
    bool f;
    SSMR3GetBool(pSSM, &f);
    pVM->tm.s.fVirtualSyncCatchUp = f;

    /* the real clock */
    rc = SSMR3GetU64(pSSM, &u64Hz);
    if (RT_FAILURE(rc))
        return rc;
    if (u64Hz != TMCLOCK_FREQ_REAL)
    {
        AssertMsgFailed(("The real clock frequency differs! Saved: %RU64 Binary: %RU64\n",
                         u64Hz, TMCLOCK_FREQ_REAL));
        return VERR_SSM_VIRTUAL_CLOCK_HZ; /* misleading... */
    }

    /* the cpu tick clock. */
    pVM->tm.s.fTSCTicking = false;
    SSMR3GetU64(pSSM, &pVM->tm.s.u64TSC);
    rc = SSMR3GetU64(pSSM, &u64Hz);
    if (RT_FAILURE(rc))
        return rc;
    if (pVM->tm.s.fTSCUseRealTSC)
        pVM->tm.s.u64TSCOffset = 0; /** @todo TSC restore stuff and HWACC. */
    else
        pVM->tm.s.cTSCTicksPerSecond = u64Hz;
    LogRel(("TM: cTSCTicksPerSecond=%#RX64 (%RU64) fTSCVirtualized=%RTbool fTSCUseRealTSC=%RTbool (state load)\n",
            pVM->tm.s.cTSCTicksPerSecond, pVM->tm.s.cTSCTicksPerSecond, pVM->tm.s.fTSCVirtualized, pVM->tm.s.fTSCUseRealTSC));

    /*
     * Make sure timers get rescheduled immediately.
     */
    VM_FF_SET(pVM, VM_FF_TIMER);

    return VINF_SUCCESS;
}

/**
 * Internal TMR3TimerCreate worker.
 *
 * @returns VBox status code.
 * @param   pVM         The VM handle.
 * @param   enmClock    The timer clock.
 * @param   pszDesc     The timer description.
 * @param   ppTimer     Where to store the timer pointer on success.
 */
static int tmr3TimerCreate(PVM pVM, TMCLOCK enmClock, const char *pszDesc, PPTMTIMERR3 ppTimer)
{
    VM_ASSERT_EMT(pVM);

    /*
     * Allocate the timer.
     */
    PTMTIMERR3 pTimer = NULL;
    if (pVM->tm.s.pFree && VM_IS_EMT(pVM))
    {
        pTimer = pVM->tm.s.pFree;
        pVM->tm.s.pFree = pTimer->pBigNext;
        Log3(("TM: Recycling timer %p, new free head %p.\n", pTimer, pTimer->pBigNext));
    }

    if (!pTimer)
    {
        int rc = MMHyperAlloc(pVM, sizeof(*pTimer), 0, MM_TAG_TM, (void **)&pTimer);
        if (RT_FAILURE(rc))
            return rc;
        Log3(("TM: Allocated new timer %p\n", pTimer));
    }

    /*
     * Initialize it.
     */
    pTimer->u64Expire = 0;
    pTimer->enmClock = enmClock;
    pTimer->pVMR3 = pVM;
    pTimer->pVMR0 = pVM->pVMR0;
    pTimer->pVMRC = pVM->pVMRC;
    pTimer->enmState = TMTIMERSTATE_STOPPED;
    pTimer->offScheduleNext = 0;
    pTimer->offNext = 0;
    pTimer->offPrev = 0;
    pTimer->pszDesc = pszDesc;

    /* insert into the list of created timers. */
    pTimer->pBigPrev = NULL;
    pTimer->pBigNext = pVM->tm.s.pCreated;
    pVM->tm.s.pCreated = pTimer;
    if (pTimer->pBigNext)
        pTimer->pBigNext->pBigPrev = pTimer;
#ifdef VBOX_STRICT
    tmTimerQueuesSanityChecks(pVM, "tmR3TimerCreate");
#endif

    *ppTimer = pTimer;
    return VINF_SUCCESS;
}


/**
 * Creates a device timer.
 *
 * @returns VBox status.
 * @param   pVM             The VM to create the timer in.
 * @param   pDevIns         Device instance.
 * @param   enmClock        The clock to use on this timer.
 * @param   pfnCallback     Callback function.
 * @param   pszDesc         Pointer to description string which must stay around
 *                          until the timer is fully destroyed (i.e. a bit after TMTimerDestroy()).
 * @param   ppTimer         Where to store the timer on success.
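 *
 * A hedged usage sketch; the callback signature follows PFNTMTIMERDEV and the
 * devMyTimer name is illustrative, not part of this API:
 * @code
 *      static DECLCALLBACK(void) devMyTimer(PPDMDEVINS pDevIns, PTMTIMER pTimer);
 *      ...
 *      PTMTIMERR3 pTimer;
 *      rc = TMR3TimerCreateDevice(pVM, pDevIns, TMCLOCK_VIRTUAL_SYNC,
 *                                 devMyTimer, "My Device Timer", &pTimer);
 * @endcode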
 */
VMMR3DECL(int) TMR3TimerCreateDevice(PVM pVM, PPDMDEVINS pDevIns, TMCLOCK enmClock, PFNTMTIMERDEV pfnCallback, const char *pszDesc, PPTMTIMERR3 ppTimer)
{
    /*
     * Allocate and init stuff.
     */
    int rc = tmr3TimerCreate(pVM, enmClock, pszDesc, ppTimer);
    if (RT_SUCCESS(rc))
    {
        (*ppTimer)->enmType = TMTIMERTYPE_DEV;
        (*ppTimer)->u.Dev.pfnTimer = pfnCallback;
        (*ppTimer)->u.Dev.pDevIns = pDevIns;
        Log(("TM: Created device timer %p clock %d callback %p '%s'\n", (*ppTimer), enmClock, pfnCallback, pszDesc));
    }

    return rc;
}


/**
 * Creates a driver timer.
 *
 * @returns VBox status.
 * @param   pVM             The VM to create the timer in.
 * @param   pDrvIns         Driver instance.
 * @param   enmClock        The clock to use on this timer.
 * @param   pfnCallback     Callback function.
 * @param   pszDesc         Pointer to description string which must stay around
 *                          until the timer is fully destroyed (i.e. a bit after TMTimerDestroy()).
 * @param   ppTimer         Where to store the timer on success.
 */
VMMR3DECL(int) TMR3TimerCreateDriver(PVM pVM, PPDMDRVINS pDrvIns, TMCLOCK enmClock, PFNTMTIMERDRV pfnCallback, const char *pszDesc, PPTMTIMERR3 ppTimer)
{
    /*
     * Allocate and init stuff.
     */
    int rc = tmr3TimerCreate(pVM, enmClock, pszDesc, ppTimer);
    if (RT_SUCCESS(rc))
    {
        (*ppTimer)->enmType = TMTIMERTYPE_DRV;
        (*ppTimer)->u.Drv.pfnTimer = pfnCallback;
        (*ppTimer)->u.Drv.pDrvIns = pDrvIns;
        Log(("TM: Created driver timer %p clock %d callback %p '%s'\n", (*ppTimer), enmClock, pfnCallback, pszDesc));
    }

    return rc;
}


/**
 * Creates an internal timer.
 *
 * @returns VBox status.
 * @param   pVM             The VM to create the timer in.
 * @param   enmClock        The clock to use on this timer.
 * @param   pfnCallback     Callback function.
 * @param   pvUser          User argument to be passed to the callback.
 * @param   pszDesc         Pointer to description string which must stay around
 *                          until the timer is fully destroyed (i.e. a bit after TMTimerDestroy()).
 * @param   ppTimer         Where to store the timer on success.
 */
VMMR3DECL(int) TMR3TimerCreateInternal(PVM pVM, TMCLOCK enmClock, PFNTMTIMERINT pfnCallback, void *pvUser, const char *pszDesc, PPTMTIMERR3 ppTimer)
{
    /*
     * Allocate and init stuff.
     */
    PTMTIMER pTimer;
    int rc = tmr3TimerCreate(pVM, enmClock, pszDesc, &pTimer);
    if (RT_SUCCESS(rc))
    {
        pTimer->enmType = TMTIMERTYPE_INTERNAL;
        pTimer->u.Internal.pfnTimer = pfnCallback;
        pTimer->u.Internal.pvUser = pvUser;
        *ppTimer = pTimer;
        Log(("TM: Created internal timer %p clock %d callback %p '%s'\n", pTimer, enmClock, pfnCallback, pszDesc));
    }

    return rc;
}

1252/**
1253 * Creates an external timer.
1254 *
1255 * @returns Timer handle on success.
1256 * @returns NULL on failure.
1257 * @param pVM The VM to create the timer in.
1258 * @param enmClock The clock to use on this timer.
1259 * @param pfnCallback Callback function.
1260 * @param pvUser User argument.
1261 * @param pszDesc Pointer to description string which must stay around
1262 * until the timer is fully destroyed (i.e. a bit after TMTimerDestroy()).
1263 */
1264VMMR3DECL(PTMTIMERR3) TMR3TimerCreateExternal(PVM pVM, TMCLOCK enmClock, PFNTMTIMEREXT pfnCallback, void *pvUser, const char *pszDesc)
1265{
1266 /*
1267 * Allocate and init stuff.
1268 */
1269 PTMTIMERR3 pTimer;
1270 int rc = tmr3TimerCreate(pVM, enmClock, pszDesc, &pTimer);
1271 if (RT_SUCCESS(rc))
1272 {
1273 pTimer->enmType = TMTIMERTYPE_EXTERNAL;
1274 pTimer->u.External.pfnTimer = pfnCallback;
1275 pTimer->u.External.pvUser = pvUser;
1276 Log(("TM: Created external timer %p clock %d callback %p '%s'\n", pTimer, enmClock, pfnCallback, pszDesc));
1277 return pTimer;
1278 }
1279
1280 return NULL;
1281}
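

/*
 * Illustrative sketch: unlike the other creators this API returns the handle
 * directly and NULL on failure, so the status code is lost. myExtCallback
 * and pvMyCtx are hypothetical names.
 */
#if 0
    PTMTIMERR3 pTimer = TMR3TimerCreateExternal(pVM, TMCLOCK_REAL, myExtCallback,
                                                pvMyCtx, "My external timer");
    if (!pTimer)
        return VERR_NO_MEMORY; /* best guess; the real cause isn't propagated */
#endif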


/**
 * Destroy all timers owned by a device.
 *
 * @returns VBox status.
 * @param   pVM             VM handle.
 * @param   pDevIns         Device whose timers should be destroyed.
 */
VMMR3DECL(int) TMR3TimerDestroyDevice(PVM pVM, PPDMDEVINS pDevIns)
{
    LogFlow(("TMR3TimerDestroyDevice: pDevIns=%p\n", pDevIns));
    if (!pDevIns)
        return VERR_INVALID_PARAMETER;

    PTMTIMER pCur = pVM->tm.s.pCreated;
    while (pCur)
    {
        PTMTIMER pDestroy = pCur;
        pCur = pDestroy->pBigNext;
        if (    pDestroy->enmType == TMTIMERTYPE_DEV
            &&  pDestroy->u.Dev.pDevIns == pDevIns)
        {
            int rc = TMTimerDestroy(pDestroy);
            AssertRC(rc);
        }
    }
    LogFlow(("TMR3TimerDestroyDevice: returns VINF_SUCCESS\n"));
    return VINF_SUCCESS;
}


/**
 * Destroy all timers owned by a driver.
 *
 * @returns VBox status.
 * @param   pVM             VM handle.
 * @param   pDrvIns         Driver whose timers should be destroyed.
 */
VMMR3DECL(int) TMR3TimerDestroyDriver(PVM pVM, PPDMDRVINS pDrvIns)
{
    LogFlow(("TMR3TimerDestroyDriver: pDrvIns=%p\n", pDrvIns));
    if (!pDrvIns)
        return VERR_INVALID_PARAMETER;

    PTMTIMER pCur = pVM->tm.s.pCreated;
    while (pCur)
    {
        PTMTIMER pDestroy = pCur;
        pCur = pDestroy->pBigNext;
        if (    pDestroy->enmType == TMTIMERTYPE_DRV
            &&  pDestroy->u.Drv.pDrvIns == pDrvIns)
        {
            int rc = TMTimerDestroy(pDestroy);
            AssertRC(rc);
        }
    }
    LogFlow(("TMR3TimerDestroyDriver: returns VINF_SUCCESS\n"));
    return VINF_SUCCESS;
}


/**
 * Internal function for getting the clock time.
 *
 * @returns clock time.
 * @param   pVM         The VM handle.
 * @param   enmClock    The clock.
 */
DECLINLINE(uint64_t) tmClock(PVM pVM, TMCLOCK enmClock)
{
    switch (enmClock)
    {
        case TMCLOCK_VIRTUAL:       return TMVirtualGet(pVM);
        case TMCLOCK_VIRTUAL_SYNC:  return TMVirtualSyncGet(pVM);
        case TMCLOCK_REAL:          return TMRealGet(pVM);
        case TMCLOCK_TSC:           return TMCpuTickGet(pVM);
        default:
            AssertMsgFailed(("enmClock=%d\n", enmClock));
            return ~(uint64_t)0;
    }
}


/**
 * Checks if the given queue has one or more expired timers.
 *
 * @returns true / false.
 *
 * @param   pVM         The VM handle.
 * @param   enmClock    The clock of the queue to check.
 */
DECLINLINE(bool) tmR3HasExpiredTimer(PVM pVM, TMCLOCK enmClock)
{
    const uint64_t u64Expire = pVM->tm.s.CTX_SUFF(paTimerQueues)[enmClock].u64Expire;
    return u64Expire != INT64_MAX && u64Expire <= tmClock(pVM, enmClock);
}


/**
 * Checks for expired timers in all the queues.
 *
 * @returns true / false.
 * @param   pVM     The VM handle.
 */
DECLINLINE(bool) tmR3AnyExpiredTimers(PVM pVM)
{
    /*
     * Combine the time calculation for the first two queues since we're not
     * on EMT and TMVirtualSyncGet only permits EMT.
     */
    uint64_t u64Now = TMVirtualGet(pVM);
    if (pVM->tm.s.CTX_SUFF(paTimerQueues)[TMCLOCK_VIRTUAL].u64Expire <= u64Now)
        return true;
    u64Now = pVM->tm.s.fVirtualSyncTicking
           ? u64Now - pVM->tm.s.offVirtualSync
           : pVM->tm.s.u64VirtualSync;
    if (pVM->tm.s.CTX_SUFF(paTimerQueues)[TMCLOCK_VIRTUAL_SYNC].u64Expire <= u64Now)
        return true;

    /*
     * The remaining timers.
     */
    if (tmR3HasExpiredTimer(pVM, TMCLOCK_REAL))
        return true;
    if (tmR3HasExpiredTimer(pVM, TMCLOCK_TSC))
        return true;
    return false;
}


/**
 * Scheduling timer callback.
 *
 * @param   pTimer      Timer handle.
 * @param   pvUser      VM handle.
 * @thread  Timer thread.
 *
 * @remark  We cannot do the scheduling and queues running from a timer handler
 *          since it's not executing in EMT, and even if it was it would be async
 *          and we wouldn't know the state of affairs.
 *          So, we'll just raise the timer FF and force any REM execution to exit.
 */
static DECLCALLBACK(void) tmR3TimerCallback(PRTTIMER pTimer, void *pvUser, uint64_t /*iTick*/)
{
    PVM pVM = (PVM)pvUser;
    AssertCompile(TMCLOCK_MAX == 4);
#ifdef DEBUG_Sander /* very annoying, keep it private. */
    if (VM_FF_ISSET(pVM, VM_FF_TIMER))
        Log(("tmR3TimerCallback: timer event still pending!!\n"));
#endif
    if (    !VM_FF_ISSET(pVM, VM_FF_TIMER)
        &&  (   pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL_SYNC].offSchedule
             || pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL].offSchedule
             || pVM->tm.s.paTimerQueuesR3[TMCLOCK_REAL].offSchedule
             || pVM->tm.s.paTimerQueuesR3[TMCLOCK_TSC].offSchedule
             || tmR3AnyExpiredTimers(pVM)
            )
       )
    {
        VM_FF_SET(pVM, VM_FF_TIMER);
        REMR3NotifyTimerPending(pVM);
        VMR3NotifyFF(pVM, true);
        STAM_COUNTER_INC(&pVM->tm.s.StatTimerCallbackSetFF);
    }
}
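

/*
 * Illustrative sketch (not the actual EM code): on its way back to guest
 * execution, EMT reacts to the VM_FF_TIMER flag raised above roughly like
 * this; TMR3TimerQueuesDo below is where the scheduling and dispatching
 * actually happens.
 */
#if 0
    if (VM_FF_ISSET(pVM, VM_FF_TIMER))
        TMR3TimerQueuesDo(pVM); /* runs expired timers and clears VM_FF_TIMER */
#endif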


/**
 * Schedules and runs any pending timers.
 *
 * This is normally called from a forced action handler in EMT.
 *
 * @param   pVM     The VM to run the timers for.
 */
VMMR3DECL(void) TMR3TimerQueuesDo(PVM pVM)
{
    STAM_PROFILE_START(&pVM->tm.s.StatDoQueues, a);
    Log2(("TMR3TimerQueuesDo:\n"));

    /*
     * Process the queues.
     */
    AssertCompile(TMCLOCK_MAX == 4);

    /* TMCLOCK_VIRTUAL_SYNC */
    STAM_PROFILE_ADV_START(&pVM->tm.s.StatDoQueuesSchedule, s1);
    tmTimerQueueSchedule(pVM, &pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL_SYNC]);
    STAM_PROFILE_ADV_SUSPEND(&pVM->tm.s.StatDoQueuesSchedule, s1);
    STAM_PROFILE_ADV_START(&pVM->tm.s.StatDoQueuesRun, r1);
    tmR3TimerQueueRunVirtualSync(pVM);
    STAM_PROFILE_ADV_SUSPEND(&pVM->tm.s.StatDoQueuesRun, r1);

    /* TMCLOCK_VIRTUAL */
    STAM_PROFILE_ADV_RESUME(&pVM->tm.s.StatDoQueuesSchedule, s1);
    tmTimerQueueSchedule(pVM, &pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL]);
    STAM_PROFILE_ADV_SUSPEND(&pVM->tm.s.StatDoQueuesSchedule, s2);
    STAM_PROFILE_ADV_RESUME(&pVM->tm.s.StatDoQueuesRun, r1);
    tmR3TimerQueueRun(pVM, &pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL]);
    STAM_PROFILE_ADV_SUSPEND(&pVM->tm.s.StatDoQueuesRun, r2);

#if 0 /** @todo if ever used, remove this and fix the stam prefixes on TMCLOCK_REAL below. */
    /* TMCLOCK_TSC */
    STAM_PROFILE_ADV_RESUME(&pVM->tm.s.StatDoQueuesSchedule, s2);
    tmTimerQueueSchedule(pVM, &pVM->tm.s.paTimerQueuesR3[TMCLOCK_TSC]);
    STAM_PROFILE_ADV_SUSPEND(&pVM->tm.s.StatDoQueuesSchedule, s3);
    STAM_PROFILE_ADV_RESUME(&pVM->tm.s.StatDoQueuesRun, r2);
    tmR3TimerQueueRun(pVM, &pVM->tm.s.paTimerQueuesR3[TMCLOCK_TSC]);
    STAM_PROFILE_ADV_SUSPEND(&pVM->tm.s.StatDoQueuesRun, r3);
#endif

    /* TMCLOCK_REAL */
    STAM_PROFILE_ADV_RESUME(&pVM->tm.s.StatDoQueuesSchedule, s2);
    tmTimerQueueSchedule(pVM, &pVM->tm.s.paTimerQueuesR3[TMCLOCK_REAL]);
    STAM_PROFILE_ADV_STOP(&pVM->tm.s.StatDoQueuesSchedule, s3);
    STAM_PROFILE_ADV_RESUME(&pVM->tm.s.StatDoQueuesRun, r2);
    tmR3TimerQueueRun(pVM, &pVM->tm.s.paTimerQueuesR3[TMCLOCK_REAL]);
    STAM_PROFILE_ADV_STOP(&pVM->tm.s.StatDoQueuesRun, r3);

    /* done. */
    VM_FF_CLEAR(pVM, VM_FF_TIMER);

#ifdef VBOX_STRICT
    /* check that we didn't screw up. */
    tmTimerQueuesSanityChecks(pVM, "TMR3TimerQueuesDo");
#endif

    Log2(("TMR3TimerQueuesDo: returns void\n"));
    STAM_PROFILE_STOP(&pVM->tm.s.StatDoQueues, a);
}


/**
 * Schedules and runs any pending timers in the specified queue.
 *
 * This is normally called from a forced action handler in EMT.
 *
 * @param   pVM     The VM to run the timers for.
 * @param   pQueue  The queue to run.
 */
static void tmR3TimerQueueRun(PVM pVM, PTMTIMERQUEUE pQueue)
{
    VM_ASSERT_EMT(pVM);

    /*
     * Run timers.
     *
     * We check the clock once and run all timers which are ACTIVE
     * and have an expire time less or equal to the time we read.
     *
     * N.B. A generic unlink must be applied since other threads
     *      are allowed to mess with any active timer at any time.
     *      However, we only allow EMT to handle EXPIRED_PENDING
     *      timers, thus enabling the timer handler function to
     *      arm the timer again.
     */
    PTMTIMER pNext = TMTIMER_GET_HEAD(pQueue);
    if (!pNext)
        return;
    const uint64_t u64Now = tmClock(pVM, pQueue->enmClock);
    while (pNext && pNext->u64Expire <= u64Now)
    {
        PTMTIMER pTimer = pNext;
        pNext = TMTIMER_GET_NEXT(pTimer);
        Log2(("tmR3TimerQueueRun: pTimer=%p:{.enmState=%s, .enmClock=%d, .enmType=%d, u64Expire=%llx (now=%llx) .pszDesc=%s}\n",
              pTimer, tmTimerState(pTimer->enmState), pTimer->enmClock, pTimer->enmType, pTimer->u64Expire, u64Now, pTimer->pszDesc));
        bool fRc;
        TM_TRY_SET_STATE(pTimer, TMTIMERSTATE_EXPIRED, TMTIMERSTATE_ACTIVE, fRc);
        if (fRc)
        {
            Assert(!pTimer->offScheduleNext); /* this can trigger falsely */

            /* unlink */
            const PTMTIMER pPrev = TMTIMER_GET_PREV(pTimer);
            if (pPrev)
                TMTIMER_SET_NEXT(pPrev, pNext);
            else
            {
                TMTIMER_SET_HEAD(pQueue, pNext);
                pQueue->u64Expire = pNext ? pNext->u64Expire : INT64_MAX;
            }
            if (pNext)
                TMTIMER_SET_PREV(pNext, pPrev);
            pTimer->offNext = 0;
            pTimer->offPrev = 0;

            /* fire */
            switch (pTimer->enmType)
            {
                case TMTIMERTYPE_DEV:       pTimer->u.Dev.pfnTimer(pTimer->u.Dev.pDevIns, pTimer); break;
                case TMTIMERTYPE_DRV:       pTimer->u.Drv.pfnTimer(pTimer->u.Drv.pDrvIns, pTimer); break;
                case TMTIMERTYPE_INTERNAL:  pTimer->u.Internal.pfnTimer(pVM, pTimer, pTimer->u.Internal.pvUser); break;
                case TMTIMERTYPE_EXTERNAL:  pTimer->u.External.pfnTimer(pTimer->u.External.pvUser); break;
                default:
                    AssertMsgFailed(("Invalid timer type %d (%s)\n", pTimer->enmType, pTimer->pszDesc));
                    break;
            }

            /* change the state if it wasn't changed already in the handler. */
            TM_TRY_SET_STATE(pTimer, TMTIMERSTATE_STOPPED, TMTIMERSTATE_EXPIRED, fRc);
            Log2(("tmR3TimerQueueRun: new state %s\n", tmTimerState(pTimer->enmState)));
        }
    } /* run loop */
}


/**
 * Schedules and runs any pending timers in the timer queue for the
 * synchronous virtual clock.
 *
 * This scheduling is a bit different from the other queues as it needs to
 * implement the special requirements of the timer synchronous virtual
 * clock, hence this second queue-run function.
 *
 * @param   pVM     The VM to run the timers for.
 */
static void tmR3TimerQueueRunVirtualSync(PVM pVM)
{
    PTMTIMERQUEUE const pQueue = &pVM->tm.s.paTimerQueuesR3[TMCLOCK_VIRTUAL_SYNC];
    VM_ASSERT_EMT(pVM);

    /*
     * Any timers?
     */
    PTMTIMER pNext = TMTIMER_GET_HEAD(pQueue);
    if (RT_UNLIKELY(!pNext))
    {
        Assert(pVM->tm.s.fVirtualSyncTicking || !pVM->tm.s.fVirtualTicking);
        return;
    }
    STAM_COUNTER_INC(&pVM->tm.s.StatVirtualSyncRun);

    /*
     * Calculate the time frame for which we will dispatch timers.
     *
     * We use a time frame ranging from the current sync time (which is most likely
     * the same as the head timer) to some configurable period (100000 ns) later,
     * capped by the current virtual time. This period might also need to be
     * restricted by the catch-up rate so frequent calls to this function won't
     * accelerate the time too much; however, that will be implemented at a later
     * point if necessary.
     *
     * Without this frame we would 1) have to run timers much more frequently
     * and 2) lag behind at a steady rate.
     */
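    /*
     * Worked example (illustrative figures): say the sync clock stands at
     * 1,000,000 ns, the head timer expires at 1,000,050 ns, the virtual clock
     * reads 1,200,000 ns and nothing has been given up. With the default
     * 100000 ns slack the frame becomes [1,000,000..1,100,000], so every
     * timer expiring at or before 1,100,000 ns is dispatched in this call.
     */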
    const uint64_t u64VirtualNow = TMVirtualGetEx(pVM, false /* don't check timers */);
    uint64_t u64Now;
    if (!pVM->tm.s.fVirtualSyncTicking)
    {
        STAM_COUNTER_INC(&pVM->tm.s.StatVirtualSyncRunStoppedAlready);
        u64Now = pVM->tm.s.u64VirtualSync;
        Assert(u64Now <= pNext->u64Expire);
    }
    else
    {
        /* Calc 'now'. (update order doesn't really matter here) */
        uint64_t off = pVM->tm.s.offVirtualSync;
        if (pVM->tm.s.fVirtualSyncCatchUp)
        {
            uint64_t u64Delta = u64VirtualNow - pVM->tm.s.u64VirtualSyncCatchUpPrev;
            if (RT_LIKELY(!(u64Delta >> 32)))
            {
                uint64_t u64Sub = ASMMultU64ByU32DivByU32(u64Delta, pVM->tm.s.u32VirtualSyncCatchUpPercentage, 100);
                if (off > u64Sub + pVM->tm.s.offVirtualSyncGivenUp)
                {
                    off -= u64Sub;
                    Log4(("TM: %RU64/%RU64: sub %RU64 (run)\n", u64VirtualNow - off, off - pVM->tm.s.offVirtualSyncGivenUp, u64Sub));
                }
                else
                {
                    STAM_PROFILE_ADV_STOP(&pVM->tm.s.StatVirtualSyncCatchup, c);
                    ASMAtomicXchgBool(&pVM->tm.s.fVirtualSyncCatchUp, false);
                    off = pVM->tm.s.offVirtualSyncGivenUp;
                    Log4(("TM: %RU64/0: caught up (run)\n", u64VirtualNow));
                }
            }
            ASMAtomicXchgU64(&pVM->tm.s.offVirtualSync, off);
            pVM->tm.s.u64VirtualSyncCatchUpPrev = u64VirtualNow;
        }
        u64Now = u64VirtualNow - off;

        /* Check if stopped by expired timer. */
        if (u64Now >= pNext->u64Expire)
        {
            STAM_COUNTER_INC(&pVM->tm.s.StatVirtualSyncRunStop);
            u64Now = pNext->u64Expire;
            ASMAtomicXchgU64(&pVM->tm.s.u64VirtualSync, u64Now);
            ASMAtomicXchgBool(&pVM->tm.s.fVirtualSyncTicking, false);
            Log4(("TM: %RU64/%RU64: exp tmr (run)\n", u64Now, u64VirtualNow - u64Now - pVM->tm.s.offVirtualSyncGivenUp));
        }
    }

    /* calc end of frame. */
    uint64_t u64Max = u64Now + pVM->tm.s.u32VirtualSyncScheduleSlack;
    if (u64Max > u64VirtualNow - pVM->tm.s.offVirtualSyncGivenUp)
        u64Max = u64VirtualNow - pVM->tm.s.offVirtualSyncGivenUp;

    /* assert sanity */
    Assert(u64Now <= u64VirtualNow - pVM->tm.s.offVirtualSyncGivenUp);
    Assert(u64Max <= u64VirtualNow - pVM->tm.s.offVirtualSyncGivenUp);
    Assert(u64Now <= u64Max);

    /*
     * Process the expired timers moving the clock along as we progress.
     */
#ifdef VBOX_STRICT
    uint64_t u64Prev = u64Now; NOREF(u64Prev);
#endif
    while (pNext && pNext->u64Expire <= u64Max)
    {
        PTMTIMER pTimer = pNext;
        pNext = TMTIMER_GET_NEXT(pTimer);
        Log2(("tmR3TimerQueueRun: pTimer=%p:{.enmState=%s, .enmClock=%d, .enmType=%d, u64Expire=%llx (now=%llx) .pszDesc=%s}\n",
              pTimer, tmTimerState(pTimer->enmState), pTimer->enmClock, pTimer->enmType, pTimer->u64Expire, u64Now, pTimer->pszDesc));
        bool fRc;
        TM_TRY_SET_STATE(pTimer, TMTIMERSTATE_EXPIRED, TMTIMERSTATE_ACTIVE, fRc);
        if (fRc)
        {
            /* unlink */
            const PTMTIMER pPrev = TMTIMER_GET_PREV(pTimer);
            if (pPrev)
                TMTIMER_SET_NEXT(pPrev, pNext);
            else
            {
                TMTIMER_SET_HEAD(pQueue, pNext);
                pQueue->u64Expire = pNext ? pNext->u64Expire : INT64_MAX;
            }
            if (pNext)
                TMTIMER_SET_PREV(pNext, pPrev);
            pTimer->offNext = 0;
            pTimer->offPrev = 0;

            /* advance the clock - don't permit timers to be out of order or armed in the 'past'. */
#ifdef VBOX_STRICT
            AssertMsg(pTimer->u64Expire >= u64Prev, ("%RU64 < %RU64 %s\n", pTimer->u64Expire, u64Prev, pTimer->pszDesc));
            u64Prev = pTimer->u64Expire;
#endif
            ASMAtomicXchgSize(&pVM->tm.s.fVirtualSyncTicking, false);
            ASMAtomicXchgU64(&pVM->tm.s.u64VirtualSync, pTimer->u64Expire);

            /* fire */
            switch (pTimer->enmType)
            {
                case TMTIMERTYPE_DEV:       pTimer->u.Dev.pfnTimer(pTimer->u.Dev.pDevIns, pTimer); break;
                case TMTIMERTYPE_DRV:       pTimer->u.Drv.pfnTimer(pTimer->u.Drv.pDrvIns, pTimer); break;
                case TMTIMERTYPE_INTERNAL:  pTimer->u.Internal.pfnTimer(pVM, pTimer, pTimer->u.Internal.pvUser); break;
                case TMTIMERTYPE_EXTERNAL:  pTimer->u.External.pfnTimer(pTimer->u.External.pvUser); break;
                default:
                    AssertMsgFailed(("Invalid timer type %d (%s)\n", pTimer->enmType, pTimer->pszDesc));
                    break;
            }

            /* change the state if it wasn't changed already in the handler. */
            TM_TRY_SET_STATE(pTimer, TMTIMERSTATE_STOPPED, TMTIMERSTATE_EXPIRED, fRc);
            Log2(("tmR3TimerQueueRun: new state %s\n", tmTimerState(pTimer->enmState)));
        }
    } /* run loop */

    /*
     * Restart the clock if it was stopped to serve any timers,
     * and start/adjust catch-up if necessary.
     */
    if (    !pVM->tm.s.fVirtualSyncTicking
        &&  pVM->tm.s.fVirtualTicking)
    {
        STAM_COUNTER_INC(&pVM->tm.s.StatVirtualSyncRunRestart);

        /* calc the slack we've handed out. */
        const uint64_t u64VirtualNow2 = TMVirtualGetEx(pVM, false /* don't check timers */);
        Assert(u64VirtualNow2 >= u64VirtualNow);
        AssertMsg(pVM->tm.s.u64VirtualSync >= u64Now, ("%RU64 < %RU64\n", pVM->tm.s.u64VirtualSync, u64Now));
        const uint64_t offSlack = pVM->tm.s.u64VirtualSync - u64Now;
        STAM_STATS({
            if (offSlack)
            {
                PSTAMPROFILE p = &pVM->tm.s.StatVirtualSyncRunSlack;
                p->cPeriods++;
                p->cTicks += offSlack;
                if (p->cTicksMax < offSlack) p->cTicksMax = offSlack;
                if (p->cTicksMin > offSlack) p->cTicksMin = offSlack;
            }
        });

        /* Let the time run a little bit while we were busy running timers(?). */
        uint64_t u64Elapsed;
#define MAX_ELAPSED 30000 /* ns */
        if (offSlack > MAX_ELAPSED)
            u64Elapsed = 0;
        else
        {
            u64Elapsed = u64VirtualNow2 - u64VirtualNow;
            if (u64Elapsed > MAX_ELAPSED)
                u64Elapsed = MAX_ELAPSED;
            u64Elapsed = u64Elapsed > offSlack ? u64Elapsed - offSlack : 0;
        }
#undef MAX_ELAPSED
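
        /* Worked example (illustrative figures): if running the timers took
           u64VirtualNow2 - u64VirtualNow = 50000 ns and we handed out
           offSlack = 20000 ns, the elapsed time is first capped at 30000 ns
           and then reduced by the slack, so 10000 ns of the busy period is
           allowed to count as clock progress. */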

        /* Calc the current offset. */
        uint64_t offNew = u64VirtualNow2 - pVM->tm.s.u64VirtualSync - u64Elapsed;
        Assert(!(offNew & RT_BIT_64(63)));
        uint64_t offLag = offNew - pVM->tm.s.offVirtualSyncGivenUp;
        Assert(!(offLag & RT_BIT_64(63)));

        /*
         * Deal with starting, adjusting and stopping catchup.
         */
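        /* Illustrative figures (the actual thresholds are configurable): with
           a stop threshold of, say, 100 us and a give-up threshold of 60 s, a
           60 us lag would stop catch-up, a 10 ms lag would pick the matching
           percentage from aVirtualSyncCatchUpPeriods, and a lag beyond 60 s
           would abandon the attempt altogether. */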
        if (pVM->tm.s.fVirtualSyncCatchUp)
        {
            if (offLag <= pVM->tm.s.u64VirtualSyncCatchUpStopThreshold)
            {
                /* stop */
                STAM_PROFILE_ADV_STOP(&pVM->tm.s.StatVirtualSyncCatchup, c);
                ASMAtomicXchgBool(&pVM->tm.s.fVirtualSyncCatchUp, false);
                Log4(("TM: %RU64/%RU64: caught up\n", u64VirtualNow2 - offNew, offLag));
            }
            else if (offLag <= pVM->tm.s.u64VirtualSyncCatchUpGiveUpThreshold)
            {
                /* adjust */
                unsigned i = 0;
                while (    i + 1 < RT_ELEMENTS(pVM->tm.s.aVirtualSyncCatchUpPeriods)
                       &&  offLag >= pVM->tm.s.aVirtualSyncCatchUpPeriods[i + 1].u64Start)
                    i++;
                if (pVM->tm.s.u32VirtualSyncCatchUpPercentage < pVM->tm.s.aVirtualSyncCatchUpPeriods[i].u32Percentage)
                {
                    STAM_COUNTER_INC(&pVM->tm.s.aStatVirtualSyncCatchupAdjust[i]);
                    ASMAtomicXchgU32(&pVM->tm.s.u32VirtualSyncCatchUpPercentage, pVM->tm.s.aVirtualSyncCatchUpPeriods[i].u32Percentage);
                    Log4(("TM: %RU64/%RU64: adj %u%%\n", u64VirtualNow2 - offNew, offLag, pVM->tm.s.u32VirtualSyncCatchUpPercentage));
                }
                pVM->tm.s.u64VirtualSyncCatchUpPrev = u64VirtualNow2;
            }
            else
            {
                /* give up */
                STAM_COUNTER_INC(&pVM->tm.s.StatVirtualSyncGiveUp);
                STAM_PROFILE_ADV_STOP(&pVM->tm.s.StatVirtualSyncCatchup, c);
                ASMAtomicXchgU64((uint64_t volatile *)&pVM->tm.s.offVirtualSyncGivenUp, offNew);
                ASMAtomicXchgBool(&pVM->tm.s.fVirtualSyncCatchUp, false);
                Log4(("TM: %RU64/%RU64: give up %u%%\n", u64VirtualNow2 - offNew, offLag, pVM->tm.s.u32VirtualSyncCatchUpPercentage));
                LogRel(("TM: Giving up catch-up attempt at a %RU64 ns lag; new total: %RU64 ns\n", offLag, offNew));
            }
        }
        else if (offLag >= pVM->tm.s.aVirtualSyncCatchUpPeriods[0].u64Start)
        {
            if (offLag <= pVM->tm.s.u64VirtualSyncCatchUpGiveUpThreshold)
            {
                /* start */
                STAM_PROFILE_ADV_START(&pVM->tm.s.StatVirtualSyncCatchup, c);
                unsigned i = 0;
                while (    i + 1 < RT_ELEMENTS(pVM->tm.s.aVirtualSyncCatchUpPeriods)
                       &&  offLag >= pVM->tm.s.aVirtualSyncCatchUpPeriods[i + 1].u64Start)
                    i++;
                STAM_COUNTER_INC(&pVM->tm.s.aStatVirtualSyncCatchupInitial[i]);
                ASMAtomicXchgU32(&pVM->tm.s.u32VirtualSyncCatchUpPercentage, pVM->tm.s.aVirtualSyncCatchUpPeriods[i].u32Percentage);
                ASMAtomicXchgBool(&pVM->tm.s.fVirtualSyncCatchUp, true);
                Log4(("TM: %RU64/%RU64: catch-up %u%%\n", u64VirtualNow2 - offNew, offLag, pVM->tm.s.u32VirtualSyncCatchUpPercentage));
            }
            else
            {
                /* don't bother */
                STAM_COUNTER_INC(&pVM->tm.s.StatVirtualSyncGiveUpBeforeStarting);
                ASMAtomicXchgU64((uint64_t volatile *)&pVM->tm.s.offVirtualSyncGivenUp, offNew);
                Log4(("TM: %RU64/%RU64: give up\n", u64VirtualNow2 - offNew, offLag));
                LogRel(("TM: Not bothering to attempt catching up a %RU64 ns lag; new total: %RU64 ns\n", offLag, offNew));
            }
        }

        /*
         * Update the offset and restart the clock.
         */
        Assert(!(offNew & RT_BIT_64(63)));
        ASMAtomicXchgU64(&pVM->tm.s.offVirtualSync, offNew);
        ASMAtomicXchgBool(&pVM->tm.s.fVirtualSyncTicking, true);
    }
}


/**
 * Saves the state of a timer to a saved state.
 *
 * @returns VBox status.
 * @param   pTimer          Timer to save.
 * @param   pSSM            Save State Manager handle.
 */
VMMR3DECL(int) TMR3TimerSave(PTMTIMERR3 pTimer, PSSMHANDLE pSSM)
{
    LogFlow(("TMR3TimerSave: pTimer=%p:{enmState=%s, .pszDesc={%s}} pSSM=%p\n", pTimer, tmTimerState(pTimer->enmState), pTimer->pszDesc, pSSM));
    switch (pTimer->enmState)
    {
        case TMTIMERSTATE_STOPPED:
        case TMTIMERSTATE_PENDING_STOP:
        case TMTIMERSTATE_PENDING_STOP_SCHEDULE:
            return SSMR3PutU8(pSSM, (uint8_t)TMTIMERSTATE_PENDING_STOP);

        case TMTIMERSTATE_PENDING_SCHEDULE_SET_EXPIRE:
        case TMTIMERSTATE_PENDING_RESCHEDULE_SET_EXPIRE:
            AssertMsgFailed(("u64Expire is being updated! (%s)\n", pTimer->pszDesc));
            if (!RTThreadYield())
                RTThreadSleep(1);
            /* fall thru */
        case TMTIMERSTATE_ACTIVE:
        case TMTIMERSTATE_PENDING_SCHEDULE:
        case TMTIMERSTATE_PENDING_RESCHEDULE:
            SSMR3PutU8(pSSM, (uint8_t)TMTIMERSTATE_PENDING_SCHEDULE);
            return SSMR3PutU64(pSSM, pTimer->u64Expire);

        case TMTIMERSTATE_EXPIRED:
        case TMTIMERSTATE_PENDING_DESTROY:
        case TMTIMERSTATE_PENDING_STOP_DESTROY:
        case TMTIMERSTATE_FREE:
            AssertMsgFailed(("Invalid timer state %d %s (%s)\n", pTimer->enmState, tmTimerState(pTimer->enmState), pTimer->pszDesc));
            return SSMR3HandleSetStatus(pSSM, VERR_TM_INVALID_STATE);
    }

    AssertMsgFailed(("Unknown timer state %d (%s)\n", pTimer->enmState, pTimer->pszDesc));
    return SSMR3HandleSetStatus(pSSM, VERR_TM_UNKNOWN_STATE);
}
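

/*
 * Illustrative sketch (hypothetical device code): a device's saved-state
 * save-exec callback would persist its timer with TMR3TimerSave; pThis and
 * its pTimer member are made-up names.
 */
#if 0
    int rc = TMR3TimerSave(pThis->pTimer, pSSM);
    AssertRCReturn(rc, rc);
#endif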


/**
 * Loads the state of a timer from a saved state.
 *
 * @returns VBox status.
 * @param   pTimer          Timer to restore.
 * @param   pSSM            Save State Manager handle.
 */
VMMR3DECL(int) TMR3TimerLoad(PTMTIMERR3 pTimer, PSSMHANDLE pSSM)
{
    Assert(pTimer); Assert(pSSM); VM_ASSERT_EMT(pTimer->pVMR3);
    LogFlow(("TMR3TimerLoad: pTimer=%p:{enmState=%s, .pszDesc={%s}} pSSM=%p\n", pTimer, tmTimerState(pTimer->enmState), pTimer->pszDesc, pSSM));

    /*
     * Load the state and validate it.
     */
    uint8_t u8State;
    int rc = SSMR3GetU8(pSSM, &u8State);
    if (RT_FAILURE(rc))
        return rc;
    TMTIMERSTATE enmState = (TMTIMERSTATE)u8State;
    if (    enmState != TMTIMERSTATE_PENDING_STOP
        &&  enmState != TMTIMERSTATE_PENDING_SCHEDULE
        &&  enmState != TMTIMERSTATE_PENDING_STOP_SCHEDULE)
    {
        AssertMsgFailed(("enmState=%d %s\n", enmState, tmTimerState(enmState)));
        return SSMR3HandleSetStatus(pSSM, VERR_TM_LOAD_STATE);
    }

    if (enmState == TMTIMERSTATE_PENDING_SCHEDULE)
    {
        /*
         * Load the expire time.
         */
        uint64_t u64Expire;
        rc = SSMR3GetU64(pSSM, &u64Expire);
        if (RT_FAILURE(rc))
            return rc;

        /*
         * Set it.
         */
        Log(("enmState=%d %s u64Expire=%llu\n", enmState, tmTimerState(enmState), u64Expire));
        rc = TMTimerSet(pTimer, u64Expire);
    }
    else
    {
        /*
         * Stop it.
         */
        Log(("enmState=%d %s\n", enmState, tmTimerState(enmState)));
        rc = TMTimerStop(pTimer);
    }

    /*
     * On failure set SSM status.
     */
    if (RT_FAILURE(rc))
        rc = SSMR3HandleSetStatus(pSSM, rc);
    return rc;
}
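

/*
 * Illustrative counterpart (hypothetical device code): the matching
 * load-exec callback restores the timer the same way.
 */
#if 0
    int rc = TMR3TimerLoad(pThis->pTimer, pSSM);
    AssertRCReturn(rc, rc);
#endif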


/**
 * Get the real world UTC time adjusted for VM lag.
 *
 * @returns pTime.
 * @param   pVM             The VM instance.
 * @param   pTime           Where to store the time.
 */
VMMR3DECL(PRTTIMESPEC) TMR3UTCNow(PVM pVM, PRTTIMESPEC pTime)
{
    RTTimeNow(pTime);
    RTTimeSpecSubNano(pTime, pVM->tm.s.offVirtualSync - pVM->tm.s.offVirtualSyncGivenUp);
    RTTimeSpecAddNano(pTime, pVM->tm.s.offUTC);
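    /* Net effect: guest UTC = host UTC - (offVirtualSync - offVirtualSyncGivenUp) + offUTC,
       i.e. the host wall clock pushed back by the lag we still intend to catch
       up, plus any configured guest UTC offset. */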
    return pTime;
}


/**
 * Display all timers.
 *
 * @param   pVM         VM Handle.
 * @param   pHlp        The info helpers.
 * @param   pszArgs     Arguments, ignored.
 */
static DECLCALLBACK(void) tmR3TimerInfo(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
{
    NOREF(pszArgs);
    pHlp->pfnPrintf(pHlp,
                    "Timers (pVM=%p)\n"
                    "%.*s %.*s %.*s %.*s Clock %-18s %-18s %-25s Description\n",
                    pVM,
                    sizeof(RTR3PTR) * 2, "pTimerR3        ",
                    sizeof(int32_t) * 2, "offNext         ",
                    sizeof(int32_t) * 2, "offPrev         ",
                    sizeof(int32_t) * 2, "offSched        ",
                    "Time",
                    "Expire",
                    "State");
    for (PTMTIMERR3 pTimer = pVM->tm.s.pCreated; pTimer; pTimer = pTimer->pBigNext)
    {
        pHlp->pfnPrintf(pHlp,
                        "%p %08RX32 %08RX32 %08RX32 %s %18RU64 %18RU64 %-25s %s\n",
                        pTimer,
                        pTimer->offNext,
                        pTimer->offPrev,
                        pTimer->offScheduleNext,
                        pTimer->enmClock == TMCLOCK_REAL
                        ? "Real "
                        : pTimer->enmClock == TMCLOCK_VIRTUAL
                        ? "Virt "
                        : pTimer->enmClock == TMCLOCK_VIRTUAL_SYNC
                        ? "VrSy "
                        : "TSC  ",
                        TMTimerGet(pTimer),
                        pTimer->u64Expire,
                        tmTimerState(pTimer->enmState),
                        pTimer->pszDesc);
    }
}


/**
 * Display all active timers.
 *
 * @param   pVM         VM Handle.
 * @param   pHlp        The info helpers.
 * @param   pszArgs     Arguments, ignored.
 */
static DECLCALLBACK(void) tmR3TimerInfoActive(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
{
    NOREF(pszArgs);
    pHlp->pfnPrintf(pHlp,
                    "Active Timers (pVM=%p)\n"
                    "%.*s %.*s %.*s %.*s Clock %-18s %-18s %-25s Description\n",
                    pVM,
                    sizeof(RTR3PTR) * 2, "pTimerR3        ",
                    sizeof(int32_t) * 2, "offNext         ",
                    sizeof(int32_t) * 2, "offPrev         ",
                    sizeof(int32_t) * 2, "offSched        ",
                    "Time",
                    "Expire",
                    "State");
    for (unsigned iQueue = 0; iQueue < TMCLOCK_MAX; iQueue++)
    {
        for (PTMTIMERR3 pTimer = TMTIMER_GET_HEAD(&pVM->tm.s.paTimerQueuesR3[iQueue]);
             pTimer;
             pTimer = TMTIMER_GET_NEXT(pTimer))
        {
            pHlp->pfnPrintf(pHlp,
                            "%p %08RX32 %08RX32 %08RX32 %s %18RU64 %18RU64 %-25s %s\n",
                            pTimer,
                            pTimer->offNext,
                            pTimer->offPrev,
                            pTimer->offScheduleNext,
                            pTimer->enmClock == TMCLOCK_REAL
                            ? "Real "
                            : pTimer->enmClock == TMCLOCK_VIRTUAL
                            ? "Virt "
                            : pTimer->enmClock == TMCLOCK_VIRTUAL_SYNC
                            ? "VrSy "
                            : "TSC  ",
                            TMTimerGet(pTimer),
                            pTimer->u64Expire,
                            tmTimerState(pTimer->enmState),
                            pTimer->pszDesc);
        }
    }
}


/**
 * Display all clocks.
 *
 * @param   pVM         VM Handle.
 * @param   pHlp        The info helpers.
 * @param   pszArgs     Arguments, ignored.
 */
static DECLCALLBACK(void) tmR3InfoClocks(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
{
    NOREF(pszArgs);

    /*
     * Read the times first to avoid more than necessary time variation.
     */
    const uint64_t u64TSC = TMCpuTickGet(pVM);
    const uint64_t u64Virtual = TMVirtualGet(pVM);
    const uint64_t u64VirtualSync = TMVirtualSyncGet(pVM);
    const uint64_t u64Real = TMRealGet(pVM);

    /*
     * TSC
     */
    pHlp->pfnPrintf(pHlp,
                    "Cpu Tick: %18RU64 (%#016RX64) %RU64Hz %s%s",
                    u64TSC, u64TSC, TMCpuTicksPerSecond(pVM),
                    pVM->tm.s.fTSCTicking ? "ticking" : "paused",
                    pVM->tm.s.fTSCVirtualized ? " - virtualized" : "");
    if (pVM->tm.s.fTSCUseRealTSC)
    {
        pHlp->pfnPrintf(pHlp, " - real tsc");
        if (pVM->tm.s.u64TSCOffset)
            pHlp->pfnPrintf(pHlp, "\n offset %RU64", pVM->tm.s.u64TSCOffset);
    }
    else
        pHlp->pfnPrintf(pHlp, " - virtual clock");
    pHlp->pfnPrintf(pHlp, "\n");

    /*
     * virtual
     */
    pHlp->pfnPrintf(pHlp,
                    " Virtual: %18RU64 (%#016RX64) %RU64Hz %s",
                    u64Virtual, u64Virtual, TMVirtualGetFreq(pVM),
                    pVM->tm.s.fVirtualTicking ? "ticking" : "paused");
    if (pVM->tm.s.fVirtualWarpDrive)
        pHlp->pfnPrintf(pHlp, " WarpDrive %RU32 %%", pVM->tm.s.u32VirtualWarpDrivePercentage);
    pHlp->pfnPrintf(pHlp, "\n");

    /*
     * virtual sync
     */
    pHlp->pfnPrintf(pHlp,
                    "VirtSync: %18RU64 (%#016RX64) %s%s",
                    u64VirtualSync, u64VirtualSync,
                    pVM->tm.s.fVirtualSyncTicking ? "ticking" : "paused",
                    pVM->tm.s.fVirtualSyncCatchUp ? " - catchup" : "");
    if (pVM->tm.s.offVirtualSync)
    {
        pHlp->pfnPrintf(pHlp, "\n offset %RU64", pVM->tm.s.offVirtualSync);
        if (pVM->tm.s.u32VirtualSyncCatchUpPercentage)
            pHlp->pfnPrintf(pHlp, " catch-up rate %u %%", pVM->tm.s.u32VirtualSyncCatchUpPercentage);
    }
    pHlp->pfnPrintf(pHlp, "\n");

    /*
     * real
     */
    pHlp->pfnPrintf(pHlp,
                    "    Real: %18RU64 (%#016RX64) %RU64Hz\n",
                    u64Real, u64Real, TMRealGetFreq(pVM));
}