VirtualBox

source: vbox/trunk/src/VBox/VMM/VMMR3/DBGFR3Bp.cpp@ 87519

Last change on this file since 87519 was 87346, checked in by vboxsync, 4 years ago

VMM/CPUM: Dropped the fForceHyper parameter of CPUMRecalcHyperDRx. It seems to stem from some confusion in the HM implementation about how to handle DRx registers. The DBGF breakpoints shall always take precedence over the guest ones.

/* $Id: DBGFR3Bp.cpp 87346 2021-01-21 11:42:23Z vboxsync $ */
/** @file
 * DBGF - Debugger Facility, Breakpoint Management.
 */

/*
 * Copyright (C) 2006-2020 Oracle Corporation
 *
 * This file is part of VirtualBox Open Source Edition (OSE), as
 * available from http://www.virtualbox.org. This file is free software;
 * you can redistribute it and/or modify it under the terms of the GNU
 * General Public License (GPL) as published by the Free Software
 * Foundation, in version 2 as it comes in the "COPYING" file of the
 * VirtualBox OSE distribution. VirtualBox OSE is distributed in the
 * hope that it will be useful, but WITHOUT ANY WARRANTY of any kind.
 */


/** @page pg_dbgf_bp    DBGF - The Debugger Facility, Breakpoint Management
 *
 * The debugger facility's breakpoint manager exists to manage large numbers of
 * breakpoints efficiently for use cases such as dtrace-like operations or
 * execution flow tracing. Especially execution flow tracing can require
 * thousands of breakpoints which need to be managed efficiently so as not to
 * slow down guest operation too much. Before the rewrite started at the end of
 * 2020, DBGF could only handle 32 breakpoints (+ 4 hardware assisted
 * breakpoints). The new manager is supposed to be able to handle up to one
 * million breakpoints.
 *
 * @see grp_dbgf
 *
 *
 * @section sec_dbgf_bp_owner   Breakpoint owners
 *
 * A breakpoint owner has a mandatory ring-3 callback and an optional ring-0
 * callback assigned to it, which is called whenever a breakpoint with that
 * owner assigned is hit. The common part of the owner is managed by a single
 * table mapped into both ring-0 and ring-3, with the handle being the index
 * into the table. This allows resolving the handle to the internal structure
 * efficiently. Searching for a free entry is done using a bitmap indicating
 * free and occupied entries. For the optional ring-0 owner part there is a
 * separate ring-0 only table, for security reasons.
 *
 * The callback of the owner can be used to gather and log guest state
 * information and to decide whether to continue guest execution or stop and
 * drop into the debugger. Breakpoints which don't have an owner assigned will
 * always drop the VM right into the debugger.
 *
 *
 * @section sec_dbgf_bp_bps     Breakpoints
 *
 * Breakpoints are referenced by an opaque handle which acts as an index into a
 * global table mapped into ring-3 and ring-0. Each entry contains the
 * necessary state to manage the breakpoint, like trigger conditions, type,
 * owner, etc. If an owner is given, an optional opaque user argument can be
 * supplied which is passed to the respective owner callback. For owners with
 * ring-0 callbacks a dedicated ring-0 table is kept which holds the possible
 * ring-0 user arguments.
 *
 * To keep memory consumption under control and still support large numbers of
 * breakpoints, the table is split into fixed sized chunks; the chunk index and
 * the index into the chunk can be derived from the handle with only a few
 * logical operations.
 *
 *
 * @section sec_dbgf_bp_resolv  Resolving breakpoint addresses
 *
 * Whenever a \#BP(0) event is triggered DBGF needs to decide whether the event
 * originated from within the guest or whether a DBGF breakpoint caused it.
 * This has to happen as fast as possible. The following scheme is employed to
 * achieve this:
 *
 * @verbatim
 *                       7   6   5   4   3   2   1   0
 *                     +---+---+---+---+---+---+---+---+
 *                     |   |   |   |   |   |   |   |   |  BP address
 *                     +---+---+---+---+---+---+---+---+
 *                      \_____________________/ \_____/
 *                                 |               |
 *                                 |               +---------------+
 *                                 |                               |
 *       BP table                  |                               v
 *    +------------+               |                         +-----------+
 *    |   hBp 0    |               X <-                      | 0 | xxxxx |
 *    |   hBp 1    | <-------------+------------------------ | 1 | hBp 1 |
 *    |            |               |                    +--- | 2 | idxL2 |
 *    |  hBp <m>   | <---+         v                    |    |...|  ...  |
 *    |            |     |    +-----------+             |    |...|  ...  |
 *    |            |     |    |           |             |    |...|  ...  |
 *    |  hBp <n>   | <-+ +----|-+> leaf   |             |    |     .     |
 *    |            |   |      |           |             |    |     .     |
 *    |            |   |      |  + root + | <-----------+    |     .     |
 *    |            |   |      |           |                  +-----------+
 *    |            |   +------|- leaf <+  |                   L1: 65536
 *    |     .      |          |    .      |
 *    |     .      |          |    .      |
 *    |     .      |          |    .      |
 *    +------------+          +-----------+
 *                              L2 idx AVL
 * @endverbatim
 *
 *  -# Take the lowest 16 bits of the breakpoint address and use them as a
 *     direct index into the L1 table. The L1 table is contiguous and consists
 *     of 4 byte entries, resulting in 256 KiB of memory used. The topmost
 *     4 bits of an entry indicate how to proceed and the meaning of the
 *     remaining 28 bits depends on them:
 *      - A 0 type entry means no breakpoint is registered with the matching
 *        lowest 16 bits, so forward the event to the guest.
 *      - A 1 in the topmost 4 bits means that the remaining 28 bits directly
 *        denote a breakpoint handle which can be resolved by extracting the
 *        chunk index and the index into the chunk of the global breakpoint
 *        table. If the address matches, the breakpoint is processed according
 *        to its configuration. Otherwise the event is forwarded to the guest.
 *      - A 2 in the topmost 4 bits means that there are multiple breakpoints
 *        registered matching the lowest 16 bits and the search must continue
 *        in the L2 table, with the remaining 28 bits acting as an index into
 *        the L2 table indicating the search root.
 *  -# The L2 table consists of multiple index based AVL trees, one for each
 *     reference from the L1 table. The key for the trees is the upper 6 bytes
 *     of the breakpoint address. A tree is traversed until either a matching
 *     address is found and the breakpoint is processed, or the event is
 *     forwarded to the guest if the search is unsuccessful. Each entry in the
 *     L2 table is 16 bytes in size and densely packed to avoid excessive
 *     memory usage.
 *
 *
 * @section sec_dbgf_bp_note    Random thoughts and notes for the implementation
 *
 * - The assumption behind this approach is that the lowest 16 bits of the
 *   breakpoint addresses are hopefully the most varying ones across
 *   breakpoints, so the traversal can skip the L2 table in most cases. Even if
 *   the L2 table must be consulted, the individual trees should be quite
 *   shallow, resulting in low overhead when walking them (though only real
 *   world testing can confirm this assumption).
 * - Index based tables and trees are used instead of pointers because the
 *   tables are mapped into ring-0 and ring-3 with different base addresses.
 * - Efficient breakpoint allocation is done by having a global bitmap
 *   indicating free and occupied breakpoint entries. The same applies to the
 *   L2 AVL table.
 * - Special care must be taken when modifying the L1 and L2 tables as other
 *   EMTs might still access them (want to try a lockless approach first using
 *   atomic updates, have to resort to locking if that turns out to be too
 *   difficult).
 * - Each BP entry is supposed to be 64 bytes in size and each chunk should
 *   contain 65536 breakpoints, which results in 4 MiB for each chunk plus the
 *   allocation bitmap.
 * - Ring-0 has to take special care when traversing the L2 AVL tree to not
 *   run into cycles and to do strict bounds checking before accessing
 *   anything. The L1 and L2 tables are written to from ring-3 only. The same
 *   goes for the breakpoint table, with the exception being the opaque user
 *   argument for ring-0 which is stored in ring-0 only memory.
 */


/*********************************************************************************************************************************
*   Header Files                                                                                                                 *
*********************************************************************************************************************************/
#define LOG_GROUP LOG_GROUP_DBGF
#define VMCPU_INCL_CPUM_GST_CTX
#include <VBox/vmm/dbgf.h>
#include <VBox/vmm/selm.h>
#include <VBox/vmm/iem.h>
#include <VBox/vmm/mm.h>
#include <VBox/vmm/iom.h>
#include <VBox/vmm/hm.h>
#include "DBGFInternal.h"
#include <VBox/vmm/vm.h>
#include <VBox/vmm/uvm.h>

#include <VBox/err.h>
#include <VBox/log.h>
#include <iprt/assert.h>
#include <iprt/mem.h>

#include "DBGFInline.h"


/*********************************************************************************************************************************
*   Structures and Typedefs                                                                                                      *
*********************************************************************************************************************************/


/*********************************************************************************************************************************
*   Internal Functions                                                                                                           *
*********************************************************************************************************************************/
RT_C_DECLS_BEGIN
RT_C_DECLS_END


/**
 * Initialize the breakpoint management.
 *
 * @returns VBox status code.
 * @param   pUVM    The user mode VM handle.
 */
DECLHIDDEN(int) dbgfR3BpInit(PUVM pUVM)
{
    PVM pVM = pUVM->pVM;

    //pUVM->dbgf.s.paBpOwnersR3       = NULL;
    //pUVM->dbgf.s.pbmBpOwnersAllocR3 = NULL;

    /* Init hardware breakpoint states. */
    for (uint32_t i = 0; i < RT_ELEMENTS(pVM->dbgf.s.aHwBreakpoints); i++)
    {
        PDBGFBPHW pHwBp = &pVM->dbgf.s.aHwBreakpoints[i];

        AssertCompileSize(DBGFBP, sizeof(uint32_t));
        pHwBp->hBp      = NIL_DBGFBP;
        //pHwBp->fEnabled = false;
    }

    /* Now the global breakpoint table chunks. */
    for (uint32_t i = 0; i < RT_ELEMENTS(pUVM->dbgf.s.aBpChunks); i++)
    {
        PDBGFBPCHUNKR3 pBpChunk = &pUVM->dbgf.s.aBpChunks[i];

        //pBpChunk->pBpBaseR3 = NULL;
        //pBpChunk->pbmAlloc  = NULL;
        //pBpChunk->cBpsFree  = 0;
        pBpChunk->idChunk   = DBGF_BP_CHUNK_ID_INVALID; /* Not allocated. */
    }

    for (uint32_t i = 0; i < RT_ELEMENTS(pUVM->dbgf.s.aBpL2TblChunks); i++)
    {
        PDBGFBPL2TBLCHUNKR3 pL2Chunk = &pUVM->dbgf.s.aBpL2TblChunks[i];

        //pL2Chunk->pL2BaseR3 = NULL;
        //pL2Chunk->pbmAlloc  = NULL;
        //pL2Chunk->cFree     = 0;
        pL2Chunk->idChunk   = DBGF_BP_CHUNK_ID_INVALID; /* Not allocated. */
    }

    //pUVM->dbgf.s.paBpLocL1R3 = NULL;
    pUVM->dbgf.s.hMtxBpL2Wr = NIL_RTSEMFASTMUTEX;
    return RTSemFastMutexCreate(&pUVM->dbgf.s.hMtxBpL2Wr);
}


/**
 * Terminates the breakpoint management.
 *
 * @returns VBox status code.
 * @param   pUVM    The user mode VM handle.
 */
DECLHIDDEN(int) dbgfR3BpTerm(PUVM pUVM)
{
    if (pUVM->dbgf.s.pbmBpOwnersAllocR3)
    {
        RTMemFree((void *)pUVM->dbgf.s.pbmBpOwnersAllocR3);
        pUVM->dbgf.s.pbmBpOwnersAllocR3 = NULL;
    }

    /* Free all allocated chunk bitmaps (the chunks themselves are destroyed during ring-0 VM destruction). */
    for (uint32_t i = 0; i < RT_ELEMENTS(pUVM->dbgf.s.aBpChunks); i++)
    {
        PDBGFBPCHUNKR3 pBpChunk = &pUVM->dbgf.s.aBpChunks[i];

        if (pBpChunk->idChunk != DBGF_BP_CHUNK_ID_INVALID)
        {
            AssertPtr(pBpChunk->pbmAlloc);
            RTMemFree((void *)pBpChunk->pbmAlloc);
            pBpChunk->pbmAlloc = NULL;
            pBpChunk->idChunk  = DBGF_BP_CHUNK_ID_INVALID;
        }
    }

    for (uint32_t i = 0; i < RT_ELEMENTS(pUVM->dbgf.s.aBpL2TblChunks); i++)
    {
        PDBGFBPL2TBLCHUNKR3 pL2Chunk = &pUVM->dbgf.s.aBpL2TblChunks[i];

        if (pL2Chunk->idChunk != DBGF_BP_CHUNK_ID_INVALID)
        {
            AssertPtr(pL2Chunk->pbmAlloc);
            RTMemFree((void *)pL2Chunk->pbmAlloc);
            pL2Chunk->pbmAlloc = NULL;
            pL2Chunk->idChunk  = DBGF_BP_CHUNK_ID_INVALID;
        }
    }

    if (pUVM->dbgf.s.hMtxBpL2Wr != NIL_RTSEMFASTMUTEX)
    {
        RTSemFastMutexDestroy(pUVM->dbgf.s.hMtxBpL2Wr);
        pUVM->dbgf.s.hMtxBpL2Wr = NIL_RTSEMFASTMUTEX;
    }

    return VINF_SUCCESS;
}


/**
 * @callback_method_impl{FNVMMEMTRENDEZVOUS}
 */
static DECLCALLBACK(VBOXSTRICTRC) dbgfR3BpInitEmtWorker(PVM pVM, PVMCPU pVCpu, void *pvUser)
{
    RT_NOREF(pvUser);

    VMCPU_ASSERT_EMT(pVCpu);
    VM_ASSERT_VALID_EXT_RETURN(pVM, VERR_INVALID_VM_HANDLE);

    /*
     * The initialization will be done on EMT(0). It is possible that multiple
     * initialization attempts are done because dbgfR3BpEnsureInit() can be called
     * from racing non-EMT threads when trying to set a breakpoint for the first time.
     * Just fake success if the L1 table is already present, which means that a previous
     * rendezvous successfully initialized the breakpoint manager.
     */
    PUVM pUVM = pVM->pUVM;
    if (   pVCpu->idCpu == 0
        && !pUVM->dbgf.s.paBpLocL1R3)
    {
        DBGFBPINITREQ Req;
        Req.Hdr.u32Magic = SUPVMMR0REQHDR_MAGIC;
        Req.Hdr.cbReq    = sizeof(Req);
        Req.paBpLocL1R3  = NULL;
        int rc = VMMR3CallR0Emt(pVM, pVCpu, VMMR0_DO_DBGF_BP_INIT, 0 /*u64Arg*/, &Req.Hdr);
        AssertLogRelMsgRCReturn(rc, ("VMMR0_DO_DBGF_BP_INIT failed: %Rrc\n", rc), rc);
        pUVM->dbgf.s.paBpLocL1R3 = Req.paBpLocL1R3;
    }

    return VINF_SUCCESS;
}


/**
 * Ensures that the breakpoint manager is fully initialized.
 *
 * @returns VBox status code.
 * @param   pUVM    The user mode VM handle.
 *
 * @thread Any thread.
 */
static int dbgfR3BpEnsureInit(PUVM pUVM)
{
    /* If the L1 lookup table is allocated, initialization succeeded before. */
    if (RT_LIKELY(pUVM->dbgf.s.paBpLocL1R3))
        return VINF_SUCCESS;

    /* Gather all EMTs and call into ring-0 to initialize the breakpoint manager. */
    return VMMR3EmtRendezvous(pUVM->pVM, VMMEMTRENDEZVOUS_FLAGS_TYPE_ALL_AT_ONCE, dbgfR3BpInitEmtWorker, NULL /*pvUser*/);
}


/**
 * @callback_method_impl{FNVMMEMTRENDEZVOUS}
 */
static DECLCALLBACK(VBOXSTRICTRC) dbgfR3BpOwnerInitEmtWorker(PVM pVM, PVMCPU pVCpu, void *pvUser)
{
    RT_NOREF(pvUser);

    VMCPU_ASSERT_EMT(pVCpu);
    VM_ASSERT_VALID_EXT_RETURN(pVM, VERR_INVALID_VM_HANDLE);

    /*
     * The initialization will be done on EMT(0). It is possible that multiple
     * initialization attempts are done because dbgfR3BpOwnerEnsureInit() can be called
     * from racing non-EMT threads when trying to create a breakpoint owner for the first
     * time. Just fake success if the pointers are initialized already, meaning that a
     * previous rendezvous successfully initialized the breakpoint owner table.
     */
    int rc = VINF_SUCCESS;
    PUVM pUVM = pVM->pUVM;
    if (   pVCpu->idCpu == 0
        && !pUVM->dbgf.s.pbmBpOwnersAllocR3)
    {
        pUVM->dbgf.s.pbmBpOwnersAllocR3 = (volatile void *)RTMemAllocZ(DBGF_BP_OWNER_COUNT_MAX / 8);
        if (pUVM->dbgf.s.pbmBpOwnersAllocR3)
        {
            DBGFBPOWNERINITREQ Req;
            Req.Hdr.u32Magic = SUPVMMR0REQHDR_MAGIC;
            Req.Hdr.cbReq    = sizeof(Req);
            Req.paBpOwnerR3  = NULL;
            rc = VMMR3CallR0Emt(pVM, pVCpu, VMMR0_DO_DBGF_BP_OWNER_INIT, 0 /*u64Arg*/, &Req.Hdr);
            AssertLogRelMsgRC(rc, ("VMMR0_DO_DBGF_BP_OWNER_INIT failed: %Rrc\n", rc));
            if (RT_SUCCESS(rc))
            {
                pUVM->dbgf.s.paBpOwnersR3 = (PDBGFBPOWNERINT)Req.paBpOwnerR3;
                return VINF_SUCCESS;
            }

            RTMemFree((void *)pUVM->dbgf.s.pbmBpOwnersAllocR3);
            pUVM->dbgf.s.pbmBpOwnersAllocR3 = NULL;
        }
        else
            rc = VERR_NO_MEMORY;
    }

    return rc;
}


/**
 * Ensures that the breakpoint owner table is fully initialized.
 *
 * @returns VBox status code.
 * @param   pUVM    The user mode VM handle.
 *
 * @thread Any thread.
 */
static int dbgfR3BpOwnerEnsureInit(PUVM pUVM)
{
    /* If the allocation bitmap is allocated, initialization succeeded before. */
    if (RT_LIKELY(pUVM->dbgf.s.pbmBpOwnersAllocR3))
        return VINF_SUCCESS;

    /* Gather all EMTs and call into ring-0 to initialize the breakpoint owner table. */
    return VMMR3EmtRendezvous(pUVM->pVM, VMMEMTRENDEZVOUS_FLAGS_TYPE_ALL_AT_ONCE, dbgfR3BpOwnerInitEmtWorker, NULL /*pvUser*/);
}


/**
 * Returns the internal breakpoint owner state for the given handle.
 *
 * @returns Pointer to the internal breakpoint owner state or NULL if the handle is invalid.
 * @param   pUVM        The user mode VM handle.
 * @param   hBpOwner    The breakpoint owner handle to resolve.
 */
DECLINLINE(PDBGFBPOWNERINT) dbgfR3BpOwnerGetByHnd(PUVM pUVM, DBGFBPOWNER hBpOwner)
{
    AssertReturn(hBpOwner < DBGF_BP_OWNER_COUNT_MAX, NULL);
    AssertPtrReturn(pUVM->dbgf.s.pbmBpOwnersAllocR3, NULL);

    AssertReturn(ASMBitTest(pUVM->dbgf.s.pbmBpOwnersAllocR3, hBpOwner), NULL);
    return &pUVM->dbgf.s.paBpOwnersR3[hBpOwner];
}


/**
 * Retains the given breakpoint owner handle for use.
 *
 * @returns VBox status code.
 * @retval  VERR_INVALID_HANDLE if the given breakpoint owner handle is invalid.
 * @param   pUVM        The user mode VM handle.
 * @param   hBpOwner    The breakpoint owner handle to retain, NIL_DBGFBPOWNER is accepted without doing anything.
 */
DECLINLINE(int) dbgfR3BpOwnerRetain(PUVM pUVM, DBGFBPOWNER hBpOwner)
{
    if (hBpOwner == NIL_DBGFBPOWNER)
        return VINF_SUCCESS;

    PDBGFBPOWNERINT pBpOwner = dbgfR3BpOwnerGetByHnd(pUVM, hBpOwner);
    if (pBpOwner)
    {
        ASMAtomicIncU32(&pBpOwner->cRefs);
        return VINF_SUCCESS;
    }

    return VERR_INVALID_HANDLE;
}


/**
 * Releases the given breakpoint owner handle.
 *
 * @returns VBox status code.
 * @retval  VERR_INVALID_HANDLE if the given breakpoint owner handle is invalid.
 * @param   pUVM        The user mode VM handle.
 * @param   hBpOwner    The breakpoint owner handle to release, NIL_DBGFBPOWNER is accepted without doing anything.
 */
DECLINLINE(int) dbgfR3BpOwnerRelease(PUVM pUVM, DBGFBPOWNER hBpOwner)
{
    if (hBpOwner == NIL_DBGFBPOWNER)
        return VINF_SUCCESS;

    PDBGFBPOWNERINT pBpOwner = dbgfR3BpOwnerGetByHnd(pUVM, hBpOwner);
    if (pBpOwner)
    {
        Assert(pBpOwner->cRefs > 1);
        ASMAtomicDecU32(&pBpOwner->cRefs);
        return VINF_SUCCESS;
    }

    return VERR_INVALID_HANDLE;
}


/**
 * Returns the internal breakpoint state for the given handle.
 *
 * @returns Pointer to the internal breakpoint state or NULL if the handle is invalid.
 * @param   pUVM    The user mode VM handle.
 * @param   hBp     The breakpoint handle to resolve.
 */
DECLINLINE(PDBGFBPINT) dbgfR3BpGetByHnd(PUVM pUVM, DBGFBP hBp)
{
    uint32_t idChunk  = DBGF_BP_HND_GET_CHUNK_ID(hBp);
    uint32_t idxEntry = DBGF_BP_HND_GET_ENTRY(hBp);

    AssertReturn(idChunk < DBGF_BP_CHUNK_COUNT, NULL);
    AssertReturn(idxEntry < DBGF_BP_COUNT_PER_CHUNK, NULL);

    PDBGFBPCHUNKR3 pBpChunk = &pUVM->dbgf.s.aBpChunks[idChunk];
    AssertReturn(pBpChunk->idChunk == idChunk, NULL);
    AssertPtrReturn(pBpChunk->pbmAlloc, NULL);
    AssertReturn(ASMBitTest(pBpChunk->pbmAlloc, idxEntry), NULL);

    return &pBpChunk->pBpBaseR3[idxEntry];
}


/**
 * @callback_method_impl{FNVMMEMTRENDEZVOUS}
 */
static DECLCALLBACK(VBOXSTRICTRC) dbgfR3BpChunkAllocEmtWorker(PVM pVM, PVMCPU pVCpu, void *pvUser)
{
    uint32_t idChunk = (uint32_t)(uintptr_t)pvUser;

    VMCPU_ASSERT_EMT(pVCpu);
    VM_ASSERT_VALID_EXT_RETURN(pVM, VERR_INVALID_VM_HANDLE);

    AssertReturn(idChunk < DBGF_BP_CHUNK_COUNT, VERR_DBGF_BP_IPE_1);

    PUVM pUVM = pVM->pUVM;
    PDBGFBPCHUNKR3 pBpChunk = &pUVM->dbgf.s.aBpChunks[idChunk];

    AssertReturn(   pBpChunk->idChunk == DBGF_BP_CHUNK_ID_INVALID
                 || pBpChunk->idChunk == idChunk,
                 VERR_DBGF_BP_IPE_2);

    /*
     * The allocation will be done on EMT(0). It is possible that multiple
     * allocation attempts are done when multiple racing non-EMT threads try to
     * allocate a breakpoint and a new chunk needs to be allocated.
     * Ignore the request and succeed if the chunk is allocated already, meaning
     * that a previous rendezvous successfully allocated the chunk.
     */
    int rc = VINF_SUCCESS;
    if (   pVCpu->idCpu == 0
        && pBpChunk->idChunk == DBGF_BP_CHUNK_ID_INVALID)
    {
        /* Allocate the bitmap first so we can skip calling into VMMR0 if it fails. */
        AssertCompile(!(DBGF_BP_COUNT_PER_CHUNK % 8));
        volatile void *pbmAlloc = RTMemAllocZ(DBGF_BP_COUNT_PER_CHUNK / 8);
        if (RT_LIKELY(pbmAlloc))
        {
            DBGFBPCHUNKALLOCREQ Req;
            Req.Hdr.u32Magic = SUPVMMR0REQHDR_MAGIC;
            Req.Hdr.cbReq    = sizeof(Req);
            Req.idChunk      = idChunk;
            Req.pChunkBaseR3 = NULL;
            rc = VMMR3CallR0Emt(pVM, pVCpu, VMMR0_DO_DBGF_BP_CHUNK_ALLOC, 0 /*u64Arg*/, &Req.Hdr);
            AssertLogRelMsgRC(rc, ("VMMR0_DO_DBGF_BP_CHUNK_ALLOC failed: %Rrc\n", rc));
            if (RT_SUCCESS(rc))
            {
                pBpChunk->pBpBaseR3 = (PDBGFBPINT)Req.pChunkBaseR3;
                pBpChunk->pbmAlloc  = pbmAlloc;
                pBpChunk->cBpsFree  = DBGF_BP_COUNT_PER_CHUNK;
                pBpChunk->idChunk   = idChunk;
                return VINF_SUCCESS;
            }

            RTMemFree((void *)pbmAlloc);
        }
        else
            rc = VERR_NO_MEMORY;
    }

    return rc;
}


/**
 * Tries to allocate the given chunk which requires an EMT rendezvous.
 *
 * @returns VBox status code.
 * @param   pUVM        The user mode VM handle.
 * @param   idChunk     The chunk to allocate.
 *
 * @thread Any thread.
 */
DECLINLINE(int) dbgfR3BpChunkAlloc(PUVM pUVM, uint32_t idChunk)
{
    return VMMR3EmtRendezvous(pUVM->pVM, VMMEMTRENDEZVOUS_FLAGS_TYPE_ALL_AT_ONCE, dbgfR3BpChunkAllocEmtWorker, (void *)(uintptr_t)idChunk);
}


/**
 * Tries to allocate a new breakpoint of the given type.
 *
 * @returns VBox status code.
 * @param   pUVM        The user mode VM handle.
 * @param   hOwner      The owner handle, NIL_DBGFBPOWNER if none assigned.
 * @param   pvUser      Opaque user data passed to the owner callback.
 * @param   enmType     Breakpoint type to allocate.
 * @param   iHitTrigger The hit count at which the breakpoint starts triggering.
 *                      Use 0 (or 1) if it's gonna trigger at once.
 * @param   iHitDisable The hit count which disables the breakpoint.
 *                      Use ~(uint64_t)0 if it's never gonna be disabled.
 * @param   phBp        Where to return the opaque breakpoint handle on success.
 * @param   ppBp        Where to return the pointer to the internal breakpoint state on success.
 *
 * @thread Any thread.
 */
static int dbgfR3BpAlloc(PUVM pUVM, DBGFBPOWNER hOwner, void *pvUser, DBGFBPTYPE enmType,
                         uint64_t iHitTrigger, uint64_t iHitDisable, PDBGFBP phBp,
                         PDBGFBPINT *ppBp)
{
    int rc = dbgfR3BpOwnerRetain(pUVM, hOwner);
    if (RT_FAILURE(rc))
        return rc;

    /*
     * Search for a chunk having a free entry, allocating new chunks
     * if the encountered ones are full.
     *
     * This can be called from multiple threads at the same time so special care
     * has to be taken to not require any locking here.
     */
    for (uint32_t i = 0; i < RT_ELEMENTS(pUVM->dbgf.s.aBpChunks); i++)
    {
        PDBGFBPCHUNKR3 pBpChunk = &pUVM->dbgf.s.aBpChunks[i];

        uint32_t idChunk = ASMAtomicReadU32(&pBpChunk->idChunk);
        if (idChunk == DBGF_BP_CHUNK_ID_INVALID)
        {
            rc = dbgfR3BpChunkAlloc(pUVM, i);
            if (RT_FAILURE(rc))
            {
                LogRel(("DBGF/Bp: Allocating new breakpoint table chunk failed with %Rrc\n", rc));
                break;
            }

            idChunk = ASMAtomicReadU32(&pBpChunk->idChunk);
            Assert(idChunk == i);
        }

        /** @todo Optimize with some hinting if this turns out to be too slow. */
        for (;;)
        {
            uint32_t cBpsFree = ASMAtomicReadU32(&pBpChunk->cBpsFree);
            if (cBpsFree)
            {
                /*
                 * Scan the associated bitmap for a free entry. If none can be found
                 * another thread raced us and we go to the next chunk.
                 */
                int32_t iClr = ASMBitFirstClear(pBpChunk->pbmAlloc, DBGF_BP_COUNT_PER_CHUNK);
                if (iClr != -1)
                {
                    /*
                     * Try to allocate, we could get raced here as well. In that case
                     * we try again.
                     */
                    if (!ASMAtomicBitTestAndSet(pBpChunk->pbmAlloc, iClr))
                    {
                        /* Success, immediately mark as allocated, initialize the breakpoint state and return. */
                        ASMAtomicDecU32(&pBpChunk->cBpsFree);

                        PDBGFBPINT pBp = &pBpChunk->pBpBaseR3[iClr];
                        pBp->Pub.cHits         = 0;
                        pBp->Pub.iHitTrigger   = iHitTrigger;
                        pBp->Pub.iHitDisable   = iHitDisable;
                        pBp->Pub.hOwner        = hOwner;
                        pBp->Pub.fFlagsAndType = DBGF_BP_PUB_SET_FLAGS_AND_TYPE(enmType, DBGF_BP_F_DEFAULT);
                        pBp->pvUserR3          = pvUser;

                        /** @todo Owner handling (reference and call ring-0 if it has a ring-0 callback). */

                        *phBp = DBGF_BP_HND_CREATE(idChunk, iClr);
                        *ppBp = pBp;
                        return VINF_SUCCESS;
                    }
                    /* else Retry with another spot. */
                }
                else /* No free entry in the bitmap, go to the next chunk. */
                    break;
            }
            else /* !cBpsFree, go to the next chunk. */
                break;
        }
    }

    rc = dbgfR3BpOwnerRelease(pUVM, hOwner); AssertRC(rc);
    return VERR_DBGF_NO_MORE_BP_SLOTS;
}


/**
 * Frees the given breakpoint handle.
 *
 * @returns nothing.
 * @param   pUVM    The user mode VM handle.
 * @param   hBp     The breakpoint handle to free.
 * @param   pBp     The internal breakpoint state pointer.
 */
static void dbgfR3BpFree(PUVM pUVM, DBGFBP hBp, PDBGFBPINT pBp)
{
    uint32_t idChunk  = DBGF_BP_HND_GET_CHUNK_ID(hBp);
    uint32_t idxEntry = DBGF_BP_HND_GET_ENTRY(hBp);

    AssertReturnVoid(idChunk < DBGF_BP_CHUNK_COUNT);
    AssertReturnVoid(idxEntry < DBGF_BP_COUNT_PER_CHUNK);

    PDBGFBPCHUNKR3 pBpChunk = &pUVM->dbgf.s.aBpChunks[idChunk];
    AssertPtrReturnVoid(pBpChunk->pbmAlloc);
    AssertReturnVoid(ASMBitTest(pBpChunk->pbmAlloc, idxEntry));

    /** @todo Need a trip to ring-0 if an owner with a ring-0 part is assigned, to clear the breakpoint. */
    int rc = dbgfR3BpOwnerRelease(pUVM, pBp->Pub.hOwner); AssertRC(rc); RT_NOREF(rc);
    memset(pBp, 0, sizeof(*pBp));

    ASMAtomicBitClear(pBpChunk->pbmAlloc, idxEntry);
    ASMAtomicIncU32(&pBpChunk->cBpsFree);
}


/**
 * @callback_method_impl{FNVMMEMTRENDEZVOUS}
 */
static DECLCALLBACK(VBOXSTRICTRC) dbgfR3BpL2TblChunkAllocEmtWorker(PVM pVM, PVMCPU pVCpu, void *pvUser)
{
    uint32_t idChunk = (uint32_t)(uintptr_t)pvUser;

    VMCPU_ASSERT_EMT(pVCpu);
    VM_ASSERT_VALID_EXT_RETURN(pVM, VERR_INVALID_VM_HANDLE);

    AssertReturn(idChunk < DBGF_BP_L2_TBL_CHUNK_COUNT, VERR_DBGF_BP_IPE_1);

    PUVM pUVM = pVM->pUVM;
    PDBGFBPL2TBLCHUNKR3 pL2Chunk = &pUVM->dbgf.s.aBpL2TblChunks[idChunk];

    AssertReturn(   pL2Chunk->idChunk == DBGF_BP_L2_IDX_CHUNK_ID_INVALID
                 || pL2Chunk->idChunk == idChunk,
                 VERR_DBGF_BP_IPE_2);

    /*
     * The allocation will be done on EMT(0). It is possible that multiple
     * allocation attempts are done when multiple racing non-EMT threads try to
     * allocate a breakpoint and a new chunk needs to be allocated.
     * Ignore the request and succeed if the chunk is allocated already, meaning
     * that a previous rendezvous successfully allocated the chunk.
     */
    int rc = VINF_SUCCESS;
    if (   pVCpu->idCpu == 0
        && pL2Chunk->idChunk == DBGF_BP_L2_IDX_CHUNK_ID_INVALID)
    {
        /* Allocate the bitmap first so we can skip calling into VMMR0 if it fails. */
        AssertCompile(!(DBGF_BP_L2_TBL_ENTRIES_PER_CHUNK % 8));
        volatile void *pbmAlloc = RTMemAllocZ(DBGF_BP_L2_TBL_ENTRIES_PER_CHUNK / 8);
        if (RT_LIKELY(pbmAlloc))
        {
            DBGFBPL2TBLCHUNKALLOCREQ Req;
            Req.Hdr.u32Magic = SUPVMMR0REQHDR_MAGIC;
            Req.Hdr.cbReq    = sizeof(Req);
            Req.idChunk      = idChunk;
            Req.pChunkBaseR3 = NULL;
            rc = VMMR3CallR0Emt(pVM, pVCpu, VMMR0_DO_DBGF_BP_L2_TBL_CHUNK_ALLOC, 0 /*u64Arg*/, &Req.Hdr);
            AssertLogRelMsgRC(rc, ("VMMR0_DO_DBGF_BP_L2_TBL_CHUNK_ALLOC failed: %Rrc\n", rc));
            if (RT_SUCCESS(rc))
            {
                pL2Chunk->pL2BaseR3 = (PDBGFBPL2ENTRY)Req.pChunkBaseR3;
                pL2Chunk->pbmAlloc  = pbmAlloc;
                pL2Chunk->cFree     = DBGF_BP_L2_TBL_ENTRIES_PER_CHUNK;
                pL2Chunk->idChunk   = idChunk;
                return VINF_SUCCESS;
            }

            RTMemFree((void *)pbmAlloc);
        }
        else
            rc = VERR_NO_MEMORY;
    }

    return rc;
}


/**
 * Tries to allocate the given L2 table chunk which requires an EMT rendezvous.
 *
 * @returns VBox status code.
 * @param   pUVM        The user mode VM handle.
 * @param   idChunk     The chunk to allocate.
 *
 * @thread Any thread.
 */
DECLINLINE(int) dbgfR3BpL2TblChunkAlloc(PUVM pUVM, uint32_t idChunk)
{
    return VMMR3EmtRendezvous(pUVM->pVM, VMMEMTRENDEZVOUS_FLAGS_TYPE_ALL_AT_ONCE, dbgfR3BpL2TblChunkAllocEmtWorker, (void *)(uintptr_t)idChunk);
}


/**
 * Tries to allocate a new L2 table entry.
 *
 * @returns VBox status code.
 * @param   pUVM            The user mode VM handle.
 * @param   pidxL2Tbl       Where to return the L2 table entry index on success.
 * @param   ppL2TblEntry    Where to return the pointer to the L2 table entry on success.
 *
 * @thread Any thread.
 */
static int dbgfR3BpL2TblEntryAlloc(PUVM pUVM, uint32_t *pidxL2Tbl, PDBGFBPL2ENTRY *ppL2TblEntry)
{
    /*
     * Search for a chunk having a free entry, allocating new chunks
     * if the encountered ones are full.
     *
     * This can be called from multiple threads at the same time so special care
     * has to be taken to not require any locking here.
     */
    for (uint32_t i = 0; i < RT_ELEMENTS(pUVM->dbgf.s.aBpL2TblChunks); i++)
    {
        PDBGFBPL2TBLCHUNKR3 pL2Chunk = &pUVM->dbgf.s.aBpL2TblChunks[i];

        uint32_t idChunk = ASMAtomicReadU32(&pL2Chunk->idChunk);
        if (idChunk == DBGF_BP_L2_IDX_CHUNK_ID_INVALID)
        {
            int rc = dbgfR3BpL2TblChunkAlloc(pUVM, i);
            if (RT_FAILURE(rc))
            {
                LogRel(("DBGF/Bp: Allocating new breakpoint L2 lookup table chunk failed with %Rrc\n", rc));
                break;
            }

            idChunk = ASMAtomicReadU32(&pL2Chunk->idChunk);
            Assert(idChunk == i);
        }

        /** @todo Optimize with some hinting if this turns out to be too slow. */
        for (;;)
        {
            uint32_t cFree = ASMAtomicReadU32(&pL2Chunk->cFree);
            if (cFree)
            {
                /*
                 * Scan the associated bitmap for a free entry. If none can be found
                 * another thread raced us and we go to the next chunk.
                 */
                int32_t iClr = ASMBitFirstClear(pL2Chunk->pbmAlloc, DBGF_BP_L2_TBL_ENTRIES_PER_CHUNK);
                if (iClr != -1)
                {
                    /*
                     * Try to allocate, we could get raced here as well. In that case
                     * we try again.
                     */
                    if (!ASMAtomicBitTestAndSet(pL2Chunk->pbmAlloc, iClr))
                    {
                        /* Success, immediately mark as allocated, return the L2 entry and its index. */
                        ASMAtomicDecU32(&pL2Chunk->cFree);

                        PDBGFBPL2ENTRY pL2Entry = &pL2Chunk->pL2BaseR3[iClr];

                        *pidxL2Tbl    = DBGF_BP_L2_IDX_CREATE(idChunk, iClr);
                        *ppL2TblEntry = pL2Entry;
                        return VINF_SUCCESS;
                    }
                    /* else Retry with another spot. */
                }
                else /* No free entry in the bitmap, go to the next chunk. */
                    break;
            }
            else /* !cFree, go to the next chunk. */
                break;
        }
    }

    return VERR_DBGF_NO_MORE_BP_SLOTS;
}


/**
 * Frees the given L2 table entry.
 *
 * @returns nothing.
 * @param   pUVM            The user mode VM handle.
 * @param   idxL2Tbl        The L2 table index to free.
 * @param   pL2TblEntry     The L2 table entry pointer to free.
 */
static void dbgfR3BpL2TblEntryFree(PUVM pUVM, uint32_t idxL2Tbl, PDBGFBPL2ENTRY pL2TblEntry)
{
    uint32_t idChunk  = DBGF_BP_L2_IDX_GET_CHUNK_ID(idxL2Tbl);
    uint32_t idxEntry = DBGF_BP_L2_IDX_GET_ENTRY(idxL2Tbl);

    AssertReturnVoid(idChunk < DBGF_BP_L2_TBL_CHUNK_COUNT);
    AssertReturnVoid(idxEntry < DBGF_BP_L2_TBL_ENTRIES_PER_CHUNK);

    PDBGFBPL2TBLCHUNKR3 pL2Chunk = &pUVM->dbgf.s.aBpL2TblChunks[idChunk];
    AssertPtrReturnVoid(pL2Chunk->pbmAlloc);
    AssertReturnVoid(ASMBitTest(pL2Chunk->pbmAlloc, idxEntry));

    memset(pL2TblEntry, 0, sizeof(*pL2TblEntry));

    ASMAtomicBitClear(pL2Chunk->pbmAlloc, idxEntry);
    ASMAtomicIncU32(&pL2Chunk->cFree);
}


874/**
875 * Sets the enabled flag of the given breakpoint to the given value.
876 *
877 * @returns nothing.
878 * @param pBp The breakpoint to set the state.
879 * @param fEnabled Enabled status.
880 */
881DECLINLINE(void) dbgfR3BpSetEnabled(PDBGFBPINT pBp, bool fEnabled)
882{
883 DBGFBPTYPE enmType = DBGF_BP_PUB_GET_TYPE(pBp->Pub.fFlagsAndType);
884 if (fEnabled)
885 pBp->Pub.fFlagsAndType = DBGF_BP_PUB_SET_FLAGS_AND_TYPE(enmType, DBGF_BP_F_ENABLED);
886 else
887 pBp->Pub.fFlagsAndType = DBGF_BP_PUB_SET_FLAGS_AND_TYPE(enmType, 0 /*fFlags*/);
888}
889
890
891/**
892 * Assigns a hardware breakpoint state to the given register breakpoint.
893 *
894 * @returns VBox status code.
895 * @param pVM The cross-context VM structure pointer.
896 * @param hBp The breakpoint handle to assign.
897 * @param pBp The internal breakpoint state.
898 *
899 * @thread Any thread.
900 */
901static int dbgfR3BpRegAssign(PVM pVM, DBGFBP hBp, PDBGFBPINT pBp)
902{
903 AssertReturn(pBp->Pub.u.Reg.iReg == UINT8_MAX, VERR_DBGF_BP_IPE_3);
904
905 for (uint8_t i = 0; i < RT_ELEMENTS(pVM->dbgf.s.aHwBreakpoints); i++)
906 {
907 PDBGFBPHW pHwBp = &pVM->dbgf.s.aHwBreakpoints[i];
908
909 AssertCompileSize(DBGFBP, sizeof(uint32_t));
910 if (ASMAtomicCmpXchgU32(&pHwBp->hBp, hBp, NIL_DBGFBP))
911 {
912 pHwBp->GCPtr = pBp->Pub.u.Reg.GCPtr;
913 pHwBp->fType = pBp->Pub.u.Reg.fType;
914 pHwBp->cb = pBp->Pub.u.Reg.cb;
915 pHwBp->fEnabled = DBGF_BP_PUB_IS_ENABLED(pBp->Pub.fFlagsAndType);
916
917 pBp->Pub.u.Reg.iReg = i;
918 return VINF_SUCCESS;
919 }
920 }
921
922 return VERR_DBGF_NO_MORE_BP_SLOTS;
923}
924
925
926/**
927 * Removes the assigned hardware breakpoint state from the given register breakpoint.
928 *
929 * @returns VBox status code.
930 * @param pVM The cross-context VM structure pointer.
931 * @param hBp The breakpoint handle to remove.
932 * @param pBp The internal breakpoint state.
933 *
934 * @thread Any thread.
935 */
936static int dbgfR3BpRegRemove(PVM pVM, DBGFBP hBp, PDBGFBPINT pBp)
937{
938 AssertReturn(pBp->Pub.u.Reg.iReg < RT_ELEMENTS(pVM->dbgf.s.aHwBreakpoints), VERR_DBGF_BP_IPE_3);
939
940 PDBGFBPHW pHwBp = &pVM->dbgf.s.aHwBreakpoints[pBp->Pub.u.Reg.iReg];
941 AssertReturn(pHwBp->hBp == hBp, VERR_DBGF_BP_IPE_4);
942 AssertReturn(!pHwBp->fEnabled, VERR_DBGF_BP_IPE_5);
943
944 pHwBp->GCPtr = 0;
945 pHwBp->fType = 0;
946 pHwBp->cb = 0;
947 ASMCompilerBarrier();
948
949 ASMAtomicWriteU32(&pHwBp->hBp, NIL_DBGFBP);
950 return VINF_SUCCESS;
951}
952
953
954/**
955 * Returns the pointer to the L2 table entry from the given index.
956 *
957 * @returns Current context pointer to the L2 table entry or NULL if the provided index value is invalid.
958 * @param pUVM The user mode VM handle.
959 * @param idxL2 The L2 table index to resolve.
960 *
961 * @note The content of the resolved L2 table entry is not validated!
962 */
963DECLINLINE(PDBGFBPL2ENTRY) dbgfR3BpL2GetByIdx(PUVM pUVM, uint32_t idxL2)
964{
965 uint32_t idChunk = DBGF_BP_L2_IDX_GET_CHUNK_ID(idxL2);
966 uint32_t idxEntry = DBGF_BP_L2_IDX_GET_ENTRY(idxL2);
967
968 AssertReturn(idChunk < DBGF_BP_L2_TBL_CHUNK_COUNT, NULL);
969 AssertReturn(idxEntry < DBGF_BP_L2_TBL_ENTRIES_PER_CHUNK, NULL);
970
971 PDBGFBPL2TBLCHUNKR3 pL2Chunk = &pUVM->dbgf.s.aBpL2TblChunks[idChunk];
972 AssertPtrReturn(pL2Chunk->pbmAlloc, NULL);
973 AssertReturn(ASMBitTest(pL2Chunk->pbmAlloc, idxEntry), NULL);
974
975 return &pL2Chunk->CTX_SUFF(pL2Base)[idxEntry];
976}
977
978
979/**
980 * Creates a binary search tree with the given root and leaf nodes.
981 *
982 * @returns VBox status code.
983 * @param pUVM The user mode VM handle.
984 * @param idxL1 The index into the L1 table where the created tree should be linked into.
985 * @param u32EntryOld The old entry in the L1 table used to compare with in the atomic update.
986 * @param hBpRoot The root node DBGF handle to assign.
987 * @param GCPtrRoot The root node's GC pointer to use as a key.
988 * @param hBpLeaf The leaf node's DBGF handle to assign.
989 * @param GCPtrLeaf The leaf node's GC pointer to use as a key.
990 */
991static int dbgfR3BpInt3L2BstCreate(PUVM pUVM, uint32_t idxL1, uint32_t u32EntryOld,
992 DBGFBP hBpRoot, RTGCUINTPTR GCPtrRoot,
993 DBGFBP hBpLeaf, RTGCUINTPTR GCPtrLeaf)
994{
995 AssertReturn(GCPtrRoot != GCPtrLeaf, VERR_DBGF_BP_IPE_9);
996 Assert(DBGF_BP_INT3_L1_IDX_EXTRACT_FROM_ADDR(GCPtrRoot) == DBGF_BP_INT3_L1_IDX_EXTRACT_FROM_ADDR(GCPtrLeaf));
997
998 /* Allocate two nodes. */
999 uint32_t idxL2Root = 0;
1000 PDBGFBPL2ENTRY pL2Root = NULL;
1001 int rc = dbgfR3BpL2TblEntryAlloc(pUVM, &idxL2Root, &pL2Root);
1002 if (RT_SUCCESS(rc))
1003 {
1004 uint32_t idxL2Leaf = 0;
1005 PDBGFBPL2ENTRY pL2Leaf = NULL;
1006 rc = dbgfR3BpL2TblEntryAlloc(pUVM, &idxL2Leaf, &pL2Leaf);
1007 if (RT_SUCCESS(rc))
1008 {
1009 dbgfBpL2TblEntryInit(pL2Leaf, hBpLeaf, GCPtrLeaf, DBGF_BP_L2_ENTRY_IDX_END, DBGF_BP_L2_ENTRY_IDX_END, 0 /*iDepth*/);
1010 if (GCPtrLeaf < GCPtrRoot)
1011 dbgfBpL2TblEntryInit(pL2Root, hBpRoot, GCPtrRoot, idxL2Leaf, DBGF_BP_L2_ENTRY_IDX_END, 0 /*iDepth*/);
1012 else
1013 dbgfBpL2TblEntryInit(pL2Root, hBpRoot, GCPtrRoot, DBGF_BP_L2_ENTRY_IDX_END, idxL2Leaf, 0 /*iDepth*/);
1014
1015 uint32_t const u32Entry = DBGF_BP_INT3_L1_ENTRY_CREATE_L2_IDX(idxL2Root);
1016 if (ASMAtomicCmpXchgU32(&pUVM->dbgf.s.paBpLocL1R3[idxL1], u32Entry, u32EntryOld))
1017 return VINF_SUCCESS;
1018
1019 /* The L1 entry has changed due to another thread racing us during insertion, free nodes and try again. */
1020 rc = VINF_TRY_AGAIN;
1021 dbgfR3BpL2TblEntryFree(pUVM, idxL2Leaf, pL2Leaf);
1022 }
1023
1024 dbgfR3BpL2TblEntryFree(pUVM, idxL2Root, pL2Root);
1025 }
1026
1027 return rc;
1028}
1029
1030
1031/**
1032 * Inserts the given breakpoint handle into an existing binary search tree.
1033 *
1034 * @returns VBox status code.
1035 * @param pUVM The user mode VM handle.
1036 * @param idxL2Root The index of the tree root in the L2 table.
1037 * @param hBp The node DBGF handle to insert.
1038 * @param GCPtr The node's GC pointer to use as a key.
1039 */
1040static int dbgfR3BpInt2L2BstNodeInsert(PUVM pUVM, uint32_t idxL2Root, DBGFBP hBp, RTGCUINTPTR GCPtr)
1041{
1042 GCPtr = DBGF_BP_INT3_L2_KEY_EXTRACT_FROM_ADDR(GCPtr);
1043
1044 /* Allocate a new node first. */
1045 uint32_t idxL2Nd = 0;
1046 PDBGFBPL2ENTRY pL2Nd = NULL;
1047 int rc = dbgfR3BpL2TblEntryAlloc(pUVM, &idxL2Nd, &pL2Nd);
1048 if (RT_SUCCESS(rc))
1049 {
1050 /* Walk the tree and find the correct node to insert to. */
1051 PDBGFBPL2ENTRY pL2Entry = dbgfR3BpL2GetByIdx(pUVM, idxL2Root);
1052 while (RT_LIKELY(pL2Entry))
1053 {
1054 /* Make a copy of the entry. */
1055 DBGFBPL2ENTRY L2Entry;
1056 L2Entry.u64GCPtrKeyAndBpHnd1 = ASMAtomicReadU64((volatile uint64_t *)&pL2Entry->u64GCPtrKeyAndBpHnd1);
1057 L2Entry.u64LeftRightIdxDepthBpHnd2 = ASMAtomicReadU64((volatile uint64_t *)&pL2Entry->u64LeftRightIdxDepthBpHnd2);
1058
1059 RTGCUINTPTR GCPtrL2Entry = DBGF_BP_L2_ENTRY_GET_GCPTR(L2Entry.u64GCPtrKeyAndBpHnd1);
1060 AssertBreak(GCPtr != GCPtrL2Entry);
1061
1062 /* Not found, get to the next level. */
1063 uint32_t idxL2Next = (GCPtr < GCPtrL2Entry)
1064 ? DBGF_BP_L2_ENTRY_GET_IDX_LEFT(L2Entry.u64LeftRightIdxDepthBpHnd2)
1065 : DBGF_BP_L2_ENTRY_GET_IDX_RIGHT(L2Entry.u64LeftRightIdxDepthBpHnd2);
1066 if (idxL2Next == DBGF_BP_L2_ENTRY_IDX_END)
1067 {
1068 /* Insert the new node here. */
1069 dbgfBpL2TblEntryInit(pL2Nd, hBp, GCPtr, DBGF_BP_L2_ENTRY_IDX_END, DBGF_BP_L2_ENTRY_IDX_END, 0 /*iDepth*/);
1070 if (GCPtr < GCPtrL2Entry)
1071 dbgfBpL2TblEntryUpdateLeft(pL2Entry, idxL2Nd, 0 /*iDepth*/);
1072 else
1073 dbgfBpL2TblEntryUpdateRight(pL2Entry, idxL2Nd, 0 /*iDepth*/);
1074 return VINF_SUCCESS;
1075 }
1076
1077 pL2Entry = dbgfR3BpL2GetByIdx(pUVM, idxL2Next);
1078 }
1079
1080 rc = VERR_DBGF_BP_L2_LOOKUP_FAILED;
1081 dbgfR3BpL2TblEntryFree(pUVM, idxL2Nd, pL2Nd);
1082 }
1083
1084 return rc;
1085}
1086
1087
1088/**
1089 * Adds the given breakpoint handle keyed with the GC pointer to the proper L2 binary search tree,
1090 * possibly creating a new tree.
1091 *
1092 * @returns VBox status code.
1093 * @param pUVM The user mode VM handle.
1094 * @param idxL1 The index into the L1 table the breakpoint uses.
1095 * @param hBp The breakpoint handle which is to be added.
1096 * @param GCPtr The GC pointer the breakpoint is keyed with.
1097 */
1098static int dbgfR3BpInt3L2BstNodeAdd(PUVM pUVM, uint32_t idxL1, DBGFBP hBp, RTGCUINTPTR GCPtr)
1099{
1100 int rc = RTSemFastMutexRequest(pUVM->dbgf.s.hMtxBpL2Wr); AssertRC(rc);
1101
1102 uint32_t u32Entry = ASMAtomicReadU32(&pUVM->dbgf.s.paBpLocL1R3[idxL1]); /* Re-read, could get raced by a remove operation. */
1103 uint8_t u8Type = DBGF_BP_INT3_L1_ENTRY_GET_TYPE(u32Entry);
1104 if (u8Type == DBGF_BP_INT3_L1_ENTRY_TYPE_BP_HND)
1105 {
1106 /* Create a new search tree, gather the necessary information first. */
1107 DBGFBP hBp2 = DBGF_BP_INT3_L1_ENTRY_GET_BP_HND(u32Entry);
1108 PDBGFBPINT pBp2 = dbgfR3BpGetByHnd(pUVM, hBp2);
1109 AssertStmt(VALID_PTR(pBp2), rc = VERR_DBGF_BP_IPE_7);
1110 if (RT_SUCCESS(rc))
1111 rc = dbgfR3BpInt3L2BstCreate(pUVM, idxL1, u32Entry, hBp, GCPtr, hBp2, pBp2->Pub.u.Int3.GCPtr);
1112 }
1113 else if (u8Type == DBGF_BP_INT3_L1_ENTRY_TYPE_L2_IDX)
1114 rc = dbgfR3BpInt2L2BstNodeInsert(pUVM, DBGF_BP_INT3_L1_ENTRY_GET_L2_IDX(u32Entry), hBp, GCPtr);
1115
1116 int rc2 = RTSemFastMutexRelease(pUVM->dbgf.s.hMtxBpL2Wr); AssertRC(rc2);
1117 return rc;
1118}
1119
1120
1121/**
1122 * Gets the leftmost entry starting from the given tree node index.
1123 *
1124 * @returns VBox status code.
1125 * @param pUVM The user mode VM handle.
1126 * @param idxL2Start The start index to walk from.
1127 * @param pidxL2Leftmost Where to store the L2 table index of the leftmost entry.
1128 * @param ppL2NdLeftmost Where to store the pointer to the leftmost L2 table entry.
1129 * @param pidxL2NdLeftParent Where to store the L2 table index of the leftmost entry's parent.
1130 * @param ppL2NdLeftParent Where to store the pointer to the leftmost L2 table entry's parent.
1131 */
1132static int dbgfR33BpInt3BstGetLeftmostEntryFromNode(PUVM pUVM, uint32_t idxL2Start,
1133 uint32_t *pidxL2Leftmost, PDBGFBPL2ENTRY *ppL2NdLeftmost,
1134 uint32_t *pidxL2NdLeftParent, PDBGFBPL2ENTRY *ppL2NdLeftParent)
1135{
1136 uint32_t idxL2Parent = DBGF_BP_L2_ENTRY_IDX_END;
1137 PDBGFBPL2ENTRY pL2NdParent = NULL;
1138
1139 for (;;)
1140 {
1141 PDBGFBPL2ENTRY pL2Entry = dbgfR3BpL2GetByIdx(pUVM, idxL2Start);
1142 AssertPtr(pL2Entry);
1143
1144 uint32_t idxL2Left = DBGF_BP_L2_ENTRY_GET_IDX_LEFT(pL2Entry->u64LeftRightIdxDepthBpHnd2);
1145 if (idxL2Left == DBGF_BP_L2_ENTRY_IDX_END)
1146 {
1147 *pidxL2Leftmost = idxL2Start;
1148 *ppL2NdLeftmost = pL2Entry;
1149 *pidxL2NdLeftParent = idxL2Parent;
1150 *ppL2NdLeftParent = pL2NdParent;
1151 break;
1152 }
1153
1154 idxL2Parent = idxL2Start;
1155 idxL2Start = idxL2Left;
1156 pL2NdParent = pL2Entry;
1157 }
1158
1159 return VINF_SUCCESS;
1160}
1161
1162
1163/**
1164 * Removes the given node rearranging the tree.
1165 *
1166 * @returns VBox status code.
1167 * @param pUVM The user mode VM handle.
1168 * @param idxL1 The index into the L1 table pointing to the binary search tree containing the node.
1169 * @param idxL2Root The L2 table index where the tree root is located.
1170 * @param idxL2Nd The node index to remove.
1171 * @param pL2Nd The L2 table entry to remove.
1172 * @param idxL2NdParent The parent's index, can be DBGF_BP_L2_ENTRY_IDX_END if the root is about to be removed.
1173 * @param pL2NdParent The parent's L2 table entry, can be NULL if the root is about to be removed.
1174 * @param fLeftChild Flag whether the node is the left child of the parent or the right one.
1175 */
1176static int dbgfR3BpInt3BstNodeRemove(PUVM pUVM, uint32_t idxL1, uint32_t idxL2Root,
1177 uint32_t idxL2Nd, PDBGFBPL2ENTRY pL2Nd,
1178 uint32_t idxL2NdParent, PDBGFBPL2ENTRY pL2NdParent,
1179 bool fLeftChild)
1180{
1181 /*
1182 * If there are only two nodes remaining the tree will get destroyed and the
1183 * L1 entry will be converted to the direct handle type.
1184 */
1185 uint32_t idxL2Left = DBGF_BP_L2_ENTRY_GET_IDX_LEFT(pL2Nd->u64LeftRightIdxDepthBpHnd2);
1186 uint32_t idxL2Right = DBGF_BP_L2_ENTRY_GET_IDX_RIGHT(pL2Nd->u64LeftRightIdxDepthBpHnd2);
1187
1188 Assert(idxL2NdParent != DBGF_BP_L2_ENTRY_IDX_END || !pL2NdParent); RT_NOREF(idxL2NdParent);
1189 uint32_t idxL2ParentNew = DBGF_BP_L2_ENTRY_IDX_END;
1190 if (idxL2Right == DBGF_BP_L2_ENTRY_IDX_END)
1191 idxL2ParentNew = idxL2Left;
1192 else
1193 {
1194 /* Find the leftmost entry of the right subtree and move it to the location of the node being removed. */
1195 PDBGFBPL2ENTRY pL2NdLeftmostParent = NULL;
1196 PDBGFBPL2ENTRY pL2NdLeftmost = NULL;
1197 uint32_t idxL2NdLeftmostParent = DBGF_BP_L2_ENTRY_IDX_END;
1198 uint32_t idxL2Leftmost = DBGF_BP_L2_ENTRY_IDX_END;
1199 int rc = dbgfR33BpInt3BstGetLeftmostEntryFromNode(pUVM, idxL2Right, &idxL2Leftmost, &pL2NdLeftmost,
1200 &idxL2NdLeftmostParent, &pL2NdLeftmostParent);
1201 AssertRCReturn(rc, rc);
1202
1203 if (pL2NdLeftmostParent)
1204 {
1205 /* Rearrange the leftmost entry's parent pointer. */
1206 dbgfBpL2TblEntryUpdateLeft(pL2NdLeftmostParent, DBGF_BP_L2_ENTRY_GET_IDX_RIGHT(pL2NdLeftmost->u64LeftRightIdxDepthBpHnd2), 0 /*iDepth*/);
1207 dbgfBpL2TblEntryUpdateRight(pL2NdLeftmost, idxL2Right, 0 /*iDepth*/);
1208 }
1209
1210 dbgfBpL2TblEntryUpdateLeft(pL2NdLeftmost, idxL2Left, 0 /*iDepth*/);
1211
1212 /* Update the removed node's parent to point to the new node. */
1213 idxL2ParentNew = idxL2Leftmost;
1214 }
1215
1216 if (pL2NdParent)
1217 {
1218 /* Assign the new L2 index to the proper parent's left or right pointer. */
1219 if (fLeftChild)
1220 dbgfBpL2TblEntryUpdateLeft(pL2NdParent, idxL2ParentNew, 0 /*iDepth*/);
1221 else
1222 dbgfBpL2TblEntryUpdateRight(pL2NdParent, idxL2ParentNew, 0 /*iDepth*/);
1223 }
1224 else
1225 {
1226 /* The root node is removed, set the new root in the L1 table. */
1227 Assert(idxL2ParentNew != DBGF_BP_L2_ENTRY_IDX_END);
1228 idxL2Root = idxL2ParentNew;
1229 ASMAtomicXchgU32(&pUVM->dbgf.s.paBpLocL1R3[idxL1], DBGF_BP_INT3_L1_ENTRY_CREATE_L2_IDX(idxL2ParentNew));
1230 }
1231
1232 /* Free the node. */
1233 dbgfR3BpL2TblEntryFree(pUVM, idxL2Nd, pL2Nd);
1234
1235 /*
1236 * Check whether the old/new root is the only node remaining and convert the L1
1237 * table entry to a direct breakpoint handle one in that case.
1238 */
1239 pL2Nd = dbgfR3BpL2GetByIdx(pUVM, idxL2Root);
1240 AssertPtr(pL2Nd);
1241 if ( DBGF_BP_L2_ENTRY_GET_IDX_LEFT(pL2Nd->u64LeftRightIdxDepthBpHnd2) == DBGF_BP_L2_ENTRY_IDX_END
1242 && DBGF_BP_L2_ENTRY_GET_IDX_RIGHT(pL2Nd->u64LeftRightIdxDepthBpHnd2) == DBGF_BP_L2_ENTRY_IDX_END)
1243 {
1244 DBGFBP hBp = DBGF_BP_L2_ENTRY_GET_BP_HND(pL2Nd->u64GCPtrKeyAndBpHnd1, pL2Nd->u64LeftRightIdxDepthBpHnd2);
1245 dbgfR3BpL2TblEntryFree(pUVM, idxL2Root, pL2Nd);
1246 ASMAtomicXchgU32(&pUVM->dbgf.s.paBpLocL1R3[idxL1], DBGF_BP_INT3_L1_ENTRY_CREATE_BP_HND(hBp));
1247 }
1248
1249 return VINF_SUCCESS;
1250}
1251
1252
1253/**
1254 * Removes the given breakpoint handle keyed with the GC pointer from the L2 binary search tree
1255 * pointed to by the given L2 root index.
1256 *
1257 * @returns VBox status code.
1258 * @param pUVM The user mode VM handle.
1259 * @param idxL1 The index into the L1 table pointing to the binary search tree.
1260 * @param idxL2Root The L2 table index where the tree root is located.
1261 * @param hBp The breakpoint handle which is to be removed.
1262 * @param GCPtr The GC pointer the breakpoint is keyed with.
1263 */
1264static int dbgfR3BpInt3L2BstRemove(PUVM pUVM, uint32_t idxL1, uint32_t idxL2Root, DBGFBP hBp, RTGCUINTPTR GCPtr)
1265{
1266 GCPtr = DBGF_BP_INT3_L2_KEY_EXTRACT_FROM_ADDR(GCPtr);
1267
1268 int rc = RTSemFastMutexRequest(pUVM->dbgf.s.hMtxBpL2Wr); AssertRC(rc);
1269
1270 uint32_t idxL2Cur = idxL2Root;
1271 uint32_t idxL2Parent = DBGF_BP_L2_ENTRY_IDX_END;
1272 bool fLeftChild = false;
1273 PDBGFBPL2ENTRY pL2EntryParent = NULL;
1274 for (;;)
1275 {
1276 PDBGFBPL2ENTRY pL2Entry = dbgfR3BpL2GetByIdx(pUVM, idxL2Cur);
1277 AssertPtr(pL2Entry);
1278
1279 /* Check whether this node is the one to be removed. */
1280 RTGCUINTPTR GCPtrL2Entry = DBGF_BP_L2_ENTRY_GET_GCPTR(pL2Entry->u64GCPtrKeyAndBpHnd1);
1281 if (GCPtrL2Entry == GCPtr)
1282 {
1283 Assert(DBGF_BP_L2_ENTRY_GET_BP_HND(pL2Entry->u64GCPtrKeyAndBpHnd1, pL2Entry->u64LeftRightIdxDepthBpHnd2) == hBp); RT_NOREF(hBp);
1284
1285 rc = dbgfR3BpInt3BstNodeRemove(pUVM, idxL1, idxL2Root, idxL2Cur, pL2Entry,
1286 idxL2Parent, pL2EntryParent, fLeftChild);
1287 break;
1288 }
1289
1290 pL2EntryParent = pL2Entry;
1291 idxL2Parent = idxL2Cur;
1292
1293 if (GCPtr < GCPtrL2Entry)
1294 {
1295 fLeftChild = true;
1296 idxL2Cur = DBGF_BP_L2_ENTRY_GET_IDX_LEFT(pL2Entry->u64LeftRightIdxDepthBpHnd2);
1297 }
1298 else
1299 {
1300 fLeftChild = false;
1301 idxL2Cur = DBGF_BP_L2_ENTRY_GET_IDX_RIGHT(pL2Entry->u64LeftRightIdxDepthBpHnd2);
1302 }
1303
1304 AssertBreakStmt(idxL2Cur != DBGF_BP_L2_ENTRY_IDX_END, rc = VERR_DBGF_BP_L2_LOOKUP_FAILED);
1305 }
1306
1307 int rc2 = RTSemFastMutexRelease(pUVM->dbgf.s.hMtxBpL2Wr); AssertRC(rc2);
1308
1309 return rc;
1310}
1311
1312
1313/**
1314 * Adds the given int3 breakpoint to the appropriate lookup tables.
1315 *
1316 * @returns VBox status code.
1317 * @param pUVM The user mode VM handle.
1318 * @param hBp The breakpoint handle to add.
1319 * @param pBp The internal breakpoint state.
1320 */
1321static int dbgfR3BpInt3Add(PUVM pUVM, DBGFBP hBp, PDBGFBPINT pBp)
1322{
1323 AssertReturn(DBGF_BP_PUB_GET_TYPE(pBp->Pub.fFlagsAndType) == DBGFBPTYPE_INT3, VERR_DBGF_BP_IPE_3);
1324
1325 int rc = VINF_SUCCESS;
1326 uint16_t idxL1 = DBGF_BP_INT3_L1_IDX_EXTRACT_FROM_ADDR(pBp->Pub.u.Int3.GCPtr);
1327 uint8_t cTries = 16;
1328
1329 while (cTries--)
1330 {
1331 uint32_t u32Entry = ASMAtomicReadU32(&pUVM->dbgf.s.paBpLocL1R3[idxL1]);
1332
1333 if (u32Entry == DBGF_BP_INT3_L1_ENTRY_TYPE_NULL)
1334 {
1335 /*
1336 * No breakpoint assigned so far for this entry, create an entry containing
1337 * the direct breakpoint handle and try to exchange it atomically.
1338 */
1339 u32Entry = DBGF_BP_INT3_L1_ENTRY_CREATE_BP_HND(hBp);
1340 if (ASMAtomicCmpXchgU32(&pUVM->dbgf.s.paBpLocL1R3[idxL1], u32Entry, DBGF_BP_INT3_L1_ENTRY_TYPE_NULL))
1341 break;
1342 }
1343 else
1344 {
1345 rc = dbgfR3BpInt3L2BstNodeAdd(pUVM, idxL1, hBp, pBp->Pub.u.Int3.GCPtr);
1346 if (rc == VINF_TRY_AGAIN)
1347 continue;
1348
1349 break;
1350 }
1351 }
1352
1353 if ( RT_SUCCESS(rc)
1354 && !cTries) /* Too much contention, abort with an error. */
1355 rc = VERR_DBGF_BP_INT3_ADD_TRIES_REACHED;
1356
1357 return rc;
1358}
1359
1360
1361/**
1362 * Gets a breakpoint given by address.
1363 *
1364 * @returns The breakpoint handle on success or NIL_DBGFBP if not found.
1365 * @param pUVM The user mode VM handle.
1366 * @param enmType The breakpoint type.
1367 * @param GCPtr The breakpoint address.
1368 * @param ppBp Where to store the pointer to the internal breakpoint state on success, optional.
1369 */
1370static DBGFBP dbgfR3BpGetByAddr(PUVM pUVM, DBGFBPTYPE enmType, RTGCUINTPTR GCPtr, PDBGFBPINT *ppBp)
1371{
1372 DBGFBP hBp = NIL_DBGFBP;
1373
1374 switch (enmType)
1375 {
1376 case DBGFBPTYPE_REG:
1377 {
1378 PVM pVM = pUVM->pVM;
1379 VM_ASSERT_VALID_EXT_RETURN(pVM, NIL_DBGFBP);
1380
1381 for (uint32_t i = 0; i < RT_ELEMENTS(pVM->dbgf.s.aHwBreakpoints); i++)
1382 {
1383 PDBGFBPHW pHwBp = &pVM->dbgf.s.aHwBreakpoints[i];
1384
1385 AssertCompileSize(DBGFBP, sizeof(uint32_t));
1386 DBGFBP hBpTmp = ASMAtomicReadU32(&pHwBp->hBp);
1387 if ( pHwBp->GCPtr == GCPtr
1388 && hBpTmp != NIL_DBGFBP)
1389 {
1390 hBp = hBpTmp;
1391 break;
1392 }
1393 }
1394
1395 break;
1396 }
1397
1398 case DBGFBPTYPE_INT3:
1399 {
1400 const uint16_t idxL1 = DBGF_BP_INT3_L1_IDX_EXTRACT_FROM_ADDR(GCPtr);
1401 const uint32_t u32L1Entry = ASMAtomicReadU32(&pUVM->dbgf.s.CTX_SUFF(paBpLocL1)[idxL1]);
1402
1403 if (u32L1Entry != DBGF_BP_INT3_L1_ENTRY_TYPE_NULL)
1404 {
1405 uint8_t u8Type = DBGF_BP_INT3_L1_ENTRY_GET_TYPE(u32L1Entry);
1406 if (u8Type == DBGF_BP_INT3_L1_ENTRY_TYPE_BP_HND)
1407 hBp = DBGF_BP_INT3_L1_ENTRY_GET_BP_HND(u32L1Entry);
1408 else if (u8Type == DBGF_BP_INT3_L1_ENTRY_TYPE_L2_IDX)
1409 {
1410 RTGCUINTPTR GCPtrKey = DBGF_BP_INT3_L2_KEY_EXTRACT_FROM_ADDR(GCPtr);
1411 PDBGFBPL2ENTRY pL2Nd = dbgfR3BpL2GetByIdx(pUVM, DBGF_BP_INT3_L1_ENTRY_GET_L2_IDX(u32L1Entry));
1412
1413 for (;;)
1414 {
1415 AssertPtr(pL2Nd);
1416
1417 RTGCUINTPTR GCPtrL2Entry = DBGF_BP_L2_ENTRY_GET_GCPTR(pL2Nd->u64GCPtrKeyAndBpHnd1);
1418 if (GCPtrKey == GCPtrL2Entry)
1419 {
1420 hBp = DBGF_BP_L2_ENTRY_GET_BP_HND(pL2Nd->u64GCPtrKeyAndBpHnd1, pL2Nd->u64LeftRightIdxDepthBpHnd2);
1421 break;
1422 }
1423
1424 /* Not found, get to the next level. */
1425 uint32_t idxL2Next = (GCPtrKey < GCPtrL2Entry)
1426 ? DBGF_BP_L2_ENTRY_GET_IDX_LEFT(pL2Nd->u64LeftRightIdxDepthBpHnd2)
1427 : DBGF_BP_L2_ENTRY_GET_IDX_RIGHT(pL2Nd->u64LeftRightIdxDepthBpHnd2);
1428 /* Address not found if the entry denotes the end. */
1429 if (idxL2Next == DBGF_BP_L2_ENTRY_IDX_END)
1430 break;
1431
1432 pL2Nd = dbgfR3BpL2GetByIdx(pUVM, idxL2Next);
1433 }
1434 }
1435 }
1436 break;
1437 }
1438
1439 default:
1440 AssertMsgFailed(("enmType=%d\n", enmType));
1441 break;
1442 }
1443
1444 if ( hBp != NIL_DBGFBP
1445 && ppBp)
1446 *ppBp = dbgfR3BpGetByHnd(pUVM, hBp);
1447 return hBp;
1448}
1449
1450
1451/**
1452 * @callback_method_impl{FNVMMEMTRENDEZVOUS}
1453 */
1454static DECLCALLBACK(VBOXSTRICTRC) dbgfR3BpInt3RemoveEmtWorker(PVM pVM, PVMCPU pVCpu, void *pvUser)
1455{
1456 DBGFBP hBp = (DBGFBP)(uintptr_t)pvUser;
1457
1458 VMCPU_ASSERT_EMT(pVCpu);
1459 VM_ASSERT_VALID_EXT_RETURN(pVM, VERR_INVALID_VM_HANDLE);
1460
1461 PUVM pUVM = pVM->pUVM;
1462 PDBGFBPINT pBp = dbgfR3BpGetByHnd(pUVM, hBp);
1463 AssertPtrReturn(pBp, VERR_DBGF_BP_IPE_8);
1464
1465 int rc = VINF_SUCCESS;
1466 if (pVCpu->idCpu == 0)
1467 {
1468 uint16_t idxL1 = DBGF_BP_INT3_L1_IDX_EXTRACT_FROM_ADDR(pBp->Pub.u.Int3.GCPtr);
1469 uint32_t u32Entry = ASMAtomicReadU32(&pUVM->dbgf.s.paBpLocL1R3[idxL1]);
1470 AssertReturn(u32Entry != DBGF_BP_INT3_L1_ENTRY_TYPE_NULL, VERR_DBGF_BP_IPE_6);
1471
1472 uint8_t u8Type = DBGF_BP_INT3_L1_ENTRY_GET_TYPE(u32Entry);
1473 if (u8Type == DBGF_BP_INT3_L1_ENTRY_TYPE_BP_HND)
1474 {
1475 /* Single breakpoint, just exchange atomically with the null value. */
1476 if (!ASMAtomicCmpXchgU32(&pUVM->dbgf.s.paBpLocL1R3[idxL1], DBGF_BP_INT3_L1_ENTRY_TYPE_NULL, u32Entry))
1477 {
1478 /*
1479 * A breakpoint addition must have raced us converting the L1 entry to an L2 index type, re-read
1480 * and remove the node from the created binary search tree.
1481 *
1482 * This works because after the entry was converted to an L2 index it can only be converted back
1483 * to a direct handle by removing one or more nodes which always goes through the fast mutex
1484 * protecting the L2 table. Likewise adding a new breakpoint requires grabbing the mutex as well
1485 * so there is serialization here and the node can be removed safely without having to worry about
1486 * concurrent tree modifications.
1487 */
1488 u32Entry = ASMAtomicReadU32(&pUVM->dbgf.s.paBpLocL1R3[idxL1]);
1489 AssertReturn(DBGF_BP_INT3_L1_ENTRY_GET_TYPE(u32Entry) == DBGF_BP_INT3_L1_ENTRY_TYPE_L2_IDX, VERR_DBGF_BP_IPE_9);
1490
1491 rc = dbgfR3BpInt3L2BstRemove(pUVM, idxL1, DBGF_BP_INT3_L1_ENTRY_GET_L2_IDX(u32Entry),
1492 hBp, pBp->Pub.u.Int3.GCPtr);
1493 }
1494 }
1495 else if (u8Type == DBGF_BP_INT3_L1_ENTRY_TYPE_L2_IDX)
1496 rc = dbgfR3BpInt3L2BstRemove(pUVM, idxL1, DBGF_BP_INT3_L1_ENTRY_GET_L2_IDX(u32Entry),
1497 hBp, pBp->Pub.u.Int3.GCPtr);
1498 }
1499
1500 return rc;
1501}
1502
1503
1504/**
1505 * Removes the given int3 breakpoint from all lookup tables.
1506 *
1507 * @returns VBox status code.
1508 * @param pUVM The user mode VM handle.
1509 * @param hBp The breakpoint handle to remove.
1510 * @param pBp The internal breakpoint state.
1511 */
1512static int dbgfR3BpInt3Remove(PUVM pUVM, DBGFBP hBp, PDBGFBPINT pBp)
1513{
1514 AssertReturn(DBGF_BP_PUB_GET_TYPE(pBp->Pub.fFlagsAndType) == DBGFBPTYPE_INT3, VERR_DBGF_BP_IPE_3);
1515
1516 /*
1517 * This has to be done by an EMT rendezvous in order to not have an EMT traversing
1518 * any L2 trees while it is being removed.
1519 */
1520 return VMMR3EmtRendezvous(pUVM->pVM, VMMEMTRENDEZVOUS_FLAGS_TYPE_ALL_AT_ONCE, dbgfR3BpInt3RemoveEmtWorker, (void *)(uintptr_t)hBp);
1521}
1522
1523
1524/**
1525 * @callback_method_impl{FNVMMEMTRENDEZVOUS}
1526 */
1527static DECLCALLBACK(VBOXSTRICTRC) dbgfR3BpRegRecalcOnCpu(PVM pVM, PVMCPU pVCpu, void *pvUser)
1528{
1529 RT_NOREF(pvUser);
1530
1531 /*
1532 * CPU 0 updates the enabled hardware breakpoint counts.
1533 */
1534 if (pVCpu->idCpu == 0)
1535 {
1536 pVM->dbgf.s.cEnabledHwBreakpoints = 0;
1537 pVM->dbgf.s.cEnabledHwIoBreakpoints = 0;
1538
1539 for (uint32_t iBp = 0; iBp < RT_ELEMENTS(pVM->dbgf.s.aHwBreakpoints); iBp++)
1540 {
1541 if (pVM->dbgf.s.aHwBreakpoints[iBp].fEnabled)
1542 {
1543 pVM->dbgf.s.cEnabledHwBreakpoints += 1;
1544 pVM->dbgf.s.cEnabledHwIoBreakpoints += pVM->dbgf.s.aHwBreakpoints[iBp].fType == X86_DR7_RW_IO;
1545 }
1546 }
1547 }
1548
1549 return CPUMRecalcHyperDRx(pVCpu, UINT8_MAX);
1550}
1551
1552
1553/**
1554 * Arms the given breakpoint.
1555 *
1556 * @returns VBox status code.
1557 * @param pUVM The user mode VM handle.
1558 * @param hBp The breakpoint handle to arm.
1559 * @param pBp The internal breakpoint state pointer for the handle.
1560 *
1561 * @thread Any thread.
1562 */
1563static int dbgfR3BpArm(PUVM pUVM, DBGFBP hBp, PDBGFBPINT pBp)
1564{
1565 int rc = VINF_SUCCESS;
1566 PVM pVM = pUVM->pVM;
1567
1568 Assert(!DBGF_BP_PUB_IS_ENABLED(pBp->Pub.fFlagsAndType));
1569 switch (DBGF_BP_PUB_GET_TYPE(pBp->Pub.fFlagsAndType))
1570 {
1571 case DBGFBPTYPE_REG:
1572 {
1573 Assert(pBp->Pub.u.Reg.iReg < RT_ELEMENTS(pVM->dbgf.s.aHwBreakpoints));
1574 PDBGFBPHW pBpHw = &pVM->dbgf.s.aHwBreakpoints[pBp->Pub.u.Reg.iReg];
1575 Assert(pBpHw->hBp == hBp); RT_NOREF(hBp);
1576
1577 dbgfR3BpSetEnabled(pBp, true /*fEnabled*/);
1578 ASMAtomicWriteBool(&pBpHw->fEnabled, true);
1579 rc = VMMR3EmtRendezvous(pVM, VMMEMTRENDEZVOUS_FLAGS_TYPE_ALL_AT_ONCE, dbgfR3BpRegRecalcOnCpu, NULL);
1580 if (RT_FAILURE(rc))
1581 {
1582 ASMAtomicWriteBool(&pBpHw->fEnabled, false);
1583 dbgfR3BpSetEnabled(pBp, false /*fEnabled*/);
1584 }
1585 break;
1586 }
1587 case DBGFBPTYPE_INT3:
1588 {
1589 dbgfR3BpSetEnabled(pBp, true /*fEnabled*/);
1590
1591 /** @todo When we enable the first int3 breakpoint we should do this in an EMT rendezvous
1592 * as the VMX code intercepts #BP only when at least one int3 breakpoint is enabled.
1593 * A racing vCPU might trigger it and forward it to the guest causing panics/crashes/havoc. */
1594 /*
1595 * Save current byte and write the int3 instruction byte.
1596 */
1597 rc = PGMPhysSimpleReadGCPhys(pVM, &pBp->Pub.u.Int3.bOrg, pBp->Pub.u.Int3.PhysAddr, sizeof(pBp->Pub.u.Int3.bOrg));
1598 if (RT_SUCCESS(rc))
1599 {
1600 static const uint8_t s_bInt3 = 0xcc;
1601 rc = PGMPhysSimpleWriteGCPhys(pVM, pBp->Pub.u.Int3.PhysAddr, &s_bInt3, sizeof(s_bInt3));
1602 if (RT_SUCCESS(rc))
1603 {
1604 ASMAtomicIncU32(&pVM->dbgf.s.cEnabledInt3Breakpoints);
1605 Log(("DBGF: Set breakpoint at %RGv (Phys %RGp)\n", pBp->Pub.u.Int3.GCPtr, pBp->Pub.u.Int3.PhysAddr));
1606 }
1607 }
1608
1609 if (RT_FAILURE(rc))
1610 dbgfR3BpSetEnabled(pBp, false /*fEnabled*/);
1611
1612 break;
1613 }
1614 case DBGFBPTYPE_PORT_IO:
1615 case DBGFBPTYPE_MMIO:
1616 rc = VERR_NOT_IMPLEMENTED;
1617 break;
1618 default:
1619 AssertMsgFailedReturn(("Invalid breakpoint type %d\n", DBGF_BP_PUB_GET_TYPE(pBp->Pub.fFlagsAndType)),
1620 VERR_IPE_NOT_REACHED_DEFAULT_CASE);
1621 }
1622
1623 return rc;
1624}
1625
1626
1627/**
1628 * Disarms the given breakpoint.
1629 *
1630 * @returns VBox status code.
1631 * @param pUVM The user mode VM handle.
1632 * @param hBp The breakpoint handle to disarm.
1633 * @param pBp The internal breakpoint state pointer for the handle.
1634 *
1635 * @thread Any thread.
1636 */
1637static int dbgfR3BpDisarm(PUVM pUVM, DBGFBP hBp, PDBGFBPINT pBp)
1638{
1639 int rc = VINF_SUCCESS;
1640 PVM pVM = pUVM->pVM;
1641
1642 Assert(DBGF_BP_PUB_IS_ENABLED(pBp->Pub.fFlagsAndType));
1643 switch (DBGF_BP_PUB_GET_TYPE(pBp->Pub.fFlagsAndType))
1644 {
1645 case DBGFBPTYPE_REG:
1646 {
1647 Assert(pBp->Pub.u.Reg.iReg < RT_ELEMENTS(pVM->dbgf.s.aHwBreakpoints));
1648 PDBGFBPHW pBpHw = &pVM->dbgf.s.aHwBreakpoints[pBp->Pub.u.Reg.iReg];
1649 Assert(pBpHw->hBp == hBp); RT_NOREF(hBp);
1650
1651 dbgfR3BpSetEnabled(pBp, false /*fEnabled*/);
1652 ASMAtomicWriteBool(&pBpHw->fEnabled, false);
1653 rc = VMMR3EmtRendezvous(pVM, VMMEMTRENDEZVOUS_FLAGS_TYPE_ALL_AT_ONCE, dbgfR3BpRegRecalcOnCpu, NULL);
1654 if (RT_FAILURE(rc))
1655 {
1656 ASMAtomicWriteBool(&pBpHw->fEnabled, true);
1657 dbgfR3BpSetEnabled(pBp, true /*fEnabled*/);
1658 }
1659 break;
1660 }
1661 case DBGFBPTYPE_INT3:
1662 {
1663 /*
1664 * Check that the current byte is the int3 instruction, and restore the original one.
1665 * We currently ignore invalid bytes.
1666 */
1667 uint8_t bCurrent = 0;
1668 rc = PGMPhysSimpleReadGCPhys(pVM, &bCurrent, pBp->Pub.u.Int3.PhysAddr, sizeof(bCurrent));
1669 if ( RT_SUCCESS(rc)
1670 && bCurrent == 0xcc)
1671 {
1672 rc = PGMPhysSimpleWriteGCPhys(pVM, pBp->Pub.u.Int3.PhysAddr, &pBp->Pub.u.Int3.bOrg, sizeof(pBp->Pub.u.Int3.bOrg));
1673 if (RT_SUCCESS(rc))
1674 {
1675 ASMAtomicDecU32(&pVM->dbgf.s.cEnabledInt3Breakpoints);
1676 dbgfR3BpSetEnabled(pBp, false /*fEnabled*/);
1677 Log(("DBGF: Removed breakpoint at %RGv (Phys %RGp)\n", pBp->Pub.u.Int3.GCPtr, pBp->Pub.u.Int3.PhysAddr));
1678 }
1679 }
1680 break;
1681 }
1682 case DBGFBPTYPE_PORT_IO:
1683 case DBGFBPTYPE_MMIO:
1684 rc = VERR_NOT_IMPLEMENTED;
1685 break;
1686 default:
1687 AssertMsgFailedReturn(("Invalid breakpoint type %d\n", DBGF_BP_PUB_GET_TYPE(pBp->Pub.fFlagsAndType)),
1688 VERR_IPE_NOT_REACHED_DEFAULT_CASE);
1689 }
1690
1691 return rc;
1692}
1693
1694
1695/**
1696 * Creates a new breakpoint owner returning a handle which can be used when setting breakpoints.
1697 *
1698 * @returns VBox status code.
1699 * @retval VERR_DBGF_BP_OWNER_NO_MORE_HANDLES if there are no more free owner handles available.
1700 * @param pUVM The user mode VM handle.
1701 * @param pfnBpHit The R3 callback which is called when a breakpoint with the owner handle is hit.
1702 * @param phBpOwner Where to store the owner handle on success.
1703 *
1704 * @thread Any thread but might defer work to EMT on the first call.
1705 */
1706VMMR3DECL(int) DBGFR3BpOwnerCreate(PUVM pUVM, PFNDBGFBPHIT pfnBpHit, PDBGFBPOWNER phBpOwner)
1707{
1708 /*
1709 * Validate the input.
1710 */
1711 UVM_ASSERT_VALID_EXT_RETURN(pUVM, VERR_INVALID_VM_HANDLE);
1712 AssertPtrReturn(pfnBpHit, VERR_INVALID_PARAMETER);
1713 AssertPtrReturn(phBpOwner, VERR_INVALID_POINTER);
1714
1715 int rc = dbgfR3BpOwnerEnsureInit(pUVM);
1716 AssertRCReturn(rc, rc);
1717
1718 /* Try to find a free entry in the owner table. */
1719 for (;;)
1720 {
1721 /* Scan the associated bitmap for a free entry. */
1722 int32_t iClr = ASMBitFirstClear(pUVM->dbgf.s.pbmBpOwnersAllocR3, DBGF_BP_OWNER_COUNT_MAX);
1723 if (iClr != -1)
1724 {
1725 /*
1726 * Try to allocate, we could get raced here as well. In that case
1727 * we try again.
1728 */
1729 if (!ASMAtomicBitTestAndSet(pUVM->dbgf.s.pbmBpOwnersAllocR3, iClr))
1730 {
1731 PDBGFBPOWNERINT pBpOwner = &pUVM->dbgf.s.paBpOwnersR3[iClr];
1732 pBpOwner->cRefs = 1;
1733 pBpOwner->pfnBpHitR3 = pfnBpHit;
1734
1735 *phBpOwner = (DBGFBPOWNER)iClr;
1736 return VINF_SUCCESS;
1737 }
1738 /* else Retry with another spot. */
1739 }
1740 else /* no free entry in bitmap, out of entries. */
1741 {
1742 rc = VERR_DBGF_BP_OWNER_NO_MORE_HANDLES;
1743 break;
1744 }
1745 }
1746
1747 return rc;
1748}
1749
1750
1751/**
1752 * Destroys the owner identified by the given handle.
1753 *
1754 * @returns VBox status code.
1755 * @retval VERR_INVALID_HANDLE if the given owner handle is invalid.
1756 * @retval VERR_DBGF_OWNER_BUSY if there are still breakpoints set with the given owner handle.
1757 * @param pUVM The user mode VM handle.
1758 * @param hBpOwner The breakpoint owner handle to destroy.
1759 */
1760VMMR3DECL(int) DBGFR3BpOwnerDestroy(PUVM pUVM, DBGFBPOWNER hBpOwner)
1761{
1762 /*
1763 * Validate the input.
1764 */
1765 UVM_ASSERT_VALID_EXT_RETURN(pUVM, VERR_INVALID_VM_HANDLE);
1766 AssertReturn(hBpOwner != NIL_DBGFBPOWNER, VERR_INVALID_HANDLE);
1767
1768 int rc = dbgfR3BpOwnerEnsureInit(pUVM);
1769 AssertRCReturn(rc, rc);
1770
1771 PDBGFBPOWNERINT pBpOwner = dbgfR3BpOwnerGetByHnd(pUVM, hBpOwner);
1772 if (RT_LIKELY(pBpOwner))
1773 {
1774 if (ASMAtomicReadU32(&pBpOwner->cRefs) == 1)
1775 {
1776 pBpOwner->pfnBpHitR3 = NULL;
1777 ASMAtomicDecU32(&pBpOwner->cRefs);
1778 ASMAtomicBitClear(pUVM->dbgf.s.pbmBpOwnersAllocR3, hBpOwner);
1779 }
1780 else
1781 rc = VERR_DBGF_OWNER_BUSY;
1782 }
1783 else
1784 rc = VERR_INVALID_HANDLE;
1785
1786 return rc;
1787}
1788
1789
1790/**
1791 * Sets a breakpoint (int 3 based).
1792 *
1793 * @returns VBox status code.
1794 * @param pUVM The user mode VM handle.
1795 * @param idSrcCpu The ID of the virtual CPU used for the
1796 * breakpoint address resolution.
1797 * @param pAddress The address of the breakpoint.
1798 * @param iHitTrigger The hit count at which the breakpoint starts triggering.
1799 * Use 0 (or 1) if it's gonna trigger at once.
1800 * @param iHitDisable The hit count which disables the breakpoint.
1801 * Use ~(uint64_t)0 if it's never gonna be disabled.
1802 * @param phBp Where to store the breakpoint handle on success.
1803 *
1804 * @thread Any thread.
1805 */
1806VMMR3DECL(int) DBGFR3BpSetInt3(PUVM pUVM, VMCPUID idSrcCpu, PCDBGFADDRESS pAddress,
1807 uint64_t iHitTrigger, uint64_t iHitDisable, PDBGFBP phBp)
1808{
1809 return DBGFR3BpSetInt3Ex(pUVM, NIL_DBGFBPOWNER, NULL /*pvUser*/, idSrcCpu, pAddress,
1810 iHitTrigger, iHitDisable, phBp);
1811}
1812
1813
1814/**
1815 * Sets a breakpoint (int 3 based) - extended version.
1816 *
1817 * @returns VBox status code.
1818 * @param pUVM The user mode VM handle.
1819 * @param hOwner The owner handle, use NIL_DBGFBPOWNER if no special owner attached.
1820 * @param pvUser Opaque user data to pass in the owner callback.
1821 * @param idSrcCpu The ID of the virtual CPU used for the
1822 * breakpoint address resolution.
1823 * @param pAddress The address of the breakpoint.
1824 * @param iHitTrigger The hit count at which the breakpoint starts triggering.
1825 * Use 0 (or 1) if it's gonna trigger at once.
1826 * @param iHitDisable The hit count which disables the breakpoint.
1827 * Use ~(uint64_t)0 if it's never gonna be disabled.
1828 * @param phBp Where to store the breakpoint handle on success.
1829 *
1830 * @thread Any thread.
1831 */
1832VMMR3DECL(int) DBGFR3BpSetInt3Ex(PUVM pUVM, DBGFBPOWNER hOwner, void *pvUser,
1833 VMCPUID idSrcCpu, PCDBGFADDRESS pAddress,
1834 uint64_t iHitTrigger, uint64_t iHitDisable, PDBGFBP phBp)
1835{
1836 UVM_ASSERT_VALID_EXT_RETURN(pUVM, VERR_INVALID_VM_HANDLE);
1837 AssertReturn(hOwner != NIL_DBGFBPOWNER || pvUser == NULL, VERR_INVALID_PARAMETER);
1838 AssertReturn(DBGFR3AddrIsValid(pUVM, pAddress), VERR_INVALID_PARAMETER);
1839 AssertReturn(iHitTrigger <= iHitDisable, VERR_INVALID_PARAMETER);
1840 AssertPtrReturn(phBp, VERR_INVALID_POINTER);
1841
1842 int rc = dbgfR3BpEnsureInit(pUVM);
1843 AssertRCReturn(rc, rc);
1844
1845 /*
1846 * Translate & save the breakpoint address into a guest-physical address.
1847 */
1848 RTGCPHYS GCPhysBpAddr = NIL_RTGCPHYS;
1849 rc = DBGFR3AddrToPhys(pUVM, idSrcCpu, pAddress, &GCPhysBpAddr);
1850 if (RT_SUCCESS(rc))
1851 {
1852 /*
1853 * The physical address from DBGFR3AddrToPhys() is the start of the page;
1854 * we need the exact byte offset into the page when writing to it in dbgfR3BpInt3Arm().
1855 */
1856 GCPhysBpAddr |= (pAddress->FlatPtr & X86_PAGE_OFFSET_MASK);
1857
1858 PDBGFBPINT pBp = NULL;
1859 DBGFBP hBp = dbgfR3BpGetByAddr(pUVM, DBGFBPTYPE_INT3, pAddress->FlatPtr, &pBp);
1860 if ( hBp != NIL_DBGFBP
1861 && pBp->Pub.u.Int3.PhysAddr == GCPhysBpAddr)
1862 {
1863 rc = VINF_SUCCESS;
1864 if (!DBGF_BP_PUB_IS_ENABLED(pBp->Pub.fFlagsAndType))
1865 rc = dbgfR3BpArm(pUVM, hBp, pBp);
1866 if (RT_SUCCESS(rc))
1867 {
1868 rc = VINF_DBGF_BP_ALREADY_EXIST;
1869 if (phBp)
1870 *phBp = hBp;
1871 }
1872 return rc;
1873 }
1874
1875 rc = dbgfR3BpAlloc(pUVM, hOwner, pvUser, DBGFBPTYPE_INT3, iHitTrigger, iHitDisable, &hBp, &pBp);
1876 if (RT_SUCCESS(rc))
1877 {
1878 pBp->Pub.u.Int3.PhysAddr = GCPhysBpAddr;
1879 pBp->Pub.u.Int3.GCPtr = pAddress->FlatPtr;
1880
1881 /* Add the breakpoint to the lookup tables. */
1882 rc = dbgfR3BpInt3Add(pUVM, hBp, pBp);
1883 if (RT_SUCCESS(rc))
1884 {
1885 /* Enable the breakpoint. */
1886 rc = dbgfR3BpArm(pUVM, hBp, pBp);
1887 if (RT_SUCCESS(rc))
1888 {
1889 *phBp = hBp;
1890 return VINF_SUCCESS;
1891 }
1892
1893 int rc2 = dbgfR3BpInt3Remove(pUVM, hBp, pBp); AssertRC(rc2);
1894 }
1895
1896 dbgfR3BpFree(pUVM, hBp, pBp);
1897 }
1898 }
1899
1900 return rc;
1901}
1902
1903
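The page-offset fix-up in DBGFR3BpSetInt3Ex above amounts to combining a page-aligned physical base with the low bits of the flat pointer. A self-contained sketch (the helper name `exact_phys` is illustrative, and the mask matches `X86_PAGE_OFFSET_MASK` for 4 KiB x86 pages):

```c
/* Sketch of the address fix-up above: the physical translation yields a
 * page-aligned base, so the flat pointer's offset within its 4 KiB page is
 * OR'ed back in to locate the exact byte. exact_phys is a stand-in name. */
#include <stdint.h>

#define PAGE_OFFSET_MASK UINT64_C(0xfff) /* X86_PAGE_OFFSET_MASK on x86 */

static uint64_t exact_phys(uint64_t physPageBase, uint64_t flatPtr)
{
    return (physPageBase & ~PAGE_OFFSET_MASK) | (flatPtr & PAGE_OFFSET_MASK);
}
```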
1904/**
1905 * Sets a register breakpoint.
1906 *
1907 * @returns VBox status code.
1908 * @param pUVM The user mode VM handle.
1909 * @param pAddress The address of the breakpoint.
1910 * @param iHitTrigger The hit count at which the breakpoint starts triggering.
1911 * Use 0 (or 1) if it's gonna trigger at once.
1912 * @param iHitDisable The hit count which disables the breakpoint.
1913 * Use ~(uint64_t)0 if it's never gonna be disabled.
1914 * @param fType The access type (one of the X86_DR7_RW_* defines).
1915 * @param cb The access size - 1, 2, 4 or 8 (the latter is AMD64 long mode only).
1916 * Must be 1 if fType is X86_DR7_RW_EO.
1917 * @param phBp Where to store the breakpoint handle.
1918 *
1919 * @thread Any thread.
1920 */
1921VMMR3DECL(int) DBGFR3BpSetReg(PUVM pUVM, PCDBGFADDRESS pAddress, uint64_t iHitTrigger,
1922 uint64_t iHitDisable, uint8_t fType, uint8_t cb, PDBGFBP phBp)
1923{
1924 return DBGFR3BpSetRegEx(pUVM, NIL_DBGFBPOWNER, NULL /*pvUser*/, pAddress,
1925 iHitTrigger, iHitDisable, fType, cb, phBp);
1926}
1927
1928
1929/**
1930 * Sets a register breakpoint - extended version.
1931 *
1932 * @returns VBox status code.
1933 * @param pUVM The user mode VM handle.
1934 * @param hOwner The owner handle, use NIL_DBGFBPOWNER if no special owner attached.
1935 * @param pvUser Opaque user data to pass in the owner callback.
1936 * @param pAddress The address of the breakpoint.
1937 * @param iHitTrigger The hit count at which the breakpoint starts triggering.
1938 * Use 0 (or 1) if it's gonna trigger at once.
1939 * @param iHitDisable The hit count which disables the breakpoint.
1940 * Use ~(uint64_t)0 if it's never gonna be disabled.
1941 * @param fType The access type (one of the X86_DR7_RW_* defines).
1942 * @param cb The access size - 1, 2, 4 or 8 (the latter is AMD64 long mode only).
1943 * Must be 1 if fType is X86_DR7_RW_EO.
1944 * @param phBp Where to store the breakpoint handle.
1945 *
1946 * @thread Any thread.
1947 */
1948VMMR3DECL(int) DBGFR3BpSetRegEx(PUVM pUVM, DBGFBPOWNER hOwner, void *pvUser,
1949 PCDBGFADDRESS pAddress, uint64_t iHitTrigger, uint64_t iHitDisable,
1950 uint8_t fType, uint8_t cb, PDBGFBP phBp)
1951{
1952 UVM_ASSERT_VALID_EXT_RETURN(pUVM, VERR_INVALID_VM_HANDLE);
1953 AssertReturn(hOwner != NIL_DBGFBPOWNER || pvUser == NULL, VERR_INVALID_PARAMETER);
1954 AssertReturn(DBGFR3AddrIsValid(pUVM, pAddress), VERR_INVALID_PARAMETER);
1955 AssertReturn(iHitTrigger <= iHitDisable, VERR_INVALID_PARAMETER);
1956 AssertReturn(cb > 0 && cb <= 8 && RT_IS_POWER_OF_TWO(cb), VERR_INVALID_PARAMETER);
1957 AssertPtrReturn(phBp, VERR_INVALID_POINTER);
1958 switch (fType)
1959 {
1960 case X86_DR7_RW_EO:
1961 if (cb == 1)
1962 break;
1963 AssertMsgFailedReturn(("fType=%#x cb=%d != 1\n", fType, cb), VERR_INVALID_PARAMETER);
1964 case X86_DR7_RW_IO:
1965 case X86_DR7_RW_RW:
1966 case X86_DR7_RW_WO:
1967 break;
1968 default:
1969 AssertMsgFailedReturn(("fType=%#x\n", fType), VERR_INVALID_PARAMETER);
1970 }
1971
1972 int rc = dbgfR3BpEnsureInit(pUVM);
1973 AssertRCReturn(rc, rc);
1974
1975 PDBGFBPINT pBp = NULL;
1976 DBGFBP hBp = dbgfR3BpGetByAddr(pUVM, DBGFBPTYPE_REG, pAddress->FlatPtr, &pBp);
1977 if ( hBp != NIL_DBGFBP
1978 && pBp->Pub.u.Reg.cb == cb
1979 && pBp->Pub.u.Reg.fType == fType)
1980 {
1981 rc = VINF_SUCCESS;
1982 if (!DBGF_BP_PUB_IS_ENABLED(pBp->Pub.fFlagsAndType))
1983 rc = dbgfR3BpArm(pUVM, hBp, pBp);
1984 if (RT_SUCCESS(rc))
1985 {
1986 rc = VINF_DBGF_BP_ALREADY_EXIST;
1987 if (phBp)
1988 *phBp = hBp;
1989 }
1990 return rc;
1991 }
1992
1993 /* Allocate new breakpoint. */
1994 rc = dbgfR3BpAlloc(pUVM, hOwner, pvUser, DBGFBPTYPE_REG, iHitTrigger, iHitDisable, &hBp, &pBp);
1995 if (RT_SUCCESS(rc))
1996 {
1997 pBp->Pub.u.Reg.GCPtr = pAddress->FlatPtr;
1998 pBp->Pub.u.Reg.fType = fType;
1999 pBp->Pub.u.Reg.cb = cb;
2000 pBp->Pub.u.Reg.iReg = UINT8_MAX;
2001 ASMCompilerBarrier();
2002
2003 /* Assign the proper hardware breakpoint. */
2004 rc = dbgfR3BpRegAssign(pUVM->pVM, hBp, pBp);
2005 if (RT_SUCCESS(rc))
2006 {
2007 /* Arm the breakpoint. */
2008 rc = dbgfR3BpArm(pUVM, hBp, pBp);
2009 if (RT_SUCCESS(rc))
2010 {
2011 if (phBp)
2012 *phBp = hBp;
2013 return VINF_SUCCESS;
2014 }
2015 else
2016 {
2017 int rc2 = dbgfR3BpRegRemove(pUVM->pVM, hBp, pBp);
2018 AssertRC(rc2); RT_NOREF(rc2);
2019 }
2020 }
2021
2022 dbgfR3BpFree(pUVM, hBp, pBp);
2023 }
2024
2025 return rc;
2026}
2027
2028
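The parameter validation in DBGFR3BpSetRegEx above (power-of-two size, execute breakpoints limited to one byte) can be condensed into a self-contained predicate. The `DR7_RW_*` values below mirror the x86 DR7 R/W field encodings (0 = execute, 1 = write, 2 = I/O, 3 = read/write); `hwbp_params_valid` is a stand-in name, not a VirtualBox API:

```c
/* Self-contained sketch of the hardware breakpoint validation above: the
 * access size must be 1, 2, 4 or 8, and execute-only breakpoints must cover
 * exactly one byte. */
#include <stdint.h>

#define DR7_RW_EO 0 /* break on instruction execution only */
#define DR7_RW_WO 1 /* break on data writes */
#define DR7_RW_IO 2 /* break on I/O accesses (requires CR4.DE=1) */
#define DR7_RW_RW 3 /* break on data reads and writes */

static int hwbp_params_valid(uint8_t fType, uint8_t cb)
{
    if (cb == 0 || cb > 8 || (cb & (cb - 1)) != 0)
        return 0;                        /* size must be 1, 2, 4 or 8 */
    switch (fType)
    {
        case DR7_RW_EO: return cb == 1;  /* execute bps are always 1 byte */
        case DR7_RW_WO:
        case DR7_RW_IO:
        case DR7_RW_RW: return 1;
        default:        return 0;        /* unknown access type */
    }
}
```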
2029/**
2030 * This is only kept for now so as not to disturb the debugger implementation at this point;
2031 * recompiler breakpoints are not supported anymore (IEM has some API for them, but it isn't
2032 * implemented and should probably be merged with the DBGF breakpoints).
2033 */
2034VMMR3DECL(int) DBGFR3BpSetREM(PUVM pUVM, PCDBGFADDRESS pAddress, uint64_t iHitTrigger,
2035 uint64_t iHitDisable, PDBGFBP phBp)
2036{
2037 RT_NOREF(pUVM, pAddress, iHitTrigger, iHitDisable, phBp);
2038 return VERR_NOT_SUPPORTED;
2039}
2040
2041
2042/**
2043 * Sets an I/O port breakpoint.
2044 *
2045 * @returns VBox status code.
2046 * @param pUVM The user mode VM handle.
2047 * @param uPort The first I/O port.
2048 * @param cPorts The number of I/O ports, see DBGFBPIOACCESS_XXX.
2049 * @param fAccess The access we want to break on.
2050 * @param iHitTrigger The hit count at which the breakpoint starts
2051 * triggering. Use 0 (or 1) if it's gonna trigger at
2052 * once.
2053 * @param iHitDisable The hit count which disables the breakpoint.
2054 * Use ~(uint64_t)0 if it's never gonna be disabled.
2055 * @param phBp Where to store the breakpoint handle.
2056 *
2057 * @thread Any thread.
2058 */
2059VMMR3DECL(int) DBGFR3BpSetPortIo(PUVM pUVM, RTIOPORT uPort, RTIOPORT cPorts, uint32_t fAccess,
2060 uint64_t iHitTrigger, uint64_t iHitDisable, PDBGFBP phBp)
2061{
2062 return DBGFR3BpSetPortIoEx(pUVM, NIL_DBGFBPOWNER, NULL /*pvUser*/, uPort, cPorts,
2063 fAccess, iHitTrigger, iHitDisable, phBp);
2064}
2065
2066
2067/**
2068 * Sets an I/O port breakpoint - extended version.
2069 *
2070 * @returns VBox status code.
2071 * @param pUVM The user mode VM handle.
2072 * @param hOwner The owner handle, use NIL_DBGFBPOWNER if no special owner attached.
2073 * @param pvUser Opaque user data to pass in the owner callback.
2074 * @param uPort The first I/O port.
2075 * @param cPorts The number of I/O ports, see DBGFBPIOACCESS_XXX.
2076 * @param fAccess The access we want to break on.
2077 * @param iHitTrigger The hit count at which the breakpoint starts
2078 * triggering. Use 0 (or 1) if it's gonna trigger at
2079 * once.
2080 * @param iHitDisable The hit count which disables the breakpoint.
2081 * Use ~(uint64_t)0 if it's never gonna be disabled.
2082 * @param phBp Where to store the breakpoint handle.
2083 *
2084 * @thread Any thread.
2085 */
2086VMMR3DECL(int) DBGFR3BpSetPortIoEx(PUVM pUVM, DBGFBPOWNER hOwner, void *pvUser,
2087 RTIOPORT uPort, RTIOPORT cPorts, uint32_t fAccess,
2088 uint64_t iHitTrigger, uint64_t iHitDisable, PDBGFBP phBp)
2089{
2090 UVM_ASSERT_VALID_EXT_RETURN(pUVM, VERR_INVALID_VM_HANDLE);
2091 AssertReturn(hOwner != NIL_DBGFBPOWNER || pvUser == NULL, VERR_INVALID_PARAMETER);
2092 AssertReturn(!(fAccess & ~DBGFBPIOACCESS_VALID_MASK_PORT_IO), VERR_INVALID_FLAGS);
2093 AssertReturn(fAccess, VERR_INVALID_FLAGS);
2094 AssertReturn(iHitTrigger <= iHitDisable, VERR_INVALID_PARAMETER);
2095 AssertPtrReturn(phBp, VERR_INVALID_POINTER);
2096 AssertReturn(cPorts > 0, VERR_OUT_OF_RANGE);
2097 AssertReturn((uint32_t)uPort + cPorts <= UINT32_C(0x10000), VERR_OUT_OF_RANGE);
2098
2099 int rc = dbgfR3BpEnsureInit(pUVM);
2100 AssertRCReturn(rc, rc);
2101
2102 return VERR_NOT_IMPLEMENTED;
2103}
2104
2105
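The port-range validation above has to cope with the 16-bit I/O address space: `uPort + cPorts` must not run past 0x10000, and doing the sum in 32-bit arithmetic sidesteps the 16-bit wraparound that a truncating comparison would have to worry about. A minimal sketch (`port_range_valid` is a stand-in name, not a VirtualBox API):

```c
/* Sketch of the range check an I/O port breakpoint needs: the port space is
 * 16 bits wide, so the range [uPort, uPort + cPorts) must fit below 0x10000.
 * Widening to 32 bits makes the overflow case directly comparable. */
#include <stdint.h>

static int port_range_valid(uint16_t uPort, uint16_t cPorts)
{
    return cPorts > 0 && (uint32_t)uPort + cPorts <= UINT32_C(0x10000);
}
```

For example, 0x10 ports starting at 0xfff0 exactly fill the top of the port space and are valid, while one port more would wrap.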
2106/**
2107 * Sets a memory mapped I/O breakpoint.
2108 *
2109 * @returns VBox status code.
2110 * @param pUVM The user mode VM handle.
2111 * @param GCPhys The first MMIO address.
2112 * @param cb The size of the MMIO range to break on.
2113 * @param fAccess The access we want to break on.
2114 * @param iHitTrigger The hit count at which the breakpoint starts
2115 * triggering. Use 0 (or 1) if it's gonna trigger at
2116 * once.
2117 * @param iHitDisable The hit count which disables the breakpoint.
2118 * Use ~(uint64_t)0 if it's never gonna be disabled.
2119 * @param phBp Where to store the breakpoint handle.
2120 *
2121 * @thread Any thread.
2122 */
2123VMMR3DECL(int) DBGFR3BpSetMmio(PUVM pUVM, RTGCPHYS GCPhys, uint32_t cb, uint32_t fAccess,
2124 uint64_t iHitTrigger, uint64_t iHitDisable, PDBGFBP phBp)
2125{
2126 return DBGFR3BpSetMmioEx(pUVM, NIL_DBGFBPOWNER, NULL /*pvUser*/, GCPhys, cb, fAccess,
2127 iHitTrigger, iHitDisable, phBp);
2128}
2129
2130
2131/**
2132 * Sets a memory mapped I/O breakpoint - extended version.
2133 *
2134 * @returns VBox status code.
2135 * @param pUVM The user mode VM handle.
2136 * @param hOwner The owner handle, use NIL_DBGFBPOWNER if no special owner attached.
2137 * @param pvUser Opaque user data to pass in the owner callback.
2138 * @param GCPhys The first MMIO address.
2139 * @param cb The size of the MMIO range to break on.
2140 * @param fAccess The access we want to break on.
2141 * @param iHitTrigger The hit count at which the breakpoint starts
2142 * triggering. Use 0 (or 1) if it's gonna trigger at
2143 * once.
2144 * @param iHitDisable The hit count which disables the breakpoint.
2145 * Use ~(uint64_t)0 if it's never gonna be disabled.
2146 * @param phBp Where to store the breakpoint handle.
2147 *
2148 * @thread Any thread.
2149 */
2150VMMR3DECL(int) DBGFR3BpSetMmioEx(PUVM pUVM, DBGFBPOWNER hOwner, void *pvUser,
2151 RTGCPHYS GCPhys, uint32_t cb, uint32_t fAccess,
2152 uint64_t iHitTrigger, uint64_t iHitDisable, PDBGFBP phBp)
2153{
2154 UVM_ASSERT_VALID_EXT_RETURN(pUVM, VERR_INVALID_VM_HANDLE);
2155 AssertReturn(hOwner != NIL_DBGFBPOWNER || pvUser == NULL, VERR_INVALID_PARAMETER);
2156 AssertReturn(!(fAccess & ~DBGFBPIOACCESS_VALID_MASK_MMIO), VERR_INVALID_FLAGS);
2157 AssertReturn(fAccess, VERR_INVALID_FLAGS);
2158 AssertReturn(iHitTrigger <= iHitDisable, VERR_INVALID_PARAMETER);
2159 AssertPtrReturn(phBp, VERR_INVALID_POINTER);
2160 AssertReturn(cb, VERR_OUT_OF_RANGE);
2161 AssertReturn(GCPhys + cb > GCPhys, VERR_OUT_OF_RANGE);
2162
2163 int rc = dbgfR3BpEnsureInit(pUVM);
2164 AssertRCReturn(rc, rc);
2165
2166 return VERR_NOT_IMPLEMENTED;
2167}
2168
2169
2170/**
2171 * Clears a breakpoint.
2172 *
2173 * @returns VBox status code.
2174 * @param pUVM The user mode VM handle.
2175 * @param hBp The handle of the breakpoint which should be removed (cleared).
2176 *
2177 * @thread Any thread.
2178 */
2179VMMR3DECL(int) DBGFR3BpClear(PUVM pUVM, DBGFBP hBp)
2180{
2181 UVM_ASSERT_VALID_EXT_RETURN(pUVM, VERR_INVALID_VM_HANDLE);
2182 AssertReturn(hBp != NIL_DBGFBP, VERR_INVALID_HANDLE);
2183
2184 PDBGFBPINT pBp = dbgfR3BpGetByHnd(pUVM, hBp);
2185 AssertPtrReturn(pBp, VERR_DBGF_BP_NOT_FOUND);
2186
2187 /* Disarm the breakpoint when it is enabled. */
2188 if (DBGF_BP_PUB_IS_ENABLED(pBp->Pub.fFlagsAndType))
2189 {
2190 int rc = dbgfR3BpDisarm(pUVM, hBp, pBp);
2191 AssertRC(rc);
2192 }
2193
2194 switch (DBGF_BP_PUB_GET_TYPE(pBp->Pub.fFlagsAndType))
2195 {
2196 case DBGFBPTYPE_REG:
2197 {
2198 int rc = dbgfR3BpRegRemove(pUVM->pVM, hBp, pBp);
2199 AssertRC(rc);
2200 break;
2201 }
2202 default:
2203 break;
2204 }
2205
2206 dbgfR3BpFree(pUVM, hBp, pBp);
2207 return VINF_SUCCESS;
2208}
2209
2210
2211/**
2212 * Enables a breakpoint.
2213 *
2214 * @returns VBox status code.
2215 * @param pUVM The user mode VM handle.
2216 * @param hBp The handle of the breakpoint which should be enabled.
2217 *
2218 * @thread Any thread.
2219 */
2220VMMR3DECL(int) DBGFR3BpEnable(PUVM pUVM, DBGFBP hBp)
2221{
2222 /*
2223 * Validate the input.
2224 */
2225 UVM_ASSERT_VALID_EXT_RETURN(pUVM, VERR_INVALID_VM_HANDLE);
2226 AssertReturn(hBp != NIL_DBGFBP, VERR_INVALID_HANDLE);
2227
2228 PDBGFBPINT pBp = dbgfR3BpGetByHnd(pUVM, hBp);
2229 AssertPtrReturn(pBp, VERR_DBGF_BP_NOT_FOUND);
2230
2231 int rc = VINF_SUCCESS;
2232 if (!DBGF_BP_PUB_IS_ENABLED(pBp->Pub.fFlagsAndType))
2233 rc = dbgfR3BpArm(pUVM, hBp, pBp);
2234 else
2235 rc = VINF_DBGF_BP_ALREADY_ENABLED;
2236
2237 return rc;
2238}
2239
2240
2241/**
2242 * Disables a breakpoint.
2243 *
2244 * @returns VBox status code.
2245 * @param pUVM The user mode VM handle.
2246 * @param hBp The handle of the breakpoint which should be disabled.
2247 *
2248 * @thread Any thread.
2249 */
2250VMMR3DECL(int) DBGFR3BpDisable(PUVM pUVM, DBGFBP hBp)
2251{
2252 /*
2253 * Validate the input.
2254 */
2255 UVM_ASSERT_VALID_EXT_RETURN(pUVM, VERR_INVALID_VM_HANDLE);
2256 AssertReturn(hBp != NIL_DBGFBP, VERR_INVALID_HANDLE);
2257
2258 PDBGFBPINT pBp = dbgfR3BpGetByHnd(pUVM, hBp);
2259 AssertPtrReturn(pBp, VERR_DBGF_BP_NOT_FOUND);
2260
2261 int rc = VINF_SUCCESS;
2262 if (DBGF_BP_PUB_IS_ENABLED(pBp->Pub.fFlagsAndType))
2263 rc = dbgfR3BpDisarm(pUVM, hBp, pBp);
2264 else
2265 rc = VINF_DBGF_BP_ALREADY_DISABLED;
2266
2267 return rc;
2268}
2269
2270
2271/**
2272 * Enumerate the breakpoints.
2273 *
2274 * @returns VBox status code.
2275 * @param pUVM The user mode VM handle.
2276 * @param pfnCallback The callback function.
2277 * @param pvUser The user argument to pass to the callback.
2278 *
2279 * @thread Any thread.
2280 */
2281VMMR3DECL(int) DBGFR3BpEnum(PUVM pUVM, PFNDBGFBPENUM pfnCallback, void *pvUser)
2282{
2283 UVM_ASSERT_VALID_EXT_RETURN(pUVM, VERR_INVALID_VM_HANDLE);
2284
2285 for (uint32_t idChunk = 0; idChunk < RT_ELEMENTS(pUVM->dbgf.s.aBpChunks); idChunk++)
2286 {
2287 PDBGFBPCHUNKR3 pBpChunk = &pUVM->dbgf.s.aBpChunks[idChunk];
2288
2289 if (pBpChunk->idChunk == DBGF_BP_CHUNK_ID_INVALID)
2290 break; /* Stop here, as the first unallocated chunk means none are allocated after it either. */
2291
2292 if (pBpChunk->cBpsFree < DBGF_BP_COUNT_PER_CHUNK)
2293 {
2294 /* Scan the bitmap for allocated entries. */
2295 int32_t iAlloc = ASMBitFirstSet(pBpChunk->pbmAlloc, DBGF_BP_COUNT_PER_CHUNK);
2296 if (iAlloc != -1)
2297 {
2298 do
2299 {
2300 DBGFBP hBp = DBGF_BP_HND_CREATE(idChunk, (uint32_t)iAlloc);
2301 PDBGFBPINT pBp = dbgfR3BpGetByHnd(pUVM, hBp);
2302
2303 /* Make a copy of the breakpoints public data to have a consistent view. */
2304 DBGFBPPUB BpPub;
2305 BpPub.cHits = ASMAtomicReadU64((volatile uint64_t *)&pBp->Pub.cHits);
2306 BpPub.iHitTrigger = ASMAtomicReadU64((volatile uint64_t *)&pBp->Pub.iHitTrigger);
2307 BpPub.iHitDisable = ASMAtomicReadU64((volatile uint64_t *)&pBp->Pub.iHitDisable);
2308 BpPub.hOwner = ASMAtomicReadU32((volatile uint32_t *)&pBp->Pub.hOwner);
2309 BpPub.fFlagsAndType = ASMAtomicReadU32((volatile uint32_t *)&pBp->Pub.fFlagsAndType);
2310 memcpy(&BpPub.u, &pBp->Pub.u, sizeof(pBp->Pub.u)); /* Is constant after allocation. */
2311
2312 /* Check if a removal raced us. */
2313 if (ASMBitTest(pBpChunk->pbmAlloc, iAlloc))
2314 {
2315 int rc = pfnCallback(pUVM, pvUser, hBp, &BpPub);
2316 if (RT_FAILURE(rc) || rc == VINF_CALLBACK_RETURN)
2317 return rc;
2318 }
2319
2320 iAlloc = ASMBitNextSet(pBpChunk->pbmAlloc, DBGF_BP_COUNT_PER_CHUNK, iAlloc);
2321 } while (iAlloc != -1);
2322 }
2323 }
2324 }
2325
2326 return VINF_SUCCESS;
2327}
2328
2329
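DBGFR3BpEnum above tolerates concurrent removal by copying the public data first and only then re-testing the allocation bit; a stale snapshot is simply discarded. A self-contained sketch of that copy-then-validate pattern (all names here are illustrative, not VirtualBox APIs):

```c
/* Sketch of the race-tolerant enumeration pattern above: copy the entry
 * first, then re-test the allocation bit. If the slot was freed while the
 * copy was in flight, the snapshot is dropped instead of being handed to
 * the callback. */
#include <stdatomic.h>
#include <stdint.h>

typedef struct { uint64_t cHits; } ENTRY;

static _Atomic uint32_t g_bmAlloc;      /* bit i set = entry i allocated */
static ENTRY            g_aEntries[32];

/* Returns 1 and fills *pSnap if slot i was still allocated after the copy. */
static int snapshot_entry(int i, ENTRY *pSnap)
{
    *pSnap = g_aEntries[i];                     /* tentative copy */
    return (atomic_load(&g_bmAlloc) >> i) & 1;  /* validate afterwards */
}
```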
2330/**
2331 * Called whenever a breakpoint event needs to be serviced in ring-3 to decide what to do.
2332 *
2333 * @returns VBox status code.
2334 * @param pVM The cross context VM structure.
2335 * @param pVCpu The vCPU the breakpoint event happened on.
2336 *
2337 * @thread EMT
2338 */
2339VMMR3_INT_DECL(int) DBGFR3BpHit(PVM pVM, PVMCPU pVCpu)
2340{
2341 /* Send it straight into the debugger? */
2342 if (pVCpu->dbgf.s.fBpInvokeOwnerCallback)
2343 {
2344 DBGFBP hBp = pVCpu->dbgf.s.hBpActive;
2345 PDBGFBPINT pBp = dbgfR3BpGetByHnd(pVM->pUVM, pVCpu->dbgf.s.hBpActive);
2346 AssertReturn(pBp, VERR_DBGF_BP_IPE_9);
2347
2348 /* Resolve owner (can be NIL_DBGFBPOWNER) and invoke callback if there is one. */
2349 PCDBGFBPOWNERINT pBpOwner = dbgfR3BpOwnerGetByHnd(pVM->pUVM, pBp->Pub.hOwner);
2350 if (pBpOwner)
2351 {
2352 VBOXSTRICTRC rcStrict = pBpOwner->pfnBpHitR3(pVM, pVCpu->idCpu, pBp->pvUserR3, hBp, &pBp->Pub);
2353 if (rcStrict == VINF_SUCCESS)
2354 {
2355 uint8_t abInstr[DBGF_BP_INSN_MAX];
2356 RTGCPTR const GCPtrInstr = pVCpu->cpum.GstCtx.rip + pVCpu->cpum.GstCtx.cs.u64Base;
2357 int rc = PGMPhysSimpleReadGCPtr(pVCpu, &abInstr[0], GCPtrInstr, sizeof(abInstr));
2358 AssertRC(rc);
2359 if (RT_SUCCESS(rc))
2360 {
2361 /* Replace the int3 with the original instruction byte. */
2362 abInstr[0] = pBp->Pub.u.Int3.bOrg;
2363 rcStrict = IEMExecOneWithPrefetchedByPC(pVCpu, CPUMCTX2CORE(&pVCpu->cpum.GstCtx), GCPtrInstr, &abInstr[0], sizeof(abInstr));
2364 return VBOXSTRICTRC_VAL(rcStrict);
2365 }
2366 }
2367 else if (rcStrict != VINF_DBGF_BP_HALT) /* Guru meditation. */
2368 return VERR_DBGF_BP_OWNER_CALLBACK_WRONG_STATUS;
2369 /* else: Halt in the debugger. */
2370 }
2371 }
2372
2373 return DBGFR3EventBreakpoint(pVM, DBGFEVENT_BREAKPOINT);
2374}
2375