VirtualBox

source: vbox/trunk/src/VBox/VMM/VMMR3/DBGFR3Bp.cpp@87626

Last change on this file since 87626 was 87597, checked in by vboxsync, 4 years ago

VMM/DBGF: Eliminated some unnecessary variable initializations. (Some of the compilers will certainly tell you if you forget to initialize "rc" or any other variable, whereas it both looks totally wrong (to me at least) to assign a value to a variable which you do not use and you also risk not assigning it the correct value in some code path.) bugref:9837

  • Property svn:eol-style set to native
  • Property svn:keywords set to Id Revision
File size: 90.4 KB
1/* $Id: DBGFR3Bp.cpp 87597 2021-02-04 00:05:03Z vboxsync $ */
2/** @file
3 * DBGF - Debugger Facility, Breakpoint Management.
4 */
5
6/*
7 * Copyright (C) 2006-2020 Oracle Corporation
8 *
9 * This file is part of VirtualBox Open Source Edition (OSE), as
10 * available from http://www.virtualbox.org. This file is free software;
11 * you can redistribute it and/or modify it under the terms of the GNU
12 * General Public License (GPL) as published by the Free Software
13 * Foundation, in version 2 as it comes in the "COPYING" file of the
14 * VirtualBox OSE distribution. VirtualBox OSE is distributed in the
15 * hope that it will be useful, but WITHOUT ANY WARRANTY of any kind.
16 */
17
18
19/** @page pg_dbgf_bp DBGF - The Debugger Facility, Breakpoint Management
20 *
21 * The purpose of the debugger facility's breakpoint manager is to efficiently manage
22 * large numbers of breakpoints for various use cases like dtrace-like operations
23 * or execution flow tracing. Especially execution flow tracing can
24 * require thousands of breakpoints which need to be managed efficiently to not slow
25 * down guest operation too much. Before the rewrite, which started at the end of 2020, DBGF could
26 * only handle 32 breakpoints (+ 4 hardware assisted breakpoints). The new
27 * manager is supposed to be able to handle up to one million breakpoints.
28 *
29 * @see grp_dbgf
30 *
31 *
32 * @section sec_dbgf_bp_owner Breakpoint owners
33 *
34 * A single breakpoint owner has a mandatory ring-3 callback and an optional ring-0
35 * callback assigned, which is invoked whenever a breakpoint with that owner assigned is hit.
36 * The common part of the owner is managed by a single table mapped into both ring-0
37 * and ring-3, with the handle being the index into the table. This allows resolving
38 * the handle to the internal structure efficiently. Searching for a free entry is
39 * done using a bitmap indicating free and occupied entries. For the optional
40 * ring-0 owner part there is a separate ring-0 only table for security reasons.
41 *
42 * The callback of the owner can be used to gather and log guest state information
43 * and decide whether to continue guest execution or stop and drop into the debugger.
44 * Breakpoints which don't have an owner assigned will always drop the VM right into
45 * the debugger.
46 *
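 * As a rough sketch of what owner handle resolution boils down to (illustrative only;
 * the authoritative code is dbgfR3BpOwnerGetByHnd() further down in this file):
 * @code
 *     // hBpOwner is simply the index into the shared owner table; a bit in the
 *     // allocation bitmap tells whether the entry is actually in use.
 *     if (   hBpOwner < DBGF_BP_OWNER_COUNT_MAX
 *         && ASMBitTest(pUVM->dbgf.s.pbmBpOwnersAllocR3, hBpOwner))
 *         pBpOwner = &pUVM->dbgf.s.paBpOwnersR3[hBpOwner];
 * @endcode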
47 *
48 * @section sec_dbgf_bp_bps Breakpoints
49 *
50 * Breakpoints are referenced by an opaque handle which acts as an index into a global table
51 * mapped into ring-3 and ring-0. Each entry contains the necessary state to manage the breakpoint
52 * like trigger conditions, type, owner, etc. If an owner is given, an optional opaque user argument
53 * can be supplied which is passed to the respective owner callback. For owners with ring-0 callbacks
54 * a dedicated ring-0 table is kept holding the possible ring-0 user arguments.
55 *
56 * To keep memory consumption under control while still supporting large numbers of
57 * breakpoints, the table is split into fixed-size chunks; the chunk index and the index
58 * into the chunk can be derived from the handle with only a few logical operations, as sketched below.
59 *
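 * A minimal sketch of that derivation (illustrative only; the actual bit layout is defined
 * by the DBGF_BP_HND_* macros in DBGFInternal.h and the validating code is
 * dbgfR3BpGetByHnd() below):
 * @code
 *     uint32_t const idChunk  = DBGF_BP_HND_GET_CHUNK_ID(hBp);   // which chunk
 *     uint32_t const idxEntry = DBGF_BP_HND_GET_ENTRY(hBp);      // index within the chunk
 *     // The real code validates idChunk, idxEntry and the allocation bitmap first.
 *     PDBGFBPINT     pBp      = &pUVM->dbgf.s.aBpChunks[idChunk].pBpBaseR3[idxEntry];
 * @endcode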
60 *
61 * @section sec_dbgf_bp_resolv Resolving breakpoint addresses
62 *
63 * Whenever a \#BP(0) event is triggered DBGF needs to decide whether the event originated
64 * from within the guest or whether a DBGF breakpoint caused it. This has to happen as fast
65 * as possible. The following scheme is employed to achieve this:
66 *
67 * @verbatim
68 * 7 6 5 4 3 2 1 0
69 * +---+---+---+---+---+---+---+---+
70 * | | | | | | | | | BP address
71 * +---+---+---+---+---+---+---+---+
72 * \_____________________/ \_____/
73 * | |
74 * | +---------------+
75 * | |
76 * BP table | v
77 * +------------+ | +-----------+
78 * | hBp 0 | | X <- | 0 | xxxxx |
79 * | hBp 1 | <----------------+------------------------ | 1 | hBp 1 |
80 * | | | +--- | 2 | idxL2 |
81 * | hBp <m> | <---+ v | |...| ... |
82 * | | | +-----------+ | |...| ... |
83 * | | | | | | |...| ... |
84 * | hBp <n> | <-+ +----- | +> leaf | | | . |
85 * | | | | | | | | . |
86 * | | | | + root + | <------------+ | . |
87 * | | | | | | +-----------+
88 * | | +------- | leaf<+ | L1: 65536
89 * | . | | . |
90 * | . | | . |
91 * | . | | . |
92 * +------------+ +-----------+
93 * L2 idx AVL
94 * @endverbatim
95 *
96 * -# Take the lowest 16 bits of the breakpoint address and use them as a direct index
97 * into the L1 table. The L1 table is contiguous and consists of 4 byte entries
98 * resulting in 256KiB of memory used. The topmost 4 bits of an entry indicate how to
99 * proceed and the meaning of the remaining 28 bits depends on them:
100 * - A 0 type entry means no breakpoint is registered with the matching lowest 16 bits,
101 * so forward the event to the guest.
102 * - A 1 in the topmost 4 bits means that the remaining 28 bits directly denote a breakpoint
103 * handle which can be resolved by extracting the chunk index and the index into the chunk
104 * of the global breakpoint table. If the address matches, the breakpoint is processed
105 * according to its configuration. Otherwise the event is again forwarded to the guest.
106 * - A 2 in the topmost 4 bits means that there are multiple breakpoints registered
107 * matching the lowest 16 bits and the search must continue in the L2 table, with the
108 * remaining 28 bits acting as an index into the L2 table indicating the search root.
109 * -# The L2 table consists of multiple index based AVL trees, one for each reference
110 * from the L1 table. The key for a tree is the upper 6 bytes of the breakpoint address
111 * used for searching. The tree is traversed until either a matching address is found and
112 * the breakpoint is processed, or the event is forwarded to the guest if the search isn't successful.
113 * Each entry in the L2 table is 16 bytes big and densely packed to avoid excessive memory usage.
114 * (A condensed code sketch of this lookup follows below.)
114 *
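 * Condensed into code, the dispatch on an L1 entry looks roughly like this (illustrative
 * sketch only; the authoritative ring-3 lookup is dbgfR3BpGetByAddr() further down):
 * @code
 *     uint16_t const idxL1    = DBGF_BP_INT3_L1_IDX_EXTRACT_FROM_ADDR(GCPtr); // lowest 16 bits
 *     uint32_t const u32Entry = paBpLocL1[idxL1];
 *     switch (DBGF_BP_INT3_L1_ENTRY_GET_TYPE(u32Entry))
 *     {
 *         case DBGF_BP_INT3_L1_ENTRY_TYPE_NULL:   // 0: nothing registered, forward to the guest
 *             break;
 *         case DBGF_BP_INT3_L1_ENTRY_TYPE_BP_HND: // 1: remaining 28 bits are a breakpoint handle
 *             hBp = DBGF_BP_INT3_L1_ENTRY_GET_BP_HND(u32Entry);
 *             break;
 *         case DBGF_BP_INT3_L1_ENTRY_TYPE_L2_IDX: // 2: remaining 28 bits index the L2 search tree
 *             idxL2 = DBGF_BP_INT3_L1_ENTRY_GET_L2_IDX(u32Entry);
 *             break;
 *     }
 * @endcode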
115 *
116 * @section sec_dbgf_bp_note Random thoughts and notes for the implementation
117 *
118 * - The assumption for this approach is that the lowest 16 bits of the breakpoint address are
119 * hopefully the most varying ones across breakpoints, so the traversal
120 * can skip the L2 table in most cases. Even if the L2 table must be consulted, the
121 * individual trees should be quite shallow, resulting in low overhead when walking them
122 * (though only real world testing can confirm this assumption).
123 * - Index based tables and trees are used instead of pointers because the tables
124 * are always mapped into ring-0 and ring-3 with different base addresses.
125 * - Efficient breakpoint allocation is done by having a global bitmap indicating free
126 * and occupied breakpoint entries. The same applies to the L2 AVL table.
127 * - Special care must be taken when modifying the L1 and L2 tables as other EMTs
128 * might still access them (we want to try a lockless approach first using
129 * atomic updates and have to resort to locking if that turns out to be too difficult).
130 * - Each BP entry is supposed to be 64 bytes big and each chunk should contain 65536
131 * breakpoints, which results in 4MiB for each chunk plus the allocation bitmap (see the figures below).
132 * - ring-0 has to take special care when traversing the L2 AVL tree to not run into cycles
133 * and to do strict bounds checking before accessing anything. The L1 and L2 tables
134 * are written to from ring-3 only. The same goes for the breakpoint table, with the
135 * exception being the opaque user argument for ring-0 which is stored in ring-0 only
136 * memory.
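 *
 * Back-of-the-envelope figures derived from the numbers above (rough estimates only, the
 * authoritative constants live in DBGFInternal.h):
 * @verbatim
 * Breakpoint chunk:  65536 entries * 64 bytes = 4 MiB   (+ 65536 / 8 = 8 KiB allocation bitmap)
 * L1 lookup table:   65536 entries * 4 bytes  = 256 KiB (allocated once the manager is initialized)
 * One million bps:   ~16 chunks * 4 MiB       = ~64 MiB worst case, assuming 65536 entries per chunk
 * @endverbatim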
137 */
138
139
140/*********************************************************************************************************************************
141* Header Files *
142*********************************************************************************************************************************/
143#define LOG_GROUP LOG_GROUP_DBGF
144#define VMCPU_INCL_CPUM_GST_CTX
145#include <VBox/vmm/dbgf.h>
146#include <VBox/vmm/selm.h>
147#include <VBox/vmm/iem.h>
148#include <VBox/vmm/mm.h>
149#include <VBox/vmm/iom.h>
150#include <VBox/vmm/hm.h>
151#include "DBGFInternal.h"
152#include <VBox/vmm/vm.h>
153#include <VBox/vmm/uvm.h>
154
155#include <VBox/err.h>
156#include <VBox/log.h>
157#include <iprt/assert.h>
158#include <iprt/mem.h>
159
160#include "DBGFInline.h"
161
162
163/*********************************************************************************************************************************
164* Structures and Typedefs *
165*********************************************************************************************************************************/
166
167
168/*********************************************************************************************************************************
169* Internal Functions *
170*********************************************************************************************************************************/
171RT_C_DECLS_BEGIN
172RT_C_DECLS_END
173
174
175/**
176 * Initializes the breakpoint management.
177 *
178 * @returns VBox status code.
179 * @param pUVM The user mode VM handle.
180 */
181DECLHIDDEN(int) dbgfR3BpInit(PUVM pUVM)
182{
183 PVM pVM = pUVM->pVM;
184
185 //pUVM->dbgf.s.paBpOwnersR3 = NULL;
186 //pUVM->dbgf.s.pbmBpOwnersAllocR3 = NULL;
187
188 /* Init hardware breakpoint states. */
189 for (uint32_t i = 0; i < RT_ELEMENTS(pVM->dbgf.s.aHwBreakpoints); i++)
190 {
191 PDBGFBPHW pHwBp = &pVM->dbgf.s.aHwBreakpoints[i];
192
193 AssertCompileSize(DBGFBP, sizeof(uint32_t));
194 pHwBp->hBp = NIL_DBGFBP;
195 //pHwBp->fEnabled = false;
196 }
197
198 /* Now the global breakpoint table chunks. */
199 for (uint32_t i = 0; i < RT_ELEMENTS(pUVM->dbgf.s.aBpChunks); i++)
200 {
201 PDBGFBPCHUNKR3 pBpChunk = &pUVM->dbgf.s.aBpChunks[i];
202
203 //pBpChunk->pBpBaseR3 = NULL;
204 //pBpChunk->pbmAlloc = NULL;
205 //pBpChunk->cBpsFree = 0;
206 pBpChunk->idChunk = DBGF_BP_CHUNK_ID_INVALID; /* Not allocated. */
207 }
208
209 for (uint32_t i = 0; i < RT_ELEMENTS(pUVM->dbgf.s.aBpL2TblChunks); i++)
210 {
211 PDBGFBPL2TBLCHUNKR3 pL2Chunk = &pUVM->dbgf.s.aBpL2TblChunks[i];
212
213 //pL2Chunk->pL2BaseR3 = NULL;
214 //pL2Chunk->pbmAlloc = NULL;
215 //pL2Chunk->cFree = 0;
216 pL2Chunk->idChunk = DBGF_BP_CHUNK_ID_INVALID; /* Not allocated. */
217 }
218
219 //pUVM->dbgf.s.paBpLocL1R3 = NULL;
220 pUVM->dbgf.s.hMtxBpL2Wr = NIL_RTSEMFASTMUTEX;
221 return RTSemFastMutexCreate(&pUVM->dbgf.s.hMtxBpL2Wr);
222}
223
224
225/**
226 * Terminates the breakpoint management.
227 *
228 * @returns VBox status code.
229 * @param pUVM The user mode VM handle.
230 */
231DECLHIDDEN(int) dbgfR3BpTerm(PUVM pUVM)
232{
233 if (pUVM->dbgf.s.pbmBpOwnersAllocR3)
234 {
235 RTMemFree((void *)pUVM->dbgf.s.pbmBpOwnersAllocR3);
236 pUVM->dbgf.s.pbmBpOwnersAllocR3 = NULL;
237 }
238
239 /* Free all allocated chunk bitmaps (the chunks itself are destroyed during ring-0 VM destruction). */
240 for (uint32_t i = 0; i < RT_ELEMENTS(pUVM->dbgf.s.aBpChunks); i++)
241 {
242 PDBGFBPCHUNKR3 pBpChunk = &pUVM->dbgf.s.aBpChunks[i];
243
244 if (pBpChunk->idChunk != DBGF_BP_CHUNK_ID_INVALID)
245 {
246 AssertPtr(pBpChunk->pbmAlloc);
247 RTMemFree((void *)pBpChunk->pbmAlloc);
248 pBpChunk->pbmAlloc = NULL;
249 pBpChunk->idChunk = DBGF_BP_CHUNK_ID_INVALID;
250 }
251 }
252
253 for (uint32_t i = 0; i < RT_ELEMENTS(pUVM->dbgf.s.aBpL2TblChunks); i++)
254 {
255 PDBGFBPL2TBLCHUNKR3 pL2Chunk = &pUVM->dbgf.s.aBpL2TblChunks[i];
256
257 if (pL2Chunk->idChunk != DBGF_BP_CHUNK_ID_INVALID)
258 {
259 AssertPtr(pL2Chunk->pbmAlloc);
260 RTMemFree((void *)pL2Chunk->pbmAlloc);
261 pL2Chunk->pbmAlloc = NULL;
262 pL2Chunk->idChunk = DBGF_BP_CHUNK_ID_INVALID;
263 }
264 }
265
266 if (pUVM->dbgf.s.hMtxBpL2Wr != NIL_RTSEMFASTMUTEX)
267 {
268 RTSemFastMutexDestroy(pUVM->dbgf.s.hMtxBpL2Wr);
269 pUVM->dbgf.s.hMtxBpL2Wr = NIL_RTSEMFASTMUTEX;
270 }
271
272 return VINF_SUCCESS;
273}
274
275
276/**
277 * @callback_method_impl{FNVMMEMTRENDEZVOUS}
278 */
279static DECLCALLBACK(VBOXSTRICTRC) dbgfR3BpInitEmtWorker(PVM pVM, PVMCPU pVCpu, void *pvUser)
280{
281 RT_NOREF(pvUser);
282
283 VMCPU_ASSERT_EMT(pVCpu);
284 VM_ASSERT_VALID_EXT_RETURN(pVM, VERR_INVALID_VM_HANDLE);
285
286 /*
287 * The initialization will be done on EMT(0). It is possible that multiple
288 * initialization attempts are done because dbgfR3BpEnsureInit() can be called
289 * from racing non EMT threads when trying to set a breakpoint for the first time.
290 * Just fake success if the L1 is already present which means that a previous rendezvous
291 * successfully initialized the breakpoint manager.
292 */
293 PUVM pUVM = pVM->pUVM;
294 if ( pVCpu->idCpu == 0
295 && !pUVM->dbgf.s.paBpLocL1R3)
296 {
297 DBGFBPINITREQ Req;
298 Req.Hdr.u32Magic = SUPVMMR0REQHDR_MAGIC;
299 Req.Hdr.cbReq = sizeof(Req);
300 Req.paBpLocL1R3 = NULL;
301 int rc = VMMR3CallR0Emt(pVM, pVCpu, VMMR0_DO_DBGF_BP_INIT, 0 /*u64Arg*/, &Req.Hdr);
302 AssertLogRelMsgRCReturn(rc, ("VMMR0_DO_DBGF_BP_INIT failed: %Rrc\n", rc), rc);
303 pUVM->dbgf.s.paBpLocL1R3 = Req.paBpLocL1R3;
304 }
305
306 return VINF_SUCCESS;
307}
308
309
310/**
311 * Ensures that the breakpoint manager is fully initialized.
312 *
313 * @returns VBox status code.
314 * @param pUVM The user mode VM handle.
315 *
316 * @thread Any thread.
317 */
318static int dbgfR3BpEnsureInit(PUVM pUVM)
319{
320 /* If the L1 lookup table is allocated initialization succeeded before. */
321 if (RT_LIKELY(pUVM->dbgf.s.paBpLocL1R3))
322 return VINF_SUCCESS;
323
324 /* Gather all EMTs and call into ring-0 to initialize the breakpoint manager. */
325 return VMMR3EmtRendezvous(pUVM->pVM, VMMEMTRENDEZVOUS_FLAGS_TYPE_ALL_AT_ONCE, dbgfR3BpInitEmtWorker, NULL /*pvUser*/);
326}
327
328
329/**
330 * @callback_method_impl{FNVMMEMTRENDEZVOUS}
331 */
332static DECLCALLBACK(VBOXSTRICTRC) dbgfR3BpOwnerInitEmtWorker(PVM pVM, PVMCPU pVCpu, void *pvUser)
333{
334 RT_NOREF(pvUser);
335
336 VMCPU_ASSERT_EMT(pVCpu);
337 VM_ASSERT_VALID_EXT_RETURN(pVM, VERR_INVALID_VM_HANDLE);
338
339 /*
340 * The initialization will be done on EMT(0). It is possible that multiple
341 * initialization attempts are done because dbgfR3BpOwnerEnsureInit() can be called
342 * from racing non EMT threads when trying to create a breakpoint owner for the first time.
343 * Just fake success if the pointers are initialized already, meaning that a previous rendezvous
344 * successfully initialized the breakpoint owner table.
345 */
346 int rc = VINF_SUCCESS;
347 PUVM pUVM = pVM->pUVM;
348 if ( pVCpu->idCpu == 0
349 && !pUVM->dbgf.s.pbmBpOwnersAllocR3)
350 {
351 pUVM->dbgf.s.pbmBpOwnersAllocR3 = (volatile void *)RTMemAllocZ(DBGF_BP_OWNER_COUNT_MAX / 8);
352 if (pUVM->dbgf.s.pbmBpOwnersAllocR3)
353 {
354 DBGFBPOWNERINITREQ Req;
355 Req.Hdr.u32Magic = SUPVMMR0REQHDR_MAGIC;
356 Req.Hdr.cbReq = sizeof(Req);
357 Req.paBpOwnerR3 = NULL;
358 rc = VMMR3CallR0Emt(pVM, pVCpu, VMMR0_DO_DBGF_BP_OWNER_INIT, 0 /*u64Arg*/, &Req.Hdr);
359 AssertLogRelMsgRC(rc, ("VMMR0_DO_DBGF_BP_OWNER_INIT failed: %Rrc\n", rc));
360 if (RT_SUCCESS(rc))
361 {
362 pUVM->dbgf.s.paBpOwnersR3 = (PDBGFBPOWNERINT)Req.paBpOwnerR3;
363 return VINF_SUCCESS;
364 }
365
366 RTMemFree((void *)pUVM->dbgf.s.pbmBpOwnersAllocR3);
367 pUVM->dbgf.s.pbmBpOwnersAllocR3 = NULL;
368 }
369 else
370 rc = VERR_NO_MEMORY;
371 }
372
373 return rc;
374}
375
376
377/**
378 * Ensures that the breakpoint owner table is fully initialized.
379 *
380 * @returns VBox status code.
381 * @param pUVM The user mode VM handle.
382 *
383 * @thread Any thread.
384 */
385static int dbgfR3BpOwnerEnsureInit(PUVM pUVM)
386{
387 /* If the allocation bitmap is allocated initialization succeeded before. */
388 if (RT_LIKELY(pUVM->dbgf.s.pbmBpOwnersAllocR3))
389 return VINF_SUCCESS;
390
391 /* Gather all EMTs and call into ring-0 to initialize the breakpoint manager. */
392 return VMMR3EmtRendezvous(pUVM->pVM, VMMEMTRENDEZVOUS_FLAGS_TYPE_ALL_AT_ONCE, dbgfR3BpOwnerInitEmtWorker, NULL /*pvUser*/);
393}
394
395
396/**
397 * Returns the internal breakpoint owner state for the given handle.
398 *
399 * @returns Pointer to the internal breakpoint owner state or NULL if the handle is invalid.
400 * @param pUVM The user mode VM handle.
401 * @param hBpOwner The breakpoint owner handle to resolve.
402 */
403DECLINLINE(PDBGFBPOWNERINT) dbgfR3BpOwnerGetByHnd(PUVM pUVM, DBGFBPOWNER hBpOwner)
404{
405 AssertReturn(hBpOwner < DBGF_BP_OWNER_COUNT_MAX, NULL);
406 AssertPtrReturn(pUVM->dbgf.s.pbmBpOwnersAllocR3, NULL);
407
408 AssertReturn(ASMBitTest(pUVM->dbgf.s.pbmBpOwnersAllocR3, hBpOwner), NULL);
409 return &pUVM->dbgf.s.paBpOwnersR3[hBpOwner];
410}
411
412
413/**
414 * Retains the given breakpoint owner handle for use.
415 *
416 * @returns VBox status code.
417 * @retval VERR_INVALID_HANDLE if the given breakpoint owner handle is invalid.
418 * @param pUVM The user mode VM handle.
419 * @param hBpOwner The breakpoint owner handle to retain, NIL_DBGFBPOWNER is accepted without doing anything.
420 */
421DECLINLINE(int) dbgfR3BpOwnerRetain(PUVM pUVM, DBGFBPOWNER hBpOwner)
422{
423 if (hBpOwner == NIL_DBGFBPOWNER)
424 return VINF_SUCCESS;
425
426 PDBGFBPOWNERINT pBpOwner = dbgfR3BpOwnerGetByHnd(pUVM, hBpOwner);
427 if (pBpOwner)
428 {
429 ASMAtomicIncU32(&pBpOwner->cRefs);
430 return VINF_SUCCESS;
431 }
432
433 return VERR_INVALID_HANDLE;
434}
435
436
437/**
438 * Releases the given breakpoint owner handle.
439 *
440 * @returns VBox status code.
441 * @retval VERR_INVALID_HANDLE if the given breakpoint owner handle is invalid.
442 * @param pUVM The user mode VM handle.
443 * @param hBpOwner The breakpoint owner handle to release, NIL_DBGFBPOWNER is accepted without doing anything.
444 */
445DECLINLINE(int) dbgfR3BpOwnerRelease(PUVM pUVM, DBGFBPOWNER hBpOwner)
446{
447 if (hBpOwner == NIL_DBGFBPOWNER)
448 return VINF_SUCCESS;
449
450 PDBGFBPOWNERINT pBpOwner = dbgfR3BpOwnerGetByHnd(pUVM, hBpOwner);
451 if (pBpOwner)
452 {
453 Assert(pBpOwner->cRefs > 1);
454 ASMAtomicDecU32(&pBpOwner->cRefs);
455 return VINF_SUCCESS;
456 }
457
458 return VERR_INVALID_HANDLE;
459}
460
461
462/**
463 * Returns the internal breakpoint state for the given handle.
464 *
465 * @returns Pointer to the internal breakpoint state or NULL if the handle is invalid.
466 * @param pUVM The user mode VM handle.
467 * @param hBp The breakpoint handle to resolve.
468 */
469DECLINLINE(PDBGFBPINT) dbgfR3BpGetByHnd(PUVM pUVM, DBGFBP hBp)
470{
471 uint32_t idChunk = DBGF_BP_HND_GET_CHUNK_ID(hBp);
472 uint32_t idxEntry = DBGF_BP_HND_GET_ENTRY(hBp);
473
474 AssertReturn(idChunk < DBGF_BP_CHUNK_COUNT, NULL);
475 AssertReturn(idxEntry < DBGF_BP_COUNT_PER_CHUNK, NULL);
476
477 PDBGFBPCHUNKR3 pBpChunk = &pUVM->dbgf.s.aBpChunks[idChunk];
478 AssertReturn(pBpChunk->idChunk == idChunk, NULL);
479 AssertPtrReturn(pBpChunk->pbmAlloc, NULL);
480 AssertReturn(ASMBitTest(pBpChunk->pbmAlloc, idxEntry), NULL);
481
482 return &pBpChunk->pBpBaseR3[idxEntry];
483}
484
485
486/**
487 * @callback_method_impl{FNVMMEMTRENDEZVOUS}
488 */
489static DECLCALLBACK(VBOXSTRICTRC) dbgfR3BpChunkAllocEmtWorker(PVM pVM, PVMCPU pVCpu, void *pvUser)
490{
491 uint32_t idChunk = (uint32_t)(uintptr_t)pvUser;
492
493 VMCPU_ASSERT_EMT(pVCpu);
494 VM_ASSERT_VALID_EXT_RETURN(pVM, VERR_INVALID_VM_HANDLE);
495
496 AssertReturn(idChunk < DBGF_BP_CHUNK_COUNT, VERR_DBGF_BP_IPE_1);
497
498 PUVM pUVM = pVM->pUVM;
499 PDBGFBPCHUNKR3 pBpChunk = &pUVM->dbgf.s.aBpChunks[idChunk];
500
501 AssertReturn( pBpChunk->idChunk == DBGF_BP_CHUNK_ID_INVALID
502 || pBpChunk->idChunk == idChunk,
503 VERR_DBGF_BP_IPE_2);
504
505 /*
506 * The initialization will be done on EMT(0). It is possible that multiple
507 * allocation attempts are done when multiple racing non EMT threads try to
508 * allocate a breakpoint and a new chunk needs to be allocated.
509 * Ignore the request and succeed if the chunk is allocated meaning that a
510 * previous rendezvous successfully allocated the chunk.
511 */
512 int rc = VINF_SUCCESS;
513 if ( pVCpu->idCpu == 0
514 && pBpChunk->idChunk == DBGF_BP_CHUNK_ID_INVALID)
515 {
516 /* Allocate the bitmap first so we can skip calling into VMMR0 if it fails. */
517 AssertCompile(!(DBGF_BP_COUNT_PER_CHUNK % 8));
518 volatile void *pbmAlloc = RTMemAllocZ(DBGF_BP_COUNT_PER_CHUNK / 8);
519 if (RT_LIKELY(pbmAlloc))
520 {
521 DBGFBPCHUNKALLOCREQ Req;
522 Req.Hdr.u32Magic = SUPVMMR0REQHDR_MAGIC;
523 Req.Hdr.cbReq = sizeof(Req);
524 Req.idChunk = idChunk;
525 Req.pChunkBaseR3 = NULL;
526 rc = VMMR3CallR0Emt(pVM, pVCpu, VMMR0_DO_DBGF_BP_CHUNK_ALLOC, 0 /*u64Arg*/, &Req.Hdr);
527 AssertLogRelMsgRC(rc, ("VMMR0_DO_DBGF_BP_CHUNK_ALLOC failed: %Rrc\n", rc));
528 if (RT_SUCCESS(rc))
529 {
530 pBpChunk->pBpBaseR3 = (PDBGFBPINT)Req.pChunkBaseR3;
531 pBpChunk->pbmAlloc = pbmAlloc;
532 pBpChunk->cBpsFree = DBGF_BP_COUNT_PER_CHUNK;
533 pBpChunk->idChunk = idChunk;
534 return VINF_SUCCESS;
535 }
536
537 RTMemFree((void *)pbmAlloc);
538 }
539 else
540 rc = VERR_NO_MEMORY;
541 }
542
543 return rc;
544}
545
546
547/**
548 * Tries to allocate the given chunk which requires an EMT rendezvous.
549 *
550 * @returns VBox status code.
551 * @param pUVM The user mode VM handle.
552 * @param idChunk The chunk to allocate.
553 *
554 * @thread Any thread.
555 */
556DECLINLINE(int) dbgfR3BpChunkAlloc(PUVM pUVM, uint32_t idChunk)
557{
558 return VMMR3EmtRendezvous(pUVM->pVM, VMMEMTRENDEZVOUS_FLAGS_TYPE_ALL_AT_ONCE, dbgfR3BpChunkAllocEmtWorker, (void *)(uintptr_t)idChunk);
559}
560
561
562/**
563 * Tries to allocate a new breakpoint of the given type.
564 *
565 * @returns VBox status code.
566 * @param pUVM The user mode VM handle.
567 * @param hOwner The owner handle, NIL_DBGFBPOWNER if none assigned.
568 * @param pvUser Opaque user data passed in the owner callback.
569 * @param enmType Breakpoint type to allocate.
570 * @param iHitTrigger The hit count at which the breakpoint starts triggering.
571 * Use 0 (or 1) if it's gonna trigger at once.
572 * @param iHitDisable The hit count which disables the breakpoint.
573 * Use ~(uint64_t)0 if it's never gonna be disabled.
574 * @param phBp Where to return the opaque breakpoint handle on success.
575 * @param ppBp Where to return the pointer to the internal breakpoint state on success.
576 *
577 * @thread Any thread.
578 */
579static int dbgfR3BpAlloc(PUVM pUVM, DBGFBPOWNER hOwner, void *pvUser, DBGFBPTYPE enmType,
580 uint64_t iHitTrigger, uint64_t iHitDisable, PDBGFBP phBp,
581 PDBGFBPINT *ppBp)
582{
583 int rc = dbgfR3BpOwnerRetain(pUVM, hOwner);
584 if (RT_FAILURE(rc))
585 return rc;
586
587 /*
588 * Search for a chunk having a free entry, allocating new chunks
589 * if the encountered ones are full.
590 *
591 * This can be called from multiple threads at the same time so special care
592 * has to be taken to not require any locking here.
593 */
594 for (uint32_t i = 0; i < RT_ELEMENTS(pUVM->dbgf.s.aBpChunks); i++)
595 {
596 PDBGFBPCHUNKR3 pBpChunk = &pUVM->dbgf.s.aBpChunks[i];
597
598 uint32_t idChunk = ASMAtomicReadU32(&pBpChunk->idChunk);
599 if (idChunk == DBGF_BP_CHUNK_ID_INVALID)
600 {
601 rc = dbgfR3BpChunkAlloc(pUVM, i);
602 if (RT_FAILURE(rc))
603 {
604 LogRel(("DBGF/Bp: Allocating new breakpoint table chunk failed with %Rrc\n", rc));
605 break;
606 }
607
608 idChunk = ASMAtomicReadU32(&pBpChunk->idChunk);
609 Assert(idChunk == i);
610 }
611
612 /** @todo Optimize with some hinting if this turns out to be too slow. */
613 for (;;)
614 {
615 uint32_t cBpsFree = ASMAtomicReadU32(&pBpChunk->cBpsFree);
616 if (cBpsFree)
617 {
618 /*
619 * Scan the associated bitmap for a free entry, if none can be found another thread
620 * raced us and we go to the next chunk.
621 */
622 int32_t iClr = ASMBitFirstClear(pBpChunk->pbmAlloc, DBGF_BP_COUNT_PER_CHUNK);
623 if (iClr != -1)
624 {
625 /*
626 * Try to allocate, we could get raced here as well. In that case
627 * we try again.
628 */
629 if (!ASMAtomicBitTestAndSet(pBpChunk->pbmAlloc, iClr))
630 {
631 /* Success, immediately mark as allocated, initialize the breakpoint state and return. */
632 ASMAtomicDecU32(&pBpChunk->cBpsFree);
633
634 PDBGFBPINT pBp = &pBpChunk->pBpBaseR3[iClr];
635 pBp->Pub.cHits = 0;
636 pBp->Pub.iHitTrigger = iHitTrigger;
637 pBp->Pub.iHitDisable = iHitDisable;
638 pBp->Pub.hOwner = hOwner;
639 pBp->Pub.fFlagsAndType = DBGF_BP_PUB_SET_FLAGS_AND_TYPE(enmType, DBGF_BP_F_DEFAULT);
640 pBp->pvUserR3 = pvUser;
641
642 /** @todo Owner handling (reference and call ring-0 if it has an ring-0 callback). */
643
644 *phBp = DBGF_BP_HND_CREATE(idChunk, iClr);
645 *ppBp = pBp;
646 return VINF_SUCCESS;
647 }
648 /* else Retry with another spot. */
649 }
650 else /* no free entry in bitmap, go to the next chunk */
651 break;
652 }
653 else /* !cBpsFree, go to the next chunk */
654 break;
655 }
656 }
657
658 rc = dbgfR3BpOwnerRelease(pUVM, hOwner); AssertRC(rc);
659 return VERR_DBGF_NO_MORE_BP_SLOTS;
660}
661
662
663/**
664 * Frees the given breakpoint handle.
665 *
666 * @returns nothing.
667 * @param pUVM The user mode VM handle.
668 * @param hBp The breakpoint handle to free.
669 * @param pBp The internal breakpoint state pointer.
670 */
671static void dbgfR3BpFree(PUVM pUVM, DBGFBP hBp, PDBGFBPINT pBp)
672{
673 uint32_t idChunk = DBGF_BP_HND_GET_CHUNK_ID(hBp);
674 uint32_t idxEntry = DBGF_BP_HND_GET_ENTRY(hBp);
675
676 AssertReturnVoid(idChunk < DBGF_BP_CHUNK_COUNT);
677 AssertReturnVoid(idxEntry < DBGF_BP_COUNT_PER_CHUNK);
678
679 PDBGFBPCHUNKR3 pBpChunk = &pUVM->dbgf.s.aBpChunks[idChunk];
680 AssertPtrReturnVoid(pBpChunk->pbmAlloc);
681 AssertReturnVoid(ASMBitTest(pBpChunk->pbmAlloc, idxEntry));
682
683 /** @todo Need a trip to Ring-0 if an owner is assigned with a Ring-0 part to clear the breakpoint. */
684 int rc = dbgfR3BpOwnerRelease(pUVM, pBp->Pub.hOwner); AssertRC(rc); RT_NOREF(rc);
685 memset(pBp, 0, sizeof(*pBp));
686
687 ASMAtomicBitClear(pBpChunk->pbmAlloc, idxEntry);
688 ASMAtomicIncU32(&pBpChunk->cBpsFree);
689}
690
691
692/**
693 * @callback_method_impl{FNVMMEMTRENDEZVOUS}
694 */
695static DECLCALLBACK(VBOXSTRICTRC) dbgfR3BpL2TblChunkAllocEmtWorker(PVM pVM, PVMCPU pVCpu, void *pvUser)
696{
697 uint32_t idChunk = (uint32_t)(uintptr_t)pvUser;
698
699 VMCPU_ASSERT_EMT(pVCpu);
700 VM_ASSERT_VALID_EXT_RETURN(pVM, VERR_INVALID_VM_HANDLE);
701
702 AssertReturn(idChunk < DBGF_BP_L2_TBL_CHUNK_COUNT, VERR_DBGF_BP_IPE_1);
703
704 PUVM pUVM = pVM->pUVM;
705 PDBGFBPL2TBLCHUNKR3 pL2Chunk = &pUVM->dbgf.s.aBpL2TblChunks[idChunk];
706
707 AssertReturn( pL2Chunk->idChunk == DBGF_BP_L2_IDX_CHUNK_ID_INVALID
708 || pL2Chunk->idChunk == idChunk,
709 VERR_DBGF_BP_IPE_2);
710
711 /*
712 * The initialization will be done on EMT(0). It is possible that multiple
713 * allocation attempts are done when multiple racing non EMT threads try to
714 * allocate a breakpoint and a new chunk needs to be allocated.
715 * Ignore the request and succeed if the chunk is allocated meaning that a
716 * previous rendezvous successfully allocated the chunk.
717 */
718 int rc = VINF_SUCCESS;
719 if ( pVCpu->idCpu == 0
720 && pL2Chunk->idChunk == DBGF_BP_L2_IDX_CHUNK_ID_INVALID)
721 {
722 /* Allocate the bitmap first so we can skip calling into VMMR0 if it fails. */
723 AssertCompile(!(DBGF_BP_L2_TBL_ENTRIES_PER_CHUNK % 8));
724 volatile void *pbmAlloc = RTMemAllocZ(DBGF_BP_L2_TBL_ENTRIES_PER_CHUNK / 8);
725 if (RT_LIKELY(pbmAlloc))
726 {
727 DBGFBPL2TBLCHUNKALLOCREQ Req;
728 Req.Hdr.u32Magic = SUPVMMR0REQHDR_MAGIC;
729 Req.Hdr.cbReq = sizeof(Req);
730 Req.idChunk = idChunk;
731 Req.pChunkBaseR3 = NULL;
732 rc = VMMR3CallR0Emt(pVM, pVCpu, VMMR0_DO_DBGF_BP_L2_TBL_CHUNK_ALLOC, 0 /*u64Arg*/, &Req.Hdr);
733 AssertLogRelMsgRC(rc, ("VMMR0_DO_DBGF_BP_L2_TBL_CHUNK_ALLOC failed: %Rrc\n", rc));
734 if (RT_SUCCESS(rc))
735 {
736 pL2Chunk->pL2BaseR3 = (PDBGFBPL2ENTRY)Req.pChunkBaseR3;
737 pL2Chunk->pbmAlloc = pbmAlloc;
738 pL2Chunk->cFree = DBGF_BP_L2_TBL_ENTRIES_PER_CHUNK;
739 pL2Chunk->idChunk = idChunk;
740 return VINF_SUCCESS;
741 }
742
743 RTMemFree((void *)pbmAlloc);
744 }
745 else
746 rc = VERR_NO_MEMORY;
747 }
748
749 return rc;
750}
751
752
753/**
754 * Tries to allocate the given L2 table chunk which requires an EMT rendezvous.
755 *
756 * @returns VBox status code.
757 * @param pUVM The user mode VM handle.
758 * @param idChunk The chunk to allocate.
759 *
760 * @thread Any thread.
761 */
762DECLINLINE(int) dbgfR3BpL2TblChunkAlloc(PUVM pUVM, uint32_t idChunk)
763{
764 return VMMR3EmtRendezvous(pUVM->pVM, VMMEMTRENDEZVOUS_FLAGS_TYPE_ALL_AT_ONCE, dbgfR3BpL2TblChunkAllocEmtWorker, (void *)(uintptr_t)idChunk);
765}
766
767
768/**
769 * Tries to allocate a new L2 lookup table entry.
770 *
771 * @returns VBox status code.
772 * @param pUVM The user mode VM handle.
773 * @param pidxL2Tbl Where to return the L2 table entry index on success.
774 * @param ppL2TblEntry Where to return the pointer to the L2 table entry on success.
775 *
776 * @thread Any thread.
777 */
778static int dbgfR3BpL2TblEntryAlloc(PUVM pUVM, uint32_t *pidxL2Tbl, PDBGFBPL2ENTRY *ppL2TblEntry)
779{
780 /*
781 * Search for a chunk having a free entry, allocating new chunks
782 * if the encountered ones are full.
783 *
784 * This can be called from multiple threads at the same time so special care
785 * has to be taken to not require any locking here.
786 */
787 for (uint32_t i = 0; i < RT_ELEMENTS(pUVM->dbgf.s.aBpL2TblChunks); i++)
788 {
789 PDBGFBPL2TBLCHUNKR3 pL2Chunk = &pUVM->dbgf.s.aBpL2TblChunks[i];
790
791 uint32_t idChunk = ASMAtomicReadU32(&pL2Chunk->idChunk);
792 if (idChunk == DBGF_BP_L2_IDX_CHUNK_ID_INVALID)
793 {
794 int rc = dbgfR3BpL2TblChunkAlloc(pUVM, i);
795 if (RT_FAILURE(rc))
796 {
797 LogRel(("DBGF/Bp: Allocating new breakpoint L2 lookup table chunk failed with %Rrc\n", rc));
798 break;
799 }
800
801 idChunk = ASMAtomicReadU32(&pL2Chunk->idChunk);
802 Assert(idChunk == i);
803 }
804
805 /** @todo Optimize with some hinting if this turns out to be too slow. */
806 for (;;)
807 {
808 uint32_t cFree = ASMAtomicReadU32(&pL2Chunk->cFree);
809 if (cFree)
810 {
811 /*
812 * Scan the associated bitmap for a free entry, if none can be found another thread
813 * raced us and we go to the next chunk.
814 */
815 int32_t iClr = ASMBitFirstClear(pL2Chunk->pbmAlloc, DBGF_BP_L2_TBL_ENTRIES_PER_CHUNK);
816 if (iClr != -1)
817 {
818 /*
819 * Try to allocate, we could get raced here as well. In that case
820 * we try again.
821 */
822 if (!ASMAtomicBitTestAndSet(pL2Chunk->pbmAlloc, iClr))
823 {
824 /* Success, immediately mark as allocated, initialize the breakpoint state and return. */
825 ASMAtomicDecU32(&pL2Chunk->cFree);
826
827 PDBGFBPL2ENTRY pL2Entry = &pL2Chunk->pL2BaseR3[iClr];
828
829 *pidxL2Tbl = DBGF_BP_L2_IDX_CREATE(idChunk, iClr);
830 *ppL2TblEntry = pL2Entry;
831 return VINF_SUCCESS;
832 }
833 /* else Retry with another spot. */
834 }
835 else /* no free entry in bitmap, go to the next chunk */
836 break;
837 }
838 else /* !cFree, go to the next chunk */
839 break;
840 }
841 }
842
843 return VERR_DBGF_NO_MORE_BP_SLOTS;
844}
845
846
847/**
848 * Frees the given L2 table entry.
849 *
850 * @returns nothing.
851 * @param pUVM The user mode VM handle.
852 * @param idxL2Tbl The L2 table index to free.
853 * @param pL2TblEntry The L2 table entry pointer to free.
854 */
855static void dbgfR3BpL2TblEntryFree(PUVM pUVM, uint32_t idxL2Tbl, PDBGFBPL2ENTRY pL2TblEntry)
856{
857 uint32_t idChunk = DBGF_BP_L2_IDX_GET_CHUNK_ID(idxL2Tbl);
858 uint32_t idxEntry = DBGF_BP_L2_IDX_GET_ENTRY(idxL2Tbl);
859
860 AssertReturnVoid(idChunk < DBGF_BP_L2_TBL_CHUNK_COUNT);
861 AssertReturnVoid(idxEntry < DBGF_BP_L2_TBL_ENTRIES_PER_CHUNK);
862
863 PDBGFBPL2TBLCHUNKR3 pL2Chunk = &pUVM->dbgf.s.aBpL2TblChunks[idChunk];
864 AssertPtrReturnVoid(pL2Chunk->pbmAlloc);
865 AssertReturnVoid(ASMBitTest(pL2Chunk->pbmAlloc, idxEntry));
866
867 memset(pL2TblEntry, 0, sizeof(*pL2TblEntry));
868
869 ASMAtomicBitClear(pL2Chunk->pbmAlloc, idxEntry);
870 ASMAtomicIncU32(&pL2Chunk->cFree);
871}
872
873
874/**
875 * Sets the enabled flag of the given breakpoint to the given value.
876 *
877 * @returns nothing.
878 * @param pBp The breakpoint to set the state for.
879 * @param fEnabled Enabled status.
880 */
881DECLINLINE(void) dbgfR3BpSetEnabled(PDBGFBPINT pBp, bool fEnabled)
882{
883 DBGFBPTYPE enmType = DBGF_BP_PUB_GET_TYPE(pBp->Pub.fFlagsAndType);
884 if (fEnabled)
885 pBp->Pub.fFlagsAndType = DBGF_BP_PUB_SET_FLAGS_AND_TYPE(enmType, DBGF_BP_F_ENABLED);
886 else
887 pBp->Pub.fFlagsAndType = DBGF_BP_PUB_SET_FLAGS_AND_TYPE(enmType, 0 /*fFlags*/);
888}
889
890
891/**
892 * Assigns a hardware breakpoint state to the given register breakpoint.
893 *
894 * @returns VBox status code.
895 * @param pVM The cross-context VM structure pointer.
896 * @param hBp The breakpoint handle to assign.
897 * @param pBp The internal breakpoint state.
898 *
899 * @thread Any thread.
900 */
901static int dbgfR3BpRegAssign(PVM pVM, DBGFBP hBp, PDBGFBPINT pBp)
902{
903 AssertReturn(pBp->Pub.u.Reg.iReg == UINT8_MAX, VERR_DBGF_BP_IPE_3);
904
905 for (uint8_t i = 0; i < RT_ELEMENTS(pVM->dbgf.s.aHwBreakpoints); i++)
906 {
907 PDBGFBPHW pHwBp = &pVM->dbgf.s.aHwBreakpoints[i];
908
909 AssertCompileSize(DBGFBP, sizeof(uint32_t));
910 if (ASMAtomicCmpXchgU32(&pHwBp->hBp, hBp, NIL_DBGFBP))
911 {
912 pHwBp->GCPtr = pBp->Pub.u.Reg.GCPtr;
913 pHwBp->fType = pBp->Pub.u.Reg.fType;
914 pHwBp->cb = pBp->Pub.u.Reg.cb;
915 pHwBp->fEnabled = DBGF_BP_PUB_IS_ENABLED(pBp->Pub.fFlagsAndType);
916
917 pBp->Pub.u.Reg.iReg = i;
918 return VINF_SUCCESS;
919 }
920 }
921
922 return VERR_DBGF_NO_MORE_BP_SLOTS;
923}
924
925
926/**
927 * Removes the assigned hardware breakpoint state from the given register breakpoint.
928 *
929 * @returns VBox status code.
930 * @param pVM The cross-context VM structure pointer.
931 * @param hBp The breakpoint handle to remove.
932 * @param pBp The internal breakpoint state.
933 *
934 * @thread Any thread.
935 */
936static int dbgfR3BpRegRemove(PVM pVM, DBGFBP hBp, PDBGFBPINT pBp)
937{
938 AssertReturn(pBp->Pub.u.Reg.iReg < RT_ELEMENTS(pVM->dbgf.s.aHwBreakpoints), VERR_DBGF_BP_IPE_3);
939
940 PDBGFBPHW pHwBp = &pVM->dbgf.s.aHwBreakpoints[pBp->Pub.u.Reg.iReg];
941 AssertReturn(pHwBp->hBp == hBp, VERR_DBGF_BP_IPE_4);
942 AssertReturn(!pHwBp->fEnabled, VERR_DBGF_BP_IPE_5);
943
944 pHwBp->GCPtr = 0;
945 pHwBp->fType = 0;
946 pHwBp->cb = 0;
947 ASMCompilerBarrier();
948
949 ASMAtomicWriteU32(&pHwBp->hBp, NIL_DBGFBP);
950 return VINF_SUCCESS;
951}
952
953
954/**
955 * Returns the pointer to the L2 table entry from the given index.
956 *
957 * @returns Current context pointer to the L2 table entry or NULL if the provided index value is invalid.
958 * @param pUVM The user mode VM handle.
959 * @param idxL2 The L2 table index to resolve.
960 *
961 * @note The content of the resolved L2 table entry is not validated!
962 */
963DECLINLINE(PDBGFBPL2ENTRY) dbgfR3BpL2GetByIdx(PUVM pUVM, uint32_t idxL2)
964{
965 uint32_t idChunk = DBGF_BP_L2_IDX_GET_CHUNK_ID(idxL2);
966 uint32_t idxEntry = DBGF_BP_L2_IDX_GET_ENTRY(idxL2);
967
968 AssertReturn(idChunk < DBGF_BP_L2_TBL_CHUNK_COUNT, NULL);
969 AssertReturn(idxEntry < DBGF_BP_L2_TBL_ENTRIES_PER_CHUNK, NULL);
970
971 PDBGFBPL2TBLCHUNKR3 pL2Chunk = &pUVM->dbgf.s.aBpL2TblChunks[idChunk];
972 AssertPtrReturn(pL2Chunk->pbmAlloc, NULL);
973 AssertReturn(ASMBitTest(pL2Chunk->pbmAlloc, idxEntry), NULL);
974
975 return &pL2Chunk->CTX_SUFF(pL2Base)[idxEntry];
976}
977
978
979/**
980 * Creates a binary search tree with the given root and leaf nodes.
981 *
982 * @returns VBox status code.
983 * @param pUVM The user mode VM handle.
984 * @param idxL1 The index into the L1 table where the created tree should be linked into.
985 * @param u32EntryOld The old entry in the L1 table used to compare with in the atomic update.
986 * @param hBpRoot The root node DBGF handle to assign.
987 * @param GCPtrRoot The root node's GC pointer to use as a key.
988 * @param hBpLeaf The leaf node's DBGF handle to assign.
989 * @param GCPtrLeaf The leaf node's GC pointer to use as a key.
990 */
991static int dbgfR3BpInt3L2BstCreate(PUVM pUVM, uint32_t idxL1, uint32_t u32EntryOld,
992 DBGFBP hBpRoot, RTGCUINTPTR GCPtrRoot,
993 DBGFBP hBpLeaf, RTGCUINTPTR GCPtrLeaf)
994{
995 AssertReturn(GCPtrRoot != GCPtrLeaf, VERR_DBGF_BP_IPE_9);
996 Assert(DBGF_BP_INT3_L1_IDX_EXTRACT_FROM_ADDR(GCPtrRoot) == DBGF_BP_INT3_L1_IDX_EXTRACT_FROM_ADDR(GCPtrLeaf));
997
998 /* Allocate two nodes. */
999 uint32_t idxL2Root = 0;
1000 PDBGFBPL2ENTRY pL2Root = NULL;
1001 int rc = dbgfR3BpL2TblEntryAlloc(pUVM, &idxL2Root, &pL2Root);
1002 if (RT_SUCCESS(rc))
1003 {
1004 uint32_t idxL2Leaf = 0;
1005 PDBGFBPL2ENTRY pL2Leaf = NULL;
1006 rc = dbgfR3BpL2TblEntryAlloc(pUVM, &idxL2Leaf, &pL2Leaf);
1007 if (RT_SUCCESS(rc))
1008 {
1009 dbgfBpL2TblEntryInit(pL2Leaf, hBpLeaf, GCPtrLeaf, DBGF_BP_L2_ENTRY_IDX_END, DBGF_BP_L2_ENTRY_IDX_END, 0 /*iDepth*/);
1010 if (GCPtrLeaf < GCPtrRoot)
1011 dbgfBpL2TblEntryInit(pL2Root, hBpRoot, GCPtrRoot, idxL2Leaf, DBGF_BP_L2_ENTRY_IDX_END, 0 /*iDepth*/);
1012 else
1013 dbgfBpL2TblEntryInit(pL2Root, hBpRoot, GCPtrRoot, DBGF_BP_L2_ENTRY_IDX_END, idxL2Leaf, 0 /*iDepth*/);
1014
1015 uint32_t const u32Entry = DBGF_BP_INT3_L1_ENTRY_CREATE_L2_IDX(idxL2Root);
1016 if (ASMAtomicCmpXchgU32(&pUVM->dbgf.s.paBpLocL1R3[idxL1], u32Entry, u32EntryOld))
1017 return VINF_SUCCESS;
1018
1019 /* The L1 entry has changed due to another thread racing us during insertion, free nodes and try again. */
1020 rc = VINF_TRY_AGAIN;
1021 dbgfR3BpL2TblEntryFree(pUVM, idxL2Leaf, pL2Leaf);
1022 }
1023
1024 dbgfR3BpL2TblEntryFree(pUVM, idxL2Root, pL2Root);
1025 }
1026
1027 return rc;
1028}
1029
1030
1031/**
1032 * Inserts the given breakpoint handle into an existing binary search tree.
1033 *
1034 * @returns VBox status code.
1035 * @param pUVM The user mode VM handle.
1036 * @param idxL2Root The index of the tree root in the L2 table.
1037 * @param hBp The node DBGF handle to insert.
1038 * @param GCPtr The node's GC pointer to use as a key.
1039 */
1040static int dbgfR3BpInt2L2BstNodeInsert(PUVM pUVM, uint32_t idxL2Root, DBGFBP hBp, RTGCUINTPTR GCPtr)
1041{
1042 GCPtr = DBGF_BP_INT3_L2_KEY_EXTRACT_FROM_ADDR(GCPtr);
1043
1044 /* Allocate a new node first. */
1045 uint32_t idxL2Nd = 0;
1046 PDBGFBPL2ENTRY pL2Nd = NULL;
1047 int rc = dbgfR3BpL2TblEntryAlloc(pUVM, &idxL2Nd, &pL2Nd);
1048 if (RT_SUCCESS(rc))
1049 {
1050 /* Walk the tree and find the correct node to insert to. */
1051 PDBGFBPL2ENTRY pL2Entry = dbgfR3BpL2GetByIdx(pUVM, idxL2Root);
1052 while (RT_LIKELY(pL2Entry))
1053 {
1054 /* Make a copy of the entry. */
1055 DBGFBPL2ENTRY L2Entry;
1056 L2Entry.u64GCPtrKeyAndBpHnd1 = ASMAtomicReadU64((volatile uint64_t *)&pL2Entry->u64GCPtrKeyAndBpHnd1);
1057 L2Entry.u64LeftRightIdxDepthBpHnd2 = ASMAtomicReadU64((volatile uint64_t *)&pL2Entry->u64LeftRightIdxDepthBpHnd2);
1058
1059 RTGCUINTPTR GCPtrL2Entry = DBGF_BP_L2_ENTRY_GET_GCPTR(L2Entry.u64GCPtrKeyAndBpHnd1);
1060 AssertBreak(GCPtr != GCPtrL2Entry);
1061
1062 /* Not found, get to the next level. */
1063 uint32_t idxL2Next = (GCPtr < GCPtrL2Entry)
1064 ? DBGF_BP_L2_ENTRY_GET_IDX_LEFT(L2Entry.u64LeftRightIdxDepthBpHnd2)
1065 : DBGF_BP_L2_ENTRY_GET_IDX_RIGHT(L2Entry.u64LeftRightIdxDepthBpHnd2);
1066 if (idxL2Next == DBGF_BP_L2_ENTRY_IDX_END)
1067 {
1068 /* Insert the new node here. */
1069 dbgfBpL2TblEntryInit(pL2Nd, hBp, GCPtr, DBGF_BP_L2_ENTRY_IDX_END, DBGF_BP_L2_ENTRY_IDX_END, 0 /*iDepth*/);
1070 if (GCPtr < GCPtrL2Entry)
1071 dbgfBpL2TblEntryUpdateLeft(pL2Entry, idxL2Nd, 0 /*iDepth*/);
1072 else
1073 dbgfBpL2TblEntryUpdateRight(pL2Entry, idxL2Nd, 0 /*iDepth*/);
1074 return VINF_SUCCESS;
1075 }
1076
1077 pL2Entry = dbgfR3BpL2GetByIdx(pUVM, idxL2Next);
1078 }
1079
1080 rc = VERR_DBGF_BP_L2_LOOKUP_FAILED;
1081 dbgfR3BpL2TblEntryFree(pUVM, idxL2Nd, pL2Nd);
1082 }
1083
1084 return rc;
1085}
1086
1087
1088/**
1089 * Adds the given breakpoint handle keyed with the GC pointer to the proper L2 binary search tree
1090 * possibly creating a new tree.
1091 *
1092 * @returns VBox status code.
1093 * @param pUVM The user mode VM handle.
1094 * @param idxL1 The index into the L1 table the breakpoint uses.
1095 * @param hBp The breakpoint handle which is to be added.
1096 * @param GCPtr The GC pointer the breakpoint is keyed with.
1097 */
1098static int dbgfR3BpInt3L2BstNodeAdd(PUVM pUVM, uint32_t idxL1, DBGFBP hBp, RTGCUINTPTR GCPtr)
1099{
1100 int rc = RTSemFastMutexRequest(pUVM->dbgf.s.hMtxBpL2Wr); AssertRC(rc);
1101
1102 uint32_t u32Entry = ASMAtomicReadU32(&pUVM->dbgf.s.paBpLocL1R3[idxL1]); /* Re-read, could get raced by a remove operation. */
1103 uint8_t u8Type = DBGF_BP_INT3_L1_ENTRY_GET_TYPE(u32Entry);
1104 if (u8Type == DBGF_BP_INT3_L1_ENTRY_TYPE_BP_HND)
1105 {
1106 /* Create a new search tree, gather the necessary information first. */
1107 DBGFBP hBp2 = DBGF_BP_INT3_L1_ENTRY_GET_BP_HND(u32Entry);
1108 PDBGFBPINT pBp2 = dbgfR3BpGetByHnd(pUVM, hBp2);
1109 AssertStmt(VALID_PTR(pBp2), rc = VERR_DBGF_BP_IPE_7);
1110 if (RT_SUCCESS(rc))
1111 rc = dbgfR3BpInt3L2BstCreate(pUVM, idxL1, u32Entry, hBp, GCPtr, hBp2, pBp2->Pub.u.Int3.GCPtr);
1112 }
1113 else if (u8Type == DBGF_BP_INT3_L1_ENTRY_TYPE_L2_IDX)
1114 rc = dbgfR3BpInt2L2BstNodeInsert(pUVM, DBGF_BP_INT3_L1_ENTRY_GET_L2_IDX(u32Entry), hBp, GCPtr);
1115
1116 int rc2 = RTSemFastMutexRelease(pUVM->dbgf.s.hMtxBpL2Wr); AssertRC(rc2);
1117 return rc;
1118}
1119
1120
1121/**
1122 * Gets the leftmost entry starting from the given tree node index.
1123 *
1124 * @returns VBox status code.
1125 * @param pUVM The user mode VM handle.
1126 * @param idxL2Start The start index to walk from.
1127 * @param pidxL2Leftmost Where to store the L2 table index of the leftmost entry.
1128 * @param ppL2NdLeftmost Where to store the pointer to the leftmost L2 table entry.
1129 * @param pidxL2NdLeftParent Where to store the L2 table index of the leftmost entry's parent.
1130 * @param ppL2NdLeftParent Where to store the pointer to the leftmost L2 table entry's parent.
1131 */
1132static int dbgfR33BpInt3BstGetLeftmostEntryFromNode(PUVM pUVM, uint32_t idxL2Start,
1133 uint32_t *pidxL2Leftmost, PDBGFBPL2ENTRY *ppL2NdLeftmost,
1134 uint32_t *pidxL2NdLeftParent, PDBGFBPL2ENTRY *ppL2NdLeftParent)
1135{
1136 uint32_t idxL2Parent = DBGF_BP_L2_ENTRY_IDX_END;
1137 PDBGFBPL2ENTRY pL2NdParent = NULL;
1138
1139 for (;;)
1140 {
1141 PDBGFBPL2ENTRY pL2Entry = dbgfR3BpL2GetByIdx(pUVM, idxL2Start);
1142 AssertPtr(pL2Entry);
1143
1144 uint32_t idxL2Left = DBGF_BP_L2_ENTRY_GET_IDX_LEFT(pL2Entry->u64LeftRightIdxDepthBpHnd2);
1145 if (idxL2Left == DBGF_BP_L2_ENTRY_IDX_END)
1146 {
1147 *pidxL2Leftmost = idxL2Start;
1148 *ppL2NdLeftmost = pL2Entry;
1149 *pidxL2NdLeftParent = idxL2Parent;
1150 *ppL2NdLeftParent = pL2NdParent;
1151 break;
1152 }
1153
1154 idxL2Parent = idxL2Start;
1155 idxL2Start = idxL2Left;
1156 pL2NdParent = pL2Entry;
1157 }
1158
1159 return VINF_SUCCESS;
1160}
1161
1162
1163/**
1164 * Removes the given node rearranging the tree.
1165 *
1166 * @returns VBox status code.
1167 * @param pUVM The user mode VM handle.
1168 * @param idxL1 The index into the L1 table pointing to the binary search tree containing the node.
1169 * @param idxL2Root The L2 table index where the tree root is located.
1170 * @param idxL2Nd The node index to remove.
1171 * @param pL2Nd The L2 table entry to remove.
1172 * @param idxL2NdParent The parent's index, can be DBGF_BP_L2_ENTRY_IDX_END if the root is about to be removed.
1173 * @param pL2NdParent The parent's L2 table entry, can be NULL if the root is about to be removed.
1174 * @param fLeftChild Flag whether the node is the left child of the parent or the right one.
1175 */
1176static int dbgfR3BpInt3BstNodeRemove(PUVM pUVM, uint32_t idxL1, uint32_t idxL2Root,
1177 uint32_t idxL2Nd, PDBGFBPL2ENTRY pL2Nd,
1178 uint32_t idxL2NdParent, PDBGFBPL2ENTRY pL2NdParent,
1179 bool fLeftChild)
1180{
1181 /*
1182 * If there are only two nodes remaining the tree will get destroyed and the
1183 * L1 entry will be converted to the direct handle type.
1184 */
1185 uint32_t idxL2Left = DBGF_BP_L2_ENTRY_GET_IDX_LEFT(pL2Nd->u64LeftRightIdxDepthBpHnd2);
1186 uint32_t idxL2Right = DBGF_BP_L2_ENTRY_GET_IDX_RIGHT(pL2Nd->u64LeftRightIdxDepthBpHnd2);
1187
1188 Assert(idxL2NdParent != DBGF_BP_L2_ENTRY_IDX_END || !pL2NdParent); RT_NOREF(idxL2NdParent);
1189 uint32_t idxL2ParentNew = DBGF_BP_L2_ENTRY_IDX_END;
1190 if (idxL2Right == DBGF_BP_L2_ENTRY_IDX_END)
1191 idxL2ParentNew = idxL2Left;
1192 else
1193 {
1194 /* Find the leftmost entry of the right subtree and move it to the to-be-removed node's location in the tree. */
1195 PDBGFBPL2ENTRY pL2NdLeftmostParent = NULL;
1196 PDBGFBPL2ENTRY pL2NdLeftmost = NULL;
1197 uint32_t idxL2NdLeftmostParent = DBGF_BP_L2_ENTRY_IDX_END;
1198 uint32_t idxL2Leftmost = DBGF_BP_L2_ENTRY_IDX_END;
1199 int rc = dbgfR33BpInt3BstGetLeftmostEntryFromNode(pUVM, idxL2Right, &idxL2Leftmost ,&pL2NdLeftmost,
1200 &idxL2NdLeftmostParent, &pL2NdLeftmostParent);
1201 AssertRCReturn(rc, rc);
1202
1203 if (pL2NdLeftmostParent)
1204 {
1205 /* Rearrange the leftmost entry's parent pointer. */
1206 dbgfBpL2TblEntryUpdateLeft(pL2NdLeftmostParent, DBGF_BP_L2_ENTRY_GET_IDX_RIGHT(pL2NdLeftmost->u64LeftRightIdxDepthBpHnd2), 0 /*iDepth*/);
1207 dbgfBpL2TblEntryUpdateRight(pL2NdLeftmost, idxL2Right, 0 /*iDepth*/);
1208 }
1209
1210 dbgfBpL2TblEntryUpdateLeft(pL2NdLeftmost, idxL2Left, 0 /*iDepth*/);
1211
1212 /* Update the removed node's parent to point to the new node. */
1213 idxL2ParentNew = idxL2Leftmost;
1214 }
1215
1216 if (pL2NdParent)
1217 {
1218 /* Assign the new L2 index to the proper parent's left or right pointer. */
1219 if (fLeftChild)
1220 dbgfBpL2TblEntryUpdateLeft(pL2NdParent, idxL2ParentNew, 0 /*iDepth*/);
1221 else
1222 dbgfBpL2TblEntryUpdateRight(pL2NdParent, idxL2ParentNew, 0 /*iDepth*/);
1223 }
1224 else
1225 {
1226 /* The root node is removed, set the new root in the L1 table. */
1227 Assert(idxL2ParentNew != DBGF_BP_L2_ENTRY_IDX_END);
1228 idxL2Root = idxL2ParentNew;
1229 ASMAtomicXchgU32(&pUVM->dbgf.s.paBpLocL1R3[idxL1], DBGF_BP_INT3_L1_ENTRY_CREATE_L2_IDX(idxL2ParentNew));
1230 }
1231
1232 /* Free the node. */
1233 dbgfR3BpL2TblEntryFree(pUVM, idxL2Nd, pL2Nd);
1234
1235 /*
1236 * Check whether the old/new root is the only node remaining and convert the L1
1237 * table entry to a direct breakpoint handle one in that case.
1238 */
1239 pL2Nd = dbgfR3BpL2GetByIdx(pUVM, idxL2Root);
1240 AssertPtr(pL2Nd);
1241 if ( DBGF_BP_L2_ENTRY_GET_IDX_LEFT(pL2Nd->u64LeftRightIdxDepthBpHnd2) == DBGF_BP_L2_ENTRY_IDX_END
1242 && DBGF_BP_L2_ENTRY_GET_IDX_RIGHT(pL2Nd->u64LeftRightIdxDepthBpHnd2) == DBGF_BP_L2_ENTRY_IDX_END)
1243 {
1244 DBGFBP hBp = DBGF_BP_L2_ENTRY_GET_BP_HND(pL2Nd->u64GCPtrKeyAndBpHnd1, pL2Nd->u64LeftRightIdxDepthBpHnd2);
1245 dbgfR3BpL2TblEntryFree(pUVM, idxL2Root, pL2Nd);
1246 ASMAtomicXchgU32(&pUVM->dbgf.s.paBpLocL1R3[idxL1], DBGF_BP_INT3_L1_ENTRY_CREATE_BP_HND(hBp));
1247 }
1248
1249 return VINF_SUCCESS;
1250}
1251
1252
1253/**
1254 * Removes the given breakpoint handle keyed with the GC pointer from the L2 binary search tree
1255 * pointed to by the given L2 root index.
1256 *
1257 * @returns VBox status code.
1258 * @param pUVM The user mode VM handle.
1259 * @param idxL1 The index into the L1 table pointing to the binary search tree.
1260 * @param idxL2Root The L2 table index where the tree root is located.
1261 * @param hBp The breakpoint handle which is to be removed.
1262 * @param GCPtr The GC pointer the breakpoint is keyed with.
1263 */
1264static int dbgfR3BpInt3L2BstRemove(PUVM pUVM, uint32_t idxL1, uint32_t idxL2Root, DBGFBP hBp, RTGCUINTPTR GCPtr)
1265{
1266 GCPtr = DBGF_BP_INT3_L2_KEY_EXTRACT_FROM_ADDR(GCPtr);
1267
1268 int rc = RTSemFastMutexRequest(pUVM->dbgf.s.hMtxBpL2Wr); AssertRC(rc);
1269
1270 uint32_t idxL2Cur = idxL2Root;
1271 uint32_t idxL2Parent = DBGF_BP_L2_ENTRY_IDX_END;
1272 bool fLeftChild = false;
1273 PDBGFBPL2ENTRY pL2EntryParent = NULL;
1274 for (;;)
1275 {
1276 PDBGFBPL2ENTRY pL2Entry = dbgfR3BpL2GetByIdx(pUVM, idxL2Cur);
1277 AssertPtr(pL2Entry);
1278
1279 /* Check whether this node is to be removed. */
1280 RTGCUINTPTR GCPtrL2Entry = DBGF_BP_L2_ENTRY_GET_GCPTR(pL2Entry->u64GCPtrKeyAndBpHnd1);
1281 if (GCPtrL2Entry == GCPtr)
1282 {
1283 Assert(DBGF_BP_L2_ENTRY_GET_BP_HND(pL2Entry->u64GCPtrKeyAndBpHnd1, pL2Entry->u64LeftRightIdxDepthBpHnd2) == hBp); RT_NOREF(hBp);
1284
1285 rc = dbgfR3BpInt3BstNodeRemove(pUVM, idxL1, idxL2Root, idxL2Cur, pL2Entry,
1286 idxL2Parent, pL2EntryParent, fLeftChild);
1287 break;
1288 }
1289
1290 pL2EntryParent = pL2Entry;
1291 idxL2Parent = idxL2Cur;
1292
1293 if (GCPtrL2Entry < GCPtr)
1294 {
1295 fLeftChild = false;
1296 idxL2Cur = DBGF_BP_L2_ENTRY_GET_IDX_RIGHT(pL2Entry->u64LeftRightIdxDepthBpHnd2);
1297 }
1298 else
1299 {
1300 fLeftChild = true;
1301 idxL2Cur = DBGF_BP_L2_ENTRY_GET_IDX_LEFT(pL2Entry->u64LeftRightIdxDepthBpHnd2);
1302 }
1303
1304 AssertBreakStmt(idxL2Cur != DBGF_BP_L2_ENTRY_IDX_END, rc = VERR_DBGF_BP_L2_LOOKUP_FAILED);
1305 }
1306
1307 int rc2 = RTSemFastMutexRelease(pUVM->dbgf.s.hMtxBpL2Wr); AssertRC(rc2);
1308
1309 return rc;
1310}
1311
1312
1313/**
1314 * Adds the given int3 breakpoint to the appropriate lookup tables.
1315 *
1316 * @returns VBox status code.
1317 * @param pUVM The user mode VM handle.
1318 * @param hBp The breakpoint handle to add.
1319 * @param pBp The internal breakpoint state.
1320 */
1321static int dbgfR3BpInt3Add(PUVM pUVM, DBGFBP hBp, PDBGFBPINT pBp)
1322{
1323 AssertReturn(DBGF_BP_PUB_GET_TYPE(pBp->Pub.fFlagsAndType) == DBGFBPTYPE_INT3, VERR_DBGF_BP_IPE_3);
1324
1325 int rc = VINF_SUCCESS;
1326 uint16_t idxL1 = DBGF_BP_INT3_L1_IDX_EXTRACT_FROM_ADDR(pBp->Pub.u.Int3.GCPtr);
1327 uint8_t cTries = 16;
1328
1329 while (cTries--)
1330 {
1331 uint32_t u32Entry = ASMAtomicReadU32(&pUVM->dbgf.s.paBpLocL1R3[idxL1]);
1332 if (u32Entry == DBGF_BP_INT3_L1_ENTRY_TYPE_NULL)
1333 {
1334 /*
1335 * No breakpoint assigned so far for this entry, create an entry containing
1336 * the direct breakpoint handle and try to exchange it atomically.
1337 */
1338 u32Entry = DBGF_BP_INT3_L1_ENTRY_CREATE_BP_HND(hBp);
1339 if (ASMAtomicCmpXchgU32(&pUVM->dbgf.s.paBpLocL1R3[idxL1], u32Entry, DBGF_BP_INT3_L1_ENTRY_TYPE_NULL))
1340 break;
1341 }
1342 else
1343 {
1344 rc = dbgfR3BpInt3L2BstNodeAdd(pUVM, idxL1, hBp, pBp->Pub.u.Int3.GCPtr);
1345 if (rc != VINF_TRY_AGAIN)
1346 break;
1347 }
1348 }
1349
1350 if ( RT_SUCCESS(rc)
1351 && !cTries) /* Too much contention, abort with an error. */
1352 rc = VERR_DBGF_BP_INT3_ADD_TRIES_REACHED;
1353
1354 return rc;
1355}
1356
1357
1358/**
1359 * Gets a breakpoint given by address.
1360 *
1361 * @returns The breakpoint handle on success or NIL_DBGFBP if not found.
1362 * @param pUVM The user mode VM handle.
1363 * @param enmType The breakpoint type.
1364 * @param GCPtr The breakpoint address.
1365 * @param ppBp Where to store the pointer to the internal breakpoint state on success, optional.
1366 */
1367static DBGFBP dbgfR3BpGetByAddr(PUVM pUVM, DBGFBPTYPE enmType, RTGCUINTPTR GCPtr, PDBGFBPINT *ppBp)
1368{
1369 DBGFBP hBp = NIL_DBGFBP;
1370
1371 switch (enmType)
1372 {
1373 case DBGFBPTYPE_REG:
1374 {
1375 PVM pVM = pUVM->pVM;
1376 VM_ASSERT_VALID_EXT_RETURN(pVM, NIL_DBGFBP);
1377
1378 for (uint32_t i = 0; i < RT_ELEMENTS(pVM->dbgf.s.aHwBreakpoints); i++)
1379 {
1380 PDBGFBPHW pHwBp = &pVM->dbgf.s.aHwBreakpoints[i];
1381
1382 AssertCompileSize(DBGFBP, sizeof(uint32_t));
1383 DBGFBP hBpTmp = ASMAtomicReadU32(&pHwBp->hBp);
1384 if ( pHwBp->GCPtr == GCPtr
1385 && hBpTmp != NIL_DBGFBP)
1386 {
1387 hBp = hBpTmp;
1388 break;
1389 }
1390 }
1391
1392 break;
1393 }
1394
1395 case DBGFBPTYPE_INT3:
1396 {
1397 const uint16_t idxL1 = DBGF_BP_INT3_L1_IDX_EXTRACT_FROM_ADDR(GCPtr);
1398 const uint32_t u32L1Entry = ASMAtomicReadU32(&pUVM->dbgf.s.CTX_SUFF(paBpLocL1)[idxL1]);
1399
1400 if (u32L1Entry != DBGF_BP_INT3_L1_ENTRY_TYPE_NULL)
1401 {
1402 uint8_t u8Type = DBGF_BP_INT3_L1_ENTRY_GET_TYPE(u32L1Entry);
1403 if (u8Type == DBGF_BP_INT3_L1_ENTRY_TYPE_BP_HND)
1404 hBp = DBGF_BP_INT3_L1_ENTRY_GET_BP_HND(u32L1Entry);
1405 else if (u8Type == DBGF_BP_INT3_L1_ENTRY_TYPE_L2_IDX)
1406 {
1407 RTGCUINTPTR GCPtrKey = DBGF_BP_INT3_L2_KEY_EXTRACT_FROM_ADDR(GCPtr);
1408 PDBGFBPL2ENTRY pL2Nd = dbgfR3BpL2GetByIdx(pUVM, DBGF_BP_INT3_L1_ENTRY_GET_L2_IDX(u32L1Entry));
1409
1410 for (;;)
1411 {
1412 AssertPtr(pL2Nd);
1413
1414 RTGCUINTPTR GCPtrL2Entry = DBGF_BP_L2_ENTRY_GET_GCPTR(pL2Nd->u64GCPtrKeyAndBpHnd1);
1415 if (GCPtrKey == GCPtrL2Entry)
1416 {
1417 hBp = DBGF_BP_L2_ENTRY_GET_BP_HND(pL2Nd->u64GCPtrKeyAndBpHnd1, pL2Nd->u64LeftRightIdxDepthBpHnd2);
1418 break;
1419 }
1420
1421 /* Not found, get to the next level. */
1422 uint32_t idxL2Next = (GCPtrKey < GCPtrL2Entry)
1423 ? DBGF_BP_L2_ENTRY_GET_IDX_LEFT(pL2Nd->u64LeftRightIdxDepthBpHnd2)
1424 : DBGF_BP_L2_ENTRY_GET_IDX_RIGHT(pL2Nd->u64LeftRightIdxDepthBpHnd2);
1425 /* Address not found if the entry denotes the end. */
1426 if (idxL2Next == DBGF_BP_L2_ENTRY_IDX_END)
1427 break;
1428
1429 pL2Nd = dbgfR3BpL2GetByIdx(pUVM, idxL2Next);
1430 }
1431 }
1432 }
1433 break;
1434 }
1435
1436 default:
1437 AssertMsgFailed(("enmType=%d\n", enmType));
1438 break;
1439 }
1440
1441 if ( hBp != NIL_DBGFBP
1442 && ppBp)
1443 *ppBp = dbgfR3BpGetByHnd(pUVM, hBp);
1444 return hBp;
1445}
1446
1447
1448/**
1449 * @callback_method_impl{FNVMMEMTRENDEZVOUS}
1450 */
1451static DECLCALLBACK(VBOXSTRICTRC) dbgfR3BpInt3RemoveEmtWorker(PVM pVM, PVMCPU pVCpu, void *pvUser)
1452{
1453 DBGFBP hBp = (DBGFBP)(uintptr_t)pvUser;
1454
1455 VMCPU_ASSERT_EMT(pVCpu);
1456 VM_ASSERT_VALID_EXT_RETURN(pVM, VERR_INVALID_VM_HANDLE);
1457
1458 PUVM pUVM = pVM->pUVM;
1459 PDBGFBPINT pBp = dbgfR3BpGetByHnd(pUVM, hBp);
1460 AssertPtrReturn(pBp, VERR_DBGF_BP_IPE_8);
1461
1462 int rc = VINF_SUCCESS;
1463 if (pVCpu->idCpu == 0)
1464 {
1465 uint16_t idxL1 = DBGF_BP_INT3_L1_IDX_EXTRACT_FROM_ADDR(pBp->Pub.u.Int3.GCPtr);
1466 uint32_t u32Entry = ASMAtomicReadU32(&pUVM->dbgf.s.paBpLocL1R3[idxL1]);
1467 AssertReturn(u32Entry != DBGF_BP_INT3_L1_ENTRY_TYPE_NULL, VERR_DBGF_BP_IPE_6);
1468
1469 uint8_t u8Type = DBGF_BP_INT3_L1_ENTRY_GET_TYPE(u32Entry);
1470 if (u8Type == DBGF_BP_INT3_L1_ENTRY_TYPE_BP_HND)
1471 {
1472 /* Single breakpoint, just exchange atomically with the null value. */
1473 if (!ASMAtomicCmpXchgU32(&pUVM->dbgf.s.paBpLocL1R3[idxL1], DBGF_BP_INT3_L1_ENTRY_TYPE_NULL, u32Entry))
1474 {
1475 /*
1476 * A breakpoint addition must have raced us converting the L1 entry to an L2 index type, re-read
1477 * and remove the node from the created binary search tree.
1478 *
1479 * This works because after the entry was converted to an L2 index it can only be converted back
1480 * to a direct handle by removing one or more nodes which always goes through the fast mutex
1481 * protecting the L2 table. Likewise adding a new breakpoint requires grabbing the mutex as well
1482 * so there is serialization here and the node can be removed safely without having to worry about
1483 * concurrent tree modifications.
1484 */
1485 u32Entry = ASMAtomicReadU32(&pUVM->dbgf.s.paBpLocL1R3[idxL1]);
1486 AssertReturn(DBGF_BP_INT3_L1_ENTRY_GET_TYPE(u32Entry) == DBGF_BP_INT3_L1_ENTRY_TYPE_L2_IDX, VERR_DBGF_BP_IPE_9);
1487
1488 rc = dbgfR3BpInt3L2BstRemove(pUVM, idxL1, DBGF_BP_INT3_L1_ENTRY_GET_L2_IDX(u32Entry),
1489 hBp, pBp->Pub.u.Int3.GCPtr);
1490 }
1491 }
1492 else if (u8Type == DBGF_BP_INT3_L1_ENTRY_TYPE_L2_IDX)
1493 rc = dbgfR3BpInt3L2BstRemove(pUVM, idxL1, DBGF_BP_INT3_L1_ENTRY_GET_L2_IDX(u32Entry),
1494 hBp, pBp->Pub.u.Int3.GCPtr);
1495 }
1496
1497 return rc;
1498}
1499
1500
1501/**
1502 * Removes the given int3 breakpoint from all lookup tables.
1503 *
1504 * @returns VBox status code.
1505 * @param pUVM The user mode VM handle.
1506 * @param hBp The breakpoint handle to remove.
1507 * @param pBp The internal breakpoint state.
1508 */
1509static int dbgfR3BpInt3Remove(PUVM pUVM, DBGFBP hBp, PDBGFBPINT pBp)
1510{
1511 AssertReturn(DBGF_BP_PUB_GET_TYPE(pBp->Pub.fFlagsAndType) == DBGFBPTYPE_INT3, VERR_DBGF_BP_IPE_3);
1512
1513 /*
1514 * This has to be done by an EMT rendezvous in order to not have an EMT traversing
1515 * any L2 trees while it is being removed.
1516 */
1517 return VMMR3EmtRendezvous(pUVM->pVM, VMMEMTRENDEZVOUS_FLAGS_TYPE_ALL_AT_ONCE, dbgfR3BpInt3RemoveEmtWorker, (void *)(uintptr_t)hBp);
1518}
1519
1520
1521/**
1522 * @callback_method_impl{FNVMMEMTRENDEZVOUS}
1523 */
1524static DECLCALLBACK(VBOXSTRICTRC) dbgfR3BpRegRecalcOnCpu(PVM pVM, PVMCPU pVCpu, void *pvUser)
1525{
1526 RT_NOREF(pvUser);
1527
1528 /*
1529 * CPU 0 updates the enabled hardware breakpoint counts.
1530 */
1531 if (pVCpu->idCpu == 0)
1532 {
1533 pVM->dbgf.s.cEnabledHwBreakpoints = 0;
1534 pVM->dbgf.s.cEnabledHwIoBreakpoints = 0;
1535
1536 for (uint32_t iBp = 0; iBp < RT_ELEMENTS(pVM->dbgf.s.aHwBreakpoints); iBp++)
1537 {
1538 if (pVM->dbgf.s.aHwBreakpoints[iBp].fEnabled)
1539 {
1540 pVM->dbgf.s.cEnabledHwBreakpoints += 1;
1541 pVM->dbgf.s.cEnabledHwIoBreakpoints += pVM->dbgf.s.aHwBreakpoints[iBp].fType == X86_DR7_RW_IO;
1542 }
1543 }
1544 }
1545
1546 return CPUMRecalcHyperDRx(pVCpu, UINT8_MAX);
1547}
1548
1549
1550/**
1551 * Arms the given breakpoint.
1552 *
1553 * @returns VBox status code.
1554 * @param pUVM The user mode VM handle.
1555 * @param hBp The breakpoint handle to arm.
1556 * @param pBp The internal breakpoint state pointer for the handle.
1557 *
1558 * @thread Any thread.
1559 */
1560static int dbgfR3BpArm(PUVM pUVM, DBGFBP hBp, PDBGFBPINT pBp)
1561{
1562 int rc;
1563 PVM pVM = pUVM->pVM;
1564
1565 Assert(!DBGF_BP_PUB_IS_ENABLED(pBp->Pub.fFlagsAndType));
1566 switch (DBGF_BP_PUB_GET_TYPE(pBp->Pub.fFlagsAndType))
1567 {
1568 case DBGFBPTYPE_REG:
1569 {
1570 Assert(pBp->Pub.u.Reg.iReg < RT_ELEMENTS(pVM->dbgf.s.aHwBreakpoints));
1571 PDBGFBPHW pBpHw = &pVM->dbgf.s.aHwBreakpoints[pBp->Pub.u.Reg.iReg];
1572 Assert(pBpHw->hBp == hBp); RT_NOREF(hBp);
1573
1574 dbgfR3BpSetEnabled(pBp, true /*fEnabled*/);
1575 ASMAtomicWriteBool(&pBpHw->fEnabled, true);
1576 rc = VMMR3EmtRendezvous(pVM, VMMEMTRENDEZVOUS_FLAGS_TYPE_ALL_AT_ONCE, dbgfR3BpRegRecalcOnCpu, NULL);
1577 if (RT_FAILURE(rc))
1578 {
1579 ASMAtomicWriteBool(&pBpHw->fEnabled, false);
1580 dbgfR3BpSetEnabled(pBp, false /*fEnabled*/);
1581 }
1582 break;
1583 }
1584 case DBGFBPTYPE_INT3:
1585 {
1586 dbgfR3BpSetEnabled(pBp, true /*fEnabled*/);
1587
1588 /** @todo When we enable the first int3 breakpoint we should do this in an EMT rendezvous
1589 * as the VMX code intercepts #BP only when at least one int3 breakpoint is enabled.
1590 * A racing vCPU might trigger it and forward it to the guest causing panics/crashes/havoc. */
1591 /*
1592 * Save current byte and write the int3 instruction byte.
1593 */
1594 rc = PGMPhysSimpleReadGCPhys(pVM, &pBp->Pub.u.Int3.bOrg, pBp->Pub.u.Int3.PhysAddr, sizeof(pBp->Pub.u.Int3.bOrg));
1595 if (RT_SUCCESS(rc))
1596 {
1597 static const uint8_t s_bInt3 = 0xcc;
1598 rc = PGMPhysSimpleWriteGCPhys(pVM, pBp->Pub.u.Int3.PhysAddr, &s_bInt3, sizeof(s_bInt3));
1599 if (RT_SUCCESS(rc))
1600 {
1601 ASMAtomicIncU32(&pVM->dbgf.s.cEnabledInt3Breakpoints);
1602 Log(("DBGF: Set breakpoint at %RGv (Phys %RGp)\n", pBp->Pub.u.Int3.GCPtr, pBp->Pub.u.Int3.PhysAddr));
1603 }
1604 }
1605
1606 if (RT_FAILURE(rc))
1607 dbgfR3BpSetEnabled(pBp, false /*fEnabled*/);
1608
1609 break;
1610 }
1611 case DBGFBPTYPE_PORT_IO:
1612 case DBGFBPTYPE_MMIO:
1613 rc = VERR_NOT_IMPLEMENTED;
1614 break;
1615 default:
1616 AssertMsgFailedReturn(("Invalid breakpoint type %d\n", DBGF_BP_PUB_GET_TYPE(pBp->Pub.fFlagsAndType)),
1617 VERR_IPE_NOT_REACHED_DEFAULT_CASE);
1618 }
1619
1620 return rc;
1621}
1622
1623
1624/**
1625 * Disarms the given breakpoint.
1626 *
1627 * @returns VBox status code.
1628 * @param pUVM The user mode VM handle.
1629 * @param hBp The breakpoint handle to disarm.
1630 * @param pBp The internal breakpoint state pointer for the handle.
1631 *
1632 * @thread Any thread.
1633 */
1634static int dbgfR3BpDisarm(PUVM pUVM, DBGFBP hBp, PDBGFBPINT pBp)
1635{
1636 int rc;
1637 PVM pVM = pUVM->pVM;
1638
1639 Assert(DBGF_BP_PUB_IS_ENABLED(pBp->Pub.fFlagsAndType));
1640 switch (DBGF_BP_PUB_GET_TYPE(pBp->Pub.fFlagsAndType))
1641 {
1642 case DBGFBPTYPE_REG:
1643 {
1644 Assert(pBp->Pub.u.Reg.iReg < RT_ELEMENTS(pVM->dbgf.s.aHwBreakpoints));
1645 PDBGFBPHW pBpHw = &pVM->dbgf.s.aHwBreakpoints[pBp->Pub.u.Reg.iReg];
1646 Assert(pBpHw->hBp == hBp); RT_NOREF(hBp);
1647
1648 dbgfR3BpSetEnabled(pBp, false /*fEnabled*/);
1649 ASMAtomicWriteBool(&pBpHw->fEnabled, false);
1650 rc = VMMR3EmtRendezvous(pVM, VMMEMTRENDEZVOUS_FLAGS_TYPE_ALL_AT_ONCE, dbgfR3BpRegRecalcOnCpu, NULL);
1651 if (RT_FAILURE(rc))
1652 {
1653 ASMAtomicWriteBool(&pBpHw->fEnabled, true);
1654 dbgfR3BpSetEnabled(pBp, true /*fEnabled*/);
1655 }
1656 break;
1657 }
1658 case DBGFBPTYPE_INT3:
1659 {
1660 /*
1661 * Check that the current byte is the int3 instruction, and restore the original one.
1662 * We currently ignore invalid bytes.
1663 */
1664 uint8_t bCurrent = 0;
1665 rc = PGMPhysSimpleReadGCPhys(pVM, &bCurrent, pBp->Pub.u.Int3.PhysAddr, sizeof(bCurrent));
1666 if ( RT_SUCCESS(rc)
1667 && bCurrent == 0xcc)
1668 {
1669 rc = PGMPhysSimpleWriteGCPhys(pVM, pBp->Pub.u.Int3.PhysAddr, &pBp->Pub.u.Int3.bOrg, sizeof(pBp->Pub.u.Int3.bOrg));
1670 if (RT_SUCCESS(rc))
1671 {
1672 ASMAtomicDecU32(&pVM->dbgf.s.cEnabledInt3Breakpoints);
1673 dbgfR3BpSetEnabled(pBp, false /*fEnabled*/);
1674 Log(("DBGF: Removed breakpoint at %RGv (Phys %RGp)\n", pBp->Pub.u.Int3.GCPtr, pBp->Pub.u.Int3.PhysAddr));
1675 }
1676 }
1677 break;
1678 }
1679 case DBGFBPTYPE_PORT_IO:
1680 case DBGFBPTYPE_MMIO:
1681 rc = VERR_NOT_IMPLEMENTED;
1682 break;
1683 default:
1684 AssertMsgFailedReturn(("Invalid breakpoint type %d\n", DBGF_BP_PUB_GET_TYPE(pBp->Pub.fFlagsAndType)),
1685 VERR_IPE_NOT_REACHED_DEFAULT_CASE);
1686 }
1687
1688 return rc;
1689}
1690
1691
1692/**
1693 * Creates a new breakpoint owner returning a handle which can be used when setting breakpoints.
1694 *
1695 * @returns VBox status code.
1696 * @retval VERR_DBGF_BP_OWNER_NO_MORE_HANDLES if there are no more free owner handles available.
1697 * @param pUVM The user mode VM handle.
1698 * @param pfnBpHit The R3 callback which is called when a breakpoint with the owner handle is hit.
1699 * @param phBpOwner Where to store the owner handle on success.
1700 *
1701 * @thread Any thread but might defer work to EMT on the first call.
1702 */
1703VMMR3DECL(int) DBGFR3BpOwnerCreate(PUVM pUVM, PFNDBGFBPHIT pfnBpHit, PDBGFBPOWNER phBpOwner)
1704{
1705 /*
1706 * Validate the input.
1707 */
1708 UVM_ASSERT_VALID_EXT_RETURN(pUVM, VERR_INVALID_VM_HANDLE);
1709 AssertPtrReturn(pfnBpHit, VERR_INVALID_PARAMETER);
1710 AssertPtrReturn(phBpOwner, VERR_INVALID_POINTER);
1711
1712 int rc = dbgfR3BpOwnerEnsureInit(pUVM);
1713 AssertRCReturn(rc, rc);
1714
1715 /* Try to find a free entry in the owner table. */
1716 for (;;)
1717 {
1718 /* Scan the associated bitmap for a free entry. */
1719 int32_t iClr = ASMBitFirstClear(pUVM->dbgf.s.pbmBpOwnersAllocR3, DBGF_BP_OWNER_COUNT_MAX);
1720 if (iClr != -1)
1721 {
1722 /*
1723 * Try to allocate, we could get raced here as well. In that case
1724 * we try again.
1725 */
1726 if (!ASMAtomicBitTestAndSet(pUVM->dbgf.s.pbmBpOwnersAllocR3, iClr))
1727 {
1728 PDBGFBPOWNERINT pBpOwner = &pUVM->dbgf.s.paBpOwnersR3[iClr];
1729 pBpOwner->cRefs = 1;
1730 pBpOwner->pfnBpHitR3 = pfnBpHit;
1731
1732 *phBpOwner = (DBGFBPOWNER)iClr;
1733 return VINF_SUCCESS;
1734 }
1735 /* else Retry with another spot. */
1736 }
1737 else /* no free entry in bitmap, out of entries. */
1738 {
1739 rc = VERR_DBGF_BP_OWNER_NO_MORE_HANDLES;
1740 break;
1741 }
1742 }
1743
1744 return rc;
1745}
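
/*
 * A minimal owner usage sketch. The exact PFNDBGFBPHIT prototype lives in the DBGF headers;
 * the parameter list below is assumed from the way pfnBpHitR3 is invoked in DBGFR3BpHit()
 * further down, and the callback name is made up:
 *
 * @code
 *  static DECLCALLBACK(VBOXSTRICTRC) dbgfBpExampleHit(PVM pVM, VMCPUID idCpu, void *pvUserBp,
 *                                                     DBGFBP hBp, PCDBGFBPPUB pBpPub)
 *  {
 *      RT_NOREF(pVM, pvUserBp);
 *      LogRel(("Breakpoint %#x hit on vCPU %u, %RU64 hits so far\n", hBp, idCpu, pBpPub->cHits));
 *      return VINF_SUCCESS;    // Continue the guest; VINF_DBGF_BP_HALT would drop into the debugger instead.
 *  }
 *
 *  DBGFBPOWNER hBpOwner = NIL_DBGFBPOWNER;
 *  int rc = DBGFR3BpOwnerCreate(pUVM, dbgfBpExampleHit, &hBpOwner);
 *  // ... set breakpoints with DBGFR3BpSetInt3Ex(pUVM, hBpOwner, ...) and clear them again ...
 *  rc = DBGFR3BpOwnerDestroy(pUVM, hBpOwner);  // VERR_DBGF_OWNER_BUSY while breakpoints still reference it.
 * @endcode
 */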
1746
1747
1748/**
1749 * Destroys the owner identified by the given handle.
1750 *
1751 * @returns VBox status code.
1752 * @retval VERR_INVALID_HANDLE if the given owner handle is invalid.
1753 * @retval VERR_DBGF_OWNER_BUSY if there are still breakpoints set with the given owner handle.
1754 * @param pUVM The user mode VM handle.
1755 * @param hBpOwner The breakpoint owner handle to destroy.
1756 */
1757VMMR3DECL(int) DBGFR3BpOwnerDestroy(PUVM pUVM, DBGFBPOWNER hBpOwner)
1758{
1759 /*
1760 * Validate the input.
1761 */
1762 UVM_ASSERT_VALID_EXT_RETURN(pUVM, VERR_INVALID_VM_HANDLE);
1763 AssertReturn(hBpOwner != NIL_DBGFBPOWNER, VERR_INVALID_HANDLE);
1764
1765 int rc = dbgfR3BpOwnerEnsureInit(pUVM);
1766 AssertRCReturn(rc, rc);
1767
1768 PDBGFBPOWNERINT pBpOwner = dbgfR3BpOwnerGetByHnd(pUVM, hBpOwner);
1769 if (RT_LIKELY(pBpOwner))
1770 {
1771 if (ASMAtomicReadU32(&pBpOwner->cRefs) == 1)
1772 {
1773 pBpOwner->pfnBpHitR3 = NULL;
1774 ASMAtomicDecU32(&pBpOwner->cRefs);
1775 ASMAtomicBitClear(pUVM->dbgf.s.pbmBpOwnersAllocR3, hBpOwner);
1776 }
1777 else
1778 rc = VERR_DBGF_OWNER_BUSY;
1779 }
1780 else
1781 rc = VERR_INVALID_HANDLE;
1782
1783 return rc;
1784}
1785
1786
1787/**
1788 * Sets a breakpoint (int 3 based).
1789 *
1790 * @returns VBox status code.
1791 * @param pUVM The user mode VM handle.
1792 * @param idSrcCpu The ID of the virtual CPU used for the
1793 * breakpoint address resolution.
1794 * @param pAddress The address of the breakpoint.
1795 * @param iHitTrigger The hit count at which the breakpoint starts triggering.
1796 * Use 0 (or 1) if it's gonna trigger at once.
1797 * @param iHitDisable The hit count which disables the breakpoint.
1798 * Use ~(uint64_t)0 if it's never gonna be disabled.
1799 * @param phBp Where to store the breakpoint handle on success.
1800 *
1801 * @thread Any thread.
1802 */
1803VMMR3DECL(int) DBGFR3BpSetInt3(PUVM pUVM, VMCPUID idSrcCpu, PCDBGFADDRESS pAddress,
1804 uint64_t iHitTrigger, uint64_t iHitDisable, PDBGFBP phBp)
1805{
1806 return DBGFR3BpSetInt3Ex(pUVM, NIL_DBGFBPOWNER, NULL /*pvUser*/, idSrcCpu, pAddress,
1807 iHitTrigger, iHitDisable, phBp);
1808}
1809
1810
1811/**
1812 * Sets a breakpoint (int 3 based) - extended version.
1813 *
1814 * @returns VBox status code.
1815 * @param pUVM The user mode VM handle.
1816 * @param hOwner The owner handle, use NIL_DBGFBPOWNER if no special owner attached.
1817 * @param pvUser Opaque user data to pass in the owner callback.
1818 * @param idSrcCpu The ID of the virtual CPU used for the
1819 * breakpoint address resolution.
1820 * @param pAddress The address of the breakpoint.
1821 * @param iHitTrigger The hit count at which the breakpoint starts triggering.
1822 * Use 0 (or 1) if it's gonna trigger at once.
1823 * @param iHitDisable The hit count which disables the breakpoint.
1824 * Use ~(uint64_t)0 if it's never gonna be disabled.
1825 * @param phBp Where to store the breakpoint handle on success.
1826 *
1827 * @thread Any thread.
1828 */
1829VMMR3DECL(int) DBGFR3BpSetInt3Ex(PUVM pUVM, DBGFBPOWNER hOwner, void *pvUser,
1830 VMCPUID idSrcCpu, PCDBGFADDRESS pAddress,
1831 uint64_t iHitTrigger, uint64_t iHitDisable, PDBGFBP phBp)
1832{
1833 UVM_ASSERT_VALID_EXT_RETURN(pUVM, VERR_INVALID_VM_HANDLE);
1834 AssertReturn(hOwner != NIL_DBGFBPOWNER || pvUser == NULL, VERR_INVALID_PARAMETER);
1835 AssertReturn(DBGFR3AddrIsValid(pUVM, pAddress), VERR_INVALID_PARAMETER);
1836 AssertReturn(iHitTrigger <= iHitDisable, VERR_INVALID_PARAMETER);
1837 AssertPtrReturn(phBp, VERR_INVALID_POINTER);
1838
1839 int rc = dbgfR3BpEnsureInit(pUVM);
1840 AssertRCReturn(rc, rc);
1841
1842 /*
1843 * Translate & save the breakpoint address into a guest-physical address.
1844 */
1845 RTGCPHYS GCPhysBpAddr = NIL_RTGCPHYS;
1846 rc = DBGFR3AddrToPhys(pUVM, idSrcCpu, pAddress, &GCPhysBpAddr);
1847 if (RT_SUCCESS(rc))
1848 {
1849 /*
1850 * The physical address from DBGFR3AddrToPhys() is the start of the page;
1851 * we need the exact byte offset into the page when writing to it in dbgfR3BpInt3Arm().
1852 */
1853 GCPhysBpAddr |= (pAddress->FlatPtr & X86_PAGE_OFFSET_MASK);
1854
1855 PDBGFBPINT pBp = NULL;
1856 DBGFBP hBp = dbgfR3BpGetByAddr(pUVM, DBGFBPTYPE_INT3, pAddress->FlatPtr, &pBp);
1857 if ( hBp != NIL_DBGFBP
1858 && pBp->Pub.u.Int3.PhysAddr == GCPhysBpAddr)
1859 {
1860 rc = VINF_SUCCESS;
1861 if (!DBGF_BP_PUB_IS_ENABLED(pBp->Pub.fFlagsAndType))
1862 rc = dbgfR3BpArm(pUVM, hBp, pBp);
1863 if (RT_SUCCESS(rc))
1864 {
1865 rc = VINF_DBGF_BP_ALREADY_EXIST;
1866 if (phBp)
1867 *phBp = hBp;
1868 }
1869 return rc;
1870 }
1871
1872 rc = dbgfR3BpAlloc(pUVM, hOwner, pvUser, DBGFBPTYPE_INT3, iHitTrigger, iHitDisable, &hBp, &pBp);
1873 if (RT_SUCCESS(rc))
1874 {
1875 pBp->Pub.u.Int3.PhysAddr = GCPhysBpAddr;
1876 pBp->Pub.u.Int3.GCPtr = pAddress->FlatPtr;
1877
1878 /* Add the breakpoint to the lookup tables. */
1879 rc = dbgfR3BpInt3Add(pUVM, hBp, pBp);
1880 if (RT_SUCCESS(rc))
1881 {
1882 /* Enable the breakpoint. */
1883 rc = dbgfR3BpArm(pUVM, hBp, pBp);
1884 if (RT_SUCCESS(rc))
1885 {
1886 *phBp = hBp;
1887 return VINF_SUCCESS;
1888 }
1889
1890 int rc2 = dbgfR3BpInt3Remove(pUVM, hBp, pBp); AssertRC(rc2);
1891 }
1892
1893 dbgfR3BpFree(pUVM, hBp, pBp);
1894 }
1895 }
1896
1897 return rc;
1898}
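
/*
 * A minimal sketch of setting an int3 breakpoint on a flat guest address, assuming the
 * DBGFR3AddrFromFlat() helper from the DBGF address API; the address value is hypothetical:
 *
 * @code
 *  DBGFADDRESS Addr;
 *  DBGFR3AddrFromFlat(pUVM, &Addr, 0x00401000);    // Hypothetical guest code address.
 *  DBGFBP hBp = NIL_DBGFBP;
 *  int rc = DBGFR3BpSetInt3(pUVM, 0, &Addr, 0, UINT64_MAX, &hBp); // idSrcCpu=0, trigger at once, never auto-disable.
 *  if (rc == VINF_DBGF_BP_ALREADY_EXIST)
 *      rc = VINF_SUCCESS;      // hBp refers to a pre-existing breakpoint at this address which got (re-)armed.
 * @endcode
 */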
1899
1900
1901/**
1902 * Sets a register breakpoint.
1903 *
1904 * @returns VBox status code.
1905 * @param pUVM The user mode VM handle.
1906 * @param pAddress The address of the breakpoint.
1907 * @param iHitTrigger The hit count at which the breakpoint starts triggering.
1908 * Use 0 (or 1) if it's gonna trigger at once.
1909 * @param iHitDisable The hit count which disables the breakpoint.
1910 * Use ~(uint64_t)0 if it's never gonna be disabled.
1911 * @param fType The access type (one of the X86_DR7_RW_* defines).
1912 * @param cb The access size - 1, 2, 4 or 8 (the latter is AMD64 long mode only).
1913 * Must be 1 if fType is X86_DR7_RW_EO.
1914 * @param phBp Where to store the breakpoint handle.
1915 *
1916 * @thread Any thread.
1917 */
1918VMMR3DECL(int) DBGFR3BpSetReg(PUVM pUVM, PCDBGFADDRESS pAddress, uint64_t iHitTrigger,
1919 uint64_t iHitDisable, uint8_t fType, uint8_t cb, PDBGFBP phBp)
1920{
1921 return DBGFR3BpSetRegEx(pUVM, NIL_DBGFBPOWNER, NULL /*pvUser*/, pAddress,
1922 iHitTrigger, iHitDisable, fType, cb, phBp);
1923}
1924
1925
1926/**
1927 * Sets a register breakpoint - extended version.
1928 *
1929 * @returns VBox status code.
1930 * @param pUVM The user mode VM handle.
1931 * @param hOwner The owner handle, use NIL_DBGFBPOWNER if no special owner attached.
1932 * @param pvUser Opaque user data to pass in the owner callback.
1933 * @param pAddress The address of the breakpoint.
1934 * @param iHitTrigger The hit count at which the breakpoint starts triggering.
1935 * Use 0 (or 1) if it's gonna trigger at once.
1936 * @param iHitDisable The hit count which disables the breakpoint.
1937 * Use ~(uint64_t)0 if it's never gonna be disabled.
1938 * @param fType The access type (one of the X86_DR7_RW_* defines).
1939 * @param cb The access size - 1, 2, 4 or 8 (the latter is AMD64 long mode only).
1940 * Must be 1 if fType is X86_DR7_RW_EO.
1941 * @param phBp Where to store the breakpoint handle.
1942 *
1943 * @thread Any thread.
1944 */
1945VMMR3DECL(int) DBGFR3BpSetRegEx(PUVM pUVM, DBGFBPOWNER hOwner, void *pvUser,
1946 PCDBGFADDRESS pAddress, uint64_t iHitTrigger, uint64_t iHitDisable,
1947 uint8_t fType, uint8_t cb, PDBGFBP phBp)
1948{
1949 UVM_ASSERT_VALID_EXT_RETURN(pUVM, VERR_INVALID_VM_HANDLE);
1950 AssertReturn(hOwner != NIL_DBGFBPOWNER || pvUser == NULL, VERR_INVALID_PARAMETER);
1951 AssertReturn(DBGFR3AddrIsValid(pUVM, pAddress), VERR_INVALID_PARAMETER);
1952 AssertReturn(iHitTrigger <= iHitDisable, VERR_INVALID_PARAMETER);
1953 AssertReturn(cb > 0 && cb <= 8 && RT_IS_POWER_OF_TWO(cb), VERR_INVALID_PARAMETER);
1954 AssertPtrReturn(phBp, VERR_INVALID_POINTER);
1955 switch (fType)
1956 {
1957 case X86_DR7_RW_EO:
1958 if (cb == 1)
1959 break;
1960 AssertMsgFailedReturn(("fType=%#x cb=%d != 1\n", fType, cb), VERR_INVALID_PARAMETER);
1961 case X86_DR7_RW_IO:
1962 case X86_DR7_RW_RW:
1963 case X86_DR7_RW_WO:
1964 break;
1965 default:
1966 AssertMsgFailedReturn(("fType=%#x\n", fType), VERR_INVALID_PARAMETER);
1967 }
1968
1969 int rc = dbgfR3BpEnsureInit(pUVM);
1970 AssertRCReturn(rc, rc);
1971
1972 PDBGFBPINT pBp = NULL;
1973 DBGFBP hBp = dbgfR3BpGetByAddr(pUVM, DBGFBPTYPE_REG, pAddress->FlatPtr, &pBp);
1974 if ( hBp != NIL_DBGFBP
1975 && pBp->Pub.u.Reg.cb == cb
1976 && pBp->Pub.u.Reg.fType == fType)
1977 {
1978 rc = VINF_SUCCESS;
1979 if (!DBGF_BP_PUB_IS_ENABLED(pBp->Pub.fFlagsAndType))
1980 rc = dbgfR3BpArm(pUVM, hBp, pBp);
1981 if (RT_SUCCESS(rc))
1982 {
1983 rc = VINF_DBGF_BP_ALREADY_EXIST;
1984 if (phBp)
1985 *phBp = hBp;
1986 }
1987 return rc;
1988 }
1989
1990 /* Allocate new breakpoint. */
1991 rc = dbgfR3BpAlloc(pUVM, hOwner, pvUser, DBGFBPTYPE_REG, iHitTrigger, iHitDisable, &hBp, &pBp);
1992 if (RT_SUCCESS(rc))
1993 {
1994 pBp->Pub.u.Reg.GCPtr = pAddress->FlatPtr;
1995 pBp->Pub.u.Reg.fType = fType;
1996 pBp->Pub.u.Reg.cb = cb;
1997 pBp->Pub.u.Reg.iReg = UINT8_MAX;
1998 ASMCompilerBarrier();
1999
2000 /* Assign the proper hardware breakpoint. */
2001 rc = dbgfR3BpRegAssign(pUVM->pVM, hBp, pBp);
2002 if (RT_SUCCESS(rc))
2003 {
2004 /* Arm the breakpoint. */
2005 rc = dbgfR3BpArm(pUVM, hBp, pBp);
2006 if (RT_SUCCESS(rc))
2007 {
2008 if (phBp)
2009 *phBp = hBp;
2010 return VINF_SUCCESS;
2011 }
2012 else
2013 {
2014 int rc2 = dbgfR3BpRegRemove(pUVM->pVM, hBp, pBp);
2015 AssertRC(rc2); RT_NOREF(rc2);
2016 }
2017 }
2018
2019 dbgfR3BpFree(pUVM, hBp, pBp);
2020 }
2021
2022 return rc;
2023}
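
/*
 * A minimal sketch of a 4 byte hardware write watchpoint, again assuming DBGFR3AddrFromFlat()
 * and a hypothetical guest data address; this only succeeds while one of the four DR0-DR3
 * slots is still free:
 *
 * @code
 *  DBGFADDRESS Addr;
 *  DBGFR3AddrFromFlat(pUVM, &Addr, 0x00500000);    // Hypothetical guest data address.
 *  DBGFBP hBp = NIL_DBGFBP;
 *  int rc = DBGFR3BpSetReg(pUVM, &Addr, 0, UINT64_MAX, X86_DR7_RW_WO, 4, &hBp);  // Break on 4 byte writes.
 * @endcode
 */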
2024
2025
2026/**
2027 * This is only kept for now to not mess with the debugger implementation at this point,
2028 * recompiler breakpoints are not supported anymore (IEM has some API but it isn't implemented
2029 * and should probably be merged with the DBGF breakpoints).
2030 */
2031VMMR3DECL(int) DBGFR3BpSetREM(PUVM pUVM, PCDBGFADDRESS pAddress, uint64_t iHitTrigger,
2032 uint64_t iHitDisable, PDBGFBP phBp)
2033{
2034 RT_NOREF(pUVM, pAddress, iHitTrigger, iHitDisable, phBp);
2035 return VERR_NOT_SUPPORTED;
2036}
2037
2038
2039/**
2040 * Sets an I/O port breakpoint.
2041 *
2042 * @returns VBox status code.
2043 * @param pUVM The user mode VM handle.
2044 * @param uPort The first I/O port.
2045 * @param cPorts The number of I/O ports, see DBGFBPIOACCESS_XXX.
2046 * @param fAccess The access we want to break on.
2047 * @param iHitTrigger The hit count at which the breakpoint starts
2048 * triggering. Use 0 (or 1) if it's gonna trigger at
2049 * once.
2050 * @param iHitDisable The hit count which disables the breakpoint.
2051 * Use ~(uint64_t)0 if it's never gonna be disabled.
2052 * @param phBp Where to store the breakpoint handle.
2053 *
2054 * @thread Any thread.
2055 */
2056VMMR3DECL(int) DBGFR3BpSetPortIo(PUVM pUVM, RTIOPORT uPort, RTIOPORT cPorts, uint32_t fAccess,
2057 uint64_t iHitTrigger, uint64_t iHitDisable, PDBGFBP phBp)
2058{
2059 return DBGFR3BpSetPortIoEx(pUVM, NIL_DBGFBPOWNER, NULL /*pvUser*/, uPort, cPorts,
2060 fAccess, iHitTrigger, iHitDisable, phBp);
2061}
2062
2063
2064/**
2065 * Sets an I/O port breakpoint - extended version.
2066 *
2067 * @returns VBox status code.
2068 * @param pUVM The user mode VM handle.
2069 * @param hOwner The owner handle, use NIL_DBGFBPOWNER if no special owner attached.
2070 * @param pvUser Opaque user data to pass in the owner callback.
2071 * @param uPort The first I/O port.
2072 * @param cPorts The number of I/O ports, see DBGFBPIOACCESS_XXX.
2073 * @param fAccess The access we want to break on.
2074 * @param iHitTrigger The hit count at which the breakpoint starts
2075 * triggering. Use 0 (or 1) if it's gonna trigger at
2076 * once.
2077 * @param iHitDisable The hit count which disables the breakpoint.
2078 * Use ~(uint64_t)0 if it's never gonna be disabled.
2079 * @param phBp Where to store the breakpoint handle.
2080 *
2081 * @thread Any thread.
2082 */
2083VMMR3DECL(int) DBGFR3BpSetPortIoEx(PUVM pUVM, DBGFBPOWNER hOwner, void *pvUser,
2084 RTIOPORT uPort, RTIOPORT cPorts, uint32_t fAccess,
2085 uint64_t iHitTrigger, uint64_t iHitDisable, PDBGFBP phBp)
2086{
2087 UVM_ASSERT_VALID_EXT_RETURN(pUVM, VERR_INVALID_VM_HANDLE);
2088 AssertReturn(hOwner != NIL_DBGFBPOWNER || pvUser == NULL, VERR_INVALID_PARAMETER);
2089 AssertReturn(!(fAccess & ~DBGFBPIOACCESS_VALID_MASK_PORT_IO), VERR_INVALID_FLAGS);
2090 AssertReturn(fAccess, VERR_INVALID_FLAGS);
2091 AssertReturn(iHitTrigger <= iHitDisable, VERR_INVALID_PARAMETER);
2092 AssertPtrReturn(phBp, VERR_INVALID_POINTER);
2093 AssertReturn(cPorts > 0, VERR_OUT_OF_RANGE);
2094 AssertReturn((uint32_t)uPort + cPorts <= _64K, VERR_OUT_OF_RANGE); /* The range must not wrap around the end of the I/O port space. */
2095
2096 int rc = dbgfR3BpEnsureInit(pUVM);
2097 AssertRCReturn(rc, rc);
2098
2099 return VERR_NOT_IMPLEMENTED;
2100}
2101
2102
2103/**
2104 * Sets a memory mapped I/O breakpoint.
2105 *
2106 * @returns VBox status code.
2107 * @param pUVM The user mode VM handle.
2108 * @param GCPhys The first MMIO address.
2109 * @param cb The size of the MMIO range to break on.
2110 * @param fAccess The access we want to break on.
2111 * @param iHitTrigger The hit count at which the breakpoint starts
2112 * triggering. Use 0 (or 1) if it's gonna trigger at
2113 * once.
2114 * @param iHitDisable The hit count which disables the breakpoint.
2115 * Use ~(uint64_t)0 if it's never gonna be disabled.
2116 * @param phBp Where to store the breakpoint handle.
2117 *
2118 * @thread Any thread.
2119 */
2120VMMR3DECL(int) DBGFR3BpSetMmio(PUVM pUVM, RTGCPHYS GCPhys, uint32_t cb, uint32_t fAccess,
2121 uint64_t iHitTrigger, uint64_t iHitDisable, PDBGFBP phBp)
2122{
2123 return DBGFR3BpSetMmioEx(pUVM, NIL_DBGFBPOWNER, NULL /*pvUser*/, GCPhys, cb, fAccess,
2124 iHitTrigger, iHitDisable, phBp);
2125}
2126
2127
2128/**
2129 * Sets a memory mapped I/O breakpoint - extended version.
2130 *
2131 * @returns VBox status code.
2132 * @param pUVM The user mode VM handle.
2133 * @param hOwner The owner handle, use NIL_DBGFBPOWNER if no special owner attached.
2134 * @param pvUser Opaque user data to pass in the owner callback.
2135 * @param GCPhys The first MMIO address.
2136 * @param cb The size of the MMIO range to break on.
2137 * @param fAccess The access we want to break on.
2138 * @param iHitTrigger The hit count at which the breakpoint starts
2139 * triggering. Use 0 (or 1) if it's gonna trigger at
2140 * once.
2141 * @param iHitDisable The hit count which disables the breakpoint.
2142 * Use ~(uint64_t)0 if it's never gonna be disabled.
2143 * @param phBp Where to store the breakpoint handle.
2144 *
2145 * @thread Any thread.
2146 */
2147VMMR3DECL(int) DBGFR3BpSetMmioEx(PUVM pUVM, DBGFBPOWNER hOwner, void *pvUser,
2148 RTGCPHYS GCPhys, uint32_t cb, uint32_t fAccess,
2149 uint64_t iHitTrigger, uint64_t iHitDisable, PDBGFBP phBp)
2150{
2151 UVM_ASSERT_VALID_EXT_RETURN(pUVM, VERR_INVALID_VM_HANDLE);
2152 AssertReturn(hOwner != NIL_DBGFBPOWNER || pvUser == NULL, VERR_INVALID_PARAMETER);
2153 AssertReturn(!(fAccess & ~DBGFBPIOACCESS_VALID_MASK_MMIO), VERR_INVALID_FLAGS);
2154 AssertReturn(fAccess, VERR_INVALID_FLAGS);
2155 AssertReturn(iHitTrigger <= iHitDisable, VERR_INVALID_PARAMETER);
2156 AssertPtrReturn(phBp, VERR_INVALID_POINTER);
2157 AssertReturn(cb, VERR_OUT_OF_RANGE);
2158 AssertReturn(GCPhys + cb > GCPhys, VERR_OUT_OF_RANGE); /* The range must not wrap around the end of the address space. */
2159
2160 int rc = dbgfR3BpEnsureInit(pUVM);
2161 AssertRCReturn(rc, rc);
2162
2163 return VERR_NOT_IMPLEMENTED;
2164}
2165
2166
2167/**
2168 * Clears a breakpoint.
2169 *
2170 * @returns VBox status code.
2171 * @param pUVM The user mode VM handle.
2172 * @param hBp The handle of the breakpoint which should be removed (cleared).
2173 *
2174 * @thread Any thread.
2175 */
2176VMMR3DECL(int) DBGFR3BpClear(PUVM pUVM, DBGFBP hBp)
2177{
2178 UVM_ASSERT_VALID_EXT_RETURN(pUVM, VERR_INVALID_VM_HANDLE);
2179 AssertReturn(hBp != NIL_DBGFBP, VERR_INVALID_HANDLE);
2180
2181 PDBGFBPINT pBp = dbgfR3BpGetByHnd(pUVM, hBp);
2182 AssertPtrReturn(pBp, VERR_DBGF_BP_NOT_FOUND);
2183
2184 /* Disarm the breakpoint when it is enabled. */
2185 if (DBGF_BP_PUB_IS_ENABLED(pBp->Pub.fFlagsAndType))
2186 {
2187 int rc = dbgfR3BpDisarm(pUVM, hBp, pBp);
2188 AssertRC(rc);
2189 }
2190
2191 switch (DBGF_BP_PUB_GET_TYPE(pBp->Pub.fFlagsAndType))
2192 {
2193 case DBGFBPTYPE_REG:
2194 {
2195 int rc = dbgfR3BpRegRemove(pUVM->pVM, hBp, pBp);
2196 AssertRC(rc);
2197 break;
2198 }
2199 default:
2200 break;
2201 }
2202
2203 dbgfR3BpFree(pUVM, hBp, pBp);
2204 return VINF_SUCCESS;
2205}
2206
2207
2208/**
2209 * Enables a breakpoint.
2210 *
2211 * @returns VBox status code.
2212 * @param pUVM The user mode VM handle.
2213 * @param hBp The handle of the breakpoint which should be enabled.
2214 *
2215 * @thread Any thread.
2216 */
2217VMMR3DECL(int) DBGFR3BpEnable(PUVM pUVM, DBGFBP hBp)
2218{
2219 /*
2220 * Validate the input.
2221 */
2222 UVM_ASSERT_VALID_EXT_RETURN(pUVM, VERR_INVALID_VM_HANDLE);
2223 AssertReturn(hBp != NIL_DBGFBP, VERR_INVALID_HANDLE);
2224
2225 PDBGFBPINT pBp = dbgfR3BpGetByHnd(pUVM, hBp);
2226 AssertPtrReturn(pBp, VERR_DBGF_BP_NOT_FOUND);
2227
2228 int rc;
2229 if (!DBGF_BP_PUB_IS_ENABLED(pBp->Pub.fFlagsAndType))
2230 rc = dbgfR3BpArm(pUVM, hBp, pBp);
2231 else
2232 rc = VINF_DBGF_BP_ALREADY_ENABLED;
2233
2234 return rc;
2235}
2236
2237
2238/**
2239 * Disables a breakpoint.
2240 *
2241 * @returns VBox status code.
2242 * @param pUVM The user mode VM handle.
2243 * @param hBp The handle of the breakpoint which should be disabled.
2244 *
2245 * @thread Any thread.
2246 */
2247VMMR3DECL(int) DBGFR3BpDisable(PUVM pUVM, DBGFBP hBp)
2248{
2249 /*
2250 * Validate the input.
2251 */
2252 UVM_ASSERT_VALID_EXT_RETURN(pUVM, VERR_INVALID_VM_HANDLE);
2253 AssertReturn(hBp != NIL_DBGFBP, VERR_INVALID_HANDLE);
2254
2255 PDBGFBPINT pBp = dbgfR3BpGetByHnd(pUVM, hBp);
2256 AssertPtrReturn(pBp, VERR_DBGF_BP_NOT_FOUND);
2257
2258 int rc;
2259 if (DBGF_BP_PUB_IS_ENABLED(pBp->Pub.fFlagsAndType))
2260 rc = dbgfR3BpDisarm(pUVM, hBp, pBp);
2261 else
2262 rc = VINF_DBGF_BP_ALREADY_DISABLED;
2263
2264 return rc;
2265}
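
/*
 * A minimal sketch of the enable/disable/clear life cycle for a handle obtained from one of
 * the DBGFR3BpSetXxx() calls above (hBp is assumed to be a valid handle):
 *
 * @code
 *  int rc = DBGFR3BpDisable(pUVM, hBp);    // Disarms it; VINF_DBGF_BP_ALREADY_DISABLED if it was not armed.
 *  // ... run the guest without the breakpoint triggering ...
 *  rc = DBGFR3BpEnable(pUVM, hBp);         // Re-arms it; VINF_DBGF_BP_ALREADY_ENABLED if it was still armed.
 *  // ... done with the breakpoint ...
 *  rc = DBGFR3BpClear(pUVM, hBp);          // Disarms it if necessary and frees the handle.
 * @endcode
 */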
2266
2267
2268/**
2269 * Enumerate the breakpoints.
2270 *
2271 * @returns VBox status code.
2272 * @param pUVM The user mode VM handle.
2273 * @param pfnCallback The callback function.
2274 * @param pvUser The user argument to pass to the callback.
2275 *
2276 * @thread Any thread.
2277 */
2278VMMR3DECL(int) DBGFR3BpEnum(PUVM pUVM, PFNDBGFBPENUM pfnCallback, void *pvUser)
2279{
2280 UVM_ASSERT_VALID_EXT_RETURN(pUVM, VERR_INVALID_VM_HANDLE);
2281
2282 for (uint32_t idChunk = 0; idChunk < RT_ELEMENTS(pUVM->dbgf.s.aBpChunks); idChunk++)
2283 {
2284 PDBGFBPCHUNKR3 pBpChunk = &pUVM->dbgf.s.aBpChunks[idChunk];
2285
2286 if (pBpChunk->idChunk == DBGF_BP_CHUNK_ID_INVALID)
2287 break; /* Stop here; the first unallocated chunk means no later chunk is allocated either. */
2288
2289 if (pBpChunk->cBpsFree < DBGF_BP_COUNT_PER_CHUNK)
2290 {
2291 /* Scan the bitmap for allocated entries. */
2292 int32_t iAlloc = ASMBitFirstSet(pBpChunk->pbmAlloc, DBGF_BP_COUNT_PER_CHUNK);
2293 if (iAlloc != -1)
2294 {
2295 do
2296 {
2297 DBGFBP hBp = DBGF_BP_HND_CREATE(idChunk, (uint32_t)iAlloc);
2298 PDBGFBPINT pBp = dbgfR3BpGetByHnd(pUVM, hBp);
2299
2300 /* Make a copy of the breakpoint's public data to have a consistent view. */
2301 DBGFBPPUB BpPub;
2302 BpPub.cHits = ASMAtomicReadU64((volatile uint64_t *)&pBp->Pub.cHits);
2303 BpPub.iHitTrigger = ASMAtomicReadU64((volatile uint64_t *)&pBp->Pub.iHitTrigger);
2304 BpPub.iHitDisable = ASMAtomicReadU64((volatile uint64_t *)&pBp->Pub.iHitDisable);
2305 BpPub.hOwner = ASMAtomicReadU32((volatile uint32_t *)&pBp->Pub.hOwner);
2306 BpPub.fFlagsAndType = ASMAtomicReadU32((volatile uint32_t *)&pBp->Pub.fFlagsAndType);
2307 memcpy(&BpPub.u, &pBp->Pub.u, sizeof(pBp->Pub.u)); /* Is constant after allocation. */
2308
2309 /* Check if a removal raced us. */
2310 if (ASMBitTest(pBpChunk->pbmAlloc, iAlloc))
2311 {
2312 int rc = pfnCallback(pUVM, pvUser, hBp, &BpPub);
2313 if (RT_FAILURE(rc) || rc == VINF_CALLBACK_RETURN)
2314 return rc;
2315 }
2316
2317 iAlloc = ASMBitNextSet(pBpChunk->pbmAlloc, DBGF_BP_COUNT_PER_CHUNK, iAlloc);
2318 } while (iAlloc != -1);
2319 }
2320 }
2321 }
2322
2323 return VINF_SUCCESS;
2324}
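
/*
 * A minimal enumeration sketch; the callback parameter list and return type are assumed from
 * the pfnCallback invocation above, and the callback name is made up:
 *
 * @code
 *  static DECLCALLBACK(int) dbgfBpExampleEnum(PUVM pUVM, void *pvUser, DBGFBP hBp, PCDBGFBPPUB pBpPub)
 *  {
 *      RT_NOREF(pUVM, pvUser);
 *      LogRel(("bp %#x: type %d %s, %RU64 hits\n", hBp, DBGF_BP_PUB_GET_TYPE(pBpPub->fFlagsAndType),
 *              DBGF_BP_PUB_IS_ENABLED(pBpPub->fFlagsAndType) ? "enabled" : "disabled", pBpPub->cHits));
 *      return VINF_SUCCESS;    // VINF_CALLBACK_RETURN would stop the enumeration early.
 *  }
 *
 *  int rc = DBGFR3BpEnum(pUVM, dbgfBpExampleEnum, NULL);
 * @endcode
 */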
2325
2326
2327/**
2328 * Called whenever a breakpoint event needs to be serviced in ring-3 to decide what to do.
2329 *
2330 * @returns VBox status code.
2331 * @param pVM The cross context VM structure.
2332 * @param pVCpu The vCPU the breakpoint event happened on.
2333 *
2334 * @thread EMT
2335 */
2336VMMR3_INT_DECL(int) DBGFR3BpHit(PVM pVM, PVMCPU pVCpu)
2337{
2338 /* Invoke the owner callback first, or send it straight into the debugger? */
2339 if (pVCpu->dbgf.s.fBpInvokeOwnerCallback)
2340 {
2341 DBGFBP hBp = pVCpu->dbgf.s.hBpActive;
2342 PDBGFBPINT pBp = dbgfR3BpGetByHnd(pVM->pUVM, pVCpu->dbgf.s.hBpActive);
2343 AssertReturn(pBp, VERR_DBGF_BP_IPE_9);
2344
2345 /* Resolve owner (can be NIL_DBGFBPOWNER) and invoke callback if there is one. */
2346 PCDBGFBPOWNERINT pBpOwner = dbgfR3BpOwnerGetByHnd(pVM->pUVM, pBp->Pub.hOwner);
2347 if (pBpOwner)
2348 {
2349 VBOXSTRICTRC rcStrict = pBpOwner->pfnBpHitR3(pVM, pVCpu->idCpu, pBp->pvUserR3, hBp, &pBp->Pub);
2350 if (rcStrict == VINF_SUCCESS)
2351 {
2352 uint8_t abInstr[DBGF_BP_INSN_MAX];
2353 RTGCPTR const GCPtrInstr = pVCpu->cpum.GstCtx.rip + pVCpu->cpum.GstCtx.cs.u64Base;
2354 int rc = PGMPhysSimpleReadGCPtr(pVCpu, &abInstr[0], GCPtrInstr, sizeof(abInstr));
2355 AssertRC(rc);
2356 if (RT_SUCCESS(rc))
2357 {
2358 /* Replace the int3 with the original instruction byte. */
2359 abInstr[0] = pBp->Pub.u.Int3.bOrg;
2360 rcStrict = IEMExecOneWithPrefetchedByPC(pVCpu, CPUMCTX2CORE(&pVCpu->cpum.GstCtx), GCPtrInstr, &abInstr[0], sizeof(abInstr));
2361 return VBOXSTRICTRC_VAL(rcStrict);
2362 }
2363 }
2364 else if (rcStrict != VINF_DBGF_BP_HALT) /* Guru meditation. */
2365 return VERR_DBGF_BP_OWNER_CALLBACK_WRONG_STATUS;
2366 /* else: Halt in the debugger. */
2367 }
2368 }
2369
2370 return DBGFR3EventBreakpoint(pVM, DBGFEVENT_BREAKPOINT);
2371}
2372