Compare commits

...

86 Commits

Author SHA1 Message Date
Pratik Mankawde
31d0ed178a Phase 4a: Establish-phase gap fill & cross-node correlation
Add full consensus tracing with deterministic trace ID correlation
and establish-phase instrumentation:

- Deterministic trace_id from previousLedger.id() for cross-node
  correlation (switchable via consensus_trace_strategy config)
- Round-to-round span links (follows-from) for causal chaining
- Establish phase spans with convergence tracking, dispute resolution
  events, and threshold escalation attributes
- Validation spans with links to round spans (thread-safe via
  roundSpanContext_ snapshot for jtACCEPT cross-thread access)
- Mode change spans for proposing/observing transitions
- New startSpan overload with span links in Telemetry interface
- XRPL_TRACE_ADD_EVENT macro with do-while(0) safety wrapper
- Config validation for consensus_trace_strategy
- Test adaptor (csf::Peer) updated with getTelemetry() stub

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 22:10:00 +00:00
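
A minimal sketch of the do-while(0) wrapper mentioned in the Phase 4a entry above, assuming a hypothetical span object with an addEvent() method; the actual XRPL_TRACE_ADD_EVENT macro in the branch may differ.

#define XRPL_TRACE_ADD_EVENT(span, name) \
    do                                   \
    {                                    \
        if (span)                        \
            (span)->addEvent(name);      \
    } while (0)

// The wrapper makes the macro behave as a single statement, so the else
// below still binds to the outer if:
//     if (tracing)
//         XRPL_TRACE_ADD_EVENT(roundSpan, "dispute.resolved");
//     else
//         skipTracing();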
Pratik Mankawde
7675af41ec Add consensus.accept.apply span with ledger close time attributes
Add a new trace span in doAccept() capturing ledger close time details:
- xrpl.consensus.close_time: agreed-upon close time (epoch seconds)
- xrpl.consensus.close_time_correct: whether validators converged
  (per avCT_CONSENSUS_PCT = 75% threshold)
- xrpl.consensus.close_resolution_ms: time rounding granularity
- xrpl.consensus.state: "finished" or "moved_on" (consensus failure)
- xrpl.consensus.proposing: whether this node was proposing

Update Tempo datasource with close time filters, plan docs with
new span inventory, and add test coverage for the attribute pattern.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 22:10:00 +00:00
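
A hedged sketch of how the doAccept() attributes listed above might be recorded; the SpanGuard stub and its setAttribute signature are assumptions standing in for the branch's actual telemetry wrapper.

#include <cstdint>
#include <iostream>
#include <string>

// Minimal stand-in so the sketch compiles on its own.
struct SpanGuard
{
    template <class T>
    void setAttribute(std::string const& key, T const& value)
    {
        std::cout << key << " = " << value << '\n';
    }
};

// Records the attribute names given in the commit message above.
void recordCloseTimeAttributes(
    SpanGuard& span,
    std::int64_t closeTimeEpochSecs,
    bool closeTimeCorrect,
    std::int64_t closeResolutionMs,
    bool proposing,
    bool consensusFailed)
{
    span.setAttribute("xrpl.consensus.close_time", closeTimeEpochSecs);
    span.setAttribute("xrpl.consensus.close_time_correct", closeTimeCorrect);
    span.setAttribute("xrpl.consensus.close_resolution_ms", closeResolutionMs);
    span.setAttribute("xrpl.consensus.proposing", proposing);
    span.setAttribute(
        "xrpl.consensus.state",
        std::string(consensusFailed ? "moved_on" : "finished"));
}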
Pratik Mankawde
655b78a7d8 Add consensus trace filters to Grafana Tempo datasource
Add xrpl.consensus.mode (static), xrpl.consensus.round (dynamic),
and xrpl.consensus.ledger.seq (static) search filters for Phase 4
consensus span attributes.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 22:09:24 +00:00
Pratik Mankawde
0bbffaebc4 Phase 4: Consensus tracing — round lifecycle, proposals, validations
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 22:09:24 +00:00
Pratik Mankawde
e33e592128 Add transaction trace filters to Grafana Tempo datasource
Add xrpl.tx.hash (static), xrpl.tx.local and xrpl.tx.status
(dynamic) search filters for Phase 3 transaction span attributes.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 22:09:01 +00:00
Pratik Mankawde
e6b05d700c Phase 3: Transaction tracing — protobuf context, PeerImp, NetworkOPs
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 22:09:01 +00:00
Pratik Mankawde
f114ec5462 Fix gersemi cmake formatting in test CMakeLists
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 22:09:01 +00:00
Pratik Mankawde
1d05075585 Appendix: add Phase 2-5 task lists to document index
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 22:09:01 +00:00
Pratik Mankawde
e77f2f0fa5 Add RPC trace filters to Grafana Tempo datasource
Add xrpl.rpc.command (static), xrpl.rpc.status and xrpl.rpc.role
(dynamic) search filters for Phase 2 RPC span attributes.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 22:09:01 +00:00
Pratik Mankawde
ea00fb9bc8 Phase 2: Complete RPC tracing — interface, macros, attributes, tests
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 22:09:01 +00:00
Pratik Mankawde
a707abd6f2 Phase 1c: RPC layer telemetry integration
Add tracing instrumentation to the RPC request handling layer:

TracingInstrumentation.h:
- Convenience macros (XRPL_TRACE_RPC, XRPL_TRACE_TX, etc.) that
  create RAII SpanGuard objects when telemetry is enabled
- XRPL_TRACE_SET_ATTR / XRPL_TRACE_EXCEPTION for span enrichment
- Zero-overhead no-ops when XRPL_ENABLE_TELEMETRY is not defined

RPCHandler.cpp:
- Trace each RPC command with span name "rpc.command.<method>"
- Record command name, API version, role, and status as attributes
- Capture exceptions on the span for error visibility

ServerHandler.cpp:
- Trace HTTP requests ("rpc.request"), WebSocket messages
  ("rpc.ws_message"), and processRequest ("rpc.process")

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 22:09:01 +00:00
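
A rough sketch of the zero-overhead pattern described in the Phase 1c entry above: when XRPL_ENABLE_TELEMETRY is defined the macro creates an RAII guard, otherwise it expands to a no-op. The startSpan/setAttribute calls are assumptions about the wrapper interface, not the actual code.

#if defined(XRPL_ENABLE_TELEMETRY)
// Creates a scope-bound guard; the span ends when the guard is destroyed.
#define XRPL_TRACE_RPC(telemetry, name) \
    auto xrplTraceGuard_ = (telemetry).startSpan(name)
#define XRPL_TRACE_SET_ATTR(key, value) xrplTraceGuard_.setAttribute(key, value)
#else
// Compiles away entirely when telemetry support is not built in.
#define XRPL_TRACE_RPC(telemetry, name) \
    do                                  \
    {                                   \
    } while (0)
#define XRPL_TRACE_SET_ATTR(key, value) \
    do                                  \
    {                                   \
    } while (0)
#endif

// Illustrative use inside an RPC handler:
//     XRPL_TRACE_RPC(app.getTelemetry(), "rpc.command.account_info");
//     XRPL_TRACE_SET_ATTR("xrpl.rpc.role", "admin");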
Pratik Mankawde
99f4f16eb9 Phase 1a: OpenTelemetry plan documentation
Add comprehensive planning documentation for the OpenTelemetry
distributed tracing integration:

- Tracing fundamentals and concepts
- Architecture analysis of rippled's tracing surface area
- Design decisions and trade-offs
- Implementation strategy and code samples
- Configuration reference
- Implementation phases roadmap
- Observability backend comparison
- POC task list and presentation materials

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 22:09:01 +00:00
Pratik Mankawde
f333ae4c11 Fix gersemi cmake formatting (if/endif spacing, target_link_libraries wrapping)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 22:09:01 +00:00
Pratik Mankawde
9ecea839bb Appendix: add presentation.md to document index
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 22:09:01 +00:00
Pratik Mankawde
ba3bf72326 Move presentation.md into OpenTelemetryPlan directory
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 22:09:01 +00:00
Pratik Mankawde
b3609fb345 Respect custom service_instance_id when set in [telemetry] config
Only override serviceInstanceId with the node public key when the user
hasn't explicitly set service_instance_id in the [telemetry] section.
This allows operators to assign human-friendly names (e.g. "validator-1")
while still defaulting to the node's base58 public key.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 22:09:01 +00:00
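
A small sketch of the override rule described above, assuming a TelemetryConfig struct with an optional service_instance_id field; the names are illustrative, not the branch's actual types.

#include <optional>
#include <string>

struct TelemetryConfig
{
    // Present only when [telemetry] contains an explicit service_instance_id.
    std::optional<std::string> serviceInstanceId;
};

std::string
resolveServiceInstanceId(
    TelemetryConfig const& cfg,
    std::string const& nodePublicKeyBase58)
{
    // Respect an operator-assigned name such as "validator-1".
    if (cfg.serviceInstanceId && !cfg.serviceInstanceId->empty())
        return *cfg.serviceInstanceId;
    // Otherwise default to the node's base58 public key.
    return nodePublicKeyBase58;
}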
Pratik Mankawde
99a0b4094e Fix empty service.instance.id by deferring node identity injection
The Telemetry object is constructed in ApplicationImp's member initializer
list where nodeIdentity_ is not yet available, resulting in an empty
service.instance.id resource attribute. Add setServiceInstanceId() virtual
method that Application::setup() calls after nodeIdentity_ is known but
before telemetry_->start() creates the OTel resource.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 22:09:01 +00:00
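
A minimal sketch of the ordering fix described above: the telemetry object is constructed early, the node identity is injected via a setter during setup(), and start() builds the resource afterwards. The class shape is an assumption for illustration.

#include <string>
#include <utility>

class Telemetry
{
public:
    virtual ~Telemetry() = default;

    // Called from Application::setup() once nodeIdentity_ is known.
    virtual void setServiceInstanceId(std::string id)
    {
        serviceInstanceId_ = std::move(id);
    }

    // Builds the OTel resource; must run after setServiceInstanceId().
    virtual void start() {}

private:
    std::string serviceInstanceId_;
};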
Pratik Mankawde
48a3cf56b8 Add node identification filters to Grafana Tempo datasource
Add resource-level search filters for node identity:
- service.instance.id (node public key) — unique node identifier
- service.version (rippled build version)
- xrpl.network.id (numeric network ID)
- xrpl.network.type (mainnet/testnet/devnet/standalone)

These enable filtering traces by specific nodes in multi-node
deployments and by network in mixed environments.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 22:09:01 +00:00
Pratik Mankawde
8e43103310 Add Tempo search filters and metrics generator for trace exploration
Configure Grafana Tempo datasource with pre-built search filters
(service.name, span name, status, duration) for the Explore UI.
Enable Tempo metrics_generator with service-graphs and span-metrics
processors to power Grafana's service map visualization.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 22:09:01 +00:00
Pratik Mankawde
d412791a01 Add Grafana Tempo as second trace backend alongside Jaeger
- Add Tempo 2.7.2 service to docker-compose with local storage
- Add otlp/tempo exporter to OTel Collector traces pipeline
- Add Tempo Grafana datasource provisioning with node graph
- Update 05-configuration-reference.md examples with Tempo
- OTel Collector fans traces to both Jaeger and Tempo simultaneously

Jaeger provides a standalone UI at :16686 for quick lookups.
Tempo is queryable via Grafana Explore using TraceQL and is the
recommended backend for production (supports S3/GCS storage).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 22:09:01 +00:00
Pratik Mankawde
d8d7a40fca Phase 1b: Telemetry core infrastructure
Add the OpenTelemetry telemetry library and supporting infrastructure:

Build system:
- Conan opentelemetry-cpp dependency with OTLP/gRPC exporter
- CMake integration for xrpl_telemetry library target
- Levelization ordering updates

Core library (libxrpl):
- Telemetry class: provider lifecycle, span creation, sampling config
- SpanGuard: RAII span management with attribute/exception helpers
- TelemetryConfig: parse [telemetry] config section
- NullTelemetry: no-op implementation when telemetry is disabled

Application integration:
- Telemetry member in ApplicationImp with start/stop lifecycle
- getTelemetry() interface on Application
- ServiceRegistry telemetry accessor

Docker observability stack:
- OTel Collector, Jaeger, Grafana docker-compose setup
- Collector config with OTLP gRPC receiver and Jaeger exporter

Config and docs:
- Example telemetry config section in xrpld-example.cfg
- Build documentation for telemetry setup

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 22:09:01 +00:00
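
A hedged sketch of the enabled/disabled split described in the Phase 1b entry above: a small Telemetry interface plus a NullTelemetry whose spans do nothing, so call sites stay unconditional. The interfaces in libxrpl are assumptions here.

#include <memory>
#include <string>

struct Span
{
    virtual ~Span() = default;
    virtual void setAttribute(std::string const& key, std::string const& value) = 0;
};

struct Telemetry
{
    virtual ~Telemetry() = default;
    virtual std::unique_ptr<Span> startSpan(std::string const& name) = 0;
};

struct NullSpan : Span
{
    void setAttribute(std::string const&, std::string const&) override {}
};

struct NullTelemetry : Telemetry
{
    // No provider, no exporter: spans are inert when telemetry is disabled.
    std::unique_ptr<Span> startSpan(std::string const&) override
    {
        return std::make_unique<NullSpan>();
    }
};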
Pratik Mankawde
fe6cd31762 Add Phase 4a implementation status to plan docs
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 22:08:13 +00:00
Pratik Mankawde
fd18cf9e01 Merge remote-tracking branch 'origin/develop' into pratik/otel-phase1a-plan-docs 2026-03-11 14:58:44 +00:00
Alex Kremer
7b3724b7a3 fix: Add missed clang-tidy bugprone-inc-dec-in-conditions check (#6526) 2026-03-11 14:04:26 +00:00
Ayaz Salikhov
bee2d112c6 ci: Fix how clang-tidy is run when .clang-tidy is changed (#6521) 2026-03-11 14:18:18 +01:00
Bart
01c977bbfe ci: Fix rules used to determine when to upload Conan recipes (#6524)
The refs as previously used pointed to the source branch, not the target branch. However, determining the target branch is different depending on the GitHub event. The pull request logic was incorrect and needed to be fixed, and the logic inside the workflow could be simplified. Both modifications have been made in this commit.
2026-03-11 13:43:58 +01:00
Bart
3baf5454f2 ci: Only upload artifacts in the XRPLF/rippled repository (#6523)
This change will only attempt to upload artifacts for CI runs performed in the XRPLF/rippled repository.
2026-03-11 11:48:40 +01:00
Michael Legleux
24a5cbaa93 chore: Build voidstar on amd64 only (#6481)
* chore: Build voidstar on amd64 only

* Fatal error if configuring voidstar on non-x86
2026-03-10 23:59:43 +00:00
Michael Legleux
eb7c8c6c7a chore: Use CMake components for install (#6485)
* chore: Use components for install

* rm CMake export targets

* reformat
2026-03-10 23:38:43 +00:00
Alex Kremer
f27d8f3890 chore: Enable clang-tidy bugprone-inc-dec-in-conditions check (#6455) 2026-03-10 20:12:15 +00:00
Alex Kremer
8345cd77df chore: Enable clang-tidy bugprone-unused-raii check (#6505) 2026-03-10 19:48:56 +00:00
Alex Kremer
c38aabdaee chore: Enable clang-tidy bugprone-unhandled-self-assignment check (#6504) 2026-03-10 17:42:49 +00:00
Alex Kremer
a896ed3987 chore: Enable clang-tidy bugprone-optional-value-conversion check (#6470) 2026-03-10 15:56:24 +01:00
Alex Kremer
1a7d67c4db chore: Enable clang-tidy bugprone-reserved-identifier check (#6456) 2026-03-10 10:29:08 +01:00
Alex Kremer
92983d8040 chore: Enable clang-tidy bugprone-too-small-loop-variable check (#6473) 2026-03-10 08:56:44 +00:00
Alex Kremer
320a65f77c chore: Enable clang-tidy bugprone-suspicious-stringview-data-usage check (#6467) 2026-03-10 08:34:27 +00:00
Ayaz Salikhov
45b8c4d732 chore: Update XRPLF/actions (#6508)
This change mainly includes XRPLF/actions#51.
2026-03-09 21:47:22 +00:00
Pratik Mankawde
d6bf13394e Appendix: add 00-tracing-fundamentals.md and POC_taskList.md to document index
Split document index into Plan Documents and Task Lists sections.
These files were introduced in this branch but missing from the index.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 19:03:09 +00:00
Pratik Mankawde
34243e0cc2 Phase 1a: OpenTelemetry plan documentation
Add comprehensive planning documentation for the OpenTelemetry
distributed tracing integration:

- Tracing fundamentals and concepts
- Architecture analysis of rippled's tracing surface area
- Design decisions and trade-offs
- Implementation strategy and code samples
- Configuration reference
- Implementation phases roadmap
- Observability backend comparison
- POC task list and presentation materials

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 19:03:09 +00:00
Alex Kremer
e284969ae4 chore: Enable clang-tidy bugprone-pointer-arithmetic-on-polymorphic-object check (#6469) 2026-03-09 19:36:56 +01:00
Alex Kremer
0335076359 chore: Fix additional clang-tidy issues for unused-local-non-trivial-variable check (#6509) 2026-03-09 17:16:04 +00:00
Ayaz Salikhov
7e2b137131 chore: Use check-pr-title from XRPLF/actions (#6506) 2026-03-09 17:53:52 +01:00
Sergey Kuznetsov
e2290b1a0a feat: Add mutex wrapper from clio (#6447)
This change adds a mutex wrapper copied from clio. The wrapper attaches a mutex to the data it protects, which improves safety and readability.
2026-03-09 16:33:20 +00:00
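
The general shape of the mutex wrapper described above, where the mutex owns the data it protects so the data is reachable only under a lock; this is a generic sketch, not the clio implementation.

#include <mutex>
#include <utility>

template <class T>
class Mutex
{
public:
    explicit Mutex(T value) : value_(std::move(value)) {}

    // The protected value is only visible inside the callback, while locked.
    template <class F>
    auto withLock(F&& f)
    {
        std::lock_guard lock(mutex_);
        return std::forward<F>(f)(value_);
    }

private:
    std::mutex mutex_;
    T value_;
};

// Usage: Mutex<int> counter{0}; counter.withLock([](int& n) { return ++n; });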
Alex Kremer
1ee0567b14 chore: Enable clang-tidy bugprone-suspicious-missing-comma check (#6468) 2026-03-09 15:48:38 +00:00
Alex Kremer
6b301efc8c chore: Enable clang-tidy bugprone-unused-local-non-trivial-variable check (#6458) 2026-03-09 15:25:52 +00:00
dependabot[bot]
9e0d350fca ci: [DEPENDABOT] bump tj-actions/changed-files from 47.0.4 to 47.0.5 (#6501) 2026-03-09 15:27:03 +01:00
Ayaz Salikhov
7b12c00e6b chore: Add custom cmake definitions for gersemi (#6491)
This change adds definitions for our custom functions/macros, so gersemi will nicely format them too.
2026-03-06 13:50:00 +01:00
Vito Tumas
5865bd017f refactor: Update transaction folder structure (#6483)
This change reorganizes the `tx/transactors` directory for consistency and discoverability. There are no behavioral changes; this is a pure refactor. Underscores were chosen to separate words in multi-word names, as this is the more common convention in C++ projects.
 
Specific changes:
- Rename all subdirectories to lowercase/snake_case (`AMM` → `amm`, `Check` → `check`, `NFT` → `nft`, `PermissionedDomain` → `permissioned_domain`, etc.)
- Merge `AMM/` and `Offer/` into `dex/`, including `PermissionedDEXHelpers`
- Rename `MPT/` → `token/`, absorbing `SetTrust` and `Clawback`
- Move top-level transactors into named groups: `account/`, `bridge/`, `credentials/`, `did/`, `escrow/`, `oracle/`, `payment/`, `payment_channel/`, `system/`
- Update all include paths across the codebase and `transactions.macro`
2026-03-06 08:25:31 +00:00
Ayaz Salikhov
af0ec7defd chore: Apply gersemi changes (#6486) 2026-03-05 19:54:44 +00:00
Ayaz Salikhov
0c74270b05 chore: Use gersemi instead of ancient cmake-format (#6486) 2026-03-05 19:54:37 +00:00
Alex Kremer
dde450784d Add Formats and Flags to server_definitions (#6321)
This change implements https://github.com/XRPLF/XRPL-Standards/discussions/418: "System XLS: Add Formats and Flags to server_definitions".
2026-03-05 16:11:27 +00:00
Michael Legleux
08e734457f fix: Fix docs deployment for pull requests (#6482) 2026-03-05 00:12:41 -08:00
Michael Legleux
77518394e8 fix: Stop committing generated docs to prevent repo bloat (#6474) 2026-03-04 19:19:57 -08:00
Ayaz Salikhov
c69091bded chore: Add Git information compile-time info to only one file (#6464)
The existing code added the git commit info (`GIT_COMMIT_HASH` and `GIT_BRANCH`) to every file, which was a problem for leveraging `ccache` to cache build objects. This change adds a separate C++ file from where these compile-time variables are propagated to wherever they are needed. A new CMake file is added to set the commit info if the `git` binary is available.
2026-03-04 19:45:28 +00:00
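
A small sketch of the single-file approach described above: one generated translation unit defines the values, everything else sees only an unchanging header, so ccache hits are preserved. File and variable names are assumptions.

// GitInfo.h - stable header included wherever the values are needed.
extern char const* const gitCommitHash;
extern char const* const gitBranch;

// GitInfo.cpp - the only file regenerated (e.g. via CMake configure_file)
// when the commit or branch changes; only this one object needs rebuilding.
char const* const gitCommitHash = "@GIT_COMMIT_HASH@";
char const* const gitBranch = "@GIT_BRANCH@";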
Alex Kremer
595f0dd461 chore: Enable clang-tidy bugprone-sizeof-expression check (#6466) 2026-03-04 19:15:22 +00:00
Alex Kremer
b451d5e412 chore: Enable clang-tidy bugprone-return-const-ref-from-parameter check (#6459) 2026-03-04 18:10:10 +00:00
Alex Kremer
af97df5a63 chore: Enable clang-tidy bugprone-move-forwarding-reference check (#6457) 2026-03-04 17:03:27 +00:00
Peter Chen
e39954d128 fix: Gateway balance with MPT (#6143)
When `gateway_balances` gets called on an account that is involved in the `EscrowCreate` transaction (with MPT being escrowed), the method returns internal error. This change fixes this case by excluding the MPT type when totaling escrow amount.
2026-03-04 15:50:51 +00:00
tequ
3cd1e3d94e refactor: Update PermissionedDomainDelete to use keylet for sle access (#6063) 2026-03-04 04:11:58 +01:00
Ayaz Salikhov
fcec31ed20 chore: Update pre-commit hooks (#6460) 2026-03-03 20:23:22 +00:00
dependabot[bot]
0abd762781 ci: [DEPENDABOT] bump actions/upload-artifact from 6.0.0 to 7.0.0 (#6450) 2026-03-03 17:17:08 +00:00
Sergey Kuznetsov
5300e65686 tests: Improve stability of Subscribe tests (#6420)
The `Subscribe` tests were flaky because each test performs some operations (e.g. sends transactions) and waits for messages to appear in the subscription with a 100ms timeout. If tests are slow (e.g. compiled in debug mode or running on a slow machine), some of them could fail. This change attempts to synchronize the background Env thread and the test thread by ensuring that all scheduled operations have started before the test thread begins waiting for a websocket message. This is done by limiting the I/O threads of the app inside Env to 1 and adding a synchronization barrier after closing the ledger.
2026-03-03 08:46:55 -05:00
Alex Kremer
afc660a1b5 refactor: Fix clang-tidy bugprone-empty-catch check (#6419)
This change fixes or suppresses instances detected by the `bugprone-empty-catch` clang-tidy check.
2026-03-02 17:08:56 +00:00
Vito Tumas
1a7f824b89 refactor: Splits invariant checks into multiple classes (#6440)
The invariant check system had grown into a single monolithic file pair containing 24 invariant checker classes. The large `InvariantCheck.cpp` file was a frequent source of merge conflicts and difficult to navigate. This refactoring improves maintainability and readability with zero behavioral changes.

In particular, this change:
- Splits `InvariantCheck.h` and `InvariantCheck.cpp` into 10 focused header/source pairs organized by domain under a new `invariants/` subdirectory.
- Extracts the shared `Privilege` enum and `hasPrivilege()` function into a dedicated `InvariantCheckPrivilege.h` header, so domain-specific files can reference them independently.
2026-02-27 21:02:39 +00:00
Sergey Kuznetsov
b58c681189 chore: Make nix hook optional (#6431)
This change makes the `nix` pre-commit hook optional in development environments, and enforced only inside Github Actions.
2026-02-27 13:36:10 -05:00
Mayukha Vadari
404f35d556 test: Grep for failures in CI (#6339)
This change adjusts the CI tests to make it easier to spot errors, without needing to sift through the thousands of lines of output.
2026-02-27 03:01:38 +00:00
Alex Kremer
2e595b6031 chore: Enable clang-tidy checks without issues (#6414)
This change enables all clang-tidy checks that are already passing. It also modifies the clang-tidy CI job, so it runs against all files if .clang-tidy changed.
2026-02-26 18:26:58 +00:00
Bart
3a8a18c2ca refactor: Use uint256 directly as key instead of void pointer (#6313)
This change replaces `void const*` by `uint256 const&` for database fetches.

Object hashes are expressed using the `uint256` data type, and are converted to `void *` when calling the `fetch` or `fetchBatch` functions. However, in these fetch functions they are converted back to `uint256`, making the conversion process unnecessary. In a few cases the underlying pointer is needed, but it can then be easily obtained via `[hash variable].data()`.
2026-02-25 18:23:34 -05:00
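
An illustrative before/after of the signature change described above; fetch here is a stand-in name and uint256 is stubbed so the sketch is self-contained.

#include <array>
#include <cstdint>
#include <cstring>

// Stand-in for xrpl's 256-bit hash type.
struct uint256
{
    std::array<std::uint8_t, 32> bytes{};
    std::uint8_t const* data() const { return bytes.data(); }
};

// Old shape: the hash arrived type-erased and was rebuilt inside the fetch.
bool fetchOld(void const* key)
{
    uint256 rebuilt;
    std::memcpy(rebuilt.bytes.data(), key, rebuilt.bytes.size());
    return rebuilt.bytes[0] != 0;
}

// New shape: the hash is passed through directly; backends that still need
// raw bytes can call hash.data().
bool fetchNew(uint256 const& hash)
{
    return hash.bytes[0] != 0;
}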
Ayaz Salikhov
65e63ebef3 chore: Update cleanup-workspace to delete old .conan2 dir on macOS (#6412) 2026-02-25 01:12:16 +00:00
Valentin Balaschenko
bdd106d992 Explicitly trim the heap after cache sweeps (#6022)
Limited to Linux/glibc builds.
2026-02-24 21:33:13 +00:00
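
What "trim the heap" can look like on Linux/glibc, as described above: malloc_trim() is the glibc call that returns free pages to the OS; the sweep hook name is an assumption.

#include <cstdlib>
#if defined(__GLIBC__)
#include <malloc.h>
#endif

void onCacheSweepFinished()
{
#if defined(__GLIBC__)
    // Ask glibc to release free memory at the top of the heap back to the OS.
    malloc_trim(0);
#endif
}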
Valentin Balaschenko
24cbaf76a5 ci: Update prepare-runner action to fix macOS build environment (empty)
Updates XRPLF/actions prepare-runner to version 2cbf48101 which fixes
pip upgrade failures on macOS runners with Homebrew-managed Python.

* This commit was cherry-picked from "release-3.1", but ended up empty
  because the changes are already present. It is included only for
  accounting purposes, to indicate that all changes/commits from the
  previous release will be in the next one.
2026-02-24 12:52:32 -05:00
Valentin Balaschenko
3a805cc646 Disable featureBatch and fixBatchInnerSigs amendments (#6402) 2026-02-24 12:49:59 -05:00
Sergey Kuznetsov
0fd237d707 chore: Add nix development environment (#6314)
---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-02-23 20:10:07 -05:00
dependabot[bot]
3542daa4cc ci: [DEPENDABOT] bump actions/upload-artifact from 4.6.2 to 6.0.0 (#6396)
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 4.6.2 to 6.0.0.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](ea165f8d65...b7c566a772)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-version: 6.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-20 22:48:01 +00:00
dependabot[bot]
fd9f57ec97 ci: [DEPENDABOT] bump actions/checkout from 4.3.0 to 6.0.2 (#6397)
Bumps [actions/checkout](https://github.com/actions/checkout) from 4.3.0 to 6.0.2.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v4.3.0...de0fac2e4500dabe0009e67214ff5f5447ce83dd)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: 6.0.2
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-20 22:09:48 +00:00
dependabot[bot]
625becff18 ci: [DEPENDABOT] bump actions/setup-python from 5.6.0 to 6.2.0 (#6395)
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 5.6.0 to 6.2.0.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](a26af69be9...a309ff8b42)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-version: 6.2.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-20 21:29:05 +00:00
dependabot[bot]
4bcbc6e50f ci: [DEPENDABOT] bump tj-actions/changed-files from 46.0.5 to 47.0.4 (#6394)
Bumps [tj-actions/changed-files](https://github.com/tj-actions/changed-files) from 46.0.5 to 47.0.4.
- [Release notes](https://github.com/tj-actions/changed-files/releases)
- [Changelog](https://github.com/tj-actions/changed-files/blob/main/HISTORY.md)
- [Commits](ed68ef82c0...7dee1b0c15)

---
updated-dependencies:
- dependency-name: tj-actions/changed-files
  dependency-version: 47.0.4
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-20 19:59:37 +00:00
dependabot[bot]
0bc4a0cfe8 ci: [DEPENDABOT] bump codecov/codecov-action from 5.4.3 to 5.5.2 (#6398)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 5.4.3 to 5.5.2.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](18283e04ce...671740ac38)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-version: 5.5.2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-20 19:11:26 +00:00
Ayaz Salikhov
cb54adefed ci: Build docs in PRs and in private repos (#6400) 2026-02-20 13:41:43 -05:00
Ayaz Salikhov
d03d72bfd5 ci: Add dependabot config (#6379) 2026-02-20 09:19:00 +00:00
Ed Hennis
6f35d94b2f Fix tautological assertion (#6393) 2026-02-20 01:58:47 +00:00
Ayaz Salikhov
2c1fad1023 chore: Apply clang-format width 100 (#6387) 2026-02-19 23:30:00 +00:00
Ayaz Salikhov
25cca46553 chore: Set clang-format width to 100 in config file (#6387) 2026-02-19 23:29:46 +00:00
Ayaz Salikhov
469ce9f291 chore: Set cmake-format width to 100 (#6386) 2026-02-19 19:42:51 +00:00
Sergey Kuznetsov
31302877ab ci: Add clang tidy workflow to ci (#6369) 2026-02-19 14:06:44 -05:00
Jingchen
0976b2b68b refactor: Modularize app/tx (#6228) 2026-02-17 18:10:07 +00:00
1005 changed files with 45401 additions and 18267 deletions


@@ -37,7 +37,7 @@ BinPackParameters: false
BreakBeforeBinaryOperators: false
BreakBeforeTernaryOperators: true
BreakConstructorInitializersBeforeComma: true
ColumnLimit: 120
ColumnLimit: 100
CommentPragmas: "^ IWYU pragma:"
ConstructorInitializerAllOnOneLineOrOnePerLine: true
ConstructorInitializerIndentWidth: 4

.clang-tidy Normal file

@@ -0,0 +1,202 @@
---
Checks: "-*,
bugprone-argument-comment,
bugprone-assert-side-effect,
bugprone-bad-signal-to-kill-thread,
bugprone-bool-pointer-implicit-conversion,
bugprone-casting-through-void,
bugprone-chained-comparison,
bugprone-compare-pointer-to-member-virtual-function,
bugprone-copy-constructor-init,
bugprone-dangling-handle,
bugprone-dynamic-static-initializers,
bugprone-empty-catch,
bugprone-fold-init-type,
bugprone-forward-declaration-namespace,
bugprone-inaccurate-erase,
bugprone-inc-dec-in-conditions,
bugprone-incorrect-enable-if,
bugprone-incorrect-roundings,
bugprone-infinite-loop,
bugprone-integer-division,
bugprone-lambda-function-name,
bugprone-macro-parentheses,
bugprone-macro-repeated-side-effects,
bugprone-misplaced-operator-in-strlen-in-alloc,
bugprone-misplaced-pointer-arithmetic-in-alloc,
bugprone-misplaced-widening-cast,
bugprone-move-forwarding-reference,
bugprone-multi-level-implicit-pointer-conversion,
bugprone-multiple-new-in-one-expression,
bugprone-multiple-statement-macro,
bugprone-no-escape,
bugprone-non-zero-enum-to-bool-conversion,
bugprone-optional-value-conversion,
bugprone-parent-virtual-call,
bugprone-pointer-arithmetic-on-polymorphic-object,
bugprone-posix-return,
bugprone-redundant-branch-condition,
bugprone-reserved-identifier,
bugprone-return-const-ref-from-parameter,
bugprone-shared-ptr-array-mismatch,
bugprone-signal-handler,
bugprone-signed-char-misuse,
bugprone-sizeof-container,
bugprone-sizeof-expression,
bugprone-spuriously-wake-up-functions,
bugprone-standalone-empty,
bugprone-string-constructor,
bugprone-string-integer-assignment,
bugprone-string-literal-with-embedded-nul,
bugprone-stringview-nullptr,
bugprone-suspicious-enum-usage,
bugprone-suspicious-include,
bugprone-suspicious-memory-comparison,
bugprone-suspicious-memset-usage,
bugprone-suspicious-missing-comma,
bugprone-suspicious-realloc-usage,
bugprone-suspicious-semicolon,
bugprone-suspicious-string-compare,
bugprone-suspicious-stringview-data-usage,
bugprone-swapped-arguments,
bugprone-terminating-continue,
bugprone-throw-keyword-missing,
bugprone-too-small-loop-variable,
bugprone-undefined-memory-manipulation,
bugprone-undelegated-constructor,
bugprone-unhandled-exception-at-new,
bugprone-unhandled-self-assignment,
bugprone-unique-ptr-array-mismatch,
bugprone-unsafe-functions,
bugprone-unused-raii,
bugprone-unused-local-non-trivial-variable,
bugprone-virtual-near-miss,
cppcoreguidelines-no-suspend-with-lock,
cppcoreguidelines-virtual-class-destructor,
hicpp-ignored-remove-result,
misc-definitions-in-headers,
misc-header-include-cycle,
misc-misplaced-const,
misc-static-assert,
misc-throw-by-value-catch-by-reference,
misc-unused-alias-decls,
misc-unused-using-decls,
readability-duplicate-include,
readability-enum-initial-value,
readability-misleading-indentation,
readability-non-const-parameter,
readability-redundant-declaration,
readability-reference-to-constructed-temporary,
modernize-deprecated-headers,
modernize-make-shared,
modernize-make-unique,
performance-implicit-conversion-in-loop,
performance-move-constructor-init,
performance-trivially-destructible
"
# ---
# checks that have some issues that need to be resolved:
#
# bugprone-crtp-constructor-accessibility,
# bugprone-move-forwarding-reference,
# bugprone-switch-missing-default-case,
# bugprone-unused-raii,
# bugprone-unused-return-value,
# bugprone-use-after-move,
#
# cppcoreguidelines-misleading-capture-default-by-value,
# cppcoreguidelines-init-variables,
# cppcoreguidelines-pro-type-member-init,
# cppcoreguidelines-pro-type-static-cast-downcast,
# cppcoreguidelines-use-default-member-init,
# cppcoreguidelines-rvalue-reference-param-not-moved,
#
# llvm-namespace-comment,
# misc-const-correctness,
# misc-include-cleaner,
# misc-redundant-expression,
#
# readability-avoid-nested-conditional-operator,
# readability-avoid-return-with-void-value,
# readability-braces-around-statements,
# readability-container-contains,
# readability-container-size-empty,
# readability-convert-member-functions-to-static,
# readability-const-return-type,
# readability-else-after-return,
# readability-implicit-bool-conversion,
# readability-inconsistent-declaration-parameter-name,
# readability-identifier-naming,
# readability-make-member-function-const,
# readability-math-missing-parentheses,
# readability-redundant-inline-specifier,
# readability-redundant-member-init,
# readability-redundant-casting,
# readability-redundant-string-init,
# readability-simplify-boolean-expr,
# readability-static-definition-in-anonymous-namespace,
# readability-suspicious-call-argument,
# readability-use-std-min-max,
# readability-static-accessed-through-instance,
#
# modernize-concat-nested-namespaces,
# modernize-pass-by-value,
# modernize-type-traits,
# modernize-use-designated-initializers,
# modernize-use-emplace,
# modernize-use-equals-default,
# modernize-use-equals-delete,
# modernize-use-override,
# modernize-use-ranges,
# modernize-use-starts-ends-with,
# modernize-use-std-numbers,
# modernize-use-using,
#
# performance-faster-string-find,
# performance-for-range-copy,
# performance-inefficient-vector-operation,
# performance-move-const-arg,
# performance-no-automatic-move,
# ---
#
CheckOptions:
# readability-braces-around-statements.ShortStatementLines: 2
# readability-identifier-naming.MacroDefinitionCase: UPPER_CASE
# readability-identifier-naming.ClassCase: CamelCase
# readability-identifier-naming.StructCase: CamelCase
# readability-identifier-naming.UnionCase: CamelCase
# readability-identifier-naming.EnumCase: CamelCase
# readability-identifier-naming.EnumConstantCase: CamelCase
# readability-identifier-naming.ScopedEnumConstantCase: CamelCase
# readability-identifier-naming.GlobalConstantCase: UPPER_CASE
# readability-identifier-naming.GlobalConstantPrefix: "k"
# readability-identifier-naming.GlobalVariableCase: CamelCase
# readability-identifier-naming.GlobalVariablePrefix: "g"
# readability-identifier-naming.ConstexprFunctionCase: camelBack
# readability-identifier-naming.ConstexprMethodCase: camelBack
# readability-identifier-naming.ClassMethodCase: camelBack
# readability-identifier-naming.ClassMemberCase: camelBack
# readability-identifier-naming.ClassConstantCase: UPPER_CASE
# readability-identifier-naming.ClassConstantPrefix: "k"
# readability-identifier-naming.StaticConstantCase: UPPER_CASE
# readability-identifier-naming.StaticConstantPrefix: "k"
# readability-identifier-naming.StaticVariableCase: UPPER_CASE
# readability-identifier-naming.StaticVariablePrefix: "k"
# readability-identifier-naming.ConstexprVariableCase: UPPER_CASE
# readability-identifier-naming.ConstexprVariablePrefix: "k"
# readability-identifier-naming.LocalConstantCase: camelBack
# readability-identifier-naming.LocalVariableCase: camelBack
# readability-identifier-naming.TemplateParameterCase: CamelCase
# readability-identifier-naming.ParameterCase: camelBack
# readability-identifier-naming.FunctionCase: camelBack
# readability-identifier-naming.MemberCase: camelBack
# readability-identifier-naming.PrivateMemberSuffix: _
# readability-identifier-naming.ProtectedMemberSuffix: _
# readability-identifier-naming.PublicMemberSuffix: ""
# readability-identifier-naming.FunctionIgnoredRegexp: ".*tag_invoke.*"
bugprone-unsafe-functions.ReportMoreUnsafeFunctions: true
# bugprone-unused-return-value.CheckedReturnTypes: ::std::error_code;::std::error_condition;::std::errc
# misc-include-cleaner.IgnoreHeaders: '.*/(detail|impl)/.*;.*(expected|unexpected).*;.*ranges_lower_bound\.h;time.h;stdlib.h;__chrono/.*;fmt/chrono.h;boost/uuid/uuid_hash.hpp'
#
# HeaderFilterRegex: '^.*/(src|tests)/.*\.(h|hpp)$'
WarningsAsErrors: "*"


@@ -1,247 +0,0 @@
_help_parse: Options affecting listfile parsing
parse:
_help_additional_commands:
- Specify structure for custom cmake functions
additional_commands:
target_protobuf_sources:
pargs:
- target
- prefix
kwargs:
PROTOS: "*"
LANGUAGE: cpp
IMPORT_DIRS: "*"
GENERATE_EXTENSIONS: "*"
PLUGIN: "*"
_help_override_spec:
- Override configurations per-command where available
override_spec: {}
_help_vartags:
- Specify variable tags.
vartags: []
_help_proptags:
- Specify property tags.
proptags: []
_help_format: Options affecting formatting.
format:
_help_disable:
- Disable formatting entirely, making cmake-format a no-op
disable: false
_help_line_width:
- How wide to allow formatted cmake files
line_width: 120
_help_tab_size:
- How many spaces to tab for indent
tab_size: 4
_help_use_tabchars:
- If true, lines are indented using tab characters (utf-8
- 0x09) instead of <tab_size> space characters (utf-8 0x20).
- In cases where the layout would require a fractional tab
- character, the behavior of the fractional indentation is
- governed by <fractional_tab_policy>
use_tabchars: false
_help_fractional_tab_policy:
- If <use_tabchars> is True, then the value of this variable
- indicates how fractional indentions are handled during
- whitespace replacement. If set to 'use-space', fractional
- indentation is left as spaces (utf-8 0x20). If set to
- "`round-up` fractional indentation is replaced with a single"
- tab character (utf-8 0x09) effectively shifting the column
- to the next tabstop
fractional_tab_policy: use-space
_help_max_subgroups_hwrap:
- If an argument group contains more than this many sub-groups
- (parg or kwarg groups) then force it to a vertical layout.
max_subgroups_hwrap: 4
_help_max_pargs_hwrap:
- If a positional argument group contains more than this many
- arguments, then force it to a vertical layout.
max_pargs_hwrap: 5
_help_max_rows_cmdline:
- If a cmdline positional group consumes more than this many
- lines without nesting, then invalidate the layout (and nest)
max_rows_cmdline: 2
_help_separate_ctrl_name_with_space:
- If true, separate flow control names from their parentheses
- with a space
separate_ctrl_name_with_space: true
_help_separate_fn_name_with_space:
- If true, separate function names from parentheses with a
- space
separate_fn_name_with_space: false
_help_dangle_parens:
- If a statement is wrapped to more than one line, than dangle
- the closing parenthesis on its own line.
dangle_parens: false
_help_dangle_align:
- If the trailing parenthesis must be 'dangled' on its on
- "line, then align it to this reference: `prefix`: the start"
- "of the statement, `prefix-indent`: the start of the"
- "statement, plus one indentation level, `child`: align to"
- the column of the arguments
dangle_align: prefix
_help_min_prefix_chars:
- If the statement spelling length (including space and
- parenthesis) is smaller than this amount, then force reject
- nested layouts.
min_prefix_chars: 18
_help_max_prefix_chars:
- If the statement spelling length (including space and
- parenthesis) is larger than the tab width by more than this
- amount, then force reject un-nested layouts.
max_prefix_chars: 10
_help_max_lines_hwrap:
- If a candidate layout is wrapped horizontally but it exceeds
- this many lines, then reject the layout.
max_lines_hwrap: 2
_help_line_ending:
- What style line endings to use in the output.
line_ending: unix
_help_command_case:
- Format command names consistently as 'lower' or 'upper' case
command_case: canonical
_help_keyword_case:
- Format keywords consistently as 'lower' or 'upper' case
keyword_case: unchanged
_help_always_wrap:
- A list of command names which should always be wrapped
always_wrap: []
_help_enable_sort:
- If true, the argument lists which are known to be sortable
- will be sorted lexicographicall
enable_sort: true
_help_autosort:
- If true, the parsers may infer whether or not an argument
- list is sortable (without annotation).
autosort: true
_help_require_valid_layout:
- By default, if cmake-format cannot successfully fit
- everything into the desired linewidth it will apply the
- last, most aggressive attempt that it made. If this flag is
- True, however, cmake-format will print error, exit with non-
- zero status code, and write-out nothing
require_valid_layout: false
_help_layout_passes:
- A dictionary mapping layout nodes to a list of wrap
- decisions. See the documentation for more information.
layout_passes: {}
_help_markup: Options affecting comment reflow and formatting.
markup:
_help_bullet_char:
- What character to use for bulleted lists
bullet_char: "-"
_help_enum_char:
- What character to use as punctuation after numerals in an
- enumerated list
enum_char: .
_help_first_comment_is_literal:
- If comment markup is enabled, don't reflow the first comment
- block in each listfile. Use this to preserve formatting of
- your copyright/license statements.
first_comment_is_literal: false
_help_literal_comment_pattern:
- If comment markup is enabled, don't reflow any comment block
- which matches this (regex) pattern. Default is `None`
- (disabled).
literal_comment_pattern: null
_help_fence_pattern:
- Regular expression to match preformat fences in comments
- default= ``r'^\s*([`~]{3}[`~]*)(.*)$'``
fence_pattern: ^\s*([`~]{3}[`~]*)(.*)$
_help_ruler_pattern:
- Regular expression to match rulers in comments default=
- '``r''^\s*[^\w\s]{3}.*[^\w\s]{3}$''``'
ruler_pattern: ^\s*[^\w\s]{3}.*[^\w\s]{3}$
_help_explicit_trailing_pattern:
- If a comment line matches starts with this pattern then it
- is explicitly a trailing comment for the preceding
- argument. Default is '#<'
explicit_trailing_pattern: "#<"
_help_hashruler_min_length:
- If a comment line starts with at least this many consecutive
- hash characters, then don't lstrip() them off. This allows
- for lazy hash rulers where the first hash char is not
- separated by space
hashruler_min_length: 10
_help_canonicalize_hashrulers:
- If true, then insert a space between the first hash char and
- remaining hash chars in a hash ruler, and normalize its
- length to fill the column
canonicalize_hashrulers: true
_help_enable_markup:
- enable comment markup parsing and reflow
enable_markup: false
_help_lint: Options affecting the linter
lint:
_help_disabled_codes:
- a list of lint codes to disable
disabled_codes: []
_help_function_pattern:
- regular expression pattern describing valid function names
function_pattern: "[0-9a-z_]+"
_help_macro_pattern:
- regular expression pattern describing valid macro names
macro_pattern: "[0-9A-Z_]+"
_help_global_var_pattern:
- regular expression pattern describing valid names for
- variables with global (cache) scope
global_var_pattern: "[A-Z][0-9A-Z_]+"
_help_internal_var_pattern:
- regular expression pattern describing valid names for
- variables with global scope (but internal semantic)
internal_var_pattern: _[A-Z][0-9A-Z_]+
_help_local_var_pattern:
- regular expression pattern describing valid names for
- variables with local scope
local_var_pattern: "[a-z][a-z0-9_]+"
_help_private_var_pattern:
- regular expression pattern describing valid names for
- privatedirectory variables
private_var_pattern: _[0-9a-z_]+
_help_public_var_pattern:
- regular expression pattern describing valid names for public
- directory variables
public_var_pattern: "[A-Z][0-9A-Z_]+"
_help_argument_var_pattern:
- regular expression pattern describing valid names for
- function/macro arguments and loop variables.
argument_var_pattern: "[a-z][a-z0-9_]+"
_help_keyword_pattern:
- regular expression pattern describing valid names for
- keywords used in functions or macros
keyword_pattern: "[A-Z][0-9A-Z_]+"
_help_max_conditionals_custom_parser:
- In the heuristic for C0201, how many conditionals to match
- within a loop in before considering the loop a parser.
max_conditionals_custom_parser: 2
_help_min_statement_spacing:
- Require at least this many newlines between statements
min_statement_spacing: 1
_help_max_statement_spacing:
- Require no more than this many newlines between statements
max_statement_spacing: 2
max_returns: 6
max_branches: 12
max_arguments: 5
max_localvars: 15
max_statements: 50
_help_encode: Options affecting file encoding
encode:
_help_emit_byteorder_mark:
- If true, emit the unicode byte-order mark (BOM) at the start
- of the file
emit_byteorder_mark: false
_help_input_encoding:
- Specify the encoding of the input file. Defaults to utf-8
input_encoding: utf-8
_help_output_encoding:
- Specify the encoding of the output file. Defaults to utf-8.
- Note that cmake only claims to support utf-8 so be careful
- when using anything else
output_encoding: utf-8
_help_misc: Miscellaneous configurations options.
misc:
_help_per_command:
- A dictionary containing any per-command configuration
- overrides. Currently only `command_case` is supported.
per_command: {}


@@ -0,0 +1,98 @@
# Custom CMake command definitions for gersemi formatting.
# These stubs teach gersemi the signatures of project-specific commands
# so it can format their invocations correctly.
function(git_branch branch_val)
endfunction()
function(isolate_headers target A B scope)
endfunction()
function(create_symbolic_link target link)
endfunction()
function(xrpl_add_test name)
endfunction()
macro(exclude_from_default target_)
endmacro()
macro(exclude_if_included target_)
endmacro()
function(target_protobuf_sources target prefix)
set(options APPEND_PATH DESCRIPTORS)
set(oneValueArgs
LANGUAGE
OUT_VAR
EXPORT_MACRO
TARGET
PROTOC_OUT_DIR
PLUGIN
PLUGIN_OPTIONS
PROTOC_EXE
)
set(multiValueArgs
PROTOS
IMPORT_DIRS
GENERATE_EXTENSIONS
PROTOC_OPTIONS
DEPENDENCIES
)
cmake_parse_arguments(
THIS_FUNCTION_PREFIX
"${options}"
"${oneValueArgs}"
"${multiValueArgs}"
${ARGN}
)
endfunction()
function(add_module parent name)
endfunction()
function(target_link_modules parent scope)
endfunction()
function(setup_target_for_coverage_gcovr)
set(options NONE)
set(oneValueArgs BASE_DIRECTORY NAME FORMAT)
set(multiValueArgs EXCLUDE EXECUTABLE EXECUTABLE_ARGS DEPENDENCIES)
cmake_parse_arguments(
THIS_FUNCTION_PREFIX
"${options}"
"${oneValueArgs}"
"${multiValueArgs}"
${ARGN}
)
endfunction()
function(add_code_coverage_to_target name scope)
endfunction()
function(verbose_find_path variable name)
set(options
NO_CACHE
REQUIRED
OPTIONAL
NO_DEFAULT_PATH
NO_PACKAGE_ROOT_PATH
NO_CMAKE_PATH
NO_CMAKE_ENVIRONMENT_PATH
NO_SYSTEM_ENVIRONMENT_PATH
NO_CMAKE_SYSTEM_PATH
NO_CMAKE_INSTALL_PREFIX
CMAKE_FIND_ROOT_PATH_BOTH
ONLY_CMAKE_FIND_ROOT_PATH
NO_CMAKE_FIND_ROOT_PATH
)
set(oneValueArgs REGISTRY_VIEW VALIDATOR DOC)
set(multiValueArgs NAMES HINTS PATHS PATH_SUFFIXES)
cmake_parse_arguments(
THIS_FUNCTION_PREFIX
"${options}"
"${oneValueArgs}"
"${multiValueArgs}"
${ARGN}
)
endfunction()

.gersemirc Normal file

@@ -0,0 +1 @@
definitions: [.gersemi]

.github/dependabot.yml vendored Normal file

@@ -0,0 +1,56 @@
version: 2
updates:
- package-ecosystem: github-actions
directory: /
schedule:
interval: weekly
day: monday
time: "04:00"
timezone: Etc/GMT
commit-message:
prefix: "ci: [DEPENDABOT] "
target-branch: develop
- package-ecosystem: github-actions
directory: .github/actions/build-deps/
schedule:
interval: weekly
day: monday
time: "04:00"
timezone: Etc/GMT
commit-message:
prefix: "ci: [DEPENDABOT] "
target-branch: develop
- package-ecosystem: github-actions
directory: .github/actions/generate-version/
schedule:
interval: weekly
day: monday
time: "04:00"
timezone: Etc/GMT
commit-message:
prefix: "ci: [DEPENDABOT] "
target-branch: develop
- package-ecosystem: github-actions
directory: .github/actions/print-env/
schedule:
interval: weekly
day: monday
time: "04:00"
timezone: Etc/GMT
commit-message:
prefix: "ci: [DEPENDABOT] "
target-branch: develop
- package-ecosystem: github-actions
directory: .github/actions/setup-conan/
schedule:
interval: weekly
day: monday
time: "04:00"
timezone: Etc/GMT
commit-message:
prefix: "ci: [DEPENDABOT] "
target-branch: develop


@@ -32,6 +32,16 @@ libxrpl.server > xrpl.server
libxrpl.shamap > xrpl.basics
libxrpl.shamap > xrpl.protocol
libxrpl.shamap > xrpl.shamap
libxrpl.telemetry > xrpl.basics
libxrpl.telemetry > xrpl.telemetry
libxrpl.tx > xrpl.basics
libxrpl.tx > xrpl.conditions
libxrpl.tx > xrpl.core
libxrpl.tx > xrpl.json
libxrpl.tx > xrpl.ledger
libxrpl.tx > xrpl.protocol
libxrpl.tx > xrpl.server
libxrpl.tx > xrpl.tx
test.app > test.jtx
test.app > test.rpc
test.app > test.toplevel
@@ -49,6 +59,7 @@ test.app > xrpl.protocol
test.app > xrpl.rdb
test.app > xrpl.resource
test.app > xrpl.server
test.app > xrpl.tx
test.basics > test.jtx
test.basics > test.unit_test
test.basics > xrpl.basics
@@ -67,6 +78,7 @@ test.consensus > xrpld.app
test.consensus > xrpld.consensus
test.consensus > xrpl.json
test.consensus > xrpl.ledger
test.consensus > xrpl.tx
test.core > test.jtx
test.core > test.toplevel
test.core > test.unit_test
@@ -80,6 +92,7 @@ test.csf > xrpl.basics
test.csf > xrpld.consensus
test.csf > xrpl.json
test.csf > xrpl.protocol
test.csf > xrpl.telemetry
test.json > test.jtx
test.json > xrpl.json
test.jtx > xrpl.basics
@@ -93,6 +106,7 @@ test.jtx > xrpl.net
test.jtx > xrpl.protocol
test.jtx > xrpl.resource
test.jtx > xrpl.server
test.jtx > xrpl.tx
test.ledger > test.jtx
test.ledger > test.toplevel
test.ledger > xrpl.basics
@@ -138,9 +152,11 @@ test.rpc > xrpld.core
test.rpc > xrpld.overlay
test.rpc > xrpld.rpc
test.rpc > xrpl.json
test.rpc > xrpl.ledger
test.rpc > xrpl.protocol
test.rpc > xrpl.resource
test.rpc > xrpl.server
test.rpc > xrpl.tx
test.server > test.jtx
test.server > test.toplevel
test.server > test.unit_test
@@ -159,8 +175,10 @@ test.toplevel > test.csf
test.toplevel > xrpl.json
test.unit_test > xrpl.basics
tests.libxrpl > xrpl.basics
tests.libxrpl > xrpld.telemetry
tests.libxrpl > xrpl.json
tests.libxrpl > xrpl.net
tests.libxrpl > xrpl.telemetry
xrpl.conditions > xrpl.basics
xrpl.conditions > xrpl.protocol
xrpl.core > xrpl.basics
@@ -171,6 +189,7 @@ xrpl.json > xrpl.basics
xrpl.ledger > xrpl.basics
xrpl.ledger > xrpl.protocol
xrpl.ledger > xrpl.server
xrpl.ledger > xrpl.shamap
xrpl.net > xrpl.basics
xrpl.nodestore > xrpl.basics
xrpl.nodestore > xrpl.protocol
@@ -192,12 +211,17 @@ xrpl.server > xrpl.shamap
xrpl.shamap > xrpl.basics
xrpl.shamap > xrpl.nodestore
xrpl.shamap > xrpl.protocol
xrpl.telemetry > xrpl.basics
xrpl.tx > xrpl.basics
xrpl.tx > xrpl.core
xrpl.tx > xrpl.ledger
xrpl.tx > xrpl.protocol
xrpld.app > test.unit_test
xrpld.app > xrpl.basics
xrpld.app > xrpl.conditions
xrpld.app > xrpl.core
xrpld.app > xrpld.consensus
xrpld.app > xrpld.core
xrpld.app > xrpld.telemetry
xrpld.app > xrpl.json
xrpld.app > xrpl.ledger
xrpld.app > xrpl.net
@@ -207,9 +231,13 @@ xrpld.app > xrpl.rdb
xrpld.app > xrpl.resource
xrpld.app > xrpl.server
xrpld.app > xrpl.shamap
xrpld.app > xrpl.telemetry
xrpld.app > xrpl.tx
xrpld.consensus > xrpl.basics
xrpld.consensus > xrpld.telemetry
xrpld.consensus > xrpl.json
xrpld.consensus > xrpl.protocol
xrpld.consensus > xrpl.telemetry
xrpld.core > xrpl.basics
xrpld.core > xrpl.core
xrpld.core > xrpl.json
@@ -220,11 +248,13 @@ xrpld.overlay > xrpl.basics
xrpld.overlay > xrpl.core
xrpld.overlay > xrpld.core
xrpld.overlay > xrpld.peerfinder
xrpld.overlay > xrpld.telemetry
xrpld.overlay > xrpl.json
xrpld.overlay > xrpl.protocol
xrpld.overlay > xrpl.rdb
xrpld.overlay > xrpl.resource
xrpld.overlay > xrpl.server
xrpld.overlay > xrpl.tx
xrpld.peerfinder > xrpl.basics
xrpld.peerfinder > xrpld.core
xrpld.peerfinder > xrpl.protocol
@@ -236,6 +266,7 @@ xrpld.perflog > xrpl.json
xrpld.rpc > xrpl.basics
xrpld.rpc > xrpl.core
xrpld.rpc > xrpld.core
xrpld.rpc > xrpld.telemetry
xrpld.rpc > xrpl.json
xrpld.rpc > xrpl.ledger
xrpld.rpc > xrpl.net
@@ -244,4 +275,6 @@ xrpld.rpc > xrpl.protocol
xrpld.rpc > xrpl.rdb
xrpld.rpc > xrpl.resource
xrpld.rpc > xrpl.server
xrpld.rpc > xrpl.tx
xrpld.shamap > xrpl.shamap
xrpld.telemetry > xrpl.telemetry


@@ -55,7 +55,7 @@ def generate_strategy_matrix(all: bool, config: Config) -> list:
# fee to 500.
# - Bookworm using GCC 15: Debug on linux/amd64, enable code
# coverage (which will be done below).
# - Bookworm using Clang 16: Debug on linux/arm64, enable voidstar.
# - Bookworm using Clang 16: Debug on linux/amd64, enable voidstar.
# - Bookworm using Clang 17: Release on linux/amd64, set the
# reference fee to 1000.
# - Bookworm using Clang 20: Debug on linux/amd64.
@@ -78,7 +78,7 @@ def generate_strategy_matrix(all: bool, config: Config) -> list:
if (
f"{os['compiler_name']}-{os['compiler_version']}" == "clang-16"
and build_type == "Debug"
and architecture["platform"] == "linux/arm64"
and architecture["platform"] == "linux/amd64"
):
cmake_args = f"-Dvoidstar=ON {cmake_args}"
skip = False

.github/workflows/check-pr-title.yml vendored Normal file

@@ -0,0 +1,10 @@
name: Check PR title
on:
pull_request:
types: [opened, edited, reopened, synchronize]
branches: [develop]
jobs:
check_title:
uses: XRPLF/actions/.github/workflows/check-pr-title.yml@943eb8277e8f4b010fde0c826ce4154c36c39509


@@ -33,7 +33,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Determine changed files
# This step checks whether any files have changed that should
# cause the next jobs to run. We do it this way rather than
@@ -46,7 +46,7 @@ jobs:
# that Github considers any skipped jobs to have passed, and in
# turn the required checks as well.
id: changes
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
uses: tj-actions/changed-files@22103cc46bda19c2b464ffe86db46df6922fd323 # v47.0.5
with:
files: |
# These paths are unique to `on-pr.yml`.
@@ -65,9 +65,12 @@ jobs:
.github/workflows/reusable-build.yml
.github/workflows/reusable-build-test-config.yml
.github/workflows/reusable-build-test.yml
.github/workflows/reusable-clang-tidy.yml
.github/workflows/reusable-clang-tidy-files.yml
.github/workflows/reusable-strategy-matrix.yml
.github/workflows/reusable-test.yml
.github/workflows/reusable-upload-recipe.yml
.clang-tidy
.codecov.yml
cmake/**
conan/**
@@ -107,6 +110,17 @@ jobs:
if: ${{ needs.should-run.outputs.go == 'true' }}
uses: ./.github/workflows/reusable-check-rename.yml
clang-tidy:
needs: should-run
if: ${{ needs.should-run.outputs.go == 'true' }}
uses: ./.github/workflows/reusable-clang-tidy.yml
permissions:
issues: write
contents: read
with:
check_only_changed: true
create_issue_on_failure: false
build-test:
needs: should-run
if: ${{ needs.should-run.outputs.go == 'true' }}
@@ -127,9 +141,8 @@ jobs:
needs:
- should-run
- build-test
# Only run when committing to a PR that targets a release branch in the
# XRPLF repository.
if: ${{ github.repository_owner == 'XRPLF' && needs.should-run.outputs.go == 'true' && startsWith(github.ref, 'refs/heads/release') }}
# Only run when committing to a PR that targets a release branch.
if: ${{ github.repository == 'XRPLF/rippled' && needs.should-run.outputs.go == 'true' && github.event_name == 'pull_request' && startsWith(github.event.pull_request.base.ref, 'release') }}
uses: ./.github/workflows/reusable-upload-recipe.yml
secrets:
remote_username: ${{ secrets.CONAN_REMOTE_USERNAME }}
@@ -156,6 +169,7 @@ jobs:
needs:
- check-levelization
- check-rename
- clang-tidy
- build-test
- upload-recipe
- notify-clio


@@ -17,8 +17,7 @@ defaults:
jobs:
upload-recipe:
# Only run when a tag is pushed to the XRPLF repository.
if: ${{ github.repository_owner == 'XRPLF' }}
if: ${{ github.repository == 'XRPLF/rippled' }}
uses: ./.github/workflows/reusable-upload-recipe.yml
secrets:
remote_username: ${{ secrets.CONAN_REMOTE_USERNAME }}


@@ -22,9 +22,12 @@ on:
- ".github/workflows/reusable-build.yml"
- ".github/workflows/reusable-build-test-config.yml"
- ".github/workflows/reusable-build-test.yml"
- ".github/workflows/reusable-clang-tidy.yml"
- ".github/workflows/reusable-clang-tidy-files.yml"
- ".github/workflows/reusable-strategy-matrix.yml"
- ".github/workflows/reusable-test.yml"
- ".github/workflows/reusable-upload-recipe.yml"
- ".clang-tidy"
- ".codecov.yml"
- "cmake/**"
- "conan/**"
@@ -60,6 +63,15 @@ defaults:
shell: bash
jobs:
clang-tidy:
uses: ./.github/workflows/reusable-clang-tidy.yml
permissions:
issues: write
contents: read
with:
check_only_changed: false
create_issue_on_failure: ${{ github.event_name == 'schedule' }}
build-test:
uses: ./.github/workflows/reusable-build-test.yml
strategy:
@@ -80,8 +92,8 @@ jobs:
upload-recipe:
needs: build-test
# Only run when pushing to the develop branch in the XRPLF repository.
if: ${{ github.repository_owner == 'XRPLF' && github.event_name == 'push' && github.ref == 'refs/heads/develop' }}
# Only run when pushing to the develop branch.
if: ${{ github.repository == 'XRPLF/rippled' && github.event_name == 'push' && github.ref == 'refs/heads/develop' }}
uses: ./.github/workflows/reusable-upload-recipe.yml
secrets:
remote_username: ${{ secrets.CONAN_REMOTE_USERNAME }}


@@ -11,7 +11,7 @@ on:
jobs:
# Call the workflow in the XRPLF/actions repo that runs the pre-commit hooks.
run-hooks:
uses: XRPLF/actions/.github/workflows/pre-commit.yml@320be44621ca2a080f05aeb15817c44b84518108
uses: XRPLF/actions/.github/workflows/pre-commit.yml@44856eb0d6ecb7d376370244324ab3dc8b863bad
with:
runs_on: ubuntu-latest
container: '{ "image": "ghcr.io/xrplf/ci/tools-rippled-pre-commit:sha-ab4d1f0" }'
container: '{ "image": "ghcr.io/xrplf/ci/tools-rippled-pre-commit:sha-41ec7c1" }'


@@ -4,6 +4,18 @@ name: Build and publish documentation
on:
push:
branches:
- "develop"
- "release*"
paths:
- ".github/workflows/publish-docs.yml"
- "*.md"
- "**/*.md"
- "docs/**"
- "include/**"
- "src/libxrpl/**"
- "src/xrpld/**"
pull_request:
paths:
- ".github/workflows/publish-docs.yml"
- "*.md"
@@ -23,17 +35,22 @@ defaults:
env:
BUILD_DIR: build
NPROC_SUBTRACT: 2
# ubuntu-latest has only 2 CPUs for private repositories
# https://docs.github.com/en/actions/reference/runners/github-hosted-runners#standard-github-hosted-runners-for--private-repositories
NPROC_SUBTRACT: ${{ github.event.repository.private && '1' || '2' }}
jobs:
publish:
build:
runs-on: ubuntu-latest
container: ghcr.io/xrplf/ci/tools-rippled-documentation:sha-a8c7be1
permissions:
contents: write
steps:
- name: Checkout repository
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Prepare runner
uses: XRPLF/actions/prepare-runner@2cbf481018d930656e9276fcc20dc0e3a0be5b6d
with:
enable_ccache: false
- name: Get number of processors
uses: XRPLF/actions/get-nproc@cf0433aa74563aead044a1e395610c96d65a37cf
@@ -64,9 +81,23 @@ jobs:
cmake -Donly_docs=ON ..
cmake --build . --target docs --parallel ${BUILD_NPROC}
- name: Publish documentation
if: ${{ github.ref_type == 'branch' && github.ref_name == github.event.repository.default_branch }}
uses: peaceiris/actions-gh-pages@4f9cc6602d3f66b9c108549d475ec49e8ef4d45e # v4.0.0
- name: Create documentation artifact
if: ${{ github.event_name == 'push' }}
uses: actions/upload-pages-artifact@7b1f4a764d45c48632c6b24a0339c27f5614fb0b # v4.0.0
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_dir: ${{ env.BUILD_DIR }}/docs/html
path: ${{ env.BUILD_DIR }}/docs/html
deploy:
if: ${{ github.event_name == 'push' }}
needs: build
runs-on: ubuntu-latest
permissions:
pages: write
id-token: write
environment:
name: github-pages
url: ${{ steps.deploy.outputs.page_url }}
steps:
- name: Deploy to GitHub Pages
id: deploy
uses: actions/deploy-pages@d6db90164ac5ed86f2b6aed7e0febac5b3c0c03e # v4.0.5

View File

@@ -104,7 +104,7 @@ jobs:
uses: XRPLF/actions/cleanup-workspace@cf0433aa74563aead044a1e395610c96d65a37cf
- name: Checkout repository
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Prepare runner
uses: XRPLF/actions/prepare-runner@2cbf481018d930656e9276fcc20dc0e3a0be5b6d
@@ -176,8 +176,8 @@ jobs:
fi
- name: Upload the binary (Linux)
if: ${{ github.repository_owner == 'XRPLF' && runner.os == 'Linux' }}
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
if: ${{ github.repository == 'XRPLF/rippled' && runner.os == 'Linux' }}
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
with:
name: xrpld-${{ inputs.config_name }}
path: ${{ env.BUILD_DIR }}/xrpld
@@ -253,8 +253,8 @@ jobs:
--target coverage
- name: Upload coverage report
if: ${{ github.repository_owner == 'XRPLF' && !inputs.build_only && env.COVERAGE_ENABLED == 'true' }}
uses: codecov/codecov-action@18283e04ce6e62d37312384ff67231eb8fd56d24 # v5.4.3
if: ${{ github.repository == 'XRPLF/rippled' && !inputs.build_only && env.COVERAGE_ENABLED == 'true' }}
uses: codecov/codecov-action@671740ac38dd9b0130fbe1cec585b89eea48d3de # v5.5.2
with:
disable_search: true
disable_telem: true

View File

@@ -18,7 +18,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Check levelization
run: .github/scripts/levelization/generate.sh
- name: Check for differences

View File

@@ -18,7 +18,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Check definitions
run: .github/scripts/rename/definitions.sh .
- name: Check copyright notices

View File

@@ -0,0 +1,162 @@
name: Run clang-tidy on files
on:
workflow_call:
inputs:
files:
description: "List of files to check (empty means check all files)"
type: string
default: ""
create_issue_on_failure:
description: "Whether to create an issue if the check failed"
type: boolean
default: false
defaults:
run:
shell: bash
env:
# Conan installs the generators in the build/generators directory, see the
# layout() method in conanfile.py. We then run CMake from the build directory.
BUILD_DIR: build
BUILD_TYPE: Release
jobs:
run-clang-tidy:
name: Run clang tidy
runs-on: ["self-hosted", "Linux", "X64", "heavy"]
container: "ghcr.io/xrplf/ci/debian-trixie:clang-21-sha-53033a2"
permissions:
issues: write
contents: read
steps:
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Prepare runner
uses: XRPLF/actions/prepare-runner@2cbf481018d930656e9276fcc20dc0e3a0be5b6d
with:
enable_ccache: false
- name: Print build environment
uses: ./.github/actions/print-env
- name: Get number of processors
uses: XRPLF/actions/get-nproc@cf0433aa74563aead044a1e395610c96d65a37cf
id: nproc
- name: Setup Conan
uses: ./.github/actions/setup-conan
- name: Build dependencies
uses: ./.github/actions/build-deps
with:
build_nproc: ${{ steps.nproc.outputs.nproc }}
build_type: ${{ env.BUILD_TYPE }}
log_verbosity: verbose
- name: Configure CMake
working-directory: ${{ env.BUILD_DIR }}
run: |
cmake \
-G 'Ninja' \
-DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
-DCMAKE_BUILD_TYPE="${BUILD_TYPE}" \
-Dtests=ON \
-Dwerr=ON \
-Dxrpld=ON \
..
# clang-tidy needs headers generated from proto files
- name: Build libxrpl.libpb
working-directory: ${{ env.BUILD_DIR }}
run: |
ninja -j ${{ steps.nproc.outputs.nproc }} xrpl.libpb
- name: Run clang tidy
id: run_clang_tidy
continue-on-error: true
env:
FILES: ${{ inputs.files }}
run: |
run-clang-tidy -j ${{ steps.nproc.outputs.nproc }} -p "$BUILD_DIR" $FILES 2>&1 | tee clang-tidy-output.txt
- name: Upload clang-tidy output
if: steps.run_clang_tidy.outcome != 'success'
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
with:
name: clang-tidy-results
path: clang-tidy-output.txt
retention-days: 30
- name: Create an issue
if: steps.run_clang_tidy.outcome != 'success' && inputs.create_issue_on_failure
id: create_issue
shell: bash
env:
GH_TOKEN: ${{ github.token }}
run: |
# Prepare issue body with clang-tidy output
cat > issue.md <<EOF
## Clang-tidy Check Failed
**Workflow:** ${{ github.workflow }}
**Run ID:** ${{ github.run_id }}
**Commit:** ${{ github.sha }}
**Branch/Ref:** ${{ github.ref }}
**Triggered by:** ${{ github.actor }}
### Clang-tidy Output:
\`\`\`
EOF
# Append clang-tidy output (filter for errors and warnings)
if [ -f clang-tidy-output.txt ]; then
# Extract lines containing 'error:', 'warning:', or 'note:'
grep -E '(error:|warning:|note:)' clang-tidy-output.txt > filtered-output.txt || true
# If filtered output is empty, use original (might be a different error format)
if [ ! -s filtered-output.txt ]; then
cp clang-tidy-output.txt filtered-output.txt
fi
# Truncate if too large
head -c 60000 filtered-output.txt >> issue.md
if [ "$(wc -c < filtered-output.txt)" -gt 60000 ]; then
echo "" >> issue.md
echo "... (output truncated, see artifacts for full output)" >> issue.md
fi
rm filtered-output.txt
else
echo "No output file found" >> issue.md
fi
cat >> issue.md <<EOF
\`\`\`
**Workflow run:** ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
---
*This issue was automatically created by the clang-tidy workflow.*
EOF
# Create the issue
gh issue create \
--label "Bug,Clang-tidy" \
--title "Clang-tidy check failed" \
--body-file ./issue.md \
> create_issue.log
created_issue="$(sed 's|.*/||' create_issue.log)"
echo "created_issue=$created_issue" >> $GITHUB_OUTPUT
echo "Created issue #$created_issue"
rm -f create_issue.log issue.md clang-tidy-output.txt
- name: Fail the workflow if clang-tidy failed
if: steps.run_clang_tidy.outcome != 'success'
run: |
echo "Clang-tidy check failed!"
exit 1

View File

@@ -0,0 +1,55 @@
name: Clang-tidy check
on:
workflow_call:
inputs:
check_only_changed:
description: "Check only changed files in PR. If false, checks all files in the repository."
type: boolean
default: false
create_issue_on_failure:
description: "Whether to create an issue if the check failed"
type: boolean
default: false
defaults:
run:
shell: bash
jobs:
determine-files:
name: Determine files to check
if: ${{ inputs.check_only_changed }}
runs-on: ubuntu-latest
outputs:
clang_tidy_config_changed: ${{ steps.changed_clang_tidy.outputs.any_changed }}
any_cpp_changed: ${{ steps.changed_files.outputs.any_changed }}
all_changed_files: ${{ steps.changed_files.outputs.all_changed_files }}
steps:
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Get changed C++ files
id: changed_files
uses: tj-actions/changed-files@22103cc46bda19c2b464ffe86db46df6922fd323 # v47.0.5
with:
files: |
**/*.cpp
**/*.h
**/*.ipp
separator: " "
- name: Get changed clang-tidy configuration
id: changed_clang_tidy
uses: tj-actions/changed-files@22103cc46bda19c2b464ffe86db46df6922fd323 # v47.0.5
with:
files: |
.clang-tidy
run-clang-tidy:
needs: [determine-files]
if: ${{ always() && !cancelled() && (!inputs.check_only_changed || needs.determine-files.outputs.any_cpp_changed == 'true' || needs.determine-files.outputs.clang_tidy_config_changed == 'true') }}
uses: ./.github/workflows/reusable-clang-tidy-files.yml
with:
files: ${{ needs.determine-files.outputs.clang_tidy_config_changed == 'true' && '' || (inputs.check_only_changed && needs.determine-files.outputs.all_changed_files || '') }}
create_issue_on_failure: ${{ inputs.create_issue_on_failure }}

View File

@@ -29,10 +29,10 @@ jobs:
matrix: ${{ steps.generate.outputs.matrix }}
steps:
- name: Checkout repository
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Set up Python
uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
with:
python-version: 3.13

View File

@@ -43,7 +43,7 @@ jobs:
container: ghcr.io/xrplf/ci/ubuntu-noble:gcc-13-sha-5dd7158
steps:
- name: Checkout repository
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Generate build version number
id: version
@@ -69,22 +69,28 @@ jobs:
conan export . --version=${{ steps.version.outputs.version }}
conan upload --confirm --check --remote="${REMOTE_NAME}" xrpl/${{ steps.version.outputs.version }}
# When this workflow is triggered by a push event, it will always be when merging into the
# 'develop' branch, see on-trigger.yml.
- name: Upload Conan recipe (develop)
if: ${{ github.ref == 'refs/heads/develop' }}
if: ${{ github.event_name == 'push' }}
env:
REMOTE_NAME: ${{ inputs.remote_name }}
run: |
conan export . --version=develop
conan upload --confirm --check --remote="${REMOTE_NAME}" xrpl/develop
# When this workflow is triggered by a pull request event, it will always be when merging into
# one of the 'release' branches, see on-pr.yml.
- name: Upload Conan recipe (rc)
if: ${{ startsWith(github.ref, 'refs/heads/release') }}
if: ${{ github.event_name == 'pull_request' }}
env:
REMOTE_NAME: ${{ inputs.remote_name }}
run: |
conan export . --version=rc
conan upload --confirm --check --remote="${REMOTE_NAME}" xrpl/rc
# When this workflow is triggered by a tag event, it will always be when tagging a final
# release, see on-tag.yml.
- name: Upload Conan recipe (release)
if: ${{ github.event_name == 'tag' }}
env:

View File

@@ -67,7 +67,7 @@ jobs:
uses: XRPLF/actions/cleanup-workspace@cf0433aa74563aead044a1e395610c96d65a37cf
- name: Checkout repository
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Prepare runner
uses: XRPLF/actions/prepare-runner@2cbf481018d930656e9276fcc20dc0e3a0be5b6d
@@ -103,11 +103,11 @@ jobs:
sanitizers: ${{ matrix.sanitizers }}
- name: Log into Conan remote
if: ${{ github.repository_owner == 'XRPLF' && (github.event_name == 'push' || github.event_name == 'workflow_dispatch') }}
if: ${{ github.repository == 'XRPLF/rippled' && (github.event_name == 'push' || github.event_name == 'workflow_dispatch') }}
run: conan remote login "${CONAN_REMOTE_NAME}" "${{ secrets.CONAN_REMOTE_USERNAME }}" --password "${{ secrets.CONAN_REMOTE_PASSWORD }}"
- name: Upload Conan packages
if: ${{ github.repository_owner == 'XRPLF' && (github.event_name == 'push' || github.event_name == 'workflow_dispatch') }}
if: ${{ github.repository == 'XRPLF/rippled' && (github.event_name == 'push' || github.event_name == 'workflow_dispatch') }}
env:
FORCE_OPTION: ${{ github.event.inputs.force_upload == 'true' && '--force' || '' }}
run: conan upload "*" --remote="${CONAN_REMOTE_NAME}" --confirm ${FORCE_OPTION}

.gitignore vendored
View File

@@ -42,6 +42,9 @@ gmon.out
# Locally patched Conan recipes
external/conan-center-index/
# Local conan directory
.conan
# XCode IDE.
*.pbxuser
!default.pbxuser
@@ -72,5 +75,8 @@ DerivedData
/.claude
/CLAUDE.md
# Direnv's directory
/.direnv
# clangd cache
/.cache

View File

@@ -20,30 +20,29 @@ repos:
args: [--assume-in-merge]
- repo: https://github.com/pre-commit/mirrors-clang-format
rev: 75ca4ad908dc4a99f57921f29b7e6c1521e10b26 # frozen: v21.1.8
rev: cd481d7b0bfb5c7b3090c21846317f9a8262e891 # frozen: v22.1.0
hooks:
- id: clang-format
args: [--style=file]
"types_or": [c++, c, proto]
- repo: https://github.com/cheshirekow/cmake-format-precommit
rev: e2c2116d86a80e72e7146a06e68b7c228afc6319 # frozen: v0.6.13
- repo: https://github.com/BlankSpruce/gersemi
rev: 0.26.0
hooks:
- id: cmake-format
additional_dependencies: [PyYAML]
- id: gersemi
- repo: https://github.com/rbubley/mirrors-prettier
rev: 5ba47274f9b181bce26a5150a725577f3c336011 # frozen: v3.6.2
rev: c2bc67fe8f8f549cc489e00ba8b45aa18ee713b1 # frozen: v3.8.1
hooks:
- id: prettier
- repo: https://github.com/psf/black-pre-commit-mirror
rev: 831207fd435b47aeffdf6af853097e64322b4d44 # frozen: v25.12.0
rev: ea488cebbfd88a5f50b8bd95d5c829d0bb76feb8 # frozen: 26.1.0
hooks:
- id: black
- repo: https://github.com/streetsidesoftware/cspell-cli
rev: 1cfa010f078c354f3ffb8413616280cc28f5ba21 # frozen: v9.4.0
rev: a42085ade523f591dca134379a595e7859986445 # frozen: v9.7.0
hooks:
- id: cspell # Spell check changed files
exclude: .config/cspell.config.yaml
@@ -57,6 +56,24 @@ repos:
- .git/COMMIT_EDITMSG
stages: [commit-msg]
- repo: local
hooks:
- id: nix-fmt
name: Format Nix files
entry: |
bash -c '
if command -v nix &> /dev/null || [ "$GITHUB_ACTIONS" = "true" ]; then
nix --extra-experimental-features "nix-command flakes" fmt "$@"
else
echo "Skipping nix-fmt: nix not installed and not in GitHub Actions"
exit 0
fi
' --
language: system
types:
- nix
pass_filenames: true
exclude: |
(?x)^(
external/.*|

View File

@@ -22,6 +22,19 @@ API version 2 is available in `rippled` version 2.0.0 and later. See [API-VERSIO
This version is supported by all `rippled` versions. For WebSocket and HTTP JSON-RPC requests, it is currently the default API version used when no `api_version` is specified.
## Unreleased
This section contains changes targeting a future version.
### Additions
- `server_definitions`: Added the following new sections to the response ([#6321](https://github.com/XRPLF/rippled/pull/6321)):
- `TRANSACTION_FORMATS`: Describes the fields and their optionality for each transaction type, including common fields shared across all transactions.
- `LEDGER_ENTRY_FORMATS`: Describes the fields and their optionality for each ledger entry type, including common fields shared across all ledger entries.
- `TRANSACTION_FLAGS`: Maps transaction type names to their supported flags and flag values.
- `LEDGER_ENTRY_FLAGS`: Maps ledger entry type names to their flags and flag values.
- `ACCOUNT_SET_FLAGS`: Maps AccountSet flag names (asf flags) to their numeric values.
## XRP Ledger server version 3.1.0
[Version 3.1.0](https://github.com/XRPLF/rippled/releases/tag/3.1.0) was released on Jan 27, 2026.

View File

@@ -1,94 +1,84 @@
cmake_minimum_required(VERSION 3.16)
if (POLICY CMP0074)
if(POLICY CMP0074)
cmake_policy(SET CMP0074 NEW)
endif ()
if (POLICY CMP0077)
endif()
if(POLICY CMP0077)
cmake_policy(SET CMP0077 NEW)
endif ()
endif()
# Fix "unrecognized escape" issues when passing CMAKE_MODULE_PATH on Windows.
if (DEFINED CMAKE_MODULE_PATH)
if(DEFINED CMAKE_MODULE_PATH)
file(TO_CMAKE_PATH "${CMAKE_MODULE_PATH}" CMAKE_MODULE_PATH)
endif ()
endif()
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/cmake")
project(xrpl)
set(CMAKE_CXX_EXTENSIONS OFF)
set(CMAKE_CXX_STANDARD 20)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(CMAKE_EXPORT_COMPILE_COMMANDS ON)
include(CompilationEnv)
if (is_gcc)
if(is_gcc)
# GCC-specific fixes
add_compile_options(-Wno-unknown-pragmas -Wno-subobject-linkage)
# -Wno-subobject-linkage can be removed when we upgrade GCC version to at least 13.3
elseif (is_clang)
elseif(is_clang)
# Clang-specific fixes
add_compile_options(-Wno-unknown-warning-option) # Ignore unknown warning options
elseif (is_msvc)
elseif(is_msvc)
# MSVC-specific fixes
add_compile_options(/wd4068) # Ignore unknown pragmas
endif ()
endif()
# Enable ccache to speed up builds.
include(Ccache)
# make GIT_COMMIT_HASH define available to all sources
find_package(Git)
if (Git_FOUND)
execute_process(COMMAND ${GIT_EXECUTABLE} --git-dir=${CMAKE_CURRENT_SOURCE_DIR}/.git rev-parse HEAD
OUTPUT_STRIP_TRAILING_WHITESPACE OUTPUT_VARIABLE gch)
if (gch)
set(GIT_COMMIT_HASH "${gch}")
message(STATUS gch: ${GIT_COMMIT_HASH})
add_definitions(-DGIT_COMMIT_HASH="${GIT_COMMIT_HASH}")
endif ()
execute_process(COMMAND ${GIT_EXECUTABLE} --git-dir=${CMAKE_CURRENT_SOURCE_DIR}/.git rev-parse --abbrev-ref HEAD
OUTPUT_STRIP_TRAILING_WHITESPACE OUTPUT_VARIABLE gb)
if (gb)
set(GIT_BRANCH "${gb}")
message(STATUS gb: ${GIT_BRANCH})
add_definitions(-DGIT_BRANCH="${GIT_BRANCH}")
endif ()
endif () # git
if (thread_safety_analysis)
add_compile_options(-Wthread-safety -D_LIBCPP_ENABLE_THREAD_SAFETY_ANNOTATIONS
-DXRPL_ENABLE_THREAD_SAFETY_ANNOTATIONS)
if(thread_safety_analysis)
add_compile_options(
-Wthread-safety
-D_LIBCPP_ENABLE_THREAD_SAFETY_ANNOTATIONS
-DXRPL_ENABLE_THREAD_SAFETY_ANNOTATIONS
)
add_compile_options("-stdlib=libc++")
add_link_options("-stdlib=libc++")
endif ()
endif()
include(CheckCXXCompilerFlag)
include(FetchContent)
include(ExternalProject)
include(CMakeFuncs) # must come *after* ExternalProject b/c it overrides one function in EP
if (target)
message(FATAL_ERROR "The target option has been removed - use native cmake options to control build")
endif ()
if(target)
message(
FATAL_ERROR
"The target option has been removed - use native cmake options to control build"
)
endif()
include(XrplSanity)
include(XrplVersion)
include(XrplSettings)
# this check has to remain in the top-level cmake because of the early return statement
if (packages_only)
if (NOT TARGET rpm)
message(FATAL_ERROR "packages_only requested, but targets were not created - is docker installed?")
endif ()
if(packages_only)
if(NOT TARGET rpm)
message(
FATAL_ERROR
"packages_only requested, but targets were not created - is docker installed?"
)
endif()
return()
endif ()
endif()
include(XrplCompiler)
include(XrplSanitizers)
include(XrplInterface)
option(only_docs "Include only the docs target?" FALSE)
include(XrplDocs)
if (only_docs)
if(only_docs)
return()
endif ()
endif()
include(deps/Boost)
@@ -107,41 +97,57 @@ find_package(xxHash REQUIRED)
target_link_libraries(
xrpl_libs
INTERFACE ed25519::ed25519
lz4::lz4
OpenSSL::Crypto
OpenSSL::SSL
secp256k1::secp256k1
soci::soci
SQLite::SQLite3)
INTERFACE
ed25519::ed25519
lz4::lz4
OpenSSL::Crypto
OpenSSL::SSL
secp256k1::secp256k1
soci::soci
SQLite::SQLite3
)
option(rocksdb "Enable RocksDB" ON)
if (rocksdb)
if(rocksdb)
find_package(RocksDB REQUIRED)
set_target_properties(RocksDB::rocksdb PROPERTIES INTERFACE_COMPILE_DEFINITIONS XRPL_ROCKSDB_AVAILABLE=1)
set_target_properties(
RocksDB::rocksdb
PROPERTIES INTERFACE_COMPILE_DEFINITIONS XRPL_ROCKSDB_AVAILABLE=1
)
target_link_libraries(xrpl_libs INTERFACE RocksDB::rocksdb)
endif ()
endif()
# OpenTelemetry distributed tracing (optional).
# When ON, links against opentelemetry-cpp and defines XRPL_ENABLE_TELEMETRY
# so that tracing macros in TracingInstrumentation.h are compiled in.
# When OFF (default), all tracing code compiles to no-ops with zero overhead.
# Enable via: conan install -o telemetry=True, or cmake -Dtelemetry=ON.
option(telemetry "Enable OpenTelemetry tracing" OFF)
if(telemetry)
find_package(opentelemetry-cpp CONFIG REQUIRED)
add_compile_definitions(XRPL_ENABLE_TELEMETRY)
message(STATUS "OpenTelemetry tracing enabled")
endif()
# Work around changes to Conan recipe for now.
if (TARGET nudb::core)
if(TARGET nudb::core)
set(nudb nudb::core)
elseif (TARGET NuDB::nudb)
elseif(TARGET NuDB::nudb)
set(nudb NuDB::nudb)
else ()
else()
message(FATAL_ERROR "unknown nudb target")
endif ()
endif()
target_link_libraries(xrpl_libs INTERFACE ${nudb})
if (coverage)
if(coverage)
include(XrplCov)
endif ()
endif()
set(PROJECT_EXPORT_SET XrplExports)
include(XrplCore)
include(XrplInstall)
include(XrplValidatorKeys)
if (tests)
if(tests)
include(CTest)
add_subdirectory(src/tests/libxrpl)
endif ()
endif()

View File

@@ -0,0 +1,244 @@
# Distributed Tracing Fundamentals
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Next**: [Architecture Analysis](./01-architecture-analysis.md)
---
## What is Distributed Tracing?
Distributed tracing is a method for tracking requests and other units of work as they flow through a distributed system. In a network like the XRP Ledger, a single transaction touches multiple independent nodes, each with its own logs and no shared memory. Distributed tracing connects these dots.
**Without tracing:** You see isolated logs on each node with no way to correlate them.
**With tracing:** You see the complete journey of a transaction or an event across all nodes it touched.
---
## Core Concepts
### 1. Trace
A **trace** represents the entire journey of a request through the system. It has a unique `trace_id` that stays constant across all nodes.
```
Trace ID: abc123
├── Node A: received transaction
├── Node B: relayed transaction
├── Node C: included in consensus
└── Node D: applied to ledger
```
### 2. Span
A **span** represents a single unit of work within a trace. Each span has:
| Attribute | Description | Example |
| ---------------- | --------------------- | -------------------------- |
| `trace_id` | Links to parent trace | `abc123` |
| `span_id` | Unique identifier | `span456` |
| `parent_span_id` | Parent span (if any) | `p_span123` |
| `name` | Operation name | `rpc.submit` |
| `start_time` | When work began | `2024-01-15T10:30:00Z` |
| `end_time` | When work completed | `2024-01-15T10:30:00.050Z` |
| `attributes` | Key-value metadata | `tx.hash=ABC...` |
| `status`          | Outcome: OK or ERROR (with message) | `OK`                       |
### 3. Trace Context
**Trace context** is the data that propagates between services to link spans together. It contains:
- `trace_id` - The trace this span belongs to
- `span_id` - The current span (becomes parent for child spans)
- `trace_flags` - Sampling decisions
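To make these pieces concrete, here is a minimal, self-contained C++ sketch of the span and context data described above. The names and layout are illustrative only; they are not the rippled or OpenTelemetry API.

```cpp
#include <array>
#include <cstdint>
#include <optional>
#include <string>

// Illustrative only: the data every span carries, per the tables above.
struct SpanContext
{
    std::array<std::uint8_t, 16> traceId{};  // constant across all nodes
    std::array<std::uint8_t, 8> spanId{};    // unique per span
    std::uint8_t traceFlags = 0;             // e.g. bit 0 = sampled
};

struct Span
{
    SpanContext context;
    std::optional<std::array<std::uint8_t, 8>> parentSpanId;  // empty for a root span
    std::string name;              // e.g. "rpc.submit"
    std::uint64_t startTimeNs = 0;
    std::uint64_t endTimeNs = 0;
};
```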
---
## How Spans Form a Trace
Spans have parent-child relationships forming a tree structure:
```mermaid
flowchart TB
subgraph trace["Trace: abc123"]
A["tx.submit<br/>span_id: 001<br/>50ms"] --> B["tx.validate<br/>span_id: 002<br/>5ms"]
A --> C["tx.relay<br/>span_id: 003<br/>10ms"]
A --> D["tx.apply<br/>span_id: 004<br/>30ms"]
D --> E["ledger.update<br/>span_id: 005<br/>20ms"]
end
style A fill:#0d47a1,stroke:#082f6a,color:#ffffff
style B fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style C fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style D fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style E fill:#bf360c,stroke:#8c2809,color:#ffffff
```
The same trace visualized as a **timeline (Gantt chart)**:
```
Time → 0ms 10ms 20ms 30ms 40ms 50ms
├───────────────────────────────────────────┤
tx.submit│▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓│
├─────┤
tx.valid │▓▓▓▓▓│
│ ├──────────┤
tx.relay │ │▓▓▓▓▓▓▓▓▓▓│
│ ├────────────────────────────┤
tx.apply │ │▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓│
│ ├──────────────────┤
ledger │ │▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓│
```
---
## Distributed Traces Across Nodes
In distributed systems like rippled, traces span **multiple independent nodes**. The trace context must be propagated in network messages:
```mermaid
sequenceDiagram
participant Client
participant NodeA as Node A
participant NodeB as Node B
participant NodeC as Node C
Client->>NodeA: Submit TX<br/>(no trace context)
Note over NodeA: Creates new trace<br/>trace_id: abc123<br/>span: tx.receive
NodeA->>NodeB: Relay TX<br/>(trace_id: abc123, parent: 001)
Note over NodeB: Creates child span<br/>span: tx.relay<br/>parent_span_id: 001
NodeA->>NodeC: Relay TX<br/>(trace_id: abc123, parent: 001)
Note over NodeC: Creates child span<br/>span: tx.relay<br/>parent_span_id: 001
Note over NodeA,NodeC: All spans share trace_id: abc123<br/>enabling correlation across nodes
```
---
## Context Propagation
For traces to work across nodes, **trace context must be propagated** in messages.
### What's in the Context (32 bytes)
| Field | Size | Description |
| ------------- | ---------- | ------------------------------------------------------- |
| `trace_id` | 16 bytes | Identifies the entire trace (constant across all nodes) |
| `span_id` | 8 bytes | The sender's current span (becomes parent on receiver) |
| `trace_flags` | 4 bytes | Sampling decision flags |
| `trace_state` | ~0-4 bytes | Optional vendor-specific data |
### How span_id Changes at Each Hop
Only **one** `span_id` travels in the context - the sender's current span. Each node:
1. Extracts the received `span_id` and uses it as the `parent_span_id`
2. Creates a **new** `span_id` for its own span
3. Sends its own `span_id` as the parent when forwarding
```
Node A Node B Node C
────── ────── ──────
Span AAA Span BBB Span CCC
│ │ │
▼ ▼ ▼
Context out: Context out: Context out:
├─ trace_id: abc123 ├─ trace_id: abc123 ├─ trace_id: abc123
├─ span_id: AAA ──────────► ├─ span_id: BBB ──────────► ├─ span_id: CCC ──────►
└─ flags: 01 └─ flags: 01 └─ flags: 01
│ │
parent = AAA parent = BBB
```
The `trace_id` stays constant, but `span_id` **changes at every hop** to maintain the parent-child chain.
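A small self-contained sketch of that per-hop rule (hypothetical helper names, not the rippled API): the receiver keeps the `trace_id`, records the incoming `span_id` as its parent, and mints a fresh `span_id` before forwarding.

```cpp
#include <cstdint>
#include <cstdio>
#include <random>
#include <string>

struct Context
{
    std::string traceId;  // constant for the whole trace
    std::string spanId;   // the sender's current span
};

// Hypothetical 8-byte span-id generator, hex-encoded for readability.
std::string newSpanId()
{
    static std::mt19937_64 rng{std::random_device{}()};
    char buf[17];
    std::snprintf(
        buf, sizeof(buf), "%016llx", static_cast<unsigned long long>(rng()));
    return buf;
}

// What each node does when a traced message arrives.
Context onMessageReceived(Context const& incoming)
{
    std::string const parentSpanId = incoming.spanId;  // becomes our parent
    Context outgoing;
    outgoing.traceId = incoming.traceId;  // trace_id never changes
    outgoing.spanId = newSpanId();        // new span_id at every hop
    // Record a span {traceId, spanId, parentSpanId} for this node's work,
    // then attach `outgoing` to any message forwarded downstream.
    return outgoing;
}
```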
### Propagation Formats
There are two patterns:
#### HTTP/RPC Headers (W3C Trace Context)
```
traceparent: 00-abc123def456-span789-01
│ │ │ │
│ │ │ └── Flags (sampled)
│ │ └── Parent span ID
│ └── Trace ID
└── Version
```
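Assuming hex-encoded IDs are already in hand, a header value like the one above can be assembled with a helper along these lines (a sketch, not an existing rippled function):

```cpp
#include <string>

// Build a W3C traceparent value: version-traceid-spanid-flags.
// Assumes traceIdHex is 32 hex chars and spanIdHex is 16 hex chars;
// validation is omitted for brevity.
std::string makeTraceparent(
    std::string const& traceIdHex,
    std::string const& spanIdHex,
    bool sampled)
{
    return "00-" + traceIdHex + "-" + spanIdHex + (sampled ? "-01" : "-00");
}
```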
#### Protocol Buffers (rippled P2P messages)
```protobuf
message TMTransaction {
bytes rawTransaction = 1;
// ... existing fields ...
// Trace context extension
bytes trace_parent = 100; // W3C traceparent
bytes trace_state = 101; // W3C tracestate
}
```
---
## Sampling
Not every trace needs to be recorded. **Sampling** reduces overhead:
### Head Sampling (at trace start)
```
Request arrives → Random 10% chance → Record or skip entire trace
```
- ✅ Low overhead
- ❌ May miss interesting traces
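A sketch of such a head-sampling decision, made deterministic in the trace ID so that every node handling the same trace reaches the same verdict (illustrative only; the ratio and hashing scheme are assumptions, not part of this plan):

```cpp
#include <cstdint>
#include <limits>

// Decide once, at trace start, whether to record this trace.
// Deriving the decision from the trace_id itself (rather than a fresh
// random draw) keeps it consistent across nodes.
bool headSample(std::uint64_t traceIdLow64, double sampleRatio)
{
    double const position = static_cast<double>(traceIdLow64) /
        static_cast<double>(std::numeric_limits<std::uint64_t>::max());
    return position < sampleRatio;
}
```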
### Tail Sampling (after trace completes)
```
Trace completes → Collector evaluates:
- Error? → KEEP
- Slow? → KEEP
- Normal? → Sample 10%
```
- ✅ Never loses important traces
- ❌ Higher memory usage at collector
---
## Key Benefits for rippled
| Challenge | How Tracing Helps |
| ---------------------------------- | ---------------------------------------- |
| "Where is my transaction?" | Follow trace across all nodes it touched |
| "Why was consensus slow?" | See timing breakdown of each phase |
| "Which node is the bottleneck?" | Compare span durations across nodes |
| "What happened during the outage?" | Correlate errors across the network |
---
## Glossary
| Term | Definition |
| ------------------- | --------------------------------------------------------------- |
| **Trace** | Complete journey of a request, identified by `trace_id` |
| **Span** | Single operation within a trace |
| **Context** | Data propagated between services (`trace_id`, `span_id`, flags) |
| **Instrumentation** | Code that creates spans and propagates context |
| **Collector** | Service that receives, processes, and exports traces |
| **Backend** | Storage/visualization system (Jaeger, Tempo, etc.) |
| **Head Sampling** | Sampling decision at trace start |
| **Tail Sampling** | Sampling decision after trace completes |
---
_Next: [Architecture Analysis](./01-architecture-analysis.md)_ | _Back to: [Overview](./OpenTelemetryPlan.md)_

View File

@@ -0,0 +1,330 @@
# Architecture Analysis
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Related**: [Design Decisions](./02-design-decisions.md) | [Implementation Strategy](./03-implementation-strategy.md)
---
## 1.1 Current rippled Architecture Overview
The rippled node software consists of several interconnected components that need instrumentation for distributed tracing:
```mermaid
flowchart TB
subgraph rippled["rippled Node"]
subgraph services["Core Services"]
RPC["RPC Server<br/>(HTTP/WS/gRPC)"]
Overlay["Overlay<br/>(P2P Network)"]
Consensus["Consensus<br/>(RCLConsensus)"]
end
JobQueue["JobQueue<br/>(Thread Pool)"]
subgraph processing["Processing Layer"]
NetworkOPs["NetworkOPs<br/>(Tx Processing)"]
LedgerMaster["LedgerMaster<br/>(Ledger Mgmt)"]
NodeStore["NodeStore<br/>(Database)"]
end
subgraph observability["Existing Observability"]
PerfLog["PerfLog<br/>(JSON)"]
Insight["Insight<br/>(StatsD)"]
Logging["Logging<br/>(Journal)"]
end
services --> JobQueue
JobQueue --> processing
end
style rippled fill:#424242,stroke:#212121,color:#ffffff
style services fill:#1565c0,stroke:#0d47a1,color:#ffffff
style processing fill:#2e7d32,stroke:#1b5e20,color:#ffffff
style observability fill:#e65100,stroke:#bf360c,color:#ffffff
```
---
## 1.2 Key Components for Instrumentation
| Component | Location | Purpose | Trace Value |
| ----------------- | ------------------------------------------ | ------------------------ | ---------------------------- |
| **Overlay** | `src/xrpld/overlay/` | P2P communication | Message propagation timing |
| **PeerImp** | `src/xrpld/overlay/detail/PeerImp.cpp` | Individual peer handling | Per-peer latency |
| **RCLConsensus** | `src/xrpld/app/consensus/RCLConsensus.cpp` | Consensus algorithm | Round timing, phase analysis |
| **NetworkOPs** | `src/xrpld/app/misc/NetworkOPs.cpp` | Transaction processing | Tx lifecycle tracking |
| **ServerHandler** | `src/xrpld/rpc/detail/ServerHandler.cpp` | RPC entry point | Request latency |
| **RPCHandler** | `src/xrpld/rpc/detail/RPCHandler.cpp` | Command execution | Per-command timing |
| **JobQueue** | `src/xrpl/core/JobQueue.h` | Async task execution | Queue wait times |
---
## 1.3 Transaction Flow Diagram
Transaction flow spans multiple nodes in the network. Each node creates linked spans to form a distributed trace:
```mermaid
sequenceDiagram
participant Client
participant PeerA as Peer A (Receive)
participant PeerB as Peer B (Relay)
participant PeerC as Peer C (Validate)
Client->>PeerA: 1. Submit TX
rect rgb(230, 245, 255)
Note over PeerA: tx.receive SPAN START
PeerA->>PeerA: HashRouter Deduplication
PeerA->>PeerA: tx.validate (child span)
end
PeerA->>PeerB: 2. Relay TX (with trace ctx)
rect rgb(230, 245, 255)
Note over PeerB: tx.receive (linked span)
end
PeerB->>PeerC: 3. Relay TX
rect rgb(230, 245, 255)
Note over PeerC: tx.receive (linked span)
PeerC->>PeerC: tx.process
end
Note over Client,PeerC: DISTRIBUTED TRACE (same trace_id: abc123)
```
### Trace Structure
```
trace_id: abc123
├── span: tx.receive (Peer A)
│ ├── span: tx.validate
│ └── span: tx.relay
├── span: tx.receive (Peer B) [parent: Peer A]
│ └── span: tx.relay
└── span: tx.receive (Peer C) [parent: Peer B]
└── span: tx.process
```
---
## 1.4 Consensus Round Flow
Consensus rounds are multi-phase operations that benefit significantly from tracing:
```mermaid
flowchart TB
subgraph round["consensus.round (root span)"]
attrs["Attributes:<br/>xrpl.consensus.ledger.seq = 12345678<br/>xrpl.consensus.mode = proposing<br/>xrpl.consensus.proposers = 35"]
subgraph open["consensus.phase.open"]
open_desc["Duration: ~3s<br/>Waiting for transactions"]
end
subgraph establish["consensus.phase.establish"]
est_attrs["proposals_received = 28<br/>disputes_resolved = 3"]
est_children["├── consensus.proposal.receive (×28)<br/>├── consensus.proposal.send (×1)<br/>└── consensus.dispute.resolve (×3)"]
end
subgraph accept["consensus.phase.accept"]
acc_attrs["transactions_applied = 150<br/>ledger.hash = DEF456..."]
acc_children["├── ledger.build<br/>└── ledger.validate"]
end
attrs --> open
open --> establish
establish --> accept
end
style round fill:#f57f17,stroke:#e65100,color:#ffffff
style open fill:#1565c0,stroke:#0d47a1,color:#ffffff
style establish fill:#2e7d32,stroke:#1b5e20,color:#ffffff
style accept fill:#c2185b,stroke:#880e4f,color:#ffffff
```
---
## 1.5 RPC Request Flow
RPC requests support W3C Trace Context headers for distributed tracing across services:
```mermaid
flowchart TB
subgraph request["rpc.request (root span)"]
http["HTTP Request<br/>POST /<br/>traceparent: 00-abc123...-def456...-01"]
attrs["Attributes:<br/>http.method = POST<br/>net.peer.ip = 192.168.1.100<br/>xrpl.rpc.command = submit"]
subgraph enqueue["jobqueue.enqueue"]
job_attr["xrpl.job.type = jtCLIENT_RPC"]
end
subgraph command["rpc.command.submit"]
cmd_attrs["xrpl.rpc.version = 2<br/>xrpl.rpc.role = user"]
cmd_children["├── tx.deserialize<br/>├── tx.validate_local<br/>└── tx.submit_to_network"]
end
response["Response: 200 OK<br/>Duration: 45ms"]
http --> attrs
attrs --> enqueue
enqueue --> command
command --> response
end
style request fill:#2e7d32,stroke:#1b5e20,color:#ffffff
style enqueue fill:#1565c0,stroke:#0d47a1,color:#ffffff
style command fill:#e65100,stroke:#bf360c,color:#ffffff
```
---
## 1.6 Key Trace Points
The following table identifies priority instrumentation points across the codebase:
| Category | Span Name | File | Method | Priority |
| --------------- | ---------------------- | -------------------- | ---------------------- | -------- |
| **Transaction** | `tx.receive` | `PeerImp.cpp` | `handleTransaction()` | High |
| **Transaction** | `tx.validate` | `NetworkOPs.cpp` | `processTransaction()` | High |
| **Transaction** | `tx.process` | `NetworkOPs.cpp` | `doTransactionSync()` | High |
| **Transaction** | `tx.relay` | `OverlayImpl.cpp` | `relay()` | Medium |
| **Consensus** | `consensus.round` | `RCLConsensus.cpp` | `startRound()` | High |
| **Consensus** | `consensus.phase.*` | `Consensus.h` | `timerEntry()` | High |
| **Consensus** | `consensus.proposal.*` | `RCLConsensus.cpp` | `peerProposal()` | Medium |
| **RPC** | `rpc.request` | `ServerHandler.cpp` | `onRequest()` | High |
| **RPC** | `rpc.command.*` | `RPCHandler.cpp` | `doCommand()` | High |
| **Peer** | `peer.connect` | `OverlayImpl.cpp` | `onHandoff()` | Low |
| **Peer** | `peer.message.*` | `PeerImp.cpp` | `onMessage()` | Low |
| **Ledger** | `ledger.acquire` | `InboundLedgers.cpp` | `acquire()` | Medium |
| **Ledger** | `ledger.build` | `RCLConsensus.cpp` | `buildLCL()` | High |
---
## 1.7 Instrumentation Priority
```mermaid
quadrantChart
title Instrumentation Priority Matrix
x-axis Low Complexity --> High Complexity
y-axis Low Value --> High Value
quadrant-1 Implement First
quadrant-2 Plan Carefully
quadrant-3 Quick Wins
quadrant-4 Consider Later
RPC Tracing: [0.3, 0.85]
Transaction Tracing: [0.65, 0.92]
Consensus Tracing: [0.75, 0.87]
Peer Message Tracing: [0.4, 0.3]
Ledger Acquisition: [0.5, 0.6]
JobQueue Tracing: [0.35, 0.5]
```
---
## 1.8 Observable Outcomes
After implementing OpenTelemetry, operators and developers will gain visibility into the following:
### 1.8.1 What You Will See: Traces
| Trace Type | Description | Example Query in Grafana/Tempo |
| -------------------------- | ------------------------------------------------------------------------------------------- | ------------------------------------------------------ |
| **Transaction Lifecycle** | Full journey from RPC submission through validation, relay, consensus, and ledger inclusion | `{service.name="rippled" && xrpl.tx.hash="ABC123..."}` |
| **Cross-Node Propagation** | Transaction path across multiple rippled nodes with timing | `{xrpl.tx.relay_count > 0}` |
| **Consensus Rounds** | Complete round with all phases (open, establish, accept) | `{span.name=~"consensus.round.*"}` |
| **RPC Request Processing** | Individual command execution with timing breakdown | `{xrpl.rpc.command="account_info"}` |
| **Ledger Acquisition** | Peer-to-peer ledger data requests and responses | `{span.name="ledger.acquire"}` |
### 1.8.2 What You Will See: Metrics (Derived from Traces)
| Metric | Description | Dashboard Panel |
| ----------------------------- | -------------------------------------- | --------------------------- |
| **RPC Latency (p50/p95/p99)** | Response time distribution per command | Heatmap by command |
| **Transaction Throughput** | Transactions processed per second | Time series graph |
| **Consensus Round Duration** | Time to complete consensus phases | Histogram |
| **Cross-Node Latency** | Time for transaction to reach N nodes | Line chart with percentiles |
| **Error Rate** | Failed transactions/RPC calls by type | Stacked bar chart |
### 1.8.3 Concrete Dashboard Examples
**Transaction Trace View (Jaeger/Tempo):**
```
┌────────────────────────────────────────────────────────────────────────────────┐
│ Trace: abc123... (Transaction Submission) Duration: 847ms │
├────────────────────────────────────────────────────────────────────────────────┤
│ ├── rpc.request [ServerHandler] ████░░░░░░ 45ms │
│ │ └── rpc.command.submit [RPCHandler] ████░░░░░░ 42ms │
│ │ └── tx.receive [NetworkOPs] ███░░░░░░░ 35ms │
│ │ ├── tx.validate [TxQ] █░░░░░░░░░ 8ms │
│ │ └── tx.relay [Overlay] ██░░░░░░░░ 15ms │
│ │ ├── tx.receive [Node-B] █████░░░░░ 52ms │
│ │ │ └── tx.relay [Node-B] ██░░░░░░░░ 18ms │
│ │ └── tx.receive [Node-C] ██████░░░░ 65ms │
│ └── consensus.round [RCLConsensus] ████████░░ 720ms │
│ ├── consensus.phase.open ██░░░░░░░░ 180ms │
│ ├── consensus.phase.establish █████░░░░░ 480ms │
│ └── consensus.phase.accept █░░░░░░░░░ 60ms │
└────────────────────────────────────────────────────────────────────────────────┘
```
**RPC Performance Dashboard Panel:**
```
┌─────────────────────────────────────────────────────────────┐
│ RPC Command Latency (Last 1 Hour) │
├─────────────────────────────────────────────────────────────┤
│ Command │ p50 │ p95 │ p99 │ Errors │ Rate │
│──────────────────┼────────┼────────┼────────┼────────┼──────│
│ account_info │ 12ms │ 45ms │ 89ms │ 0.1% │ 150/s│
│ submit │ 35ms │ 120ms │ 250ms │ 2.3% │ 45/s│
│ ledger │ 8ms │ 25ms │ 55ms │ 0.0% │ 80/s│
│ tx │ 15ms │ 50ms │ 100ms │ 0.5% │ 60/s│
│ server_info │ 5ms │ 12ms │ 20ms │ 0.0% │ 200/s│
└─────────────────────────────────────────────────────────────┘
```
**Consensus Health Dashboard Panel:**
```mermaid
---
config:
xyChart:
width: 1200
height: 400
plotReservedSpacePercent: 50
chartOrientation: vertical
themeVariables:
xyChart:
plotColorPalette: "#3498db"
---
xychart-beta
title "Consensus Round Duration (Last 24 Hours)"
x-axis "Time of Day (Hours)" [0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24]
y-axis "Duration (seconds)" 1 --> 5
line [2.1, 2.3, 2.5, 2.4, 2.8, 1.6, 3.2, 3.0, 3.5, 1.3, 3.8, 3.6, 4.0, 3.2, 4.3, 4.1, 4.5, 4.3, 4.2, 2.4, 4.8, 4.6, 4.9, 4.7, 5.0, 4.9, 4.8, 2.6, 4.7, 4.5, 4.2, 4.0, 2.5, 3.7, 3.2, 3.4, 2.9, 3.1, 2.6, 2.8, 2.3, 1.5, 2.7, 2.4, 2.5, 2.3, 2.2, 2.1, 2.0]
```
### 1.8.4 Operator Actionable Insights
| Scenario | What You'll See | Action |
| --------------------- | ---------------------------------------------------------------------------- | -------------------------------- |
| **Slow RPC** | Span showing which phase is slow (parsing, execution, serialization) | Optimize specific code path |
| **Transaction Stuck** | Trace stops at validation; error attribute shows reason | Fix transaction parameters |
| **Consensus Delay** | Phase.establish taking too long; proposer attribute shows missing validators | Investigate network connectivity |
| **Memory Spike** | Large batch of spans correlating with memory increase | Tune batch_size or sampling |
| **Network Partition** | Traces missing cross-node links for specific peer | Check peer connectivity |
### 1.8.5 Developer Debugging Workflow
1. **Find Transaction**: Query by `xrpl.tx.hash` to get full trace
2. **Identify Bottleneck**: Look at span durations to find slowest component
3. **Check Attributes**: Review `xrpl.tx.validity`, `xrpl.rpc.status` for errors
4. **Correlate Logs**: Use `trace_id` to find related PerfLog entries
5. **Compare Nodes**: Filter by `service.instance.id` to compare behavior across nodes
---
_Next: [Design Decisions](./02-design-decisions.md)_ | _Back to: [Overview](./OpenTelemetryPlan.md)_

View File

@@ -0,0 +1,510 @@
# Design Decisions
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Related**: [Architecture Analysis](./01-architecture-analysis.md) | [Code Samples](./04-code-samples.md)
---
## 2.1 OpenTelemetry Components
### 2.1.1 SDK Selection
**Primary Choice**: OpenTelemetry C++ SDK (`opentelemetry-cpp`)
| Component | Purpose | Required |
| --------------------------------------- | ---------------------- | ----------- |
| `opentelemetry-cpp::api` | Tracing API headers | Yes |
| `opentelemetry-cpp::sdk` | SDK implementation | Yes |
| `opentelemetry-cpp::ext` | Extensions (exporters) | Yes |
| `opentelemetry-cpp::otlp_grpc_exporter` | OTLP/gRPC export | Recommended |
| `opentelemetry-cpp::otlp_http_exporter` | OTLP/HTTP export | Alternative |
### 2.1.2 Instrumentation Strategy
**Manual Instrumentation** (recommended):
| Approach | Pros | Cons |
| ---------- | ----------------------------------------------------------------- | ------------------------------------------------------- |
| **Manual** | Precise control, optimized placement, rippled-specific attributes | More development effort |
| **Auto** | Less code, automatic coverage | Less control, potential overhead, limited customization |
---
## 2.2 Exporter Configuration
```mermaid
flowchart TB
subgraph nodes["rippled Nodes"]
node1["rippled<br/>Node 1"]
node2["rippled<br/>Node 2"]
node3["rippled<br/>Node 3"]
end
collector["OpenTelemetry<br/>Collector<br/>(sidecar or standalone)"]
subgraph backends["Observability Backends"]
jaeger["Jaeger<br/>(Dev)"]
tempo["Tempo<br/>(Prod)"]
elastic["Elastic<br/>APM"]
end
node1 -->|"OTLP/gRPC<br/>:4317"| collector
node2 -->|"OTLP/gRPC<br/>:4317"| collector
node3 -->|"OTLP/gRPC<br/>:4317"| collector
collector --> jaeger
collector --> tempo
collector --> elastic
style nodes fill:#0d47a1,stroke:#082f6a,color:#ffffff
style backends fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style collector fill:#bf360c,stroke:#8c2809,color:#ffffff
```
### 2.2.1 OTLP/gRPC (Recommended)
```cpp
// Configuration for OTLP over gRPC
namespace otlp = opentelemetry::exporter::otlp;
otlp::OtlpGrpcExporterOptions opts;
opts.endpoint = "localhost:4317";
opts.use_ssl_credentials = true;
opts.ssl_credentials_cacert_path = "/path/to/ca.crt";
```
### 2.2.2 OTLP/HTTP (Alternative)
```cpp
// Configuration for OTLP over HTTP
namespace otlp = opentelemetry::exporter::otlp;
otlp::OtlpHttpExporterOptions opts;
opts.url = "http://localhost:4318/v1/traces";
opts.content_type = otlp::HttpRequestContentType::kJson; // or kBinary
```
---
## 2.3 Span Naming Conventions
### 2.3.1 Naming Schema
```
<component>.<operation>[.<sub-operation>]
```
**Examples**:
- `tx.receive` - Transaction received from peer
- `consensus.phase.establish` - Consensus establish phase
- `rpc.command.server_info` - server_info RPC command
### 2.3.2 Complete Span Catalog
```yaml
# Transaction Spans
tx:
receive: "Transaction received from network"
validate: "Transaction signature/format validation"
process: "Full transaction processing"
relay: "Transaction relay to peers"
apply: "Apply transaction to ledger"
# Consensus Spans
consensus:
round: "Complete consensus round"
phase:
open: "Open phase - collecting transactions"
establish: "Establish phase - reaching agreement"
accept: "Accept phase - applying consensus"
proposal:
receive: "Receive peer proposal"
send: "Send our proposal"
validation:
receive: "Receive peer validation"
send: "Send our validation"
# RPC Spans
rpc:
request: "HTTP/WebSocket request handling"
command:
"*": "Specific RPC command (dynamic)"
# Peer Spans
peer:
connect: "Peer connection establishment"
disconnect: "Peer disconnection"
message:
send: "Send protocol message"
receive: "Receive protocol message"
# Ledger Spans
ledger:
acquire: "Ledger acquisition from network"
build: "Build new ledger"
validate: "Ledger validation"
close: "Close ledger"
# Job Spans
job:
enqueue: "Job added to queue"
execute: "Job execution"
```
---
## 2.4 Attribute Schema
### 2.4.1 Resource Attributes (Set Once at Startup)
```cpp
// Standard OpenTelemetry semantic conventions
resource::SemanticConventions::SERVICE_NAME = "rippled"
resource::SemanticConventions::SERVICE_VERSION = BuildInfo::getVersionString()
resource::SemanticConventions::SERVICE_INSTANCE_ID = <node_public_key_base58>
// Custom rippled attributes
"xrpl.network.id" = <network_id> // e.g., 0 for mainnet
"xrpl.network.type" = "mainnet" | "testnet" | "devnet" | "standalone"
"xrpl.node.type" = "validator" | "stock" | "reporting"
"xrpl.node.cluster" = <cluster_name> // If clustered
```
### 2.4.2 Span Attributes by Category
#### Transaction Attributes
```cpp
"xrpl.tx.hash" = string // Transaction hash (hex)
"xrpl.tx.type" = string // "Payment", "OfferCreate", etc.
"xrpl.tx.account" = string // Source account (redacted in prod)
"xrpl.tx.sequence" = int64 // Account sequence number
"xrpl.tx.fee" = int64 // Fee in drops
"xrpl.tx.result" = string // "tesSUCCESS", "tecPATH_DRY", etc.
"xrpl.tx.ledger_index" = int64 // Ledger containing transaction
```
#### Consensus Attributes
```cpp
"xrpl.consensus.round" = int64 // Round number
"xrpl.consensus.phase" = string // "open", "establish", "accept"
"xrpl.consensus.mode" = string // "proposing", "observing", etc.
"xrpl.consensus.proposers" = int64 // Number of proposers
"xrpl.consensus.ledger.prev" = string // Previous ledger hash
"xrpl.consensus.ledger.seq" = int64 // Ledger sequence
"xrpl.consensus.tx_count" = int64 // Transactions in consensus set
"xrpl.consensus.duration_ms" = float64 // Round duration
// Phase 4a: Establish-phase gap fill & cross-node correlation
"xrpl.consensus.round_id" = int64 // Consensus round number
"xrpl.consensus.ledger_id" = string // previousLedger.id() — shared across nodes
"xrpl.consensus.trace_strategy" = string // "deterministic" or "attribute"
"xrpl.consensus.converge_percent" = int64 // Convergence % (0-100+)
"xrpl.consensus.establish_count" = int64 // Number of establish iterations
"xrpl.consensus.disputes_count" = int64 // Active disputed transactions
"xrpl.consensus.proposers_agreed" = int64 // Peers agreeing with our position
"xrpl.consensus.proposers_total" = int64 // Total peer positions
"xrpl.consensus.agree_count" = int64 // Peers that agree (haveConsensus)
"xrpl.consensus.disagree_count" = int64 // Peers that disagree
"xrpl.consensus.threshold_percent" = int64 // Current threshold (50/65/70/95)
"xrpl.consensus.result" = string // "yes", "no", "moved_on"
"xrpl.consensus.mode.old" = string // Previous consensus mode
"xrpl.consensus.mode.new" = string // New consensus mode
```
#### RPC Attributes
```cpp
"xrpl.rpc.command" = string // Command name
"xrpl.rpc.version" = int64 // API version
"xrpl.rpc.role" = string // "admin" or "user"
"xrpl.rpc.params" = string // Sanitized parameters (optional)
```
#### Peer & Message Attributes
```cpp
"xrpl.peer.id" = string // Peer public key (base58)
"xrpl.peer.address" = string // IP:port
"xrpl.peer.latency_ms" = float64 // Measured latency
"xrpl.peer.cluster" = string // Cluster name if clustered
"xrpl.message.type" = string // Protocol message type name
"xrpl.message.size_bytes" = int64 // Message size
"xrpl.message.compressed" = bool // Whether compressed
```
#### Ledger & Job Attributes
```cpp
"xrpl.ledger.hash" = string // Ledger hash
"xrpl.ledger.index" = int64 // Ledger sequence/index
"xrpl.ledger.close_time" = int64 // Close time (epoch)
"xrpl.ledger.tx_count" = int64 // Transaction count
"xrpl.job.type" = string // Job type name
"xrpl.job.queue_ms" = float64 // Time spent in queue
"xrpl.job.worker" = int64 // Worker thread ID
```
### 2.4.3 Data Collection Summary
The following table summarizes what data is collected by category:
| Category | Attributes Collected | Purpose |
| --------------- | -------------------------------------------------------------------- | --------------------------- |
| **Transaction** | `tx.hash`, `tx.type`, `tx.result`, `tx.fee`, `ledger_index` | Trace transaction lifecycle |
| **Consensus** | `round`, `phase`, `mode`, `proposers` (public keys), `duration_ms` | Analyze consensus timing |
| **RPC** | `command`, `version`, `status`, `duration_ms` | Monitor RPC performance |
| **Peer** | `peer.id` (public key), `latency_ms`, `message.type`, `message.size` | Network topology analysis |
| **Ledger** | `ledger.hash`, `ledger.index`, `close_time`, `tx_count` | Ledger progression tracking |
| **Job** | `job.type`, `queue_ms`, `worker` | JobQueue performance |
### 2.4.4 Privacy & Sensitive Data Policy
OpenTelemetry instrumentation is designed to collect **operational metadata only**, never sensitive content.
#### Data NOT Collected
The following data is explicitly **excluded** from telemetry collection:
| Excluded Data | Reason |
| ----------------------- | ----------------------------------------- |
| **Private Keys** | Never exposed; not relevant to tracing |
| **Account Balances** | Financial data; privacy sensitive |
| **Transaction Amounts** | Financial data; privacy sensitive |
| **Raw TX Payloads** | May contain sensitive memo/data fields |
| **Personal Data** | No PII collected |
| **IP Addresses** | Configurable; excluded by default in prod |
#### Privacy Protection Mechanisms
| Mechanism | Description |
| ----------------------------- | ------------------------------------------------------------------------- |
| **Account Hashing** | `xrpl.tx.account` is hashed at collector level before storage |
| **Configurable Redaction** | Sensitive fields can be excluded via `[telemetry]` config section |
| **Sampling** | Only 10% of traces recorded by default, reducing data exposure |
| **Local Control** | Node operators have full control over what gets exported |
| **No Raw Payloads** | Transaction content is never recorded, only metadata (hash, type, result) |
| **Collector-Level Filtering** | Additional redaction/hashing can be configured at OTel Collector |
#### Collector-Level Data Protection
The OpenTelemetry Collector can be configured to hash or redact sensitive attributes before export:
```yaml
processors:
attributes:
actions:
# Hash account addresses before storage
- key: xrpl.tx.account
action: hash
# Remove IP addresses entirely
- key: xrpl.peer.address
action: delete
# Redact specific fields
- key: xrpl.rpc.params
action: delete
```
#### Configuration Options for Privacy
In `rippled.cfg`, operators can control data collection granularity:
```ini
[telemetry]
enabled=1
# Disable collection of specific components
trace_transactions=1
trace_consensus=1
trace_rpc=1
trace_peer=0 # Disable peer tracing (high volume, includes addresses)
# Redact specific attributes
redact_account=1 # Hash account addresses before export
redact_peer_address=1 # Remove peer IP addresses
```
> **Key Principle**: Telemetry collects **operational metadata** (timing, counts, hashes) — never **sensitive content** (keys, balances, amounts, raw payloads).
---
## 2.5 Context Propagation Design
### 2.5.1 Propagation Boundaries
```mermaid
flowchart TB
subgraph http["HTTP/WebSocket (RPC)"]
w3c["W3C Trace Context Headers:<br/>traceparent: 00-{trace_id}-{span_id}-{flags}<br/>tracestate: rippled=<state>"]
end
subgraph protobuf["Protocol Buffers (P2P)"]
proto["message TraceContext {<br/> bytes trace_id = 1; // 16 bytes<br/> bytes span_id = 2; // 8 bytes<br/> uint32 trace_flags = 3;<br/> string trace_state = 4;<br/>}"]
end
subgraph jobqueue["JobQueue (Internal Async)"]
job["Context captured at job creation,<br/>restored at execution<br/><br/>class Job {<br/> opentelemetry::context::Context traceContext_;<br/>};"]
end
style http fill:#0d47a1,stroke:#082f6a,color:#ffffff
style protobuf fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style jobqueue fill:#bf360c,stroke:#8c2809,color:#ffffff
```
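The JobQueue boundary is the easiest to picture in code. Below is a self-contained sketch of the capture-at-creation / restore-at-execution pattern; `TraceContext`, `ScopedContext`, and `TracedJobQueue` are hypothetical names standing in for the classes proposed elsewhere in this plan.

```cpp
#include <functional>
#include <queue>
#include <string>
#include <utility>

// Hypothetical context type standing in for the real trace context.
struct TraceContext
{
    std::string traceId;
    std::string spanId;
};

// Hypothetical thread-local "current context", restored via RAII.
thread_local TraceContext currentContext;

struct ScopedContext
{
    explicit ScopedContext(TraceContext ctx) : previous(currentContext)
    {
        currentContext = std::move(ctx);
    }
    ~ScopedContext()
    {
        currentContext = previous;
    }
    TraceContext previous;
};

// The capture/restore pattern for async work: the caller's context is copied
// at enqueue time and reinstated on the worker thread before the job runs.
class TracedJobQueue
{
public:
    void enqueue(std::function<void()> work)
    {
        // Capture the enqueuing thread's context by value.
        jobs_.push([ctx = currentContext, work = std::move(work)] {
            ScopedContext restore{ctx};  // spans created inside parent onto ctx
            work();
        });
    }

    void runOne()
    {
        if (jobs_.empty())
            return;
        auto job = std::move(jobs_.front());
        jobs_.pop();
        job();
    }

private:
    std::queue<std::function<void()>> jobs_;
};
```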
---
## 2.6 Integration with Existing Observability
### 2.6.1 Existing Frameworks Comparison
rippled already has two observability mechanisms. OpenTelemetry complements (not replaces) them:
| Aspect | PerfLog | Beast Insight (StatsD) | OpenTelemetry |
| --------------------- | ----------------------------- | ---------------------------- | ------------------------- |
| **Type** | Logging | Metrics | Distributed Tracing |
| **Data** | JSON log entries | Counters, gauges, histograms | Spans with context |
| **Scope** | Single node | Single node | **Cross-node** |
| **Output** | `perf.log` file | StatsD server | OTLP Collector |
| **Question answered** | "What happened on this node?" | "How many? How fast?" | "What was the journey?" |
| **Correlation** | By timestamp | By metric name | By `trace_id` |
| **Overhead** | Low (file I/O) | Low (UDP packets) | Low-Medium (configurable) |
### 2.6.2 What Each Framework Does Best
#### PerfLog
- **Purpose**: Detailed local event logging for RPC and job execution
- **Strengths**:
- Rich JSON output with timing data
- Already integrated in RPC handlers
- File-based, no external dependencies
- **Limitations**:
- Single-node only (no cross-node correlation)
- No parent-child relationships between events
- Manual log parsing required
```json
// Example PerfLog entry
{
"time": "2024-01-15T10:30:00.123Z",
"method": "submit",
"duration_us": 1523,
"result": "tesSUCCESS"
}
```
#### Beast Insight (StatsD)
- **Purpose**: Real-time metrics for monitoring dashboards
- **Strengths**:
- Aggregated metrics (counters, gauges, histograms)
- Low overhead (UDP, fire-and-forget)
- Good for alerting thresholds
- **Limitations**:
- No request-level detail
- No causal relationships
- Single-node perspective
```cpp
// Example StatsD usage in rippled
insight.increment("rpc.submit.count");
insight.gauge("ledger.age", age);
insight.timing("consensus.round", duration);
```
#### OpenTelemetry (NEW)
- **Purpose**: Distributed request tracing across nodes
- **Strengths**:
- **Cross-node correlation** via `trace_id`
- Parent-child span relationships
- Rich attributes per span
- Industry standard (CNCF)
- **Limitations**:
- Requires collector infrastructure
- Higher complexity than logging
```cpp
// Example OpenTelemetry span
auto span = telemetry.startSpan("tx.relay");
span->SetAttribute("tx.hash", hash);
span->SetAttribute("peer.id", peerId);
// Span automatically linked to parent via context
```
### 2.6.3 When to Use Each
| Scenario | PerfLog | StatsD | OpenTelemetry |
| --------------------------------------- | ---------- | ------ | ------------- |
| "How many TXs per second?" | ❌ | ✅ | ❌ |
| "What's the p99 RPC latency?" | ❌ | ✅ | ✅ |
| "Why was this specific TX slow?" | ⚠️ partial | ❌ | ✅ |
| "Which node delayed consensus?" | ❌ | ❌ | ✅ |
| "What happened on node X at time T?" | ✅ | ❌ | ✅ |
| "Show me the TX journey across 5 nodes" | ❌ | ❌ | ✅ |
### 2.6.4 Coexistence Strategy
```mermaid
flowchart TB
subgraph rippled["rippled Process"]
perflog["PerfLog<br/>(JSON to file)"]
insight["Beast Insight<br/>(StatsD)"]
otel["OpenTelemetry<br/>(Tracing)"]
end
perflog --> perffile["perf.log"]
insight --> statsd["StatsD Server"]
otel --> collector["OTLP Collector"]
perffile --> grafana["Grafana<br/>(Unified UI)"]
statsd --> grafana
collector --> grafana
style rippled fill:#212121,stroke:#0a0a0a,color:#ffffff
style grafana fill:#bf360c,stroke:#8c2809,color:#ffffff
```
### 2.6.5 Correlation with PerfLog
Trace IDs can be correlated with existing PerfLog entries for comprehensive debugging:
```cpp
// In RPCHandler.cpp - correlate trace with PerfLog
Status doCommand(RPC::JsonContext& context, Json::Value& result)
{
// Start OpenTelemetry span
auto span = context.app.getTelemetry().startSpan(
"rpc.command." + context.method);
// Get trace ID for correlation
auto traceId = span->GetContext().trace_id().IsValid()
? toHex(span->GetContext().trace_id())
: "";
// Use existing PerfLog with trace correlation
auto const curId = context.app.getPerfLog().currentId();
context.app.getPerfLog().rpcStart(context.method, curId);
// Future: Add trace ID to PerfLog entry
// context.app.getPerfLog().setTraceId(curId, traceId);
try {
auto ret = handler(context, result);
context.app.getPerfLog().rpcFinish(context.method, curId);
span->SetStatus(opentelemetry::trace::StatusCode::kOk);
return ret;
} catch (std::exception const& e) {
context.app.getPerfLog().rpcError(context.method, curId);
span->RecordException(e);
span->SetStatus(opentelemetry::trace::StatusCode::kError, e.what());
throw;
}
}
```
---
_Previous: [Architecture Analysis](./01-architecture-analysis.md)_ | _Next: [Implementation Strategy](./03-implementation-strategy.md)_ | _Back to: [Overview](./OpenTelemetryPlan.md)_

View File

@@ -0,0 +1,451 @@
# Implementation Strategy
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Related**: [Code Samples](./04-code-samples.md) | [Configuration Reference](./05-configuration-reference.md)
---
## 3.1 Directory Structure
The telemetry implementation follows rippled's existing code organization pattern:
```
include/xrpl/
├── telemetry/
│ ├── Telemetry.h # Main telemetry interface
│ ├── TelemetryConfig.h # Configuration structures
│ ├── TraceContext.h # Context propagation utilities
│ ├── SpanGuard.h # RAII span management
│ └── SpanAttributes.h # Attribute helper functions
src/libxrpl/
├── telemetry/
│ ├── Telemetry.cpp # Implementation
│ ├── TelemetryConfig.cpp # Config parsing
│ ├── TraceContext.cpp # Context serialization
│ └── NullTelemetry.cpp # No-op implementation
src/xrpld/
├── telemetry/
│ ├── TracingInstrumentation.h # Instrumentation macros
│ └── TracingInstrumentation.cpp
```
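The `SpanGuard.h` entry above is the RAII piece. A minimal sketch of the idea follows; the real interface proposed in this plan carries attributes, status, and links, which are omitted here, and the `Span` base type is hypothetical.

```cpp
#include <memory>
#include <utility>

// Hypothetical minimal span interface; the real Telemetry interface in this
// plan is richer (attributes, status, links).
struct Span
{
    virtual ~Span() = default;
    virtual void end() = 0;
};

// RAII wrapper: the span is ended when the guard leaves scope, even on
// early return or exception, so call sites cannot forget to close it.
class SpanGuard
{
public:
    explicit SpanGuard(std::unique_ptr<Span> span) : span_(std::move(span))
    {
    }

    SpanGuard(SpanGuard&&) = default;
    SpanGuard(SpanGuard const&) = delete;
    SpanGuard& operator=(SpanGuard const&) = delete;

    ~SpanGuard()
    {
        if (span_)
            span_->end();
    }

    Span* operator->() const
    {
        return span_.get();
    }

private:
    std::unique_ptr<Span> span_;
};
```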
---
## 3.2 Implementation Approach
<div align="center">
```mermaid
%%{init: {'flowchart': {'nodeSpacing': 20, 'rankSpacing': 30}}}%%
flowchart TB
subgraph phase1["Phase 1: Core"]
direction LR
sdk["SDK Integration"] ~~~ interface["Telemetry Interface"] ~~~ config["Configuration"]
end
subgraph phase2["Phase 2: RPC"]
direction LR
http["HTTP Context"] ~~~ rpc["RPC Handlers"]
end
subgraph phase3["Phase 3: P2P"]
direction LR
proto["Protobuf Context"] ~~~ tx["Transaction Relay"]
end
subgraph phase4["Phase 4: Consensus"]
direction LR
consensus["Consensus Rounds"] ~~~ proposals["Proposals"]
end
phase1 --> phase2 --> phase3 --> phase4
style phase1 fill:#1565c0,stroke:#0d47a1,color:#ffffff
style phase2 fill:#2e7d32,stroke:#1b5e20,color:#ffffff
style phase3 fill:#e65100,stroke:#bf360c,color:#ffffff
style phase4 fill:#c2185b,stroke:#880e4f,color:#ffffff
```
</div>
### Key Principles
1. **Minimal Intrusion**: Instrumentation should not alter existing control flow
2. **Zero-Cost When Disabled**: Use compile-time flags and no-op implementations (see the `NullTelemetry` sketch after this list)
3. **Backward Compatibility**: Protocol Buffer extensions use high field numbers
4. **Graceful Degradation**: Tracing failures must not affect node operation
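To make principle 2 concrete, the following is a possible shape for the `NullTelemetry.cpp` listed in section 3.1 (the file is not shown elsewhere in this plan). It is an illustrative sketch only: the method names follow the `Telemetry` interface in section 4.1.1, and the tracer comes from the SDK's global provider, which is a no-op until a real provider is installed.
```cpp
// src/libxrpl/telemetry/NullTelemetry.cpp (sketch)
#include <xrpl/telemetry/Telemetry.h>
#include <opentelemetry/trace/provider.h>
namespace xrpl {
namespace telemetry {
class NullTelemetry final : public Telemetry
{
    // Until an SDK TracerProvider is installed, the global provider is a no-op,
    // so this tracer hands out NoopSpans that allocate and export nothing.
    opentelemetry::nostd::shared_ptr<opentelemetry::trace::Tracer> tracer_ =
        opentelemetry::trace::Provider::GetTracerProvider()->GetTracer("rippled");
public:
    void start() override {}
    void stop() override {}
    bool isEnabled() const override { return false; }
    opentelemetry::nostd::shared_ptr<opentelemetry::trace::Tracer>
    getTracer(std::string_view) override { return tracer_; }
    opentelemetry::nostd::shared_ptr<opentelemetry::trace::Span>
    startSpan(std::string_view name, opentelemetry::trace::SpanKind) override
    {
        // No-op span: all attribute/event calls on it are discarded
        return tracer_->StartSpan(
            opentelemetry::nostd::string_view(name.data(), name.size()));
    }
    // ... remaining overrides (context propagation returns empty values,
    //     shouldTrace*() all return false) ...
};
} // namespace telemetry
} // namespace xrpl
```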
---
## 3.3 Performance Overhead Summary
| Metric | Overhead | Notes |
| ------------- | ---------- | ----------------------------------- |
| CPU | 1-3% | Span creation and attribute setting |
| Memory | 2-5 MB | Batch buffer for pending spans |
| Network | 10-50 KB/s | Compressed OTLP export to collector |
| Latency (p99) | <2% | With proper sampling configuration |
---
## 3.4 Detailed CPU Overhead Analysis
### 3.4.1 Per-Operation Costs
| Operation | Time (ns) | Frequency | Impact |
| --------------------- | --------- | ---------------------- | ---------- |
| Span creation | 200-500 | Every traced operation | Low |
| Span end | 100-200 | Every traced operation | Low |
| SetAttribute (string) | 80-120 | 3-5 per span | Low |
| SetAttribute (int) | 40-60 | 2-3 per span | Negligible |
| AddEvent | 50-80 | 0-2 per span | Negligible |
| Context injection | 150-250 | Per outgoing message | Low |
| Context extraction | 100-180 | Per incoming message | Low |
| GetCurrent context | 10-20 | Thread-local access | Negligible |
### 3.4.2 Transaction Processing Overhead
<div align="center">
```mermaid
%%{init: {'pie': {'textPosition': 0.75}}}%%
pie showData
"tx.receive (800ns)" : 800
"tx.validate (500ns)" : 500
"tx.relay (500ns)" : 500
"Context inject (600ns)" : 600
```
**Transaction Tracing Overhead (~2.4μs total)**
</div>
**Overhead percentage**: 2.4 μs / 200 μs (avg tx processing) = **~1.2%**
### 3.4.3 Consensus Round Overhead
| Operation | Count | Cost (ns) | Total |
| ---------------------- | ----- | --------- | ---------- |
| consensus.round span | 1 | ~1000 | ~1 μs |
| consensus.phase spans | 3 | ~700 | ~2.1 μs |
| proposal.receive spans | ~20 | ~600 | ~12 μs |
| proposal.send spans | ~3 | ~600 | ~1.8 μs |
| Context operations | ~30 | ~200 | ~6 μs |
| **TOTAL** | | | **~23 μs** |
**Overhead percentage**: 23 μs / 3s (typical round) = **~0.0008%** (negligible)
### 3.4.4 RPC Request Overhead
| Operation | Cost (ns) |
| ---------------- | ------------ |
| rpc.request span | ~700 |
| rpc.command span | ~600 |
| Context extract | ~250 |
| Context inject | ~200 |
| **TOTAL** | **~1.75 μs** |
- Fast RPC (1ms): 1.75 μs / 1ms = **~0.175%**
- Slow RPC (100ms): 1.75 μs / 100ms = **~0.002%**
---
## 3.5 Memory Overhead Analysis
### 3.5.1 Static Memory
| Component | Size | Allocated |
| ------------------------ | ----------- | ---------- |
| TracerProvider singleton | ~64 KB | At startup |
| BatchSpanProcessor | ~128 KB | At startup |
| OTLP exporter | ~256 KB | At startup |
| Propagator registry | ~8 KB | At startup |
| **Total static** | **~456 KB** | |
### 3.5.2 Dynamic Memory
| Component | Size per unit | Max units | Peak |
| -------------------- | ------------- | ---------- | ----------- |
| Active span | ~200 bytes | 1000 | ~200 KB |
| Queued span (export) | ~500 bytes | 2048 | ~1 MB |
| Attribute storage | ~50 bytes | 5 per span | Included |
| Context storage | ~64 bytes | Per thread | ~6.4 KB |
| **Total dynamic** | | | **~1.2 MB** |
### 3.5.3 Memory Growth Characteristics
```mermaid
---
config:
xyChart:
width: 700
height: 400
---
xychart-beta
title "Memory Usage vs Span Rate"
x-axis "Spans/second" [0, 200, 400, 600, 800, 1000]
y-axis "Memory (MB)" 0 --> 6
line [1, 1.8, 2.6, 3.4, 4.2, 5]
```
**Notes**:
- Memory increases linearly with span rate
- Batch export prevents unbounded growth
- Queue size is configurable (default 2048 spans)
- At queue limit, oldest spans are dropped (not blocked)
---
## 3.6 Network Overhead Analysis
### 3.6.1 Export Bandwidth
| Sampling Rate | Spans/sec | Bandwidth | Notes |
| ------------- | --------- | --------- | ---------------- |
| 100% | ~500 | ~250 KB/s | Development only |
| 10% | ~50 | ~25 KB/s | Staging |
| 1% | ~5 | ~2.5 KB/s | Production |
| Error-only | ~1 | ~0.5 KB/s | Minimal overhead |
### 3.6.2 Trace Context Propagation
| Message Type | Context Size | Messages/sec | Overhead |
| ---------------------- | ------------ | ------------ | ----------- |
| TMTransaction | 32 bytes | ~100 | ~3.2 KB/s |
| TMProposeSet | 32 bytes | ~10 | ~320 B/s |
| TMValidation | 32 bytes | ~50 | ~1.6 KB/s |
| **Total P2P overhead** | | | **~5 KB/s** |
---
## 3.7 Optimization Strategies
### 3.7.1 Sampling Strategies
```mermaid
flowchart TD
trace["New Trace"]
trace --> errors{"Is Error?"}
errors -->|Yes| sample["SAMPLE"]
errors -->|No| consensus{"Is Consensus?"}
consensus -->|Yes| sample
consensus -->|No| slow{"Is Slow?"}
slow -->|Yes| sample
slow -->|No| prob{"Random < 10%?"}
prob -->|Yes| sample
prob -->|No| drop["DROP"]
style sample fill:#4caf50,stroke:#388e3c,color:#fff
style drop fill:#f44336,stroke:#c62828,color:#fff
```
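For reference, the decision tree above expressed as a plain helper function. The names (`TraceClassification`, `shouldKeepTrace`) are illustrative; because the error and slow branches need the completed trace, this policy is realized in practice by the collector's `tail_sampling` processor (section 5.5.2) rather than by in-process head sampling.
```cpp
#include <chrono>
// Minimal sketch of the sampling policy shown in the flowchart (names assumed)
struct TraceClassification
{
    bool hasError = false;
    bool isConsensus = false;
    std::chrono::milliseconds duration{0};
};
inline bool
shouldKeepTrace(
    TraceClassification const& trace,
    double random01,  // uniform random value in [0, 1)
    std::chrono::milliseconds slowThreshold = std::chrono::milliseconds{1000},
    double samplingRatio = 0.10)
{
    if (trace.hasError)                   // always keep errors
        return true;
    if (trace.isConsensus)                // always keep consensus rounds
        return true;
    if (trace.duration >= slowThreshold)  // keep slow traces
        return true;
    return random01 < samplingRatio;      // probabilistic tail for everything else
}
```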
### 3.7.2 Batch Tuning Recommendations
| Environment | Batch Size | Batch Delay | Max Queue |
| ------------------ | ---------- | ----------- | --------- |
| Low-latency | 128 | 1000ms | 512 |
| High-throughput | 1024 | 10000ms | 8192 |
| Memory-constrained | 256 | 2000ms | 512 |
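As an illustration, the memory-constrained row maps onto the `Telemetry::Setup` fields from section 4.1.1; in a real deployment these values would come from the `[telemetry]` config section (section 5.1) rather than from code.
```cpp
#include <xrpl/telemetry/Telemetry.h>
// Sketch: "memory-constrained" batch tuning expressed as Setup values
xrpl::telemetry::Telemetry::Setup
memoryConstrainedSetup()
{
    xrpl::telemetry::Telemetry::Setup setup;
    setup.batchSize = 256;                               // batch_size
    setup.batchDelay = std::chrono::milliseconds{2000};  // batch_delay_ms
    setup.maxQueueSize = 512;                            // max_queue_size (~256 KB queued)
    return setup;
}
```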
### 3.7.3 Conditional Instrumentation
```cpp
// Compile-time feature flag
#ifndef XRPL_ENABLE_TELEMETRY
// Zero-cost when disabled
#define XRPL_TRACE_SPAN(t, n) ((void)0)
#endif
// Runtime component filtering
if (telemetry.shouldTracePeer())
{
XRPL_TRACE_SPAN(telemetry, "peer.message.receive");
// ... instrumentation
}
// No overhead when component tracing disabled
```
---
## 3.8 Links to Detailed Documentation
- **[Code Samples](./04-code-samples.md)**: Complete implementation code for all components
- **[Configuration Reference](./05-configuration-reference.md)**: Configuration options and collector setup
- **[Implementation Phases](./06-implementation-phases.md)**: Detailed timeline and milestones
---
## 3.9 Code Intrusiveness Assessment
This section provides a detailed assessment of how intrusive the OpenTelemetry integration is to the existing rippled codebase.
### 3.9.1 Files Modified Summary
| Component | Files Modified | Lines Added | Lines Changed | Architectural Impact |
| --------------------- | -------------- | ----------- | ------------- | -------------------- |
| **Core Telemetry** | 5 new files | ~800 | 0 | None (new module) |
| **Application Init** | 2 files | ~30 | ~5 | Minimal |
| **RPC Layer** | 3 files | ~80 | ~20 | Minimal |
| **Transaction Relay** | 4 files | ~120 | ~40 | Low |
| **Consensus** | 3 files | ~100 | ~30 | Low-Medium |
| **Protocol Buffers** | 1 file | ~25 | 0 | Low |
| **CMake/Build** | 3 files | ~50 | ~10 | Minimal |
| **Total** | **~21 files** | **~1,205** | **~105** | **Low** |
### 3.9.2 Detailed File Impact
```mermaid
pie title Code Changes by Component
"New Telemetry Module" : 800
"Transaction Relay" : 160
"Consensus" : 130
"RPC Layer" : 100
"Application Init" : 35
"Protocol Buffers" : 25
"Build System" : 60
```
#### New Files (No Impact on Existing Code)
| File | Lines | Purpose |
| ---------------------------------------------- | ----- | -------------------- |
| `include/xrpl/telemetry/Telemetry.h` | ~160 | Main interface |
| `include/xrpl/telemetry/SpanGuard.h` | ~120 | RAII wrapper |
| `include/xrpl/telemetry/TraceContext.h` | ~80 | Context propagation |
| `src/xrpld/telemetry/TracingInstrumentation.h` | ~60 | Macros |
| `src/libxrpl/telemetry/Telemetry.cpp` | ~200 | Implementation |
| `src/libxrpl/telemetry/TelemetryConfig.cpp` | ~60 | Config parsing |
| `src/libxrpl/telemetry/NullTelemetry.cpp` | ~40 | No-op implementation |
#### Modified Files (Existing Rippled Code)
| File | Lines Added | Lines Changed | Risk Level |
| ------------------------------------------------- | ----------- | ------------- | ---------- |
| `src/xrpld/app/main/Application.cpp` | ~15 | ~3 | Low |
| `include/xrpl/app/main/Application.h` | ~5 | ~2 | Low |
| `src/xrpld/rpc/detail/ServerHandler.cpp` | ~40 | ~10 | Low |
| `src/xrpld/rpc/handlers/*.cpp` | ~30 | ~8 | Low |
| `src/xrpld/overlay/detail/PeerImp.cpp` | ~60 | ~15 | Medium |
| `src/xrpld/overlay/detail/OverlayImpl.cpp` | ~30 | ~10 | Medium |
| `src/xrpld/app/consensus/RCLConsensus.cpp` | ~50 | ~15 | Medium |
| `src/xrpld/app/consensus/RCLConsensusAdaptor.cpp` | ~40 | ~12 | Medium |
| `src/xrpld/core/JobQueue.cpp` | ~20 | ~5 | Low |
| `src/xrpld/overlay/detail/ripple.proto` | ~25 | 0 | Low |
| `CMakeLists.txt` | ~40 | ~8 | Low |
| `cmake/FindOpenTelemetry.cmake` | ~50 | 0 | None (new) |
### 3.9.3 Risk Assessment by Component
<div align="center">
**Do First** ↖ ↗ **Plan Carefully**
```mermaid
quadrantChart
title Code Intrusiveness Risk Matrix
x-axis Low Risk --> High Risk
y-axis Low Value --> High Value
RPC Tracing: [0.2, 0.8]
Transaction Relay: [0.5, 0.9]
Consensus Tracing: [0.7, 0.95]
Peer Message Tracing: [0.8, 0.4]
JobQueue Context: [0.4, 0.5]
Ledger Acquisition: [0.5, 0.6]
```
**Optional** ↙ ↘ **Avoid**
</div>
#### Risk Level Definitions
| Risk Level | Definition | Mitigation |
| ---------- | ---------------------------------------------------------------- | ---------------------------------- |
| **Low** | Additive changes only; no modification to existing logic | Standard code review |
| **Medium** | Minor modifications to existing functions; clear boundaries | Comprehensive unit tests |
| **High** | Changes to core logic or data structures; potential side effects | Integration tests + staged rollout |
### 3.9.4 Architectural Impact Assessment
| Aspect | Impact | Justification |
| -------------------- | ------- | --------------------------------------------------------------------- |
| **Data Flow** | None | Tracing is purely observational; no business logic changes |
| **Threading Model** | Minimal | Context propagation uses thread-local storage (standard OTel pattern) |
| **Memory Model** | Low | Bounded queues prevent unbounded growth; RAII ensures cleanup |
| **Network Protocol** | Low | Optional fields in protobuf (high field numbers); backward compatible |
| **Configuration** | None | New config section; existing configs unaffected |
| **Build System** | Low | Optional CMake flag; builds work without OpenTelemetry |
| **Dependencies** | Low | OpenTelemetry SDK is optional; null implementation when disabled |
### 3.9.5 Backward Compatibility
| Compatibility | Status | Notes |
| --------------- | ------- | ----------------------------------------------------- |
| **Config File** | ✅ Full | New `[telemetry]` section is optional |
| **Protocol** | ✅ Full | Optional protobuf fields with high field numbers |
| **Build** | ✅ Full | `XRPL_ENABLE_TELEMETRY=OFF` produces identical binary |
| **Runtime** | ✅ Full | `enabled=0` produces zero overhead |
| **API** | ✅ Full | No changes to public RPC or P2P APIs |
### 3.9.6 Rollback Strategy
If issues are discovered after deployment:
1. **Immediate**: Set `enabled=0` in config and restart (zero code change)
2. **Quick**: Rebuild with `XRPL_ENABLE_TELEMETRY=OFF`
3. **Complete**: Revert telemetry commits (clean separation makes this easy)
### 3.9.7 Code Change Examples
**Minimal RPC Instrumentation (Low Intrusiveness):**
```cpp
// Before
void ServerHandler::onRequest(...) {
auto result = processRequest(req);
send(result);
}
// After (only a few lines added)
void ServerHandler::onRequest(...) {
XRPL_TRACE_RPC(app_.getTelemetry(), "rpc.request"); // +1 line
XRPL_TRACE_SET_ATTR("xrpl.rpc.command", command); // +1 line
auto result = processRequest(req);
XRPL_TRACE_SET_ATTR("xrpl.rpc.status", status); // +1 line
send(result);
}
```
**Consensus Instrumentation (Medium Intrusiveness):**
```cpp
// Before
void RCLConsensusAdaptor::startRound(...) {
// ... existing logic
}
// After (context storage required)
void RCLConsensusAdaptor::startRound(...) {
XRPL_TRACE_CONSENSUS(app_.getTelemetry(), "consensus.round");
XRPL_TRACE_SET_ATTR("xrpl.consensus.ledger.seq", seq);
    // Store context for child spans in phase transitions
    // (guard may be empty when consensus tracing is disabled)
    currentRoundContext_ = _xrpl_guard_          // New member variable
        ? _xrpl_guard_->context()
        : opentelemetry::context::Context{};
// ... existing logic unchanged
}
```
---
_Previous: [Design Decisions](./02-design-decisions.md)_ | _Next: [Code Samples](./04-code-samples.md)_ | _Back to: [Overview](./OpenTelemetryPlan.md)_

View File

@@ -0,0 +1,982 @@
# Code Samples
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Related**: [Implementation Strategy](./03-implementation-strategy.md) | [Configuration Reference](./05-configuration-reference.md)
---
## 4.1 Core Interfaces
### 4.1.1 Main Telemetry Interface
```cpp
// include/xrpl/telemetry/Telemetry.h
#pragma once
#include <xrpl/telemetry/TelemetryConfig.h>
#include <opentelemetry/trace/tracer.h>
#include <opentelemetry/trace/span.h>
#include <opentelemetry/context/context.h>
#include <chrono>
#include <cstdint>
#include <memory>
#include <string>
#include <string_view>
namespace xrpl {
namespace telemetry {
/**
* Main telemetry interface for OpenTelemetry integration.
*
* This class provides the primary API for distributed tracing in rippled.
* It manages the OpenTelemetry SDK lifecycle and provides convenience
* methods for creating spans and propagating context.
*/
class Telemetry
{
public:
/**
* Configuration for the telemetry system.
*/
struct Setup
{
bool enabled = false;
std::string serviceName = "rippled";
std::string serviceVersion;
std::string serviceInstanceId; // Node public key
// Exporter configuration
std::string exporterType = "otlp_grpc"; // "otlp_grpc", "otlp_http", "none"
std::string exporterEndpoint = "localhost:4317";
bool useTls = false;
std::string tlsCertPath;
// Sampling configuration
double samplingRatio = 1.0; // 1.0 = 100% sampling
// Batch processor settings
std::uint32_t batchSize = 512;
std::chrono::milliseconds batchDelay{5000};
std::uint32_t maxQueueSize = 2048;
// Network attributes
std::uint32_t networkId = 0;
std::string networkType = "mainnet";
// Component filtering
bool traceTransactions = true;
bool traceConsensus = true;
bool traceRpc = true;
bool tracePeer = false; // High volume, disabled by default
bool traceLedger = true;
};
virtual ~Telemetry() = default;
// ═══════════════════════════════════════════════════════════════════════
// LIFECYCLE
// ═══════════════════════════════════════════════════════════════════════
/** Start the telemetry system (call after configuration) */
virtual void start() = 0;
/** Stop the telemetry system (flushes pending spans) */
virtual void stop() = 0;
/** Check if telemetry is enabled */
virtual bool isEnabled() const = 0;
// ═══════════════════════════════════════════════════════════════════════
// TRACER ACCESS
// ═══════════════════════════════════════════════════════════════════════
/** Get the tracer for creating spans */
virtual opentelemetry::nostd::shared_ptr<opentelemetry::trace::Tracer>
getTracer(std::string_view name = "rippled") = 0;
// ═══════════════════════════════════════════════════════════════════════
// SPAN CREATION (Convenience Methods)
// ═══════════════════════════════════════════════════════════════════════
/** Start a new span with default options */
virtual opentelemetry::nostd::shared_ptr<opentelemetry::trace::Span>
startSpan(
std::string_view name,
opentelemetry::trace::SpanKind kind =
opentelemetry::trace::SpanKind::kInternal) = 0;
/** Start a span as child of given context */
virtual opentelemetry::nostd::shared_ptr<opentelemetry::trace::Span>
startSpan(
std::string_view name,
opentelemetry::context::Context const& parentContext,
opentelemetry::trace::SpanKind kind =
opentelemetry::trace::SpanKind::kInternal) = 0;
// ═══════════════════════════════════════════════════════════════════════
// CONTEXT PROPAGATION
// ═══════════════════════════════════════════════════════════════════════
/** Serialize context for network transmission */
virtual std::string serializeContext(
opentelemetry::context::Context const& ctx) = 0;
/** Deserialize context from network data */
virtual opentelemetry::context::Context deserializeContext(
std::string const& serialized) = 0;
// ═══════════════════════════════════════════════════════════════════════
// COMPONENT FILTERING
// ═══════════════════════════════════════════════════════════════════════
/** Check if transaction tracing is enabled */
virtual bool shouldTraceTransactions() const = 0;
/** Check if consensus tracing is enabled */
virtual bool shouldTraceConsensus() const = 0;
/** Check if RPC tracing is enabled */
virtual bool shouldTraceRpc() const = 0;
/** Check if peer message tracing is enabled */
virtual bool shouldTracePeer() const = 0;
};
// Factory functions
std::unique_ptr<Telemetry>
make_Telemetry(
Telemetry::Setup const& setup,
beast::Journal journal);
Telemetry::Setup
setup_Telemetry(
Section const& section,
std::string const& nodePublicKey,
std::string const& version);
} // namespace telemetry
} // namespace xrpl
```
---
## 4.2 RAII Span Guard
```cpp
// include/xrpl/telemetry/SpanGuard.h
#pragma once
#include <opentelemetry/trace/span.h>
#include <opentelemetry/trace/scope.h>
#include <opentelemetry/trace/status.h>
#include <opentelemetry/context/runtime_context.h>
#include <string>
#include <string_view>
#include <exception>
namespace xrpl {
namespace telemetry {
/**
* RAII guard for OpenTelemetry spans.
*
* Automatically ends the span on destruction and makes it the current
* span in the thread-local context.
*/
class SpanGuard
{
opentelemetry::nostd::shared_ptr<opentelemetry::trace::Span> span_;
opentelemetry::trace::Scope scope_;
public:
/**
* Construct guard with span.
* The span becomes the current span in thread-local context.
*/
explicit SpanGuard(
opentelemetry::nostd::shared_ptr<opentelemetry::trace::Span> span)
: span_(std::move(span))
, scope_(span_)
{
}
// Non-copyable, non-movable
SpanGuard(SpanGuard const&) = delete;
SpanGuard& operator=(SpanGuard const&) = delete;
SpanGuard(SpanGuard&&) = delete;
SpanGuard& operator=(SpanGuard&&) = delete;
~SpanGuard()
{
if (span_)
span_->End();
}
/** Access the underlying span */
opentelemetry::trace::Span& span() { return *span_; }
opentelemetry::trace::Span const& span() const { return *span_; }
/** Set span status to OK */
void setOk()
{
span_->SetStatus(opentelemetry::trace::StatusCode::kOk);
}
/** Set span status with code and description */
void setStatus(
opentelemetry::trace::StatusCode code,
std::string_view description = "")
{
span_->SetStatus(code, std::string(description));
}
/** Set an attribute on the span */
template<typename T>
void setAttribute(std::string_view key, T&& value)
{
span_->SetAttribute(
opentelemetry::nostd::string_view(key.data(), key.size()),
std::forward<T>(value));
}
/** Add an event to the span */
void addEvent(std::string_view name)
{
span_->AddEvent(std::string(name));
}
/** Record an exception on the span */
void recordException(std::exception const& e)
{
span_->RecordException(e);
span_->SetStatus(
opentelemetry::trace::StatusCode::kError,
e.what());
}
/** Get the current trace context */
opentelemetry::context::Context context() const
{
return opentelemetry::context::RuntimeContext::GetCurrent();
}
};
/**
* No-op span guard for when tracing is disabled.
* Provides the same interface but does nothing.
*/
class NullSpanGuard
{
public:
NullSpanGuard() = default;
void setOk() {}
void setStatus(opentelemetry::trace::StatusCode, std::string_view = "") {}
template<typename T>
void setAttribute(std::string_view, T&&) {}
void addEvent(std::string_view) {}
void recordException(std::exception const&) {}
};
} // namespace telemetry
} // namespace xrpl
```
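A brief usage sketch showing how a call site would pair `Telemetry::startSpan` with `SpanGuard`; `doWork()`, the span name, and the attribute values are placeholders, not part of the interfaces above.
```cpp
#include <xrpl/telemetry/SpanGuard.h>
#include <xrpl/telemetry/Telemetry.h>
void doWork();  // placeholder for the instrumented operation
void
tracedOperation(xrpl::telemetry::Telemetry& telemetry)
{
    xrpl::telemetry::SpanGuard guard(telemetry.startSpan("ledger.acquire"));
    guard.setAttribute("xrpl.ledger.seq", static_cast<int64_t>(12345));
    try
    {
        doWork();
        guard.setOk();
    }
    catch (std::exception const& e)
    {
        guard.recordException(e);  // records the exception and marks the span kError
        throw;
    }
}   // guard destructor ends the span and pops it from the thread-local context
```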
---
## 4.3 Instrumentation Macros
```cpp
// src/xrpld/telemetry/TracingInstrumentation.h
#pragma once
#include <xrpl/telemetry/Telemetry.h>
#include <xrpl/telemetry/SpanGuard.h>
namespace xrpl {
namespace telemetry {
// ═══════════════════════════════════════════════════════════════════════════
// INSTRUMENTATION MACROS
// ═══════════════════════════════════════════════════════════════════════════
#ifdef XRPL_ENABLE_TELEMETRY
// Start a span that is automatically ended when guard goes out of scope
#define XRPL_TRACE_SPAN(telemetry, name) \
auto _xrpl_span_ = (telemetry).startSpan(name); \
::xrpl::telemetry::SpanGuard _xrpl_guard_(_xrpl_span_)
// Start a span with specific kind
#define XRPL_TRACE_SPAN_KIND(telemetry, name, kind) \
auto _xrpl_span_ = (telemetry).startSpan(name, kind); \
::xrpl::telemetry::SpanGuard _xrpl_guard_(_xrpl_span_)
// Conditional span based on component
#define XRPL_TRACE_TX(telemetry, name) \
std::optional<::xrpl::telemetry::SpanGuard> _xrpl_guard_; \
if ((telemetry).shouldTraceTransactions()) { \
_xrpl_guard_.emplace((telemetry).startSpan(name)); \
}
#define XRPL_TRACE_CONSENSUS(telemetry, name) \
std::optional<::xrpl::telemetry::SpanGuard> _xrpl_guard_; \
if ((telemetry).shouldTraceConsensus()) { \
_xrpl_guard_.emplace((telemetry).startSpan(name)); \
}
#define XRPL_TRACE_RPC(telemetry, name) \
std::optional<::xrpl::telemetry::SpanGuard> _xrpl_guard_; \
if ((telemetry).shouldTraceRpc()) { \
_xrpl_guard_.emplace((telemetry).startSpan(name)); \
}
// Set attribute on current span (if exists)
#define XRPL_TRACE_SET_ATTR(key, value) \
if (_xrpl_guard_.has_value()) { \
_xrpl_guard_->setAttribute(key, value); \
}
// Record exception on current span
#define XRPL_TRACE_EXCEPTION(e) \
if (_xrpl_guard_.has_value()) { \
_xrpl_guard_->recordException(e); \
}
#else // XRPL_ENABLE_TELEMETRY not defined
#define XRPL_TRACE_SPAN(telemetry, name) ((void)0)
#define XRPL_TRACE_SPAN_KIND(telemetry, name, kind) ((void)0)
#define XRPL_TRACE_TX(telemetry, name) ((void)0)
#define XRPL_TRACE_CONSENSUS(telemetry, name) ((void)0)
#define XRPL_TRACE_RPC(telemetry, name) ((void)0)
#define XRPL_TRACE_SET_ATTR(key, value) ((void)0)
#define XRPL_TRACE_EXCEPTION(e) ((void)0)
#endif // XRPL_ENABLE_TELEMETRY
} // namespace telemetry
} // namespace xrpl
```
---
## 4.4 Protocol Buffer Extensions
### 4.4.1 TraceContext Message Definition
Add to `src/xrpld/overlay/detail/ripple.proto`:
```protobuf
// Trace context for distributed tracing across nodes
// Uses W3C Trace Context format internally
message TraceContext {
// 16-byte trace identifier (required for valid context)
bytes trace_id = 1;
// 8-byte span identifier of parent span
bytes span_id = 2;
// Trace flags (bit 0 = sampled)
uint32 trace_flags = 3;
// W3C tracestate header value for vendor-specific data
string trace_state = 4;
}
// Extend existing messages with optional trace context
// High field numbers (1000+) to avoid conflicts
message TMTransaction {
// ... existing fields ...
// Optional trace context for distributed tracing
optional TraceContext trace_context = 1001;
}
message TMProposeSet {
// ... existing fields ...
optional TraceContext trace_context = 1001;
}
message TMValidation {
// ... existing fields ...
optional TraceContext trace_context = 1001;
}
message TMGetLedger {
// ... existing fields ...
optional TraceContext trace_context = 1001;
}
message TMLedgerData {
// ... existing fields ...
optional TraceContext trace_context = 1001;
}
```
### 4.4.2 Context Serialization/Deserialization
```cpp
// include/xrpl/telemetry/TraceContext.h
#pragma once
#include <opentelemetry/context/context.h>
#include <opentelemetry/trace/context.h>       // GetSpan, kSpanKey
#include <opentelemetry/trace/default_span.h>
#include <opentelemetry/trace/span_context.h>
#include <protocol/messages.h> // Generated protobuf
#include <cstdint>
#include <functional>
#include <optional>
#include <string>
namespace xrpl {
namespace telemetry {
/**
* Utilities for trace context serialization and propagation.
*/
class TraceContextPropagator
{
public:
/**
* Extract trace context from Protocol Buffer message.
* Returns empty context if no trace info present.
*/
static opentelemetry::context::Context
extract(protocol::TraceContext const& proto);
/**
* Inject current trace context into Protocol Buffer message.
*/
static void
inject(
opentelemetry::context::Context const& ctx,
protocol::TraceContext& proto);
/**
* Extract trace context from HTTP headers (for RPC).
* Supports W3C Trace Context (traceparent, tracestate).
*/
static opentelemetry::context::Context
extractFromHeaders(
std::function<std::optional<std::string>(std::string_view)> headerGetter);
/**
* Inject trace context into HTTP headers (for RPC responses).
*/
static void
injectToHeaders(
opentelemetry::context::Context const& ctx,
std::function<void(std::string_view, std::string_view)> headerSetter);
};
// ═══════════════════════════════════════════════════════════════════════════
// IMPLEMENTATION
// ═══════════════════════════════════════════════════════════════════════════
inline opentelemetry::context::Context
TraceContextPropagator::extract(protocol::TraceContext const& proto)
{
using namespace opentelemetry::trace;
if (proto.trace_id().size() != 16 || proto.span_id().size() != 8)
return opentelemetry::context::Context{}; // Invalid, return empty
// Construct TraceId and SpanId from bytes
    TraceId traceId(opentelemetry::nostd::span<uint8_t const, TraceId::kSize>(
        reinterpret_cast<uint8_t const*>(proto.trace_id().data()), TraceId::kSize));
    SpanId spanId(opentelemetry::nostd::span<uint8_t const, SpanId::kSize>(
        reinterpret_cast<uint8_t const*>(proto.span_id().data()), SpanId::kSize));
TraceFlags flags(static_cast<uint8_t>(proto.trace_flags()));
// Create SpanContext from extracted data
SpanContext spanContext(traceId, spanId, flags, /* remote = */ true);
// Create context with extracted span as parent
return opentelemetry::context::Context{}.SetValue(
opentelemetry::trace::kSpanKey,
opentelemetry::nostd::shared_ptr<Span>(
new DefaultSpan(spanContext)));
}
inline void
TraceContextPropagator::inject(
opentelemetry::context::Context const& ctx,
protocol::TraceContext& proto)
{
using namespace opentelemetry::trace;
// Get current span from context
auto span = GetSpan(ctx);
if (!span)
return;
auto const& spanCtx = span->GetContext();
if (!spanCtx.IsValid())
return;
// Serialize trace_id (16 bytes)
auto const& traceId = spanCtx.trace_id();
proto.set_trace_id(traceId.Id().data(), TraceId::kSize);
// Serialize span_id (8 bytes)
auto const& spanId = spanCtx.span_id();
proto.set_span_id(spanId.Id().data(), SpanId::kSize);
// Serialize flags
proto.set_trace_flags(spanCtx.trace_flags().flags());
// Note: tracestate not implemented yet
}
} // namespace telemetry
} // namespace xrpl
```
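`extractFromHeaders` is declared above but not implemented in this plan. The following is a possible sketch: it parses the W3C `traceparent` header (`00-<32 hex trace id>-<16 hex span id>-<2 hex flags>`) by hand and reuses the same `SpanContext` construction as `extract()`. Error handling, non-`00` versions, and `tracestate` support are omitted; the structure is illustrative only.
```cpp
inline opentelemetry::context::Context
TraceContextPropagator::extractFromHeaders(
    std::function<std::optional<std::string>(std::string_view)> headerGetter)
{
    using namespace opentelemetry::trace;
    auto header = headerGetter("traceparent");
    if (!header || header->size() != 55)  // "00-" + 32 + "-" + 16 + "-" + 2
        return opentelemetry::context::Context{};
    auto hexNibble = [](char c) -> int {
        if (c >= '0' && c <= '9') return c - '0';
        if (c >= 'a' && c <= 'f') return c - 'a' + 10;
        return -1;
    };
    auto parseHex = [&](std::string_view hex, uint8_t* out) {
        for (std::size_t i = 0; i < hex.size(); i += 2)
        {
            int hi = hexNibble(hex[i]), lo = hexNibble(hex[i + 1]);
            if (hi < 0 || lo < 0)
                return false;
            out[i / 2] = static_cast<uint8_t>((hi << 4) | lo);
        }
        return true;
    };
    std::string_view const tp{*header};
    uint8_t traceIdBytes[16], spanIdBytes[8], flagsByte[1];
    if (tp.substr(0, 3) != "00-" || tp[35] != '-' || tp[52] != '-' ||
        !parseHex(tp.substr(3, 32), traceIdBytes) ||
        !parseHex(tp.substr(36, 16), spanIdBytes) ||
        !parseHex(tp.substr(53, 2), flagsByte))
        return opentelemetry::context::Context{};
    // Same construction pattern as extract() above: wrap the remote span context
    SpanContext spanContext(
        TraceId(traceIdBytes),
        SpanId(spanIdBytes),
        TraceFlags(flagsByte[0]),
        /* remote = */ true);
    return opentelemetry::context::Context{}.SetValue(
        opentelemetry::trace::kSpanKey,
        opentelemetry::nostd::shared_ptr<Span>(
            new DefaultSpan(spanContext)));
}
```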
---
## 4.5 Module-Specific Instrumentation
### 4.5.1 Transaction Relay Instrumentation
```cpp
// src/xrpld/overlay/detail/PeerImp.cpp (modified)
#include <xrpl/telemetry/TracingInstrumentation.h>
void
PeerImp::handleTransaction(
std::shared_ptr<protocol::TMTransaction> const& m)
{
// Extract trace context from incoming message
opentelemetry::context::Context parentCtx;
if (m->has_trace_context())
{
parentCtx = telemetry::TraceContextPropagator::extract(
m->trace_context());
}
// Start span as child of remote span (cross-node link)
auto span = app_.getTelemetry().startSpan(
"tx.receive",
parentCtx,
opentelemetry::trace::SpanKind::kServer);
telemetry::SpanGuard guard(span);
try
{
// Parse and validate transaction
SerialIter sit(makeSlice(m->rawtransaction()));
auto stx = std::make_shared<STTx const>(sit);
// Add transaction attributes
guard.setAttribute("xrpl.tx.hash", to_string(stx->getTransactionID()));
guard.setAttribute("xrpl.tx.type", stx->getTxnType());
guard.setAttribute("xrpl.peer.id", remote_address_.to_string());
// Check if we've seen this transaction (HashRouter)
auto const [flags, suppressed] =
app_.getHashRouter().addSuppressionPeer(
stx->getTransactionID(),
id_);
if (suppressed)
{
guard.setAttribute("xrpl.tx.suppressed", true);
guard.addEvent("tx.duplicate");
return; // Already processing this transaction
}
// Create child span for validation
{
auto validateSpan = app_.getTelemetry().startSpan("tx.validate");
telemetry::SpanGuard validateGuard(validateSpan);
auto [validity, reason] = checkTransaction(stx);
validateGuard.setAttribute("xrpl.tx.validity",
validity == Validity::Valid ? "valid" : "invalid");
if (validity != Validity::Valid)
{
validateGuard.setStatus(
opentelemetry::trace::StatusCode::kError,
reason);
return;
}
}
// Relay to other peers (capture context for propagation)
auto ctx = guard.context();
// Create child span for relay
auto relaySpan = app_.getTelemetry().startSpan(
"tx.relay",
ctx,
opentelemetry::trace::SpanKind::kClient);
{
telemetry::SpanGuard relayGuard(relaySpan);
// Inject context into outgoing message
protocol::TraceContext protoCtx;
telemetry::TraceContextPropagator::inject(
relayGuard.context(), protoCtx);
        // Relay to other peers; relay() is assumed here to be extended to carry
        // the trace context and to report the number of peers reached
        auto const relayCount = app_.overlay().relay(
            stx->getTransactionID(),
            *m,
            protoCtx,    // Pass trace context
            exclusions); // peers to skip (computation omitted)
        relayGuard.setAttribute("xrpl.tx.relay_count",
            static_cast<int64_t>(relayCount));
}
guard.setOk();
}
catch (std::exception const& e)
{
guard.recordException(e);
JLOG(journal_.warn()) << "Transaction handling failed: " << e.what();
}
}
```
### 4.5.2 Consensus Instrumentation
```cpp
// src/xrpld/app/consensus/RCLConsensus.cpp (modified)
#include <xrpl/telemetry/TracingInstrumentation.h>
void
RCLConsensusAdaptor::startRound(
NetClock::time_point const& now,
RCLCxLedger::ID const& prevLedgerHash,
RCLCxLedger const& prevLedger,
hash_set<NodeID> const& peers,
bool proposing)
{
XRPL_TRACE_CONSENSUS(app_.getTelemetry(), "consensus.round");
XRPL_TRACE_SET_ATTR("xrpl.consensus.ledger.prev", to_string(prevLedgerHash));
XRPL_TRACE_SET_ATTR("xrpl.consensus.ledger.seq",
static_cast<int64_t>(prevLedger.seq() + 1));
XRPL_TRACE_SET_ATTR("xrpl.consensus.proposers",
static_cast<int64_t>(peers.size()));
XRPL_TRACE_SET_ATTR("xrpl.consensus.mode",
proposing ? "proposing" : "observing");
// Store trace context for use in phase transitions
currentRoundContext_ = _xrpl_guard_.has_value()
? _xrpl_guard_->context()
: opentelemetry::context::Context{};
// ... existing implementation ...
}
ConsensusPhase
RCLConsensusAdaptor::phaseTransition(ConsensusPhase newPhase)
{
// Create span for phase transition
auto span = app_.getTelemetry().startSpan(
"consensus.phase." + to_string(newPhase),
currentRoundContext_);
telemetry::SpanGuard guard(span);
guard.setAttribute("xrpl.consensus.phase", to_string(newPhase));
guard.addEvent("phase.enter");
auto const startTime = std::chrono::steady_clock::now();
try
{
auto result = doPhaseTransition(newPhase);
auto const duration = std::chrono::steady_clock::now() - startTime;
guard.setAttribute("xrpl.consensus.phase_duration_ms",
std::chrono::duration<double, std::milli>(duration).count());
guard.setOk();
return result;
}
catch (std::exception const& e)
{
guard.recordException(e);
throw;
}
}
void
RCLConsensusAdaptor::peerProposal(
NetClock::time_point const& now,
RCLCxPeerPos const& proposal)
{
// Extract trace context from proposal message
opentelemetry::context::Context parentCtx;
if (proposal.hasTraceContext())
{
parentCtx = telemetry::TraceContextPropagator::extract(
proposal.traceContext());
}
auto span = app_.getTelemetry().startSpan(
"consensus.proposal.receive",
parentCtx,
opentelemetry::trace::SpanKind::kServer);
telemetry::SpanGuard guard(span);
guard.setAttribute("xrpl.consensus.proposer",
toBase58(TokenType::NodePublic, proposal.nodeId()));
guard.setAttribute("xrpl.consensus.round",
static_cast<int64_t>(proposal.proposal().proposeSeq()));
// ... existing implementation ...
guard.setOk();
}
```
### 4.5.3 RPC Handler Instrumentation
```cpp
// src/xrpld/rpc/detail/ServerHandler.cpp (modified)
#include <xrpl/telemetry/TracingInstrumentation.h>
void
ServerHandler::onRequest(
http_request_type&& req,
std::function<void(http_response_type&&)>&& send)
{
// Extract trace context from HTTP headers (W3C Trace Context)
auto parentCtx = telemetry::TraceContextPropagator::extractFromHeaders(
[&req](std::string_view name) -> std::optional<std::string> {
            auto it = req.find(std::string(name));
if (it != req.end())
return std::string(it->value());
return std::nullopt;
});
// Start request span
auto span = app_.getTelemetry().startSpan(
"rpc.request",
parentCtx,
opentelemetry::trace::SpanKind::kServer);
telemetry::SpanGuard guard(span);
// Add HTTP attributes
guard.setAttribute("http.method", std::string(req.method_string()));
guard.setAttribute("http.target", std::string(req.target()));
guard.setAttribute("http.user_agent",
std::string(req[boost::beast::http::field::user_agent]));
auto const startTime = std::chrono::steady_clock::now();
try
{
// Parse and process request
auto const& body = req.body();
Json::Value jv;
Json::Reader reader;
if (!reader.parse(body, jv))
{
guard.setStatus(
opentelemetry::trace::StatusCode::kError,
"Invalid JSON");
sendError(send, "Invalid JSON");
return;
}
// Extract command name
std::string command = jv.isMember("command")
? jv["command"].asString()
: jv.isMember("method")
? jv["method"].asString()
: "unknown";
guard.setAttribute("xrpl.rpc.command", command);
// Create child span for command execution
auto cmdSpan = app_.getTelemetry().startSpan(
"rpc.command." + command);
{
telemetry::SpanGuard cmdGuard(cmdSpan);
// Execute RPC command
auto result = processRequest(jv);
// Record result attributes
if (result.isMember("status"))
{
cmdGuard.setAttribute("xrpl.rpc.status",
result["status"].asString());
}
if (result["status"].asString() == "error")
{
cmdGuard.setStatus(
opentelemetry::trace::StatusCode::kError,
result.isMember("error_message")
? result["error_message"].asString()
: "RPC error");
}
else
{
cmdGuard.setOk();
}
}
auto const duration = std::chrono::steady_clock::now() - startTime;
guard.setAttribute("http.duration_ms",
std::chrono::duration<double, std::milli>(duration).count());
// Inject trace context into response headers
        http_response_type resp;  // status and body population from `result` omitted
telemetry::TraceContextPropagator::injectToHeaders(
guard.context(),
[&resp](std::string_view name, std::string_view value) {
resp.set(std::string(name), std::string(value));
});
guard.setOk();
send(std::move(resp));
}
catch (std::exception const& e)
{
guard.recordException(e);
JLOG(journal_.error()) << "RPC request failed: " << e.what();
sendError(send, e.what());
}
}
```
### 4.5.4 JobQueue Context Propagation
```cpp
// src/xrpld/core/JobQueue.h (modified)
#include <opentelemetry/context/context.h>
class Job
{
// ... existing members ...
// Captured trace context at job creation
opentelemetry::context::Context traceContext_;
public:
// Constructor captures current trace context
Job(JobType type, std::function<void()> func, ...)
: type_(type)
, func_(std::move(func))
, traceContext_(opentelemetry::context::RuntimeContext::GetCurrent())
// ... other initializations ...
{
}
// Get trace context for restoration during execution
opentelemetry::context::Context const&
traceContext() const { return traceContext_; }
};
// src/xrpld/core/JobQueue.cpp (modified)
void
Worker::run()
{
while (auto job = getJob())
{
// Restore trace context from job creation
auto token = opentelemetry::context::RuntimeContext::Attach(
job->traceContext());
// Start execution span
auto span = app_.getTelemetry().startSpan("job.execute");
telemetry::SpanGuard guard(span);
guard.setAttribute("xrpl.job.type", to_string(job->type()));
guard.setAttribute("xrpl.job.queue_ms", job->queueTimeMs());
guard.setAttribute("xrpl.job.worker", workerId_);
try
{
job->execute();
guard.setOk();
}
catch (std::exception const& e)
{
guard.recordException(e);
JLOG(journal_.error()) << "Job execution failed: " << e.what();
}
}
}
```
---
## 4.6 Span Flow Visualization
<div align="center">
```mermaid
flowchart TB
subgraph Client["External Client"]
submit["Submit TX"]
end
subgraph NodeA["rippled Node A"]
rpcA["rpc.request"]
cmdA["rpc.command.submit"]
txRecvA["tx.receive"]
txValA["tx.validate"]
txRelayA["tx.relay"]
end
subgraph NodeB["rippled Node B"]
txRecvB["tx.receive"]
txValB["tx.validate"]
txRelayB["tx.relay"]
end
subgraph NodeC["rippled Node C"]
txRecvC["tx.receive"]
consensusC["consensus.round"]
phaseC["consensus.phase.establish"]
end
submit --> rpcA
rpcA --> cmdA
cmdA --> txRecvA
txRecvA --> txValA
txValA --> txRelayA
txRelayA -.->|"TraceContext"| txRecvB
txRecvB --> txValB
txValB --> txRelayB
txRelayB -.->|"TraceContext"| txRecvC
txRecvC --> consensusC
consensusC --> phaseC
style Client fill:#334155,stroke:#1e293b,color:#fff
style NodeA fill:#1e3a8a,stroke:#172554,color:#fff
style NodeB fill:#064e3b,stroke:#022c22,color:#fff
style NodeC fill:#78350f,stroke:#451a03,color:#fff
style submit fill:#e2e8f0,stroke:#cbd5e1,color:#1e293b
style rpcA fill:#1d4ed8,stroke:#1e40af,color:#fff
style cmdA fill:#1d4ed8,stroke:#1e40af,color:#fff
style txRecvA fill:#047857,stroke:#064e3b,color:#fff
style txValA fill:#047857,stroke:#064e3b,color:#fff
style txRelayA fill:#047857,stroke:#064e3b,color:#fff
style txRecvB fill:#047857,stroke:#064e3b,color:#fff
style txValB fill:#047857,stroke:#064e3b,color:#fff
style txRelayB fill:#047857,stroke:#064e3b,color:#fff
style txRecvC fill:#047857,stroke:#064e3b,color:#fff
style consensusC fill:#fef3c7,stroke:#fde68a,color:#1e293b
style phaseC fill:#fef3c7,stroke:#fde68a,color:#1e293b
```
</div>
---
_Previous: [Implementation Strategy](./03-implementation-strategy.md)_ | _Next: [Configuration Reference](./05-configuration-reference.md)_ | _Back to: [Overview](./OpenTelemetryPlan.md)_

View File

@@ -0,0 +1,954 @@
# Configuration Reference
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Related**: [Code Samples](./04-code-samples.md) | [Implementation Phases](./06-implementation-phases.md)
---
## 5.1 rippled Configuration
### 5.1.1 Configuration File Section
Add to `cfg/xrpld-example.cfg`:
```ini
# ═══════════════════════════════════════════════════════════════════════════════
# TELEMETRY (OpenTelemetry Distributed Tracing)
# ═══════════════════════════════════════════════════════════════════════════════
#
# Enables distributed tracing for transaction flow, consensus, and RPC calls.
# Traces are exported to an OpenTelemetry Collector using OTLP protocol.
#
# [telemetry]
#
# # Enable/disable telemetry (default: 0 = disabled)
# enabled=1
#
# # Exporter type: "otlp_grpc" (default), "otlp_http", or "none"
# exporter=otlp_grpc
#
# # OTLP endpoint (default: localhost:4317 for gRPC, localhost:4318 for HTTP)
# endpoint=localhost:4317
#
# # Use TLS for exporter connection (default: 0)
# use_tls=0
#
# # Path to CA certificate for TLS (optional)
# # tls_ca_cert=/path/to/ca.crt
#
# # Sampling ratio: 0.0-1.0 (default: 1.0 = 100% sampling)
# # Use lower values in production to reduce overhead
# sampling_ratio=0.1
#
# # Batch processor settings
# batch_size=512 # Spans per batch (default: 512)
# batch_delay_ms=5000 # Max delay before sending batch (default: 5000)
# max_queue_size=2048 # Max queued spans (default: 2048)
#
# # Component-specific tracing (default: all enabled except peer)
# trace_transactions=1 # Transaction relay and processing
# trace_consensus=1 # Consensus rounds and proposals
# trace_rpc=1 # RPC request handling
# trace_peer=0 # Peer messages (high volume, disabled by default)
# trace_ledger=1 # Ledger acquisition and building
#
# # Service identification (automatically detected if not specified)
# # service_name=rippled
# # service_instance_id=<node_public_key>
[telemetry]
enabled=0
```
### 5.1.2 Configuration Options Summary
| Option | Type | Default | Description |
| --------------------- | ------ | ---------------- | ----------------------------------------- |
| `enabled` | bool | `false` | Enable/disable telemetry |
| `exporter` | string | `"otlp_grpc"` | Exporter type: otlp_grpc, otlp_http, none |
| `endpoint` | string | `localhost:4317` | OTLP collector endpoint |
| `use_tls` | bool | `false` | Enable TLS for exporter connection |
| `tls_ca_cert` | string | `""` | Path to CA certificate file |
| `sampling_ratio` | float | `1.0` | Sampling ratio (0.0-1.0) |
| `batch_size` | uint | `512` | Spans per export batch |
| `batch_delay_ms` | uint | `5000` | Max delay before sending batch (ms) |
| `max_queue_size` | uint | `2048` | Maximum queued spans |
| `trace_transactions` | bool | `true` | Enable transaction tracing |
| `trace_consensus` | bool | `true` | Enable consensus tracing |
| `trace_rpc` | bool | `true` | Enable RPC tracing |
| `trace_peer` | bool | `false` | Enable peer message tracing (high volume) |
| `trace_ledger` | bool | `true` | Enable ledger tracing |
| `service_name` | string | `"rippled"` | Service name for traces |
| `service_instance_id` | string | `<node_pubkey>` | Instance identifier |
---
## 5.2 Configuration Parser
```cpp
// src/libxrpl/telemetry/TelemetryConfig.cpp
#include <xrpl/telemetry/Telemetry.h>
#include <xrpl/basics/Log.h>
#include <xrpl/basics/contract.h>  // Throw
namespace xrpl {
namespace telemetry {
Telemetry::Setup
setup_Telemetry(
Section const& section,
std::string const& nodePublicKey,
std::string const& version)
{
Telemetry::Setup setup;
// Basic settings
setup.enabled = section.value_or("enabled", false);
setup.serviceName = section.value_or("service_name", "rippled");
setup.serviceVersion = version;
setup.serviceInstanceId = section.value_or(
"service_instance_id", nodePublicKey);
// Exporter settings
setup.exporterType = section.value_or("exporter", "otlp_grpc");
if (setup.exporterType == "otlp_grpc")
setup.exporterEndpoint = section.value_or("endpoint", "localhost:4317");
else if (setup.exporterType == "otlp_http")
setup.exporterEndpoint = section.value_or("endpoint", "localhost:4318");
setup.useTls = section.value_or("use_tls", false);
setup.tlsCertPath = section.value_or("tls_ca_cert", "");
// Sampling
setup.samplingRatio = section.value_or("sampling_ratio", 1.0);
if (setup.samplingRatio < 0.0 || setup.samplingRatio > 1.0)
{
Throw<std::runtime_error>(
"telemetry.sampling_ratio must be between 0.0 and 1.0");
}
// Batch processor
setup.batchSize = section.value_or("batch_size", 512u);
setup.batchDelay = std::chrono::milliseconds{
section.value_or("batch_delay_ms", 5000u)};
setup.maxQueueSize = section.value_or("max_queue_size", 2048u);
// Component filtering
setup.traceTransactions = section.value_or("trace_transactions", true);
setup.traceConsensus = section.value_or("trace_consensus", true);
setup.traceRpc = section.value_or("trace_rpc", true);
setup.tracePeer = section.value_or("trace_peer", false);
setup.traceLedger = section.value_or("trace_ledger", true);
return setup;
}
} // namespace telemetry
} // namespace xrpl
```
---
## 5.3 Application Integration
### 5.3.1 ApplicationImp Changes
```cpp
// src/xrpld/app/main/Application.cpp (modified)
#include <xrpl/telemetry/Telemetry.h>
class ApplicationImp : public Application
{
// ... existing members ...
// Telemetry (must be constructed early, destroyed late)
std::unique_ptr<telemetry::Telemetry> telemetry_;
public:
ApplicationImp(...)
{
// Initialize telemetry early (before other components)
auto telemetrySection = config_->section("telemetry");
auto telemetrySetup = telemetry::setup_Telemetry(
telemetrySection,
toBase58(TokenType::NodePublic, nodeIdentity_.publicKey()),
BuildInfo::getVersionString());
// Set network attributes
telemetrySetup.networkId = config_->NETWORK_ID;
telemetrySetup.networkType = [&]() {
if (config_->NETWORK_ID == 0) return "mainnet";
if (config_->NETWORK_ID == 1) return "testnet";
if (config_->NETWORK_ID == 2) return "devnet";
return "custom";
}();
telemetry_ = telemetry::make_Telemetry(
telemetrySetup,
logs_->journal("Telemetry"));
// ... rest of initialization ...
}
void start() override
{
// Start telemetry first
if (telemetry_)
telemetry_->start();
// ... existing start code ...
}
void stop() override
{
// ... existing stop code ...
// Stop telemetry last (to capture shutdown spans)
if (telemetry_)
telemetry_->stop();
}
telemetry::Telemetry& getTelemetry() override
{
assert(telemetry_);
return *telemetry_;
}
};
```
### 5.3.2 Application Interface Addition
```cpp
// include/xrpl/app/main/Application.h (modified)
namespace telemetry { class Telemetry; }
class Application
{
public:
// ... existing virtual methods ...
/** Get the telemetry system for distributed tracing */
virtual telemetry::Telemetry& getTelemetry() = 0;
};
```
---
## 5.4 CMake Integration
### 5.4.1 Find OpenTelemetry Module
```cmake
# cmake/FindOpenTelemetry.cmake
# Find OpenTelemetry C++ SDK
#
# This module defines:
# OpenTelemetry_FOUND - System has OpenTelemetry
# OpenTelemetry::api - API library target
# OpenTelemetry::sdk - SDK library target
# OpenTelemetry::otlp_grpc_exporter - OTLP gRPC exporter target
# OpenTelemetry::otlp_http_exporter - OTLP HTTP exporter target
find_package(opentelemetry-cpp CONFIG QUIET)
if(opentelemetry-cpp_FOUND)
set(OpenTelemetry_FOUND TRUE)
# Create imported targets if not already created by config
if(NOT TARGET OpenTelemetry::api)
add_library(OpenTelemetry::api ALIAS opentelemetry-cpp::api)
endif()
if(NOT TARGET OpenTelemetry::sdk)
add_library(OpenTelemetry::sdk ALIAS opentelemetry-cpp::sdk)
endif()
if(NOT TARGET OpenTelemetry::otlp_grpc_exporter)
add_library(OpenTelemetry::otlp_grpc_exporter ALIAS
opentelemetry-cpp::otlp_grpc_exporter)
endif()
else()
# Try pkg-config fallback
find_package(PkgConfig QUIET)
if(PKG_CONFIG_FOUND)
pkg_check_modules(OTEL opentelemetry-cpp QUIET)
if(OTEL_FOUND)
set(OpenTelemetry_FOUND TRUE)
# Create imported targets from pkg-config
add_library(OpenTelemetry::api INTERFACE IMPORTED)
target_include_directories(OpenTelemetry::api INTERFACE
${OTEL_INCLUDE_DIRS})
endif()
endif()
endif()
include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(OpenTelemetry
REQUIRED_VARS OpenTelemetry_FOUND)
```
### 5.4.2 CMakeLists.txt Changes
```cmake
# CMakeLists.txt (additions)
# ═══════════════════════════════════════════════════════════════════════════════
# TELEMETRY OPTIONS
# ═══════════════════════════════════════════════════════════════════════════════
option(XRPL_ENABLE_TELEMETRY
"Enable OpenTelemetry distributed tracing support" OFF)
if(XRPL_ENABLE_TELEMETRY)
find_package(OpenTelemetry REQUIRED)
# Define compile-time flag
add_compile_definitions(XRPL_ENABLE_TELEMETRY)
message(STATUS "OpenTelemetry tracing: ENABLED")
else()
message(STATUS "OpenTelemetry tracing: DISABLED")
endif()
# ═══════════════════════════════════════════════════════════════════════════════
# TELEMETRY LIBRARY
# ═══════════════════════════════════════════════════════════════════════════════
if(XRPL_ENABLE_TELEMETRY)
add_library(xrpl_telemetry
src/libxrpl/telemetry/Telemetry.cpp
src/libxrpl/telemetry/TelemetryConfig.cpp
src/libxrpl/telemetry/TraceContext.cpp
)
target_include_directories(xrpl_telemetry
PUBLIC
${CMAKE_CURRENT_SOURCE_DIR}/include
)
target_link_libraries(xrpl_telemetry
PUBLIC
OpenTelemetry::api
OpenTelemetry::sdk
OpenTelemetry::otlp_grpc_exporter
PRIVATE
xrpl_basics
)
# Add to main library dependencies
target_link_libraries(xrpld PRIVATE xrpl_telemetry)
else()
# Create null implementation library
add_library(xrpl_telemetry
src/libxrpl/telemetry/NullTelemetry.cpp
)
target_include_directories(xrpl_telemetry
PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include
)
endif()
```
---
## 5.5 OpenTelemetry Collector Configuration
### 5.5.1 Development Configuration
```yaml
# otel-collector-dev.yaml
# Minimal configuration for local development
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
processors:
batch:
timeout: 1s
send_batch_size: 100
exporters:
# Console output for debugging
logging:
verbosity: detailed
sampling_initial: 5
sampling_thereafter: 200
  # Jaeger for trace visualization (Jaeger ingests OTLP natively; the dedicated
  # jaeger exporter was removed from recent collector-contrib releases)
  otlp/jaeger:
    endpoint: jaeger:4317
    tls:
      insecure: true
# Grafana Tempo for trace storage
otlp/tempo:
endpoint: tempo:4317
tls:
insecure: true
service:
pipelines:
traces:
receivers: [otlp]
processors: [batch]
      exporters: [logging, otlp/jaeger, otlp/tempo]
```
### 5.5.2 Production Configuration
```yaml
# otel-collector-prod.yaml
# Production configuration with filtering, sampling, and multiple backends
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
tls:
cert_file: /etc/otel/server.crt
key_file: /etc/otel/server.key
ca_file: /etc/otel/ca.crt
processors:
# Memory limiter to prevent OOM
memory_limiter:
check_interval: 1s
limit_mib: 1000
spike_limit_mib: 200
# Batch processing for efficiency
batch:
timeout: 5s
send_batch_size: 512
send_batch_max_size: 1024
# Tail-based sampling (keep errors and slow traces)
tail_sampling:
decision_wait: 10s
num_traces: 100000
expected_new_traces_per_sec: 1000
policies:
# Always keep error traces
- name: errors
type: status_code
status_code:
status_codes: [ERROR]
# Keep slow consensus rounds (>5s)
- name: slow-consensus
type: latency
latency:
threshold_ms: 5000
# Keep slow RPC requests (>1s)
- name: slow-rpc
type: and
and:
and_sub_policy:
- name: rpc-spans
type: string_attribute
string_attribute:
key: xrpl.rpc.command
values: [".*"]
enabled_regex_matching: true
- name: latency
type: latency
latency:
threshold_ms: 1000
# Probabilistic sampling for the rest
- name: probabilistic
type: probabilistic
probabilistic:
sampling_percentage: 10
# Attribute processing
attributes:
actions:
# Hash sensitive data
- key: xrpl.tx.account
action: hash
# Add deployment info
- key: deployment.environment
value: production
action: upsert
exporters:
# Grafana Tempo for long-term storage
otlp/tempo:
endpoint: tempo.monitoring:4317
tls:
insecure: false
ca_file: /etc/otel/tempo-ca.crt
# Elastic APM for correlation with logs
otlp/elastic:
endpoint: apm.elastic:8200
headers:
Authorization: "Bearer ${ELASTIC_APM_TOKEN}"
extensions:
health_check:
endpoint: 0.0.0.0:13133
zpages:
endpoint: 0.0.0.0:55679
service:
extensions: [health_check, zpages]
pipelines:
traces:
receivers: [otlp]
processors: [memory_limiter, tail_sampling, attributes, batch]
exporters: [otlp/tempo, otlp/elastic]
```
---
## 5.6 Docker Compose Development Environment
```yaml
# docker-compose-telemetry.yaml
version: "3.8"
services:
# OpenTelemetry Collector
otel-collector:
image: otel/opentelemetry-collector-contrib:0.92.0
container_name: otel-collector
command: ["--config=/etc/otel-collector-config.yaml"]
volumes:
- ./otel-collector-dev.yaml:/etc/otel-collector-config.yaml:ro
ports:
- "4317:4317" # OTLP gRPC
- "4318:4318" # OTLP HTTP
- "13133:13133" # Health check
depends_on:
- jaeger
# Jaeger for trace visualization
jaeger:
image: jaegertracing/all-in-one:1.53
container_name: jaeger
environment:
- COLLECTOR_OTLP_ENABLED=true
ports:
- "16686:16686" # UI
- "14250:14250" # gRPC
# Grafana Tempo for trace storage (recommended for production)
tempo:
image: grafana/tempo:2.7.2
container_name: tempo
command: ["-config.file=/etc/tempo.yaml"]
volumes:
- ./tempo.yaml:/etc/tempo.yaml:ro
- tempo-data:/var/tempo
ports:
- "3200:3200" # HTTP API
# Grafana for dashboards
grafana:
image: grafana/grafana:10.2.3
container_name: grafana
environment:
- GF_AUTH_ANONYMOUS_ENABLED=true
- GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
volumes:
- ./grafana/provisioning:/etc/grafana/provisioning:ro
- ./grafana/dashboards:/var/lib/grafana/dashboards:ro
ports:
- "3000:3000"
depends_on:
- jaeger
- tempo
# Prometheus for metrics (optional, for correlation)
prometheus:
image: prom/prometheus:v2.48.1
container_name: prometheus
volumes:
- ./prometheus.yaml:/etc/prometheus/prometheus.yml:ro
ports:
- "9090:9090"
networks:
default:
name: rippled-telemetry
```
---
## 5.7 Configuration Architecture
```mermaid
flowchart TB
subgraph config["Configuration Sources"]
cfgFile["xrpld.cfg<br/>[telemetry] section"]
cmake["CMake<br/>XRPL_ENABLE_TELEMETRY"]
end
subgraph init["Initialization"]
parse["setup_Telemetry()"]
factory["make_Telemetry()"]
end
subgraph runtime["Runtime Components"]
tracer["TracerProvider"]
exporter["OTLP Exporter"]
processor["BatchProcessor"]
end
subgraph collector["Collector Pipeline"]
recv["Receivers"]
proc["Processors"]
exp["Exporters"]
end
cfgFile --> parse
cmake -->|"compile flag"| parse
parse --> factory
factory --> tracer
tracer --> processor
processor --> exporter
exporter -->|"OTLP"| recv
recv --> proc
proc --> exp
style config fill:#e3f2fd,stroke:#1976d2
style runtime fill:#e8f5e9,stroke:#388e3c
style collector fill:#fff3e0,stroke:#ff9800
```
---
## 5.8 Grafana Integration
Step-by-step instructions for integrating rippled traces with Grafana.
### 5.8.1 Data Source Configuration
#### Tempo (Recommended)
```yaml
# grafana/provisioning/datasources/tempo.yaml
apiVersion: 1
datasources:
- name: Tempo
type: tempo
access: proxy
url: http://tempo:3200
jsonData:
httpMethod: GET
tracesToLogs:
datasourceUid: loki
tags: ["service.name", "xrpl.tx.hash"]
mappedTags: [{ key: "trace_id", value: "traceID" }]
mapTagNamesEnabled: true
filterByTraceID: true
serviceMap:
datasourceUid: prometheus
nodeGraph:
enabled: true
search:
hide: false
lokiSearch:
datasourceUid: loki
```
#### Jaeger
```yaml
# grafana/provisioning/datasources/jaeger.yaml
apiVersion: 1
datasources:
- name: Jaeger
type: jaeger
access: proxy
url: http://jaeger:16686
jsonData:
tracesToLogs:
datasourceUid: loki
tags: ["service.name"]
```
#### Elastic APM
```yaml
# grafana/provisioning/datasources/elastic-apm.yaml
apiVersion: 1
datasources:
- name: Elasticsearch-APM
type: elasticsearch
access: proxy
url: http://elasticsearch:9200
database: "apm-*"
jsonData:
esVersion: "8.0.0"
timeField: "@timestamp"
logMessageField: message
logLevelField: log.level
```
### 5.8.2 Dashboard Provisioning
```yaml
# grafana/provisioning/dashboards/dashboards.yaml
apiVersion: 1
providers:
- name: "rippled-dashboards"
orgId: 1
folder: "rippled"
folderUid: "rippled"
type: file
disableDeletion: false
updateIntervalSeconds: 30
options:
path: /var/lib/grafana/dashboards/rippled
```
### 5.8.3 Example Dashboard: RPC Performance
```json
{
"title": "rippled RPC Performance",
"uid": "rippled-rpc-performance",
"panels": [
{
"title": "RPC Latency by Command",
"type": "heatmap",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && span.xrpl.rpc.command != \"\"} | histogram_over_time(duration) by (span.xrpl.rpc.command)"
}
],
"gridPos": { "h": 8, "w": 12, "x": 0, "y": 0 }
},
{
"title": "RPC Error Rate",
"type": "timeseries",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && status.code=error} | rate() by (span.xrpl.rpc.command)"
}
],
"gridPos": { "h": 8, "w": 12, "x": 12, "y": 0 }
},
{
"title": "Top 10 Slowest RPC Commands",
"type": "table",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && span.xrpl.rpc.command != \"\"} | avg(duration) by (span.xrpl.rpc.command) | topk(10)"
}
],
"gridPos": { "h": 8, "w": 24, "x": 0, "y": 8 }
},
{
"title": "Recent Traces",
"type": "table",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\"}"
}
],
"gridPos": { "h": 8, "w": 24, "x": 0, "y": 16 }
}
]
}
```
### 5.8.4 Example Dashboard: Transaction Tracing
```json
{
"title": "rippled Transaction Tracing",
"uid": "rippled-tx-tracing",
"panels": [
{
"title": "Transaction Throughput",
"type": "stat",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=\"tx.receive\"} | rate()"
}
],
"gridPos": { "h": 4, "w": 6, "x": 0, "y": 0 }
},
{
"title": "Cross-Node Relay Count",
"type": "timeseries",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=\"tx.relay\"} | avg(span.xrpl.tx.relay_count)"
}
],
"gridPos": { "h": 8, "w": 12, "x": 0, "y": 4 }
},
{
"title": "Transaction Validation Errors",
"type": "table",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=\"tx.validate\" && status.code=error}"
}
],
"gridPos": { "h": 8, "w": 12, "x": 12, "y": 4 }
}
]
}
```
### 5.8.5 TraceQL Query Examples
Common queries for rippled traces:
```
# Find all traces for a specific transaction hash
{resource.service.name="rippled" && span.xrpl.tx.hash="ABC123..."}
# Find slow RPC commands (>100ms)
{resource.service.name="rippled" && name=~"rpc.command.*" && duration > 100ms}
# Find consensus rounds taking >5 seconds
{resource.service.name="rippled" && name="consensus.round" && duration > 5s}
# Find failed transaction validations with error details
{resource.service.name="rippled" && name="tx.validate" && status = error}
# Find transactions relayed to many peers
{resource.service.name="rippled" && name="tx.relay" && span.xrpl.tx.relay_count > 10}
# Compare latency across nodes (TraceQL metrics)
{resource.service.name="rippled" && name="rpc.command.account_info"} | avg_over_time(duration) by (resource.service.instance.id)
```
### 5.8.6 Correlation with PerfLog
To correlate OpenTelemetry traces with existing PerfLog data:
**Step 1: Configure Loki to ingest PerfLog**
```yaml
# promtail-config.yaml
scrape_configs:
- job_name: rippled-perflog
static_configs:
- targets:
- localhost
labels:
job: rippled
__path__: /var/log/rippled/perf*.log
pipeline_stages:
- json:
expressions:
trace_id: trace_id
ledger_seq: ledger_seq
tx_hash: tx_hash
- labels:
trace_id:
ledger_seq:
tx_hash:
```
**Step 2: Add trace_id to PerfLog entries**
Modify PerfLog to include trace_id when available:
```cpp
#include <opentelemetry/context/runtime_context.h>
#include <opentelemetry/trace/context.h>

// In PerfLog output, add trace_id from the current span context
void logPerf(Json::Value& entry) {
    auto span = opentelemetry::trace::GetSpan(
        opentelemetry::context::RuntimeContext::GetCurrent());
    auto const spanContext = span->GetContext();
    if (spanContext.IsValid()) {
        // TraceId::ToLowerBase16 writes exactly 32 hex characters
        char traceIdHex[32];
        spanContext.trace_id().ToLowerBase16(traceIdHex);
        entry["trace_id"] = std::string(traceIdHex, 32);
    }
    // ... existing logging
}
```
**Step 3: Configure Grafana trace-to-logs link**
In Tempo data source configuration, set up the derived field:
```yaml
jsonData:
tracesToLogs:
datasourceUid: loki
tags: ["trace_id", "xrpl.tx.hash"]
filterByTraceID: true
filterBySpanID: false
```
### 5.8.7 Correlation with Insight/StatsD Metrics
To correlate traces with existing Beast Insight metrics:
**Step 1: Export Insight metrics to Prometheus**
```yaml
# prometheus.yaml
scrape_configs:
- job_name: "rippled-statsd"
static_configs:
- targets: ["statsd-exporter:9102"]
```
**Step 2: Add exemplars to metrics**
When the Prometheus exporter is used (and exemplar support is enabled in the SDK build), the OpenTelemetry SDK attaches exemplars (trace IDs) to metric data points. This links metric spikes to specific traces.
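As an illustration, a latency measurement recorded while the RPC span is still current gives the SDK an active trace to sample as the exemplar. The helper and instrument name below are hypothetical, not part of the plan:
```cpp
#include <opentelemetry/context/runtime_context.h>
#include <opentelemetry/metrics/provider.h>

// Hypothetical helper: called from the RPC handler while its span is active.
// Passing the current context lets the SDK associate the measurement with the
// active span, which is what exemplar sampling keys off.
void recordRpcLatency(double elapsedMs)
{
    auto meter =
        opentelemetry::metrics::Provider::GetMeterProvider()->GetMeter("rippled");
    // In real code the instrument would be created once and cached.
    auto histogram = meter->CreateDoubleHistogram(
        "rippled.rpc.duration", "RPC handler latency", "ms");
    histogram->Record(
        elapsedMs, opentelemetry::context::RuntimeContext::GetCurrent());
}
```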
**Step 3: Configure Grafana metric-to-trace link**
```yaml
# In Prometheus data source
jsonData:
exemplarTraceIdDestinations:
- name: trace_id
datasourceUid: tempo
```
**Step 4: Dashboard panel with exemplars**
```json
{
"title": "RPC Latency with Trace Links",
"type": "timeseries",
"datasource": "Prometheus",
"targets": [
{
"expr": "histogram_quantile(0.99, rate(rippled_rpc_duration_seconds_bucket[5m]))",
"exemplar": true
}
]
}
```
This allows clicking on metric data points to jump directly to the related trace.
---
_Previous: [Code Samples](./04-code-samples.md)_ | _Next: [Implementation Phases](./06-implementation-phases.md)_ | _Back to: [Overview](./OpenTelemetryPlan.md)_

View File

@@ -0,0 +1,636 @@
# Implementation Phases
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Related**: [Configuration Reference](./05-configuration-reference.md) | [Observability Backends](./07-observability-backends.md)
---
## 6.1 Phase Overview
```mermaid
gantt
title OpenTelemetry Implementation Timeline
dateFormat YYYY-MM-DD
axisFormat Week %W
section Phase 1
Core Infrastructure :p1, 2024-01-01, 2w
SDK Integration :p1a, 2024-01-01, 4d
Telemetry Interface :p1b, after p1a, 3d
Configuration & CMake :p1c, after p1b, 3d
Unit Tests :p1d, after p1c, 2d
section Phase 2
RPC Tracing :p2, after p1, 2w
HTTP Context Extraction :p2a, after p1, 2d
RPC Handler Instrumentation :p2b, after p2a, 4d
WebSocket Support :p2c, after p2b, 2d
Integration Tests :p2d, after p2c, 2d
section Phase 3
Transaction Tracing :p3, after p2, 2w
Protocol Buffer Extension :p3a, after p2, 2d
PeerImp Instrumentation :p3b, after p3a, 3d
Relay Context Propagation :p3c, after p3b, 3d
Multi-node Tests :p3d, after p3c, 2d
section Phase 4
Consensus Tracing :p4, after p3, 2w
Consensus Round Spans :p4a, after p3, 3d
Proposal Handling :p4b, after p4a, 3d
Validation Tests :p4c, after p4b, 4d
section Phase 5
Documentation & Deploy :p5, after p4, 1w
```
---
## 6.2 Phase 1: Core Infrastructure (Weeks 1-2)
**Objective**: Establish foundational telemetry infrastructure
### Tasks
| Task | Description | Effort | Risk |
| ---- | ----------------------------------------------------- | ------ | ------ |
| 1.1 | Add OpenTelemetry C++ SDK to Conan/CMake | 2d | Low |
| 1.2 | Implement `Telemetry` interface and factory | 2d | Low |
| 1.3 | Implement `SpanGuard` RAII wrapper | 1d | Low |
| 1.4 | Implement configuration parser | 1d | Low |
| 1.5 | Integrate into `ApplicationImp` | 1d | Medium |
| 1.6 | Add conditional compilation (`XRPL_ENABLE_TELEMETRY`) | 1d | Low |
| 1.7 | Create `NullTelemetry` no-op implementation | 0.5d | Low |
| 1.8 | Unit tests for core infrastructure | 1.5d | Low |
**Total Effort**: 10 days (2 developers)
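For a feel of Task 1.3, a minimal sketch of the RAII idea behind `SpanGuard`; the full version, with macros and attribute helpers, lives in the [code samples](./04-code-samples.md), so treat the interface shown here as illustrative:
```cpp
#include <utility>
#include <opentelemetry/nostd/shared_ptr.h>
#include <opentelemetry/trace/span.h>

// Minimal RAII wrapper: the span ends when the guard leaves scope, even on
// exceptions or early returns (illustrative; not the final interface).
class SpanGuard
{
public:
    explicit SpanGuard(opentelemetry::nostd::shared_ptr<opentelemetry::trace::Span> span)
        : span_(std::move(span))
    {
    }

    ~SpanGuard()
    {
        if (span_)
            span_->End();
    }

    SpanGuard(SpanGuard const&) = delete;
    SpanGuard& operator=(SpanGuard const&) = delete;

    opentelemetry::trace::Span& span() const
    {
        return *span_;
    }

private:
    opentelemetry::nostd::shared_ptr<opentelemetry::trace::Span> span_;
};
```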
### Exit Criteria
- [ ] OpenTelemetry SDK compiles and links
- [ ] Telemetry can be enabled/disabled via config
- [ ] Basic span creation works
- [ ] No performance regression when disabled
- [ ] Unit tests passing
---
## 6.3 Phase 2: RPC Tracing (Weeks 3-4)
**Objective**: Complete tracing for all RPC operations
### Tasks
| Task | Description | Effort | Risk |
| ---- | -------------------------------------------------- | ------ | ------ |
| 2.1 | Implement W3C Trace Context HTTP header extraction | 1d | Low |
| 2.2 | Instrument `ServerHandler::onRequest()` | 1d | Low |
| 2.3 | Instrument `RPCHandler::doCommand()` | 2d | Medium |
| 2.4 | Add RPC-specific attributes | 1d | Low |
| 2.5 | Instrument WebSocket handler | 1d | Medium |
| 2.6 | Integration tests for RPC tracing | 2d | Low |
| 2.7 | Performance benchmarks | 1d | Low |
| 2.8 | Documentation | 1d | Low |
**Total Effort**: 10 days
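Task 2.1 amounts to adapting request headers to the OTel propagation API. A hedged sketch, assuming a plain `std::map` of headers; the carrier class and function name are illustrative, not rippled code:
```cpp
#include <map>
#include <string>
#include <opentelemetry/context/propagation/text_map_propagator.h>
#include <opentelemetry/context/runtime_context.h>
#include <opentelemetry/trace/propagation/http_trace_context.h>

// Read-only adapter exposing HTTP request headers to the propagator.
class HttpHeaderCarrier : public opentelemetry::context::propagation::TextMapCarrier
{
public:
    explicit HttpHeaderCarrier(std::map<std::string, std::string> const& headers)
        : headers_(headers)
    {
    }

    opentelemetry::nostd::string_view Get(
        opentelemetry::nostd::string_view key) const noexcept override
    {
        auto const it = headers_.find(std::string{key.data(), key.size()});
        if (it != headers_.end())
            return opentelemetry::nostd::string_view{it->second};
        return {};
    }

    void Set(opentelemetry::nostd::string_view,
             opentelemetry::nostd::string_view) noexcept override
    {
        // Extraction only; nothing is injected on the server side.
    }

private:
    std::map<std::string, std::string> const& headers_;
};

// Extract traceparent/tracestate into a Context the RPC span can use as parent.
opentelemetry::context::Context extractTraceContext(
    std::map<std::string, std::string> const& headers)
{
    HttpHeaderCarrier carrier{headers};
    opentelemetry::trace::propagation::HttpTraceContext propagator;
    auto current = opentelemetry::context::RuntimeContext::GetCurrent();
    return propagator.Extract(carrier, current);
}
```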
### Exit Criteria
- [ ] All RPC commands traced
- [ ] Trace context propagates from HTTP headers
- [ ] WebSocket and HTTP both instrumented
- [ ] <1ms overhead per RPC call
- [ ] Integration tests passing
---
## 6.4 Phase 3: Transaction Tracing (Weeks 5-6)
**Objective**: Trace transaction lifecycle across network
### Tasks
| Task | Description | Effort | Risk |
| ---- | --------------------------------------------- | ------ | ------ |
| 3.1 | Define `TraceContext` Protocol Buffer message | 1d | Low |
| 3.2 | Implement protobuf context serialization | 1d | Low |
| 3.3 | Instrument `PeerImp::handleTransaction()` | 2d | Medium |
| 3.4 | Instrument `NetworkOPs::submitTransaction()` | 1d | Medium |
| 3.5 | Instrument HashRouter integration | 1d | Medium |
| 3.6 | Implement relay context propagation | 2d | High |
| 3.7 | Integration tests (multi-node) | 2d | Medium |
| 3.8 | Performance benchmarks | 1d | Low |
**Total Effort**: 11 days
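For Tasks 3.1-3.2, one possible serialization is a W3C-style `traceparent` string carried in the trace-context protobuf field. The actual message layout is specified in the [code samples](./04-code-samples.md), so treat this sketch as illustrative only:
```cpp
#include <string>
#include <opentelemetry/context/runtime_context.h>
#include <opentelemetry/trace/context.h>

// Build a W3C-style traceparent string from the active span, suitable for
// embedding in the trace-context field of an outgoing P2P message.
// Returns an empty string when no valid span is active.
std::string currentTraceparent()
{
    auto span = opentelemetry::trace::GetSpan(
        opentelemetry::context::RuntimeContext::GetCurrent());
    auto const ctx = span->GetContext();
    if (!ctx.IsValid())
        return {};

    char traceId[32];
    char spanId[16];
    ctx.trace_id().ToLowerBase16(traceId);
    ctx.span_id().ToLowerBase16(spanId);

    // version 00; flags handling simplified for the sketch
    std::string out = "00-";
    out.append(traceId, 32);
    out.push_back('-');
    out.append(spanId, 16);
    out.append(ctx.IsSampled() ? "-01" : "-00");
    return out;
}
```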
### Exit Criteria
- [ ] Transaction traces span across nodes
- [ ] Trace context in Protocol Buffer messages
- [ ] HashRouter deduplication visible in traces
- [ ] Multi-node integration tests passing
- [ ] <5% overhead on transaction throughput
---
## 6.5 Phase 4: Consensus Tracing (Weeks 7-8)
**Objective**: Full observability into consensus rounds
### Tasks
| Task | Description | Effort | Risk |
| ---- | ---------------------------------------------- | ------ | ------ |
| 4.1 | Instrument `RCLConsensusAdaptor::startRound()` | 1d | Medium |
| 4.2 | Instrument phase transitions | 2d | Medium |
| 4.3 | Instrument proposal handling | 2d | High |
| 4.4 | Instrument validation handling | 1d | Medium |
| 4.5 | Add consensus-specific attributes | 1d | Low |
| 4.6 | Correlate with transaction traces | 1d | Medium |
| 4.7 | Multi-validator integration tests | 2d | High |
| 4.8 | Performance validation | 1d | Medium |
**Total Effort**: 11 days
### Spans Produced
| Span Name | Location | Attributes |
| --------------------------- | ---------------------- | ----------------------------------------------------------------------------------------------------------------------------- |
| `consensus.proposal.send` | `RCLConsensus.cpp:177` | `xrpl.consensus.round` |
| `consensus.ledger_close` | `RCLConsensus.cpp:282` | `xrpl.consensus.ledger.seq`, `xrpl.consensus.mode` |
| `consensus.accept` | `RCLConsensus.cpp:395` | `xrpl.consensus.proposers`, `xrpl.consensus.round_time_ms` |
| `consensus.accept.apply` | `RCLConsensus.cpp:453` | `xrpl.consensus.close_time`, `close_time_correct`, `close_resolution_ms`, `state`, `proposing`, `round_time_ms`, `ledger.seq` |
| `consensus.validation.send` | `RCLConsensus.cpp:753` | `xrpl.consensus.proposing` |
### Exit Criteria
- [x] Complete consensus round traces
- [x] Phase transitions visible
- [x] Proposals and validations traced
- [x] Close time agreement tracked (per `avCT_CONSENSUS_PCT`)
- [x] No impact on consensus timing
- [ ] Multi-validator test network validated
### Implementation Status — Phase 4a Complete
Phase 4a (establish-phase gap fill & cross-node correlation) adds:
- **Deterministic trace ID** derived from `previousLedger.id()` so all validators
in the same round share the same `trace_id` (switchable via
`consensus_trace_strategy` config: `"deterministic"` or `"attribute"`).
- **Round lifecycle spans**: `consensus.round` with round-to-round span links.
- **Establish phase**: `consensus.establish`, `consensus.update_positions` (with
`dispute.resolve` events), `consensus.check` (with threshold tracking).
- **Mode changes**: `consensus.mode_change` spans.
- **Validation**: `consensus.validation.send` with span link to round span
(thread-safe cross-thread access via `roundSpanContext_` snapshot).
- **Separation of concerns**: telemetry extracted to private helpers
(`startRoundTracing`, `createValidationSpan`, `startEstablishTracing`,
`updateEstablishTracing`, `endEstablishTracing`).
See [Phase4_taskList.md](./Phase4_taskList.md) for the full spec and implementation notes.
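As a rough illustration of the cross-thread validation link above (the signature and the link attribute are assumptions; the actual code sits behind the `Telemetry` interface):
```cpp
#include <opentelemetry/nostd/shared_ptr.h>
#include <opentelemetry/trace/span_context.h>
#include <opentelemetry/trace/tracer.h>

namespace trace_api = opentelemetry::trace;

// The round span's context is snapshotted once on the consensus thread;
// SpanContext is a small value type, so the copy is safe to read from the
// jtACCEPT thread even after the round span has ended.
opentelemetry::nostd::shared_ptr<trace_api::Span> createValidationSpan(
    trace_api::Tracer& tracer,
    trace_api::SpanContext const& roundSpanContext,
    bool proposing)
{
    // Link (rather than parent) the validation span to the round span.
    return tracer.StartSpan(
        "consensus.validation.send",
        {{"xrpl.consensus.proposing", proposing}},
        {{roundSpanContext, {{"xrpl.link.type", "consensus.round"}}}});
}
```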
---
## 6.5a Phase 4a: Establish-Phase Gap Fill & Cross-Node Correlation
**Objective**: Fill tracing gaps in the establish phase and enable cross-node
correlation using deterministic trace IDs derived from `previousLedger.id()`.
**Approach**: Direct instrumentation in `Consensus.h`. Long-lived spans use
direct SpanGuard members; short-lived scoped spans use `XRPL_TRACE_*` macros.
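For the trace-ID half of this, a sketch of how a deterministic ID could be derived from the ledger hash. Truncation to the first 16 bytes is only an assumption here, and wiring the result into span creation would go through the SDK's `IdGenerator` hook or the plan's `Telemetry` interface:
```cpp
#include <cstdint>
#include <opentelemetry/nostd/span.h>
#include <opentelemetry/trace/trace_id.h>

// Derive a 128-bit trace ID from the 256-bit previous-ledger hash so that
// every validator in the same round computes the same trace_id.
opentelemetry::trace::TraceId deterministicTraceId(
    std::uint8_t const* ledgerHash /* 32 bytes, previousLedger.id() */)
{
    return opentelemetry::trace::TraceId(
        opentelemetry::nostd::span<std::uint8_t const, 16>(ledgerHash, 16));
}
```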
### Tasks
| Task | Description | Effort | Risk |
| ---- | ------------------------------------------------ | ------ | ------ |
| 4a.0 | Prerequisites: extend SpanGuard & Telemetry APIs | 1d | Medium |
| 4a.1 | Adaptor `getTelemetry()` method | 0.5d | Low |
| 4a.2 | Switchable round span with deterministic traceID | 2d | High |
| 4a.3 | Span members in `Consensus.h` | 0.5d | Medium |
| 4a.4 | Instrument `phaseEstablish()` | 1d | Medium |
| 4a.5 | Instrument `updateOurPositions()` | 1d | Medium |
| 4a.6 | Instrument `haveConsensus()` (thresholds) | 1d | Medium |
| 4a.7 | Instrument mode changes | 0.5d | Low |
| 4a.8 | Reparent existing spans under round | 0.5d | Low |
| 4a.9 | Build verification and testing | 1d | Low |
**Total Effort**: 9 days
### Spans Produced
| Span Name | Location | Key Attributes |
| ---------------------------- | ------------------ | ---------------------------------------------------------------- |
| `consensus.round` | `RCLConsensus.cpp` | `round_id`, `ledger_id`, `ledger.seq`, `mode`; link prev round |
| `consensus.establish` | `Consensus.h` | `converge_percent`, `establish_count`, `proposers` |
| `consensus.update_positions` | `Consensus.h` | `disputes_count`, `converge_percent`, `proposers_agreed/total` |
| `consensus.check` | `Consensus.h` | `agree/disagree_count`, `threshold_percent`, `result` |
| `consensus.mode_change` | `RCLConsensus.cpp` | `mode.old`, `mode.new` |
### Exit Criteria
- [ ] Establish phase internals fully traced (disputes, convergence, thresholds)
- [ ] Cross-node correlation works via deterministic trace_id
- [ ] Strategy switchable via config (`deterministic` / `attribute`)
- [ ] Consecutive rounds linked via follows-from spans
- [ ] Build passes with telemetry ON and OFF
- [ ] No impact on consensus timing
See [Phase4_taskList.md](./Phase4_taskList.md) for full task details.
---
## 6.5b Phase 4b: Cross-Node Propagation (Future)
**Objective**: Wire `TraceContextPropagator` for P2P messages (proposals,
validations) to enable true distributed tracing between nodes.
**Status**: Design documented, NOT implemented. Protobuf fields (field 1001)
and `TraceContextPropagator` class exist. Wiring deferred until Phase 4a is
validated in a multi-node environment.
**Prerequisites**: Phase 4a complete and validated.
See [Phase4_taskList.md § Phase 4b](./Phase4_taskList.md) for full design.
---
## 6.6 Phase 5: Documentation & Deployment (Week 9)
**Objective**: Production readiness
### Tasks
| Task | Description | Effort | Risk |
| ---- | ----------------------------- | ------ | ---- |
| 5.1 | Operator runbook | 1d | Low |
| 5.2 | Grafana dashboards | 1d | Low |
| 5.3 | Alert definitions | 0.5d | Low |
| 5.4 | Collector deployment examples | 0.5d | Low |
| 5.5 | Developer documentation | 1d | Low |
| 5.6 | Training materials | 0.5d | Low |
| 5.7 | Final integration testing | 0.5d | Low |
**Total Effort**: 5 days
---
## 6.7 Risk Assessment
```mermaid
quadrantChart
title Risk Assessment Matrix
x-axis Low Impact --> High Impact
y-axis Low Likelihood --> High Likelihood
quadrant-1 Monitor Closely
quadrant-2 Mitigate Immediately
quadrant-3 Accept Risk
quadrant-4 Plan Mitigation
SDK Compatibility: [0.25, 0.2]
Protocol Changes: [0.75, 0.65]
Performance Overhead: [0.65, 0.45]
Context Propagation: [0.5, 0.5]
Memory Leaks: [0.8, 0.2]
```
### Risk Details
| Risk | Likelihood | Impact | Mitigation |
| ------------------------------------ | ---------- | ------ | --------------------------------------- |
| Protocol changes break compatibility | Medium | High | Use high field numbers, optional fields |
| Performance overhead unacceptable | Medium | Medium | Sampling, conditional compilation |
| Context propagation complexity | Medium | Medium | Phased rollout, extensive testing |
| SDK compatibility issues | Low | Medium | Pin SDK version, fallback to no-op |
| Memory leaks in long-running nodes | Low | High | Memory profiling, bounded queues |
---
## 6.8 Success Metrics
| Metric | Target | Measurement |
| ------------------------ | ------------------------------ | --------------------- |
| Trace coverage | >95% of transactions | Sampling verification |
| CPU overhead | <3% | Benchmark tests |
| Memory overhead | <5 MB | Memory profiling |
| Latency impact (p99) | <2% | Performance tests |
| Trace completeness | >99% spans with required attrs | Validation script |
| Cross-node trace linkage | >90% of multi-hop transactions | Integration tests |
---
## 6.9 Effort Summary
<div align="center">
```mermaid
%%{init: {'pie': {'textPosition': 0.75}}}%%
pie showData
"Phase 1: Core Infrastructure" : 10
"Phase 2: RPC Tracing" : 10
"Phase 3: Transaction Tracing" : 11
"Phase 4: Consensus Tracing" : 11
"Phase 5: Documentation" : 5
```
**Total Effort Distribution (47 developer-days)**
</div>
### Resource Requirements
| Phase | Developers | Duration | Total Effort |
| --------- | ---------- | ----------- | ------------ |
| 1 | 2 | 2 weeks | 10 days |
| 2 | 1-2 | 2 weeks | 10 days |
| 3 | 2 | 2 weeks | 11 days |
| 4 | 2 | 2 weeks | 11 days |
| 5 | 1 | 1 week | 5 days |
| **Total** | **2** | **9 weeks** | **47 days** |
---
## 6.10 Quick Wins and Crawl-Walk-Run Strategy
This section outlines a prioritized approach to maximize ROI with minimal initial investment.
### 6.10.1 Crawl-Walk-Run Overview
<div align="center">
```mermaid
flowchart TB
subgraph crawl["🐢 CRAWL (Week 1-2)"]
direction LR
c1[Core SDK Setup] ~~~ c2[RPC Tracing Only] ~~~ c3[Single Node]
end
subgraph walk["🚶 WALK (Week 3-5)"]
direction LR
w1[Transaction Tracing] ~~~ w2[Cross-Node Context] ~~~ w3[Basic Dashboards]
end
subgraph run["🏃 RUN (Week 6-9)"]
direction LR
r1[Consensus Tracing] ~~~ r2[Full Correlation] ~~~ r3[Production Deploy]
end
crawl --> walk --> run
style crawl fill:#1b5e20,stroke:#0d3d14,color:#fff
style walk fill:#bf360c,stroke:#8c2809,color:#fff
style run fill:#0d47a1,stroke:#082f6a,color:#fff
style c1 fill:#1b5e20,stroke:#0d3d14,color:#fff
style c2 fill:#1b5e20,stroke:#0d3d14,color:#fff
style c3 fill:#1b5e20,stroke:#0d3d14,color:#fff
style w1 fill:#ffe0b2,stroke:#ffcc80,color:#1e293b
style w2 fill:#ffe0b2,stroke:#ffcc80,color:#1e293b
style w3 fill:#ffe0b2,stroke:#ffcc80,color:#1e293b
style r1 fill:#0d47a1,stroke:#082f6a,color:#fff
style r2 fill:#0d47a1,stroke:#082f6a,color:#fff
style r3 fill:#0d47a1,stroke:#082f6a,color:#fff
```
</div>
### 6.10.2 Quick Wins (Immediate Value)
| Quick Win | Effort | Value | When to Deploy |
| ------------------------------ | -------- | ------ | -------------- |
| **RPC Command Tracing** | 2 days | High | Week 2 |
| **RPC Latency Histograms** | 0.5 days | High | Week 2 |
| **Error Rate Dashboard** | 0.5 days | Medium | Week 2 |
| **Transaction Submit Tracing** | 1 day | High | Week 3 |
| **Consensus Round Duration** | 1 day | Medium | Week 6 |
### 6.10.3 CRAWL Phase (Weeks 1-2)
**Goal**: Get basic tracing working with minimal code changes.
**What You Get**:
- RPC request/response traces for all commands
- Latency breakdown per RPC command
- Error visibility with stack traces
- Basic Grafana dashboard
**Code Changes**: ~15 lines in `ServerHandler.cpp`, ~40 lines in new telemetry module
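Roughly what those ~15 lines in `ServerHandler.cpp` could look like (simplified, illustrative handler signature; the real change would go through the plan's `Telemetry` interface and `XRPL_TRACE_*` macros rather than the raw SDK calls shown here):
```cpp
#include <string>
#include <opentelemetry/trace/provider.h>
#include <opentelemetry/trace/scope.h>

// Sketch of wrapping RPC handling in a span (illustrative only).
void onRequest(std::string const& command)
{
    auto tracer =
        opentelemetry::trace::Provider::GetTracerProvider()->GetTracer("rippled");
    auto span = tracer->StartSpan("rpc.command." + command);  // plan's naming convention
    opentelemetry::trace::Scope scope{span};  // make the span current for children

    span->SetAttribute("xrpl.rpc.command", command);

    // ... existing request handling ...

    span->End();
}
```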
**Why Start Here**:
- RPC is the lowest-risk, highest-visibility component
- Immediate value for debugging client issues
- No cross-node complexity
- Single file modification to existing code
### 6.10.4 WALK Phase (Weeks 3-5)
**Goal**: Add transaction lifecycle tracing across nodes.
**What You Get**:
- End-to-end transaction traces from submit to relay
- Cross-node correlation (see transaction path)
- HashRouter deduplication visibility
- Relay latency metrics
**Code Changes**: ~120 lines across 4 files, plus protobuf extension
**Why Do This Second**:
- Builds on RPC tracing (transactions submitted via RPC)
- Moderate complexity (requires context propagation)
- High value for debugging transaction issues
### 6.10.5 RUN Phase (Weeks 6-9)
**Goal**: Full observability including consensus.
**What You Get**:
- Complete consensus round visibility
- Phase transition timing
- Validator proposal tracking
- Full end-to-end traces (client → RPC → TX → consensus → ledger)
**Code Changes**: ~100 lines across 3 consensus files
**Why Do This Last**:
- Highest complexity (consensus is critical path)
- Requires thorough testing
- Lower relative value (consensus issues are rarer)
### 6.10.6 ROI Prioritization Matrix
```mermaid
quadrantChart
title Implementation ROI Matrix
x-axis Low Effort --> High Effort
y-axis Low Value --> High Value
quadrant-1 Quick Wins - Do First
quadrant-2 Major Projects - Plan Carefully
quadrant-3 Nice to Have - Optional
quadrant-4 Time Sinks - Avoid
RPC Tracing: [0.15, 0.9]
TX Submit Trace: [0.25, 0.85]
TX Relay Trace: [0.5, 0.8]
Consensus Trace: [0.7, 0.75]
Peer Message Trace: [0.85, 0.3]
Ledger Acquire: [0.55, 0.5]
```
---
## 6.11 Definition of Done
Clear, measurable criteria for each phase.
### 6.11.1 Phase 1: Core Infrastructure
| Criterion | Measurement | Target |
| --------------- | ---------------------------------------------------------- | ---------------------------- |
| SDK Integration | `cmake --build` succeeds with `-DXRPL_ENABLE_TELEMETRY=ON` | ✅ Compiles |
| Runtime Toggle | `enabled=0` produces zero overhead | <0.1% CPU difference |
| Span Creation | Unit test creates and exports span | Span appears in Jaeger |
| Configuration | All config options parsed correctly | Config validation tests pass |
| Documentation | Developer guide exists | PR approved |
**Definition of Done**: All criteria met, PR merged, no regressions in CI.
### 6.11.2 Phase 2: RPC Tracing
| Criterion | Measurement | Target |
| ------------------ | ---------------------------------- | -------------------------- |
| Coverage | All RPC commands instrumented | 100% of commands |
| Context Extraction | traceparent header propagates | Integration test passes |
| Attributes | Command, status, duration recorded | Validation script confirms |
| Performance | RPC latency overhead | <1ms p99 |
| Dashboard | Grafana dashboard deployed | Screenshot in docs |
**Definition of Done**: RPC traces visible in Jaeger/Tempo for all commands, dashboard shows latency distribution.
### 6.11.3 Phase 3: Transaction Tracing
| Criterion | Measurement | Target |
| ---------------- | ------------------------------- | ---------------------------------- |
| Local Trace | Submit → validate → TxQ traced | Single-node test passes |
| Cross-Node | Context propagates via protobuf | Multi-node test passes |
| Relay Visibility | relay_count attribute correct | Spot check 100 txs |
| HashRouter | Deduplication visible in trace | Duplicate txs show suppressed=true |
| Performance | TX throughput overhead | <5% degradation |
**Definition of Done**: Transaction traces span 3+ nodes in test network, performance within bounds.
### 6.11.4 Phase 4: Consensus Tracing
| Criterion | Measurement | Target |
| -------------------- | ----------------------------- | ------------------------- |
| Round Tracing | startRound creates root span | Unit test passes |
| Phase Visibility | All phases have child spans | Integration test confirms |
| Proposer Attribution | Proposer ID in attributes | Spot check 50 rounds |
| Timing Accuracy | Phase durations match PerfLog | <5% variance |
| No Consensus Impact | Round timing unchanged | Performance test passes |
**Definition of Done**: Consensus rounds fully traceable, no impact on consensus timing.
### 6.11.5 Phase 5: Production Deployment
| Criterion | Measurement | Target |
| ------------ | ---------------------------- | -------------------------- |
| Collector HA | Multiple collectors deployed | No single point of failure |
| Sampling | Tail sampling configured | 10% base + errors + slow |
| Retention | Data retained per policy | 7 days hot, 30 days warm |
| Alerting | Alerts configured | Error spike, high latency |
| Runbook | Operator documentation | Approved by ops team |
| Training | Team trained | Session completed |
**Definition of Done**: Telemetry running in production, operators trained, alerts active.
### 6.11.6 Success Metrics Summary
| Phase | Primary Metric | Secondary Metric | Deadline |
| ------- | ---------------------- | --------------------------- | ------------- |
| Phase 1 | SDK compiles and runs | Zero overhead when disabled | End of Week 2 |
| Phase 2 | 100% RPC coverage | <1ms latency overhead | End of Week 4 |
| Phase 3 | Cross-node traces work | <5% throughput impact | End of Week 6 |
| Phase 4 | Consensus fully traced | No consensus timing impact | End of Week 8 |
| Phase 5 | Production deployment | Operators trained | End of Week 9 |
---
## 6.12 Recommended Implementation Order
Based on ROI analysis, implement in this exact order:
```mermaid
flowchart TB
subgraph week1["Week 1"]
t1[1. OpenTelemetry SDK<br/>Conan/CMake integration]
t2[2. Telemetry interface<br/>SpanGuard, config]
end
subgraph week2["Week 2"]
t3[3. RPC ServerHandler<br/>instrumentation]
t4[4. Basic Jaeger setup<br/>for testing]
end
subgraph week3["Week 3"]
t5[5. Transaction submit<br/>tracing]
t6[6. Grafana dashboard<br/>v1]
end
subgraph week4["Week 4"]
t7[7. Protobuf context<br/>extension]
t8[8. PeerImp tx.relay<br/>instrumentation]
end
subgraph week5["Week 5"]
t9[9. Multi-node<br/>integration tests]
t10[10. Performance<br/>benchmarks]
end
subgraph week6_8["Weeks 6-8"]
t11[11. Consensus<br/>instrumentation]
t12[12. Full integration<br/>testing]
end
subgraph week9["Week 9"]
t13[13. Production<br/>deployment]
t14[14. Documentation<br/>& training]
end
t1 --> t2 --> t3 --> t4
t4 --> t5 --> t6
t6 --> t7 --> t8
t8 --> t9 --> t10
t10 --> t11 --> t12
t12 --> t13 --> t14
style week1 fill:#1b5e20,stroke:#0d3d14,color:#fff
style week2 fill:#1b5e20,stroke:#0d3d14,color:#fff
style week3 fill:#bf360c,stroke:#8c2809,color:#fff
style week4 fill:#bf360c,stroke:#8c2809,color:#fff
style week5 fill:#bf360c,stroke:#8c2809,color:#fff
style week6_8 fill:#0d47a1,stroke:#082f6a,color:#fff
style week9 fill:#4a148c,stroke:#2e0d57,color:#fff
style t1 fill:#1b5e20,stroke:#0d3d14,color:#fff
style t2 fill:#1b5e20,stroke:#0d3d14,color:#fff
style t3 fill:#1b5e20,stroke:#0d3d14,color:#fff
style t4 fill:#1b5e20,stroke:#0d3d14,color:#fff
style t5 fill:#ffe0b2,stroke:#ffcc80,color:#1e293b
style t6 fill:#ffe0b2,stroke:#ffcc80,color:#1e293b
style t7 fill:#ffe0b2,stroke:#ffcc80,color:#1e293b
style t8 fill:#ffe0b2,stroke:#ffcc80,color:#1e293b
style t9 fill:#ffe0b2,stroke:#ffcc80,color:#1e293b
style t10 fill:#ffe0b2,stroke:#ffcc80,color:#1e293b
style t11 fill:#0d47a1,stroke:#082f6a,color:#fff
style t12 fill:#0d47a1,stroke:#082f6a,color:#fff
style t13 fill:#4a148c,stroke:#2e0d57,color:#fff
style t14 fill:#4a148c,stroke:#2e0d57,color:#fff
```
---
_Previous: [Configuration Reference](./05-configuration-reference.md)_ | _Next: [Observability Backends](./07-observability-backends.md)_ | _Back to: [Overview](./OpenTelemetryPlan.md)_

View File

@@ -0,0 +1,595 @@
# Observability Backend Recommendations
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Related**: [Implementation Phases](./06-implementation-phases.md) | [Appendix](./08-appendix.md)
---
## 7.1 Development/Testing Backends
| Backend | Pros | Cons | Use Case |
| ---------- | ------------------- | ----------------- | ----------------- |
| **Jaeger** | Easy setup, good UI | Limited retention | Local dev, CI |
| **Zipkin** | Simple, lightweight | Basic features | Quick prototyping |
### Quick Start with Jaeger
```bash
# Start Jaeger with OTLP support
docker run -d --name jaeger \
-e COLLECTOR_OTLP_ENABLED=true \
-p 16686:16686 \
-p 4317:4317 \
-p 4318:4318 \
jaegertracing/all-in-one:latest
```
---
## 7.2 Production Backends
| Backend | Pros | Cons | Use Case |
| ----------------- | ----------------------------------------- | ------------------ | --------------------------- |
| **Grafana Tempo** | Cost-effective, Grafana integration | Newer project | Most production deployments |
| **Elastic APM** | Full observability stack, log correlation | Resource intensive | Existing Elastic users |
| **Honeycomb** | Excellent query, high cardinality | SaaS cost | Deep debugging needs |
| **Datadog APM** | Full platform, easy setup | SaaS cost | Enterprise with budget |
### Backend Selection Flowchart
```mermaid
flowchart TD
start[Select Backend] --> budget{Budget<br/>Constraints?}
budget -->|Yes| oss[Open Source]
budget -->|No| saas{Prefer<br/>SaaS?}
oss --> existing{Existing<br/>Stack?}
existing -->|Grafana| tempo[Grafana Tempo]
existing -->|Elastic| elastic[Elastic APM]
existing -->|None| tempo
saas -->|Yes| enterprise{Enterprise<br/>Support?}
saas -->|No| oss
enterprise -->|Yes| datadog[Datadog APM]
enterprise -->|No| honeycomb[Honeycomb]
tempo --> final[Configure Collector]
elastic --> final
honeycomb --> final
datadog --> final
style start fill:#0f172a,stroke:#020617,color:#fff
style budget fill:#334155,stroke:#1e293b,color:#fff
style oss fill:#1e293b,stroke:#0f172a,color:#fff
style existing fill:#334155,stroke:#1e293b,color:#fff
style saas fill:#334155,stroke:#1e293b,color:#fff
style enterprise fill:#334155,stroke:#1e293b,color:#fff
style final fill:#0f172a,stroke:#020617,color:#fff
style tempo fill:#1b5e20,stroke:#0d3d14,color:#fff
style elastic fill:#bf360c,stroke:#8c2809,color:#fff
style honeycomb fill:#0d47a1,stroke:#082f6a,color:#fff
style datadog fill:#4a148c,stroke:#2e0d57,color:#fff
```
---
## 7.3 Recommended Production Architecture
```mermaid
flowchart TB
subgraph validators["Validator Nodes"]
v1[rippled<br/>Validator 1]
v2[rippled<br/>Validator 2]
end
subgraph stock["Stock Nodes"]
s1[rippled<br/>Stock 1]
s2[rippled<br/>Stock 2]
end
subgraph collector["OTel Collector Cluster"]
c1[Collector<br/>DC1]
c2[Collector<br/>DC2]
end
subgraph backends["Storage Backends"]
tempo[(Grafana<br/>Tempo)]
elastic[(Elastic<br/>APM)]
archive[(S3/GCS<br/>Archive)]
end
subgraph ui["Visualization"]
grafana[Grafana<br/>Dashboards]
end
v1 -->|OTLP| c1
v2 -->|OTLP| c1
s1 -->|OTLP| c2
s2 -->|OTLP| c2
c1 --> tempo
c1 --> elastic
c2 --> tempo
c2 --> archive
tempo --> grafana
elastic --> grafana
style validators fill:#b71c1c,stroke:#7f1d1d,color:#ffffff
style stock fill:#0d47a1,stroke:#082f6a,color:#ffffff
style collector fill:#bf360c,stroke:#8c2809,color:#ffffff
style backends fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style ui fill:#4a148c,stroke:#2e0d57,color:#ffffff
```
---
## 7.4 Architecture Considerations
### 7.4.1 Collector Placement
| Strategy | Description | Pros | Cons |
| ------------- | -------------------- | ------------------------ | ----------------------- |
| **Sidecar** | Collector per node | Isolation, simple config | Resource overhead |
| **DaemonSet** | Collector per host | Shared resources | Complexity |
| **Gateway** | Central collector(s) | Centralized processing | Single point of failure |
**Recommendation**: Use **Gateway** pattern with regional collectors for rippled networks:
- One collector cluster per datacenter/region
- Tail-based sampling at collector level
- Multiple export destinations for redundancy
### 7.4.2 Sampling Strategy
```mermaid
flowchart LR
subgraph head["Head Sampling (Node)"]
hs[10% probabilistic]
end
subgraph tail["Tail Sampling (Collector)"]
ts1[Keep all errors]
ts2[Keep slow >5s]
ts3[Keep 10% rest]
end
head --> tail
ts1 --> final[Final Traces]
ts2 --> final
ts3 --> final
style head fill:#0d47a1,stroke:#082f6a,color:#fff
style tail fill:#1b5e20,stroke:#0d3d14,color:#fff
style hs fill:#0d47a1,stroke:#082f6a,color:#fff
style ts1 fill:#1b5e20,stroke:#0d3d14,color:#fff
style ts2 fill:#1b5e20,stroke:#0d3d14,color:#fff
style ts3 fill:#1b5e20,stroke:#0d3d14,color:#fff
style final fill:#bf360c,stroke:#8c2809,color:#fff
```
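On the node side, the 10% head sampling shown above maps to a parent-based ratio sampler in the OTel SDK; the tail rules live entirely in the collector configuration. A minimal sketch:
```cpp
#include <memory>
#include <opentelemetry/sdk/trace/samplers/parent.h>
#include <opentelemetry/sdk/trace/samplers/trace_id_ratio.h>

// 10% head sampling: sample 10% of new root traces, but always follow the
// parent's decision for spans that arrive with an incoming trace context.
std::unique_ptr<opentelemetry::sdk::trace::Sampler> makeHeadSampler(double ratio = 0.10)
{
    return std::make_unique<opentelemetry::sdk::trace::ParentBasedSampler>(
        std::make_shared<opentelemetry::sdk::trace::TraceIdRatioBasedSampler>(ratio));
}
```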
### 7.4.3 Data Retention
| Environment | Hot Storage | Warm Storage | Cold Archive |
| ----------- | ----------- | ------------ | ------------ |
| Development | 24 hours | N/A | N/A |
| Staging | 7 days | N/A | N/A |
| Production | 7 days | 30 days | many years |
---
## 7.5 Integration Checklist
- [ ] Choose primary backend (Tempo recommended for cost/features)
- [ ] Deploy collector cluster with high availability
- [ ] Configure tail-based sampling for error/latency traces
- [ ] Set up Grafana dashboards for trace visualization
- [ ] Configure alerts for trace anomalies
- [ ] Establish data retention policies
- [ ] Test trace correlation with logs and metrics
---
## 7.6 Grafana Dashboard Examples
Pre-built dashboards for rippled observability.
### 7.6.1 Consensus Health Dashboard
```json
{
"title": "rippled Consensus Health",
"uid": "rippled-consensus-health",
"tags": ["rippled", "consensus", "tracing"],
"panels": [
{
"title": "Consensus Round Duration",
"type": "timeseries",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=\"consensus.round\"} | avg(duration) by (resource.service.instance.id)"
}
],
"fieldConfig": {
"defaults": {
"unit": "ms",
"thresholds": {
"steps": [
{ "color": "green", "value": null },
{ "color": "yellow", "value": 4000 },
{ "color": "red", "value": 5000 }
]
}
}
},
"gridPos": { "h": 8, "w": 12, "x": 0, "y": 0 }
},
{
"title": "Phase Duration Breakdown",
"type": "barchart",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=~\"consensus.phase.*\"} | avg(duration) by (name)"
}
],
"gridPos": { "h": 8, "w": 12, "x": 12, "y": 0 }
},
{
"title": "Proposers per Round",
"type": "stat",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=\"consensus.round\"} | avg(span.xrpl.consensus.proposers)"
}
],
"gridPos": { "h": 4, "w": 6, "x": 0, "y": 8 }
},
{
"title": "Recent Slow Rounds (>5s)",
"type": "table",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=\"consensus.round\"} | duration > 5s"
}
],
"gridPos": { "h": 8, "w": 24, "x": 0, "y": 12 }
}
]
}
```
### 7.6.2 Node Overview Dashboard
```json
{
"title": "rippled Node Overview",
"uid": "rippled-node-overview",
"panels": [
{
"title": "Active Nodes",
"type": "stat",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\"} | count_over_time() by (resource.service.instance.id) | count()"
}
],
"gridPos": { "h": 4, "w": 4, "x": 0, "y": 0 }
},
{
"title": "Total Transactions (1h)",
"type": "stat",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=\"tx.receive\"} | count()"
}
],
"gridPos": { "h": 4, "w": 4, "x": 4, "y": 0 }
},
{
"title": "Error Rate",
"type": "gauge",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && status.code=error} | rate() / {resource.service.name=\"rippled\"} | rate() * 100"
}
],
"fieldConfig": {
"defaults": {
"unit": "percent",
"max": 10,
"thresholds": {
"steps": [
{ "color": "green", "value": null },
{ "color": "yellow", "value": 1 },
{ "color": "red", "value": 5 }
]
}
}
},
"gridPos": { "h": 4, "w": 4, "x": 8, "y": 0 }
},
{
"title": "Service Map",
"type": "nodeGraph",
"datasource": "Tempo",
"gridPos": { "h": 12, "w": 12, "x": 12, "y": 0 }
}
]
}
```
### 7.6.3 Alert Rules
```yaml
# grafana/provisioning/alerting/rippled-alerts.yaml
apiVersion: 1
groups:
- name: rippled-tracing-alerts
folder: rippled
interval: 1m
rules:
- uid: consensus-slow
title: Consensus Round Slow
condition: A
data:
- refId: A
datasourceUid: tempo
model:
queryType: traceql
query: '{resource.service.name="rippled" && name="consensus.round"} | avg(duration) > 5s'
for: 5m
annotations:
summary: Consensus rounds taking >5 seconds
description: "Consensus duration: {{ $value }}ms"
labels:
severity: warning
- uid: rpc-error-spike
title: RPC Error Rate Spike
condition: B
data:
- refId: B
datasourceUid: tempo
model:
queryType: traceql
query: '{resource.service.name="rippled" && name=~"rpc.command.*" && status.code=error} | rate() > 0.05'
for: 2m
annotations:
summary: RPC error rate >5%
labels:
severity: critical
- uid: tx-throughput-drop
title: Transaction Throughput Drop
condition: C
data:
- refId: C
datasourceUid: tempo
model:
queryType: traceql
query: '{resource.service.name="rippled" && name="tx.receive"} | rate() < 10'
for: 10m
annotations:
summary: Transaction throughput below threshold
labels:
severity: warning
```
---
## 7.7 PerfLog and Insight Correlation
How to correlate OpenTelemetry traces with existing rippled observability.
### 7.7.1 Correlation Architecture
```mermaid
flowchart TB
subgraph rippled["rippled Node"]
otel[OpenTelemetry<br/>Spans]
perflog[PerfLog<br/>JSON Logs]
insight[Beast Insight<br/>StatsD Metrics]
end
subgraph collectors["Data Collection"]
otelc[OTel Collector]
promtail[Promtail/Fluentd]
statsd[StatsD Exporter]
end
subgraph storage["Storage"]
tempo[(Tempo)]
loki[(Loki)]
prom[(Prometheus)]
end
subgraph grafana["Grafana"]
traces[Trace View]
logs[Log View]
metrics[Metrics View]
corr[Correlation<br/>Panel]
end
otel -->|OTLP| otelc --> tempo
perflog -->|JSON| promtail --> loki
insight -->|StatsD| statsd --> prom
tempo --> traces
loki --> logs
prom --> metrics
traces --> corr
logs --> corr
metrics --> corr
style rippled fill:#0d47a1,stroke:#082f6a,color:#fff
style collectors fill:#bf360c,stroke:#8c2809,color:#fff
style storage fill:#1b5e20,stroke:#0d3d14,color:#fff
style grafana fill:#4a148c,stroke:#2e0d57,color:#fff
style otel fill:#0d47a1,stroke:#082f6a,color:#fff
style perflog fill:#0d47a1,stroke:#082f6a,color:#fff
style insight fill:#0d47a1,stroke:#082f6a,color:#fff
style otelc fill:#bf360c,stroke:#8c2809,color:#fff
style promtail fill:#bf360c,stroke:#8c2809,color:#fff
style statsd fill:#bf360c,stroke:#8c2809,color:#fff
style tempo fill:#1b5e20,stroke:#0d3d14,color:#fff
style loki fill:#1b5e20,stroke:#0d3d14,color:#fff
style prom fill:#1b5e20,stroke:#0d3d14,color:#fff
style traces fill:#4a148c,stroke:#2e0d57,color:#fff
style logs fill:#4a148c,stroke:#2e0d57,color:#fff
style metrics fill:#4a148c,stroke:#2e0d57,color:#fff
style corr fill:#4a148c,stroke:#2e0d57,color:#fff
```
### 7.7.2 Correlation Fields
| Source | Field | Link To | Purpose |
| ----------- | --------------------------- | ------------- | -------------------------- |
| **Trace** | `trace_id` | Logs | Find log entries for trace |
| **Trace** | `xrpl.tx.hash` | Logs, Metrics | Find TX-related data |
| **Trace** | `xrpl.consensus.ledger.seq` | Logs | Find ledger-related logs |
| **PerfLog** | `trace_id` (new) | Traces | Jump to trace from log |
| **PerfLog** | `ledger_seq` | Traces | Find consensus trace |
| **Insight** | `exemplar.trace_id` | Traces | Jump from metric spike |
### 7.7.3 Example: Debugging a Slow Transaction
**Step 1: Find the trace**
```
# In Grafana Explore with Tempo
{resource.service.name="rippled" && span.xrpl.tx.hash="ABC123..."}
```
**Step 2: Get the trace_id from the trace view**
```
Trace ID: 4bf92f3577b34da6a3ce929d0e0e4736
```
**Step 3: Find related PerfLog entries**
```
# In Grafana Explore with Loki
{job="rippled"} |= "4bf92f3577b34da6a3ce929d0e0e4736"
```
**Step 4: Check Insight metrics for the time window**
```
# In Grafana with Prometheus
rate(rippled_tx_applied_total[1m])
@ timestamp_from_trace
```
### 7.7.4 Unified Dashboard Example
```json
{
"title": "rippled Unified Observability",
"uid": "rippled-unified",
"panels": [
{
"title": "Transaction Latency (Traces)",
"type": "timeseries",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=\"tx.receive\"} | histogram_over_time(duration)"
}
],
"gridPos": { "h": 6, "w": 8, "x": 0, "y": 0 }
},
{
"title": "Transaction Rate (Metrics)",
"type": "timeseries",
"datasource": "Prometheus",
"targets": [
{
"expr": "rate(rippled_tx_received_total[5m])",
"legendFormat": "{{ instance }}"
}
],
"fieldConfig": {
"defaults": {
"links": [
{
"title": "View traces",
"url": "/explore?left={\"datasource\":\"Tempo\",\"query\":\"{resource.service.name=\\\"rippled\\\" && name=\\\"tx.receive\\\"}\"}"
}
]
}
},
"gridPos": { "h": 6, "w": 8, "x": 8, "y": 0 }
},
{
"title": "Recent Logs",
"type": "logs",
"datasource": "Loki",
"targets": [
{
"expr": "{job=\"rippled\"} | json"
}
],
"gridPos": { "h": 6, "w": 8, "x": 16, "y": 0 }
},
{
"title": "Trace Search",
"type": "table",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\"}"
}
],
"fieldConfig": {
"overrides": [
{
"matcher": { "id": "byName", "options": "traceID" },
"properties": [
{
"id": "links",
"value": [
{
"title": "View trace",
"url": "/explore?left={\"datasource\":\"Tempo\",\"query\":\"${__value.raw}\"}"
},
{
"title": "View logs",
"url": "/explore?left={\"datasource\":\"Loki\",\"query\":\"{job=\\\"rippled\\\"} |= \\\"${__value.raw}\\\"\"}"
}
]
}
]
}
]
},
"gridPos": { "h": 12, "w": 24, "x": 0, "y": 6 }
}
]
}
```
---
_Previous: [Implementation Phases](./06-implementation-phases.md)_ | _Next: [Appendix](./08-appendix.md)_ | _Back to: [Overview](./OpenTelemetryPlan.md)_

View File

@@ -0,0 +1,147 @@
# Appendix
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Related**: [Observability Backends](./07-observability-backends.md)
---
## 8.1 Glossary
| Term | Definition |
| --------------------- | ---------------------------------------------------------- |
| **Span** | A unit of work with start/end time, name, and attributes |
| **Trace** | A collection of spans representing a complete request flow |
| **Trace ID** | 128-bit unique identifier for a trace |
| **Span ID** | 64-bit unique identifier for a span within a trace |
| **Context** | Carrier for trace/span IDs across boundaries |
| **Propagator** | Component that injects/extracts context |
| **Sampler** | Decides which traces to record |
| **Exporter** | Sends spans to backend |
| **Collector** | Receives, processes, and forwards telemetry |
| **OTLP** | OpenTelemetry Protocol (wire format) |
| **W3C Trace Context** | Standard HTTP headers for trace propagation |
| **Baggage** | Key-value pairs propagated across service boundaries |
| **Resource** | Entity producing telemetry (service, host, etc.) |
| **Instrumentation** | Code that creates telemetry data |
### rippled-Specific Terms
| Term | Definition |
| ----------------- | -------------------------------------------------- |
| **Overlay** | P2P network layer managing peer connections |
| **Consensus** | XRP Ledger consensus algorithm (RCL) |
| **Proposal** | Validator's suggested transaction set for a ledger |
| **Validation** | Validator's signature on a closed ledger |
| **HashRouter** | Component for transaction deduplication |
| **JobQueue** | Thread pool for asynchronous task execution |
| **PerfLog** | Existing performance logging system in rippled |
| **Beast Insight** | Existing metrics framework in rippled |
---
## 8.2 Span Hierarchy Visualization
```mermaid
flowchart TB
subgraph trace["Trace: Transaction Lifecycle"]
rpc["rpc.submit<br/>(entry point)"]
validate["tx.validate"]
relay["tx.relay<br/>(parent span)"]
subgraph peers["Peer Spans"]
p1["peer.send<br/>Peer A"]
p2["peer.send<br/>Peer B"]
p3["peer.send<br/>Peer C"]
end
consensus["consensus.round"]
apply["tx.apply"]
end
rpc --> validate
validate --> relay
relay --> p1
relay --> p2
relay --> p3
p1 -.->|"context propagation"| consensus
consensus --> apply
style trace fill:#0f172a,stroke:#020617,color:#fff
style peers fill:#1e3a8a,stroke:#172554,color:#fff
style rpc fill:#1d4ed8,stroke:#1e40af,color:#fff
style validate fill:#047857,stroke:#064e3b,color:#fff
style relay fill:#047857,stroke:#064e3b,color:#fff
style p1 fill:#0e7490,stroke:#155e75,color:#fff
style p2 fill:#0e7490,stroke:#155e75,color:#fff
style p3 fill:#0e7490,stroke:#155e75,color:#fff
style consensus fill:#fef3c7,stroke:#fde68a,color:#1e293b
style apply fill:#047857,stroke:#064e3b,color:#fff
```
---
## 8.3 References
### OpenTelemetry Resources
1. [OpenTelemetry C++ SDK](https://github.com/open-telemetry/opentelemetry-cpp)
2. [OpenTelemetry Specification](https://opentelemetry.io/docs/specs/otel/)
3. [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/)
4. [OTLP Protocol Specification](https://opentelemetry.io/docs/specs/otlp/)
### Standards
5. [W3C Trace Context](https://www.w3.org/TR/trace-context/)
6. [W3C Baggage](https://www.w3.org/TR/baggage/)
7. [Protocol Buffers](https://protobuf.dev/)
### rippled Resources
8. [rippled Source Code](https://github.com/XRPLF/rippled)
9. [XRP Ledger Documentation](https://xrpl.org/docs/)
10. [rippled Overlay README](https://github.com/XRPLF/rippled/blob/develop/src/xrpld/overlay/README.md)
11. [rippled RPC README](https://github.com/XRPLF/rippled/blob/develop/src/xrpld/rpc/README.md)
12. [rippled Consensus README](https://github.com/XRPLF/rippled/blob/develop/src/xrpld/app/consensus/README.md)
---
## 8.4 Version History
| Version | Date | Author | Changes |
| ------- | ---------- | ------ | --------------------------------- |
| 1.0 | 2026-02-12 | - | Initial implementation plan |
| 1.1 | 2026-02-13 | - | Refactored into modular documents |
---
## 8.5 Document Index
### Plan Documents
| Document | Description |
| ---------------------------------------------------------------- | -------------------------------------------- |
| [OpenTelemetryPlan.md](./OpenTelemetryPlan.md) | Master overview and executive summary |
| [00-tracing-fundamentals.md](./00-tracing-fundamentals.md) | Distributed tracing concepts and OTel primer |
| [01-architecture-analysis.md](./01-architecture-analysis.md) | rippled architecture and trace points |
| [02-design-decisions.md](./02-design-decisions.md) | SDK selection, exporters, span conventions |
| [03-implementation-strategy.md](./03-implementation-strategy.md) | Directory structure, performance analysis |
| [04-code-samples.md](./04-code-samples.md) | C++ code examples for all components |
| [05-configuration-reference.md](./05-configuration-reference.md) | rippled config, CMake, Collector configs |
| [06-implementation-phases.md](./06-implementation-phases.md) | Timeline, tasks, risks, success metrics |
| [07-observability-backends.md](./07-observability-backends.md) | Backend selection and architecture |
| [08-appendix.md](./08-appendix.md) | Glossary, references, version history |
| [presentation.md](./presentation.md) | Slide deck for OTel plan overview |
### Task Lists
| Document | Description |
| ------------------------------------------ | -------------------------------------- |
| [POC_taskList.md](./POC_taskList.md) | Proof-of-concept telemetry integration |
| [Phase2_taskList.md](./Phase2_taskList.md) | RPC layer trace instrumentation |
| [Phase3_taskList.md](./Phase3_taskList.md) | Transaction lifecycle & peer overlay tracing |
| [Phase4_taskList.md](./Phase4_taskList.md) | Consensus tracing (round lifecycle & establish phase) |
| [Phase5_taskList.md](./Phase5_taskList.md) | Ledger processing & advanced tracing |
---
_Previous: [Observability Backends](./07-observability-backends.md)_ | _Back to: [Overview](./OpenTelemetryPlan.md)_

View File

@@ -0,0 +1,190 @@
# [OpenTelemetry](00-tracing-fundamentals.md) Distributed Tracing Implementation Plan for rippled (xrpld)
## Executive Summary
This document provides a comprehensive implementation plan for integrating OpenTelemetry distributed tracing into the rippled XRP Ledger node software. The plan addresses the unique challenges of a decentralized peer-to-peer system where trace context must propagate across network boundaries between independent nodes.
### Key Benefits
- **End-to-end transaction visibility**: Track transactions from submission through consensus to ledger inclusion
- **Consensus round analysis**: Understand timing and behavior of consensus phases across validators
- **RPC performance insights**: Identify slow handlers and optimize response times
- **Network topology understanding**: Visualize message propagation patterns between peers
- **Incident debugging**: Correlate events across distributed nodes during issues
### Estimated Performance Overhead
| Metric | Overhead | Notes |
| ------------- | ---------- | ----------------------------------- |
| CPU | 1-3% | Span creation and attribute setting |
| Memory | 2-5 MB | Batch buffer for pending spans |
| Network | 10-50 KB/s | Compressed OTLP export to collector |
| Latency (p99) | <2% | With proper sampling configuration |
---
## Document Structure
This implementation plan is organized into modular documents for easier navigation:
<div align="center">
```mermaid
flowchart TB
overview["📋 OpenTelemetryPlan.md<br/>(This Document)"]
subgraph analysis["Analysis & Design"]
arch["01-architecture-analysis.md"]
design["02-design-decisions.md"]
end
subgraph impl["Implementation"]
strategy["03-implementation-strategy.md"]
code["04-code-samples.md"]
config["05-configuration-reference.md"]
end
subgraph deploy["Deployment & Planning"]
phases["06-implementation-phases.md"]
backends["07-observability-backends.md"]
appendix["08-appendix.md"]
end
overview --> analysis
overview --> impl
overview --> deploy
arch --> design
design --> strategy
strategy --> code
code --> config
config --> phases
phases --> backends
backends --> appendix
style overview fill:#1b5e20,stroke:#0d3d14,color:#fff,stroke-width:2px
style analysis fill:#0d47a1,stroke:#082f6a,color:#fff
style impl fill:#bf360c,stroke:#8c2809,color:#fff
style deploy fill:#4a148c,stroke:#2e0d57,color:#fff
style arch fill:#0d47a1,stroke:#082f6a,color:#fff
style design fill:#0d47a1,stroke:#082f6a,color:#fff
style strategy fill:#bf360c,stroke:#8c2809,color:#fff
style code fill:#bf360c,stroke:#8c2809,color:#fff
style config fill:#bf360c,stroke:#8c2809,color:#fff
style phases fill:#4a148c,stroke:#2e0d57,color:#fff
style backends fill:#4a148c,stroke:#2e0d57,color:#fff
style appendix fill:#4a148c,stroke:#2e0d57,color:#fff
```
</div>
---
## Table of Contents
| Section | Document | Description |
| ------- | ---------------------------------------------------------- | ---------------------------------------------------------------------- |
| **1** | [Architecture Analysis](./01-architecture-analysis.md) | rippled component analysis, trace points, instrumentation priorities |
| **2** | [Design Decisions](./02-design-decisions.md) | SDK selection, exporters, span naming, attributes, context propagation |
| **3** | [Implementation Strategy](./03-implementation-strategy.md) | Directory structure, key principles, performance optimization |
| **4** | [Code Samples](./04-code-samples.md) | Complete C++ implementation examples for all components |
| **5** | [Configuration Reference](./05-configuration-reference.md) | rippled config, CMake integration, Collector configurations |
| **6** | [Implementation Phases](./06-implementation-phases.md) | 5-phase timeline, tasks, risks, success metrics |
| **7** | [Observability Backends](./07-observability-backends.md) | Backend selection guide and production architecture |
| **8** | [Appendix](./08-appendix.md) | Glossary, references, version history |
---
## 1. Architecture Analysis
The rippled node consists of several key components that require instrumentation for comprehensive distributed tracing. The main areas include the RPC server (HTTP/WebSocket), Overlay P2P network, Consensus mechanism (RCLConsensus), JobQueue for async task execution, and existing observability infrastructure (PerfLog, Insight/StatsD, Journal logging).
Key trace points span across transaction submission via RPC, peer-to-peer message propagation, consensus round execution, and ledger building. The implementation prioritizes high-value, low-risk components first: RPC handlers provide immediate value with minimal risk, while consensus tracing requires careful implementation to avoid timing impacts.
➡️ **[Read full Architecture Analysis](./01-architecture-analysis.md)**
---
## 2. Design Decisions
The OpenTelemetry C++ SDK is selected for its CNCF backing, active development, and native performance characteristics. Traces are exported via OTLP/gRPC (primary) or OTLP/HTTP (fallback) to an OpenTelemetry Collector, which provides flexible routing and sampling.
Span naming follows a hierarchical `<component>.<operation>` convention (e.g., `rpc.submit`, `tx.relay`, `consensus.round`). Context propagation uses W3C Trace Context headers for HTTP and embedded Protocol Buffer fields for P2P messages. The implementation coexists with existing PerfLog and Insight observability systems through correlation IDs.
**Data Collection & Privacy**: Telemetry collects only operational metadata (timing, counts, hashes) — never sensitive content (private keys, balances, amounts, raw payloads). Privacy protection includes account hashing, configurable redaction, sampling, and collector-level filtering. Node operators retain full control over what data is exported (the specific controls are not yet detailed in this document).
➡️ **[Read full Design Decisions](./02-design-decisions.md)**
---
## 3. Implementation Strategy
The telemetry code is organized under `include/xrpl/telemetry/` for headers and `src/libxrpl/telemetry/` for implementation. Key principles include RAII-based span management via `SpanGuard`, conditional compilation with `XRPL_ENABLE_TELEMETRY`, and minimal runtime overhead through batch processing and efficient sampling.
Performance optimization strategies include probabilistic head sampling (10% default), tail-based sampling at the collector for errors and slow traces, batch export to reduce network overhead, and conditional instrumentation that compiles to no-ops when disabled.
➡️ **[Read full Implementation Strategy](./03-implementation-strategy.md)**
---
## 4. Code Samples
Complete C++ implementation examples are provided for all telemetry components:
- `Telemetry.h` - Core interface for tracer access and span creation
- `SpanGuard.h` - RAII wrapper for automatic span lifecycle management
- `TracingInstrumentation.h` - Macros for conditional instrumentation
- Protocol Buffer extensions for trace context propagation
- Module-specific instrumentation (RPC, Consensus, P2P, JobQueue)
➡️ **[View all Code Samples](./04-code-samples.md)**
---
## 5. Configuration Reference
Configuration is handled through the `[telemetry]` section in `xrpld.cfg` with options for enabling/disabling, exporter selection, endpoint configuration, sampling ratios, and component-level filtering. CMake integration includes a `XRPL_ENABLE_TELEMETRY` option for compile-time control.
OpenTelemetry Collector configurations are provided for development (with Jaeger) and production (with tail-based sampling, Tempo, and Elastic APM). Docker Compose examples enable quick local development environment setup.
➡️ **[View full Configuration Reference](./05-configuration-reference.md)**
---
## 6. Implementation Phases
The implementation spans 9 weeks across 5 phases:
| Phase | Duration | Focus | Key Deliverables |
| ----- | --------- | ------------------- | --------------------------------------------------- |
| 1 | Weeks 1-2 | Core Infrastructure | SDK integration, Telemetry interface, Configuration |
| 2 | Weeks 3-4 | RPC Tracing | HTTP context extraction, Handler instrumentation |
| 3 | Weeks 5-6 | Transaction Tracing | Protocol Buffer context, Relay propagation |
| 4 | Weeks 7-8 | Consensus Tracing | Round spans, Proposal/validation tracing |
| 5 | Week 9 | Documentation | Runbook, Dashboards, Training |
**Total Effort**: 47 developer-days with 2 developers
➡️ **[View full Implementation Phases](./06-implementation-phases.md)**
---
## 7. Observability Backends
For development and testing, Jaeger provides easy setup with a good UI. For production deployments, Grafana Tempo is recommended for its cost-effectiveness and Grafana integration, while Elastic APM is ideal for organizations with existing Elastic infrastructure.
The recommended production architecture uses a gateway collector pattern with regional collectors performing tail-based sampling, routing traces to multiple backends (Tempo for primary storage, Elastic for log correlation, S3/GCS for long-term archive).
➡️ **[View Observability Backend Recommendations](./07-observability-backends.md)**
---
## 8. Appendix
The appendix contains a glossary of OpenTelemetry and rippled-specific terms, references to external documentation and specifications, version history for this implementation plan, and a complete document index.
➡️ **[View Appendix](./08-appendix.md)**
---
_This document provides a comprehensive implementation plan for integrating OpenTelemetry distributed tracing into the rippled XRP Ledger node software. For detailed information on any section, follow the links to the corresponding sub-documents._

View File

@@ -0,0 +1,610 @@
# OpenTelemetry POC Task List
> **Goal**: Build a minimal end-to-end proof of concept that demonstrates distributed tracing in rippled. A successful POC will show RPC request traces flowing from rippled through an OTel Collector into Jaeger, viewable in a browser UI.
>
> **Scope**: RPC tracing only (highest value, lowest risk per the [CRAWL phase](./06-implementation-phases.md#6102-quick-wins-immediate-value) in the implementation phases). No cross-node P2P context propagation or consensus tracing in the POC.
### Related Plan Documents
| Document | Relevance to POC |
| ---------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [00-tracing-fundamentals.md](./00-tracing-fundamentals.md) | Core concepts: traces, spans, context propagation, sampling |
| [01-architecture-analysis.md](./01-architecture-analysis.md) | RPC request flow (§1.5), key trace points (§1.6), instrumentation priority (§1.7) |
| [02-design-decisions.md](./02-design-decisions.md) | SDK selection (§2.1), exporter config (§2.2), span naming (§2.3), attribute schema (§2.4), coexistence with PerfLog/Insight (§2.6) |
| [03-implementation-strategy.md](./03-implementation-strategy.md) | Directory structure (§3.1), key principles (§3.2), performance overhead (§3.3-3.6), conditional compilation (§3.7.3), code intrusiveness (§3.9) |
| [04-code-samples.md](./04-code-samples.md) | Telemetry interface (§4.1), SpanGuard (§4.2), macros (§4.3), RPC instrumentation (§4.5.3) |
| [05-configuration-reference.md](./05-configuration-reference.md) | rippled config (§5.1), config parser (§5.2), Application integration (§5.3), CMake (§5.4), Collector config (§5.5), Docker Compose (§5.6), Grafana (§5.8) |
| [06-implementation-phases.md](./06-implementation-phases.md) | Phase 1 core tasks (§6.2), Phase 2 RPC tasks (§6.3), quick wins (§6.10), definition of done (§6.11) |
| [07-observability-backends.md](./07-observability-backends.md) | Jaeger dev setup (§7.1), Grafana dashboards (§7.6), alert rules (§7.6.3) |
---
## Task 0: Docker Observability Stack Setup
**Objective**: Stand up the backend infrastructure to receive, store, and display traces.
**What to do**:
- Create `docker/telemetry/docker-compose.yml` in the repo with three services:
1. **OpenTelemetry Collector** (`otel/opentelemetry-collector-contrib:latest`)
- Expose ports `4317` (OTLP gRPC) and `4318` (OTLP HTTP)
- Expose port `13133` (health check)
- Mount a config file `docker/telemetry/otel-collector-config.yaml`
2. **Jaeger** (`jaegertracing/all-in-one:latest`)
- Expose port `16686` (UI) and `14250` (gRPC collector)
- Set env `COLLECTOR_OTLP_ENABLED=true`
3. **Grafana** (`grafana/grafana:latest`) — optional but useful
- Expose port `3000`
- Enable anonymous admin access for local dev (`GF_AUTH_ANONYMOUS_ENABLED=true`, `GF_AUTH_ANONYMOUS_ORG_ROLE=Admin`)
- Provision Jaeger as a data source via `docker/telemetry/grafana/provisioning/datasources/jaeger.yaml`
- Create `docker/telemetry/otel-collector-config.yaml`:
```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

processors:
  batch:
    timeout: 1s
    send_batch_size: 100

exporters:
  # `debug` replaces the deprecated `logging` exporter (see POC lessons learned)
  debug:
    verbosity: detailed
  otlp/jaeger:
    endpoint: jaeger:4317
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug, otlp/jaeger]
```
- Create Grafana Jaeger datasource provisioning file at `docker/telemetry/grafana/provisioning/datasources/jaeger.yaml`:
```yaml
apiVersion: 1

datasources:
  - name: Jaeger
    type: jaeger
    access: proxy
    url: http://jaeger:16686
```
**Verification**: Run `docker compose -f docker/telemetry/docker-compose.yml up -d`, then:
- `curl http://localhost:13133` returns healthy (Collector)
- `http://localhost:16686` opens Jaeger UI (no traces yet)
- `http://localhost:3000` opens Grafana (optional)
**Reference**:
- [05-configuration-reference.md §5.5](./05-configuration-reference.md) — Collector config (dev YAML with Jaeger exporter)
- [05-configuration-reference.md §5.6](./05-configuration-reference.md) — Docker Compose development environment
- [07-observability-backends.md §7.1](./07-observability-backends.md) — Jaeger quick start and backend selection
- [05-configuration-reference.md §5.8](./05-configuration-reference.md) — Grafana datasource provisioning and dashboards
---
## Task 1: Add OpenTelemetry C++ SDK Dependency
**Objective**: Make `opentelemetry-cpp` available to the build system.
**What to do**:
- Edit `conanfile.py` to add `opentelemetry-cpp` as an **optional** dependency. The gRPC otel plugin flag (`"grpc/*:otel_plugin": False`) in the existing conanfile may need to remain false — we pull the OTel SDK separately.
- Add a Conan option: `with_telemetry = [True, False]` defaulting to `False`
- When `with_telemetry` is `True`, add `opentelemetry-cpp` to `self.requires()`
- Required OTel Conan components: `opentelemetry-cpp` (which bundles api, sdk, and exporters). If the package isn't in Conan Center, consider using `FetchContent` in CMake or building from source as a fallback.
- Edit `CMakeLists.txt`:
- Add option: `option(XRPL_ENABLE_TELEMETRY "Enable OpenTelemetry tracing" OFF)`
- When ON, `find_package(opentelemetry-cpp CONFIG REQUIRED)` and add compile definition `XRPL_ENABLE_TELEMETRY`
- When OFF, do nothing (zero build impact)
- Verify the build succeeds with `-DXRPL_ENABLE_TELEMETRY=OFF` (no regressions) and with `-DXRPL_ENABLE_TELEMETRY=ON` (SDK links successfully).
**Key files**:
- `conanfile.py`
- `CMakeLists.txt`
**Reference**:
- [05-configuration-reference.md §5.4](./05-configuration-reference.md) — CMake integration, `FindOpenTelemetry.cmake`, `XRPL_ENABLE_TELEMETRY` option
- [03-implementation-strategy.md §3.2](./03-implementation-strategy.md) — Key principle: zero-cost when disabled via compile-time flags
- [02-design-decisions.md §2.1](./02-design-decisions.md) — SDK selection rationale and required OTel components
---
## Task 2: Create Core Telemetry Interface and NullTelemetry
**Objective**: Define the `Telemetry` abstract interface and a no-op implementation so the rest of the codebase can reference telemetry without hard-depending on the OTel SDK.
**What to do**:
- Create `include/xrpl/telemetry/Telemetry.h`:
- Define `namespace xrpl::telemetry`
- Define `struct Telemetry::Setup` holding: `enabled`, `exporterEndpoint`, `samplingRatio`, `serviceName`, `serviceVersion`, `serviceInstanceId`, `traceRpc`, `traceTransactions`, `traceConsensus`, `tracePeer`
- Define abstract `class Telemetry` with:
- `virtual void start() = 0;`
- `virtual void stop() = 0;`
- `virtual bool isEnabled() const = 0;`
- `virtual nostd::shared_ptr<Tracer> getTracer(string_view name = "rippled") = 0;`
- `virtual nostd::shared_ptr<Span> startSpan(string_view name, SpanKind kind = kInternal) = 0;`
- `virtual nostd::shared_ptr<Span> startSpan(string_view name, Context const& parentContext, SpanKind kind = kInternal) = 0;`
- `virtual bool shouldTraceRpc() const = 0;`
- `virtual bool shouldTraceTransactions() const = 0;`
- `virtual bool shouldTraceConsensus() const = 0;`
- Factory: `std::unique_ptr<Telemetry> make_Telemetry(Setup const&, beast::Journal);`
- Config parser: `Telemetry::Setup setup_Telemetry(Section const&, std::string const& nodePublicKey, std::string const& version);`
- Create `include/xrpl/telemetry/SpanGuard.h`:
- RAII guard that takes an `nostd::shared_ptr<Span>`, creates a `Scope`, and calls `span->End()` in destructor.
- Convenience: `setAttribute()`, `setOk()`, `setStatus()`, `addEvent()`, `recordException()`, `context()`
- See [04-code-samples.md](./04-code-samples.md) §4.2 for the full implementation.
- Create `src/libxrpl/telemetry/NullTelemetry.cpp`:
- Implements `Telemetry` with all no-ops.
- `isEnabled()` returns `false`, `startSpan()` returns a noop span.
- This is used when `XRPL_ENABLE_TELEMETRY` is OFF or `enabled=0` in config.
- Guard all OTel SDK headers behind `#ifdef XRPL_ENABLE_TELEMETRY`. The `NullTelemetry` implementation should compile without the OTel SDK present.
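A minimal sketch of the `SpanGuard` shape described above, assuming the member layout and method subset shown here (the full version in §4.2 also covers the no-op variant):

```cpp
// Minimal SpanGuard sketch (illustrative only; see 04-code-samples.md §4.2 for the
// real implementation). Assumes XRPL_ENABLE_TELEMETRY is defined.
#include <opentelemetry/common/attribute_value.h>
#include <opentelemetry/nostd/shared_ptr.h>
#include <opentelemetry/trace/scope.h>
#include <opentelemetry/trace/span.h>

#include <utility>

namespace xrpl::telemetry {

class SpanGuard
{
public:
    explicit SpanGuard(
        opentelemetry::nostd::shared_ptr<opentelemetry::trace::Span> span)
        : span_(std::move(span))
        , scope_(span_)  // activate the span for this thread; created once, here
    {
    }

    ~SpanGuard()
    {
        if (span_)
            span_->End();
    }

    SpanGuard(SpanGuard const&) = delete;
    SpanGuard& operator=(SpanGuard const&) = delete;

    void
    setAttribute(
        opentelemetry::nostd::string_view key,
        opentelemetry::common::AttributeValue value)
    {
        span_->SetAttribute(key, value);
    }

private:
    opentelemetry::nostd::shared_ptr<opentelemetry::trace::Span> span_;
    opentelemetry::trace::Scope scope_;
};

}  // namespace xrpl::telemetry
```

The real guard additionally exposes `setOk()`, `setStatus()`, `addEvent()`, `recordException()`, and `context()`, and needs careful move semantics around `Scope` (see the POC lessons learned).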
**Key new files**:
- `include/xrpl/telemetry/Telemetry.h`
- `include/xrpl/telemetry/SpanGuard.h`
- `src/libxrpl/telemetry/NullTelemetry.cpp`
**Reference**:
- [04-code-samples.md §4.1](./04-code-samples.md) — Full `Telemetry` interface with `Setup` struct, lifecycle, tracer access, span creation, and component filtering methods
- [04-code-samples.md §4.2](./04-code-samples.md) — Full `SpanGuard` RAII implementation and `NullSpanGuard` no-op class
- [03-implementation-strategy.md §3.1](./03-implementation-strategy.md) — Directory structure: `include/xrpl/telemetry/` for headers, `src/libxrpl/telemetry/` for implementation
- [03-implementation-strategy.md §3.7.3](./03-implementation-strategy.md) — Conditional instrumentation and zero-cost compile-time disabled pattern
---
## Task 3: Implement OTel-Backed Telemetry
**Objective**: Implement the real `Telemetry` class that initializes the OTel SDK, configures the OTLP exporter and batch processor, and creates tracers/spans.
**What to do**:
- Create `src/libxrpl/telemetry/Telemetry.cpp` (compiled only when `XRPL_ENABLE_TELEMETRY=ON`):
- `class TelemetryImpl : public Telemetry` that:
- In `start()`: creates a `TracerProvider` with:
- Resource attributes: `service.name`, `service.version`, `service.instance.id`
- An `OtlpGrpcExporter` pointed at `setup.exporterEndpoint` (default `localhost:4317`)
- A `BatchSpanProcessor` with configurable batch size and delay
- A `TraceIdRatioBasedSampler` using `setup.samplingRatio`
- Sets the global `TracerProvider`
- In `stop()`: calls `ForceFlush()` then shuts down the provider
- In `startSpan()`: delegates to `getTracer()->StartSpan(name, ...)`
- `shouldTraceRpc()` etc. read from `Setup` fields
- Create `src/libxrpl/telemetry/TelemetryConfig.cpp`:
- `setup_Telemetry()` parses the `[telemetry]` config section from `xrpld.cfg`
- Maps config keys: `enabled`, `exporter`, `endpoint`, `sampling_ratio`, `trace_rpc`, `trace_transactions`, `trace_consensus`, `trace_peer`
- Wire `make_Telemetry()` factory:
- If `setup.enabled` is true AND `XRPL_ENABLE_TELEMETRY` is defined: return `TelemetryImpl`
- Otherwise: return `NullTelemetry`
- Add telemetry source files to CMake. When `XRPL_ENABLE_TELEMETRY=ON`, compile `Telemetry.cpp` and `TelemetryConfig.cpp` and link against `opentelemetry-cpp::api`, `opentelemetry-cpp::sdk`, `opentelemetry-cpp::otlp_grpc_exporter`. When OFF, compile only `NullTelemetry.cpp`.
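A sketch of what `TelemetryImpl::start()` might look like using the opentelemetry-cpp factory APIs. The `setup_` and `provider_` member names and the batch options chosen here are assumptions, not the final implementation:

```cpp
// Illustrative TelemetryImpl::start() body (member names setup_ and provider_ are
// assumed; provider_ is a std::shared_ptr<sdk::trace::TracerProvider> member).
// Compiled only when XRPL_ENABLE_TELEMETRY=ON.
#include <opentelemetry/exporters/otlp/otlp_grpc_exporter_factory.h>
#include <opentelemetry/sdk/resource/resource.h>
#include <opentelemetry/sdk/trace/batch_span_processor_factory.h>
#include <opentelemetry/sdk/trace/batch_span_processor_options.h>
#include <opentelemetry/sdk/trace/samplers/trace_id_ratio_factory.h>
#include <opentelemetry/sdk/trace/tracer_provider_factory.h>
#include <opentelemetry/trace/provider.h>

#include <chrono>
#include <memory>

void
TelemetryImpl::start()
{
    namespace otlp = opentelemetry::exporter::otlp;
    namespace sdktrace = opentelemetry::sdk::trace;

    otlp::OtlpGrpcExporterOptions exporterOpts;
    exporterOpts.endpoint = setup_.exporterEndpoint;  // default "localhost:4317"
    auto exporter = otlp::OtlpGrpcExporterFactory::Create(exporterOpts);

    sdktrace::BatchSpanProcessorOptions batchOpts;
    batchOpts.max_export_batch_size = 512;
    batchOpts.schedule_delay_millis = std::chrono::milliseconds(1000);
    auto processor =
        sdktrace::BatchSpanProcessorFactory::Create(std::move(exporter), batchOpts);

    opentelemetry::sdk::resource::ResourceAttributes resourceAttrs{
        {"service.name", setup_.serviceName},
        {"service.version", setup_.serviceVersion},
        {"service.instance.id", setup_.serviceInstanceId}};
    auto resource = opentelemetry::sdk::resource::Resource::Create(resourceAttrs);

    auto sampler =
        sdktrace::TraceIdRatioBasedSamplerFactory::Create(setup_.samplingRatio);

    // The factory returns a unique_ptr to the SDK provider; keep it in a std::shared_ptr
    // member and hand an API-level nostd::shared_ptr to the global provider
    // (see the POC lessons learned on factory return types).
    provider_ = sdktrace::TracerProviderFactory::Create(
        std::move(processor), resource, std::move(sampler));

    std::shared_ptr<opentelemetry::trace::TracerProvider> apiProvider = provider_;
    opentelemetry::trace::Provider::SetTracerProvider(
        opentelemetry::nostd::shared_ptr<opentelemetry::trace::TracerProvider>(
            std::move(apiProvider)));
}
```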
**Key new files**:
- `src/libxrpl/telemetry/Telemetry.cpp`
- `src/libxrpl/telemetry/TelemetryConfig.cpp`
**Key modified files**:
- `CMakeLists.txt` (add telemetry library target)
**Reference**:
- [04-code-samples.md §4.1](./04-code-samples.md) — `Telemetry` interface that `TelemetryImpl` must implement
- [05-configuration-reference.md §5.2](./05-configuration-reference.md) — `setup_Telemetry()` config parser implementation
- [02-design-decisions.md §2.2](./02-design-decisions.md) — OTLP/gRPC exporter config (endpoint, TLS options)
- [02-design-decisions.md §2.4.1](./02-design-decisions.md) — Resource attributes: `service.name`, `service.version`, `service.instance.id`, `xrpl.network.id`
- [03-implementation-strategy.md §3.4](./03-implementation-strategy.md) — Per-operation CPU costs and overhead budget for span creation
- [03-implementation-strategy.md §3.5](./03-implementation-strategy.md) — Memory overhead: static (~456 KB) and dynamic (~1.2 MB) budgets
---
## Task 4: Integrate Telemetry into Application Lifecycle
**Objective**: Wire the `Telemetry` object into `Application` so all components can access it.
**What to do**:
- Edit `src/xrpld/app/main/Application.h`:
- Forward-declare `namespace xrpl::telemetry { class Telemetry; }`
- Add pure virtual method: `virtual telemetry::Telemetry& getTelemetry() = 0;`
- Edit `src/xrpld/app/main/Application.cpp` (the `ApplicationImp` class):
- Add member: `std::unique_ptr<telemetry::Telemetry> telemetry_;`
- In the constructor, after config is loaded and node identity is known:
```cpp
auto const telemetrySection = config_->section("telemetry");
auto telemetrySetup = telemetry::setup_Telemetry(
    telemetrySection,
    toBase58(TokenType::NodePublic, nodeIdentity_.publicKey()),
    BuildInfo::getVersionString());
telemetry_ = telemetry::make_Telemetry(
    telemetrySetup, logs_->journal("Telemetry"));
```
- In `start()`: call `telemetry_->start()` early
- In `stop()` or destructor: call `telemetry_->stop()` late (to flush pending spans)
- Implement `getTelemetry()` override: return `*telemetry_`
- Add `[telemetry]` section to the example config `cfg/rippled-example.cfg`:
```ini
# [telemetry]
# enabled=1
# endpoint=localhost:4317
# sampling_ratio=1.0
# trace_rpc=1
```
**Key modified files**:
- `src/xrpld/app/main/Application.h`
- `src/xrpld/app/main/Application.cpp`
- `cfg/rippled-example.cfg` (or equivalent example config)
**Reference**:
- [05-configuration-reference.md §5.3](./05-configuration-reference.md) — `ApplicationImp` changes: member declaration, constructor init, `start()`/`stop()` wiring, `getTelemetry()` override
- [05-configuration-reference.md §5.1](./05-configuration-reference.md) — `[telemetry]` config section format and all option defaults
- [03-implementation-strategy.md §3.9.2](./03-implementation-strategy.md) — File impact assessment: `Application.cpp` ~15 lines added, ~3 changed (Low risk)
---
## Task 5: Create Instrumentation Macros
**Objective**: Define convenience macros that make instrumenting code one-liners, and that compile to zero-cost no-ops when telemetry is disabled.
**What to do**:
- Create `src/xrpld/telemetry/TracingInstrumentation.h`:
- When `XRPL_ENABLE_TELEMETRY` is defined:
```cpp
// Parameters are deliberately NOT named `telemetry`/`name`: a parameter named
// `telemetry` would be substituted into `::xrpl::telemetry::` (see POC lessons learned).
#define XRPL_TRACE_SPAN(_tel_obj_, _span_name_)                         \
    auto _xrpl_span_ = (_tel_obj_).startSpan(_span_name_);              \
    ::xrpl::telemetry::SpanGuard _xrpl_guard_(_xrpl_span_)

#define XRPL_TRACE_RPC(_tel_obj_, _span_name_)                          \
    std::optional<::xrpl::telemetry::SpanGuard> _xrpl_guard_;           \
    if ((_tel_obj_).shouldTraceRpc())                                   \
    {                                                                   \
        _xrpl_guard_.emplace((_tel_obj_).startSpan(_span_name_));       \
    }

#define XRPL_TRACE_SET_ATTR(key, value)                                 \
    if (_xrpl_guard_.has_value())                                       \
    {                                                                   \
        _xrpl_guard_->setAttribute(key, value);                         \
    }

#define XRPL_TRACE_EXCEPTION(e)                                         \
    if (_xrpl_guard_.has_value())                                       \
    {                                                                   \
        _xrpl_guard_->recordException(e);                               \
    }
```
- When `XRPL_ENABLE_TELEMETRY` is NOT defined, all macros expand to `((void)0)`
**Key new file**:
- `src/xrpld/telemetry/TracingInstrumentation.h`
**Reference**:
- [04-code-samples.md §4.3](./04-code-samples.md) — Full macro definitions for `XRPL_TRACE_SPAN`, `XRPL_TRACE_RPC`, `XRPL_TRACE_CONSENSUS`, `XRPL_TRACE_SET_ATTR`, `XRPL_TRACE_EXCEPTION` with both enabled and disabled branches
- [03-implementation-strategy.md §3.7.3](./03-implementation-strategy.md) — Conditional instrumentation pattern: compile-time `#ifndef` and runtime `shouldTrace*()` checks
- [03-implementation-strategy.md §3.9.7](./03-implementation-strategy.md) — Before/after code examples showing minimal intrusiveness (~1-3 lines per instrumentation point)
---
## Task 6: Instrument RPC ServerHandler
**Objective**: Add tracing to the HTTP RPC entry point so every incoming RPC request creates a span.
**What to do**:
- Edit `src/xrpld/rpc/detail/ServerHandler.cpp`:
- `#include` the `TracingInstrumentation.h` header
- In `ServerHandler::onRequest(Session& session)`:
- At the top of the method, add: `XRPL_TRACE_RPC(app_.getTelemetry(), "rpc.request");`
- After the RPC command name is extracted, set attribute: `XRPL_TRACE_SET_ATTR("xrpl.rpc.command", command);`
- After the response status is known, set: `XRPL_TRACE_SET_ATTR("http.status_code", static_cast<int64_t>(statusCode));`
- Wrap error paths with: `XRPL_TRACE_EXCEPTION(e);`
- In `ServerHandler::processRequest(...)`:
- Add a child span: `XRPL_TRACE_RPC(app_.getTelemetry(), "rpc.process");`
- Set method attribute: `XRPL_TRACE_SET_ATTR("xrpl.rpc.method", request_method);`
- In `ServerHandler::onWSMessage(...)` (WebSocket path):
- Add: `XRPL_TRACE_RPC(app_.getTelemetry(), "rpc.ws.message");`
- The goal is to see spans like:
```
rpc.request
└── rpc.process
```
in Jaeger for every HTTP RPC call.
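A sketch of the instrumentation pattern this task adds, expressed as a standalone function. The function name, parameters, and `dispatch` callback are placeholders standing in for `ServerHandler`'s real signatures; only the tracing calls mirror the task:

```cpp
// Illustrative rpc.request instrumentation pattern (see 04-code-samples.md §4.5.3 for
// the actual ServerHandler::onRequest() code).
#include <xrpl/telemetry/Telemetry.h>
#include <xrpld/telemetry/TracingInstrumentation.h>

#include <cstdint>
#include <functional>
#include <stdexcept>
#include <string>

void
handleRpcRequest(
    xrpl::telemetry::Telemetry& telemetry,
    std::string const& command,
    std::function<std::int64_t()> const& dispatch)  // stands in for request processing
{
    // Creates the rpc.request span only if trace_rpc is enabled at runtime.
    XRPL_TRACE_RPC(telemetry, "rpc.request");
    XRPL_TRACE_SET_ATTR("xrpl.rpc.command", command);

    try
    {
        auto const statusCode = dispatch();
        XRPL_TRACE_SET_ATTR("http.status_code", statusCode);
    }
    catch (std::exception const& e)
    {
        XRPL_TRACE_EXCEPTION(e);
        throw;
    }
}
```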
**Key modified file**:
- `src/xrpld/rpc/detail/ServerHandler.cpp` (~15-25 lines added)
**Reference**:
- [04-code-samples.md §4.5.3](./04-code-samples.md) — Complete `ServerHandler::onRequest()` instrumented code sample with W3C header extraction, span creation, attribute setting, and error handling
- [01-architecture-analysis.md §1.5](./01-architecture-analysis.md) — RPC request flow diagram: HTTP request -> attributes -> jobqueue.enqueue -> rpc.command -> response
- [01-architecture-analysis.md §1.6](./01-architecture-analysis.md) — Key trace points table: `rpc.request` in `ServerHandler.cpp::onRequest()` (Priority: High)
- [02-design-decisions.md §2.3](./02-design-decisions.md) — Span naming convention: `rpc.request`, `rpc.command.*`
- [02-design-decisions.md §2.4.2](./02-design-decisions.md) — RPC span attributes: `xrpl.rpc.command`, `xrpl.rpc.version`, `xrpl.rpc.role`, `xrpl.rpc.params`
- [03-implementation-strategy.md §3.9.2](./03-implementation-strategy.md) — File impact: `ServerHandler.cpp` ~40 lines added, ~10 changed (Low risk)
---
## Task 7: Instrument RPC Command Execution
**Objective**: Add per-command tracing inside the RPC handler so each command (e.g., `submit`, `account_info`, `server_info`) gets its own child span.
**What to do**:
- Edit `src/xrpld/rpc/detail/RPCHandler.cpp`:
- `#include` the `TracingInstrumentation.h` header
- In `doCommand(RPC::JsonContext& context, Json::Value& result)`:
- At the top: `XRPL_TRACE_RPC(context.app.getTelemetry(), "rpc.command." + context.method);`
- Set attributes:
- `XRPL_TRACE_SET_ATTR("xrpl.rpc.command", context.method);`
- `XRPL_TRACE_SET_ATTR("xrpl.rpc.version", static_cast<int64_t>(context.apiVersion));`
- `XRPL_TRACE_SET_ATTR("xrpl.rpc.role", (context.role == Role::ADMIN) ? "admin" : "user");`
- On success: `XRPL_TRACE_SET_ATTR("xrpl.rpc.status", "success");`
- On error: `XRPL_TRACE_SET_ATTR("xrpl.rpc.status", "error");` and set the error message
- After this, traces in Jaeger should look like:
```
rpc.request (xrpl.rpc.command=account_info)
└── rpc.process
└── rpc.command.account_info (xrpl.rpc.version=2, xrpl.rpc.role=user, xrpl.rpc.status=success)
```
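A compact sketch of the per-command span pattern. `method`, `apiVersion`, `isAdmin`, and the `handler` callback are placeholders for fields of `RPC::JsonContext` and the real handler dispatch:

```cpp
// Illustrative rpc.command.* instrumentation pattern; only the tracing calls mirror
// this task, the surrounding function is a stand-in for doCommand().
#include <xrpl/telemetry/Telemetry.h>
#include <xrpld/telemetry/TracingInstrumentation.h>

#include <cstdint>
#include <functional>
#include <string>

bool
traceCommandExecution(
    xrpl::telemetry::Telemetry& telemetry,
    std::string const& method,
    std::uint32_t apiVersion,
    bool isAdmin,
    std::function<bool()> const& handler)  // returns true on success
{
    XRPL_TRACE_RPC(telemetry, "rpc.command." + method);
    XRPL_TRACE_SET_ATTR("xrpl.rpc.command", method);
    XRPL_TRACE_SET_ATTR("xrpl.rpc.version", static_cast<std::int64_t>(apiVersion));
    XRPL_TRACE_SET_ATTR("xrpl.rpc.role", isAdmin ? "admin" : "user");

    bool const ok = handler();
    XRPL_TRACE_SET_ATTR("xrpl.rpc.status", ok ? "success" : "error");
    return ok;
}
```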
**Key modified file**:
- `src/xrpld/rpc/detail/RPCHandler.cpp` (~15-20 lines added)
**Reference**:
- [04-code-samples.md §4.5.3](./04-code-samples.md) — `ServerHandler::onRequest()` code sample (includes child span pattern for `rpc.command.*`)
- [02-design-decisions.md §2.3](./02-design-decisions.md) — Span naming: `rpc.command.*` pattern with dynamic command name (e.g., `rpc.command.server_info`)
- [02-design-decisions.md §2.4.2](./02-design-decisions.md) — RPC attribute schema: `xrpl.rpc.command`, `xrpl.rpc.version`, `xrpl.rpc.role`, `xrpl.rpc.status`
- [01-architecture-analysis.md §1.6](./01-architecture-analysis.md) — Key trace points table: `rpc.command.*` in `RPCHandler.cpp::doCommand()` (Priority: High)
- [02-design-decisions.md §2.6.5](./02-design-decisions.md) — Correlation with PerfLog: how `doCommand()` can link trace_id with existing PerfLog entries
- [03-implementation-strategy.md §3.4.4](./03-implementation-strategy.md) — RPC request overhead budget: ~1.75 μs total per request
---
## Task 8: Build, Run, and Verify End-to-End
**Objective**: Prove the full pipeline works: rippled emits traces -> OTel Collector receives them -> Jaeger displays them.
**What to do**:
1. **Start the Docker stack**:
```bash
docker compose -f docker/telemetry/docker-compose.yml up -d
```
Verify Collector health: `curl http://localhost:13133`
2. **Build rippled with telemetry**:
```bash
# Adjust for your actual build workflow
conan install . --build=missing -o with_telemetry=True
cmake --preset default -DXRPL_ENABLE_TELEMETRY=ON
cmake --build --preset default
```
3. **Configure rippled**:
Add to `rippled.cfg` (or your local test config):
```ini
[telemetry]
enabled=1
endpoint=localhost:4317
sampling_ratio=1.0
trace_rpc=1
```
4. **Start rippled** in standalone mode:
```bash
./rippled --conf rippled.cfg -a --start
```
5. **Generate RPC traffic**:
```bash
# server_info
curl -s -X POST http://localhost:5005 \
-H "Content-Type: application/json" \
-d '{"method":"server_info","params":[{}]}'
# ledger
curl -s -X POST http://localhost:5005 \
-H "Content-Type: application/json" \
-d '{"method":"ledger","params":[{"ledger_index":"current"}]}'
# account_info (will error in standalone, that's fine — we trace errors too)
curl -s -X POST http://localhost:5005 \
-H "Content-Type: application/json" \
-d '{"method":"account_info","params":[{"account":"rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh"}]}'
```
6. **Verify in Jaeger**:
- Open `http://localhost:16686`
- Select service `rippled` from the dropdown
- Click "Find Traces"
- Confirm you see traces with spans: `rpc.request` -> `rpc.process` -> `rpc.command.server_info`
- Click into a trace and verify attributes: `xrpl.rpc.command`, `xrpl.rpc.status`, `xrpl.rpc.version`
7. **Verify zero-overhead when disabled**:
- Rebuild with `XRPL_ENABLE_TELEMETRY=OFF`, or set `enabled=0` in config
- Run the same RPC calls
- Confirm no new traces appear and no errors in rippled logs
**Verification Checklist**:
- [ ] Docker stack starts without errors
- [ ] rippled builds with `-DXRPL_ENABLE_TELEMETRY=ON`
- [ ] rippled starts and connects to OTel Collector (check rippled logs for telemetry messages)
- [ ] Traces appear in Jaeger UI under service "rippled"
- [ ] Span hierarchy is correct (parent-child relationships)
- [ ] Span attributes are populated (`xrpl.rpc.command`, `xrpl.rpc.status`, etc.)
- [ ] Error spans show error status and message
- [ ] Building with `XRPL_ENABLE_TELEMETRY=OFF` produces no regressions
- [ ] Setting `enabled=0` at runtime produces no traces and no errors
**Reference**:
- [06-implementation-phases.md §6.11.1](./06-implementation-phases.md) — Phase 1 definition of done: SDK compiles, runtime toggle works, span creation verified in Jaeger, config validation passes
- [06-implementation-phases.md §6.11.2](./06-implementation-phases.md) — Phase 2 definition of done: 100% RPC coverage, traceparent propagation, <1ms p99 overhead, dashboard deployed
- [06-implementation-phases.md §6.8](./06-implementation-phases.md) — Success metrics: trace coverage >95%, CPU overhead <3%, memory <5 MB, latency impact <2%
- [03-implementation-strategy.md §3.9.5](./03-implementation-strategy.md) — Backward compatibility: config optional, protocol unchanged, `XRPL_ENABLE_TELEMETRY=OFF` produces identical binary
- [01-architecture-analysis.md §1.8](./01-architecture-analysis.md) — Observable outcomes: what traces, metrics, and dashboards to expect
---
## Task 9: Document POC Results and Next Steps
**Objective**: Capture findings, screenshots, and remaining work for the team.
**What to do**:
- Take screenshots of Jaeger showing:
- The service list with "rippled"
- A trace with the full span tree
- Span detail view showing attributes
- Document any issues encountered (build issues, SDK quirks, missing attributes)
- Note performance observations (build time impact, any noticeable runtime overhead)
- Write a short summary of what the POC proves and what it doesn't cover yet:
- **Proves**: OTel SDK integrates with rippled, OTLP export works, RPC traces visible
- **Doesn't cover**: Cross-node P2P context propagation, consensus tracing, protobuf trace context, W3C traceparent header extraction, tail-based sampling, production deployment
- Outline next steps (mapping to the full plan phases):
- [Phase 2](./06-implementation-phases.md) completion: [W3C header extraction](./02-design-decisions.md) (§2.5), WebSocket tracing, all [RPC handlers](./01-architecture-analysis.md) (§1.6)
- [Phase 3](./06-implementation-phases.md): [Protobuf `TraceContext` message](./04-code-samples.md) (§4.4), [transaction relay tracing](./04-code-samples.md) (§4.5.1) across nodes
- [Phase 4](./06-implementation-phases.md): [Consensus round and phase tracing](./04-code-samples.md) (§4.5.2)
- [Phase 5](./06-implementation-phases.md): [Production collector config](./05-configuration-reference.md) (§5.5.2), [Grafana dashboards](./07-observability-backends.md) (§7.6), [alerting](./07-observability-backends.md) (§7.6.3)
**Reference**:
- [06-implementation-phases.md §6.1](./06-implementation-phases.md) — Full 5-phase timeline overview and Gantt chart
- [06-implementation-phases.md §6.10](./06-implementation-phases.md) — Crawl-Walk-Run strategy: POC is the CRAWL phase, next steps are WALK and RUN
- [06-implementation-phases.md §6.12](./06-implementation-phases.md) — Recommended implementation order (14 steps across 9 weeks)
- [03-implementation-strategy.md §3.9](./03-implementation-strategy.md) — Code intrusiveness assessment and risk matrix for each remaining component
- [07-observability-backends.md §7.2](./07-observability-backends.md) — Production backend selection (Tempo, Elastic APM, Honeycomb, Datadog)
- [02-design-decisions.md §2.5](./02-design-decisions.md) — Context propagation design: W3C HTTP headers, protobuf P2P, JobQueue internal
- [00-tracing-fundamentals.md](./00-tracing-fundamentals.md) — Reference for team onboarding on distributed tracing concepts
---
## Summary
| Task | Description | New Files | Modified Files | Depends On |
| ---- | ------------------------------------ | --------- | -------------- | ---------- |
| 0 | Docker observability stack | 4 | 0 | — |
| 1 | OTel C++ SDK dependency | 0 | 2 | — |
| 2 | Core Telemetry interface + NullImpl | 3 | 0 | 1 |
| 3 | OTel-backed Telemetry implementation | 2 | 1 | 1, 2 |
| 4 | Application lifecycle integration | 0 | 3 | 2, 3 |
| 5 | Instrumentation macros | 1 | 0 | 2 |
| 6 | Instrument RPC ServerHandler | 0 | 1 | 4, 5 |
| 7 | Instrument RPC command execution | 0 | 1 | 4, 5 |
| 8 | End-to-end verification | 0 | 0 | 0-7 |
| 9 | Document results and next steps | 1 | 0 | 8 |
**Parallel work**: Tasks 0 and 1 can run in parallel. Task 5 can start as soon as Task 2's headers are in place (the macros only need the `Telemetry` and `SpanGuard` declarations). Tasks 6 and 7 can be done in parallel once Tasks 4 and 5 are complete.
---
## Next Steps (Post-POC)
### Metrics Pipeline for Grafana Dashboards
The current POC exports **traces only**. Grafana's Explore view can query Jaeger for individual traces, but time-series charts (latency histograms, request throughput, error rates) require a **metrics pipeline**. To enable this:
1. **Add a `spanmetrics` connector** to the OTel Collector config that derives RED metrics (Rate, Errors, Duration) from trace spans automatically:
```yaml
connectors:
  spanmetrics:
    histogram:
      explicit:
        buckets: [1ms, 5ms, 10ms, 25ms, 50ms, 100ms, 250ms, 500ms, 1s, 5s]
    dimensions:
      - name: xrpl.rpc.command
      - name: xrpl.rpc.status

exporters:
  prometheus:
    endpoint: 0.0.0.0:8889

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug, otlp/jaeger, spanmetrics]
    metrics:
      receivers: [spanmetrics]
      exporters: [prometheus]
```
2. **Add Prometheus** to the Docker Compose stack to scrape the collector's metrics endpoint.
3. **Add Prometheus as a Grafana datasource** and build dashboards for:
- RPC request latency (p50/p95/p99) by command
- RPC throughput (requests/sec) by command
- Error rate by command
- Span duration distribution
### Additional Instrumentation
- **W3C `traceparent` header extraction** in `ServerHandler` to support cross-service context propagation from external callers
- **WebSocket RPC tracing** in `ServerHandler::onWSMessage()`
- **Transaction relay tracing** across nodes using protobuf `TraceContext` messages
- **Consensus round and phase tracing** for validator coordination visibility
- **Ledger close tracing** to measure close-to-validated latency
### Production Hardening
- **Tail-based sampling** in the OTel Collector to reduce volume while retaining error/slow traces
- **TLS configuration** for the OTLP exporter in production deployments
- **Resource limits** on the batch processor queue to prevent unbounded memory growth
- **Health monitoring** for the telemetry pipeline itself (collector lag, export failures)
### POC Lessons Learned
Issues encountered during POC implementation that inform future work:
| Issue | Resolution | Impact on Future Work |
| -------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------- | ---------------------------------------------------------------- |
| Conan lockfile rejected `opentelemetry-cpp/1.18.0` | Used `--lockfile=""` to bypass | Lockfile must be regenerated when adding new dependencies |
| Conan package only builds OTLP HTTP exporter, not gRPC | Switched from gRPC to HTTP exporter (`localhost:4318/v1/traces`) | HTTP exporter is the default; gRPC requires custom Conan profile |
| CMake target `opentelemetry-cpp::api` etc. don't exist in Conan package | Use umbrella target `opentelemetry-cpp::opentelemetry-cpp` | Conan targets differ from upstream CMake targets |
| OTel Collector `logging` exporter deprecated | Renamed to `debug` exporter | Use `debug` in all collector configs going forward |
| Macro parameter `telemetry` collided with `::xrpl::telemetry::` namespace | Renamed macro params to `_tel_obj_`, `_span_name_` | Avoid common words as macro parameter names |
| `opentelemetry::trace::Scope` creates new context on move | Store scope as member, create once in constructor | SpanGuard move semantics need care with Scope lifecycle |
| `TracerProviderFactory::Create` returns `unique_ptr<sdk::TracerProvider>`, not `nostd::shared_ptr` | Use `std::shared_ptr` member, wrap in `nostd::shared_ptr` for global provider | OTel SDK factory return types don't match API provider types |


@@ -0,0 +1,187 @@
# Phase 2: RPC Tracing Completion Task List
> **Goal**: Complete full RPC tracing coverage with W3C Trace Context propagation, unit tests, and performance validation. Build on the POC foundation to achieve production-quality RPC observability.
>
> **Scope**: W3C header extraction, TraceContext propagation utilities, unit tests for core telemetry, integration tests for RPC tracing, and performance benchmarks.
>
> **Branch**: `pratik/otel-phase2-rpc-tracing` (from `pratik/OpenTelemetry_and_DistributedTracing_planning`)
### Related Plan Documents
| Document | Relevance |
| ------------------------------------------------------------ | ------------------------------------------------------------- |
| [04-code-samples.md](./04-code-samples.md) | TraceContextPropagator (§4.4.2), RPC instrumentation (§4.5.3) |
| [02-design-decisions.md](./02-design-decisions.md) | W3C Trace Context (§2.5), span attributes (§2.4.2) |
| [06-implementation-phases.md](./06-implementation-phases.md) | Phase 2 tasks (§6.3), definition of done (§6.11.2) |
---
## Task 2.1: Implement W3C Trace Context HTTP Header Extraction
**Objective**: Extract `traceparent` and `tracestate` headers from incoming HTTP RPC requests so external callers can propagate their trace context into rippled.
**What to do**:
- Create `include/xrpl/telemetry/TraceContextPropagator.h`:
- `extractFromHeaders(headerGetter)` - extract W3C traceparent/tracestate from HTTP headers
- `injectToHeaders(ctx, headerSetter)` - inject trace context into response headers
- Use OTel's `TextMapPropagator` with `W3CTraceContextPropagator` for standards compliance
- Only compiled when `XRPL_ENABLE_TELEMETRY` is defined
- Create `src/libxrpl/telemetry/TraceContextPropagator.cpp`:
- Implement a simple `TextMapCarrier` adapter for HTTP headers
- Use `opentelemetry::context::propagation::GlobalTextMapPropagator` for extraction/injection
- Register the W3C propagator in `TelemetryImpl::start()`
- Modify `src/xrpld/rpc/detail/ServerHandler.cpp`:
- In the HTTP request handler, extract parent context from headers before creating span
- Pass extracted context to `startSpan()` as parent
- Inject trace context into response headers
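A sketch of the extraction path, assuming a `std::map`-backed header carrier as a stand-in for an adapter over rippled's real header type. `HttpTraceContext` is the standard OTel W3C propagator:

```cpp
// Illustrative W3C traceparent extraction sketch; the carrier shape is a placeholder.
#include <opentelemetry/context/propagation/text_map_propagator.h>
#include <opentelemetry/context/runtime_context.h>
#include <opentelemetry/trace/propagation/http_trace_context.h>

#include <map>
#include <string>

class HttpHeaderCarrier : public opentelemetry::context::propagation::TextMapCarrier
{
public:
    explicit HttpHeaderCarrier(std::map<std::string, std::string>& headers)
        : headers_(headers)
    {
    }

    opentelemetry::nostd::string_view
    Get(opentelemetry::nostd::string_view key) const noexcept override
    {
        auto const it = headers_.find(std::string{key.data(), key.size()});
        return it == headers_.end() ? opentelemetry::nostd::string_view{}
                                    : opentelemetry::nostd::string_view{it->second};
    }

    void
    Set(opentelemetry::nostd::string_view key,
        opentelemetry::nostd::string_view value) noexcept override
    {
        headers_[std::string{key.data(), key.size()}] =
            std::string{value.data(), value.size()};
    }

private:
    std::map<std::string, std::string>& headers_;
};

// Returns a Context carrying the remote parent (if a valid traceparent header exists);
// pass it to Telemetry::startSpan(name, parentContext) when creating rpc.request.
opentelemetry::context::Context
extractFromHeaders(std::map<std::string, std::string>& headers)
{
    opentelemetry::trace::propagation::HttpTraceContext propagator;
    HttpHeaderCarrier carrier{headers};
    auto current = opentelemetry::context::RuntimeContext::GetCurrent();
    return propagator.Extract(carrier, current);
}
```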
**Key new files**:
- `include/xrpl/telemetry/TraceContextPropagator.h`
- `src/libxrpl/telemetry/TraceContextPropagator.cpp`
**Key modified files**:
- `src/xrpld/rpc/detail/ServerHandler.cpp`
- `src/libxrpl/telemetry/Telemetry.cpp` (register W3C propagator)
**Reference**:
- [04-code-samples.md §4.4.2](./04-code-samples.md) — TraceContextPropagator with extractFromHeaders/injectToHeaders
- [02-design-decisions.md §2.5](./02-design-decisions.md) — W3C Trace Context propagation design
---
## Task 2.2: Add XRPL_TRACE_PEER Macro
**Objective**: Add the missing peer-tracing macro for future Phase 3 use and ensure macro completeness.
**What to do**:
- Edit `src/xrpld/telemetry/TracingInstrumentation.h`:
- Add `XRPL_TRACE_PEER(_tel_obj_, _span_name_)` macro that checks `shouldTracePeer()`
- Add `XRPL_TRACE_LEDGER(_tel_obj_, _span_name_)` macro (for future ledger tracing)
- Ensure disabled variants expand to `((void)0)`
**Key modified file**:
- `src/xrpld/telemetry/TracingInstrumentation.h`
---
## Task 2.3: Add shouldTraceLedger() to Telemetry Interface
**Objective**: The `Setup` struct has a `traceLedger` field but there's no corresponding virtual method. Add it for interface completeness.
**What to do**:
- Edit `include/xrpl/telemetry/Telemetry.h`:
- Add `virtual bool shouldTraceLedger() const = 0;`
- Update all implementations:
- `src/libxrpl/telemetry/Telemetry.cpp` (TelemetryImpl, NullTelemetryOtel)
- `src/libxrpl/telemetry/NullTelemetry.cpp` (NullTelemetry)
**Key modified files**:
- `include/xrpl/telemetry/Telemetry.h`
- `src/libxrpl/telemetry/Telemetry.cpp`
- `src/libxrpl/telemetry/NullTelemetry.cpp`
---
## Task 2.4: Unit Tests for Core Telemetry Infrastructure
**Objective**: Add unit tests for the core telemetry abstractions to validate correctness and catch regressions.
**What to do**:
- Create `src/test/telemetry/Telemetry_test.cpp`:
- Test NullTelemetry: verify all methods return expected no-op values
- Test Setup defaults: verify all Setup fields have correct defaults
- Test setup_Telemetry config parser: verify parsing of [telemetry] section
- Test enabled/disabled factory paths
- Test shouldTrace\* methods respect config flags
- Create `src/test/telemetry/SpanGuard_test.cpp`:
- Test SpanGuard RAII lifecycle (span ends on destruction)
- Test move constructor works correctly
- Test setAttribute, setOk, setStatus, addEvent, recordException
- Test context() returns valid context
- Add test files to CMake build
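An illustrative shape for one of these tests in rippled's `beast::unit_test` style. The include paths, suite registration arguments, and the assumption that `Setup` is default-constructible are all approximations to be adapted to the actual harness:

```cpp
// Sketch of a disabled-telemetry test; see existing rippled test suites for conventions.
#include <xrpl/beast/unit_test.h>
#include <xrpl/telemetry/Telemetry.h>

namespace xrpl::telemetry::test {

class Telemetry_test : public beast::unit_test::suite
{
public:
    void
    run() override
    {
        testDisabledDefaults();
    }

private:
    void
    testDisabledDefaults()
    {
        testcase("disabled telemetry is a no-op");

        // A Setup with enabled=false must produce the null implementation.
        Telemetry::Setup setup;
        setup.enabled = false;
        auto telemetry =
            make_Telemetry(setup, beast::Journal{beast::Journal::getNullSink()});

        BEAST_EXPECT(!telemetry->isEnabled());
        BEAST_EXPECT(!telemetry->shouldTraceRpc());
        BEAST_EXPECT(!telemetry->shouldTraceConsensus());
    }
};

BEAST_DEFINE_TESTSUITE(Telemetry, telemetry, xrpl);

}  // namespace xrpl::telemetry::test
```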
**Key new files**:
- `src/test/telemetry/Telemetry_test.cpp`
- `src/test/telemetry/SpanGuard_test.cpp`
**Reference**:
- [06-implementation-phases.md §6.11.1](./06-implementation-phases.md) — Phase 1 exit criteria (unit tests passing)
---
## Task 2.5: Enhance RPC Span Attributes
**Objective**: Add additional attributes to RPC spans per the semantic conventions defined in the plan.
**What to do**:
- Edit `src/xrpld/rpc/detail/ServerHandler.cpp`:
- Add `http.method` attribute for HTTP requests
- Add `http.status_code` attribute for responses
- Add `net.peer.ip` attribute for client IP (if available)
- Edit `src/xrpld/rpc/detail/RPCHandler.cpp`:
- Add `xrpl.rpc.duration_ms` attribute on completion
- Add error message attribute on failure: `xrpl.rpc.error_message`
**Key modified files**:
- `src/xrpld/rpc/detail/ServerHandler.cpp`
- `src/xrpld/rpc/detail/RPCHandler.cpp`
**Reference**:
- [02-design-decisions.md §2.4.2](./02-design-decisions.md) — RPC attribute schema
---
## Task 2.6: Build Verification and Performance Baseline
**Objective**: Verify the build succeeds with and without telemetry, and establish a performance baseline.
**What to do**:
1. Build with `telemetry=ON` and verify no compilation errors
2. Build with `telemetry=OFF` and verify no regressions
3. Run existing unit tests to verify no breakage
4. Document any build issues in lessons.md
**Verification Checklist**:
- [ ] `conan install . --build=missing -o telemetry=True` succeeds
- [ ] `cmake --preset default -Dtelemetry=ON` configures correctly
- [ ] Build succeeds with telemetry ON
- [ ] Build succeeds with telemetry OFF
- [ ] Existing tests pass with telemetry ON
- [ ] Existing tests pass with telemetry OFF
---
## Summary
| Task | Description | New Files | Modified Files | Depends On |
| ---- | ------------------------------------------- | --------- | -------------- | ---------- |
| 2.1 | W3C Trace Context header extraction | 2 | 2 | POC |
| 2.2 | Add XRPL_TRACE_PEER/LEDGER macros | 0 | 1 | POC |
| 2.3 | Add shouldTraceLedger() interface method | 0 | 3 | POC |
| 2.4 | Unit tests for core telemetry | 2 | 1 | POC |
| 2.5 | Enhanced RPC span attributes | 0 | 2 | POC |
| 2.6 | Build verification and performance baseline | 0 | 0 | 2.1-2.5 |
**Parallel work**: Tasks 2.1, 2.2, 2.3 can run in parallel. Task 2.4 depends on 2.3. Task 2.5 can run in parallel with 2.4. Task 2.6 depends on all others.


@@ -0,0 +1,238 @@
# Phase 3: Transaction Tracing Task List
> **Goal**: Trace the full transaction lifecycle from RPC submission through peer relay, including cross-node context propagation via Protocol Buffer extensions. This is the WALK phase that demonstrates true distributed tracing.
>
> **Scope**: Protocol Buffer `TraceContext` message, context serialization, PeerImp transaction instrumentation, NetworkOPs processing instrumentation, HashRouter visibility, and multi-node relay context propagation.
>
> **Branch**: `pratik/otel-phase3-tx-tracing` (from `pratik/otel-phase2-rpc-tracing`)
### Related Plan Documents
| Document | Relevance |
| ------------------------------------------------------------ | ------------------------------------------------------------------------------------------------ |
| [04-code-samples.md](./04-code-samples.md) | TraceContext protobuf (§4.4.1), PeerImp instrumentation (§4.5.1), context serialization (§4.4.2) |
| [01-architecture-analysis.md](./01-architecture-analysis.md) | Transaction flow (§1.3), key trace points (§1.6) |
| [06-implementation-phases.md](./06-implementation-phases.md) | Phase 3 tasks (§6.4), definition of done (§6.11.3) |
| [02-design-decisions.md](./02-design-decisions.md) | Context propagation design (§2.5), attribute schema (§2.4.3) |
---
## Task 3.1: Define TraceContext Protocol Buffer Message
**Objective**: Add trace context fields to the P2P protocol messages so trace IDs can propagate across nodes.
**What to do**:
- Edit `include/xrpl/proto/xrpl.proto` (or `src/ripple/proto/ripple.proto`, wherever the proto is):
- Add `TraceContext` message definition:
```protobuf
message TraceContext {
    bytes trace_id = 1;       // 16-byte trace identifier
    bytes span_id = 2;        // 8-byte span identifier
    uint32 trace_flags = 3;   // bit 0 = sampled
    string trace_state = 4;   // W3C tracestate value
}
```
- Add `optional TraceContext trace_context = 1001;` to:
- `TMTransaction`
- `TMProposeSet` (for Phase 4 use)
- `TMValidation` (for Phase 4 use)
- Use high field numbers (1001+) to avoid conflicts with existing fields
- Regenerate protobuf C++ code
**Key modified files**:
- `include/xrpl/proto/xrpl.proto` (or equivalent)
**Reference**:
- [04-code-samples.md §4.4.1](./04-code-samples.md) — TraceContext message definition
- [02-design-decisions.md §2.5.2](./02-design-decisions.md) — Protocol buffer context propagation design
---
## Task 3.2: Implement Protobuf Context Serialization
**Objective**: Create utilities to serialize/deserialize OTel trace context to/from protobuf `TraceContext` messages.
**What to do**:
- Create `include/xrpl/telemetry/TraceContextPropagator.h` (extend from Phase 2 if exists, or add protobuf methods):
- Add protobuf-specific methods:
- `static Context extractFromProtobuf(protocol::TraceContext const& proto)` — reconstruct OTel context from protobuf fields
- `static void injectToProtobuf(Context const& ctx, protocol::TraceContext& proto)` — serialize current span context into protobuf fields
- Both methods guard behind `#ifdef XRPL_ENABLE_TELEMETRY`
- Create/extend `src/libxrpl/telemetry/TraceContextPropagator.cpp`:
- Implement extraction: read trace_id (16 bytes), span_id (8 bytes), trace_flags from protobuf, construct `SpanContext`, wrap in `Context`
- Implement injection: get current span from context, serialize its TraceId, SpanId, and TraceFlags into protobuf fields
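A sketch of the two conversions, assuming the `protocol::TraceContext` fields from Task 3.1 and standard generated protobuf accessors. Wrapping the remote `SpanContext` in a non-recording `DefaultSpan` is a common OTel C++ pattern for carrying a remote parent; see §4.4.2 for the real code:

```cpp
// Illustrative protobuf <-> OTel context conversion (error handling kept minimal).
#include <opentelemetry/context/context.h>
#include <opentelemetry/nostd/shared_ptr.h>
#include <opentelemetry/nostd/span.h>
#include <opentelemetry/trace/context.h>
#include <opentelemetry/trace/default_span.h>
#include <opentelemetry/trace/span_context.h>

#include <cstdint>

opentelemetry::context::Context
extractFromProtobuf(protocol::TraceContext const& proto)
{
    using namespace opentelemetry::trace;

    if (proto.trace_id().size() != TraceId::kSize ||
        proto.span_id().size() != SpanId::kSize)
        return {};  // invalid or missing: caller starts a new root span

    TraceId const traceId{
        opentelemetry::nostd::span<std::uint8_t const, TraceId::kSize>{
            reinterpret_cast<std::uint8_t const*>(proto.trace_id().data()),
            TraceId::kSize}};
    SpanId const spanId{
        opentelemetry::nostd::span<std::uint8_t const, SpanId::kSize>{
            reinterpret_cast<std::uint8_t const*>(proto.span_id().data()),
            SpanId::kSize}};

    SpanContext const remote{
        traceId,
        spanId,
        TraceFlags{static_cast<std::uint8_t>(proto.trace_flags())},
        /*is_remote=*/true};

    // Wrap the remote context in a non-recording span so it can act as a parent.
    opentelemetry::context::Context empty;
    return SetSpan(
        empty, opentelemetry::nostd::shared_ptr<Span>{new DefaultSpan{remote}});
}

void
injectToProtobuf(
    opentelemetry::context::Context const& ctx,
    protocol::TraceContext& proto)
{
    using namespace opentelemetry::trace;

    auto const spanContext = GetSpan(ctx)->GetContext();

    std::uint8_t traceBuf[TraceId::kSize];
    std::uint8_t spanBuf[SpanId::kSize];
    spanContext.trace_id().CopyBytesTo(traceBuf);
    spanContext.span_id().CopyBytesTo(spanBuf);

    proto.set_trace_id(traceBuf, sizeof(traceBuf));
    proto.set_span_id(spanBuf, sizeof(spanBuf));
    proto.set_trace_flags(spanContext.trace_flags().flags());
}
```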
**Key new/modified files**:
- `include/xrpl/telemetry/TraceContextPropagator.h`
- `src/libxrpl/telemetry/TraceContextPropagator.cpp`
**Reference**:
- [04-code-samples.md §4.4.2](./04-code-samples.md) — Full extract/inject implementation
---
## Task 3.3: Instrument PeerImp Transaction Handling
**Objective**: Add trace spans to the peer-level transaction receive and relay path.
**What to do**:
- Edit `src/xrpld/overlay/detail/PeerImp.cpp`:
- In `onMessage(TMTransaction)` / `handleTransaction()`:
- Extract parent trace context from incoming `TMTransaction::trace_context` field (if present)
- Create `tx.receive` span as child of extracted context (or new root if none)
- Set attributes: `xrpl.tx.hash`, `xrpl.peer.id`, `xrpl.tx.status`
- On HashRouter suppression (duplicate): set `xrpl.tx.suppressed=true`, add `tx.duplicate` event
- Wrap validation call with child span `tx.validate`
- Wrap relay with `tx.relay` span
- When relaying to peers:
- Inject current trace context into outgoing `TMTransaction::trace_context`
- Set `xrpl.tx.relay_count` attribute
- Include `TracingInstrumentation.h` and use `XRPL_TRACE_TX` macro
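A sketch of the receive-side pattern: use the remote trace context (if any) as the parent of `tx.receive` so cross-node spans join one trace. The function, parameters, and hash/peer strings are placeholders; the real change lives inside PeerImp's `TMTransaction` handler:

```cpp
// Illustrative tx.receive pattern using the Task 3.2 extraction helper.
#include <xrpl/telemetry/SpanGuard.h>
#include <xrpl/telemetry/Telemetry.h>

#include <string>

void
traceTransactionReceive(
    xrpl::telemetry::Telemetry& telemetry,
    protocol::TMTransaction const& message,
    std::string const& txHash,
    std::string const& peerId)
{
    // Fall back to a fresh root span when the peer sent no trace context.
    opentelemetry::context::Context parent;
    if (message.has_trace_context())
        parent = extractFromProtobuf(message.trace_context());  // Task 3.2 helper

    auto span = telemetry.startSpan("tx.receive", parent);
    xrpl::telemetry::SpanGuard guard{span};
    guard.setAttribute("xrpl.tx.hash", txHash);
    guard.setAttribute("xrpl.peer.id", peerId);
}
```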
**Key modified files**:
- `src/xrpld/overlay/detail/PeerImp.cpp`
**Reference**:
- [04-code-samples.md §4.5.1](./04-code-samples.md) — Full PeerImp instrumentation example
- [01-architecture-analysis.md §1.3](./01-architecture-analysis.md) — Transaction flow diagram
- [01-architecture-analysis.md §1.6](./01-architecture-analysis.md) — tx.receive trace point
---
## Task 3.4: Instrument NetworkOPs Transaction Processing
**Objective**: Trace the transaction processing pipeline in NetworkOPs, covering both sync and async paths.
**What to do**:
- Edit `src/xrpld/app/misc/NetworkOPs.cpp`:
- In `processTransaction()`:
- Create `tx.process` span
- Set attributes: `xrpl.tx.hash`, `xrpl.tx.type`, `xrpl.tx.local` (whether from RPC or peer)
- Record whether sync or async path is taken
- In `doTransactionAsync()`:
- Capture parent context before queuing
- Create `tx.queue` span with queue depth attribute
- Add event when transaction is dequeued for processing
- In `doTransactionSync()`:
- Create `tx.process_sync` span
- Record result (applied, queued, rejected)
**Key modified files**:
- `src/xrpld/app/misc/NetworkOPs.cpp`
**Reference**:
- [01-architecture-analysis.md §1.6](./01-architecture-analysis.md) — tx.validate and tx.process trace points
- [02-design-decisions.md §2.4.3](./02-design-decisions.md) — Transaction attribute schema
---
## Task 3.5: Instrument HashRouter for Dedup Visibility
**Objective**: Make transaction deduplication visible in traces by recording HashRouter decisions as span attributes/events.
**What to do**:
- Edit `src/xrpld/overlay/detail/PeerImp.cpp` (in handleTransaction):
- After calling `HashRouter::shouldProcess()` or `addSuppressionPeer()`:
- Record `xrpl.tx.suppressed` attribute (true/false)
- Record `xrpl.tx.flags` showing current HashRouter state (SAVED, TRUSTED, etc.)
- Add `tx.first_seen` or `tx.duplicate` event
- This is NOT a modification to HashRouter itself — just recording its decisions as span attributes in the existing PeerImp instrumentation from Task 3.3.
**Key modified files**:
- `src/xrpld/overlay/detail/PeerImp.cpp` (same changes as 3.3, logically grouped)
---
## Task 3.6: Context Propagation in Transaction Relay
**Objective**: Ensure trace context flows correctly when transactions are relayed between peers, creating linked spans across nodes.
**What to do**:
- Verify the relay path injects trace context:
- When `PeerImp` relays a transaction, the `TMTransaction` message should carry `trace_context`
- When a remote peer receives it, the context is extracted and used as parent
- Test context propagation:
- Manually verify with 2+ node setup that trace IDs match across nodes
- Confirm parent-child span relationships are correct in Jaeger
- Handle edge cases:
- Missing trace context (older peers): create new root span
- Corrupted trace context: log warning, create new root span
- Sampled-out traces: respect trace flags
**Key modified files**:
- `src/xrpld/overlay/detail/PeerImp.cpp`
- `src/xrpld/overlay/detail/OverlayImpl.cpp` (if relay method needs context param)
**Reference**:
- [02-design-decisions.md §2.5](./02-design-decisions.md) — Context propagation design
- [04-code-samples.md §4.5.1](./04-code-samples.md) — Relay context injection pattern
---
## Task 3.7: Build Verification and Testing
**Objective**: Verify all Phase 3 changes compile and work correctly.
**What to do**:
1. Build with `telemetry=ON` — verify no compilation errors
2. Build with `telemetry=OFF` — verify no regressions
3. Run existing unit tests
4. Verify protobuf regeneration produces correct C++ code
5. Document any issues encountered
**Verification Checklist**:
- [ ] Protobuf changes generate valid C++
- [ ] Build succeeds with telemetry ON
- [ ] Build succeeds with telemetry OFF
- [ ] Existing tests pass
- [ ] No undefined symbols from new telemetry calls
---
## Summary
| Task | Description | New Files | Modified Files | Depends On |
| ---- | ----------------------------------- | --------- | -------------- | ---------- |
| 3.1 | TraceContext protobuf message | 0 | 1 | Phase 2 |
| 3.2 | Protobuf context serialization | 1-2 | 0 | 3.1 |
| 3.3 | PeerImp transaction instrumentation | 0 | 1 | 3.2 |
| 3.4 | NetworkOPs transaction processing | 0 | 1 | Phase 2 |
| 3.5 | HashRouter dedup visibility | 0 | 1 | 3.3 |
| 3.6 | Relay context propagation | 0 | 1-2 | 3.3, 3.5 |
| 3.7 | Build verification and testing | 0 | 0 | 3.1-3.6 |
**Parallel work**: Tasks 3.1 and 3.4 can start in parallel. Task 3.2 depends on 3.1. Task 3.3 depends on 3.2, and Task 3.5 extends the Task 3.3 changes. Task 3.6 depends on 3.3 and 3.5.
**Exit Criteria** (from [06-implementation-phases.md §6.11.3](./06-implementation-phases.md)):
- [ ] Transaction traces span across nodes
- [ ] Trace context in Protocol Buffer messages
- [ ] HashRouter deduplication visible in traces
- [ ] <5% overhead on transaction throughput


@@ -0,0 +1,833 @@
# Phase 4: Consensus Tracing Task List
> **Goal**: Full observability into consensus rounds — track round lifecycle, phase transitions, proposal handling, and validation. This is the RUN phase that completes the distributed tracing story.
>
> **Scope**: RCLConsensus instrumentation for round starts, phase transitions (open/establish/accept), proposal send/receive, validation handling, and correlation with transaction traces from Phase 3.
>
> **Branch**: `pratik/otel-phase4-consensus-tracing` (from `pratik/otel-phase3-tx-tracing`)
### Related Plan Documents
| Document | Relevance |
| ------------------------------------------------------------ | ----------------------------------------------------------- |
| [04-code-samples.md](./04-code-samples.md) | Consensus instrumentation (§4.5.2), consensus span patterns |
| [01-architecture-analysis.md](./01-architecture-analysis.md) | Consensus round flow (§1.4), key trace points (§1.6) |
| [06-implementation-phases.md](./06-implementation-phases.md) | Phase 4 tasks (§6.5), definition of done (§6.11.4) |
| [02-design-decisions.md](./02-design-decisions.md) | Consensus attribute schema (§2.4.4) |
---
## Task 4.1: Instrument Consensus Round Start
**Objective**: Create a root span for each consensus round that captures the round's key parameters.
**What to do**:
- Edit `src/xrpld/app/consensus/RCLConsensus.cpp`:
- In `RCLConsensus::startRound()` (or the Adaptor's startRound):
- Create `consensus.round` span using `XRPL_TRACE_CONSENSUS` macro
- Set attributes:
- `xrpl.consensus.ledger.prev` — previous ledger hash
- `xrpl.consensus.ledger.seq` — target ledger sequence
- `xrpl.consensus.proposers` — number of trusted proposers
- `xrpl.consensus.mode` — "proposing" or "observing"
- Store the span context for use by child spans in phase transitions
- Add a member to hold current round trace context:
- `opentelemetry::context::Context currentRoundContext_` (guarded by `#ifdef`)
- Updated at round start, used by phase transition spans
**Key modified files**:
- `src/xrpld/app/consensus/RCLConsensus.cpp`
- `src/xrpld/app/consensus/RCLConsensus.h` (add context member)
**Reference**:
- [04-code-samples.md §4.5.2](./04-code-samples.md) — startRound instrumentation example
- [01-architecture-analysis.md §1.4](./01-architecture-analysis.md) — Consensus round flow
---
## Task 4.2: Instrument Phase Transitions
**Objective**: Create child spans for each consensus phase (open, establish, accept) to show timing breakdown.
**What to do**:
- Edit `src/xrpld/app/consensus/RCLConsensus.cpp`:
- Identify where phase transitions occur (the `Consensus<Adaptor>` template drives this)
- For each phase entry:
- Create span as child of `currentRoundContext_`: `consensus.phase.open`, `consensus.phase.establish`, `consensus.phase.accept`
- Set `xrpl.consensus.phase` attribute
- Add `phase.enter` event at start, `phase.exit` event at end
- Record phase duration in milliseconds
- In the `onClose` adaptor method:
- Create `consensus.ledger_close` span
- Set attributes: close_time, mode, transaction count in initial position
- Note: The Consensus template class in `src/xrpld/consensus/Consensus.h` drives phase transitions — Phase 4a instruments directly in the template
**Key modified files**:
- `src/xrpld/app/consensus/RCLConsensus.cpp`
- Possibly `include/xrpl/consensus/Consensus.h` (for template-level phase tracking)
**Reference**:
- [04-code-samples.md §4.5.2](./04-code-samples.md) — phaseTransition instrumentation
---
## Task 4.3: Instrument Proposal Handling
**Objective**: Trace proposal send and receive to show validator coordination.
**What to do**:
- Edit `src/xrpld/app/consensus/RCLConsensus.cpp`:
- In `Adaptor::propose()`:
- Create `consensus.proposal.send` span
- Set attributes: `xrpl.consensus.round` (proposal sequence), proposal hash
- Inject trace context into outgoing `TMProposeSet::trace_context` (from Phase 3 protobuf)
- In `Adaptor::peerProposal()` (or wherever peer proposals are received):
- Extract trace context from incoming `TMProposeSet::trace_context`
- Create `consensus.proposal.receive` span as child of extracted context
- Set attributes: `xrpl.consensus.proposer` (node ID), `xrpl.consensus.round`
- In `Adaptor::share(RCLCxPeerPos)`:
- Create `consensus.proposal.relay` span for relaying peer proposals
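A small sketch of the send-side injection, assuming `SpanGuard::context()` yields the span's context and reusing the Task 3.2 `injectToProtobuf` helper; the function name and parameters are placeholders for the real `Adaptor::propose()` code:

```cpp
// Illustrative sketch: attach the current proposal span's context to an outgoing
// TMProposeSet so the receiving node can parent consensus.proposal.receive on it.
#include <xrpl/telemetry/SpanGuard.h>

void
attachProposalTraceContext(
    xrpl::telemetry::SpanGuard& proposalSpan,
    protocol::TMProposeSet& message)
{
    // trace_context is the optional field added in Task 3.1.
    injectToProtobuf(proposalSpan.context(), *message.mutable_trace_context());
}
```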
**Key modified files**:
- `src/xrpld/app/consensus/RCLConsensus.cpp`
**Reference**:
- [04-code-samples.md §4.5.2](./04-code-samples.md) — peerProposal instrumentation
- [02-design-decisions.md §2.4.4](./02-design-decisions.md) — Consensus attribute schema
---
## Task 4.4: Instrument Validation Handling
**Objective**: Trace validation send and receive to show ledger validation flow.
**What to do**:
- Edit `src/xrpld/app/consensus/RCLConsensus.cpp` (or the validation handler):
- When sending our validation:
- Create `consensus.validation.send` span
- Set attributes: validated ledger hash, sequence, signing time
- When receiving a peer validation:
- Extract trace context from `TMValidation::trace_context` (if present)
- Create `consensus.validation.receive` span
- Set attributes: `xrpl.consensus.validator` (node ID), ledger hash
**Key modified files**:
- `src/xrpld/app/consensus/RCLConsensus.cpp`
- `src/xrpld/app/misc/NetworkOPs.cpp` (if validation handling is here)
---
## Task 4.5: Add Consensus-Specific Attributes
**Objective**: Enrich consensus spans with detailed attributes for debugging and analysis.
**What to do**:
- Review all consensus spans and ensure they include:
- `xrpl.consensus.ledger.seq` — target ledger sequence number
- `xrpl.consensus.round` — consensus round number
- `xrpl.consensus.mode` — proposing/observing/wrongLedger
- `xrpl.consensus.phase` — current phase name
- `xrpl.consensus.phase_duration_ms` — time spent in phase
- `xrpl.consensus.proposers` — number of trusted proposers
- `xrpl.consensus.tx_count` — transactions in proposed set
- `xrpl.consensus.disputes` — number of disputed transactions
- `xrpl.consensus.converge_percent` — convergence percentage
**Key modified files**:
- `src/xrpld/app/consensus/RCLConsensus.cpp`
---
## Task 4.6: Correlate Transaction and Consensus Traces
**Objective**: Link transaction traces from Phase 3 with consensus traces so you can follow a transaction from submission through consensus into the ledger.
**What to do**:
- In `onClose()` or `onAccept()`:
- When building the consensus position, link the round span to individual transaction spans using span links (if OTel SDK supports it) or events
- At minimum, record the transaction hashes included in the consensus set as span events: `tx.included` with `xrpl.tx.hash` attribute
- In `processTransactionSet()` (NetworkOPs):
- If the consensus round span context is available, create child spans for each transaction applied to the ledger
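A minimal sketch of the event-based correlation, assuming the `addEvent(name, attributes)` overload added later in Task 4a.0 and a round-span handle supplied by the caller:

```cpp
// Illustrative sketch: record each transaction in the agreed set as a tx.included
// event on the consensus round span (event and attribute names per this task).
#include <xrpl/telemetry/SpanGuard.h>

#include <string>
#include <vector>

void
recordIncludedTransactions(
    xrpl::telemetry::SpanGuard& roundSpan,
    std::vector<std::string> const& txHashes)
{
    for (auto const& hash : txHashes)
        roundSpan.addEvent("tx.included", {{"xrpl.tx.hash", hash}});
}
```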
**Key modified files**:
- `src/xrpld/app/consensus/RCLConsensus.cpp`
- `src/xrpld/app/misc/NetworkOPs.cpp`
---
## Task 4.7: Build Verification and Testing
**Objective**: Verify all Phase 4 changes compile and don't affect consensus timing.
**What to do**:
1. Build with `telemetry=ON` — verify no compilation errors
2. Build with `telemetry=OFF` — verify no regressions (critical for consensus code)
3. Run existing consensus-related unit tests
4. Verify that all macros expand to no-ops when disabled
5. Check that no consensus-critical code paths are affected by instrumentation overhead
**Verification Checklist**:
- [ ] Build succeeds with telemetry ON
- [ ] Build succeeds with telemetry OFF
- [ ] Existing consensus tests pass
- [ ] No new includes in consensus headers when telemetry is OFF
- [ ] Phase timing instrumentation doesn't use blocking operations
---
## Summary
| Task | Description | New Files | Modified Files | Depends On |
| ---- | ------------------------------------- | --------- | -------------- | ------------- |
| 4.1 | Consensus round start instrumentation | 0 | 2 | Phase 3 |
| 4.2 | Phase transition instrumentation | 0 | 1-2 | 4.1 |
| 4.3 | Proposal handling instrumentation | 0 | 1 | 4.1 |
| 4.4 | Validation handling instrumentation | 0 | 1-2 | 4.1 |
| 4.5 | Consensus-specific attributes | 0 | 1 | 4.2, 4.3, 4.4 |
| 4.6 | Transaction-consensus correlation | 0 | 2 | 4.2, Phase 3 |
| 4.7 | Build verification and testing | 0 | 0 | 4.1-4.6 |
**Parallel work**: Tasks 4.2, 4.3, and 4.4 can run in parallel after 4.1 is complete. Task 4.5 depends on all three. Task 4.6 depends on 4.2 and Phase 3.
### Implemented Spans
| Span Name | Method | Key Attributes |
| --------------------------- | ---------------------------------- | ----------------------------------------------------------------------------------------------------------------------------- |
| `consensus.proposal.send` | `Adaptor::propose` | `xrpl.consensus.round` |
| `consensus.ledger_close` | `Adaptor::onClose` | `xrpl.consensus.ledger.seq`, `xrpl.consensus.mode` |
| `consensus.accept` | `Adaptor::onAccept` | `xrpl.consensus.proposers`, `xrpl.consensus.round_time_ms` |
| `consensus.accept.apply` | `Adaptor::doAccept` | `xrpl.consensus.close_time`, `close_time_correct`, `close_resolution_ms`, `state`, `proposing`, `round_time_ms`, `ledger.seq` |
| `consensus.validation.send` | `Adaptor::onAccept` (via validate) | `xrpl.consensus.proposing` |
#### Close Time Attributes (consensus.accept.apply)
The `consensus.accept.apply` span captures ledger close time agreement details
driven by `avCT_CONSENSUS_PCT` (75% validator agreement threshold):
- **`xrpl.consensus.close_time`** — Agreed-upon ledger close time (epoch seconds). When validators disagree (`consensusCloseTime == epoch`), this is synthetically set to `prevCloseTime + 1s`.
- **`xrpl.consensus.close_time_correct`** — `true` if validators reached agreement, `false` if they "agreed to disagree" (close time forced to prev+1s).
- **`xrpl.consensus.close_resolution_ms`** — Rounding granularity for close time (starts at 30s, decreases as ledger interval stabilizes).
- **`xrpl.consensus.state`** — `"finished"` (normal) or `"moved_on"` (consensus failed, adopted best available).
- **`xrpl.consensus.proposing`** — Whether this node was proposing.
- **`xrpl.consensus.round_time_ms`** — Total consensus round duration.
**Exit Criteria** (from [06-implementation-phases.md §6.11.4](./06-implementation-phases.md)):
- [x] Complete consensus round traces
- [x] Phase transitions visible
- [x] Proposals and validations traced
- [x] Close time agreement tracked (per `avCT_CONSENSUS_PCT`)
- [x] No impact on consensus timing
---
# Phase 4a: Establish-Phase Gap Fill & Cross-Node Correlation
> **Goal**: Fill tracing gaps in the consensus establish phase (disputes, convergence,
> threshold escalation, mode changes) and establish cross-node correlation using a
> deterministic shared trace ID derived from `previousLedger.id()`.
>
> **Approach**: Direct instrumentation in `Consensus.h` — the generic consensus
> template has full access to internal state (`convergePercent_`, `result_->disputes`,
> `mode_`, threshold logic). Telemetry access comes via a single new adaptor
> method `getTelemetry()`. Long-lived spans (round, establish) are stored as
> class members using `SpanGuard` directly — NOT the `XRPL_TRACE_*` convenience
> macros (which create local variables named `_xrpl_guard_`). Short-lived
> scoped spans (update_positions, check) can use the macros. All code compiles
> to no-ops when `XRPL_ENABLE_TELEMETRY` is not defined.
>
> **Branch**: `pratik/otel-phase4-consensus-tracing`
## Design: Switchable Correlation Strategy
Two strategies for cross-node trace correlation, switchable via config:
### Strategy A — Deterministic Trace ID (Default)
Derive `trace_id = SHA256(previousLedger.id())[0:16]` so all nodes in the same
consensus round share the same trace_id without P2P context propagation.
- **Pros**: All nodes appear in the same trace in Tempo/Jaeger automatically.
No collector-side post-processing needed.
- **Cons**: Overrides OTel's random trace_id generation; requires custom
`IdGenerator` or manual span context construction.
### Strategy B — Attribute-Based Correlation
Use normal random trace_id but attach `xrpl.consensus.ledger_id` as an attribute
on every consensus span. Correlation happens at query time via Tempo/Grafana
`by attribute` queries.
- **Pros**: Standard OTel trace_id semantics; no SDK customization.
- **Cons**: Cross-node correlation requires query-time joins, not automatic.
### Config
```ini
[telemetry]
# "deterministic" (default) or "attribute"
consensus_trace_strategy=deterministic
```
### Implementation
In `RCLConsensus::Adaptor::startRound()`:
- If `deterministic`:
1. Compute `trace_id_bytes = SHA256(prevLedgerID)[0:16]`
2. Construct `opentelemetry::trace::TraceId(trace_id_bytes)`
3. Create a synthetic `SpanContext` with this trace_id and a random span_id:
```cpp
auto traceId = opentelemetry::trace::TraceId(trace_id_bytes);
auto spanId = opentelemetry::trace::SpanId(random_8_bytes);
auto syntheticCtx = opentelemetry::trace::SpanContext(
traceId, spanId, opentelemetry::trace::TraceFlags(1), false);
```
4. Wrap in `opentelemetry::context::Context` via
`opentelemetry::trace::SetSpan(context, syntheticSpan)`
5. Call `startSpan("consensus.round", parentContext)` so the new span
inherits the deterministic trace_id.
- If `attribute`: start a normal `consensus.round` span, set
`xrpl.consensus.ledger_id = previousLedger.id()` as attribute.
Both strategies always set `xrpl.consensus.round_id` (round number) and
`xrpl.consensus.ledger_id` (previous ledger hash) as attributes.
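A sketch of Strategy A's synthetic parent construction, assuming the caller has already computed `SHA256(previousLedger.id())` into a 32-byte array (the digest call itself is left out because rippled's digest utilities differ); the OTel types and the DefaultSpan-as-parent wrapping are standard:

```cpp
// Illustrative deterministic trace_id derivation for Strategy A (error handling omitted).
#include <opentelemetry/context/context.h>
#include <opentelemetry/nostd/shared_ptr.h>
#include <opentelemetry/nostd/span.h>
#include <opentelemetry/trace/context.h>
#include <opentelemetry/trace/default_span.h>
#include <opentelemetry/trace/span_context.h>

#include <array>
#include <cstdint>
#include <cstring>
#include <random>

opentelemetry::context::Context
makeDeterministicRoundParent(std::array<std::uint8_t, 32> const& prevLedgerDigest)
{
    using namespace opentelemetry::trace;

    // Steps 1-2: the first 16 digest bytes become the trace_id, so every node in the
    // same round derives the same value independently.
    std::array<std::uint8_t, TraceId::kSize> traceBytes{};
    std::memcpy(traceBytes.data(), prevLedgerDigest.data(), traceBytes.size());
    TraceId const traceId{
        opentelemetry::nostd::span<std::uint8_t const, TraceId::kSize>{
            traceBytes.data(), TraceId::kSize}};

    // Step 3: random span_id so each node's synthetic parent stays unique.
    std::array<std::uint8_t, SpanId::kSize> spanBytes{};
    std::random_device rd;
    for (auto& b : spanBytes)
        b = static_cast<std::uint8_t>(rd());
    SpanId const spanId{
        opentelemetry::nostd::span<std::uint8_t const, SpanId::kSize>{
            spanBytes.data(), SpanId::kSize}};

    SpanContext const synthetic{
        traceId, spanId, TraceFlags{TraceFlags::kIsSampled}, /*is_remote=*/false};

    // Steps 4-5: wrap in a Context; startSpan("consensus.round", parent) then inherits
    // the deterministic trace_id.
    opentelemetry::context::Context empty;
    return SetSpan(
        empty, opentelemetry::nostd::shared_ptr<Span>{new DefaultSpan{synthetic}});
}
```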
---
## Design: Span Hierarchy
```
consensus.round (root — created in RCLConsensus::startRound, closed at accept)
│ link → previous round's SpanContext (follows-from)
├── consensus.establish (phaseEstablish → acceptance, in Consensus.h)
│ ├── consensus.update_positions (each updateOurPositions call)
│ │ └── consensus.dispute.resolve (per-tx dispute resolution event)
│ ├── consensus.check (each haveConsensus call)
│ └── consensus.mode_change (short-lived span in adaptor on mode transition)
├── consensus.accept (existing onAccept span — reparented under round)
└── consensus.validation.send (existing — reparented, follows-from link to round)
```
### Span Links (follows-from relationships)
| Link Source | Link Target | Rationale |
| ----------------------------------------- | -------------------------- | ------------------------------------------------------------------------------ |
| `consensus.round` (N+1) | `consensus.round` (N) | Causal chain: round N+1 exists because round N accepted |
| `consensus.validation.send` | `consensus.round` | Validation follows from the round that produced it; may outlive the round span |
| _(Phase 4b)_ Received proposal processing | Sender's `consensus.round` | Cross-node causal link via P2P context propagation |
---
## Task 4a.0: Prerequisites — Extend SpanGuard and Telemetry APIs
**Objective**: Add missing API surface needed by later tasks.
**What to do**:
1. **Add `SpanGuard::addEvent()` with attributes** (needed by Task 4a.5):
The current `addEvent(string_view name)` only accepts a name. Add an
overload that accepts key-value attributes:
```cpp
void addEvent(std::string_view name,
std::initializer_list<
std::pair<opentelemetry::nostd::string_view,
opentelemetry::common::AttributeValue>> attributes)
{
span_->AddEvent(std::string(name), attributes);
}
```
2. **Add a `Telemetry::startSpan()` overload that accepts span links** (needed by Tasks 4a.2, 4a.8):
The current `startSpan()` has no span link support. Add an overload that
accepts a vector of `SpanContext` links for follows-from relationships:
```cpp
virtual opentelemetry::nostd::shared_ptr<opentelemetry::trace::Span>
startSpan(
std::string_view name,
opentelemetry::context::Context const& parentContext,
std::vector<opentelemetry::trace::SpanContext> const& links,
opentelemetry::trace::SpanKind kind = opentelemetry::trace::SpanKind::kInternal) = 0;
```
3. **Add `XRPL_TRACE_ADD_EVENT` macro** (needed by Task 4a.5):
Add to `TracingInstrumentation.h` to expose `addEvent(name, attrs)` through
the macro interface (consistent with the `XRPL_TRACE_SET_ATTR` pattern),
wrapped in `do { } while (0)` for dangling-else safety:
```cpp
#ifdef XRPL_ENABLE_TELEMETRY
#define XRPL_TRACE_ADD_EVENT(name, ...)                    \
    do                                                     \
    {                                                      \
        if (_xrpl_guard_.has_value())                      \
        {                                                  \
            _xrpl_guard_->addEvent(name, __VA_ARGS__);     \
        }                                                  \
    } while (0)
#else
#define XRPL_TRACE_ADD_EVENT(name, ...) ((void)0)
#endif
```
**Key modified files**:
- `include/xrpl/telemetry/SpanGuard.h` — add `addEvent()` overload
- `include/xrpl/telemetry/Telemetry.h` — add `startSpan()` with links
- `src/xrpld/telemetry/Telemetry.cpp` — implement new overload
- `src/xrpld/telemetry/NullTelemetry.cpp` — no-op implementation
- `src/xrpld/telemetry/TracingInstrumentation.h` — add `XRPL_TRACE_ADD_EVENT` macro
---
## Task 4a.1: Adaptor `getTelemetry()` Method
**Objective**: Give `Consensus.h` access to the telemetry subsystem without
coupling the generic template to OTel headers.
**What to do**:
- Add `getTelemetry()` method to the Adaptor concept (returns
`xrpl::telemetry::Telemetry&`). The return type is already forward-declared
behind `#ifdef XRPL_ENABLE_TELEMETRY`.
- Implement in `RCLConsensus::Adaptor` — it simply delegates to
`app_.getTelemetry()` (a sketch follows this list).
- In `Consensus.h`, the `XRPL_TRACE_*` macros call
`adaptor_.getTelemetry()` — when telemetry is disabled, the macros expand to
`((void)0)` and the method is never called.
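A minimal sketch of the adaptor method, assuming the `xrpl::telemetry::Telemetry`
type is forward-declared as described above and that the application object already
exposes `getTelemetry()`; exact namespaces and header layout may differ:
```cpp
// RCLConsensus.h — inside RCLConsensus::Adaptor (sketch only)
#ifdef XRPL_ENABLE_TELEMETRY
    /** Give the generic Consensus template access to telemetry without
        pulling OTel headers into Consensus.h. */
    xrpl::telemetry::Telemetry&
    getTelemetry();
#endif

// RCLConsensus.cpp (sketch only)
#ifdef XRPL_ENABLE_TELEMETRY
xrpl::telemetry::Telemetry&
RCLConsensus::Adaptor::getTelemetry()
{
    return app_.getTelemetry();
}
#endif
```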
**Key modified files**:
- `src/xrpld/app/consensus/RCLConsensus.h` — declare `getTelemetry()`
- `src/xrpld/app/consensus/RCLConsensus.cpp` — implement `getTelemetry()`
---
## Task 4a.2: Switchable Round Span with Deterministic Trace ID
**Objective**: Create a `consensus.round` root span in `startRound()` that uses
the switchable correlation strategy. Store span context as a member for child
spans in `Consensus.h`.
**What to do**:
- In `RCLConsensus::Adaptor::startRound()` (or a new helper):
- Read `consensus_trace_strategy` from config.
- **Deterministic**: compute `trace_id = SHA256(prevLedgerID)[0:16]`.
Construct a `SpanContext` with this trace_id, then start
`consensus.round` span as child of that context.
- **Attribute**: start normal `consensus.round` span.
- Set attributes on both: `xrpl.consensus.round_id`,
`xrpl.consensus.ledger_id`, `xrpl.consensus.ledger.seq`,
`xrpl.consensus.mode`.
- Store the round span in `Consensus` as a member (see Task 4a.3).
- If a previous round's span context is available, add a **span link**
(follows-from) to establish the round chain.
- Add `createDeterministicTraceId(hash)` utility to
`include/xrpl/telemetry/Telemetry.h` (returns 16-byte trace ID from a
256-bit hash by truncation; a sketch follows this list).
- Add `consensus_trace_strategy` to `Telemetry::Setup` and
`TelemetryConfig.cpp` parser:
```cpp
/** Cross-node correlation strategy: "deterministic" or "attribute". */
std::string consensusTraceStrategy = "deterministic";
```
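A minimal sketch of the truncation utility; the input type
(`std::array<std::uint8_t, 32>`) is an assumption — in rippled this would likely
be adapted to take a `uint256`-style hash:
```cpp
// Telemetry.h (sketch only). An OTel TraceId is exactly 16 bytes, so the
// deterministic ID is simply the first half of the 256-bit hash.
#include <opentelemetry/nostd/span.h>
#include <opentelemetry/trace/trace_id.h>

#include <array>
#include <cstdint>

inline opentelemetry::trace::TraceId
createDeterministicTraceId(std::array<std::uint8_t, 32> const& hash256)
{
    return opentelemetry::trace::TraceId(
        opentelemetry::nostd::span<std::uint8_t const, 16>(hash256.data(), 16));
}
```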
**Key modified files**:
- `src/xrpld/app/consensus/RCLConsensus.cpp`
- `include/xrpl/telemetry/Telemetry.h` — `createDeterministicTraceId()`
- `src/xrpld/telemetry/TelemetryConfig.cpp` — parse new config option
---
## Task 4a.3: Span Members in `Consensus.h`
**Objective**: Add span storage to the `Consensus` class so that spans created
in `startRound()` (adaptor) are accessible from `phaseEstablish()`,
`updateOurPositions()`, and `haveConsensus()` (template methods).
**What to do**:
- Add to `Consensus` private members (guarded by `#ifdef XRPL_ENABLE_TELEMETRY`):
```cpp
#ifdef XRPL_ENABLE_TELEMETRY
std::optional<xrpl::telemetry::SpanGuard> roundSpan_;
std::optional<xrpl::telemetry::SpanGuard> establishSpan_;
opentelemetry::context::Context prevRoundContext_;
#endif
```
- `roundSpan_` is created in `startRound()` via the adaptor and stored.
Its `SpanGuard::Scope` member keeps the span active on the thread context
for the entire round lifetime.
- `establishSpan_` is created when entering phaseEstablish and cleared on accept.
It becomes a child of `roundSpan_` via OTel's thread-local context propagation.
- `prevRoundContext_` stores the previous round's context for follows-from links.
**Threading assumption**: `startRound()`, `phaseEstablish()`, `updateOurPositions()`,
and `haveConsensus()` all run on the same thread (the consensus job queue thread).
This is required for the `SpanGuard::Scope`-based parent-child hierarchy to work.
The `Consensus` class documentation confirms it is NOT thread-safe and calls are
serialized by the application.
- Add conditional include at top of `Consensus.h`:
```cpp
#ifdef XRPL_ENABLE_TELEMETRY
#include <xrpl/telemetry/SpanGuard.h>
#include <xrpld/telemetry/TracingInstrumentation.h>
#endif
```
**Key modified files**:
- `src/xrpld/consensus/Consensus.h`
---
## Task 4a.4: Instrument `phaseEstablish()`
**Objective**: Create `consensus.establish` span wrapping the establish phase,
with attributes for convergence progress.
**What to do**:
- At the start of `phaseEstablish()` (line 1298), if `establishSpan_` is not
yet created, create it as child of `roundSpan_` using the **direct API**
(NOT the `XRPL_TRACE_CONSENSUS` macro, which creates a local variable):
```cpp
#ifdef XRPL_ENABLE_TELEMETRY
if (!establishSpan_ && adaptor_.getTelemetry().shouldTraceConsensus())
{
establishSpan_.emplace(
adaptor_.getTelemetry().startSpan("consensus.establish"));
}
#endif
```
- Set attributes on each call:
- `xrpl.consensus.converge_percent` — `convergePercent_`
- `xrpl.consensus.establish_count` — `establishCounter_`
- `xrpl.consensus.proposers` — `currPeerPositions_.size()`
- On phase exit (transition to accept), close the establish span and record
final duration (a sketch follows this list).
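A minimal sketch of the per-call update and the phase-exit close, using the helper
names mentioned in the Implementation Notes (`updateEstablishTracing`,
`endEstablishTracing`) and assuming `SpanGuard` exposes the `setAttribute()` used
by the `XRPL_TRACE_SET_ATTR` macro:
```cpp
#ifdef XRPL_ENABLE_TELEMETRY
// Called on every phaseEstablish() pass once establishSpan_ exists (sketch).
void
updateEstablishTracing()
{
    if (!establishSpan_)
        return;
    establishSpan_->setAttribute(
        "xrpl.consensus.converge_percent",
        static_cast<std::int64_t>(convergePercent_));
    establishSpan_->setAttribute(
        "xrpl.consensus.establish_count",
        static_cast<std::int64_t>(establishCounter_));
    establishSpan_->setAttribute(
        "xrpl.consensus.proposers",
        static_cast<std::int64_t>(currPeerPositions_.size()));
}

// Called on the transition to accept; destroying the guard ends the span,
// which records the final duration of the establish phase.
void
endEstablishTracing()
{
    establishSpan_.reset();
}
#endif
```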
**Key modified files**:
- `src/xrpld/consensus/Consensus.h` — `phaseEstablish()` method
---
## Task 4a.5: Instrument `updateOurPositions()`
**Objective**: Trace each position update cycle including dispute resolution
details.
**What to do**:
- At the start of `updateOurPositions()` (line 1418), create a scoped child
span. This method is called and returns within a single `phaseEstablish()`
call, so the `XRPL_TRACE_CONSENSUS` macro works here (scoped local):
```cpp
XRPL_TRACE_CONSENSUS(adaptor_.getTelemetry(), "consensus.update_positions");
```
- Set attributes:
- `xrpl.consensus.disputes_count` — `result_->disputes.size()`
- `xrpl.consensus.converge_percent` — current convergence
- `xrpl.consensus.proposers_agreed` — count of peers with same position
- `xrpl.consensus.proposers_total` — total peer positions
- Inside the dispute resolution loop, for each dispute that changes our vote,
add an **event** with attributes using `XRPL_TRACE_ADD_EVENT` (from Task 4a.0):
```cpp
XRPL_TRACE_ADD_EVENT("dispute.resolve", {
{"xrpl.tx.id", std::string(tx_id)},
{"xrpl.dispute.our_vote", our_vote},
{"xrpl.dispute.yays", static_cast<int64_t>(yays)},
{"xrpl.dispute.nays", static_cast<int64_t>(nays)}
});
```
**Key modified files**:
- `src/xrpld/consensus/Consensus.h` — `updateOurPositions()` method
---
## Task 4a.6: Instrument `haveConsensus()` (Threshold & Convergence)
**Objective**: Trace consensus checking including threshold escalation
(`ConsensusParms::AvalancheState::{init, mid, late, stuck}`).
**What to do**:
- At the start of `haveConsensus()` (line 1598), create a scoped child span:
```cpp
XRPL_TRACE_CONSENSUS(adaptor_.getTelemetry(), "consensus.check");
```
- Set attributes:
- `xrpl.consensus.agree_count` — peers that agree with our position
- `xrpl.consensus.disagree_count` — peers that disagree
- `xrpl.consensus.converge_percent` — convergence percentage
- `xrpl.consensus.result` — ConsensusState result (Yes/No/MovedOn)
- The free function `checkConsensus()` in `Consensus.cpp` (line 151) determines
thresholds based on `currentAgreeTime`. Threshold values come from
`ConsensusParms::avalancheCutoffs` (defined in `ConsensusParms.h`).
The escalation states are `ConsensusParms::AvalancheState::{init, mid, late, stuck}`.
Record the effective threshold as an attribute on the span (sketched after this list):
- `xrpl.consensus.threshold_percent` — current threshold from `avalancheCutoffs`
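A minimal sketch of recording these attributes after `checkConsensus()` returns;
the locals (`agree`, `disagree`, `effectiveThresholdPct`, `resultString`) are
placeholders for whatever `haveConsensus()` actually has in scope:
```cpp
// Inside haveConsensus() (sketch only). The first line is the scoped span
// shown above; the rest records the outcome on that span.
XRPL_TRACE_CONSENSUS(adaptor_.getTelemetry(), "consensus.check");
XRPL_TRACE_SET_ATTR(
    "xrpl.consensus.agree_count", static_cast<std::int64_t>(agree));
XRPL_TRACE_SET_ATTR(
    "xrpl.consensus.disagree_count", static_cast<std::int64_t>(disagree));
XRPL_TRACE_SET_ATTR(
    "xrpl.consensus.converge_percent",
    static_cast<std::int64_t>(convergePercent_));
XRPL_TRACE_SET_ATTR(
    "xrpl.consensus.threshold_percent",
    static_cast<std::int64_t>(effectiveThresholdPct));
// "yes", "no", or "moved_on", mirroring the attribute table below
XRPL_TRACE_SET_ATTR("xrpl.consensus.result", resultString);
```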
**Key modified files**:
- `src/xrpld/consensus/Consensus.h` — `haveConsensus()` method
---
## Task 4a.7: Instrument Mode Changes
**Objective**: Trace consensus mode transitions (proposing ↔ observing,
wrongLedger, switchedLedger).
**What to do**:
Mode changes are rare (typically 0-1 per round), so a **standalone short-lived
span** is appropriate (not an event). This captures timing of the mode change
itself.
- In `RCLConsensus::Adaptor::onModeChange()`, create a scoped span:
```cpp
XRPL_TRACE_CONSENSUS(app_.getTelemetry(), "consensus.mode_change");
XRPL_TRACE_SET_ATTR("xrpl.consensus.mode.old", to_string(before).c_str());
XRPL_TRACE_SET_ATTR("xrpl.consensus.mode.new", to_string(after).c_str());
```
- Note: `MonitoredMode::set()` (line 304 in `Consensus.h`) calls
`adaptor_.onModeChange(before, after)` — so the span is created in the
adaptor, which already has telemetry access. No instrumentation needed
in `Consensus.h` for this task.
**Key modified files**:
- `src/xrpld/app/consensus/RCLConsensus.cpp` — `onModeChange()`
---
## Task 4a.8: Reparent Existing Spans Under Round
**Objective**: Make existing consensus spans (`consensus.accept`,
`consensus.accept.apply`, `consensus.validation.send`) children of the
`consensus.round` root span instead of being standalone.
**What to do**:
- The existing spans in `onAccept()`, `doAccept()`, and `validate()` use
`XRPL_TRACE_CONSENSUS(app_.getTelemetry(), ...)` which creates standalone
spans on the current thread's context.
- After Task 4a.2 creates the round span and stores it, these methods run on
the same thread within the round span's scope, so they automatically become
children. Verify this works correctly.
- For `consensus.validation.send`: add a **span link** (follows-from) to the
round span context, since the validation may be processed after the round
completes (a sketch follows this list).
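A minimal sketch of the follows-from link, combining the `startSpan()` link
overload from Task 4a.0 with the `roundSpanContext_` snapshot described in the
Implementation Notes; member and parameter names are assumptions:
```cpp
// RCLConsensus::Adaptor::validate() (sketch only). roundSpanContext_ is a
// SpanContext value snapshotted on the consensus thread when the round span
// is created; the job queue provides the happens-before guarantee for
// reading it here on the jtACCEPT thread.
#ifdef XRPL_ENABLE_TELEMETRY
if (app_.getTelemetry().shouldTraceConsensus())
{
    std::vector<opentelemetry::trace::SpanContext> links{roundSpanContext_};
    auto span = app_.getTelemetry().startSpan(
        "consensus.validation.send",
        opentelemetry::context::RuntimeContext::GetCurrent(),
        links);
    // ... set validation attributes and end the span once the message
    //     has been queued for broadcast ...
}
#endif
```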
**Key modified files**:
- `src/xrpld/app/consensus/RCLConsensus.cpp` — verify parent-child hierarchy
---
## Task 4a.9: Build Verification and Testing
**Objective**: Verify all Phase 4a changes compile cleanly with telemetry ON
and OFF, and don't affect consensus timing.
**What to do**:
1. Build with `telemetry=ON` — verify no compilation errors
2. Build with `telemetry=OFF` — verify macros expand to no-ops, no new includes
leak into `Consensus.h` when disabled
3. Run existing consensus unit tests
4. Verify `#ifdef XRPL_ENABLE_TELEMETRY` guards on all new members in
`Consensus.h`
5. Run `pccl` pre-commit checks
**Verification Checklist**:
- [x] Build succeeds with telemetry ON
- [x] Build succeeds with telemetry OFF
- [x] Existing consensus tests pass
- [x] `Consensus.h` has zero OTel includes when telemetry is OFF
- [x] No new virtual calls in hot consensus paths
- [x] `pccl` passes
---
## Phase 4a Summary
| Task | Description | New Files | Modified Files | Depends On |
| ---- | ------------------------------------------------ | --------- | -------------- | ---------- |
| 4a.0 | Prerequisites: extend SpanGuard & Telemetry APIs | 0 | 4 | Phase 4 |
| 4a.1 | Adaptor `getTelemetry()` method | 0 | 2 | Phase 4 |
| 4a.2 | Switchable round span with deterministic traceID | 0 | 3 | 4a.0, 4a.1 |
| 4a.3 | Span members in `Consensus.h` | 0 | 1 | 4a.1 |
| 4a.4 | Instrument `phaseEstablish()` | 0 | 1 | 4a.3 |
| 4a.5 | Instrument `updateOurPositions()` | 0 | 1 | 4a.0, 4a.3 |
| 4a.6 | Instrument `haveConsensus()` (thresholds) | 0 | 1 | 4a.3 |
| 4a.7 | Instrument mode changes | 0 | 1 | 4a.1 |
| 4a.8 | Reparent existing spans under round | 0 | 1 | 4a.0, 4a.2 |
| 4a.9 | Build verification and testing | 0 | 0 | 4a.0-4a.8 |
**Parallel work**: Tasks 4a.0 and 4a.1 can run in parallel. Tasks 4a.4, 4a.5, 4a.6, and 4a.7 can run in parallel after 4a.3 (and 4a.0 for 4a.5).
### New Spans (Phase 4a)
| Span Name | Location | Key Attributes |
| ---------------------------- | ------------------ | ---------------------------------------------------------------------------------- |
| `consensus.round` | `RCLConsensus.cpp` | `round_id`, `ledger_id`, `ledger.seq`, `mode`; link → prev round |
| `consensus.establish` | `Consensus.h` | `converge_percent`, `establish_count`, `proposers` |
| `consensus.update_positions` | `Consensus.h` | `disputes_count`, `converge_percent`, `proposers_agreed`, `proposers_total` |
| `consensus.check` | `Consensus.h` | `agree_count`, `disagree_count`, `converge_percent`, `result`, `threshold_percent` |
| `consensus.mode_change` | `RCLConsensus.cpp` | `mode.old`, `mode.new` |
### New Events (Phase 4a)
| Event Name | Parent Span | Attributes |
| ----------------- | ---------------------------- | ----------------------------------- |
| `dispute.resolve` | `consensus.update_positions` | `tx_id`, `our_vote`, `yays`, `nays` |
### New Attributes (Phase 4a)
```cpp
// Round-level (on consensus.round)
"xrpl.consensus.round_id" = int64 // Consensus round number
"xrpl.consensus.ledger_id" = string // previousLedger.id() hash
"xrpl.consensus.trace_strategy" = string // "deterministic" or "attribute"
// Establish-level
"xrpl.consensus.converge_percent" = int64 // Convergence % (0-100+)
"xrpl.consensus.establish_count" = int64 // Number of establish iterations
"xrpl.consensus.disputes_count" = int64 // Active disputes
"xrpl.consensus.proposers_agreed" = int64 // Peers agreeing with us
"xrpl.consensus.proposers_total" = int64 // Total peer positions
"xrpl.consensus.agree_count" = int64 // Peers that agree (haveConsensus)
"xrpl.consensus.disagree_count" = int64 // Peers that disagree
"xrpl.consensus.threshold_percent" = int64 // Current threshold (50/65/70/95)
"xrpl.consensus.result" = string // "yes", "no", "moved_on"
// Mode change
"xrpl.consensus.mode.old" = string // Previous mode
"xrpl.consensus.mode.new" = string // New mode
```
### Implementation Notes
- **Separation of concerns**: All non-trivial telemetry code extracted to private
helpers (`startRoundTracing`, `createValidationSpan`, `startEstablishTracing`,
`updateEstablishTracing`, `endEstablishTracing`). Business logic methods contain
only single-line `#ifdef` blocks calling these helpers.
- **Thread safety**: `createValidationSpan()` runs on the jtACCEPT worker thread.
Instead of accessing `roundSpan_` across threads, a `roundSpanContext_` snapshot
(lightweight `SpanContext` value type) is captured on the consensus thread in
`startRoundTracing()` and read by `createValidationSpan()`. The job queue
provides the happens-before guarantee.
- **Macro safety**: `XRPL_TRACE_ADD_EVENT` uses `do { } while (0)` to prevent
dangling-else issues.
- **Config validation**: `consensus_trace_strategy` is validated to be either
`"deterministic"` or `"attribute"`, falling back to `"deterministic"` for
unrecognised values (a sketch follows this list).
- **Plan deviation**: `roundSpan_` is stored in `RCLConsensus::Adaptor` (not
`Consensus.h`) because the adaptor has access to telemetry config and can
implement the deterministic trace ID strategy. `establishSpan_` is correctly
in `Consensus.h` as planned.
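A minimal sketch of that fallback; the config-section accessor
(`get(section, ...)`) is a placeholder for whatever `TelemetryConfig.cpp` actually
uses to read string values:
```cpp
// TelemetryConfig.cpp (sketch only)
std::string strategy =
    get(section, "consensus_trace_strategy", std::string("deterministic"));
if (strategy != "deterministic" && strategy != "attribute")
    strategy = "deterministic";  // unrecognised values fall back to the default
setup.consensusTraceStrategy = strategy;
```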
---
# Phase 4b: Cross-Node Propagation (Future — Documentation Only)
> **Goal**: Wire `TraceContextPropagator` for P2P messages so that proposals
> and validations carry trace context between nodes. This enables true
> distributed tracing where a proposal sent by Node A creates a child span
> on Node B.
>
> **Status**: NOT IMPLEMENTED. The protobuf fields and propagator class exist
> but are not wired. This section documents the design for future work.
## Architecture
```
Node A (proposing) Node B (receiving)
───────────────── ──────────────────
consensus.round consensus.round
├── propose() ├── peerProposal()
│ └── TraceContextPropagator │ └── TraceContextPropagator
│ ::injectToProtobuf( │ ::extractFromProtobuf(
│ TMProposeSet.trace_context) │ TMProposeSet.trace_context)
│ │ └── span link → Node A's context
└── validate() └── onValidation()
└── inject into TMValidation └── extract from TMValidation
```
## Wiring Points
| Message | Inject Location | Extract Location | Protobuf Field |
| --------------- | ---------------------------------- | ----------------------------------- | -------------------------- |
| `TMProposeSet` | `Adaptor::propose()` | `PeerImp::onMessage(TMProposeSet)` | field 1001: `TraceContext` |
| `TMValidation` | `Adaptor::validate()` | `PeerImp::onMessage(TMValidation)` | field 1001: `TraceContext` |
| `TMTransaction` | `NetworkOPs::processTransaction()` | `PeerImp::onMessage(TMTransaction)` | field 1001: `TraceContext` |
## Span Link Semantics
Received messages use **span links** (follows-from), NOT parent-child:
- The receiver's processing span links to the sender's context
- This preserves each node's independent trace tree
- Cross-node correlation is visible via linked traces in Tempo/Jaeger (a sketch follows this list)
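A minimal sketch of the intended wiring, using the `injectToProtobuf` /
`extractFromProtobuf` names from the diagram above; since this phase is not
implemented, the exact propagator signatures, protobuf accessors, and the
receiver-side span name are assumptions:
```cpp
// Sender (Node A) — RCLConsensus::Adaptor::propose() (sketch only)
#ifdef XRPL_ENABLE_TELEMETRY
xrpl::telemetry::TraceContextPropagator::injectToProtobuf(
    opentelemetry::context::RuntimeContext::GetCurrent(),
    *proposal.mutable_trace_context());
#endif

// Receiver (Node B) — PeerImp::onMessage(TMProposeSet) (sketch only)
#ifdef XRPL_ENABLE_TELEMETRY
auto senderCtx = xrpl::telemetry::TraceContextPropagator::extractFromProtobuf(
    m->trace_context());
// A follows-from link, NOT a parent: the local trace tree stays independent.
std::vector<opentelemetry::trace::SpanContext> links{senderCtx};
auto span = app_.getTelemetry().startSpan(
    "consensus.peer_proposal",  // hypothetical receiver-side span name
    opentelemetry::context::RuntimeContext::GetCurrent(),
    links);
#endif
```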
## Interaction with Deterministic Trace ID (Strategy A)
When using deterministic trace_id (Phase 4a default), cross-node spans already
share the same trace_id. P2P propagation adds **span-level** linking:
- Without propagation: spans from different nodes appear in the same trace
(same trace_id) but without parent-child or follows-from relationships.
- With propagation: spans have explicit links showing which proposal/validation
from Node A caused processing on Node B.
## Prerequisites
- Phase 4a (this task list) — establish phase tracing must be in place
- `TraceContextPropagator` class (already exists in
`include/xrpl/telemetry/TraceContextPropagator.h`)
- Protobuf `TraceContext` message (already exists, field 1001)

View File

@@ -0,0 +1,241 @@
# Phase 5: Documentation & Deployment Task List
> **Goal**: Production readiness — Grafana dashboards, spanmetrics pipeline, operator runbook, alert definitions, and final integration testing. This phase ensures the telemetry system is useful and maintainable in production.
>
> **Scope**: Grafana dashboard definitions, OTel Collector spanmetrics connector, Prometheus integration, alert rules, operator documentation, and production-ready Docker Compose stack.
>
> **Branch**: `pratik/otel-phase5-docs-deployment` (from `pratik/otel-phase4-consensus-tracing`)
### Related Plan Documents
| Document | Relevance |
| ---------------------------------------------------------------- | -------------------------------------------------------------------------- |
| [07-observability-backends.md](./07-observability-backends.md) | Jaeger setup (§7.1), Grafana dashboards (§7.6), alerts (§7.6.3) |
| [05-configuration-reference.md](./05-configuration-reference.md) | Collector config (§5.5), production config (§5.5.2), Docker Compose (§5.6) |
| [06-implementation-phases.md](./06-implementation-phases.md) | Phase 5 tasks (§6.6), definition of done (§6.11.5) |
---
## Task 5.1: Add Spanmetrics Connector to OTel Collector
**Objective**: Derive RED metrics (Rate, Errors, Duration) from trace spans automatically, enabling Grafana time-series dashboards.
**What to do**:
- Edit `docker/telemetry/otel-collector-config.yaml`:
- Add `spanmetrics` connector:
```yaml
connectors:
spanmetrics:
histogram:
explicit:
buckets: [1ms, 5ms, 10ms, 25ms, 50ms, 100ms, 250ms, 500ms, 1s, 5s]
dimensions:
- name: xrpl.rpc.command
- name: xrpl.rpc.status
- name: xrpl.consensus.phase
- name: xrpl.tx.type
```
- Add `prometheus` exporter:
```yaml
exporters:
prometheus:
endpoint: 0.0.0.0:8889
```
- Wire the pipeline:
```yaml
service:
pipelines:
traces:
receivers: [otlp]
processors: [batch]
exporters: [debug, otlp/jaeger, spanmetrics]
metrics:
receivers: [spanmetrics]
exporters: [prometheus]
```
- Edit `docker/telemetry/docker-compose.yml`:
- Expose port `8889` on the collector for Prometheus scraping
- Add Prometheus service
- Add Prometheus as Grafana datasource
**Key modified files**:
- `docker/telemetry/otel-collector-config.yaml`
- `docker/telemetry/docker-compose.yml`
**Key new files**:
- `docker/telemetry/prometheus.yml` (Prometheus scrape config)
- `docker/telemetry/grafana/provisioning/datasources/prometheus.yaml`
**Reference**:
- [POC_taskList.md §Next Steps](./POC_taskList.md) — Metrics pipeline for Grafana dashboards
---
## Task 5.2: Create Grafana Dashboards
**Objective**: Provide pre-built Grafana dashboards for RPC performance, transaction lifecycle, and consensus health.
**What to do**:
- Create `docker/telemetry/grafana/provisioning/dashboards/dashboards.yaml` (provisioning config)
- Create dashboard JSON files:
1. **RPC Performance Dashboard** (`rpc-performance.json`):
- RPC request latency (p50/p95/p99) by command — histogram panel
- RPC throughput (requests/sec) by command — time series
- RPC error rate by command — bar gauge
- Top slowest RPC commands — table
2. **Transaction Overview Dashboard** (`transaction-overview.json`):
- Transaction processing rate — time series
- Transaction latency distribution — histogram
- Suppression rate (duplicates) — stat panel
- Transaction processing path (sync vs async) — pie chart
3. **Consensus Health Dashboard** (`consensus-health.json`):
- Consensus round duration — time series
- Phase duration breakdown (open/establish/accept) — stacked bar
- Proposals sent/received per round — stat panel
- Consensus mode distribution (proposing/observing) — pie chart
- Store dashboards in `docker/telemetry/grafana/dashboards/`
**Key new files**:
- `docker/telemetry/grafana/provisioning/dashboards/dashboards.yaml`
- `docker/telemetry/grafana/dashboards/rpc-performance.json`
- `docker/telemetry/grafana/dashboards/transaction-overview.json`
- `docker/telemetry/grafana/dashboards/consensus-health.json`
**Reference**:
- [07-observability-backends.md §7.6](./07-observability-backends.md) — Grafana dashboard specifications
- [01-architecture-analysis.md §1.8.3](./01-architecture-analysis.md) — Dashboard panel examples
---
## Task 5.3: Define Alert Rules
**Objective**: Create alert definitions for key telemetry anomalies.
**What to do**:
- Create `docker/telemetry/grafana/provisioning/alerting/alerts.yaml`:
- **RPC Latency Alert**: p99 latency > 1s for any command over 5 minutes
- **RPC Error Rate Alert**: Error rate > 5% for any command over 5 minutes
- **Consensus Duration Alert**: Round duration > 10s (warn), > 30s (critical)
- **Transaction Processing Alert**: Processing rate drops below threshold
- **Telemetry Pipeline Health**: No spans received for > 2 minutes
**Key new files**:
- `docker/telemetry/grafana/provisioning/alerting/alerts.yaml`
**Reference**:
- [07-observability-backends.md §7.6.3](./07-observability-backends.md) — Alert rule definitions
---
## Task 5.4: Production Collector Configuration
**Objective**: Create a production-ready OTel Collector configuration with tail-based sampling and resource limits.
**What to do**:
- Create `docker/telemetry/otel-collector-config-production.yaml`:
- Tail-based sampling policy:
- Always sample errors and slow traces
- 10% base sampling rate for normal traces
- Always sample first trace for each unique RPC command
- Resource limits:
- Memory limiter processor (80% of available memory)
- Queued retry for export failures
- TLS configuration for production endpoints
- Health check endpoint
**Key new files**:
- `docker/telemetry/otel-collector-config-production.yaml`
**Reference**:
- [05-configuration-reference.md §5.5.2](./05-configuration-reference.md) — Production collector config
---
## Task 5.5: Operator Runbook
**Objective**: Create operator documentation for managing the telemetry system in production.
**What to do**:
- Create `docs/telemetry-runbook.md`:
- **Setup**: How to enable telemetry in rippled
- **Configuration**: All config options with descriptions
- **Collector Deployment**: Docker Compose vs. Kubernetes vs. bare metal
- **Troubleshooting**: Common issues and resolutions
- No traces appearing
- High memory usage from telemetry
- Collector connection failures
- Sampling configuration tuning
- **Performance Tuning**: Batch size, queue size, sampling ratio guidelines
- **Upgrading**: How to upgrade OTel SDK and Collector versions
**Key new files**:
- `docs/telemetry-runbook.md`
---
## Task 5.6: Final Integration Testing
**Objective**: Validate the complete telemetry stack end-to-end.
**What to do**:
1. Start full Docker stack (Collector, Jaeger, Grafana, Prometheus)
2. Build rippled with `telemetry=ON`
3. Run in standalone mode with telemetry enabled
4. Generate RPC traffic and verify traces in Jaeger
5. Verify dashboards populate in Grafana
6. Verify alerts trigger correctly
7. Test telemetry OFF path (no regressions)
8. Run full test suite
**Verification Checklist**:
- [ ] Docker stack starts without errors
- [ ] Traces appear in Jaeger with correct hierarchy
- [ ] Grafana dashboards show metrics derived from spans
- [ ] Prometheus scrapes spanmetrics successfully
- [ ] Alerts can be triggered by simulated conditions
- [ ] Build succeeds with telemetry ON and OFF
- [ ] Full test suite passes
---
## Summary
| Task | Description | New Files | Modified Files | Depends On |
| ---- | ---------------------------------- | --------- | -------------- | ---------- |
| 5.1 | Spanmetrics connector + Prometheus | 2 | 2 | Phase 4 |
| 5.2 | Grafana dashboards | 4 | 0 | 5.1 |
| 5.3 | Alert definitions | 1 | 0 | 5.1 |
| 5.4 | Production collector config | 1 | 0 | Phase 4 |
| 5.5 | Operator runbook | 1 | 0 | Phase 4 |
| 5.6 | Final integration testing | 0 | 0 | 5.1-5.5 |
**Parallel work**: Tasks 5.1, 5.4, and 5.5 can run in parallel. Tasks 5.2 and 5.3 depend on 5.1. Task 5.6 depends on all others.
**Exit Criteria** (from [06-implementation-phases.md §6.11.5](./06-implementation-phases.md)):
- [ ] Dashboards deployed and showing data
- [ ] Alerts configured and tested
- [ ] Operator documentation complete
- [ ] Production collector config ready
- [ ] Full test suite passes

View File

@@ -0,0 +1,280 @@
# OpenTelemetry Distributed Tracing for rippled
---
## Slide 1: Introduction
### What is OpenTelemetry?
OpenTelemetry is an open-source, CNCF-backed observability framework for distributed tracing, metrics, and logs.
### Why OpenTelemetry for rippled?
- **End-to-End Transaction Visibility**: Track transactions from submission → consensus → ledger inclusion
- **Cross-Node Correlation**: Follow requests across multiple independent nodes using a unique `trace_id`
- **Consensus Round Analysis**: Understand timing and behavior across validators
- **Incident Debugging**: Correlate events across distributed nodes during issues
```mermaid
flowchart LR
A["Node A<br/>tx.receive<br/>trace_id: abc123"] --> B["Node B<br/>tx.relay<br/>trace_id: abc123"] --> C["Node C<br/>tx.validate<br/>trace_id: abc123"] --> D["Node D<br/>ledger.apply<br/>trace_id: abc123"]
style A fill:#1565c0,stroke:#0d47a1,color:#fff
style B fill:#2e7d32,stroke:#1b5e20,color:#fff
style C fill:#2e7d32,stroke:#1b5e20,color:#fff
style D fill:#e65100,stroke:#bf360c,color:#fff
```
> **Trace ID: abc123** — All nodes share the same trace, enabling cross-node correlation.
---
## Slide 2: OpenTelemetry vs Open Source Alternatives
| Feature | OpenTelemetry | Jaeger | Zipkin | SkyWalking | Pinpoint | Prometheus |
| ------------------- | ---------------- | ---------------- | ------------------ | ---------- | ---------- | ---------- |
| **Tracing** | YES | YES | YES | YES | YES | NO |
| **Metrics** | YES | NO | NO | YES | YES | YES |
| **Logs** | YES | NO | NO | YES | NO | NO |
| **C++ SDK** | YES (Official) | YES (Deprecated) | YES (Unmaintained) | NO | NO | YES |
| **Vendor Neutral** | YES (Primary goal) | NO | NO | NO | NO | NO |
| **Instrumentation** | Manual + Auto | Manual | Manual | Auto-first | Auto-first | Manual |
| **Backend** | Any (exporters) | Self | Self | Self | Self | Self |
| **CNCF Status** | Incubating | Graduated | NO | Incubating | NO | Graduated |
> **Why OpenTelemetry?** It's the only actively maintained, full-featured C++ option with vendor neutrality — allowing export to Jaeger, Prometheus, Grafana, or any commercial backend without changing instrumentation.
---
## Slide 3: Comparison with rippled's Existing Solutions
### Current Observability Stack
| Aspect | PerfLog (JSON) | StatsD (Metrics) | OpenTelemetry (NEW) |
| --------------------- | --------------------- | --------------------- | --------------------------- |
| **Type** | Logging | Metrics | Distributed Tracing |
| **Scope** | Single node | Single node | **Cross-node** |
| **Data** | JSON log entries | Counters, gauges | Spans with context |
| **Correlation** | By timestamp | By metric name | By `trace_id` |
| **Overhead** | Low (file I/O) | Low (UDP) | Low-Medium (configurable) |
| **Question Answered** | "What happened here?" | "How many? How fast?" | **"What was the journey?"** |
### Use Case Matrix
| Scenario | PerfLog | StatsD | OpenTelemetry |
| -------------------------------- | ------- | ------ | ------------- |
| "How many TXs per second?" | ❌ | ✅ | ❌ |
| "Why was this specific TX slow?" | ⚠️ | ❌ | ✅ |
| "Which node delayed consensus?" | ❌ | ❌ | ✅ |
| "Show TX journey across 5 nodes" | ❌ | ❌ | ✅ |
> **Key Insight**: OpenTelemetry **complements** (not replaces) existing systems.
---
## Slide 4: Architecture
### High-Level Integration Architecture
```mermaid
flowchart TB
subgraph rippled["rippled Node"]
subgraph services["Core Services"]
direction LR
RPC["RPC Server<br/>(HTTP/WS)"] ~~~ Overlay["Overlay<br/>(P2P Network)"] ~~~ Consensus["Consensus<br/>(RCLConsensus)"]
end
Telemetry["Telemetry Module<br/>(OpenTelemetry SDK)"]
services --> Telemetry
end
Telemetry -->|OTLP/gRPC| Collector["OTel Collector"]
Collector --> Tempo["Grafana Tempo"]
Collector --> Jaeger["Jaeger"]
Collector --> Elastic["Elastic APM"]
style rippled fill:#424242,stroke:#212121,color:#fff
style services fill:#1565c0,stroke:#0d47a1,color:#fff
style Telemetry fill:#2e7d32,stroke:#1b5e20,color:#fff
style Collector fill:#e65100,stroke:#bf360c,color:#fff
```
### Context Propagation
```mermaid
sequenceDiagram
participant Client
participant NodeA as Node A
participant NodeB as Node B
Client->>NodeA: Submit TX (no context)
Note over NodeA: Creates trace_id: abc123<br/>span: tx.receive
NodeA->>NodeB: Relay TX<br/>(traceparent: abc123)
Note over NodeB: Links to trace_id: abc123<br/>span: tx.relay
```
- **HTTP/RPC**: W3C Trace Context headers (`traceparent`)
- **P2P Messages**: Protocol Buffer extension fields
---
## Slide 5: Implementation Plan
### 5-Phase Rollout (9 Weeks)
```mermaid
gantt
title Implementation Timeline
dateFormat YYYY-MM-DD
axisFormat Week %W
section Phase 1
Core Infrastructure :p1, 2024-01-01, 2w
section Phase 2
RPC Tracing :p2, after p1, 2w
section Phase 3
Transaction Tracing :p3, after p2, 2w
section Phase 4
Consensus Tracing :p4, after p3, 2w
section Phase 5
Documentation :p5, after p4, 1w
```
### Phase Details
| Phase | Focus | Key Deliverables | Effort |
| ----- | ------------------- | -------------------------------------------- | ------- |
| 1 | Core Infrastructure | SDK integration, Telemetry interface, Config | 10 days |
| 2 | RPC Tracing | HTTP context extraction, Handler spans | 10 days |
| 3 | Transaction Tracing | Protobuf context, P2P relay propagation | 10 days |
| 4 | Consensus Tracing | Round spans, Proposal/validation tracing | 10 days |
| 5 | Documentation | Runbook, Dashboards, Training | 7 days |
**Total Effort**: ~47 developer-days (2 developers)
---
## Slide 6: Performance Overhead
### Estimated System Impact
| Metric | Overhead | Notes |
| ----------------- | ---------- | ----------------------------------- |
| **CPU** | 1-3% | Span creation and attribute setting |
| **Memory** | 2-5 MB | Batch buffer for pending spans |
| **Network** | 10-50 KB/s | Compressed OTLP export to collector |
| **Latency (p99)** | <2% | With proper sampling configuration |
### Per-Message Overhead (Context Propagation)
Each P2P message carries trace context with the following overhead:
| Field | Size | Description |
| ------------- | ------------- | ----------------------------------------- |
| `trace_id` | 16 bytes | Unique identifier for the entire trace |
| `span_id` | 8 bytes | Current span (becomes parent on receiver) |
| `trace_flags` | 4 bytes | Sampling decision flags |
| `trace_state` | 0-4 bytes | Optional vendor-specific data |
| **Total** | **~32 bytes** | **Added per traced P2P message** |
```mermaid
flowchart LR
subgraph msg["P2P Message with Trace Context"]
A["Original Message<br/>(variable size)"] --> B["+ TraceContext<br/>(~32 bytes)"]
end
subgraph breakdown["Context Breakdown"]
C["trace_id<br/>16 bytes"]
D["span_id<br/>8 bytes"]
E["flags<br/>4 bytes"]
F["state<br/>0-4 bytes"]
end
B --> breakdown
style A fill:#424242,stroke:#212121,color:#fff
style B fill:#2e7d32,stroke:#1b5e20,color:#fff
style C fill:#1565c0,stroke:#0d47a1,color:#fff
style D fill:#1565c0,stroke:#0d47a1,color:#fff
style E fill:#e65100,stroke:#bf360c,color:#fff
style F fill:#4a148c,stroke:#2e0d57,color:#fff
```
> **Note**: 32 bytes is negligible compared to typical transaction messages (hundreds to thousands of bytes)
### Mitigation Strategies
```mermaid
flowchart LR
A["Head Sampling<br/>10% default"] --> B["Tail Sampling<br/>Keep errors/slow"] --> C["Batch Export<br/>Reduce I/O"] --> D["Conditional Compile<br/>XRPL_ENABLE_TELEMETRY"]
style A fill:#1565c0,stroke:#0d47a1,color:#fff
style B fill:#2e7d32,stroke:#1b5e20,color:#fff
style C fill:#e65100,stroke:#bf360c,color:#fff
style D fill:#4a148c,stroke:#2e0d57,color:#fff
```
### Kill Switches (Rollback Options)
1. **Config Disable**: Set `enabled=0` in the config for an instant disable; sampling changes likewise require no restart
2. **Rebuild**: Compile with `XRPL_ENABLE_TELEMETRY=OFF` for zero overhead (all instrumentation becomes no-ops)
3. **Full Revert**: Clean separation allows easy commit reversion
---
## Slide 7: Data Collection & Privacy
### What Data is Collected
| Category | Attributes Collected | Purpose |
| --------------- | ---------------------------------------------------------------------------------- | --------------------------- |
| **Transaction** | `tx.hash`, `tx.type`, `tx.result`, `tx.fee`, `ledger_index` | Trace transaction lifecycle |
| **Consensus** | `round`, `phase`, `mode`, `proposers` (public key or public node ID), `duration_ms` | Analyze consensus timing |
| **RPC** | `command`, `version`, `status`, `duration_ms` | Monitor RPC performance |
| **Peer** | `peer.id` (public key), `latency_ms`, `message.type`, `message.size` | Network topology analysis |
| **Ledger** | `ledger.hash`, `ledger.index`, `close_time`, `tx_count` | Ledger progression tracking |
| **Job** | `job.type`, `queue_ms`, `worker` | JobQueue performance |
### What is NOT Collected (Privacy Guarantees)
```mermaid
flowchart LR
subgraph notCollected["❌ NOT Collected"]
direction LR
A["Private Keys"] ~~~ B["Account Balances"] ~~~ C["Transaction Amounts"]
end
subgraph alsoNot["❌ Also Excluded"]
direction LR
D["IP Addresses<br/>(configurable)"] ~~~ E["Personal Data"] ~~~ F["Raw TX Payloads"]
end
style A fill:#c62828,stroke:#8c2809,color:#fff
style B fill:#c62828,stroke:#8c2809,color:#fff
style C fill:#c62828,stroke:#8c2809,color:#fff
style D fill:#c62828,stroke:#8c2809,color:#fff
style E fill:#c62828,stroke:#8c2809,color:#fff
style F fill:#c62828,stroke:#8c2809,color:#fff
```
### Privacy Protection Mechanisms
| Mechanism | Description |
| -------------------------- | ------------------------------------------------------------- |
| **Account Hashing** | `xrpl.tx.account` is hashed at collector level before storage |
| **Configurable Redaction** | Sensitive fields can be excluded via config |
| **Sampling** | Only 10% of traces recorded by default (reduces exposure) |
| **Local Control** | Node operators control what gets exported |
| **No Raw Payloads** | Transaction content is never recorded, only metadata |
> **Key Principle**: Telemetry collects **operational metadata** (timing, counts, hashes) — never **sensitive content** (keys, balances, amounts).
---
_End of Presentation_

View File

@@ -1529,3 +1529,46 @@ validators.txt
# set to ssl_verify to 0.
[ssl_verify]
1
#-------------------------------------------------------------------------------
#
# 11. Telemetry (OpenTelemetry Tracing)
#
#-------------------------------------------------------------------------------
#
# Enables distributed tracing via OpenTelemetry. Requires building with
# -DXRPL_ENABLE_TELEMETRY=ON (telemetry Conan option).
#
# [telemetry]
#
# enabled=0
#
# Enable or disable telemetry at runtime. Default: 0 (disabled).
#
# endpoint=http://localhost:4318/v1/traces
#
# The OpenTelemetry Collector endpoint (OTLP/HTTP). Default: http://localhost:4318/v1/traces.
#
# exporter=otlp_http
#
# Exporter type: otlp_http. Default: otlp_http.
#
# sampling_ratio=1.0
#
# Fraction of traces to sample (0.0 to 1.0). Default: 1.0 (all traces).
#
# trace_rpc=1
#
# Enable RPC request tracing. Default: 1.
#
# trace_transactions=1
#
# Enable transaction lifecycle tracing. Default: 1.
#
# trace_consensus=1
#
# Enable consensus round tracing. Default: 1.
#
# trace_peer=0
#
# Enable peer message tracing (high volume). Default: 0.
#

View File

@@ -1,29 +1,32 @@
macro (exclude_from_default target_)
macro(exclude_from_default target_)
set_target_properties(${target_} PROPERTIES EXCLUDE_FROM_ALL ON)
set_target_properties(${target_} PROPERTIES EXCLUDE_FROM_DEFAULT_BUILD ON)
endmacro ()
endmacro()
macro (exclude_if_included target_)
macro(exclude_if_included target_)
get_directory_property(has_parent PARENT_DIRECTORY)
if (has_parent)
if(has_parent)
exclude_from_default(${target_})
endif ()
endmacro ()
endif()
endmacro()
find_package(Git)
function (git_branch branch_val)
if (NOT GIT_FOUND)
function(git_branch branch_val)
if(NOT GIT_FOUND)
return()
endif ()
endif()
set(_branch "")
execute_process(COMMAND ${GIT_EXECUTABLE} "rev-parse" "--abbrev-ref" "HEAD"
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
RESULT_VARIABLE _git_exit_code
OUTPUT_VARIABLE _temp_branch
OUTPUT_STRIP_TRAILING_WHITESPACE ERROR_QUIET)
if (_git_exit_code EQUAL 0)
execute_process(
COMMAND ${GIT_EXECUTABLE} "rev-parse" "--abbrev-ref" "HEAD"
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
RESULT_VARIABLE _git_exit_code
OUTPUT_VARIABLE _temp_branch
OUTPUT_STRIP_TRAILING_WHITESPACE
ERROR_QUIET
)
if(_git_exit_code EQUAL 0)
set(_branch ${_temp_branch})
endif ()
endif()
set(${branch_val} "${_branch}" PARENT_SCOPE)
endfunction ()
endfunction()

View File

@@ -1,49 +1,62 @@
find_program(CCACHE_PATH "ccache")
if (NOT CCACHE_PATH)
if(NOT CCACHE_PATH)
return()
endif ()
endif()
# For Linux and macOS we can use the ccache binary directly.
if (NOT MSVC)
if(NOT MSVC)
set(CMAKE_C_COMPILER_LAUNCHER "${CCACHE_PATH}")
set(CMAKE_CXX_COMPILER_LAUNCHER "${CCACHE_PATH}")
message(STATUS "Found ccache: ${CCACHE_PATH}")
return()
endif ()
endif()
# For Windows more effort is required. The code below is a modified version of
# https://github.com/ccache/ccache/wiki/MS-Visual-Studio#usage-with-cmake.
if ("${CCACHE_PATH}" MATCHES "chocolatey")
if("${CCACHE_PATH}" MATCHES "chocolatey")
message(DEBUG "Ccache path: ${CCACHE_PATH}")
# Chocolatey uses a shim executable that we cannot use directly, in which case we have to find the executable it
# points to. If we cannot find the target executable then we cannot use ccache.
find_program(BASH_PATH "bash")
if (NOT BASH_PATH)
if(NOT BASH_PATH)
message(WARNING "Could not find bash.")
return()
endif ()
endif()
execute_process(COMMAND bash -c
"export LC_ALL='en_US.UTF-8'; ${CCACHE_PATH} --shimgen-noop | grep -oP 'path to executable: \\K.+' | head -c -1"
OUTPUT_VARIABLE CCACHE_PATH)
execute_process(
COMMAND
bash -c
"export LC_ALL='en_US.UTF-8'; ${CCACHE_PATH} --shimgen-noop | grep -oP 'path to executable: \\K.+' | head -c -1"
OUTPUT_VARIABLE CCACHE_PATH
)
if (NOT CCACHE_PATH)
if(NOT CCACHE_PATH)
message(WARNING "Could not find ccache target.")
return()
endif ()
endif()
file(TO_CMAKE_PATH "${CCACHE_PATH}" CCACHE_PATH)
endif ()
endif()
message(STATUS "Found ccache: ${CCACHE_PATH}")
# Tell cmake to use ccache for compiling with Visual Studio.
file(COPY_FILE ${CCACHE_PATH} ${CMAKE_BINARY_DIR}/cl.exe ONLY_IF_DIFFERENT)
set(CMAKE_VS_GLOBALS "CLToolExe=cl.exe" "CLToolPath=${CMAKE_BINARY_DIR}" "TrackFileAccess=false"
"UseMultiToolTask=true")
set(CMAKE_VS_GLOBALS
"CLToolExe=cl.exe"
"CLToolPath=${CMAKE_BINARY_DIR}"
"TrackFileAccess=false"
"UseMultiToolTask=true"
)
# By default Visual Studio generators will use /Zi to capture debug information, which is not compatible with ccache, so
# tell it to use /Z7 instead.
if (MSVC)
foreach (var_ CMAKE_C_FLAGS_DEBUG CMAKE_C_FLAGS_RELEASE CMAKE_CXX_FLAGS_DEBUG CMAKE_CXX_FLAGS_RELEASE)
if(MSVC)
foreach(
var_
CMAKE_C_FLAGS_DEBUG
CMAKE_C_FLAGS_RELEASE
CMAKE_CXX_FLAGS_DEBUG
CMAKE_CXX_FLAGS_RELEASE
)
string(REPLACE "/Zi" "/Z7" ${var_} "${${var_}}")
endforeach ()
endif ()
endforeach()
endif()

View File

@@ -174,36 +174,45 @@ option(CODE_COVERAGE_VERBOSE "Verbose information" FALSE)
# Check prereqs
find_program(GCOVR_PATH gcovr PATHS ${CMAKE_SOURCE_DIR}/scripts/test)
if (DEFINED CODE_COVERAGE_GCOV_TOOL)
if(DEFINED CODE_COVERAGE_GCOV_TOOL)
set(GCOV_TOOL "${CODE_COVERAGE_GCOV_TOOL}")
elseif (DEFINED ENV{CODE_COVERAGE_GCOV_TOOL})
elseif(DEFINED ENV{CODE_COVERAGE_GCOV_TOOL})
set(GCOV_TOOL "$ENV{CODE_COVERAGE_GCOV_TOOL}")
elseif ("${CMAKE_CXX_COMPILER_ID}" MATCHES "(Apple)?[Cc]lang")
if (APPLE)
execute_process(COMMAND xcrun -f llvm-cov OUTPUT_VARIABLE LLVMCOV_PATH OUTPUT_STRIP_TRAILING_WHITESPACE)
else ()
elseif("${CMAKE_CXX_COMPILER_ID}" MATCHES "(Apple)?[Cc]lang")
if(APPLE)
execute_process(
COMMAND xcrun -f llvm-cov
OUTPUT_VARIABLE LLVMCOV_PATH
OUTPUT_STRIP_TRAILING_WHITESPACE
)
else()
find_program(LLVMCOV_PATH llvm-cov)
endif ()
if (LLVMCOV_PATH)
endif()
if(LLVMCOV_PATH)
set(GCOV_TOOL "${LLVMCOV_PATH} gcov")
endif ()
elseif ("${CMAKE_CXX_COMPILER_ID}" MATCHES "GNU")
endif()
elseif("${CMAKE_CXX_COMPILER_ID}" MATCHES "GNU")
find_program(GCOV_PATH gcov)
set(GCOV_TOOL "${GCOV_PATH}")
endif ()
endif()
# Check supported compiler (Clang, GNU and Flang)
get_property(LANGUAGES GLOBAL PROPERTY ENABLED_LANGUAGES)
foreach (LANG ${LANGUAGES})
if ("${CMAKE_${LANG}_COMPILER_ID}" MATCHES "(Apple)?[Cc]lang")
if ("${CMAKE_${LANG}_COMPILER_VERSION}" VERSION_LESS 3)
message(FATAL_ERROR "Clang version must be 3.0.0 or greater! Aborting...")
endif ()
elseif (NOT "${CMAKE_${LANG}_COMPILER_ID}" MATCHES "GNU" AND NOT "${CMAKE_${LANG}_COMPILER_ID}" MATCHES
"(LLVM)?[Ff]lang")
foreach(LANG ${LANGUAGES})
if("${CMAKE_${LANG}_COMPILER_ID}" MATCHES "(Apple)?[Cc]lang")
if("${CMAKE_${LANG}_COMPILER_VERSION}" VERSION_LESS 3)
message(
FATAL_ERROR
"Clang version must be 3.0.0 or greater! Aborting..."
)
endif()
elseif(
NOT "${CMAKE_${LANG}_COMPILER_ID}" MATCHES "GNU"
AND NOT "${CMAKE_${LANG}_COMPILER_ID}" MATCHES "(LLVM)?[Ff]lang"
)
message(FATAL_ERROR "Compiler is not GNU or Flang! Aborting...")
endif ()
endforeach ()
endif()
endforeach()
set(COVERAGE_COMPILER_FLAGS "-g --coverage" CACHE INTERNAL "")
@@ -212,7 +221,7 @@ set(COVERAGE_C_COMPILER_FLAGS "")
set(COVERAGE_CXX_LINKER_FLAGS "")
set(COVERAGE_C_LINKER_FLAGS "")
if (CMAKE_CXX_COMPILER_ID MATCHES "(GNU|Clang)")
if(CMAKE_CXX_COMPILER_ID MATCHES "(GNU|Clang)")
include(CheckCXXCompilerFlag)
include(CheckCCompilerFlag)
include(CheckLinkerFlag)
@@ -223,51 +232,77 @@ if (CMAKE_CXX_COMPILER_ID MATCHES "(GNU|Clang)")
set(COVERAGE_C_LINKER_FLAGS ${COVERAGE_COMPILER_FLAGS})
check_cxx_compiler_flag(-fprofile-abs-path HAVE_cxx_fprofile_abs_path)
if (HAVE_cxx_fprofile_abs_path)
set(COVERAGE_CXX_COMPILER_FLAGS "${COVERAGE_CXX_COMPILER_FLAGS} -fprofile-abs-path")
endif ()
if(HAVE_cxx_fprofile_abs_path)
set(COVERAGE_CXX_COMPILER_FLAGS
"${COVERAGE_CXX_COMPILER_FLAGS} -fprofile-abs-path"
)
endif()
check_c_compiler_flag(-fprofile-abs-path HAVE_c_fprofile_abs_path)
if (HAVE_c_fprofile_abs_path)
set(COVERAGE_C_COMPILER_FLAGS "${COVERAGE_C_COMPILER_FLAGS} -fprofile-abs-path")
endif ()
if(HAVE_c_fprofile_abs_path)
set(COVERAGE_C_COMPILER_FLAGS
"${COVERAGE_C_COMPILER_FLAGS} -fprofile-abs-path"
)
endif()
check_linker_flag(CXX -fprofile-abs-path HAVE_cxx_linker_fprofile_abs_path)
if (HAVE_cxx_linker_fprofile_abs_path)
set(COVERAGE_CXX_LINKER_FLAGS "${COVERAGE_CXX_LINKER_FLAGS} -fprofile-abs-path")
endif ()
if(HAVE_cxx_linker_fprofile_abs_path)
set(COVERAGE_CXX_LINKER_FLAGS
"${COVERAGE_CXX_LINKER_FLAGS} -fprofile-abs-path"
)
endif()
check_linker_flag(C -fprofile-abs-path HAVE_c_linker_fprofile_abs_path)
if (HAVE_c_linker_fprofile_abs_path)
set(COVERAGE_C_LINKER_FLAGS "${COVERAGE_C_LINKER_FLAGS} -fprofile-abs-path")
endif ()
if(HAVE_c_linker_fprofile_abs_path)
set(COVERAGE_C_LINKER_FLAGS
"${COVERAGE_C_LINKER_FLAGS} -fprofile-abs-path"
)
endif()
check_cxx_compiler_flag(-fprofile-update=atomic HAVE_cxx_fprofile_update)
if (HAVE_cxx_fprofile_update)
set(COVERAGE_CXX_COMPILER_FLAGS "${COVERAGE_CXX_COMPILER_FLAGS} -fprofile-update=atomic")
endif ()
if(HAVE_cxx_fprofile_update)
set(COVERAGE_CXX_COMPILER_FLAGS
"${COVERAGE_CXX_COMPILER_FLAGS} -fprofile-update=atomic"
)
endif()
check_c_compiler_flag(-fprofile-update=atomic HAVE_c_fprofile_update)
if (HAVE_c_fprofile_update)
set(COVERAGE_C_COMPILER_FLAGS "${COVERAGE_C_COMPILER_FLAGS} -fprofile-update=atomic")
endif ()
if(HAVE_c_fprofile_update)
set(COVERAGE_C_COMPILER_FLAGS
"${COVERAGE_C_COMPILER_FLAGS} -fprofile-update=atomic"
)
endif()
check_linker_flag(CXX -fprofile-update=atomic HAVE_cxx_linker_fprofile_update)
if (HAVE_cxx_linker_fprofile_update)
set(COVERAGE_CXX_LINKER_FLAGS "${COVERAGE_CXX_LINKER_FLAGS} -fprofile-update=atomic")
endif ()
check_linker_flag(
CXX
-fprofile-update=atomic
HAVE_cxx_linker_fprofile_update
)
if(HAVE_cxx_linker_fprofile_update)
set(COVERAGE_CXX_LINKER_FLAGS
"${COVERAGE_CXX_LINKER_FLAGS} -fprofile-update=atomic"
)
endif()
check_linker_flag(C -fprofile-update=atomic HAVE_c_linker_fprofile_update)
if (HAVE_c_linker_fprofile_update)
set(COVERAGE_C_LINKER_FLAGS "${COVERAGE_C_LINKER_FLAGS} -fprofile-update=atomic")
endif ()
if(HAVE_c_linker_fprofile_update)
set(COVERAGE_C_LINKER_FLAGS
"${COVERAGE_C_LINKER_FLAGS} -fprofile-update=atomic"
)
endif()
endif()
endif ()
get_property(GENERATOR_IS_MULTI_CONFIG GLOBAL PROPERTY GENERATOR_IS_MULTI_CONFIG)
if (NOT (CMAKE_BUILD_TYPE STREQUAL "Debug" OR GENERATOR_IS_MULTI_CONFIG))
message(WARNING "Code coverage results with an optimised (non-Debug) build may be misleading")
endif () # NOT (CMAKE_BUILD_TYPE STREQUAL "Debug" OR GENERATOR_IS_MULTI_CONFIG)
get_property(
GENERATOR_IS_MULTI_CONFIG
GLOBAL
PROPERTY GENERATOR_IS_MULTI_CONFIG
)
if(NOT (CMAKE_BUILD_TYPE STREQUAL "Debug" OR GENERATOR_IS_MULTI_CONFIG))
message(
WARNING
"Code coverage results with an optimised (non-Debug) build may be misleading"
)
endif() # NOT (CMAKE_BUILD_TYPE STREQUAL "Debug" OR GENERATOR_IS_MULTI_CONFIG)
# Defines a target for running and collection code coverage information
# Builds dependencies, runs the given executable and outputs reports.
@@ -291,123 +326,167 @@ endif () # NOT (CMAKE_BUILD_TYPE STREQUAL "Debug" OR GENERATOR_IS_MULTI_CONFIG)
# )
# The user can set the variable GCOVR_ADDITIONAL_ARGS to supply additional flags to the
# GCVOR command.
function (setup_target_for_coverage_gcovr)
function(setup_target_for_coverage_gcovr)
set(options NONE)
set(oneValueArgs BASE_DIRECTORY NAME FORMAT)
set(multiValueArgs EXCLUDE EXECUTABLE EXECUTABLE_ARGS DEPENDENCIES)
cmake_parse_arguments(Coverage "${options}" "${oneValueArgs}" "${multiValueArgs}" ${ARGN})
cmake_parse_arguments(
Coverage
"${options}"
"${oneValueArgs}"
"${multiValueArgs}"
${ARGN}
)
if (NOT GCOV_TOOL)
if(NOT GCOV_TOOL)
message(FATAL_ERROR "Could not find gcov or llvm-cov tool! Aborting...")
endif ()
endif()
if (NOT GCOVR_PATH)
if(NOT GCOVR_PATH)
message(FATAL_ERROR "Could not find gcovr tool! Aborting...")
endif ()
endif()
# Set base directory (as absolute path), or default to PROJECT_SOURCE_DIR
if (DEFINED Coverage_BASE_DIRECTORY)
if(DEFINED Coverage_BASE_DIRECTORY)
get_filename_component(BASEDIR ${Coverage_BASE_DIRECTORY} ABSOLUTE)
else ()
else()
set(BASEDIR ${PROJECT_SOURCE_DIR})
endif ()
endif()
if (NOT DEFINED Coverage_FORMAT)
if(NOT DEFINED Coverage_FORMAT)
set(Coverage_FORMAT xml)
endif ()
endif()
if (NOT DEFINED Coverage_EXECUTABLE AND DEFINED Coverage_EXECUTABLE_ARGS)
message(FATAL_ERROR "EXECUTABLE_ARGS must not be set if EXECUTABLE is not set")
endif ()
if(NOT DEFINED Coverage_EXECUTABLE AND DEFINED Coverage_EXECUTABLE_ARGS)
message(
FATAL_ERROR
"EXECUTABLE_ARGS must not be set if EXECUTABLE is not set"
)
endif()
if ("--output" IN_LIST GCOVR_ADDITIONAL_ARGS)
message(FATAL_ERROR "Unsupported --output option detected in GCOVR_ADDITIONAL_ARGS! Aborting...")
else ()
if ((Coverage_FORMAT STREQUAL "html-details") OR (Coverage_FORMAT STREQUAL "html-nested"))
set(GCOVR_OUTPUT_FILE ${PROJECT_BINARY_DIR}/${Coverage_NAME}/index.html)
if("--output" IN_LIST GCOVR_ADDITIONAL_ARGS)
message(
FATAL_ERROR
"Unsupported --output option detected in GCOVR_ADDITIONAL_ARGS! Aborting..."
)
else()
if(
(Coverage_FORMAT STREQUAL "html-details")
OR (Coverage_FORMAT STREQUAL "html-nested")
)
set(GCOVR_OUTPUT_FILE
${PROJECT_BINARY_DIR}/${Coverage_NAME}/index.html
)
set(GCOVR_CREATE_FOLDER ${PROJECT_BINARY_DIR}/${Coverage_NAME})
elseif (Coverage_FORMAT STREQUAL "html-single")
elseif(Coverage_FORMAT STREQUAL "html-single")
set(GCOVR_OUTPUT_FILE ${Coverage_NAME}.html)
elseif ((Coverage_FORMAT STREQUAL "json-summary") OR (Coverage_FORMAT STREQUAL "json-details")
OR (Coverage_FORMAT STREQUAL "coveralls"))
elseif(
(Coverage_FORMAT STREQUAL "json-summary")
OR (Coverage_FORMAT STREQUAL "json-details")
OR (Coverage_FORMAT STREQUAL "coveralls")
)
set(GCOVR_OUTPUT_FILE ${Coverage_NAME}.json)
elseif (Coverage_FORMAT STREQUAL "txt")
elseif(Coverage_FORMAT STREQUAL "txt")
set(GCOVR_OUTPUT_FILE ${Coverage_NAME}.txt)
elseif (Coverage_FORMAT STREQUAL "csv")
elseif(Coverage_FORMAT STREQUAL "csv")
set(GCOVR_OUTPUT_FILE ${Coverage_NAME}.csv)
elseif (Coverage_FORMAT STREQUAL "lcov")
elseif(Coverage_FORMAT STREQUAL "lcov")
set(GCOVR_OUTPUT_FILE ${Coverage_NAME}.lcov)
else ()
else()
set(GCOVR_OUTPUT_FILE ${Coverage_NAME}.xml)
endif ()
endif ()
endif()
endif()
if ((Coverage_FORMAT STREQUAL "cobertura") OR (Coverage_FORMAT STREQUAL "xml"))
if(
(Coverage_FORMAT STREQUAL "cobertura")
OR (Coverage_FORMAT STREQUAL "xml")
)
list(APPEND GCOVR_ADDITIONAL_ARGS --cobertura "${GCOVR_OUTPUT_FILE}")
list(APPEND GCOVR_ADDITIONAL_ARGS --cobertura-pretty)
set(Coverage_FORMAT cobertura) # overwrite xml
elseif (Coverage_FORMAT STREQUAL "sonarqube")
elseif(Coverage_FORMAT STREQUAL "sonarqube")
list(APPEND GCOVR_ADDITIONAL_ARGS --sonarqube "${GCOVR_OUTPUT_FILE}")
elseif (Coverage_FORMAT STREQUAL "jacoco")
elseif(Coverage_FORMAT STREQUAL "jacoco")
list(APPEND GCOVR_ADDITIONAL_ARGS --jacoco "${GCOVR_OUTPUT_FILE}")
list(APPEND GCOVR_ADDITIONAL_ARGS --jacoco-pretty)
elseif (Coverage_FORMAT STREQUAL "clover")
elseif(Coverage_FORMAT STREQUAL "clover")
list(APPEND GCOVR_ADDITIONAL_ARGS --clover "${GCOVR_OUTPUT_FILE}")
list(APPEND GCOVR_ADDITIONAL_ARGS --clover-pretty)
elseif (Coverage_FORMAT STREQUAL "lcov")
elseif(Coverage_FORMAT STREQUAL "lcov")
list(APPEND GCOVR_ADDITIONAL_ARGS --lcov "${GCOVR_OUTPUT_FILE}")
elseif (Coverage_FORMAT STREQUAL "json-summary")
elseif(Coverage_FORMAT STREQUAL "json-summary")
list(APPEND GCOVR_ADDITIONAL_ARGS --json-summary "${GCOVR_OUTPUT_FILE}")
list(APPEND GCOVR_ADDITIONAL_ARGS --json-summary-pretty)
elseif (Coverage_FORMAT STREQUAL "json-details")
elseif(Coverage_FORMAT STREQUAL "json-details")
list(APPEND GCOVR_ADDITIONAL_ARGS --json "${GCOVR_OUTPUT_FILE}")
list(APPEND GCOVR_ADDITIONAL_ARGS --json-pretty)
elseif (Coverage_FORMAT STREQUAL "coveralls")
elseif(Coverage_FORMAT STREQUAL "coveralls")
list(APPEND GCOVR_ADDITIONAL_ARGS --coveralls "${GCOVR_OUTPUT_FILE}")
list(APPEND GCOVR_ADDITIONAL_ARGS --coveralls-pretty)
elseif (Coverage_FORMAT STREQUAL "csv")
elseif(Coverage_FORMAT STREQUAL "csv")
list(APPEND GCOVR_ADDITIONAL_ARGS --csv "${GCOVR_OUTPUT_FILE}")
elseif (Coverage_FORMAT STREQUAL "txt")
elseif(Coverage_FORMAT STREQUAL "txt")
list(APPEND GCOVR_ADDITIONAL_ARGS --txt "${GCOVR_OUTPUT_FILE}")
elseif (Coverage_FORMAT STREQUAL "html-single")
elseif(Coverage_FORMAT STREQUAL "html-single")
list(APPEND GCOVR_ADDITIONAL_ARGS --html "${GCOVR_OUTPUT_FILE}")
list(APPEND GCOVR_ADDITIONAL_ARGS --html-self-contained)
elseif (Coverage_FORMAT STREQUAL "html-nested")
elseif(Coverage_FORMAT STREQUAL "html-nested")
list(APPEND GCOVR_ADDITIONAL_ARGS --html-nested "${GCOVR_OUTPUT_FILE}")
elseif (Coverage_FORMAT STREQUAL "html-details")
elseif(Coverage_FORMAT STREQUAL "html-details")
list(APPEND GCOVR_ADDITIONAL_ARGS --html-details "${GCOVR_OUTPUT_FILE}")
else ()
message(FATAL_ERROR "Unsupported output style ${Coverage_FORMAT}! Aborting...")
endif ()
else()
message(
FATAL_ERROR
"Unsupported output style ${Coverage_FORMAT}! Aborting..."
)
endif()
# Collect excludes (CMake 3.4+: Also compute absolute paths)
set(GCOVR_EXCLUDES "")
foreach (EXCLUDE ${Coverage_EXCLUDE} ${COVERAGE_EXCLUDES} ${COVERAGE_GCOVR_EXCLUDES})
if (CMAKE_VERSION VERSION_GREATER 3.4)
get_filename_component(EXCLUDE ${EXCLUDE} ABSOLUTE BASE_DIR ${BASEDIR})
endif ()
foreach(
EXCLUDE
${Coverage_EXCLUDE}
${COVERAGE_EXCLUDES}
${COVERAGE_GCOVR_EXCLUDES}
)
if(CMAKE_VERSION VERSION_GREATER 3.4)
get_filename_component(
EXCLUDE
${EXCLUDE}
ABSOLUTE
BASE_DIR ${BASEDIR}
)
endif()
list(APPEND GCOVR_EXCLUDES "${EXCLUDE}")
endforeach ()
endforeach()
list(REMOVE_DUPLICATES GCOVR_EXCLUDES)
# Combine excludes to several -e arguments
set(GCOVR_EXCLUDE_ARGS "")
foreach (EXCLUDE ${GCOVR_EXCLUDES})
foreach(EXCLUDE ${GCOVR_EXCLUDES})
list(APPEND GCOVR_EXCLUDE_ARGS "-e")
list(APPEND GCOVR_EXCLUDE_ARGS "${EXCLUDE}")
endforeach ()
endforeach()
# Set up commands which will be run to generate coverage data
# If EXECUTABLE is not set, the user is expected to run the tests manually
# before running the coverage target NAME
if (DEFINED Coverage_EXECUTABLE)
set(GCOVR_EXEC_TESTS_CMD ${Coverage_EXECUTABLE} ${Coverage_EXECUTABLE_ARGS})
endif ()
if(DEFINED Coverage_EXECUTABLE)
set(GCOVR_EXEC_TESTS_CMD
${Coverage_EXECUTABLE}
${Coverage_EXECUTABLE_ARGS}
)
endif()
# Create folder
if (DEFINED GCOVR_CREATE_FOLDER)
set(GCOVR_FOLDER_CMD ${CMAKE_COMMAND} -E make_directory ${GCOVR_CREATE_FOLDER})
endif ()
if(DEFINED GCOVR_CREATE_FOLDER)
set(GCOVR_FOLDER_CMD
${CMAKE_COMMAND}
-E
make_directory
${GCOVR_CREATE_FOLDER}
)
endif()
# Running gcovr
set(GCOVR_CMD
@@ -419,53 +498,95 @@ function (setup_target_for_coverage_gcovr)
${BASEDIR}
${GCOVR_ADDITIONAL_ARGS}
${GCOVR_EXCLUDE_ARGS}
--object-directory=${PROJECT_BINARY_DIR})
--object-directory=${PROJECT_BINARY_DIR}
)
if (CODE_COVERAGE_VERBOSE)
if(CODE_COVERAGE_VERBOSE)
message(STATUS "Executed command report")
if (NOT "${GCOVR_EXEC_TESTS_CMD}" STREQUAL "")
if(NOT "${GCOVR_EXEC_TESTS_CMD}" STREQUAL "")
message(STATUS "Command to run tests: ")
string(REPLACE ";" " " GCOVR_EXEC_TESTS_CMD_SPACED "${GCOVR_EXEC_TESTS_CMD}")
string(
REPLACE ";"
" "
GCOVR_EXEC_TESTS_CMD_SPACED
"${GCOVR_EXEC_TESTS_CMD}"
)
message(STATUS "${GCOVR_EXEC_TESTS_CMD_SPACED}")
endif ()
endif()
if (NOT "${GCOVR_FOLDER_CMD}" STREQUAL "")
if(NOT "${GCOVR_FOLDER_CMD}" STREQUAL "")
message(STATUS "Command to create a folder: ")
string(REPLACE ";" " " GCOVR_FOLDER_CMD_SPACED "${GCOVR_FOLDER_CMD}")
string(
REPLACE ";"
" "
GCOVR_FOLDER_CMD_SPACED
"${GCOVR_FOLDER_CMD}"
)
message(STATUS "${GCOVR_FOLDER_CMD_SPACED}")
endif ()
endif()
message(STATUS "Command to generate gcovr coverage data: ")
string(REPLACE ";" " " GCOVR_CMD_SPACED "${GCOVR_CMD}")
message(STATUS "${GCOVR_CMD_SPACED}")
endif ()
endif()
add_custom_target(${Coverage_NAME}
COMMAND ${GCOVR_EXEC_TESTS_CMD}
COMMAND ${GCOVR_FOLDER_CMD}
COMMAND ${GCOVR_CMD}
BYPRODUCTS ${GCOVR_OUTPUT_FILE}
WORKING_DIRECTORY ${PROJECT_BINARY_DIR}
DEPENDS ${Coverage_DEPENDENCIES}
VERBATIM # Protect arguments to commands
COMMENT "Running gcovr to produce code coverage report.")
add_custom_target(
${Coverage_NAME}
COMMAND ${GCOVR_EXEC_TESTS_CMD}
COMMAND ${GCOVR_FOLDER_CMD}
COMMAND ${GCOVR_CMD}
BYPRODUCTS ${GCOVR_OUTPUT_FILE}
WORKING_DIRECTORY ${PROJECT_BINARY_DIR}
DEPENDS ${Coverage_DEPENDENCIES}
VERBATIM # Protect arguments to commands
COMMENT "Running gcovr to produce code coverage report."
)
# Show info where to find the report
add_custom_command(TARGET ${Coverage_NAME} POST_BUILD COMMAND echo
COMMENT "Code coverage report saved in ${GCOVR_OUTPUT_FILE} formatted as ${Coverage_FORMAT}")
endfunction () # setup_target_for_coverage_gcovr
add_custom_command(
TARGET ${Coverage_NAME}
POST_BUILD
COMMAND echo
COMMENT
"Code coverage report saved in ${GCOVR_OUTPUT_FILE} formatted as ${Coverage_FORMAT}"
)
endfunction() # setup_target_for_coverage_gcovr
function (add_code_coverage_to_target name scope)
separate_arguments(COVERAGE_CXX_COMPILER_FLAGS NATIVE_COMMAND "${COVERAGE_CXX_COMPILER_FLAGS}")
separate_arguments(COVERAGE_C_COMPILER_FLAGS NATIVE_COMMAND "${COVERAGE_C_COMPILER_FLAGS}")
separate_arguments(COVERAGE_CXX_LINKER_FLAGS NATIVE_COMMAND "${COVERAGE_CXX_LINKER_FLAGS}")
separate_arguments(COVERAGE_C_LINKER_FLAGS NATIVE_COMMAND "${COVERAGE_C_LINKER_FLAGS}")
function(add_code_coverage_to_target name scope)
separate_arguments(
COVERAGE_CXX_COMPILER_FLAGS
NATIVE_COMMAND
"${COVERAGE_CXX_COMPILER_FLAGS}"
)
separate_arguments(
COVERAGE_C_COMPILER_FLAGS
NATIVE_COMMAND
"${COVERAGE_C_COMPILER_FLAGS}"
)
separate_arguments(
COVERAGE_CXX_LINKER_FLAGS
NATIVE_COMMAND
"${COVERAGE_CXX_LINKER_FLAGS}"
)
separate_arguments(
COVERAGE_C_LINKER_FLAGS
NATIVE_COMMAND
"${COVERAGE_C_LINKER_FLAGS}"
)
# Add compiler options to the target
target_compile_options(${name} ${scope} $<$<COMPILE_LANGUAGE:CXX>:${COVERAGE_CXX_COMPILER_FLAGS}>
$<$<COMPILE_LANGUAGE:C>:${COVERAGE_C_COMPILER_FLAGS}>)
target_compile_options(
${name}
${scope}
$<$<COMPILE_LANGUAGE:CXX>:${COVERAGE_CXX_COMPILER_FLAGS}>
$<$<COMPILE_LANGUAGE:C>:${COVERAGE_C_COMPILER_FLAGS}>
)
target_link_libraries(${name} ${scope} $<$<LINK_LANGUAGE:CXX>:${COVERAGE_CXX_LINKER_FLAGS}>
$<$<LINK_LANGUAGE:C>:${COVERAGE_C_LINKER_FLAGS}>)
endfunction () # add_code_coverage_to_target
target_link_libraries(
${name}
${scope}
$<$<LINK_LANGUAGE:CXX>:${COVERAGE_CXX_LINKER_FLAGS}>
$<$<LINK_LANGUAGE:C>:${COVERAGE_C_LINKER_FLAGS}>
)
endfunction() # add_code_coverage_to_target


@@ -14,20 +14,20 @@ set(is_gcc FALSE)
set(is_msvc FALSE)
set(is_xcode FALSE)
if (CMAKE_CXX_COMPILER_ID MATCHES ".*Clang") # Clang or AppleClang
if(CMAKE_CXX_COMPILER_ID MATCHES ".*Clang") # Clang or AppleClang
set(is_clang TRUE)
elseif (CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
set(is_gcc TRUE)
elseif (MSVC)
elseif(MSVC)
set(is_msvc TRUE)
else ()
else()
message(FATAL_ERROR "Unsupported C++ compiler: ${CMAKE_CXX_COMPILER_ID}")
endif ()
endif()
# Xcode generator detection
if (CMAKE_GENERATOR STREQUAL "Xcode")
if(CMAKE_GENERATOR STREQUAL "Xcode")
set(is_xcode TRUE)
endif ()
endif()
# --------------------------------------------------------------------
# Operating system detection
@@ -36,23 +36,23 @@ set(is_linux FALSE)
set(is_windows FALSE)
set(is_macos FALSE)
if (CMAKE_SYSTEM_NAME STREQUAL "Linux")
if(CMAKE_SYSTEM_NAME STREQUAL "Linux")
set(is_linux TRUE)
elseif (CMAKE_SYSTEM_NAME STREQUAL "Windows")
elseif(CMAKE_SYSTEM_NAME STREQUAL "Windows")
set(is_windows TRUE)
elseif (CMAKE_SYSTEM_NAME STREQUAL "Darwin")
elseif(CMAKE_SYSTEM_NAME STREQUAL "Darwin")
set(is_macos TRUE)
endif ()
endif()
# --------------------------------------------------------------------
# Architecture
# --------------------------------------------------------------------
set(is_amd64 FALSE)
set(is_arm64 FALSE)
if (CMAKE_SYSTEM_PROCESSOR MATCHES "x86_64|AMD64")
if(CMAKE_SYSTEM_PROCESSOR MATCHES "x86_64|AMD64")
set(is_amd64 TRUE)
elseif (CMAKE_SYSTEM_PROCESSOR MATCHES "aarch64|arm64|ARM64")
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "aarch64|arm64|ARM64")
set(is_arm64 TRUE)
else ()
else()
message(FATAL_ERROR "Unknown architecture: ${CMAKE_SYSTEM_PROCESSOR}")
endif ()
endif()

28 cmake/GitInfo.cmake Normal file

@@ -0,0 +1,28 @@
include_guard()
set(GIT_BUILD_BRANCH "")
set(GIT_COMMIT_HASH "")
find_package(Git)
if(NOT Git_FOUND)
message(WARNING "Git not found. Git branch and commit hash will be empty.")
return()
endif()
set(GIT_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/.git)
execute_process(
COMMAND
${GIT_EXECUTABLE} --git-dir=${GIT_DIRECTORY} rev-parse --abbrev-ref HEAD
OUTPUT_STRIP_TRAILING_WHITESPACE
OUTPUT_VARIABLE GIT_BUILD_BRANCH
)
execute_process(
COMMAND ${GIT_EXECUTABLE} --git-dir=${GIT_DIRECTORY} rev-parse HEAD
OUTPUT_STRIP_TRAILING_WHITESPACE
OUTPUT_VARIABLE GIT_COMMIT_HASH
)
message(STATUS "Git branch: ${GIT_BUILD_BRANCH}")
message(STATUS "Git commit hash: ${GIT_COMMIT_HASH}")


@@ -1,13 +1,22 @@
include(isolate_headers)
function (xrpl_add_test name)
function(xrpl_add_test name)
set(target ${PROJECT_NAME}.test.${name})
file(GLOB_RECURSE sources CONFIGURE_DEPENDS "${CMAKE_CURRENT_SOURCE_DIR}/${name}/*.cpp"
"${CMAKE_CURRENT_SOURCE_DIR}/${name}.cpp")
file(
GLOB_RECURSE sources
CONFIGURE_DEPENDS
"${CMAKE_CURRENT_SOURCE_DIR}/${name}/*.cpp"
"${CMAKE_CURRENT_SOURCE_DIR}/${name}.cpp"
)
add_executable(${target} ${ARGN} ${sources})
isolate_headers(${target} "${CMAKE_SOURCE_DIR}" "${CMAKE_SOURCE_DIR}/tests/${name}" PRIVATE)
isolate_headers(
${target}
"${CMAKE_SOURCE_DIR}"
"${CMAKE_SOURCE_DIR}/tests/${name}"
PRIVATE
)
add_test(NAME ${target} COMMAND ${target})
endfunction ()
endfunction()


@@ -14,31 +14,42 @@ include(XrplSanitizers)
# add a single global dependency on this interface lib
link_libraries(Xrpl::common)
# Respect CMAKE_POSITION_INDEPENDENT_CODE setting (may be set by Conan toolchain)
if (NOT DEFINED CMAKE_POSITION_INDEPENDENT_CODE)
if(NOT DEFINED CMAKE_POSITION_INDEPENDENT_CODE)
set(CMAKE_POSITION_INDEPENDENT_CODE ON)
endif ()
set_target_properties(common PROPERTIES INTERFACE_POSITION_INDEPENDENT_CODE ${CMAKE_POSITION_INDEPENDENT_CODE})
endif()
set_target_properties(
common
PROPERTIES
INTERFACE_POSITION_INDEPENDENT_CODE ${CMAKE_POSITION_INDEPENDENT_CODE}
)
set(CMAKE_CXX_EXTENSIONS OFF)
target_compile_definitions(
common
INTERFACE $<$<CONFIG:Debug>:DEBUG
_DEBUG>
#[===[
INTERFACE
$<$<CONFIG:Debug>:DEBUG
_DEBUG>
#[===[
NOTE: CMAKE release builds already have NDEBUG defined, so no need to add it
explicitly except for the special case of (profile ON) and (assert OFF).
Presumably this is because we don't want profile builds asserting unless
asserts were specifically requested.
]===]
$<$<AND:$<BOOL:${profile}>,$<NOT:$<BOOL:${assert}>>>:NDEBUG>
# TODO: Remove once we have migrated functions from OpenSSL 1.x to 3.x.
OPENSSL_SUPPRESS_DEPRECATED)
$<$<AND:$<BOOL:${profile}>,$<NOT:$<BOOL:${assert}>>>:NDEBUG>
# TODO: Remove once we have migrated functions from OpenSSL 1.x to 3.x.
OPENSSL_SUPPRESS_DEPRECATED
)
if (MSVC)
if(MSVC)
# remove existing exception flag since we set it to -EHa
string(REGEX REPLACE "[-/]EH[a-z]+" "" CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}")
foreach (var_ CMAKE_C_FLAGS_DEBUG CMAKE_C_FLAGS_RELEASE CMAKE_CXX_FLAGS_DEBUG CMAKE_CXX_FLAGS_RELEASE)
foreach(
var_
CMAKE_C_FLAGS_DEBUG
CMAKE_C_FLAGS_RELEASE
CMAKE_CXX_FLAGS_DEBUG
CMAKE_CXX_FLAGS_RELEASE
)
# also remove dynamic runtime
string(REGEX REPLACE "[-/]MD[d]*" " " ${var_} "${${var_}}")
@@ -46,117 +57,143 @@ if (MSVC)
string(REPLACE "/ZI" "/Zi" ${var_} "${${var_}}")
# omit debug info completely under CI (not needed)
if (is_ci)
if(is_ci)
string(REPLACE "/Zi" " " ${var_} "${${var_}}")
string(REPLACE "/Z7" " " ${var_} "${${var_}}")
endif ()
endforeach ()
endif()
endforeach()
target_compile_options(
common
INTERFACE # Increase object file max size
-bigobj
# Floating point behavior
-fp:precise
# __cdecl calling convention
-Gd
# Minimal rebuild: disabled
-Gm-
# Function level linking: disabled
-Gy-
# Multiprocessor compilation
-MP
# pragma omp: disabled
-openmp-
# No error reporting to Internet
-errorReport:none
# Suppress login banner
-nologo
# Disable signed/unsigned comparison warnings
-wd4018
# Disable float to int possible loss of data warnings
-wd4244
# Disable size_t to T possible loss of data warnings
-wd4267
# Disable C4800(int to bool performance)
-wd4800
# Decorated name length exceeded, name was truncated
-wd4503
$<$<COMPILE_LANGUAGE:CXX>:
-EHa
-GR
>
$<$<CONFIG:Release>:-Ox>
$<$<AND:$<COMPILE_LANGUAGE:CXX>,$<CONFIG:Debug>>:
-GS
-Zc:forScope
>
# static runtime
$<$<CONFIG:Debug>:-MTd>
$<$<NOT:$<CONFIG:Debug>>:-MT>
$<$<BOOL:${werr}>:-WX>)
-bigobj
# Floating point behavior
-fp:precise
# __cdecl calling convention
-Gd
# Minimal rebuild: disabled
-Gm-
# Function level linking: disabled
-Gy-
# Multiprocessor compilation
-MP
# pragma omp: disabled
-openmp-
# No error reporting to Internet
-errorReport:none
# Suppress login banner
-nologo
# Disable signed/unsigned comparison warnings
-wd4018
# Disable float to int possible loss of data warnings
-wd4244
# Disable size_t to T possible loss of data warnings
-wd4267
# Disable C4800(int to bool performance)
-wd4800
# Decorated name length exceeded, name was truncated
-wd4503
$<$<COMPILE_LANGUAGE:CXX>:
-EHa
-GR
>
$<$<CONFIG:Release>:-Ox>
$<$<AND:$<COMPILE_LANGUAGE:CXX>,$<CONFIG:Debug>>:
-GS
-Zc:forScope
>
# static runtime
$<$<CONFIG:Debug>:-MTd>
$<$<NOT:$<CONFIG:Debug>>:-MT>
$<$<BOOL:${werr}>:-WX>
)
target_compile_definitions(
common
INTERFACE _WIN32_WINNT=0x6000
_SCL_SECURE_NO_WARNINGS
_CRT_SECURE_NO_WARNINGS
WIN32_CONSOLE
WIN32_LEAN_AND_MEAN
NOMINMAX
# TODO: Resolve these warnings, don't just silence them
_SILENCE_ALL_CXX17_DEPRECATION_WARNINGS
$<$<AND:$<COMPILE_LANGUAGE:CXX>,$<CONFIG:Debug>>:_CRTDBG_MAP_ALLOC>)
INTERFACE
_WIN32_WINNT=0x6000
_SCL_SECURE_NO_WARNINGS
_CRT_SECURE_NO_WARNINGS
WIN32_CONSOLE
WIN32_LEAN_AND_MEAN
NOMINMAX
# TODO: Resolve these warnings, don't just silence them
_SILENCE_ALL_CXX17_DEPRECATION_WARNINGS
$<$<AND:$<COMPILE_LANGUAGE:CXX>,$<CONFIG:Debug>>:_CRTDBG_MAP_ALLOC>
)
target_link_libraries(common INTERFACE -errorreport:none -machine:X64)
else ()
else()
target_compile_options(
common
INTERFACE -Wall
-Wdeprecated
$<$<BOOL:${is_clang}>:-Wno-deprecated-declarations>
$<$<BOOL:${wextra}>:-Wextra
-Wno-unused-parameter>
$<$<BOOL:${werr}>:-Werror>
-fstack-protector
-Wno-sign-compare
-Wno-unused-but-set-variable
$<$<NOT:$<CONFIG:Debug>>:-fno-strict-aliasing>
# tweak gcc optimization for debug
$<$<AND:$<BOOL:${is_gcc}>,$<CONFIG:Debug>>:-O0>
# Add debug symbols to release config
$<$<CONFIG:Release>:-g>)
INTERFACE
-Wall
-Wdeprecated
$<$<BOOL:${is_clang}>:-Wno-deprecated-declarations>
$<$<BOOL:${wextra}>:-Wextra
-Wno-unused-parameter>
$<$<BOOL:${werr}>:-Werror>
-fstack-protector
-Wno-sign-compare
-Wno-unused-but-set-variable
$<$<NOT:$<CONFIG:Debug>>:-fno-strict-aliasing>
# tweak gcc optimization for debug
$<$<AND:$<BOOL:${is_gcc}>,$<CONFIG:Debug>>:-O0>
# Add debug symbols to release config
$<$<CONFIG:Release>:-g>
)
target_link_libraries(
common
INTERFACE -rdynamic
$<$<BOOL:${is_linux}>:-Wl,-z,relro,-z,now,--build-id>
# link to static libc/c++ iff: * static option set and * NOT APPLE (AppleClang does not support static
# libc/c++) and * NOT SANITIZERS (sanitizers typically don't work with static libc/c++)
$<$<AND:$<BOOL:${static}>,$<NOT:$<BOOL:${APPLE}>>,$<NOT:$<BOOL:${SANITIZERS_ENABLED}>>>:
-static-libstdc++
-static-libgcc
>)
endif ()
INTERFACE
-rdynamic
$<$<BOOL:${is_linux}>:-Wl,-z,relro,-z,now,--build-id>
# link to static libc/c++ iff: * static option set and * NOT APPLE (AppleClang does not support static
# libc/c++) and * NOT SANITIZERS (sanitizers typically don't work with static libc/c++)
$<$<AND:$<BOOL:${static}>,$<NOT:$<BOOL:${APPLE}>>,$<NOT:$<BOOL:${SANITIZERS_ENABLED}>>>:
-static-libstdc++
-static-libgcc
>
)
endif()
# Antithesis instrumentation will only be built and deployed using machines running Linux.
if (voidstar)
if (NOT CMAKE_BUILD_TYPE STREQUAL "Debug")
message(FATAL_ERROR "Antithesis instrumentation requires Debug build type, aborting...")
elseif (NOT is_linux)
message(FATAL_ERROR "Antithesis instrumentation requires Linux, aborting...")
elseif (NOT (is_clang AND CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL 16.0))
message(FATAL_ERROR "Antithesis instrumentation requires Clang version 16 or later, aborting...")
endif ()
endif ()
if(voidstar)
if(NOT CMAKE_BUILD_TYPE STREQUAL "Debug")
message(
FATAL_ERROR
"Antithesis instrumentation requires Debug build type, aborting..."
)
elseif(NOT is_linux)
message(
FATAL_ERROR
"Antithesis instrumentation requires Linux, aborting..."
)
elseif(
NOT (is_clang AND CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL 16.0)
)
message(
FATAL_ERROR
"Antithesis instrumentation requires Clang version 16 or later, aborting..."
)
endif()
endif()
if (use_mold)
if(use_mold)
# use mold linker if available
execute_process(COMMAND ${CMAKE_CXX_COMPILER} -fuse-ld=mold -Wl,--version ERROR_QUIET OUTPUT_VARIABLE LD_VERSION)
if ("${LD_VERSION}" MATCHES "mold")
execute_process(
COMMAND ${CMAKE_CXX_COMPILER} -fuse-ld=mold -Wl,--version
ERROR_QUIET
OUTPUT_VARIABLE LD_VERSION
)
if("${LD_VERSION}" MATCHES "mold")
target_link_libraries(common INTERFACE -fuse-ld=mold)
endif ()
endif()
unset(LD_VERSION)
elseif (use_gold AND is_gcc)
elseif(use_gold AND is_gcc)
# use gold linker if available
execute_process(COMMAND ${CMAKE_CXX_COMPILER} -fuse-ld=gold -Wl,--version ERROR_QUIET OUTPUT_VARIABLE LD_VERSION)
execute_process(
COMMAND ${CMAKE_CXX_COMPILER} -fuse-ld=gold -Wl,--version
ERROR_QUIET
OUTPUT_VARIABLE LD_VERSION
)
#[=========================================================[
NOTE: THE gold linker inserts -rpath as DT_RUNPATH by
default instead of DT_RPATH, so you might have slightly
@@ -170,31 +207,37 @@ elseif (use_gold AND is_gcc)
disabling would be to figure out all the settings
required to make gold play nicely with jemalloc.
#]=========================================================]
if (("${LD_VERSION}" MATCHES "GNU gold") AND (NOT jemalloc))
if(("${LD_VERSION}" MATCHES "GNU gold") AND (NOT jemalloc))
target_link_libraries(
common
INTERFACE -fuse-ld=gold
-Wl,--no-as-needed
#[=========================================================[
INTERFACE
-fuse-ld=gold
-Wl,--no-as-needed
#[=========================================================[
see https://bugs.launchpad.net/ubuntu/+source/eglibc/+bug/1253638/comments/5
DT_RUNPATH does not work great for transitive
dependencies (of which boost has a few) - so just
switch to DT_RPATH if doing dynamic linking with gold
#]=========================================================]
$<$<NOT:$<BOOL:${static}>>:-Wl,--disable-new-dtags>)
endif ()
$<$<NOT:$<BOOL:${static}>>:-Wl,--disable-new-dtags>
)
endif()
unset(LD_VERSION)
elseif (use_lld)
elseif(use_lld)
# use lld linker if available
execute_process(COMMAND ${CMAKE_CXX_COMPILER} -fuse-ld=lld -Wl,--version ERROR_QUIET OUTPUT_VARIABLE LD_VERSION)
if ("${LD_VERSION}" MATCHES "LLD")
execute_process(
COMMAND ${CMAKE_CXX_COMPILER} -fuse-ld=lld -Wl,--version
ERROR_QUIET
OUTPUT_VARIABLE LD_VERSION
)
if("${LD_VERSION}" MATCHES "LLD")
target_link_libraries(common INTERFACE -fuse-ld=lld)
endif ()
endif()
unset(LD_VERSION)
endif ()
endif()
if (assert)
foreach (var_ CMAKE_C_FLAGS_RELEASE CMAKE_CXX_FLAGS_RELEASE)
if(assert)
foreach(var_ CMAKE_C_FLAGS_RELEASE CMAKE_CXX_FLAGS_RELEASE)
string(REGEX REPLACE "[-/]DNDEBUG" "" ${var_} "${${var_}}")
endforeach ()
endif ()
endforeach()
endif()


@@ -1,52 +0,0 @@
include(CMakeFindDependencyMacro)
# need to represent system dependencies of the lib here
#[=========================================================[
Boost
#]=========================================================]
if (static OR APPLE OR MSVC)
set(Boost_USE_STATIC_LIBS ON)
endif ()
set(Boost_USE_MULTITHREADED ON)
if (static OR MSVC)
set(Boost_USE_STATIC_RUNTIME ON)
else ()
set(Boost_USE_STATIC_RUNTIME OFF)
endif ()
find_dependency(Boost
COMPONENTS
chrono
container
context
coroutine
date_time
filesystem
program_options
regex
system
thread)
#[=========================================================[
OpenSSL
#]=========================================================]
if (NOT DEFINED OPENSSL_ROOT_DIR)
if (DEFINED ENV{OPENSSL_ROOT})
set(OPENSSL_ROOT_DIR $ENV{OPENSSL_ROOT})
elseif (APPLE)
find_program(homebrew brew)
if (homebrew)
execute_process(COMMAND ${homebrew} --prefix openssl OUTPUT_VARIABLE OPENSSL_ROOT_DIR
OUTPUT_STRIP_TRAILING_WHITESPACE)
endif ()
endif ()
file(TO_CMAKE_PATH "${OPENSSL_ROOT_DIR}" OPENSSL_ROOT_DIR)
endif ()
if (static OR APPLE OR MSVC)
set(OPENSSL_USE_STATIC_LIBS ON)
endif ()
set(OPENSSL_MSVC_STATIC_RT ON)
find_dependency(OpenSSL REQUIRED)
find_dependency(ZLIB)
find_dependency(date)
if (TARGET ZLIB::ZLIB)
set_target_properties(OpenSSL::Crypto PROPERTIES INTERFACE_LINK_LIBRARIES ZLIB::ZLIB)
endif ()


@@ -10,23 +10,44 @@ include(target_protobuf_sources)
# so we just build them as a separate library.
add_library(xrpl.libpb)
set_target_properties(xrpl.libpb PROPERTIES UNITY_BUILD OFF)
target_protobuf_sources(xrpl.libpb xrpl/proto LANGUAGE cpp IMPORT_DIRS include/xrpl/proto
PROTOS include/xrpl/proto/xrpl.proto)
target_protobuf_sources(
xrpl.libpb
xrpl/proto
LANGUAGE cpp
IMPORT_DIRS include/xrpl/proto
PROTOS include/xrpl/proto/xrpl.proto
)
file(GLOB_RECURSE protos "include/xrpl/proto/org/*.proto")
target_protobuf_sources(xrpl.libpb xrpl/proto LANGUAGE cpp IMPORT_DIRS include/xrpl/proto PROTOS "${protos}")
target_protobuf_sources(
xrpl.libpb xrpl/proto
xrpl.libpb
xrpl/proto
LANGUAGE cpp
IMPORT_DIRS include/xrpl/proto
PROTOS "${protos}"
)
target_protobuf_sources(
xrpl.libpb
xrpl/proto
LANGUAGE grpc
IMPORT_DIRS include/xrpl/proto
PROTOS "${protos}"
PLUGIN protoc-gen-grpc=$<TARGET_FILE:gRPC::grpc_cpp_plugin>
GENERATE_EXTENSIONS .grpc.pb.h .grpc.pb.cc)
GENERATE_EXTENSIONS .grpc.pb.h .grpc.pb.cc
)
target_compile_options(
xrpl.libpb PUBLIC $<$<BOOL:${is_msvc}>:-wd4996> $<$<BOOL:${is_xcode}>: --system-header-prefix="google/protobuf"
-Wno-deprecated-dynamic-exception-spec >
PRIVATE $<$<BOOL:${is_msvc}>:-wd4065> $<$<NOT:$<BOOL:${is_msvc}>>:-Wno-deprecated-declarations>)
xrpl.libpb
PUBLIC
$<$<BOOL:${is_msvc}>:-wd4996>
$<$<BOOL:${is_xcode}>:
--system-header-prefix="google/protobuf"
-Wno-deprecated-dynamic-exception-spec
>
PRIVATE
$<$<BOOL:${is_msvc}>:-wd4065>
$<$<NOT:$<BOOL:${is_msvc}>>:-Wno-deprecated-declarations>
)
target_link_libraries(xrpl.libpb PUBLIC protobuf::libprotobuf gRPC::grpc++)
@@ -35,19 +56,21 @@ add_library(xrpl.imports.main INTERFACE)
target_link_libraries(
xrpl.imports.main
INTERFACE absl::random_random
date::date
ed25519::ed25519
LibArchive::LibArchive
OpenSSL::Crypto
Xrpl::boost
Xrpl::libs
Xrpl::opts
Xrpl::syslibs
secp256k1::secp256k1
xrpl.libpb
xxHash::xxhash
$<$<BOOL:${voidstar}>:antithesis-sdk-cpp>)
INTERFACE
absl::random_random
date::date
ed25519::ed25519
LibArchive::LibArchive
OpenSSL::Crypto
Xrpl::boost
Xrpl::libs
Xrpl::opts
Xrpl::syslibs
secp256k1::secp256k1
xrpl.libpb
xxHash::xxhash
$<$<BOOL:${voidstar}>:antithesis-sdk-cpp>
)
include(add_module)
include(target_link_modules)
@@ -56,6 +79,16 @@ include(target_link_modules)
add_module(xrpl beast)
target_link_libraries(xrpl.libxrpl.beast PUBLIC xrpl.imports.main)
include(GitInfo)
add_module(xrpl git)
target_compile_definitions(
xrpl.libxrpl.git
PRIVATE
GIT_COMMIT_HASH="${GIT_COMMIT_HASH}"
GIT_BUILD_BRANCH="${GIT_BUILD_BRANCH}"
)
target_link_libraries(xrpl.libxrpl.git PUBLIC xrpl.imports.main)
# Level 02
add_module(xrpl basics)
target_link_libraries(xrpl.libxrpl.basics PUBLIC xrpl.libxrpl.beast)
@@ -69,11 +102,17 @@ target_link_libraries(xrpl.libxrpl.crypto PUBLIC xrpl.libxrpl.basics)
# Level 04
add_module(xrpl protocol)
target_link_libraries(xrpl.libxrpl.protocol PUBLIC xrpl.libxrpl.crypto xrpl.libxrpl.json)
target_link_libraries(
xrpl.libxrpl.protocol
PUBLIC xrpl.libxrpl.crypto xrpl.libxrpl.git xrpl.libxrpl.json
)
# Level 05
add_module(xrpl core)
target_link_libraries(xrpl.libxrpl.core PUBLIC xrpl.libxrpl.basics xrpl.libxrpl.json xrpl.libxrpl.protocol)
target_link_libraries(
xrpl.libxrpl.core
PUBLIC xrpl.libxrpl.basics xrpl.libxrpl.json xrpl.libxrpl.protocol
)
# Level 06
add_module(xrpl resource)
@@ -81,22 +120,46 @@ target_link_libraries(xrpl.libxrpl.resource PUBLIC xrpl.libxrpl.protocol)
# Level 07
add_module(xrpl net)
target_link_libraries(xrpl.libxrpl.net PUBLIC xrpl.libxrpl.basics xrpl.libxrpl.json xrpl.libxrpl.protocol
xrpl.libxrpl.resource)
target_link_libraries(
xrpl.libxrpl.net
PUBLIC
xrpl.libxrpl.basics
xrpl.libxrpl.json
xrpl.libxrpl.protocol
xrpl.libxrpl.resource
)
add_module(xrpl nodestore)
target_link_libraries(xrpl.libxrpl.nodestore PUBLIC xrpl.libxrpl.basics xrpl.libxrpl.json xrpl.libxrpl.protocol)
target_link_libraries(
xrpl.libxrpl.nodestore
PUBLIC xrpl.libxrpl.basics xrpl.libxrpl.json xrpl.libxrpl.protocol
)
add_module(xrpl shamap)
target_link_libraries(xrpl.libxrpl.shamap PUBLIC xrpl.libxrpl.basics xrpl.libxrpl.crypto xrpl.libxrpl.protocol
xrpl.libxrpl.nodestore)
target_link_libraries(
xrpl.libxrpl.shamap
PUBLIC
xrpl.libxrpl.basics
xrpl.libxrpl.crypto
xrpl.libxrpl.protocol
xrpl.libxrpl.nodestore
)
add_module(xrpl rdb)
target_link_libraries(xrpl.libxrpl.rdb PUBLIC xrpl.libxrpl.basics xrpl.libxrpl.core)
target_link_libraries(
xrpl.libxrpl.rdb
PUBLIC xrpl.libxrpl.basics xrpl.libxrpl.core
)
add_module(xrpl server)
target_link_libraries(xrpl.libxrpl.server PUBLIC xrpl.libxrpl.protocol xrpl.libxrpl.core xrpl.libxrpl.rdb
xrpl.libxrpl.resource)
target_link_libraries(
xrpl.libxrpl.server
PUBLIC
xrpl.libxrpl.protocol
xrpl.libxrpl.core
xrpl.libxrpl.rdb
xrpl.libxrpl.resource
)
add_module(xrpl conditions)
target_link_libraries(xrpl.libxrpl.conditions PUBLIC xrpl.libxrpl.server)
@@ -104,19 +167,46 @@ target_link_libraries(xrpl.libxrpl.conditions PUBLIC xrpl.libxrpl.server)
add_module(xrpl ledger)
target_link_libraries(
xrpl.libxrpl.ledger
PUBLIC xrpl.libxrpl.basics
xrpl.libxrpl.json
xrpl.libxrpl.protocol
xrpl.libxrpl.rdb
xrpl.libxrpl.server
xrpl.libxrpl.conditions)
PUBLIC
xrpl.libxrpl.basics
xrpl.libxrpl.json
xrpl.libxrpl.protocol
xrpl.libxrpl.rdb
xrpl.libxrpl.server
xrpl.libxrpl.shamap
xrpl.libxrpl.conditions
)
add_module(xrpl tx)
target_link_libraries(xrpl.libxrpl.tx PUBLIC xrpl.libxrpl.ledger)
# Telemetry module — OpenTelemetry distributed tracing support.
# Sources: include/xrpl/telemetry/ (headers), src/libxrpl/telemetry/ (impl).
# When telemetry=ON, links the Conan-provided umbrella target
# opentelemetry-cpp::opentelemetry-cpp (individual component targets like
# ::api, ::sdk are not available in the Conan package).
add_module(xrpl telemetry)
target_link_libraries(
xrpl.libxrpl.telemetry
PUBLIC xrpl.libxrpl.basics xrpl.libxrpl.beast
)
if(telemetry)
target_link_libraries(
xrpl.libxrpl.telemetry
PUBLIC opentelemetry-cpp::opentelemetry-cpp
)
endif()
add_library(xrpl.libxrpl)
set_target_properties(xrpl.libxrpl PROPERTIES OUTPUT_NAME xrpl)
add_library(xrpl::libxrpl ALIAS xrpl.libxrpl)
file(GLOB_RECURSE sources CONFIGURE_DEPENDS "${CMAKE_CURRENT_SOURCE_DIR}/src/libxrpl/*.cpp")
file(
GLOB_RECURSE sources
CONFIGURE_DEPENDS
"${CMAKE_CURRENT_SOURCE_DIR}/src/libxrpl/*.cpp"
)
target_sources(xrpl.libxrpl PRIVATE ${sources})
target_link_modules(
@@ -127,6 +217,7 @@ target_link_modules(
conditions
core
crypto
git
json
ledger
net
@@ -135,7 +226,10 @@ target_link_modules(
rdb
resource
server
shamap)
shamap
telemetry
tx
)
# All headers in libxrpl are in modules.
# Uncomment this stanza if you have not yet moved new headers into a module.
@@ -146,34 +240,51 @@ target_link_modules(
# $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>
# $<INSTALL_INTERFACE:include>)
if (xrpld)
if(xrpld)
add_executable(xrpld)
if (tests)
if(tests)
target_compile_definitions(xrpld PUBLIC ENABLE_TESTS)
target_compile_definitions(xrpld PRIVATE UNIT_TEST_REFERENCE_FEE=${UNIT_TEST_REFERENCE_FEE})
endif ()
target_include_directories(xrpld PRIVATE $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/src>)
target_compile_definitions(
xrpld
PRIVATE UNIT_TEST_REFERENCE_FEE=${UNIT_TEST_REFERENCE_FEE}
)
endif()
target_include_directories(
xrpld
PRIVATE $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/src>
)
file(GLOB_RECURSE sources CONFIGURE_DEPENDS "${CMAKE_CURRENT_SOURCE_DIR}/src/xrpld/*.cpp")
file(
GLOB_RECURSE sources
CONFIGURE_DEPENDS
"${CMAKE_CURRENT_SOURCE_DIR}/src/xrpld/*.cpp"
)
target_sources(xrpld PRIVATE ${sources})
if (tests)
file(GLOB_RECURSE sources CONFIGURE_DEPENDS "${CMAKE_CURRENT_SOURCE_DIR}/src/test/*.cpp")
if(tests)
file(
GLOB_RECURSE sources
CONFIGURE_DEPENDS
"${CMAKE_CURRENT_SOURCE_DIR}/src/test/*.cpp"
)
target_sources(xrpld PRIVATE ${sources})
endif ()
endif()
target_link_libraries(xrpld Xrpl::boost Xrpl::opts Xrpl::libs xrpl.libxrpl)
exclude_if_included(xrpld)
# define a macro for tests that might need to
# be excluded or run differently in CI environment
if (is_ci)
if(is_ci)
target_compile_definitions(xrpld PRIVATE XRPL_RUNNING_IN_CI)
endif ()
endif()
if (voidstar)
if(voidstar)
target_compile_options(xrpld PRIVATE -fsanitize-coverage=trace-pc-guard)
# xrpld requires access to antithesis-sdk-cpp implementation file
# antithesis_instrumentation.h, which is not exported as INTERFACE
target_include_directories(xrpld PRIVATE ${CMAKE_SOURCE_DIR}/external/antithesis-sdk)
endif ()
endif ()
target_include_directories(
xrpld
PRIVATE ${CMAKE_SOURCE_DIR}/external/antithesis-sdk
)
endif()
endif()


@@ -2,14 +2,17 @@
coverage report target
#]===================================================================]
if (NOT coverage)
if(NOT coverage)
message(FATAL_ERROR "Code coverage not enabled! Aborting ...")
endif ()
endif()
if (CMAKE_CXX_COMPILER_ID MATCHES "MSVC")
message(WARNING "Code coverage on Windows is not supported, ignoring 'coverage' flag")
if(CMAKE_CXX_COMPILER_ID MATCHES "MSVC")
message(
WARNING
"Code coverage on Windows is not supported, ignoring 'coverage' flag"
)
return()
endif ()
endif()
include(ProcessorCount)
ProcessorCount(PROCESSOR_COUNT)
@@ -21,32 +24,30 @@ include(CodeCoverage)
# `CodeCoverage.cmake`)
set(GCOVR_ADDITIONAL_ARGS ${coverage_extra_args})
if (NOT GCOVR_ADDITIONAL_ARGS STREQUAL "")
if(NOT GCOVR_ADDITIONAL_ARGS STREQUAL "")
separate_arguments(GCOVR_ADDITIONAL_ARGS)
endif ()
endif()
list(APPEND
GCOVR_ADDITIONAL_ARGS
--exclude-throw-branches
--exclude-noncode-lines
--exclude-unreachable-branches
-s
-j
${PROCESSOR_COUNT})
list(
APPEND GCOVR_ADDITIONAL_ARGS
--exclude-throw-branches
--exclude-noncode-lines
--exclude-unreachable-branches
-s
-j
${PROCESSOR_COUNT}
)
setup_target_for_coverage_gcovr(
NAME
coverage
FORMAT
${coverage_format}
NAME coverage
FORMAT ${coverage_format}
EXCLUDE
"src/test"
"src/tests"
"include/xrpl/beast/test"
"include/xrpl/beast/unit_test"
"${CMAKE_BINARY_DIR}/pb-xrpl.libpb"
DEPENDENCIES
xrpld
xrpl.tests)
"src/test"
"src/tests"
"include/xrpl/beast/test"
"include/xrpl/beast/unit_test"
"${CMAKE_BINARY_DIR}/pb-xrpl.libpb"
DEPENDENCIES xrpld xrpl.tests
)
add_code_coverage_to_target(opts INTERFACE)


@@ -2,71 +2,90 @@
docs target (optional)
#]===================================================================]
if (NOT only_docs)
if(NOT only_docs)
return()
endif ()
endif()
find_package(Doxygen)
if (NOT TARGET Doxygen::doxygen)
if(NOT TARGET Doxygen::doxygen)
message(STATUS "doxygen executable not found -- skipping docs target")
return()
endif ()
endif()
set(doxygen_output_directory "${CMAKE_BINARY_DIR}/docs")
set(doxygen_include_path "${CMAKE_CURRENT_SOURCE_DIR}/src")
set(doxygen_index_file "${doxygen_output_directory}/html/index.html")
set(doxyfile "${CMAKE_CURRENT_SOURCE_DIR}/docs/Doxyfile")
file(GLOB_RECURSE
doxygen_input
docs/*.md
include/*.h
include/*.cpp
include/*.md
src/*.h
src/*.cpp
src/*.md
Builds/*.md
*.md)
file(
GLOB_RECURSE doxygen_input
docs/*.md
include/*.h
include/*.cpp
include/*.md
src/*.h
src/*.cpp
src/*.md
Builds/*.md
*.md
)
list(APPEND doxygen_input external/README.md)
set(dependencies "${doxygen_input}" "${doxyfile}")
function (verbose_find_path variable name)
function(verbose_find_path variable name)
# find_path sets a CACHE variable, so don't try using a "local" variable.
find_path(${variable} "${name}" ${ARGN})
if (NOT ${variable})
if(NOT ${variable})
message(NOTICE "could not find ${name}")
else ()
else()
message(STATUS "found ${name}: ${${variable}}/${name}")
endif ()
endfunction ()
endif()
endfunction()
verbose_find_path(doxygen_plantuml_jar_path plantuml.jar PATH_SUFFIXES share/plantuml)
verbose_find_path(
doxygen_plantuml_jar_path
plantuml.jar
PATH_SUFFIXES share/plantuml
)
verbose_find_path(doxygen_dot_path dot)
# https://en.cppreference.com/w/Cppreference:Archives
# https://stackoverflow.com/questions/60822559/how-to-move-a-file-download-from-configure-step-to-build-step
set(download_script "${CMAKE_BINARY_DIR}/docs/download-cppreference.cmake")
file(WRITE "${download_script}"
"file(DOWNLOAD \
file(
WRITE "${download_script}"
"file(DOWNLOAD \
https://github.com/PeterFeicht/cppreference-doc/releases/download/v20250209/html-book-20250209.zip \
${CMAKE_BINARY_DIR}/docs/cppreference.zip \
EXPECTED_HASH MD5=bda585f72fbca4b817b29a3d5746567b \
)\n \
execute_process( \
COMMAND \"${CMAKE_COMMAND}\" -E tar -xf cppreference.zip \
)\n")
)\n"
)
set(tagfile "${CMAKE_BINARY_DIR}/docs/cppreference-doxygen-web.tag.xml")
add_custom_command(OUTPUT "${tagfile}" COMMAND "${CMAKE_COMMAND}" -P "${download_script}"
WORKING_DIRECTORY "${CMAKE_BINARY_DIR}/docs")
add_custom_command(
OUTPUT "${tagfile}"
COMMAND "${CMAKE_COMMAND}" -P "${download_script}"
WORKING_DIRECTORY "${CMAKE_BINARY_DIR}/docs"
)
set(doxygen_tagfiles "${tagfile}=http://en.cppreference.com/w/")
add_custom_command(
OUTPUT "${doxygen_index_file}"
COMMAND "${CMAKE_COMMAND}" -E env "DOXYGEN_OUTPUT_DIRECTORY=${doxygen_output_directory}"
"DOXYGEN_INCLUDE_PATH=${doxygen_include_path}" "DOXYGEN_TAGFILES=${doxygen_tagfiles}"
"DOXYGEN_PLANTUML_JAR_PATH=${doxygen_plantuml_jar_path}" "DOXYGEN_DOT_PATH=${doxygen_dot_path}"
"${DOXYGEN_EXECUTABLE}" "${doxyfile}"
COMMAND
"${CMAKE_COMMAND}" -E env
"DOXYGEN_OUTPUT_DIRECTORY=${doxygen_output_directory}"
"DOXYGEN_INCLUDE_PATH=${doxygen_include_path}"
"DOXYGEN_TAGFILES=${doxygen_tagfiles}"
"DOXYGEN_PLANTUML_JAR_PATH=${doxygen_plantuml_jar_path}"
"DOXYGEN_DOT_PATH=${doxygen_dot_path}" "${DOXYGEN_EXECUTABLE}"
"${doxyfile}"
WORKING_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}"
DEPENDS "${dependencies}" "${tagfile}")
add_custom_target(docs DEPENDS "${doxygen_index_file}" SOURCES "${dependencies}")
DEPENDS "${dependencies}" "${tagfile}"
)
add_custom_target(
docs
DEPENDS "${doxygen_index_file}"
SOURCES "${dependencies}"
)


@@ -2,74 +2,38 @@
install stuff
#]===================================================================]
include(create_symbolic_link)
include(GNUInstallDirs)
# If no suffix is defined for executables (e.g. Windows uses .exe but Linux
# and macOS use none), then explicitly set it to the empty string.
if (NOT DEFINED suffix)
set(suffix "")
endif ()
if(is_root_project AND TARGET xrpld)
install(
TARGETS xrpld
RUNTIME DESTINATION "${CMAKE_INSTALL_BINDIR}" COMPONENT runtime
)
install(TARGETS common
opts
xrpl_boost
xrpl_libs
xrpl_syslibs
xrpl.imports.main
xrpl.libpb
xrpl.libxrpl
xrpl.libxrpl.basics
xrpl.libxrpl.beast
xrpl.libxrpl.conditions
xrpl.libxrpl.core
xrpl.libxrpl.crypto
xrpl.libxrpl.json
xrpl.libxrpl.rdb
xrpl.libxrpl.ledger
xrpl.libxrpl.net
xrpl.libxrpl.nodestore
xrpl.libxrpl.protocol
xrpl.libxrpl.resource
xrpl.libxrpl.server
xrpl.libxrpl.shamap
antithesis-sdk-cpp
EXPORT XrplExports
LIBRARY DESTINATION lib
ARCHIVE DESTINATION lib
RUNTIME DESTINATION bin
INCLUDES
DESTINATION include)
install(
FILES "${CMAKE_CURRENT_SOURCE_DIR}/cfg/xrpld-example.cfg"
DESTINATION "${CMAKE_INSTALL_SYSCONFDIR}/xrpld"
RENAME xrpld.cfg
COMPONENT runtime
)
install(DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}/include/xrpl" DESTINATION "${CMAKE_INSTALL_INCLUDEDIR}")
install(
FILES "${CMAKE_CURRENT_SOURCE_DIR}/cfg/validators-example.txt"
DESTINATION "${CMAKE_INSTALL_SYSCONFDIR}/xrpld"
RENAME validators.txt
COMPONENT runtime
)
endif()
install(EXPORT XrplExports FILE XrplTargets.cmake NAMESPACE Xrpl:: DESTINATION lib/cmake/xrpl)
include(CMakePackageConfigHelpers)
write_basic_package_version_file(XrplConfigVersion.cmake VERSION ${xrpld_version} COMPATIBILITY SameMajorVersion)
install(
TARGETS xrpl.libpb xrpl.libxrpl
LIBRARY DESTINATION "${CMAKE_INSTALL_LIBDIR}" COMPONENT development
ARCHIVE DESTINATION "${CMAKE_INSTALL_LIBDIR}" COMPONENT development
RUNTIME DESTINATION "${CMAKE_INSTALL_BINDIR}" COMPONENT development
)
if (is_root_project AND TARGET xrpld)
install(TARGETS xrpld RUNTIME DESTINATION bin)
set_target_properties(xrpld PROPERTIES INSTALL_RPATH_USE_LINK_PATH ON)
# sample configs should not overwrite existing files
# install if-not-exists workaround as suggested by
# https://cmake.org/Bug/view.php?id=12646
install(CODE "
macro (copy_if_not_exists SRC DEST NEWNAME)
if (NOT EXISTS \"\$ENV{DESTDIR}\${CMAKE_INSTALL_PREFIX}/\${DEST}/\${NEWNAME}\")
file (INSTALL FILE_PERMISSIONS OWNER_READ OWNER_WRITE DESTINATION \"\${CMAKE_INSTALL_PREFIX}/\${DEST}\" FILES \"\${SRC}\" RENAME \"\${NEWNAME}\")
else ()
message (\"-- Skipping : \$ENV{DESTDIR}\${CMAKE_INSTALL_PREFIX}/\${DEST}/\${NEWNAME}\")
endif ()
endmacro()
copy_if_not_exists(\"${CMAKE_CURRENT_SOURCE_DIR}/cfg/xrpld-example.cfg\" etc xrpld.cfg)
copy_if_not_exists(\"${CMAKE_CURRENT_SOURCE_DIR}/cfg/validators-example.txt\" etc validators.txt)
")
install(CODE "
set(CMAKE_MODULE_PATH \"${CMAKE_MODULE_PATH}\")
include(create_symbolic_link)
create_symbolic_link(xrpld${suffix} \
\$ENV{DESTDIR}\${CMAKE_INSTALL_PREFIX}/${CMAKE_INSTALL_BINDIR}/rippled${suffix})
")
endif ()
install(FILES ${CMAKE_CURRENT_SOURCE_DIR}/cmake/XrplConfig.cmake ${CMAKE_CURRENT_BINARY_DIR}/XrplConfigVersion.cmake
DESTINATION lib/cmake/xrpl)
install(
DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}/include/xrpl"
DESTINATION "${CMAKE_INSTALL_INCLUDEDIR}"
COMPONENT development
)


@@ -5,44 +5,55 @@
include(CompilationEnv)
# Set defaults for optional variables to avoid uninitialized variable warnings
if (NOT DEFINED voidstar)
if(NOT DEFINED voidstar)
set(voidstar OFF)
endif ()
endif()
add_library(opts INTERFACE)
add_library(Xrpl::opts ALIAS opts)
target_compile_definitions(
opts
INTERFACE BOOST_ASIO_DISABLE_HANDLER_TYPE_REQUIREMENTS
BOOST_ASIO_USE_TS_EXECUTOR_AS_DEFAULT
BOOST_CONTAINER_FWD_BAD_DEQUE
HAS_UNCAUGHT_EXCEPTIONS=1
$<$<BOOL:${boost_show_deprecated}>:
BOOST_ASIO_NO_DEPRECATED
BOOST_FILESYSTEM_NO_DEPRECATED
>
$<$<NOT:$<BOOL:${boost_show_deprecated}>>:
BOOST_COROUTINES_NO_DEPRECATION_WARNING
BOOST_BEAST_ALLOW_DEPRECATED
BOOST_FILESYSTEM_DEPRECATED
>
$<$<BOOL:${beast_no_unit_test_inline}>:BEAST_NO_UNIT_TEST_INLINE=1>
$<$<BOOL:${beast_disable_autolink}>:BEAST_DONT_AUTOLINK_TO_WIN32_LIBRARIES=1>
$<$<BOOL:${single_io_service_thread}>:XRPL_SINGLE_IO_SERVICE_THREAD=1>
$<$<BOOL:${voidstar}>:ENABLE_VOIDSTAR>)
INTERFACE
BOOST_ASIO_DISABLE_HANDLER_TYPE_REQUIREMENTS
BOOST_ASIO_USE_TS_EXECUTOR_AS_DEFAULT
BOOST_CONTAINER_FWD_BAD_DEQUE
HAS_UNCAUGHT_EXCEPTIONS=1
$<$<BOOL:${boost_show_deprecated}>:
BOOST_ASIO_NO_DEPRECATED
BOOST_FILESYSTEM_NO_DEPRECATED
>
$<$<NOT:$<BOOL:${boost_show_deprecated}>>:
BOOST_COROUTINES_NO_DEPRECATION_WARNING
BOOST_BEAST_ALLOW_DEPRECATED
BOOST_FILESYSTEM_DEPRECATED
>
$<$<BOOL:${beast_no_unit_test_inline}>:BEAST_NO_UNIT_TEST_INLINE=1>
$<$<BOOL:${beast_disable_autolink}>:BEAST_DONT_AUTOLINK_TO_WIN32_LIBRARIES=1>
$<$<BOOL:${single_io_service_thread}>:XRPL_SINGLE_IO_SERVICE_THREAD=1>
$<$<BOOL:${voidstar}>:ENABLE_VOIDSTAR>
)
target_compile_options(
opts
INTERFACE $<$<AND:$<BOOL:${is_gcc}>,$<COMPILE_LANGUAGE:CXX>>:-Wsuggest-override>
$<$<BOOL:${is_gcc}>:-Wno-maybe-uninitialized> $<$<BOOL:${perf}>:-fno-omit-frame-pointer>
$<$<BOOL:${profile}>:-pg> $<$<AND:$<BOOL:${is_gcc}>,$<BOOL:${profile}>>:-p>)
INTERFACE
$<$<AND:$<BOOL:${is_gcc}>,$<COMPILE_LANGUAGE:CXX>>:-Wsuggest-override>
$<$<BOOL:${is_gcc}>:-Wno-maybe-uninitialized>
$<$<BOOL:${perf}>:-fno-omit-frame-pointer>
$<$<BOOL:${profile}>:-pg>
$<$<AND:$<BOOL:${is_gcc}>,$<BOOL:${profile}>>:-p>
)
target_link_libraries(opts INTERFACE $<$<BOOL:${profile}>:-pg> $<$<AND:$<BOOL:${is_gcc}>,$<BOOL:${profile}>>:-p>)
target_link_libraries(
opts
INTERFACE
$<$<BOOL:${profile}>:-pg>
$<$<AND:$<BOOL:${is_gcc}>,$<BOOL:${profile}>>:-p>
)
if (jemalloc)
if(jemalloc)
find_package(jemalloc REQUIRED)
target_compile_definitions(opts INTERFACE PROFILE_JEMALLOC)
target_link_libraries(opts INTERFACE jemalloc::jemalloc)
endif ()
endif()
#[===================================================================[
xrpld transitive library deps via an interface library
@@ -52,31 +63,33 @@ add_library(xrpl_syslibs INTERFACE)
add_library(Xrpl::syslibs ALIAS xrpl_syslibs)
target_link_libraries(
xrpl_syslibs
INTERFACE $<$<BOOL:${is_msvc}>:
legacy_stdio_definitions.lib
Shlwapi
kernel32
user32
gdi32
winspool
comdlg32
advapi32
shell32
ole32
oleaut32
uuid
odbc32
odbccp32
crypt32
>
$<$<NOT:$<BOOL:${is_msvc}>>:dl>
$<$<NOT:$<OR:$<BOOL:${is_msvc}>,$<BOOL:${is_macos}>>>:rt>)
INTERFACE
$<$<BOOL:${is_msvc}>:
legacy_stdio_definitions.lib
Shlwapi
kernel32
user32
gdi32
winspool
comdlg32
advapi32
shell32
ole32
oleaut32
uuid
odbc32
odbccp32
crypt32
>
$<$<NOT:$<BOOL:${is_msvc}>>:dl>
$<$<NOT:$<OR:$<BOOL:${is_msvc}>,$<BOOL:${is_macos}>>>:rt>
)
if (NOT is_msvc)
if(NOT is_msvc)
set(THREADS_PREFER_PTHREAD_FLAG ON)
find_package(Threads)
target_link_libraries(xrpl_syslibs INTERFACE Threads::Threads)
endif ()
endif()
add_library(xrpl_libs INTERFACE)
add_library(Xrpl::libs ALIAS xrpl_libs)


@@ -44,23 +44,26 @@ include(CompilationEnv)
# Read environment variable
set(SANITIZERS "")
if (DEFINED ENV{SANITIZERS})
if(DEFINED ENV{SANITIZERS})
set(SANITIZERS "$ENV{SANITIZERS}")
endif ()
endif()
# Set SANITIZERS_ENABLED flag for use in other modules
if (SANITIZERS MATCHES "address|thread|undefinedbehavior")
if(SANITIZERS MATCHES "address|thread|undefinedbehavior")
set(SANITIZERS_ENABLED TRUE)
else ()
else()
set(SANITIZERS_ENABLED FALSE)
return()
endif ()
endif()
# Sanitizers are not supported on Windows/MSVC
if (is_msvc)
message(FATAL_ERROR "Sanitizers are not supported on Windows/MSVC. "
"Please unset the SANITIZERS environment variable.")
endif ()
if(is_msvc)
message(
FATAL_ERROR
"Sanitizers are not supported on Windows/MSVC. "
"Please unset the SANITIZERS environment variable."
)
endif()
message(STATUS "Configuring sanitizers: ${SANITIZERS}")
@@ -74,24 +77,30 @@ set(san_list "${SANITIZERS}")
string(REPLACE "," ";" san_list "${san_list}")
separate_arguments(san_list)
foreach (san IN LISTS san_list)
if (san STREQUAL "address")
foreach(san IN LISTS san_list)
if(san STREQUAL "address")
set(enable_asan TRUE)
elseif (san STREQUAL "thread")
elseif(san STREQUAL "thread")
set(enable_tsan TRUE)
elseif (san STREQUAL "undefinedbehavior")
elseif(san STREQUAL "undefinedbehavior")
set(enable_ubsan TRUE)
else ()
message(FATAL_ERROR "Unsupported sanitizer type: ${san}"
"Supported: address, thread, undefinedbehavior and their combinations.")
endif ()
endforeach ()
else()
message(
FATAL_ERROR
"Unsupported sanitizer type: ${san}"
"Supported: address, thread, undefinedbehavior and their combinations."
)
endif()
endforeach()
# Validate sanitizer compatibility
if (enable_asan AND enable_tsan)
message(FATAL_ERROR "AddressSanitizer and ThreadSanitizer are incompatible and cannot be enabled simultaneously. "
"Use 'address' or 'thread', optionally with 'undefinedbehavior'.")
endif ()
if(enable_asan AND enable_tsan)
message(
FATAL_ERROR
"AddressSanitizer and ThreadSanitizer are incompatible and cannot be enabled simultaneously. "
"Use 'address' or 'thread', optionally with 'undefinedbehavior'."
)
endif()
# Frame pointer is required for meaningful stack traces. Sanitizers recommend minimum of -O1 for reasonable performance
set(SANITIZERS_COMPILE_FLAGS "-fno-omit-frame-pointer" "-O1")
@@ -99,66 +108,79 @@ set(SANITIZERS_COMPILE_FLAGS "-fno-omit-frame-pointer" "-O1")
# Build the sanitizer flags list
set(SANITIZER_TYPES)
if (enable_asan)
if(enable_asan)
list(APPEND SANITIZER_TYPES "address")
elseif (enable_tsan)
elseif(enable_tsan)
list(APPEND SANITIZER_TYPES "thread")
endif ()
endif()
if (enable_ubsan)
if(enable_ubsan)
# UB sanitizer flags
list(APPEND SANITIZER_TYPES "undefined" "float-divide-by-zero")
if (is_clang)
if(is_clang)
# Clang supports additional UB checks. More info here
# https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html
list(APPEND SANITIZER_TYPES "unsigned-integer-overflow")
endif ()
endif ()
endif()
endif()
# Configure code model for GCC on amd64 Use large code model for ASAN to avoid relocation errors Use medium code model
# for TSAN (large is not compatible with TSAN)
set(SANITIZERS_RELOCATION_FLAGS)
# Compiler-specific configuration
if (is_gcc)
if(is_gcc)
# Disable mold, gold and lld linkers for GCC with sanitizers Use default linker (bfd/ld) which is more lenient with
# mixed code models This is needed since the size of instrumented binary exceeds the limits set by mold, lld and
# gold linkers
set(use_mold OFF CACHE BOOL "Use mold linker" FORCE)
set(use_gold OFF CACHE BOOL "Use gold linker" FORCE)
set(use_lld OFF CACHE BOOL "Use lld linker" FORCE)
message(STATUS " Disabled mold, gold, and lld linkers for GCC with sanitizers")
message(
STATUS
" Disabled mold, gold, and lld linkers for GCC with sanitizers"
)
# Suppress false positive warnings in GCC with stringop-overflow
list(APPEND SANITIZERS_COMPILE_FLAGS "-Wno-stringop-overflow")
if (is_amd64 AND enable_asan)
if(is_amd64 AND enable_asan)
message(STATUS " Using large code model (-mcmodel=large)")
list(APPEND SANITIZERS_COMPILE_FLAGS "-mcmodel=large")
list(APPEND SANITIZERS_RELOCATION_FLAGS "-mcmodel=large")
elseif (enable_tsan)
elseif(enable_tsan)
# GCC doesn't support atomic_thread_fence with tsan. Suppress warnings.
list(APPEND SANITIZERS_COMPILE_FLAGS "-Wno-tsan")
message(STATUS " Using medium code model (-mcmodel=medium)")
list(APPEND SANITIZERS_COMPILE_FLAGS "-mcmodel=medium")
list(APPEND SANITIZERS_RELOCATION_FLAGS "-mcmodel=medium")
endif ()
endif()
# Join sanitizer flags with commas for -fsanitize option
list(JOIN SANITIZER_TYPES "," SANITIZER_TYPES_STR)
# Add sanitizer to compile and link flags
list(APPEND SANITIZERS_COMPILE_FLAGS "-fsanitize=${SANITIZER_TYPES_STR}")
set(SANITIZERS_LINK_FLAGS "${SANITIZERS_RELOCATION_FLAGS}" "-fsanitize=${SANITIZER_TYPES_STR}")
elseif (is_clang)
set(SANITIZERS_LINK_FLAGS
"${SANITIZERS_RELOCATION_FLAGS}"
"-fsanitize=${SANITIZER_TYPES_STR}"
)
elseif(is_clang)
# Add ignorelist for Clang (GCC doesn't support this) Use CMAKE_SOURCE_DIR to get the path to the ignorelist
set(IGNORELIST_PATH "${CMAKE_SOURCE_DIR}/sanitizers/suppressions/sanitizer-ignorelist.txt")
if (NOT EXISTS "${IGNORELIST_PATH}")
message(FATAL_ERROR "Sanitizer ignorelist not found: ${IGNORELIST_PATH}")
endif ()
set(IGNORELIST_PATH
"${CMAKE_SOURCE_DIR}/sanitizers/suppressions/sanitizer-ignorelist.txt"
)
if(NOT EXISTS "${IGNORELIST_PATH}")
message(
FATAL_ERROR
"Sanitizer ignorelist not found: ${IGNORELIST_PATH}"
)
endif()
list(APPEND SANITIZERS_COMPILE_FLAGS "-fsanitize-ignorelist=${IGNORELIST_PATH}")
list(
APPEND SANITIZERS_COMPILE_FLAGS
"-fsanitize-ignorelist=${IGNORELIST_PATH}"
)
message(STATUS " Using sanitizer ignorelist: ${IGNORELIST_PATH}")
# Join sanitizer flags with commas for -fsanitize option
@@ -167,31 +189,35 @@ elseif (is_clang)
# Add sanitizer to compile and link flags
list(APPEND SANITIZERS_COMPILE_FLAGS "-fsanitize=${SANITIZER_TYPES_STR}")
set(SANITIZERS_LINK_FLAGS "-fsanitize=${SANITIZER_TYPES_STR}")
endif ()
endif()
message(STATUS " Compile flags: ${SANITIZERS_COMPILE_FLAGS}")
message(STATUS " Link flags: ${SANITIZERS_LINK_FLAGS}")
# Apply the sanitizer flags to the 'common' interface library This is the same library used by XrplCompiler.cmake
target_compile_options(common INTERFACE $<$<COMPILE_LANGUAGE:CXX>:${SANITIZERS_COMPILE_FLAGS}>
$<$<COMPILE_LANGUAGE:C>:${SANITIZERS_COMPILE_FLAGS}>)
target_compile_options(
common
INTERFACE
$<$<COMPILE_LANGUAGE:CXX>:${SANITIZERS_COMPILE_FLAGS}>
$<$<COMPILE_LANGUAGE:C>:${SANITIZERS_COMPILE_FLAGS}>
)
# Apply linker flags
target_link_options(common INTERFACE ${SANITIZERS_LINK_FLAGS})
# Define SANITIZERS macro for BuildInfo.cpp
set(sanitizers_list)
if (enable_asan)
if(enable_asan)
list(APPEND sanitizers_list "ASAN")
endif ()
if (enable_tsan)
endif()
if(enable_tsan)
list(APPEND sanitizers_list "TSAN")
endif ()
if (enable_ubsan)
endif()
if(enable_ubsan)
list(APPEND sanitizers_list "UBSAN")
endif ()
endif()
if (sanitizers_list)
if(sanitizers_list)
list(JOIN sanitizers_list "." sanitizers_str)
target_compile_definitions(common INTERFACE SANITIZERS=${sanitizers_str})
endif ()
endif()


@@ -7,38 +7,56 @@ include(CompilationEnv)
get_property(is_multiconfig GLOBAL PROPERTY GENERATOR_IS_MULTI_CONFIG)
set(CMAKE_CONFIGURATION_TYPES "Debug;Release" CACHE STRING "" FORCE)
if (NOT is_multiconfig)
if (NOT CMAKE_BUILD_TYPE)
if(NOT is_multiconfig)
if(NOT CMAKE_BUILD_TYPE)
message(STATUS "Build type not specified - defaulting to Release")
set(CMAKE_BUILD_TYPE Release CACHE STRING "build type" FORCE)
elseif (NOT (CMAKE_BUILD_TYPE STREQUAL Debug OR CMAKE_BUILD_TYPE STREQUAL Release))
elseif(
NOT (CMAKE_BUILD_TYPE STREQUAL Debug OR CMAKE_BUILD_TYPE STREQUAL Release)
)
# for simplicity, these are the only two config types we care about. Limiting the build types simplifies dealing
# with external project builds especially
message(FATAL_ERROR " *** Only Debug or Release build types are currently supported ***")
endif ()
endif ()
message(
FATAL_ERROR
" *** Only Debug or Release build types are currently supported ***"
)
endif()
endif()
if (is_clang) # both Clang and AppleClang
if ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "Clang" AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS 16.0)
if(is_clang) # both Clang and AppleClang
if(
"${CMAKE_CXX_COMPILER_ID}" STREQUAL "Clang"
AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS 16.0
)
message(FATAL_ERROR "This project requires clang 16 or later")
endif ()
elseif (is_gcc)
if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 12.0)
endif()
elseif(is_gcc)
if(CMAKE_CXX_COMPILER_VERSION VERSION_LESS 12.0)
message(FATAL_ERROR "This project requires GCC 12 or later")
endif ()
endif ()
endif()
endif()
# check for in-source build and fail
if ("${CMAKE_CURRENT_SOURCE_DIR}" STREQUAL "${CMAKE_BINARY_DIR}")
message(FATAL_ERROR "Builds (in-source) are not allowed in "
"${CMAKE_CURRENT_SOURCE_DIR}. Please remove CMakeCache.txt and the CMakeFiles "
"directory from ${CMAKE_CURRENT_SOURCE_DIR} and try building in a separate directory.")
endif ()
if("${CMAKE_CURRENT_SOURCE_DIR}" STREQUAL "${CMAKE_BINARY_DIR}")
message(
FATAL_ERROR
"Builds (in-source) are not allowed in "
"${CMAKE_CURRENT_SOURCE_DIR}. Please remove CMakeCache.txt and the CMakeFiles "
"directory from ${CMAKE_CURRENT_SOURCE_DIR} and try building in a separate directory."
)
endif()
if (MSVC AND CMAKE_GENERATOR_PLATFORM STREQUAL "Win32")
if(MSVC AND CMAKE_GENERATOR_PLATFORM STREQUAL "Win32")
message(FATAL_ERROR "Visual Studio 32-bit build is not supported.")
endif ()
endif()
if (APPLE AND NOT HOMEBREW)
if(voidstar AND NOT is_amd64)
message(
FATAL_ERROR
"The voidstar library only supported on amd64/x86_64. Detected archictecture was: ${CMAKE_SYSTEM_PROCESSOR}"
)
endif()
if(APPLE AND NOT HOMEBREW)
find_program(HOMEBREW brew)
endif ()
endif()


@@ -5,59 +5,67 @@
include(CompilationEnv)
set(is_ci FALSE)
if (DEFINED ENV{CI})
if ("$ENV{CI}" STREQUAL "true")
if(DEFINED ENV{CI})
if("$ENV{CI}" STREQUAL "true")
set(is_ci TRUE)
endif ()
endif ()
endif()
endif()
get_directory_property(has_parent PARENT_DIRECTORY)
if (has_parent)
if(has_parent)
set(is_root_project OFF)
else ()
else()
set(is_root_project ON)
endif ()
endif()
option(assert "Enables asserts, even in release builds" OFF)
option(xrpld "Build xrpld" ON)
option(tests "Build tests" ON)
if (tests)
if(tests)
# This setting allows making a separate workflow to test fees other than default 10
if (NOT UNIT_TEST_REFERENCE_FEE)
if(NOT UNIT_TEST_REFERENCE_FEE)
set(UNIT_TEST_REFERENCE_FEE "10" CACHE STRING "")
endif ()
endif ()
endif()
endif()
option(unity "Creates a build using UNITY support in cmake." OFF)
if (unity)
if (NOT is_ci)
if(unity)
if(NOT is_ci)
set(CMAKE_UNITY_BUILD_BATCH_SIZE 15 CACHE STRING "")
endif ()
endif()
set(CMAKE_UNITY_BUILD ON CACHE BOOL "Do a unity build")
endif ()
endif()
if (is_clang AND is_linux)
if(is_clang AND is_linux)
option(voidstar "Enable Antithesis instrumentation." OFF)
endif ()
endif()
if (is_gcc OR is_clang)
if(is_gcc OR is_clang)
include(ProcessorCount)
ProcessorCount(PROCESSOR_COUNT)
option(coverage "Generates coverage info." OFF)
option(profile "Add profiling flags" OFF)
set(coverage_format "html-details" CACHE STRING "Output format of the coverage report.")
set(coverage_extra_args "" CACHE STRING "Additional arguments to pass to gcovr.")
set(coverage_format
"html-details"
CACHE STRING
"Output format of the coverage report."
)
set(coverage_extra_args
""
CACHE STRING
"Additional arguments to pass to gcovr."
)
option(wextra "compile with extra gcc/clang warnings enabled" ON)
else ()
else()
set(profile OFF CACHE BOOL "gcc/clang only" FORCE)
set(coverage OFF CACHE BOOL "gcc/clang only" FORCE)
set(wextra OFF CACHE BOOL "gcc/clang only" FORCE)
endif ()
endif()
if (is_linux AND NOT SANITIZER)
if(is_linux AND NOT SANITIZER)
option(BUILD_SHARED_LIBS "build shared xrpl libraries" OFF)
option(static "link protobuf, openssl, libc++, and boost statically" ON)
option(perf "Enables flags that assist with perf recording" OFF)
@@ -65,50 +73,83 @@ if (is_linux AND NOT SANITIZER)
option(use_mold "enables detection of mold (binutils) linker" ON)
# Set a default value for the log flag based on the build type. This provides a sensible default (on for debug, off
# for release) while still allowing the user to override it for any build.
if (CMAKE_BUILD_TYPE STREQUAL "Debug")
if(CMAKE_BUILD_TYPE STREQUAL "Debug")
set(TRUNCATED_LOGS_DEFAULT ON)
else ()
else()
set(TRUNCATED_LOGS_DEFAULT OFF)
endif ()
option(TRUNCATED_THREAD_NAME_LOGS "Show warnings about truncated thread names on Linux." ${TRUNCATED_LOGS_DEFAULT})
if (TRUNCATED_THREAD_NAME_LOGS)
endif()
option(
TRUNCATED_THREAD_NAME_LOGS
"Show warnings about truncated thread names on Linux."
${TRUNCATED_LOGS_DEFAULT}
)
if(TRUNCATED_THREAD_NAME_LOGS)
add_compile_definitions(TRUNCATED_THREAD_NAME_LOGS)
endif ()
else ()
endif()
else()
# we are not ready to allow shared-libs on windows because it would require export declarations. On macos it's more
# feasible, but static openssl produces odd linker errors, thus we disable shared lib builds for now.
set(BUILD_SHARED_LIBS OFF CACHE BOOL "build shared xrpl libraries - OFF for win/macos" FORCE)
set(BUILD_SHARED_LIBS
OFF
CACHE BOOL
"build shared xrpl libraries - OFF for win/macos"
FORCE
)
set(static ON CACHE BOOL "static link, linux only. ON for WIN/macos" FORCE)
set(perf OFF CACHE BOOL "perf flags, linux only" FORCE)
set(use_gold OFF CACHE BOOL "gold linker, linux only" FORCE)
set(use_mold OFF CACHE BOOL "mold linker, linux only" FORCE)
endif ()
endif()
if (is_clang)
if(is_clang)
option(use_lld "enables detection of lld linker" ON)
else ()
else()
set(use_lld OFF CACHE BOOL "try lld linker, clang only" FORCE)
endif ()
endif()
option(jemalloc "Enables jemalloc for heap profiling" OFF)
option(werr "treat warnings as errors" OFF)
option(local_protobuf "Force a local build of protobuf instead of looking for an installed version." OFF)
option(local_grpc "Force a local build of gRPC instead of looking for an installed version." OFF)
option(
local_protobuf
"Force a local build of protobuf instead of looking for an installed version."
OFF
)
option(
local_grpc
"Force a local build of gRPC instead of looking for an installed version."
OFF
)
# the remaining options are obscure and rarely used
option(beast_no_unit_test_inline "Prevents unit test definitions from being inserted into global table" OFF)
option(single_io_service_thread "Restricts the number of threads calling io_context::run to one. \
This can be useful when debugging." OFF)
option(boost_show_deprecated "Allow boost to fail on deprecated usage. Only useful if you're trying\
to find deprecated calls." OFF)
option(
beast_no_unit_test_inline
"Prevents unit test definitions from being inserted into global table"
OFF
)
option(
single_io_service_thread
"Restricts the number of threads calling io_context::run to one. \
This can be useful when debugging."
OFF
)
option(
boost_show_deprecated
"Allow boost to fail on deprecated usage. Only useful if you're trying\
to find deprecated calls."
OFF
)
if (WIN32)
option(beast_disable_autolink "Disables autolinking of system libraries on WIN32" OFF)
else ()
if(WIN32)
option(
beast_disable_autolink
"Disables autolinking of system libraries on WIN32"
OFF
)
else()
set(beast_disable_autolink OFF CACHE BOOL "WIN32 only" FORCE)
endif ()
endif()
if (coverage)
if(coverage)
message(STATUS "coverage build requested - forcing Debug build")
set(CMAKE_BUILD_TYPE Debug CACHE STRING "build type" FORCE)
endif ()
endif()


@@ -1,17 +1,26 @@
option(validator_keys "Enables building of validator-keys tool as a separate target (imported via FetchContent)" OFF)
option(
validator_keys
"Enables building of validator-keys tool as a separate target (imported via FetchContent)"
OFF
)
if (validator_keys)
if(validator_keys)
git_branch(current_branch)
# default to tracking VK master branch unless we are on release
if (NOT (current_branch STREQUAL "release"))
if(NOT (current_branch STREQUAL "release"))
set(current_branch "master")
endif ()
endif()
message(STATUS "Tracking ValidatorKeys branch: ${current_branch}")
FetchContent_Declare(validator_keys GIT_REPOSITORY https://github.com/ripple/validator-keys-tool.git
GIT_TAG "${current_branch}")
FetchContent_Declare(
validator_keys
GIT_REPOSITORY https://github.com/ripple/validator-keys-tool.git
GIT_TAG "${current_branch}"
)
FetchContent_MakeAvailable(validator_keys)
set_target_properties(validator-keys PROPERTIES RUNTIME_OUTPUT_DIRECTORY "${CMAKE_BINARY_DIR}")
set_target_properties(
validator-keys
PROPERTIES RUNTIME_OUTPUT_DIRECTORY "${CMAKE_BINARY_DIR}"
)
install(TARGETS validator-keys RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR})
endif ()
endif()


@@ -3,13 +3,13 @@
#]===================================================================]
file(STRINGS src/libxrpl/protocol/BuildInfo.cpp BUILD_INFO)
foreach (line_ ${BUILD_INFO})
if (line_ MATCHES "versionString[ ]*=[ ]*\"(.+)\"")
foreach(line_ ${BUILD_INFO})
if(line_ MATCHES "versionString[ ]*=[ ]*\"(.+)\"")
set(xrpld_version ${CMAKE_MATCH_1})
endif ()
endforeach ()
if (xrpld_version)
endif()
endforeach()
if(xrpld_version)
message(STATUS "xrpld version: ${xrpld_version}")
else ()
else()
message(FATAL_ERROR "unable to determine xrpld version")
endif ()
endif()


@@ -12,14 +12,29 @@ include(isolate_headers)
# add_module(parent a)
# add_module(parent b)
# target_link_libraries(project.libparent.b PUBLIC project.libparent.a)
function (add_module parent name)
function(add_module parent name)
set(target ${PROJECT_NAME}.lib${parent}.${name})
add_library(${target} OBJECT)
file(GLOB_RECURSE sources CONFIGURE_DEPENDS "${CMAKE_CURRENT_SOURCE_DIR}/src/lib${parent}/${name}/*.cpp")
file(
GLOB_RECURSE sources
CONFIGURE_DEPENDS
"${CMAKE_CURRENT_SOURCE_DIR}/src/lib${parent}/${name}/*.cpp"
)
target_sources(${target} PRIVATE ${sources})
target_include_directories(${target} PUBLIC "$<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}>")
isolate_headers(${target} "${CMAKE_CURRENT_SOURCE_DIR}/include"
"${CMAKE_CURRENT_SOURCE_DIR}/include/${parent}/${name}" PUBLIC)
isolate_headers(${target} "${CMAKE_CURRENT_SOURCE_DIR}/src" "${CMAKE_CURRENT_SOURCE_DIR}/src/lib${parent}/${name}"
PRIVATE)
endfunction ()
target_include_directories(
${target}
PUBLIC "$<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}>"
)
isolate_headers(
${target}
"${CMAKE_CURRENT_SOURCE_DIR}/include"
"${CMAKE_CURRENT_SOURCE_DIR}/include/${parent}/${name}"
PUBLIC
)
isolate_headers(
${target}
"${CMAKE_CURRENT_SOURCE_DIR}/src"
"${CMAKE_CURRENT_SOURCE_DIR}/src/lib${parent}/${name}"
PRIVATE
)
endfunction()


@@ -1,19 +1,21 @@
# file(CREATE_SYMLINK) only works on Windows with administrator privileges. https://stackoverflow.com/a/61244115/618906
function (create_symbolic_link target link)
if (WIN32)
if (NOT IS_SYMLINK "${link}")
if (NOT IS_ABSOLUTE "${target}")
function(create_symbolic_link target link)
if(WIN32)
if(NOT IS_SYMLINK "${link}")
if(NOT IS_ABSOLUTE "${target}")
# Relative links do not work on Windows.
set(target "${link}/../${target}")
endif ()
endif()
file(TO_NATIVE_PATH "${target}" target)
file(TO_NATIVE_PATH "${link}" link)
execute_process(COMMAND cmd.exe /c mklink /J "${link}" "${target}")
endif ()
else ()
execute_process(
COMMAND cmd.exe /c mklink /J "${link}" "${target}"
)
endif()
else()
file(CREATE_LINK "${target}" "${link}" SYMBOLIC)
endif ()
if (NOT IS_SYMLINK "${link}")
endif()
if(NOT IS_SYMLINK "${link}")
message(ERROR "failed to create symlink: <${link}>")
endif ()
endfunction ()
endif()
endfunction()

View File

@@ -1,44 +1,60 @@
include(CompilationEnv)
include(XrplSanitizers)
find_package(Boost REQUIRED
COMPONENTS chrono
container
coroutine
date_time
filesystem
json
program_options
regex
system
thread)
find_package(
Boost
REQUIRED
COMPONENTS
chrono
container
coroutine
date_time
filesystem
json
program_options
regex
system
thread
)
add_library(xrpl_boost INTERFACE)
add_library(Xrpl::boost ALIAS xrpl_boost)
target_link_libraries(
xrpl_boost
INTERFACE Boost::headers
Boost::chrono
Boost::container
Boost::coroutine
Boost::date_time
Boost::filesystem
Boost::json
Boost::process
Boost::program_options
Boost::regex
Boost::thread)
if (Boost_COMPILER)
INTERFACE
Boost::headers
Boost::chrono
Boost::container
Boost::coroutine
Boost::date_time
Boost::filesystem
Boost::json
Boost::process
Boost::program_options
Boost::regex
Boost::thread
)
if(Boost_COMPILER)
target_link_libraries(xrpl_boost INTERFACE Boost::disable_autolinking)
endif ()
if (SANITIZERS_ENABLED AND is_clang)
endif()
if(SANITIZERS_ENABLED AND is_clang)
# TODO: gcc does not support -fsanitize-blacklist...can we do something else for gcc ?
if (NOT Boost_INCLUDE_DIRS AND TARGET Boost::headers)
get_target_property(Boost_INCLUDE_DIRS Boost::headers INTERFACE_INCLUDE_DIRECTORIES)
endif ()
if(NOT Boost_INCLUDE_DIRS AND TARGET Boost::headers)
get_target_property(
Boost_INCLUDE_DIRS
Boost::headers
INTERFACE_INCLUDE_DIRECTORIES
)
endif()
message(STATUS "Adding [${Boost_INCLUDE_DIRS}] to sanitizer blacklist")
file(WRITE ${CMAKE_CURRENT_BINARY_DIR}/san_bl.txt "src:${Boost_INCLUDE_DIRS}/*")
target_compile_options(opts INTERFACE # ignore boost headers for sanitizing
-fsanitize-blacklist=${CMAKE_CURRENT_BINARY_DIR}/san_bl.txt)
endif ()
file(
WRITE ${CMAKE_CURRENT_BINARY_DIR}/san_bl.txt
"src:${Boost_INCLUDE_DIRS}/*"
)
target_compile_options(
opts
INTERFACE # ignore boost headers for sanitizing
-fsanitize-blacklist=${CMAKE_CURRENT_BINARY_DIR}/san_bl.txt
)
endif()

View File

@@ -37,7 +37,7 @@ include(create_symbolic_link)
# `${CMAKE_CURRENT_BINARY_DIR}/include/${target}`.
#
# isolate_headers(target A B scope)
function (isolate_headers target A B scope)
function(isolate_headers target A B scope)
file(RELATIVE_PATH C "${A}" "${B}")
set(X "${CMAKE_CURRENT_BINARY_DIR}/modules/${target}")
set(Y "${X}/${C}")
@@ -45,4 +45,4 @@ function (isolate_headers target A B scope)
file(MAKE_DIRECTORY "${parent}")
create_symbolic_link("${B}" "${Y}")
target_include_directories(${target} ${scope} "$<BUILD_INTERFACE:${X}>")
endfunction ()
endfunction()

View File

@@ -6,9 +6,9 @@
# target_link_libraries(project.libparent.b PUBLIC project.libparent.a)
# add_library(project.libparent)
# target_link_modules(parent PUBLIC a b)
function (target_link_modules parent scope)
function(target_link_modules parent scope)
set(library ${PROJECT_NAME}.lib${parent})
foreach (name ${ARGN})
foreach(name ${ARGN})
set(module ${library}.${name})
get_target_property(sources ${library} SOURCES)
list(LENGTH sources before)
@@ -17,8 +17,11 @@ function (target_link_modules parent scope)
list(REMOVE_ITEM sources ${dupes})
list(LENGTH sources after)
math(EXPR actual "${before} - ${after}")
message(STATUS "${module} with ${expected} sources took ${actual} sources from ${library}")
message(
STATUS
"${module} with ${expected} sources took ${actual} sources from ${library}"
)
set_target_properties(${library} PROPERTIES SOURCES "${sources}")
target_link_libraries(${library} ${scope} ${module})
endforeach ()
endfunction ()
endforeach()
endfunction()

View File

@@ -35,20 +35,31 @@ find_package(Protobuf REQUIRED)
# This prefix should appear at the start of all your consumer includes.
# ARGN:
# A list of .proto files.
function (target_protobuf_sources target prefix)
function(target_protobuf_sources target prefix)
set(dir "${CMAKE_CURRENT_BINARY_DIR}/pb-${target}")
file(MAKE_DIRECTORY "${dir}/${prefix}")
protobuf_generate(TARGET ${target} PROTOC_OUT_DIR "${dir}/${prefix}" "${ARGN}")
protobuf_generate(
TARGET ${target}
PROTOC_OUT_DIR "${dir}/${prefix}"
"${ARGN}"
)
target_include_directories(
${target} SYSTEM
${target}
SYSTEM
PUBLIC # Allows #include <package/path/to/file.proto> used by consumer files.
$<BUILD_INTERFACE:${dir}>
# Allows #include "path/to/file.proto" used by generated files.
$<BUILD_INTERFACE:${dir}/${prefix}>
# Allows #include <package/path/to/file.proto> used by consumer files.
$<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}>
# Allows #include "path/to/file.proto" used by generated files.
$<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}/${prefix}>)
install(DIRECTORY ${dir}/ DESTINATION ${CMAKE_INSTALL_INCLUDEDIR} FILES_MATCHING PATTERN "*.h")
endfunction ()
$<BUILD_INTERFACE:${dir}>
# Allows #include "path/to/file.proto" used by generated files.
$<BUILD_INTERFACE:${dir}/${prefix}>
# Allows #include <package/path/to/file.proto> used by consumer files.
$<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}>
# Allows #include "path/to/file.proto" used by generated files.
$<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}/${prefix}>
)
install(
DIRECTORY ${dir}/
DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}
FILES_MATCHING
PATTERN "*.h"
)
endfunction()

View File

@@ -10,10 +10,13 @@
"rocksdb/10.5.1#4a197eca381a3e5ae8adf8cffa5aacd0%1765850186.86",
"re2/20230301#ca3b241baec15bd31ea9187150e0b333%1765850148.103",
"protobuf/6.32.1#f481fd276fc23a33b85a3ed1e898b693%1765850161.038",
"openssl/3.5.5#05a4ac5b7323f7a329b2db1391d9941f%1769599205.414",
"opentelemetry-cpp/1.18.0#efd9851e173f8a13b9c7d35232de8cf1%1750409186.472",
"openssl/3.5.5#05a4ac5b7323f7a329b2db1391d9941f%1770229825.601",
"nudb/2.0.9#0432758a24204da08fee953ec9ea03cb%1769436073.32",
"nlohmann_json/3.11.3#45828be26eb619a2e04ca517bb7b828d%1701220705.259",
"lz4/1.10.0#59fc63cac7f10fbe8e05c7e62c2f3504%1765850143.914",
"libiconv/1.17#1e65319e945f2d31941a9d28cc13c058%1765842973.492",
"libcurl/8.18.0#364bc3755cb9ef84ed9a7ae9c7efc1c1%1770984390.024",
"libbacktrace/cci.20210118#a7691bfccd8caaf66309df196790a5a1%1765842973.03",
"libarchive/3.8.1#ffee18995c706e02bf96e7a2f7042e0d%1765850144.736",
"jemalloc/5.3.0#e951da9cf599e956cebc117880d2d9f8%1729241615.244",
@@ -30,9 +33,15 @@
"zlib/1.3.1#b8bc2603263cf7eccbd6e17e66b0ed76%1765850150.075",
"strawberryperl/5.32.1.1#707032463aa0620fa17ec0d887f5fe41%1765850165.196",
"protobuf/6.32.1#f481fd276fc23a33b85a3ed1e898b693%1765850161.038",
"pkgconf/2.5.1#93c2051284cba1279494a43a4fcfeae2%1757684701.089",
"opentelemetry-proto/1.4.0#4096a3b05916675ef9628f3ffd571f51%1732731336.11",
"ninja/1.13.2#c8c5dc2a52ed6e4e42a66d75b4717ceb%1764096931.974",
"nasm/2.16.01#31e26f2ee3c4346ecd347911bd126904%1765850144.707",
"msys2/cci.latest#eea83308ad7e9023f7318c60d5a9e6cb%1770199879.083",
"meson/1.10.0#60786758ea978964c24525de19603cf4%1768294926.103",
"m4/1.4.19#70dc8bbb33e981d119d2acc0175cf381%1763158052.846",
"libtool/2.4.7#14e7739cc128bc1623d2ed318008e47e%1755679003.847",
"gnu-config/cci.20210814#466e9d4d7779e1c142443f7ea44b4284%1762363589.329",
"cmake/4.2.0#ae0a44f44a1ef9ab68fd4b3e9a1f8671%1765850153.937",
"cmake/3.31.10#313d16a1aa16bbdb2ca0792467214b76%1765850153.479",
"b2/5.3.3#107c15377719889654eb9a162a673975%1765850144.355",

View File

@@ -22,6 +22,7 @@ class Xrpl(ConanFile):
"rocksdb": [True, False],
"shared": [True, False],
"static": [True, False],
"telemetry": [True, False],
"tests": [True, False],
"unity": [True, False],
"xrpld": [True, False],
@@ -54,6 +55,7 @@ class Xrpl(ConanFile):
"rocksdb": True,
"shared": False,
"static": True,
"telemetry": True,
"tests": False,
"unity": False,
"xrpld": False,
@@ -140,6 +142,10 @@ class Xrpl(ConanFile):
self.requires("jemalloc/5.3.0")
if self.options.rocksdb:
self.requires("rocksdb/10.5.1")
# OpenTelemetry C++ SDK for distributed tracing (optional).
# Provides OTLP/HTTP exporter, batch span processor, and trace API.
if self.options.telemetry:
self.requires("opentelemetry-cpp/1.18.0")
self.requires("xxhash/0.8.3", **transitive_headers_opt)
exports_sources = (
@@ -168,6 +174,7 @@ class Xrpl(ConanFile):
tc.variables["rocksdb"] = self.options.rocksdb
tc.variables["BUILD_SHARED_LIBS"] = self.options.shared
tc.variables["static"] = self.options.static
tc.variables["telemetry"] = self.options.telemetry
tc.variables["unity"] = self.options.unity
tc.variables["xrpld"] = self.options.xrpld
tc.generate()
@@ -220,3 +227,5 @@ class Xrpl(ConanFile):
]
if self.options.rocksdb:
libxrpl.requires.append("rocksdb::librocksdb")
if self.options.telemetry:
libxrpl.requires.append("opentelemetry-cpp::opentelemetry-cpp")

View File

@@ -6,6 +6,7 @@ ignorePaths:
- docs/**/*.puml
- cmake/**
- LICENSE.md
- .clang-tidy
language: en
allowCompoundWords: true # TODO (#6334)
ignoreRandomStrings: true
@@ -86,6 +87,8 @@ words:
- daria
- dcmake
- dearmor
- Dedup
- dedup
- deleteme
- demultiplexer
- deserializaton
@@ -172,8 +175,14 @@ words:
- nftokens
- nftpage
- nikb
- nixfmt
- nixos
- nixpkgs
- NOLINT
- NOLINTNEXTLINE
- nonxrp
- noripple
- nostd
- nudb
- nullptr
- nunl
@@ -188,6 +197,7 @@ words:
- permissioned
- pointee
- populator
- pratik
- preauth
- preauthorization
- preauthorize
@@ -201,6 +211,7 @@ words:
- qalloc
- queuable
- Raphson
- reparent
- replayer
- rerere
- retriable
@@ -303,3 +314,9 @@ words:
- xrplf
- xxhash
- xxhasher
- xychart
- otelc
- zpages
- traceql
- Gantt
- gantt

View File

@@ -0,0 +1,80 @@
# Docker Compose stack for rippled OpenTelemetry observability.
#
# Provides services for local development:
# - otel-collector: receives OTLP traces from rippled, batches and
# forwards them to Jaeger and Tempo. Listens on ports 4317 (gRPC)
# and 4318 (HTTP).
# - jaeger: all-in-one tracing backend with UI on port 16686.
# - tempo: Grafana Tempo tracing backend, queryable via Grafana Explore
# on port 3000. Recommended for production (S3/GCS storage, TraceQL).
# - grafana: dashboards on port 3000, pre-configured with Jaeger and Tempo
# datasources.
#
# Usage:
# docker compose -f docker/telemetry/docker-compose.yml up -d
#
# Configure rippled to export traces by adding to xrpld.cfg:
# [telemetry]
# enabled=1
# endpoint=http://localhost:4318/v1/traces
version: "3.8"
services:
otel-collector:
image: otel/opentelemetry-collector-contrib:latest
command: ["--config=/etc/otel-collector-config.yaml"]
ports:
- "4317:4317" # OTLP gRPC
- "4318:4318" # OTLP HTTP
- "13133:13133" # Health check
volumes:
- ./otel-collector-config.yaml:/etc/otel-collector-config.yaml:ro
depends_on:
- jaeger
- tempo
networks:
- rippled-telemetry
jaeger:
image: jaegertracing/all-in-one:latest
environment:
- COLLECTOR_OTLP_ENABLED=true
ports:
- "16686:16686" # Jaeger UI
- "14250:14250" # gRPC
networks:
- rippled-telemetry
tempo:
image: grafana/tempo:2.7.2
command: ["-config.file=/etc/tempo.yaml"]
ports:
- "3200:3200" # Tempo HTTP API (health, query)
volumes:
- ./tempo.yaml:/etc/tempo.yaml:ro
- tempo-data:/var/tempo
networks:
- rippled-telemetry
grafana:
image: grafana/grafana:latest
environment:
- GF_AUTH_ANONYMOUS_ENABLED=true
- GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
ports:
- "3000:3000"
volumes:
- ./grafana/provisioning:/etc/grafana/provisioning:ro
depends_on:
- jaeger
- tempo
networks:
- rippled-telemetry
volumes:
tempo-data:
networks:
rippled-telemetry:
driver: bridge

View File

@@ -0,0 +1,12 @@
# Grafana datasource provisioning for the rippled telemetry stack.
# Auto-configures Jaeger as a trace data source on Grafana startup.
# Access Grafana at http://localhost:3000, then use Explore -> Jaeger
# to browse rippled traces.
apiVersion: 1
datasources:
- name: Jaeger
type: jaeger
access: proxy
url: http://jaeger:16686

View File

@@ -0,0 +1,147 @@
# Grafana datasource provisioning for Grafana Tempo.
# Auto-configures Tempo as a trace data source on Grafana startup.
# Access Grafana at http://localhost:3000, then use Explore -> Tempo
# to browse rippled traces using TraceQL.
#
# Search filters provide pre-configured dropdowns in the Explore UI.
# Each phase adds filters for the span attributes it introduces.
# Phase 1b (infra): Base filters — node identity, service, span name, status.
# Phase 2 (RPC): RPC command, status, role filters.
# Phase 3 (TX): Transaction hash, local/peer origin, status.
# Phase 4 (Cons): Consensus mode, round, ledger sequence, close time.
apiVersion: 1
datasources:
- name: Tempo
type: tempo
access: proxy
url: http://tempo:3200
uid: tempo
jsonData:
nodeGraph:
enabled: true
serviceMap:
datasourceUid: prometheus
tracesToMetrics:
datasourceUid: prometheus
spanStartTimeShift: "-1h"
spanEndTimeShift: "1h"
search:
filters:
# --- Node identification filters ---
# service.name: logical service name (default: "rippled").
# Useful when running multiple service types in the same collector.
- id: service-name
tag: service.name
operator: "="
scope: resource
type: static
# service.instance.id: unique node identifier — defaults to the
# node's public key (e.g., nHB1X37...). Distinguishes individual
# nodes in a multi-node cluster or network.
- id: node-id
tag: service.instance.id
operator: "="
scope: resource
type: static
# service.version: rippled build version (e.g., "2.4.0-b1").
# Filter traces from specific software releases.
- id: node-version
tag: service.version
operator: "="
scope: resource
type: dynamic
# xrpl.network.id: numeric network identifier
# (0 = mainnet, 1 = testnet, 2 = devnet, etc.).
- id: network-id
tag: xrpl.network.id
operator: "="
scope: resource
type: dynamic
# xrpl.network.type: human-readable network name
# ("mainnet", "testnet", "devnet", "standalone").
- id: network-type
tag: xrpl.network.type
operator: "="
scope: resource
type: static
# --- Span intrinsic filters ---
- id: span-name
tag: name
operator: "="
scope: intrinsic
type: static
- id: span-status
tag: status
operator: "="
scope: intrinsic
type: static
- id: span-duration
tag: duration
operator: ">"
scope: intrinsic
type: static
# Phase 2: RPC tracing filters
- id: rpc-command
tag: xrpl.rpc.command
operator: "="
scope: span
type: static
- id: rpc-status
tag: xrpl.rpc.status
operator: "="
scope: span
type: dynamic
- id: rpc-role
tag: xrpl.rpc.role
operator: "="
scope: span
type: dynamic
# Phase 3: Transaction tracing filters
- id: tx-hash
tag: xrpl.tx.hash
operator: "="
scope: span
type: static
- id: tx-origin
tag: xrpl.tx.local
operator: "="
scope: span
type: dynamic
- id: tx-status
tag: xrpl.tx.status
operator: "="
scope: span
type: dynamic
# Phase 4: Consensus tracing filters
- id: consensus-mode
tag: xrpl.consensus.mode
operator: "="
scope: span
type: static
- id: consensus-round
tag: xrpl.consensus.round
operator: "="
scope: span
type: dynamic
- id: consensus-ledger-seq
tag: xrpl.consensus.ledger.seq
operator: "="
scope: span
type: static
- id: consensus-close-time-correct
tag: xrpl.consensus.close_time_correct
operator: "="
scope: span
type: dynamic
- id: consensus-state
tag: xrpl.consensus.state
operator: "="
scope: span
type: dynamic
- id: consensus-close-resolution
tag: xrpl.consensus.close_resolution_ms
operator: "="
scope: span
type: dynamic

View File

@@ -0,0 +1,39 @@
# OpenTelemetry Collector configuration for rippled development.
#
# Pipeline: OTLP receiver -> batch processor -> debug + Jaeger + Tempo.
# rippled sends traces via OTLP/HTTP to port 4318. The collector batches
# them and forwards to both Jaeger and Tempo via OTLP/gRPC on the Docker
# network. Jaeger provides a standalone UI at :16686; Tempo is queryable
# via Grafana Explore using TraceQL.
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
processors:
batch:
timeout: 1s
send_batch_size: 100
exporters:
debug:
verbosity: detailed
otlp/jaeger:
endpoint: jaeger:4317
tls:
insecure: true
otlp/tempo:
endpoint: tempo:4317
tls:
insecure: true
service:
pipelines:
traces:
receivers: [otlp]
processors: [batch]
exporters: [debug, otlp/jaeger, otlp/tempo]

View File

@@ -0,0 +1,59 @@
# Grafana Tempo configuration for rippled telemetry stack.
#
# Runs in single-binary mode for local development.
# Receives traces via OTLP/gRPC from the OTel Collector and stores
# them locally. Queryable via Grafana Explore using the Tempo datasource.
#
# Search filters are configured on the Grafana datasource side
# (grafana/provisioning/datasources/tempo.yaml). Tempo auto-indexes
# all span attributes for search in single-binary mode.
#
# For production, replace local storage with S3/GCS backend and adjust
# retention via the compactor settings. See:
# https://grafana.com/docs/tempo/latest/configuration/
stream_over_http_enabled: true
server:
http_listen_port: 3200
distributor:
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
ingester:
max_block_duration: 5m
compactor:
compaction:
block_retention: 1h
# Enable metrics generator for service graph and span metrics.
# Produces RED metrics (rate, errors, duration) per service/span,
# feeding Grafana's service map visualization.
metrics_generator:
registry:
external_labels:
source: tempo
storage:
path: /var/tempo/generator/wal
remote_write:
- url: http://prometheus:9090/api/v1/write
overrides:
defaults:
metrics_generator:
processors:
- service-graphs
- span-metrics
storage:
trace:
backend: local
wal:
path: /var/tempo/wal
local:
path: /var/tempo/blocks

View File

@@ -3,6 +3,8 @@ environment complete with Git, Python, Conan, CMake, and a C++ compiler.
This document exists to help readers set one up on any of the Big Three
platforms: Linux, macOS, or Windows.
As an alternative to system packages, the Nix development shell can be used to provide a development environment. See [using nix development shell](./nix.md) for more details.
[BUILD.md]: ../../BUILD.md
## Linux

95
docs/build/nix.md vendored Normal file
View File

@@ -0,0 +1,95 @@
# Using Nix Development Shell for xrpld Development
This guide explains how to use Nix to set up a reproducible development environment for xrpld. Using Nix eliminates the need to manually install utilities and ensures consistent tooling across different machines.
## Benefits of Using Nix
- **Reproducible environment**: Everyone gets the same versions of tools and compilers
- **No system pollution**: Dependencies are isolated and don't affect your system packages
- **Multiple compiler versions**: Easily switch between different GCC and Clang versions
- **Quick setup**: Get started with a single command
- **Works on Linux and macOS**: Consistent experience across platforms
## Install Nix
Please follow [the official Nix package manager installation instructions](https://nixos.org/download/) for your system.
## Entering the Development Shell
### Basic Usage
From the root of the xrpld repository, enter the default development shell:
```bash
nix --experimental-features 'nix-command flakes' develop
```
This will:
- Download and set up all required development tools (CMake, Ninja, Conan, etc.)
- Configure the appropriate compiler for your platform:
- **macOS**: Apple Clang (default system compiler)
- **Linux**: GCC 15
The first time you run this command, it will take a few minutes to download and build the environment. Subsequent runs will be much faster.
> [!TIP]
> To avoid typing `--experimental-features 'nix-command flakes'` every time, you can permanently enable flakes by creating `~/.config/nix/nix.conf`:
>
> ```bash
> mkdir -p ~/.config/nix
> echo "experimental-features = nix-command flakes" >> ~/.config/nix/nix.conf
> ```
>
> After this, you can simply use `nix develop` instead.
> [!NOTE]
> The examples below assume you've enabled flakes in your config. If you haven't, add `--experimental-features 'nix-command flakes'` after each `nix` command.
### Choosing a different compiler
A compiler can be chosen by providing its name with the `.#` prefix, e.g. `nix develop .#gcc15`.
Use `nix flake show` to see all the available development shells.
Use `nix develop .#no_compiler` to use the compiler from your system.
### Example Usage
```bash
# Use GCC 14
nix develop .#gcc14
# Use Clang 19
nix develop .#clang19
# Use default for your platform
nix develop
```
### Using a different shell
`nix develop` opens bash by default. To use another shell, add the `-c` flag. For example:
```bash
nix develop -c zsh
```
## Building xrpld with Nix
Once inside the Nix development shell, follow the standard [build instructions](../../BUILD.md#steps). The Nix shell provides all necessary tools (CMake, Ninja, Conan, etc.).
## Automatic Activation with direnv
[direnv](https://direnv.net/) or [nix-direnv](https://github.com/nix-community/nix-direnv) can automatically activate the Nix development shell when you enter the repository directory.
## Conan and Prebuilt Packages
Please note that there is no guarantee that binaries from the Conan cache will work when using Nix. If you encounter any errors, use `--build '*'` to force Conan to compile everything from source:
```bash
conan install .. --output-folder . --build '*' --settings build_type=Release
```
## Updating `flake.lock` file
To update `flake.lock` to the latest revision, use the `nix flake update` command.

278
docs/build/telemetry.md vendored Normal file
View File

@@ -0,0 +1,278 @@
# OpenTelemetry Tracing for Rippled
This document explains how to build rippled with OpenTelemetry distributed tracing support, configure the runtime telemetry options, and set up the observability backend to view traces.
- [OpenTelemetry Tracing for Rippled](#opentelemetry-tracing-for-rippled)
- [Overview](#overview)
- [Building with Telemetry](#building-with-telemetry)
- [Summary](#summary)
- [Build steps](#build-steps)
- [Install dependencies](#install-dependencies)
- [Call CMake](#call-cmake)
- [Build](#build)
- [Building without telemetry](#building-without-telemetry)
- [Runtime Configuration](#runtime-configuration)
- [Configuration options](#configuration-options)
- [Observability Stack](#observability-stack)
- [Start the stack](#start-the-stack)
- [Verify the stack](#verify-the-stack)
- [View traces in Jaeger](#view-traces-in-jaeger)
- [Running Tests](#running-tests)
- [Troubleshooting](#troubleshooting)
- [No traces appear in Jaeger](#no-traces-appear-in-jaeger)
- [Conan lockfile error](#conan-lockfile-error)
- [CMake target not found](#cmake-target-not-found)
- [Architecture](#architecture)
- [Key files](#key-files)
- [Conditional compilation](#conditional-compilation)
## Overview
Rippled supports optional [OpenTelemetry](https://opentelemetry.io/) distributed tracing.
When enabled, it instruments RPC requests with trace spans that are exported via
OTLP/HTTP to an OpenTelemetry Collector, which forwards them to a tracing backend
such as Jaeger.
Telemetry is **off by default** at both compile time and runtime:
- **Compile time**: The Conan option `telemetry` and CMake option `telemetry` must be set to `True`/`ON`.
When disabled, all tracing macros compile to `((void)0)` with zero overhead.
- **Runtime**: The `[telemetry]` config section must set `enabled=1`.
When disabled at runtime, a no-op implementation is used.
## Building with Telemetry
### Summary
Follow the same instructions as mentioned in [BUILD.md](../../BUILD.md) but with the following changes:
1. Pass `-o telemetry=True` to `conan install` to pull the `opentelemetry-cpp` dependency.
2. CMake will automatically pick up `telemetry=ON` from the Conan-generated toolchain.
3. Build as usual.
---
### Build steps
```bash
cd /path/to/rippled
rm -rf .build
mkdir .build
cd .build
```
#### Install dependencies
The `telemetry` option adds `opentelemetry-cpp/1.18.0` as a dependency.
If the Conan lockfile does not yet include this package, bypass it with `--lockfile=""`.
```bash
conan install .. \
--output-folder . \
--build missing \
--settings build_type=Debug \
-o telemetry=True \
-o tests=True \
-o xrpld=True \
--lockfile=""
```
> **Note**: The first build with telemetry may take longer as `opentelemetry-cpp`
> and its transitive dependencies are compiled from source.
#### Call CMake
The Conan-generated toolchain file sets `telemetry=ON` automatically.
No additional CMake flags are needed beyond the standard ones.
```bash
cmake .. -G Ninja \
-DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
-DCMAKE_BUILD_TYPE=Debug \
-Dtests=ON -Dxrpld=ON
```
You should see in the CMake output:
```
-- OpenTelemetry tracing enabled
```
#### Build
```bash
cmake --build . --parallel $(nproc)
```
### Building without telemetry
Omit the `-o telemetry=True` option (or pass `-o telemetry=False`).
The `opentelemetry-cpp` dependency will not be downloaded,
the `XRPL_ENABLE_TELEMETRY` preprocessor define will not be set,
and all tracing macros will compile to no-ops.
The resulting binary is identical to one built before telemetry support was added.
## Runtime Configuration
Add a `[telemetry]` section to your `xrpld.cfg` file:
```ini
[telemetry]
enabled=1
service_name=rippled
endpoint=http://localhost:4318/v1/traces
sampling_ratio=1.0
trace_rpc=1
trace_transactions=1
trace_consensus=1
trace_peer=0
```
### Configuration options
| Option | Type | Default | Description |
| --------------------- | ------ | --------------------------------- | -------------------------------------------------- |
| `enabled` | int | `0` | Enable (`1`) or disable (`0`) telemetry at runtime |
| `service_name` | string | `rippled` | Service name reported in traces |
| `service_instance_id` | string | node public key | Unique instance identifier |
| `exporter` | string | `otlp_http` | Exporter type |
| `endpoint` | string | `http://localhost:4318/v1/traces` | OTLP/HTTP collector endpoint |
| `use_tls` | int | `0` | Enable TLS for the exporter connection |
| `tls_ca_cert` | string | (empty) | Path to CA certificate for TLS |
| `sampling_ratio` | double | `1.0` | Fraction of traces to sample (`0.0` to `1.0`) |
| `batch_size` | uint32 | `512` | Maximum spans per export batch |
| `batch_delay_ms` | uint32 | `5000` | Maximum delay (ms) before flushing a batch |
| `max_queue_size` | uint32 | `2048` | Maximum spans queued in memory |
| `trace_rpc` | int | `1` | Enable RPC request tracing |
| `trace_transactions` | int | `1` | Enable transaction lifecycle tracing |
| `trace_consensus` | int | `1` | Enable consensus round tracing |
| `trace_peer` | int | `0` | Enable peer message tracing (high volume) |
| `trace_ledger` | int | `1` | Enable ledger close tracing |
## Observability Stack
A Docker Compose stack is provided in `docker/telemetry/` with the following services:
| Service            | Port                                           | Purpose                                                        |
| ------------------ | ---------------------------------------------- | -------------------------------------------------------------- |
| **OTel Collector** | `4317` (gRPC), `4318` (HTTP), `13133` (health) | Receives OTLP spans, batches, and forwards to Jaeger and Tempo |
| **Jaeger**         | `16686` (UI)                                   | Trace storage and visualization                                |
| **Tempo**          | `3200` (HTTP API)                              | Trace storage, queryable from Grafana Explore via TraceQL      |
| **Grafana**        | `3000`                                         | Dashboards (Jaeger and Tempo pre-configured as datasources)    |
### Start the stack
```bash
docker compose -f docker/telemetry/docker-compose.yml up -d
```
### Verify the stack
```bash
# Collector health
curl http://localhost:13133
# Jaeger UI
open http://localhost:16686
# Grafana
open http://localhost:3000
```
### View traces in Jaeger
1. Open `http://localhost:16686` in a browser.
2. Select the service name (e.g. `rippled`) from the **Service** dropdown.
3. Click **Find Traces**.
4. Click into any trace to see the span tree and attributes.
Traced RPC operations produce a span hierarchy like:
```
rpc.request
└── rpc.command.server_info (xrpl.rpc.command=server_info, xrpl.rpc.status=success)
```
Each span includes attributes:
- `xrpl.rpc.command` — the RPC method name
- `xrpl.rpc.version` — API version
- `xrpl.rpc.role``admin` or `user`
- `xrpl.rpc.status``success` or `error`
## Running Tests
Unit tests run with the telemetry-enabled build regardless of whether the
observability stack is running. When no collector is available, the exporter
silently drops spans with no impact on test results.
```bash
# Run all RPC tests
./xrpld --unittest=RPCCall,ServerInfo,AccountTx,LedgerRPC,Transaction --unittest-jobs $(nproc)
# Run the full test suite
./xrpld --unittest --unittest-jobs $(nproc)
```
To generate traces during manual testing, start rippled in standalone mode:
```bash
./xrpld --conf /path/to/xrpld.cfg --standalone --start
```
Then send RPC requests:
```bash
curl -s -X POST http://127.0.0.1:5005/ \
-H "Content-Type: application/json" \
-d '{"method":"server_info","params":[{}]}'
```
## Troubleshooting
### No traces appear in Jaeger
1. Confirm the OTel Collector is running: `docker compose -f docker/telemetry/docker-compose.yml ps`
2. Check collector logs for errors: `docker compose -f docker/telemetry/docker-compose.yml logs otel-collector`
3. Confirm `[telemetry] enabled=1` is set in the rippled config.
4. Confirm `endpoint` points to the correct collector address (`http://localhost:4318/v1/traces`).
5. Wait for the batch delay to elapse (default `5000` ms) before checking Jaeger.
### Conan lockfile error
If you see `ERROR: Requirement 'opentelemetry-cpp/1.18.0' not in lockfile 'requires'`,
the lockfile was generated without the telemetry dependency.
Pass `--lockfile=""` to bypass the lockfile, or regenerate it with telemetry enabled.
### CMake target not found
If CMake reports that `opentelemetry-cpp` targets are not found,
ensure you ran `conan install` with `-o telemetry=True` and that the
Conan-generated toolchain file is being used.
The Conan package provides a single umbrella target
`opentelemetry-cpp::opentelemetry-cpp` (not individual component targets).
## Architecture
### Key files
| File | Purpose |
| ---------------------------------------------- | ----------------------------------------------------------- |
| `include/xrpl/telemetry/Telemetry.h` | Abstract telemetry interface and `Setup` struct |
| `include/xrpl/telemetry/SpanGuard.h` | RAII span guard (activates scope, ends span on destruction) |
| `src/libxrpl/telemetry/Telemetry.cpp` | OTel-backed implementation (`TelemetryImpl`) |
| `src/libxrpl/telemetry/TelemetryConfig.cpp` | Config parser (`setup_Telemetry()`) |
| `src/libxrpl/telemetry/NullTelemetry.cpp` | No-op implementation (used when disabled) |
| `src/xrpld/telemetry/TracingInstrumentation.h` | Convenience macros (`XRPL_TRACE_RPC`, etc.) |
| `src/xrpld/rpc/detail/ServerHandler.cpp` | RPC entry point instrumentation |
| `src/xrpld/rpc/detail/RPCHandler.cpp` | Per-command instrumentation |
| `docker/telemetry/docker-compose.yml` | Observability stack (Collector + Jaeger + Grafana) |
| `docker/telemetry/otel-collector-config.yaml` | OTel Collector pipeline configuration |
### Conditional compilation
All OpenTelemetry SDK headers are guarded behind `#ifdef XRPL_ENABLE_TELEMETRY`.
The instrumentation macros in `TracingInstrumentation.h` compile to `((void)0)` when
the define is absent.
At runtime, if `enabled=0` is set in config (or the section is omitted), a
`NullTelemetry` implementation is used that returns no-op spans.
This two-layer approach ensures zero overhead when telemetry is not wanted.
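For orientation, the compile-time guard follows roughly the pattern sketched below. The macro name `XRPL_TRACE_EXAMPLE` is hypothetical and only illustrates the shape of the real macros in `TracingInstrumentation.h`.
```cpp
// Illustrative sketch only -- not the actual macro definitions.
#ifdef XRPL_ENABLE_TELEMETRY
// Telemetry compiled in: the macro expands to real instrumentation,
// wrapped in do/while(0) so it behaves like a single statement.
#define XRPL_TRACE_EXAMPLE(name)                            \
    do                                                      \
    {                                                       \
        /* start and activate a span named `name` here */   \
    } while (0)
#else
// Telemetry compiled out: every call site disappears entirely.
#define XRPL_TRACE_EXAMPLE(name) ((void)0)
#endif

void
handleSomething()
{
    XRPL_TRACE_EXAMPLE("rpc.request");  // zero cost when telemetry is disabled
}
```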

26
flake.lock generated Normal file
View File

@@ -0,0 +1,26 @@
{
"nodes": {
"nixpkgs": {
"locked": {
"lastModified": 1769461804,
"narHash": "sha256-6h5sROT/3CTHvzPy9koKBmoCa2eJKh4fzQK8eYFEgl8=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "b579d443b37c9c5373044201ea77604e37e748c8",
"type": "github"
},
"original": {
"id": "nixpkgs",
"ref": "nixos-unstable",
"type": "indirect"
}
},
"root": {
"inputs": {
"nixpkgs": "nixpkgs"
}
}
},
"root": "root",
"version": 7
}

16
flake.nix Normal file
View File

@@ -0,0 +1,16 @@
{
description = "Nix related things for xrpld";
inputs = {
nixpkgs.url = "nixpkgs/nixos-unstable";
};
outputs =
{ nixpkgs, ... }:
let
forEachSystem = (import ./nix/utils.nix { inherit nixpkgs; }).forEachSystem;
in
{
devShells = forEachSystem (import ./nix/devshell.nix);
formatter = forEachSystem ({ pkgs, ... }: pkgs.nixfmt);
};
}

View File

@@ -84,7 +84,8 @@ public:
if (lines_.empty())
return "";
if (lines_.size() > 1)
Throw<std::runtime_error>("A legacy value must have exactly one line. Section: " + name_);
Throw<std::runtime_error>(
"A legacy value must have exactly one line. Section: " + name_);
return lines_[0];
}
@@ -268,7 +269,8 @@ public:
bool
had_trailing_comments() const
{
return std::any_of(map_.cbegin(), map_.cend(), [](auto s) { return s.second.had_trailing_comments(); });
return std::any_of(
map_.cbegin(), map_.cend(), [](auto s) { return s.second.had_trailing_comments(); });
}
protected:

View File

@@ -35,7 +35,10 @@ lz4Compress(void const* in, std::size_t inSize, BufferFactory&& bf)
auto compressed = bf(outCapacity);
auto compressedSize = LZ4_compress_default(
reinterpret_cast<char const*>(in), reinterpret_cast<char*>(compressed), inSize, outCapacity);
reinterpret_cast<char const*>(in),
reinterpret_cast<char*>(compressed),
inSize,
outCapacity);
if (compressedSize == 0)
Throw<std::runtime_error>("lz4 compress: failed");
@@ -66,8 +69,10 @@ lz4Decompress(
Throw<std::runtime_error>("lz4Decompress: integer overflow (output)");
if (LZ4_decompress_safe(
reinterpret_cast<char const*>(in), reinterpret_cast<char*>(decompressed), inSize, decompressedSize) !=
decompressedSize)
reinterpret_cast<char const*>(in),
reinterpret_cast<char*>(decompressed),
inSize,
decompressedSize) != decompressedSize)
Throw<std::runtime_error>("lz4Decompress: failed");
return decompressedSize;
@@ -83,7 +88,11 @@ lz4Decompress(
*/
template <typename InputStream>
std::size_t
lz4Decompress(InputStream& in, std::size_t inSize, std::uint8_t* decompressed, std::size_t decompressedSize)
lz4Decompress(
InputStream& in,
std::size_t inSize,
std::uint8_t* decompressed,
std::size_t decompressedSize)
{
std::vector<std::uint8_t> compressed;
std::uint8_t const* chunk = nullptr;

View File

@@ -55,7 +55,8 @@ private:
if (m_value != value_type())
{
std::size_t elapsed = std::chrono::duration_cast<std::chrono::seconds>(now - m_when).count();
std::size_t elapsed =
std::chrono::duration_cast<std::chrono::seconds>(now - m_when).count();
// A span larger than four times the window decays the
// value to an insignificant amount so just reset it.

View File

@@ -120,7 +120,8 @@ public:
template <typename U>
requires std::convertible_to<U, E> && (!std::is_reference_v<U>)
constexpr Expected(Unexpected<U> e) : Base(boost::outcome_v2::in_place_type_t<E>{}, std::move(e.value()))
constexpr Expected(Unexpected<U> e)
: Base(boost::outcome_v2::in_place_type_t<E>{}, std::move(e.value()))
{
}
@@ -191,7 +192,8 @@ public:
// Specialization of Expected<void, E>. Allows returning either success
// (without a value) or the reason for the failure.
template <class E>
class [[nodiscard]] Expected<void, E> : private boost::outcome_v2::result<void, E, detail::throw_policy>
class [[nodiscard]]
Expected<void, E> : private boost::outcome_v2::result<void, E, detail::throw_policy>
{
using Base = boost::outcome_v2::result<void, E, detail::throw_policy>;

View File

@@ -14,6 +14,9 @@ getFileContents(
std::optional<std::size_t> maxSize = std::nullopt);
void
writeFileContents(boost::system::error_code& ec, boost::filesystem::path const& destPath, std::string const& contents);
writeFileContents(
boost::system::error_code& ec,
boost::filesystem::path const& destPath,
std::string const& contents);
} // namespace xrpl

View File

@@ -44,8 +44,8 @@ struct SharedIntrusiveAdoptNoIncrementTag
//
template <class T>
concept CAdoptTag =
std::is_same_v<T, SharedIntrusiveAdoptIncrementStrongTag> || std::is_same_v<T, SharedIntrusiveAdoptNoIncrementTag>;
concept CAdoptTag = std::is_same_v<T, SharedIntrusiveAdoptIncrementStrongTag> ||
std::is_same_v<T, SharedIntrusiveAdoptNoIncrementTag>;
//------------------------------------------------------------------------------
@@ -443,7 +443,8 @@ make_SharedIntrusive(Args&&... args)
auto p = new TT(std::forward<Args>(args)...);
static_assert(
noexcept(SharedIntrusive<TT>(std::declval<TT*>(), std::declval<SharedIntrusiveAdoptNoIncrementTag>())),
noexcept(SharedIntrusive<TT>(
std::declval<TT*>(), std::declval<SharedIntrusiveAdoptNoIncrementTag>())),
"SharedIntrusive constructor should not throw or this can leak "
"memory");

View File

@@ -184,7 +184,8 @@ private:
/** Mask that will zero out everything except the weak count.
*/
static constexpr FieldType weakMask = (((one << WeakCountNumBits) - 1) << StrongCountNumBits) & valueMask;
static constexpr FieldType weakMask =
(((one << WeakCountNumBits) - 1) << StrongCountNumBits) & valueMask;
/** Unpack the count and tag fields from the packed atomic integer form. */
struct RefCountPair
@@ -209,8 +210,10 @@ private:
FieldType
combinedValue() const noexcept;
static constexpr CountType maxStrongValue = static_cast<CountType>((one << StrongCountNumBits) - 1);
static constexpr CountType maxWeakValue = static_cast<CountType>((one << WeakCountNumBits) - 1);
static constexpr CountType maxStrongValue =
static_cast<CountType>((one << StrongCountNumBits) - 1);
static constexpr CountType maxWeakValue =
static_cast<CountType>((one << WeakCountNumBits) - 1);
/** Put an extra margin to detect when running up against limits.
This is only used in debug code, and is useful if we reduce the
number of bits in the strong and weak counts (to 16 and 14 bits).
@@ -395,7 +398,8 @@ inline IntrusiveRefCounts::~IntrusiveRefCounts() noexcept
{
#ifndef NDEBUG
auto v = refCounts.load(std::memory_order_acquire);
XRPL_ASSERT((!(v & valueMask)), "xrpl::IntrusiveRefCounts::~IntrusiveRefCounts : count must be zero");
XRPL_ASSERT(
(!(v & valueMask)), "xrpl::IntrusiveRefCounts::~IntrusiveRefCounts : count must be zero");
auto t = v & tagMask;
XRPL_ASSERT((!t || t == tagMask), "xrpl::IntrusiveRefCounts::~IntrusiveRefCounts : valid tag");
#endif
@@ -433,8 +437,10 @@ IntrusiveRefCounts::RefCountPair::combinedValue() const noexcept
(strong < checkStrongMaxValue && weak < checkWeakMaxValue),
"xrpl::IntrusiveRefCounts::RefCountPair::combinedValue : inputs "
"inside range");
return (static_cast<IntrusiveRefCounts::FieldType>(weak) << IntrusiveRefCounts::StrongCountNumBits) |
static_cast<IntrusiveRefCounts::FieldType>(strong) | partialDestroyStartedBit | partialDestroyFinishedBit;
return (static_cast<IntrusiveRefCounts::FieldType>(weak)
<< IntrusiveRefCounts::StrongCountNumBits) |
static_cast<IntrusiveRefCounts::FieldType>(strong) | partialDestroyStartedBit |
partialDestroyFinishedBit;
}
template <class T>
@@ -442,7 +448,8 @@ inline void
partialDestructorFinished(T** o)
{
T& self = **o;
IntrusiveRefCounts::RefCountPair p = self.refCounts.fetch_or(IntrusiveRefCounts::partialDestroyFinishedMask);
IntrusiveRefCounts::RefCountPair p =
self.refCounts.fetch_or(IntrusiveRefCounts::partialDestroyFinishedMask);
XRPL_ASSERT(
(!p.partialDestroyFinishedBit && p.partialDestroyStartedBit && !p.strong),
"xrpl::partialDestructorFinished : not a weak ref");

View File

@@ -103,6 +103,7 @@ LocalValue<T>::operator*()
}
return *reinterpret_cast<T*>(
lvs->values.emplace(this, std::make_unique<detail::LocalValues::Value<T>>(t_)).first->second->get());
lvs->values.emplace(this, std::make_unique<detail::LocalValues::Value<T>>(t_))
.first->second->get());
}
} // namespace xrpl

View File

@@ -170,7 +170,11 @@ public:
partition_severities() const;
void
write(beast::severities::Severity level, std::string const& partition, std::string const& text, bool console);
write(
beast::severities::Severity level,
std::string const& partition,
std::string const& text,
bool console);
std::string
rotate();

View File

@@ -0,0 +1,155 @@
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2024, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
*/
#pragma once
#include <mutex>
#include <type_traits>
namespace xrpl {
template <typename ProtectedDataType, typename MutexType>
class Mutex;
/**
* @brief A lock on a mutex that provides access to the protected data.
*
* @tparam ProtectedDataType data type to hold
* @tparam LockType type of lock
* @tparam MutexType type of mutex
*/
template <typename ProtectedDataType, template <typename...> typename LockType, typename MutexType>
class Lock
{
LockType<MutexType> lock_;
ProtectedDataType& data_;
public:
/** @cond */
ProtectedDataType const&
operator*() const
{
return data_;
}
ProtectedDataType&
operator*()
{
return data_;
}
ProtectedDataType const&
get() const
{
return data_;
}
ProtectedDataType&
get()
{
return data_;
}
ProtectedDataType const*
operator->() const
{
return &data_;
}
ProtectedDataType*
operator->()
{
return &data_;
}
operator LockType<MutexType>&() &
{
return lock_;
}
operator LockType<MutexType> const&() const&
{
return lock_;
}
/** @endcond */
private:
friend class Mutex<std::remove_const_t<ProtectedDataType>, MutexType>;
Lock(MutexType& mutex, ProtectedDataType& data) : lock_(mutex), data_(data)
{
}
};
/**
* @brief A container for data that is protected by a mutex. Inspired by Mutex in Rust.
*
* @tparam ProtectedDataType data type to hold
* @tparam MutexType type of mutex
*/
template <typename ProtectedDataType, typename MutexType = std::mutex>
class Mutex
{
mutable MutexType mutex_;
ProtectedDataType data_{};
public:
Mutex() = default;
/**
* @brief Construct a new Mutex object with the given data
*
* @param data The data to protect
*/
explicit Mutex(ProtectedDataType data) : data_(std::move(data))
{
}
/**
* @brief Make a new Mutex object with the given data
*
* @tparam Args The types of the arguments to forward to the constructor of the protected data
* @param args The arguments to forward to the constructor of the protected data
* @return The Mutex object that protects the given data
*/
template <typename... Args>
static Mutex
make(Args&&... args)
{
return Mutex{ProtectedDataType{std::forward<Args>(args)...}};
}
/**
* @brief Lock the mutex and get a lock object allowing access to the protected data
*
* @tparam LockType The type of lock to use
* @return A lock on the mutex and a reference to the protected data
*/
template <template <typename...> typename LockType = std::lock_guard>
Lock<ProtectedDataType const, LockType, MutexType>
lock() const
{
return {mutex_, data_};
}
/**
* @brief Lock the mutex and get a lock object allowing access to the protected data
*
* @tparam LockType The type of lock to use
* @return A lock on the mutex and a reference to the protected data
*/
template <template <typename...> typename LockType = std::lock_guard>
Lock<ProtectedDataType, LockType, MutexType>
lock()
{
return {mutex_, data_};
}
};
} // namespace xrpl
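The wrapper is used by locking before each access. A minimal sketch follows; the `pending` variable and the two functions are hypothetical and not part of the header:
```cpp
// Usage sketch only, assuming the Mutex header above is included.
#include <cstddef>
#include <mutex>
#include <vector>

xrpl::Mutex<std::vector<int>> pending;  // protects a default-constructed vector

void
enqueue(int value)
{
    auto guard = pending.lock();  // holds a std::lock_guard while `guard` lives
    guard->push_back(value);      // operator-> reaches the protected data
}

std::size_t
pendingCount()
{
    // A different lock type can be requested explicitly, e.g. std::unique_lock.
    return pending.lock<std::unique_lock>()->size();
}
```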

View File

@@ -240,7 +240,11 @@ public:
Number(rep mantissa);
explicit Number(rep mantissa, int exponent);
explicit constexpr Number(bool negative, internalrep mantissa, int exponent, unchecked) noexcept;
explicit constexpr Number(
bool negative,
internalrep mantissa,
int exponent,
unchecked) noexcept;
// Assume unsigned values are... unsigned. i.e. positive
explicit constexpr Number(internalrep mantissa, int exponent, unchecked) noexcept;
// Only unit tests are expected to use this ctor
@@ -294,7 +298,8 @@ public:
friend constexpr bool
operator==(Number const& x, Number const& y) noexcept
{
return x.negative_ == y.negative_ && x.mantissa_ == y.mantissa_ && x.exponent_ == y.exponent_;
return x.negative_ == y.negative_ && x.mantissa_ == y.mantissa_ &&
x.exponent_ == y.exponent_;
}
friend constexpr bool
@@ -502,7 +507,11 @@ private:
class Guard;
};
inline constexpr Number::Number(bool negative, internalrep mantissa, int exponent, unchecked) noexcept
inline constexpr Number::Number(
bool negative,
internalrep mantissa,
int exponent,
unchecked) noexcept
: negative_(negative), mantissa_{mantissa}, exponent_{exponent}
{
}
@@ -520,7 +529,8 @@ inline Number::Number(bool negative, internalrep mantissa, int exponent, normali
normalize();
}
inline Number::Number(internalrep mantissa, int exponent, normalized) : Number(false, mantissa, exponent, normalized{})
inline Number::Number(internalrep mantissa, int exponent, normalized)
: Number(false, mantissa, exponent, normalized{})
{
}
@@ -682,8 +692,8 @@ Number::isnormal() const noexcept
MantissaRange const& range = range_;
auto const abs_m = mantissa_;
return *this == Number{} ||
(range.min <= abs_m && abs_m <= range.max && (abs_m <= maxRep || abs_m % 10 == 0) && minExponent <= exponent_ &&
exponent_ <= maxExponent);
(range.min <= abs_m && abs_m <= range.max && (abs_m <= maxRep || abs_m % 10 == 0) &&
minExponent <= exponent_ && exponent_ <= maxExponent);
}
template <Integral64 T>
@@ -695,7 +705,10 @@ Number::normalizeToRange(T minMantissa, T maxMantissa) const
int exponent = exponent_;
if constexpr (std::is_unsigned_v<T>)
XRPL_ASSERT_PARTS(!negative, "xrpl::Number::normalizeToRange", "Number is non-negative for unsigned range.");
XRPL_ASSERT_PARTS(
!negative,
"xrpl::Number::normalizeToRange",
"Number is non-negative for unsigned range.");
Number::normalize(negative, mantissa, exponent, minMantissa, maxMantissa);
auto const sign = negative ? -1 : 1;
@@ -781,7 +794,8 @@ class NumberRoundModeGuard
saveNumberRoundMode saved_;
public:
explicit NumberRoundModeGuard(Number::rounding_mode mode) noexcept : saved_{Number::setround(mode)}
explicit NumberRoundModeGuard(Number::rounding_mode mode) noexcept
: saved_{Number::setround(mode)}
{
}
@@ -801,7 +815,8 @@ class NumberMantissaScaleGuard
MantissaRange::mantissa_scale const saved_;
public:
explicit NumberMantissaScaleGuard(MantissaRange::mantissa_scale scale) noexcept : saved_{Number::getMantissaScale()}
explicit NumberMantissaScaleGuard(MantissaRange::mantissa_scale scale) noexcept
: saved_{Number::getMantissaScale()}
{
Number::setMantissaScale(scale);
}

View File

@@ -19,7 +19,8 @@ SharedWeakCachePointer<T>::SharedWeakCachePointer(SharedWeakCachePointer&& rhs)
template <class T>
template <class TT>
requires std::convertible_to<TT*, T*>
SharedWeakCachePointer<T>::SharedWeakCachePointer(std::shared_ptr<TT>&& rhs) : combo_{std::move(rhs)}
SharedWeakCachePointer<T>::SharedWeakCachePointer(std::shared_ptr<TT>&& rhs)
: combo_{std::move(rhs)}
{
}

View File

@@ -155,13 +155,17 @@ public:
contexts (e.g. when minimal memory usage is needed) and
allows for graceful failure.
*/
constexpr explicit SlabAllocator(std::size_t extra, std::size_t alloc = 0, std::size_t align = 0)
constexpr explicit SlabAllocator(
std::size_t extra,
std::size_t alloc = 0,
std::size_t align = 0)
: itemAlignment_(align ? align : alignof(Type))
, itemSize_(boost::alignment::align_up(sizeof(Type) + extra, itemAlignment_))
, slabSize_(alloc)
{
XRPL_ASSERT(
(itemAlignment_ & (itemAlignment_ - 1)) == 0, "xrpl::SlabAllocator::SlabAllocator : valid alignment");
(itemAlignment_ & (itemAlignment_ - 1)) == 0,
"xrpl::SlabAllocator::SlabAllocator : valid alignment");
}
SlabAllocator(SlabAllocator const& other) = delete;
@@ -228,7 +232,8 @@ public:
// We need to carve out a bit of memory for the slab header
// and then align the rest appropriately:
auto slabData = reinterpret_cast<void*>(reinterpret_cast<std::uint8_t*>(buf) + sizeof(SlabBlock));
auto slabData =
reinterpret_cast<void*>(reinterpret_cast<std::uint8_t*>(buf) + sizeof(SlabBlock));
auto slabSize = size - sizeof(SlabBlock);
// This operation is essentially guaranteed not to fail but
@@ -239,10 +244,12 @@ public:
return nullptr;
}
slab = new (buf) SlabBlock(slabs_.load(), reinterpret_cast<std::uint8_t*>(slabData), slabSize, itemSize_);
slab = new (buf) SlabBlock(
slabs_.load(), reinterpret_cast<std::uint8_t*>(slabData), slabSize, itemSize_);
// Link the new slab
while (!slabs_.compare_exchange_weak(slab->next_, slab, std::memory_order_release, std::memory_order_relaxed))
while (!slabs_.compare_exchange_weak(
slab->next_, slab, std::memory_order_release, std::memory_order_relaxed))
{
; // Nothing to do
}
@@ -299,7 +306,10 @@ public:
std::size_t align;
public:
constexpr SlabConfig(std::size_t extra_, std::size_t alloc_ = 0, std::size_t align_ = alignof(Type))
constexpr SlabConfig(
std::size_t extra_,
std::size_t alloc_ = 0,
std::size_t align_ = alignof(Type))
: extra(extra_), alloc(alloc_), align(align_)
{
}
@@ -309,15 +319,18 @@ public:
{
// Ensure that the specified allocators are sorted from smallest to
// largest by size:
std::sort(
std::begin(cfg), std::end(cfg), [](SlabConfig const& a, SlabConfig const& b) { return a.extra < b.extra; });
std::sort(std::begin(cfg), std::end(cfg), [](SlabConfig const& a, SlabConfig const& b) {
return a.extra < b.extra;
});
// We should never have two slabs of the same size
if (std::adjacent_find(std::begin(cfg), std::end(cfg), [](SlabConfig const& a, SlabConfig const& b) {
return a.extra == b.extra;
}) != cfg.end())
if (std::adjacent_find(
std::begin(cfg), std::end(cfg), [](SlabConfig const& a, SlabConfig const& b) {
return a.extra == b.extra;
}) != cfg.end())
{
throw std::runtime_error("SlabAllocatorSet<" + beast::type_name<Type>() + ">: duplicate slab size");
throw std::runtime_error(
"SlabAllocatorSet<" + beast::type_name<Type>() + ">: duplicate slab size");
}
for (auto const& c : cfg)

View File

@@ -40,7 +40,8 @@ public:
operator=(Slice const&) noexcept = default;
/** Create a slice pointing to existing memory. */
Slice(void const* data, std::size_t size) noexcept : data_(reinterpret_cast<std::uint8_t const*>(data)), size_(size)
Slice(void const* data, std::size_t size) noexcept
: data_(reinterpret_cast<std::uint8_t const*>(data)), size_(size)
{
}
@@ -197,7 +198,8 @@ operator!=(Slice const& lhs, Slice const& rhs) noexcept
inline bool
operator<(Slice const& lhs, Slice const& rhs) noexcept
{
return std::lexicographical_compare(lhs.data(), lhs.data() + lhs.size(), rhs.data(), rhs.data() + rhs.size());
return std::lexicographical_compare(
lhs.data(), lhs.data() + lhs.size(), rhs.data(), rhs.data() + rhs.size());
}
template <class Stream>

View File

@@ -108,7 +108,8 @@ struct parsedURL
bool
operator==(parsedURL const& other) const
{
return scheme == other.scheme && domain == other.domain && port == other.port && path == other.path;
return scheme == other.scheme && domain == other.domain && port == other.port &&
path == other.path;
}
};

View File

@@ -175,7 +175,10 @@ private:
struct Stats
{
template <class Handler>
Stats(std::string const& prefix, Handler const& handler, beast::insight::Collector::ptr const& collector)
Stats(
std::string const& prefix,
Handler const& handler,
beast::insight::Collector::ptr const& collector)
: hook(collector->make_hook(handler))
, size(collector->make_gauge(prefix, "size"))
, hit_rate(collector->make_gauge(prefix, "hit_rate"))
@@ -197,7 +200,8 @@ private:
public:
clock_type::time_point last_access;
explicit KeyOnlyEntry(clock_type::time_point const& last_access_) : last_access(last_access_)
explicit KeyOnlyEntry(clock_type::time_point const& last_access_)
: last_access(last_access_)
{
}

View File

@@ -14,13 +14,22 @@ template <
class Hash,
class KeyEqual,
class Mutex>
inline TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::TaggedCache(
std::string const& name,
int size,
clock_type::duration expiration,
clock_type& clock,
beast::Journal journal,
beast::insight::Collector::ptr const& collector)
inline TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::
TaggedCache(
std::string const& name,
int size,
clock_type::duration expiration,
clock_type& clock,
beast::Journal journal,
beast::insight::Collector::ptr const& collector)
: m_journal(journal)
, m_clock(clock)
, m_stats(name, std::bind(&TaggedCache::collect_metrics, this), collector)
@@ -43,8 +52,8 @@ template <
class KeyEqual,
class Mutex>
inline auto
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::clock()
-> clock_type&
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::
clock() -> clock_type&
{
return m_clock;
}
@@ -59,7 +68,8 @@ template <
class KeyEqual,
class Mutex>
inline std::size_t
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::size() const
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::
size() const
{
std::lock_guard lock(m_mutex);
return m_cache.size();
@@ -75,7 +85,8 @@ template <
class KeyEqual,
class Mutex>
inline int
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::getCacheSize() const
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::
getCacheSize() const
{
std::lock_guard lock(m_mutex);
return m_cache_count;
@@ -91,7 +102,8 @@ template <
class KeyEqual,
class Mutex>
inline int
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::getTrackSize() const
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::
getTrackSize() const
{
std::lock_guard lock(m_mutex);
return m_cache.size();
@@ -107,7 +119,8 @@ template <
class KeyEqual,
class Mutex>
inline float
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::getHitRate()
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::
getHitRate()
{
std::lock_guard lock(m_mutex);
auto const total = static_cast<float>(m_hits + m_misses);
@@ -124,7 +137,8 @@ template <
class KeyEqual,
class Mutex>
inline void
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::clear()
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::
clear()
{
std::lock_guard lock(m_mutex);
m_cache.clear();
@@ -141,7 +155,8 @@ template <
class KeyEqual,
class Mutex>
inline void
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::reset()
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::
reset()
{
std::lock_guard lock(m_mutex);
m_cache.clear();
@@ -161,8 +176,8 @@ template <
class Mutex>
template <class KeyComparable>
inline bool
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::touch_if_exists(
KeyComparable const& key)
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::
touch_if_exists(KeyComparable const& key)
{
std::lock_guard lock(m_mutex);
auto const iter(m_cache.find(key));
@@ -186,7 +201,8 @@ template <
class KeyEqual,
class Mutex>
inline void
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::sweep()
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::
sweep()
{
// Keep references to all the stuff we sweep
// For performance, each worker thread should exit before the swept data
@@ -212,8 +228,9 @@ TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash,
if (when_expire > (now - minimumAge))
when_expire = now - minimumAge;
JLOG(m_journal.trace()) << m_name << " is growing fast " << m_cache.size() << " of " << m_target_size
<< " aging at " << (now - when_expire).count() << " of " << m_target_age.count();
JLOG(m_journal.trace())
<< m_name << " is growing fast " << m_cache.size() << " of " << m_target_size
<< " aging at " << (now - when_expire).count() << " of " << m_target_age.count();
}
std::vector<std::thread> workers;
@@ -222,7 +239,8 @@ TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash,
for (std::size_t p = 0; p < m_cache.partitions(); ++p)
{
workers.push_back(sweepHelper(when_expire, now, m_cache.map()[p], allStuffToSweep[p], allRemovals, lock));
workers.push_back(sweepHelper(
when_expire, now, m_cache.map()[p], allStuffToSweep[p], allRemovals, lock));
}
for (std::thread& worker : workers)
worker.join();
@@ -231,10 +249,11 @@ TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash,
}
// At this point allStuffToSweep will go out of scope outside the lock
// and decrement the reference count on each strong pointer.
JLOG(m_journal.debug())
<< m_name << " TaggedCache sweep lock duration "
<< std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::steady_clock::now() - start).count()
<< "ms";
JLOG(m_journal.debug()) << m_name << " TaggedCache sweep lock duration "
<< std::chrono::duration_cast<std::chrono::milliseconds>(
std::chrono::steady_clock::now() - start)
.count()
<< "ms";
}
template <
@@ -247,9 +266,8 @@ template <
class KeyEqual,
class Mutex>
inline bool
-TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::del(
-key_type const& key,
-bool valid)
+TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::
+del(key_type const& key, bool valid)
{
// Remove from cache, if !valid, remove from map too. Returns true if
// removed from cache
@@ -288,10 +306,8 @@ template <
class Mutex>
template <class R>
inline bool
-TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::canonicalize(
-key_type const& key,
-SharedPointerType& data,
-R&& replaceCallback)
+TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::
+canonicalize(key_type const& key, SharedPointerType& data, R&& replaceCallback)
{
// Return canonical value, store if needed, refresh in cache
// Return values: true=we had the data already
@@ -302,7 +318,9 @@ TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash,
if (cit == m_cache.end())
{
m_cache.emplace(
-std::piecewise_construct, std::forward_as_tuple(key), std::forward_as_tuple(m_clock.now(), data));
+std::piecewise_construct,
+std::forward_as_tuple(key),
+std::forward_as_tuple(m_clock.now(), data));
++m_cache_count;
return false;
}
@@ -404,8 +422,8 @@ template <
class KeyEqual,
class Mutex>
inline SharedPointerType
-TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::fetch(
-key_type const& key)
+TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::
+fetch(key_type const& key)
{
std::lock_guard<mutex_type> l(m_mutex);
auto ret = initialFetch(key, l);
@@ -425,9 +443,8 @@ template <
class Mutex>
template <class ReturnType>
inline auto
-TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::insert(
-key_type const& key,
-T const& value) -> std::enable_if_t<!IsKeyCache, ReturnType>
+TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::
+insert(key_type const& key, T const& value) -> std::enable_if_t<!IsKeyCache, ReturnType>
{
static_assert(
std::is_same_v<std::shared_ptr<T>, SharedPointerType> ||
@@ -456,13 +473,13 @@ template <
class Mutex>
template <class ReturnType>
inline auto
-TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::insert(
-key_type const& key) -> std::enable_if_t<IsKeyCache, ReturnType>
+TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::
+insert(key_type const& key) -> std::enable_if_t<IsKeyCache, ReturnType>
{
std::lock_guard lock(m_mutex);
clock_type::time_point const now(m_clock.now());
-auto [it, inserted] =
-m_cache.emplace(std::piecewise_construct, std::forward_as_tuple(key), std::forward_as_tuple(now));
+auto [it, inserted] = m_cache.emplace(
+std::piecewise_construct, std::forward_as_tuple(key), std::forward_as_tuple(now));
if (!inserted)
it->second.last_access = now;
return inserted;
@@ -478,9 +495,8 @@ template <
class KeyEqual,
class Mutex>
inline bool
-TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::retrieve(
-key_type const& key,
-T& data)
+TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::
+retrieve(key_type const& key, T& data)
{
// retrieve the value of the stored data
auto entry = fetch(key);
@@ -502,8 +518,8 @@ template <
class KeyEqual,
class Mutex>
inline auto
-TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::peekMutex()
--> mutex_type&
+TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::
+peekMutex() -> mutex_type&
{
return m_mutex;
}
@@ -518,8 +534,8 @@ template <
class KeyEqual,
class Mutex>
inline auto
-TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::getKeys() const
--> std::vector<key_type>
+TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::
+getKeys() const -> std::vector<key_type>
{
std::vector<key_type> v;
@@ -543,7 +559,8 @@ template <
class KeyEqual,
class Mutex>
inline double
-TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::rate() const
+TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::
+rate() const
{
std::lock_guard lock(m_mutex);
auto const tot = m_hits + m_misses;
@@ -563,9 +580,8 @@ template <
class Mutex>
template <class Handler>
inline SharedPointerType
-TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::fetch(
-key_type const& digest,
-Handler const& h)
+TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::
+fetch(key_type const& digest, Handler const& h)
{
{
std::lock_guard l(m_mutex);
@@ -596,9 +612,8 @@ template <
class KeyEqual,
class Mutex>
inline SharedPointerType
-TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::initialFetch(
-key_type const& key,
-std::lock_guard<mutex_type> const& l)
+TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::
+initialFetch(key_type const& key, std::lock_guard<mutex_type> const& l)
{
auto cit = m_cache.find(key);
if (cit == m_cache.end())
@@ -634,7 +649,8 @@ template <
class KeyEqual,
class Mutex>
inline void
-TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::collect_metrics()
+TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::
+collect_metrics()
{
m_stats.size.set(getCacheSize());
@@ -660,13 +676,14 @@ template <
class KeyEqual,
class Mutex>
inline std::thread
-TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::sweepHelper(
-clock_type::time_point const& when_expire,
-[[maybe_unused]] clock_type::time_point const& now,
-typename KeyValueCacheType::map_type& partition,
-SweptPointersVector& stuffToSweep,
-std::atomic<int>& allRemovals,
-std::lock_guard<std::recursive_mutex> const&)
+TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::
+sweepHelper(
+clock_type::time_point const& when_expire,
+[[maybe_unused]] clock_type::time_point const& now,
+typename KeyValueCacheType::map_type& partition,
+SweptPointersVector& stuffToSweep,
+std::atomic<int>& allRemovals,
+std::lock_guard<std::recursive_mutex> const&)
{
return std::thread([&, this]() {
int cacheRemovals = 0;
@@ -720,8 +737,9 @@ TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash,
if (mapRemovals || cacheRemovals)
{
JLOG(m_journal.debug()) << "TaggedCache partition sweep " << m_name << ": cache = " << partition.size()
<< "-" << cacheRemovals << ", map-=" << mapRemovals;
JLOG(m_journal.debug())
<< "TaggedCache partition sweep " << m_name << ": cache = " << partition.size()
<< "-" << cacheRemovals << ", map-=" << mapRemovals;
}
allRemovals += cacheRemovals;
@@ -738,13 +756,14 @@ template <
class KeyEqual,
class Mutex>
inline std::thread
-TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::sweepHelper(
-clock_type::time_point const& when_expire,
-clock_type::time_point const& now,
-typename KeyOnlyCacheType::map_type& partition,
-SweptPointersVector&,
-std::atomic<int>& allRemovals,
-std::lock_guard<std::recursive_mutex> const&)
+TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::
+sweepHelper(
+clock_type::time_point const& when_expire,
+clock_type::time_point const& now,
+typename KeyOnlyCacheType::map_type& partition,
+SweptPointersVector&,
+std::atomic<int>& allRemovals,
+std::lock_guard<std::recursive_mutex> const&)
{
return std::thread([&, this]() {
int cacheRemovals = 0;
@@ -774,8 +793,9 @@ TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash,
if (mapRemovals || cacheRemovals)
{
JLOG(m_journal.debug()) << "TaggedCache partition sweep " << m_name << ": cache = " << partition.size()
<< "-" << cacheRemovals << ", map-=" << mapRemovals;
JLOG(m_journal.debug())
<< "TaggedCache partition sweep " << m_name << ": cache = " << partition.size()
<< "-" << cacheRemovals << ", map-=" << mapRemovals;
}
allRemovals += cacheRemovals;
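
A note on the sweep hunks above: the comments they carry ("Keep references to all the stuff we sweep", "allStuffToSweep will go out of scope outside the lock") describe a pattern in which the strong pointers for evicted entries are collected while the mutex is held and destroyed only after it is released, so potentially expensive destructors never run under the lock. A minimal sketch of that pattern, using a hypothetical Cache type rather than TaggedCache itself:

    #include <map>
    #include <memory>
    #include <mutex>
    #include <vector>

    struct Entry
    {
        // payload whose destructor may be expensive
    };

    class Cache
    {
        std::mutex mutex_;
        std::map<int, std::shared_ptr<Entry>> map_;

    public:
        void
        sweep()
        {
            // Collected under the lock, destroyed after it is released.
            std::vector<std::shared_ptr<Entry>> swept;
            {
                std::lock_guard lock(mutex_);
                for (auto it = map_.begin(); it != map_.end();)
                {
                    swept.push_back(std::move(it->second));
                    it = map_.erase(it);
                }
            }
            // swept goes out of scope here, outside the lock, so the last
            // strong references are dropped without blocking other callers
            // waiting on mutex_.
        }
    };

This sketch sweeps every entry unconditionally; the real sweep applies the age threshold (when_expire) per partition and fans the work out across worker threads, as the hunks above show.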


@@ -51,7 +51,13 @@ generalized_set_intersection(
// std::set_intersection.
template <class FwdIter1, class InputIter2, class Pred, class Comp>
FwdIter1
-remove_if_intersect_or_match(FwdIter1 first1, FwdIter1 last1, InputIter2 first2, InputIter2 last2, Pred pred, Comp comp)
+remove_if_intersect_or_match(
+FwdIter1 first1,
+FwdIter1 last1,
+InputIter2 first2,
+InputIter2 last2,
+Pred pred,
+Comp comp)
{
// [original-first1, current-first1) is the set of elements to be preserved.
// [current-first1, i) is the set of elements that have been removed.
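
The comment kept at the end of this hunk states the familiar remove-idiom invariant: [original-first1, current-first1) holds the elements to preserve, and the range past it holds what has been removed. A small self-contained illustration of that invariant using std::remove_if from the standard library (not the project's remove_if_intersect_or_match, whose exact matching rules are only partly visible in this hunk):

    #include <algorithm>
    #include <cassert>
    #include <iterator>
    #include <vector>

    int
    main()
    {
        std::vector<int> v{1, 2, 3, 4, 5, 6};

        // Survivors are packed to the front; the tail past new_end is the
        // "removed" region described by the invariant.
        auto const new_end =
            std::remove_if(v.begin(), v.end(), [](int x) { return x % 2 == 0; });

        assert(std::distance(v.begin(), new_end) == 3);  // 1, 3, 5 preserved

        // The container only shrinks once the tail is erased.
        v.erase(new_end, v.end());
        assert((v == std::vector<int>{1, 3, 5}));
    }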


@@ -214,7 +214,8 @@ private:
std::uint32_t accum = {};
for (std::uint32_t shift : {4u, 0u, 12u, 8u, 20u, 16u, 28u, 24u})
{
-if (auto const result = hexCharToUInt(*in++, shift, accum); result != ParseResult::okay)
+if (auto const result = hexCharToUInt(*in++, shift, accum);
+result != ParseResult::okay)
return Unexpected(result);
}
ret[i++] = accum;
@@ -253,7 +254,8 @@ public:
// This constructor is intended to be used at compile time since it might
// throw at runtime. Consider declaring this constructor consteval once
// we get to C++23.
-explicit constexpr base_uint(std::string_view sv) noexcept(false) : data_(parseFromStringViewThrows(sv))
+explicit constexpr base_uint(std::string_view sv) noexcept(false)
+: data_(parseFromStringViewThrows(sv))
{
}
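
The comment in this hunk explains why the constructor stays constexpr even though it can throw: when the constructor is evaluated in a constant expression, reaching the throw makes the initialization ill-formed, so a malformed literal is rejected at compile time instead of throwing at run time. A reduced sketch of that pattern, with a hypothetical Decimal type standing in for base_uint:

    #include <stdexcept>
    #include <string_view>

    struct Decimal
    {
        unsigned value = 0;

        // Throws on bad input at run time; reaching the throw while being
        // evaluated in a constant expression is a compile error instead.
        explicit constexpr Decimal(std::string_view sv) noexcept(false)
        {
            for (char c : sv)
            {
                if (c < '0' || c > '9')
                    throw std::invalid_argument("not a decimal digit");
                value = value * 10 + static_cast<unsigned>(c - '0');
            }
        }
    };

    constexpr Decimal ok{"42"};  // evaluated at compile time
    static_assert(ok.value == 42);
    // constexpr Decimal bad{"4x"};  // would not compile: throw reached in constexpr

    int
    main()
    {
        try
        {
            Decimal runtime{"4x"};  // the same input throws at run time
        }
        catch (std::invalid_argument const&)
        {
        }
    }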
@@ -442,7 +444,8 @@ public:
for (int i = WIDTH; i--;)
{
-std::uint64_t n = carry + boost::endian::big_to_native(data_[i]) + boost::endian::big_to_native(b.data_[i]);
+std::uint64_t n = carry + boost::endian::big_to_native(data_[i]) +
+boost::endian::big_to_native(b.data_[i]);
data_[i] = boost::endian::native_to_big(static_cast<std::uint32_t>(n));
carry = n >> 32;
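
The reflowed line above is part of a limb-wise add with carry: each pair of 32-bit limbs is summed into a 64-bit accumulator together with the incoming carry, the low 32 bits are written back to the limb, and the high bits carry into the next more significant limb. A standalone sketch of that carry chain on native-order limbs (the actual operator also converts each limb through boost::endian::big_to_native and native_to_big, omitted here):

    #include <array>
    #include <cassert>
    #include <cstdint>

    // Four 32-bit limbs, most significant first, loosely mirroring the
    // WIDTH-limb layout of base_uint (endianness handling omitted).
    using U128 = std::array<std::uint32_t, 4>;

    U128&
    add(U128& a, U128 const& b)
    {
        std::uint64_t carry = 0;
        for (int i = 4; i--;)  // start at the least significant limb
        {
            std::uint64_t const n = carry + a[i] + b[i];
            a[i] = static_cast<std::uint32_t>(n);  // low 32 bits stay here
            carry = n >> 32;                       // high bits move up a limb
        }
        return a;
    }

    int
    main()
    {
        U128 a{0, 0, 0, 0xFFFFFFFFu};
        U128 b{0, 0, 0, 1};
        add(a, b);
        assert((a == U128{0, 0, 1, 0}));  // the carry propagated upward
    }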


@@ -15,7 +15,8 @@ namespace xrpl {
// A few handy aliases
-using days = std::chrono::duration<int, std::ratio_multiply<std::chrono::hours::period, std::ratio<24>>>;
+using days =
+std::chrono::duration<int, std::ratio_multiply<std::chrono::hours::period, std::ratio<24>>>;
using weeks = std::chrono::duration<int, std::ratio_multiply<days::period, std::ratio<7>>>;
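
The reflowed alias keeps the same arithmetic: a day is 24 hours and a week is 7 days, built with std::ratio_multiply so conversions stay exact. A quick compile-time check of how the aliases compose, using the same definitions outside the xrpl namespace:

    #include <chrono>
    #include <ratio>

    using days = std::chrono::
        duration<int, std::ratio_multiply<std::chrono::hours::period, std::ratio<24>>>;
    using weeks = std::chrono::duration<int, std::ratio_multiply<days::period, std::ratio<7>>>;

    // Exact ratios, so the relationships hold at compile time.
    static_assert(days{1} == std::chrono::hours{24});
    static_assert(weeks{1} == days{7});
    static_assert(std::chrono::duration_cast<std::chrono::hours>(weeks{1}).count() == 168);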

Some files were not shown because too many files have changed in this diff.