Compare commits

...

52 Commits

Author SHA1 Message Date
Pratik Mankawde
014060370a fix(telemetry): move quorum/proposers attributes to consensus.accept span
Move validation_quorum and proposers_validated attributes from
consensus.accept.apply to consensus.accept span to match the design
spec. Both values are available in onAccept() scope.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 22:28:33 +01:00
Pratik Mankawde
8c222b9e05 feat(telemetry): add consensus validation span enrichment (Task 4.8)
Add validation ledger hash and full-validation flag to
consensus.validation.send spans, plus quorum and proposer count to
consensus.accept spans for trace-level agreement analysis.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 22:28:33 +01:00
Pratik Mankawde
95f0c8bf51 docs: add Task 4.8 consensus validation span enrichment for external dashboard parity
Adds ledger_hash, validation.full to validation send/receive spans,
and validation_quorum, proposers_validated to consensus.accept spans.
Foundation for Phase 7 ValidationTracker agreement computation.

Part of the external dashboard parity initiative across phases 2-11.
See docs/superpowers/specs/2026-03-30-external-dashboard-parity-design.md

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-31 22:28:33 +01:00
Pratik Mankawde
a127711b86 Phase 4: Consensus tracing - round lifecycle, proposals, validations, close time
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-31 22:28:33 +01:00
Pratik Mankawde
715c531512 feat(telemetry): add peer version attribute to tx.receive spans (Task 3.7)
Tag transaction receive spans with the relaying peer's rippled version
to enable version-mismatch correlation during network upgrades.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 22:28:27 +01:00
Pratik Mankawde
e6508a5bbc docs: add Task 3.8 TX span peer version attribute for external dashboard parity
Adds xrpl.peer.version attribute to tx.receive spans for version-mismatch
correlation during network upgrades.

Part of the external dashboard parity initiative across phases 2-11.
See docs/superpowers/specs/2026-03-30-external-dashboard-parity-design.md

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-31 22:28:27 +01:00
Pratik Mankawde
88d17e4c04 Phase 3: Transaction tracing - protobuf context propagation, PeerImp, NetworkOPs
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-31 22:28:27 +01:00
Pratik Mankawde
9ab8570153 docs(telemetry): replace Jaeger references with Tempo in Phase 2-5 task lists
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 22:28:22 +01:00
Pratik Mankawde
8f2507a945 feat(telemetry): add node health attributes to RPC spans (Task 2.8)
Add amendment_blocked and server_state span attributes to every
rpc.command.* span so operators can correlate RPC behavior with node state.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 22:28:22 +01:00
Pratik Mankawde
befffc573c docs: add Task 2.8 RPC span attribute enrichment for external dashboard parity
Adds node health context (amendment_blocked, server_state) to rpc.command.*
spans, inspired by the community xrpl-validator-dashboard.

Part of the external dashboard parity initiative across phases 2-11.
See docs/superpowers/specs/2026-03-30-external-dashboard-parity-design.md

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-31 22:28:22 +01:00
Pratik Mankawde
945faac770 Phase 2: RPC tracing - span macros, attributes, WebSocket, command spans
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-31 22:28:22 +01:00
Pratik Mankawde
c8b1686ce4 Phase 1b: Telemetry core infrastructure - CMake, Conan, SpanGuard, config
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-31 22:28:22 +01:00
Pratik Mankawde
ba92ccad14 Phase 1b: Telemetry core infrastructure - CMake, Conan, SpanGuard, config
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-31 22:28:22 +01:00
Pratik Mankawde
012e453997 Phase 1c: RPC integration - ServerHandler tracing, telemetry config wiring
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-31 22:28:17 +01:00
Pratik Mankawde
79b95c8cc6 Phase 1b: Telemetry core infrastructure - CMake, Conan, SpanGuard, config
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-31 22:28:17 +01:00
Pratik Mankawde
34d0f40ee7 Phase 1b: Telemetry core infrastructure - CMake, Conan, SpanGuard, config
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-31 22:28:17 +01:00
Pratik Mankawde
8421134420 refactor(telemetry): remove Jaeger service, exporter, and datasource
Tempo is now the sole trace backend. Remove Jaeger all-in-one service
from docker-compose, otlp/jaeger exporter from OTel Collector config,
and Jaeger Grafana datasource provisioning file.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 22:28:12 +01:00
Pratik Mankawde
a7470615be Phase 1b: Telemetry core infrastructure - CMake, Conan, SpanGuard, config
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-31 22:28:12 +01:00
Pratik Mankawde
33b09d29e1 docs(telemetry): replace Jaeger with Tempo in architecture diagram
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 22:22:34 +01:00
Pratik Mankawde
f135842071 docs: correct OTel overhead estimates against SDK benchmarks
Verified CPU, memory, and network overhead calculations against
official OTel C++ SDK benchmarks (969 CI runs) and source code
analysis. Key corrections:

- Span creation: 200-500ns → 500-1000ns (SDK BM_SpanCreation median
  ~1000ns; original estimate matched API no-op, not SDK path)
- Per-TX overhead: 2.4μs → 4.0μs (2.0% vs 1.2%; still within 1-3%)
- Active span memory: ~200 bytes → ~500-800 bytes (Span wrapper +
  SpanData + std::map attribute storage)
- Static memory: ~456KB → ~8.3MB (BatchSpanProcessor worker thread
  stack ~8MB was omitted)
- Total memory ceiling: ~2.3MB → ~10MB
- Memory success metric target: <5MB → <10MB
- AddEvent: 50-80ns → 100-200ns

Added Section 3.5.4 with links to all benchmark sources.
Updated presentation.md with matching corrections.
High-level conclusions unchanged (1-3% CPU, negligible consensus).

Also includes: review fixes, cross-document consistency improvements,
additional component tracing docs (PathFinding, TxQ, Validator, etc.),
context size corrections (32 → 25 bytes).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-30 15:55:26 +01:00
Pratik Mankawde
a9bc525f22 moved presentation.md file
Signed-off-by: Pratik Mankawde <3397372+pratikmankawde@users.noreply.github.com>
2026-03-30 15:55:26 +01:00
Pratik Mankawde
5c9102bd9a Remove effort estimates from implementation phases document
Strip effort/risk columns from task tables and remove the §6.9 Effort
Summary section with its pie chart and resource requirements table.
Renumber §6.10 Quick Wins → §6.9.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-30 15:55:26 +01:00
Pratik Mankawde
c556f3471b Add Phase 4a implementation status to plan docs
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-30 15:55:26 +01:00
Pratik Mankawde
2fb6124412 Appendix: add 00-tracing-fundamentals.md and POC_taskList.md to document index
Split document index into Plan Documents and Task Lists sections.
These files were introduced in this branch but missing from the index.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-30 15:55:26 +01:00
Pratik Mankawde
e482b56f58 Phase 1a: OpenTelemetry plan documentation
Add comprehensive planning documentation for the OpenTelemetry
distributed tracing integration:

- Tracing fundamentals and concepts
- Architecture analysis of rippled's tracing surface area
- Design decisions and trade-offs
- Implementation strategy and code samples
- Configuration reference
- Implementation phases roadmap
- Observability backend comparison
- POC task list and presentation materials

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-30 15:55:26 +01:00
dependabot[bot]
de671863e2 ci: [DEPENDABOT] bump actions/deploy-pages from 4.0.5 to 5.0.0 (#6684)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-30 14:09:57 +00:00
dependabot[bot]
e0cabb9f8c ci: [DEPENDABOT] bump codecov/codecov-action from 5.5.3 to 6.0.0 (#6685)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-30 13:57:32 +00:00
Pratik Mankawde
3d9c545f59 fix: Guard Coro::resume() against completed coroutines (#6608)
Signed-off-by: Pratik Mankawde <3397372+pratikmankawde@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-27 18:52:18 +00:00
Vito Tumas
9b944ee8c2 refactor: Split LoanInvariant into LoanBrokerInvariant and LoanInvariant (#6674) 2026-03-27 18:35:42 +00:00
Ayaz Salikhov
509677abfd ci: Don't publish docs on release branches (#6673) 2026-03-26 14:11:37 +00:00
Jingchen
addc1e8e25 refactor: Make function naming in ServiceRegistry consistent (#6390)
Signed-off-by: JCW <a1q123456@users.noreply.github.com>
Co-authored-by: Ed Hennis <ed@ripple.com>
2026-03-26 14:11:16 +00:00
Valentin Balaschenko
faf69da4b0 chore: Shorten job names to stay within Linux 15-char thread limit (#6669) 2026-03-26 14:10:51 +00:00
Vito Tumas
76e3b4fb0f fix: Improve loan invariant message (#6668) 2026-03-26 12:40:26 +00:00
Ayaz Salikhov
e8bdbf975a ci: Upload artifacts only in public repositories (#6670) 2026-03-26 12:37:37 +00:00
Ayaz Salikhov
2c765f6eb0 ci: Add conflicting-pr workflow (#6656)
Co-authored-by: Bart <bthomee@users.noreply.github.com>
2026-03-26 00:18:17 +00:00
Mayukha Vadari
a9269fa846 chore: Add more AI tools to .gitignore (#6658)
Co-authored-by: xrplf-ai-reviewer[bot] <266832837+xrplf-ai-reviewer[bot]@users.noreply.github.com>
2026-03-25 23:55:50 +00:00
Jingchen
15fd9feae5 chore: Show warning message if user may need to connect to VPN (#6619)
Signed-off-by: JCW <a1q123456@users.noreply.github.com>
2026-03-25 23:54:49 +00:00
Vito Tumas
b9d07730f3 feat: Add placeholder amendment for assorted bug fixes (#6652) 2026-03-25 23:54:33 +00:00
Ayaz Salikhov
85b65c8e9a chore: Update sqlite3->3.51.0, protobuf->6.33.5, openssl->3.6.1, grpc->1.78.1 (#6653) 2026-03-25 18:22:50 +00:00
Jingchen
8f182e825a refactor: Modularise ledger (#6536)
Signed-off-by: JCW <a1q123456@users.noreply.github.com>
Co-authored-by: Bart <bthomee@users.noreply.github.com>
Co-authored-by: Pratik Mankawde <3397372+pratikmankawde@users.noreply.github.com>
2026-03-25 16:32:45 +00:00
Ayaz Salikhov
cd78569d94 chore: Use unpatched version of soci (#6649) 2026-03-25 16:06:31 +00:00
Mayukha Vadari
2c7af360c2 fix: Remove unused/unreachable transactor code (#6612) 2026-03-25 16:02:14 +00:00
Alex Kremer
403fd7c649 fix: More clang-tidy issues found after merging to develop (#6640)
Co-authored-by: Ayaz Salikhov <mathbunnyru@users.noreply.github.com>
Co-authored-by: Bart <bthomee@users.noreply.github.com>
2026-03-25 14:28:28 +00:00
Ayaz Salikhov
dfed0481f7 docs: Rewrite conan docs for custom recipes (#6647) 2026-03-25 14:25:33 +00:00
Bart
0dc0c8e912 docs: Update LICENSE.md year to present (#6636)
Co-authored-by: Bart <11445373+bthomee@users.noreply.github.com>
2026-03-25 14:24:10 +00:00
Ayaz Salikhov
0510ee47d7 chore: Update some external dependencies (#6642) 2026-03-25 10:44:14 +00:00
Ayaz Salikhov
589c9c694c chore: Update external dependencies due to upstream merge (#6630) 2026-03-24 23:18:41 +00:00
Jingchen
4096623ae1 chore: Remove the forward declarations that cause build errors when unity build is enabled (#6633)
Signed-off-by: JCW <a1q123456@users.noreply.github.com>
2026-03-24 23:00:41 +00:00
Alex Kremer
dda162087f docs: Add note about clang-tidy installation (#6634) 2026-03-24 21:18:03 +00:00
Mayukha Vadari
85a4015a64 fix: Assorted Permissioned Domain fixes (#6587) 2026-03-24 18:53:57 +00:00
Mayukha Vadari
f7bb4018fa fix: Assorted Vault fixes (#6607) 2026-03-24 18:53:49 +00:00
Alex Kremer
0eedefbf45 refactor: Enable more clang-tidy readability checks (#6595)
Co-authored-by: Sergey Kuznetsov <kuzzz99@gmail.com>
2026-03-24 15:42:12 +00:00
447 changed files with 15558 additions and 3099 deletions

View File

@@ -104,10 +104,13 @@ Checks: "-*,
readability-const-return-type,
readability-container-contains,
readability-container-size-empty,
readability-convert-member-functions-to-static,
readability-duplicate-include,
readability-else-after-return,
readability-enum-initial-value,
readability-implicit-bool-conversion,
readability-make-member-function-const,
readability-math-missing-parentheses,
readability-misleading-indentation,
readability-non-const-parameter,
readability-redundant-casting,
@@ -116,7 +119,9 @@ Checks: "-*,
readability-redundant-member-init,
readability-redundant-string-init,
readability-reference-to-constructed-temporary,
readability-simplify-boolean-expr,
readability-static-definition-in-anonymous-namespace,
readability-suspicious-call-argument,
readability-use-std-min-max
"
# ---
@@ -127,14 +132,9 @@ Checks: "-*,
# misc-include-cleaner,
# misc-redundant-expression,
#
# readability-convert-member-functions-to-static,
# readability-implicit-bool-conversion,
# readability-inconsistent-declaration-parameter-name,
# readability-inconsistent-declaration-parameter-name, # in this codebase this check will break a lot of arg names
# readability-static-accessed-through-instance, # this check is probably unnecessary. it makes the code less readable
# readability-identifier-naming,
# readability-math-missing-parentheses,
# readability-simplify-boolean-expr,
# readability-suspicious-call-argument,
# readability-static-accessed-through-instance,
#
# modernize-concat-nested-namespaces,
# modernize-pass-by-value,

View File

@@ -21,6 +21,7 @@ libxrpl.protocol > xrpl.json
libxrpl.protocol > xrpl.protocol
libxrpl.protocol_autogen > xrpl.protocol_autogen
libxrpl.rdb > xrpl.basics
libxrpl.rdb > xrpl.core
libxrpl.rdb > xrpl.rdb
libxrpl.resource > xrpl.basics
libxrpl.resource > xrpl.json
@@ -33,6 +34,8 @@ libxrpl.server > xrpl.server
libxrpl.shamap > xrpl.basics
libxrpl.shamap > xrpl.protocol
libxrpl.shamap > xrpl.shamap
libxrpl.telemetry > xrpl.basics
libxrpl.telemetry > xrpl.telemetry
libxrpl.tx > xrpl.basics
libxrpl.tx > xrpl.conditions
libxrpl.tx > xrpl.core
@@ -90,7 +93,9 @@ test.core > xrpl.server
test.csf > xrpl.basics
test.csf > xrpld.consensus
test.csf > xrpl.json
test.csf > xrpl.ledger
test.csf > xrpl.protocol
test.csf > xrpl.telemetry
test.json > test.jtx
test.json > xrpl.json
test.jtx > xrpl.basics
@@ -108,7 +113,6 @@ test.jtx > xrpl.tx
test.ledger > test.jtx
test.ledger > test.toplevel
test.ledger > xrpl.basics
test.ledger > xrpld.app
test.ledger > xrpld.core
test.ledger > xrpl.ledger
test.ledger > xrpl.protocol
@@ -125,6 +129,7 @@ test.overlay > xrpl.basics
test.overlay > xrpld.app
test.overlay > xrpld.overlay
test.overlay > xrpld.peerfinder
test.overlay > xrpl.ledger
test.overlay > xrpl.nodestore
test.overlay > xrpl.protocol
test.overlay > xrpl.shamap
@@ -175,15 +180,16 @@ test.toplevel > xrpl.json
test.unit_test > xrpl.basics
test.unit_test > xrpl.protocol
tests.libxrpl > xrpl.basics
tests.libxrpl > xrpld.telemetry
tests.libxrpl > xrpl.json
tests.libxrpl > xrpl.net
tests.libxrpl > xrpl.protocol
tests.libxrpl > xrpl.protocol_autogen
tests.libxrpl > xrpl.telemetry
xrpl.conditions > xrpl.basics
xrpl.conditions > xrpl.protocol
xrpl.core > xrpl.basics
xrpl.core > xrpl.json
xrpl.core > xrpl.ledger
xrpl.core > xrpl.protocol
xrpl.json > xrpl.basics
xrpl.ledger > xrpl.basics
@@ -213,6 +219,7 @@ xrpl.server > xrpl.shamap
xrpl.shamap > xrpl.basics
xrpl.shamap > xrpl.nodestore
xrpl.shamap > xrpl.protocol
xrpl.telemetry > xrpl.basics
xrpl.tx > xrpl.basics
xrpl.tx > xrpl.core
xrpl.tx > xrpl.ledger
@@ -222,6 +229,7 @@ xrpld.app > xrpl.basics
xrpld.app > xrpl.core
xrpld.app > xrpld.consensus
xrpld.app > xrpld.core
xrpld.app > xrpld.telemetry
xrpld.app > xrpl.json
xrpld.app > xrpl.ledger
xrpld.app > xrpl.net
@@ -231,10 +239,14 @@ xrpld.app > xrpl.rdb
xrpld.app > xrpl.resource
xrpld.app > xrpl.server
xrpld.app > xrpl.shamap
xrpld.app > xrpl.telemetry
xrpld.app > xrpl.tx
xrpld.consensus > xrpl.basics
xrpld.consensus > xrpld.telemetry
xrpld.consensus > xrpl.json
xrpld.consensus > xrpl.ledger
xrpld.consensus > xrpl.protocol
xrpld.consensus > xrpl.telemetry
xrpld.core > xrpl.basics
xrpld.core > xrpl.core
xrpld.core > xrpl.json
@@ -245,6 +257,7 @@ xrpld.overlay > xrpl.basics
xrpld.overlay > xrpl.core
xrpld.overlay > xrpld.core
xrpld.overlay > xrpld.peerfinder
xrpld.overlay > xrpld.telemetry
xrpld.overlay > xrpl.json
xrpld.overlay > xrpl.protocol
xrpld.overlay > xrpl.rdb
@@ -262,6 +275,7 @@ xrpld.perflog > xrpl.json
xrpld.rpc > xrpl.basics
xrpld.rpc > xrpl.core
xrpld.rpc > xrpld.core
xrpld.rpc > xrpld.telemetry
xrpld.rpc > xrpl.json
xrpld.rpc > xrpl.ledger
xrpld.rpc > xrpl.net
@@ -272,3 +286,4 @@ xrpld.rpc > xrpl.resource
xrpld.rpc > xrpl.server
xrpld.rpc > xrpl.tx
xrpld.shamap > xrpl.shamap
xrpld.telemetry > xrpl.telemetry

25 .github/workflows/conflicting-pr.yml vendored Normal file
View File

@@ -0,0 +1,25 @@
name: Label PRs with merge conflicts
on:
  # So that PRs touching the same files as the push are updated.
  push:
  # So that the `dirtyLabel` is removed if conflicts are resolved.
  # We recommend `pull_request_target` so that github secrets are available.
  # In `pull_request` we wouldn't be able to change labels of fork PRs.
  pull_request_target:
    types: [synchronize]
permissions:
  pull-requests: write
jobs:
  main:
    runs-on: ubuntu-latest
    steps:
      - name: Check if PRs are dirty
        uses: eps1lon/actions-label-merge-conflict@1df065ebe6e3310545d4f4c4e862e43bdca146f0 # v3.0.3
        with:
          dirtyLabel: "PR: has conflicts"
          repoToken: "${{ secrets.GITHUB_TOKEN }}"
          commentOnDirty: "This PR has conflicts, please resolve them in order for the PR to be reviewed."
          commentOnClean: "All conflicts have been resolved. Assigned reviewers can now start or resume their review."

View File

@@ -6,7 +6,6 @@ on:
push:
branches:
- "develop"
- "release*"
paths:
- ".github/workflows/publish-docs.yml"
- "*.md"
@@ -100,4 +99,4 @@ jobs:
steps:
- name: Deploy to GitHub Pages
id: deploy
uses: actions/deploy-pages@d6db90164ac5ed86f2b6aed7e0febac5b3c0c03e # v4.0.5
uses: actions/deploy-pages@cd2ce8fcbc39b97be8ca5fce6e763baed58fa128 # v5.0.0

View File

@@ -101,7 +101,7 @@ jobs:
steps:
- name: Cleanup workspace (macOS and Windows)
if: ${{ runner.os == 'macOS' || runner.os == 'Windows' }}
uses: XRPLF/actions/cleanup-workspace@c7d9ce5ebb03c752a354889ecd870cadfc2b1cd4
uses: XRPLF/actions/cleanup-workspace@cf0433aa74563aead044a1e395610c96d65a37cf
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
@@ -199,7 +199,7 @@ jobs:
fi
- name: Upload the binary (Linux)
if: ${{ github.repository == 'XRPLF/rippled' && runner.os == 'Linux' }}
if: ${{ github.event.repository.visibility == 'public' && runner.os == 'Linux' }}
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
with:
name: xrpld-${{ inputs.config_name }}
@@ -263,18 +263,6 @@ jobs:
[ "$COVERAGE_ENABLED" = "true" ] && BUILD_NPROC=$(( BUILD_NPROC - 2 ))
./xrpld --unittest --unittest-jobs "${BUILD_NPROC}" 2>&1 | tee unittest.log
- name: Show test failure summary
if: ${{ failure() && !inputs.build_only }}
working-directory: ${{ runner.os == 'Windows' && format('{0}/{1}', env.BUILD_DIR, inputs.build_type) || env.BUILD_DIR }}
run: |
if [ ! -f unittest.log ]; then
echo "unittest.log not found; embedded tests may not have run."
exit 0
fi
if ! grep -E "failed" unittest.log; then
echo "Log present but no failure lines found in unittest.log."
fi
- name: Debug failure (Linux)
if: ${{ failure() && runner.os == 'Linux' && !inputs.build_only }}
run: |
@@ -298,7 +286,7 @@ jobs:
- name: Upload coverage report
if: ${{ github.repository == 'XRPLF/rippled' && !inputs.build_only && env.COVERAGE_ENABLED == 'true' }}
uses: codecov/codecov-action@1af58845a975a7985b0beb0cbe6fbbb71a41dbad # v5.5.3
uses: codecov/codecov-action@57e3a136b779b570ffcdbf80b3bdc90e7fab3de2 # v6.0.0
with:
disable_search: true
disable_telem: true

View File

@@ -78,12 +78,12 @@ jobs:
id: run_clang_tidy
continue-on-error: true
env:
TARGETS: ${{ inputs.files != '' && inputs.files || 'src tests' }}
FILES: ${{ inputs.files }}
run: |
run-clang-tidy -j ${{ steps.nproc.outputs.nproc }} -p "${BUILD_DIR}" ${TARGETS} 2>&1 | tee clang-tidy-output.txt
run-clang-tidy -j ${{ steps.nproc.outputs.nproc }} -p "$BUILD_DIR" $FILES 2>&1 | tee clang-tidy-output.txt
- name: Upload clang-tidy output
if: steps.run_clang_tidy.outcome != 'success'
if: ${{ github.event.repository.visibility == 'public' && steps.run_clang_tidy.outcome != 'success' }}
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
with:
name: clang-tidy-results

View File

@@ -51,5 +51,5 @@ jobs:
if: ${{ always() && !cancelled() && (!inputs.check_only_changed || needs.determine-files.outputs.any_cpp_changed == 'true' || needs.determine-files.outputs.clang_tidy_config_changed == 'true') }}
uses: ./.github/workflows/reusable-clang-tidy-files.yml
with:
files: ${{ needs.determine-files.outputs.clang_tidy_config_changed == 'true' && '' || (inputs.check_only_changed && needs.determine-files.outputs.all_changed_files || '') }}
files: ${{ (needs.determine-files.outputs.clang_tidy_config_changed != 'true' && inputs.check_only_changed) && needs.determine-files.outputs.all_changed_files || '' }}
create_issue_on_failure: ${{ inputs.create_issue_on_failure }}

View File

@@ -64,7 +64,7 @@ jobs:
steps:
- name: Cleanup workspace (macOS and Windows)
if: ${{ runner.os == 'macOS' || runner.os == 'Windows' }}
uses: XRPLF/actions/cleanup-workspace@c7d9ce5ebb03c752a354889ecd870cadfc2b1cd4
uses: XRPLF/actions/cleanup-workspace@cf0433aa74563aead044a1e395610c96d65a37cf
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2

2 .gitignore vendored
View File

@@ -71,6 +71,8 @@ DerivedData
/.zed/
# AI tools.
/.agent
/.agents
/.augment
/.claude
/CLAUDE.md

View File

@@ -125,9 +125,9 @@ default profile.
### Patched recipes
The recipes in Conan Center occasionally need to be patched for compatibility
with the latest version of `xrpld`. We maintain a fork of the Conan Center
[here](https://github.com/XRPLF/conan-center-index/) containing the patches.
Occasionally, we need patched recipes or recipes not present in Conan Center.
We maintain a fork of the Conan Center Index
[here](https://github.com/XRPLF/conan-center-index/) containing the modified and newly added recipes.
To ensure our patched recipes are used, you must add our Conan remote at a
higher priority (lower index) than the default Conan Center remote, so it is consulted first. You
@@ -137,19 +137,11 @@ can do this by running:
conan remote add --index 0 xrplf https://conan.ripplex.io
```
Alternatively, you can pull the patched recipes into the repository and use them
locally:
Alternatively, you can pull our recipes from the repository and export them locally:
```bash
# Extract the version number from the lockfile.
function extract_version {
version=$(cat conan.lock | sed -nE "s@.+${1}/(.+)#.+@\1@p" | head -n1)
echo ${version}
}
# Define which recipes to export.
recipes=('ed25519' 'grpc' 'nudb' 'openssl' 'secp256k1' 'snappy' 'soci')
folders=('all' 'all' 'all' '3.x.x' 'all' 'all' 'all')
recipes=('abseil' 'ed25519' 'grpc' 'm4' 'mpt-crypto' 'nudb' 'openssl' 'secp256k1' 'snappy' 'soci' 'wasm-xrplf' 'wasmi')
# Selectively check out the recipes from our CCI fork.
cd external
@@ -158,29 +150,19 @@ cd conan-center-index
git init
git remote add origin git@github.com:XRPLF/conan-center-index.git
git sparse-checkout init
for ((index = 1; index <= ${#recipes[@]}; index++)); do
recipe=${recipes[index]}
folder=${folders[index]}
echo "Checking out recipe '${recipe}' from folder '${folder}'..."
git sparse-checkout add recipes/${recipe}/${folder}
for recipe in "${recipes[@]}"; do
echo "Checking out recipe '${recipe}'..."
git sparse-checkout add recipes/${recipe}
done
git fetch origin master
git checkout master
cd ../..
# Export the recipes into the local cache.
for ((index = 1; index <= ${#recipes[@]}; index++)); do
recipe=${recipes[index]}
folder=${folders[index]}
version=$(extract_version ${recipe})
echo "Exporting '${recipe}/${version}' from '${recipe}/${folder}'..."
conan export --version $(extract_version ${recipe}) \
external/conan-center-index/recipes/${recipe}/${folder}
done
./export_all.sh
cd ../../
```
In the case we switch to a newer version of a dependency that still requires a
patch, it will be necessary for you to pull in the changes and re-export the
patch or add a new dependency, it will be necessary for you to pull in the changes and re-export the
updated dependencies with the newer version. However, if we switch to a newer
version that no longer requires a patch, no action is required on your part, as
the new recipe will be automatically pulled from the official Conan Center.
@@ -189,6 +171,8 @@ the new recipe will be automatically pulled from the official Conan Center.
> You might need to add `--lockfile=""` to your `conan install` command
> to avoid automatic use of the existing `conan.lock` file when you run
> `conan export` manually on your machine
>
> This is not recommended though, as you might end up using different revisions of recipes.
### Conan profile tweaks
@@ -204,39 +188,14 @@ Possible values are ['5.0', '5.1', '6.0', '6.1', '7.0', '7.3', '8.0', '8.1',
Read "http://docs.conan.io/2/knowledge/faq.html#error-invalid-setting"
```
you need to amend the list of compiler versions in
`$(conan config home)/settings.yml`, by appending the required version number(s)
you need to add your compiler to the list of compiler versions in
`$(conan config home)/settings_user.yml`, by adding the required version number(s)
to the `version` array specific for your compiler. For example:
```yaml
apple-clang:
  version:
    [
      "5.0",
      "5.1",
      "6.0",
      "6.1",
      "7.0",
      "7.3",
      "8.0",
      "8.1",
      "9.0",
      "9.1",
      "10.0",
      "11.0",
      "12.0",
      "13",
      "13.0",
      "13.1",
      "14",
      "14.0",
      "15",
      "15.0",
      "16",
      "16.0",
      "17",
      "17.0",
    ]
compiler:
  apple-clang:
    version: ["17.0"]
```
#### Multiple compilers

View File

@@ -117,6 +117,18 @@ if(rocksdb)
target_link_libraries(xrpl_libs INTERFACE RocksDB::rocksdb)
endif()
# OpenTelemetry distributed tracing (optional).
# When ON, links against opentelemetry-cpp and defines XRPL_ENABLE_TELEMETRY
# so that tracing macros in TracingInstrumentation.h are compiled in.
# When OFF (default), all tracing code compiles to no-ops with zero overhead.
# Enable via: conan install -o telemetry=True, or cmake -Dtelemetry=ON.
option(telemetry "Enable OpenTelemetry tracing" OFF)
if(telemetry)
find_package(opentelemetry-cpp CONFIG REQUIRED)
add_compile_definitions(XRPL_ENABLE_TELEMETRY)
message(STATUS "OpenTelemetry tracing enabled")
endif()
# Work around changes to Conan recipe for now.
if(TARGET nudb::core)
set(nudb nudb::core)

View File

@@ -259,6 +259,10 @@ There is a Continuous Integration job that runs clang-tidy on pull requests. The
This ensures that configuration changes don't introduce new warnings across the codebase.
### Installing clang-tidy
See the [environment setup guide](./docs/build/environment.md#clang-tidy) for platform-specific installation instructions.
### Running clang-tidy locally
Before running clang-tidy, you must build the project to generate required files (particularly protobuf headers). Refer to [`BUILD.md`](./BUILD.md) for build instructions.
@@ -266,10 +270,15 @@ Before running clang-tidy, you must build the project to generate required files
Then run clang-tidy on your local changes:
```
run-clang-tidy -p build src tests
run-clang-tidy -p build src include tests
```
This will check all source files in the `src` and `tests` directories using the compile commands from your `build` directory.
This will check all source files in the `src`, `include` and `tests` directories using the compile commands from your `build` directory.
If you wish to automatically fix whatever clang-tidy finds _and_ is capable of fixing, add `-fix` to the above command:
```
run-clang-tidy -p build -fix src include tests
```
## Contracts and instrumentation

View File

@@ -1,7 +1,7 @@
ISC License
Copyright (c) 2011, Arthur Britto, David Schwartz, Jed McCaleb, Vinnie Falco, Bob Way, Eric Lombrozo, Nikolaos D. Bougalis, Howard Hinnant.
Copyright (c) 2012-2025, the XRP Ledger developers.
Copyright (c) 2012-present, the XRP Ledger developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above

View File

@@ -0,0 +1,567 @@
# Distributed Tracing Fundamentals
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Next**: [Architecture Analysis](./01-architecture-analysis.md)
---
## What is Distributed Tracing?
Distributed tracing is a method for tracking individual requests and events as they flow through a distributed system. In a network like the XRP Ledger, a single transaction touches multiple independent nodes, each with no shared memory or logging. Distributed tracing connects these dots.
**Without tracing:** You see isolated logs on each node with no way to correlate them.
**With tracing:** You see the complete journey of a transaction or an event across all nodes it touched.
---
## Actors and Actions at a Glance
### Actors
| Who (Plain English) | Technical Term |
| ---------------------------------------------- | --------------- |
| A single unit of work being tracked | Span |
| The complete journey of a request | Trace |
| Data that links spans across services | Trace Context |
| Code that creates spans and propagates context | Instrumentation |
| Service that receives and processes traces | Collector |
| Storage and visualization system | Backend (Tempo) |
| Decision logic for which traces to keep | Sampler |
### Actions
| What Happens (Plain English) | Technical Term |
| --------------------------------------- | ----------------------- |
| Start tracking a new operation | Create a Span |
| Connect a child operation to its parent | Set `parent_span_id` |
| Group all related operations together | Share a `trace_id` |
| Pass tracking data between services | Context Propagation |
| Decide whether to record a trace | Sampling (Head or Tail) |
| Send completed traces to storage | Export (OTLP) |
---
## Core Concepts
### 1. Trace
A **trace** represents the entire journey of a request through the system. It has a unique `trace_id` that stays constant across all nodes.
```
Trace ID: abc123
├── Node A: received transaction
├── Node B: relayed transaction
├── Node C: included in consensus
└── Node D: applied to ledger
```
### 2. Span
A **span** represents a single unit of work within a trace. Each span has:
| Attribute | Description | Example |
| ---------------- | -------------------------------- | -------------------------- |
| `trace_id`       | Identifies the trace             | `abc123`                   |
| `span_id` | Unique identifier | `span456` |
| `parent_span_id` | Parent span (if any) | `p_span123` |
| `name` | Operation name | `rpc.submit` |
| `start_time` | When work began (local time) | `2024-01-15T10:30:00Z` |
| `end_time` | When work completed (local time) | `2024-01-15T10:30:00.050Z` |
| `attributes` | Key-value metadata | `tx.hash=ABC...` |
| `status`         | Status code (`OK`, `ERROR`, or `UNSET`) | `OK`               |
### 3. Trace Context
**Trace context** is the data that propagates between services to link spans together. It contains:
- `trace_id` - The trace this span belongs to
- `span_id` - The current span (becomes parent for child spans)
- `trace_flags` - Sampling decisions
---
## How Spans Form a Trace
Spans have parent-child relationships forming a tree structure:
```mermaid
flowchart TB
subgraph trace["Trace: abc123"]
A["tx.submit<br/>span_id: 001<br/>50ms"] --> B["tx.validate<br/>span_id: 002<br/>5ms"]
A --> C["tx.relay<br/>span_id: 003<br/>10ms"]
A --> D["tx.apply<br/>span_id: 004<br/>30ms"]
D --> E["ledger.update<br/>span_id: 005<br/>20ms"]
end
style A fill:#0d47a1,stroke:#082f6a,color:#ffffff
style B fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style C fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style D fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style E fill:#bf360c,stroke:#8c2809,color:#ffffff
```
**Reading the diagram:**
- **tx.submit (blue, root)**: The top-level span representing the entire transaction submission; all other spans are its descendants.
- **tx.validate, tx.relay, tx.apply (green)**: Direct children of tx.submit, representing the three main stages -- validation, relay to peers, and application to the ledger.
- **ledger.update (red)**: A grandchild span nested under tx.apply, representing the actual ledger state mutation triggered by applying the transaction.
- **Arrows (parent to child)**: Each arrow indicates a parent-child span relationship where the parent's completion depends on the child finishing.
The same trace visualized as a **timeline (Gantt chart)**:
```
Time → 0ms 10ms 20ms 30ms 40ms 50ms
├───────────────────────────────────────────┤
tx.submit│▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓│
├─────┤
tx.valid │▓▓▓▓▓│
│ ├──────────┤
tx.relay │ │▓▓▓▓▓▓▓▓▓▓│
│ ├────────────────────────────┤
tx.apply │ │▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓│
│ ├──────────────────┤
ledger │ │▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓│
```
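As code, the same tree is built through the OpenTelemetry C++ API's active-span mechanism. A minimal sketch (span names follow the diagram above; with no SDK configured, the API falls back to a no-op tracer, so this compiles and runs but records nothing):
```cpp
#include "opentelemetry/trace/provider.h"
#include "opentelemetry/trace/scope.h"

namespace trace_api = opentelemetry::trace;

int main()
{
    auto tracer = trace_api::Provider::GetTracerProvider()->GetTracer("rippled");

    // Root span for the whole submission. Scope makes it the active span,
    // so spans started while it is active become its children.
    auto root = tracer->StartSpan("tx.submit");
    {
        trace_api::Scope rootScope{root};

        tracer->StartSpan("tx.validate")->End();
        tracer->StartSpan("tx.relay")->End();

        auto apply = tracer->StartSpan("tx.apply");
        {
            trace_api::Scope applyScope{apply};
            // Grandchild: nested under tx.apply, matching the Gantt chart.
            tracer->StartSpan("ledger.update")->End();
        }
        apply->End();
    }
    root->End();
}
```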
---
## Span Relationships
Spans don't always form simple parent-child trees. Distributed tracing defines several relationship types to capture different causal patterns:
### 1. Parent-Child (ChildOf)
The default relationship. The parent span **depends on** or **contains** the child span. The child runs within the scope of the parent.
```
tx.submit (parent)
├── tx.validate (child) ← parent waits for this
├── tx.relay (child) ← parent waits for this
└── tx.apply (child) ← parent waits for this
```
**When to use:** Synchronous calls, nested operations, any case where the parent's completion depends on the child.
### 2. Follows-From
A causal relationship where the first span **triggers** the second, but does **not wait** for it. The originator fires and moves on.
```
Time →
tx.receive [=======]
↓ triggers (follows-from)
tx.relay [===========] ← runs independently
```
**When to use:** Asynchronous jobs, queued work, fire-and-forget patterns. For example, a node receives a transaction and queues it for relay — the relay span _follows from_ the receive span but the receiver doesn't wait for relaying to complete.
> **OpenTracing** defined `FollowsFrom` as a first-class reference type alongside `ChildOf`.
> **OpenTelemetry** represents this using **Span Links** with descriptive attributes instead (see below).
### 3. Span Links (Cross-Trace and Non-Hierarchical)
Links connect spans that are **causally related but not in a parent-child hierarchy**. Unlike parent-child, links can cross trace boundaries.
```
Trace A Trace B
────── ──────
batch.schedule batch.execute
├─ item.enqueue (span X) ┌──► process.item
├─ item.enqueue (span Y) ───┤ (links to X, Y, Z)
├─ item.enqueue (span Z) └──►
```
**Use cases:**
| Pattern | Description |
| -------------------- | --------------------------------------------------------------------------- |
| **Batch processing** | A batch span links back to all individual spans that contributed to it |
| **Fan-in** | An aggregation span links to the multiple producer spans it merges |
| **Fan-out** | Multiple downstream spans link back to the single span that triggered them |
| **Async handoff** | A deferred job links back to the request that queued it (follows-from) |
| **Cross-trace** | Correlating spans across independent traces (e.g., retries, related events) |
**Link structure:** Each link carries the target span's context plus optional attributes:
```
Link {
trace_id: <target trace>
span_id: <target span>
attributes: { "link.description": "triggered by batch scheduler" }
}
```
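In the OpenTelemetry C++ API, links are supplied when the span is started (the current spec does not allow adding them afterwards). A minimal sketch, assuming the linked span's `SpanContext` was captured earlier, e.g. via `span->GetContext()`:
```cpp
#include "opentelemetry/trace/provider.h"

namespace trace_api = opentelemetry::trace;

// Create the consumer-side span with a link back to an enqueue span from
// another trace. Attributes on the link describe the relationship.
void startBatchItemSpan(trace_api::SpanContext const& enqueueCtx)
{
    auto tracer = trace_api::Provider::GetTracerProvider()->GetTracer("rippled");

    auto span = tracer->StartSpan(
        "process.item",
        {},  // span attributes (none in this sketch)
        {{enqueueCtx, {{"link.description", "triggered by batch scheduler"}}}});
    span->End();
}
```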
### Relationship Summary
```mermaid
flowchart LR
subgraph parent_child["Parent-Child"]
direction TB
P["Parent"] --> C["Child"]
end
subgraph follows_from["Follows-From"]
direction TB
A["Span A"] -.->|triggers| B["Span B"]
end
subgraph links["Span Links"]
direction TB
X["Span X\n(Trace 1)"] -.-|link| Y["Span Y\n(Trace 2)"]
end
parent_child ~~~ follows_from ~~~ links
style P fill:#0d47a1,stroke:#082f6a,color:#ffffff
style C fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style A fill:#0d47a1,stroke:#082f6a,color:#ffffff
style B fill:#bf360c,stroke:#8c2809,color:#ffffff
style X fill:#4a148c,stroke:#38006b,color:#ffffff
style Y fill:#4a148c,stroke:#38006b,color:#ffffff
```
| Relationship | Same Trace? | Dependency? | OTel Mechanism |
| ---------------- | ----------- | -------------------------- | ----------------- |
| **Parent-Child** | Yes | Parent depends on child | `parent_span_id` |
| **Follows-From** | Usually | Causal but no dependency | Link + attributes |
| **Span Link** | Either | Correlation, no dependency | Link + attributes |
---
## Trace ID Generation
A `trace_id` is a 128-bit (16-byte) identifier that groups all spans belonging to one logical operation. How it's generated determines how easily you can find and correlate traces later.
### General Approaches
#### 1. Random (W3C Default)
Generate a random 128-bit ID when a trace starts. Standard approach for most services.
```
trace_id = random_128_bits()
```
| Pros | Cons |
| --------------------------- | --------------------------------------------- |
| Simple, standard | No natural correlation to domain events |
| Guaranteed unique per trace | If propagation is lost, trace is broken |
| Works with all OTel tooling | "Find trace for TX abc" requires index lookup |
#### 2. Deterministic (Derived from Domain Data)
Compute the trace_id from a hash of a natural identifier. Every node independently derives the **same** trace_id for the same event.
```
trace_id = SHA-256(domain_identifier)[0:16] // truncate to 128 bits
```
| Pros | Cons |
| --------------------------------------------------- | ---------------------------------------------------------- |
| Propagation-resilient — same ID computed everywhere | Same event processed twice (retry) shares trace_id |
| Natural search — domain ID maps directly to trace | Non-standard (tooling assumes random) |
| No coordination needed between nodes | 256→128 bit truncation (collision risk negligible at ~2⁶⁴) |
#### 3. Hybrid (Deterministic Prefix + Random Suffix)
First 8 bytes derived from domain data, last 8 bytes random.
```
trace_id = SHA-256(domain_identifier)[0:8] || random_64_bits()
```
| Pros | Cons |
| ------------------------------------------- | ---------------------------------------- |
| Prefix search: "find all traces for TX abc" | Must propagate to maintain full trace_id |
| Unique per processing instance | More complex generation logic |
| Retries get distinct trace_ids | Partial correlation only (prefix match) |
### XRPL Workflow Analysis
XRPL has a unique advantage: its core workflows produce **globally unique 256-bit hashes** that are known on every node. This makes deterministic trace_id generation practical in ways most systems can't achieve.
#### Natural Identifiers by Workflow
| Workflow | Natural Identifier | Size | Known at Start? | Same on All Nodes? |
| ------------------- | --------------------------------- | ---------- | ----------------------------- | -------------------------------- |
| **Transaction** | Transaction hash (`tid_`) | 256-bit | Yes — computed before signing | Yes — hash of canonical tx data |
| **Consensus round** | Previous ledger hash + ledger seq | 256+32 bit | Yes — known when round opens | Yes — all validators agree |
| **Validation** | Ledger hash being validated | 256-bit | Yes — from consensus result | Yes — same closed ledger |
| **Ledger catch-up** | Target ledger hash | 256-bit | Yes — we know what to fetch | Yes — identifies ledger globally |
#### Where These Identifiers Live in Code
```
Transaction: STTx::getTransactionID() → uint256 tid_
TMTransaction::rawTransaction → recompute hash from bytes
Consensus: ConsensusProposal::prevLedger_ → uint256 (previous ledger hash)
ConsensusProposal::position_ → uint256 (TxSet hash)
LedgerHeader::seq → uint32_t (ledger sequence)
Validation: STValidation::getLedgerHash() → uint256
STValidation::getNodeID() → NodeID (160-bit)
Ledger fetch: InboundLedger constructor → uint256 hash, uint32_t seq
TMGetLedger::ledgerHash → bytes (uint256)
```
### Recommended Strategy: Workflow-Scoped Deterministic
Each workflow type derives its trace_id from its natural domain identifier:
```
Transaction trace: trace_id = SHA-256("tx" || tx_hash)[0:16]
Consensus trace: trace_id = SHA-256("cons" || prev_ledger_hash || ledger_seq)[0:16]
Ledger catch-up: trace_id = SHA-256("fetch" || target_ledger_hash)[0:16]
```
The string prefix (`"tx"`, `"cons"`, `"fetch"`) prevents collisions between workflows that might share underlying hashes.
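A minimal sketch of the derivation (using OpenSSL's one-shot `SHA256` for illustration; the helper name and types here are not rippled APIs):
```cpp
#include <openssl/sha.h>

#include <array>
#include <cstring>
#include <string>

// trace_id = SHA-256(prefix || domain_hash)[0:16].  `prefix` is the
// workflow tag ("tx", "cons", "fetch"); `domainHash` is the 32-byte
// natural identifier, e.g. the transaction hash.
std::array<unsigned char, 16>
deriveTraceId(std::string prefix, std::array<unsigned char, 32> const& domainHash)
{
    prefix.append(reinterpret_cast<char const*>(domainHash.data()), domainHash.size());

    unsigned char digest[SHA256_DIGEST_LENGTH];
    SHA256(reinterpret_cast<unsigned char const*>(prefix.data()), prefix.size(), digest);

    std::array<unsigned char, 16> traceId;
    std::memcpy(traceId.data(), digest, traceId.size());
    return traceId;  // identical on every node that sees the same hash
}
```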
**Why this works for XRPL:**
1. **Propagation-resilient** — Even if a P2P message drops trace context, every node independently computes the same trace_id from the same tx_hash or ledger_hash. Spans still correlate.
2. **Zero-cost search** — "Show me the trace for transaction ABC" becomes a direct lookup: compute `SHA-256("tx" || ABC)[0:16]` and query. No secondary index needed.
3. **Cross-workflow linking via Span Links** — A consensus trace links to individual transaction traces. A validation span links to the consensus trace. This connects the full picture without forcing everything into one giant trace.
### Cross-Workflow Correlation
Each workflow gets its own trace. Span Links tie them together:
```mermaid
flowchart TB
subgraph tx_trace["Transaction Trace"]
direction LR
Tn["trace_id = f(tx_hash)"]:::note --> T1["tx.receive"] --> T2["tx.validate"] --> T3["tx.relay"]
end
subgraph cons_trace["Consensus Trace"]
direction LR
Cn["trace_id = f(prev_ledger, seq)"]:::note --> C1["cons.open"] --> C2["cons.propose"] --> C3["cons.accept"]
end
subgraph val_trace["Validation"]
direction LR
Vn["spans within consensus trace"]:::note --> V1["val.create"] --> V2["val.broadcast"]
end
subgraph fetch_trace["Catch-Up Trace"]
direction LR
Fn["trace_id = f(ledger_hash)"]:::note --> F1["fetch.request"] --> F2["fetch.receive"] --> F3["fetch.apply"]
end
C1 -.-|"span link\n(tx traces)"| T3
C3 --> V1
F1 -.-|"span link\n(target ledger)"| C3
classDef note fill:none,stroke:#888,stroke-dasharray:5 5,color:#333,font-style:italic
style T1 fill:#0d47a1,stroke:#082f6a,color:#ffffff
style T2 fill:#0d47a1,stroke:#082f6a,color:#ffffff
style T3 fill:#0d47a1,stroke:#082f6a,color:#ffffff
style C1 fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style C2 fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style C3 fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style V1 fill:#bf360c,stroke:#8c2809,color:#ffffff
style V2 fill:#bf360c,stroke:#8c2809,color:#ffffff
style F1 fill:#4a148c,stroke:#38006b,color:#ffffff
style F2 fill:#4a148c,stroke:#38006b,color:#ffffff
style F3 fill:#4a148c,stroke:#38006b,color:#ffffff
```
**Reading the diagram:**
- **Transaction Trace (blue)**: An independent trace whose `trace_id` is deterministically derived from the transaction hash. Contains receive, validate, and relay spans.
- **Consensus Trace (green)**: An independent trace whose `trace_id` is derived from the previous ledger hash and sequence number. Covers the open, propose, and accept phases.
- **Validation (red)**: Validation spans live within the consensus trace (not a separate trace). They are created after the accept phase completes.
- **Catch-Up Trace (purple)**: An independent trace for ledger acquisition, derived from the target ledger hash. Used when a node is behind and fetching missing ledgers.
- **Dotted arrows (span links)**: Cross-trace correlations. Consensus links to transaction traces it included; catch-up links to the consensus trace that produced the target ledger.
- **Solid arrow (C3 to V1)**: A parent-child relationship -- validation spans are direct children of the consensus accept span within the same trace.
**How a query flows:**
```
"Why was TX abc slow?"
1. Compute trace_id = SHA-256("tx" || abc)[0:16]
2. Find transaction trace → see it was included in consensus round N
3. Follow span link → consensus trace for round N
4. See which phase was slow (propose? accept?)
5. If a node was catching up, follow link → catch-up trace
```
### Trade-offs to Consider
| Concern | Mitigation |
| ----------------------------- | ----------------------------------------------------------------------------------------------------------------------------- |
| **Retries get same trace_id** | Add `attempt` attribute to root span; spans have unique span_ids and timestamps |
| **256→128 bit truncation** | Birthday-bound collision at ~2⁶⁴ operations — negligible for XRPL's throughput |
| **Non-standard generation** | OTel spec allows any 16-byte non-zero value; tooling works on the hex string |
| **Hash computation cost** | SHA-256 is ~0.3μs per call; XRPL already computes these hashes for other purposes |
| **Late-binding identifiers** | Ledger hash isn't known until after consensus — validation spans use ledger_seq as fallback, then link to the consensus trace |
---
## Distributed Traces Across Nodes
In distributed systems like rippled, traces span **multiple independent nodes**. The trace context must be propagated in network messages:
```mermaid
sequenceDiagram
participant Client
participant NodeA as Node A
participant NodeB as Node B
participant NodeC as Node C
Client->>NodeA: Submit TX<br/>(no trace context)
Note over NodeA: Creates new trace<br/>trace_id: abc123<br/>span: tx.receive
NodeA->>NodeB: Relay TX<br/>(trace_id: abc123, parent: 001)
Note over NodeB: Creates child span<br/>span: tx.relay<br/>parent_span_id: 001
NodeA->>NodeC: Relay TX<br/>(trace_id: abc123, parent: 001)
Note over NodeC: Creates child span<br/>span: tx.relay<br/>parent_span_id: 001
Note over NodeA,NodeC: All spans share trace_id: abc123<br/>enabling correlation across nodes
```
**Reading the diagram:**
- **Client**: The external entity that submits a transaction. It does not carry trace context -- the trace originates at the first node.
- **Node A**: The entry point that creates a new trace (trace_id: abc123) and the root span `tx.receive`. It relays the transaction to peers with trace context attached.
- **Node B and Node C**: Peer nodes that receive the relayed transaction along with the propagated trace context. Each creates a child span under Node A's span, preserving the same `trace_id`.
- **Arrows with trace context**: The relay messages carry `trace_id` and `parent_span_id`, allowing each downstream node to link its spans back to the originating span on Node A.
---
## Context Propagation
For traces to work across nodes, **trace context must be propagated** in messages.
### What's in the Context (~25 bytes)
| Field | Size | Description |
| ------------- | -------- | ------------------------------------------------------- |
| `trace_id` | 16 bytes | Identifies the entire trace (constant across all nodes) |
| `span_id` | 8 bytes | The sender's current span (becomes parent on receiver) |
| `trace_flags` | 1 byte | Sampling decision (bit 0 = sampled; bits 1-7 reserved) |
| `trace_state` | variable | Optional vendor-specific data (typically omitted) |
### How span_id Changes at Each Hop
Only **one** `span_id` travels in the context - the sender's current span. Each node:
1. Extracts the received `span_id` and uses it as the `parent_span_id`
2. Creates a **new** `span_id` for its own span
3. Sends its own `span_id` as the parent when forwarding
```
Node A Node B Node C
────── ────── ──────
Span AAA Span BBB Span CCC
│ │ │
▼ ▼ ▼
Context out: Context out: Context out:
├─ trace_id: abc123 ├─ trace_id: abc123 ├─ trace_id: abc123
├─ span_id: AAA ──────────► ├─ span_id: BBB ──────────► ├─ span_id: CCC ──────►
└─ flags: 01 └─ flags: 01 └─ flags: 01
│ │
parent = AAA parent = BBB
```
The `trace_id` stays constant, but `span_id` **changes at every hop** to maintain the parent-child chain.
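The per-hop rule in sketch form (the `TraceContext` struct and ID generator below are illustrative types for this document, not rippled or OpenTelemetry APIs):
```cpp
#include <cstdint>
#include <random>

// Illustrative wire format: what actually travels between nodes (~25 bytes).
struct TraceContext
{
    std::uint8_t traceId[16];  // constant for the whole trace
    std::uint64_t spanId;      // the sender's current span
    std::uint8_t flags;        // bit 0 = sampled
};

std::uint64_t
newSpanId()
{
    static std::mt19937_64 rng{std::random_device{}()};
    return rng();
}

// On receipt: the incoming span_id becomes our parent (step 1); we mint
// a fresh span_id for our own span (step 2) and forward that one (step 3).
TraceContext
onReceive(TraceContext const& incoming, std::uint64_t& parentSpanId)
{
    parentSpanId = incoming.spanId;
    TraceContext outgoing = incoming;  // trace_id and flags pass through
    outgoing.spanId = newSpanId();
    return outgoing;
}
```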
### Propagation Formats
There are two patterns, depending on the transport:
#### HTTP/RPC Headers (W3C Trace Context)
```
traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01
│ │ │ │
│ │ │ └── Flags (sampled)
│ │ └── Parent span ID (16 hex)
│ └── Trace ID (32 hex)
└── Version
```
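Formatting the header is plain string assembly; a sketch (field widths are fixed by the W3C spec: 2, 32, 16, and 2 hex digits, dash-separated):
```cpp
#include <cstdint>
#include <cstdio>
#include <string>

// Render a W3C traceparent value: version-traceid-parentid-flags.
std::string
makeTraceparent(std::uint8_t const (&traceId)[16], std::uint64_t spanId, std::uint8_t flags)
{
    char buf[60];
    int n = std::snprintf(buf, sizeof(buf), "00-");
    for (std::uint8_t b : traceId)
        n += std::snprintf(buf + n, sizeof(buf) - n, "%02x", static_cast<unsigned>(b));
    n += std::snprintf(buf + n, sizeof(buf) - n, "-%016llx-%02x",
        static_cast<unsigned long long>(spanId), static_cast<unsigned>(flags));
    return {buf, static_cast<std::size_t>(n)};
}
// makeTraceparent(id, 0x00f067aa0ba902b7, 0x01) yields the example above.
```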
#### Protocol Buffers (rippled P2P messages)
```protobuf
message TMTransaction {
bytes rawTransaction = 1;
// ... existing fields ...
// Trace context extension
bytes trace_parent = 100; // W3C traceparent
bytes trace_state = 101; // W3C tracestate
}
```
---
## Sampling
Not every trace needs to be recorded. **Sampling** reduces overhead:
### Head Sampling (at trace start)
```
Request arrives → Random 10% chance → Record or skip entire trace
```
- ✅ Low overhead
- ❌ May miss interesting traces
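Head sampling is a single cheap decision at trace start; a sketch (the 10% rate mirrors the example above):
```cpp
#include <random>

// Decide once, when the trace starts, whether to record it at all.
// The result travels in trace_flags bit 0 so every hop agrees.
bool
headSample(double rate = 0.10)
{
    static thread_local std::mt19937_64 rng{std::random_device{}()};
    return std::bernoulli_distribution{rate}(rng);
}
```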
### Tail Sampling (after trace completes)
```
Trace completes → Collector evaluates:
- Error? → KEEP
- Slow? → KEEP
- Normal? → Sample 10%
```
- ✅ Never loses important traces
- ❌ Higher memory usage at collector
---
## Key Benefits for rippled
| Challenge | How Tracing Helps |
| ---------------------------------- | ---------------------------------------- |
| "Where is my transaction?" | Follow trace across all nodes it touched |
| "Why was consensus slow?" | See timing breakdown of each phase |
| "Which node is the bottleneck?" | Compare span durations across nodes |
| "What happened during the outage?" | Correlate errors across the network |
---
## Glossary
| Term | Definition |
| -------------------- | ------------------------------------------------------------------- |
| **Trace** | Complete journey of a request, identified by `trace_id` |
| **Span** | Single operation within a trace |
| **Parent-Child** | Span relationship where the parent depends on the child |
| **Follows-From** | Causal relationship where originator doesn't wait for the result |
| **Span Link** | Non-hierarchical connection between spans, possibly across traces |
| **Deterministic ID** | Trace ID derived from domain data (e.g., tx_hash) instead of random |
| **Context** | Data propagated between services (`trace_id`, `span_id`, flags) |
| **Instrumentation** | Code that creates spans and propagates context |
| **Collector** | Service that receives, processes, and exports traces |
| **Backend** | Storage/visualization system (Tempo) |
| **Head Sampling** | Sampling decision at trace start |
| **Tail Sampling** | Sampling decision after trace completes |
---
_Next: [Architecture Analysis](./01-architecture-analysis.md)_ | _Back to: [Overview](./OpenTelemetryPlan.md)_

View File

@@ -0,0 +1,467 @@
# Architecture Analysis
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Related**: [Design Decisions](./02-design-decisions.md) | [Implementation Strategy](./03-implementation-strategy.md)
---
## 1.1 Current rippled Architecture Overview
> **WS** = WebSocket | **UNL** = Unique Node List | **TxQ** = Transaction Queue | **StatsD** = Statistics Daemon
The rippled node software consists of several interconnected components that need instrumentation for distributed tracing:
```mermaid
flowchart TB
subgraph rippled["rippled Node"]
subgraph services["Core Services"]
RPC["RPC Server<br/>(HTTP/WS/gRPC)"]
Overlay["Overlay<br/>(P2P Network)"]
Consensus["Consensus<br/>(RCLConsensus)"]
ValidatorList["ValidatorList<br/>(UNL Mgmt)"]
end
JobQueue["JobQueue<br/>(Thread Pool)"]
subgraph processing["Processing Layer"]
NetworkOPs["NetworkOPs<br/>(Tx Processing)"]
LedgerMaster["LedgerMaster<br/>(Ledger Mgmt)"]
NodeStore["NodeStore<br/>(Database)"]
InboundLedgers["InboundLedgers<br/>(Ledger Sync)"]
end
subgraph appservices["Application Services"]
PathFind["PathFinding<br/>(Payment Paths)"]
TxQ["TxQ<br/>(Fee Escalation)"]
LoadMgr["LoadManager<br/>(Fee/Load)"]
end
subgraph observability["Existing Observability"]
PerfLog["PerfLog<br/>(JSON)"]
Insight["Insight<br/>(StatsD)"]
Logging["Logging<br/>(Journal)"]
end
services --> JobQueue
JobQueue --> processing
JobQueue --> appservices
end
style rippled fill:#424242,stroke:#212121,color:#ffffff
style services fill:#1565c0,stroke:#0d47a1,color:#ffffff
style processing fill:#2e7d32,stroke:#1b5e20,color:#ffffff
style appservices fill:#6a1b9a,stroke:#4a148c,color:#ffffff
style observability fill:#e65100,stroke:#bf360c,color:#ffffff
```
**Reading the diagram:**
- **Core Services (blue)**: The entry points into rippled -- RPC Server handles client requests, Overlay manages peer-to-peer networking, Consensus drives agreement, and ValidatorList manages trusted validators.
- **JobQueue (center)**: The asynchronous thread pool that decouples Core Services from the Processing and Application layers. All work flows through it.
- **Processing Layer (green)**: Core business logic -- NetworkOPs processes transactions, LedgerMaster manages ledger state, NodeStore handles persistence, and InboundLedgers synchronizes missing data.
- **Application Services (purple)**: Higher-level features -- PathFinding computes payment routes, TxQ manages fee-based queuing, and LoadManager tracks server load.
- **Existing Observability (orange)**: The current monitoring stack (PerfLog, Insight, Journal logging) that OpenTelemetry will complement, not replace.
- **Arrows (Services to JobQueue to layers)**: Work originates at Core Services, is enqueued onto the JobQueue, and dispatched to Processing or Application layers for execution.
---
## 1.1.1 Actors and Actions
### Actors
| Who (Plain English) | Technical Term |
| ----------------------------------------- | -------------------------- |
| Network node running XRPL software | rippled node |
| External client submitting requests | RPC Client |
| Network neighbor sharing data | Peer (PeerImp) |
| Request handler for client queries | RPC Server (ServerHandler) |
| Command executor for specific RPC methods | RPCHandler |
| Agreement process between nodes | Consensus (RCLConsensus) |
| Transaction processing coordinator | NetworkOPs |
| Background task scheduler | JobQueue |
| Ledger state manager | LedgerMaster |
| Payment route calculator | PathFinding (Pathfinder) |
| Transaction waiting room | TxQ (Transaction Queue) |
| Fee adjustment system | LoadManager |
| Trusted validator list manager | ValidatorList |
| Protocol upgrade tracker | AmendmentTable |
| Ledger state hash tree | SHAMap |
| Persistent key-value storage | NodeStore |
### Actions
| What Happens (Plain English) | Technical Term |
| ---------------------------------------------- | ---------------------- |
| Client sends a request to a node | `rpc.request` |
| Node executes a specific RPC command | `rpc.command.*` |
| Node receives a transaction from a peer | `tx.receive` |
| Node checks if a transaction is valid | `tx.validate` |
| Node forwards a transaction to neighbors | `tx.relay` |
| Nodes agree on which transactions to include | `consensus.round` |
| Consensus progresses through phases | `consensus.phase.*` |
| Node builds a new confirmed ledger | `ledger.build` |
| Node fetches missing ledger data from peers | `ledger.acquire` |
| Node computes payment routes | `pathfind.compute` |
| Node queues a transaction for later processing | `txq.enqueue` |
| Node increases fees due to high load | `fee.escalate` |
| Node fetches the latest trusted validator list | `validator.list.fetch` |
| Node votes on a protocol amendment | `amendment.vote` |
| Node synchronizes state tree data | `shamap.sync` |
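The Phase 1b commits in this compare mention a `SpanGuard` helper for emitting such spans; its real interface isn't shown in this view, but a hypothetical RAII guard in that spirit might look like this (illustrative only, not the actual rippled API):
```cpp
#include "opentelemetry/trace/provider.h"
#include "opentelemetry/trace/scope.h"

namespace trace_api = opentelemetry::trace;

// Hypothetical RAII guard: starts a span on construction, makes it the
// active span for the enclosing scope, and ends it on destruction.
class SpanGuard
{
    opentelemetry::nostd::shared_ptr<trace_api::Span> span_;
    trace_api::Scope scope_;

public:
    explicit SpanGuard(char const* name)
        : span_(trace_api::Provider::GetTracerProvider()
                    ->GetTracer("rippled")
                    ->StartSpan(name))
        , scope_(span_)
    {
    }

    ~SpanGuard()
    {
        span_->End();
    }

    SpanGuard(SpanGuard const&) = delete;
    SpanGuard& operator=(SpanGuard const&) = delete;
};

void
handleClientRequest()
{
    SpanGuard guard("rpc.request");  // one of the action names above
    // ... handle the request; the span ends when `guard` goes out of scope ...
}
```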
---
## 1.2 Key Components for Instrumentation
> **TxQ** = Transaction Queue | **UNL** = Unique Node List
| Component | Location | Purpose | Trace Value |
| ------------------ | ------------------------------------------ | ------------------------ | -------------------------------- |
| **Overlay** | `src/xrpld/overlay/` | P2P communication | Message propagation timing |
| **PeerImp** | `src/xrpld/overlay/detail/PeerImp.cpp` | Individual peer handling | Per-peer latency |
| **RCLConsensus** | `src/xrpld/app/consensus/RCLConsensus.cpp` | Consensus algorithm | Round timing, phase analysis |
| **NetworkOPs** | `src/xrpld/app/misc/NetworkOPs.cpp` | Transaction processing | Tx lifecycle tracking |
| **ServerHandler** | `src/xrpld/rpc/detail/ServerHandler.cpp` | RPC entry point | Request latency |
| **RPCHandler** | `src/xrpld/rpc/detail/RPCHandler.cpp` | Command execution | Per-command timing |
| **JobQueue** | `src/xrpl/core/JobQueue.h` | Async task execution | Queue wait times |
| **PathFinding** | `src/xrpld/app/paths/` | Payment path computation | Path latency, cache hits |
| **TxQ** | `src/xrpld/app/misc/TxQ.cpp` | Transaction queue/fees | Queue depth, eviction rates |
| **LoadManager** | `src/xrpld/app/main/LoadManager.cpp` | Fee escalation/load | Fee levels, load factors |
| **InboundLedgers** | `src/xrpld/app/ledger/InboundLedgers.cpp` | Ledger acquisition | Sync time, peer reliability |
| **ValidatorList** | `src/xrpld/app/misc/ValidatorList.cpp` | UNL management | List freshness, fetch failures |
| **AmendmentTable** | `src/xrpld/app/misc/AmendmentTable.cpp` | Protocol amendments | Voting status, activation events |
| **SHAMap** | `src/xrpld/shamap/` | State hash tree | Sync speed, missing nodes |
---
## 1.3 Transaction Flow Diagram
Transaction flow spans multiple nodes in the network. Each node creates linked spans to form a distributed trace:
```mermaid
sequenceDiagram
participant Client
participant PeerA as Peer A (Receive)
participant PeerB as Peer B (Relay)
participant PeerC as Peer C (Validate)
Client->>PeerA: 1. Submit TX
rect rgb(230, 245, 255)
Note over PeerA: tx.receive SPAN START
PeerA->>PeerA: HashRouter Deduplication
PeerA->>PeerA: tx.validate (child span)
end
PeerA->>PeerB: 2. Relay TX (with trace ctx)
rect rgb(230, 245, 255)
Note over PeerB: tx.receive (linked span)
end
PeerB->>PeerC: 3. Relay TX
rect rgb(230, 245, 255)
Note over PeerC: tx.receive (linked span)
PeerC->>PeerC: tx.process
end
Note over Client,PeerC: DISTRIBUTED TRACE (same trace_id: abc123)
```
**Reading the diagram:**
- **Client**: The external entity that submits a transaction to Peer A. It has no trace context -- the trace starts at the first node.
- **Peer A (Receive)**: The entry node that creates the root span `tx.receive`, runs HashRouter deduplication to avoid processing duplicates, and creates a child `tx.validate` span.
- **Peer A to Peer B arrow**: The relay message carries trace context (trace_id + parent span_id), enabling Peer B to create a linked span under the same trace.
- **Peer B (Relay)**: Receives the transaction and trace context, creates a `tx.receive` span linked to Peer A's trace, then relays onward.
- **Peer C (Validate)**: Final hop in this example. Creates a linked `tx.receive` span and runs `tx.process` to fully process the transaction.
- **Blue rectangles**: Highlight the span boundaries on each node, showing where instrumentation creates and closes spans.
### Trace Structure
```
trace_id: abc123
├── span: tx.receive (Peer A)
│ ├── span: tx.validate
│ └── span: tx.relay
├── span: tx.receive (Peer B) [parent: Peer A]
│ └── span: tx.relay
└── span: tx.receive (Peer C) [parent: Peer B]
└── span: tx.process
```
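The `[parent: ...]` entries are realized as **span links**: each node starts its own `tx.receive` span and links it back to the upstream node's span context carried in the relay message. A minimal sketch with the opentelemetry-cpp API, assuming the remote `SpanContext` has already been deserialized from the message (function and variable names here are illustrative):
```cpp
#include <string>
#include <opentelemetry/trace/provider.h>
#include <opentelemetry/trace/span_context.h>

// Sketch: Peer B opens its tx.receive span linked to Peer A's remote context.
void onRelayedTransaction(std::string const& txHash,
                          opentelemetry::trace::SpanContext const& remoteContext)
{
    auto tracer = opentelemetry::trace::Provider::GetTracerProvider()
                      ->GetTracer("rippled");
    auto span = tracer->StartSpan(
        "tx.receive",
        {{"xrpl.tx.hash", txHash}},  // local attributes
        {{remoteContext, {}}});      // link to the upstream node's span
    // ... validation and relay happen inside this span's lifetime ...
    span->End();
}
```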
---
## 1.4 Consensus Round Flow
Consensus rounds are multi-phase operations that benefit significantly from tracing:
```mermaid
flowchart TB
subgraph round["consensus.round (root span)"]
attrs["Attributes:<br/>xrpl.consensus.ledger.seq = 12345678<br/>xrpl.consensus.mode = proposing<br/>xrpl.consensus.proposers = 35"]
subgraph open["consensus.phase.open"]
open_desc["Duration: ~3s<br/>Waiting for transactions"]
end
subgraph establish["consensus.phase.establish"]
est_attrs["proposals_received = 28<br/>disputes_resolved = 3"]
est_children["├── consensus.proposal.receive (×28)<br/>├── consensus.proposal.send (×1)<br/>└── consensus.dispute.resolve (×3)"]
end
subgraph accept["consensus.phase.accept"]
acc_attrs["transactions_applied = 150<br/>ledger.hash = DEF456..."]
acc_children["├── ledger.build<br/>└── ledger.validate"]
end
attrs --> open
open --> establish
establish --> accept
end
style round fill:#f57f17,stroke:#e65100,color:#ffffff
style open fill:#1565c0,stroke:#0d47a1,color:#ffffff
style establish fill:#2e7d32,stroke:#1b5e20,color:#ffffff
style accept fill:#c2185b,stroke:#880e4f,color:#ffffff
```
**Reading the diagram:**
- **consensus.round (orange, root span)**: The top-level span encompassing the entire consensus round, with attributes like ledger sequence, mode, and proposer count.
- **consensus.phase.open (blue)**: The first phase where the node waits (~3s) to collect incoming transactions before proposing.
- **consensus.phase.establish (green)**: The negotiation phase where validators exchange proposals, resolve disputes, and converge on a transaction set. Child spans track each proposal received/sent and each dispute resolved.
- **consensus.phase.accept (pink)**: The final phase where the agreed transaction set is applied, a new ledger is built, and the ledger is validated. Child spans cover `ledger.build` and `ledger.validate`.
- **Arrows (open to establish to accept)**: The sequential flow through the three consensus phases. Each phase must complete before the next begins.
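As a rough illustration, the root span and the attributes shown in the diagram could be opened as follows (the tracer lookup and literal values are illustrative, not the final rippled API):
```cpp
#include <cstdint>
#include <opentelemetry/trace/provider.h>

// Sketch: open the consensus.round root span with the diagram's attributes.
void traceConsensusRound()
{
    auto tracer = opentelemetry::trace::Provider::GetTracerProvider()
                      ->GetTracer("rippled");
    auto round = tracer->StartSpan(
        "consensus.round",
        {{"xrpl.consensus.ledger.seq", static_cast<std::int64_t>(12345678)},
         {"xrpl.consensus.mode", "proposing"},
         {"xrpl.consensus.proposers", static_cast<std::int64_t>(35)}});
    // The open/establish/accept phase spans are created as children of
    // `round` while it is active.
    round->End();
}
```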
---
## 1.5 RPC Request Flow
> **WS** = WebSocket
RPC requests support W3C Trace Context headers for distributed tracing across services:
```mermaid
flowchart TB
subgraph request["rpc.request (root span)"]
http["HTTP Request — POST /<br/>traceparent:<br/>00-abc123...-def456...-01"]
attrs["Attributes:<br/>http.method = POST<br/>net.peer.ip = 192.168.1.100<br/>xrpl.rpc.command = submit"]
subgraph enqueue["jobqueue.enqueue"]
job_attr["xrpl.job.type = jtCLIENT_RPC"]
end
subgraph command["rpc.command.submit"]
cmd_attrs["xrpl.rpc.version = 2<br/>xrpl.rpc.role = user"]
cmd_children["├── tx.deserialize<br/>├── tx.validate_local<br/>└── tx.submit_to_network"]
end
response["Response: 200 OK<br/>Duration: 45ms"]
http --> attrs
attrs --> enqueue
enqueue --> command
command --> response
end
style request fill:#2e7d32,stroke:#1b5e20,color:#ffffff
style enqueue fill:#1565c0,stroke:#0d47a1,color:#ffffff
style command fill:#e65100,stroke:#bf360c,color:#ffffff
```
**Reading the diagram:**
- **rpc.request (green, root span)**: The outermost span representing the full RPC request lifecycle, from HTTP receipt to response. Carries the W3C `traceparent` header for distributed tracing.
- **HTTP Request node**: Shows the incoming POST request with its `traceparent` header and extracted attributes (method, peer IP, command name).
- **jobqueue.enqueue (blue)**: The span covering the asynchronous handoff from the RPC thread to the JobQueue worker thread. The trace context is preserved across this async boundary.
- **rpc.command.submit (orange)**: The span for the actual command execution, with child spans for deserialization, local validation, and network submission.
- **Response node**: The final output with HTTP status and total duration, marking the end of the root span.
- **Arrows (top to bottom)**: The sequential processing pipeline -- receive request, extract attributes, enqueue job, execute command, return response.
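Extracting the W3C context happens before the root span is started, using the SDK's `HttpTraceContext` propagator. A sketch of that entry point, where `HeaderCarrier` is a hypothetical adapter over the server's header map:
```cpp
#include <map>
#include <string>
#include <opentelemetry/context/propagation/text_map_propagator.h>
#include <opentelemetry/context/runtime_context.h>
#include <opentelemetry/trace/propagation/http_trace_context.h>

// Hypothetical adapter exposing an HTTP header map to the W3C propagator.
class HeaderCarrier : public opentelemetry::context::propagation::TextMapCarrier
{
public:
    explicit HeaderCarrier(std::map<std::string, std::string> const& headers)
        : headers_(headers)
    {
    }

    opentelemetry::nostd::string_view
    Get(opentelemetry::nostd::string_view key) const noexcept override
    {
        auto const it = headers_.find(std::string{key});
        return it == headers_.end() ? "" : it->second;
    }

    void Set(opentelemetry::nostd::string_view,
             opentelemetry::nostd::string_view) noexcept override
    {
        // Read-only on the server side; nothing is injected here.
    }

private:
    std::map<std::string, std::string> const& headers_;
};

// Recover the caller's trace context from the traceparent header so the
// rpc.request span becomes a child of the client's span.
opentelemetry::context::Context
extractRpcContext(std::map<std::string, std::string> const& headers)
{
    opentelemetry::trace::propagation::HttpTraceContext propagator;
    HeaderCarrier carrier{headers};
    auto current = opentelemetry::context::RuntimeContext::GetCurrent();
    return propagator.Extract(carrier, current);
}
```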
---
## 1.6 Key Trace Points
> **TxQ** = Transaction Queue
The following table identifies priority instrumentation points across the codebase:
| Category | Span Name | File | Method | Priority |
| --------------- | ---------------------- | ---------------------- | ----------------------- | -------- |
| **Transaction** | `tx.receive` | `PeerImp.cpp` | `handleTransaction()` | High |
| **Transaction** | `tx.validate` | `NetworkOPs.cpp` | `processTransaction()` | High |
| **Transaction** | `tx.process` | `NetworkOPs.cpp` | `doTransactionSync()` | High |
| **Transaction** | `tx.relay` | `OverlayImpl.cpp` | `relay()` | Medium |
| **Consensus** | `consensus.round` | `RCLConsensus.cpp` | `startRound()` | High |
| **Consensus** | `consensus.phase.*` | `Consensus.h` | `timerEntry()` | High |
| **Consensus** | `consensus.proposal.*` | `RCLConsensus.cpp` | `peerProposal()` | Medium |
| **RPC** | `rpc.request` | `ServerHandler.cpp` | `onRequest()` | High |
| **RPC** | `rpc.command.*` | `RPCHandler.cpp` | `doCommand()` | High |
| **Peer** | `peer.connect` | `OverlayImpl.cpp` | `onHandoff()` | Low |
| **Peer** | `peer.message.*` | `PeerImp.cpp` | `onMessage()` | Low |
| **Ledger** | `ledger.acquire` | `InboundLedgers.cpp` | `acquire()` | Medium |
| **Ledger** | `ledger.build` | `RCLConsensus.cpp` | `buildLCL()` | High |
| **PathFinding** | `pathfind.request` | `PathRequest.cpp` | `doUpdate()` | High |
| **PathFinding** | `pathfind.compute` | `Pathfinder.cpp` | `findPaths()` | High |
| **TxQ** | `txq.enqueue` | `TxQ.cpp` | `apply()` | High |
| **TxQ** | `txq.apply` | `TxQ.cpp` | `processClosedLedger()` | High |
| **Fee** | `fee.escalate` | `LoadManager.cpp` | `raiseLocalFee()` | Medium |
| **Ledger** | `ledger.replay` | `LedgerReplayer.h` | `replay()` | Medium |
| **Ledger** | `ledger.delta` | `LedgerDeltaAcquire.h` | `processData()` | Medium |
| **Validator** | `validator.list.fetch` | `ValidatorList.cpp` | `verify()` | Medium |
| **Validator** | `validator.manifest` | `Manifest.cpp` | `applyManifest()` | Low |
| **Amendment** | `amendment.vote` | `AmendmentTable.cpp` | `doVoting()` | Low |
| **SHAMap** | `shamap.sync` | `SHAMap.cpp` | `fetchRoot()` | Medium |
---
## 1.7 Instrumentation Priority
> **TxQ** = Transaction Queue
```mermaid
quadrantChart
title Instrumentation Priority Matrix
x-axis Low Complexity --> High Complexity
y-axis Low Value --> High Value
    quadrant-1 Plan Carefully
    quadrant-2 Implement First
quadrant-3 Quick Wins
quadrant-4 Consider Later
RPC Tracing: [0.2, 0.92]
Transaction Tracing: [0.55, 0.88]
Consensus Tracing: [0.78, 0.82]
PathFinding: [0.38, 0.75]
TxQ and Fees: [0.25, 0.65]
Ledger Sync: [0.62, 0.58]
Peer Message Tracing: [0.35, 0.25]
JobQueue Tracing: [0.2, 0.48]
Validator Mgmt: [0.48, 0.42]
Amendment Tracking: [0.15, 0.32]
SHAMap Operations: [0.72, 0.45]
```
---
## 1.8 Observable Outcomes
> **TxQ** = Transaction Queue | **UNL** = Unique Node List
After implementing OpenTelemetry, operators and developers will gain visibility into the following:
### 1.8.1 What You Will See: Traces
| Trace Type | Description | Example Query in Grafana/Tempo |
| -------------------------- | ------------------------------------------------------------------------------------------- | ------------------------------------------------------ |
| **Transaction Lifecycle** | Full journey from RPC submission through validation, relay, consensus, and ledger inclusion | `{service.name="rippled" && xrpl.tx.hash="ABC123..."}` |
| **Cross-Node Propagation** | Transaction path across multiple rippled nodes with timing | `{xrpl.tx.relay_count > 0}` |
| **Consensus Rounds** | Complete round with all phases (open, establish, accept) | `{span.name=~"consensus.round.*"}` |
| **RPC Request Processing** | Individual command execution with timing breakdown | `{xrpl.rpc.command="account_info"}` |
| **Ledger Acquisition** | Peer-to-peer ledger data requests and responses | `{span.name="ledger.acquire"}` |
| **PathFinding Latency** | Path computation time and cache effectiveness for payment RPCs | `{span.name="pathfind.compute"}` |
| **TxQ Behavior** | Queue depth, eviction patterns, fee escalation during congestion | `{span.name=~"txq.*"}` |
| **Ledger Sync** | Full acquisition timeline including delta and transaction fetches | `{span.name=~"ledger.acquire.*"}` |
| **Validator Health** | UNL fetch success, manifest updates, stale list detection | `{span.name=~"validator.*"}` |
### 1.8.2 What You Will See: Metrics (Derived from Traces)
| Metric | Description | Dashboard Panel |
| ----------------------------- | --------------------------------------- | --------------------------- |
| **RPC Latency (p50/p95/p99)** | Response time distribution per command | Heatmap by command |
| **Transaction Throughput** | Transactions processed per second | Time series graph |
| **Consensus Round Duration** | Time to complete consensus phases | Histogram |
| **Cross-Node Latency** | Time for transaction to reach N nodes | Line chart with percentiles |
| **Error Rate** | Failed transactions/RPC calls by type | Stacked bar chart |
| **PathFinding Latency** | Path computation time per currency pair | Heatmap by currency |
| **TxQ Depth** | Queued transactions over time | Time series with thresholds |
| **Fee Escalation Level** | Current fee multiplier | Gauge with alert thresholds |
| **Ledger Sync Duration** | Time to acquire missing ledgers | Histogram |
### 1.8.3 Concrete Dashboard Examples
**Transaction Trace View (Tempo):**
```
┌────────────────────────────────────────────────────────────────────────────────┐
│ Trace: abc123... (Transaction Submission) Duration: 847ms │
├────────────────────────────────────────────────────────────────────────────────┤
│ ├── rpc.request [ServerHandler] ████░░░░░░ 45ms │
│ │ └── rpc.command.submit [RPCHandler] ████░░░░░░ 42ms │
│ │ └── tx.receive [NetworkOPs] ███░░░░░░░ 35ms │
│ │ ├── tx.validate [TxQ] █░░░░░░░░░ 8ms │
│ │ └── tx.relay [Overlay] ██░░░░░░░░ 15ms │
│ │ ├── tx.receive [Node-B] █████░░░░░ 52ms │
│ │ │ └── tx.relay [Node-B] ██░░░░░░░░ 18ms │
│ │ └── tx.receive [Node-C] ██████░░░░ 65ms │
│ └── consensus.round [RCLConsensus] ████████░░ 720ms │
│ ├── consensus.phase.open ██░░░░░░░░ 180ms │
│ ├── consensus.phase.establish █████░░░░░ 480ms │
│ └── consensus.phase.accept █░░░░░░░░░ 60ms │
└────────────────────────────────────────────────────────────────────────────────┘
```
**RPC Performance Dashboard Panel:**
```
┌─────────────────────────────────────────────────────────────┐
│ RPC Command Latency (Last 1 Hour) │
├─────────────────────────────────────────────────────────────┤
│ Command │ p50 │ p95 │ p99 │ Errors │ Rate │
│──────────────────┼────────┼────────┼────────┼────────┼──────│
│ account_info │ 12ms │ 45ms │ 89ms │ 0.1% │ 150/s│
│ submit │ 35ms │ 120ms │ 250ms │ 2.3% │ 45/s│
│ ledger │ 8ms │ 25ms │ 55ms │ 0.0% │ 80/s│
│ tx │ 15ms │ 50ms │ 100ms │ 0.5% │ 60/s│
│ server_info │ 5ms │ 12ms │ 20ms │ 0.0% │ 200/s│
└─────────────────────────────────────────────────────────────┘
```
**Consensus Health Dashboard Panel:**
```mermaid
---
config:
xyChart:
width: 1200
height: 400
plotReservedSpacePercent: 50
chartOrientation: vertical
themeVariables:
xyChart:
plotColorPalette: "#3498db"
---
xychart-beta
title "Consensus Round Duration (Last 24 Hours)"
x-axis "Time of Day (Hours)" [0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24]
y-axis "Duration (seconds)" 1 --> 5
line [2.1, 2.4, 2.8, 3.2, 3.8, 4.3, 4.5, 5.0, 4.7, 4.0, 3.2, 2.6, 2.0]
```
### 1.8.4 Operator Actionable Insights
| Scenario | What You'll See | Action |
| ------------------------- | ---------------------------------------------------------------------------- | ------------------------------------------------ |
| **Slow RPC** | Span showing which phase is slow (parsing, execution, serialization) | Optimize specific code path |
| **Transaction Stuck** | Trace stops at validation; error attribute shows reason | Fix transaction parameters |
| **Consensus Delay** | Phase.establish taking too long; proposer attribute shows missing validators | Investigate network connectivity |
| **Memory Spike** | Large batch of spans correlating with memory increase | Tune batch_size or sampling |
| **Network Partition** | Traces missing cross-node links for specific peer | Check peer connectivity |
| **Path Computation Slow** | pathfind.compute span shows high latency; cache miss rate in attributes | Warm the RippleLineCache, check order book depth |
| **TxQ Full** | txq.enqueue spans show evictions; fee.escalate spans increasing | Monitor fee levels, alert operators |
| **Ledger Sync Stalled** | ledger.acquire spans timing out; peer reliability attributes show issues | Check peer connectivity, add trusted peers |
| **UNL Stale** | validator.list.fetch spans failing; last_update attribute aging | Verify validator site URLs, check DNS |
### 1.8.5 Developer Debugging Workflow
1. **Find Transaction**: Query by `xrpl.tx.hash` to get full trace
2. **Identify Bottleneck**: Look at span durations to find slowest component
3. **Check Attributes**: Review `xrpl.tx.validity`, `xrpl.rpc.status` for errors
4. **Correlate Logs**: Use `trace_id` to find related PerfLog entries
5. **Compare Nodes**: Filter by `service.instance.id` to compare behavior across nodes
---
_Next: [Design Decisions](./02-design-decisions.md)_ | _Back to: [Overview](./OpenTelemetryPlan.md)_

# Design Decisions
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Related**: [Architecture Analysis](./01-architecture-analysis.md) | [Code Samples](./04-code-samples.md)
---
## 2.1 OpenTelemetry Components
> **OTLP** = OpenTelemetry Protocol
### 2.1.1 SDK Selection
**Primary Choice**: OpenTelemetry C++ SDK (`opentelemetry-cpp`)
| Component | Purpose | Required |
| --------------------------------------- | ---------------------- | ----------- |
| `opentelemetry-cpp::api` | Tracing API headers | Yes |
| `opentelemetry-cpp::sdk` | SDK implementation | Yes |
| `opentelemetry-cpp::ext` | Extensions (exporters) | Yes |
| `opentelemetry-cpp::otlp_grpc_exporter` | OTLP/gRPC export | Recommended |
| `opentelemetry-cpp::otlp_http_exporter` | OTLP/HTTP export | Alternative |
### 2.1.2 Instrumentation Strategy
**Manual Instrumentation** (recommended):
| Approach | Pros | Cons |
| ---------- | ----------------------------------------------------------------- | ------------------------------------------------------- |
| **Manual** | Precise control, optimized placement, rippled-specific attributes | More development effort |
| **Auto** | Less code, automatic coverage | Less control, potential overhead, limited customization |
---
## 2.2 Exporter Configuration
> **OTLP** = OpenTelemetry Protocol
```mermaid
flowchart TB
subgraph nodes["rippled Nodes"]
node1["rippled<br/>Node 1"]
node2["rippled<br/>Node 2"]
node3["rippled<br/>Node 3"]
end
collector["OpenTelemetry<br/>Collector<br/>(sidecar or standalone)"]
subgraph backends["Observability Backends"]
tempo["Tempo"]
elastic["Elastic<br/>APM"]
end
node1 -->|"OTLP/gRPC<br/>:4317"| collector
node2 -->|"OTLP/gRPC<br/>:4317"| collector
node3 -->|"OTLP/gRPC<br/>:4317"| collector
collector --> tempo
collector --> elastic
style nodes fill:#0d47a1,stroke:#082f6a,color:#ffffff
style backends fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style collector fill:#bf360c,stroke:#8c2809,color:#ffffff
```
**Reading the diagram:**
- **rippled Nodes (blue)**: The source of telemetry data. Each rippled node exports spans via OTLP/gRPC on port 4317.
- **OpenTelemetry Collector (red)**: The central aggregation point that receives spans from all nodes. Can run as a sidecar (per-node) or standalone (shared). Handles batching, filtering, and routing.
- **Observability Backends (green)**: The storage and visualization destinations. Tempo is the recommended backend for both development and production, and Elastic APM is an alternative. The Collector routes to one or more backends.
- **Arrows (nodes to collector to backends)**: The data pipeline -- spans flow from nodes to the Collector over gRPC, then the Collector fans out to the configured backends.
### 2.2.1 OTLP/gRPC (Recommended)
```cpp
// Configuration for OTLP over gRPC
namespace otlp = opentelemetry::exporter::otlp;
otlp::OtlpGrpcExporterOptions opts;
opts.endpoint = "localhost:4317";
opts.use_ssl_credentials = true;
opts.ssl_credentials_cacert_path = "/path/to/ca.crt";
```
### 2.2.2 OTLP/HTTP (Alternative)
```cpp
// Configuration for OTLP over HTTP
namespace otlp = opentelemetry::exporter::otlp;
otlp::OtlpHttpExporterOptions opts;
opts.url = "http://localhost:4318/v1/traces";
opts.content_type = otlp::HttpRequestContentType::kJson; // or kBinary
```
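Either exporter plugs into the same SDK pipeline. A minimal wiring sketch using the opentelemetry-cpp factory helpers, with batch values mirroring the defaults in Section 5.1.2 (the function name is illustrative):
```cpp
#include <memory>
#include <utility>
#include <opentelemetry/exporters/otlp/otlp_grpc_exporter_factory.h>
#include <opentelemetry/sdk/trace/batch_span_processor_factory.h>
#include <opentelemetry/sdk/trace/batch_span_processor_options.h>
#include <opentelemetry/sdk/trace/tracer_provider_factory.h>
#include <opentelemetry/trace/provider.h>

namespace otlp      = opentelemetry::exporter::otlp;
namespace trace_sdk = opentelemetry::sdk::trace;

void initTracing()
{
    otlp::OtlpGrpcExporterOptions exporterOpts;
    exporterOpts.endpoint = "localhost:4317";
    auto exporter = otlp::OtlpGrpcExporterFactory::Create(exporterOpts);

    // Batch defaults match Section 5.1.2 (512 spans per batch, 2048 queued).
    trace_sdk::BatchSpanProcessorOptions batchOpts;
    batchOpts.max_export_batch_size = 512;
    batchOpts.max_queue_size        = 2048;
    auto processor =
        trace_sdk::BatchSpanProcessorFactory::Create(std::move(exporter), batchOpts);

    // Install the provider globally so instrumentation sites can look it up.
    std::shared_ptr<opentelemetry::trace::TracerProvider> provider =
        trace_sdk::TracerProviderFactory::Create(std::move(processor));
    opentelemetry::trace::Provider::SetTracerProvider(provider);
}
```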
---
## 2.3 Span Naming Conventions
> **TxQ** = Transaction Queue | **UNL** = Unique Node List | **WS** = WebSocket
### 2.3.1 Naming Schema
```
<component>.<operation>[.<sub-operation>]
```
**Examples**:
- `tx.receive` - Transaction received from peer
- `consensus.phase.establish` - Consensus establish phase
- `rpc.command.server_info` - server_info RPC command
### 2.3.2 Complete Span Catalog
```yaml
# Transaction Spans
tx:
receive: "Transaction received from network"
validate: "Transaction signature/format validation"
process: "Full transaction processing"
relay: "Transaction relay to peers"
apply: "Apply transaction to ledger"
# Consensus Spans
consensus:
round: "Complete consensus round"
phase:
open: "Open phase - collecting transactions"
establish: "Establish phase - reaching agreement"
accept: "Accept phase - applying consensus"
proposal:
receive: "Receive peer proposal"
send: "Send our proposal"
validation:
receive: "Receive peer validation"
send: "Send our validation"
# RPC Spans
rpc:
request: "HTTP/WebSocket request handling"
command:
"*": "Specific RPC command (dynamic)"
# Peer Spans
peer:
connect: "Peer connection establishment"
disconnect: "Peer disconnection"
message:
send: "Send protocol message"
receive: "Receive protocol message"
# Ledger Spans
ledger:
acquire: "Ledger acquisition from network"
build: "Build new ledger"
validate: "Ledger validation"
close: "Close ledger"
replay: "Ledger replay executed"
delta: "Delta-based ledger acquired"
# PathFinding Spans
pathfind:
request: "Path request initiated"
compute: "Path computation executed"
# TxQ Spans
txq:
enqueue: "Transaction queued"
apply: "Queued transaction applied"
# Fee/Load Spans
fee:
escalate: "Fee escalation triggered"
# Validator Spans
validator:
list:
fetch: "UNL list fetched"
manifest: "Manifest update processed"
# Amendment Spans
amendment:
vote: "Amendment voting executed"
# SHAMap Spans
shamap:
sync: "State tree synchronization"
# Job Spans
job:
enqueue: "Job added to queue"
execute: "Job execution"
```
---
## 2.4 Attribute Schema
> **TxQ** = Transaction Queue | **UNL** = Unique Node List | **OTLP** = OpenTelemetry Protocol
### 2.4.1 Resource Attributes (Set Once at Startup)
```cpp
// Standard OpenTelemetry semantic conventions
resource::SemanticConventions::SERVICE_NAME = "rippled"
resource::SemanticConventions::SERVICE_VERSION = BuildInfo::getVersionString()
resource::SemanticConventions::SERVICE_INSTANCE_ID = <node_public_key_base58>
// Custom rippled attributes
"xrpl.network.id" = <network_id> // e.g., 0 for mainnet
"xrpl.network.type" = "mainnet" | "testnet" | "devnet" | "standalone"
"xrpl.node.type" = "validator" | "stock" | "reporting"
"xrpl.node.cluster" = <cluster_name> // If clustered
```
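At startup these would be assembled into a single SDK `Resource`. A minimal sketch, with placeholder literals standing in for the real rippled lookups (version string, node public key, network id):
```cpp
#include <cstdint>
#include <opentelemetry/sdk/resource/resource.h>
#include <opentelemetry/sdk/resource/semantic_conventions.h>

// Sketch: build the resource once at startup; the literal values here are
// placeholders for BuildInfo::getVersionString() and the node's identity.
opentelemetry::sdk::resource::Resource makeResource()
{
    namespace resource = opentelemetry::sdk::resource;
    return resource::Resource::Create({
        {resource::SemanticConventions::kServiceName, "rippled"},
        {resource::SemanticConventions::kServiceVersion, "2.0.0"},
        {resource::SemanticConventions::kServiceInstanceId, "nHB..."},
        {"xrpl.network.id", static_cast<std::int32_t>(0)},
        {"xrpl.network.type", "mainnet"},
        {"xrpl.node.type", "validator"},
    });
}
```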
### 2.4.2 Span Attributes by Category
#### Transaction Attributes
```cpp
"xrpl.tx.hash" = string // Transaction hash (hex)
"xrpl.tx.type" = string // "Payment", "OfferCreate", etc.
"xrpl.tx.account" = string // Source account (redacted in prod)
"xrpl.tx.sequence" = int64 // Account sequence number
"xrpl.tx.fee" = int64 // Fee in drops
"xrpl.tx.result" = string // "tesSUCCESS", "tecPATH_DRY", etc.
"xrpl.tx.ledger_index" = int64 // Ledger containing transaction
```
#### Consensus Attributes
```cpp
"xrpl.consensus.round" = int64 // Round number
"xrpl.consensus.phase" = string // "open", "establish", "accept"
"xrpl.consensus.mode" = string // "proposing", "observing", etc.
"xrpl.consensus.proposers" = int64 // Number of proposers
"xrpl.consensus.ledger.prev" = string // Previous ledger hash
"xrpl.consensus.ledger.seq" = int64 // Ledger sequence
"xrpl.consensus.tx_count" = int64 // Transactions in consensus set
"xrpl.consensus.duration_ms" = float64 // Round duration
// Phase 4a: Establish-phase gap fill & cross-node correlation
"xrpl.consensus.round_id" = int64 // Consensus round number
"xrpl.consensus.ledger_id" = string // previousLedger.id() — shared across nodes
"xrpl.consensus.trace_strategy" = string // "deterministic" or "attribute"
"xrpl.consensus.converge_percent" = int64 // Convergence % (0-100+)
"xrpl.consensus.establish_count" = int64 // Number of establish iterations
"xrpl.consensus.disputes_count" = int64 // Active disputed transactions
"xrpl.consensus.proposers_agreed" = int64 // Peers agreeing with our position
"xrpl.consensus.proposers_total" = int64 // Total peer positions
"xrpl.consensus.agree_count" = int64 // Peers that agree (haveConsensus)
"xrpl.consensus.disagree_count" = int64 // Peers that disagree
"xrpl.consensus.threshold_percent" = int64 // Current threshold (50/65/70/95)
"xrpl.consensus.result" = string // "yes", "no", "moved_on"
"xrpl.consensus.mode.old" = string // Previous consensus mode
"xrpl.consensus.mode.new" = string // New consensus mode
```
#### RPC Attributes
```cpp
"xrpl.rpc.command" = string // Command name
"xrpl.rpc.version" = int64 // API version
"xrpl.rpc.role" = string // "admin" or "user"
"xrpl.rpc.params" = string // Sanitized parameters (optional)
```
#### Peer & Message Attributes
```cpp
"xrpl.peer.id" = string // Peer public key (base58)
"xrpl.peer.address" = string // IP:port
"xrpl.peer.latency_ms" = float64 // Measured latency
"xrpl.peer.cluster" = string // Cluster name if clustered
"xrpl.message.type" = string // Protocol message type name
"xrpl.message.size_bytes" = int64 // Message size
"xrpl.message.compressed" = bool // Whether compressed
```
#### Ledger & Job Attributes
```cpp
"xrpl.ledger.hash" = string // Ledger hash
"xrpl.ledger.index" = int64 // Ledger sequence/index
"xrpl.ledger.close_time" = int64 // Close time (epoch)
"xrpl.ledger.tx_count" = int64 // Transaction count
"xrpl.job.type" = string // Job type name
"xrpl.job.queue_ms" = float64 // Time spent in queue
"xrpl.job.worker" = int64 // Worker thread ID
```
#### PathFinding Attributes
```cpp
"xrpl.pathfind.source_currency" = string // Source currency code
"xrpl.pathfind.dest_currency" = string // Destination currency code
"xrpl.pathfind.path_count" = int64 // Number of paths found
"xrpl.pathfind.cache_hit" = bool // RippleLineCache hit
```
#### TxQ Attributes
```cpp
"xrpl.txq.queue_depth" = int64 // Current queue depth
"xrpl.txq.fee_level" = int64 // Fee level of transaction
"xrpl.txq.eviction_reason" = string // Why transaction was evicted
```
#### Fee Attributes
```cpp
"xrpl.fee.load_factor" = int64 // Current load factor
"xrpl.fee.escalation_level" = int64 // Fee escalation multiplier
```
#### Validator Attributes
```cpp
"xrpl.validator.list_size" = int64 // UNL size
"xrpl.validator.list_age_sec" = int64 // Seconds since last update
```
#### Amendment Attributes
```cpp
"xrpl.amendment.name" = string // Amendment name
"xrpl.amendment.status" = string // "enabled", "vetoed", "supported"
```
#### SHAMap Attributes
```cpp
"xrpl.shamap.type" = string // "transaction", "state", "account_state"
"xrpl.shamap.missing_nodes" = int64 // Number of missing nodes during sync
"xrpl.shamap.duration_ms" = float64 // Sync duration
```
### 2.4.3 Data Collection Summary
The following table summarizes what data is collected by category:
| Category | Attributes Collected | Purpose |
| --------------- | ---------------------------------------------------------------------- | ---------------------------- |
| **Transaction** | `tx.hash`, `tx.type`, `tx.result`, `tx.fee`, `ledger_index` | Trace transaction lifecycle |
| **Consensus** | `round`, `phase`, `mode`, `proposers` (public keys), `duration_ms` | Analyze consensus timing |
| **RPC** | `command`, `version`, `status`, `duration_ms` | Monitor RPC performance |
| **Peer** | `peer.id` (public key), `latency_ms`, `message.type`, `message.size` | Network topology analysis |
| **Ledger** | `ledger.hash`, `ledger.index`, `close_time`, `tx_count` | Ledger progression tracking |
| **Job** | `job.type`, `queue_ms`, `worker` | JobQueue performance |
| **PathFinding** | `pathfind.source_currency`, `dest_currency`, `path_count`, `cache_hit` | Payment path analysis |
| **TxQ** | `txq.queue_depth`, `fee_level`, `eviction_reason` | Queue depth and fee tracking |
| **Fee** | `fee.load_factor`, `escalation_level` | Fee escalation monitoring |
| **Validator** | `validator.list_size`, `list_age_sec` | UNL health monitoring |
| **Amendment** | `amendment.name`, `status` | Protocol upgrade tracking |
| **SHAMap** | `shamap.type`, `missing_nodes`, `duration_ms` | State tree sync performance |
### 2.4.4 Privacy & Sensitive Data Policy
> **PII** = Personally Identifiable Information
OpenTelemetry instrumentation is designed to collect **operational metadata only**, never sensitive content.
#### Data NOT Collected
The following data is explicitly **excluded** from telemetry collection:
| Excluded Data | Reason |
| ----------------------- | ----------------------------------------- |
| **Private Keys** | Never exposed; not relevant to tracing |
| **Account Balances** | Financial data; privacy sensitive |
| **Transaction Amounts** | Financial data; privacy sensitive |
| **Raw TX Payloads** | May contain sensitive memo/data fields |
| **Personal Data** | No PII collected |
| **IP Addresses** | Configurable; excluded by default in prod |
#### Privacy Protection Mechanisms
| Mechanism | Description |
| ----------------------------- | ------------------------------------------------------------------------- |
| **Account Hashing** | `xrpl.tx.account` is hashed at collector level before storage |
| **Configurable Redaction** | Sensitive fields can be excluded via `[telemetry]` config section |
| **Sampling** | Recommended production sampling records only 10% of traces, reducing data exposure |
| **Local Control** | Node operators have full control over what gets exported |
| **No Raw Payloads** | Transaction content is never recorded, only metadata (hash, type, result) |
| **Collector-Level Filtering** | Additional redaction/hashing can be configured at OTel Collector |
#### Collector-Level Data Protection
The OpenTelemetry Collector can be configured to hash or redact sensitive attributes before export:
```yaml
processors:
attributes:
actions:
# Hash account addresses before storage
- key: xrpl.tx.account
action: hash
# Remove IP addresses entirely
- key: xrpl.peer.address
action: delete
# Redact specific fields
- key: xrpl.rpc.params
action: delete
```
#### Configuration Options for Privacy
In `rippled.cfg`, operators can control data collection granularity:
```ini
[telemetry]
enabled=1
# Disable collection of specific components
trace_transactions=1
trace_consensus=1
trace_rpc=1
trace_peer=0 # Disable peer tracing (high volume, includes addresses)
# Redact specific attributes
redact_account=1 # Hash account addresses before export
redact_peer_address=1 # Remove peer IP addresses
```
> **Note**: The `redact_account` configuration in `rippled.cfg` controls SDK-level redaction before export, while collector-level filtering (see [Collector-Level Data Protection](#collector-level-data-protection) above) provides an additional defense-in-depth layer. Both can operate independently.
> **Key Principle**: Telemetry collects **operational metadata** (timing, counts, hashes) — never **sensitive content** (keys, balances, amounts, raw payloads).
---
## 2.5 Context Propagation Design
> **WS** = WebSocket
### 2.5.1 Propagation Boundaries
```mermaid
flowchart TB
subgraph http["HTTP/WebSocket (RPC)"]
w3c["W3C Trace Context Headers:<br/>traceparent:<br/>00-trace_id-span_id-flags<br/>tracestate: rippled=..."]
end
subgraph protobuf["Protocol Buffers (P2P)"]
proto["message TraceContext {<br/> bytes trace_id = 1; // 16 bytes<br/> bytes span_id = 2; // 8 bytes<br/> uint32 trace_flags = 3;<br/> string trace_state = 4;<br/>}"]
end
subgraph jobqueue["JobQueue (Internal Async)"]
job["Context captured at job creation,<br/>restored at execution<br/><br/>class Job {<br/> otel::context::Context<br/> traceContext_;<br/>};"]
end
style http fill:#0d47a1,stroke:#082f6a,color:#ffffff
style protobuf fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style jobqueue fill:#bf360c,stroke:#8c2809,color:#ffffff
```
**Reading the diagram:**
- **HTTP/WebSocket - RPC (blue)**: For client-facing RPC requests, trace context is propagated using the W3C `traceparent` header. This is the standard approach and works with any OTel-compatible client.
- **Protocol Buffers - P2P (green)**: For peer-to-peer messages between rippled nodes, trace context is embedded as a protobuf `TraceContext` message carrying trace_id, span_id, flags, and optional trace_state.
- **JobQueue - Internal Async (red)**: For asynchronous work within a single node, the OTel context is captured when a job is created and restored when the job executes on a worker thread. This bridges the async gap so spans remain linked.
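For the P2P boundary, serializing the active span's context into the protobuf message is mechanical. A hypothetical sketch, templated on the message type because the generated `TraceContext` class from the diagram does not exist yet (the `set_*` calls match protobuf's generated setters for `bytes`/`uint32` fields):
```cpp
#include <opentelemetry/trace/span_context.h>

// Hypothetical sketch: copy the active span's identifiers into the protobuf
// TraceContext message sketched in the diagram above.
template <class TraceContextMsg>
void injectTraceContext(TraceContextMsg& out,
                        opentelemetry::trace::SpanContext const& ctx)
{
    auto const traceId = ctx.trace_id().Id();  // 16 raw bytes
    auto const spanId  = ctx.span_id().Id();   // 8 raw bytes
    out.set_trace_id(reinterpret_cast<char const*>(traceId.data()),
                     traceId.size());
    out.set_span_id(reinterpret_cast<char const*>(spanId.data()),
                    spanId.size());
    out.set_trace_flags(ctx.trace_flags().flags());
}
```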
---
## 2.6 Integration with Existing Observability
> **OTLP** = OpenTelemetry Protocol | **WS** = WebSocket
### 2.6.1 Existing Frameworks Comparison
rippled already has two observability mechanisms. OpenTelemetry complements (not replaces) them:
| Aspect | PerfLog | Beast Insight (StatsD) | OpenTelemetry |
| --------------------- | ----------------------------- | ---------------------------- | ------------------------- |
| **Type** | Logging | Metrics | Distributed Tracing |
| **Data** | JSON log entries | Counters, gauges, histograms | Spans with context |
| **Scope** | Single node | Single node | **Cross-node** |
| **Output** | `perf.log` file | StatsD server | OTLP Collector |
| **Question answered** | "What happened on this node?" | "How many? How fast?" | "What was the journey?" |
| **Correlation** | By timestamp | By metric name | By `trace_id` |
| **Overhead** | Low (file I/O) | Low (UDP packets) | Low-Medium (configurable) |
### 2.6.2 What Each Framework Does Best
#### PerfLog
- **Purpose**: Detailed local event logging for RPC and job execution
- **Strengths**:
- Rich JSON output with timing data
- Already integrated in RPC handlers
- File-based, no external dependencies
- **Limitations**:
- Single-node only (no cross-node correlation)
- No parent-child relationships between events
- Manual log parsing required
Example PerfLog entry:
```json
{
"time": "2024-01-15T10:30:00.123Z",
"method": "submit",
"duration_us": 1523,
"result": "tesSUCCESS"
}
```
#### Beast Insight (StatsD)
- **Purpose**: Real-time metrics for monitoring dashboards
- **Strengths**:
- Aggregated metrics (counters, gauges, histograms)
- Low overhead (UDP, fire-and-forget)
- Good for alerting thresholds
- **Limitations**:
- No request-level detail
- No causal relationships
- Single-node perspective
```cpp
// Example StatsD usage in rippled
insight.increment("rpc.submit.count");
insight.gauge("ledger.age", age);
insight.timing("consensus.round", duration);
```
#### OpenTelemetry (NEW)
- **Purpose**: Distributed request tracing across nodes
- **Strengths**:
- **Cross-node correlation** via `trace_id`
- Parent-child span relationships
- Rich attributes per span
- Industry standard (CNCF)
- **Limitations**:
- Requires collector infrastructure
- Higher complexity than logging
```cpp
// Example OpenTelemetry span
auto span = telemetry.startSpan("tx.relay");
span->SetAttribute("tx.hash", hash);
span->SetAttribute("peer.id", peerId);
// Span automatically linked to parent via context
```
### 2.6.3 When to Use Each
| Scenario | PerfLog | StatsD | OpenTelemetry |
| --------------------------------------- | ---------- | ------ | ------------- |
| "How many TXs per second?" | ❌ | ✅ | ✅ |
| "What's the p99 RPC latency?" | ❌ | ✅ | ✅ |
| "Why was this specific TX slow?" | ⚠️ partial | ❌ | ✅ |
| "Which node delayed consensus?" | ❌ | ❌ | ✅ |
| "What happened on node X at time T?" | ✅ | ❌ | ✅ |
| "Show me the TX journey across 5 nodes" | ❌ | ❌ | ✅ |
### 2.6.4 Coexistence Strategy
```mermaid
flowchart TB
subgraph rippled["rippled Process"]
perflog["PerfLog<br/>(JSON to file)"]
insight["Beast Insight<br/>(StatsD)"]
otel["OpenTelemetry<br/>(Tracing)"]
end
perflog --> perffile["perf.log"]
insight --> statsd["StatsD Server"]
otel --> collector["OTLP Collector"]
perffile --> grafana["Grafana<br/>(Unified UI)"]
statsd --> grafana
collector --> grafana
style rippled fill:#212121,stroke:#0a0a0a,color:#ffffff
style grafana fill:#bf360c,stroke:#8c2809,color:#ffffff
```
**Reading the diagram:**
- **rippled Process (dark gray)**: The single rippled node running all three observability frameworks side by side. Each framework operates independently with no interference.
- **PerfLog to perf.log**: PerfLog writes JSON-formatted event logs to a local file. Grafana can ingest these via Loki or a file-based datasource.
- **Beast Insight to StatsD Server**: Insight sends aggregated metrics (counters, gauges) over UDP to a StatsD server. Grafana reads from StatsD-compatible backends like Graphite or Prometheus (via StatsD exporter).
- **OpenTelemetry to OTLP Collector**: OTel exports spans over OTLP/gRPC to a Collector, which then forwards to a trace backend (Tempo).
- **Grafana (red, unified UI)**: All three data streams converge in Grafana, enabling operators to correlate logs, metrics, and traces in a single dashboard.
### 2.6.5 Correlation with PerfLog
Trace IDs can be correlated with existing PerfLog entries for comprehensive debugging:
```cpp
// In RPCHandler.cpp - correlate trace with PerfLog
Status doCommand(RPC::JsonContext& context, Json::Value& result)
{
// Start OpenTelemetry span
auto span = context.app.getTelemetry().startSpan(
"rpc.command." + context.method);
// Get trace ID for correlation
auto traceId = span->GetContext().trace_id().IsValid()
? toHex(span->GetContext().trace_id())
: "";
// Use existing PerfLog with trace correlation
auto const curId = context.app.getPerfLog().currentId();
context.app.getPerfLog().rpcStart(context.method, curId);
// Future: Add trace ID to PerfLog entry
// context.app.getPerfLog().setTraceId(curId, traceId);
try {
auto ret = handler(context, result);
context.app.getPerfLog().rpcFinish(context.method, curId);
span->SetStatus(opentelemetry::trace::StatusCode::kOk);
return ret;
} catch (std::exception const& e) {
context.app.getPerfLog().rpcError(context.method, curId);
span->RecordException(e);
span->SetStatus(opentelemetry::trace::StatusCode::kError, e.what());
throw;
}
}
```
---
_Previous: [Architecture Analysis](./01-architecture-analysis.md)_ | _Next: [Implementation Strategy](./03-implementation-strategy.md)_ | _Back to: [Overview](./OpenTelemetryPlan.md)_

# Implementation Strategy
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Related**: [Code Samples](./04-code-samples.md) | [Configuration Reference](./05-configuration-reference.md)
---
## 3.1 Directory Structure
The telemetry implementation follows rippled's existing code organization pattern:
```
include/xrpl/
├── telemetry/
│ ├── Telemetry.h # Main telemetry interface
│ ├── TelemetryConfig.h # Configuration structures
│ ├── TraceContext.h # Context propagation utilities
│ ├── SpanGuard.h # RAII span management
│ └── SpanAttributes.h # Attribute helper functions
src/libxrpl/
├── telemetry/
│ ├── Telemetry.cpp # Implementation
│ ├── TelemetryConfig.cpp # Config parsing
│ ├── TraceContext.cpp # Context serialization
│ └── NullTelemetry.cpp # No-op implementation
src/xrpld/
├── telemetry/
│ ├── TracingInstrumentation.h # Instrumentation macros
│ └── TracingInstrumentation.cpp
```
---
## 3.2 Implementation Approach
<div align="center">
```mermaid
%%{init: {'flowchart': {'nodeSpacing': 20, 'rankSpacing': 30}}}%%
flowchart TB
subgraph phase1["Phase 1: Core"]
direction LR
sdk["SDK Integration"] ~~~ interface["Telemetry Interface"] ~~~ config["Configuration"]
end
subgraph phase2["Phase 2: RPC"]
direction LR
http["HTTP Context"] ~~~ rpc["RPC Handlers"]
end
subgraph phase3["Phase 3: P2P"]
direction LR
proto["Protobuf Context"] ~~~ tx["Transaction Relay"]
end
subgraph phase4["Phase 4: Consensus"]
direction LR
consensus["Consensus Rounds"] ~~~ proposals["Proposals"]
end
phase1 --> phase2 --> phase3 --> phase4
style phase1 fill:#1565c0,stroke:#0d47a1,color:#ffffff
style phase2 fill:#2e7d32,stroke:#1b5e20,color:#ffffff
style phase3 fill:#e65100,stroke:#bf360c,color:#ffffff
style phase4 fill:#c2185b,stroke:#880e4f,color:#ffffff
```
</div>
### Key Principles
1. **Minimal Intrusion**: Instrumentation should not alter existing control flow
2. **Zero-Cost When Disabled**: Use compile-time flags and no-op implementations (see the no-op sketch after this list)
3. **Backward Compatibility**: Protocol Buffer extensions use high field numbers
4. **Graceful Degradation**: Tracing failures must not affect node operation
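A minimal sketch of the no-op pattern behind principle 2, with interface names assumed from the Section 3.1 layout (not the final API):
```cpp
#include <opentelemetry/nostd/shared_ptr.h>
#include <opentelemetry/nostd/string_view.h>
#include <opentelemetry/trace/noop.h>

// Sketch: abstract interface plus no-op implementation (names assumed from
// the Section 3.1 directory layout).
class Telemetry
{
public:
    virtual ~Telemetry() = default;
    virtual opentelemetry::nostd::shared_ptr<opentelemetry::trace::Span>
    startSpan(opentelemetry::nostd::string_view name) = 0;
};

// Installed when [telemetry] enabled=0 or XRPL_ENABLE_TELEMETRY is off.
class NullTelemetry final : public Telemetry
{
public:
    opentelemetry::nostd::shared_ptr<opentelemetry::trace::Span>
    startSpan(opentelemetry::nostd::string_view name) override
    {
        // NoopTracer hands back valid-but-inert spans, so call sites never
        // need to branch on whether telemetry is enabled.
        return tracer_.StartSpan(name);
    }

private:
    opentelemetry::trace::NoopTracer tracer_;
};
```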
---
## 3.3 Performance Overhead Summary
> **OTLP** = OpenTelemetry Protocol
| Metric | Overhead | Notes |
| ------------- | ---------- | ------------------------------------------------ |
| CPU | 1-3% | Of per-transaction CPU cost (~200μs baseline) |
| Memory | ~10 MB | SDK statics + batch buffer + worker thread stack |
| Network | 10-50 KB/s | Compressed OTLP export to collector |
| Latency (p99) | <2% | With proper sampling configuration |
---
## 3.4 Detailed CPU Overhead Analysis
### 3.4.1 Per-Operation Costs
> **Note on hardware assumptions**: The costs below are based on the official OTel C++ SDK CI benchmarks
> (969 runs on GitHub Actions 2-core shared runners). On production server hardware (3+ GHz Xeon),
> expect costs at the **lower end** of each range (~30-50% improvement over CI hardware).
| Operation | Time (ns) | Frequency | Impact |
| --------------------- | --------- | ---------------------- | ---------- |
| Span creation | 500-1000 | Every traced operation | Low |
| Span end | 100-200 | Every traced operation | Low |
| SetAttribute (string) | 80-120 | 3-5 per span | Low |
| SetAttribute (int) | 40-60 | 2-3 per span | Negligible |
| AddEvent | 100-200 | 0-2 per span | Low |
| Context injection | 150-250 | Per outgoing message | Low |
| Context extraction | 100-180 | Per incoming message | Low |
| GetCurrent context | 10-20 | Thread-local access | Negligible |
**Source**: Span creation based on OTel C++ SDK `BM_SpanCreation` benchmark (AlwaysOnSampler +
SimpleSpanProcessor + InMemoryExporter), median ~1,000 ns on CI hardware. AddEvent includes
timestamp read + string copy + vector push + mutex acquisition. Context injection/extraction
confirmed by `BM_SpanCreationWithScope` benchmark delta (~160 ns).
### 3.4.2 Transaction Processing Overhead
<div align="center">
```mermaid
%%{init: {'pie': {'textPosition': 0.75}}}%%
pie showData
"tx.receive (1400ns)" : 1400
"tx.validate (1200ns)" : 1200
"tx.relay (1200ns)" : 1200
"Context inject (200ns)" : 200
```
**Transaction Tracing Overhead (~4.0μs total)**
</div>
**Overhead percentage**: 4.0 μs / 200 μs (avg tx processing) = **~2.0%**
> **Breakdown**: Each span (tx.receive, tx.validate, tx.relay) costs ~1,000 ns for creation plus
> ~200-400 ns for 3-5 attribute sets. Context injection is ~200 ns (confirmed by benchmarks).
> On production hardware, expect ~2.6 μs total (~1.3% overhead) due to faster span creation (~500-600 ns).
### 3.4.3 Consensus Round Overhead
| Operation | Count | Cost (ns) | Total |
| ---------------------- | ----- | --------- | ---------- |
| consensus.round span | 1 | ~1200 | ~1.2 μs |
| consensus.phase spans | 3 | ~1100 | ~3.3 μs |
| proposal.receive spans | ~20 | ~1100 | ~22 μs |
| proposal.send spans | ~3 | ~1100 | ~3.3 μs |
| Context operations | ~30 | ~200 | ~6 μs |
| **TOTAL** | | | **~36 μs** |
> **Why higher**: Each span costs ~1,000 ns creation + ~100-200 ns for 1-2 attributes, totaling ~1,100-1,200 ns.
> Context operations remain ~200 ns (confirmed by benchmarks). On production hardware, expect ~24 μs total.
**Overhead percentage**: 36 μs / 3s (typical round) = **~0.001%** (negligible)
### 3.4.4 RPC Request Overhead
| Operation | Cost (ns) |
| ---------------- | ------------ |
| rpc.request span | ~1200 |
| rpc.command span | ~1100 |
| Context extract | ~250 |
| Context inject | ~200 |
| **TOTAL** | **~2.75 μs** |
> **Why higher**: Each span costs ~1,000 ns creation + ~100-200 ns for attributes (command name,
> version, role). Context extract/inject costs are confirmed by OTel C++ benchmarks.
- Fast RPC (1ms): 2.75 μs / 1ms = **~0.275%**
- Slow RPC (100ms): 2.75 μs / 100ms = **~0.003%**
---
## 3.5 Memory Overhead Analysis
> **OTLP** = OpenTelemetry Protocol
### 3.5.1 Static Memory
| Component | Size | Allocated |
| ------------------------------------ | ----------- | ---------- |
| TracerProvider singleton | ~64 KB | At startup |
| BatchSpanProcessor (circular buffer) | ~16 KB | At startup |
| BatchSpanProcessor (worker thread) | ~8 MB | At startup |
| OTLP exporter (gRPC channel init) | ~256 KB | At startup |
| Propagator registry | ~8 KB | At startup |
| **Total static** | **~8.3 MB** | |
> **Why higher than earlier estimate**: The BatchSpanProcessor's circular buffer itself is only ~16 KB
> (2049 x 8-byte `AtomicUniquePtr` entries), but it spawns a dedicated worker thread whose default
> stack size on Linux is ~8 MB. The OTLP gRPC exporter allocates memory for channel stubs and TLS
> initialization. The worker thread stack dominates the static footprint.
### 3.5.2 Dynamic Memory
| Component | Size per unit | Max units | Peak |
| -------------------- | -------------- | ---------- | --------------- |
| Active span | ~500-800 bytes | 1000 | ~500-800 KB |
| Queued span (export) | ~500 bytes | 2048 | ~1 MB |
| Attribute storage | ~80 bytes | 5 per span | Included |
| Context storage | ~64 bytes | Per thread | ~6.4 KB |
| **Total dynamic** | | | **~1.5-1.8 MB** |
> **Why active spans are larger**: An active `Span` object includes the wrapper (~88 bytes: shared_ptr,
> mutex, unique_ptr to Recordable) plus `SpanData` (~250 bytes: SpanContext, timestamps, name, status,
> empty containers) plus attribute storage (~200-500 bytes for 3-5 string attributes in a `std::map`).
> Source: `sdk/src/trace/span.h` and `sdk/include/opentelemetry/sdk/trace/span_data.h`.
> Queued spans release the wrapper, keeping only `SpanData` + attributes (~500 bytes).
### 3.5.3 Memory Growth Characteristics
```mermaid
---
config:
xyChart:
width: 700
height: 400
---
xychart-beta
title "Memory Usage vs Span Rate (bounded by queue limit)"
x-axis "Spans/second" [0, 200, 400, 600, 800, 1000]
y-axis "Memory (MB)" 0 --> 12
line [8.5, 9.2, 9.6, 9.9, 10.0, 10.0]
```
**Notes**:
- Memory increases with span rate but **plateaus at queue capacity** (default 2048 spans)
- Batch export prevents unbounded growth
- At queue limit, oldest spans are dropped (not blocked)
- Maximum memory is bounded: ~8.3 MB static (dominated by worker thread stack) + 2048 queued spans x ~500 bytes (~1 MB) + active spans (~0.8 MB) ≈ **~10 MB ceiling**
- The worker thread stack (~8 MB) is virtual memory; actual RSS depends on stack usage (typically much less)
### 3.5.4 Performance Data Sources
The overhead estimates in Sections 3.3-3.5 are derived from the following sources:
| Source | What it covers | URL |
| ------------------------------------------------ | ----------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------ |
| OTel C++ SDK CI benchmarks (969 runs) | Span creation, context activation, sampler overhead | [Benchmark Dashboard](https://open-telemetry.github.io/opentelemetry-cpp/benchmarks/) |
| `api/test/trace/span_benchmark.cc` | API-level span creation (~22 ns no-op) | [Source](https://github.com/open-telemetry/opentelemetry-cpp/blob/main/api/test/trace/span_benchmark.cc) |
| `sdk/test/trace/sampler_benchmark.cc` | SDK span creation with samplers (~1,000 ns AlwaysOn) | [Source](https://github.com/open-telemetry/opentelemetry-cpp/blob/main/sdk/test/trace/sampler_benchmark.cc) |
| `sdk/include/.../span_data.h` | SpanData memory layout (~250 bytes base) | [Source](https://github.com/open-telemetry/opentelemetry-cpp/blob/main/sdk/include/opentelemetry/sdk/trace/span_data.h) |
| `sdk/src/trace/span.h` | Span wrapper memory layout (~88 bytes) | [Source](https://github.com/open-telemetry/opentelemetry-cpp/blob/main/sdk/src/trace/span.h) |
| `sdk/include/.../batch_span_processor_options.h` | Default queue size (2048), batch size (512) | [Source](https://github.com/open-telemetry/opentelemetry-cpp/blob/main/sdk/include/opentelemetry/sdk/trace/batch_span_processor_options.h) |
| `sdk/include/.../circular_buffer.h` | CircularBuffer implementation (AtomicUniquePtr array) | [Source](https://github.com/open-telemetry/opentelemetry-cpp/blob/main/sdk/include/opentelemetry/sdk/common/circular_buffer.h) |
| OTLP proto definition | Serialized span size estimation | [Proto](https://github.com/open-telemetry/opentelemetry-proto/blob/main/opentelemetry/proto/trace/v1/trace.proto) |
---
## 3.6 Network Overhead Analysis
### 3.6.1 Export Bandwidth
> **Bytes per span**: Estimates use ~500 bytes/span (conservative upper bound). OTLP protobuf analysis
> shows a typical span with 3-5 string attributes serializes to ~200-300 bytes raw; with gzip
> compression (~60-70% of raw) and batching (amortized headers), ~350 bytes/span is more realistic.
> The table uses the conservative estimate for capacity planning.
| Sampling Rate | Spans/sec | Bandwidth | Notes |
| ------------- | --------- | --------- | ---------------- |
| 100% | ~500 | ~250 KB/s | Development only |
| 10% | ~50 | ~25 KB/s | Staging |
| 1% | ~5 | ~2.5 KB/s | Production |
| Error-only | ~1 | ~0.5 KB/s | Minimal overhead |
### 3.6.2 Trace Context Propagation
| Message Type | Context Size | Messages/sec | Overhead |
| ---------------------- | ------------ | ------------ | ----------- |
| TMTransaction | 25 bytes | ~100 | ~2.5 KB/s |
| TMProposeSet | 25 bytes | ~10 | ~250 B/s |
| TMValidation | 25 bytes | ~50 | ~1.25 KB/s |
| **Total P2P overhead** | | | **~4 KB/s** |
---
## 3.7 Optimization Strategies
### 3.7.1 Sampling Strategies
#### Tail Sampling
```mermaid
flowchart TD
trace["New Trace"]
trace --> errors{"Is Error?"}
errors -->|Yes| sample["SAMPLE"]
errors -->|No| consensus{"Is Consensus?"}
consensus -->|Yes| sample
consensus -->|No| slow{"Is Slow?"}
slow -->|Yes| sample
slow -->|No| prob{"Random < 10%?"}
prob -->|Yes| sample
prob -->|No| drop["DROP"]
style sample fill:#4caf50,stroke:#388e3c,color:#fff
style drop fill:#f44336,stroke:#c62828,color:#fff
```
### 3.7.2 Batch Tuning Recommendations
| Environment | Batch Size | Batch Delay | Max Queue |
| ------------------ | ---------- | ----------- | --------- |
| Low-latency | 128 | 1000ms | 512 |
| High-throughput | 1024 | 10000ms | 8192 |
| Memory-constrained | 256 | 2000ms | 512 |
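These rows map directly onto the SDK's `BatchSpanProcessorOptions`. A sketch of the memory-constrained profile:
```cpp
#include <chrono>
#include <opentelemetry/sdk/trace/batch_span_processor_options.h>

// Sketch: the "Memory-constrained" row expressed as SDK options.
opentelemetry::sdk::trace::BatchSpanProcessorOptions memoryConstrainedOptions()
{
    opentelemetry::sdk::trace::BatchSpanProcessorOptions opts;
    opts.max_export_batch_size = 256;                              // Batch Size
    opts.schedule_delay_millis = std::chrono::milliseconds(2000);  // Batch Delay
    opts.max_queue_size        = 512;                              // Max Queue
    return opts;
}
```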
### 3.7.3 Conditional Instrumentation
```cpp
// Compile-time feature flag
#ifndef XRPL_ENABLE_TELEMETRY
// Zero-cost when disabled
#define XRPL_TRACE_SPAN(t, n) ((void)0)
#endif
// Runtime component filtering
if (telemetry.shouldTracePeer())
{
XRPL_TRACE_SPAN(telemetry, "peer.message.receive");
// ... instrumentation
}
// No overhead when component tracing disabled
```
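For completeness, the enabled branch might expand to a RAII guard. A sketch assuming the `SpanGuard` type from Section 3.1 and the `_xrpl_guard_` identifier used in Section 3.9.7:
```cpp
// Sketch of the enabled branch (SpanGuard and Telemetry::startSpan() are the
// assumed interfaces from Section 3.1, not the final API).
#ifdef XRPL_ENABLE_TELEMETRY
#define XRPL_TRACE_SPAN(telemetry, name) \
    ::xrpl::telemetry::SpanGuard _xrpl_guard_{(telemetry).startSpan(name)}
#endif
```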
---
## 3.8 Links to Detailed Documentation
- **[Code Samples](./04-code-samples.md)**: Complete implementation code for all components
- **[Configuration Reference](./05-configuration-reference.md)**: Configuration options and collector setup
- **[Implementation Phases](./06-implementation-phases.md)**: Detailed timeline and milestones
---
## 3.9 Code Intrusiveness Assessment
> **TxQ** = Transaction Queue
This section provides a detailed assessment of how intrusive the OpenTelemetry integration is to the existing rippled codebase.
### 3.9.1 Files Modified Summary
| Component | Files Modified | Lines Added | Lines Changed | Architectural Impact |
| --------------------- | -------------- | ----------- | ------------- | -------------------- |
| **Core Telemetry** | 5 new files | ~800 | 0 | None (new module) |
| **Application Init** | 2 files | ~30 | ~5 | Minimal |
| **RPC Layer** | 3 files | ~80 | ~20 | Minimal |
| **Transaction Relay** | 4 files | ~120 | ~40 | Low |
| **Consensus** | 3 files | ~100 | ~30 | Low-Medium |
| **Protocol Buffers** | 1 file | ~25 | 0 | Low |
| **CMake/Build** | 3 files | ~50 | ~10 | Minimal |
| **PathFinding** | 2 files | ~80 | ~5 | Minimal |
| **TxQ/Fee** | 2 files | ~60 | ~5 | Minimal |
| **Validator/Amend** | 3 files | ~40 | ~5 | Minimal |
| **Total** | **~28 files** | **~1,490** | **~120** | **Low** |
### 3.9.2 Detailed File Impact
```mermaid
pie title Code Changes by Component
"New Telemetry Module" : 800
"Transaction Relay" : 160
"Consensus" : 130
"RPC Layer" : 100
"PathFinding" : 80
"TxQ/Fee" : 60
"Validator/Amendment" : 40
"Application Init" : 35
"Protocol Buffers" : 25
"Build System" : 60
```
#### New Files (No Impact on Existing Code)
| File | Lines | Purpose |
| ---------------------------------------------- | ----- | -------------------- |
| `include/xrpl/telemetry/Telemetry.h` | ~160 | Main interface |
| `include/xrpl/telemetry/SpanGuard.h` | ~120 | RAII wrapper |
| `include/xrpl/telemetry/TraceContext.h` | ~80 | Context propagation |
| `src/xrpld/telemetry/TracingInstrumentation.h` | ~60 | Macros |
| `src/libxrpl/telemetry/Telemetry.cpp` | ~200 | Implementation |
| `src/libxrpl/telemetry/TelemetryConfig.cpp` | ~60 | Config parsing |
| `src/libxrpl/telemetry/NullTelemetry.cpp` | ~40 | No-op implementation |
#### Modified Files (Existing Rippled Code)
| File | Lines Added | Lines Changed | Risk Level |
| ------------------------------------------------- | ----------- | ------------- | ---------- |
| `src/xrpld/app/main/Application.cpp` | ~15 | ~3 | Low |
| `include/xrpl/app/main/Application.h` | ~5 | ~2 | Low |
| `src/xrpld/rpc/detail/ServerHandler.cpp` | ~40 | ~10 | Low |
| `src/xrpld/rpc/handlers/*.cpp` | ~30 | ~8 | Low |
| `src/xrpld/overlay/detail/PeerImp.cpp` | ~60 | ~15 | Medium |
| `src/xrpld/overlay/detail/OverlayImpl.cpp` | ~30 | ~10 | Medium |
| `src/xrpld/app/consensus/RCLConsensus.cpp` | ~50 | ~15 | Medium |
| `src/xrpld/app/consensus/RCLConsensusAdaptor.cpp` | ~40 | ~12 | Medium |
| `src/xrpld/core/JobQueue.cpp` | ~20 | ~5 | Low |
| `src/xrpld/app/paths/PathRequest.cpp` | ~40 | ~3 | Low |
| `src/xrpld/app/paths/Pathfinder.cpp` | ~40 | ~2 | Low |
| `src/xrpld/app/misc/TxQ.cpp` | ~40 | ~3 | Low |
| `src/xrpld/app/main/LoadManager.cpp` | ~20 | ~2 | Low |
| `src/xrpld/app/misc/ValidatorList.cpp` | ~20 | ~2 | Low |
| `src/xrpld/app/misc/AmendmentTable.cpp` | ~10 | ~2 | Low |
| `src/xrpld/app/misc/Manifest.cpp` | ~10 | ~1 | Low |
| `src/xrpld/shamap/SHAMap.cpp` | ~20 | ~3 | Low |
| `src/xrpld/overlay/detail/ripple.proto` | ~25 | 0 | Low |
| `CMakeLists.txt` | ~40 | ~8 | Low |
| `cmake/FindOpenTelemetry.cmake` | ~50 | 0 | None (new) |
### 3.9.3 Risk Assessment by Component
<div align="center">
**Do First** ↖ ↗ **Plan Carefully**
```mermaid
quadrantChart
title Code Intrusiveness Risk Matrix
x-axis Low Risk --> High Risk
y-axis Low Value --> High Value
RPC Tracing: [0.2, 0.55]
Transaction Relay: [0.55, 0.85]
Consensus Tracing: [0.75, 0.92]
Peer Message Tracing: [0.85, 0.35]
JobQueue Context: [0.3, 0.42]
Ledger Acquisition: [0.48, 0.65]
PathFinding: [0.38, 0.72]
TxQ and Fees: [0.25, 0.62]
Validator Mgmt: [0.15, 0.35]
```
**Optional** ↙ ↘ **Avoid**
</div>
#### Risk Level Definitions
| Risk Level | Definition | Mitigation |
| ---------- | ---------------------------------------------------------------- | ---------------------------------- |
| **Low** | Additive changes only; no modification to existing logic | Standard code review |
| **Medium** | Minor modifications to existing functions; clear boundaries | Comprehensive unit tests |
| **High** | Changes to core logic or data structures; potential side effects | Integration tests + staged rollout |
### 3.9.4 Architectural Impact Assessment
| Aspect | Impact | Justification |
| -------------------- | ------- | -------------------------------------------------------------------------------- |
| **Data Flow** | Minimal | Read-only instrumentation; no modification to consensus or transaction data flow |
| **Threading Model** | Minimal | Context propagation uses thread-local storage (standard OTel pattern) |
| **Memory Model** | Low | Bounded queues prevent unbounded growth; RAII ensures cleanup |
| **Network Protocol** | Low | Optional fields in protobuf (high field numbers); backward compatible |
| **Configuration** | None | New config section; existing configs unaffected |
| **Build System** | Low | Optional CMake flag; builds work without OpenTelemetry |
| **Dependencies** | Low | OpenTelemetry SDK is optional; null implementation when disabled |
### 3.9.5 Backward Compatibility
| Compatibility | Status | Notes |
| --------------- | ------- | ----------------------------------------------------- |
| **Config File** | ✅ Full | New `[telemetry]` section is optional |
| **Protocol** | ✅ Full | Optional protobuf fields with high field numbers |
| **Build** | ✅ Full | `XRPL_ENABLE_TELEMETRY=OFF` produces identical binary |
| **Runtime** | ✅ Full | `enabled=0` produces zero overhead |
| **API** | ✅ Full | No changes to public RPC or P2P APIs |
### 3.9.6 Rollback Strategy
If issues are discovered after deployment:
1. **Immediate**: Set `enabled=0` in config and restart (zero code change)
2. **Quick**: Rebuild with `XRPL_ENABLE_TELEMETRY=OFF`
3. **Complete**: Revert telemetry commits (clean separation makes this easy)
### 3.9.7 Code Change Examples
**Minimal RPC Instrumentation (Low Intrusiveness):**
```cpp
// Before
void ServerHandler::onRequest(...) {
auto result = processRequest(req);
send(result);
}
// After (only a few lines added)
void ServerHandler::onRequest(...) {
XRPL_TRACE_RPC(app_.getTelemetry(), "rpc.request"); // +1 line
XRPL_TRACE_SET_ATTR("xrpl.rpc.command", command); // +1 line
auto result = processRequest(req);
XRPL_TRACE_SET_ATTR("xrpl.rpc.status", status); // +1 line
send(result);
}
```
**Consensus Instrumentation (Medium Intrusiveness):**
```cpp
// Before
void RCLConsensusAdaptor::startRound(...) {
// ... existing logic
}
// After (context storage required)
void RCLConsensusAdaptor::startRound(...) {
XRPL_TRACE_CONSENSUS(app_.getTelemetry(), "consensus.round");
XRPL_TRACE_SET_ATTR("xrpl.consensus.ledger.seq", seq);
// Store context for child spans in phase transitions
currentRoundContext_ = _xrpl_guard_->context(); // New member variable
// ... existing logic unchanged
}
```
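For completeness, here is how the stored round context might later be consumed. This is a minimal sketch, assuming a hypothetical `startChildSpan(name, parentContext)` helper on the Telemetry interface and a `to_string(phase)` formatter; the real API may differ:
```cpp
// Sketch only: startChildSpan() and to_string() are hypothetical helpers.
void RCLConsensusAdaptor::onPhaseChange(ConsensusPhase newPhase) {
    // Parent the phase span on the round span captured in startRound(),
    // so phase transitions nest under their round in the trace view.
    auto guard = app_.getTelemetry().startChildSpan(
        "consensus.phase", currentRoundContext_);
    guard.setAttribute("xrpl.consensus.phase", to_string(newPhase));
    // ... existing phase-transition logic unchanged ...
}
```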
---
_Previous: [Design Decisions](./02-design-decisions.md)_ | _Next: [Code Samples](./04-code-samples.md)_ | _Back to: [Overview](./OpenTelemetryPlan.md)_

# Configuration Reference
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Related**: [Code Samples](./04-code-samples.md) | [Implementation Phases](./06-implementation-phases.md)
---
## 5.1 rippled Configuration
> **OTLP** = OpenTelemetry Protocol | **TxQ** = Transaction Queue
### 5.1.1 Configuration File Section
Add to `cfg/xrpld-example.cfg`:
```ini
# ═══════════════════════════════════════════════════════════════════════════════
# TELEMETRY (OpenTelemetry Distributed Tracing)
# ═══════════════════════════════════════════════════════════════════════════════
#
# Enables distributed tracing for transaction flow, consensus, and RPC calls.
# Traces are exported to an OpenTelemetry Collector using OTLP protocol.
#
# [telemetry]
#
# # Enable/disable telemetry (default: 0 = disabled)
# enabled=1
#
# # Exporter type: "otlp_grpc" (default), "otlp_http", or "none"
# exporter=otlp_grpc
#
# # OTLP endpoint (default: localhost:4317 for gRPC, localhost:4318 for HTTP)
# endpoint=localhost:4317
#
# # Use TLS for exporter connection (default: 0)
# use_tls=0
#
# # Path to CA certificate for TLS (optional)
# # tls_ca_cert=/path/to/ca.crt
#
# # Sampling ratio: 0.0-1.0 (default: 1.0 = 100% of traces).
# # For high-throughput production deployments, 0.1 (10%) is recommended
# # to reduce overhead. See Section 7.4.2 for sampling strategy details.
# sampling_ratio=0.1
#
# # Batch processor settings
# batch_size=512 # Spans per batch (default: 512)
# batch_delay_ms=5000 # Max delay before sending batch (default: 5000)
# max_queue_size=2048 # Max queued spans (default: 2048)
#
# # Component-specific tracing (default: all enabled except peer)
# trace_transactions=1 # Transaction relay and processing
# trace_consensus=1 # Consensus rounds and proposals
# trace_rpc=1 # RPC request handling
# trace_peer=0 # Peer messages (high volume, disabled by default)
# trace_ledger=1 # Ledger acquisition and building
# trace_pathfind=1 # Path computation (can be expensive)
# trace_txq=1 # Transaction queue and fee escalation
# trace_validator=0 # Validator list and manifest updates (low volume)
# trace_amendment=0 # Amendment voting (very low volume)
#
# # Service identification (automatically detected if not specified)
# # service_name=rippled
# # service_instance_id=<node_public_key>
[telemetry]
enabled=0
```
### 5.1.2 Configuration Options Summary
| Option | Type | Default | Description |
| --------------------- | ------ | ---------------- | ----------------------------------------- |
| `enabled` | bool | `false` | Enable/disable telemetry |
| `exporter` | string | `"otlp_grpc"` | Exporter type: otlp_grpc, otlp_http, none |
| `endpoint` | string | `localhost:4317` | OTLP collector endpoint |
| `use_tls` | bool | `false` | Enable TLS for exporter connection |
| `tls_ca_cert` | string | `""` | Path to CA certificate file |
| `sampling_ratio` | float | `1.0` | Sampling ratio (0.0-1.0) |
| `batch_size` | uint | `512` | Spans per export batch |
| `batch_delay_ms` | uint | `5000` | Max delay before sending batch (ms) |
| `max_queue_size` | uint | `2048` | Maximum queued spans |
| `trace_transactions` | bool | `true` | Enable transaction tracing |
| `trace_consensus` | bool | `true` | Enable consensus tracing |
| `trace_rpc` | bool | `true` | Enable RPC tracing |
| `trace_peer` | bool | `false` | Enable peer message tracing (high volume) |
| `trace_ledger` | bool | `true` | Enable ledger tracing |
| `trace_pathfind` | bool | `true` | Enable path computation tracing |
| `trace_txq` | bool | `true` | Enable transaction queue tracing |
| `trace_validator` | bool | `false` | Enable validator list/manifest tracing |
| `trace_amendment` | bool | `false` | Enable amendment voting tracing |
| `service_name` | string | `"rippled"` | Service name for traces |
| `service_instance_id` | string | `<node_pubkey>` | Instance identifier |
---
## 5.2 Configuration Parser
> **TxQ** = Transaction Queue
```cpp
// src/libxrpl/telemetry/TelemetryConfig.cpp
#include <xrpl/telemetry/Telemetry.h>
#include <xrpl/basics/Log.h>
namespace xrpl {
namespace telemetry {
Telemetry::Setup
setup_Telemetry(
Section const& section,
std::string const& nodePublicKey,
std::string const& version)
{
Telemetry::Setup setup;
// Basic settings
setup.enabled = section.value_or("enabled", false);
setup.serviceName = section.value_or("service_name", "rippled");
setup.serviceVersion = version;
setup.serviceInstanceId = section.value_or(
"service_instance_id", nodePublicKey);
// Exporter settings
setup.exporterType = section.value_or("exporter", "otlp_grpc");
if (setup.exporterType == "otlp_grpc")
setup.exporterEndpoint = section.value_or("endpoint", "localhost:4317");
else if (setup.exporterType == "otlp_http")
setup.exporterEndpoint = section.value_or("endpoint", "localhost:4318");
setup.useTls = section.value_or("use_tls", false);
setup.tlsCertPath = section.value_or("tls_ca_cert", "");
// Sampling
setup.samplingRatio = section.value_or("sampling_ratio", 1.0);
if (setup.samplingRatio < 0.0 || setup.samplingRatio > 1.0)
{
Throw<std::runtime_error>(
"telemetry.sampling_ratio must be between 0.0 and 1.0");
}
// Batch processor
setup.batchSize = section.value_or("batch_size", 512u);
setup.batchDelay = std::chrono::milliseconds{
section.value_or("batch_delay_ms", 5000u)};
setup.maxQueueSize = section.value_or("max_queue_size", 2048u);
// Component filtering
setup.traceTransactions = section.value_or("trace_transactions", true);
setup.traceConsensus = section.value_or("trace_consensus", true);
setup.traceRpc = section.value_or("trace_rpc", true);
setup.tracePeer = section.value_or("trace_peer", false);
setup.traceLedger = section.value_or("trace_ledger", true);
setup.tracePathfind = section.value_or("trace_pathfind", true);
setup.traceTxQ = section.value_or("trace_txq", true);
setup.traceValidator = section.value_or("trace_validator", false);
setup.traceAmendment = section.value_or("trace_amendment", false);
return setup;
}
} // namespace telemetry
} // namespace xrpl
```
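For reference, here is a minimal sketch of the `Telemetry::Setup` struct that the parser above populates. Field names follow the parser and defaults mirror the table in 5.1.2; the real header may organize this differently:
```cpp
// include/xrpl/telemetry/Telemetry.h (sketch; not the final header)
#include <chrono>
#include <cstdint>
#include <string>

namespace xrpl::telemetry {

class Telemetry
{
public:
    struct Setup
    {
        bool enabled = false;

        // Service identity
        std::string serviceName = "rippled";
        std::string serviceVersion;
        std::string serviceInstanceId;
        std::uint32_t networkId = 0;
        std::string networkType = "custom";

        // Exporter
        std::string exporterType = "otlp_grpc";
        std::string exporterEndpoint = "localhost:4317";
        bool useTls = false;
        std::string tlsCertPath;

        // Sampling and batching
        double samplingRatio = 1.0;
        std::uint32_t batchSize = 512;
        std::chrono::milliseconds batchDelay{5000};
        std::uint32_t maxQueueSize = 2048;

        // Per-component filters (defaults match 5.1.2)
        bool traceTransactions = true;
        bool traceConsensus = true;
        bool traceRpc = true;
        bool tracePeer = false;
        bool traceLedger = true;
        bool tracePathfind = true;
        bool traceTxQ = true;
        bool traceValidator = false;
        bool traceAmendment = false;
    };
    // ... virtual interface elided ...
};

} // namespace xrpl::telemetry
```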
---
## 5.3 Application Integration
### 5.3.1 ApplicationImp Changes
```cpp
// src/xrpld/app/main/Application.cpp (modified)
#include <xrpl/telemetry/Telemetry.h>
class ApplicationImp : public Application
{
// ... existing members ...
// Telemetry (must be constructed early, destroyed late)
std::unique_ptr<telemetry::Telemetry> telemetry_;
public:
ApplicationImp(...)
{
// Initialize telemetry early (before other components)
auto telemetrySection = config_->section("telemetry");
auto telemetrySetup = telemetry::setup_Telemetry(
telemetrySection,
toBase58(TokenType::NodePublic, nodeIdentity_.publicKey()),
BuildInfo::getVersionString());
// Set network attributes
telemetrySetup.networkId = config_->NETWORK_ID;
telemetrySetup.networkType = [&]() {
if (config_->NETWORK_ID == 0) return "mainnet";
if (config_->NETWORK_ID == 1) return "testnet";
if (config_->NETWORK_ID == 2) return "devnet";
return "custom";
}();
telemetry_ = telemetry::make_Telemetry(
telemetrySetup,
logs_->journal("Telemetry"));
// ... rest of initialization ...
}
void start() override
{
// Start telemetry first
if (telemetry_)
telemetry_->start();
// ... existing start code ...
}
void stop() override
{
// ... existing stop code ...
// Stop telemetry last (to capture shutdown spans)
if (telemetry_)
telemetry_->stop();
}
telemetry::Telemetry& getTelemetry() override
{
assert(telemetry_);
return *telemetry_;
}
};
```
### 5.3.2 Application Interface Addition
```cpp
// include/xrpl/app/main/Application.h (modified)
namespace telemetry { class Telemetry; }
class Application
{
public:
// ... existing virtual methods ...
/** Get the telemetry system for distributed tracing */
virtual telemetry::Telemetry& getTelemetry() = 0;
};
```
---
## 5.4 CMake Integration
> **OTLP** = OpenTelemetry Protocol
### 5.4.1 Find OpenTelemetry Module
```cmake
# cmake/FindOpenTelemetry.cmake
# Find OpenTelemetry C++ SDK
#
# This module defines:
# OpenTelemetry_FOUND - System has OpenTelemetry
# OpenTelemetry::api - API library target
# OpenTelemetry::sdk - SDK library target
# OpenTelemetry::otlp_grpc_exporter - OTLP gRPC exporter target
# OpenTelemetry::otlp_http_exporter - OTLP HTTP exporter target
find_package(opentelemetry-cpp CONFIG QUIET)
if(opentelemetry-cpp_FOUND)
set(OpenTelemetry_FOUND TRUE)
# Create imported targets if not already created by config
if(NOT TARGET OpenTelemetry::api)
add_library(OpenTelemetry::api ALIAS opentelemetry-cpp::api)
endif()
if(NOT TARGET OpenTelemetry::sdk)
add_library(OpenTelemetry::sdk ALIAS opentelemetry-cpp::sdk)
endif()
if(NOT TARGET OpenTelemetry::otlp_grpc_exporter)
add_library(OpenTelemetry::otlp_grpc_exporter ALIAS
opentelemetry-cpp::otlp_grpc_exporter)
endif()
else()
# Try pkg-config fallback
find_package(PkgConfig QUIET)
if(PKG_CONFIG_FOUND)
pkg_check_modules(OTEL opentelemetry-cpp QUIET)
if(OTEL_FOUND)
set(OpenTelemetry_FOUND TRUE)
# Create imported targets from pkg-config
add_library(OpenTelemetry::api INTERFACE IMPORTED)
target_include_directories(OpenTelemetry::api INTERFACE
${OTEL_INCLUDE_DIRS})
endif()
endif()
endif()
include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(OpenTelemetry
REQUIRED_VARS OpenTelemetry_FOUND)
```
### 5.4.2 CMakeLists.txt Changes
```cmake
# CMakeLists.txt (additions)
# ═══════════════════════════════════════════════════════════════════════════════
# TELEMETRY OPTIONS
# ═══════════════════════════════════════════════════════════════════════════════
option(XRPL_ENABLE_TELEMETRY
"Enable OpenTelemetry distributed tracing support" OFF)
if(XRPL_ENABLE_TELEMETRY)
find_package(OpenTelemetry REQUIRED)
# Define compile-time flag
add_compile_definitions(XRPL_ENABLE_TELEMETRY)
message(STATUS "OpenTelemetry tracing: ENABLED")
else()
message(STATUS "OpenTelemetry tracing: DISABLED")
endif()
# ═══════════════════════════════════════════════════════════════════════════════
# TELEMETRY LIBRARY
# ═══════════════════════════════════════════════════════════════════════════════
if(XRPL_ENABLE_TELEMETRY)
add_library(xrpl_telemetry
src/libxrpl/telemetry/Telemetry.cpp
src/libxrpl/telemetry/TelemetryConfig.cpp
src/libxrpl/telemetry/TraceContext.cpp
)
target_include_directories(xrpl_telemetry
PUBLIC
${CMAKE_CURRENT_SOURCE_DIR}/include
)
target_link_libraries(xrpl_telemetry
PUBLIC
OpenTelemetry::api
OpenTelemetry::sdk
OpenTelemetry::otlp_grpc_exporter
PRIVATE
xrpl_basics
)
# Add to main library dependencies
target_link_libraries(xrpld PRIVATE xrpl_telemetry)
else()
# Create null implementation library
add_library(xrpl_telemetry
src/libxrpl/telemetry/NullTelemetry.cpp
)
target_include_directories(xrpl_telemetry
PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include
)
endif()
```
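The macro layer is what makes the OFF configuration truly zero-cost at call sites. A minimal sketch of the pattern, assuming a guard-returning `startSpan()`; the macro names match this plan but the exact definitions are illustrative:
```cpp
// xrpl/telemetry/TraceMacros.h (sketch; exact definitions are illustrative)
#ifdef XRPL_ENABLE_TELEMETRY

// ON: create an RAII guard; the span ends when the guard leaves scope.
#define XRPL_TRACE_RPC(telemetry, name) \
    auto _xrpl_guard_ = (telemetry).startSpan(name)
#define XRPL_TRACE_SET_ATTR(key, value) \
    _xrpl_guard_->setAttribute((key), (value))

#else

// OFF: expand to nothing; arguments are never evaluated, so instrumented
// call sites compile to zero code and the binary is unchanged.
#define XRPL_TRACE_RPC(telemetry, name) \
    do { } while (false)
#define XRPL_TRACE_SET_ATTR(key, value) \
    do { } while (false)

#endif
```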
---
## 5.5 OpenTelemetry Collector Configuration
> **OTLP** = OpenTelemetry Protocol | **APM** = Application Performance Monitoring
### 5.5.1 Development Configuration
```yaml
# otel-collector-dev.yaml
# Minimal configuration for local development
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
processors:
batch:
timeout: 1s
send_batch_size: 100
exporters:
# Console output for debugging
logging:
verbosity: detailed
sampling_initial: 5
sampling_thereafter: 200
  # Grafana Tempo for trace storage and visualization
  otlp/tempo:
    endpoint: tempo:4317
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging, otlp/tempo]
```
### 5.5.2 Production Configuration
```yaml
# otel-collector-prod.yaml
# Production configuration with filtering, sampling, and multiple backends
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
tls:
cert_file: /etc/otel/server.crt
key_file: /etc/otel/server.key
ca_file: /etc/otel/ca.crt
processors:
# Memory limiter to prevent OOM
memory_limiter:
check_interval: 1s
limit_mib: 1000
spike_limit_mib: 200
# Batch processing for efficiency
batch:
timeout: 5s
send_batch_size: 512
send_batch_max_size: 1024
# Tail-based sampling (keep errors and slow traces)
tail_sampling:
decision_wait: 10s
num_traces: 100000
expected_new_traces_per_sec: 1000
policies:
# Always keep error traces
- name: errors
type: status_code
status_code:
status_codes: [ERROR]
# Keep slow consensus rounds (>5s)
- name: slow-consensus
type: latency
latency:
threshold_ms: 5000
# Keep slow RPC requests (>1s)
- name: slow-rpc
type: and
and:
and_sub_policy:
- name: rpc-spans
type: string_attribute
string_attribute:
key: xrpl.rpc.command
values: [".*"]
enabled_regex_matching: true
- name: latency
type: latency
latency:
threshold_ms: 1000
# Probabilistic sampling for the rest
- name: probabilistic
type: probabilistic
probabilistic:
sampling_percentage: 10
# Attribute processing
attributes:
actions:
# Hash sensitive data
- key: xrpl.tx.account
action: hash
# Add deployment info
- key: deployment.environment
value: production
action: upsert
exporters:
# Grafana Tempo for long-term storage
otlp/tempo:
endpoint: tempo.monitoring:4317
tls:
insecure: false
ca_file: /etc/otel/tempo-ca.crt
# Elastic APM for correlation with logs
otlp/elastic:
endpoint: apm.elastic:8200
headers:
Authorization: "Bearer ${ELASTIC_APM_TOKEN}"
extensions:
health_check:
endpoint: 0.0.0.0:13133
zpages:
endpoint: 0.0.0.0:55679
service:
extensions: [health_check, zpages]
pipelines:
traces:
receivers: [otlp]
processors: [memory_limiter, tail_sampling, attributes, batch]
exporters: [otlp/tempo, otlp/elastic]
```
---
## 5.6 Docker Compose Development Environment
> **OTLP** = OpenTelemetry Protocol
```yaml
# docker-compose-telemetry.yaml
version: "3.8"
services:
# OpenTelemetry Collector
otel-collector:
image: otel/opentelemetry-collector-contrib:0.92.0
container_name: otel-collector
command: ["--config=/etc/otel-collector-config.yaml"]
volumes:
- ./otel-collector-dev.yaml:/etc/otel-collector-config.yaml:ro
ports:
- "4317:4317" # OTLP gRPC
- "4318:4318" # OTLP HTTP
- "13133:13133" # Health check
depends_on:
- tempo
  # Grafana Tempo for trace storage and visualization
  tempo:
    image: grafana/tempo:2.7.2
    container_name: tempo
    command: ["-config.file=/etc/tempo.yaml"]
    volumes:
      - ./tempo.yaml:/etc/tempo.yaml:ro
      - tempo-data:/var/tempo
    ports:
      - "3200:3200" # Tempo HTTP API
      - "4317" # OTLP gRPC (internal)
# Grafana for dashboards
grafana:
image: grafana/grafana:10.2.3
container_name: grafana
environment:
- GF_AUTH_ANONYMOUS_ENABLED=true
- GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
volumes:
- ./grafana/provisioning:/etc/grafana/provisioning:ro
- ./grafana/dashboards:/var/lib/grafana/dashboards:ro
ports:
- "3000:3000"
    depends_on:
      - tempo
# Prometheus for metrics (optional, for correlation)
prometheus:
image: prom/prometheus:v2.48.1
container_name: prometheus
volumes:
- ./prometheus.yaml:/etc/prometheus/prometheus.yml:ro
ports:
- "9090:9090"
volumes:
  tempo-data:

networks:
  default:
    name: rippled-telemetry
```
---
## 5.7 Configuration Architecture
> **OTLP** = OpenTelemetry Protocol
```mermaid
flowchart TB
subgraph config["Configuration Sources"]
cfgFile["xrpld.cfg<br/>[telemetry] section"]
cmake["CMake<br/>XRPL_ENABLE_TELEMETRY"]
end
subgraph init["Initialization"]
parse["setup_Telemetry()"]
factory["make_Telemetry()"]
end
subgraph runtime["Runtime Components"]
tracer["TracerProvider"]
exporter["OTLP Exporter"]
processor["BatchProcessor"]
end
subgraph collector["Collector Pipeline"]
recv["Receivers"]
proc["Processors"]
exp["Exporters"]
end
cfgFile --> parse
cmake -->|"compile flag"| parse
parse --> factory
factory --> tracer
tracer --> processor
processor --> exporter
exporter -->|"OTLP"| recv
recv --> proc
proc --> exp
style config fill:#e3f2fd,stroke:#1976d2
style runtime fill:#e8f5e9,stroke:#388e3c
style collector fill:#fff3e0,stroke:#ff9800
```
**Reading the diagram:**
- **Configuration Sources**: `xrpld.cfg` provides runtime settings (endpoint, sampling) while the CMake flag controls whether telemetry is compiled in at all.
- **Initialization**: `setup_Telemetry()` parses config values, then `make_Telemetry()` constructs the provider, processor, and exporter objects.
- **Runtime Components**: The `TracerProvider` creates spans, the `BatchProcessor` buffers them, and the `OTLP Exporter` serializes and sends them over the wire.
- **OTLP arrow to Collector**: Trace data leaves the rippled process via OTLP (gRPC or HTTP) and enters the external Collector pipeline.
- **Collector Pipeline**: `Receivers` ingest OTLP data, `Processors` apply sampling/filtering/enrichment, and `Exporters` forward traces to storage backends (Tempo, etc.).
---
## 5.8 Grafana Integration
> **APM** = Application Performance Monitoring
Step-by-step instructions for integrating rippled traces with Grafana.
### 5.8.1 Data Source Configuration
#### Tempo (Recommended)
```yaml
# grafana/provisioning/datasources/tempo.yaml
apiVersion: 1
datasources:
- name: Tempo
type: tempo
access: proxy
url: http://tempo:3200
jsonData:
httpMethod: GET
tracesToLogs:
datasourceUid: loki
tags: ["service.name", "xrpl.tx.hash"]
mappedTags: [{ key: "trace_id", value: "traceID" }]
mapTagNamesEnabled: true
filterByTraceID: true
serviceMap:
datasourceUid: prometheus
nodeGraph:
enabled: true
search:
hide: false
lokiSearch:
datasourceUid: loki
```
#### Elastic APM
```yaml
# grafana/provisioning/datasources/elastic-apm.yaml
apiVersion: 1
datasources:
- name: Elasticsearch-APM
type: elasticsearch
access: proxy
url: http://elasticsearch:9200
database: "apm-*"
jsonData:
esVersion: "8.0.0"
timeField: "@timestamp"
logMessageField: message
logLevelField: log.level
```
### 5.8.2 Dashboard Provisioning
```yaml
# grafana/provisioning/dashboards/dashboards.yaml
apiVersion: 1
providers:
- name: "rippled-dashboards"
orgId: 1
folder: "rippled"
folderUid: "rippled"
type: file
disableDeletion: false
updateIntervalSeconds: 30
options:
path: /var/lib/grafana/dashboards/rippled
```
### 5.8.3 Example Dashboard: RPC Performance
```json
{
"title": "rippled RPC Performance",
"uid": "rippled-rpc-performance",
"panels": [
{
"title": "RPC Latency by Command",
"type": "heatmap",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && span.xrpl.rpc.command != \"\"} | histogram_over_time(duration) by (span.xrpl.rpc.command)"
}
],
"gridPos": { "h": 8, "w": 12, "x": 0, "y": 0 }
},
{
"title": "RPC Error Rate",
"type": "timeseries",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && status.code=error} | rate() by (span.xrpl.rpc.command)"
}
],
"gridPos": { "h": 8, "w": 12, "x": 12, "y": 0 }
},
{
"title": "Top 10 Slowest RPC Commands",
"type": "table",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && span.xrpl.rpc.command != \"\"} | avg(duration) by (span.xrpl.rpc.command) | topk(10)"
}
],
"gridPos": { "h": 8, "w": 24, "x": 0, "y": 8 }
},
{
"title": "Recent Traces",
"type": "table",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\"}"
}
],
"gridPos": { "h": 8, "w": 24, "x": 0, "y": 16 }
}
]
}
```
### 5.8.4 Example Dashboard: Transaction Tracing
```json
{
"title": "rippled Transaction Tracing",
"uid": "rippled-tx-tracing",
"panels": [
{
"title": "Transaction Throughput",
"type": "stat",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=\"tx.receive\"} | rate()"
}
],
"gridPos": { "h": 4, "w": 6, "x": 0, "y": 0 }
},
{
"title": "Cross-Node Relay Count",
"type": "timeseries",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=\"tx.relay\"} | avg(span.xrpl.tx.relay_count)"
}
],
"gridPos": { "h": 8, "w": 12, "x": 0, "y": 4 }
},
{
"title": "Transaction Validation Errors",
"type": "table",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=\"tx.validate\" && status.code=error}"
}
],
"gridPos": { "h": 8, "w": 12, "x": 12, "y": 4 }
}
]
}
```
### 5.8.5 TraceQL Query Examples
Common queries for rippled traces:
```
# Find all traces for a specific transaction hash
{resource.service.name="rippled" && span.xrpl.tx.hash="ABC123..."}
# Find slow RPC commands (>100ms)
{resource.service.name="rippled" && name=~"rpc.command.*"} | duration > 100ms
# Find consensus rounds taking >5 seconds
{resource.service.name="rippled" && name="consensus.round"} | duration > 5s
# Find failed transactions with error details
{resource.service.name="rippled" && name="tx.validate" && status.code=error}
# Find transactions relayed to many peers
{resource.service.name="rippled" && name="tx.relay"} | span.xrpl.tx.relay_count > 10
# Compare latency across nodes
{resource.service.name="rippled" && name="rpc.command.account_info"} | avg(duration) by (resource.service.instance.id)
```
### 5.8.6 Correlation with PerfLog
To correlate OpenTelemetry traces with existing PerfLog data:
**Step 1: Configure Loki to ingest PerfLog**
```yaml
# promtail-config.yaml
scrape_configs:
- job_name: rippled-perflog
static_configs:
- targets:
- localhost
labels:
job: rippled
__path__: /var/log/rippled/perf*.log
pipeline_stages:
- json:
expressions:
trace_id: trace_id
ledger_seq: ledger_seq
tx_hash: tx_hash
- labels:
trace_id:
ledger_seq:
tx_hash:
```
**Step 2: Add trace_id to PerfLog entries**
Modify PerfLog to include trace_id when available:
```cpp
// In PerfLog output, add trace_id from current span context
void logPerf(Json::Value& entry) {
auto span = opentelemetry::trace::GetSpan(
opentelemetry::context::RuntimeContext::GetCurrent());
if (span && span->GetContext().IsValid()) {
        char traceIdHex[32];  // a TraceId renders as exactly 32 hex chars
        span->GetContext().trace_id().ToLowerBase16(traceIdHex);
        entry["trace_id"] = std::string(traceIdHex, sizeof(traceIdHex));
}
// ... existing logging
}
```
**Step 3: Configure Grafana trace-to-logs link**
In Tempo data source configuration, set up the derived field:
```yaml
jsonData:
tracesToLogs:
datasourceUid: loki
tags: ["trace_id", "xrpl.tx.hash"]
filterByTraceID: true
filterBySpanID: false
```
### 5.8.7 Correlation with Insight/StatsD Metrics
To correlate traces with existing Beast Insight metrics:
**Step 1: Export Insight metrics to Prometheus**
```yaml
# prometheus.yaml
scrape_configs:
- job_name: "rippled-statsd"
static_configs:
- targets: ["statsd-exporter:9102"]
```
**Step 2: Add exemplars to metrics**
OpenTelemetry SDK automatically adds exemplars (trace IDs) to metrics when using the Prometheus exporter. This links metrics spikes to specific traces.
**Step 3: Configure Grafana metric-to-trace link**
```yaml
# In Prometheus data source
jsonData:
exemplarTraceIdDestinations:
- name: trace_id
datasourceUid: tempo
```
**Step 4: Dashboard panel with exemplars**
```json
{
"title": "RPC Latency with Trace Links",
"type": "timeseries",
"datasource": "Prometheus",
"targets": [
{
"expr": "histogram_quantile(0.99, rate(rippled_rpc_duration_seconds_bucket[5m]))",
"exemplar": true
}
]
}
```
This allows clicking on metric data points to jump directly to the related trace.
---
_Previous: [Code Samples](./04-code-samples.md)_ | _Next: [Implementation Phases](./06-implementation-phases.md)_ | _Back to: [Overview](./OpenTelemetryPlan.md)_

# Implementation Phases
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Related**: [Configuration Reference](./05-configuration-reference.md) | [Observability Backends](./07-observability-backends.md)
---
## 6.1 Phase Overview
> **TxQ** = Transaction Queue
```mermaid
gantt
title OpenTelemetry Implementation Timeline
dateFormat YYYY-MM-DD
axisFormat Week %W
section Phase 1
Core Infrastructure :p1, 2024-01-01, 2w
SDK Integration :p1a, 2024-01-01, 4d
Telemetry Interface :p1b, after p1a, 3d
Configuration & CMake :p1c, after p1b, 3d
Unit Tests :p1d, after p1c, 2d
Buffer & Integration :p1e, after p1d, 2d
section Phase 2
RPC Tracing :p2, after p1, 2w
HTTP Context Extraction :p2a, after p1, 2d
RPC Handler Instrumentation :p2b, after p2a, 4d
PathFinding Instrumentation :p2f, after p2b, 2d
TxQ Instrumentation :p2g, after p2f, 2d
WebSocket Support :p2c, after p2g, 2d
Integration Tests :p2d, after p2c, 2d
Buffer & Review :p2e, after p2d, 4d
section Phase 3
Transaction Tracing :p3, after p2, 2w
Protocol Buffer Extension :p3a, after p2, 2d
PeerImp Instrumentation :p3b, after p3a, 3d
Fee Escalation Instrumentation :p3f, after p3b, 2d
Relay Context Propagation :p3c, after p3f, 3d
Multi-node Tests :p3d, after p3c, 2d
Buffer & Review :p3e, after p3d, 4d
section Phase 4
Consensus Tracing :p4, after p3, 2w
Consensus Round Spans :p4a, after p3, 3d
Proposal Handling :p4b, after p4a, 3d
Validator List & Manifest Tracing :p4f, after p4b, 2d
Amendment Voting Tracing :p4g, after p4f, 2d
SHAMap Sync Tracing :p4h, after p4g, 2d
Validation Tests :p4c, after p4h, 4d
Buffer & Review :p4e, after p4c, 4d
section Phase 5
Documentation & Deploy :p5, after p4, 1w
```
---
## 6.2 Phase 1: Core Infrastructure (Weeks 1-2)
**Objective**: Establish foundational telemetry infrastructure
### Tasks
| Task | Description |
| ---- | ----------------------------------------------------- |
| 1.1 | Add OpenTelemetry C++ SDK to Conan/CMake |
| 1.2 | Implement `Telemetry` interface and factory |
| 1.3 | Implement `SpanGuard` RAII wrapper |
| 1.4 | Implement configuration parser |
| 1.5 | Integrate into `ApplicationImp` |
| 1.6 | Add conditional compilation (`XRPL_ENABLE_TELEMETRY`) |
| 1.7 | Create `NullTelemetry` no-op implementation (sketched below) |
| 1.8 | Unit tests for core infrastructure |
### Exit Criteria
- [ ] OpenTelemetry SDK compiles and links
- [ ] Telemetry can be enabled/disabled via config
- [ ] Basic span creation works
- [ ] No performance regression when disabled
- [ ] Unit tests passing
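Task 1.7's `NullTelemetry` is what keeps call sites compiling when tracing is disabled. A self-contained sketch of the shape; the interface names are assumptions based on this plan, not the final API:
```cpp
// Sketch of the no-op implementation (task 1.7); names are illustrative.
#include <memory>
#include <string_view>

struct SpanGuard
{
    // Inert guard: attribute writes are discarded, destruction is a no-op.
    void setAttribute(std::string_view /*key*/, std::string_view /*value*/) {}
};

struct Telemetry
{
    virtual ~Telemetry() = default;
    virtual void start() = 0;
    virtual void stop() = 0;
    virtual SpanGuard startSpan(std::string_view name) = 0;
};

// Used when XRPL_ENABLE_TELEMETRY=OFF or enabled=0: every method collapses
// to an empty inline body the optimizer removes entirely.
class NullTelemetry final : public Telemetry
{
public:
    void start() override {}
    void stop() override {}
    SpanGuard startSpan(std::string_view) override { return {}; }
};

std::unique_ptr<Telemetry> makeNullTelemetry()
{
    return std::make_unique<NullTelemetry>();
}
```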
---
## 6.3 Phase 2: RPC Tracing (Weeks 3-4)
> **TxQ** = Transaction Queue
**Objective**: Complete tracing for all RPC operations
### Tasks
| Task | Description |
| ---- | -------------------------------------------------------------------------- |
| 2.1  | Implement W3C Trace Context HTTP header extraction (sketched below)        |
| 2.2 | Instrument `ServerHandler::onRequest()` |
| 2.3 | Instrument `RPCHandler::doCommand()` |
| 2.4 | Add RPC-specific attributes |
| 2.5 | Instrument WebSocket handler |
| 2.6 | PathFinding instrumentation (`pathfind.request`, `pathfind.compute` spans) |
| 2.7 | TxQ instrumentation (`txq.enqueue`, `txq.apply` spans) |
| 2.8 | Integration tests for RPC tracing |
| 2.9 | Performance benchmarks |
| 2.10 | Documentation |
### Exit Criteria
- [ ] All RPC commands traced
- [ ] Trace context propagates from HTTP headers
- [ ] WebSocket and HTTP both instrumented
- [ ] <1ms overhead per RPC call
- [ ] Integration tests passing
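Task 2.1's header extraction follows the W3C Trace Context format (`version-traceid-parentid-flags`). In practice the OpenTelemetry `HttpTraceContext` propagator handles this; the hand-rolled parser below is only a sketch to illustrate the wire format:
```cpp
// Illustrative parser for a W3C traceparent header, e.g.
// "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01".
#include <cctype>
#include <optional>
#include <string>
#include <string_view>

struct TraceParent
{
    std::string traceId;   // 32 lowercase hex characters
    std::string parentId;  // 16 lowercase hex characters
    bool sampled = false;  // low bit of the trace-flags byte
};

std::optional<TraceParent> parseTraceparent(std::string_view h)
{
    // Fixed layout: "00-" + 32 hex + "-" + 16 hex + "-" + 2 hex = 55 chars.
    if (h.size() != 55 || h.substr(0, 3) != "00-" || h[35] != '-' ||
        h[52] != '-')
        return std::nullopt;

    auto isHex = [](std::string_view s) {
        for (char c : s)
            if (!std::isxdigit(static_cast<unsigned char>(c)))
                return false;
        return true;
    };

    auto traceId = h.substr(3, 32);
    auto parentId = h.substr(36, 16);
    auto flags = h.substr(53, 2);
    if (!isHex(traceId) || !isHex(parentId) || !isHex(flags))
        return std::nullopt;
    if (traceId == std::string_view("00000000000000000000000000000000"))
        return std::nullopt;  // all-zero trace id is invalid per the spec

    TraceParent tp;
    tp.traceId = std::string(traceId);
    tp.parentId = std::string(parentId);
    tp.sampled = (std::stoi(std::string(flags), nullptr, 16) & 0x01) != 0;
    return tp;
}
```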
---
## 6.4 Phase 3: Transaction Tracing (Weeks 5-6)
**Objective**: Trace transaction lifecycle across network
### Tasks
| Task | Description |
| ---- | ---------------------------------------------------- |
| 3.1 | Define `TraceContext` Protocol Buffer message |
| 3.2  | Implement protobuf context serialization (see the carrier sketch below) |
| 3.3 | Instrument `PeerImp::handleTransaction()` |
| 3.4 | Instrument `NetworkOPs::submitTransaction()` |
| 3.5 | Instrument HashRouter integration |
| 3.6 | Fee escalation instrumentation (`fee.escalate` span) |
| 3.7 | Implement relay context propagation |
| 3.8 | Integration tests (multi-node) |
| 3.9 | Performance benchmarks |
### Exit Criteria
- [ ] Transaction traces span across nodes
- [ ] Trace context in Protocol Buffer messages
- [ ] HashRouter deduplication visible in traces
- [ ] Multi-node integration tests passing
- [ ] <5% overhead on transaction throughput
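For tasks 3.1/3.2, context propagation reduces to injecting key/value pairs into the outgoing message and extracting them on receipt. The map-backed carrier below stands in for the optional protobuf fields (the real code would read/write the `TraceContext` message); the propagation API used is the standard opentelemetry-cpp one:
```cpp
// Sketch of context propagation into peer messages (tasks 3.1/3.2).
#include <map>
#include <string>

#include <opentelemetry/context/propagation/text_map_propagator.h>
#include <opentelemetry/context/runtime_context.h>
#include <opentelemetry/nostd/string_view.h>
#include <opentelemetry/trace/propagation/http_trace_context.h>

namespace otel = opentelemetry;

// Minimal TextMapCarrier over key/value pairs (e.g. "traceparent" -> "00-...")
// destined for the protobuf TraceContext message.
class ProtoCarrier : public otel::context::propagation::TextMapCarrier
{
public:
    otel::nostd::string_view
    Get(otel::nostd::string_view key) const noexcept override
    {
        auto it = kv_.find(std::string{key.data(), key.size()});
        if (it == kv_.end())
            return {};
        return otel::nostd::string_view(it->second);
    }

    void
    Set(otel::nostd::string_view key,
        otel::nostd::string_view value) noexcept override
    {
        kv_[std::string{key.data(), key.size()}] =
            std::string{value.data(), value.size()};
    }

    std::map<std::string, std::string> kv_;
};

// Sender side: snapshot the current span context into the carrier just
// before the message is serialized; the receiver runs Extract() to parent
// its local spans on the remote one.
void injectCurrentContext(ProtoCarrier& carrier)
{
    otel::trace::propagation::HttpTraceContext propagator;
    propagator.Inject(carrier, otel::context::RuntimeContext::GetCurrent());
}
```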
---
## 6.5 Phase 4: Consensus Tracing (Weeks 7-8)
**Objective**: Full observability into consensus rounds
### Tasks
| Task | Description |
| ---- | ---------------------------------------------- |
| 4.1 | Instrument `RCLConsensusAdaptor::startRound()` |
| 4.2 | Instrument phase transitions |
| 4.3 | Instrument proposal handling |
| 4.4 | Instrument validation handling |
| 4.5 | Add consensus-specific attributes |
| 4.6 | Correlate with transaction traces |
| 4.7 | Validator list and manifest tracing |
| 4.8 | Amendment voting tracing |
| 4.9 | SHAMap sync tracing |
| 4.10 | Multi-validator integration tests |
| 4.11 | Performance validation |
### Spans Produced
| Span Name | Location | Attributes |
| --------------------------- | ---------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `consensus.proposal.send` | `RCLConsensus.cpp:177` | `xrpl.consensus.round` |
| `consensus.ledger_close` | `RCLConsensus.cpp:282` | `xrpl.consensus.ledger.seq`, `xrpl.consensus.mode` |
| `consensus.accept` | `RCLConsensus.cpp:395` | `xrpl.consensus.proposers`, `xrpl.consensus.round_time_ms` |
| `consensus.accept.apply` | `RCLConsensus.cpp:521` | `xrpl.consensus.close_time`, `close_time_correct`, `close_resolution_ms`, `state`, `proposing`, `round_time_ms`, `ledger.seq`, `parent_close_time`, `close_time_self`, `close_time_vote_bins`, `resolution_direction` |
| `consensus.validation.send` | `RCLConsensus.cpp:753` | `xrpl.consensus.proposing` |
### Exit Criteria
- [x] Complete consensus round traces
- [x] Phase transitions visible
- [x] Proposals and validations traced
- [x] Close time agreement tracked (per `avCT_CONSENSUS_PCT`)
- [x] No impact on consensus timing
- [ ] Multi-validator test network validated
### Implementation Status — Phase 4a Complete
Phase 4a (establish-phase gap fill & cross-node correlation) adds:
- **Deterministic trace ID** derived from `previousLedger.id()` so all validators
  in the same round share the same `trace_id` (switchable via the
  `consensus_trace_strategy` config option: `"deterministic"` or `"attribute"`).
  The option will be documented in the
  [Configuration Reference](./05-configuration-reference.md) as part of the
  Phase 4a implementation.
- **Round lifecycle spans**: `consensus.round` with round-to-round span links.
- **Establish phase**: `consensus.establish`, `consensus.update_positions` (with
`dispute.resolve` events), `consensus.check` (with threshold tracking).
- **Mode changes**: `consensus.mode_change` spans.
- **Validation**: `consensus.validation.send` with span link to round span
(thread-safe cross-thread access via `roundSpanContext_` snapshot).
- **Separation of concerns**: telemetry extracted to private helpers
(`startRoundTracing`, `createValidationSpan`, `startEstablishTracing`,
`updateEstablishTracing`, `endEstablishTracing`).
See [Phase4_taskList.md](./Phase4_taskList.md) for the full spec and implementation notes.
---
## 6.5a Phase 4a: Establish-Phase Gap Fill & Cross-Node Correlation
**Objective**: Fill tracing gaps in the establish phase and enable cross-node
correlation using deterministic trace IDs derived from `previousLedger.id()`.
**Approach**: Direct instrumentation in `Consensus.h`. Long-lived spans use
direct SpanGuard members; short-lived scoped spans use `XRPL_TRACE_*` macros.
### Tasks
| Task | Description | Effort | Risk |
| ---- | ------------------------------------------------ | ------ | ------ |
| 4a.0 | Prerequisites: extend SpanGuard & Telemetry APIs | 1d | Medium |
| 4a.1 | Adaptor `getTelemetry()` method | 0.5d | Low |
| 4a.2 | Switchable round span with deterministic traceID (sketched below) | 2d | High |
| 4a.3 | Span members in `Consensus.h` | 0.5d | Medium |
| 4a.4 | Instrument `phaseEstablish()` | 1d | Medium |
| 4a.5 | Instrument `updateOurPositions()` | 1d | Medium |
| 4a.6 | Instrument `haveConsensus()` (thresholds) | 1d | Medium |
| 4a.7 | Instrument mode changes | 0.5d | Low |
| 4a.8 | Reparent existing spans under round | 0.5d | Low |
| 4a.9 | Build verification and testing | 1d | Low |
**Total Effort**: 9 days
### Spans Produced
| Span Name | Location | Key Attributes |
| ---------------------------- | ------------------ | ---------------------------------------------------------------- |
| `consensus.round` | `RCLConsensus.cpp` | `round_id`, `ledger_id`, `ledger.seq`, `mode`; link prev round |
| `consensus.establish` | `Consensus.h` | `converge_percent`, `establish_count`, `proposers` |
| `consensus.update_positions` | `Consensus.h` | `disputes_count`, `converge_percent`, `proposers_agreed/total` |
| `consensus.check` | `Consensus.h` | `agree/disagree_count`, `threshold_percent`, `result` |
| `consensus.mode_change` | `RCLConsensus.cpp` | `mode.old`, `mode.new` |
### Exit Criteria
- [ ] Establish phase internals fully traced (disputes, convergence, thresholds)
- [ ] Cross-node correlation works via deterministic trace_id
- [ ] Strategy switchable via config (`deterministic` / `attribute`)
- [ ] Consecutive rounds linked via follows-from spans
- [ ] Build passes with telemetry ON and OFF
- [ ] No impact on consensus timing
See [Phase4_taskList.md](./Phase4_taskList.md) for full task details.
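A minimal sketch of the deterministic strategy from task 4a.2: carve the 16-byte OpenTelemetry trace id out of the 32-byte parent ledger hash, so every validator computes the same id independently. The raw-bytes plumbing below is illustrative; rippled's actual `uint256` accessors may differ:
```cpp
#include <array>
#include <cstdint>
#include <cstring>

#include <opentelemetry/nostd/span.h>
#include <opentelemetry/trace/trace_id.h>

// prevLedgerHash stands in for previousLedger.id() rendered as raw bytes.
opentelemetry::trace::TraceId
deterministicRoundTraceId(std::array<std::uint8_t, 32> const& prevLedgerHash)
{
    // A TraceId is 16 bytes; take the leading half of the 32-byte hash.
    // Any fixed slice works, as long as every node picks the same one.
    constexpr std::size_t kIdSize = opentelemetry::trace::TraceId::kSize;
    std::array<std::uint8_t, kIdSize> bytes{};
    std::memcpy(bytes.data(), prevLedgerHash.data(), bytes.size());
    return opentelemetry::trace::TraceId(
        opentelemetry::nostd::span<std::uint8_t const, kIdSize>(
            bytes.data(), bytes.size()));
}
```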
---
## 6.5b Phase 4b: Cross-Node Propagation (Future)
**Objective**: Wire `TraceContextPropagator` for P2P messages (proposals,
validations) to enable true distributed tracing between nodes.
**Status**: Design documented, NOT implemented. Protobuf fields (field 1001)
and `TraceContextPropagator` class exist. Wiring deferred until Phase 4a is
validated in a multi-node environment.
**Prerequisites**: Phase 4a complete and validated.
See [Phase4_taskList.md § Phase 4b](./Phase4_taskList.md) for full design.
---
## 6.6 Phase 5: Documentation & Deployment (Week 9)
**Objective**: Production readiness
### Tasks
| Task | Description |
| ---- | ----------------------------- |
| 5.1 | Operator runbook |
| 5.2 | Grafana dashboards |
| 5.3 | Alert definitions |
| 5.4 | Collector deployment examples |
| 5.5 | Developer documentation |
| 5.6 | Training materials |
| 5.7 | Final integration testing |
---
## 6.7 Risk Assessment
```mermaid
quadrantChart
title Risk Assessment Matrix
x-axis Low Impact --> High Impact
y-axis Low Likelihood --> High Likelihood
quadrant-1 Mitigate Immediately
quadrant-2 Plan Mitigation
quadrant-3 Accept Risk
quadrant-4 Monitor Closely
SDK Compat: [0.2, 0.18]
Protocol Chg: [0.75, 0.72]
Perf Overhead: [0.58, 0.42]
Context Prop: [0.4, 0.55]
Memory Leaks: [0.85, 0.25]
```
### Risk Details
| Risk | Likelihood | Impact | Mitigation |
| ------------------------------------ | ---------- | ------ | --------------------------------------- |
| Protocol changes break compatibility | Medium | High | Use high field numbers, optional fields |
| Performance overhead unacceptable | Medium | Medium | Sampling, conditional compilation |
| Context propagation complexity | Medium | Medium | Phased rollout, extensive testing |
| SDK compatibility issues | Low | Medium | Pin SDK version, fallback to no-op |
| Memory leaks in long-running nodes | Low | High | Memory profiling, bounded queues |
---
## 6.8 Success Metrics
| Metric | Target | Measurement |
| ------------------------ | -------------------------------------------------------------- | --------------------- |
| Trace coverage | >95% of transaction code paths (independent of sampling ratio) | Sampling verification |
| CPU overhead | <3% | Benchmark tests |
| Memory overhead | <10 MB | Memory profiling |
| Latency impact (p99) | <2% | Performance tests |
| Trace completeness | >99% spans with required attrs | Validation script |
| Cross-node trace linkage | >90% of multi-hop transactions | Integration tests |
---
## 6.9 Quick Wins and Crawl-Walk-Run Strategy
> **TxQ** = Transaction Queue
This section outlines a prioritized approach to maximize ROI with minimal initial investment.
### 6.9.1 Crawl-Walk-Run Overview
<div align="center">
```mermaid
flowchart TB
subgraph crawl["🐢 CRAWL (Week 1-2)"]
direction LR
c1[Core SDK Setup] ~~~ c2[RPC Tracing Only] ~~~ c3[PathFinding + TxQ Tracing] ~~~ c4[Single Node]
end
subgraph walk["🚶 WALK (Week 3-5)"]
direction LR
w1[Transaction Tracing] ~~~ w2[Fee Escalation Tracing] ~~~ w3[Cross-Node Context] ~~~ w4[Basic Dashboards]
end
subgraph run["🏃 RUN (Week 6-9)"]
direction LR
r1[Consensus Tracing] ~~~ r2[Validator, Amendment,<br/>SHAMap Tracing] ~~~ r3[Full Correlation] ~~~ r4[Production Deploy]
end
crawl --> walk --> run
style crawl fill:#1b5e20,stroke:#0d3d14,color:#fff
style walk fill:#bf360c,stroke:#8c2809,color:#fff
style run fill:#0d47a1,stroke:#082f6a,color:#fff
style c1 fill:#1b5e20,stroke:#0d3d14,color:#fff
style c2 fill:#1b5e20,stroke:#0d3d14,color:#fff
style c3 fill:#1b5e20,stroke:#0d3d14,color:#fff
style c4 fill:#1b5e20,stroke:#0d3d14,color:#fff
style w1 fill:#ffe0b2,stroke:#ffcc80,color:#1e293b
style w2 fill:#ffe0b2,stroke:#ffcc80,color:#1e293b
style w3 fill:#ffe0b2,stroke:#ffcc80,color:#1e293b
style w4 fill:#ffe0b2,stroke:#ffcc80,color:#1e293b
style r1 fill:#0d47a1,stroke:#082f6a,color:#fff
style r2 fill:#0d47a1,stroke:#082f6a,color:#fff
style r3 fill:#0d47a1,stroke:#082f6a,color:#fff
style r4 fill:#0d47a1,stroke:#082f6a,color:#fff
```
</div>
**Reading the diagram:**
- **CRAWL (Weeks 1-2)**: Minimal investment -- set up the SDK, instrument RPC and PathFinding/TxQ handlers, and verify on a single node. Delivers immediate latency visibility.
- **WALK (Weeks 3-5)**: Expand to transaction lifecycle tracing, fee escalation, cross-node context propagation, and basic Grafana dashboards. This is where distributed tracing starts working.
- **RUN (Weeks 6-9)**: Full consensus instrumentation, validator/amendment/SHAMap tracing, end-to-end correlation, and production deployment with sampling and alerting.
- **Arrows (crawl → walk → run)**: Each phase builds on the prior one; you cannot skip ahead because later phases depend on infrastructure established earlier.
### 6.9.2 Quick Wins (Immediate Value)
| Quick Win | Value | When to Deploy |
| ------------------------------ | ------ | -------------- |
| **RPC Command Tracing** | High | Week 2 |
| **RPC Latency Histograms** | High | Week 2 |
| **Error Rate Dashboard** | Medium | Week 2 |
| **Transaction Submit Tracing** | High | Week 3 |
| **Consensus Round Duration** | Medium | Week 6 |
### 6.9.3 CRAWL Phase (Weeks 1-2)
**Goal**: Get basic tracing working with minimal code changes.
**What You Get**:
- RPC request/response traces for all commands
- Latency breakdown per RPC command
- PathFinding and TxQ tracing (directly impacts RPC latency)
- Error visibility with stack traces
- Basic Grafana dashboard
**Code Changes**: ~15 lines in `ServerHandler.cpp`, ~40 lines in new telemetry module
**Why Start Here**:
- RPC is the lowest-risk, highest-visibility component
- PathFinding and TxQ are RPC-adjacent and directly affect latency
- Immediate value for debugging client issues
- No cross-node complexity
- Single file modification to existing code
### 6.9.4 WALK Phase (Weeks 3-5)
**Goal**: Add transaction lifecycle tracing across nodes.
**What You Get**:
- End-to-end transaction traces from submit to relay
- Fee escalation tracing within the transaction pipeline
- Cross-node correlation (see transaction path)
- HashRouter deduplication visibility
- Relay latency metrics
**Code Changes**: ~120 lines across 4 files, plus protobuf extension
**Why Do This Second**:
- Builds on RPC tracing (transactions submitted via RPC)
- Fee escalation is integral to the transaction processing pipeline
- Moderate complexity (requires context propagation)
- High value for debugging transaction issues
### 6.9.5 RUN Phase (Weeks 6-9)
**Goal**: Full observability including consensus.
**What You Get**:
- Complete consensus round visibility
- Phase transition timing
- Validator proposal tracking
- Validator list and manifest tracing
- Amendment voting tracing
- SHAMap sync tracing
- Full end-to-end traces (client → RPC → TX → consensus → ledger)
**Code Changes**: ~100 lines across 3 consensus files, plus validator/amendment/SHAMap modules
**Why Do This Last**:
- Highest complexity (consensus is critical path)
- Validator, amendment, and SHAMap components are lower priority
- Requires thorough testing
- Lower relative value (consensus issues are rarer)
### 6.9.6 ROI Prioritization Matrix
```mermaid
quadrantChart
title Implementation ROI Matrix
x-axis Low Effort --> High Effort
y-axis Low Value --> High Value
quadrant-1 Quick Wins - Do First
quadrant-2 Major Projects - Plan Carefully
quadrant-3 Nice to Have - Optional
quadrant-4 Time Sinks - Avoid
RPC Tracing: [0.15, 0.92]
TX Submit Trace: [0.3, 0.78]
TX Relay Trace: [0.5, 0.88]
Consensus Trace: [0.72, 0.72]
Peer Msg Trace: [0.85, 0.3]
Ledger Acquire: [0.55, 0.52]
```
---
## 6.10 Definition of Done
> **TxQ** = Transaction Queue | **HA** = High Availability
Clear, measurable criteria for each phase.
### 6.10.1 Phase 1: Core Infrastructure
| Criterion | Measurement | Target |
| --------------- | ---------------------------------------------------------- | ---------------------------- |
| SDK Integration | `cmake --build` succeeds with `-DXRPL_ENABLE_TELEMETRY=ON` | ✅ Compiles |
| Runtime Toggle | `enabled=0` produces zero overhead | <0.1% CPU difference |
| Span Creation | Unit test creates and exports span | Span appears in Tempo |
| Configuration | All config options parsed correctly | Config validation tests pass |
| Documentation | Developer guide exists | PR approved |
**Definition of Done**: All criteria met, PR merged, no regressions in CI.
### 6.10.2 Phase 2: RPC Tracing
| Criterion | Measurement | Target |
| ------------------ | ---------------------------------- | -------------------------- |
| Coverage | All RPC commands instrumented | 100% of commands |
| Context Extraction | traceparent header propagates | Integration test passes |
| Attributes | Command, status, duration recorded | Validation script confirms |
| Performance | RPC latency overhead | <1ms p99 |
| Dashboard | Grafana dashboard deployed | Screenshot in docs |
**Definition of Done**: RPC traces visible in Tempo for all commands, dashboard shows latency distribution.
### 6.10.3 Phase 3: Transaction Tracing
| Criterion | Measurement | Target |
| ---------------- | ------------------------------- | ---------------------------------- |
| Local Trace      | Submit → validate → TxQ traced  | Single-node test passes            |
| Cross-Node | Context propagates via protobuf | Multi-node test passes |
| Relay Visibility | relay_count attribute correct | Spot check 100 txs |
| HashRouter | Deduplication visible in trace | Duplicate txs show suppressed=true |
| Performance | TX throughput overhead | <5% degradation |
**Definition of Done**: Transaction traces span 3+ nodes in test network, performance within bounds.
### 6.10.4 Phase 4: Consensus Tracing
| Criterion | Measurement | Target |
| -------------------- | ----------------------------- | ------------------------- |
| Round Tracing | startRound creates root span | Unit test passes |
| Phase Visibility | All phases have child spans | Integration test confirms |
| Proposer Attribution | Proposer ID in attributes | Spot check 50 rounds |
| Timing Accuracy | Phase durations match PerfLog | <5% variance |
| No Consensus Impact | Round timing unchanged | Performance test passes |
**Definition of Done**: Consensus rounds fully traceable, no impact on consensus timing.
### 6.10.5 Phase 5: Production Deployment
| Criterion | Measurement | Target |
| ------------ | ---------------------------- | -------------------------- |
| Collector HA | Multiple collectors deployed | No single point of failure |
| Sampling | Tail sampling configured | 10% base + errors + slow |
| Retention | Data retained per policy | 7 days hot, 30 days warm |
| Alerting | Alerts configured | Error spike, high latency |
| Runbook | Operator documentation | Approved by ops team |
| Training | Team trained | Session completed |
**Definition of Done**: Telemetry running in production, operators trained, alerts active.
### 6.10.6 Success Metrics Summary
| Phase | Primary Metric | Secondary Metric | Deadline |
| ------- | ---------------------- | --------------------------- | ------------- |
| Phase 1 | SDK compiles and runs | Zero overhead when disabled | End of Week 2 |
| Phase 2 | 100% RPC coverage | <1ms latency overhead | End of Week 4 |
| Phase 3 | Cross-node traces work | <5% throughput impact | End of Week 6 |
| Phase 4 | Consensus fully traced | No consensus timing impact | End of Week 8 |
| Phase 5 | Production deployment | Operators trained | End of Week 9 |
---
## 6.11 Recommended Implementation Order
Based on ROI analysis, implement in this exact order:
```mermaid
flowchart TB
subgraph week1["Week 1"]
t1[1. OpenTelemetry SDK<br/>Conan/CMake integration]
t2[2. Telemetry interface<br/>SpanGuard, config]
end
subgraph week2["Week 2"]
t3[3. RPC ServerHandler<br/>instrumentation]
t4[4. Basic Tempo setup<br/>for testing]
end
subgraph week3["Week 3"]
t5[5. Transaction submit<br/>tracing]
t6[6. Grafana dashboard<br/>v1]
end
subgraph week4["Week 4"]
t7[7. Protobuf context<br/>extension]
t8[8. PeerImp tx.relay<br/>instrumentation]
end
subgraph week5["Week 5"]
t9[9. Multi-node<br/>integration tests]
t10[10. Performance<br/>benchmarks]
end
subgraph week6_8["Weeks 6-8"]
t11[11. Consensus<br/>instrumentation]
t12[12. Full integration<br/>testing]
end
subgraph week9["Week 9"]
t13[13. Production<br/>deployment]
t14[14. Documentation<br/>& training]
end
t1 --> t2 --> t3 --> t4
t4 --> t5 --> t6
t6 --> t7 --> t8
t8 --> t9 --> t10
t10 --> t11 --> t12
t12 --> t13 --> t14
style week1 fill:#1b5e20,stroke:#0d3d14,color:#fff
style week2 fill:#1b5e20,stroke:#0d3d14,color:#fff
style week3 fill:#bf360c,stroke:#8c2809,color:#fff
style week4 fill:#bf360c,stroke:#8c2809,color:#fff
style week5 fill:#bf360c,stroke:#8c2809,color:#fff
style week6_8 fill:#0d47a1,stroke:#082f6a,color:#fff
style week9 fill:#4a148c,stroke:#2e0d57,color:#fff
style t1 fill:#1b5e20,stroke:#0d3d14,color:#fff
style t2 fill:#1b5e20,stroke:#0d3d14,color:#fff
style t3 fill:#1b5e20,stroke:#0d3d14,color:#fff
style t4 fill:#1b5e20,stroke:#0d3d14,color:#fff
style t5 fill:#ffe0b2,stroke:#ffcc80,color:#1e293b
style t6 fill:#ffe0b2,stroke:#ffcc80,color:#1e293b
style t7 fill:#ffe0b2,stroke:#ffcc80,color:#1e293b
style t8 fill:#ffe0b2,stroke:#ffcc80,color:#1e293b
style t9 fill:#ffe0b2,stroke:#ffcc80,color:#1e293b
style t10 fill:#ffe0b2,stroke:#ffcc80,color:#1e293b
style t11 fill:#0d47a1,stroke:#082f6a,color:#fff
style t12 fill:#0d47a1,stroke:#082f6a,color:#fff
style t13 fill:#4a148c,stroke:#2e0d57,color:#fff
style t14 fill:#4a148c,stroke:#2e0d57,color:#fff
```
**Reading the diagram:**
- **Week 1 (tasks 1-2)**: Foundation work -- integrate the OpenTelemetry SDK via Conan/CMake and build the `Telemetry` interface with `SpanGuard` and config parsing.
- **Week 2 (tasks 3-4)**: First observable output -- instrument `ServerHandler` for RPC tracing and stand up Tempo so developers can see traces immediately.
- **Weeks 3-5 (tasks 5-10)**: Transaction lifecycle -- add submit tracing, build the first Grafana dashboard, extend protobuf for cross-node context, instrument `PeerImp` relay, then validate with multi-node integration tests and performance benchmarks.
- **Weeks 6-8 (tasks 11-12)**: Consensus deep-dive -- instrument consensus rounds and phases, then run full integration testing across all instrumented paths.
- **Week 9 (tasks 13-14)**: Go-live -- deploy to production with sampling/alerting configured, and deliver documentation and operator training.
- **Arrow chain (t1 ... t14)**: Strict sequential dependency; each task's output is a prerequisite for the next.
---
_Previous: [Configuration Reference](./05-configuration-reference.md)_ | _Next: [Observability Backends](./07-observability-backends.md)_ | _Back to: [Overview](./OpenTelemetryPlan.md)_

# Observability Backend Recommendations
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Related**: [Implementation Phases](./06-implementation-phases.md) | [Appendix](./08-appendix.md)
---
## 7.1 Development/Testing Backends
> **OTLP** = OpenTelemetry Protocol
| Backend | Pros | Cons | Use Case |
| ---------- | ----------------------------------- | ---------------------- | ------------------- |
| **Tempo** | Cost-effective, Grafana integration | Requires Grafana stack | Local dev, CI, Prod |
| **Zipkin** | Simple, lightweight | Basic features | Quick prototyping |
### Quick Start with Tempo
```bash
# Start Tempo with OTLP support
docker run -d --name tempo \
-p 3200:3200 \
-p 4317:4317 \
-p 4318:4318 \
grafana/tempo:2.6.1
```
---
## 7.2 Production Backends
> **APM** = Application Performance Monitoring
| Backend | Pros | Cons | Use Case |
| ----------------- | ----------------------------------------- | ---------------------- | --------------------------- |
| **Grafana Tempo** | Cost-effective, Grafana integration | Requires Grafana stack | Most production deployments |
| **Elastic APM** | Full observability stack, log correlation | Resource intensive | Existing Elastic users |
| **Honeycomb** | Excellent query, high cardinality | SaaS cost | Deep debugging needs |
| **Datadog APM** | Full platform, easy setup | SaaS cost | Enterprise with budget |
### Backend Selection Flowchart
```mermaid
flowchart TD
start[Select Backend] --> budget{Budget<br/>Constraints?}
budget -->|Yes| oss[Open Source]
budget -->|No| saas{Prefer<br/>SaaS?}
oss --> existing{Existing<br/>Stack?}
existing -->|Grafana| tempo[Grafana Tempo]
existing -->|Elastic| elastic[Elastic APM]
existing -->|None| tempo
saas -->|Yes| enterprise{Enterprise<br/>Support?}
saas -->|No| oss
enterprise -->|Yes| datadog[Datadog APM]
enterprise -->|No| honeycomb[Honeycomb]
tempo --> final[Configure Collector]
elastic --> final
honeycomb --> final
datadog --> final
style start fill:#0f172a,stroke:#020617,color:#fff
style budget fill:#334155,stroke:#1e293b,color:#fff
style oss fill:#1e293b,stroke:#0f172a,color:#fff
style existing fill:#334155,stroke:#1e293b,color:#fff
style saas fill:#334155,stroke:#1e293b,color:#fff
style enterprise fill:#334155,stroke:#1e293b,color:#fff
style final fill:#0f172a,stroke:#020617,color:#fff
style tempo fill:#1b5e20,stroke:#0d3d14,color:#fff
style elastic fill:#bf360c,stroke:#8c2809,color:#fff
style honeycomb fill:#0d47a1,stroke:#082f6a,color:#fff
style datadog fill:#4a148c,stroke:#2e0d57,color:#fff
```
**Reading the diagram:**
- **Budget Constraints? (Yes)**: Leads to open-source options. If you already run Grafana or Elastic, pick the matching backend; otherwise default to Grafana Tempo.
- **Budget Constraints? (No) → Prefer SaaS?**: If you want a managed service, choose between Datadog (enterprise support) and Honeycomb (developer-focused). If not, fall back to open-source.
- **Terminal nodes (Tempo / Elastic / Honeycomb / Datadog)**: Each represents a concrete backend choice, all of which feed into the same final step.
- **Configure Collector**: Regardless of backend, you always finish by configuring the OTel Collector to export to your chosen destination.
---
## 7.3 Recommended Production Architecture
> **OTLP** = OpenTelemetry Protocol | **APM** = Application Performance Monitoring | **HA** = High Availability
```mermaid
flowchart TB
subgraph validators["Validator Nodes"]
v1[rippled<br/>Validator 1]
v2[rippled<br/>Validator 2]
end
subgraph stock["Stock Nodes"]
s1[rippled<br/>Stock 1]
s2[rippled<br/>Stock 2]
end
subgraph collector["OTel Collector Cluster"]
c1[Collector<br/>DC1]
c2[Collector<br/>DC2]
end
subgraph backends["Storage Backends"]
tempo[(Grafana<br/>Tempo)]
elastic[(Elastic<br/>APM)]
archive[(S3/GCS<br/>Archive)]
end
subgraph ui["Visualization"]
grafana[Grafana<br/>Dashboards]
end
v1 -->|OTLP| c1
v2 -->|OTLP| c1
s1 -->|OTLP| c2
s2 -->|OTLP| c2
c1 --> tempo
c1 --> elastic
c2 --> tempo
c2 --> archive
tempo --> grafana
elastic --> grafana
%% Note: simplified single-collector-per-DC topology shown for clarity
style validators fill:#b71c1c,stroke:#7f1d1d,color:#ffffff
style stock fill:#0d47a1,stroke:#082f6a,color:#ffffff
style collector fill:#bf360c,stroke:#8c2809,color:#ffffff
style backends fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style ui fill:#4a148c,stroke:#2e0d57,color:#ffffff
```
**Reading the diagram:**
- **Validator / Stock Nodes**: All rippled nodes emit trace data via OTLP. Validators and stock nodes are grouped separately because they may reside in different network zones.
- **Collector Cluster (DC1, DC2)**: Regional collectors receive OTLP from nodes in their datacenter, apply processing (sampling, enrichment), and fan out to multiple backends.
- **Storage Backends**: Tempo and Elastic provide queryable trace storage; S3/GCS Archive provides long-term cold storage for compliance or post-incident analysis.
- **Grafana Dashboards**: The single visualization layer that queries both Tempo and Elastic, giving operators a unified view of all traces.
- **Data flow direction**: Nodes → Collectors → Storage → Grafana. Each arrow represents a network hop; minimizing collector-to-backend hops reduces latency.
> **Note**: Production deployments should use multiple collector instances behind a load balancer for high availability. The diagram shows a simplified single-collector topology for clarity.
---
## 7.4 Architecture Considerations
### 7.4.1 Collector Placement
| Strategy | Description | Pros | Cons |
| ------------- | -------------------- | ------------------------ | ----------------------- |
| **Sidecar** | Collector per node | Isolation, simple config | Resource overhead |
| **DaemonSet** | Collector per host | Shared resources | Complexity |
| **Gateway** | Central collector(s) | Centralized processing | Single point of failure |
**Recommendation**: Use **Gateway** pattern with regional collectors for rippled networks:
- One collector cluster per datacenter/region
- Tail-based sampling at collector level
- Multiple export destinations for redundancy
### 7.4.2 Sampling Strategy
```mermaid
flowchart LR
subgraph head["Head Sampling (Node)"]
hs[Node-level head sampling<br/>configurable, default: 100%<br/>recommended production: 10%]
end
subgraph tail["Tail Sampling (Collector)"]
ts1[Keep all errors]
ts2[Keep slow >5s]
ts3[Keep 10% rest]
end
head --> tail
ts1 --> final[Final Traces]
ts2 --> final
ts3 --> final
style head fill:#0d47a1,stroke:#082f6a,color:#fff
style tail fill:#1b5e20,stroke:#0d3d14,color:#fff
style hs fill:#0d47a1,stroke:#082f6a,color:#fff
style ts1 fill:#1b5e20,stroke:#0d3d14,color:#fff
style ts2 fill:#1b5e20,stroke:#0d3d14,color:#fff
style ts3 fill:#1b5e20,stroke:#0d3d14,color:#fff
style final fill:#bf360c,stroke:#8c2809,color:#fff
```
**Reading the diagram:**
- **Head Sampling (Node)**: The first filter -- each rippled node decides whether to sample a trace at creation time (default 100%, recommended 10% in production). This controls the volume leaving the node (see the sampler sketch after this list).
- **Tail Sampling (Collector)**: The second filter -- the collector inspects completed traces and applies rules: keep all errors, keep anything slower than 5 seconds, and keep 10% of the remainder.
- **Arrow head → tail**: All head-sampled traces flow to the collector, where tail sampling further reduces volume while preserving the most valuable data.
- **Final Traces**: The output after both sampling stages; this is what gets stored and queried. The two-stage approach balances cost with debuggability.
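On the node side, the configured `sampling_ratio` maps onto the SDK's trace-id ratio sampler. A minimal sketch of the wiring (the factory function name is illustrative); wrapping it in a parent-based sampler keeps a whole trace's decision consistent across spans:
```cpp
#include <memory>

#include <opentelemetry/sdk/trace/samplers/parent.h>
#include <opentelemetry/sdk/trace/samplers/trace_id_ratio.h>

// Builds the head sampler from the configured ratio (1.0 dev, 0.1 prod).
std::unique_ptr<opentelemetry::sdk::trace::Sampler>
makeHeadSampler(double samplingRatio)
{
    // Root spans: sample when the trace id falls below the ratio threshold,
    // so the same trace id yields the same decision on every node.
    auto ratio = std::make_shared<
        opentelemetry::sdk::trace::TraceIdRatioBasedSampler>(samplingRatio);

    // Child spans: defer to the parent's decision, keeping traces intact.
    return std::make_unique<opentelemetry::sdk::trace::ParentBasedSampler>(
        ratio);
}
```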
### 7.4.3 Data Retention
| Environment | Hot Storage | Warm Storage | Cold Archive |
| ----------- | ----------- | ------------ | ------------ |
| Development | 24 hours | N/A | N/A |
| Staging | 7 days | N/A | N/A |
| Production  | 7 days      | 30 days      | Multi-year (per retention policy) |
---
## 7.5 Integration Checklist
- [ ] Choose primary backend (Tempo recommended for cost/features)
- [ ] Deploy collector cluster with high availability
- [ ] Configure tail-based sampling for error/latency traces
- [ ] Set up Grafana dashboards for trace visualization
- [ ] Configure alerts for trace anomalies
- [ ] Establish data retention policies
- [ ] Test trace correlation with logs and metrics
---
## 7.6 Grafana Dashboard Examples
Pre-built dashboards for rippled observability.
### 7.6.1 Consensus Health Dashboard
```json
{
"title": "rippled Consensus Health",
"uid": "rippled-consensus-health",
"tags": ["rippled", "consensus", "tracing"],
"panels": [
{
"title": "Consensus Round Duration",
"type": "timeseries",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=\"consensus.round\"} | avg(duration) by (resource.service.instance.id)"
}
],
"fieldConfig": {
"defaults": {
"unit": "ms",
"thresholds": {
"steps": [
{ "color": "green", "value": null },
{ "color": "yellow", "value": 4000 },
{ "color": "red", "value": 5000 }
]
}
}
},
"gridPos": { "h": 8, "w": 12, "x": 0, "y": 0 }
},
{
"title": "Phase Duration Breakdown",
"type": "barchart",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=~\"consensus.phase.*\"} | avg(duration) by (name)"
}
],
"gridPos": { "h": 8, "w": 12, "x": 12, "y": 0 }
},
{
"title": "Proposers per Round",
"type": "stat",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=\"consensus.round\"} | avg(span.xrpl.consensus.proposers)"
}
],
"gridPos": { "h": 4, "w": 6, "x": 0, "y": 8 }
},
{
"title": "Recent Slow Rounds (>5s)",
"type": "table",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=\"consensus.round\"} | duration > 5s"
}
],
"gridPos": { "h": 8, "w": 24, "x": 0, "y": 12 }
}
]
}
```
### 7.6.2 Node Overview Dashboard
```json
{
"title": "rippled Node Overview",
"uid": "rippled-node-overview",
"panels": [
{
"title": "Active Nodes",
"type": "stat",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\"} | count_over_time() by (resource.service.instance.id) | count()"
}
],
"gridPos": { "h": 4, "w": 4, "x": 0, "y": 0 }
},
{
"title": "Total Transactions (1h)",
"type": "stat",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=\"tx.receive\"} | count()"
}
],
"gridPos": { "h": 4, "w": 4, "x": 4, "y": 0 }
},
{
"title": "Error Rate",
"type": "gauge",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && status.code=error} | rate() / {resource.service.name=\"rippled\"} | rate() * 100"
}
],
"fieldConfig": {
"defaults": {
"unit": "percent",
"max": 10,
"thresholds": {
"steps": [
{ "color": "green", "value": null },
{ "color": "yellow", "value": 1 },
{ "color": "red", "value": 5 }
]
}
}
},
"gridPos": { "h": 4, "w": 4, "x": 8, "y": 0 }
},
{
"title": "Service Map",
"type": "nodeGraph",
"datasource": "Tempo",
"gridPos": { "h": 12, "w": 12, "x": 12, "y": 0 }
}
]
}
```
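> **Note**: Several queries in these dashboards (e.g. `count_over_time()`, `rate()`, and the error-rate division) use aggregate TraceQL metrics syntax, which requires Tempo 2.3+ with TraceQL metrics enabled. Arithmetic across two separate queries may also need a Grafana expression rather than a single TraceQL string, so treat the queries as illustrative and validate them against your Tempo version.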
### 7.6.3 Alert Rules
```yaml
# grafana/provisioning/alerting/rippled-alerts.yaml
apiVersion: 1
groups:
- name: rippled-tracing-alerts
folder: rippled
interval: 1m
rules:
- uid: consensus-slow
title: Consensus Round Slow
condition: A
data:
- refId: A
datasourceUid: tempo
model:
queryType: traceql
query: '{resource.service.name="rippled" && name="consensus.round"} | avg(duration) > 5s'
# Note: Verify TraceQL aggregate queries are supported by your
# Tempo version. Aggregate alerting (e.g., avg(duration)) requires
# Tempo 2.3+ with TraceQL metrics enabled.
for: 5m
annotations:
summary: Consensus rounds taking >5 seconds
description: "Consensus duration: {{ $value }}ms"
labels:
severity: warning
- uid: rpc-error-spike
title: RPC Error Rate Spike
condition: B
data:
- refId: B
datasourceUid: tempo
model:
queryType: traceql
query: '{resource.service.name="rippled" && name=~"rpc.command.*" && status.code=error} | rate() > 0.05'
# Note: Verify TraceQL aggregate queries are supported by your
# Tempo version. Aggregate alerting (e.g., rate()) requires
# Tempo 2.3+ with TraceQL metrics enabled.
for: 2m
annotations:
summary: RPC error rate >5%
labels:
severity: critical
- uid: tx-throughput-drop
title: Transaction Throughput Drop
condition: C
data:
- refId: C
datasourceUid: tempo
model:
queryType: traceql
query: '{resource.service.name="rippled" && name="tx.receive"} | rate() < 10'
for: 10m
annotations:
summary: Transaction throughput below threshold
labels:
severity: warning
```
---
## 7.7 PerfLog and Insight Correlation
> **OTLP** = OpenTelemetry Protocol
This section shows how to correlate OpenTelemetry traces with rippled's existing observability systems (PerfLog and Beast Insight).
### 7.7.1 Correlation Architecture
```mermaid
flowchart TB
subgraph rippled["rippled Node"]
otel[OpenTelemetry<br/>Spans]
perflog[PerfLog<br/>JSON Logs]
insight[Beast Insight<br/>StatsD Metrics]
end
subgraph collectors["Data Collection"]
otelc[OTel Collector]
promtail[Promtail/Fluentd]
statsd[StatsD Exporter]
end
subgraph storage["Storage"]
tempo[(Tempo)]
loki[(Loki)]
prom[(Prometheus)]
end
subgraph grafana["Grafana"]
traces[Trace View]
logs[Log View]
metrics[Metrics View]
corr[Correlation<br/>Panel]
end
otel -->|OTLP| otelc --> tempo
perflog -->|JSON| promtail --> loki
insight -->|StatsD| statsd --> prom
tempo --> traces
loki --> logs
prom --> metrics
traces --> corr
logs --> corr
metrics --> corr
style rippled fill:#0d47a1,stroke:#082f6a,color:#fff
style collectors fill:#bf360c,stroke:#8c2809,color:#fff
style storage fill:#1b5e20,stroke:#0d3d14,color:#fff
style grafana fill:#4a148c,stroke:#2e0d57,color:#fff
style otel fill:#0d47a1,stroke:#082f6a,color:#fff
style perflog fill:#0d47a1,stroke:#082f6a,color:#fff
style insight fill:#0d47a1,stroke:#082f6a,color:#fff
style otelc fill:#bf360c,stroke:#8c2809,color:#fff
style promtail fill:#bf360c,stroke:#8c2809,color:#fff
style statsd fill:#bf360c,stroke:#8c2809,color:#fff
style tempo fill:#1b5e20,stroke:#0d3d14,color:#fff
style loki fill:#1b5e20,stroke:#0d3d14,color:#fff
style prom fill:#1b5e20,stroke:#0d3d14,color:#fff
style traces fill:#4a148c,stroke:#2e0d57,color:#fff
style logs fill:#4a148c,stroke:#2e0d57,color:#fff
style metrics fill:#4a148c,stroke:#2e0d57,color:#fff
style corr fill:#4a148c,stroke:#2e0d57,color:#fff
```
**Reading the diagram:**
- **rippled Node (three sources)**: A single node emits three independent data streams -- OpenTelemetry spans, PerfLog JSON logs, and Beast Insight StatsD metrics.
- **Data Collection layer**: Each stream has its own collector -- OTel Collector for spans, Promtail/Fluentd for logs, and a StatsD exporter for metrics. They operate independently.
- **Storage layer (Tempo, Loki, Prometheus)**: Each data type lands in a purpose-built store optimized for its query patterns (trace search, log grep, metric aggregation).
- **Grafana Correlation Panel**: The key integration point -- Grafana queries all three stores and links them via shared fields (`trace_id`, `xrpl.tx.hash`, `ledger_seq`), enabling a single-pane debugging experience.
### 7.7.2 Correlation Fields
| Source | Field | Link To | Purpose |
| ----------- | --------------------------- | ------------- | -------------------------- |
| **Trace** | `trace_id` | Logs | Find log entries for trace |
| **Trace** | `xrpl.tx.hash` | Logs, Metrics | Find TX-related data |
| **Trace** | `xrpl.consensus.ledger.seq` | Logs | Find ledger-related logs |
| **PerfLog** | `trace_id` (new) | Traces | Jump to trace from log |
| **PerfLog** | `ledger_seq` | Traces | Find consensus trace |
| **Insight** | `exemplar.trace_id` | Traces | Jump from metric spike |
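The PerfLog-to-trace direction in this table is wired up in Grafana's Loki datasource via a derived field that extracts `trace_id` from the JSON log line and links it to Tempo. A provisioning sketch, assuming a `trace_id` JSON key, a Loki instance at `loki:3100`, and a Tempo datasource UID of `tempo`:
```yaml
# grafana/provisioning/datasources/loki.yaml (sketch)
apiVersion: 1
datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100
    jsonData:
      derivedFields:
        - name: TraceID
          matcherRegex: '"trace_id"\s*:\s*"(\w+)"'
          url: '$${__value.raw}'      # $$ escapes interpolation in provisioning files
          datasourceUid: tempo
```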
### 7.7.3 Example: Debugging a Slow Transaction
**Step 1: Find the trace**
```
# In Grafana Explore with Tempo
{resource.service.name="rippled" && span.xrpl.tx.hash="ABC123..."}
```
**Step 2: Get the trace_id from the trace view**
```
Trace ID: 4bf92f3577b34da6a3ce929d0e0e4736
```
**Step 3: Find related PerfLog entries**
```
# In Grafana Explore with Loki
{job="rippled"} |= "4bf92f3577b34da6a3ce929d0e0e4736"
```
**Step 4: Check Insight metrics for the time window**
```
# In Grafana with Prometheus. PromQL's @ modifier pins the query to a
# Unix timestamp; use the start time of the trace found in Step 2.
rate(rippled_tx_applied_total[1m]) @ <unix_timestamp_from_trace>
```
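For the Insight-to-trace direction (`exemplar.trace_id` in §7.7.2), Grafana's Prometheus datasource can map an exemplar label to Tempo. A provisioning sketch, assuming Prometheus runs with `--enable-feature=exemplar-storage` and the Tempo datasource UID is `tempo`:
```yaml
# grafana/provisioning/datasources/prometheus.yaml (sketch)
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    jsonData:
      exemplarTraceIdDestinations:
        - name: trace_id        # exemplar label carrying the trace ID
          datasourceUid: tempo
```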
### 7.7.4 Unified Dashboard Example
```json
{
"title": "rippled Unified Observability",
"uid": "rippled-unified",
"panels": [
{
"title": "Transaction Latency (Traces)",
"type": "timeseries",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=\"tx.receive\"} | histogram_over_time(duration)"
}
],
"gridPos": { "h": 6, "w": 8, "x": 0, "y": 0 }
},
{
"title": "Transaction Rate (Metrics)",
"type": "timeseries",
"datasource": "Prometheus",
"targets": [
{
"expr": "rate(rippled_tx_received_total[5m])",
"legendFormat": "{{ instance }}"
}
],
"fieldConfig": {
"defaults": {
"links": [
{
"title": "View traces",
"url": "/explore?left={\"datasource\":\"Tempo\",\"query\":\"{resource.service.name=\\\"rippled\\\" && name=\\\"tx.receive\\\"}\"}"
}
]
}
},
"gridPos": { "h": 6, "w": 8, "x": 8, "y": 0 }
},
{
"title": "Recent Logs",
"type": "logs",
"datasource": "Loki",
"targets": [
{
"expr": "{job=\"rippled\"} | json"
}
],
"gridPos": { "h": 6, "w": 8, "x": 16, "y": 0 }
},
{
"title": "Trace Search",
"type": "table",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\"}"
}
],
"fieldConfig": {
"overrides": [
{
"matcher": { "id": "byName", "options": "traceID" },
"properties": [
{
"id": "links",
"value": [
{
"title": "View trace",
"url": "/explore?left={\"datasource\":\"Tempo\",\"query\":\"${__value.raw}\"}"
},
{
"title": "View logs",
"url": "/explore?left={\"datasource\":\"Loki\",\"query\":\"{job=\\\"rippled\\\"} |= \\\"${__value.raw}\\\"\"}"
}
]
}
]
}
]
},
"gridPos": { "h": 12, "w": 24, "x": 0, "y": 6 }
}
]
}
```
---
_Previous: [Implementation Phases](./06-implementation-phases.md)_ | _Next: [Appendix](./08-appendix.md)_ | _Back to: [Overview](./OpenTelemetryPlan.md)_


@@ -0,0 +1,200 @@
# Appendix
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Related**: [Observability Backends](./07-observability-backends.md)
---
## 8.1 Glossary
> **OTLP** = OpenTelemetry Protocol | **TxQ** = Transaction Queue
| Term | Definition |
| --------------------- | ---------------------------------------------------------- |
| **Span** | A unit of work with start/end time, name, and attributes |
| **Trace** | A collection of spans representing a complete request flow |
| **Trace ID** | 128-bit unique identifier for a trace |
| **Span ID** | 64-bit unique identifier for a span within a trace |
| **Context** | Carrier for trace/span IDs across boundaries |
| **Propagator** | Component that injects/extracts context |
| **Sampler** | Decides which traces to record |
| **Exporter** | Sends spans to backend |
| **Collector** | Receives, processes, and forwards telemetry |
| **OTLP** | OpenTelemetry Protocol (wire format) |
| **W3C Trace Context** | Standard HTTP headers for trace propagation |
| **Baggage** | Key-value pairs propagated across service boundaries |
| **Resource** | Entity producing telemetry (service, host, etc.) |
| **Instrumentation** | Code that creates telemetry data |
### rippled-Specific Terms
| Term | Definition |
| ----------------- | ------------------------------------------------------------- |
| **Overlay** | P2P network layer managing peer connections |
| **Consensus** | XRP Ledger consensus algorithm (RCL) |
| **Proposal** | Validator's suggested transaction set for a ledger |
| **Validation** | Validator's signature on a closed ledger |
| **HashRouter** | Component for transaction deduplication |
| **JobQueue** | Thread pool for asynchronous task execution |
| **PerfLog** | Existing performance logging system in rippled |
| **Beast Insight** | Existing metrics framework in rippled |
| **PathFinding** | Payment path computation engine for cross-currency payments |
| **TxQ** | Transaction queue managing fee-based prioritization |
| **LoadManager** | Dynamic fee escalation based on network load |
| **SHAMap** | Merkle radix trie keyed by 256-bit hashes (SHA-512Half) for ledger state |
---
## 8.2 Span Hierarchy Visualization
> **TxQ** = Transaction Queue
```mermaid
flowchart TB
subgraph trace["Trace: Transaction Lifecycle"]
rpc["rpc.request<br/>(entry point)"]
validate["tx.validate"]
relay["tx.relay<br/>(parent span)"]
subgraph peers["Peer Spans"]
p1["peer.send<br/>Peer A"]
p2["peer.send<br/>Peer B"]
p3["peer.send<br/>Peer C"]
end
subgraph pathfinding["PathFinding Spans"]
pathfind["pathfind.request"]
pathcomp["pathfind.compute"]
end
consensus["consensus.round"]
apply["tx.apply"]
subgraph txqueue["TxQ Spans"]
txq["txq.enqueue"]
txqApply["txq.apply"]
end
feeCalc["fee.escalate"]
end
subgraph validators["Validator Spans"]
valFetch["validator.list.fetch"]
valManifest["validator.manifest"]
end
rpc --> validate
rpc --> pathfind
pathfind --> pathcomp
validate --> relay
relay --> p1
relay --> p2
relay --> p3
p1 -.->|"context propagation"| consensus
consensus --> apply
apply --> txq
txq --> txqApply
txq --> feeCalc
style trace fill:#0f172a,stroke:#020617,color:#fff
style peers fill:#1e3a8a,stroke:#172554,color:#fff
style pathfinding fill:#134e4a,stroke:#0f766e,color:#fff
style txqueue fill:#064e3b,stroke:#047857,color:#fff
style validators fill:#4c1d95,stroke:#6d28d9,color:#fff
style rpc fill:#1d4ed8,stroke:#1e40af,color:#fff
style validate fill:#047857,stroke:#064e3b,color:#fff
style relay fill:#047857,stroke:#064e3b,color:#fff
style p1 fill:#0e7490,stroke:#155e75,color:#fff
style p2 fill:#0e7490,stroke:#155e75,color:#fff
style p3 fill:#0e7490,stroke:#155e75,color:#fff
style consensus fill:#fef3c7,stroke:#fde68a,color:#1e293b
style apply fill:#047857,stroke:#064e3b,color:#fff
style pathfind fill:#0e7490,stroke:#155e75,color:#fff
style pathcomp fill:#0e7490,stroke:#155e75,color:#fff
style txq fill:#047857,stroke:#064e3b,color:#fff
style txqApply fill:#047857,stroke:#064e3b,color:#fff
style feeCalc fill:#047857,stroke:#064e3b,color:#fff
style valFetch fill:#6d28d9,stroke:#4c1d95,color:#fff
style valManifest fill:#6d28d9,stroke:#4c1d95,color:#fff
```
**Reading the diagram:**
- **rpc.request (blue, top)**: The entry point — every traced transaction starts as an RPC call; this root span is the parent of all downstream work.
- **tx.validate and pathfind.request (green/teal, first fork)**: The RPC request fans out into transaction validation and, for cross-currency payments, a PathFinding branch (`pathfind.request` -> `pathfind.compute`).
- **tx.relay -> Peer Spans (teal, middle)**: After validation, the transaction is relayed to peers A, B, and C in parallel; each `peer.send` is a sibling child span showing fan-out across the network.
- **context propagation (dashed arrow)**: The dotted line from `peer.send Peer A` to `consensus.round` represents the trace context crossing a node boundary — the receiving validator picks up the same `trace_id` and continues the trace.
- **consensus.round -> tx.apply -> TxQ Spans (green, lower)**: Once consensus accepts the transaction, it is applied to the ledger; the TxQ spans (`txq.enqueue`, `txq.apply`, `fee.escalate`) capture queue depth and fee escalation behavior.
- **Validator Spans (purple, detached)**: `validator.list.fetch` and `validator.manifest` are independent workflows for UNL management — they run on their own traces and are linked to consensus via Span Links, not parent-child relationships.
---
## 8.3 References
> **OTLP** = OpenTelemetry Protocol
### OpenTelemetry Resources
1. [OpenTelemetry C++ SDK](https://github.com/open-telemetry/opentelemetry-cpp)
2. [OpenTelemetry Specification](https://opentelemetry.io/docs/specs/otel/)
3. [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/)
4. [OTLP Protocol Specification](https://opentelemetry.io/docs/specs/otlp/)
### Standards
5. [W3C Trace Context](https://www.w3.org/TR/trace-context/)
6. [W3C Baggage](https://www.w3.org/TR/baggage/)
7. [Protocol Buffers](https://protobuf.dev/)
### rippled Resources
8. [rippled Source Code](https://github.com/XRPLF/rippled)
9. [XRP Ledger Documentation](https://xrpl.org/docs/)
10. [rippled Overlay README](https://github.com/XRPLF/rippled/blob/develop/src/xrpld/overlay/README.md)
11. [rippled RPC README](https://github.com/XRPLF/rippled/blob/develop/src/xrpld/rpc/README.md)
12. [rippled Consensus README](https://github.com/XRPLF/rippled/blob/develop/src/xrpld/app/consensus/README.md)
---
## 8.4 Version History
| Version | Date | Author | Changes |
| ------- | ---------- | ------ | -------------------------------------------------------------- |
| 1.0 | 2026-02-12 | - | Initial implementation plan |
| 1.1 | 2026-02-13 | - | Refactored into modular documents |
| 1.2 | 2026-03-24 | - | Review fixes: accuracy corrections, cross-document consistency |
---
## 8.5 Document Index
### Plan Documents
| Document | Description |
| ---------------------------------------------------------------- | -------------------------------------------- |
| [OpenTelemetryPlan.md](./OpenTelemetryPlan.md) | Master overview and executive summary |
| [00-tracing-fundamentals.md](./00-tracing-fundamentals.md) | Distributed tracing concepts and OTel primer |
| [01-architecture-analysis.md](./01-architecture-analysis.md) | rippled architecture and trace points |
| [02-design-decisions.md](./02-design-decisions.md) | SDK selection, exporters, span conventions |
| [03-implementation-strategy.md](./03-implementation-strategy.md) | Directory structure, performance analysis |
| [04-code-samples.md](./04-code-samples.md) | C++ code examples for all components |
| [05-configuration-reference.md](./05-configuration-reference.md) | rippled config, CMake, Collector configs |
| [06-implementation-phases.md](./06-implementation-phases.md) | Timeline, tasks, risks, success metrics |
| [07-observability-backends.md](./07-observability-backends.md) | Backend selection and architecture |
| [08-appendix.md](./08-appendix.md) | Glossary, references, version history |
| [presentation.md](./presentation.md) | Slide deck for OTel plan overview |
### Task Lists
| Document | Description |
| ------------------------------------------ | --------------------------------------------------- |
| [POC_taskList.md](./POC_taskList.md) | Proof-of-concept telemetry integration |
| [Phase2_taskList.md](./Phase2_taskList.md) | RPC layer trace instrumentation |
| [Phase3_taskList.md](./Phase3_taskList.md) | Peer overlay & consensus tracing |
| [Phase4_taskList.md](./Phase4_taskList.md) | Transaction lifecycle tracing |
| [Phase5_taskList.md](./Phase5_taskList.md) | Ledger processing & advanced tracing |
---
_Previous: [Observability Backends](./07-observability-backends.md)_ | _Back to: [Overview](./OpenTelemetryPlan.md)_


@@ -0,0 +1,230 @@
# [OpenTelemetry](00-tracing-fundamentals.md) Distributed Tracing Implementation Plan for rippled (xrpld)
## Executive Summary
> **OTLP** = OpenTelemetry Protocol
This document provides a comprehensive implementation plan for integrating OpenTelemetry distributed tracing into the rippled XRP Ledger node software. The plan addresses the unique challenges of a decentralized peer-to-peer system where trace context must propagate across network boundaries between independent nodes.
### Key Benefits
- **End-to-end transaction visibility**: Track transactions from submission through consensus to ledger inclusion
- **Consensus round analysis**: Understand timing and behavior of consensus phases across validators
- **RPC performance insights**: Identify slow handlers and optimize response times
- **Network topology understanding**: Visualize message propagation patterns between peers
- **Incident debugging**: Correlate events across distributed nodes during issues
### Estimated Performance Overhead
| Metric | Overhead | Notes |
| ------------- | ---------- | ----------------------------------- |
| CPU | 1-3% | Span creation and attribute setting |
| Memory | 2-5 MB | Batch buffer for pending spans |
| Network | 10-50 KB/s | Compressed OTLP export to collector |
| Latency (p99) | <2% | With proper sampling configuration |
---
## Document Structure
This implementation plan is organized into modular documents for easier navigation:
<div align="center">
```mermaid
flowchart TB
overview["📋 OpenTelemetryPlan.md<br/>(This Document)"]
subgraph fundamentals["Fundamentals"]
fund["00-tracing-fundamentals.md"]
end
subgraph analysis["Analysis & Design"]
arch["01-architecture-analysis.md"]
design["02-design-decisions.md"]
end
subgraph impl["Implementation"]
strategy["03-implementation-strategy.md"]
code["04-code-samples.md"]
config["05-configuration-reference.md"]
end
subgraph deploy["Deployment & Planning"]
phases["06-implementation-phases.md"]
backends["07-observability-backends.md"]
appendix["08-appendix.md"]
poc["POC_taskList.md"]
end
overview --> fundamentals
overview --> analysis
overview --> impl
overview --> deploy
fund --> arch
arch --> design
design --> strategy
strategy --> code
code --> config
config --> phases
phases --> backends
backends --> appendix
phases --> poc
style overview fill:#1b5e20,stroke:#0d3d14,color:#fff,stroke-width:2px
style fundamentals fill:#00695c,stroke:#004d40,color:#fff
style fund fill:#00695c,stroke:#004d40,color:#fff
style analysis fill:#0d47a1,stroke:#082f6a,color:#fff
style impl fill:#bf360c,stroke:#8c2809,color:#fff
style deploy fill:#4a148c,stroke:#2e0d57,color:#fff
style arch fill:#0d47a1,stroke:#082f6a,color:#fff
style design fill:#0d47a1,stroke:#082f6a,color:#fff
style strategy fill:#bf360c,stroke:#8c2809,color:#fff
style code fill:#bf360c,stroke:#8c2809,color:#fff
style config fill:#bf360c,stroke:#8c2809,color:#fff
style phases fill:#4a148c,stroke:#2e0d57,color:#fff
style backends fill:#4a148c,stroke:#2e0d57,color:#fff
style appendix fill:#4a148c,stroke:#2e0d57,color:#fff
style poc fill:#4a148c,stroke:#2e0d57,color:#fff
```
</div>
---
## Table of Contents
| Section | Document | Description |
| ------- | ---------------------------------------------------------- | ---------------------------------------------------------------------- |
| **0** | [Tracing Fundamentals](./00-tracing-fundamentals.md) | Distributed tracing concepts, span relationships, context propagation |
| **1** | [Architecture Analysis](./01-architecture-analysis.md) | rippled component analysis, trace points, instrumentation priorities |
| **2** | [Design Decisions](./02-design-decisions.md) | SDK selection, exporters, span naming, attributes, context propagation |
| **3** | [Implementation Strategy](./03-implementation-strategy.md) | Directory structure, key principles, performance optimization |
| **4** | [Code Samples](./04-code-samples.md) | C++ implementation examples for core infrastructure and key modules |
| **5** | [Configuration Reference](./05-configuration-reference.md) | rippled config, CMake integration, Collector configurations |
| **6** | [Implementation Phases](./06-implementation-phases.md) | 5-phase timeline, tasks, risks, success metrics |
| **7** | [Observability Backends](./07-observability-backends.md) | Backend selection guide and production architecture |
| **8** | [Appendix](./08-appendix.md) | Glossary, references, version history |
| **POC** | [POC Task List](./POC_taskList.md) | Proof of concept tasks for RPC tracing end-to-end demo |
---
## 0. Tracing Fundamentals
This document introduces distributed tracing concepts for readers unfamiliar with the domain. It covers what traces and spans are, how parent-child and follows-from relationships model causality, how context propagates across service boundaries, and how sampling controls data volume. It also maps these concepts to rippled-specific scenarios like transaction relay and consensus.
➡️ **[Read Tracing Fundamentals](./00-tracing-fundamentals.md)**
---
## 1. Architecture Analysis
> **WS** = WebSocket | **TxQ** = Transaction Queue
The rippled node consists of several key components that require instrumentation for comprehensive distributed tracing. The main areas include the RPC server (HTTP/WebSocket), Overlay P2P network, Consensus mechanism (RCLConsensus), JobQueue for async task execution, PathFinding, Transaction Queue (TxQ), fee escalation (LoadManager), ledger acquisition, validator management, and existing observability infrastructure (PerfLog, Insight/StatsD, Journal logging).
Key trace points span across transaction submission via RPC, peer-to-peer message propagation, consensus round execution, ledger building, path computation, transaction queue behavior, fee escalation, and validator health. The implementation prioritizes high-value, low-risk components first: RPC handlers provide immediate value with minimal risk, while consensus tracing requires careful implementation to avoid timing impacts.
➡️ **[Read full Architecture Analysis](./01-architecture-analysis.md)**
---
## 2. Design Decisions
> **OTLP** = OpenTelemetry Protocol | **CNCF** = Cloud Native Computing Foundation
The OpenTelemetry C++ SDK is selected for its CNCF backing, active development, and native performance characteristics. Traces are exported via OTLP/gRPC (primary) or OTLP/HTTP (fallback) to an OpenTelemetry Collector, which provides flexible routing and sampling.
Span naming follows a hierarchical `<component>.<operation>` convention (e.g., `rpc.submit`, `tx.relay`, `consensus.round`). Context propagation uses W3C Trace Context headers for HTTP and embedded Protocol Buffer fields for P2P messages. The implementation coexists with existing PerfLog and Insight observability systems through correlation IDs.
**Data Collection & Privacy**: Telemetry collects only operational metadata (timing, counts, hashes) — never sensitive content (private keys, balances, amounts, raw payloads). Privacy protection includes account hashing, configurable redaction, sampling, and collector-level filtering. Node operators retain full control over telemetry configuration.
➡️ **[Read full Design Decisions](./02-design-decisions.md)**
---
## 3. Implementation Strategy
The telemetry code is organized under `include/xrpl/telemetry/` for headers and `src/libxrpl/telemetry/` for implementation. Key principles include RAII-based span management via `SpanGuard`, conditional compilation with `XRPL_ENABLE_TELEMETRY`, and minimal runtime overhead through batch processing and efficient sampling.
Performance optimization strategies include probabilistic head sampling (10% default), tail-based sampling at the collector for errors and slow traces, batch export to reduce network overhead, and conditional instrumentation that compiles to no-ops when disabled.
➡️ **[Read full Implementation Strategy](./03-implementation-strategy.md)**
---
## 4. Code Samples
C++ implementation examples are provided for the core telemetry infrastructure and key modules:
- `Telemetry.h` - Core interface for tracer access and span creation
- `SpanGuard.h` - RAII wrapper for automatic span lifecycle management
- `TracingInstrumentation.h` - Macros for conditional instrumentation
- Protocol Buffer extensions for trace context propagation
- Module-specific instrumentation (RPC, Consensus, P2P, JobQueue)
- Remaining modules (PathFinding, TxQ, Validator, etc.) follow the same patterns
➡️ **[View all Code Samples](./04-code-samples.md)**
---
## 5. Configuration Reference
> **OTLP** = OpenTelemetry Protocol | **APM** = Application Performance Monitoring
Configuration is handled through the `[telemetry]` section in `xrpld.cfg` with options for enabling/disabling, exporter selection, endpoint configuration, sampling ratios, and component-level filtering. CMake integration includes a `XRPL_ENABLE_TELEMETRY` option for compile-time control.
OpenTelemetry Collector configurations are provided for development and production (with tail-based sampling, Tempo, and Elastic APM). Docker Compose examples enable quick local development environment setup.
➡️ **[View full Configuration Reference](./05-configuration-reference.md)**
---
## 6. Implementation Phases
The implementation spans 9 weeks across 5 phases:
| Phase | Duration | Focus | Key Deliverables |
| ----- | --------- | ------------------- | --------------------------------------------------- |
| 1 | Weeks 1-2 | Core Infrastructure | SDK integration, Telemetry interface, Configuration |
| 2 | Weeks 3-4 | RPC Tracing | HTTP context extraction, Handler instrumentation |
| 3 | Weeks 5-6 | Transaction Tracing | Protocol Buffer context, Relay propagation |
| 4 | Weeks 7-8 | Consensus Tracing | Round spans, Proposal/validation tracing |
| 5 | Week 9 | Documentation | Runbook, Dashboards, Training |
**Total Effort**: 47 person-days (2 developers working in parallel)
➡️ **[View full Implementation Phases](./06-implementation-phases.md)**
---
## 7. Observability Backends
> **APM** = Application Performance Monitoring | **GCS** = Google Cloud Storage
Grafana Tempo is recommended for all environments due to its cost-effectiveness and Grafana integration, while Elastic APM is ideal for organizations with existing Elastic infrastructure.
The recommended production architecture uses a gateway collector pattern with regional collectors performing tail-based sampling, routing traces to multiple backends (Tempo for primary storage, Elastic for log correlation, S3/GCS for long-term archive).
➡️ **[View Observability Backend Recommendations](./07-observability-backends.md)**
---
## 8. Appendix
The appendix contains a glossary of OpenTelemetry and rippled-specific terms, references to external documentation and specifications, version history for this implementation plan, and a complete document index.
➡️ **[View Appendix](./08-appendix.md)**
---
## POC Task List
A step-by-step task list for building a minimal end-to-end proof of concept that demonstrates distributed tracing in rippled. The POC scope is limited to RPC tracing — showing request traces flowing from rippled through an OpenTelemetry Collector into Tempo, viewable in Grafana.
➡️ **[View POC Task List](./POC_taskList.md)**
---
_This document provides a comprehensive implementation plan for integrating OpenTelemetry distributed tracing into the rippled XRP Ledger node software. For detailed information on any section, follow the links to the corresponding sub-documents._


@@ -0,0 +1,620 @@
# OpenTelemetry POC Task List
> **Goal**: Build a minimal end-to-end proof of concept that demonstrates distributed tracing in rippled. A successful POC will show RPC request traces flowing from rippled through an OTel Collector into Tempo, viewable in Grafana.
>
> **Scope**: RPC tracing only (highest value, lowest risk per the [CRAWL phase](./06-implementation-phases.md#6102-quick-wins-immediate-value) in the implementation phases). No cross-node P2P context propagation or consensus tracing in the POC.
### Related Plan Documents
| Document | Relevance to POC |
| ---------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [00-tracing-fundamentals.md](./00-tracing-fundamentals.md) | Core concepts: traces, spans, context propagation, sampling |
| [01-architecture-analysis.md](./01-architecture-analysis.md) | RPC request flow (§1.5), key trace points (§1.6), instrumentation priority (§1.7) |
| [02-design-decisions.md](./02-design-decisions.md) | SDK selection (§2.1), exporter config (§2.2), span naming (§2.3), attribute schema (§2.4), coexistence with PerfLog/Insight (§2.6) |
| [03-implementation-strategy.md](./03-implementation-strategy.md) | Directory structure (§3.1), key principles (§3.2), performance overhead (§3.3-3.6), conditional compilation (§3.7.3), code intrusiveness (§3.9) |
| [04-code-samples.md](./04-code-samples.md) | Telemetry interface (§4.1), SpanGuard (§4.2), macros (§4.3), RPC instrumentation (§4.5.3) |
| [05-configuration-reference.md](./05-configuration-reference.md) | rippled config (§5.1), config parser (§5.2), Application integration (§5.3), CMake (§5.4), Collector config (§5.5), Docker Compose (§5.6), Grafana (§5.8) |
| [06-implementation-phases.md](./06-implementation-phases.md) | Phase 1 core tasks (§6.2), Phase 2 RPC tasks (§6.3), quick wins (§6.10), definition of done (§6.11) |
| [07-observability-backends.md](./07-observability-backends.md) | Tempo dev setup (§7.1), Grafana dashboards (§7.6), alert rules (§7.6.3) |
---
## Task 0: Docker Observability Stack Setup
> **OTLP** = OpenTelemetry Protocol
**Objective**: Stand up the backend infrastructure to receive, store, and display traces.
**What to do**:
- Create `docker/telemetry/docker-compose.yml` in the repo with three services (a minimal compose sketch follows the provisioning files below):
1. **OpenTelemetry Collector** (`otel/opentelemetry-collector-contrib:0.92.0`)
- Expose ports `4317` (OTLP gRPC) and `4318` (OTLP HTTP)
- Expose port `13133` (health check)
- Mount a config file `docker/telemetry/otel-collector-config.yaml`
2. **Tempo** (`grafana/tempo:2.6.1`)
- Expose port `3200` (HTTP API) and `4317` (OTLP gRPC, internal)
3. **Grafana** (`grafana/grafana:latest`) — optional but useful
- Expose port `3000`
- Enable anonymous admin access for local dev (`GF_AUTH_ANONYMOUS_ENABLED=true`, `GF_AUTH_ANONYMOUS_ORG_ROLE=Admin`)
- Provision Tempo as a data source via `docker/telemetry/grafana/provisioning/datasources/tempo.yaml`
- Create `docker/telemetry/otel-collector-config.yaml`:
```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
processors:
  batch:
    timeout: 1s
    send_batch_size: 100
exporters:
  # `logging` is deprecated in recent collector releases; use `debug`
  # (see POC Lessons Learned at the end of this document)
  debug:
    verbosity: detailed
  otlp/tempo:
    endpoint: tempo:4317
    tls:
      insecure: true
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug, otlp/tempo]
```
- Create Grafana Tempo datasource provisioning file at `docker/telemetry/grafana/provisioning/datasources/tempo.yaml`:
```yaml
apiVersion: 1
datasources:
- name: Tempo
type: tempo
access: proxy
url: http://tempo:3200
```
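A minimal compose sketch tying the three services together; image tags come from the list above, while the local `tempo.yaml` file is an assumption (Tempo needs some config file mounted, and its contents are out of scope here):
```yaml
# docker/telemetry/docker-compose.yml (sketch)
services:
  otel-collector:
    image: otel/opentelemetry-collector-contrib:0.92.0
    command: ["--config=/etc/otel-collector-config.yaml"]
    volumes:
      - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
    ports: ["4317:4317", "4318:4318", "13133:13133"]
  tempo:
    image: grafana/tempo:2.6.1
    command: ["-config.file=/etc/tempo.yaml"]
    volumes:
      - ./tempo.yaml:/etc/tempo.yaml   # assumed local Tempo config
    ports: ["3200:3200"]
  grafana:
    image: grafana/grafana:latest
    environment:
      - GF_AUTH_ANONYMOUS_ENABLED=true
      - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
    volumes:
      - ./grafana/provisioning:/etc/grafana/provisioning
    ports: ["3000:3000"]
```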
**Verification**: Run `docker compose -f docker/telemetry/docker-compose.yml up -d`, then:
- `curl http://localhost:13133` returns healthy (Collector)
- `http://localhost:3000` opens Grafana (Tempo datasource available, no traces yet)
**Reference**:
- [05-configuration-reference.md §5.5](./05-configuration-reference.md) — Collector config (dev YAML with Tempo exporter)
- [05-configuration-reference.md §5.6](./05-configuration-reference.md) — Docker Compose development environment
- [07-observability-backends.md §7.1](./07-observability-backends.md) — Tempo quick start and backend selection
- [05-configuration-reference.md §5.8](./05-configuration-reference.md) — Grafana datasource provisioning and dashboards
---
## Task 1: Add OpenTelemetry C++ SDK Dependency
**Objective**: Make `opentelemetry-cpp` available to the build system.
**What to do**:
- Edit `conanfile.py` to add `opentelemetry-cpp` as an **optional** dependency. The gRPC otel plugin flag (`"grpc/*:otel_plugin": False`) in the existing conanfile may need to remain false — we pull the OTel SDK separately.
- Add a Conan option: `with_telemetry = [True, False]` defaulting to `False`
- When `with_telemetry` is `True`, add `opentelemetry-cpp` to `self.requires()`
- Required OTel Conan components: `opentelemetry-cpp` (which bundles api, sdk, and exporters). If the package isn't in Conan Center, consider using `FetchContent` in CMake or building from source as a fallback.
- Edit `CMakeLists.txt`:
- Add option: `option(XRPL_ENABLE_TELEMETRY "Enable OpenTelemetry tracing" OFF)`
- When ON, `find_package(opentelemetry-cpp CONFIG REQUIRED)` and add compile definition `XRPL_ENABLE_TELEMETRY`
- When OFF, do nothing (zero build impact)
- Verify the build succeeds with `-DXRPL_ENABLE_TELEMETRY=OFF` (no regressions) and with `-DXRPL_ENABLE_TELEMETRY=ON` (SDK links successfully).
**Key files**:
- `conanfile.py`
- `CMakeLists.txt`
**Reference**:
- [05-configuration-reference.md §5.4](./05-configuration-reference.md) — CMake integration, `FindOpenTelemetry.cmake`, `XRPL_ENABLE_TELEMETRY` option
- [03-implementation-strategy.md §3.2](./03-implementation-strategy.md) — Key principle: zero-cost when disabled via compile-time flags
- [02-design-decisions.md §2.1](./02-design-decisions.md) — SDK selection rationale and required OTel components
---
## Task 2: Create Core Telemetry Interface and NullTelemetry
**Objective**: Define the `Telemetry` abstract interface and a no-op implementation so the rest of the codebase can reference telemetry without hard-depending on the OTel SDK.
**What to do**:
- Create `include/xrpl/telemetry/Telemetry.h`:
- Define `namespace xrpl::telemetry`
- Define `struct Telemetry::Setup` holding: `enabled`, `exporterEndpoint`, `samplingRatio`, `serviceName`, `serviceVersion`, `serviceInstanceId`, `traceRpc`, `traceTransactions`, `traceConsensus`, `tracePeer`
- Define abstract `class Telemetry` with:
- `virtual void start() = 0;`
- `virtual void stop() = 0;`
- `virtual bool isEnabled() const = 0;`
- `virtual nostd::shared_ptr<Tracer> getTracer(string_view name = "rippled") = 0;`
- `virtual nostd::shared_ptr<Span> startSpan(string_view name, SpanKind kind = kInternal) = 0;`
- `virtual nostd::shared_ptr<Span> startSpan(string_view name, Context const& parentContext, SpanKind kind = kInternal) = 0;`
- `virtual bool shouldTraceRpc() const = 0;`
- `virtual bool shouldTraceTransactions() const = 0;`
- `virtual bool shouldTraceConsensus() const = 0;`
- Factory: `std::unique_ptr<Telemetry> make_Telemetry(Setup const&, beast::Journal);`
- Config parser: `Telemetry::Setup setup_Telemetry(Section const&, std::string const& nodePublicKey, std::string const& version);`
- Create `include/xrpl/telemetry/SpanGuard.h`:
- RAII guard that takes an `nostd::shared_ptr<Span>`, creates a `Scope`, and calls `span->End()` in destructor.
- Convenience: `setAttribute()`, `setOk()`, `setStatus()`, `addEvent()`, `recordException()`, `context()`
- See [04-code-samples.md](./04-code-samples.md) §4.2 for the full implementation.
- Create `src/libxrpl/telemetry/NullTelemetry.cpp`:
- Implements `Telemetry` with all no-ops.
- `isEnabled()` returns `false`, `startSpan()` returns a noop span.
- This is used when `XRPL_ENABLE_TELEMETRY` is OFF or `enabled=0` in config.
- Guard all OTel SDK headers behind `#ifdef XRPL_ENABLE_TELEMETRY`. The `NullTelemetry` implementation should compile without the OTel SDK present.
**Key new files**:
- `include/xrpl/telemetry/Telemetry.h`
- `include/xrpl/telemetry/SpanGuard.h`
- `src/libxrpl/telemetry/NullTelemetry.cpp`
**Reference**:
- [04-code-samples.md §4.1](./04-code-samples.md) — Full `Telemetry` interface with `Setup` struct, lifecycle, tracer access, span creation, and component filtering methods
- [04-code-samples.md §4.2](./04-code-samples.md) — Full `SpanGuard` RAII implementation and `NullSpanGuard` no-op class
- [03-implementation-strategy.md §3.1](./03-implementation-strategy.md) — Directory structure: `include/xrpl/telemetry/` for headers, `src/libxrpl/telemetry/` for implementation
- [03-implementation-strategy.md §3.7.3](./03-implementation-strategy.md) — Conditional instrumentation and zero-cost compile-time disabled pattern
---
## Task 3: Implement OTel-Backed Telemetry
> **OTLP** = OpenTelemetry Protocol
**Objective**: Implement the real `Telemetry` class that initializes the OTel SDK, configures the OTLP exporter and batch processor, and creates tracers/spans.
**What to do**:
- Create `src/libxrpl/telemetry/Telemetry.cpp` (compiled only when `XRPL_ENABLE_TELEMETRY=ON`):
- `class TelemetryImpl : public Telemetry` that:
- In `start()`: creates a `TracerProvider` with:
- Resource attributes: `service.name`, `service.version`, `service.instance.id`
- An `OtlpHttpExporter` pointed at `setup.exporterEndpoint` (default `localhost:4318`)
- A `BatchSpanProcessor` with configurable batch size and delay
- A `TraceIdRatioBasedSampler` using `setup.samplingRatio`
- Sets the global `TracerProvider`
- In `stop()`: calls `ForceFlush()` then shuts down the provider
- In `startSpan()`: delegates to `getTracer()->StartSpan(name, ...)`
- `shouldTraceRpc()` etc. read from `Setup` fields
- Create `src/libxrpl/telemetry/TelemetryConfig.cpp`:
- `setup_Telemetry()` parses the `[telemetry]` config section from `xrpld.cfg`
- Maps config keys: `enabled`, `exporter`, `endpoint`, `sampling_ratio`, `trace_rpc`, `trace_transactions`, `trace_consensus`, `trace_peer`
- Wire `make_Telemetry()` factory:
- If `setup.enabled` is true AND `XRPL_ENABLE_TELEMETRY` is defined: return `TelemetryImpl`
- Otherwise: return `NullTelemetry`
- Add telemetry source files to CMake. When `XRPL_ENABLE_TELEMETRY=ON`, compile `Telemetry.cpp` and `TelemetryConfig.cpp` and link against the Conan umbrella target `opentelemetry-cpp::opentelemetry-cpp` (per the POC Lessons Learned, the per-component targets such as `opentelemetry-cpp::api` do not exist in the Conan package). When OFF, compile only `NullTelemetry.cpp`.
**Key new files**:
- `src/libxrpl/telemetry/Telemetry.cpp`
- `src/libxrpl/telemetry/TelemetryConfig.cpp`
**Key modified files**:
- `CMakeLists.txt` (add telemetry library target)
**Reference**:
- [04-code-samples.md §4.1](./04-code-samples.md) — `Telemetry` interface that `TelemetryImpl` must implement
- [05-configuration-reference.md §5.2](./05-configuration-reference.md) — `setup_Telemetry()` config parser implementation
- [02-design-decisions.md §2.2](./02-design-decisions.md) — OTLP/gRPC exporter config (endpoint, TLS options)
- [02-design-decisions.md §2.4.1](./02-design-decisions.md) — Resource attributes: `service.name`, `service.version`, `service.instance.id`, `xrpl.network.id`
- [03-implementation-strategy.md §3.4](./03-implementation-strategy.md) — Per-operation CPU costs and overhead budget for span creation
- [03-implementation-strategy.md §3.5](./03-implementation-strategy.md) — Memory overhead: static (~456 KB) and dynamic (~1.2 MB) budgets
---
## Task 4: Integrate Telemetry into Application Lifecycle
**Objective**: Wire the `Telemetry` object into `Application` so all components can access it.
**What to do**:
- Edit `src/xrpld/app/main/Application.h`:
- Forward-declare `namespace xrpl::telemetry { class Telemetry; }`
- Add pure virtual method: `virtual telemetry::Telemetry& getTelemetry() = 0;`
- Edit `src/xrpld/app/main/Application.cpp` (the `ApplicationImp` class):
- Add member: `std::unique_ptr<telemetry::Telemetry> telemetry_;`
- In the constructor, after config is loaded and node identity is known:
```cpp
auto const telemetrySection = config_->section("telemetry");
auto telemetrySetup = telemetry::setup_Telemetry(
telemetrySection,
toBase58(TokenType::NodePublic, nodeIdentity_.publicKey()),
BuildInfo::getVersionString());
telemetry_ = telemetry::make_Telemetry(telemetrySetup, logs_->journal("Telemetry"));
```
- In `start()`: call `telemetry_->start()` early
- In `stop()` or destructor: call `telemetry_->stop()` late (to flush pending spans)
- Implement `getTelemetry()` override: return `*telemetry_`
- Add `[telemetry]` section to the example config `cfg/rippled-example.cfg`:
```ini
# [telemetry]
# enabled=1
# endpoint=localhost:4318
# sampling_ratio=1.0
# trace_rpc=1
```
**Key modified files**:
- `src/xrpld/app/main/Application.h`
- `src/xrpld/app/main/Application.cpp`
- `cfg/rippled-example.cfg` (or equivalent example config)
**Reference**:
- [05-configuration-reference.md §5.3](./05-configuration-reference.md) — `ApplicationImp` changes: member declaration, constructor init, `start()`/`stop()` wiring, `getTelemetry()` override
- [05-configuration-reference.md §5.1](./05-configuration-reference.md) — `[telemetry]` config section format and all option defaults
- [03-implementation-strategy.md §3.9.2](./03-implementation-strategy.md) — File impact assessment: `Application.cpp` ~15 lines added, ~3 changed (Low risk)
---
## Task 5: Create Instrumentation Macros
**Objective**: Define convenience macros that make instrumenting code one-liners, and that compile to zero-cost no-ops when telemetry is disabled.
**What to do**:
- Create `src/xrpld/telemetry/TracingInstrumentation.h`:
- When `XRPL_ENABLE_TELEMETRY` is defined:
```cpp
// Macro parameters use underscored names: a parameter literally named
// `telemetry` collided with the ::xrpl::telemetry namespace inside the
// macro body (see POC Lessons Learned).
#define XRPL_TRACE_SPAN(_tel_obj_, _span_name_)                    \
    std::optional<::xrpl::telemetry::SpanGuard> _xrpl_guard_;      \
    _xrpl_guard_.emplace((_tel_obj_).startSpan(_span_name_))
#define XRPL_TRACE_RPC(_tel_obj_, _span_name_)                     \
    std::optional<::xrpl::telemetry::SpanGuard> _xrpl_guard_;      \
    if ((_tel_obj_).shouldTraceRpc()) {                            \
        _xrpl_guard_.emplace((_tel_obj_).startSpan(_span_name_));  \
    }
// The attribute/exception macros assume an _xrpl_guard_ declared by one
// of the span macros above, so both span macros use the same optional.
#define XRPL_TRACE_SET_ATTR(key, value)                            \
    if (_xrpl_guard_.has_value()) {                                \
        _xrpl_guard_->setAttribute(key, value);                    \
    }
#define XRPL_TRACE_EXCEPTION(e)                                    \
    if (_xrpl_guard_.has_value()) {                                \
        _xrpl_guard_->recordException(e);                          \
    }
```
- When `XRPL_ENABLE_TELEMETRY` is NOT defined, all macros expand to `((void)0)`
**Key new file**:
- `src/xrpld/telemetry/TracingInstrumentation.h`
**Reference**:
- [04-code-samples.md §4.3](./04-code-samples.md) — Full macro definitions for `XRPL_TRACE_SPAN`, `XRPL_TRACE_RPC`, `XRPL_TRACE_CONSENSUS`, `XRPL_TRACE_SET_ATTR`, `XRPL_TRACE_EXCEPTION` with both enabled and disabled branches
- [03-implementation-strategy.md §3.7.3](./03-implementation-strategy.md) — Conditional instrumentation pattern: compile-time `#ifndef` and runtime `shouldTrace*()` checks
- [03-implementation-strategy.md §3.9.7](./03-implementation-strategy.md) — Before/after code examples showing minimal intrusiveness (~1-3 lines per instrumentation point)
---
## Task 6: Instrument RPC ServerHandler
> **WS** = WebSocket
**Objective**: Add tracing to the HTTP RPC entry point so every incoming RPC request creates a span.
**What to do**:
- Edit `src/xrpld/rpc/detail/ServerHandler.cpp`:
- `#include` the `TracingInstrumentation.h` header
- In `ServerHandler::onRequest(Session& session)`:
- At the top of the method, add: `XRPL_TRACE_RPC(app_.getTelemetry(), "rpc.request");`
- After the RPC command name is extracted, set attribute: `XRPL_TRACE_SET_ATTR("xrpl.rpc.command", command);`
- After the response status is known, set: `XRPL_TRACE_SET_ATTR("http.status_code", static_cast<int64_t>(statusCode));`
- Wrap error paths with: `XRPL_TRACE_EXCEPTION(e);`
- In `ServerHandler::processRequest(...)`:
- Add a child span: `XRPL_TRACE_RPC(app_.getTelemetry(), "rpc.process");`
- Set method attribute: `XRPL_TRACE_SET_ATTR("xrpl.rpc.method", request_method);`
- In `ServerHandler::onWSMessage(...)` (WebSocket path):
- Add: `XRPL_TRACE_RPC(app_.getTelemetry(), "rpc.ws.message");`
- The goal is to see spans like:
```
rpc.request
└── rpc.process
```
in Tempo/Grafana for every HTTP RPC call.
**Key modified file**:
- `src/xrpld/rpc/detail/ServerHandler.cpp` (~15-25 lines added)
**Reference**:
- [04-code-samples.md §4.5.3](./04-code-samples.md) — Complete `ServerHandler::onRequest()` instrumented code sample with W3C header extraction, span creation, attribute setting, and error handling
- [01-architecture-analysis.md §1.5](./01-architecture-analysis.md) — RPC request flow diagram: HTTP request -> attributes -> jobqueue.enqueue -> rpc.command -> response
- [01-architecture-analysis.md §1.6](./01-architecture-analysis.md) — Key trace points table: `rpc.request` in `ServerHandler.cpp::onRequest()` (Priority: High)
- [02-design-decisions.md §2.3](./02-design-decisions.md) — Span naming convention: `rpc.request`, `rpc.command.*`
- [02-design-decisions.md §2.4.2](./02-design-decisions.md) — RPC span attributes: `xrpl.rpc.command`, `xrpl.rpc.version`, `xrpl.rpc.role`, `xrpl.rpc.params`
- [03-implementation-strategy.md §3.9.2](./03-implementation-strategy.md) — File impact: `ServerHandler.cpp` ~40 lines added, ~10 changed (Low risk)
---
## Task 7: Instrument RPC Command Execution
**Objective**: Add per-command tracing inside the RPC handler so each command (e.g., `submit`, `account_info`, `server_info`) gets its own child span.
**What to do**:
- Edit `src/xrpld/rpc/detail/RPCHandler.cpp`:
- `#include` the `TracingInstrumentation.h` header
- In `doCommand(RPC::JsonContext& context, Json::Value& result)`:
- At the top: `XRPL_TRACE_RPC(context.app.getTelemetry(), "rpc.command." + context.method);`
- Set attributes:
- `XRPL_TRACE_SET_ATTR("xrpl.rpc.command", context.method);`
- `XRPL_TRACE_SET_ATTR("xrpl.rpc.version", static_cast<int64_t>(context.apiVersion));`
- `XRPL_TRACE_SET_ATTR("xrpl.rpc.role", (context.role == Role::ADMIN) ? "admin" : "user");`
- On success: `XRPL_TRACE_SET_ATTR("xrpl.rpc.status", "success");`
- On error: `XRPL_TRACE_SET_ATTR("xrpl.rpc.status", "error");` and set the error message
- After this, traces in Tempo/Grafana should look like:
```
rpc.request (xrpl.rpc.command=account_info)
└── rpc.process
└── rpc.command.account_info (xrpl.rpc.version=2, xrpl.rpc.role=user, xrpl.rpc.status=success)
```
**Key modified file**:
- `src/xrpld/rpc/detail/RPCHandler.cpp` (~15-20 lines added)
**Reference**:
- [04-code-samples.md §4.5.3](./04-code-samples.md) — `ServerHandler::onRequest()` code sample (includes child span pattern for `rpc.command.*`)
- [02-design-decisions.md §2.3](./02-design-decisions.md) — Span naming: `rpc.command.*` pattern with dynamic command name (e.g., `rpc.command.server_info`)
- [02-design-decisions.md §2.4.2](./02-design-decisions.md) — RPC attribute schema: `xrpl.rpc.command`, `xrpl.rpc.version`, `xrpl.rpc.role`, `xrpl.rpc.status`
- [01-architecture-analysis.md §1.6](./01-architecture-analysis.md) — Key trace points table: `rpc.command.*` in `RPCHandler.cpp::doCommand()` (Priority: High)
- [02-design-decisions.md §2.6.5](./02-design-decisions.md) — Correlation with PerfLog: how `doCommand()` can link trace_id with existing PerfLog entries
- [03-implementation-strategy.md §3.4.4](./03-implementation-strategy.md) — RPC request overhead budget: ~1.75 μs total per request
---
## Task 8: Build, Run, and Verify End-to-End
> **OTLP** = OpenTelemetry Protocol
**Objective**: Prove the full pipeline works: rippled emits traces -> OTel Collector receives them -> Tempo stores them for Grafana visualization.
**What to do**:
1. **Start the Docker stack**:
```bash
docker compose -f docker/telemetry/docker-compose.yml up -d
```
Verify Collector health: `curl http://localhost:13133`
2. **Build rippled with telemetry**:
```bash
# Adjust for your actual build workflow
conan install . --build=missing -o with_telemetry=True
cmake --preset default -DXRPL_ENABLE_TELEMETRY=ON
cmake --build --preset default
```
3. **Configure rippled**:
Add to `rippled.cfg` (or your local test config):
```ini
[telemetry]
enabled=1
endpoint=localhost:4318
sampling_ratio=1.0
trace_rpc=1
```
4. **Start rippled** in standalone mode:
```bash
./rippled --conf rippled.cfg -a --start
```
5. **Generate RPC traffic**:
```bash
# server_info
curl -s -X POST http://localhost:5005 \
-H "Content-Type: application/json" \
-d '{"method":"server_info","params":[{}]}'
# ledger
curl -s -X POST http://localhost:5005 \
-H "Content-Type: application/json" \
-d '{"method":"ledger","params":[{"ledger_index":"current"}]}'
# account_info (will error in standalone, that's fine — we trace errors too)
curl -s -X POST http://localhost:5005 \
-H "Content-Type: application/json" \
-d '{"method":"account_info","params":[{"account":"rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh"}]}'
```
6. **Verify in Grafana (Tempo)**:
- Open `http://localhost:3000`
- Navigate to Explore → select Tempo datasource
- Search for service `rippled`
- Confirm you see traces with spans: `rpc.request` -> `rpc.process` -> `rpc.command.server_info`
- Click into a trace and verify attributes: `xrpl.rpc.command`, `xrpl.rpc.status`, `xrpl.rpc.version`
7. **Verify zero-overhead when disabled**:
- Rebuild with `XRPL_ENABLE_TELEMETRY=OFF`, or set `enabled=0` in config
- Run the same RPC calls
- Confirm no new traces appear and no errors in rippled logs
**Verification Checklist**:
- [ ] Docker stack starts without errors
- [ ] rippled builds with `-DXRPL_ENABLE_TELEMETRY=ON`
- [ ] rippled starts and connects to OTel Collector (check rippled logs for telemetry messages)
- [ ] Traces appear in Grafana/Tempo under service "rippled"
- [ ] Span hierarchy is correct (parent-child relationships)
- [ ] Span attributes are populated (`xrpl.rpc.command`, `xrpl.rpc.status`, etc.)
- [ ] Error spans show error status and message
- [ ] Building with `XRPL_ENABLE_TELEMETRY=OFF` produces no regressions
- [ ] Setting `enabled=0` at runtime produces no traces and no errors
**Reference**:
- [06-implementation-phases.md §6.11.1](./06-implementation-phases.md) — Phase 1 definition of done: SDK compiles, runtime toggle works, span creation verified in Tempo, config validation passes
- [06-implementation-phases.md §6.11.2](./06-implementation-phases.md#6112-phase-2-rpc-tracing) — Phase 2 definition of done: 100% RPC coverage, traceparent propagation, <1ms p99 overhead, dashboard deployed
- [06-implementation-phases.md §6.8](./06-implementation-phases.md) — Success metrics: trace coverage >95%, CPU overhead <3%, memory <5 MB, latency impact <2%
- [03-implementation-strategy.md §3.9.5](./03-implementation-strategy.md) — Backward compatibility: config optional, protocol unchanged, `XRPL_ENABLE_TELEMETRY=OFF` produces identical binary
- [01-architecture-analysis.md §1.8](./01-architecture-analysis.md) — Observable outcomes: what traces, metrics, and dashboards to expect
---
## Task 9: Document POC Results and Next Steps
> **OTLP** = OpenTelemetry Protocol | **WS** = WebSocket
**Objective**: Capture findings, screenshots, and remaining work for the team.
**What to do**:
- Take screenshots of Grafana/Tempo showing:
- The service list with "rippled"
- A trace with the full span tree
- Span detail view showing attributes
- Document any issues encountered (build issues, SDK quirks, missing attributes)
- Note performance observations (build time impact, any noticeable runtime overhead)
- Write a short summary of what the POC proves and what it doesn't cover yet:
- **Proves**: OTel SDK integrates with rippled, OTLP export works, RPC traces visible
- **Doesn't cover**: Cross-node P2P context propagation, consensus tracing, protobuf trace context, W3C traceparent header extraction, tail-based sampling, production deployment
- Outline next steps (mapping to the full plan phases):
- [Phase 2](./06-implementation-phases.md) completion: [W3C header extraction](./02-design-decisions.md) (§2.5), WebSocket tracing, all [RPC handlers](./01-architecture-analysis.md) (§1.6)
- [Phase 3](./06-implementation-phases.md): [Protobuf `TraceContext` message](./04-code-samples.md) (§4.4), [transaction relay tracing](./04-code-samples.md) (§4.5.1) across nodes
- [Phase 4](./06-implementation-phases.md): [Consensus round and phase tracing](./04-code-samples.md) (§4.5.2)
- [Phase 5](./06-implementation-phases.md): [Production collector config](./05-configuration-reference.md) (§5.5.2), [Grafana dashboards](./07-observability-backends.md) (§7.6), [alerting](./07-observability-backends.md) (§7.6.3)
**Reference**:
- [06-implementation-phases.md §6.1](./06-implementation-phases.md) — Full 5-phase timeline overview and Gantt chart
- [06-implementation-phases.md §6.10](./06-implementation-phases.md) — Crawl-Walk-Run strategy: POC is the CRAWL phase, next steps are WALK and RUN
- [06-implementation-phases.md §6.12](./06-implementation-phases.md) — Recommended implementation order (14 steps across 9 weeks)
- [03-implementation-strategy.md §3.9](./03-implementation-strategy.md) — Code intrusiveness assessment and risk matrix for each remaining component
- [07-observability-backends.md §7.2](./07-observability-backends.md) — Production backend selection (Tempo, Elastic APM, Honeycomb, Datadog)
- [02-design-decisions.md §2.5](./02-design-decisions.md) — Context propagation design: W3C HTTP headers, protobuf P2P, JobQueue internal
- [00-tracing-fundamentals.md](./00-tracing-fundamentals.md) — Reference for team onboarding on distributed tracing concepts
---
## Summary
| Task | Description | New Files | Modified Files | Depends On |
| ---- | ------------------------------------ | --------- | -------------- | ---------- |
| 0 | Docker observability stack | 4 | 0 | — |
| 1 | OTel C++ SDK dependency | 0 | 2 | — |
| 2 | Core Telemetry interface + NullImpl | 3 | 0 | 1 |
| 3 | OTel-backed Telemetry implementation | 2 | 1 | 1, 2 |
| 4 | Application lifecycle integration | 0 | 3 | 2, 3 |
| 5 | Instrumentation macros | 1 | 0 | 2 |
| 6 | Instrument RPC ServerHandler | 0 | 1 | 4, 5 |
| 7 | Instrument RPC command execution | 0 | 1 | 4, 5 |
| 8 | End-to-end verification | 0 | 0 | 0-7 |
| 9 | Document results and next steps | 1 | 0 | 8 |
**Parallel work**: Tasks 0 and 1 can run in parallel. Tasks 2 and 5 have no dependency on each other. Tasks 6 and 7 can be done in parallel once Tasks 4 and 5 are complete.
---
## Next Steps (Post-POC)
> **OTLP** = OpenTelemetry Protocol | **WS** = WebSocket
### Metrics Pipeline for Grafana Dashboards
The current POC exports **traces only**. Grafana's Explore view can query Tempo for individual traces, but time-series charts (latency histograms, request throughput, error rates) require a **metrics pipeline**. To enable this:
1. **Add a `spanmetrics` connector** to the OTel Collector config that derives RED metrics (Rate, Errors, Duration) from trace spans automatically:
```yaml
connectors:
spanmetrics:
histogram:
explicit:
buckets: [1ms, 5ms, 10ms, 25ms, 50ms, 100ms, 250ms, 500ms, 1s, 5s]
dimensions:
- name: xrpl.rpc.command
- name: xrpl.rpc.status
exporters:
prometheus:
endpoint: 0.0.0.0:8889
service:
pipelines:
traces:
receivers: [otlp]
processors: [batch]
exporters: [debug, otlp/tempo, spanmetrics]
metrics:
receivers: [spanmetrics]
exporters: [prometheus]
```
2. **Add Prometheus** to the Docker Compose stack to scrape the collector's metrics endpoint (a scrape-config sketch follows this list).
3. **Add Prometheus as a Grafana datasource** and build dashboards for:
- RPC request latency (p50/p95/p99) by command
- RPC throughput (requests/sec) by command
- Error rate by command
- Span duration distribution
### Additional Instrumentation
- **W3C `traceparent` header extraction** in `ServerHandler` to support cross-service context propagation from external callers
- **WebSocket RPC tracing** in `ServerHandler::onWSMessage()`
- **Transaction relay tracing** across nodes using protobuf `TraceContext` messages
- **Consensus round and phase tracing** for validator coordination visibility
- **Ledger close tracing** to measure close-to-validated latency
### Production Hardening
- **Tail-based sampling** in the OTel Collector to reduce volume while retaining error/slow traces
- **TLS configuration** for the OTLP exporter in production deployments
- **Resource limits** on the batch processor queue to prevent unbounded memory growth
- **Health monitoring** for the telemetry pipeline itself (collector lag, export failures)
### POC Lessons Learned
Issues encountered during POC implementation that inform future work:
| Issue | Resolution | Impact on Future Work |
| -------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------- | ---------------------------------------------------------------- |
| Conan lockfile rejected `opentelemetry-cpp/1.18.0` | Used `--lockfile=""` to bypass | Lockfile must be regenerated when adding new dependencies |
| Conan package only builds OTLP HTTP exporter, not gRPC | Switched from gRPC to HTTP exporter (`localhost:4318/v1/traces`) | HTTP exporter is the default; gRPC requires custom Conan profile |
| CMake target `opentelemetry-cpp::api` etc. don't exist in Conan package | Use umbrella target `opentelemetry-cpp::opentelemetry-cpp` | Conan targets differ from upstream CMake targets |
| OTel Collector `logging` exporter deprecated | Renamed to `debug` exporter | Use `debug` in all collector configs going forward |
| Macro parameter `telemetry` collided with `::xrpl::telemetry::` namespace | Renamed macro params to `_tel_obj_`, `_span_name_` | Avoid common words as macro parameter names |
| `opentelemetry::trace::Scope` creates new context on move | Store scope as member, create once in constructor | SpanGuard move semantics need care with Scope lifecycle |
| `TracerProviderFactory::Create` returns `unique_ptr<sdk::TracerProvider>`, not `nostd::shared_ptr` | Use `std::shared_ptr` member, wrap in `nostd::shared_ptr` for global provider | OTel SDK factory return types don't match API provider types |


@@ -0,0 +1,230 @@
# Phase 2: RPC Tracing Completion Task List
> **Goal**: Complete full RPC tracing coverage with W3C Trace Context propagation, unit tests, and performance validation. Build on the POC foundation to achieve production-quality RPC observability.
>
> **Scope**: W3C header extraction, TraceContext propagation utilities, unit tests for core telemetry, integration tests for RPC tracing, and performance benchmarks.
>
> **Branch**: `pratik/otel-phase2-rpc-tracing` (from `pratik/OpenTelemetry_and_DistributedTracing_planning`)
### Related Plan Documents
| Document | Relevance |
| ------------------------------------------------------------ | ------------------------------------------------------------- |
| [04-code-samples.md](./04-code-samples.md) | TraceContextPropagator (§4.4.2), RPC instrumentation (§4.5.3) |
| [02-design-decisions.md](./02-design-decisions.md) | W3C Trace Context (§2.5), span attributes (§2.4.2) |
| [06-implementation-phases.md](./06-implementation-phases.md) | Phase 2 tasks (§6.3), definition of done (§6.11.2) |
---
## Task 2.1: Implement W3C Trace Context HTTP Header Extraction
**Objective**: Extract `traceparent` and `tracestate` headers from incoming HTTP RPC requests so external callers can propagate their trace context into rippled.
**What to do**:
- Create `include/xrpl/telemetry/TraceContextPropagator.h`:
- `extractFromHeaders(headerGetter)` - extract W3C traceparent/tracestate from HTTP headers
- `injectToHeaders(ctx, headerSetter)` - inject trace context into response headers
- Use OTel's `TextMapPropagator` with `W3CTraceContextPropagator` for standards compliance
- Only compiled when `XRPL_ENABLE_TELEMETRY` is defined
- Create `src/libxrpl/telemetry/TraceContextPropagator.cpp`:
- Implement a simple `TextMapCarrier` adapter for HTTP headers
- Use `opentelemetry::context::propagation::GlobalTextMapPropagator` for extraction/injection
- Register the W3C propagator in `TelemetryImpl::start()`
- Modify `src/xrpld/rpc/detail/ServerHandler.cpp`:
- In the HTTP request handler, extract parent context from headers before creating span
- Pass extracted context to `startSpan()` as parent
- Inject trace context into response headers
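A minimal sketch of the carrier adapter described above, assuming the HTTP headers are held in a `std::map<std::string, std::string>`; the class name `HttpHeaderCarrier` and the free function are illustrative, not the final API:
```cpp
// Sketch only: a TextMapCarrier over a simple header map, plus the
// extract helper. Header container type is an assumption.
#include <opentelemetry/context/propagation/global_propagator.h>
#include <opentelemetry/context/propagation/text_map_propagator.h>
#include <opentelemetry/context/runtime_context.h>

#include <map>
#include <string>

class HttpHeaderCarrier final
    : public opentelemetry::context::propagation::TextMapCarrier
{
public:
    explicit HttpHeaderCarrier(std::map<std::string, std::string>& h)
        : headers_(h)
    {
    }

    // Called by Extract() to read traceparent/tracestate.
    opentelemetry::nostd::string_view
    Get(opentelemetry::nostd::string_view key) const noexcept override
    {
        auto const it = headers_.find(std::string(key));
        return it == headers_.end()
            ? opentelemetry::nostd::string_view{}
            : opentelemetry::nostd::string_view{it->second};
    }

    // Called by Inject() to write headers into the response.
    void
    Set(opentelemetry::nostd::string_view key,
        opentelemetry::nostd::string_view value) noexcept override
    {
        headers_[std::string(key)] = std::string(value);
    }

private:
    std::map<std::string, std::string>& headers_;
};

// Extract a parent context from incoming request headers; returns the
// current context unchanged when no traceparent header is present.
opentelemetry::context::Context
extractFromHeaders(std::map<std::string, std::string>& headers)
{
    HttpHeaderCarrier carrier{headers};
    auto propagator = opentelemetry::context::propagation::
        GlobalTextMapPropagator::GetGlobalPropagator();
    auto current = opentelemetry::context::RuntimeContext::GetCurrent();
    return propagator->Extract(carrier, current);
}
```
Registration happens once in `TelemetryImpl::start()` via `GlobalTextMapPropagator::SetGlobalPropagator(...)` with an `opentelemetry::trace::propagation::HttpTraceContext` instance (the OTel C++ SDK's name for the W3C propagator).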
**Key new files**:
- `include/xrpl/telemetry/TraceContextPropagator.h`
- `src/libxrpl/telemetry/TraceContextPropagator.cpp`
**Key modified files**:
- `src/xrpld/rpc/detail/ServerHandler.cpp`
- `src/libxrpl/telemetry/Telemetry.cpp` (register W3C propagator)
**Reference**:
- [04-code-samples.md §4.4.2](./04-code-samples.md) — TraceContextPropagator with extractFromHeaders/injectToHeaders
- [02-design-decisions.md §2.5](./02-design-decisions.md) — W3C Trace Context propagation design
---
## Task 2.2: Add XRPL_TRACE_PEER Macro
**Objective**: Add the missing peer-tracing macro for future Phase 3 use and ensure macro completeness.
**What to do**:
- Edit `src/xrpld/telemetry/TracingInstrumentation.h`:
- Add `XRPL_TRACE_PEER(_tel_obj_, _span_name_)` macro that checks `shouldTracePeer()`
- Add `XRPL_TRACE_LEDGER(_tel_obj_, _span_name_)` macro (for future ledger tracing)
- Ensure disabled variants expand to `((void)0)`
**Key modified file**:
- `src/xrpld/telemetry/TracingInstrumentation.h`
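A sketch of the two macros under the conventions this plan uses elsewhere (an `std::optional<SpanGuard>` local named `_xrpl_guard_`, gated on the matching `shouldTrace*()` flag); the exact expansion may differ in the final header:
```cpp
// Sketch only. Mirrors the existing XRPL_TRACE_* pattern; assumes
// SpanGuard and <optional> are already available in this header.
#ifdef XRPL_ENABLE_TELEMETRY
#define XRPL_TRACE_PEER(_tel_obj_, _span_name_)              \
    std::optional<xrpl::telemetry::SpanGuard> _xrpl_guard_;  \
    if ((_tel_obj_).shouldTracePeer())                       \
        _xrpl_guard_.emplace((_tel_obj_).startSpan(_span_name_))

#define XRPL_TRACE_LEDGER(_tel_obj_, _span_name_)            \
    std::optional<xrpl::telemetry::SpanGuard> _xrpl_guard_;  \
    if ((_tel_obj_).shouldTraceLedger())                     \
        _xrpl_guard_.emplace((_tel_obj_).startSpan(_span_name_))
#else
#define XRPL_TRACE_PEER(_tel_obj_, _span_name_) ((void)0)
#define XRPL_TRACE_LEDGER(_tel_obj_, _span_name_) ((void)0)
#endif
```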
---
## Task 2.3: Add shouldTraceLedger() to Telemetry Interface
**Objective**: The `Setup` struct has a `traceLedger` field but there's no corresponding virtual method. Add it for interface completeness.
**What to do**:
- Edit `include/xrpl/telemetry/Telemetry.h`:
- Add `virtual bool shouldTraceLedger() const = 0;`
- Update all implementations:
- `src/libxrpl/telemetry/Telemetry.cpp` (TelemetryImpl, NullTelemetryOtel)
- `src/libxrpl/telemetry/NullTelemetry.cpp` (NullTelemetry)
**Key modified files**:
- `include/xrpl/telemetry/Telemetry.h`
- `src/libxrpl/telemetry/Telemetry.cpp`
- `src/libxrpl/telemetry/NullTelemetry.cpp`
---
## Task 2.4: Unit Tests for Core Telemetry Infrastructure
**Objective**: Add unit tests for the core telemetry abstractions to validate correctness and catch regressions.
**What to do**:
- Create `src/test/telemetry/Telemetry_test.cpp`:
- Test NullTelemetry: verify all methods return expected no-op values
- Test Setup defaults: verify all Setup fields have correct defaults
- Test setup_Telemetry config parser: verify parsing of [telemetry] section
- Test enabled/disabled factory paths
- Test shouldTrace\* methods respect config flags
- Create `src/test/telemetry/SpanGuard_test.cpp`:
- Test SpanGuard RAII lifecycle (span ends on destruction)
- Test move constructor works correctly
- Test setAttribute, setOk, setStatus, addEvent, recordException
- Test context() returns valid context
- Add test files to CMake build
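A sketch of the suite shape in rippled's beast test style; the `Setup` member names (`enabled`, `traceLedger`) follow this plan's descriptions, and the include paths are assumptions:
```cpp
// Sketch of the NullTelemetry/Setup tests. Member and header names may
// differ in the final code.
#include <xrpl/telemetry/NullTelemetry.h>
#include <xrpl/telemetry/Telemetry.h>

#include <xrpl/beast/unit_test.h>

namespace xrpl {
namespace telemetry {

class Telemetry_test : public beast::unit_test::suite
{
    void
    testNullTelemetryIsNoOp()
    {
        NullTelemetry t;
        // Every shouldTrace* gate stays closed on the null backend.
        BEAST_EXPECT(!t.shouldTraceLedger());
    }

    void
    testSetupDefaults()
    {
        Telemetry::Setup s;
        BEAST_EXPECT(!s.enabled);
        BEAST_EXPECT(!s.traceLedger);
    }

public:
    void
    run() override
    {
        testNullTelemetryIsNoOp();
        testSetupDefaults();
    }
};

BEAST_DEFINE_TESTSUITE(Telemetry, telemetry, xrpl);

}  // namespace telemetry
}  // namespace xrpl
```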
**Key new files**:
- `src/test/telemetry/Telemetry_test.cpp`
- `src/test/telemetry/SpanGuard_test.cpp`
**Reference**:
- [06-implementation-phases.md §6.11.1](./06-implementation-phases.md) — Phase 1 exit criteria (unit tests passing)
---
## Task 2.5: Enhance RPC Span Attributes
**Objective**: Add additional attributes to RPC spans per the semantic conventions defined in the plan.
**What to do**:
- Edit `src/xrpld/rpc/detail/ServerHandler.cpp`:
- Add `http.method` attribute for HTTP requests
- Add `http.status_code` attribute for responses
- Add `net.peer.ip` attribute for client IP (if available)
- Edit `src/xrpld/rpc/detail/RPCHandler.cpp`:
- Add `xrpl.rpc.duration_ms` attribute on completion
- Add error message attribute on failure: `xrpl.rpc.error_message`
**Key modified files**:
- `src/xrpld/rpc/detail/ServerHandler.cpp`
- `src/xrpld/rpc/detail/RPCHandler.cpp`
**Reference**:
- [02-design-decisions.md §2.4.2](./02-design-decisions.md) — RPC attribute schema
---
## Task 2.6: Build Verification and Performance Baseline
**Objective**: Verify the build succeeds with and without telemetry, and establish a performance baseline.
**What to do**:
1. Build with `telemetry=ON` and verify no compilation errors
2. Build with `telemetry=OFF` and verify no regressions
3. Run existing unit tests to verify no breakage
4. Document any build issues in lessons.md
**Verification Checklist**:
- [ ] `conan install . --build=missing -o telemetry=True` succeeds
- [ ] `cmake --preset default -Dtelemetry=ON` configures correctly
- [ ] Build succeeds with telemetry ON
- [ ] Build succeeds with telemetry OFF
- [ ] Existing tests pass with telemetry ON
- [ ] Existing tests pass with telemetry OFF
---
## Task 2.8: RPC Span Attribute Enrichment — Node Health Context
> **Source**: [External Dashboard Parity](../docs/superpowers/specs/2026-03-30-external-dashboard-parity-design.md) — adds node-level health context inspired by the community [xrpl-validator-dashboard](https://github.com/realgrapedrop/xrpl-validator-dashboard).
>
> **Downstream**: Phase 7 (MetricsRegistry uses these attributes for alerting context), Phase 10 (validation checks for these attributes).
**Objective**: Add node-level health state to every `rpc.command.*` span so operators can correlate RPC behavior with node state in Tempo.
**What to do**:
- Edit `src/xrpld/rpc/detail/RPCHandler.cpp`:
- In the `rpc.command.*` span creation block (after existing `setAttribute` calls for `xrpl.rpc.command`, `xrpl.rpc.version`, etc.):
- Add `xrpl.node.amendment_blocked` (bool) — from `context.app.getOPs().isAmendmentBlocked()`
- Add `xrpl.node.server_state` (string) — from `context.app.getOPs().strOperatingMode()`
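In code this is a pair of `setAttribute` calls on the span guard Phase 2 already creates in this scope (sketch; the guard name follows the macro convention and may differ):
```cpp
// Sketch: enrich the existing rpc.command.* span. _xrpl_guard_ is the
// optional SpanGuard created by the Phase 2 tracing macro.
#ifdef XRPL_ENABLE_TELEMETRY
if (_xrpl_guard_.has_value())
{
    _xrpl_guard_->setAttribute(
        "xrpl.node.amendment_blocked",
        context.app.getOPs().isAmendmentBlocked());
    _xrpl_guard_->setAttribute(
        "xrpl.node.server_state",
        context.app.getOPs().strOperatingMode());
}
#endif
```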
**New span attributes**:
| Attribute | Type | Source | Example |
| ----------------------------- | ------ | ------------------------------------------- | -------- |
| `xrpl.node.amendment_blocked` | bool | `context.app.getOPs().isAmendmentBlocked()` | `true` |
| `xrpl.node.server_state` | string | `context.app.getOPs().strOperatingMode()` | `"full"` |
**Rationale**: When a node is amendment-blocked or in a degraded state, every RPC response is suspect. Tagging spans with this state enables Tempo TraceQL queries like:
```
{name=~"rpc.command.*"} | xrpl.node.amendment_blocked = true
```
This surfaces all RPCs served during a blocked period — critical for post-incident analysis.
**Key modified files**:
- `src/xrpld/rpc/detail/RPCHandler.cpp`
**Exit Criteria**:
- [ ] `rpc.command.server_info` spans carry `xrpl.node.amendment_blocked` and `xrpl.node.server_state` attributes
- [ ] No measurable latency impact (attribute values are cached atomics, not computed per-call)
- [ ] Attributes appear in Tempo trace detail view
---
## Summary
| Task | Description | New Files | Modified Files | Depends On |
| ---- | ------------------------------------------- | --------- | -------------- | ---------- |
| 2.1 | W3C Trace Context header extraction | 2 | 2 | POC |
| 2.2 | Add XRPL_TRACE_PEER/LEDGER macros | 0 | 1 | POC |
| 2.3 | Add shouldTraceLedger() interface method | 0 | 3 | POC |
| 2.4 | Unit tests for core telemetry | 2 | 1 | POC |
| 2.5 | Enhanced RPC span attributes | 0 | 2 | POC |
| 2.6 | Build verification and performance baseline | 0 | 0 | 2.1-2.5 |
| 2.8 | RPC span attribute enrichment (node health) | 0 | 1 | 2.5 |
**Parallel work**: Tasks 2.1, 2.2, 2.3 can run in parallel. Task 2.4 depends on 2.3. Task 2.5 can run in parallel with 2.4. Task 2.6 depends on all others. Task 2.8 depends on 2.5 (existing span creation must be in place).


@@ -0,0 +1,275 @@
# Phase 3: Transaction Tracing Task List
> **Goal**: Trace the full transaction lifecycle from RPC submission through peer relay, including cross-node context propagation via Protocol Buffer extensions. This is the WALK phase that demonstrates true distributed tracing.
>
> **Scope**: Protocol Buffer `TraceContext` message, context serialization, PeerImp transaction instrumentation, NetworkOPs processing instrumentation, HashRouter visibility, and multi-node relay context propagation.
>
> **Branch**: `pratik/otel-phase3-tx-tracing` (from `pratik/otel-phase2-rpc-tracing`)
### Related Plan Documents
| Document | Relevance |
| ------------------------------------------------------------ | ------------------------------------------------------------------------------------------------ |
| [04-code-samples.md](./04-code-samples.md) | TraceContext protobuf (§4.4.1), PeerImp instrumentation (§4.5.1), context serialization (§4.4.2) |
| [01-architecture-analysis.md](./01-architecture-analysis.md) | Transaction flow (§1.3), key trace points (§1.6) |
| [06-implementation-phases.md](./06-implementation-phases.md) | Phase 3 tasks (§6.4), definition of done (§6.11.3) |
| [02-design-decisions.md](./02-design-decisions.md) | Context propagation design (§2.5), attribute schema (§2.4.3) |
---
## Task 3.1: Define TraceContext Protocol Buffer Message
**Objective**: Add trace context fields to the P2P protocol messages so trace IDs can propagate across nodes.
**What to do**:
- Edit `include/xrpl/proto/xrpl.proto` (or `src/ripple/proto/ripple.proto`, depending on where the proto lives in the tree):
- Add `TraceContext` message definition:
```protobuf
message TraceContext {
bytes trace_id = 1; // 16-byte trace identifier
bytes span_id = 2; // 8-byte span identifier
uint32 trace_flags = 3; // bit 0 = sampled
string trace_state = 4; // W3C tracestate value
}
```
- Add `optional TraceContext trace_context = 1001;` to:
- `TMTransaction`
- `TMProposeSet` (for Phase 4 use)
- `TMValidation` (for Phase 4 use)
- Use high field numbers (1001+) to avoid conflicts with existing fields
- Regenerate protobuf C++ code
**Key modified files**:
- `include/xrpl/proto/xrpl.proto` (or equivalent)
**Reference**:
- [04-code-samples.md §4.4.1](./04-code-samples.md) — TraceContext message definition
- [02-design-decisions.md §2.5.2](./02-design-decisions.md) — Protocol buffer context propagation design
---
## Task 3.2: Implement Protobuf Context Serialization
**Objective**: Create utilities to serialize/deserialize OTel trace context to/from protobuf `TraceContext` messages.
**What to do**:
- Create `include/xrpl/telemetry/TraceContextPropagator.h` (or extend the Phase 2 version if it already exists):
- Add protobuf-specific methods:
- `static Context extractFromProtobuf(protocol::TraceContext const& proto)` — reconstruct OTel context from protobuf fields
- `static void injectToProtobuf(Context const& ctx, protocol::TraceContext& proto)` — serialize current span context into protobuf fields
- Both methods guard behind `#ifdef XRPL_ENABLE_TELEMETRY`
- Create/extend `src/libxrpl/telemetry/TraceContextPropagator.cpp`:
- Implement extraction: read trace_id (16 bytes), span_id (8 bytes), trace_flags from protobuf, construct `SpanContext`, wrap in `Context`
- Implement injection: get current span from context, serialize its TraceId, SpanId, and TraceFlags into protobuf fields
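A sketch of the extract half under the Task 3.1 field layout; `protocol::TraceContext` accessors follow protobuf's generated naming, and the OTel calls are assumptions against the current C++ SDK:
```cpp
// Sketch: returns a context carrying a remote SpanContext, or an empty
// context when the fields are malformed (caller starts a new root).
#include <opentelemetry/context/context.h>
#include <opentelemetry/trace/context.h>
#include <opentelemetry/trace/default_span.h>
#include <opentelemetry/trace/span_context.h>

opentelemetry::context::Context
extractFromProtobuf(protocol::TraceContext const& proto)
{
    namespace trace = opentelemetry::trace;

    if (proto.trace_id().size() != trace::TraceId::kSize ||
        proto.span_id().size() != trace::SpanId::kSize)
        return {};

    trace::TraceId const traceId{
        opentelemetry::nostd::span<uint8_t const, trace::TraceId::kSize>{
            reinterpret_cast<uint8_t const*>(proto.trace_id().data()),
            trace::TraceId::kSize}};
    trace::SpanId const spanId{
        opentelemetry::nostd::span<uint8_t const, trace::SpanId::kSize>{
            reinterpret_cast<uint8_t const*>(proto.span_id().data()),
            trace::SpanId::kSize}};
    trace::TraceFlags const flags{
        static_cast<uint8_t>(proto.trace_flags() & 0xFF)};

    // Mark the context as remote so child spans record a remote parent.
    trace::SpanContext const remote{traceId, spanId, flags, /*remote=*/true};
    opentelemetry::context::Context ctx;
    opentelemetry::nostd::shared_ptr<trace::Span> span{
        new trace::DefaultSpan(remote)};
    return trace::SetSpan(ctx, span);
}
```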
**Key new/modified files**:
- `include/xrpl/telemetry/TraceContextPropagator.h`
- `src/libxrpl/telemetry/TraceContextPropagator.cpp`
**Reference**:
- [04-code-samples.md §4.4.2](./04-code-samples.md) — Full extract/inject implementation
---
## Task 3.3: Instrument PeerImp Transaction Handling
**Objective**: Add trace spans to the peer-level transaction receive and relay path.
**What to do**:
- Edit `src/xrpld/overlay/detail/PeerImp.cpp`:
- In `onMessage(TMTransaction)` / `handleTransaction()`:
- Extract parent trace context from incoming `TMTransaction::trace_context` field (if present)
- Create `tx.receive` span as child of extracted context (or new root if none)
- Set attributes: `xrpl.tx.hash`, `xrpl.peer.id`, `xrpl.tx.status`
- On HashRouter suppression (duplicate): set `xrpl.tx.suppressed=true`, add `tx.duplicate` event
- Wrap validation call with child span `tx.validate`
- Wrap relay with `tx.relay` span
- When relaying to peers:
- Inject current trace context into outgoing `TMTransaction::trace_context`
- Set `xrpl.tx.relay_count` attribute
- Include `TracingInstrumentation.h` and use `XRPL_TRACE_TX` macro
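An illustrative shape for the receive path (names like `m`, `txHash`, and `shouldTraceTx()` are assumptions; the full example lives in §4.5.1):
```cpp
// Sketch: parent the tx.receive span on the sender's context when the
// wire carried one, otherwise start a new root.
#ifdef XRPL_ENABLE_TELEMETRY
opentelemetry::context::Context parent;
if (m->has_trace_context())
    parent = xrpl::telemetry::TraceContextPropagator::extractFromProtobuf(
        m->trace_context());

std::optional<xrpl::telemetry::SpanGuard> guard;
auto& tel = app_.getTelemetry();
if (tel.shouldTraceTx())
    guard.emplace(tel.startSpan("tx.receive", parent));
if (guard)
{
    guard->setAttribute("xrpl.tx.hash", to_string(txHash));
    guard->setAttribute("xrpl.peer.id", std::to_string(id_));
}
#endif
```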
**Key modified files**:
- `src/xrpld/overlay/detail/PeerImp.cpp`
**Reference**:
- [04-code-samples.md §4.5.1](./04-code-samples.md) — Full PeerImp instrumentation example
- [01-architecture-analysis.md §1.3](./01-architecture-analysis.md) — Transaction flow diagram
- [01-architecture-analysis.md §1.6](./01-architecture-analysis.md) — tx.receive trace point
---
## Task 3.4: Instrument NetworkOPs Transaction Processing
**Objective**: Trace the transaction processing pipeline in NetworkOPs, covering both sync and async paths.
**What to do**:
- Edit `src/xrpld/app/misc/NetworkOPs.cpp`:
- In `processTransaction()`:
- Create `tx.process` span
- Set attributes: `xrpl.tx.hash`, `xrpl.tx.type`, `xrpl.tx.local` (whether from RPC or peer)
- Record whether sync or async path is taken
- In `doTransactionAsync()`:
- Capture parent context before queuing
- Create `tx.queue` span with queue depth attribute
- Add event when transaction is dequeued for processing
- In `doTransactionSync()`:
- Create `tx.process_sync` span
- Record result (applied, queued, rejected)
**Key modified files**:
- `src/xrpld/app/misc/NetworkOPs.cpp`
**Reference**:
- [01-architecture-analysis.md §1.6](./01-architecture-analysis.md) — tx.validate and tx.process trace points
- [02-design-decisions.md §2.4.3](./02-design-decisions.md) — Transaction attribute schema
---
## Task 3.5: Instrument HashRouter for Dedup Visibility
**Objective**: Make transaction deduplication visible in traces by recording HashRouter decisions as span attributes/events.
**What to do**:
- Edit `src/xrpld/overlay/detail/PeerImp.cpp` (in handleTransaction):
- After calling `HashRouter::shouldProcess()` or `addSuppressionPeer()`:
- Record `xrpl.tx.suppressed` attribute (true/false)
- Record `xrpl.tx.flags` showing current HashRouter state (SAVED, TRUSTED, etc.)
- Add `tx.first_seen` or `tx.duplicate` event
- This is NOT a modification to HashRouter itself — just recording its decisions as span attributes in the existing PeerImp instrumentation from Task 3.3.
**Key modified files**:
- `src/xrpld/overlay/detail/PeerImp.cpp` (same changes as 3.3, logically grouped)
---
## Task 3.6: Context Propagation in Transaction Relay
**Objective**: Ensure trace context flows correctly when transactions are relayed between peers, creating linked spans across nodes.
**What to do**:
- Verify the relay path injects trace context:
- When `PeerImp` relays a transaction, the `TMTransaction` message should carry `trace_context`
- When a remote peer receives it, the context is extracted and used as parent
- Test context propagation:
- Manually verify with 2+ node setup that trace IDs match across nodes
- Confirm parent-child span relationships are correct in Tempo
- Handle edge cases:
- Missing trace context (older peers): create new root span
- Corrupted trace context: log warning, create new root span
- Sampled-out traces: respect trace flags
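The inject half is symmetric to Task 3.2's extract (sketch; `msg` is the outgoing `TMTransaction` being built):
```cpp
// Sketch: stamp the current trace context onto the outgoing message
// just before relay.
#ifdef XRPL_ENABLE_TELEMETRY
xrpl::telemetry::TraceContextPropagator::injectToProtobuf(
    opentelemetry::context::RuntimeContext::GetCurrent(),
    *msg->mutable_trace_context());
#endif
```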
**Key modified files**:
- `src/xrpld/overlay/detail/PeerImp.cpp`
- `src/xrpld/overlay/detail/OverlayImpl.cpp` (if relay method needs context param)
**Reference**:
- [02-design-decisions.md §2.5](./02-design-decisions.md) — Context propagation design
- [04-code-samples.md §4.5.1](./04-code-samples.md) — Relay context injection pattern
---
## Task 3.7: Build Verification and Testing
**Objective**: Verify all Phase 3 changes compile and work correctly.
**What to do**:
1. Build with `telemetry=ON` — verify no compilation errors
2. Build with `telemetry=OFF` — verify no regressions
3. Run existing unit tests
4. Verify protobuf regeneration produces correct C++ code
5. Document any issues encountered
**Verification Checklist**:
- [ ] Protobuf changes generate valid C++
- [ ] Build succeeds with telemetry ON
- [ ] Build succeeds with telemetry OFF
- [ ] Existing tests pass
- [ ] No undefined symbols from new telemetry calls
---
## Task 3.8: Transaction Span Peer Version Attribute
> **Source**: [External Dashboard Parity](../docs/superpowers/specs/2026-03-30-external-dashboard-parity-design.md) — adds peer version context inspired by the community [xrpl-validator-dashboard](https://github.com/realgrapedrop/xrpl-validator-dashboard).
>
> **Upstream**: Phase 2 (RPC span infrastructure must exist).
> **Downstream**: Phase 10 (validation checks for this attribute).
**Objective**: Add the relaying peer's rippled version to `tx.receive` spans so operators can correlate transaction issues with peer version mismatches during network upgrades.
**What to do**:
- Edit `src/xrpld/overlay/detail/PeerImp.cpp`:
- In the `tx.receive` span block (after existing `xrpl.peer.id` setAttribute call):
- Add `xrpl.peer.version` (string) — from `this->getVersion()`
- Only set if `getVersion()` returns a non-empty string (avoid empty-string attributes)
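The empty-string guard is a one-liner (sketch; `guard` is the `tx.receive` span guard from Task 3.3):
```cpp
// Sketch: set the attribute only when a version string exists.
if (auto const v = getVersion(); !v.empty())
    guard->setAttribute("xrpl.peer.version", v);
```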
**New span attribute**:
| Attribute | Type | Source | Example |
| ------------------- | ------ | -------------------- | ----------------- |
| `xrpl.peer.version` | string | `peer->getVersion()` | `"rippled-2.4.0"` |
**Rationale**: Transaction relay is where version mismatches cause subtle serialization or validation bugs. Tracing "this tx came from a v2.3.0 peer" helps diagnose compatibility issues. The community dashboard tracks peer versions externally; this brings version awareness into the trace itself.
**Key modified files**:
- `src/xrpld/overlay/detail/PeerImp.cpp`
**Exit Criteria**:
- [ ] `tx.receive` spans carry `xrpl.peer.version` attribute with a non-empty version string
- [ ] Attribute is omitted (not set to empty string) when `getVersion()` returns empty
- [ ] Attribute visible in Tempo trace detail view
---
## Summary
| Task | Description | New Files | Modified Files | Depends On |
| ---- | ----------------------------------- | --------- | -------------- | ---------- |
| 3.1 | TraceContext protobuf message | 0 | 1 | Phase 2 |
| 3.2 | Protobuf context serialization | 1-2 | 0 | 3.1 |
| 3.3 | PeerImp transaction instrumentation | 0 | 1 | 3.2 |
| 3.4 | NetworkOPs transaction processing | 0 | 1 | Phase 2 |
| 3.5 | HashRouter dedup visibility | 0 | 1 | 3.3 |
| 3.6 | Relay context propagation | 0 | 1-2 | 3.3, 3.5 |
| 3.7 | Build verification and testing | 0 | 0 | 3.1-3.6 |
| 3.8 | TX span peer version attribute | 0 | 1 | 3.3 |
**Parallel work**: Tasks 3.1 and 3.4 can start in parallel. Task 3.2 depends on 3.1. Tasks 3.3 and 3.5 depend on 3.2. Task 3.6 depends on 3.3 and 3.5. Task 3.8 depends on 3.3 (span must exist).
**Exit Criteria** (from [06-implementation-phases.md §6.11.3](./06-implementation-phases.md)):
- [ ] Transaction traces span across nodes
- [ ] Trace context in Protocol Buffer messages
- [ ] HashRouter deduplication visible in traces
- [ ] <5% overhead on transaction throughput


@@ -0,0 +1,896 @@
# Phase 4: Consensus Tracing Task List
> **Goal**: Full observability into consensus rounds — track round lifecycle, phase transitions, proposal handling, and validation. This is the RUN phase that completes the distributed tracing story.
>
> **Scope**: RCLConsensus instrumentation for round starts, phase transitions (open/establish/accept), proposal send/receive, validation handling, and correlation with transaction traces from Phase 3.
>
> **Branch**: `pratik/otel-phase4-consensus-tracing` (from `pratik/otel-phase3-tx-tracing`)
### Related Plan Documents
| Document | Relevance |
| ------------------------------------------------------------ | ----------------------------------------------------------- |
| [04-code-samples.md](./04-code-samples.md) | Consensus instrumentation (§4.5.2), consensus span patterns |
| [01-architecture-analysis.md](./01-architecture-analysis.md) | Consensus round flow (§1.4), key trace points (§1.6) |
| [06-implementation-phases.md](./06-implementation-phases.md) | Phase 4 tasks (§6.5), definition of done (§6.11.4) |
| [02-design-decisions.md](./02-design-decisions.md) | Consensus attribute schema (§2.4.4) |
---
## Task 4.1: Instrument Consensus Round Start
**Objective**: Create a root span for each consensus round that captures the round's key parameters.
**What to do**:
- Edit `src/xrpld/app/consensus/RCLConsensus.cpp`:
- In `RCLConsensus::startRound()` (or the Adaptor's startRound):
- Create `consensus.round` span using `XRPL_TRACE_CONSENSUS` macro
- Set attributes:
- `xrpl.consensus.ledger.prev` — previous ledger hash
- `xrpl.consensus.ledger.seq` — target ledger sequence
- `xrpl.consensus.proposers` — number of trusted proposers
- `xrpl.consensus.mode` — "proposing" or "observing"
- Store the span context for use by child spans in phase transitions
- Add a member to hold current round trace context:
- `opentelemetry::context::Context currentRoundContext_` (guarded by `#ifdef`)
- Updated at round start, used by phase transition spans
**Key modified files**:
- `src/xrpld/app/consensus/RCLConsensus.cpp`
- `src/xrpld/app/consensus/RCLConsensus.h` (add context member)
**Reference**:
- [04-code-samples.md §4.5.2](./04-code-samples.md) — startRound instrumentation example
- [01-architecture-analysis.md §1.4](./01-architecture-analysis.md) — Consensus round flow
---
## Task 4.2: Instrument Phase Transitions
**Objective**: Create child spans for each consensus phase (open, establish, accept) to show timing breakdown.
**What to do**:
- Edit `src/xrpld/app/consensus/RCLConsensus.cpp`:
- Identify where phase transitions occur (the `Consensus<Adaptor>` template drives this)
- For each phase entry:
- Create span as child of `currentRoundContext_`: `consensus.phase.open`, `consensus.phase.establish`, `consensus.phase.accept`
- Set `xrpl.consensus.phase` attribute
- Add `phase.enter` event at start, `phase.exit` event at end
- Record phase duration in milliseconds
- In the `onClose` adaptor method:
- Create `consensus.ledger_close` span
- Set attributes: close_time, mode, transaction count in initial position
- Note: The Consensus template class in `src/xrpld/consensus/Consensus.h` drives phase transitions — Phase 4a instruments directly in the template
**Key modified files**:
- `src/xrpld/app/consensus/RCLConsensus.cpp`
- Possibly `src/xrpld/consensus/Consensus.h` (for template-level phase tracking)
**Reference**:
- [04-code-samples.md §4.5.2](./04-code-samples.md) — phaseTransition instrumentation
---
## Task 4.3: Instrument Proposal Handling
**Objective**: Trace proposal send and receive to show validator coordination.
**What to do**:
- Edit `src/xrpld/app/consensus/RCLConsensus.cpp`:
- In `Adaptor::propose()`:
- Create `consensus.proposal.send` span
- Set attributes: `xrpl.consensus.round` (proposal sequence), proposal hash
- Inject trace context into outgoing `TMProposeSet::trace_context` (from Phase 3 protobuf)
- In `Adaptor::peerProposal()` (or wherever peer proposals are received):
- Extract trace context from incoming `TMProposeSet::trace_context`
- Create `consensus.proposal.receive` span as child of extracted context
- Set attributes: `xrpl.consensus.proposer` (node ID), `xrpl.consensus.round`
- In `Adaptor::share(RCLCxPeerPos)`:
- Create `consensus.proposal.relay` span for relaying peer proposals
**Key modified files**:
- `src/xrpld/app/consensus/RCLConsensus.cpp`
**Reference**:
- [04-code-samples.md §4.5.2](./04-code-samples.md) — peerProposal instrumentation
- [02-design-decisions.md §2.4.4](./02-design-decisions.md) — Consensus attribute schema
---
## Task 4.4: Instrument Validation Handling
**Objective**: Trace validation send and receive to show ledger validation flow.
**What to do**:
- Edit `src/xrpld/app/consensus/RCLConsensus.cpp` (or the validation handler):
- When sending our validation:
- Create `consensus.validation.send` span
- Set attributes: validated ledger hash, sequence, signing time
- When receiving a peer validation:
- Extract trace context from `TMValidation::trace_context` (if present)
- Create `consensus.validation.receive` span
- Set attributes: `xrpl.consensus.validator` (node ID), ledger hash
**Key modified files**:
- `src/xrpld/app/consensus/RCLConsensus.cpp`
- `src/xrpld/app/misc/NetworkOPs.cpp` (if validation handling is here)
---
## Task 4.5: Add Consensus-Specific Attributes
**Objective**: Enrich consensus spans with detailed attributes for debugging and analysis.
**What to do**:
- Review all consensus spans and ensure they include:
- `xrpl.consensus.ledger.seq` — target ledger sequence number
- `xrpl.consensus.round` — consensus round number
- `xrpl.consensus.mode` — proposing/observing/wrongLedger
- `xrpl.consensus.phase` — current phase name
- `xrpl.consensus.phase_duration_ms` — time spent in phase
- `xrpl.consensus.proposers` — number of trusted proposers
- `xrpl.consensus.tx_count` — transactions in proposed set
- `xrpl.consensus.disputes` — number of disputed transactions
- `xrpl.consensus.converge_percent` — convergence percentage
**Key modified files**:
- `src/xrpld/app/consensus/RCLConsensus.cpp`
---
## Task 4.6: Correlate Transaction and Consensus Traces
**Objective**: Link transaction traces from Phase 3 with consensus traces so you can follow a transaction from submission through consensus into the ledger.
**What to do**:
- In `onClose()` or `onAccept()`:
- When building the consensus position, link the round span to individual transaction spans using span links (if OTel SDK supports it) or events
- At minimum, record the transaction hashes included in the consensus set as span events: `tx.included` with `xrpl.tx.hash` attribute
- In `processTransactionSet()` (NetworkOPs):
- If the consensus round span context is available, create child spans for each transaction applied to the ledger
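A sketch of the minimum, event-based option; `roundSpan` and `consensusTxHashes` are illustrative names, and the attribute-taking `addEvent` overload only arrives later in Task 4a.0:
```cpp
// Sketch: record each transaction in the agreed set as an event on
// the consensus round span.
for (auto const& txHash : consensusTxHashes)
{
    roundSpan->addEvent(
        "tx.included", {{"xrpl.tx.hash", to_string(txHash)}});
}
```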
**Key modified files**:
- `src/xrpld/app/consensus/RCLConsensus.cpp`
- `src/xrpld/app/misc/NetworkOPs.cpp`
---
## Task 4.7: Build Verification and Testing
**Objective**: Verify all Phase 4 changes compile and don't affect consensus timing.
**What to do**:
1. Build with `telemetry=ON` — verify no compilation errors
2. Build with `telemetry=OFF` — verify no regressions (critical for consensus code)
3. Run existing consensus-related unit tests
4. Verify that all macros expand to no-ops when disabled
5. Check that no consensus-critical code paths are affected by instrumentation overhead
**Verification Checklist**:
- [ ] Build succeeds with telemetry ON
- [ ] Build succeeds with telemetry OFF
- [ ] Existing consensus tests pass
- [ ] No new includes in consensus headers when telemetry is OFF
- [ ] Phase timing instrumentation doesn't use blocking operations
---
## Task 4.8: Consensus Validation Span Enrichment — External Dashboard Parity
> **Source**: [External Dashboard Parity](../docs/superpowers/specs/2026-03-30-external-dashboard-parity-design.md) — adds validation agreement context inspired by the community [xrpl-validator-dashboard](https://github.com/realgrapedrop/xrpl-validator-dashboard).
>
> **Upstream**: Phase 4 tasks 4.1-4.4 (span creation must exist).
> **Downstream**: Phase 7 (ValidationTracker reads these attributes), Phase 10 (validation checks).
**Objective**: Add ledger hash, validation type, and quorum data to consensus validation spans on both send and receive paths. This enables trace-level validation agreement analysis — filter by ledger hash to see which validators agreed for a given ledger.
**What to do**:
- Edit `src/xrpld/app/consensus/RCLConsensus.cpp`:
- On the `consensus.validation.send` span (in `validate()` / `doAccept()`):
- Add `xrpl.validation.ledger_hash` (string) — the ledger hash being validated
- Add `xrpl.validation.full` (bool) — whether this is a full validation (not partial)
- On the `consensus.accept` span (in `onAccept()`):
- Add `xrpl.consensus.validation_quorum` (int64) — from `app_.validators().quorum()`
- Add `xrpl.consensus.proposers_validated` (int64) — from `result.proposers`
- Edit `src/xrpld/overlay/detail/PeerImp.cpp`:
- On the `peer.validation.receive` span:
- Add `xrpl.peer.validation.ledger_hash` (string) — from deserialized `STValidation` object
- Add `xrpl.peer.validation.full` (bool) — from `STValidation` flags
**New span attributes**:
| Span | Attribute | Type | Source |
| --------------------------- | ------------------------------------ | ------ | --------------------------------- |
| `consensus.validation.send` | `xrpl.validation.ledger_hash` | string | Ledger hash from validate() args |
| `consensus.validation.send` | `xrpl.validation.full` | bool | Full vs partial validation |
| `peer.validation.receive` | `xrpl.peer.validation.ledger_hash` | string | From STValidation deserialization |
| `peer.validation.receive` | `xrpl.peer.validation.full` | bool | From STValidation flags |
| `consensus.accept` | `xrpl.consensus.validation_quorum` | int64 | `app_.validators().quorum()` |
| `consensus.accept` | `xrpl.consensus.proposers_validated` | int64 | `result.proposers` |
**Rationale**: The external dashboard's most valuable feature is validation agreement tracking. By recording the ledger hash on both outgoing and incoming validation spans, we create the raw data for agreement analysis at the trace level. Example Tempo query:
```
{name="consensus.validation.send"} | xrpl.validation.ledger_hash = "A1B2C3..."
```
Phase 7's `ValidationTracker` builds metric-level aggregation (1h/24h agreement %) on top of this data.
**Key modified files**:
- `src/xrpld/app/consensus/RCLConsensus.cpp`
- `src/xrpld/overlay/detail/PeerImp.cpp`
**Exit Criteria**:
- [ ] `consensus.validation.send` spans carry `xrpl.validation.ledger_hash` and `xrpl.validation.full`
- [ ] `peer.validation.receive` spans carry `xrpl.peer.validation.ledger_hash` and `xrpl.peer.validation.full`
- [ ] `consensus.accept` spans carry `xrpl.consensus.validation_quorum` and `xrpl.consensus.proposers_validated`
- [ ] Ledger hash attributes match between send and receive for the same ledger
- [ ] No impact on consensus performance
---
## Summary
| Task | Description | New Files | Modified Files | Depends On |
| ---- | ------------------------------------------- | --------- | -------------- | ------------- |
| 4.1 | Consensus round start instrumentation | 0 | 2 | Phase 3 |
| 4.2 | Phase transition instrumentation | 0 | 1-2 | 4.1 |
| 4.3 | Proposal handling instrumentation | 0 | 1 | 4.1 |
| 4.4 | Validation handling instrumentation | 0 | 1-2 | 4.1 |
| 4.5 | Consensus-specific attributes | 0 | 1 | 4.2, 4.3, 4.4 |
| 4.6 | Transaction-consensus correlation | 0 | 2 | 4.2, Phase 3 |
| 4.7 | Build verification and testing | 0 | 0 | 4.1-4.6 |
| 4.8 | Validation span enrichment (ext. dashboard) | 0 | 2 | 4.4 |
**Parallel work**: Tasks 4.2, 4.3, and 4.4 can run in parallel after 4.1 is complete. Task 4.5 depends on all three. Task 4.6 depends on 4.2 and Phase 3. Task 4.8 depends on 4.4 (validation spans must exist).
### Implemented Spans
| Span Name | Method | Key Attributes |
| --------------------------- | ---------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `consensus.proposal.send` | `Adaptor::propose` | `xrpl.consensus.round` |
| `consensus.ledger_close` | `Adaptor::onClose` | `xrpl.consensus.ledger.seq`, `xrpl.consensus.mode` |
| `consensus.accept` | `Adaptor::onAccept` | `xrpl.consensus.proposers`, `xrpl.consensus.round_time_ms` |
| `consensus.accept.apply` | `Adaptor::doAccept` | `xrpl.consensus.close_time`, `close_time_correct`, `close_resolution_ms`, `state`, `proposing`, `round_time_ms`, `ledger.seq`, `parent_close_time`, `close_time_self`, `close_time_vote_bins`, `resolution_direction` |
| `consensus.validation.send` | `Adaptor::onAccept` (via validate) | `xrpl.consensus.proposing` |
#### Close Time Attributes (consensus.accept.apply)
The `consensus.accept.apply` span captures ledger close time agreement details
driven by `avCT_CONSENSUS_PCT` (75% validator agreement threshold):
- **`xrpl.consensus.close_time`** — Agreed-upon ledger close time (epoch seconds). When validators disagree (`consensusCloseTime == epoch`), this is synthetically set to `prevCloseTime + 1s`.
- **`xrpl.consensus.close_time_correct`** — `true` if validators reached agreement, `false` if they "agreed to disagree" (close time forced to prev+1s).
- **`xrpl.consensus.close_resolution_ms`** — Rounding granularity for close time (starts at 30s, decreases as ledger interval stabilizes).
- **`xrpl.consensus.state`** — `"finished"` (normal) or `"moved_on"` (consensus failed, adopted best available).
- **`xrpl.consensus.proposing`** — Whether this node was proposing.
- **`xrpl.consensus.round_time_ms`** — Total consensus round duration.
- **`xrpl.consensus.parent_close_time`** — Previous ledger's close time (epoch seconds). Enables computing close-time deltas across consecutive rounds without correlating separate spans.
- **`xrpl.consensus.close_time_self`** — This node's own proposed close time before consensus voting.
- **`xrpl.consensus.close_time_vote_bins`** — Number of distinct close-time vote bins from peer proposals. Higher values indicate less agreement among validators.
- **`xrpl.consensus.resolution_direction`** — Whether close-time resolution `"increased"` (coarser), `"decreased"` (finer), or stayed `"unchanged"` relative to the previous ledger.
**Exit Criteria** (from [06-implementation-phases.md §6.11.4](./06-implementation-phases.md)):
- [x] Complete consensus round traces
- [x] Phase transitions visible
- [x] Proposals and validations traced
- [x] Close time agreement tracked (per `avCT_CONSENSUS_PCT`)
- [x] No impact on consensus timing
---
# Phase 4a: Establish-Phase Gap Fill & Cross-Node Correlation
> **Goal**: Fill tracing gaps in the consensus establish phase (disputes, convergence,
> threshold escalation, mode changes) and establish cross-node correlation using a
> deterministic shared trace ID derived from `previousLedger.id()`.
>
> **Approach**: Direct instrumentation in `Consensus.h` — the generic consensus
> template has full access to internal state (`convergePercent_`, `result_->disputes`,
> `mode_`, threshold logic). Telemetry access comes via a single new adaptor
> method `getTelemetry()`. Long-lived spans (round, establish) are stored as
> class members using `SpanGuard` directly — NOT the `XRPL_TRACE_*` convenience
> macros (which create local variables named `_xrpl_guard_`). Short-lived
> scoped spans (update_positions, check) can use the macros. All code compiles
> to no-ops when `XRPL_ENABLE_TELEMETRY` is not defined.
>
> **Branch**: `pratik/otel-phase4-consensus-tracing`
## Design: Switchable Correlation Strategy
Two strategies for cross-node trace correlation, switchable via config:
### Strategy A — Deterministic Trace ID (Default)
Derive `trace_id = SHA256(previousLedger.id())[0:16]` so all nodes in the same
consensus round share the same trace_id without P2P context propagation.
- **Pros**: All nodes appear in the same trace in Tempo/Jaeger automatically.
No collector-side post-processing needed.
- **Cons**: Overrides OTel's random trace_id generation; requires custom
`IdGenerator` or manual span context construction.
### Strategy B — Attribute-Based Correlation
Use normal random trace_id but attach `xrpl.consensus.ledger_id` as an attribute
on every consensus span. Correlation happens at query time via Tempo/Grafana
`by attribute` queries.
- **Pros**: Standard OTel trace_id semantics; no SDK customization.
- **Cons**: Cross-node correlation requires query-time joins, not automatic.
### Config
```ini
[telemetry]
# "deterministic" (default) or "attribute"
consensus_trace_strategy=deterministic
```
### Implementation
In `RCLConsensus::Adaptor::startRound()`:
- If `deterministic`:
1. Compute `trace_id_bytes = SHA256(prevLedgerID)[0:16]`
2. Construct `opentelemetry::trace::TraceId(trace_id_bytes)`
3. Create a synthetic `SpanContext` with this trace_id and a random span_id:
```cpp
auto traceId = opentelemetry::trace::TraceId(trace_id_bytes);
auto spanId = opentelemetry::trace::SpanId(random_8_bytes);
auto syntheticCtx = opentelemetry::trace::SpanContext(
traceId, spanId, opentelemetry::trace::TraceFlags(1), false);
```
4. Wrap in `opentelemetry::context::Context` via
   `opentelemetry::trace::SetSpan(context, syntheticSpan)`, where
   `syntheticSpan` is an `opentelemetry::trace::DefaultSpan` built from `syntheticCtx`
5. Call `startSpan("consensus.round", parentContext)` so the new span
inherits the deterministic trace_id.
- If `attribute`: start a normal `consensus.round` span, set
`xrpl.consensus.ledger_id = previousLedger.id()` as attribute.
Both strategies always set `xrpl.consensus.round_id` (round number) and
`xrpl.consensus.ledger_id` (previous ledger hash) as attributes.
---
## Design: Span Hierarchy
```
consensus.round (root — created in RCLConsensus::startRound, closed at accept)
│ link → previous round's SpanContext (follows-from)
├── consensus.establish (phaseEstablish → acceptance, in Consensus.h)
│ ├── consensus.update_positions (each updateOurPositions call)
│ │ └── consensus.dispute.resolve (per-tx dispute resolution event)
│ ├── consensus.check (each haveConsensus call)
│ └── consensus.mode_change (short-lived span in adaptor on mode transition)
├── consensus.accept (existing onAccept span — reparented under round)
└── consensus.validation.send (existing — reparented, follows-from link to round)
```
### Span Links (follows-from relationships)
| Link Source | Link Target | Rationale |
| ----------------------------------------- | -------------------------- | ------------------------------------------------------------------------------ |
| `consensus.round` (N+1) | `consensus.round` (N) | Causal chain: round N+1 exists because round N accepted |
| `consensus.validation.send` | `consensus.round` | Validation follows from the round that produced it; may outlive the round span |
| _(Phase 4b)_ Received proposal processing | Sender's `consensus.round` | Cross-node causal link via P2P context propagation |
---
## Task 4a.0: Prerequisites — Extend SpanGuard and Telemetry APIs
**Objective**: Add missing API surface needed by later tasks.
**What to do**:
1. **Add `SpanGuard::addEvent()` with attributes** (needed by Task 4a.5):
The current `addEvent(string_view name)` only accepts a name. Add an
overload that accepts key-value attributes:
```cpp
void addEvent(std::string_view name,
std::initializer_list<
std::pair<opentelemetry::nostd::string_view,
opentelemetry::common::AttributeValue>> attributes)
{
span_->AddEvent(std::string(name), attributes);
}
```
2. **Add a `Telemetry::startSpan()` overload that accepts span links** (needed by Tasks 4a.2, 4a.8):
The current `startSpan()` has no span link support. Add an overload that
accepts a vector of `SpanContext` links for follows-from relationships:
```cpp
virtual opentelemetry::nostd::shared_ptr<opentelemetry::trace::Span>
startSpan(
std::string_view name,
opentelemetry::context::Context const& parentContext,
std::vector<opentelemetry::trace::SpanContext> const& links,
opentelemetry::trace::SpanKind kind = opentelemetry::trace::SpanKind::kInternal) = 0;
```
3. **Add `XRPL_TRACE_ADD_EVENT` macro** (needed by Task 4a.5):
Add to `TracingInstrumentation.h` to expose `addEvent(name, attrs)` through
the macro interface (consistent with `XRPL_TRACE_SET_ATTR` pattern):
```cpp
#ifdef XRPL_ENABLE_TELEMETRY
#define XRPL_TRACE_ADD_EVENT(name, ...) \
if (_xrpl_guard_.has_value()) \
{ \
_xrpl_guard_->addEvent(name, __VA_ARGS__); \
}
#else
#define XRPL_TRACE_ADD_EVENT(name, ...) ((void)0)
#endif
```
**Key modified files**:
- `include/xrpl/telemetry/SpanGuard.h` — add `addEvent()` overload
- `include/xrpl/telemetry/Telemetry.h` — add `startSpan()` with links
- `src/xrpld/telemetry/Telemetry.cpp` — implement new overload
- `src/xrpld/telemetry/NullTelemetry.cpp` — no-op implementation
- `src/xrpld/telemetry/TracingInstrumentation.h` — add `XRPL_TRACE_ADD_EVENT` macro
---
## Task 4a.1: Adaptor `getTelemetry()` Method
**Objective**: Give `Consensus.h` access to the telemetry subsystem without
coupling the generic template to OTel headers.
**What to do**:
- Add `getTelemetry()` method to the Adaptor concept (returns
`xrpl::telemetry::Telemetry&`). The return type is already forward-declared
behind `#ifdef XRPL_ENABLE_TELEMETRY`.
- Implement in `RCLConsensus::Adaptor` — delegates to `app_.getTelemetry()`.
- In `Consensus.h`, the `XRPL_TRACE_*` macros call
`adaptor_.getTelemetry()` — when telemetry is disabled, the macros expand to
`((void)0)` and the method is never called.
**Key modified files**:
- `src/xrpld/app/consensus/RCLConsensus.h` — declare `getTelemetry()`
- `src/xrpld/app/consensus/RCLConsensus.cpp` — implement `getTelemetry()`
---
## Task 4a.2: Switchable Round Span with Deterministic Trace ID
**Objective**: Create a `consensus.round` root span in `startRound()` that uses
the switchable correlation strategy. Store span context as a member for child
spans in `Consensus.h`.
**What to do**:
- In `RCLConsensus::Adaptor::startRound()` (or a new helper):
- Read `consensus_trace_strategy` from config.
- **Deterministic**: compute `trace_id = SHA256(prevLedgerID)[0:16]`.
Construct a `SpanContext` with this trace_id, then start
`consensus.round` span as child of that context.
- **Attribute**: start normal `consensus.round` span.
- Set attributes on both: `xrpl.consensus.round_id`,
`xrpl.consensus.ledger_id`, `xrpl.consensus.ledger.seq`,
`xrpl.consensus.mode`.
- Store the round span in `Consensus` as a member (see Task 4a.3).
- If a previous round's span context is available, add a **span link**
(follows-from) to establish the round chain.
- Add `createDeterministicTraceId(hash)` utility to
`include/xrpl/telemetry/Telemetry.h` (returns 16-byte trace ID from a
256-bit hash by truncation).
- Add `consensus_trace_strategy` to `Telemetry::Setup` and
`TelemetryConfig.cpp` parser:
```cpp
/** Cross-node correlation strategy: "deterministic" or "attribute". */
std::string consensusTraceStrategy = "deterministic";
```
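A sketch of the truncation utility; it assumes the input is already the SHA-256 digest held as a rippled `uint256`:
```cpp
// Sketch: the first 16 bytes of a 256-bit hash become the trace ID.
inline opentelemetry::trace::TraceId
createDeterministicTraceId(uint256 const& digest)
{
    return opentelemetry::trace::TraceId(
        opentelemetry::nostd::span<uint8_t const, 16>{
            reinterpret_cast<uint8_t const*>(digest.data()), 16});
}
```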
**Key modified files**:
- `src/xrpld/app/consensus/RCLConsensus.cpp`
- `include/xrpl/telemetry/Telemetry.h` — `createDeterministicTraceId()`
- `src/xrpld/telemetry/TelemetryConfig.cpp` — parse new config option
---
## Task 4a.3: Span Members in `Consensus.h`
**Objective**: Add span storage to the `Consensus` class so that spans created
in `startRound()` (adaptor) are accessible from `phaseEstablish()`,
`updateOurPositions()`, and `haveConsensus()` (template methods).
**What to do**:
- Add to `Consensus` private members (guarded by `#ifdef XRPL_ENABLE_TELEMETRY`):
```cpp
#ifdef XRPL_ENABLE_TELEMETRY
std::optional<xrpl::telemetry::SpanGuard> roundSpan_;
std::optional<xrpl::telemetry::SpanGuard> establishSpan_;
opentelemetry::context::Context prevRoundContext_;
#endif
```
- `roundSpan_` is created in `startRound()` via the adaptor and stored.
Its `SpanGuard::Scope` member keeps the span active on the thread context
for the entire round lifetime.
- `establishSpan_` is created when entering phaseEstablish and cleared on accept.
It becomes a child of `roundSpan_` via OTel's thread-local context propagation.
- `prevRoundContext_` stores the previous round's context for follows-from links.
**Threading assumption**: `startRound()`, `phaseEstablish()`, `updateOurPositions()`,
and `haveConsensus()` all run on the same thread (the consensus job queue thread).
This is required for the `SpanGuard::Scope`-based parent-child hierarchy to work.
The `Consensus` class documentation confirms it is NOT thread-safe and calls are
serialized by the application.
- Add conditional include at top of `Consensus.h`:
```cpp
#ifdef XRPL_ENABLE_TELEMETRY
#include <xrpl/telemetry/SpanGuard.h>
#include <xrpld/telemetry/TracingInstrumentation.h>
#endif
```
**Key modified files**:
- `src/xrpld/consensus/Consensus.h`
---
## Task 4a.4: Instrument `phaseEstablish()`
**Objective**: Create `consensus.establish` span wrapping the establish phase,
with attributes for convergence progress.
**What to do**:
- At the start of `phaseEstablish()` (line 1298), if `establishSpan_` is not
yet created, create it as child of `roundSpan_` using the **direct API**
(NOT the `XRPL_TRACE_CONSENSUS` macro, which creates a local variable):
```cpp
#ifdef XRPL_ENABLE_TELEMETRY
if (!establishSpan_ && adaptor_.getTelemetry().shouldTraceConsensus())
{
establishSpan_.emplace(
adaptor_.getTelemetry().startSpan("consensus.establish"));
}
#endif
```
- Set attributes on each call:
- `xrpl.consensus.converge_percent` — `convergePercent_`
- `xrpl.consensus.establish_count` — `establishCounter_`
- `xrpl.consensus.proposers` — `currPeerPositions_.size()`
- On phase exit (transition to accept), close the establish span and record
final duration.
**Key modified files**:
- `src/xrpld/consensus/Consensus.h` — `phaseEstablish()` method
---
## Task 4a.5: Instrument `updateOurPositions()`
**Objective**: Trace each position update cycle including dispute resolution
details.
**What to do**:
- At the start of `updateOurPositions()` (line 1418), create a scoped child
span. This method is called and returns within a single `phaseEstablish()`
call, so the `XRPL_TRACE_CONSENSUS` macro works here (scoped local):
```cpp
XRPL_TRACE_CONSENSUS(adaptor_.getTelemetry(), "consensus.update_positions");
```
- Set attributes:
- `xrpl.consensus.disputes_count` — `result_->disputes.size()`
- `xrpl.consensus.converge_percent` — current convergence
- `xrpl.consensus.proposers_agreed` — count of peers with same position
- `xrpl.consensus.proposers_total` — total peer positions
- Inside the dispute resolution loop, for each dispute that changes our vote,
add an **event** with attributes using `XRPL_TRACE_ADD_EVENT` (from Task 4a.0):
```cpp
XRPL_TRACE_ADD_EVENT("dispute.resolve", {
{"xrpl.tx.id", std::string(tx_id)},
{"xrpl.dispute.our_vote", our_vote},
{"xrpl.dispute.yays", static_cast<int64_t>(yays)},
{"xrpl.dispute.nays", static_cast<int64_t>(nays)}
});
```
**Key modified files**:
- `src/xrpld/consensus/Consensus.h` — `updateOurPositions()` method
---
## Task 4a.6: Instrument `haveConsensus()` (Threshold & Convergence)
**Objective**: Trace consensus checking including threshold escalation
(`ConsensusParms::AvalancheState::{init, mid, late, stuck}`).
**What to do**:
- At the start of `haveConsensus()` (line 1598), create a scoped child span:
```cpp
XRPL_TRACE_CONSENSUS(adaptor_.getTelemetry(), "consensus.check");
```
- Set attributes:
- `xrpl.consensus.agree_count` — peers that agree with our position
- `xrpl.consensus.disagree_count` — peers that disagree
- `xrpl.consensus.converge_percent` — convergence percentage
- `xrpl.consensus.result` — ConsensusState result (Yes/No/MovedOn)
- The free function `checkConsensus()` in `Consensus.cpp` (line 151) determines
thresholds based on `currentAgreeTime`. Threshold values come from
`ConsensusParms::avalancheCutoffs` (defined in `ConsensusParms.h`).
The escalation states are `ConsensusParms::AvalancheState::{init, mid, late, stuck}`.
Record the effective threshold as an attribute on the span:
- `xrpl.consensus.threshold_percent` — current threshold from `avalancheCutoffs`
**Key modified files**:
- `src/xrpld/consensus/Consensus.h` — `haveConsensus()` method
---
## Task 4a.7: Instrument Mode Changes
**Objective**: Trace consensus mode transitions (proposing ↔ observing,
wrongLedger, switchedLedger).
**What to do**:
Mode changes are rare (typically 0-1 per round), so a **standalone short-lived
span** is appropriate (not an event). This captures timing of the mode change
itself.
- In `RCLConsensus::Adaptor::onModeChange()`, create a scoped span:
```cpp
XRPL_TRACE_CONSENSUS(app_.getTelemetry(), "consensus.mode_change");
XRPL_TRACE_SET_ATTR("xrpl.consensus.mode.old", to_string(before).c_str());
XRPL_TRACE_SET_ATTR("xrpl.consensus.mode.new", to_string(after).c_str());
```
- Note: `MonitoredMode::set()` (line 304 in `Consensus.h`) calls
`adaptor_.onModeChange(before, after)` — so the span is created in the
adaptor, which already has telemetry access. No instrumentation needed
in `Consensus.h` for this task.
**Key modified files**:
- `src/xrpld/app/consensus/RCLConsensus.cpp` — `onModeChange()`
---
## Task 4a.8: Reparent Existing Spans Under Round
**Objective**: Make existing consensus spans (`consensus.accept`,
`consensus.accept.apply`, `consensus.validation.send`) children of the
`consensus.round` root span instead of being standalone.
**What to do**:
- The existing spans in `onAccept()`, `doAccept()`, and `validate()` use
`XRPL_TRACE_CONSENSUS(app_.getTelemetry(), ...)` which creates standalone
spans on the current thread's context.
- After Task 4a.2 creates the round span and stores it, these methods run on
the same thread within the round span's scope, so they automatically become
children. Verify this works correctly.
- For `consensus.validation.send`: add a **span link** (follows-from) to the
round span context, since the validation may be processed after the round
completes.
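A sketch of the follows-from link, using the links-aware `startSpan` overload from Task 4a.0 and the `roundSpanContext_` snapshot described in the implementation notes below:
```cpp
// Sketch: link consensus.validation.send back to the round that
// produced it, so the two correlate even if the round span has ended.
std::vector<opentelemetry::trace::SpanContext> links{roundSpanContext_};
auto span = app_.getTelemetry().startSpan(
    "consensus.validation.send",
    opentelemetry::context::RuntimeContext::GetCurrent(),
    links);
```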
**Key modified files**:
- `src/xrpld/app/consensus/RCLConsensus.cpp` — verify parent-child hierarchy
---
## Task 4a.9: Build Verification and Testing
**Objective**: Verify all Phase 4a changes compile cleanly with telemetry ON
and OFF, and don't affect consensus timing.
**What to do**:
1. Build with `telemetry=ON` — verify no compilation errors
2. Build with `telemetry=OFF` — verify macros expand to no-ops, no new includes
leak into `Consensus.h` when disabled
3. Run existing consensus unit tests
4. Verify `#ifdef XRPL_ENABLE_TELEMETRY` guards on all new members in
`Consensus.h`
5. Run `pccl` pre-commit checks
**Verification Checklist**:
- [x] Build succeeds with telemetry ON
- [x] Build succeeds with telemetry OFF
- [x] Existing consensus tests pass
- [x] `Consensus.h` has zero OTel includes when telemetry is OFF
- [x] No new virtual calls in hot consensus paths
- [x] `pccl` passes
---
## Phase 4a Summary
| Task | Description | New Files | Modified Files | Depends On |
| ---- | ------------------------------------------------ | --------- | -------------- | ---------- |
| 4a.0 | Prerequisites: extend SpanGuard & Telemetry APIs | 0 | 4 | Phase 4 |
| 4a.1 | Adaptor `getTelemetry()` method | 0 | 2 | Phase 4 |
| 4a.2 | Switchable round span with deterministic traceID | 0 | 3 | 4a.0, 4a.1 |
| 4a.3 | Span members in `Consensus.h` | 0 | 1 | 4a.1 |
| 4a.4 | Instrument `phaseEstablish()` | 0 | 1 | 4a.3 |
| 4a.5 | Instrument `updateOurPositions()` | 0 | 1 | 4a.0, 4a.3 |
| 4a.6 | Instrument `haveConsensus()` (thresholds) | 0 | 1 | 4a.3 |
| 4a.7 | Instrument mode changes | 0 | 1 | 4a.1 |
| 4a.8 | Reparent existing spans under round | 0 | 1 | 4a.0, 4a.2 |
| 4a.9 | Build verification and testing | 0 | 0 | 4a.0-4a.8 |
**Parallel work**: Tasks 4a.0 and 4a.1 can run in parallel. Tasks 4a.4, 4a.5, 4a.6, and 4a.7 can run in parallel after 4a.3 (and 4a.0 for 4a.5).
### New Spans (Phase 4a)
| Span Name | Location | Key Attributes |
| ---------------------------- | ------------------ | ---------------------------------------------------------------------------------- |
| `consensus.round` | `RCLConsensus.cpp` | `round_id`, `ledger_id`, `ledger.seq`, `mode`; link → prev round |
| `consensus.establish` | `Consensus.h` | `converge_percent`, `establish_count`, `proposers` |
| `consensus.update_positions` | `Consensus.h` | `disputes_count`, `converge_percent`, `proposers_agreed`, `proposers_total` |
| `consensus.check` | `Consensus.h` | `agree_count`, `disagree_count`, `converge_percent`, `result`, `threshold_percent` |
| `consensus.mode_change` | `RCLConsensus.cpp` | `mode.old`, `mode.new` |
### New Events (Phase 4a)
| Event Name | Parent Span | Attributes |
| ----------------- | ---------------------------- | ----------------------------------- |
| `dispute.resolve` | `consensus.update_positions` | `tx_id`, `our_vote`, `yays`, `nays` |
### New Attributes (Phase 4a)
```cpp
// Round-level (on consensus.round)
"xrpl.consensus.round_id" = int64 // Consensus round number
"xrpl.consensus.ledger_id" = string // previousLedger.id() hash
"xrpl.consensus.trace_strategy" = string // "deterministic" or "attribute"
// Establish-level
"xrpl.consensus.converge_percent" = int64 // Convergence % (0-100+)
"xrpl.consensus.establish_count" = int64 // Number of establish iterations
"xrpl.consensus.disputes_count" = int64 // Active disputes
"xrpl.consensus.proposers_agreed" = int64 // Peers agreeing with us
"xrpl.consensus.proposers_total" = int64 // Total peer positions
"xrpl.consensus.agree_count" = int64 // Peers that agree (haveConsensus)
"xrpl.consensus.disagree_count" = int64 // Peers that disagree
"xrpl.consensus.threshold_percent" = int64 // Current threshold (50/65/70/95)
"xrpl.consensus.result" = string // "yes", "no", "moved_on"
// Mode change
"xrpl.consensus.mode.old" = string // Previous mode
"xrpl.consensus.mode.new" = string // New mode
```
### Implementation Notes
- **Separation of concerns**: All non-trivial telemetry code extracted to private
helpers (`startRoundTracing`, `createValidationSpan`, `startEstablishTracing`,
`updateEstablishTracing`, `endEstablishTracing`). Business logic methods contain
only single-line `#ifdef` blocks calling these helpers.
- **Thread safety**: `createValidationSpan()` runs on the jtACCEPT worker thread.
Instead of accessing `roundSpan_` across threads, a `roundSpanContext_` snapshot
(lightweight `SpanContext` value type) is captured on the consensus thread in
`startRoundTracing()` and read by `createValidationSpan()`. The job queue
provides the happens-before guarantee; a sketch follows these notes.
- **Macro safety**: `XRPL_TRACE_ADD_EVENT` uses `do { } while (0)` to prevent
dangling-else issues.
- **Config validation**: `consensus_trace_strategy` is validated to be either
`"deterministic"` or `"attribute"`, falling back to `"deterministic"` for
unrecognised values.
- **Plan deviation**: `roundSpan_` is stored in `RCLConsensus::Adaptor` (not
`Consensus.h`) because the adaptor has access to telemetry config and can
implement the deterministic trace ID strategy. `establishSpan_` is correctly
in `Consensus.h` as planned.
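The thread-safety note is the subtlest of these. A minimal sketch of the snapshot pattern, assuming the class skeleton and tracer wiring (method names follow the notes above; everything else is illustrative):
```cpp
// Minimal sketch of the cross-thread SpanContext snapshot.
#include <opentelemetry/nostd/shared_ptr.h>
#include <opentelemetry/trace/span_context.h>
#include <opentelemetry/trace/tracer.h>

namespace otel = opentelemetry::trace;

class AdaptorSketch
{
    opentelemetry::nostd::shared_ptr<otel::Tracer> tracer_;
    otel::SpanContext roundSpanContext_ = otel::SpanContext::GetInvalid();

public:
    // Runs on the consensus thread.
    void startRoundTracing()
    {
        auto roundSpan = tracer_->StartSpan("consensus.round");
        // Snapshot the lightweight SpanContext value. The job queue's
        // enqueue/dequeue pair establishes happens-before, so the worker
        // thread reads a fully written value without extra locking.
        roundSpanContext_ = roundSpan->GetContext();
    }

    // Runs on the jtACCEPT worker thread.
    void createValidationSpan()
    {
        otel::StartSpanOptions opts;
        opts.parent = roundSpanContext_;  // parent taken from the snapshot
        auto span = tracer_->StartSpan("consensus.validation.send", opts);
    }
};
```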
---
# Phase 4b: Cross-Node Propagation (Future — Documentation Only)
> **Goal**: Wire `TraceContextPropagator` for P2P messages so that proposals
> and validations carry trace context between nodes. This enables true
> distributed tracing where a proposal sent by Node A creates a child span
> on Node B.
>
> **Status**: NOT IMPLEMENTED. The protobuf fields and propagator class exist
> but are not wired. This section documents the design for future work.
## Architecture
```
Node A (proposing) Node B (receiving)
───────────────── ──────────────────
consensus.round consensus.round
├── propose() ├── peerProposal()
│ └── TraceContextPropagator │ └── TraceContextPropagator
│ ::injectToProtobuf( │ ::extractFromProtobuf(
│ TMProposeSet.trace_context) │ TMProposeSet.trace_context)
│ │ └── span link → Node A's context
└── validate() └── onValidation()
└── inject into TMValidation └── extract from TMValidation
```
## Wiring Points
| Message | Inject Location | Extract Location | Protobuf Field |
| --------------- | ---------------------------------- | ----------------------------------- | -------------------------- |
| `TMProposeSet` | `Adaptor::propose()` | `PeerImp::onMessage(TMProposeSet)` | field 1001: `TraceContext` |
| `TMValidation` | `Adaptor::validate()` | `PeerImp::onMessage(TMValidation)` | field 1001: `TraceContext` |
| `TMTransaction` | `NetworkOPs::processTransaction()` | `PeerImp::onMessage(TMTransaction)` | field 1001: `TraceContext` |
## Span Link Semantics
Received messages use **span links** (follows-from), NOT parent-child (sketch after this list):
- The receiver's processing span links to the sender's context
- This preserves each node's independent trace tree
- Cross-node correlation visible via linked traces in Tempo/Jaeger
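A minimal receive-side sketch, assuming `remoteCtx` was produced by `TraceContextPropagator::extractFromProtobuf()`; the handler name, signature, and span name are illustrative:
```cpp
// Attach the sender's context as a span link, not as a parent, so this
// node's trace tree stays independent while still correlating with the
// sender.
#include <opentelemetry/nostd/shared_ptr.h>
#include <opentelemetry/trace/span_context.h>
#include <opentelemetry/trace/tracer.h>

namespace otel = opentelemetry::trace;

void onProposalReceived(
    opentelemetry::nostd::shared_ptr<otel::Tracer> tracer,
    otel::SpanContext const& remoteCtx)  // from extractFromProtobuf()
{
    // No span attributes; one link to the sender's span, no link attributes.
    auto span = tracer->StartSpan("proposal.receive", {}, {{remoteCtx, {}}});
    // ... process the proposal under this span ...
    span->End();
}
```
Backends render such links as follows-from references, so an operator can jump from Node B's receive span to Node A's originating trace.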
## Interaction with Deterministic Trace ID (Strategy A)
When using deterministic trace_id (Phase 4a default), cross-node spans already
share the same trace_id. P2P propagation adds **span-level** linking:
- Without propagation: spans from different nodes appear in the same trace
(same trace_id) but without parent-child or follows-from relationships.
- With propagation: spans have explicit links showing which proposal/validation
from Node A caused processing on Node B.
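For concreteness, a hypothetical derivation of the shared trace_id. A production version should hash the raw ledger-hash bytes cryptographically; `std::hash` here is only a stand-in, and the function name is an assumption:
```cpp
// Hypothetical Strategy A helper: every node derives the same 16-byte
// trace_id from values it already agrees on (previous ledger hash and
// round number), so spans from different nodes land in one trace with
// no network coordination.
#include <opentelemetry/trace/trace_id.h>
#include <cstdint>
#include <cstring>
#include <functional>
#include <string>

opentelemetry::trace::TraceId
deterministicTraceId(std::string const& prevLedgerHash, std::uint64_t roundId)
{
    std::uint64_t hi = std::hash<std::string>{}(prevLedgerHash);
    std::uint64_t lo =
        std::hash<std::string>{}(prevLedgerHash + std::to_string(roundId));

    std::uint8_t bytes[16];
    std::memcpy(bytes, &hi, sizeof(hi));
    std::memcpy(bytes + 8, &lo, sizeof(lo));
    return opentelemetry::trace::TraceId(bytes);
}
```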
## Prerequisites
- Phase 4a (this task list) — establish phase tracing must be in place
- `TraceContextPropagator` class (already exists in
`include/xrpl/telemetry/TraceContextPropagator.h`)
- Protobuf `TraceContext` message (already exists, field 1001)

View File

@@ -0,0 +1,241 @@
# Phase 5: Documentation & Deployment Task List
> **Goal**: Production readiness — Grafana dashboards, spanmetrics pipeline, operator runbook, alert definitions, and final integration testing. This phase ensures the telemetry system is useful and maintainable in production.
>
> **Scope**: Grafana dashboard definitions, OTel Collector spanmetrics connector, Prometheus integration, alert rules, operator documentation, and production-ready Docker Compose stack.
>
> **Branch**: `pratik/otel-phase5-docs-deployment` (from `pratik/otel-phase4-consensus-tracing`)
### Related Plan Documents
| Document | Relevance |
| ---------------------------------------------------------------- | -------------------------------------------------------------------------- |
| [07-observability-backends.md](./07-observability-backends.md) | Tempo setup (§7.1), Grafana dashboards (§7.6), alerts (§7.6.3) |
| [05-configuration-reference.md](./05-configuration-reference.md) | Collector config (§5.5), production config (§5.5.2), Docker Compose (§5.6) |
| [06-implementation-phases.md](./06-implementation-phases.md) | Phase 5 tasks (§6.6), definition of done (§6.11.5) |
---
## Task 5.1: Add Spanmetrics Connector to OTel Collector
**Objective**: Derive RED metrics (Rate, Errors, Duration) from trace spans automatically, enabling Grafana time-series dashboards.
**What to do**:
- Edit `docker/telemetry/otel-collector-config.yaml`:
- Add `spanmetrics` connector:
```yaml
connectors:
spanmetrics:
histogram:
explicit:
buckets: [1ms, 5ms, 10ms, 25ms, 50ms, 100ms, 250ms, 500ms, 1s, 5s]
dimensions:
- name: xrpl.rpc.command
- name: xrpl.rpc.status
- name: xrpl.consensus.phase
- name: xrpl.tx.type
```
- Add `prometheus` exporter:
```yaml
exporters:
prometheus:
endpoint: 0.0.0.0:8889
```
- Wire the pipeline:
```yaml
service:
pipelines:
traces:
receivers: [otlp]
processors: [batch]
exporters: [debug, otlp/tempo, spanmetrics]
metrics:
receivers: [spanmetrics]
exporters: [prometheus]
```
- Edit `docker/telemetry/docker-compose.yml`:
- Expose port `8889` on the collector for Prometheus scraping
- Add Prometheus service
- Add Prometheus as Grafana datasource
**Key modified files**:
- `docker/telemetry/otel-collector-config.yaml`
- `docker/telemetry/docker-compose.yml`
**Key new files**:
- `docker/telemetry/prometheus.yml` (Prometheus scrape config; sketched below)
- `docker/telemetry/grafana/provisioning/datasources/prometheus.yaml`
**Reference**:
- [POC_taskList.md §Next Steps](./POC_taskList.md) — Metrics pipeline for Grafana dashboards
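A plausible starting point for the scrape config named above; the job name and interval are placeholders:
```yaml
# docker/telemetry/prometheus.yml (sketch): scrape the collector's
# spanmetrics endpoint exposed on 8889 inside the compose network.
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: otel-collector-spanmetrics
    static_configs:
      - targets: ["otel-collector:8889"]
```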
---
## Task 5.2: Create Grafana Dashboards
**Objective**: Provide pre-built Grafana dashboards for RPC performance, transaction lifecycle, and consensus health.
**What to do**:
- Create `docker/telemetry/grafana/provisioning/dashboards/dashboards.yaml` (provisioning config)
- Create dashboard JSON files:
1. **RPC Performance Dashboard** (`rpc-performance.json`):
- RPC request latency (p50/p95/p99) by command — histogram panel
- RPC throughput (requests/sec) by command — time series
- RPC error rate by command — bar gauge
- Top slowest RPC commands — table
2. **Transaction Overview Dashboard** (`transaction-overview.json`):
- Transaction processing rate — time series
- Transaction latency distribution — histogram
- Suppression rate (duplicates) — stat panel
- Transaction processing path (sync vs async) — pie chart
3. **Consensus Health Dashboard** (`consensus-health.json`):
- Consensus round duration — time series
- Phase duration breakdown (open/establish/accept) — stacked bar
- Proposals sent/received per round — stat panel
- Consensus mode distribution (proposing/observing) — pie chart
- Store dashboards in `docker/telemetry/grafana/dashboards/`
**Key new files**:
- `docker/telemetry/grafana/provisioning/dashboards/dashboards.yaml`
- `docker/telemetry/grafana/dashboards/rpc-performance.json`
- `docker/telemetry/grafana/dashboards/transaction-overview.json`
- `docker/telemetry/grafana/dashboards/consensus-health.json`
**Reference**:
- [07-observability-backends.md §7.6](./07-observability-backends.md) — Grafana dashboard specifications
- [01-architecture-analysis.md §1.8.3](./01-architecture-analysis.md) — Dashboard panel examples
---
## Task 5.3: Define Alert Rules
**Objective**: Create alert definitions for key telemetry anomalies.
**What to do**:
- Create `docker/telemetry/grafana/provisioning/alerting/alerts.yaml`:
- **RPC Latency Alert**: p99 latency > 1s for any command over 5 minutes
- **RPC Error Rate Alert**: Error rate > 5% for any command over 5 minutes
- **Consensus Duration Alert**: Round duration > 10s (warn), > 30s (critical)
- **Transaction Processing Alert**: Processing rate drops below threshold
- **Telemetry Pipeline Health**: No spans received for > 2 minutes
**Key new files**:
- `docker/telemetry/grafana/provisioning/alerting/alerts.yaml`
**Reference**:
- [07-observability-backends.md §7.6.3](./07-observability-backends.md) — Alert rule definitions
---
## Task 5.4: Production Collector Configuration
**Objective**: Create a production-ready OTel Collector configuration with tail-based sampling and resource limits.
**What to do**:
- Create `docker/telemetry/otel-collector-config-production.yaml`:
- Tail-based sampling policy:
- Always sample errors and slow traces
- 10% base sampling rate for normal traces
- Always sample first trace for each unique RPC command
- Resource limits (fragment sketched after this list):
- Memory limiter processor (80% of available memory)
- Queued retry for export failures
- TLS configuration for production endpoints
- Health check endpoint
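A hedged fragment for the resource-limit items; field names follow the Collector's `memory_limiter` processor and `otlp` exporter, and values are placeholders to tune:
```yaml
# Production hardening sketch: memory_limiter must run first in each
# pipeline; queueing and retry are configured on the exporter.
processors:
  memory_limiter:
    check_interval: 1s
    limit_percentage: 80
    spike_limit_percentage: 20
exporters:
  otlp/tempo:
    endpoint: tempo:4317
    sending_queue:
      enabled: true
      queue_size: 5000
    retry_on_failure:
      enabled: true
      max_elapsed_time: 5m
```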
**Key new files**:
- `docker/telemetry/otel-collector-config-production.yaml`
**Reference**:
- [05-configuration-reference.md §5.5.2](./05-configuration-reference.md) — Production collector config
---
## Task 5.5: Operator Runbook
**Objective**: Create operator documentation for managing the telemetry system in production.
**What to do**:
- Create `docs/telemetry-runbook.md`:
- **Setup**: How to enable telemetry in rippled
- **Configuration**: All config options with descriptions
- **Collector Deployment**: Docker Compose vs. Kubernetes vs. bare metal
- **Troubleshooting**: Common issues and resolutions
- No traces appearing
- High memory usage from telemetry
- Collector connection failures
- Sampling configuration tuning
- **Performance Tuning**: Batch size, queue size, sampling ratio guidelines
- **Upgrading**: How to upgrade OTel SDK and Collector versions
**Key new files**:
- `docs/telemetry-runbook.md`
---
## Task 5.6: Final Integration Testing
**Objective**: Validate the complete telemetry stack end-to-end.
**What to do**:
1. Start full Docker stack (Collector, Tempo, Grafana, Prometheus)
2. Build rippled with `telemetry=ON`
3. Run in standalone mode with telemetry enabled
4. Generate RPC traffic and verify traces in Tempo
5. Verify dashboards populate in Grafana
6. Verify alerts trigger correctly
7. Test telemetry OFF path (no regressions)
8. Run full test suite
**Verification Checklist**:
- [ ] Docker stack starts without errors
- [ ] Traces appear in Tempo with correct hierarchy
- [ ] Grafana dashboards show metrics derived from spans
- [ ] Prometheus scrapes spanmetrics successfully
- [ ] Alerts can be triggered by simulated conditions
- [ ] Build succeeds with telemetry ON and OFF
- [ ] Full test suite passes
---
## Summary
| Task | Description | New Files | Modified Files | Depends On |
| ---- | ---------------------------------- | --------- | -------------- | ---------- |
| 5.1 | Spanmetrics connector + Prometheus | 2 | 2 | Phase 4 |
| 5.2 | Grafana dashboards | 4 | 0 | 5.1 |
| 5.3 | Alert definitions | 1 | 0 | 5.1 |
| 5.4 | Production collector config | 1 | 0 | Phase 4 |
| 5.5 | Operator runbook | 1 | 0 | Phase 4 |
| 5.6 | Final integration testing | 0 | 0 | 5.1-5.5 |
**Parallel work**: Tasks 5.1, 5.4, and 5.5 can run in parallel. Tasks 5.2 and 5.3 depend on 5.1. Task 5.6 depends on all others.
**Exit Criteria** (from [06-implementation-phases.md §6.11.5](./06-implementation-phases.md)):
- [ ] Dashboards deployed and showing data
- [ ] Alerts configured and tested
- [ ] Operator documentation complete
- [ ] Production collector config ready
- [ ] Full test suite passes

View File

@@ -0,0 +1,673 @@
# OpenTelemetry Distributed Tracing for rippled
---
## Slide 1: Introduction
> **CNCF** = Cloud Native Computing Foundation
### What is OpenTelemetry?
OpenTelemetry is an open-source, CNCF-backed observability framework for distributed tracing, metrics, and logs.
### Why OpenTelemetry for rippled?
- **End-to-End Transaction Visibility**: Track transactions from submission → consensus → ledger inclusion
- **Cross-Node Correlation**: Follow requests across multiple independent nodes using a unique `trace_id`
- **Consensus Round Analysis**: Understand timing and behavior across validators
- **Incident Debugging**: Correlate events across distributed nodes during issues
```mermaid
flowchart LR
A["Node A<br/>tx.receive<br/>trace_id: abc123"] --> B["Node B<br/>tx.relay<br/>trace_id: abc123"] --> C["Node C<br/>tx.validate<br/>trace_id: abc123"] --> D["Node D<br/>ledger.apply<br/>trace_id: abc123"]
style A fill:#1565c0,stroke:#0d47a1,color:#fff
style B fill:#2e7d32,stroke:#1b5e20,color:#fff
style C fill:#2e7d32,stroke:#1b5e20,color:#fff
style D fill:#e65100,stroke:#bf360c,color:#fff
```
**Reading the diagram:**
- **Node A (blue, leftmost)**: The originating node that first receives the transaction and assigns a new `trace_id: abc123`; this ID becomes the correlation key for the entire distributed trace.
- **Node B and Node C (green, middle)**: Relay and validation nodes — each creates its own span but carries the same `trace_id`, so their work is linked to the original submission without any central coordinator.
- **Node D (orange, rightmost)**: The final node that applies the transaction to the ledger; the trace now spans the full lifecycle from submission to ledger inclusion.
- **Left-to-right flow**: The horizontal progression shows the real-world message path — a transaction hops from node to node, and the shared `trace_id` stitches all hops into a single queryable trace.
> **Trace ID: abc123** — All nodes share the same trace, enabling cross-node correlation.
---
## Slide 2: OpenTelemetry vs Open Source Alternatives
> **CNCF** = Cloud Native Computing Foundation
| Feature | OpenTelemetry | Jaeger | Zipkin | SkyWalking | Pinpoint | Prometheus |
| ------------------- | ---------------- | ---------------- | ------------------ | ---------- | ---------- | ---------- |
| **Tracing** | YES | YES | YES | YES | YES | NO |
| **Metrics** | YES | NO | NO | YES | YES | YES |
| **Logs** | YES | NO | NO | YES | NO | NO |
| **C++ SDK**         | YES (Official)     | YES (Deprecated) | YES (Unmaintained) | NO         | NO         | YES        |
| **Vendor Neutral**  | YES (Primary goal) | NO               | NO                 | NO         | NO         | NO         |
| **Instrumentation** | Manual + Auto | Manual | Manual | Auto-first | Auto-first | Manual |
| **Backend** | Any (exporters) | Self | Self | Self | Self | Self |
| **CNCF Status**     | Incubating       | Graduated        | NO                 | NO (Apache) | NO        | Graduated  |
> **Why OpenTelemetry?** It's the only actively maintained, full-featured C++ option with vendor neutrality — allowing export to Tempo, Prometheus, Grafana, or any commercial backend without changing instrumentation.
---
## Slide 3: Adoption Scope — Traces Only (Current Plan)
OpenTelemetry supports three signal types: **Traces**, **Metrics**, and **Logs**. rippled already captures metrics (StatsD via Beast Insight) and logs (Journal/PerfLog). The question is: how much of OTel do we adopt?
> **Scenario A**: Add distributed tracing. Keep StatsD for metrics and Journal for logs.
```mermaid
flowchart LR
subgraph rippled["rippled Process"]
direction TB
OTel["OTel SDK<br/>(Traces)"]
Insight["Beast Insight<br/>(StatsD Metrics)"]
Journal["Journal + PerfLog<br/>(Logging)"]
end
OTel -->|"OTLP"| Collector["OTel Collector"]
Insight -->|"UDP"| StatsD["StatsD Server"]
Journal -->|"File I/O"| LogFile["perf.log / debug.log"]
Collector --> Tempo["Tempo"]
StatsD --> Graphite["Graphite / Grafana"]
LogFile --> Loki["Loki (optional)"]
style rippled fill:#424242,stroke:#212121,color:#fff
style OTel fill:#2e7d32,stroke:#1b5e20,color:#fff
style Insight fill:#1565c0,stroke:#0d47a1,color:#fff
style Journal fill:#e65100,stroke:#bf360c,color:#fff
style Collector fill:#2e7d32,stroke:#1b5e20,color:#fff
```
| Aspect | Details |
| ------------------------------ | --------------------------------------------------------------------------------------------------------------- |
| **What changes for operators** | Deploy OTel Collector + trace backend. Existing StatsD and log pipelines stay as-is. |
| **Codebase impact** | New `Telemetry` module (~1500 LOC). Beast Insight and Journal untouched. |
| **New capabilities** | Cross-node trace correlation, span-based debugging, request lifecycle visibility. |
| **What we still can't do** | Correlate metrics with specific traces natively. StatsD metrics remain fire-and-forget with no trace exemplars. |
| **Maintenance burden** | Three separate observability systems to maintain (OTel + StatsD + Journal). |
| **Risk** | Lowest — additive change, no existing systems disturbed. |
---
## Slide 4: Future Adoption — Metrics & Logs via OTel
### Scenario B: + OTel Metrics (Replace StatsD)
> Migrate StatsD to OTel Metrics API, exposing Prometheus-compatible metrics. Remove Beast Insight.
```mermaid
flowchart LR
subgraph rippled["rippled Process"]
direction TB
OTel["OTel SDK<br/>(Traces + Metrics)"]
Journal["Journal + PerfLog<br/>(Logging)"]
end
OTel -->|"OTLP"| Collector["OTel Collector"]
Journal -->|"File I/O"| LogFile["perf.log / debug.log"]
Collector --> Tempo["Tempo<br/>(Traces)"]
Collector --> Prom["Prometheus<br/>(Metrics)"]
LogFile --> Loki["Loki (optional)"]
style rippled fill:#424242,stroke:#212121,color:#fff
style OTel fill:#2e7d32,stroke:#1b5e20,color:#fff
style Journal fill:#e65100,stroke:#bf360c,color:#fff
style Collector fill:#2e7d32,stroke:#1b5e20,color:#fff
```
- **Better metrics?** Yes — Prometheus gives native histograms (p50/p95/p99), multi-dimensional labels, and exemplars linking metric spikes to traces.
- **Codebase**: Remove `Beast::Insight` + `StatsDCollector` (~2000 LOC). Single SDK for traces and metrics.
- **Operator effort**: Rewrite dashboards from StatsD/Graphite queries to PromQL. Run both in parallel during transition.
- **Risk**: Medium — operators must migrate monitoring infrastructure.
### Scenario C: + OTel Logs (Full Stack)
> Also replace Journal logging with OTel Logs API. Single SDK for everything.
```mermaid
flowchart LR
subgraph rippled["rippled Process"]
OTel["OTel SDK<br/>(Traces + Metrics + Logs)"]
end
OTel -->|"OTLP"| Collector["OTel Collector"]
Collector --> Tempo["Tempo<br/>(Traces)"]
Collector --> Prom["Prometheus<br/>(Metrics)"]
Collector --> Loki["Loki / Elastic<br/>(Logs)"]
style rippled fill:#424242,stroke:#212121,color:#fff
style OTel fill:#2e7d32,stroke:#1b5e20,color:#fff
style Collector fill:#2e7d32,stroke:#1b5e20,color:#fff
```
- **Structured logging**: OTel Logs API outputs structured records with `trace_id`, `span_id`, severity, and attributes by design.
- **Full correlation**: Every log line carries `trace_id`. Click trace → see logs. Click metric spike → see trace → see logs.
- **Codebase**: Remove Beast Insight (~2000 LOC) + simplify Journal/PerfLog (~3000 LOC). One dependency instead of three.
- **Risk**: Highest — `beast::Journal` is deeply embedded in every component. Large refactor. OTel C++ Logs API is newer (stable since v1.11, less battle-tested).
### Recommendation
```mermaid
flowchart LR
A["Phase 1<br/><b>Traces Only</b><br/>(Current Plan)"] --> B["Phase 2<br/><b>+ Metrics</b><br/>(Replace StatsD)"] --> C["Phase 3<br/><b>+ Logs</b><br/>(Full OTel)"]
style A fill:#2e7d32,stroke:#1b5e20,color:#fff
style B fill:#1565c0,stroke:#0d47a1,color:#fff
style C fill:#e65100,stroke:#bf360c,color:#fff
```
| Phase | Signal | Strategy | Risk |
| -------------------- | --------- | -------------------------------------------------------------- | ------ |
| **Phase 1** (now) | Traces | Add OTel traces. Keep StatsD and Journal. Prove value. | Low |
| **Phase 2** (future) | + Metrics | Migrate StatsD → Prometheus via OTel. Remove Beast Insight. | Medium |
| **Phase 3** (future) | + Logs | Adopt OTel Logs API. Align with structured logging initiative. | High |
> **Key Takeaway**: Start with traces (unique value, lowest risk), then incrementally adopt metrics and logs as the OTel infrastructure proves itself.
---
## Slide 5: Comparison with rippled's Existing Solutions
### Current Observability Stack
| Aspect | PerfLog (JSON) | StatsD (Metrics) | OpenTelemetry (NEW) |
| --------------------- | --------------------- | --------------------- | --------------------------- |
| **Type** | Logging | Metrics | Distributed Tracing |
| **Scope** | Single node | Single node | **Cross-node** |
| **Data** | JSON log entries | Counters, gauges | Spans with context |
| **Correlation** | By timestamp | By metric name | By `trace_id` |
| **Overhead** | Low (file I/O) | Low (UDP) | Low-Medium (configurable) |
| **Question Answered** | "What happened here?" | "How many? How fast?" | **"What was the journey?"** |
### Use Case Matrix
| Scenario | PerfLog | StatsD | OpenTelemetry |
| -------------------------------- | ------- | ------ | ------------- |
| "How many TXs per second?" | ❌ | ✅ | ❌ |
| "Why was this specific TX slow?" | ⚠️ | ❌ | ✅ |
| "Which node delayed consensus?" | ❌ | ❌ | ✅ |
| "Show TX journey across 5 nodes" | ❌ | ❌ | ✅ |
> **Key Insight**: In the **traces-only** approach (Phase 1), OpenTelemetry **complements** existing systems. In future phases, OTel metrics and logs could **replace** StatsD and Journal respectively — see Slides 3-4 for the full adoption roadmap.
---
## Slide 6: Architecture
> **OTLP** = OpenTelemetry Protocol | **WS** = WebSocket
### High-Level Integration Architecture
```mermaid
flowchart TB
subgraph rippled["rippled Node"]
subgraph services["Core Services"]
direction LR
RPC["RPC Server<br/>(HTTP/WS)"] ~~~ Overlay["Overlay<br/>(P2P Network)"] ~~~ Consensus["Consensus<br/>(RCLConsensus)"]
end
Telemetry["Telemetry Module<br/>(OpenTelemetry SDK)"]
services --> Telemetry
end
Telemetry -->|OTLP/gRPC| Collector["OTel Collector"]
Collector --> Tempo["Grafana Tempo"]
Collector --> Elastic["Elastic APM"]
style rippled fill:#424242,stroke:#212121,color:#fff
style services fill:#1565c0,stroke:#0d47a1,color:#fff
style Telemetry fill:#2e7d32,stroke:#1b5e20,color:#fff
style Collector fill:#e65100,stroke:#bf360c,color:#fff
```
**Reading the diagram:**
- **Core Services (blue, top)**: RPC Server, Overlay, and Consensus are the three primary components that generate trace data — they represent the entry points for client requests, peer messages, and consensus rounds respectively.
- **Telemetry Module (green, middle)**: The OpenTelemetry SDK sits below the core services and receives span data from all three; it acts as a single collection point within the rippled process.
- **OTel Collector (orange, center)**: An external process that receives spans over OTLP/gRPC from the Telemetry Module; it decouples rippled from backend choices and handles batching, sampling, and routing.
- **Backends (bottom row)**: Tempo and Elastic APM are interchangeable — the Collector fans out to any combination, so operators can switch backends without modifying rippled code.
- **Top-to-bottom flow**: Data flows from instrumented code down through the SDK, out over the network to the Collector, and finally into storage/visualization backends.
### Context Propagation
```mermaid
sequenceDiagram
participant Client
participant NodeA as Node A
participant NodeB as Node B
Client->>NodeA: Submit TX (no context)
Note over NodeA: Creates trace_id: abc123<br/>span: tx.receive
NodeA->>NodeB: Relay TX<br/>(traceparent: abc123)
Note over NodeB: Links to trace_id: abc123<br/>span: tx.relay
```
- **HTTP/RPC**: W3C Trace Context headers (`traceparent`)
- **P2P Messages**: Protocol Buffer extension fields
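Concretely, the HTTP header carries version, trace_id, parent span_id, and flags (the IDs below are the W3C spec's example values):
```
# version - trace_id (16 bytes) - parent span_id (8 bytes) - trace_flags
traceparent: 00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01
```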
---
## Slide 7: Implementation Plan
### 5-Phase Rollout (9 Weeks)
> **Note**: Dates shown are relative to project start, not calendar dates.
```mermaid
gantt
title Implementation Timeline
dateFormat YYYY-MM-DD
axisFormat Week %W
section Phase 1
Core Infrastructure :p1, 2024-01-01, 2w
section Phase 2
RPC Tracing :p2, after p1, 2w
section Phase 3
Transaction Tracing :p3, after p2, 2w
section Phase 4
Consensus Tracing :p4, after p3, 2w
section Phase 5
Documentation :p5, after p4, 1w
```
### Phase Details
| Phase | Focus | Key Deliverables | Effort |
| ----- | ------------------- | -------------------------------------------- | ------- |
| 1 | Core Infrastructure | SDK integration, Telemetry interface, Config | 10 days |
| 2 | RPC Tracing | HTTP context extraction, Handler spans | 10 days |
| 3 | Transaction Tracing | Protobuf context, P2P relay propagation | 10 days |
| 4 | Consensus Tracing | Round spans, Proposal/validation tracing | 10 days |
| 5 | Documentation | Runbook, Dashboards, Training | 7 days |
**Total Effort**: ~47 developer-days (2 developers)
> **Future Phases** (not in current scope): After traces are stable, OTel metrics can replace StatsD (~3 weeks), and OTel logs can replace Journal (~4 weeks, aligned with structured logging initiative). See Slides 3-4 for the full adoption roadmap.
---
## Slide 8: Performance Overhead
> **OTLP** = OpenTelemetry Protocol
### Estimated System Impact
| Metric | Overhead | Notes |
| ----------------- | ---------- | ------------------------------------------------ |
| **CPU** | 1-3% | Span creation and attribute setting |
| **Memory** | ~10 MB | SDK statics + batch buffer + worker thread stack |
| **Network** | 10-50 KB/s | Compressed OTLP export to collector |
| **Latency (p99)** | <2% | With proper sampling configuration |
#### How We Arrived at These Numbers
**Assumptions (XRPL mainnet baseline)**:
| Parameter | Value | Source |
| ------------------------- | ---------------------- | --------------------------------------------------------------------------------------------------- |
| Transaction throughput | ~25 TPS (peaks to ~50) | Mainnet average |
| Default peers per node | 21 | `peerfinder/detail/Tuning.h` (`defaultMaxPeers`) |
| Consensus round frequency | ~1 round / 3-4 seconds | `ConsensusParms.h` (`ledgerMIN_CONSENSUS=1950ms`) |
| Proposers per round | ~20-35 | Mainnet UNL size |
| P2P message rate | ~160 msgs/sec | See message breakdown below |
| Avg TX processing time | ~200 μs | Profiled baseline |
| Single span creation cost | 500-1000 ns | OTel C++ SDK benchmarks (see [3.5.4](./03-implementation-strategy.md#354-performance-data-sources)) |
**P2P message breakdown** (per node, mainnet):
| Message Type | Rate | Derivation |
| ------------- | ------------ | --------------------------------------------------------------------- |
| TMTransaction | ~100/sec | ~25 TPS × ~4 relay hops per TX, deduplicated by HashRouter |
| TMValidation  | ~50/sec      | ~35 validators × 1 validation per ~3 s round ≈ 12/sec, plus relay fan-out     |
| TMProposeSet  | ~10/sec      | ~35 proposers × ~1 proposal per ~3 s round ≈ 12/sec, clustered in the establish phase |
| **Total** | **~160/sec** | **Only traced message types counted** |
**CPU (1-3%) — Calculation**:
Per-transaction tracing cost breakdown:
| Operation | Cost | Notes |
| ----------------------------------------------- | ----------- | ------------------------------------------ |
| `tx.receive` span (create + end + 4 attributes) | ~1400 ns | ~1000ns create + ~200ns end + 4×50ns attrs |
| `tx.validate` span | ~1200 ns | ~1000ns create + ~200ns for 2 attributes |
| `tx.relay` span | ~1200 ns | ~1000ns create + ~200ns for 2 attributes |
| Context injection into P2P message | ~200 ns | Serialize trace_id + span_id into protobuf |
| **Total per TX** | **~4.0 μs** | |
> **CPU overhead**: 4.0 μs / 200 μs baseline = **~2.0% per transaction**. Under high load with consensus + RPC spans overlapping, reaches ~3%. Consensus itself adds only ~36 μs per 3-second round (~0.001%), so the TX path dominates. On production server hardware (3+ GHz Xeon), span creation drops to ~500-600 ns, bringing per-TX cost to ~2.6 μs (~1.3%). See [Section 3.5.4](./03-implementation-strategy.md#354-performance-data-sources) for benchmark sources.
**Memory (~10 MB) — Calculation**:
| Component | Size | Notes |
| --------------------------------------------- | ------------------ | ------------------------------------- |
| TracerProvider + Exporter (gRPC channel init) | ~320 KB | Allocated once at startup |
| BatchSpanProcessor (circular buffer) | ~16 KB | 2049 × 8-byte AtomicUniquePtr entries |
| BatchSpanProcessor (worker thread stack) | ~8 MB | Default Linux thread stack size |
| Active spans (in-flight, max ~1000) | ~500-800 KB | ~500-800 bytes/span × 1000 concurrent |
| Export queue (batch buffer, max 2048 spans) | ~1 MB | ~500 bytes/span × 2048 queue depth |
| Thread-local context storage (~100 threads) | ~6.4 KB | ~64 bytes/thread |
| **Total** | **~10 MB ceiling** | |
> Memory plateaus once the export queue fills — the `max_queue_size=2048` config bounds growth.
> The worker thread stack (~8 MB) dominates the static footprint but is virtual memory; actual RSS
> depends on stack usage (typically much less). Active spans are larger than originally estimated
> (~500-800 bytes) because the OTel SDK `Span` object includes a mutex (~40 bytes), `SpanData`
> recordable (~250 bytes base), and `std::map`-based attribute storage (~200-500 bytes for 3-5
> string attributes). See [Section 3.5.4](./03-implementation-strategy.md#354-performance-data-sources) for source references.
**Network (10-50 KB/s) — Calculation**:
Two sources of network overhead:
**(A) OTLP span export to Collector:**
| Sampling Rate | Effective Spans/sec | Avg Span Size (compressed) | Bandwidth |
| -------------------------- | ------------------- | -------------------------- | ------------ |
| 100% (dev only) | ~500 | ~500 bytes | ~250 KB/s |
| **10% (recommended prod)** | **~50** | **~500 bytes** | **~25 KB/s** |
| 1% (minimal) | ~5 | ~500 bytes | ~2.5 KB/s |
> The ~500 spans/sec budget at 100% sampling comes from: ~100 TX spans + ~160 P2P context spans + ~8 consensus spans/sec (≈23 per 3-second round) + ~50 RPC spans ≈ 320/sec sustained, rounded up to ~500/sec to cover bursts. OTLP protobuf with gzip compression yields ~500 bytes/span average.
**(B) P2P trace context overhead** (added to existing messages, always-on regardless of sampling):
| Message Type | Rate | Context Size | Bandwidth |
| ------------- | -------- | ------------ | ------------- |
| TMTransaction | ~100/sec | 29 bytes | ~2.9 KB/s |
| TMValidation | ~50/sec | 29 bytes | ~1.5 KB/s |
| TMProposeSet | ~10/sec | 29 bytes | ~0.3 KB/s |
| **Total P2P** | | | **~4.7 KB/s** |
> **Combined**: 25 KB/s (OTLP export at 10%) + 5 KB/s (P2P context) ≈ **~30 KB/s typical**. The 10-50 KB/s range covers 10-20% sampling under normal to peak mainnet load.
**Latency (<2%) — Calculation**:
| Path | Tracing Cost | Baseline | Overhead |
| ------------------------------ | ------------ | -------- | -------- |
| Fast RPC (e.g., `server_info`) | 2.75 μs | ~1 ms | 0.275% |
| Slow RPC (e.g., `path_find`) | 2.75 μs | ~100 ms | 0.003% |
| Transaction processing | 4.0 μs | ~200 μs | 2.0% |
| Consensus round | 36 μs | ~3 sec | 0.001% |
> At p99, even the worst case (TX processing at 2.0%) is within the 1-3% range. RPC and consensus overhead are negligible. On production hardware, TX overhead drops to ~1.3%.
### Per-Message Overhead (Context Propagation)
Each P2P message carries trace context with the following overhead:
| Field | Size | Description |
| ------------- | ------------- | ----------------------------------------- |
| `trace_id` | 16 bytes | Unique identifier for the entire trace |
| `span_id` | 8 bytes | Current span (becomes parent on receiver) |
| `trace_flags` | 1 byte | Sampling decision flags |
| `trace_state` | 0-4 bytes | Optional vendor-specific data |
| **Total** | **~29 bytes** | **Added per traced P2P message** |
```mermaid
flowchart LR
subgraph msg["P2P Message with Trace Context"]
A["Original Message<br/>(variable size)"] --> B["+ TraceContext<br/>(~29 bytes)"]
end
subgraph breakdown["Context Breakdown"]
C["trace_id<br/>16 bytes"]
D["span_id<br/>8 bytes"]
E["flags<br/>1 byte"]
F["state<br/>0-4 bytes"]
end
B --> breakdown
style A fill:#424242,stroke:#212121,color:#fff
style B fill:#2e7d32,stroke:#1b5e20,color:#fff
style C fill:#1565c0,stroke:#0d47a1,color:#fff
style D fill:#1565c0,stroke:#0d47a1,color:#fff
style E fill:#e65100,stroke:#bf360c,color:#fff
style F fill:#4a148c,stroke:#2e0d57,color:#fff
```
**Reading the diagram:**
- **Original Message (gray, left)**: The existing P2P message payload of variable size; this is unchanged; trace context is appended, never modifying the original data.
- **+ TraceContext (green, right of message)**: The additional 29-byte context block attached to each traced message; the arrow from the original message shows it is a pure addition.
- **Context Breakdown (right subgraph)**: The four fields `trace_id` (16 bytes), `span_id` (8 bytes), `flags` (1 byte), and `state` (0-4 bytes) show exactly what is added and their individual sizes.
- **Color coding**: Blue fields (`trace_id`, `span_id`) are the core identifiers required for trace correlation; orange (`flags`) controls sampling decisions; purple (`state`) is optional vendor data typically omitted.
> **Note**: 29 bytes represents ~1-6% overhead depending on message size (500B simple TX to 5KB proposal), which is acceptable for the observability benefits provided.
### Mitigation Strategies
```mermaid
flowchart LR
A["Head Sampling<br/>10% default"] --> B["Tail Sampling<br/>Keep errors/slow"] --> C["Batch Export<br/>Reduce I/O"] --> D["Conditional Compile<br/>XRPL_ENABLE_TELEMETRY"]
style A fill:#1565c0,stroke:#0d47a1,color:#fff
style B fill:#2e7d32,stroke:#1b5e20,color:#fff
style C fill:#e65100,stroke:#bf360c,color:#fff
style D fill:#4a148c,stroke:#2e0d57,color:#fff
```
> For a detailed explanation of head vs. tail sampling, see Slide 9.
### Kill Switches (Rollback Options)
1. **Config Disable**: Set `enabled=0` in config for an instant disable; sampling changes likewise take effect without a restart
2. **Rebuild**: Compile with `XRPL_ENABLE_TELEMETRY=OFF` for zero overhead (all macros become no-ops; see the sketch below)
3. **Full Revert**: Clean separation allows easy commit reversion
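Why option 2 is truly zero-overhead: with the flag off, every tracing macro compiles to an empty statement. A sketch of the pattern, using the `XRPL_TRACE_ADD_EVENT` macro as an example (the exact expansion shown is an assumption):
```cpp
// Conditional-compile kill switch. With telemetry off, the macro body
// vanishes; do { } while (0) keeps it a single statement so if/else
// chains around call sites stay well-formed.
#ifdef XRPL_ENABLE_TELEMETRY
#define XRPL_TRACE_ADD_EVENT(span, name) \
    do                                   \
    {                                    \
        if (span)                        \
            (span)->AddEvent(name);      \
    } while (0)
#else
#define XRPL_TRACE_ADD_EVENT(span, name) \
    do                                   \
    {                                    \
    } while (0)
#endif
```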
---
## Slide 9: Sampling Strategies — Head vs. Tail
> Sampling controls **which traces are recorded and exported**. Without sampling, every operation generates a trace — at 500+ spans/sec, this overwhelms storage and network. Sampling lets you keep the signal, discard the noise.
### Head Sampling (Decision at Start)
The sampling decision is made **when a trace begins**, before any work is done. A random number is generated; if it falls within the configured ratio, the entire trace is recorded. Otherwise, the trace is silently dropped.
```mermaid
flowchart LR
A["New Request<br/>Arrives"] --> B{"Random < 10%?"}
B -->|"Yes (1 in 10)"| C["Record Entire Trace<br/>(all spans)"]
B -->|"No (9 in 10)"| D["Drop Entire Trace<br/>(zero overhead)"]
style C fill:#2e7d32,stroke:#1b5e20,color:#fff
style D fill:#c62828,stroke:#8c2809,color:#fff
style B fill:#1565c0,stroke:#0d47a1,color:#fff
```
| Aspect | Details |
| ----------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Where it runs** | Inside rippled (SDK-level). Configured via `sampling_ratio` in `rippled.cfg`. |
| **When the decision happens** | At trace creation time before the first span is even populated. |
| **How it works** | `sampling_ratio=0.1` means each trace has a 10% probability of being recorded. Dropped traces incur near-zero overhead (no spans created, no attributes set, no export). |
| **Propagation** | Once a trace is sampled, the `trace_flags` field (1 byte in the context header) tells downstream nodes to also sample it. Unsampled traces propagate `trace_flags=0`, so downstream nodes skip them too. |
| **Pros** | Lowest overhead. Simple to configure. Predictable resource usage. |
| **Cons**                      | **Blind**: it doesn't know whether the trace will be interesting. A rare error or slow consensus round has only a 10% chance of being captured. |
| **Best for** | High-volume, steady-state traffic where most traces look similar (e.g., routine RPC requests). |
**rippled configuration**:
```ini
[telemetry]
# Record 10% of traces (recommended for production)
sampling_ratio=0.1
```
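Inside rippled, this ratio would plausibly map onto the SDK's parent-based ratio sampler, so child spans follow the incoming `trace_flags` and only root spans take the coin flip. A sketch (sampler class names are the OTel C++ SDK's; the surrounding wiring is assumed):
```cpp
#include <opentelemetry/sdk/trace/samplers/parent.h>
#include <opentelemetry/sdk/trace/samplers/trace_id_ratio.h>
#include <memory>

std::unique_ptr<opentelemetry::sdk::trace::Sampler>
makeSampler(double samplingRatio)  // e.g. 0.1 from rippled.cfg
{
    using namespace opentelemetry::sdk::trace;
    // Root spans: sampled with probability samplingRatio.
    // Child spans: inherit the parent's sampled flag.
    return std::make_unique<ParentBasedSampler>(
        std::make_shared<TraceIdRatioBasedSampler>(samplingRatio));
}
```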
### Tail Sampling (Decision at End)
The sampling decision is made **after the trace completes**, based on its actual content: was it slow? Did it error? Was it a consensus round? This requires buffering complete traces before deciding.
```mermaid
flowchart TB
A["All Traces<br/>Buffered (100%)"] --> B["OTel Collector<br/>Evaluates Rules"]
B --> C{"Error?"}
C -->|Yes| K["KEEP"]
C -->|No| D{"Slow?<br/>(>5s consensus,<br/>>1s RPC)"}
D -->|Yes| K
D -->|No| E{"Random < 10%?"}
E -->|Yes| K
E -->|No| F["DROP"]
style K fill:#2e7d32,stroke:#1b5e20,color:#fff
style F fill:#c62828,stroke:#8c2809,color:#fff
style B fill:#1565c0,stroke:#0d47a1,color:#fff
style C fill:#e65100,stroke:#bf360c,color:#fff
style D fill:#e65100,stroke:#bf360c,color:#fff
style E fill:#4a148c,stroke:#2e0d57,color:#fff
```
| Aspect | Details |
| ----------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Where it runs** | In the **OTel Collector** (external process), not inside rippled. rippled exports 100% of traces; the Collector decides what to keep. |
| **When the decision happens** | After the Collector has received all spans for a trace (waits `decision_wait=10s` for stragglers). |
| **How it works** | Policy rules evaluate the completed trace: keep all errors, keep slow operations above a threshold, keep all consensus rounds, then probabilistically sample the rest at 10%. |
| **Pros** | **Never misses important traces**. Errors, slow requests, and consensus anomalies are always captured regardless of probability. |
| **Cons**                      | Higher resource usage: rippled must export 100% of spans to the Collector, which buffers them in memory before deciding. The Collector needs more RAM (configured via `num_traces` and `decision_wait`). |
| **Best for** | Production troubleshooting where you can't afford to miss errors or anomalies. |
**Collector configuration** (tail sampling rules for rippled):
```yaml
processors:
tail_sampling:
decision_wait: 10s # Wait for all spans in a trace
num_traces: 100000 # Buffer up to 100K concurrent traces
policies:
- name: errors # Always keep error traces
type: status_code
status_code: { status_codes: [ERROR] }
- name: slow-consensus # Keep consensus rounds >5s
type: latency
latency: { threshold_ms: 5000 }
- name: slow-rpc # Keep slow RPC requests >1s
type: latency
latency: { threshold_ms: 1000 }
- name: probabilistic # Sample 10% of everything else
type: probabilistic
probabilistic: { sampling_percentage: 10 }
```
### Head vs. Tail — Side-by-Side
| | Head Sampling | Tail Sampling |
| ----------------------------- | ---------------------------------------- | ------------------------------------------------ |
| **Decision point** | Trace start (inside rippled) | Trace end (in OTel Collector) |
| **Knows trace content?** | No (random coin flip) | Yes (evaluates completed trace) |
| **Overhead on rippled** | Lowest (dropped traces = no-op) | Higher (must export 100% to Collector) |
| **Collector resource usage** | Low (receives only sampled traces) | Higher (buffers all traces before deciding) |
| **Captures all errors?** | No (only if trace was randomly selected) | **Yes** (error policy catches them) |
| **Captures slow operations?** | No (random) | **Yes** (latency policy catches them) |
| **Configuration** | `rippled.cfg`: `sampling_ratio=0.1` | `otel-collector.yaml`: `tail_sampling` processor |
| **Best for** | High-throughput steady-state | Troubleshooting & anomaly detection |
### Recommended Strategy for rippled
Use **both** in a layered approach:
```mermaid
flowchart LR
subgraph rippled["rippled (Head Sampling)"]
HS["sampling_ratio=1.0<br/>(export everything)"]
end
subgraph collector["OTel Collector (Tail Sampling)"]
TS["Keep: errors + slow + 10% random<br/>Drop: routine traces"]
end
subgraph storage["Backend Storage"]
ST["Only interesting traces<br/>stored long-term"]
end
rippled -->|"100% of spans"| collector -->|"~15-20% kept"| storage
style rippled fill:#424242,stroke:#212121,color:#fff
style collector fill:#1565c0,stroke:#0d47a1,color:#fff
style storage fill:#2e7d32,stroke:#1b5e20,color:#fff
```
> **Why this works**: rippled exports everything (no blind drops), the Collector applies intelligent filtering (keep errors/slow/anomalies, sample the rest), and only ~15-20% of traces reach storage. If Collector resource usage becomes a concern, add head sampling at `sampling_ratio=0.5` to halve the export volume while still giving the Collector enough data for good tail-sampling decisions.
---
## Slide 10: Data Collection & Privacy
### What Data is Collected
| Category | Attributes Collected | Purpose |
| --------------- | ------------------------------------------------------------------------------------ | --------------------------- |
| **Transaction** | `tx.hash`, `tx.type`, `tx.result`, `tx.fee`, `ledger_index` | Trace transaction lifecycle |
| **Consensus** | `round`, `phase`, `mode`, `proposers` (count of proposing validators), `duration_ms` | Analyze consensus timing |
| **RPC** | `command`, `version`, `status`, `duration_ms` | Monitor RPC performance |
| **Peer**        | `peer.id` (public key), `latency_ms`, `message.type`, `message.size`                 | Network topology analysis   |
| **Ledger** | `ledger.hash`, `ledger.index`, `close_time`, `tx_count` | Ledger progression tracking |
| **Job** | `job.type`, `queue_ms`, `worker` | JobQueue performance |
### What is NOT Collected (Privacy Guarantees)
```mermaid
flowchart LR
subgraph notCollected["❌ NOT Collected"]
direction LR
A["Private Keys"] ~~~ B["Account Balances"] ~~~ C["Transaction Amounts"]
end
subgraph alsoNot["❌ Also Excluded"]
direction LR
D["IP Addresses<br/>(configurable)"] ~~~ E["Personal Data"] ~~~ F["Raw TX Payloads"]
end
style A fill:#c62828,stroke:#8c2809,color:#fff
style B fill:#c62828,stroke:#8c2809,color:#fff
style C fill:#c62828,stroke:#8c2809,color:#fff
style D fill:#c62828,stroke:#8c2809,color:#fff
style E fill:#c62828,stroke:#8c2809,color:#fff
style F fill:#c62828,stroke:#8c2809,color:#fff
```
**Reading the diagram:**
- **NOT Collected (top row, red)**: Private Keys, Account Balances, and Transaction Amounts are explicitly excluded; these are financial/security-sensitive fields that telemetry never touches.
- **Also Excluded (bottom row, red)**: IP Addresses (configurable per deployment), Personal Data, and Raw TX Payloads are also excluded; these protect operator and user privacy.
- **All-red styling**: Every box is styled in red to visually reinforce that these are hard exclusions, not optional; the telemetry system has no code path to collect any of these fields.
- **Two-row layout**: The split between "NOT Collected" and "Also Excluded" distinguishes between financial data (top) and operational/personal data (bottom), making the privacy boundaries clear to auditors.
### Privacy Protection Mechanisms
| Mechanism | Description |
| -------------------------- | ------------------------------------------------------------- |
| **Account Hashing** | `xrpl.tx.account` is hashed at collector level before storage |
| **Configurable Redaction** | Sensitive fields can be excluded via config |
| **Sampling** | Only 10% of traces recorded by default (reduces exposure) |
| **Local Control** | Node operators control what gets exported |
| **No Raw Payloads** | Transaction content is never recorded, only metadata |
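The hashing row could be realized with the Collector's `attributes` processor, whose `hash` action replaces a value with its hash before export; a deployment-specific sketch, not a committed config:
```yaml
# Collector-side redaction sketch: the raw account value never reaches
# trace storage, only its hash does.
processors:
  attributes/redact:
    actions:
      - key: xrpl.tx.account
        action: hash
```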
> **Key Principle**: Telemetry collects **operational metadata** (timing, counts, hashes) — never **sensitive content** (keys, balances, amounts).
---
_End of Presentation_

View File

@@ -1529,3 +1529,46 @@ validators.txt
# set to ssl_verify to 0.
[ssl_verify]
1
#-------------------------------------------------------------------------------
#
# 11. Telemetry (OpenTelemetry Tracing)
#
#-------------------------------------------------------------------------------
#
# Enables distributed tracing via OpenTelemetry. Requires building with
# -DXRPL_ENABLE_TELEMETRY=ON (telemetry Conan option).
#
# [telemetry]
#
# enabled=0
#
# Enable or disable telemetry at runtime. Default: 0 (disabled).
#
# endpoint=http://localhost:4318/v1/traces
#
# The OpenTelemetry Collector endpoint (OTLP/HTTP). Default: http://localhost:4318/v1/traces.
#
# exporter=otlp_http
#
# Exporter type: otlp_http. Default: otlp_http.
#
# sampling_ratio=1.0
#
# Fraction of traces to sample (0.0 to 1.0). Default: 1.0 (all traces).
#
# trace_rpc=1
#
# Enable RPC request tracing. Default: 1.
#
# trace_transactions=1
#
# Enable transaction lifecycle tracing. Default: 1.
#
# trace_consensus=1
#
# Enable consensus round tracing. Default: 1.
#
# trace_peer=0
#
# Enable peer message tracing (high volume). Default: 0.
#

View File

@@ -204,6 +204,23 @@ target_link_libraries(
add_module(xrpl tx)
target_link_libraries(xrpl.libxrpl.tx PUBLIC xrpl.libxrpl.ledger)
# Telemetry module — OpenTelemetry distributed tracing support.
# Sources: include/xrpl/telemetry/ (headers), src/libxrpl/telemetry/ (impl).
# When telemetry=ON, links the Conan-provided umbrella target
# opentelemetry-cpp::opentelemetry-cpp (individual component targets like
# ::api, ::sdk are not available in the Conan package).
add_module(xrpl telemetry)
target_link_libraries(
xrpl.libxrpl.telemetry
PUBLIC xrpl.libxrpl.basics xrpl.libxrpl.beast
)
if(telemetry)
target_link_libraries(
xrpl.libxrpl.telemetry
PUBLIC opentelemetry-cpp::opentelemetry-cpp
)
endif()
add_library(xrpl.libxrpl)
set_target_properties(xrpl.libxrpl PROPERTIES OUTPUT_NAME xrpl)
@@ -235,6 +252,7 @@ target_link_modules(
resource
server
shamap
telemetry
tx
)

View File

@@ -140,6 +140,28 @@ function(setup_protocol_autogen)
)
endif()
# Check pip index URL configuration
execute_process(
COMMAND ${VENV_PIP} config get global.index-url
OUTPUT_VARIABLE PIP_INDEX_URL
OUTPUT_STRIP_TRAILING_WHITESPACE
ERROR_QUIET
)
# Default PyPI URL
set(DEFAULT_PIP_INDEX "https://pypi.org/simple")
# Show warning if using non-default index
if(PIP_INDEX_URL AND NOT PIP_INDEX_URL STREQUAL "")
if(NOT PIP_INDEX_URL STREQUAL DEFAULT_PIP_INDEX)
message(
WARNING
"Private pip index URL detected: ${PIP_INDEX_URL}\n"
"You may need to connect to VPN to access this URL."
)
endif()
endif()
message(STATUS "Installing Python dependencies...")
execute_process(
COMMAND ${VENV_PIP} install --upgrade pip

View File

@@ -1,44 +1,52 @@
{
"version": "0.5",
"requires": [
"zlib/1.3.1#b8bc2603263cf7eccbd6e17e66b0ed76%1765850150.075",
"zlib/1.3.1#cac0f6daea041b0ccf42934163defb20%1774439233.809",
"xxhash/0.8.3#681d36a0a6111fc56e5e45ea182c19cc%1765850149.987",
"sqlite3/3.49.1#8631739a4c9b93bd3d6b753bac548a63%1765850149.926",
"soci/4.0.3#a9f8d773cd33e356b5879a4b0564f287%1765850149.46",
"sqlite3/3.51.0#66aa11eabd0e34954c5c1c061ad44abe%1763899256.358",
"soci/4.0.3#fe32b9ad5eb47e79ab9e45a68f363945%1774450067.231",
"snappy/1.1.10#968fef506ff261592ec30c574d4a7809%1765850147.878",
"secp256k1/0.7.1#3a61e95e220062ef32c48d019e9c81f7%1770306721.686",
"secp256k1/0.7.1#481881709eb0bdd0185a12b912bbe8ad%1770910500.329",
"rocksdb/10.5.1#4a197eca381a3e5ae8adf8cffa5aacd0%1765850186.86",
"re2/20230301#ca3b241baec15bd31ea9187150e0b333%1765850148.103",
"protobuf/6.32.1#f481fd276fc23a33b85a3ed1e898b693%1765850161.038",
"openssl/3.5.5#05a4ac5b7323f7a329b2db1391d9941f%1769599205.414",
"re2/20251105#8579cfd0bda4daf0683f9e3898f964b4%1774398111.888",
"protobuf/6.33.5#d96d52ba5baaaa532f47bda866ad87a5%1773224203.27",
"opentelemetry-cpp/1.18.0#efd9851e173f8a13b9c7d35232de8cf1%1750409186.472",
"openssl/3.6.1#e6399de266349245a4542fc5f6c71552%1774458290.139",
"nudb/2.0.9#0432758a24204da08fee953ec9ea03cb%1769436073.32",
"nlohmann_json/3.11.3#45828be26eb619a2e04ca517bb7b828d%1701220705.259",
"lz4/1.10.0#59fc63cac7f10fbe8e05c7e62c2f3504%1765850143.914",
"libiconv/1.17#1e65319e945f2d31941a9d28cc13c058%1765842973.492",
"libcurl/8.18.0#364bc3755cb9ef84ed9a7ae9c7efc1c1%1770984390.024",
"libbacktrace/cci.20210118#a7691bfccd8caaf66309df196790a5a1%1765842973.03",
"libarchive/3.8.1#ffee18995c706e02bf96e7a2f7042e0d%1765850144.736",
"jemalloc/5.3.0#e951da9cf599e956cebc117880d2d9f8%1729241615.244",
"gtest/1.17.0#5224b3b3ff3b4ce1133cbdd27d53ee7d%1768312129.152",
"grpc/1.72.0#f244a57bff01e708c55a1100b12e1589%1765850193.734",
"grpc/1.78.1#b1a9e74b145cc471bed4dc64dc6eb2c1%1772623605.068",
"ed25519/2015.03#ae761bdc52730a843f0809bdf6c1b1f6%1765850143.772",
"date/3.0.4#862e11e80030356b53c2c38599ceb32b%1765850143.772",
"c-ares/1.34.5#5581c2b62a608b40bb85d965ab3ec7c8%1765850144.336",
"c-ares/1.34.6#545240bb1c40e2cacd4362d6b8967650%1774439234.681",
"bzip2/1.0.8#c470882369c2d95c5c77e970c0c7e321%1765850143.837",
"boost/1.90.0#d5e8defe7355494953be18524a7f135b%1769454080.269",
"abseil/20250127.0#99262a368bd01c0ccca8790dfced9719%1766517936.993"
"abseil/20250127.0#bb0baf1f362bc4a725a24eddd419b8f7%1774365460.196"
],
"build_requires": [
"zlib/1.3.1#b8bc2603263cf7eccbd6e17e66b0ed76%1765850150.075",
"strawberryperl/5.32.1.1#707032463aa0620fa17ec0d887f5fe41%1765850165.196",
"protobuf/6.32.1#f481fd276fc23a33b85a3ed1e898b693%1765850161.038",
"zlib/1.3.1#cac0f6daea041b0ccf42934163defb20%1774439233.809",
"strawberryperl/5.32.1.1#8d114504d172cfea8ea1662d09b6333e%1774447376.964",
"protobuf/6.33.5#d96d52ba5baaaa532f47bda866ad87a5%1773224203.27",
"pkgconf/2.5.1#93c2051284cba1279494a43a4fcfeae2%1757684701.089",
"opentelemetry-proto/1.4.0#4096a3b05916675ef9628f3ffd571f51%1732731336.11",
"ninja/1.13.2#c8c5dc2a52ed6e4e42a66d75b4717ceb%1764096931.974",
"nasm/2.16.01#31e26f2ee3c4346ecd347911bd126904%1765850144.707",
"msys2/cci.latest#eea83308ad7e9023f7318c60d5a9e6cb%1770199879.083",
"m4/1.4.19#70dc8bbb33e981d119d2acc0175cf381%1763158052.846",
"cmake/4.2.0#ae0a44f44a1ef9ab68fd4b3e9a1f8671%1765850153.937",
"cmake/3.31.10#313d16a1aa16bbdb2ca0792467214b76%1765850153.479",
"b2/5.3.3#107c15377719889654eb9a162a673975%1765850144.355",
"msys2/cci.latest#d22fe7b2808f5fd34d0a7923ace9c54f%1770657326.649",
"meson/1.10.0#60786758ea978964c24525de19603cf4%1768294926.103",
"m4/1.4.19#5d7a4994e5875d76faf7acf3ed056036%1774365463.87",
"libtool/2.4.7#14e7739cc128bc1623d2ed318008e47e%1755679003.847",
"gnu-config/cci.20210814#466e9d4d7779e1c142443f7ea44b4284%1762363589.329",
"cmake/4.3.0#b939a42e98f593fb34d3a8c5cc860359%1774439249.183",
"b2/5.4.2#ffd6084a119587e70f11cd45d1a386e2%1774439233.447",
"automake/1.16.5#b91b7c384c3deaa9d535be02da14d04f%1755524470.56",
"autoconf/2.71#51077f068e61700d65bb05541ea1e4b0%1731054366.86",
"abseil/20250127.0#99262a368bd01c0ccca8790dfced9719%1766517936.993"
"abseil/20250127.0#bb0baf1f362bc4a725a24eddd419b8f7%1774365460.196"
],
"python_requires": [],
"overrides": {
@@ -46,14 +54,14 @@
null,
"boost/1.90.0"
],
"protobuf/5.27.0": [
"protobuf/6.32.1"
"protobuf/[>=5.27.0 <7]": [
"protobuf/6.33.5"
],
"lz4/1.9.4": [
"lz4/1.10.0"
],
"sqlite3/3.44.2": [
"sqlite3/3.49.1"
"sqlite3/[>=3.44 <4]": [
"sqlite3/3.51.0"
],
"boost/1.83.0": [
"boost/1.90.0"

View File

@@ -1,10 +1,9 @@
import re
import os
import re
from conan.tools.cmake import CMake, CMakeToolchain, cmake_layout
from conan import ConanFile
from conan import __version__ as conan_version
class Xrpl(ConanFile):
@@ -23,6 +22,7 @@ class Xrpl(ConanFile):
"rocksdb": [True, False],
"shared": [True, False],
"static": [True, False],
"telemetry": [True, False],
"tests": [True, False],
"unity": [True, False],
"xrpld": [True, False],
@@ -30,10 +30,10 @@ class Xrpl(ConanFile):
requires = [
"ed25519/2015.03",
"grpc/1.72.0",
"grpc/1.78.1",
"libarchive/3.8.1",
"nudb/2.0.9",
"openssl/3.5.5",
"openssl/3.6.1",
"secp256k1/0.7.1",
"soci/4.0.3",
"zlib/1.3.1",
@@ -44,7 +44,7 @@ class Xrpl(ConanFile):
]
tool_requires = [
"protobuf/6.32.1",
"protobuf/6.33.5",
]
default_options = {
@@ -55,6 +55,7 @@ class Xrpl(ConanFile):
"rocksdb": True,
"shared": False,
"static": True,
"telemetry": True,
"tests": False,
"unity": False,
"xrpld": False,
@@ -137,20 +138,20 @@ class Xrpl(ConanFile):
self.default_options["fPIC"] = False
def requirements(self):
# Conan 2 requires transitive headers to be specified
transitive_headers_opt = (
{"transitive_headers": True} if conan_version.split(".")[0] == "2" else {}
)
self.requires("boost/1.90.0", force=True, **transitive_headers_opt)
self.requires("date/3.0.4", **transitive_headers_opt)
self.requires("boost/1.90.0", force=True, transitive_headers=True)
self.requires("date/3.0.4", transitive_headers=True)
self.requires("lz4/1.10.0", force=True)
self.requires("protobuf/6.32.1", force=True)
self.requires("sqlite3/3.49.1", force=True)
self.requires("protobuf/6.33.5", force=True)
self.requires("sqlite3/3.51.0", force=True)
if self.options.jemalloc:
self.requires("jemalloc/5.3.0")
if self.options.rocksdb:
self.requires("rocksdb/10.5.1")
self.requires("xxhash/0.8.3", **transitive_headers_opt)
# OpenTelemetry C++ SDK for distributed tracing (optional).
# Provides OTLP/HTTP exporter, batch span processor, and trace API.
if self.options.telemetry:
self.requires("opentelemetry-cpp/1.18.0")
self.requires("xxhash/0.8.3", transitive_headers=True)
exports_sources = (
"CMakeLists.txt",
@@ -178,6 +179,7 @@ class Xrpl(ConanFile):
tc.variables["rocksdb"] = self.options.rocksdb
tc.variables["BUILD_SHARED_LIBS"] = self.options.shared
tc.variables["static"] = self.options.static
tc.variables["telemetry"] = self.options.telemetry
tc.variables["unity"] = self.options.unity
tc.variables["xrpld"] = self.options.xrpld
tc.generate()
@@ -230,3 +232,5 @@ class Xrpl(ConanFile):
]
if self.options.rocksdb:
libxrpl.requires.append("rocksdb::librocksdb")
if self.options.telemetry:
libxrpl.requires.append("opentelemetry-cpp::opentelemetry-cpp")

View File

@@ -87,6 +87,8 @@ words:
- daria
- dcmake
- dearmor
- Dedup
- dedup
- deleteme
- demultiplexer
- deserializaton
@@ -182,6 +184,7 @@ words:
- NOLINTNEXTLINE
- nonxrp
- noripple
- nostd
- nudb
- nullptr
- nunl
@@ -196,6 +199,7 @@ words:
- permissioned
- pointee
- populator
- pratik
- preauth
- preauthorization
- preauthorize
@@ -210,6 +214,7 @@ words:
- qalloc
- queuable
- Raphson
- reparent
- replayer
- rerere
- retriable
@@ -267,6 +272,7 @@ words:
- txjson
- txn
- txns
- txqueue
- txs
- UBSAN
- ubsan
@@ -297,6 +303,7 @@ words:
- venv
- vfalco
- vinnie
- wasmi
- wextra
- wptr
- writeme
@@ -313,3 +320,9 @@ words:
- xrplf
- xxhash
- xxhasher
- xychart
- otelc
- zpages
- traceql
- Gantt
- gantt

View File

@@ -0,0 +1,67 @@
# Docker Compose stack for rippled OpenTelemetry observability.
#
# Provides services for local development:
# - otel-collector: receives OTLP traces from rippled, batches and
# forwards them to Tempo. Listens on ports 4317 (gRPC)
# and 4318 (HTTP).
# - tempo: Grafana Tempo tracing backend, queryable via Grafana Explore
# on port 3000. Recommended for production (S3/GCS storage, TraceQL).
# - grafana: dashboards on port 3000, pre-configured with Tempo
# datasource.
#
# Usage:
# docker compose -f docker/telemetry/docker-compose.yml up -d
#
# Configure rippled to export traces by adding to xrpld.cfg:
# [telemetry]
# enabled=1
# endpoint=http://localhost:4318/v1/traces
version: "3.8"
services:
otel-collector:
image: otel/opentelemetry-collector-contrib:latest
command: ["--config=/etc/otel-collector-config.yaml"]
ports:
- "4317:4317" # OTLP gRPC
- "4318:4318" # OTLP HTTP
- "13133:13133" # Health check
volumes:
- ./otel-collector-config.yaml:/etc/otel-collector-config.yaml:ro
depends_on:
- tempo
networks:
- rippled-telemetry
tempo:
image: grafana/tempo:2.7.2
command: ["-config.file=/etc/tempo.yaml"]
ports:
- "3200:3200" # Tempo HTTP API (health, query)
volumes:
- ./tempo.yaml:/etc/tempo.yaml:ro
- tempo-data:/var/tempo
networks:
- rippled-telemetry
grafana:
image: grafana/grafana:latest
environment:
- GF_AUTH_ANONYMOUS_ENABLED=true
- GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
ports:
- "3000:3000"
volumes:
- ./grafana/provisioning:/etc/grafana/provisioning:ro
depends_on:
- tempo
networks:
- rippled-telemetry
volumes:
tempo-data:
networks:
rippled-telemetry:
driver: bridge

View File

@@ -0,0 +1,147 @@
# Grafana datasource provisioning for Grafana Tempo.
# Auto-configures Tempo as a trace data source on Grafana startup.
# Access Grafana at http://localhost:3000, then use Explore -> Tempo
# to browse rippled traces using TraceQL.
#
# Search filters provide pre-configured dropdowns in the Explore UI.
# Each phase adds filters for the span attributes it introduces.
# Phase 1b (infra): Base filters — node identity, service, span name, status.
# Phase 2 (RPC): RPC command, status, role filters.
# Phase 3 (TX): Transaction hash, local/peer origin, status.
# Phase 4 (Cons): Consensus mode, round, ledger sequence, close time.
apiVersion: 1
datasources:
- name: Tempo
type: tempo
access: proxy
url: http://tempo:3200
uid: tempo
jsonData:
nodeGraph:
enabled: true
serviceMap:
datasourceUid: prometheus
tracesToMetrics:
datasourceUid: prometheus
spanStartTimeShift: "-1h"
spanEndTimeShift: "1h"
search:
filters:
# --- Node identification filters ---
# service.name: logical service name (default: "rippled").
# Useful when running multiple service types in the same collector.
- id: service-name
tag: service.name
operator: "="
scope: resource
type: static
# service.instance.id: unique node identifier — defaults to the
# node's public key (e.g., nHB1X37...). Distinguishes individual
# nodes in a multi-node cluster or network.
- id: node-id
tag: service.instance.id
operator: "="
scope: resource
type: static
# service.version: rippled build version (e.g., "2.4.0-b1").
# Filter traces from specific software releases.
- id: node-version
tag: service.version
operator: "="
scope: resource
type: dynamic
# xrpl.network.id: numeric network identifier
# (0 = mainnet, 1 = testnet, 2 = devnet, etc.).
- id: network-id
tag: xrpl.network.id
operator: "="
scope: resource
type: dynamic
# xrpl.network.type: human-readable network name
# ("mainnet", "testnet", "devnet", "standalone").
- id: network-type
tag: xrpl.network.type
operator: "="
scope: resource
type: static
# --- Span intrinsic filters ---
- id: span-name
tag: name
operator: "="
scope: intrinsic
type: static
- id: span-status
tag: status
operator: "="
scope: intrinsic
type: static
- id: span-duration
tag: duration
operator: ">"
scope: intrinsic
type: static
# Phase 2: RPC tracing filters
- id: rpc-command
tag: xrpl.rpc.command
operator: "="
scope: span
type: static
- id: rpc-status
tag: xrpl.rpc.status
operator: "="
scope: span
type: dynamic
- id: rpc-role
tag: xrpl.rpc.role
operator: "="
scope: span
type: dynamic
# Phase 3: Transaction tracing filters
- id: tx-hash
tag: xrpl.tx.hash
operator: "="
scope: span
type: static
- id: tx-origin
tag: xrpl.tx.local
operator: "="
scope: span
type: dynamic
- id: tx-status
tag: xrpl.tx.status
operator: "="
scope: span
type: dynamic
# Phase 4: Consensus tracing filters
- id: consensus-mode
tag: xrpl.consensus.mode
operator: "="
scope: span
type: static
- id: consensus-round
tag: xrpl.consensus.round
operator: "="
scope: span
type: dynamic
- id: consensus-ledger-seq
tag: xrpl.consensus.ledger.seq
operator: "="
scope: span
type: static
- id: consensus-close-time-correct
tag: xrpl.consensus.close_time_correct
operator: "="
scope: span
type: dynamic
- id: consensus-state
tag: xrpl.consensus.state
operator: "="
scope: span
type: dynamic
- id: consensus-close-resolution
tag: xrpl.consensus.close_resolution_ms
operator: "="
scope: span
type: dynamic

View File

@@ -0,0 +1,34 @@
# OpenTelemetry Collector configuration for rippled development.
#
# Pipeline: OTLP receiver -> batch processor -> debug + Tempo.
# rippled sends traces via OTLP/HTTP to port 4318. The collector batches
# them and forwards to Tempo via OTLP/gRPC on the Docker network. Tempo
# is queryable via Grafana Explore using TraceQL.
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
processors:
batch:
timeout: 1s
send_batch_size: 100
exporters:
debug:
verbosity: detailed
otlp/tempo:
endpoint: tempo:4317
tls:
insecure: true
service:
pipelines:
traces:
receivers: [otlp]
processors: [batch]
exporters: [debug, otlp/tempo]

View File

@@ -0,0 +1,59 @@
# Grafana Tempo configuration for rippled telemetry stack.
#
# Runs in single-binary mode for local development.
# Receives traces via OTLP/gRPC from the OTel Collector and stores
# them locally. Queryable via Grafana Explore using the Tempo datasource.
#
# Search filters are configured on the Grafana datasource side
# (grafana/provisioning/datasources/tempo.yaml). Tempo auto-indexes
# all span attributes for search in single-binary mode.
#
# For production, replace local storage with S3/GCS backend and adjust
# retention via the compactor settings. See:
# https://grafana.com/docs/tempo/latest/configuration/
stream_over_http_enabled: true
server:
http_listen_port: 3200
distributor:
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
ingester:
max_block_duration: 5m
compactor:
compaction:
block_retention: 1h
# Enable metrics generator for service graph and span metrics.
# Produces RED metrics (rate, errors, duration) per service/span,
# feeding Grafana's service map visualization.
metrics_generator:
registry:
external_labels:
source: tempo
storage:
path: /var/tempo/generator/wal
remote_write:
- url: http://prometheus:9090/api/v1/write
overrides:
defaults:
metrics_generator:
processors:
- service-graphs
- span-metrics
storage:
trace:
backend: local
wal:
path: /var/tempo/wal
local:
path: /var/tempo/blocks

View File

@@ -109,3 +109,32 @@ Install CMake with Homebrew too:
```
brew install cmake
```
## Clang-tidy
Clang-tidy is required to run static analysis checks locally (see [CONTRIBUTING.md](../../CONTRIBUTING.md)).
It is not required to build the project. Currently this project uses clang-tidy version 21.
### Linux
LLVM 21 is not available in the default Debian 12 (Bookworm) repositories.
Install it using the official LLVM apt installer:
```
wget https://apt.llvm.org/llvm.sh
chmod +x llvm.sh
sudo ./llvm.sh 21
sudo apt install --yes clang-tidy-21
```
Then use `run-clang-tidy-21` when running clang-tidy locally.
### macOS
Install LLVM 21 via Homebrew:
```
brew install llvm@21
```
Then use `run-clang-tidy` from the LLVM 21 Homebrew prefix when running clang-tidy locally.

docs/build/telemetry.md
View File

@@ -0,0 +1,275 @@
# OpenTelemetry Tracing for Rippled
This document explains how to build rippled with OpenTelemetry distributed tracing support, configure the runtime telemetry options, and set up the observability backend to view traces.
- [OpenTelemetry Tracing for Rippled](#opentelemetry-tracing-for-rippled)
- [Overview](#overview)
- [Building with Telemetry](#building-with-telemetry)
- [Summary](#summary)
- [Build steps](#build-steps)
- [Install dependencies](#install-dependencies)
- [Call CMake](#call-cmake)
- [Build](#build)
- [Building without telemetry](#building-without-telemetry)
- [Runtime Configuration](#runtime-configuration)
- [Configuration options](#configuration-options)
- [Observability Stack](#observability-stack)
- [Start the stack](#start-the-stack)
- [Verify the stack](#verify-the-stack)
- [View traces in Grafana Explore](#view-traces-in-grafana-explore)
- [Running Tests](#running-tests)
- [Troubleshooting](#troubleshooting)
- [No traces appear in Grafana](#no-traces-appear-in-grafana)
- [Conan lockfile error](#conan-lockfile-error)
- [CMake target not found](#cmake-target-not-found)
- [Architecture](#architecture)
- [Key files](#key-files)
- [Conditional compilation](#conditional-compilation)
## Overview
Rippled supports optional [OpenTelemetry](https://opentelemetry.io/) distributed tracing.
When enabled, it instruments RPC requests with trace spans that are exported via
OTLP/HTTP to an OpenTelemetry Collector, which forwards them to a tracing backend
such as Grafana Tempo.
Telemetry is **off by default** at both compile time and runtime:
- **Compile time**: The Conan option `telemetry` and CMake option `telemetry` must be set to `True`/`ON`.
When disabled, all tracing macros compile to `((void)0)` with zero overhead.
- **Runtime**: The `[telemetry]` config section must set `enabled=1`.
When disabled at runtime, a no-op implementation is used.
## Building with Telemetry
### Summary
Follow the instructions in [BUILD.md](../../BUILD.md), with the following changes:
1. Pass `-o telemetry=True` to `conan install` to pull the `opentelemetry-cpp` dependency.
2. CMake will automatically pick up `telemetry=ON` from the Conan-generated toolchain.
3. Build as usual.
---
### Build steps
```bash
cd /path/to/rippled
rm -rf .build
mkdir .build
cd .build
```
#### Install dependencies
The `telemetry` option adds `opentelemetry-cpp/1.18.0` as a dependency.
If the Conan lockfile does not yet include this package, bypass it with `--lockfile=""`.
```bash
conan install .. \
--output-folder . \
--build missing \
--settings build_type=Debug \
-o telemetry=True \
-o tests=True \
-o xrpld=True \
--lockfile=""
```
> **Note**: The first build with telemetry may take longer as `opentelemetry-cpp`
> and its transitive dependencies are compiled from source.
#### Call CMake
The Conan-generated toolchain file sets `telemetry=ON` automatically.
No additional CMake flags are needed beyond the standard ones.
```bash
cmake .. -G Ninja \
-DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
-DCMAKE_BUILD_TYPE=Debug \
-Dtests=ON -Dxrpld=ON
```
You should see in the CMake output:
```
-- OpenTelemetry tracing enabled
```
#### Build
```bash
cmake --build . --parallel $(nproc)
```
### Building without telemetry
Omit the `-o telemetry=True` option (or pass `-o telemetry=False`).
The `opentelemetry-cpp` dependency will not be downloaded,
the `XRPL_ENABLE_TELEMETRY` preprocessor define will not be set,
and all tracing macros will compile to no-ops.
The resulting binary is identical to one built before telemetry support was added.
## Runtime Configuration
Add a `[telemetry]` section to your `xrpld.cfg` file:
```ini
[telemetry]
enabled=1
service_name=rippled
endpoint=http://localhost:4318/v1/traces
sampling_ratio=1.0
trace_rpc=1
trace_transactions=1
trace_consensus=1
trace_peer=0
```
### Configuration options
| Option | Type | Default | Description |
| --------------------- | ------ | --------------------------------- | -------------------------------------------------- |
| `enabled` | int | `0` | Enable (`1`) or disable (`0`) telemetry at runtime |
| `service_name` | string | `rippled` | Service name reported in traces |
| `service_instance_id` | string | node public key | Unique instance identifier |
| `exporter` | string | `otlp_http` | Exporter type |
| `endpoint` | string | `http://localhost:4318/v1/traces` | OTLP/HTTP collector endpoint |
| `use_tls` | int | `0` | Enable TLS for the exporter connection |
| `tls_ca_cert` | string | (empty) | Path to CA certificate for TLS |
| `sampling_ratio` | double | `1.0` | Fraction of traces to sample (`0.0` to `1.0`) |
| `batch_size` | uint32 | `512` | Maximum spans per export batch |
| `batch_delay_ms` | uint32 | `5000` | Maximum delay (ms) before flushing a batch |
| `max_queue_size` | uint32 | `2048` | Maximum spans queued in memory |
| `trace_rpc` | int | `1` | Enable RPC request tracing |
| `trace_transactions` | int | `1` | Enable transaction lifecycle tracing |
| `trace_consensus` | int | `1` | Enable consensus round tracing |
| `trace_peer` | int | `0` | Enable peer message tracing (high volume) |
| `trace_ledger` | int | `1` | Enable ledger close tracing |
## Observability Stack
A Docker Compose stack is provided in `docker/telemetry/` with three services:
| Service | Port | Purpose |
| ------------------ | ---------------------------------------------- | --------------------------------------------------- |
| **OTel Collector** | `4317` (gRPC), `4318` (HTTP), `13133` (health) | Receives OTLP spans, batches, and forwards to Tempo |
| **Tempo** | `3200` (HTTP API) | Trace storage backend |
| **Grafana** | `3000` | Dashboards (Tempo pre-configured as datasource) |
### Start the stack
```bash
docker compose -f docker/telemetry/docker-compose.yml up -d
```
### Verify the stack
```bash
# Collector health
curl http://localhost:13133
# Grafana (Explore -> Tempo for traces)
open http://localhost:3000
```
### View traces in Grafana Explore
1. Open `http://localhost:3000` in a browser.
2. Navigate to **Explore** and select the **Tempo** datasource.
3. Use **Search** or **TraceQL** to find traces by service name (e.g. `rippled`).
4. Click into any trace to see the span tree and attributes.
Traced RPC operations produce a span hierarchy like:
```
rpc.request
└── rpc.command.server_info (xrpl.rpc.command=server_info, xrpl.rpc.status=success)
```
Each span includes attributes:
- `xrpl.rpc.command` — the RPC method name
- `xrpl.rpc.version` — API version
- `xrpl.rpc.role` — `admin` or `user`
- `xrpl.rpc.status` — `success` or `error`
## Running Tests
Unit tests run with the telemetry-enabled build regardless of whether the
observability stack is running. When no collector is available, the exporter
silently drops spans with no impact on test results.
```bash
# Run all RPC tests
./xrpld --unittest=RPCCall,ServerInfo,AccountTx,LedgerRPC,Transaction --unittest-jobs $(nproc)
# Run the full test suite
./xrpld --unittest --unittest-jobs $(nproc)
```
To generate traces during manual testing, start rippled in standalone mode:
```bash
./xrpld --conf /path/to/xrpld.cfg --standalone --start
```
Then send RPC requests:
```bash
curl -s -X POST http://127.0.0.1:5005/ \
-H "Content-Type: application/json" \
-d '{"method":"server_info","params":[{}]}'
```
## Troubleshooting
### No traces appear in Grafana
1. Confirm the OTel Collector is running: `docker compose -f docker/telemetry/docker-compose.yml ps`
2. Check collector logs for errors: `docker compose -f docker/telemetry/docker-compose.yml logs otel-collector`
3. Confirm `[telemetry] enabled=1` is set in the rippled config.
4. Confirm `endpoint` points to the correct collector address (`http://localhost:4318/v1/traces`).
5. Wait for the batch delay to elapse (default `5000` ms) before checking Grafana Explore.
### Conan lockfile error
If you see `ERROR: Requirement 'opentelemetry-cpp/1.18.0' not in lockfile 'requires'`,
the lockfile was generated without the telemetry dependency.
Pass `--lockfile=""` to bypass the lockfile, or regenerate it with telemetry enabled.
### CMake target not found
If CMake reports that `opentelemetry-cpp` targets are not found,
ensure you ran `conan install` with `-o telemetry=True` and that the
Conan-generated toolchain file is being used.
The Conan package provides a single umbrella target
`opentelemetry-cpp::opentelemetry-cpp` (not individual component targets).
## Architecture
### Key files
| File | Purpose |
| ---------------------------------------------- | ----------------------------------------------------------- |
| `include/xrpl/telemetry/Telemetry.h` | Abstract telemetry interface and `Setup` struct |
| `include/xrpl/telemetry/SpanGuard.h` | RAII span guard (activates scope, ends span on destruction) |
| `src/libxrpl/telemetry/Telemetry.cpp` | OTel-backed implementation (`TelemetryImpl`) |
| `src/libxrpl/telemetry/TelemetryConfig.cpp` | Config parser (`setup_Telemetry()`) |
| `src/libxrpl/telemetry/NullTelemetry.cpp` | No-op implementation (used when disabled) |
| `src/xrpld/telemetry/TracingInstrumentation.h` | Convenience macros (`XRPL_TRACE_RPC`, etc.) |
| `src/xrpld/rpc/detail/ServerHandler.cpp` | RPC entry point instrumentation |
| `src/xrpld/rpc/detail/RPCHandler.cpp` | Per-command instrumentation |
| `docker/telemetry/docker-compose.yml` | Observability stack (Collector + Tempo + Grafana) |
| `docker/telemetry/otel-collector-config.yaml` | OTel Collector pipeline configuration |
### Conditional compilation
All OpenTelemetry SDK headers are guarded behind `#ifdef XRPL_ENABLE_TELEMETRY`.
The instrumentation macros in `TracingInstrumentation.h` compile to `((void)0)` when
the define is absent.
At runtime, if `enabled=0` is set in config (or the section is omitted), a
`NullTelemetry` implementation is used that returns no-op spans.
This two-layer approach ensures zero overhead when telemetry is not wanted.
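A minimal sketch of the two-layer gating pattern (the macro and method names below are illustrative, not the exact ones in `TracingInstrumentation.h`):
```cpp
// Sketch only: illustrative names, not the exact rippled macros.
#ifdef XRPL_ENABLE_TELEMETRY
// Layer 1 (compile time): the macro expands to real instrumentation.
#define XRPL_TRACE_SPAN(telemetry, name) \
    auto traceGuard_ = (telemetry).startSpan(name)
#else
// Compiled out: no OTel headers, no runtime cost.
#define XRPL_TRACE_SPAN(telemetry, name) ((void)0)
#endif

// Layer 2 (runtime): even when compiled in, a disabled [telemetry]
// section yields a NullTelemetry whose spans are no-ops.
```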

View File

@@ -0,0 +1,638 @@
# External Dashboard Parity — Design Spec
> **Date**: 2026-03-30
> **Status**: Draft
> **Source**: [realgrapedrop/xrpl-validator-dashboard](https://github.com/realgrapedrop/xrpl-validator-dashboard)
> **Jira Epic**: RIPD-5060
## Summary
Integrate 29 missing metrics, 18 alert rules, and enriched span attributes from the community `xrpl-validator-dashboard` into rippled's native OpenTelemetry instrumentation. Changes are distributed across phases 2, 3, 4, 6, 7, 9, 10, and 11 of the OTel PR chain.
## Gap Analysis
### Coverage Breakdown (86 external metrics)
| Status | Count | Notes |
| ------------------ | ----- | ------------------------------------------------------------- |
| Already covered | 30 | peer_count, load_factor, io_latency, uptime, overlay traffic |
| Partially covered | 3 | state_value encoding, NuDB granularity, validation_quorum |
| Missing | 29 | Validation agreement, ledger economy, peer quality, UNL health |
| N/A (external) | 24 | Monitor health, realtime duplicates, system metrics |
### Missing Metrics by Category
| Category | Metrics | Count |
| ----------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----- |
| Validation Agreement | `validations_sent_total`, `validations_checked_total`, `validation_agreements_total`, `validation_missed_total`, `validation_agreement_pct_1h/24h`, `validation_agreements_1h/24h`, `validation_missed_1h/24h`, `validation_event` | 11 |
| Ledger Economy | `ledgers_closed_total`, `ledger_age_seconds`, `base_fee_xrp`, `reserve_base_xrp`, `reserve_inc_xrp`, `transaction_rate` | 6 |
| State Tracking | `time_in_current_state_seconds`, `state_changes_total`, `validator_state_info` | 3 |
| Peer Quality | `peers_insane`, `peer_latency_p90_ms` | 2 |
| Validator Health | `amendment_blocked`, `unl_expiry_days` | 2 |
| Upgrade Awareness | `peers_higher_version_pct`, `upgrade_recommended` | 2 |
| Storage / Other | `ledger_nudb_bytes`, `jq_trans_overflow_total`, `initial_sync_duration_seconds` | 3 |
### Alert Rules (18 total, from external dashboard)
| Group | Count | Rules |
| ----------- | ----- | ---------------------------------------------------------------------------------------------------- |
| Critical | 8 | Agreement <90%, not proposing, unhealthy state, amendment blocked, UNL expiring, IO latency, load factor, peer count <5 |
| Network | 3 | Peer drop >10%/30%, P90 latency + disconnect correlation |
| Performance | 7 | CPU >80%, memory >90%, disk >85%, job queue overflow, upgrade recommended, tx rate drop, stale ledger |
---
## Branch-to-Change Mapping
### Phase 2 — `pratik/otel-phase2-rpc-tracing`
> **Ref**: Adds to existing Phase 2 task list. Consumed by Phase 7 (MetricsRegistry) and Phase 10 (validation checks).
**Task 2.8: RPC Span Attribute Enrichment**
Add node-level health context to every `rpc.command.*` span so operators can correlate RPC behavior with node state.
New span attributes on `rpc.command.*`:
| Attribute | Type | Source | Value Example |
| ----------------------------- | ------ | ---------------------------------- | ------------------------ |
| `xrpl.node.amendment_blocked` | bool | `app_.getOPs().isAmendmentBlocked()` | `true` |
| `xrpl.node.server_state` | string | `app_.getOPs().strOperatingMode()` | `"full"`, `"syncing"` |
**File**: `src/xrpld/rpc/detail/RPCHandler.cpp` (in the `rpc.command.*` span creation block, after existing setAttribute calls)
**Rationale**: RPC is the operator's primary interaction point. When a node is amendment-blocked or degraded, every RPC response is suspect. Tagging spans with this state enables TraceQL queries like `{name=~"rpc.command.*" && span.xrpl.node.amendment_blocked = true}` to find all RPCs served during a blocked period.
**Exit Criteria**:
- [ ] `rpc.command.server_info` spans carry `xrpl.node.amendment_blocked` and `xrpl.node.server_state` attributes
- [ ] No measurable latency impact (attribute values are cached atomics, not computed per-call)
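A sketch of the Task 2.8 wiring, assuming an active OTel span handle named `span` at the `rpc.command.*` creation site (the handle name and exact call shape are assumptions):
```cpp
// Sketch: enrich an rpc.command.* span with node health context.
// Values come from the sources in the table above; reading cached
// atomics keeps the per-call cost negligible.
span->SetAttribute(
    "xrpl.node.amendment_blocked", app_.getOPs().isAmendmentBlocked());
span->SetAttribute(
    "xrpl.node.server_state", app_.getOPs().strOperatingMode());
```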
---
### Phase 3 — `pratik/otel-phase3-tx-tracing`
> **Ref**: Adds to existing Phase 3 task list. Consumed by Phase 10 (validation checks).
**Task 3.7: Transaction Span Peer Version Attribute**
Add the relaying peer's rippled version to transaction receive spans to enable version-mismatch correlation.
New span attribute on `tx.receive`:
| Attribute | Type | Source | Value Example |
| ------------------- | ------ | ------------------- | ------------------ |
| `xrpl.peer.version` | string | `peer->getVersion()` | `"rippled-2.4.0"` |
**File**: `src/xrpld/overlay/detail/PeerImp.cpp` (in the `tx.receive` span block, after existing `xrpl.peer.id` setAttribute)
**Rationale**: Transaction relay is where version mismatches cause subtle serialization or validation bugs. Tracing "this tx came from a v2.3.0 peer" helps diagnose compatibility issues during network upgrades.
**Exit Criteria**:
- [ ] `tx.receive` spans carry `xrpl.peer.version` attribute with a non-empty version string
- [ ] Attribute is omitted (not empty-string) when `getVersion()` returns empty
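A sketch of the empty-version guard required by the second exit criterion (the span handle name is an assumption):
```cpp
// Sketch: tag tx.receive spans with the relaying peer's version,
// omitting the attribute entirely when getVersion() is empty.
if (auto const version = peer->getVersion(); !version.empty())
    span->SetAttribute("xrpl.peer.version", version);
```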
---
### Phase 4 — `pratik/otel-phase4-consensus-tracing`
> **Ref**: Adds to existing Phase 4 task list. Provides the span-level foundation that Phase 7 (ValidationTracker) builds upon. Consumed by Phase 10 (validation checks).
**Task 4.8: Consensus Validation Span Enrichment**
Add ledger hash and validation type to validation spans on both send and receive paths. This enables trace-level agreement analysis — filter by ledger hash to see which validators agreed.
New span attributes on `consensus.validation.send`:
| Attribute | Type | Source | Value Example |
| ---------------------------- | ------ | --------------------------------------- | -------------------------- |
| `xrpl.validation.ledger_hash` | string | Ledger hash from `validate()` call args | `"A1B2C3..."` (64-char hex) |
| `xrpl.validation.full` | bool | Whether this is a full validation | `true` |
New span attributes on `peer.validation.receive`:
| Attribute | Type | Source | Value Example |
| --------------------------------- | ------ | --------------------------------------- | -------------------------- |
| `xrpl.peer.validation.ledger_hash` | string | From deserialized STValidation object | `"A1B2C3..."` (64-char hex) |
| `xrpl.peer.validation.full` | bool | From STValidation flags | `true` |
New span attributes on `consensus.accept`:
| Attribute | Type | Source | Value Example |
| ------------------------------------ | ----- | -------------------------------------------- | ------------- |
| `xrpl.consensus.validation_quorum` | int64 | `app_.validators().quorum()` | `28` |
| `xrpl.consensus.proposers_validated` | int64 | `result.proposers` from consensus result | `35` |
**Files**:
- `src/xrpld/app/consensus/RCLConsensus.cpp` (validation.send and accept spans)
- `src/xrpld/overlay/detail/PeerImp.cpp` (peer.validation.receive span)
**Rationale**: The external dashboard's most valuable feature is validation agreement tracking. By recording the ledger hash on both outgoing and incoming validation spans, we create the raw data for agreement analysis at the trace level. Phase 7's ValidationTracker builds the metric-level aggregation on top of this.
**Exit Criteria**:
- [ ] `consensus.validation.send` spans carry `xrpl.validation.ledger_hash` and `xrpl.validation.full`
- [ ] `peer.validation.receive` spans carry `xrpl.peer.validation.ledger_hash` and `xrpl.peer.validation.full`
- [ ] `consensus.accept` spans carry `xrpl.consensus.validation_quorum` and `xrpl.consensus.proposers_validated`
- [ ] Ledger hash attributes match between send and receive for the same ledger
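A sketch of the send-side and accept-side enrichment for Task 4.8 (the locals `ledgerHash` and `isFull`, and the span handle names, are assumptions):
```cpp
// consensus.validation.send: at the validate() call site.
sendSpan->SetAttribute(
    "xrpl.validation.ledger_hash", to_string(ledgerHash));
sendSpan->SetAttribute("xrpl.validation.full", isFull);

// consensus.accept: in onAccept() scope, where both values are
// available (see the table above).
acceptSpan->SetAttribute(
    "xrpl.consensus.validation_quorum",
    static_cast<int64_t>(app_.validators().quorum()));
acceptSpan->SetAttribute(
    "xrpl.consensus.proposers_validated",
    static_cast<int64_t>(result.proposers));
```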
---
### Phase 6 — `pratik/otel-phase6-statsd`
> **Ref**: Adds to existing Phase 6 scope. No separate task list file exists for Phase 6 per project convention.
**Addition: Bridge `peerDisconnectsCharges_` metric**
The overlay already tracks resource-limit disconnects via `OverlayImpl::Stats::peerDisconnectsCharges_` (a `beast::insight::Gauge`). This metric is registered but not included in the StatsD bridge mapping.
**What to do**:
- Ensure `rippled_Overlay_Peer_Disconnects_Charges` appears in the StatsD-to-Prometheus metric name mapping
- Verify the metric appears in Prometheus after StatsD bridge is active
**File**: `src/xrpld/overlay/detail/OverlayImpl.cpp`
**Prometheus name**: `rippled_Overlay_Peer_Disconnects_Charges`
---
### Phase 7 — `pratik/otel-phase7-native-metrics`
> **Ref**: Adds to existing Phase 7 task list. This is the largest addition. Depends on Phase 4 span attributes for validation tracking context. Consumed by Phase 9 (dashboards), Phase 10 (validation), Phase 11 (alerts).
**Task 7.8: ValidationTracker — Validation Agreement Computation**
The most valuable missing component. A stateful class that tracks whether our validator's validations agree with network consensus, maintaining rolling 1h and 24h windows.
**Architecture**:
```
consensus.validation.send ─────> ValidationTracker ──────> MetricsRegistry
(records our validation (reconciles after (exports agreement
for ledger X) 8s grace period) gauges every 10s)
ledger.validate ───────────────> ValidationTracker
(records which ledger (marks ledger X as
network validated) agreed or missed)
```
**Design**:
```cpp
/// Tracks validation agreement between this node and network consensus.
///
/// ValidationTracker
/// ├── recordOurValidation(ledgerHash, ledgerSeq) // called when we send
/// ├── recordNetworkValidation(ledgerHash, seq) // called on ledger validate
/// ├── reconcile() // called periodically (timer)
/// ├── agreementPct1h() -> double // 0.0-100.0
/// ├── agreementPct24h() -> double
/// ├── agreements1h() -> uint64_t
/// ├── missed1h() -> uint64_t
/// ├── agreements24h() -> uint64_t
/// ├── missed24h() -> uint64_t
/// ├── totalAgreements() -> uint64_t
/// ├── totalMissed() -> uint64_t
/// ├── totalValidationsSent() -> uint64_t
/// └── totalValidationsChecked() -> uint64_t // all network validations seen
class ValidationTracker
{
    // Clock type assumed for the sketch; the real implementation
    // may use NetClock or another monotonic clock.
    using TimePoint = std::chrono::steady_clock::time_point;
// Ring buffer of pending ledger events (max 1000)
struct LedgerEvent {
uint256 ledgerHash;
LedgerIndex seq;
TimePoint closeTime;
bool weValidated = false; // did we send a validation for this ledger?
bool networkValidated = false; // did network validate this ledger?
bool reconciled = false; // has 8s grace period elapsed?
bool agreed = false; // after reconciliation: did we agree?
};
// Sliding window deques for pre-computed window stats
struct WindowEvent {
TimePoint time;
bool agreed;
};
std::deque<WindowEvent> window1h_; // events in last 1 hour
std::deque<WindowEvent> window24h_; // events in last 24 hours
// Reconciliation: 8s grace period after ledger close.
// If our validation hasn't arrived by then, mark as missed.
// 5-minute late repair: if a late validation arrives, correct the miss.
static constexpr auto kGracePeriod = std::chrono::seconds(8);
static constexpr auto kLateRepairWindow = std::chrono::minutes(5);
};
```
**Recording sites** (modifications to consensus code from Phase 7 branch):
| Hook Point | File | What to Record |
| --- | --- | --- |
| `validate()` in `doAccept()` | RCLConsensus.cpp | `tracker.recordOurValidation(ledgerHash, seq)` |
| `onValidation()` callback | RCLValidations path | `tracker.recordNetworkValidation(...)` — increment `validationsChecked` |
| LedgerMaster fully-validated | LedgerMaster.cpp | `tracker.recordNetworkValidation(validatedHash, seq)` |
**Key new files**:
- `src/xrpld/telemetry/ValidationTracker.h`
- `src/xrpld/telemetry/detail/ValidationTracker.cpp`
**Key modified files**:
- `src/xrpld/telemetry/MetricsRegistry.h` (add ValidationTracker member)
- `src/xrpld/telemetry/MetricsRegistry.cpp` (add gauge callback reading from tracker)
- `src/xrpld/app/consensus/RCLConsensus.cpp` (add recording hooks)
- `src/xrpld/app/ledger/detail/LedgerMaster.cpp` (add recording hook)
**Exit Criteria**:
- [ ] `ValidationTracker` correctly tracks agreement with 8s grace period
- [ ] 5-minute late repair corrects false-positive misses
- [ ] Thread-safe (atomics + mutex for window deques)
- [ ] Rolling windows correctly evict stale entries
- [ ] Unit tests for: normal agreement, missed validation, late repair, window eviction
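A sketch of the reconciliation pass under the constants above (container names follow the design sketch; correcting already-pushed window entries on late repair is elided):
```cpp
// Sketch: periodic reconcile(). After the 8s grace period an event
// is scored; a validation arriving within 5 minutes repairs a miss.
void
ValidationTracker::reconcile()
{
    auto const now = std::chrono::steady_clock::now();
    for (auto& ev : pending_)  // pending_: ring buffer of LedgerEvent
    {
        if (!ev.reconciled && now - ev.closeTime >= kGracePeriod)
        {
            ev.reconciled = true;
            ev.agreed = ev.weValidated && ev.networkValidated;
            window1h_.push_back({now, ev.agreed});
            window24h_.push_back({now, ev.agreed});
        }
        else if (
            ev.reconciled && !ev.agreed && ev.weValidated &&
            ev.networkValidated && now - ev.closeTime <= kLateRepairWindow)
        {
            ev.agreed = true;  // late repair (window fix-up elided)
        }
    }
    // Evict entries that have aged out of their windows.
    while (!window1h_.empty() &&
           now - window1h_.front().time > std::chrono::hours(1))
        window1h_.pop_front();
    while (!window24h_.empty() &&
           now - window24h_.front().time > std::chrono::hours(24))
        window24h_.pop_front();
}
```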
---
**Task 7.9: Validator Health Observable Gauges**
New MetricsRegistry observable gauge for amendment, UNL, and quorum health.
| Gauge Name | Label `metric=` | Type | Source |
| ------------------------- | ----------------------- | ------- | ----------------------------------------- |
| `rippled_validator_health` | `amendment_blocked` | int64 | `app_.getOPs().isAmendmentBlocked()` → 0/1 |
| | `unl_blocked` | int64 | `app_.getOPs().isUNLBlocked()` → 0/1 |
| | `unl_expiry_days` | double | `app_.validators().expires()` → days until expiry |
| | `validation_quorum` | int64 | `app_.validators().quorum()` |
**File**: `src/xrpld/telemetry/MetricsRegistry.cpp` (new gauge callback in `registerAsyncGauges()`)
**Exit Criteria**:
- [ ] All 4 label values emitted every 10s
- [ ] `unl_expiry_days` is negative when expired, positive when active
- [ ] Values visible in Prometheus
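A sketch of the gauge callback, assuming registry helpers `observeGauge(name, labelValue, value)` and `daysUntil(...)` that wrap the OTel observable-instrument API (both helper names are assumptions; the real MetricsRegistry interface may differ):
```cpp
// Sketch: emit the four validator-health series every 10s from the
// async gauge callback registered in registerAsyncGauges().
void
MetricsRegistry::observeValidatorHealth()
{
    observeGauge("rippled_validator_health", "amendment_blocked",
        app_.getOPs().isAmendmentBlocked() ? 1.0 : 0.0);
    observeGauge("rippled_validator_health", "unl_blocked",
        app_.getOPs().isUNLBlocked() ? 1.0 : 0.0);
    observeGauge("rippled_validator_health", "unl_expiry_days",
        daysUntil(app_.validators().expires()));  // negative if expired
    observeGauge("rippled_validator_health", "validation_quorum",
        static_cast<double>(app_.validators().quorum()));
}
```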
---
**Task 7.10: Peer Quality Observable Gauges**
New MetricsRegistry observable gauge for peer health aggregates.
| Gauge Name | Label `metric=` | Type | Source |
| ----------------------- | ------------------------- | ------- | ---------------------------------------------- |
| `rippled_peer_quality` | `peer_latency_p90_ms` | double | Iterate peers, compute P90 from `latency_` |
| | `peers_insane_count` | int64 | Count peers with `tracking_ == diverged` |
| | `peers_higher_version_pct` | double | Compare `getVersion()` to own version |
| | `upgrade_recommended` | int64 | 1 if `peers_higher_version_pct > 60%` |
**Implementation note**: The callback iterates `app_.overlay().foreach(...)` to collect per-peer latency and version data. This runs every 10s on the metrics reader thread — acceptable overhead for ~50-200 peers.
**File**: `src/xrpld/telemetry/MetricsRegistry.cpp`
**Exit Criteria**:
- [ ] P90 latency computed correctly (sort peer latencies, pick 90th percentile)
- [ ] Insane count matches `peers` RPC output
- [ ] Version comparison handles format variations (e.g., "rippled-2.4.0-rc1")
- [ ] Values visible in Prometheus
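A sketch of the P90 computation named in the exit criteria (the per-peer latency accessor is an assumption):
```cpp
// Sketch: collect per-peer latency samples, then select the 90th
// percentile with a partial sort.
std::vector<double> latencies;
app_.overlay().foreach([&](std::shared_ptr<Peer> const& peer) {
    if (auto const l = peer->latency())  // assumed optional<milliseconds>
        latencies.push_back(static_cast<double>(l->count()));
});
double p90 = 0.0;
if (!latencies.empty())
{
    auto const idx = latencies.size() * 9 / 10;  // floor, always < size
    std::nth_element(
        latencies.begin(), latencies.begin() + idx, latencies.end());
    p90 = latencies[idx];
}
```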
---
**Task 7.11: Ledger Economy Observable Gauges**
New MetricsRegistry observable gauge for fee and ledger metrics.
| Gauge Name | Label `metric=` | Type | Source |
| ------------------------ | ---------------------- | ------- | ----------------------------------------------- |
| `rippled_ledger_economy` | `base_fee_xrp` | double | `app_.getFeeTrack().getBaseFee()` → drops |
| | `reserve_base_xrp` | double | From validated ledger fee settings |
| | `reserve_inc_xrp` | double | From validated ledger fee settings |
| | `ledger_age_seconds` | double | `now - lastValidatedCloseTime` |
| | `transaction_rate` | double | Derived: tx count delta / time delta |
**File**: `src/xrpld/telemetry/MetricsRegistry.cpp`
**Exit Criteria**:
- [ ] Fee values match `server_info` RPC output
- [ ] `ledger_age_seconds` increases monotonically between ledger closes, resets on close
- [ ] `transaction_rate` is smoothed (rolling average, not instantaneous)
---
**Task 7.12: State Tracking Observable Gauges**
New MetricsRegistry observable gauge for node state duration.
| Gauge Name | Label `metric=` | Type | Source |
| ------------------------- | -------------------------------- | ------- | ---------------------------------------------- |
| `rippled_state_tracking` | `state_value` | int64 | 0-6 numeric encoding matching external dashboard |
| | `time_in_current_state_seconds` | double | `now - lastModeChangeTime` |
**State value encoding**:
rippled's `OperatingMode` enum maps 0-4 (DISCONNECTED through FULL). The external dashboard extends this to 0-6 by combining operating mode with consensus participation:
| Value | State | Source |
| ----- | ------------ | ---------------------------------------------------------------- |
| 0 | disconnected | `OperatingMode::DISCONNECTED` |
| 1 | connected | `OperatingMode::CONNECTED` |
| 2 | syncing | `OperatingMode::SYNCING` |
| 3 | tracking | `OperatingMode::TRACKING` |
| 4 | full | `OperatingMode::FULL` and not validating |
| 5 | validating | `OperatingMode::FULL` and `mConsensus.validating()` is true |
| 6 | proposing | `OperatingMode::FULL` and consensus mode is `proposing` |
**Note**: Values 5-6 require checking both `OperatingMode` and `ConsensusMode`. The callback should derive these from `app_.getOPs().getOperatingMode()` combined with `mConsensus.mode()`. If operating mode is FULL and consensus is proposing → 6; if FULL and validating → 5; otherwise use the raw OperatingMode enum value.
**File**: `src/xrpld/telemetry/MetricsRegistry.cpp`
**Exit Criteria**:
- [ ] `state_value` matches external dashboard encoding
- [ ] `time_in_current_state_seconds` resets on mode change
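A sketch of the derivation described in the note above (enum and accessor names follow rippled's `OperatingMode`/`ConsensusMode`; the exact wiring is an assumption):
```cpp
// Sketch: map OperatingMode + consensus participation to 0-6.
int
stateValue() const
{
    auto const mode = app_.getOPs().getOperatingMode();
    if (mode == OperatingMode::FULL)
    {
        if (mConsensus.mode() == ConsensusMode::proposing)
            return 6;  // proposing
        if (mConsensus.validating())
            return 5;  // validating
        return 4;      // full, not validating
    }
    // 0-3: DISCONNECTED, CONNECTED, SYNCING, TRACKING
    return static_cast<int>(mode);
}
```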
---
**Task 7.13: Storage Detail Observable Gauge**
| Gauge Name | Label `metric=` | Type | Source |
| -------------------------- | ---------------- | ----- | ----------------------------------------- |
| `rippled_storage_detail` | `nudb_bytes` | int64 | NuDB backend file size (filesystem stat) |
**File**: `src/xrpld/telemetry/MetricsRegistry.cpp`
**Exit Criteria**:
- [ ] NuDB file size reported in bytes
- [ ] Gracefully returns 0 if NuDB not configured
---
**Task 7.14: New Synchronous Counters**
New counters incremented at event sites. Declared in MetricsRegistry, recording sites added in consensus/overlay/network code.
| Counter Name | Increment Site | Source File |
| -------------------------------------- | --------------------------------- | ---------------------- |
| `rippled_ledgers_closed_total` | `onAccept()` in consensus | RCLConsensus.cpp |
| `rippled_validations_sent_total` | `validate()` in consensus | RCLConsensus.cpp |
| `rippled_validations_checked_total` | Network validation received | LedgerMaster.cpp |
| `rippled_validation_agreements_total` | ValidationTracker reconciliation | ValidationTracker.cpp |
| `rippled_validation_missed_total` | ValidationTracker reconciliation | ValidationTracker.cpp |
| `rippled_state_changes_total` | `setMode()` in NetworkOPs | NetworkOPs.cpp |
| `rippled_jq_trans_overflow_total` | Job queue overflow path | JobQueue.cpp |
**Key modified files**:
- `src/xrpld/telemetry/MetricsRegistry.h/.cpp` (counter declarations)
- `src/xrpld/app/consensus/RCLConsensus.cpp` (recording: ledgers_closed, validations_sent)
- `src/xrpld/app/ledger/detail/LedgerMaster.cpp` (recording: validations_checked)
- `src/xrpld/app/misc/NetworkOPs.cpp` (recording: state_changes)
**Exit Criteria**:
- [ ] All 7 counters monotonically increase during normal operation
- [ ] Counter values match expected rates (e.g., ledgers_closed ≈ 1 per 3-5s)
- [ ] Values visible in Prometheus
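A sketch of the counter pattern: the instrument is created once in MetricsRegistry and incremented synchronously at the event site (member and accessor names are assumptions):
```cpp
// In MetricsRegistry setup: one instrument per counter.
ledgersClosedTotal_ = meter_->CreateUInt64Counter(
    "rippled_ledgers_closed_total", "Ledgers accepted by this node");

// At the recording site, e.g. RCLConsensus::onAccept():
metrics.ledgersClosedTotal().Add(1);
```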
---
**Task 7.15: Validation Agreement Observable Gauge**
Reads from the `ValidationTracker` (Task 7.8) to export rolling window stats.
| Gauge Name | Label `metric=` | Type | Source |
| --------------------------------- | -------------------------- | ------ | ------------------------------- |
| `rippled_validation_agreement` | `agreement_pct_1h` | double | `tracker.agreementPct1h()` |
| | `agreements_1h` | int64 | `tracker.agreements1h()` |
| | `missed_1h` | int64 | `tracker.missed1h()` |
| | `agreement_pct_24h` | double | `tracker.agreementPct24h()` |
| | `agreements_24h` | int64 | `tracker.agreements24h()` |
| | `missed_24h` | int64 | `tracker.missed24h()` |
**File**: `src/xrpld/telemetry/MetricsRegistry.cpp`
**Exit Criteria**:
- [ ] Agreement percentages in range [0.0, 100.0]
- [ ] Window stats match manual count from validation counters
- [ ] Percentages stabilize after 1h/24h of operation
---
### Phase 9 — `pratik/otel-phase9-metric-gap-fill`
> **Ref**: Adds to existing Phase 9 task list. Depends on Phase 7 gauges/counters. Consumed by Phase 10 (dashboard load checks).
**Task 9.11: Validator Health Dashboard**
New Grafana dashboard: `rippled-validator-health.json`
| Panel | Type | PromQL |
| --------------------------- | ---------- | -------------------------------------------------------------------- |
| Agreement % (1h) | stat | `rippled_validation_agreement{metric="agreement_pct_1h"}` |
| Agreement % (24h) | stat | `rippled_validation_agreement{metric="agreement_pct_24h"}` |
| Agreements vs Missed (1h) | bargauge | `agreements_1h` and `missed_1h` side by side |
| Agreements vs Missed (24h) | bargauge | `agreements_24h` and `missed_24h` side by side |
| Validation Rate | stat | `rate(rippled_validations_sent_total[5m]) * 60` |
| Validations Checked Rate | stat | `rate(rippled_validations_checked_total[5m]) * 60` |
| Amendment Blocked | stat | `rippled_validator_health{metric="amendment_blocked"}` |
| UNL Expiry (days) | stat | `rippled_validator_health{metric="unl_expiry_days"}` |
| Validation Quorum | stat | `rippled_validator_health{metric="validation_quorum"}` |
| State Value Timeline | timeseries | `rippled_state_tracking{metric="state_value"}` |
| Time in Current State | stat | `rippled_state_tracking{metric="time_in_current_state_seconds"}` |
| State Changes Rate | stat | `rate(rippled_state_changes_total[1h])` |
| Ledgers Closed Rate | stat | `rate(rippled_ledgers_closed_total[5m]) * 60` |
**Dashboard conventions**: `$node` template variable for `exported_instance` filtering, dark theme, matching existing panel sizes and color schemes.
---
**Task 9.12: Peer Quality Dashboard**
New Grafana dashboard: `rippled-peer-quality.json`
| Panel | Type | PromQL |
| --------------------------- | ---------- | --------------------------------------------------------------------- |
| P90 Peer Latency | timeseries | `rippled_peer_quality{metric="peer_latency_p90_ms"}` |
| Insane/Diverged Peers | stat | `rippled_peer_quality{metric="peers_insane_count"}` |
| Higher Version Peers % | stat | `rippled_peer_quality{metric="peers_higher_version_pct"}` |
| Upgrade Recommended | stat | `rippled_peer_quality{metric="upgrade_recommended"}` |
| Resource Disconnects | timeseries | `rippled_Overlay_Peer_Disconnects_Charges` |
| Inbound vs Outbound | bargauge | `rippled_Peer_Finder_Active_Inbound_Peers`, `..._Outbound_Peers` |
---
**Task 9.13: Ledger Economy Dashboard Panels**
Add a "Ledger Economy" row to the existing `system-node-health.json` dashboard:
| Panel | Type | PromQL |
| --------------------- | ---------- | -------------------------------------------------------------- |
| Base Fee (drops) | stat | `rippled_ledger_economy{metric="base_fee_xrp"}` |
| Reserve Base (drops) | stat | `rippled_ledger_economy{metric="reserve_base_xrp"}` |
| Reserve Inc (drops) | stat | `rippled_ledger_economy{metric="reserve_inc_xrp"}` |
| Ledger Age | stat | `rippled_ledger_economy{metric="ledger_age_seconds"}` |
| Transaction Rate | timeseries | `rippled_ledger_economy{metric="transaction_rate"}` |
---
### Phase 10 — `pratik/otel-phase10-workload-validation`
> **Ref**: Adds to existing Phase 10 task list. Validates all additions from Phases 2-9.
**Task 10.6: External Dashboard Parity Validation Checks**
Add checks to `validate_telemetry.py` for all new span attributes and metrics.
**New span attribute checks (~8)**:
| Span Name | New Attribute |
| --------------------------- | ------------------------------------ |
| `rpc.command.server_info` | `xrpl.node.amendment_blocked` |
| `rpc.command.server_info` | `xrpl.node.server_state` |
| `tx.receive` | `xrpl.peer.version` |
| `consensus.validation.send` | `xrpl.validation.ledger_hash` |
| `consensus.validation.send` | `xrpl.validation.full` |
| `peer.validation.receive` | `xrpl.peer.validation.ledger_hash` |
| `consensus.accept` | `xrpl.consensus.validation_quorum` |
| `consensus.accept` | `xrpl.consensus.proposers_validated` |
**New metric existence checks (~13)**:
| Metric Name |
| ------------------------------------------------------------- |
| `rippled_validation_agreement{metric="agreement_pct_1h"}` |
| `rippled_validation_agreement{metric="agreement_pct_24h"}` |
| `rippled_validator_health{metric="amendment_blocked"}` |
| `rippled_validator_health{metric="unl_expiry_days"}` |
| `rippled_peer_quality{metric="peer_latency_p90_ms"}` |
| `rippled_peer_quality{metric="peers_insane_count"}` |
| `rippled_ledger_economy{metric="base_fee_xrp"}` |
| `rippled_ledger_economy{metric="transaction_rate"}` |
| `rippled_state_tracking{metric="state_value"}` |
| `rippled_ledgers_closed_total` |
| `rippled_validations_sent_total` |
| `rippled_state_changes_total` |
| `rippled_storage_detail{metric="nudb_bytes"}` |
**New dashboard load checks (~3)**:
| Dashboard |
| --------------------------- |
| `rippled-validator-health` |
| `rippled-peer-quality` |
| `system-node-health` (updated) |
**New metric value sanity checks (~4)**:
| Check | Condition |
| -------------------------------------------------------- | -------------------- |
| `validation_agreement_pct_1h` | in [0, 100] |
| `unl_expiry_days` | > 0 (not expired) |
| `peer_latency_p90_ms` | > 0 (peers exist) |
| `state_value` | in [0, 6] |
**Total new checks: ~28** (bringing total from 73 to ~101)
---
### Phase 11 — (future branch)
> **Ref**: Adds to existing Phase 11 task list. Depends on Phase 7 metrics and Phase 9 dashboards.
**Task 11.9: Alert Rules from External Dashboard**
Port 18 alert rules from the external `xrpl-validator-dashboard` to Grafana alerting provisioning.
**Critical Group** (8 rules, eval interval 10s):
| Rule | Condition | For |
| ------------------------- | ----------------------------------------------------------------- | ---- |
| Agreement Below 90% | `rippled_validation_agreement{metric="agreement_pct_24h"} < 90` | 30s |
| Not Proposing | `rippled_state_tracking{metric="state_value"} < 6` | 10s |
| Unhealthy State | `rippled_state_tracking{metric="state_value"} < 4` | 10s |
| Amendment Blocked | `rippled_validator_health{metric="amendment_blocked"} == 1` | 1m |
| UNL Expiring | `rippled_validator_health{metric="unl_expiry_days"} < 14` | 1h |
| High IO Latency | `histogram_quantile(0.95, rate(rippled_ios_latency_bucket[5m])) > 50` | 1m |
| High Load Factor | `rippled_load_factor_metrics{metric="load_factor"} > 1000` | 1m |
| Peer Count Critical | `rippled_server_info{metric="peers"} < 5` | 1m |
**Network Group** (3 rules, eval interval 10s):
| Rule | Condition | For |
| ---------------------- | --------------------------------------------------------------------- | ---- |
| Peer Drop >10% | `delta(rippled_server_info{metric="peers"}[30s]) / ... * 100 < -10` | 30s |
| Peer Drop >30% | Same formula, threshold -30 | 30s |
| P90 Latency + Disconnects | `peer_latency_p90_ms > 500 AND rate(disconnects) > 0` | 2m |
**Performance Group** (7 rules, eval interval 10s):
| Rule | Condition | For |
| -------------------- | ------------------------------------------------------------ | ---- |
| CPU High | Per-core CPU > 80% | 2m |
| Memory Critical | Memory usage > 90% | 1m |
| Disk Warning | Disk usage > 85% | 2m |
| Job Queue Overflow | `rate(rippled_jq_trans_overflow_total[5m]) > 0` | 1m |
| Upgrade Recommended | `rippled_peer_quality{metric="peers_higher_version_pct"} > 60` | 1m |
| TX Rate Drop | Transaction rate dropped > 50% in 5m window | 5m |
| Stale Ledger | `rippled_ledger_economy{metric="ledger_age_seconds"} > 30` | 1m |
**Notification channels**: Template configs for Email/SMTP, Discord, Slack, PagerDuty.
**Files**:
- `docker/telemetry/grafana/alerting/alert-rules.yaml` (new or extend existing)
- `docker/telemetry/grafana/alerting/contact-points.yaml`
- `docker/telemetry/grafana/alerting/notification-policies.yaml`
---
**Task 11.10: Dual-Datasource Architecture Documentation**
Document the external dashboard's "fast path" pattern as a future optimization for real-time panels:
- **Pattern**: A lightweight Prometheus scrape endpoint (separate from OTLP pipeline) that polls critical metrics every 2-5s, bypassing the 10s OTLP metric reader interval and Prometheus scrape interval.
- **Use case**: Real-time state panels (server state, ledger age, peer count) where 10-15s latency is too slow.
- **Decision**: Document as a future option, not implement now. Current 10s interval is acceptable for v1.
**File**: `OpenTelemetryPlan/Phase11_taskList.md` (documentation task, no code)
---
## Documentation Updates
### `docs/telemetry-runbook.md` (on Phase 9 branch)
Add new sections after "Phase 9: OTel Metrics Alerting Rules":
1. **Validator Health Monitoring** — explains agreement tracking, amendment blocked, UNL expiry, with example PromQL queries
2. **Peer Quality Monitoring** — explains P90 latency, insane peers, version awareness
3. **Ledger Economy Monitoring** — explains fee/reserve gauges, transaction rate, ledger age
4. **Validation Agreement Explained** — operator-facing explanation of the reconciliation algorithm (8s grace, 5m late repair), what "missed" means, and when to worry
### `OpenTelemetryPlan/09-data-collection-reference.md` (on Phase 9 branch)
Add new metric tables in a "Phase 7+: External Dashboard Parity" section covering all 29 new metrics with their gauge names, label values, types, and sources.
---
## Cross-Phase Dependency Chain
```
Phase 2 (span attrs: amendment_blocked, server_state)
Phase 3 (span attrs: peer.version)
Phase 4 (span attrs: validation.ledger_hash, validation.full, quorum)
Phase 6 (StatsD bridge: peerDisconnectsCharges)
├── all above rebase into ──>
Phase 7 (ValidationTracker + 7 gauges + 7 counters + agreement gauge)
Phase 9 (3 dashboards + ledger economy panels + runbook + data-collection-ref)
Phase 10 (28 new validation checks in validate_telemetry.py)
Phase 11 (18 alert rules + dual-datasource docs)
```
## Rebase Strategy
After committing changes to each branch (starting from Phase 2):
1. Commit on `pratik/otel-phase2-rpc-tracing`
2. Rebase `phase3` onto `phase2`, resolve conflicts (task list files only — low risk)
3. Commit on `phase3`, rebase `phase4` onto `phase3`
4. Continue through chain: 4 → 5 → 5b → 6 → 7 → 8 → 9 → 10
5. Force-push-with-lease all affected branches
Since these are documentation-only changes (task list .md files), merge conflicts should be minimal — each file is unique to its branch.

View File

@@ -1,73 +0,0 @@
#pragma once
#include <xrpl/beast/utility/Journal.h>
#include <chrono>
#include <cstdint>
#include <string_view>
namespace xrpl {
// cSpell:ignore ptmalloc
// -----------------------------------------------------------------------------
// Allocator interaction note:
// - This facility invokes glibc's malloc_trim(0) on Linux/glibc to request that
// ptmalloc return free heap pages to the OS.
// - If an alternative allocator (e.g. jemalloc or tcmalloc) is linked or
// preloaded (LD_PRELOAD), calling glibc's malloc_trim typically has no effect
// on the *active* heap. The call is harmless but may not reclaim memory
// because those allocators manage their own arenas.
// - Only glibc sbrk/arena space is eligible for trimming; large mmap-backed
// allocations are usually returned to the OS on free regardless of trimming.
// - Call at known reclamation points (e.g., after cache sweeps / online delete)
// and consider rate limiting to avoid churn.
// -----------------------------------------------------------------------------
struct MallocTrimReport
{
bool supported{false};
int trimResult{-1};
std::int64_t rssBeforeKB{-1};
std::int64_t rssAfterKB{-1};
std::chrono::microseconds durationUs{-1};
std::int64_t minfltDelta{-1};
std::int64_t majfltDelta{-1};
[[nodiscard]] std::int64_t
deltaKB() const noexcept
{
if (rssBeforeKB < 0 || rssAfterKB < 0)
return 0;
return rssAfterKB - rssBeforeKB;
}
};
/**
* @brief Attempt to return freed memory to the operating system.
*
* On Linux with glibc malloc, this issues ::malloc_trim(0), which may release
* free space from ptmalloc arenas back to the kernel. On other platforms, or if
* a different allocator is in use, this function is a no-op and the report will
* indicate that trimming is unsupported or had no effect.
*
* @param tag Identifier for logging/debugging purposes.
* @param journal Journal for diagnostic logging.
* @return Report containing before/after metrics and the trim result.
*
* @note If an alternative allocator (jemalloc/tcmalloc) is linked or preloaded,
* calling glibc's malloc_trim may have no effect on the active heap. The
* call is harmless but typically does not reclaim memory under those
* allocators.
*
* @note Only memory served from glibc's sbrk/arena heaps is eligible for trim.
* Large allocations satisfied via mmap are usually returned on free
* independently of trimming.
*
* @note Intended for use after operations that free significant memory (e.g.,
* cache sweeps, ledger cleanup, online delete). Consider rate limiting.
*/
MallocTrimReport
mallocTrim(std::string_view tag, beast::Journal journal);
} // namespace xrpl

View File

@@ -15,7 +15,7 @@
#define ALWAYS_OR_UNREACHABLE(cond, message) assert((message) && (cond))
#define SOMETIMES(cond, message, ...)
#define REACHABLE(message, ...)
#define UNREACHABLE(message, ...) assert((message) && false)
#define UNREACHABLE(message, ...) assert((message) && false) // NOLINT(misc-static-assert)
#endif
#define XRPL_ASSERT ALWAYS_OR_UNREACHABLE

View File

@@ -70,14 +70,24 @@ JobQueue::Coro::resume()
running_ = true;
}
{
std::lock_guard lock(jq_.m_mutex);
std::lock_guard lk(jq_.m_mutex);
--jq_.nSuspend_;
}
auto saved = detail::getLocalValues().release();
detail::getLocalValues().reset(&lvs_);
std::lock_guard lock(mutex_);
XRPL_ASSERT(static_cast<bool>(coro_), "xrpl::JobQueue::Coro::resume : is runnable");
coro_();
// A late resume() can arrive after the coroutine has already completed.
// This is an expected (if rare) outcome of the race condition documented
// in JobQueue.h:354-377 where post() schedules a resume job before the
// coroutine yields — the mutex serializes access, but by the time this
// resume() acquires the lock the coroutine may have already run to
// completion. Calling operator() on a completed boost::coroutine2 is
// undefined behavior, so we must check and skip invoking the coroutine
// body if it has already completed.
if (coro_)
{
coro_();
}
detail::getLocalValues().release();
detail::getLocalValues().reset(saved);
std::lock_guard lk(mutex_run_);

View File

@@ -99,8 +99,8 @@ public:
Effects:
The coroutine continues execution from where it last left off
using this same thread.
Undefined behavior if called after the coroutine has completed
with a return (as opposed to a yield()).
If the coroutine has already completed, returns immediately
(handles the documented post-before-yield race condition).
Undefined behavior if resume() or post() called consecutively
without a corresponding yield.
*/
@@ -316,7 +316,7 @@ private:
// Returns the limit of running jobs for the given job type.
// For jobs with no limit, we return the largest int. Hopefully that
// will be enough.
int
static int
getJobLimit(JobType type);
};
@@ -357,8 +357,10 @@ private:
If the post() job were to be executed before yield(), undefined behavior
would occur. The lock ensures that coro_ is not called again until we exit
the coroutine. At which point a scheduled resume() job waiting on the lock
would gain entry, harmlessly call coro_ and immediately return as we have
already completed the coroutine.
would gain entry. resume() checks if the coroutine has already completed
(coro_ converts to false) and, if so, skips invoking operator() since
calling operator() on a completed boost::coroutine2 pull_type is undefined
behavior.
The race condition occurs as follows:

View File

@@ -3,7 +3,6 @@
#include <xrpl/basics/Blob.h>
#include <xrpl/basics/SHAMapHash.h>
#include <xrpl/basics/TaggedCache.h>
#include <xrpl/ledger/CachedSLEs.h>
#include <boost/asio.hpp>
@@ -19,10 +18,27 @@ class Manager;
namespace perf {
class PerfLog;
}
namespace telemetry {
class Telemetry;
}
// This is temporary until we migrate all code to use ServiceRegistry.
class Application;
template <
class Key,
class T,
bool IsKeyCache,
class SharedWeakUnionPointer,
class SharedPointerType,
class Hash,
class KeyEqual,
class Mutex>
class TaggedCache;
class STLedgerEntry;
using SLE = STLedgerEntry;
using CachedSLEs = TaggedCache<uint256, SLE const>;
// Forward declarations
class AcceptedLedger;
class AmendmentTable;
@@ -45,7 +61,7 @@ class NetworkIDService;
class OpenLedger;
class OrderBookDB;
class Overlay;
class PathRequests;
class PathRequestManager;
class PeerReservationTable;
class PendingSaves;
class RelationalDatabase;
@@ -89,7 +105,7 @@ public:
getNodeFamily() = 0;
virtual TimeKeeper&
timeKeeper() = 0;
getTimeKeeper() = 0;
virtual JobQueue&
getJobQueue() = 0;
@@ -98,7 +114,7 @@ public:
getTempNodeCache() = 0;
virtual CachedSLEs&
cachedSLEs() = 0;
getCachedSLEs() = 0;
virtual NetworkIDService&
getNetworkIDService() = 0;
@@ -120,26 +136,26 @@ public:
getValidations() = 0;
virtual ValidatorList&
validators() = 0;
getValidators() = 0;
virtual ValidatorSite&
validatorSites() = 0;
getValidatorSites() = 0;
virtual ManifestCache&
validatorManifests() = 0;
getValidatorManifests() = 0;
virtual ManifestCache&
publisherManifests() = 0;
getPublisherManifests() = 0;
// Network services
virtual Overlay&
overlay() = 0;
getOverlay() = 0;
virtual Cluster&
cluster() = 0;
getCluster() = 0;
virtual PeerReservationTable&
peerReservations() = 0;
getPeerReservations() = 0;
virtual Resource::Manager&
getResourceManager() = 0;
@@ -174,13 +190,13 @@ public:
getLedgerReplayer() = 0;
virtual PendingSaves&
pendingSaves() = 0;
getPendingSaves() = 0;
virtual OpenLedger&
openLedger() = 0;
getOpenLedger() = 0;
virtual OpenLedger const&
openLedger() const = 0;
getOpenLedger() const = 0;
// Transaction and operation services
virtual NetworkOPs&
@@ -195,8 +211,8 @@ public:
virtual TxQ&
getTxQ() = 0;
virtual PathRequests&
getPathRequests() = 0;
virtual PathRequestManager&
getPathRequestManager() = 0;
// Server services
virtual ServerHandler&
@@ -205,21 +221,24 @@ public:
virtual perf::PerfLog&
getPerfLog() = 0;
virtual telemetry::Telemetry&
getTelemetry() = 0;
// Configuration and state
virtual bool
isStopping() const = 0;
virtual beast::Journal
journal(std::string const& name) = 0;
getJournal(std::string const& name) = 0;
virtual boost::asio::io_context&
getIOContext() = 0;
virtual Logs&
logs() = 0;
getLogs() = 0;
virtual std::optional<uint256> const&
trapTxID() const = 0;
getTrapTxID() const = 0;
/** Retrieve the "wallet database" */
virtual DatabaseCon&
@@ -228,7 +247,7 @@ public:
// Temporary: Get the underlying Application for functions that haven't
// been migrated yet. This should be removed once all code is migrated.
virtual Application&
app() = 0;
getApp() = 0;
};
} // namespace xrpl

View File

@@ -1,13 +1,12 @@
#pragma once
#include <xrpld/core/Config.h>
#include <xrpld/core/TimeKeeper.h>
#include <xrpl/basics/CountedObject.h>
#include <xrpl/beast/utility/Journal.h>
#include <xrpl/ledger/CachedView.h>
#include <xrpl/ledger/View.h>
#include <xrpl/protocol/Fees.h>
#include <xrpl/protocol/Indexes.h>
#include <xrpl/protocol/Rules.h>
#include <xrpl/protocol/STLedgerEntry.h>
#include <xrpl/protocol/Serializer.h>
#include <xrpl/protocol/TxMeta.h>
@@ -15,7 +14,7 @@
namespace xrpl {
class Application;
class ServiceRegistry;
class Job;
class TransactionMaster;
@@ -83,21 +82,26 @@ public:
*/
Ledger(
create_genesis_t,
Config const& config,
Rules const& rules,
Fees const& fees,
std::vector<uint256> const& amendments,
Family& family);
Ledger(LedgerHeader const& info, Config const& config, Family& family);
Ledger(LedgerHeader const& info, Rules const& rules, Family& family);
/** Used for ledgers loaded from JSON files
@param acquire If true, acquires the ledger if not found locally
@note The fees parameter provides default values, but setup() may
override them from the ledger state if fee-related SLEs exist.
*/
Ledger(
LedgerHeader const& info,
bool& loaded,
bool acquire,
Config const& config,
Rules const& rules,
Fees const& fees,
Family& family,
beast::Journal j);
@@ -113,7 +117,8 @@ public:
Ledger(
std::uint32_t ledgerSeq,
NetClock::time_point closeTime,
Config const& config,
Rules const& rules,
Fees const& fees,
Family& family);
~Ledger() = default;
@@ -322,7 +327,7 @@ public:
walkLedger(beast::Journal j, bool parallel = false) const;
bool
assertSensible(beast::Journal ledgerJ) const;
isSensible() const;
void
invariants() const;
@@ -379,8 +384,26 @@ private:
bool
setup();
void
defaultFees(Config const& config);
/** @brief Deserialize a SHAMapItem containing a single STTx.
*
* @param item The SHAMapItem to deserialize.
* @return A shared pointer to the deserialized transaction.
* @throw May throw on deserialization error.
*/
static std::shared_ptr<STTx const>
deserializeTx(SHAMapItem const& item);
/** @brief Deserialize a SHAMapItem containing STTx + STObject metadata.
*
* The SHAMapItem must contain two variable length serialization objects.
*
* @param item The SHAMapItem to deserialize.
* @return A pair containing shared pointers to the deserialized transaction
* and metadata.
* @throw May throw on deserialization error.
*/
static std::pair<std::shared_ptr<STTx const>, std::shared_ptr<STObject const>>
deserializeTxPlusMeta(SHAMapItem const& item);
bool mImmutable;
@@ -402,54 +425,4 @@ private:
/** A ledger wrapped in a CachedView. */
using CachedLedger = CachedView<Ledger>;
//------------------------------------------------------------------------------
//
// API
//
//------------------------------------------------------------------------------
extern bool
pendSaveValidated(
Application& app,
std::shared_ptr<Ledger const> const& ledger,
bool isSynchronous,
bool isCurrent);
std::shared_ptr<Ledger>
loadLedgerHelper(LedgerHeader const& sinfo, Application& app, bool acquire);
std::shared_ptr<Ledger>
loadByIndex(std::uint32_t ledgerIndex, Application& app, bool acquire = true);
std::shared_ptr<Ledger>
loadByHash(uint256 const& ledgerHash, Application& app, bool acquire = true);
// Fetch the ledger with the highest sequence contained in the database
extern std::tuple<std::shared_ptr<Ledger>, std::uint32_t, uint256>
getLatestLedger(Application& app);
/** Deserialize a SHAMapItem containing a single STTx
Throw:
May throw on deserialization error
*/
std::shared_ptr<STTx const>
deserializeTx(SHAMapItem const& item);
/** Deserialize a SHAMapItem containing STTx + STObject metadata
The SHAMap must contain two variable length
serialization objects.
Throw:
May throw on deserialization error
*/
std::pair<std::shared_ptr<STTx const>, std::shared_ptr<STObject const>>
deserializeTxPlusMeta(SHAMapItem const& item);
uint256
calculateLedgerHash(LedgerHeader const& info);
} // namespace xrpl

View File

@@ -77,16 +77,16 @@ public:
If the object is not found or an error is encountered, the
result will indicate the condition.
@note This will be called concurrently.
@param hash The hash of the object.
@param key A pointer to the key data.
@param pObject [out] The created object if successful.
@return The result of the operation.
*/
virtual Status
fetch(uint256 const& hash, std::shared_ptr<NodeObject>* pObject) = 0;
fetch(void const* key, std::shared_ptr<NodeObject>* pObject) = 0;
/** Fetch a batch synchronously. */
virtual std::pair<std::vector<std::shared_ptr<NodeObject>>, Status>
fetchBatch(std::vector<uint256> const& hashes) = 0;
fetchBatch(std::vector<uint256 const*> const& hashes) = 0;
/** Store a single object.
Depending on the implementation this may happen immediately

View File

@@ -85,6 +85,15 @@ message TMPublicKey {
// If you want to send an amount that is greater than any single address of yours
// you must first combine coins from one address to another.
// Trace context for OpenTelemetry distributed tracing across nodes.
// Uses W3C Trace Context format internally.
message TraceContext {
optional bytes trace_id = 1; // 16-byte trace identifier
optional bytes span_id = 2; // 8-byte parent span identifier
optional uint32 trace_flags = 3; // bit 0 = sampled
optional string trace_state = 4; // W3C tracestate header value
}
enum TransactionStatus {
tsNEW = 1; // origin node did/could not validate
tsCURRENT = 2; // scheduled to go in this ledger
@@ -101,6 +110,9 @@ message TMTransaction {
required TransactionStatus status = 2;
optional uint64 receiveTimestamp = 3;
optional bool deferred = 4; // not applied to open ledger
// Optional trace context for OpenTelemetry distributed tracing
optional TraceContext trace_context = 1001;
}
message TMTransactions {
@@ -149,6 +161,9 @@ message TMProposeSet {
// Number of hops traveled
optional uint32 hops = 12 [deprecated = true];
// Optional trace context for OpenTelemetry distributed tracing
optional TraceContext trace_context = 1001;
}
enum TxSetStatus {
@@ -194,6 +209,9 @@ message TMValidation {
// Number of hops traveled
optional uint32 hops = 3 [deprecated = true];
// Optional trace context for OpenTelemetry distributed tracing
optional TraceContext trace_context = 1001;
}
// An array of Endpoint messages

View File

@@ -4,6 +4,10 @@
namespace xrpl {
// Deprecated constant for backwards compatibility with pre-XRPFees amendment.
// This was the reference fee units used in the old fee calculation.
inline constexpr std::uint32_t FEE_UNITS_DEPRECATED = 10;
/** Reflects the fee settings for a particular ledger.
The fees are always the same for any transactions applied
@@ -11,15 +15,25 @@ namespace xrpl {
*/
struct Fees
{
XRPAmount base{0}; // Reference tx cost (drops)
XRPAmount reserve{0}; // Reserve base (drops)
XRPAmount increment{0}; // Reserve increment (drops)
/** @brief Cost of a reference transaction in drops. */
XRPAmount base{0};
/** @brief Minimum XRP an account must hold to exist on the ledger. */
XRPAmount reserve{0};
/** @brief Additional XRP reserve required per owned ledger object. */
XRPAmount increment{0};
explicit Fees() = default;
Fees(Fees const&) = default;
Fees&
operator=(Fees const&) = default;
Fees(XRPAmount base_, XRPAmount reserve_, XRPAmount increment_)
: base(base_), reserve(reserve_), increment(increment_)
{
}
/** Returns the account reserve given the owner count, in drops.
The reserve is calculated as the reserve base plus
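The hunk above cuts off mid-sentence, but the field documentation makes the intent clear enough for a worked sketch. This assumes the conventional formula reserve + ownerCount * increment; the helper name is hypothetical and not part of this diff.
// Worked sketch of the account reserve implied by the Fees field docs above.
// E.g. reserve = 10'000'000 drops, increment = 2'000'000 drops, 3 owned
// objects -> 10'000'000 + 3 * 2'000'000 = 16'000'000 drops (16 XRP).
inline xrpl::XRPAmount
accountReserveSketch(xrpl::Fees const& fees, std::size_t ownerCount)
{
    return xrpl::XRPAmount{
        fees.reserve.drops() +
        static_cast<std::int64_t>(ownerCount) * fees.increment.drops()};
}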

View File

@@ -72,4 +72,8 @@ deserializeHeader(Slice data, bool hasHash = false);
LedgerHeader
deserializePrefixedHeader(Slice data, bool hasHash = false);
/** Calculate the hash of a ledger header. */
uint256
calculateLedgerHash(LedgerHeader const& info);
} // namespace xrpl

View File

@@ -72,12 +72,12 @@ public:
isDelegable(std::uint32_t const& permissionValue, Rules const& rules) const;
// for tx level permission, permission value is equal to tx type plus one
uint32_t
txToPermissionType(TxType const& type) const;
static uint32_t
txToPermissionType(TxType const& type);
// tx type value is permission value minus one
TxType
permissionToTxType(uint32_t const& value) const;
static TxType
permissionToTxType(uint32_t const& value);
};
} // namespace xrpl

View File

@@ -336,7 +336,7 @@ public:
static_assert(N > 0, "");
}
std::size_t
[[nodiscard]] bool
empty() const noexcept
{
return remain_ == 0;

View File

@@ -16,9 +16,10 @@
// Add new amendments to the top of this list.
// Keep it sorted in reverse chronological order.
XRPL_FIX (Security3_1_3, Supported::no, VoteBehavior::DefaultNo)
XRPL_FIX (PermissionedDomainInvariant, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FIX (ExpiredNFTokenOfferRemoval, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FIX (BatchInnerSigs, Supported::no, VoteBehavior::DefaultNo)
XRPL_FIX (BatchInnerSigs, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FEATURE(LendingProtocol, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FEATURE(PermissionDelegationV1_1, Supported::no, VoteBehavior::DefaultNo)
XRPL_FIX (DirectoryLimit, Supported::yes, VoteBehavior::DefaultNo)
@@ -32,7 +33,7 @@ XRPL_FEATURE(TokenEscrow, Supported::yes, VoteBehavior::DefaultNo
XRPL_FIX (EnforceNFTokenTrustlineV2, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FIX (AMMv1_3, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FEATURE(PermissionedDEX, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FEATURE(Batch, Supported::no, VoteBehavior::DefaultNo)
XRPL_FEATURE(Batch, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FEATURE(SingleAssetVault, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FIX (PayChanCancelAfter, Supported::yes, VoteBehavior::DefaultNo)
// Check flags in Credential transactions

View File

@@ -1,6 +1,7 @@
#pragma once
#include <xrpl/core/PerfLog.h>
#include <xrpl/core/ServiceRegistry.h>
#include <xrpl/core/StartUpType.h>
#include <xrpl/rdb/DBInit.h>
#include <xrpl/rdb/SociDB.h>
@@ -94,7 +95,7 @@ public:
struct CheckpointerSetup
{
JobQueue* jobQueue;
Logs* logs;
std::reference_wrapper<ServiceRegistry> registry;
};
template <std::size_t N, std::size_t M>
@@ -129,7 +130,7 @@ public:
beast::Journal journal)
: DatabaseCon(setup, dbName, pragma, initSQL, journal)
{
setupCheckpointing(checkpointerSetup.jobQueue, *checkpointerSetup.logs);
setupCheckpointing(checkpointerSetup.jobQueue, checkpointerSetup.registry.get());
}
template <std::size_t N, std::size_t M>
@@ -154,7 +155,7 @@ public:
beast::Journal journal)
: DatabaseCon(dataDir, dbName, pragma, initSQL, journal)
{
setupCheckpointing(checkpointerSetup.jobQueue, *checkpointerSetup.logs);
setupCheckpointing(checkpointerSetup.jobQueue, checkpointerSetup.registry.get());
}
~DatabaseCon();
@@ -177,7 +178,7 @@ public:
private:
void
setupCheckpointing(JobQueue*, Logs&);
setupCheckpointing(JobQueue*, ServiceRegistry&);
template <std::size_t N, std::size_t M>
DatabaseCon(

View File

@@ -13,8 +13,8 @@
#pragma clang diagnostic ignored "-Wdeprecated"
#endif
#include <xrpl/basics/Log.h>
#include <xrpl/core/JobQueue.h>
#include <xrpl/core/ServiceRegistry.h>
#define SOCI_USE_BOOST
#include <soci/soci.h>
@@ -111,7 +111,7 @@ public:
and so must outlive them both.
*/
std::shared_ptr<Checkpointer>
makeCheckpointer(std::uintptr_t id, std::weak_ptr<soci::session>, JobQueue&, Logs&);
makeCheckpointer(std::uintptr_t id, std::weak_ptr<soci::session>, JobQueue&, ServiceRegistry&);
} // namespace xrpl

View File

@@ -520,7 +520,7 @@ private:
// getMissingNodes helper functions
void
gmn_ProcessNodes(MissingNodes&, MissingNodes::StackEntry& node);
void
static void
gmn_ProcessDeferredReads(MissingNodes&);
// fetch from DB helper function

View File

@@ -0,0 +1,174 @@
#pragma once
/** RAII guard for OpenTelemetry trace spans.
Wraps an OTel Span and Scope together. On construction, the span is
activated on the current thread's context (via Scope). On destruction,
the span is ended and the previous context is restored.
Used by the XRPL_TRACE_* macros in TracingInstrumentation.h. Can also
be stored in std::optional for conditional tracing (move-constructible).
Only compiled when XRPL_ENABLE_TELEMETRY is defined.
*/
#ifdef XRPL_ENABLE_TELEMETRY
#include <opentelemetry/context/runtime_context.h>
#include <opentelemetry/nostd/shared_ptr.h>
#include <opentelemetry/trace/scope.h>
#include <opentelemetry/trace/span.h>
#include <exception>
#include <string_view>
namespace xrpl {
namespace telemetry {
/** RAII wrapper that activates a span on construction and ends it on
destruction. Non-copyable but move-constructible so it can be held
in std::optional for conditional tracing.
*/
class SpanGuard
{
/** The OTel span being guarded. Set to nullptr after move. */
opentelemetry::nostd::shared_ptr<opentelemetry::trace::Span> span_;
/** Scope that activates span_ on the current thread's context stack. */
opentelemetry::trace::Scope scope_;
public:
/** Construct a guard that activates @p span on the current context.
@param span The span to guard. Ended in the destructor.
*/
explicit SpanGuard(opentelemetry::nostd::shared_ptr<opentelemetry::trace::Span> span)
: span_(std::move(span)), scope_(span_)
{
}
/** Non-copyable. Move-constructible to support std::optional.
The move constructor creates a new Scope from the transferred span,
because Scope is not movable.
*/
SpanGuard(SpanGuard const&) = delete;
SpanGuard&
operator=(SpanGuard const&) = delete;
SpanGuard(SpanGuard&& other) noexcept : span_(std::move(other.span_)), scope_(span_)
{
other.span_ = nullptr;
}
SpanGuard&
operator=(SpanGuard&&) = delete;
~SpanGuard()
{
if (span_)
span_->End();
}
/** @return A mutable reference to the underlying span. */
opentelemetry::trace::Span&
span()
{
return *span_;
}
/** @return A const reference to the underlying span. */
opentelemetry::trace::Span const&
span() const
{
return *span_;
}
/** Mark the span status as OK. */
void
setOk()
{
span_->SetStatus(opentelemetry::trace::StatusCode::kOk);
}
/** Set an explicit status code on the span.
@param code The OTel status code.
@param description Optional human-readable status description.
*/
void
setStatus(opentelemetry::trace::StatusCode code, std::string_view description = "")
{
span_->SetStatus(code, std::string(description));
}
/** Set a key-value attribute on the span.
@param key Attribute name (e.g. "xrpl.rpc.command").
@param value Attribute value (string, int, bool, etc.).
*/
template <typename T>
void
setAttribute(std::string_view key, T&& value)
{
span_->SetAttribute(
opentelemetry::nostd::string_view(key.data(), key.size()), std::forward<T>(value));
}
/** Add a named event to the span's timeline.
@param name Event name.
*/
void
addEvent(std::string_view name)
{
span_->AddEvent(std::string(name));
}
/** Add a named event with key-value attributes to the span.
Allows attaching structured metadata to a point-in-time event on
the span timeline (e.g., "dispute.resolve" with transaction ID
and vote result attributes).
@param name Event name (e.g., "dispute.resolve").
@param attributes Key-value pairs describing the event.
*/
void
addEvent(
std::string_view name,
std::initializer_list<
std::pair<opentelemetry::nostd::string_view, opentelemetry::common::AttributeValue>>
attributes)
{
span_->AddEvent(std::string(name), attributes);
}
/** Record an exception as a span event following OTel semantic
conventions, and mark the span status as error.
@param e The exception to record.
*/
void
recordException(std::exception const& e)
{
span_->AddEvent(
"exception",
{{"exception.type", "std::exception"}, {"exception.message", std::string(e.what())}});
span_->SetStatus(opentelemetry::trace::StatusCode::kError, e.what());
}
/** Return the current OTel context.
Useful for creating child spans on a different thread by passing
this context to Telemetry::startSpan(name, parentContext).
*/
opentelemetry::context::Context
context() const
{
return opentelemetry::context::RuntimeContext::GetCurrent();
}
};
} // namespace telemetry
} // namespace xrpl
#endif // XRPL_ENABLE_TELEMETRY
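A minimal usage sketch of SpanGuard held in std::optional for conditional tracing, as the class comment suggests. It assumes XRPL_ENABLE_TELEMETRY is defined and that `telemetry` is a started Telemetry& (startSpan is declared in Telemetry.h below); the function name is hypothetical.
// Illustration only: conditional tracing via std::optional<SpanGuard>.
#include <optional>

void
rpcSpanExampleSketch(xrpl::telemetry::Telemetry& telemetry, bool traceThisCall)
{
    std::optional<xrpl::telemetry::SpanGuard> guard;
    if (traceThisCall && telemetry.isEnabled())
        guard.emplace(telemetry.startSpan("rpc.command.fee"));

    // ... handle the request ...

    if (guard)
    {
        guard->setAttribute("xrpl.rpc.command", "fee");
        guard->setOk();
    }
    // On scope exit, ~SpanGuard ends the span and restores the prior context.
}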

View File

@@ -0,0 +1,282 @@
#pragma once
/** Abstract interface for OpenTelemetry distributed tracing.
Provides the Telemetry base class that all components use to create trace
spans. Two implementations exist:
- TelemetryImpl (Telemetry.cpp): real OTel SDK integration, compiled
only when XRPL_ENABLE_TELEMETRY is defined and enabled at runtime.
- NullTelemetry (NullTelemetry.cpp): no-op stub used when telemetry is
disabled at compile time or runtime.
The Setup struct holds all configuration parsed from the [telemetry]
section of xrpld.cfg. See TelemetryConfig.cpp for the parser and
cfg/xrpld-example.cfg for the available options.
OTel SDK headers are conditionally included behind XRPL_ENABLE_TELEMETRY
so that builds without telemetry have zero dependency on opentelemetry-cpp.
*/
#include <xrpl/basics/BasicConfig.h>
#include <xrpl/beast/utility/Journal.h>
#include <chrono>
#include <memory>
#include <string>
#include <string_view>
#ifdef XRPL_ENABLE_TELEMETRY
#include <opentelemetry/common/attribute_value.h>
#include <opentelemetry/context/context.h>
#include <opentelemetry/nostd/shared_ptr.h>
#include <opentelemetry/trace/span.h>
#include <opentelemetry/trace/span_context.h>
#include <opentelemetry/trace/tracer.h>
#include <utility>
#include <vector>
#endif
namespace xrpl {
namespace telemetry {
class Telemetry
{
public:
/** Configuration parsed from the [telemetry] section of xrpld.cfg.
All fields have sensible defaults so the section can be minimal
or omitted entirely. See TelemetryConfig.cpp for the parser.
*/
struct Setup
{
/** Master switch: true to enable tracing at runtime. */
bool enabled = false;
/** OTel resource attribute `service.name`. */
std::string serviceName = "rippled";
/** OTel resource attribute `service.version` (set from BuildInfo). */
std::string serviceVersion;
/** OTel resource attribute `service.instance.id` (defaults to node
public key). */
std::string serviceInstanceId;
/** Exporter type: currently only "otlp_http" is supported. */
std::string exporterType = "otlp_http";
/** OTLP/HTTP endpoint URL where spans are sent. */
std::string exporterEndpoint = "http://localhost:4318/v1/traces";
/** Whether to use TLS for the exporter connection. */
bool useTls = false;
/** Path to a CA certificate bundle for TLS verification. */
std::string tlsCertPath;
/** Head-based sampling ratio in [0.0, 1.0]. 1.0 = trace everything. */
double samplingRatio = 1.0;
/** Maximum number of spans per batch export. */
std::uint32_t batchSize = 512;
/** Delay between batch exports. */
std::chrono::milliseconds batchDelay{5000};
/** Maximum number of spans queued before dropping. */
std::uint32_t maxQueueSize = 2048;
/** Network identifier, added as an OTel resource attribute. */
std::uint32_t networkId = 0;
/** Network type label (e.g. "mainnet", "testnet", "devnet"). */
std::string networkType = "mainnet";
/** Enable tracing for transaction processing. */
bool traceTransactions = true;
/** Enable tracing for consensus rounds. */
bool traceConsensus = true;
/** Enable tracing for RPC request handling. */
bool traceRpc = true;
/** Enable tracing for peer-to-peer messages (disabled by default
due to high volume). */
bool tracePeer = false;
/** Enable tracing for ledger close/accept. */
bool traceLedger = true;
/** Cross-node correlation strategy for consensus tracing.
"deterministic" derives trace_id from previousLedger.id() so all
nodes participating in the same consensus round share the same
trace_id, enabling cross-node trace correlation in the backend.
"attribute" uses normal random trace_id with the ledger_id stored
as a span attribute; correlation must be done via attribute queries.
*/
std::string consensusTraceStrategy = "deterministic";
};
virtual ~Telemetry() = default;
/** Update the service instance ID (OTel resource attribute
`service.instance.id`).
Must be called before start(). The node public key is not available
when Telemetry is constructed (during the ApplicationImp member
initializer list), so this setter allows Application::setup() to
inject the identity once nodeIdentity_ is known.
@param id The node's base58-encoded public key or custom identifier.
*/
virtual void
setServiceInstanceId(std::string const& id)
{
// Default no-op for NullTelemetry implementations.
(void)id;
}
/** Initialize the tracing pipeline (exporter, processor, provider).
Call after construction.
*/
virtual void
start() = 0;
/** Flush pending spans and shut down the tracing pipeline.
Call before destruction.
*/
virtual void
stop() = 0;
/** @return true if this instance is actively exporting spans. */
virtual bool
isEnabled() const = 0;
/** @return true if transaction processing should be traced. */
virtual bool
shouldTraceTransactions() const = 0;
/** @return true if consensus rounds should be traced. */
virtual bool
shouldTraceConsensus() const = 0;
/** @return true if RPC request handling should be traced. */
virtual bool
shouldTraceRpc() const = 0;
/** @return true if peer-to-peer messages should be traced. */
virtual bool
shouldTracePeer() const = 0;
/** @return true if ledger close/accept should be traced. */
virtual bool
shouldTraceLedger() const = 0;
/** @return The consensus trace correlation strategy.
"deterministic" derives trace_id from previousLedger.id() so all
nodes participating in the same consensus round share the same
trace_id, enabling cross-node trace correlation in the backend.
"attribute" uses normal random trace_id with the ledger_id stored
as a span attribute; correlation must be done via attribute queries.
*/
virtual std::string const&
getConsensusTraceStrategy() const = 0;
#ifdef XRPL_ENABLE_TELEMETRY
/** Get or create a named tracer instance.
@param name Tracer name used to identify the instrumentation library.
@return A shared pointer to the Tracer.
*/
virtual opentelemetry::nostd::shared_ptr<opentelemetry::trace::Tracer>
getTracer(std::string_view name = "rippled") = 0;
/** Start a new span on the current thread's context.
The span becomes a child of the current active span (if any) via
OpenTelemetry's context propagation.
@param name Span name (typically "rpc.command.<cmd>").
@param kind The span kind (defaults to kInternal).
@return A shared pointer to the new Span.
*/
virtual opentelemetry::nostd::shared_ptr<opentelemetry::trace::Span>
startSpan(
std::string_view name,
opentelemetry::trace::SpanKind kind = opentelemetry::trace::SpanKind::kInternal) = 0;
/** Start a new span with an explicit parent context.
Use this overload when the parent span is not on the current
thread's context stack (e.g. cross-thread trace propagation).
@param name Span name.
@param parentContext The parent span's context.
@param kind The span kind (defaults to kInternal).
@return A shared pointer to the new Span.
*/
virtual opentelemetry::nostd::shared_ptr<opentelemetry::trace::Span>
startSpan(
std::string_view name,
opentelemetry::context::Context const& parentContext,
opentelemetry::trace::SpanKind kind = opentelemetry::trace::SpanKind::kInternal) = 0;
/** Start a new span with an explicit parent context and span links.
Span links establish follows-from relationships without implying
a parent-child hierarchy. Common uses include linking consensus
round N+1 to round N, or linking a validation span back to the
round that produced it.
@param name Span name.
@param parentContext The parent span's context.
@param links Vector of (SpanContext, attributes) pairs
for follows-from relationships.
@param kind The span kind (defaults to kInternal).
@return A shared pointer to the new Span.
*/
virtual opentelemetry::nostd::shared_ptr<opentelemetry::trace::Span>
startSpan(
std::string_view name,
opentelemetry::context::Context const& parentContext,
std::vector<std::pair<
opentelemetry::trace::SpanContext,
std::vector<std::pair<std::string, opentelemetry::common::AttributeValue>>>> const&
links,
opentelemetry::trace::SpanKind kind = opentelemetry::trace::SpanKind::kInternal) = 0;
#endif
};
/** Create a Telemetry instance.
Returns a TelemetryImpl when setup.enabled is true, or a
NullTelemetry no-op stub otherwise.
@param setup Configuration from the [telemetry] config section.
@param journal Journal for log output during initialization.
*/
std::unique_ptr<Telemetry>
make_Telemetry(Telemetry::Setup const& setup, beast::Journal journal);
/** Parse the [telemetry] config section into a Setup struct.
@param section The [telemetry] config section.
@param nodePublicKey Node public key, used as default instance ID.
@param version Build version string.
@return A populated Setup struct with defaults for missing values.
*/
Telemetry::Setup
setup_Telemetry(
Section const& section,
std::string const& nodePublicKey,
std::string const& version);
} // namespace telemetry
} // namespace xrpl
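The "deterministic" strategy documented above implies deriving a trace_id from previousLedger.id(). A plausible sketch follows; truncating the 32-byte ledger hash to the first 16 bytes is an assumption made for illustration, since this diff does not show the actual derivation.
// Sketch only: derive an OTel trace_id from a ledger hash so every node in
// the same consensus round computes the same trace_id. The 16-byte
// truncation is an assumption, not taken from this changeset.
#ifdef XRPL_ENABLE_TELEMETRY
inline opentelemetry::trace::TraceId
traceIdFromLedgerHash(xrpl::uint256 const& ledgerHash)
{
    auto const* raw = reinterpret_cast<std::uint8_t const*>(ledgerHash.data());
    return opentelemetry::trace::TraceId(
        opentelemetry::nostd::span<std::uint8_t const, 16>(raw, 16));
}
#endif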

View File

@@ -0,0 +1,94 @@
#pragma once
/** Utilities for trace context propagation across nodes.
Provides serialization/deserialization of OTel trace context to/from
Protocol Buffer TraceContext messages (P2P cross-node propagation).
Only compiled when XRPL_ENABLE_TELEMETRY is defined.
*/
#ifdef XRPL_ENABLE_TELEMETRY
#include <xrpl/proto/xrpl.pb.h>
#include <opentelemetry/context/context.h>
#include <opentelemetry/trace/context.h>
#include <opentelemetry/trace/default_span.h>
#include <opentelemetry/trace/span_context.h>
#include <opentelemetry/trace/trace_flags.h>
#include <opentelemetry/trace/trace_id.h>
#include <cstdint>
namespace xrpl {
namespace telemetry {
/** Extract OTel context from a protobuf TraceContext message.
@param proto The protobuf TraceContext received from a peer.
@return An OTel Context with the extracted parent span, or an empty
context if the protobuf fields are missing or invalid.
*/
inline opentelemetry::context::Context
extractFromProtobuf(protocol::TraceContext const& proto)
{
namespace trace = opentelemetry::trace;
if (!proto.has_trace_id() || proto.trace_id().size() != 16 || !proto.has_span_id() ||
proto.span_id().size() != 8)
{
return opentelemetry::context::Context{};
}
auto const* rawTraceId = reinterpret_cast<std::uint8_t const*>(proto.trace_id().data());
auto const* rawSpanId = reinterpret_cast<std::uint8_t const*>(proto.span_id().data());
trace::TraceId traceId(opentelemetry::nostd::span<std::uint8_t const, 16>(rawTraceId, 16));
trace::SpanId spanId(opentelemetry::nostd::span<std::uint8_t const, 8>(rawSpanId, 8));
// Default to not-sampled (0x00) per W3C Trace Context spec when
// the trace_flags field is absent.
trace::TraceFlags flags(
proto.has_trace_flags() ? static_cast<std::uint8_t>(proto.trace_flags())
: static_cast<std::uint8_t>(0));
trace::SpanContext spanCtx(traceId, spanId, flags, /* remote = */ true);
return opentelemetry::context::Context{}.SetValue(
trace::kSpanKey,
opentelemetry::nostd::shared_ptr<trace::Span>(new trace::DefaultSpan(spanCtx)));
}
/** Inject the current span's trace context into a protobuf TraceContext.
@param ctx The OTel context containing the span to propagate.
@param proto The protobuf TraceContext to populate.
*/
inline void
injectToProtobuf(opentelemetry::context::Context const& ctx, protocol::TraceContext& proto)
{
namespace trace = opentelemetry::trace;
auto span = trace::GetSpan(ctx);
if (!span)
return;
auto const& spanCtx = span->GetContext();
if (!spanCtx.IsValid())
return;
// Serialize trace_id (16 bytes)
auto const& traceId = spanCtx.trace_id();
proto.set_trace_id(traceId.Id().data(), trace::TraceId::kSize);
// Serialize span_id (8 bytes)
auto const& spanId = spanCtx.span_id();
proto.set_span_id(spanId.Id().data(), trace::SpanId::kSize);
// Serialize flags
proto.set_trace_flags(spanCtx.trace_flags().flags());
}
} // namespace telemetry
} // namespace xrpl
#endif // XRPL_ENABLE_TELEMETRY
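A hedged round-trip sketch of the two helpers above, on the sending and receiving sides of a TMTransaction (which carries the optional trace_context field added in this changeset). The surrounding function shapes are assumed, not part of this diff.
// Illustration only. Sender: attach the active span's context to an
// outgoing message; receiver: continue the trace across nodes.
#ifdef XRPL_ENABLE_TELEMETRY
void
sendSideSketch(xrpl::telemetry::SpanGuard const& guard, protocol::TMTransaction& msg)
{
    // guard.context() returns the current OTel context (see SpanGuard.h),
    // which includes the guarded span while the guard is active.
    xrpl::telemetry::injectToProtobuf(guard.context(), *msg.mutable_trace_context());
}

void
receiveSideSketch(xrpl::telemetry::Telemetry& telemetry, protocol::TMTransaction const& msg)
{
    if (!msg.has_trace_context())
        return;
    auto parent = xrpl::telemetry::extractFromProtobuf(msg.trace_context());
    // The new span is parented to the remote sender's span.
    auto span = telemetry.startSpan("tx.receive", parent);
    xrpl::telemetry::SpanGuard guard(std::move(span));
    // ... process the transaction ...
}
#endif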

View File

@@ -111,7 +111,7 @@ public:
checkInvariants(TER const result, XRPAmount const fee);
private:
TER
static TER
failInvariantCheck(TER const result);
template <std::size_t... Is>

View File

@@ -48,7 +48,7 @@ private:
bool
isValidEntry(std::shared_ptr<SLE const> const& before, std::shared_ptr<SLE const> const& after);
STAmount
static STAmount
calculateBalanceChange(
std::shared_ptr<SLE const> const& before,
std::shared_ptr<SLE const> const& after,
@@ -63,7 +63,7 @@ private:
std::shared_ptr<SLE const>
findIssuer(AccountID const& issuerID, ReadView const& view);
bool
static bool
validateIssuerChanges(
std::shared_ptr<SLE const> const& issuer,
IssuerChanges const& changes,
@@ -71,7 +71,7 @@ private:
beast::Journal const& j,
bool enforce);
bool
static bool
validateFrozenState(
BalanceChange const& change,
bool high,

View File

@@ -7,6 +7,7 @@
#include <xrpl/protocol/TER.h>
#include <xrpl/tx/invariants/AMMInvariant.h>
#include <xrpl/tx/invariants/FreezeInvariant.h>
#include <xrpl/tx/invariants/LoanBrokerInvariant.h>
#include <xrpl/tx/invariants/LoanInvariant.h>
#include <xrpl/tx/invariants/MPTInvariant.h>
#include <xrpl/tx/invariants/NFTInvariant.h>
@@ -111,7 +112,7 @@ public:
void
visitEntry(bool, std::shared_ptr<SLE const> const&, std::shared_ptr<SLE const> const&);
bool
static bool
finalize(STTx const&, TER const, XRPAmount const, ReadView const&, beast::Journal const&);
};

View File

@@ -0,0 +1,55 @@
#pragma once
#include <xrpl/basics/base_uint.h>
#include <xrpl/beast/utility/Journal.h>
#include <xrpl/ledger/ReadView.h>
#include <xrpl/protocol/STTx.h>
#include <xrpl/protocol/TER.h>
#include <map>
#include <vector>
namespace xrpl {
/**
* @brief Invariants: Loan brokers are internally consistent
*
* 1. If `LoanBroker.OwnerCount = 0` the `DirectoryNode` will have at most one
* node (the root), which will only hold entries for `RippleState` or
* `MPToken` objects.
*
*/
class ValidLoanBroker
{
// Not all of these elements will necessarily be populated. Remaining items
// will be looked up as needed.
struct BrokerInfo
{
SLE::const_pointer brokerBefore = nullptr;
// After is used for most of the checks, except
// those that check changed values.
SLE::const_pointer brokerAfter = nullptr;
};
// Collect all the LoanBrokers found directly or indirectly through
// pseudo-accounts. Key is the brokerID / index. It will be used to find the
// LoanBroker object if brokerBefore and brokerAfter are nullptr
std::map<uint256, BrokerInfo> brokers_;
// Collect all the modified trust lines. Their high and low accounts will be
// loaded to look for LoanBroker pseudo-accounts.
std::vector<SLE::const_pointer> lines_;
// Collect all the modified MPTokens. Their accounts will be loaded to look
// for LoanBroker pseudo-accounts.
std::vector<SLE::const_pointer> mpts_;
static bool
goodZeroDirectory(ReadView const& view, SLE::const_ref dir, beast::Journal const& j);
public:
void
visitEntry(bool, std::shared_ptr<SLE const> const&, std::shared_ptr<SLE const> const&);
bool
finalize(STTx const&, TER const, XRPAmount const, ReadView const&, beast::Journal const&);
};
} // namespace xrpl
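A sketch of how an invariant checker like ValidLoanBroker is driven, going only by the visitEntry/finalize signatures declared above; the changedEntries loop and function name are hypothetical.
// Illustration of the checker contract: visitEntry() is fed every
// created/modified/deleted ledger entry, then finalize() renders the
// pass/fail verdict. The driver shape is assumed for this sketch.
bool
runLoanBrokerInvariantSketch(
    xrpl::STTx const& tx,
    xrpl::TER const result,
    xrpl::XRPAmount const fee,
    xrpl::ReadView const& view,
    beast::Journal const& j,
    std::vector<std::tuple<bool,
        std::shared_ptr<xrpl::SLE const>,
        std::shared_ptr<xrpl::SLE const>>> const& changedEntries)
{
    xrpl::ValidLoanBroker checker;
    for (auto const& [isDelete, before, after] : changedEntries)
        checker.visitEntry(isDelete, before, after);
    return checker.finalize(tx, result, fee, view, j);
}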

View File

@@ -1,57 +1,14 @@
#pragma once
#include <xrpl/basics/base_uint.h>
#include <xrpl/beast/utility/Journal.h>
#include <xrpl/ledger/ReadView.h>
#include <xrpl/protocol/STTx.h>
#include <xrpl/protocol/TER.h>
#include <map>
#include <vector>
namespace xrpl {
/**
* @brief Invariants: Loan brokers are internally consistent
*
* 1. If `LoanBroker.OwnerCount = 0` the `DirectoryNode` will have at most one
* node (the root), which will only hold entries for `RippleState` or
* `MPToken` objects.
*
*/
class ValidLoanBroker
{
// Not all of these elements will necessarily be populated. Remaining items
// will be looked up as needed.
struct BrokerInfo
{
SLE::const_pointer brokerBefore = nullptr;
// After is used for most of the checks, except
// those that check changed values.
SLE::const_pointer brokerAfter = nullptr;
};
// Collect all the LoanBrokers found directly or indirectly through
// pseudo-accounts. Key is the brokerID / index. It will be used to find the
// LoanBroker object if brokerBefore and brokerAfter are nullptr
std::map<uint256, BrokerInfo> brokers_;
// Collect all the modified trust lines. Their high and low accounts will be
// loaded to look for LoanBroker pseudo-accounts.
std::vector<SLE::const_pointer> lines_;
// Collect all the modified MPTokens. Their accounts will be loaded to look
// for LoanBroker pseudo-accounts.
std::vector<SLE::const_pointer> mpts_;
bool
goodZeroDirectory(ReadView const& view, SLE::const_ref dir, beast::Journal const& j) const;
public:
void
visitEntry(bool, std::shared_ptr<SLE const> const&, std::shared_ptr<SLE const> const&);
bool
finalize(STTx const&, TER const, XRPAmount const, ReadView const&, beast::Journal const&);
};
/**
* @brief Invariants: Loans are internally consistent
*

View File

@@ -1,6 +1,6 @@
#pragma once
#include <xrpl/basics/Log.h>
#include <xrpl/core/ServiceRegistry.h>
#include <xrpl/ledger/PaymentSandbox.h>
#include <xrpl/protocol/STAmount.h>
#include <xrpl/protocol/TER.h>
@@ -92,7 +92,7 @@ public:
STPathSet const& spsPaths,
std::optional<uint256> const& domainID,
Logs& l,
ServiceRegistry& registry,
Input const* const pInputs = nullptr);
// The view we are currently working on

View File

@@ -1,157 +0,0 @@
#include <xrpl/basics/Log.h>
#include <xrpl/basics/MallocTrim.h>
#include <boost/predef.h>
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <fstream>
#include <sstream>
#if defined(__GLIBC__) && BOOST_OS_LINUX
#include <sys/resource.h>
#include <malloc.h>
#include <unistd.h>
// Require RUSAGE_THREAD for thread-scoped page fault tracking
#ifndef RUSAGE_THREAD
#error "MallocTrim rusage instrumentation requires RUSAGE_THREAD on Linux/glibc"
#endif
namespace {
bool
getRusageThread(struct rusage& ru)
{
return ::getrusage(RUSAGE_THREAD, &ru) == 0; // LCOV_EXCL_LINE
}
} // namespace
#endif
namespace xrpl {
namespace detail {
// cSpell:ignore statm
#if defined(__GLIBC__) && BOOST_OS_LINUX
inline int
mallocTrimWithPad(std::size_t padBytes)
{
return ::malloc_trim(padBytes);
}
long
parseStatmRSSkB(std::string const& statm)
{
// /proc/self/statm format: size resident shared text lib data dt
// We want the second field (resident) which is in pages
std::istringstream iss(statm);
long size = 0, resident = 0;
if (!(iss >> size >> resident))
return -1;
// Convert pages to KB
long const pageSize = ::sysconf(_SC_PAGESIZE);
if (pageSize <= 0)
return -1;
return (resident * pageSize) / 1024;
}
#endif // __GLIBC__ && BOOST_OS_LINUX
} // namespace detail
MallocTrimReport
mallocTrim(std::string_view tag, beast::Journal journal)
{
// LCOV_EXCL_START
MallocTrimReport report;
#if !(defined(__GLIBC__) && BOOST_OS_LINUX)
JLOG(journal.debug()) << "malloc_trim not supported on this platform (tag=" << tag << ")";
#else
// Keep glibc malloc_trim padding at 0 (default): 12h Mainnet tests across 0/256KB/1MB/16MB
// showed no clear, consistent benefit from custom padding—0 provided the best overall balance
// of RSS reduction and trim-latency stability without adding a tuning surface.
constexpr std::size_t TRIM_PAD = 0;
report.supported = true;
if (journal.debug())
{
auto readFile = [](std::string const& path) -> std::string {
std::ifstream ifs(path, std::ios::in | std::ios::binary);
if (!ifs.is_open())
return {};
// /proc files are often not seekable; read as a stream.
std::ostringstream oss;
oss << ifs.rdbuf();
return oss.str();
};
std::string const tagStr{tag};
std::string const statmPath = "/proc/self/statm";
auto const statmBefore = readFile(statmPath);
long const rssBeforeKB = detail::parseStatmRSSkB(statmBefore);
struct rusage ru0{};
bool const have_ru0 = getRusageThread(ru0);
auto const t0 = std::chrono::steady_clock::now();
report.trimResult = detail::mallocTrimWithPad(TRIM_PAD);
auto const t1 = std::chrono::steady_clock::now();
struct rusage ru1{};
bool const have_ru1 = getRusageThread(ru1);
auto const statmAfter = readFile(statmPath);
long const rssAfterKB = detail::parseStatmRSSkB(statmAfter);
// Populate report fields
report.rssBeforeKB = rssBeforeKB;
report.rssAfterKB = rssAfterKB;
report.durationUs = std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0);
if (have_ru0 && have_ru1)
{
report.minfltDelta = ru1.ru_minflt - ru0.ru_minflt;
report.majfltDelta = ru1.ru_majflt - ru0.ru_majflt;
}
std::int64_t const deltaKB = (rssBeforeKB < 0 || rssAfterKB < 0)
? 0
: (static_cast<std::int64_t>(rssAfterKB) - static_cast<std::int64_t>(rssBeforeKB));
JLOG(journal.debug()) << "malloc_trim tag=" << tagStr << " result=" << report.trimResult
<< " pad=" << TRIM_PAD << " bytes"
<< " rss_before=" << rssBeforeKB << "kB"
<< " rss_after=" << rssAfterKB << "kB"
<< " delta=" << deltaKB << "kB"
<< " duration_us=" << report.durationUs.count()
<< " minflt_delta=" << report.minfltDelta
<< " majflt_delta=" << report.majfltDelta;
}
else
{
report.trimResult = detail::mallocTrimWithPad(TRIM_PAD);
}
#endif
return report;
// LCOV_EXCL_STOP
}
} // namespace xrpl

View File

@@ -981,7 +981,7 @@ root(Number f, unsigned d)
auto ex = [e = e, di = di]() // Euclidean remainder of e/d
{
int k = (e >= 0 ? e : e - (di - 1)) / di;
int k2 = e - k * di;
int k2 = e - (k * di);
if (k2 == 0)
return 0;
return di - k2;
@@ -998,7 +998,7 @@ root(Number f, unsigned d)
}
// Quadratic least squares curve fit of f^(1/d) in the range [0, 1]
auto const D = ((6 * di + 11) * di + 6) * di + 1;
auto const D = (((6 * di + 11) * di + 6) * di) + 1;
auto const a0 = 3 * di * ((2 * di - 3) * di + 1);
auto const a1 = 24 * di * (2 * di - 1);
auto const a2 = -30 * (di - 1) * di;

View File

@@ -169,7 +169,7 @@ public:
XRPL_ASSERT(m_stopped == true, "xrpl::ResolverAsioImpl::start : stopped");
XRPL_ASSERT(m_stop_called == false, "xrpl::ResolverAsioImpl::start : not stopping");
if (m_stopped.exchange(false) == true)
if (m_stopped.exchange(false))
{
{
std::lock_guard lk{m_mut};
@@ -182,7 +182,7 @@ public:
void
stop_async() override
{
if (m_stop_called.exchange(true) == false)
if (!m_stop_called.exchange(true))
{
boost::asio::dispatch(
m_io_context,
@@ -229,7 +229,7 @@ public:
{
XRPL_ASSERT(m_stop_called == true, "xrpl::ResolverAsioImpl::do_stop : stopping");
if (m_stopped.exchange(true) == false)
if (!m_stopped.exchange(true))
{
m_work.clear();
m_resolver.cancel();
@@ -271,7 +271,7 @@ public:
m_strand, std::bind(&ResolverAsioImpl::do_work, this, CompletionCounter(this))));
}
HostAndPort
static HostAndPort
parseName(std::string const& str)
{
// first attempt to parse as an endpoint (IP addr + port).
@@ -319,7 +319,7 @@ public:
void
do_work(CompletionCounter)
{
if (m_stop_called == true)
if (m_stop_called)
return;
// We don't have any work to do at this time
@@ -367,7 +367,7 @@ public:
{
XRPL_ASSERT(!names.empty(), "xrpl::ResolverAsioImpl::do_resolve : names non-empty");
if (m_stop_called == false)
if (!m_stop_called)
{
m_work.emplace_back(names, handler);

View File

@@ -24,7 +24,7 @@ sqlBlobLiteral(Blob const& blob)
{
std::string j;
j.reserve(blob.size() * 2 + 3);
j.reserve((blob.size() * 2) + 3);
j.push_back('X');
j.push_back('\'');
boost::algorithm::hex(blob.begin(), blob.end(), std::back_inserter(j));

View File

@@ -107,7 +107,7 @@ encode(void* dest, void const* src, std::size_t len)
char const* in = static_cast<char const*>(src);
auto const tab = base64::get_alphabet();
for (auto n = len / 3; n--;)
for (auto n = len / 3; n != 0u; --n)
{
*out++ = tab[(in[0] & 0xfc) >> 2];
*out++ = tab[((in[0] & 0x03) << 4) + ((in[1] & 0xf0) >> 4)];
@@ -162,7 +162,7 @@ decode(void* dest, char const* src, std::size_t len)
auto const inverse = base64::get_inverse();
while (len-- && *in != '=')
while (((len--) != 0u) && *in != '=')
{
auto const v = inverse[*in];
if (v == -1)
@@ -181,7 +181,7 @@ decode(void* dest, char const* src, std::size_t len)
}
}
if (i)
if (i != 0)
{
c3[0] = (c4[0] << 2) + ((c4[1] & 0x30) >> 4);
c3[1] = ((c4[1] & 0xf) << 4) + ((c4[2] & 0x3c) >> 2);

View File

@@ -253,7 +253,7 @@ initAuthenticated(
// VFALCO Replace fopen() with RAII
FILE* f = fopen(chain_file.c_str(), "r");
if (!f)
if (f == nullptr)
{
LogicError(
"Problem opening SSL chain file" +

View File

@@ -352,7 +352,7 @@ public:
}
}
void
static void
log(std::vector<boost::asio::const_buffer> const& buffers)
{
(void)buffers;

View File

@@ -10,7 +10,7 @@ bool
is_private(AddressV6 const& addr)
{
return (
(addr.to_bytes()[0] & 0xfd) || // TODO fc00::/8 too ?
((addr.to_bytes()[0] & 0xfd) != 0) || // TODO fc00::/8 too ?
(addr.is_v4_mapped() &&
is_private(boost::asio::ip::make_address_v4(boost::asio::ip::v4_mapped, addr))));
}

View File

@@ -57,7 +57,7 @@ Endpoint::to_string() const
if (port() != 0 && address().is_v6())
s += '[';
s += address().to_string();
if (port())
if (port() != 0u)
{
if (address().is_v6())
s += ']';
@@ -111,7 +111,7 @@ operator>>(std::istream& is, Endpoint& endpoint)
// so we continue to honor that here by assuming we are at the end
// of the address portion if we hit a space (or the separator
// we were expecting to see)
if (isspace(static_cast<unsigned char>(i)) || (readTo && i == readTo))
if ((isspace(static_cast<unsigned char>(i)) != 0) || ((readTo != 0) && i == readTo))
break;
if ((i == '.') || (i >= '0' && i <= ':') || (i >= 'a' && i <= 'f') ||
@@ -121,13 +121,13 @@ operator>>(std::istream& is, Endpoint& endpoint)
// don't exceed a reasonable length...
if (addrStr.size() == INET6_ADDRSTRLEN ||
(readTo && readTo == ':' && addrStr.size() > 15))
((readTo != 0) && readTo == ':' && addrStr.size() > 15))
{
is.setstate(std::ios_base::failbit);
return is;
}
if (!readTo && (i == '.' || i == ':'))
if ((readTo == 0) && (i == '.' || i == ':'))
{
// if we see a dot first, must be IPv4
// otherwise must be non-bracketed IPv6
@@ -145,7 +145,7 @@ operator>>(std::istream& is, Endpoint& endpoint)
if (readTo == ']' && is.rdbuf()->in_avail() > 0)
{
is.get(i);
if (!(isspace(static_cast<unsigned char>(i)) || i == ':'))
if ((isspace(static_cast<unsigned char>(i)) == 0) && i != ':')
{
is.unget();
is.setstate(std::ios_base::failbit);

View File

@@ -84,8 +84,7 @@ JobQueue::addRefCountedJob(JobType type, std::string const& name, JobFunction co
JobType const type(job.getType());
XRPL_ASSERT(type != jtINVALID, "xrpl::JobQueue::addRefCountedJob : has valid job type");
XRPL_ASSERT(
m_jobSet.find(job) != m_jobSet.end(), "xrpl::JobQueue::addRefCountedJob : job found");
XRPL_ASSERT(m_jobSet.contains(job), "xrpl::JobQueue::addRefCountedJob : job found");
perfLog_.jobQueue(type);
JobTypeData& data(getJobTypeData(type));

View File

@@ -141,7 +141,7 @@ LoadMonitor::isOver()
update();
if (mLatencyEvents == 0)
return 0;
return false;
return isOverTarget(
mLatencyMSAvg / (mLatencyEvents * 4), mLatencyMSPeak / (mLatencyEvents * 4));

View File

@@ -47,7 +47,7 @@ Workers::setNumberOfThreads(int numberOfThreads)
if (m_numberOfThreads == numberOfThreads)
return;
if (perfLog_)
if (perfLog_ != nullptr)
perfLog_->resizeJobs(numberOfThreads);
if (numberOfThreads > m_numberOfThreads)

View File

@@ -210,8 +210,8 @@ RFC1751::extract(char const* s, int start, int length)
int const shiftR = 24 - (length + (start % 8));
cl = s[start / 8]; // get components
cc = (shiftR < 16) ? s[start / 8 + 1] : 0;
cr = (shiftR < 8) ? s[start / 8 + 2] : 0;
cc = (shiftR < 16) ? s[(start / 8) + 1] : 0;
cr = (shiftR < 8) ? s[(start / 8) + 2] : 0;
x = ((long)(cl << 8 | cc) << 8 | cr); // Put bits together
x = x >> shiftR; // Right justify number
@@ -265,13 +265,13 @@ RFC1751::insert(char* s, int x, int start, int length)
if (shift + length > 16)
{
s[start / 8] |= cl;
s[start / 8 + 1] |= cc;
s[start / 8 + 2] |= cr;
s[(start / 8) + 1] |= cc;
s[(start / 8) + 2] |= cr;
}
else if (shift + length > 8)
{
s[start / 8] |= cc;
s[start / 8 + 1] |= cr;
s[(start / 8) + 1] |= cr;
}
else
{
@@ -284,7 +284,7 @@ RFC1751::standard(std::string& strWord)
{
for (auto& letter : strWord)
{
if (islower(static_cast<unsigned char>(letter)))
if (islower(static_cast<unsigned char>(letter)) != 0)
{
letter = toupper(static_cast<unsigned char>(letter));
}
@@ -312,10 +312,10 @@ RFC1751::wsrch(std::string const& strWord, int iMin, int iMax)
while (iResult < 0 && iMin != iMax)
{
// Have a range to search.
int iMid = iMin + (iMax - iMin) / 2;
int iMid = iMin + ((iMax - iMin) / 2);
int iDir = strWord.compare(s_dictionary[iMid]);
if (!iDir)
if (iDir == 0)
{
iResult = iMid; // Found it.
}

View File

@@ -152,7 +152,7 @@ public:
#ifndef NDEBUG
// Make sure we haven't already seen this tag.
auto& tags = stack_.top().tags;
check(tags.find(tag) == tags.end(), "Already seen tag " + tag);
check(!tags.contains(tag), "Already seen tag " + tag);
tags.insert(tag);
#endif

View File

@@ -296,7 +296,7 @@ Reader::match(Location pattern, int patternLength)
int index = patternLength;
while (index--)
while ((index--) != 0)
{
if (current_[index] != pattern[index])
return false;
@@ -362,7 +362,7 @@ Reader::readNumber()
while (current_ != end_)
{
if (!std::isdigit(static_cast<unsigned char>(*current_)))
if (std::isdigit(static_cast<unsigned char>(*current_)) == 0)
{
auto ret =
std::find(std::begin(extended_tokens), std::end(extended_tokens), *current_);
@@ -913,7 +913,7 @@ Reader::getFormattedErrorMessages() const
formattedMessage += "* " + getLocationLineAndColumn(error.token_.start_) + "\n";
formattedMessage += " " + error.message_ + "\n";
if (error.extra_)
if (error.extra_ != nullptr)
formattedMessage += "See " + getLocationLineAndColumn(error.extra_) + " for detail.\n";
}

View File

@@ -40,10 +40,10 @@ public:
// return 0;
if (length == unknown)
length = value ? (unsigned int)strlen(value) : 0;
length = (value != nullptr) ? (unsigned int)strlen(value) : 0;
char* newString = static_cast<char*>(malloc(length + 1));
if (value)
if (value != nullptr)
memcpy(newString, value, length);
newString[length] = 0;
return newString;
@@ -52,7 +52,7 @@ public:
void
releaseStringValue(char* value) override
{
if (value)
if (value != nullptr)
free(value);
}
};
@@ -108,14 +108,14 @@ Value::CZString::CZString(CZString const& other)
Value::CZString::~CZString()
{
if (cstr_ && index_ == duplicate)
if ((cstr_ != nullptr) && index_ == duplicate)
valueAllocator()->releaseMemberName(const_cast<char*>(cstr_));
}
bool
Value::CZString::operator<(CZString const& other) const
{
if (cstr_ && other.cstr_)
if ((cstr_ != nullptr) && (other.cstr_ != nullptr))
return strcmp(cstr_, other.cstr_) < 0;
return index_ < other.index_;
@@ -124,7 +124,7 @@ Value::CZString::operator<(CZString const& other) const
bool
Value::CZString::operator==(CZString const& other) const
{
if (cstr_ && other.cstr_)
if ((cstr_ != nullptr) && (other.cstr_ != nullptr))
return strcmp(cstr_, other.cstr_) == 0;
return index_ == other.index_;
@@ -251,7 +251,7 @@ Value::Value(Value const& other) : type_(other.type_)
break;
case stringValue:
if (other.value_.string_)
if (other.value_.string_ != nullptr)
{
value_.string_ = valueAllocator()->duplicateStringValue(other.value_.string_);
allocated_ = true;
@@ -294,7 +294,7 @@ Value::~Value()
case arrayValue:
case objectValue:
if (value_.map_)
if (value_.map_ != nullptr)
delete value_.map_;
break;
@@ -392,11 +392,11 @@ operator<(Value const& x, Value const& y)
return x.value_.real_ < y.value_.real_;
case booleanValue:
return x.value_.bool_ < y.value_.bool_;
return static_cast<int>(x.value_.bool_) < static_cast<int>(y.value_.bool_);
case stringValue:
return (x.value_.string_ == 0 && y.value_.string_) ||
(y.value_.string_ && x.value_.string_ &&
return (x.value_.string_ == 0 && (y.value_.string_ != nullptr)) ||
((y.value_.string_ != nullptr) && (x.value_.string_ != nullptr) &&
strcmp(x.value_.string_, y.value_.string_) < 0);
case arrayValue:
@@ -413,7 +413,7 @@ operator<(Value const& x, Value const& y)
// LCOV_EXCL_STOP
}
return 0; // unreachable
return false; // unreachable
}
bool
@@ -422,9 +422,9 @@ operator==(Value const& x, Value const& y)
if (x.type_ != y.type_)
{
if (x.type_ == intValue && y.type_ == uintValue)
return !integerCmp(x.value_.int_, y.value_.uint_);
return integerCmp(x.value_.int_, y.value_.uint_) == 0;
if (x.type_ == uintValue && y.type_ == intValue)
return !integerCmp(y.value_.int_, x.value_.uint_);
return integerCmp(y.value_.int_, x.value_.uint_) == 0;
return false;
}
@@ -447,8 +447,8 @@ operator==(Value const& x, Value const& y)
case stringValue:
return x.value_.string_ == y.value_.string_ ||
(y.value_.string_ && x.value_.string_ &&
!strcmp(x.value_.string_, y.value_.string_));
((y.value_.string_ != nullptr) && (x.value_.string_ != nullptr) &&
(strcmp(x.value_.string_, y.value_.string_) == 0));
case arrayValue:
case objectValue:
@@ -461,7 +461,7 @@ operator==(Value const& x, Value const& y)
// LCOV_EXCL_STOP
}
return 0; // unreachable
return false; // unreachable
}
char const*
@@ -480,7 +480,7 @@ Value::asString() const
return "";
case stringValue:
return value_.string_ ? value_.string_ : "";
return (value_.string_ != nullptr) ? value_.string_ : "";
case booleanValue:
return value_.bool_ ? "true" : "false";
@@ -525,7 +525,7 @@ Value::asInt() const
case realValue:
JSON_ASSERT_MESSAGE(
value_.real_ >= minInt && value_.real_ <= maxInt,
(value_.real_ >= minInt && value_.real_ <= maxInt),
"Real out of signed integer range");
return Int(value_.real_);
@@ -533,7 +533,7 @@ Value::asInt() const
return value_.bool_ ? 1 : 0;
case stringValue: {
char const* const str{value_.string_ ? value_.string_ : ""};
char const* const str{(value_.string_ != nullptr) ? value_.string_ : ""};
return beast::lexicalCastThrow<int>(str);
}
@@ -584,7 +584,7 @@ Value::asAbsUInt() const
return value_.bool_ ? 1 : 0;
case stringValue: {
char const* const str{value_.string_ ? value_.string_ : ""};
char const* const str{(value_.string_ != nullptr) ? value_.string_ : ""};
auto const temp = beast::lexicalCastThrow<std::int64_t>(str);
if (temp < 0)
{
@@ -626,14 +626,15 @@ Value::asUInt() const
case realValue:
JSON_ASSERT_MESSAGE(
value_.real_ >= 0 && value_.real_ <= maxUInt, "Real out of unsigned integer range");
(value_.real_ >= 0 && value_.real_ <= maxUInt),
"Real out of unsigned integer range");
return UInt(value_.real_);
case booleanValue:
return value_.bool_ ? 1 : 0;
case stringValue: {
char const* const str{value_.string_ ? value_.string_ : ""};
char const* const str{(value_.string_ != nullptr) ? value_.string_ : ""};
return beast::lexicalCastThrow<unsigned int>(str);
}
@@ -703,7 +704,7 @@ Value::asBool() const
return value_.bool_;
case stringValue:
return value_.string_ && value_.string_[0] != 0;
return (value_.string_ != nullptr) && value_.string_[0] != 0;
case arrayValue:
case objectValue:
@@ -745,13 +746,13 @@ Value::isConvertibleTo(ValueType other) const
other == realValue || other == stringValue || other == booleanValue;
case booleanValue:
return (other == nullValue && value_.bool_ == false) || other == intValue ||
return (other == nullValue && !value_.bool_) || other == intValue ||
other == uintValue || other == realValue || other == stringValue ||
other == booleanValue;
case stringValue:
return other == stringValue ||
(other == nullValue && (!value_.string_ || value_.string_[0] == 0));
(other == nullValue && ((value_.string_ == nullptr) || value_.string_[0] == 0));
case arrayValue:
return other == arrayValue || (other == nullValue && value_.map_->empty());
@@ -813,10 +814,10 @@ operator bool() const
if (isString())
{
auto s = asCString();
return s && s[0];
return (s != nullptr) && (s[0] != 0);
}
return !(isArray() || isObject()) || size();
return !(isArray() || isObject()) || (size() != 0u);
}
void
@@ -1139,7 +1140,7 @@ Value::begin() const
{
case arrayValue:
case objectValue:
if (value_.map_)
if (value_.map_ != nullptr)
return const_iterator(value_.map_->begin());
break;
@@ -1157,7 +1158,7 @@ Value::end() const
{
case arrayValue:
case objectValue:
if (value_.map_)
if (value_.map_ != nullptr)
return const_iterator(value_.map_->end());
break;
@@ -1175,7 +1176,7 @@ Value::begin()
{
case arrayValue:
case objectValue:
if (value_.map_)
if (value_.map_ != nullptr)
return iterator(value_.map_->begin());
break;
default:
@@ -1192,7 +1193,7 @@ Value::end()
{
case arrayValue:
case objectValue:
if (value_.map_)
if (value_.map_ != nullptr)
return iterator(value_.map_->end());
break;
default:

View File

@@ -89,7 +89,7 @@ ValueIteratorBase::key() const
{
Value::CZString const czString = (*current_).first;
if (czString.c_str())
if (czString.c_str() != nullptr)
{
if (czString.isStaticString())
return Value(StaticString(czString.c_str()));
@@ -105,7 +105,7 @@ ValueIteratorBase::index() const
{
Value::CZString const czString = (*current_).first;
if (!czString.c_str())
if (czString.c_str() == nullptr)
return czString.index();
return Value::UInt(-1);
@@ -115,7 +115,7 @@ char const*
ValueIteratorBase::memberName() const
{
char const* name = (*current_).first.c_str();
return name ? name : "";
return (name != nullptr) ? name : "";
}
// //////////////////////////////////////////////////////////////////

View File

@@ -23,7 +23,7 @@ isControlCharacter(char ch)
static bool
containsControlCharacter(char const* str)
{
while (*str)
while (*str != 0)
{
if (isControlCharacter(*(str++)))
return true;
@@ -106,7 +106,7 @@ valueToQuotedString(char const* value)
// We have to walk value and escape any special characters.
// Appending to std::string is not efficient, but this should be rare.
// (Note: forward slashes are *not* rare, but I am not escaping them.)
unsigned maxsize = strlen(value) * 2 + 3; // all-escaped+quotes+NULL
unsigned maxsize = (strlen(value) * 2) + 3; // all-escaped+quotes+NULL
std::string result;
result.reserve(maxsize); // to avoid lots of mallocs
result += "\"";
@@ -416,7 +416,7 @@ StyledWriter::isMultilineArray(Value const& value)
{
childValues_.reserve(size);
addChildValues_ = true;
int lineLength = 4 + (size - 1) * 2; // '[ ' + ', '*n + ' ]'
int lineLength = 4 + ((size - 1) * 2); // '[ ' + ', '*n + ' ]'
for (int index = 0; index < size; ++index)
{
@@ -651,7 +651,7 @@ StyledStreamWriter::isMultilineArray(Value const& value)
{
childValues_.reserve(size);
addChildValues_ = true;
int lineLength = 4 + (size - 1) * 2; // '[ ' + ', '*n + ' ]'
int lineLength = 4 + ((size - 1) * 2); // '[ ' + ', '*n + ' ]'
for (int index = 0; index < size; ++index)
{

View File

@@ -275,9 +275,7 @@ ApplyStateTable::exists(ReadView const& base, Keylet const& k) const
case Action::modify:
break;
}
if (!k.check(*sle))
return false;
return true;
return k.check(*sle);
}
auto

View File

@@ -36,7 +36,7 @@ findPreviousPage(ApplyView& view, Keylet const& directory, SLE::ref start)
auto node = start;
if (page)
if (page != 0u)
{
node = view.peek(keylet::page(directory, page));
if (!node)

View File

@@ -1,4 +1,4 @@
#include <xrpld/app/misc/CanonicalTXSet.h>
#include <xrpl/ledger/CanonicalTXSet.h>
namespace xrpl {

View File

@@ -1,19 +1,9 @@
#include <xrpld/app/ledger/InboundLedgers.h>
#include <xrpld/app/ledger/Ledger.h>
#include <xrpld/app/ledger/LedgerToJson.h>
#include <xrpld/app/ledger/PendingSaves.h>
#include <xrpld/app/main/Application.h>
#include <xrpld/consensus/LedgerTiming.h>
#include <xrpld/core/Config.h>
#include <xrpl/basics/Log.h>
#include <xrpl/basics/contract.h>
#include <xrpl/beast/utility/instrumentation.h>
#include <xrpl/core/HashRouter.h>
#include <xrpl/core/JobQueue.h>
#include <xrpl/json/to_string.h>
#include <xrpl/nodestore/Database.h>
#include <xrpl/nodestore/detail/DatabaseNodeImp.h>
#include <xrpl/ledger/Ledger.h>
#include <xrpl/ledger/LedgerTiming.h>
#include <xrpl/protocol/Feature.h>
#include <xrpl/protocol/HashPrefix.h>
#include <xrpl/protocol/Indexes.h>
@@ -21,7 +11,6 @@
#include <xrpl/protocol/SecretKey.h>
#include <xrpl/protocol/digest.h>
#include <xrpl/protocol/jss.h>
#include <xrpl/rdb/RelationalDatabase.h>
#include <utility>
#include <vector>
@@ -30,23 +19,6 @@ namespace xrpl {
create_genesis_t const create_genesis{};
uint256
calculateLedgerHash(LedgerHeader const& info)
{
// VFALCO This has to match addRaw in View.h.
return sha512Half(
HashPrefix::ledgerMaster,
std::uint32_t(info.seq),
std::uint64_t(info.drops.drops()),
info.parentHash,
info.txHash,
info.accountHash,
std::uint32_t(info.parentCloseTime.time_since_epoch().count()),
std::uint32_t(info.closeTime.time_since_epoch().count()),
std::uint8_t(info.closeTimeResolution.count()),
std::uint8_t(info.closeFlags));
}
//------------------------------------------------------------------------------
class Ledger::sles_iter_impl : public sles_type::iter_base
@@ -138,8 +110,8 @@ public:
{
auto const& item = *iter_;
if (metadata_)
return deserializeTxPlusMeta(item);
return {deserializeTx(item), nullptr};
return Ledger::deserializeTxPlusMeta(item);
return {Ledger::deserializeTx(item), nullptr};
}
};
@@ -147,13 +119,15 @@ public:
Ledger::Ledger(
create_genesis_t,
Config const& config,
Rules const& rules,
Fees const& fees,
std::vector<uint256> const& amendments,
Family& family)
: mImmutable(false)
, txMap_(SHAMapType::TRANSACTION, family)
, stateMap_(SHAMapType::STATE, family)
, rules_{config.features}
, fees_(fees)
, rules_(rules)
, j_(beast::Journal(beast::Journal::getNullSink()))
{
header_.seq = 1;
@@ -182,19 +156,19 @@ Ledger::Ledger(
// Whether featureXRPFees is supported will depend on startup options.
if (std::find(amendments.begin(), amendments.end(), featureXRPFees) != amendments.end())
{
sle->at(sfBaseFeeDrops) = config.FEES.reference_fee;
sle->at(sfReserveBaseDrops) = config.FEES.account_reserve;
sle->at(sfReserveIncrementDrops) = config.FEES.owner_reserve;
sle->at(sfBaseFeeDrops) = fees.base;
sle->at(sfReserveBaseDrops) = fees.reserve;
sle->at(sfReserveIncrementDrops) = fees.increment;
}
else
{
if (auto const f = config.FEES.reference_fee.dropsAs<std::uint64_t>())
if (auto const f = fees.base.dropsAs<std::uint64_t>())
sle->at(sfBaseFee) = *f;
if (auto const f = config.FEES.account_reserve.dropsAs<std::uint32_t>())
if (auto const f = fees.reserve.dropsAs<std::uint32_t>())
sle->at(sfReserveBase) = *f;
if (auto const f = config.FEES.owner_reserve.dropsAs<std::uint32_t>())
if (auto const f = fees.increment.dropsAs<std::uint32_t>())
sle->at(sfReserveIncrement) = *f;
sle->at(sfReferenceFeeUnits) = Config::FEE_UNITS_DEPRECATED;
sle->at(sfReferenceFeeUnits) = FEE_UNITS_DEPRECATED;
}
rawInsert(sle);
}
@@ -207,13 +181,15 @@ Ledger::Ledger(
LedgerHeader const& info,
bool& loaded,
bool acquire,
Config const& config,
Rules const& rules,
Fees const& fees,
Family& family,
beast::Journal j)
: mImmutable(true)
, txMap_(SHAMapType::TRANSACTION, info.txHash, family)
, stateMap_(SHAMapType::STATE, info.accountHash, family)
, rules_(config.features)
, fees_(fees)
, rules_(rules)
, header_(info)
, j_(j)
{
@@ -235,7 +211,6 @@ Ledger::Ledger(
txMap_.setImmutable();
stateMap_.setImmutable();
defaultFees(config);
if (!setup())
loaded = false;
@@ -275,11 +250,11 @@ Ledger::Ledger(Ledger const& prevLedger, NetClock::time_point closeTime)
}
}
Ledger::Ledger(LedgerHeader const& info, Config const& config, Family& family)
Ledger::Ledger(LedgerHeader const& info, Rules const& rules, Family& family)
: mImmutable(true)
, txMap_(SHAMapType::TRANSACTION, info.txHash, family)
, stateMap_(SHAMapType::STATE, info.accountHash, family)
, rules_{config.features}
, rules_(rules)
, header_(info)
, j_(beast::Journal(beast::Journal::getNullSink()))
{
@@ -289,18 +264,19 @@ Ledger::Ledger(LedgerHeader const& info, Config const& config, Family& family)
Ledger::Ledger(
std::uint32_t ledgerSeq,
NetClock::time_point closeTime,
Config const& config,
Rules const& rules,
Fees const& fees,
Family& family)
: mImmutable(false)
, txMap_(SHAMapType::TRANSACTION, family)
, stateMap_(SHAMapType::STATE, family)
, rules_{config.features}
, fees_(fees)
, rules_(rules)
, j_(beast::Journal(beast::Journal::getNullSink()))
{
header_.seq = ledgerSeq;
header_.closeTime = closeTime;
header_.closeTimeResolution = ledgerDefaultTimeResolution;
defaultFees(config);
setup();
}
@@ -350,14 +326,14 @@ Ledger::addSLE(SLE const& sle)
 //------------------------------------------------------------------------------
 
 std::shared_ptr<STTx const>
-deserializeTx(SHAMapItem const& item)
+Ledger::deserializeTx(SHAMapItem const& item)
 {
     SerialIter sit(item.slice());
     return std::make_shared<STTx const>(sit);
 }
 
 std::pair<std::shared_ptr<STTx const>, std::shared_ptr<STObject const>>
-deserializeTxPlusMeta(SHAMapItem const& item)
+Ledger::deserializeTxPlusMeta(SHAMapItem const& item)
 {
     std::pair<std::shared_ptr<STTx const>, std::shared_ptr<STObject const>> result;
     SerialIter sit(item.slice());
@@ -636,20 +612,6 @@ Ledger::setup()
     return ret;
 }
 
-void
-Ledger::defaultFees(Config const& config)
-{
-    XRPL_ASSERT(
-        fees_.base == 0 && fees_.reserve == 0 && fees_.increment == 0,
-        "xrpl::Ledger::defaultFees : zero fees");
-    if (fees_.base == 0)
-        fees_.base = config.FEES.reference_fee;
-    if (fees_.reserve == 0)
-        fees_.reserve = config.FEES.account_reserve;
-    if (fees_.increment == 0)
-        fees_.increment = config.FEES.owner_reserve;
-}
-
 std::shared_ptr<SLE>
 Ledger::peek(Keylet const& k) const
 {
@@ -816,27 +778,17 @@ Ledger::walkLedger(beast::Journal j, bool parallel) const
 }
 
 bool
-Ledger::assertSensible(beast::Journal ledgerJ) const
+Ledger::isSensible() const
 {
-    if (header_.hash.isNonZero() && header_.accountHash.isNonZero() &&
-        (header_.accountHash == stateMap_.getHash().as_uint256()) &&
-        (header_.txHash == txMap_.getHash().as_uint256()))
-    {
-        return true;
-    }
-
-    // LCOV_EXCL_START
-    Json::Value j = getJson({*this, {}});
-    j[jss::accountTreeHash] = to_string(header_.accountHash);
-    j[jss::transTreeHash] = to_string(header_.txHash);
-    JLOG(ledgerJ.fatal()) << "ledger is not sensible" << j;
-    UNREACHABLE("xrpl::Ledger::assertSensible : ledger is not sensible");
-    return false;
-    // LCOV_EXCL_STOP
+    if (header_.hash.isZero())
+        return false;
+    if (header_.accountHash.isZero())
+        return false;
+    if (header_.accountHash != stateMap_.getHash().as_uint256())
+        return false;
+    if (header_.txHash != txMap_.getHash().as_uint256())
+        return false;
+    return true;
 }
 
 // update the skip list with the information from our previous ledger
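
The rewritten isSensible() no longer logs or fires UNREACHABLE itself, so any reporting presumably moves to the call site. A minimal sketch of what a caller might now do; `ledger` and the journal `j` are illustrative assumptions, not taken from this diff:

    // Hypothetical call site: the caller decides how loudly to complain.
    if (!ledger->isSensible())
    {
        JLOG(j.fatal()) << "ledger " << to_string(ledger->header().hash)
                        << " failed hash/map consistency checks";
        return false;
    }
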
@@ -925,76 +877,6 @@ Ledger::isVotingLedger() const
     return ::xrpl::isVotingLedger(header_.seq + 1);
 }
 
-static bool
-saveValidatedLedger(Application& app, std::shared_ptr<Ledger const> const& ledger, bool current)
-{
-    auto j = app.journal("Ledger");
-    auto seq = ledger->header().seq;
-    if (!app.pendingSaves().startWork(seq))
-    {
-        // The save was completed synchronously
-        JLOG(j.debug()) << "Save aborted";
-        return true;
-    }
-
-    auto& db = app.getRelationalDatabase();
-    auto const res = db.saveValidatedLedger(ledger, current);
-
-    // Clients can now trust the database for
-    // information about this ledger sequence.
-    app.pendingSaves().finishWork(seq);
-    return res;
-}
-
-/** Save, or arrange to save, a fully-validated ledger
-    Returns false on error
-*/
-bool
-pendSaveValidated(
-    Application& app,
-    std::shared_ptr<Ledger const> const& ledger,
-    bool isSynchronous,
-    bool isCurrent)
-{
-    if (!app.getHashRouter().setFlags(ledger->header().hash, HashRouterFlags::SAVED))
-    {
-        // We have tried to save this ledger recently
-        auto stream = app.journal("Ledger").debug();
-        JLOG(stream) << "Double pend save for " << ledger->header().seq;
-
-        if (!isSynchronous || !app.pendingSaves().pending(ledger->header().seq))
-        {
-            // Either we don't need it to be finished
-            // or it is finished
-            return true;
-        }
-    }
-
-    XRPL_ASSERT(ledger->isImmutable(), "xrpl::pendSaveValidated : immutable ledger");
-
-    if (!app.pendingSaves().shouldWork(ledger->header().seq, isSynchronous))
-    {
-        auto stream = app.journal("Ledger").debug();
-        JLOG(stream) << "Pend save with seq in pending saves " << ledger->header().seq;
-
-        return true;
-    }
-
-    // See if we can use the JobQueue.
-    if (!isSynchronous &&
-        app.getJobQueue().addJob(
-            isCurrent ? jtPUBLEDGER : jtPUBOLDLEDGER,
-            std::to_string(ledger->seq()),
-            [&app, ledger, isCurrent]() { saveValidatedLedger(app, ledger, isCurrent); }))
-    {
-        return true;
-    }
-
-    // The JobQueue won't do the Job. Do the save synchronously.
-    return saveValidatedLedger(app, ledger, isCurrent);
-}
-
 void
 Ledger::unshare() const
 {
@@ -1008,84 +890,5 @@ Ledger::invariants() const
     stateMap_.invariants();
     txMap_.invariants();
 }
-
-//------------------------------------------------------------------------------
-
-/*
- * Make ledger using info loaded from database.
- *
- * @param LedgerHeader: Ledger information.
- * @param app: Link to the Application.
- * @param acquire: Acquire the ledger if not found locally.
- * @return Shared pointer to the ledger.
- */
-std::shared_ptr<Ledger>
-loadLedgerHelper(LedgerHeader const& info, Application& app, bool acquire)
-{
-    bool loaded = false;
-    auto ledger = std::make_shared<Ledger>(
-        info, loaded, acquire, app.config(), app.getNodeFamily(), app.journal("Ledger"));
-
-    if (!loaded)
-        ledger.reset();
-
-    return ledger;
-}
-
-static void
-finishLoadByIndexOrHash(
-    std::shared_ptr<Ledger> const& ledger,
-    Config const& config,
-    beast::Journal j)
-{
-    if (!ledger)
-        return;
-
-    XRPL_ASSERT(
-        ledger->header().seq < XRP_LEDGER_EARLIEST_FEES || ledger->read(keylet::fees()),
-        "xrpl::finishLoadByIndexOrHash : valid ledger fees");
-    ledger->setImmutable();
-
-    JLOG(j.trace()) << "Loaded ledger: " << to_string(ledger->header().hash);
-
-    ledger->setFull();
-}
-
-std::tuple<std::shared_ptr<Ledger>, std::uint32_t, uint256>
-getLatestLedger(Application& app)
-{
-    std::optional<LedgerHeader> const info = app.getRelationalDatabase().getNewestLedgerInfo();
-    if (!info)
-        return {std::shared_ptr<Ledger>(), {}, {}};
-    return {loadLedgerHelper(*info, app, true), info->seq, info->hash};
-}
-
-std::shared_ptr<Ledger>
-loadByIndex(std::uint32_t ledgerIndex, Application& app, bool acquire)
-{
-    if (std::optional<LedgerHeader> info =
-            app.getRelationalDatabase().getLedgerInfoByIndex(ledgerIndex))
-    {
-        std::shared_ptr<Ledger> ledger = loadLedgerHelper(*info, app, acquire);
-        finishLoadByIndexOrHash(ledger, app.config(), app.journal("Ledger"));
-        return ledger;
-    }
-    return {};
-}
-
-std::shared_ptr<Ledger>
-loadByHash(uint256 const& ledgerHash, Application& app, bool acquire)
-{
-    if (std::optional<LedgerHeader> info =
-            app.getRelationalDatabase().getLedgerInfoByHash(ledgerHash))
-    {
-        std::shared_ptr<Ledger> ledger = loadLedgerHelper(*info, app, acquire);
-        finishLoadByIndexOrHash(ledger, app.config(), app.journal("Ledger"));
-        XRPL_ASSERT(
-            !ledger || ledger->header().hash == ledgerHash,
-            "xrpl::loadByHash : ledger hash match if loaded");
-        return ledger;
-    }
-    return {};
-}
-
 } // namespace xrpl
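
With defaultFees() and the Config parameters gone, the constructors above expect the caller to resolve Rules and Fees before constructing a Ledger. A rough sketch of how a loader like the removed loadLedgerHelper might be adapted; rulesFor() and feesFor() are hypothetical helpers standing in for whatever the real call sites use:

    // Sketch only: assumes rulesFor()/feesFor() derive Rules from
    // config.features and Fees from config.FEES, replacing defaultFees().
    bool loaded = false;
    auto ledger = std::make_shared<Ledger>(
        info,
        loaded,
        acquire,
        rulesFor(app),   // hypothetical: Rules built from app.config().features
        feesFor(app),    // hypothetical: explicit Fees, no zero-fee fallback
        app.getNodeFamily(),
        app.journal("Ledger"));
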


@@ -164,7 +164,7 @@ PaymentSandbox::balanceHook(
     auto delta = amount.zeroed();
     auto lastBal = amount;
     auto minBal = amount;
-    for (auto curSB = this; curSB; curSB = curSB->ps_)
+    for (auto curSB = this; curSB != nullptr; curSB = curSB->ps_)
     {
         if (auto adj = curSB->tab_.adjustments(account, issuer, currency))
         {
@@ -198,7 +198,7 @@ std::uint32_t
 PaymentSandbox::ownerCountHook(AccountID const& account, std::uint32_t count) const
 {
     std::uint32_t result = count;
-    for (auto curSB = this; curSB; curSB = curSB->ps_)
+    for (auto curSB = this; curSB != nullptr; curSB = curSB->ps_)
     {
         if (auto adj = curSB->tab_.ownerCount(account))
             result = std::max(result, *adj);
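
Both hooks walk the chain of stacked sandboxes via the ps_ parent pointer, folding each layer's adjustment into the result; spelling the condition as curSB != nullptr only makes the termination test explicit. The same traversal in isolation, with toy stand-in types rather than the real PaymentSandbox:

    #include <algorithm>
    #include <cstdint>
    #include <iostream>

    // Toy stand-ins: each layer records an owner-count adjustment and
    // points at its parent; the root's parent is nullptr.
    struct Sandbox
    {
        std::uint32_t ownerCountAdj = 0;
        Sandbox const* ps_ = nullptr;
    };

    std::uint32_t
    ownerCountHook(Sandbox const* self, std::uint32_t count)
    {
        std::uint32_t result = count;
        // Walk from the innermost layer out to the root, keeping the max.
        for (auto curSB = self; curSB != nullptr; curSB = curSB->ps_)
            result = std::max(result, curSB->ownerCountAdj);
        return result;
    }

    int main()
    {
        Sandbox root{5};
        Sandbox inner{7, &root};
        std::cout << ownerCountHook(&inner, 3) << '\n';  // prints 7
    }
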


@@ -433,8 +433,7 @@ doWithdraw(
             j) < amount)
     {
         // LCOV_EXCL_START
-        JLOG(j.error()) << "LoanBrokerCoverWithdraw: negative balance of "
-                           "broker cover assets.";
+        JLOG(j.error()) << "doWithdraw: negative balance of broker cover assets.";
         return tefINTERNAL;
         // LCOV_EXCL_STOP
     }


@@ -78,7 +78,7 @@ deleteSLE(ApplyView& view, std::shared_ptr<SLE> const& sleCredential, beast::Jou
     auto const issuer = sleCredential->getAccountID(sfIssuer);
     auto const subject = sleCredential->getAccountID(sfSubject);
-    bool const accepted = sleCredential->getFlags() & lsfAccepted;
+    bool const accepted = (sleCredential->getFlags() & lsfAccepted) != 0u;
 
     auto err = delSLE(issuer, sfIssuerNode, !accepted || (subject == issuer));
     if (!isTesSuccess(err))
@@ -147,7 +147,7 @@ valid(STTx const& tx, ReadView const& view, AccountID const& src, beast::Journal
         return tecBAD_CREDENTIALS;
     }
 
-    if (!(sleCred->getFlags() & lsfAccepted))
+    if ((sleCred->getFlags() & lsfAccepted) == 0u)
     {
         JLOG(j.trace()) << "Credential isn't accepted. Cred: " << h;
         return tecBAD_CREDENTIALS;
@@ -188,7 +188,7 @@ validDomain(ReadView const& view, uint256 domainID, AccountID const& subject)
             foundExpired = true;
             continue;
         }
-        if (sleCredential->getFlags() & lsfAccepted)
+        if ((sleCredential->getFlags() & lsfAccepted) != 0u)
         {
             return tesSUCCESS;
         }
@@ -309,7 +309,7 @@ verifyValidDomain(ApplyView& view, AccountID const& account, uint256 domainID, b
         if (!sleCredential)
             continue;  // expired, i.e. deleted in credentials::removeExpired
 
-        if (sleCredential->getFlags() & lsfAccepted)
+        if ((sleCredential->getFlags() & lsfAccepted) != 0u)
             return tesSUCCESS;
     }
@@ -336,7 +336,7 @@ verifyDepositPreauth(
     if (credentialsPresent && credentials::removeExpired(view, tx.getFieldV256(sfCredentialIDs), j))
         return tecEXPIRED;
 
-    if (sleDst && (sleDst->getFlags() & lsfDepositAuth))
+    if (sleDst && ((sleDst->getFlags() & lsfDepositAuth) != 0u))
     {
         if (src != dst)
         {
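
The pattern throughout this file is the same mechanical change: bit tests such as flags & lsfAccepted now compare the masked value against 0u rather than relying on implicit integer-to-bool conversion. The two forms are equivalent; the explicit spelling just states the intent and quiets implicit-conversion lints. A standalone check, with the flag value assumed purely for illustration:

    #include <cassert>
    #include <cstdint>

    int main()
    {
        // lsfAccepted value assumed for illustration only.
        constexpr std::uint32_t lsfAccepted = 0x00010000u;
        std::uint32_t const flags = 0x00010003u;

        bool const oldStyle = flags & lsfAccepted;          // implicit conversion
        bool const newStyle = (flags & lsfAccepted) != 0u;  // explicit comparison
        assert(oldStyle == newStyle);
    }
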


@@ -69,7 +69,7 @@ forEachItem(
         for (auto const& key : sle->getFieldV256(sfIndexes))
             f(view.read(keylet::child(key)));
         auto const next = sle->getFieldU64(sfIndexNext);
-        if (!next)
+        if (next == 0u)
             return;
         pos = keylet::page(root, next);
     }
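
The hunk above walks an owner directory stored as a linked list of pages: each page carries its entries in sfIndexes and the next page number in sfIndexNext, and a next value of 0 marks the last page, hence the next == 0u test. The same walk over a toy in-memory page map:

    #include <cstdint>
    #include <iostream>
    #include <map>
    #include <vector>

    // Toy directory: pages keyed by page number; not the real SLE layout.
    struct Page
    {
        std::vector<int> indexes;     // stand-in for sfIndexes
        std::uint64_t indexNext = 0;  // stand-in for sfIndexNext; 0 = last page
    };

    int main()
    {
        std::map<std::uint64_t, Page> dir{
            {0, {{1, 2}, 7}},  // root page links to page 7
            {7, {{3}, 0}},     // last page terminates the list
        };

        std::uint64_t pos = 0;  // start at the root page
        while (true)
        {
            Page const& page = dir.at(pos);
            for (int key : page.indexes)
                std::cout << key << '\n';  // visit each entry, like f(...)
            if (page.indexNext == 0u)
                return 0;
            pos = page.indexNext;  // analogous to keylet::page(root, next)
        }
    }
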


@@ -1,6 +1,7 @@
 #include <xrpl/ledger/helpers/MPTokenHelpers.h>
 //
+#include <xrpl/basics/Log.h>
 #include <xrpl/ledger/View.h>
 #include <xrpl/ledger/helpers/AccountRootHelpers.h>
 #include <xrpl/ledger/helpers/CredentialHelpers.h>
 #include <xrpl/ledger/helpers/DirectoryHelpers.h>
@@ -12,21 +13,6 @@
 namespace xrpl {
 
-// Forward declarations for functions that remain in View.h/cpp
-bool
-isVaultPseudoAccountFrozen(
-    ReadView const& view,
-    AccountID const& account,
-    MPTIssue const& mptShare,
-    int depth);
-
-[[nodiscard]] TER
-dirLink(
-    ApplyView& view,
-    AccountID const& owner,
-    std::shared_ptr<SLE>& object,
-    SF_UINT64 const& node = sfOwnerNode);
-
 bool
 isGlobalFrozen(ReadView const& view, MPTIssue const& mptIssue)
 {
@@ -83,7 +69,7 @@ transferRate(ReadView const& view, MPTID const& issuanceID)
     // which represents 50% of 1,000,000,000
     if (auto const sle = view.read(keylet::mptIssuance(issuanceID));
         sle && sle->isFieldPresent(sfTransferFee))
-        return Rate{1'000'000'000u + 10'000 * sle->getFieldU16(sfTransferFee)};
+        return Rate{1'000'000'000u + (10'000 * sle->getFieldU16(sfTransferFee))};
 
     return parityRate;
 }
@@ -149,7 +135,7 @@ authorizeMPToken(
     // When a holder wants to unauthorize/delete a MPT, the ledger must
     // - delete mptokenKey from owner directory
     // - delete the MPToken
-    if (flags & tfMPTUnauthorize)
+    if ((flags & tfMPTUnauthorize) != 0)
     {
         auto const mptokenKey = keylet::mptoken(mptIssuanceID, account);
         auto const sleMpt = view.peek(mptokenKey);
@@ -229,7 +215,7 @@ authorizeMPToken(
         // Issuer wants to unauthorize the holder, unset lsfMPTAuthorized on
         // their MPToken
-        if (flags & tfMPTUnauthorize)
+        if ((flags & tfMPTUnauthorize) != 0)
         {
             flagsOut &= ~lsfMPTAuthorized;
         }
@@ -490,7 +476,7 @@ canTransfer(
     if (!sleIssuance)
         return tecOBJECT_NOT_FOUND;
 
-    if (!(sleIssuance->getFieldU32(sfFlags) & lsfMPTCanTransfer))
+    if (!sleIssuance->isFlag(lsfMPTCanTransfer))
     {
         if (from != (*sleIssuance)[sfIssuer] && to != (*sleIssuance)[sfIssuer])
             return TER{tecNO_AUTH};
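
The parenthesized form of the transferRate return is arithmetically identical to the old one: sfTransferFee is scaled by 10,000 onto a parity base of 1,000,000,000, so (per the comment in that hunk) the maximum fee of 50,000 corresponds to 50%. The scaling checked in isolation:

    #include <cassert>
    #include <cstdint>

    int main()
    {
        constexpr std::uint32_t parity = 1'000'000'000u;

        // One fee unit is 1/1000 of a percent: fee * 10'000 parts per billion.
        auto const rate = [](std::uint32_t fee) { return parity + (10'000u * fee); };

        assert(rate(0) == parity);               // no fee: parity
        assert(rate(50'000) == 1'500'000'000u);  // max fee: 50% over parity
        assert(rate(314) == 1'003'140'000u);     // 0.314%
        return 0;
    }
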

Some files were not shown because too many files have changed in this diff.