Compare commits

...

67 Commits

Author SHA1 Message Date
Pratik Mankawde
1035ca41dd fix(telemetry): fix Windows WinSock.h header ordering in MetricsRegistry test
Pre-include boost/asio/detail/socket_types.hpp on Windows before OTel
SDK headers to ensure WinSock2.h is included before WinSock.h.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-31 20:16:16 +01:00
Pratik Mankawde
44765952f1 fix(telemetry): fix CI linker errors, check-rename, and docs build
- Add ValidationTracker.cpp to xrpl.test.telemetry target sources
  (implementation lives in src/xrpld/ but has no OTel SDK dependency)
- Change BEAST_DEFINE_TESTSUITE namespace from ripple to xrpl
- Replace recursive *.md glob with non-recursive GLOB in XrplDocs.cmake
  to avoid picking up .claude/instructions.md

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-31 19:42:43 +01:00
Pratik Mankawde
17270be582 feat(telemetry): add workload orchestrator with phased load profiles
Add a profile-driven workload orchestrator that executes sequential load
phases with configurable RPC rates and TX throughput. Three profiles:
full-validation (6 phases covering all 18 dashboards), quick-smoke (CI),
and stress (benchmarking). Fix 10 validation failures: correct Phase 9
metric prefixes, relax peer latency bounds for localhost clusters, and
allow sub-microsecond span durations.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-31 19:29:12 +01:00
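As a rough illustration of the phased-load idea in the commit above (not the actual orchestrator code — the class and field names here are hypothetical), a profile can be modeled as a sequence of phases, each pacing RPC and transaction callbacks toward a target rate:

```python
import time
from dataclasses import dataclass

@dataclass
class Phase:
    # Hypothetical schema for illustration; the real profiles define more knobs.
    name: str
    rpc_rate: float    # target RPC requests per second
    tx_tps: float      # target transaction submissions per second
    duration_s: float  # how long the phase runs

def run_profile(phases, send_rpc, send_tx, clock=time.monotonic):
    """Run phases sequentially, pacing callbacks to track the target rates.

    The clock is injectable so the pacing loop can be exercised in tests
    without real waiting.
    """
    totals = {}
    for phase in phases:
        start = clock()
        rpc_sent = tx_sent = 0
        while (elapsed := clock() - start) < phase.duration_s:
            # Catch up to the cumulative target implied by the elapsed time.
            while rpc_sent < phase.rpc_rate * elapsed:
                send_rpc(phase.name)
                rpc_sent += 1
            while tx_sent < phase.tx_tps * elapsed:
                send_tx(phase.name)
                tx_sent += 1
        totals[phase.name] = (rpc_sent, tx_sent)
    return totals
```

A real implementation would sleep between loop iterations rather than busy-spin; the sketch omits that for brevity.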
Pratik Mankawde
5850ef1eb4 fix(telemetry): fix CI failures in MetricsRegistry test, levelization, and dashboard titles
- Update MockServiceRegistry to match current ServiceRegistry interface
  (17 method renames: get* prefix, PathRequests→PathRequestManager)
- Make throwUnimplemented() static to satisfy clang-tidy
- Regenerate levelization ordering.txt and loops.txt
- Remove 'rippled' prefix from 3 StatsD dashboard titles

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-31 18:43:39 +01:00
Pratik Mankawde
a6acd725f2 docs: update data-collection-reference and presentation for external dashboard parity
- Fix validations_checked_total recording site (NetworkOPs, not LedgerMaster)
- Add Slide 11 to presentation: External Dashboard Parity overview with
  Mermaid diagrams for new metric categories, ValidationTracker sequence,
  and new dashboard summary

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 16:41:04 +01:00
Pratik Mankawde
79d83660d4 fix(telemetry): fix dashboard UID and add parity attributes to expected_spans
- Remove duplicate 'system-node-health' UID from expected_metrics.json
  (already covered by 'rippled-system-node-health')
- Add parity span attributes to expected_spans.json: node health on
  rpc.command.*, validation hash/full on consensus.validation.send,
  quorum/proposers on consensus.accept, validation hash/full on
  peer.validation.receive

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 16:41:04 +01:00
Pratik Mankawde
186bfa8ebe feat(telemetry): add external dashboard parity validation checks (Task 10.8)
Add ~28 validation checks for external dashboard parity:
- 8 span attribute checks (server_info, tx.receive, consensus, peer spans)
- 13 metric existence checks (validation agreement, validator health,
  peer quality, ledger economy, state tracking, counters, storage)
- 3 dashboard load checks (validator-health, peer-quality, system-node-health)
- 4 value sanity checks (agreement %, UNL expiry, latency, state value)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 16:41:04 +01:00
Pratik Mankawde
6a19ab6568 docs: add Tasks 11.12-11.13 for external dashboard parity alerts and docs
Task 11.12: 18 Grafana alert rules (critical/network/performance groups)
for Phase 7+ parity metrics — validation agreement, state tracking,
validator health, peer quality, ledger economy.

Task 11.13: Dual-datasource architecture documentation — records the
external dashboard's fast-path pattern as a future optimization option.

Source: external dashboard parity design spec (2026-03-30).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-31 16:41:04 +01:00
Pratik Mankawde
0e66672413 docs: add Task 10.8 external dashboard parity validation checks
28 new checks for validate_telemetry.py: 8 span attribute checks
(Phase 2-4 enrichments), 13 metric existence checks (Phase 7+ gauges
and counters), 3 dashboard load checks, and 4 metric sanity checks.

Source: external dashboard parity design spec (2026-03-30).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-31 16:41:04 +01:00
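The metric existence checks described above reduce to a lookup against the set of metric names scraped from a Prometheus endpoint. A minimal sketch of that pattern (the function names are illustrative, not the real validate_telemetry.py API):

```python
def parse_metric_names(prometheus_text: str) -> set:
    """Collect metric names from Prometheus text exposition format,
    ignoring comment lines and label sets."""
    names = set()
    for line in prometheus_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # A sample line looks like: name{label="x"} 42   or   name 42
        names.add(line.split("{")[0].split()[0])
    return names

def check_metrics_exist(prometheus_text, expected):
    """Return the expected metric names that are absent from the scrape."""
    present = parse_metric_names(prometheus_text)
    return [m for m in expected if m not in present]

sample = """# HELP rippled_validation_agreement ...
rippled_validation_agreement{window="24h"} 99.7
rippled_server_info{metric="uptime"} 12345
"""
```

An empty return value means every expected metric was present; non-empty lists become validation failures.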
Pratik Mankawde
df3dd9d3e8 Phase 10: Workload validation - synthetic load generation and telemetry checks
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-31 16:41:04 +01:00
Pratik Mankawde
15bee7a01a feat(telemetry): add 7-day agreement window to validation_agreement gauge
Add agreement_pct_7d, agreements_7d, missed_7d labels to the
rippled_validation_agreement observable gauge, matching the external
xrpl-validator-dashboard's 7-day tracking.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 16:40:46 +01:00
Pratik Mankawde
1408ffd794 fix(telemetry): fix ServiceRegistry API names and transaction rate computation
- cachedSLEs() -> getCachedSLEs()
- openLedger() -> getOpenLedger()
- overlay() -> getOverlay()
- Use OpenView::txCount() for transaction rate instead of SHAMap::size()

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 16:39:40 +01:00
Pratik Mankawde
5d8d7ad6dc feat(telemetry): wire ValidationTracker to MetricsRegistry and consensus hooks
Add ValidationTracker member to MetricsRegistry with a public accessor,
register a rippled_validation_agreement observable gauge that calls
reconcile() and reports 1h/24h agreement percentages and counts, and
hook recordOurValidation/recordNetworkValidation into RCLConsensus
validate() and LedgerMaster setValidLedger() respectively.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 16:39:40 +01:00
Pratik Mankawde
29e1e851a3 feat(telemetry): add validationsChecked recording hook in recvValidation
Wire incrementValidationsChecked() into NetworkOPs::recvValidation() so
each received network validation increments the counter.

Note: incrementJqTransOverflow() hook is deferred — JobQueue has no
explicit overflow event path; the counter is reserved for future use.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 16:39:40 +01:00
Pratik Mankawde
c4bafa3c93 fix(telemetry): add missing counters, fix dashboard metric name, clean dead code
- Add rippled_validation_agreements_total and rippled_validation_missed_total
  counter declarations and creation (wiring to ValidationTracker pending rebase)
- Fix peer-quality dashboard: query rippled_server_info{metric="peer_disconnects_resources"}
  instead of non-existent rippled_Overlay_Peer_Disconnects_Charges
- Remove dead getCountsJson() call in storageDetail callback

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 16:39:40 +01:00
Pratik Mankawde
1371aebb63 fix(telemetry): fix metric labels and add missing parity gauge values
- Rename fee labels to match spec: base_fee_drops -> base_fee_xrp,
  reserve_base_drops -> reserve_base_xrp, reserve_inc_drops -> reserve_inc_xrp
- Add peers_insane_count (stub with TODO for PeerImp::tracking_ exposure)
- Add transaction_rate to ledger economy gauge
- Replace node_store_writes/node_written_bytes with nudb_bytes per spec

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 16:39:40 +01:00
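The label renames above reflect a unit conversion: XRPL amounts are stored as integer drops, and 1 XRP = 1,000,000 drops, so *_xrp labels report the human-scale unit. A one-line conversion makes the relationship concrete:

```python
DROPS_PER_XRP = 1_000_000  # XRPL protocol: 1 XRP = 1,000,000 drops

def drops_to_xrp(drops: int) -> float:
    """Convert a drop amount (the ledger's integer unit) to XRP,
    as used by the renamed *_xrp metric labels."""
    return drops / DROPS_PER_XRP
```

For example, a base reserve of 10,000,000 drops is reported as 10 XRP.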
Pratik Mankawde
2f48e68013 feat(telemetry): add external dashboard parity gauges and counters to MetricsRegistry
Add validator health, peer quality, ledger economy, state tracking, and
storage detail observable gauges plus 5 synchronous counters with recording
hooks for ledger close, validation send, state change, and overflow events.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 16:39:40 +01:00
Pratik Mankawde
dfda2e4185 feat(telemetry): add validator health, peer quality dashboards and ledger economy panels (Tasks 9.11-9.13)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 16:39:40 +01:00
Pratik Mankawde
92d109ce16 docs: add external dashboard parity tasks and metric reference for Phase 9
Add Tasks 9.11-9.13 (Validator Health, Peer Quality, Ledger Economy dashboards),
new metric tables in data-collection-reference, and monitoring sections in runbook
covering validation agreement, validator health, peer quality, and state tracking.

Source: external dashboard parity design spec (2026-03-30).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-31 16:39:40 +01:00
Pratik Mankawde
6738f8b9ab docs: update Phase 9 docs and dashboard for push_metrics.py parity gauges
- Add Task 9.7a to Phase9_taskList.md documenting new gauges
- Add metric tables to 09-data-collection-reference.md (server_info,
  build_info, complete_ledgers, db_metrics, extended cache/nodestore)
- Update metric counts from ~50 to ~68 in 06-implementation-phases.md
- Add OTel MetricsRegistry gauge reference to telemetry-runbook.md
- Add 11 new panels to system-node-health.json Grafana dashboard
  (server state, uptime, peers, validated seq, last close info,
  build version, complete ledgers, db sizes, historical fetch rate,
  peer disconnects)
- Fix leftover merge conflict marker in 08-appendix.md
- Add ripplex/mseconds to cspell dictionary

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-31 16:39:40 +01:00
Pratik Mankawde
5e19b24ebf feat(telemetry): add push_metrics.py parity gauges to MetricsRegistry
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-31 16:39:40 +01:00
Pratik Mankawde
43d36ff4f0 Phase 9: Metric gap fill - nodestore, cache, TxQ, load factor dashboards
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-31 16:39:40 +01:00
Pratik Mankawde
6916734eae Phase 8: Log-trace correlation with Loki and filelog receiver
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-31 16:39:40 +01:00
Pratik Mankawde
aa24a38d51 feat(telemetry): add 7-day validation agreement window to ValidationTracker
Add window7d_ deque, agreementPct7d(), agreements7d(), missed7d() to
match the external xrpl-validator-dashboard's 7-day agreement tracking.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 16:39:39 +01:00
Pratik Mankawde
8880d15c8c test(telemetry): add ValidationTracker unit tests
Cover normal agreement, missed validation, late repair, empty window,
grace period boundary, max pending trimming, mixed results, duplicate
recording, and only-we-validated scenarios.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 16:39:39 +01:00
Pratik Mankawde
f329a5d899 fix(telemetry): fix ValidationTracker grace period boundary and hard trim
- Use >= instead of > for grace period comparison to reconcile at exactly
  8 seconds rather than skipping the boundary
- Two-pass hard trim: first remove entries past late-repair window, then
  any reconciled entry, to avoid sabotaging late repairs

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 16:39:39 +01:00
Pratik Mankawde
4c11ebbfc0 feat(telemetry): add ValidationTracker for validation agreement tracking (Task 7.8)
Standalone class that tracks whether this validator's validations agree
with network consensus, maintaining rolling 1h/24h windows and lifetime
totals with a late-repair mechanism for out-of-order arrivals.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 16:39:39 +01:00
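The agreement-tracking idea in the two commits above can be sketched as follows. This is a deliberately simplified model (no rolling 1h/24h windows, no late-repair pass): record our validation per ledger, record the network's fully-validated hash, and reconcile entries once they are at least as old as the grace period — using `>=` so the exact 8-second boundary reconciles, per the boundary fix above. Class and method names are illustrative, not the real ValidationTracker API.

```python
import time

GRACE_PERIOD_S = 8  # reconcile once a pending entry is >= 8 seconds old

class AgreementTracker:
    """Minimal sketch: does our validation hash match the network's?"""

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._ours = {}     # seq -> (our hash, recorded_at), awaiting reconcile
        self._network = {}  # seq -> fully-validated network hash
        self.agreements = 0
        self.missed = 0

    def record_our_validation(self, seq, ledger_hash):
        self._ours[seq] = (ledger_hash, self._clock())

    def record_network_validation(self, seq, ledger_hash):
        self._network[seq] = ledger_hash

    def reconcile(self):
        now = self._clock()
        ready = [s for s, (_, t) in self._ours.items()
                 if now - t >= GRACE_PERIOD_S]  # >=: the boundary reconciles
        for seq in ready:
            ours, _ = self._ours.pop(seq)
            if self._network.get(seq) == ours:
                self.agreements += 1
            else:
                self.missed += 1

    def agreement_pct(self):
        total = self.agreements + self.missed
        return 100.0 * self.agreements / total if total else 100.0
```

The real tracker additionally repairs entries that were counted as missed when the matching network validation arrives late, and trims windows as described in the hard-trim fix above.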
Pratik Mankawde
fe10835a7c docs: add Tasks 7.9-7.16 for external dashboard parity metrics
Adds ValidationTracker (agreement computation with 8s grace period),
validator health, peer quality, ledger economy, state tracking,
storage detail gauges, 7 synchronous counters, and agreement gauge.

29 new metrics covering validation agreement, peer quality, UNL health,
ledger economy, state tracking, and upgrade awareness.

Part of the external dashboard parity initiative across phases 2-11.
See docs/superpowers/specs/2026-03-30-external-dashboard-parity-design.md

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-31 16:39:39 +01:00
Pratik Mankawde
4137495282 Phase 7: Native OTel metrics migration
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-31 16:39:39 +01:00
Pratik Mankawde
7e808806a9 docs: add peerDisconnectsCharges metric to data collection reference
Bridge the existing beast::insight gauge for resource-limit peer
disconnects (peerDisconnectsCharges_) into the StatsD metric inventory.

Part of the external dashboard parity initiative.
See docs/superpowers/specs/2026-03-30-external-dashboard-parity-design.md

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-31 16:39:39 +01:00
Pratik Mankawde
7ad43d4c21 Phase 6: StatsD metrics integration into telemetry pipeline
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-31 16:39:39 +01:00
Pratik Mankawde
4f8197e01d fix: remove non-existent CanonicalTXSet.h include from BuildLedger.cpp
The xrpld/app/misc/CanonicalTXSet.h header doesn't exist — it was
incorrectly added during a rebase conflict resolution. The correct
include xrpl/ledger/CanonicalTXSet.h is already present.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 16:39:39 +01:00
Pratik Mankawde
f35ed05bcc feat(telemetry): add validation attributes to peer.validation.receive span (Task 4.8)
Add ledger hash and full-validation flag to peer.validation.receive
spans for trace-level agreement analysis across validators.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 16:39:39 +01:00
Pratik Mankawde
753e7721e0 Phase 5b: Ledger, peer, and tx spans with expanded Grafana dashboards
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-31 16:39:39 +01:00
Pratik Mankawde
58aa308052 fix: use docker/telemetry/data/ for runtime data and add .gitignore
Move xrpld data paths from ./data/ to docker/telemetry/data/ so runtime
files stay within the docker telemetry directory. Add .gitignore to
exclude the data directory from version control.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 16:39:20 +01:00
Pratik Mankawde
c07dd573fe Phase 5: Documentation, deployment configs, integration test infrastructure
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-31 13:50:38 +01:00
Pratik Mankawde
aa329d5084 fix(telemetry): move quorum/proposers attributes to consensus.accept span
Move validation_quorum and proposers_validated attributes from
consensus.accept.apply to consensus.accept span to match the design
spec. Both values are available in onAccept() scope.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 13:48:05 +01:00
Pratik Mankawde
27d2078419 feat(telemetry): add consensus validation span enrichment (Task 4.8)
Add validation ledger hash and full-validation flag to
consensus.validation.send spans, plus quorum and proposer count to
consensus.accept spans for trace-level agreement analysis.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 13:09:49 +01:00
Pratik Mankawde
14770376c3 docs: add Task 4.8 consensus validation span enrichment for external dashboard parity
Adds ledger_hash, validation.full to validation send/receive spans,
and validation_quorum, proposers_validated to consensus.accept spans.
Foundation for Phase 7 ValidationTracker agreement computation.

Part of the external dashboard parity initiative across phases 2-11.
See docs/superpowers/specs/2026-03-30-external-dashboard-parity-design.md

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-31 13:09:49 +01:00
Pratik Mankawde
69d4b77abf Phase 4: Consensus tracing - round lifecycle, proposals, validations, close time
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-31 13:09:49 +01:00
Pratik Mankawde
c93e16a413 feat(telemetry): add peer version attribute to tx.receive spans (Task 3.7)
Tag transaction receive spans with the relaying peer's rippled version
to enable version-mismatch correlation during network upgrades.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 13:09:40 +01:00
Pratik Mankawde
8dcb03d73b docs: add Task 3.8 TX span peer version attribute for external dashboard parity
Adds xrpl.peer.version attribute to tx.receive spans for version-mismatch
correlation during network upgrades.

Part of the external dashboard parity initiative across phases 2-11.
See docs/superpowers/specs/2026-03-30-external-dashboard-parity-design.md

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-31 13:09:40 +01:00
Pratik Mankawde
3436f93870 Phase 3: Transaction tracing - protobuf context propagation, PeerImp, NetworkOPs
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-31 13:09:40 +01:00
Pratik Mankawde
177c1b32db feat(telemetry): add node health attributes to RPC spans (Task 2.8)
Add amendment_blocked and server_state span attributes to every
rpc.command.* span so operators can correlate RPC behavior with node state.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 12:11:13 +01:00
Pratik Mankawde
439264dc79 docs: add Task 2.8 RPC span attribute enrichment for external dashboard parity
Adds node health context (amendment_blocked, server_state) to rpc.command.*
spans, inspired by the community xrpl-validator-dashboard.

Part of the external dashboard parity initiative across phases 2-11.
See docs/superpowers/specs/2026-03-30-external-dashboard-parity-design.md

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-30 16:07:10 +01:00
Pratik Mankawde
ab6946319c Phase 2: RPC tracing - span macros, attributes, WebSocket, command spans
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-30 16:07:10 +01:00
Pratik Mankawde
987ce6202e Phase 1b: Telemetry core infrastructure - CMake, Conan, SpanGuard, config
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-30 16:06:09 +01:00
Pratik Mankawde
a00c59eb7e Phase 1b: Telemetry core infrastructure - CMake, Conan, SpanGuard, config
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-30 16:04:31 +01:00
Pratik Mankawde
8707cc7d48 Phase 1c: RPC integration - ServerHandler tracing, telemetry config wiring
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-30 16:01:42 +01:00
Pratik Mankawde
405886719c Phase 1b: Telemetry core infrastructure - CMake, Conan, SpanGuard, config
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-30 16:01:24 +01:00
Pratik Mankawde
3ae9084bd5 Phase 1b: Telemetry core infrastructure - CMake, Conan, SpanGuard, config
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-30 16:01:24 +01:00
Pratik Mankawde
0f0c188111 Phase 1b: Telemetry core infrastructure - CMake, Conan, SpanGuard, config
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-30 15:58:38 +01:00
Pratik Mankawde
f135842071 docs: correct OTel overhead estimates against SDK benchmarks
Verified CPU, memory, and network overhead calculations against
official OTel C++ SDK benchmarks (969 CI runs) and source code
analysis. Key corrections:

- Span creation: 200-500ns → 500-1000ns (SDK BM_SpanCreation median
  ~1000ns; original estimate matched API no-op, not SDK path)
- Per-TX overhead: 2.4μs → 4.0μs (2.0% vs 1.2%; still within 1-3%)
- Active span memory: ~200 bytes → ~500-800 bytes (Span wrapper +
  SpanData + std::map attribute storage)
- Static memory: ~456KB → ~8.3MB (BatchSpanProcessor worker thread
  stack ~8MB was omitted)
- Total memory ceiling: ~2.3MB → ~10MB
- Memory success metric target: <5MB → <10MB
- AddEvent: 50-80ns → 100-200ns

Added Section 3.5.4 with links to all benchmark sources.
Updated presentation.md with matching corrections.
High-level conclusions unchanged (1-3% CPU, negligible consensus).

Also includes: review fixes, cross-document consistency improvements,
additional component tracing docs (PathFinding, TxQ, Validator, etc.),
context size corrections (32 → 25 bytes).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-30 15:55:26 +01:00
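The corrected per-TX figures above are internally consistent: 2.4 μs at 1.2% and 4.0 μs at 2.0% both imply the same baseline of roughly 200 μs of transaction-processing work, i.e. the correction doubles the overhead estimate without changing the underlying cost model. A quick arithmetic check:

```python
# Each (overhead, percentage) pair implies a per-transaction baseline cost.
baseline_old_us = 2.4 / 0.012  # original estimate: 2.4 us was 1.2%
baseline_new_us = 4.0 / 0.020  # corrected estimate: 4.0 us is 2.0%
# Both work out to ~200 us per transaction.
```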
Pratik Mankawde
a9bc525f22 moved presentation.md file
Signed-off-by: Pratik Mankawde <3397372+pratikmankawde@users.noreply.github.com>
2026-03-30 15:55:26 +01:00
Pratik Mankawde
5c9102bd9a Remove effort estimates from implementation phases document
Strip effort/risk columns from task tables and remove the §6.9 Effort
Summary section with its pie chart and resource requirements table.
Renumber §6.10 Quick Wins → §6.9.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-30 15:55:26 +01:00
Pratik Mankawde
c556f3471b Add Phase 4a implementation status to plan docs
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-30 15:55:26 +01:00
Pratik Mankawde
2fb6124412 Appendix: add 00-tracing-fundamentals.md and POC_taskList.md to document index
Split document index into Plan Documents and Task Lists sections.
These files were introduced in this branch but missing from the index.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-30 15:55:26 +01:00
Pratik Mankawde
e482b56f58 Phase 1a: OpenTelemetry plan documentation
Add comprehensive planning documentation for the OpenTelemetry
distributed tracing integration:

- Tracing fundamentals and concepts
- Architecture analysis of rippled's tracing surface area
- Design decisions and trade-offs
- Implementation strategy and code samples
- Configuration reference
- Implementation phases roadmap
- Observability backend comparison
- POC task list and presentation materials

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-30 15:55:26 +01:00
dependabot[bot]
de671863e2 ci: [DEPENDABOT] bump actions/deploy-pages from 4.0.5 to 5.0.0 (#6684)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-30 14:09:57 +00:00
dependabot[bot]
e0cabb9f8c ci: [DEPENDABOT] bump codecov/codecov-action from 5.5.3 to 6.0.0 (#6685)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-30 13:57:32 +00:00
Pratik Mankawde
3d9c545f59 fix: Guard Coro::resume() against completed coroutines (#6608)
Signed-off-by: Pratik Mankawde <3397372+pratikmankawde@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-27 18:52:18 +00:00
Vito Tumas
9b944ee8c2 refactor: Split LoanInvariant into LoanBrokerInvariant and LoanInvariant (#6674) 2026-03-27 18:35:42 +00:00
Ayaz Salikhov
509677abfd ci: Don't publish docs on release branches (#6673) 2026-03-26 14:11:37 +00:00
Jingchen
addc1e8e25 refactor: Make function naming in ServiceRegistry consistent (#6390)
Signed-off-by: JCW <a1q123456@users.noreply.github.com>
Co-authored-by: Ed Hennis <ed@ripple.com>
2026-03-26 14:11:16 +00:00
Valentin Balaschenko
faf69da4b0 chore: Shorten job names to stay within Linux 15-char thread limit (#6669) 2026-03-26 14:10:51 +00:00
Vito Tumas
76e3b4fb0f fix: Improve loan invariant message (#6668) 2026-03-26 12:40:26 +00:00
Ayaz Salikhov
e8bdbf975a ci: Upload artifacts only in public repositories (#6670) 2026-03-26 12:37:37 +00:00
261 changed files with 39438 additions and 1377 deletions

.claude/instructions.md Symbolic link
View File

@@ -0,0 +1 @@
/home/pratik/sourceCode/personal/Rippled/instructions.md

View File

@@ -36,3 +36,8 @@ ignore:
- "src/tests/"
- "include/xrpl/beast/test/"
- "include/xrpl/beast/unit_test/"
# Telemetry modules — conditionally compiled behind XRPL_ENABLE_TELEMETRY,
# which is not enabled in coverage builds.
- "src/xrpld/telemetry/"
- "src/libxrpl/beast/insight/OTelCollector.cpp"
- "include/xrpl/beast/insight/OTelCollector.h"

View File

@@ -16,6 +16,12 @@ Loop: xrpld.app xrpld.rpc
Loop: xrpld.app xrpld.shamap
xrpld.shamap ~= xrpld.app
Loop: xrpld.app xrpld.telemetry
xrpld.telemetry ~= xrpld.app
Loop: xrpld.overlay xrpld.rpc
xrpld.rpc ~= xrpld.overlay
Loop: xrpld.overlay xrpld.telemetry
xrpld.telemetry == xrpld.overlay

View File

@@ -21,6 +21,7 @@ libxrpl.protocol > xrpl.json
libxrpl.protocol > xrpl.protocol
libxrpl.protocol_autogen > xrpl.protocol_autogen
libxrpl.rdb > xrpl.basics
libxrpl.rdb > xrpl.core
libxrpl.rdb > xrpl.rdb
libxrpl.resource > xrpl.basics
libxrpl.resource > xrpl.json
@@ -33,6 +34,8 @@ libxrpl.server > xrpl.server
libxrpl.shamap > xrpl.basics
libxrpl.shamap > xrpl.protocol
libxrpl.shamap > xrpl.shamap
libxrpl.telemetry > xrpl.basics
libxrpl.telemetry > xrpl.telemetry
libxrpl.tx > xrpl.basics
libxrpl.tx > xrpl.conditions
libxrpl.tx > xrpl.core
@@ -92,6 +95,7 @@ test.csf > xrpld.consensus
test.csf > xrpl.json
test.csf > xrpl.ledger
test.csf > xrpl.protocol
test.csf > xrpl.telemetry
test.json > test.jtx
test.json > xrpl.json
test.jtx > xrpl.basics
@@ -171,20 +175,24 @@ test.shamap > xrpl.basics
test.shamap > xrpl.nodestore
test.shamap > xrpl.protocol
test.shamap > xrpl.shamap
test.telemetry > xrpl.core
test.telemetry > xrpld.telemetry
test.toplevel > test.csf
test.toplevel > xrpl.json
test.unit_test > xrpl.basics
test.unit_test > xrpl.protocol
tests.libxrpl > xrpl.basics
tests.libxrpl > xrpl.core
tests.libxrpl > xrpld.telemetry
tests.libxrpl > xrpl.json
tests.libxrpl > xrpl.net
tests.libxrpl > xrpl.protocol
tests.libxrpl > xrpl.protocol_autogen
tests.libxrpl > xrpl.telemetry
xrpl.conditions > xrpl.basics
xrpl.conditions > xrpl.protocol
xrpl.core > xrpl.basics
xrpl.core > xrpl.json
xrpl.core > xrpl.ledger
xrpl.core > xrpl.protocol
xrpl.json > xrpl.basics
xrpl.ledger > xrpl.basics
@@ -214,6 +222,7 @@ xrpl.server > xrpl.shamap
xrpl.shamap > xrpl.basics
xrpl.shamap > xrpl.nodestore
xrpl.shamap > xrpl.protocol
xrpl.telemetry > xrpl.basics
xrpl.tx > xrpl.basics
xrpl.tx > xrpl.core
xrpl.tx > xrpl.ledger
@@ -232,11 +241,14 @@ xrpld.app > xrpl.rdb
xrpld.app > xrpl.resource
xrpld.app > xrpl.server
xrpld.app > xrpl.shamap
xrpld.app > xrpl.telemetry
xrpld.app > xrpl.tx
xrpld.consensus > xrpl.basics
xrpld.consensus > xrpld.telemetry
xrpld.consensus > xrpl.json
xrpld.consensus > xrpl.ledger
xrpld.consensus > xrpl.protocol
xrpld.consensus > xrpl.telemetry
xrpld.core > xrpl.basics
xrpld.core > xrpl.core
xrpld.core > xrpl.json
@@ -260,10 +272,12 @@ xrpld.peerfinder > xrpl.rdb
xrpld.perflog > xrpl.basics
xrpld.perflog > xrpl.core
xrpld.perflog > xrpld.rpc
xrpld.perflog > xrpld.telemetry
xrpld.perflog > xrpl.json
xrpld.rpc > xrpl.basics
xrpld.rpc > xrpl.core
xrpld.rpc > xrpld.core
xrpld.rpc > xrpld.telemetry
xrpld.rpc > xrpl.json
xrpld.rpc > xrpl.ledger
xrpld.rpc > xrpl.net
@@ -274,3 +288,11 @@ xrpld.rpc > xrpl.resource
xrpld.rpc > xrpl.server
xrpld.rpc > xrpl.tx
xrpld.shamap > xrpl.shamap
xrpld.telemetry > xrpl.basics
xrpld.telemetry > xrpl.core
xrpld.telemetry > xrpld.core
xrpld.telemetry > xrpl.nodestore
xrpld.telemetry > xrpl.protocol
xrpld.telemetry > xrpl.rdb
xrpld.telemetry > xrpl.server
xrpld.telemetry > xrpl.telemetry

View File

@@ -6,7 +6,6 @@ on:
push:
branches:
- "develop"
- "release*"
paths:
- ".github/workflows/publish-docs.yml"
- "*.md"
@@ -100,4 +99,4 @@ jobs:
steps:
- name: Deploy to GitHub Pages
id: deploy
uses: actions/deploy-pages@d6db90164ac5ed86f2b6aed7e0febac5b3c0c03e # v4.0.5
uses: actions/deploy-pages@cd2ce8fcbc39b97be8ca5fce6e763baed58fa128 # v5.0.0

View File

@@ -101,7 +101,7 @@ jobs:
steps:
- name: Cleanup workspace (macOS and Windows)
if: ${{ runner.os == 'macOS' || runner.os == 'Windows' }}
uses: XRPLF/actions/cleanup-workspace@c7d9ce5ebb03c752a354889ecd870cadfc2b1cd4
uses: XRPLF/actions/cleanup-workspace@cf0433aa74563aead044a1e395610c96d65a37cf
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
@@ -199,7 +199,7 @@ jobs:
fi
- name: Upload the binary (Linux)
if: ${{ github.repository == 'XRPLF/rippled' && runner.os == 'Linux' }}
if: ${{ github.event.repository.visibility == 'public' && runner.os == 'Linux' }}
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
with:
name: xrpld-${{ inputs.config_name }}
@@ -263,18 +263,6 @@ jobs:
[ "$COVERAGE_ENABLED" = "true" ] && BUILD_NPROC=$(( BUILD_NPROC - 2 ))
./xrpld --unittest --unittest-jobs "${BUILD_NPROC}" 2>&1 | tee unittest.log
- name: Show test failure summary
if: ${{ failure() && !inputs.build_only }}
working-directory: ${{ runner.os == 'Windows' && format('{0}/{1}', env.BUILD_DIR, inputs.build_type) || env.BUILD_DIR }}
run: |
if [ ! -f unittest.log ]; then
echo "unittest.log not found; embedded tests may not have run."
exit 0
fi
if ! grep -E "failed" unittest.log; then
echo "Log present but no failure lines found in unittest.log."
fi
- name: Debug failure (Linux)
if: ${{ failure() && runner.os == 'Linux' && !inputs.build_only }}
run: |
@@ -298,7 +286,7 @@ jobs:
- name: Upload coverage report
if: ${{ github.repository == 'XRPLF/rippled' && !inputs.build_only && env.COVERAGE_ENABLED == 'true' }}
uses: codecov/codecov-action@1af58845a975a7985b0beb0cbe6fbbb71a41dbad # v5.5.3
uses: codecov/codecov-action@57e3a136b779b570ffcdbf80b3bdc90e7fab3de2 # v6.0.0
with:
disable_search: true
disable_telem: true

View File

@@ -78,12 +78,12 @@ jobs:
id: run_clang_tidy
continue-on-error: true
env:
TARGETS: ${{ inputs.files != '' && inputs.files || 'src tests' }}
FILES: ${{ inputs.files }}
run: |
run-clang-tidy -j ${{ steps.nproc.outputs.nproc }} -p "${BUILD_DIR}" ${TARGETS} 2>&1 | tee clang-tidy-output.txt
run-clang-tidy -j ${{ steps.nproc.outputs.nproc }} -p "$BUILD_DIR" $FILES 2>&1 | tee clang-tidy-output.txt
- name: Upload clang-tidy output
if: steps.run_clang_tidy.outcome != 'success'
if: ${{ github.event.repository.visibility == 'public' && steps.run_clang_tidy.outcome != 'success' }}
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
with:
name: clang-tidy-results

View File

@@ -0,0 +1,242 @@
# Telemetry Validation CI Workflow
#
# Builds rippled with telemetry enabled, runs the multi-node workload
# harness, validates all telemetry data, and runs performance benchmarks.
#
# This is a separate workflow from the main CI. It runs:
# - On manual dispatch (workflow_dispatch)
# - On pushes to telemetry-related branches
#
# The workflow is intentionally heavyweight (builds rippled, starts Docker
# services, runs a multi-node cluster) — it validates the full telemetry
# stack end-to-end rather than individual unit tests.
#
# Architecture: two jobs to leverage cached dependencies:
# 1. build-xrpld — runs on a self-hosted runner inside the same container
# image the main CI uses (debian-bookworm-gcc-13). This ensures Conan
# packages are fetched from the XRPLF remote instead of built from
# source, and ccache hits the remote cache.
# 2. validate-telemetry — runs on ubuntu-latest (which has Docker) to
# launch the telemetry stack (OTel collector, Prometheus, Tempo, etc.)
# and validate the full pipeline end-to-end.
name: Telemetry Validation
on:
  workflow_dispatch:
    inputs:
      rpc_rate:
        description: "RPC load rate (requests per second)"
        required: false
        default: "50"
      rpc_duration:
        description: "RPC load duration (seconds)"
        required: false
        default: "120"
      tx_tps:
        description: "Transaction submit rate (TPS)"
        required: false
        default: "5"
      tx_duration:
        description: "Transaction submit duration (seconds)"
        required: false
        default: "120"
      run_benchmark:
        description: "Run performance benchmarks"
        required: false
        type: boolean
        default: false
  push:
    branches:
      - "pratik/otel-phase*"
      - "feature/otel-*"
      - "feature/telemetry-*"
    paths:
      - ".github/workflows/telemetry-validation.yml"
      - "docker/telemetry/**"
      - "include/xrpl/basics/Telemetry*.h"
      - "src/xrpld/app/misc/Telemetry*"
concurrency:
  group: telemetry-validation-${{ github.ref }}
  cancel-in-progress: true
defaults:
  run:
    shell: bash
env:
  BUILD_DIR: build
jobs:
  # ── Job 1: Build xrpld in the same container the main CI uses ──────
  # This ensures Conan binary packages are fetched from the XRPLF remote
  # (matching package IDs) and ccache hits the remote compilation cache.
  build-xrpld:
    name: Build xrpld
    runs-on: [self-hosted, Linux, X64, heavy]
    container: ghcr.io/xrplf/ci/debian-bookworm:gcc-13-sha-ab4d1f0
    timeout-minutes: 60
    env:
      CCACHE_NAMESPACE: telemetry-validation
      CCACHE_REMOTE_ONLY: true
      CCACHE_REMOTE_STORAGE: http://cache.dev.ripplex.io:8080|layout=bazel
      CCACHE_SLOPPINESS: include_file_ctime,include_file_mtime
    steps:
      - name: Checkout repository
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
      - name: Prepare runner
        uses: XRPLF/actions/prepare-runner@2cbf481018d930656e9276fcc20dc0e3a0be5b6d
        with:
          enable_ccache: ${{ github.repository_owner == 'XRPLF' }}
      - name: Print build environment
        uses: ./.github/actions/print-env
      - name: Get number of processors
        uses: XRPLF/actions/get-nproc@cf0433aa74563aead044a1e395610c96d65a37cf
        id: nproc
        with:
          subtract: 2
      - name: Setup Conan
        uses: ./.github/actions/setup-conan
      - name: Build dependencies
        uses: ./.github/actions/build-deps
        with:
          build_nproc: ${{ steps.nproc.outputs.nproc }}
          build_type: Release
          log_verbosity: verbose
      - name: Configure CMake
        working-directory: ${{ env.BUILD_DIR }}
        run: |
          cmake \
            -G Ninja \
            -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
            -DCMAKE_BUILD_TYPE=Release \
            ..
      - name: Build xrpld
        working-directory: ${{ env.BUILD_DIR }}
        env:
          BUILD_NPROC: ${{ steps.nproc.outputs.nproc }}
        run: |
          cmake \
            --build . \
            --config Release \
            --parallel "${BUILD_NPROC}" \
            --target xrpld
      - name: Show ccache statistics
        if: ${{ github.repository_owner == 'XRPLF' }}
        run: ccache --show-stats -vv
      - name: Upload xrpld binary
        uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
        with:
          name: xrpld-telemetry
          path: ${{ env.BUILD_DIR }}/xrpld
          retention-days: 1
          if-no-files-found: error
  # ── Job 2: Run telemetry validation on ubuntu-latest (has Docker) ──
validate-telemetry:
name: Telemetry Stack Validation
needs: build-xrpld
runs-on: ubuntu-latest
timeout-minutes: 30
steps:
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Install Python dependencies
run: pip3 install -r docker/telemetry/workload/requirements.txt
- name: Download xrpld binary
uses: actions/download-artifact@95815c38cf2ff2164869cbab79da8d1f422bc89e # v4.2.1
with:
name: xrpld-telemetry
path: ${{ env.BUILD_DIR }}
- name: Make binaries and scripts executable
run: |
chmod +x ${{ env.BUILD_DIR }}/xrpld
chmod +x docker/telemetry/workload/*.sh
- name: Run full telemetry validation
id: validation
env:
RPC_RATE: ${{ github.event.inputs.rpc_rate || '50' }}
RPC_DURATION: ${{ github.event.inputs.rpc_duration || '120' }}
TX_TPS: ${{ github.event.inputs.tx_tps || '5' }}
TX_DURATION: ${{ github.event.inputs.tx_duration || '120' }}
RUN_BENCHMARK: ${{ github.event.inputs.run_benchmark }}
run: |
ARGS="--xrpld ${{ env.BUILD_DIR }}/xrpld --skip-loki"
ARGS="$ARGS --rpc-rate $RPC_RATE"
ARGS="$ARGS --rpc-duration $RPC_DURATION"
ARGS="$ARGS --tx-tps $TX_TPS"
ARGS="$ARGS --tx-duration $TX_DURATION"
if [ "$RUN_BENCHMARK" = "true" ]; then
ARGS="$ARGS --with-benchmark"
fi
docker/telemetry/workload/run-full-validation.sh $ARGS
# continue-on-error allows subsequent steps (artifact upload,
# summary printing) to run even if validation fails. The final
# "Check validation result" step re-checks steps.validation.outcome
# (the pre-continue-on-error result) and fails the job properly.
continue-on-error: true
- name: Upload validation reports
if: always()
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
with:
name: telemetry-validation-reports
path: /tmp/xrpld-validation/reports/
retention-days: 30
- name: Upload node logs
if: failure()
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
with:
name: xrpld-node-logs
path: /tmp/xrpld-validation/node*/debug.log
retention-days: 7
- name: Print validation summary
if: always()
run: |
REPORT="/tmp/xrpld-validation/reports/validation-report.json"
if [ -f "$REPORT" ]; then
echo "## Telemetry Validation Results" >> "$GITHUB_STEP_SUMMARY"
echo "" >> "$GITHUB_STEP_SUMMARY"
TOTAL=$(jq '.summary.total' "$REPORT")
PASSED=$(jq '.summary.passed' "$REPORT")
FAILED=$(jq '.summary.failed' "$REPORT")
echo "| Metric | Value |" >> "$GITHUB_STEP_SUMMARY"
echo "|--------|-------|" >> "$GITHUB_STEP_SUMMARY"
echo "| Total Checks | $TOTAL |" >> "$GITHUB_STEP_SUMMARY"
echo "| Passed | $PASSED |" >> "$GITHUB_STEP_SUMMARY"
echo "| Failed | $FAILED |" >> "$GITHUB_STEP_SUMMARY"
echo "" >> "$GITHUB_STEP_SUMMARY"
if [ "$FAILED" -gt 0 ]; then
echo "### Failed Checks" >> "$GITHUB_STEP_SUMMARY"
echo "" >> "$GITHUB_STEP_SUMMARY"
jq -r '.checks[] | select(.passed == false) | "- **\(.name)**: \(.message)"' "$REPORT" >> "$GITHUB_STEP_SUMMARY"
fi
fi
- name: Cleanup
if: always()
run: |
docker/telemetry/workload/run-full-validation.sh --cleanup 2>/dev/null || true
- name: Check validation result
if: steps.validation.outcome == 'failure'
run: |
echo "Telemetry validation failed. Check the uploaded reports for details."
exit 1


@@ -64,7 +64,7 @@ jobs:
steps:
- name: Cleanup workspace (macOS and Windows)
if: ${{ runner.os == 'macOS' || runner.os == 'Windows' }}
-        uses: XRPLF/actions/cleanup-workspace@c7d9ce5ebb03c752a354889ecd870cadfc2b1cd4
+        uses: XRPLF/actions/cleanup-workspace@cf0433aa74563aead044a1e395610c96d65a37cf
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2

.gitignore

@@ -85,3 +85,4 @@ __pycache__
# clangd cache
/.cache
docker/telemetry/workload/__pycache__/


@@ -117,6 +117,18 @@ if(rocksdb)
target_link_libraries(xrpl_libs INTERFACE RocksDB::rocksdb)
endif()
# OpenTelemetry distributed tracing (optional).
# When ON, links against opentelemetry-cpp and defines XRPL_ENABLE_TELEMETRY
# so that tracing macros in TracingInstrumentation.h are compiled in.
# When OFF (default), all tracing code compiles to no-ops with zero overhead.
# Enable via: conan install -o telemetry=True, or cmake -Dtelemetry=ON.
option(telemetry "Enable OpenTelemetry tracing" OFF)
if(telemetry)
find_package(opentelemetry-cpp CONFIG REQUIRED)
add_compile_definitions(XRPL_ENABLE_TELEMETRY)
message(STATUS "OpenTelemetry tracing enabled")
endif()
# Work around changes to Conan recipe for now.
if(TARGET nudb::core)
set(nudb nudb::core)


@@ -0,0 +1,567 @@
# Distributed Tracing Fundamentals
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Next**: [Architecture Analysis](./01-architecture-analysis.md)
---
## What is Distributed Tracing?
Distributed tracing is a method for tracking requests and events as they flow through a distributed system. In a network like the XRP Ledger, a single transaction touches multiple independent nodes, none of which share memory or logs. Distributed tracing connects those dots.
**Without tracing:** You see isolated logs on each node with no way to correlate them.
**With tracing:** You see the complete journey of a transaction or an event across all nodes it touched.
---
## Actors and Actions at a Glance
### Actors
| Who (Plain English) | Technical Term |
| ---------------------------------------------- | --------------- |
| A single unit of work being tracked | Span |
| The complete journey of a request | Trace |
| Data that links spans across services | Trace Context |
| Code that creates spans and propagates context | Instrumentation |
| Service that receives and processes traces | Collector |
| Storage and visualization system | Backend (Tempo) |
| Decision logic for which traces to keep | Sampler |
### Actions
| What Happens (Plain English) | Technical Term |
| --------------------------------------- | ----------------------- |
| Start tracking a new operation | Create a Span |
| Connect a child operation to its parent | Set `parent_span_id` |
| Group all related operations together | Share a `trace_id` |
| Pass tracking data between services | Context Propagation |
| Decide whether to record a trace | Sampling (Head or Tail) |
| Send completed traces to storage | Export (OTLP) |
---
## Core Concepts
### 1. Trace
A **trace** represents the entire journey of a request through the system. It has a unique `trace_id` that stays constant across all nodes.
```
Trace ID: abc123
├── Node A: received transaction
├── Node B: relayed transaction
├── Node C: included in consensus
└── Node D: applied to ledger
```
### 2. Span
A **span** represents a single unit of work within a trace. Each span has:
| Attribute | Description | Example |
| ---------------- | -------------------------------- | -------------------------- |
| `trace_id`       | Identifies the trace             | `abc123`                   |
| `span_id` | Unique identifier | `span456` |
| `parent_span_id` | Parent span (if any) | `p_span123` |
| `name` | Operation name | `rpc.submit` |
| `start_time` | When work began (local time) | `2024-01-15T10:30:00Z` |
| `end_time` | When work completed (local time) | `2024-01-15T10:30:00.050Z` |
| `attributes` | Key-value metadata | `tx.hash=ABC...` |
| `status`         | OK, ERROR, or UNSET              | `OK`                       |
### 3. Trace Context
**Trace context** is the data that propagates between services to link spans together. It contains:
- `trace_id` - The trace this span belongs to
- `span_id` - The current span (becomes parent for child spans)
- `trace_flags` - Sampling decisions
---
## How Spans Form a Trace
Spans have parent-child relationships forming a tree structure:
```mermaid
flowchart TB
subgraph trace["Trace: abc123"]
A["tx.submit<br/>span_id: 001<br/>50ms"] --> B["tx.validate<br/>span_id: 002<br/>5ms"]
A --> C["tx.relay<br/>span_id: 003<br/>10ms"]
A --> D["tx.apply<br/>span_id: 004<br/>30ms"]
D --> E["ledger.update<br/>span_id: 005<br/>20ms"]
end
style A fill:#0d47a1,stroke:#082f6a,color:#ffffff
style B fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style C fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style D fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style E fill:#bf360c,stroke:#8c2809,color:#ffffff
```
**Reading the diagram:**
- **tx.submit (blue, root)**: The top-level span representing the entire transaction submission; all other spans are its descendants.
- **tx.validate, tx.relay, tx.apply (green)**: Direct children of tx.submit, representing the three main stages -- validation, relay to peers, and application to the ledger.
- **ledger.update (red)**: A grandchild span nested under tx.apply, representing the actual ledger state mutation triggered by applying the transaction.
- **Arrows (parent to child)**: Each arrow indicates a parent-child span relationship where the parent's completion depends on the child finishing.
The same trace visualized as a **timeline (Gantt chart)**:
```
Time → 0ms 10ms 20ms 30ms 40ms 50ms
├───────────────────────────────────────────┤
tx.submit│▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓│
├─────┤
tx.valid │▓▓▓▓▓│
│ ├──────────┤
tx.relay │ │▓▓▓▓▓▓▓▓▓▓│
│ ├────────────────────────────┤
tx.apply │ │▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓│
│ ├──────────────────┤
ledger │ │▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓│
```
---
## Span Relationships
Spans don't always form simple parent-child trees. Distributed tracing defines several relationship types to capture different causal patterns:
### 1. Parent-Child (ChildOf)
The default relationship. The parent span **depends on** or **contains** the child span. The child runs within the scope of the parent.
```
tx.submit (parent)
├── tx.validate (child) ← parent waits for this
├── tx.relay (child) ← parent waits for this
└── tx.apply (child) ← parent waits for this
```
**When to use:** Synchronous calls, nested operations, any case where the parent's completion depends on the child.
### 2. Follows-From
A causal relationship where the first span **triggers** the second, but does **not wait** for it. The originator fires and moves on.
```
Time →
tx.receive [=======]
↓ triggers (follows-from)
tx.relay [===========] ← runs independently
```
**When to use:** Asynchronous jobs, queued work, fire-and-forget patterns. For example, a node receives a transaction and queues it for relay — the relay span _follows from_ the receive span but the receiver doesn't wait for relaying to complete.
> **OpenTracing** defined `FollowsFrom` as a first-class reference type alongside `ChildOf`.
> **OpenTelemetry** represents this using **Span Links** with descriptive attributes instead (see below).
### 3. Span Links (Cross-Trace and Non-Hierarchical)
Links connect spans that are **causally related but not in a parent-child hierarchy**. Unlike parent-child, links can cross trace boundaries.
```
Trace A Trace B
────── ──────
batch.schedule batch.execute
├─ item.enqueue (span X) ┌──► process.item
├─ item.enqueue (span Y) ───┤ (links to X, Y, Z)
├─ item.enqueue (span Z) └──►
```
**Use cases:**
| Pattern | Description |
| -------------------- | --------------------------------------------------------------------------- |
| **Batch processing** | A batch span links back to all individual spans that contributed to it |
| **Fan-in** | An aggregation span links to the multiple producer spans it merges |
| **Fan-out** | Multiple downstream spans link back to the single span that triggered them |
| **Async handoff** | A deferred job links back to the request that queued it (follows-from) |
| **Cross-trace** | Correlating spans across independent traces (e.g., retries, related events) |
**Link structure:** Each link carries the target span's context plus optional attributes:
```
Link {
trace_id: <target trace>
span_id: <target span>
attributes: { "link.description": "triggered by batch scheduler" }
}
```
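The same construction can be sketched in illustrative Python (plain dicts standing in for SDK objects; the trace and span ids are hypothetical):

```python
def make_link(target_trace_id: str, target_span_id: str, attributes=None) -> dict:
    """A span link: the target span's context plus optional attributes."""
    return {
        "trace_id": target_trace_id,
        "span_id": target_span_id,
        "attributes": attributes or {},
    }

# A batch.execute span in Trace B linking back to the three
# item.enqueue spans (X, Y, Z) from Trace A.
enqueue_spans = [("traceA", sid) for sid in ("spanX", "spanY", "spanZ")]
batch_links = [
    make_link(tid, sid, {"link.description": "triggered by batch scheduler"})
    for tid, sid in enqueue_spans
]
assert len(batch_links) == 3
assert batch_links[0]["attributes"]["link.description"] == "triggered by batch scheduler"
```

At span creation, the exporter serializes each link alongside the span so the backend can render the cross-trace connection.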
### Relationship Summary
```mermaid
flowchart LR
subgraph parent_child["Parent-Child"]
direction TB
P["Parent"] --> C["Child"]
end
subgraph follows_from["Follows-From"]
direction TB
A["Span A"] -.->|triggers| B["Span B"]
end
subgraph links["Span Links"]
direction TB
X["Span X\n(Trace 1)"] -.-|link| Y["Span Y\n(Trace 2)"]
end
parent_child ~~~ follows_from ~~~ links
style P fill:#0d47a1,stroke:#082f6a,color:#ffffff
style C fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style A fill:#0d47a1,stroke:#082f6a,color:#ffffff
style B fill:#bf360c,stroke:#8c2809,color:#ffffff
style X fill:#4a148c,stroke:#38006b,color:#ffffff
style Y fill:#4a148c,stroke:#38006b,color:#ffffff
```
| Relationship | Same Trace? | Dependency? | OTel Mechanism |
| ---------------- | ----------- | -------------------------- | ----------------- |
| **Parent-Child** | Yes | Parent depends on child | `parent_span_id` |
| **Follows-From** | Usually | Causal but no dependency | Link + attributes |
| **Span Link** | Either | Correlation, no dependency | Link + attributes |
---
## Trace ID Generation
A `trace_id` is a 128-bit (16-byte) identifier that groups all spans belonging to one logical operation. How it's generated determines how easily you can find and correlate traces later.
### General Approaches
#### 1. Random (W3C Default)
Generate a random 128-bit ID when a trace starts. Standard approach for most services.
```
trace_id = random_128_bits()
```
| Pros | Cons |
| --------------------------- | --------------------------------------------- |
| Simple, standard | No natural correlation to domain events |
| Guaranteed unique per trace | If propagation is lost, trace is broken |
| Works with all OTel tooling | "Find trace for TX abc" requires index lookup |
#### 2. Deterministic (Derived from Domain Data)
Compute the trace_id from a hash of a natural identifier. Every node independently derives the **same** trace_id for the same event.
```
trace_id = SHA-256(domain_identifier)[0:16] // truncate to 128 bits
```
| Pros | Cons |
| --------------------------------------------------- | ---------------------------------------------------------- |
| Propagation-resilient — same ID computed everywhere | Same event processed twice (retry) shares trace_id |
| Natural search — domain ID maps directly to trace | Non-standard (tooling assumes random) |
| No coordination needed between nodes | 256→128 bit truncation (collision risk negligible at ~2⁶⁴) |
#### 3. Hybrid (Deterministic Prefix + Random Suffix)
First 8 bytes derived from domain data, last 8 bytes random.
```
trace_id = SHA-256(domain_identifier)[0:8] || random_64_bits()
```
| Pros | Cons |
| ------------------------------------------- | ---------------------------------------- |
| Prefix search: "find all traces for TX abc" | Must propagate to maintain full trace_id |
| Unique per processing instance | More complex generation logic |
| Retries get distinct trace_ids | Partial correlation only (prefix match) |
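The three approaches can be sketched in illustrative Python (stdlib `hashlib` and `secrets`; a conceptual sketch, not the rippled C++ implementation):

```python
import hashlib
import secrets

def random_trace_id() -> str:
    """W3C default: 16 random bytes, hex-encoded (32 chars)."""
    return secrets.token_hex(16)

def deterministic_trace_id(domain_id: str) -> str:
    """SHA-256 of a natural identifier, truncated to 128 bits."""
    return hashlib.sha256(domain_id.encode()).hexdigest()[:32]

def hybrid_trace_id(domain_id: str) -> str:
    """8-byte deterministic prefix plus 8 random bytes."""
    prefix = hashlib.sha256(domain_id.encode()).hexdigest()[:16]
    return prefix + secrets.token_hex(8)

tx_hash = "ABC123"  # stand-in for a 256-bit transaction hash
# Every node derives the same deterministic id independently...
assert deterministic_trace_id(tx_hash) == deterministic_trace_id(tx_hash)
# ...while hybrid ids from two processing attempts share only the
# searchable 16-hex-char prefix.
first, second = hybrid_trace_id(tx_hash), hybrid_trace_id(tx_hash)
assert first[:16] == second[:16]
```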
### XRPL Workflow Analysis
XRPL has a unique advantage: its core workflows produce **globally unique 256-bit hashes** that are known on every node. This makes deterministic trace_id generation practical in ways most systems can't achieve.
#### Natural Identifiers by Workflow
| Workflow | Natural Identifier | Size | Known at Start? | Same on All Nodes? |
| ------------------- | --------------------------------- | ---------- | ----------------------------- | -------------------------------- |
| **Transaction** | Transaction hash (`tid_`) | 256-bit | Yes — computed before signing | Yes — hash of canonical tx data |
| **Consensus round** | Previous ledger hash + ledger seq | 256+32 bit | Yes — known when round opens | Yes — all validators agree |
| **Validation** | Ledger hash being validated | 256-bit | Yes — from consensus result | Yes — same closed ledger |
| **Ledger catch-up** | Target ledger hash | 256-bit | Yes — we know what to fetch | Yes — identifies ledger globally |
#### Where These Identifiers Live in Code
```
Transaction: STTx::getTransactionID() → uint256 tid_
TMTransaction::rawTransaction → recompute hash from bytes
Consensus: ConsensusProposal::prevLedger_ → uint256 (previous ledger hash)
ConsensusProposal::position_ → uint256 (TxSet hash)
LedgerHeader::seq → uint32_t (ledger sequence)
Validation: STValidation::getLedgerHash() → uint256
STValidation::getNodeID() → NodeID (160-bit)
Ledger fetch: InboundLedger constructor → uint256 hash, uint32_t seq
TMGetLedger::ledgerHash → bytes (uint256)
```
### Recommended Strategy: Workflow-Scoped Deterministic
Each workflow type derives its trace_id from its natural domain identifier:
```
Transaction trace: trace_id = SHA-256("tx" || tx_hash)[0:16]
Consensus trace: trace_id = SHA-256("cons" || prev_ledger_hash || ledger_seq)[0:16]
Ledger catch-up: trace_id = SHA-256("fetch" || target_ledger_hash)[0:16]
```
The string prefix (`"tx"`, `"cons"`, `"fetch"`) prevents collisions between workflows that might share underlying hashes.
**Why this works for XRPL:**
1. **Propagation-resilient** — Even if a P2P message drops trace context, every node independently computes the same trace_id from the same tx_hash or ledger_hash. Spans still correlate.
2. **Zero-cost search** — "Show me the trace for transaction ABC" becomes a direct lookup: compute `SHA-256("tx" || ABC)[0:16]` and query. No secondary index needed.
3. **Cross-workflow linking via Span Links** — A consensus trace links to individual transaction traces. A validation span links to the consensus trace. This connects the full picture without forcing everything into one giant trace.
### Cross-Workflow Correlation
Each workflow gets its own trace. Span Links tie them together:
```mermaid
flowchart TB
subgraph tx_trace["Transaction Trace"]
direction LR
Tn["trace_id = f(tx_hash)"]:::note --> T1["tx.receive"] --> T2["tx.validate"] --> T3["tx.relay"]
end
subgraph cons_trace["Consensus Trace"]
direction LR
Cn["trace_id = f(prev_ledger, seq)"]:::note --> C1["cons.open"] --> C2["cons.propose"] --> C3["cons.accept"]
end
subgraph val_trace["Validation"]
direction LR
Vn["spans within consensus trace"]:::note --> V1["val.create"] --> V2["val.broadcast"]
end
subgraph fetch_trace["Catch-Up Trace"]
direction LR
Fn["trace_id = f(ledger_hash)"]:::note --> F1["fetch.request"] --> F2["fetch.receive"] --> F3["fetch.apply"]
end
C1 -.-|"span link\n(tx traces)"| T3
C3 --> V1
F1 -.-|"span link\n(target ledger)"| C3
classDef note fill:none,stroke:#888,stroke-dasharray:5 5,color:#333,font-style:italic
style T1 fill:#0d47a1,stroke:#082f6a,color:#ffffff
style T2 fill:#0d47a1,stroke:#082f6a,color:#ffffff
style T3 fill:#0d47a1,stroke:#082f6a,color:#ffffff
style C1 fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style C2 fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style C3 fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style V1 fill:#bf360c,stroke:#8c2809,color:#ffffff
style V2 fill:#bf360c,stroke:#8c2809,color:#ffffff
style F1 fill:#4a148c,stroke:#38006b,color:#ffffff
style F2 fill:#4a148c,stroke:#38006b,color:#ffffff
style F3 fill:#4a148c,stroke:#38006b,color:#ffffff
```
**Reading the diagram:**
- **Transaction Trace (blue)**: An independent trace whose `trace_id` is deterministically derived from the transaction hash. Contains receive, validate, and relay spans.
- **Consensus Trace (green)**: An independent trace whose `trace_id` is derived from the previous ledger hash and sequence number. Covers the open, propose, and accept phases.
- **Validation (red)**: Validation spans live within the consensus trace (not a separate trace). They are created after the accept phase completes.
- **Catch-Up Trace (purple)**: An independent trace for ledger acquisition, derived from the target ledger hash. Used when a node is behind and fetching missing ledgers.
- **Dotted arrows (span links)**: Cross-trace correlations. Consensus links to transaction traces it included; catch-up links to the consensus trace that produced the target ledger.
- **Solid arrow (C3 to V1)**: A parent-child relationship -- validation spans are direct children of the consensus accept span within the same trace.
**How a query flows:**
```
"Why was TX abc slow?"
1. Compute trace_id = SHA-256("tx" || abc)[0:16]
2. Find transaction trace → see it was included in consensus round N
3. Follow span link → consensus trace for round N
4. See which phase was slow (propose? accept?)
5. If a node was catching up, follow link → catch-up trace
```
### Trade-offs to Consider
| Concern | Mitigation |
| ----------------------------- | ----------------------------------------------------------------------------------------------------------------------------- |
| **Retries get same trace_id** | Add `attempt` attribute to root span; spans have unique span_ids and timestamps |
| **256→128 bit truncation** | Birthday-bound collision at ~2⁶⁴ operations — negligible for XRPL's throughput |
| **Non-standard generation** | OTel spec allows any 16-byte non-zero value; tooling works on the hex string |
| **Hash computation cost** | SHA-256 is ~0.3μs per call; XRPL already computes these hashes for other purposes |
| **Late-binding identifiers** | Ledger hash isn't known until after consensus — validation spans use ledger_seq as fallback, then link to the consensus trace |
---
## Distributed Traces Across Nodes
In distributed systems like rippled, traces span **multiple independent nodes**. The trace context must be propagated in network messages:
```mermaid
sequenceDiagram
participant Client
participant NodeA as Node A
participant NodeB as Node B
participant NodeC as Node C
Client->>NodeA: Submit TX<br/>(no trace context)
Note over NodeA: Creates new trace<br/>trace_id: abc123<br/>span: tx.receive
NodeA->>NodeB: Relay TX<br/>(trace_id: abc123, parent: 001)
Note over NodeB: Creates child span<br/>span: tx.relay<br/>parent_span_id: 001
NodeA->>NodeC: Relay TX<br/>(trace_id: abc123, parent: 001)
Note over NodeC: Creates child span<br/>span: tx.relay<br/>parent_span_id: 001
Note over NodeA,NodeC: All spans share trace_id: abc123<br/>enabling correlation across nodes
```
**Reading the diagram:**
- **Client**: The external entity that submits a transaction. It does not carry trace context -- the trace originates at the first node.
- **Node A**: The entry point that creates a new trace (trace_id: abc123) and the root span `tx.receive`. It relays the transaction to peers with trace context attached.
- **Node B and Node C**: Peer nodes that receive the relayed transaction along with the propagated trace context. Each creates a child span under Node A's span, preserving the same `trace_id`.
- **Arrows with trace context**: The relay messages carry `trace_id` and `parent_span_id`, allowing each downstream node to link its spans back to the originating span on Node A.
---
## Context Propagation
For traces to work across nodes, **trace context must be propagated** in messages.
### What's in the Context (~26 bytes)
| Field | Size | Description |
| ------------- | -------- | ------------------------------------------------------- |
| `trace_id` | 16 bytes | Identifies the entire trace (constant across all nodes) |
| `span_id` | 8 bytes | The sender's current span (becomes parent on receiver) |
| `trace_flags` | 1 byte | Sampling decision (bit 0 = sampled; bits 1-7 reserved) |
| `trace_state` | variable | Optional vendor-specific data (typically omitted) |
### How span_id Changes at Each Hop
Only **one** `span_id` travels in the context - the sender's current span. Each node:
1. Extracts the received `span_id` and uses it as the `parent_span_id`
2. Creates a **new** `span_id` for its own span
3. Sends its own `span_id` as the parent when forwarding
```
Node A Node B Node C
────── ────── ──────
Span AAA Span BBB Span CCC
│ │ │
▼ ▼ ▼
Context out: Context out: Context out:
├─ trace_id: abc123 ├─ trace_id: abc123 ├─ trace_id: abc123
├─ span_id: AAA ──────────► ├─ span_id: BBB ──────────► ├─ span_id: CCC ──────►
└─ flags: 01 └─ flags: 01 └─ flags: 01
│ │
parent = AAA parent = BBB
```
The `trace_id` stays constant, but `span_id` **changes at every hop** to maintain the parent-child chain.
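The hop-by-hop mechanics can be sketched in illustrative Python (names are hypothetical; in rippled the equivalent logic lives in the C++ instrumentation layer):

```python
import secrets

def relay_hop(incoming: dict) -> tuple:
    """One node's work: adopt the trace, mint a new span, forward new context."""
    span = {
        "trace_id": incoming["trace_id"],        # inherited unchanged
        "span_id": secrets.token_hex(8),         # fresh at every hop
        "parent_span_id": incoming["span_id"],   # the sender's span
    }
    outgoing = {
        "trace_id": span["trace_id"],
        "span_id": span["span_id"],
        "flags": incoming["flags"],
    }
    return span, outgoing

# Node A originates the trace; its context flows to B, then B's to C.
ctx = {"trace_id": secrets.token_hex(16),
       "span_id": secrets.token_hex(8), "flags": 0x01}
spans = []
for _node in ("B", "C"):
    span, ctx = relay_hop(ctx)
    spans.append(span)

assert spans[0]["trace_id"] == spans[1]["trace_id"]       # trace_id constant
assert spans[1]["parent_span_id"] == spans[0]["span_id"]  # parent chain intact
```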
### Propagation Formats
There are two patterns:
#### HTTP/RPC Headers (W3C Trace Context)
```
traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01
│ │ │ │
│ │ │ └── Flags (sampled)
│ │ └── Parent span ID (16 hex)
│ └── Trace ID (32 hex)
└── Version
```
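Parsing the header is mechanical; an illustrative Python sketch:

```python
def parse_traceparent(header: str) -> dict:
    """Split a W3C traceparent header into its four fields."""
    version, trace_id, parent_id, flags = header.split("-")
    if len(trace_id) != 32 or len(parent_id) != 16:
        raise ValueError("malformed traceparent")
    return {
        "version": version,
        "trace_id": trace_id,
        "parent_span_id": parent_id,
        "sampled": bool(int(flags, 16) & 0x01),  # bit 0 of trace-flags
    }

ctx = parse_traceparent("00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01")
assert ctx["trace_id"] == "4bf92f3577b34da6a3ce929d0e0e4736"
assert ctx["sampled"] is True
```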
#### Protocol Buffers (rippled P2P messages)
```protobuf
message TMTransaction {
bytes rawTransaction = 1;
// ... existing fields ...
// Trace context extension
bytes trace_parent = 100; // W3C traceparent
bytes trace_state = 101; // W3C tracestate
}
```
---
## Sampling
Not every trace needs to be recorded. **Sampling** reduces overhead:
### Head Sampling (at trace start)
```
Request arrives → Random 10% chance → Record or skip entire trace
```
- ✅ Low overhead
- ❌ May miss interesting traces
### Tail Sampling (after trace completes)
```
Trace completes → Collector evaluates:
- Error? → KEEP
- Slow? → KEEP
- Normal? → Sample 10%
```
- ✅ Never loses important traces
- ❌ Higher memory usage at collector
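A collector's tail-sampling policy can be sketched as follows (illustrative Python; the thresholds and trace shape are assumptions, not the collector's actual configuration schema):

```python
import random

def tail_sampling_decision(trace: dict, keep_fraction: float = 0.10,
                           slow_ms: float = 500.0) -> bool:
    """Evaluate a completed trace the way a tail-sampling collector might.

    Assumed trace shape: {"has_error": bool, "duration_ms": float}.
    """
    if trace["has_error"]:
        return True                      # errors are always kept
    if trace["duration_ms"] > slow_ms:
        return True                      # slow traces are always kept
    return random.random() < keep_fraction  # sample the remainder

assert tail_sampling_decision({"has_error": True, "duration_ms": 10})
assert tail_sampling_decision({"has_error": False, "duration_ms": 900})
```

Because the decision runs after the trace completes, the collector must buffer all spans of in-flight traces in memory until the decision point, which is the source of the higher memory cost noted above.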
---
## Key Benefits for rippled
| Challenge | How Tracing Helps |
| ---------------------------------- | ---------------------------------------- |
| "Where is my transaction?" | Follow trace across all nodes it touched |
| "Why was consensus slow?" | See timing breakdown of each phase |
| "Which node is the bottleneck?" | Compare span durations across nodes |
| "What happened during the outage?" | Correlate errors across the network |
---
## Glossary
| Term | Definition |
| -------------------- | ------------------------------------------------------------------- |
| **Trace** | Complete journey of a request, identified by `trace_id` |
| **Span** | Single operation within a trace |
| **Parent-Child** | Span relationship where the parent depends on the child |
| **Follows-From** | Causal relationship where originator doesn't wait for the result |
| **Span Link** | Non-hierarchical connection between spans, possibly across traces |
| **Deterministic ID** | Trace ID derived from domain data (e.g., tx_hash) instead of random |
| **Context** | Data propagated between services (`trace_id`, `span_id`, flags) |
| **Instrumentation** | Code that creates spans and propagates context |
| **Collector** | Service that receives, processes, and exports traces |
| **Backend** | Storage/visualization system (Tempo) |
| **Head Sampling** | Sampling decision at trace start |
| **Tail Sampling** | Sampling decision after trace completes |
---
_Next: [Architecture Analysis](./01-architecture-analysis.md)_ | _Back to: [Overview](./OpenTelemetryPlan.md)_


@@ -0,0 +1,467 @@
# Architecture Analysis
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Related**: [Design Decisions](./02-design-decisions.md) | [Implementation Strategy](./03-implementation-strategy.md)
---
## 1.1 Current rippled Architecture Overview
> **WS** = WebSocket | **UNL** = Unique Node List | **TxQ** = Transaction Queue | **StatsD** = Statistics Daemon
The rippled node software consists of several interconnected components that need instrumentation for distributed tracing:
```mermaid
flowchart TB
subgraph rippled["rippled Node"]
subgraph services["Core Services"]
RPC["RPC Server<br/>(HTTP/WS/gRPC)"]
Overlay["Overlay<br/>(P2P Network)"]
Consensus["Consensus<br/>(RCLConsensus)"]
ValidatorList["ValidatorList<br/>(UNL Mgmt)"]
end
JobQueue["JobQueue<br/>(Thread Pool)"]
subgraph processing["Processing Layer"]
NetworkOPs["NetworkOPs<br/>(Tx Processing)"]
LedgerMaster["LedgerMaster<br/>(Ledger Mgmt)"]
NodeStore["NodeStore<br/>(Database)"]
InboundLedgers["InboundLedgers<br/>(Ledger Sync)"]
end
subgraph appservices["Application Services"]
PathFind["PathFinding<br/>(Payment Paths)"]
TxQ["TxQ<br/>(Fee Escalation)"]
LoadMgr["LoadManager<br/>(Fee/Load)"]
end
subgraph observability["Existing Observability"]
PerfLog["PerfLog<br/>(JSON)"]
Insight["Insight<br/>(StatsD)"]
Logging["Logging<br/>(Journal)"]
end
services --> JobQueue
JobQueue --> processing
JobQueue --> appservices
end
style rippled fill:#424242,stroke:#212121,color:#ffffff
style services fill:#1565c0,stroke:#0d47a1,color:#ffffff
style processing fill:#2e7d32,stroke:#1b5e20,color:#ffffff
style appservices fill:#6a1b9a,stroke:#4a148c,color:#ffffff
style observability fill:#e65100,stroke:#bf360c,color:#ffffff
```
**Reading the diagram:**
- **Core Services (blue)**: The entry points into rippled -- RPC Server handles client requests, Overlay manages peer-to-peer networking, Consensus drives agreement, and ValidatorList manages trusted validators.
- **JobQueue (center)**: The asynchronous thread pool that decouples Core Services from the Processing and Application layers. All work flows through it.
- **Processing Layer (green)**: Core business logic -- NetworkOPs processes transactions, LedgerMaster manages ledger state, NodeStore handles persistence, and InboundLedgers synchronizes missing data.
- **Application Services (purple)**: Higher-level features -- PathFinding computes payment routes, TxQ manages fee-based queuing, and LoadManager tracks server load.
- **Existing Observability (orange)**: The current monitoring stack (PerfLog, Insight, Journal logging) that OpenTelemetry will complement, not replace.
- **Arrows (Services to JobQueue to layers)**: Work originates at Core Services, is enqueued onto the JobQueue, and dispatched to Processing or Application layers for execution.
---
## 1.1.1 Actors and Actions
### Actors
| Who (Plain English) | Technical Term |
| ----------------------------------------- | -------------------------- |
| Network node running XRPL software | rippled node |
| External client submitting requests | RPC Client |
| Network neighbor sharing data | Peer (PeerImp) |
| Request handler for client queries | RPC Server (ServerHandler) |
| Command executor for specific RPC methods | RPCHandler |
| Agreement process between nodes | Consensus (RCLConsensus) |
| Transaction processing coordinator | NetworkOPs |
| Background task scheduler | JobQueue |
| Ledger state manager | LedgerMaster |
| Payment route calculator | PathFinding (Pathfinder) |
| Transaction waiting room | TxQ (Transaction Queue) |
| Fee adjustment system | LoadManager |
| Trusted validator list manager | ValidatorList |
| Protocol upgrade tracker | AmendmentTable |
| Ledger state hash tree | SHAMap |
| Persistent key-value storage | NodeStore |
### Actions
| What Happens (Plain English) | Technical Term |
| ---------------------------------------------- | ---------------------- |
| Client sends a request to a node | `rpc.request` |
| Node executes a specific RPC command | `rpc.command.*` |
| Node receives a transaction from a peer | `tx.receive` |
| Node checks if a transaction is valid | `tx.validate` |
| Node forwards a transaction to neighbors | `tx.relay` |
| Nodes agree on which transactions to include | `consensus.round` |
| Consensus progresses through phases | `consensus.phase.*` |
| Node builds a new confirmed ledger | `ledger.build` |
| Node fetches missing ledger data from peers | `ledger.acquire` |
| Node computes payment routes | `pathfind.compute` |
| Node queues a transaction for later processing | `txq.enqueue` |
| Node increases fees due to high load | `fee.escalate` |
| Node fetches the latest trusted validator list | `validator.list.fetch` |
| Node votes on a protocol amendment | `amendment.vote` |
| Node synchronizes state tree data | `shamap.sync` |
---
## 1.2 Key Components for Instrumentation
> **TxQ** = Transaction Queue | **UNL** = Unique Node List
| Component | Location | Purpose | Trace Value |
| ------------------ | ------------------------------------------ | ------------------------ | -------------------------------- |
| **Overlay** | `src/xrpld/overlay/` | P2P communication | Message propagation timing |
| **PeerImp** | `src/xrpld/overlay/detail/PeerImp.cpp` | Individual peer handling | Per-peer latency |
| **RCLConsensus** | `src/xrpld/app/consensus/RCLConsensus.cpp` | Consensus algorithm | Round timing, phase analysis |
| **NetworkOPs** | `src/xrpld/app/misc/NetworkOPs.cpp` | Transaction processing | Tx lifecycle tracking |
| **ServerHandler** | `src/xrpld/rpc/detail/ServerHandler.cpp` | RPC entry point | Request latency |
| **RPCHandler** | `src/xrpld/rpc/detail/RPCHandler.cpp` | Command execution | Per-command timing |
| **JobQueue**       | `src/xrpld/core/JobQueue.h`                | Async task execution     | Queue wait times                 |
| **PathFinding** | `src/xrpld/app/paths/` | Payment path computation | Path latency, cache hits |
| **TxQ** | `src/xrpld/app/misc/TxQ.cpp` | Transaction queue/fees | Queue depth, eviction rates |
| **LoadManager** | `src/xrpld/app/main/LoadManager.cpp` | Fee escalation/load | Fee levels, load factors |
| **InboundLedgers** | `src/xrpld/app/ledger/InboundLedgers.cpp` | Ledger acquisition | Sync time, peer reliability |
| **ValidatorList** | `src/xrpld/app/misc/ValidatorList.cpp` | UNL management | List freshness, fetch failures |
| **AmendmentTable** | `src/xrpld/app/misc/AmendmentTable.cpp` | Protocol amendments | Voting status, activation events |
| **SHAMap** | `src/xrpld/shamap/` | State hash tree | Sync speed, missing nodes |
---
## 1.3 Transaction Flow Diagram
Transaction flow spans multiple nodes in the network. Each node creates linked spans to form a distributed trace:
```mermaid
sequenceDiagram
participant Client
participant PeerA as Peer A (Receive)
participant PeerB as Peer B (Relay)
participant PeerC as Peer C (Validate)
Client->>PeerA: 1. Submit TX
rect rgb(230, 245, 255)
Note over PeerA: tx.receive SPAN START
PeerA->>PeerA: HashRouter Deduplication
PeerA->>PeerA: tx.validate (child span)
end
PeerA->>PeerB: 2. Relay TX (with trace ctx)
rect rgb(230, 245, 255)
Note over PeerB: tx.receive (linked span)
end
PeerB->>PeerC: 3. Relay TX
rect rgb(230, 245, 255)
Note over PeerC: tx.receive (linked span)
PeerC->>PeerC: tx.process
end
Note over Client,PeerC: DISTRIBUTED TRACE (same trace_id: abc123)
```
**Reading the diagram:**
- **Client**: The external entity that submits a transaction to Peer A. It has no trace context -- the trace starts at the first node.
- **Peer A (Receive)**: The entry node that creates the root span `tx.receive`, runs HashRouter deduplication to avoid processing duplicates, and creates a child `tx.validate` span.
- **Peer A to Peer B arrow**: The relay message carries trace context (trace_id + parent span_id), enabling Peer B to create a linked span under the same trace.
- **Peer B (Relay)**: Receives the transaction and trace context, creates a `tx.receive` span linked to Peer A's trace, then relays onward.
- **Peer C (Validate)**: Final hop in this example. Creates a linked `tx.receive` span and runs `tx.process` to fully process the transaction.
- **Blue rectangles**: Highlight the span boundaries on each node, showing where instrumentation creates and closes spans.
### Trace Structure
```
trace_id: abc123
├── span: tx.receive (Peer A)
│ ├── span: tx.validate
│ └── span: tx.relay
├── span: tx.receive (Peer B) [parent: Peer A]
│ └── span: tx.relay
└── span: tx.receive (Peer C) [parent: Peer B]
└── span: tx.process
```
---
## 1.4 Consensus Round Flow
Consensus rounds are multi-phase operations that benefit significantly from tracing:
```mermaid
flowchart TB
subgraph round["consensus.round (root span)"]
attrs["Attributes:<br/>xrpl.consensus.ledger.seq = 12345678<br/>xrpl.consensus.mode = proposing<br/>xrpl.consensus.proposers = 35"]
subgraph open["consensus.phase.open"]
open_desc["Duration: ~3s<br/>Waiting for transactions"]
end
subgraph establish["consensus.phase.establish"]
est_attrs["proposals_received = 28<br/>disputes_resolved = 3"]
est_children["├── consensus.proposal.receive (×28)<br/>├── consensus.proposal.send (×1)<br/>└── consensus.dispute.resolve (×3)"]
end
subgraph accept["consensus.phase.accept"]
acc_attrs["transactions_applied = 150<br/>ledger.hash = DEF456..."]
acc_children["├── ledger.build<br/>└── ledger.validate"]
end
attrs --> open
open --> establish
establish --> accept
end
style round fill:#f57f17,stroke:#e65100,color:#ffffff
style open fill:#1565c0,stroke:#0d47a1,color:#ffffff
style establish fill:#2e7d32,stroke:#1b5e20,color:#ffffff
style accept fill:#c2185b,stroke:#880e4f,color:#ffffff
```
**Reading the diagram:**
- **consensus.round (orange, root span)**: The top-level span encompassing the entire consensus round, with attributes like ledger sequence, mode, and proposer count.
- **consensus.phase.open (blue)**: The first phase where the node waits (~3s) to collect incoming transactions before proposing.
- **consensus.phase.establish (green)**: The negotiation phase where validators exchange proposals, resolve disputes, and converge on a transaction set. Child spans track each proposal received/sent and each dispute resolved.
- **consensus.phase.accept (pink)**: The final phase where the agreed transaction set is applied, a new ledger is built, and the ledger is validated. Child spans cover `ledger.build` and `ledger.validate`.
- **Arrows (open to establish to accept)**: The sequential flow through the three consensus phases. Each phase must complete before the next begins.
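The strictly sequential phase progression can be sketched as a tiny state machine. This is an illustration only, not rippled code; it also maps each phase onto the `consensus.phase.*` span names used throughout this document:

```cpp
#include <optional>
#include <string>

// The three phases of a consensus round, in order.
enum class Phase { Open, Establish, Accept };

// Advance to the next phase; Accept is terminal (the round ends).
std::optional<Phase> nextPhase(Phase p)
{
    switch (p)
    {
        case Phase::Open:      return Phase::Establish;
        case Phase::Establish: return Phase::Accept;
        case Phase::Accept:    return std::nullopt;
    }
    return std::nullopt;
}

// Span name for a phase, matching the consensus.phase.* naming convention.
std::string phaseSpanName(Phase p)
{
    switch (p)
    {
        case Phase::Open:      return "consensus.phase.open";
        case Phase::Establish: return "consensus.phase.establish";
        case Phase::Accept:    return "consensus.phase.accept";
    }
    return "consensus.phase.unknown";
}
```

An instrumented round would open a child span named by `phaseSpanName` when entering each phase and end it when `nextPhase` advances.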
---
## 1.5 RPC Request Flow
> **WS** = WebSocket
RPC requests support W3C Trace Context headers for distributed tracing across services:
```mermaid
flowchart TB
subgraph request["rpc.request (root span)"]
http["HTTP Request — POST /<br/>traceparent:<br/>00-abc123...-def456...-01"]
attrs["Attributes:<br/>http.method = POST<br/>net.peer.ip = 192.168.1.100<br/>xrpl.rpc.command = submit"]
subgraph enqueue["jobqueue.enqueue"]
job_attr["xrpl.job.type = jtCLIENT_RPC"]
end
subgraph command["rpc.command.submit"]
cmd_attrs["xrpl.rpc.version = 2<br/>xrpl.rpc.role = user"]
cmd_children["├── tx.deserialize<br/>├── tx.validate_local<br/>└── tx.submit_to_network"]
end
response["Response: 200 OK<br/>Duration: 45ms"]
http --> attrs
attrs --> enqueue
enqueue --> command
command --> response
end
style request fill:#2e7d32,stroke:#1b5e20,color:#ffffff
style enqueue fill:#1565c0,stroke:#0d47a1,color:#ffffff
style command fill:#e65100,stroke:#bf360c,color:#ffffff
```
**Reading the diagram:**
- **rpc.request (green, root span)**: The outermost span representing the full RPC request lifecycle, from HTTP receipt to response. Carries the W3C `traceparent` header for distributed tracing.
- **HTTP Request node**: Shows the incoming POST request with its `traceparent` header and extracted attributes (method, peer IP, command name).
- **jobqueue.enqueue (blue)**: The span covering the asynchronous handoff from the RPC thread to the JobQueue worker thread. The trace context is preserved across this async boundary.
- **rpc.command.submit (orange)**: The span for the actual command execution, with child spans for deserialization, local validation, and network submission.
- **Response node**: The final output with HTTP status and total duration, marking the end of the root span.
- **Arrows (top to bottom)**: The sequential processing pipeline -- receive request, extract attributes, enqueue job, execute command, return response.
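The `traceparent` header shown in the diagram follows the W3C Trace Context layout, `version-trace_id-span_id-flags`. A self-contained sketch of extracting it is below; in the real integration this hand parsing would be replaced by the OTel SDK's built-in W3C propagator, and the struct name here is hypothetical:

```cpp
#include <cctype>
#include <optional>
#include <string>

struct TraceParent
{
    std::string trace_id;  // 32 hex chars
    std::string span_id;   // 16 hex chars
    bool sampled;          // low bit of the flags byte
};

static bool
isHex(std::string const& s)
{
    for (char c : s)
        if (!std::isxdigit(static_cast<unsigned char>(c)))
            return false;
    return !s.empty();
}

// Parse "00-<trace_id>-<span_id>-<flags>" per the W3C Trace Context format.
// Layout: version(2) '-' trace_id(32) '-' span_id(16) '-' flags(2) == 55 chars.
std::optional<TraceParent>
parseTraceparent(std::string const& header)
{
    if (header.size() != 55 || header[2] != '-' || header[35] != '-' ||
        header[52] != '-')
        return std::nullopt;
    std::string version = header.substr(0, 2);
    TraceParent tp;
    tp.trace_id = header.substr(3, 32);
    tp.span_id = header.substr(36, 16);
    std::string flags = header.substr(53, 2);
    if (version != "00" || !isHex(tp.trace_id) || !isHex(tp.span_id) ||
        !isHex(flags))
        return std::nullopt;
    tp.sampled = (std::stoi(flags, nullptr, 16) & 0x01) != 0;
    return tp;
}
```

The extracted `trace_id`/`span_id` become the remote parent of the `rpc.request` root span, which is what links the client's trace to the node's.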
---
## 1.6 Key Trace Points
> **TxQ** = Transaction Queue
The following table identifies priority instrumentation points across the codebase:
| Category | Span Name | File | Method | Priority |
| --------------- | ---------------------- | ---------------------- | ----------------------- | -------- |
| **Transaction** | `tx.receive` | `PeerImp.cpp` | `handleTransaction()` | High |
| **Transaction** | `tx.validate` | `NetworkOPs.cpp` | `processTransaction()` | High |
| **Transaction** | `tx.process` | `NetworkOPs.cpp` | `doTransactionSync()` | High |
| **Transaction** | `tx.relay` | `OverlayImpl.cpp` | `relay()` | Medium |
| **Consensus** | `consensus.round` | `RCLConsensus.cpp` | `startRound()` | High |
| **Consensus** | `consensus.phase.*` | `Consensus.h` | `timerEntry()` | High |
| **Consensus** | `consensus.proposal.*` | `RCLConsensus.cpp` | `peerProposal()` | Medium |
| **RPC** | `rpc.request` | `ServerHandler.cpp` | `onRequest()` | High |
| **RPC** | `rpc.command.*` | `RPCHandler.cpp` | `doCommand()` | High |
| **Peer** | `peer.connect` | `OverlayImpl.cpp` | `onHandoff()` | Low |
| **Peer** | `peer.message.*` | `PeerImp.cpp` | `onMessage()` | Low |
| **Ledger** | `ledger.acquire` | `InboundLedgers.cpp` | `acquire()` | Medium |
| **Ledger** | `ledger.build` | `RCLConsensus.cpp` | `buildLCL()` | High |
| **PathFinding** | `pathfind.request` | `PathRequest.cpp` | `doUpdate()` | High |
| **PathFinding** | `pathfind.compute` | `Pathfinder.cpp` | `findPaths()` | High |
| **TxQ** | `txq.enqueue` | `TxQ.cpp` | `apply()` | High |
| **TxQ** | `txq.apply` | `TxQ.cpp` | `processClosedLedger()` | High |
| **Fee** | `fee.escalate` | `LoadManager.cpp` | `raiseLocalFee()` | Medium |
| **Ledger** | `ledger.replay` | `LedgerReplayer.h` | `replay()` | Medium |
| **Ledger** | `ledger.delta` | `LedgerDeltaAcquire.h` | `processData()` | Medium |
| **Validator** | `validator.list.fetch` | `ValidatorList.cpp` | `verify()` | Medium |
| **Validator** | `validator.manifest` | `Manifest.cpp` | `applyManifest()` | Low |
| **Amendment** | `amendment.vote` | `AmendmentTable.cpp` | `doVoting()` | Low |
| **SHAMap** | `shamap.sync` | `SHAMap.cpp` | `fetchRoot()` | Medium |
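Every trace point in the table follows the same shape: open a span on method entry, close it (recording its duration) on return. A self-contained RAII sketch of that pattern follows; the `ScopedSpan` name is hypothetical and the callback stands in for the real OTel span's `End()`:

```cpp
#include <chrono>
#include <functional>
#include <string>
#include <utility>

// Records a named duration when it goes out of scope -- the shape an
// OTel span wrapper would take at each trace point in the table above.
class ScopedSpan
{
    std::string name_;
    std::chrono::steady_clock::time_point start_;
    std::function<void(std::string const&, std::chrono::nanoseconds)> sink_;

public:
    ScopedSpan(
        std::string name,
        std::function<void(std::string const&, std::chrono::nanoseconds)> sink)
        : name_(std::move(name))
        , start_(std::chrono::steady_clock::now())
        , sink_(std::move(sink))
    {
    }

    ~ScopedSpan()
    {
        // Fires on every exit path, including exceptions.
        sink_(name_, std::chrono::steady_clock::now() - start_);
    }
};
```

For example, `PeerImp::handleTransaction()` would construct a `ScopedSpan("tx.receive", ...)` at the top of the method and let scope exit close the span.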
---
## 1.7 Instrumentation Priority
> **TxQ** = Transaction Queue
```mermaid
quadrantChart
title Instrumentation Priority Matrix
x-axis Low Complexity --> High Complexity
y-axis Low Value --> High Value
quadrant-1 Implement First
quadrant-2 Plan Carefully
quadrant-3 Quick Wins
quadrant-4 Consider Later
RPC Tracing: [0.2, 0.92]
Transaction Tracing: [0.55, 0.88]
Consensus Tracing: [0.78, 0.82]
PathFinding: [0.38, 0.75]
TxQ and Fees: [0.25, 0.65]
Ledger Sync: [0.62, 0.58]
Peer Message Tracing: [0.35, 0.25]
JobQueue Tracing: [0.2, 0.48]
Validator Mgmt: [0.48, 0.42]
Amendment Tracking: [0.15, 0.32]
SHAMap Operations: [0.72, 0.45]
```
---
## 1.8 Observable Outcomes
> **TxQ** = Transaction Queue | **UNL** = Unique Node List
After implementing OpenTelemetry, operators and developers will gain visibility into the following:
### 1.8.1 What You Will See: Traces
| Trace Type | Description | Example Query in Grafana/Tempo |
| -------------------------- | ------------------------------------------------------------------------------------------- | ------------------------------------------------------ |
| **Transaction Lifecycle** | Full journey from RPC submission through validation, relay, consensus, and ledger inclusion | `{service.name="rippled" && xrpl.tx.hash="ABC123..."}` |
| **Cross-Node Propagation** | Transaction path across multiple rippled nodes with timing | `{xrpl.tx.relay_count > 0}` |
| **Consensus Rounds** | Complete round with all phases (open, establish, accept) | `{span.name=~"consensus.round.*"}` |
| **RPC Request Processing** | Individual command execution with timing breakdown | `{xrpl.rpc.command="account_info"}` |
| **Ledger Acquisition** | Peer-to-peer ledger data requests and responses | `{span.name="ledger.acquire"}` |
| **PathFinding Latency** | Path computation time and cache effectiveness for payment RPCs | `{span.name="pathfind.compute"}` |
| **TxQ Behavior** | Queue depth, eviction patterns, fee escalation during congestion | `{span.name=~"txq.*"}` |
| **Ledger Sync** | Full acquisition timeline including delta and transaction fetches | `{span.name=~"ledger.acquire.*"}` |
| **Validator Health** | UNL fetch success, manifest updates, stale list detection | `{span.name=~"validator.*"}` |
### 1.8.2 What You Will See: Metrics (Derived from Traces)
| Metric | Description | Dashboard Panel |
| ----------------------------- | --------------------------------------- | --------------------------- |
| **RPC Latency (p50/p95/p99)** | Response time distribution per command | Heatmap by command |
| **Transaction Throughput** | Transactions processed per second | Time series graph |
| **Consensus Round Duration** | Time to complete consensus phases | Histogram |
| **Cross-Node Latency** | Time for transaction to reach N nodes | Line chart with percentiles |
| **Error Rate** | Failed transactions/RPC calls by type | Stacked bar chart |
| **PathFinding Latency** | Path computation time per currency pair | Heatmap by currency |
| **TxQ Depth** | Queued transactions over time | Time series with thresholds |
| **Fee Escalation Level** | Current fee multiplier | Gauge with alert thresholds |
| **Ledger Sync Duration** | Time to acquire missing ledgers | Histogram |
### 1.8.3 Concrete Dashboard Examples
**Transaction Trace View (Tempo):**
```
┌────────────────────────────────────────────────────────────────────────────────┐
│ Trace: abc123... (Transaction Submission) Duration: 847ms │
├────────────────────────────────────────────────────────────────────────────────┤
│ ├── rpc.request [ServerHandler] ████░░░░░░ 45ms │
│ │ └── rpc.command.submit [RPCHandler] ████░░░░░░ 42ms │
│ │ └── tx.receive [NetworkOPs] ███░░░░░░░ 35ms │
│ │ ├── tx.validate [TxQ] █░░░░░░░░░ 8ms │
│ │ └── tx.relay [Overlay] ██░░░░░░░░ 15ms │
│ │ ├── tx.receive [Node-B] █████░░░░░ 52ms │
│ │ │ └── tx.relay [Node-B] ██░░░░░░░░ 18ms │
│ │ └── tx.receive [Node-C] ██████░░░░ 65ms │
│ └── consensus.round [RCLConsensus] ████████░░ 720ms │
│ ├── consensus.phase.open ██░░░░░░░░ 180ms │
│ ├── consensus.phase.establish █████░░░░░ 480ms │
│ └── consensus.phase.accept █░░░░░░░░░ 60ms │
└────────────────────────────────────────────────────────────────────────────────┘
```
**RPC Performance Dashboard Panel:**
```
┌─────────────────────────────────────────────────────────────┐
│ RPC Command Latency (Last 1 Hour) │
├─────────────────────────────────────────────────────────────┤
│ Command │ p50 │ p95 │ p99 │ Errors │ Rate │
│──────────────────┼────────┼────────┼────────┼────────┼──────│
│ account_info │ 12ms │ 45ms │ 89ms │ 0.1% │ 150/s│
│ submit │ 35ms │ 120ms │ 250ms │ 2.3% │ 45/s│
│ ledger │ 8ms │ 25ms │ 55ms │ 0.0% │ 80/s│
│ tx │ 15ms │ 50ms │ 100ms │ 0.5% │ 60/s│
│ server_info │ 5ms │ 12ms │ 20ms │ 0.0% │ 200/s│
└─────────────────────────────────────────────────────────────┘
```
**Consensus Health Dashboard Panel:**
```mermaid
---
config:
xyChart:
width: 1200
height: 400
plotReservedSpacePercent: 50
chartOrientation: vertical
themeVariables:
xyChart:
plotColorPalette: "#3498db"
---
xychart-beta
title "Consensus Round Duration (Last 24 Hours)"
x-axis "Time of Day (Hours)" [0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24]
y-axis "Duration (seconds)" 1 --> 5
line [2.1, 2.4, 2.8, 3.2, 3.8, 4.3, 4.5, 5.0, 4.7, 4.0, 3.2, 2.6, 2.0]
```
### 1.8.4 Operator Actionable Insights
| Scenario | What You'll See | Action |
| ------------------------- | ---------------------------------------------------------------------------- | ------------------------------------------------ |
| **Slow RPC** | Span showing which phase is slow (parsing, execution, serialization) | Optimize specific code path |
| **Transaction Stuck** | Trace stops at validation; error attribute shows reason | Fix transaction parameters |
| **Consensus Delay** | Phase.establish taking too long; proposer attribute shows missing validators | Investigate network connectivity |
| **Memory Spike** | Large batch of spans correlating with memory increase | Tune batch_size or sampling |
| **Network Partition** | Traces missing cross-node links for specific peer | Check peer connectivity |
| **Path Computation Slow** | pathfind.compute span shows high latency; cache miss rate in attributes | Warm the RippleLineCache, check order book depth |
| **TxQ Full** | txq.enqueue spans show evictions; fee.escalate spans increasing | Monitor fee levels, alert operators |
| **Ledger Sync Stalled** | ledger.acquire spans timing out; peer reliability attributes show issues | Check peer connectivity, add trusted peers |
| **UNL Stale** | validator.list.fetch spans failing; last_update attribute aging | Verify validator site URLs, check DNS |
### 1.8.5 Developer Debugging Workflow
1. **Find Transaction**: Query by `xrpl.tx.hash` to get full trace
2. **Identify Bottleneck**: Look at span durations to find slowest component
3. **Check Attributes**: Review `xrpl.tx.validity`, `xrpl.rpc.status` for errors
4. **Correlate Logs**: Use `trace_id` to find related PerfLog entries
5. **Compare Nodes**: Filter by `service.instance.id` to compare behavior across nodes
---
_Next: [Design Decisions](./02-design-decisions.md)_ | _Back to: [Overview](./OpenTelemetryPlan.md)_
# Design Decisions
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Related**: [Architecture Analysis](./01-architecture-analysis.md) | [Code Samples](./04-code-samples.md)
---
## 2.1 OpenTelemetry Components
> **OTLP** = OpenTelemetry Protocol
### 2.1.1 SDK Selection
**Primary Choice**: OpenTelemetry C++ SDK (`opentelemetry-cpp`)
| Component | Purpose | Required |
| --------------------------------------- | ---------------------- | ----------- |
| `opentelemetry-cpp::api` | Tracing API headers | Yes |
| `opentelemetry-cpp::sdk` | SDK implementation | Yes |
| `opentelemetry-cpp::ext` | Extensions (exporters) | Yes |
| `opentelemetry-cpp::otlp_grpc_exporter` | OTLP/gRPC export | Recommended |
| `opentelemetry-cpp::otlp_http_exporter` | OTLP/HTTP export | Alternative |
### 2.1.2 Instrumentation Strategy
**Manual Instrumentation** (recommended):
| Approach | Pros | Cons |
| ---------- | ----------------------------------------------------------------- | ------------------------------------------------------- |
| **Manual** | Precise control, optimized placement, rippled-specific attributes | More development effort |
| **Auto** | Less code, automatic coverage | Less control, potential overhead, limited customization |
---
## 2.2 Exporter Configuration
> **OTLP** = OpenTelemetry Protocol
```mermaid
flowchart TB
subgraph nodes["rippled Nodes"]
node1["rippled<br/>Node 1"]
node2["rippled<br/>Node 2"]
node3["rippled<br/>Node 3"]
end
collector["OpenTelemetry<br/>Collector<br/>(sidecar or standalone)"]
subgraph backends["Observability Backends"]
tempo["Tempo"]
elastic["Elastic<br/>APM"]
end
node1 -->|"OTLP/gRPC<br/>:4317"| collector
node2 -->|"OTLP/gRPC<br/>:4317"| collector
node3 -->|"OTLP/gRPC<br/>:4317"| collector
collector --> tempo
collector --> elastic
style nodes fill:#0d47a1,stroke:#082f6a,color:#ffffff
style backends fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style collector fill:#bf360c,stroke:#8c2809,color:#ffffff
```
**Reading the diagram:**
- **rippled Nodes (blue)**: The source of telemetry data. Each rippled node exports spans via OTLP/gRPC on port 4317.
- **OpenTelemetry Collector (red)**: The central aggregation point that receives spans from all nodes. Can run as a sidecar (per-node) or standalone (shared). Handles batching, filtering, and routing.
- **Observability Backends (green)**: The storage and visualization destinations. Tempo is the recommended backend for both development and production, and Elastic APM is an alternative. The Collector routes to one or more backends.
- **Arrows (nodes to collector to backends)**: The data pipeline -- spans flow from nodes to the Collector over gRPC, then the Collector fans out to the configured backends.
### 2.2.1 OTLP/gRPC (Recommended)
```cpp
// Configuration for OTLP over gRPC
namespace otlp = opentelemetry::exporter::otlp;
otlp::OtlpGrpcExporterOptions opts;
opts.endpoint = "localhost:4317";
opts.use_ssl_credentials = true;
opts.ssl_credentials_cacert_path = "/path/to/ca.crt";
```
### 2.2.2 OTLP/HTTP (Alternative)
```cpp
// Configuration for OTLP over HTTP
namespace otlp = opentelemetry::exporter::otlp;
otlp::OtlpHttpExporterOptions opts;
opts.url = "http://localhost:4318/v1/traces";
opts.content_type = otlp::HttpRequestContentType::kJson; // or kBinary
```
---
## 2.3 Span Naming Conventions
> **TxQ** = Transaction Queue | **UNL** = Unique Node List | **WS** = WebSocket
### 2.3.1 Naming Schema
```
<component>.<operation>[.<sub-operation>]
```
**Examples**:
- `tx.receive` - Transaction received from peer
- `consensus.phase.establish` - Consensus establish phase
- `rpc.command.server_info` - server_info RPC command
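A small helper that assembles names from parts keeps the schema consistent across call sites, particularly for dynamic names like `rpc.command.<name>`. A minimal sketch (the helper name is hypothetical):

```cpp
#include <string>
#include <vector>

// Join name parts with '.', per the
// <component>.<operation>[.<sub-operation>] schema above.
std::string
buildSpanName(std::vector<std::string> const& parts)
{
    std::string name;
    for (auto const& part : parts)
    {
        if (!name.empty())
            name += '.';
        name += part;
    }
    return name;
}
```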
### 2.3.2 Complete Span Catalog
```yaml
# Transaction Spans
tx:
receive: "Transaction received from network"
validate: "Transaction signature/format validation"
process: "Full transaction processing"
relay: "Transaction relay to peers"
apply: "Apply transaction to ledger"
# Consensus Spans
consensus:
round: "Complete consensus round"
phase:
open: "Open phase - collecting transactions"
establish: "Establish phase - reaching agreement"
accept: "Accept phase - applying consensus"
proposal:
receive: "Receive peer proposal"
send: "Send our proposal"
validation:
receive: "Receive peer validation"
send: "Send our validation"
# RPC Spans
rpc:
request: "HTTP/WebSocket request handling"
command:
"*": "Specific RPC command (dynamic)"
# Peer Spans
peer:
connect: "Peer connection establishment"
disconnect: "Peer disconnection"
message:
send: "Send protocol message"
receive: "Receive protocol message"
# Ledger Spans
ledger:
acquire: "Ledger acquisition from network"
build: "Build new ledger"
validate: "Ledger validation"
close: "Close ledger"
replay: "Ledger replay executed"
delta: "Delta-based ledger acquired"
# PathFinding Spans
pathfind:
request: "Path request initiated"
compute: "Path computation executed"
# TxQ Spans
txq:
enqueue: "Transaction queued"
apply: "Queued transaction applied"
# Fee/Load Spans
fee:
escalate: "Fee escalation triggered"
# Validator Spans
validator:
list:
fetch: "UNL list fetched"
manifest: "Manifest update processed"
# Amendment Spans
amendment:
vote: "Amendment voting executed"
# SHAMap Spans
shamap:
sync: "State tree synchronization"
# Job Spans
job:
enqueue: "Job added to queue"
execute: "Job execution"
```
---
## 2.4 Attribute Schema
> **TxQ** = Transaction Queue | **UNL** = Unique Node List | **OTLP** = OpenTelemetry Protocol
### 2.4.1 Resource Attributes (Set Once at Startup)
```cpp
// Standard OpenTelemetry semantic conventions
// (sdk::resource::SemanticConventions::kServiceName, etc.)
"service.name" = "rippled"
"service.version" = BuildInfo::getVersionString()
"service.instance.id" = <node_public_key_base58>
// Custom rippled attributes
"xrpl.network.id" = <network_id> // e.g., 0 for mainnet
"xrpl.network.type" = "mainnet" | "testnet" | "devnet" | "standalone"
"xrpl.node.type" = "validator" | "stock" | "reporting"
"xrpl.node.cluster" = <cluster_name> // If clustered
```
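At startup these attributes reduce to a flat key/value set. A minimal sketch of assembling them is below; in the real integration the map would be passed to the SDK's resource factory, and the sample values here are placeholders:

```cpp
#include <map>
#include <string>

// Assemble the startup resource attributes as plain key/value pairs.
// The literal keys match the OTel semantic conventions and the custom
// xrpl.* attributes listed above; values shown are illustrative.
std::map<std::string, std::string>
makeResourceAttributes(
    std::string const& version,
    std::string const& nodePubKey)
{
    return {
        {"service.name", "rippled"},
        {"service.version", version},
        {"service.instance.id", nodePubKey},
        {"xrpl.network.id", "0"},        // e.g. 0 for mainnet
        {"xrpl.network.type", "mainnet"},
        {"xrpl.node.type", "validator"},
    };
}
```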
### 2.4.2 Span Attributes by Category
#### Transaction Attributes
```cpp
"xrpl.tx.hash" = string // Transaction hash (hex)
"xrpl.tx.type" = string // "Payment", "OfferCreate", etc.
"xrpl.tx.account" = string // Source account (redacted in prod)
"xrpl.tx.sequence" = int64 // Account sequence number
"xrpl.tx.fee" = int64 // Fee in drops
"xrpl.tx.result" = string // "tesSUCCESS", "tecPATH_DRY", etc.
"xrpl.tx.ledger_index" = int64 // Ledger containing transaction
```
#### Consensus Attributes
```cpp
"xrpl.consensus.round" = int64 // Round number
"xrpl.consensus.phase" = string // "open", "establish", "accept"
"xrpl.consensus.mode" = string // "proposing", "observing", etc.
"xrpl.consensus.proposers" = int64 // Number of proposers
"xrpl.consensus.ledger.prev" = string // Previous ledger hash
"xrpl.consensus.ledger.seq" = int64 // Ledger sequence
"xrpl.consensus.tx_count" = int64 // Transactions in consensus set
"xrpl.consensus.duration_ms" = float64 // Round duration
// Phase 4a: Establish-phase gap fill & cross-node correlation
"xrpl.consensus.round_id" = int64 // Consensus round number
"xrpl.consensus.ledger_id" = string // previousLedger.id() — shared across nodes
"xrpl.consensus.trace_strategy" = string // "deterministic" or "attribute"
"xrpl.consensus.converge_percent" = int64 // Convergence % (0-100+)
"xrpl.consensus.establish_count" = int64 // Number of establish iterations
"xrpl.consensus.disputes_count" = int64 // Active disputed transactions
"xrpl.consensus.proposers_agreed" = int64 // Peers agreeing with our position
"xrpl.consensus.proposers_total" = int64 // Total peer positions
"xrpl.consensus.agree_count" = int64 // Peers that agree (haveConsensus)
"xrpl.consensus.disagree_count" = int64 // Peers that disagree
"xrpl.consensus.threshold_percent" = int64 // Current threshold (50/65/70/95)
"xrpl.consensus.result" = string // "yes", "no", "moved_on"
"xrpl.consensus.mode.old" = string // Previous consensus mode
"xrpl.consensus.mode.new" = string // New consensus mode
```
#### RPC Attributes
```cpp
"xrpl.rpc.command" = string // Command name
"xrpl.rpc.version" = int64 // API version
"xrpl.rpc.role" = string // "admin" or "user"
"xrpl.rpc.params" = string // Sanitized parameters (optional)
```
#### Peer & Message Attributes
```cpp
"xrpl.peer.id" = string // Peer public key (base58)
"xrpl.peer.address" = string // IP:port
"xrpl.peer.latency_ms" = float64 // Measured latency
"xrpl.peer.cluster" = string // Cluster name if clustered
"xrpl.message.type" = string // Protocol message type name
"xrpl.message.size_bytes" = int64 // Message size
"xrpl.message.compressed" = bool // Whether compressed
```
#### Ledger & Job Attributes
```cpp
"xrpl.ledger.hash" = string // Ledger hash
"xrpl.ledger.index" = int64 // Ledger sequence/index
"xrpl.ledger.close_time" = int64 // Close time (epoch)
"xrpl.ledger.tx_count" = int64 // Transaction count
"xrpl.job.type" = string // Job type name
"xrpl.job.queue_ms" = float64 // Time spent in queue
"xrpl.job.worker" = int64 // Worker thread ID
```
#### PathFinding Attributes
```cpp
"xrpl.pathfind.source_currency" = string // Source currency code
"xrpl.pathfind.dest_currency" = string // Destination currency code
"xrpl.pathfind.path_count" = int64 // Number of paths found
"xrpl.pathfind.cache_hit" = bool // RippleLineCache hit
```
#### TxQ Attributes
```cpp
"xrpl.txq.queue_depth" = int64 // Current queue depth
"xrpl.txq.fee_level" = int64 // Fee level of transaction
"xrpl.txq.eviction_reason" = string // Why transaction was evicted
```
#### Fee Attributes
```cpp
"xrpl.fee.load_factor" = int64 // Current load factor
"xrpl.fee.escalation_level" = int64 // Fee escalation multiplier
```
#### Validator Attributes
```cpp
"xrpl.validator.list_size" = int64 // UNL size
"xrpl.validator.list_age_sec" = int64 // Seconds since last update
```
#### Amendment Attributes
```cpp
"xrpl.amendment.name" = string // Amendment name
"xrpl.amendment.status" = string // "enabled", "vetoed", "supported"
```
#### SHAMap Attributes
```cpp
"xrpl.shamap.type" = string // "transaction", "state", "account_state"
"xrpl.shamap.missing_nodes" = int64 // Number of missing nodes during sync
"xrpl.shamap.duration_ms" = float64 // Sync duration
```
### 2.4.3 Data Collection Summary
The following table summarizes what data is collected by category:
| Category | Attributes Collected | Purpose |
| --------------- | ---------------------------------------------------------------------- | ---------------------------- |
| **Transaction** | `tx.hash`, `tx.type`, `tx.result`, `tx.fee`, `ledger_index` | Trace transaction lifecycle |
| **Consensus** | `round`, `phase`, `mode`, `proposers` (public keys), `duration_ms` | Analyze consensus timing |
| **RPC** | `command`, `version`, `status`, `duration_ms` | Monitor RPC performance |
| **Peer** | `peer.id` (public key), `latency_ms`, `message.type`, `message.size` | Network topology analysis |
| **Ledger** | `ledger.hash`, `ledger.index`, `close_time`, `tx_count` | Ledger progression tracking |
| **Job** | `job.type`, `queue_ms`, `worker` | JobQueue performance |
| **PathFinding** | `pathfind.source_currency`, `dest_currency`, `path_count`, `cache_hit` | Payment path analysis |
| **TxQ** | `txq.queue_depth`, `fee_level`, `eviction_reason` | Queue depth and fee tracking |
| **Fee** | `fee.load_factor`, `escalation_level` | Fee escalation monitoring |
| **Validator** | `validator.list_size`, `list_age_sec` | UNL health monitoring |
| **Amendment** | `amendment.name`, `status` | Protocol upgrade tracking |
| **SHAMap** | `shamap.type`, `missing_nodes`, `duration_ms` | State tree sync performance |
### 2.4.4 Privacy & Sensitive Data Policy
> **PII** = Personally Identifiable Information
OpenTelemetry instrumentation is designed to collect **operational metadata only**, never sensitive content.
#### Data NOT Collected
The following data is explicitly **excluded** from telemetry collection:
| Excluded Data | Reason |
| ----------------------- | ----------------------------------------- |
| **Private Keys** | Never exposed; not relevant to tracing |
| **Account Balances** | Financial data; privacy sensitive |
| **Transaction Amounts** | Financial data; privacy sensitive |
| **Raw TX Payloads** | May contain sensitive memo/data fields |
| **Personal Data** | No PII collected |
| **IP Addresses** | Configurable; excluded by default in prod |
#### Privacy Protection Mechanisms
| Mechanism | Description |
| ----------------------------- | ------------------------------------------------------------------------- |
| **Account Hashing** | `xrpl.tx.account` is hashed at collector level before storage |
| **Configurable Redaction** | Sensitive fields can be excluded via `[telemetry]` config section |
| **Sampling** | Only 10% of traces recorded by default, reducing data exposure |
| **Local Control** | Node operators have full control over what gets exported |
| **No Raw Payloads** | Transaction content is never recorded, only metadata (hash, type, result) |
| **Collector-Level Filtering** | Additional redaction/hashing can be configured at OTel Collector |
#### Collector-Level Data Protection
The OpenTelemetry Collector can be configured to hash or redact sensitive attributes before export:
```yaml
processors:
attributes:
actions:
# Hash account addresses before storage
- key: xrpl.tx.account
action: hash
# Remove IP addresses entirely
- key: xrpl.peer.address
action: delete
# Redact specific fields
- key: xrpl.rpc.params
action: delete
```
#### Configuration Options for Privacy
In `rippled.cfg`, operators can control data collection granularity:
```ini
[telemetry]
enabled=1
# Disable collection of specific components
trace_transactions=1
trace_consensus=1
trace_rpc=1
trace_peer=0 # Disable peer tracing (high volume, includes addresses)
# Redact specific attributes
redact_account=1 # Hash account addresses before export
redact_peer_address=1 # Remove peer IP addresses
```
> **Note**: The `redact_account` configuration in `rippled.cfg` controls SDK-level redaction before export, while collector-level filtering (see [Collector-Level Data Protection](#collector-level-data-protection) above) provides an additional defense-in-depth layer. Both can operate independently.
> **Key Principle**: Telemetry collects **operational metadata** (timing, counts, hashes) — never **sensitive content** (keys, balances, amounts, raw payloads).
---
## 2.5 Context Propagation Design
> **WS** = WebSocket
### 2.5.1 Propagation Boundaries
```mermaid
flowchart TB
subgraph http["HTTP/WebSocket (RPC)"]
w3c["W3C Trace Context Headers:<br/>traceparent:<br/>00-trace_id-span_id-flags<br/>tracestate: rippled=..."]
end
subgraph protobuf["Protocol Buffers (P2P)"]
proto["message TraceContext {<br/> bytes trace_id = 1; // 16 bytes<br/> bytes span_id = 2; // 8 bytes<br/> uint32 trace_flags = 3;<br/> string trace_state = 4;<br/>}"]
end
subgraph jobqueue["JobQueue (Internal Async)"]
job["Context captured at job creation,<br/>restored at execution<br/><br/>class Job {<br/> otel::context::Context<br/> traceContext_;<br/>};"]
end
style http fill:#0d47a1,stroke:#082f6a,color:#ffffff
style protobuf fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style jobqueue fill:#bf360c,stroke:#8c2809,color:#ffffff
```
**Reading the diagram:**
- **HTTP/WebSocket - RPC (blue)**: For client-facing RPC requests, trace context is propagated using the W3C `traceparent` header. This is the standard approach and works with any OTel-compatible client.
- **Protocol Buffers - P2P (green)**: For peer-to-peer messages between rippled nodes, trace context is embedded as a protobuf `TraceContext` message carrying trace_id, span_id, flags, and optional trace_state.
- **JobQueue - Internal Async (red)**: For asynchronous work within a single node, the OTel context is captured when a job is created and restored when the job executes on a worker thread. This bridges the async gap so spans remain linked.
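The capture/restore pattern in the JobQueue box can be sketched with a minimal stand-in. This is illustrative only: a thread-local string plays the role of `otel::context::Context`, and `Job` here is a simplified stand-in for rippled's real job class, not its actual interface.

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <utility>

// Hypothetical stand-in for otel::context::Context: a thread-local trace id.
thread_local std::string currentTraceId;

struct Job
{
    // Snapshot the creator's context at construction time...
    std::string capturedTraceId = currentTraceId;
    std::function<void()> work;

    // ...and restore it around execution on the worker thread, so spans
    // started inside work() attach to the originating trace.
    void run()
    {
        std::string saved = std::exchange(currentTraceId, capturedTraceId);
        work();
        currentTraceId = std::move(saved);  // put the worker's context back
    }
};
```

The real implementation stores an OTel `Context` object and uses the SDK's scope/attach API, but the shape is the same: one copy at enqueue, one attach/detach pair at execution.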
---
## 2.6 Integration with Existing Observability
> **OTLP** = OpenTelemetry Protocol | **WS** = WebSocket
### 2.6.1 Existing Frameworks Comparison
rippled already has two observability mechanisms. OpenTelemetry complements (not replaces) them:
| Aspect | PerfLog | Beast Insight (StatsD) | OpenTelemetry |
| --------------------- | ----------------------------- | ---------------------------- | ------------------------- |
| **Type** | Logging | Metrics | Distributed Tracing |
| **Data** | JSON log entries | Counters, gauges, histograms | Spans with context |
| **Scope** | Single node | Single node | **Cross-node** |
| **Output** | `perf.log` file | StatsD server | OTLP Collector |
| **Question answered** | "What happened on this node?" | "How many? How fast?" | "What was the journey?" |
| **Correlation** | By timestamp | By metric name | By `trace_id` |
| **Overhead** | Low (file I/O) | Low (UDP packets) | Low-Medium (configurable) |
### 2.6.2 What Each Framework Does Best
#### PerfLog
- **Purpose**: Detailed local event logging for RPC and job execution
- **Strengths**:
- Rich JSON output with timing data
- Already integrated in RPC handlers
- File-based, no external dependencies
- **Limitations**:
- Single-node only (no cross-node correlation)
- No parent-child relationships between events
- Manual log parsing required
Example PerfLog entry:
```json
{
"time": "2024-01-15T10:30:00.123Z",
"method": "submit",
"duration_us": 1523,
"result": "tesSUCCESS"
}
```
#### Beast Insight (StatsD)
- **Purpose**: Real-time metrics for monitoring dashboards
- **Strengths**:
- Aggregated metrics (counters, gauges, histograms)
- Low overhead (UDP, fire-and-forget)
- Good for alerting thresholds
- **Limitations**:
- No request-level detail
- No causal relationships
- Single-node perspective
```cpp
// Example StatsD usage in rippled
insight.increment("rpc.submit.count");
insight.gauge("ledger.age", age);
insight.timing("consensus.round", duration);
```
#### OpenTelemetry (NEW)
- **Purpose**: Distributed request tracing across nodes
- **Strengths**:
- **Cross-node correlation** via `trace_id`
- Parent-child span relationships
- Rich attributes per span
- Industry standard (CNCF)
- **Limitations**:
- Requires collector infrastructure
- Higher complexity than logging
```cpp
// Example OpenTelemetry span
auto span = telemetry.startSpan("tx.relay");
span->SetAttribute("tx.hash", hash);
span->SetAttribute("peer.id", peerId);
// Span automatically linked to parent via context
```
### 2.6.3 When to Use Each
| Scenario | PerfLog | StatsD | OpenTelemetry |
| --------------------------------------- | ---------- | ------ | ------------- |
| "How many TXs per second?" | ❌ | ✅ | ✅ |
| "What's the p99 RPC latency?" | ❌ | ✅ | ✅ |
| "Why was this specific TX slow?" | ⚠️ partial | ❌ | ✅ |
| "Which node delayed consensus?" | ❌ | ❌ | ✅ |
| "What happened on node X at time T?" | ✅ | ❌ | ✅ |
| "Show me the TX journey across 5 nodes" | ❌ | ❌ | ✅ |
### 2.6.4 Coexistence Strategy
> **Note**: Phase 7 replaces the StatsD bridge with native OTel Metrics SDK export. The diagram below shows the Phase 6 intermediate state. See [Phase7_taskList.md](./Phase7_taskList.md) for the migration design where Beast Insight emits via OTLP instead of StatsD.
```mermaid
flowchart TB
subgraph rippled["rippled Process"]
perflog["PerfLog<br/>(JSON to file)"]
insight["Beast Insight<br/>(StatsD)"]
otel["OpenTelemetry<br/>(Tracing)"]
end
perflog --> perffile["perf.log"]
insight --> statsd["StatsD Server"]
otel --> collector["OTLP Collector"]
perffile --> grafana["Grafana<br/>(Unified UI)"]
statsd --> grafana
collector --> grafana
style rippled fill:#212121,stroke:#0a0a0a,color:#ffffff
style grafana fill:#bf360c,stroke:#8c2809,color:#ffffff
```
**Reading the diagram:**
- **rippled Process (dark gray)**: The single rippled node running all three observability frameworks side by side. Each framework operates independently with no interference.
- **PerfLog to perf.log**: PerfLog writes JSON-formatted event logs to a local file. Grafana can ingest these via Loki or a file-based datasource.
- **Beast Insight to StatsD Server**: Insight sends aggregated metrics (counters, gauges) over UDP to a StatsD server. Grafana reads from StatsD-compatible backends like Graphite or Prometheus (via StatsD exporter).
- **OpenTelemetry to OTLP Collector**: OTel exports spans over OTLP/gRPC to a Collector, which then forwards to a trace backend (Tempo).
- **Grafana (red, unified UI)**: All three data streams converge in Grafana, enabling operators to correlate logs, metrics, and traces in a single dashboard.
**Phase 7 target state**: Beast Insight routes to `OTelCollector` (new `Collector` implementation) which exports via OTLP/HTTP to the same collector endpoint as traces. StatsD UDP path becomes a deprecated fallback (`[insight] server=statsd`). See [06-implementation-phases.md §6.8](./06-implementation-phases.md) and [Phase7_taskList.md](./Phase7_taskList.md) for details.
### 2.6.5 Correlation with PerfLog
Trace IDs can be correlated with existing PerfLog entries for comprehensive debugging:
```cpp
// In RPCHandler.cpp - correlate trace with PerfLog
Status doCommand(RPC::JsonContext& context, Json::Value& result)
{
// Start OpenTelemetry span
auto span = context.app.getTelemetry().startSpan(
"rpc.command." + context.method);
// Get trace ID for correlation
auto traceId = span->GetContext().trace_id().IsValid()
? toHex(span->GetContext().trace_id())
: "";
// Use existing PerfLog with trace correlation
auto const curId = context.app.getPerfLog().currentId();
context.app.getPerfLog().rpcStart(context.method, curId);
// Future: Add trace ID to PerfLog entry
// context.app.getPerfLog().setTraceId(curId, traceId);
try {
auto ret = handler(context, result);
context.app.getPerfLog().rpcFinish(context.method, curId);
span->SetStatus(opentelemetry::trace::StatusCode::kOk);
return ret;
} catch (std::exception const& e) {
context.app.getPerfLog().rpcError(context.method, curId);
span->RecordException(e);
span->SetStatus(opentelemetry::trace::StatusCode::kError, e.what());
throw;
}
}
```
---
_Previous: [Architecture Analysis](./01-architecture-analysis.md)_ | _Next: [Implementation Strategy](./03-implementation-strategy.md)_ | _Back to: [Overview](./OpenTelemetryPlan.md)_

# Implementation Strategy
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Related**: [Code Samples](./04-code-samples.md) | [Configuration Reference](./05-configuration-reference.md)
---
## 3.1 Directory Structure
The telemetry implementation follows rippled's existing code organization pattern:
```
include/xrpl/
├── telemetry/
│ ├── Telemetry.h # Main telemetry interface
│ ├── TelemetryConfig.h # Configuration structures
│ ├── TraceContext.h # Context propagation utilities
│ ├── SpanGuard.h # RAII span management
│ └── SpanAttributes.h # Attribute helper functions
src/libxrpl/
├── telemetry/
│ ├── Telemetry.cpp # Implementation
│ ├── TelemetryConfig.cpp # Config parsing
│ ├── TraceContext.cpp # Context serialization
│ └── NullTelemetry.cpp # No-op implementation
src/xrpld/
├── telemetry/
│ ├── TracingInstrumentation.h # Instrumentation macros
│ └── TracingInstrumentation.cpp
```
---
## 3.2 Implementation Approach
<div align="center">
```mermaid
%%{init: {'flowchart': {'nodeSpacing': 20, 'rankSpacing': 30}}}%%
flowchart TB
subgraph phase1["Phase 1: Core"]
direction LR
sdk["SDK Integration"] ~~~ interface["Telemetry Interface"] ~~~ config["Configuration"]
end
subgraph phase2["Phase 2: RPC"]
direction LR
http["HTTP Context"] ~~~ rpc["RPC Handlers"]
end
subgraph phase3["Phase 3: P2P"]
direction LR
proto["Protobuf Context"] ~~~ tx["Transaction Relay"]
end
subgraph phase4["Phase 4: Consensus"]
direction LR
consensus["Consensus Rounds"] ~~~ proposals["Proposals"]
end
phase1 --> phase2 --> phase3 --> phase4
style phase1 fill:#1565c0,stroke:#0d47a1,color:#ffffff
style phase2 fill:#2e7d32,stroke:#1b5e20,color:#ffffff
style phase3 fill:#e65100,stroke:#bf360c,color:#ffffff
style phase4 fill:#c2185b,stroke:#880e4f,color:#ffffff
```
</div>
### Key Principles
1. **Minimal Intrusion**: Instrumentation should not alter existing control flow
2. **Zero-Cost When Disabled**: Use compile-time flags and no-op implementations
3. **Backward Compatibility**: Protocol Buffer extensions use high field numbers
4. **Graceful Degradation**: Tracing failures must not affect node operation
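Principle 2 is typically realized with the Null Object pattern: when telemetry is disabled, call sites keep their shape but dispatch to inert objects. A minimal sketch, assuming a simplified interface (the real signatures live in [Code Samples](./04-code-samples.md) and may differ):

```cpp
#include <cassert>
#include <memory>
#include <string_view>

// Hypothetical shape of the telemetry interface for illustration.
struct Span
{
    virtual ~Span() = default;
    virtual void setAttribute(std::string_view key, std::string_view value) = 0;
};

struct Telemetry
{
    virtual ~Telemetry() = default;
    virtual std::unique_ptr<Span> startSpan(std::string_view name) = 0;
};

// Null Object implementations: every call is a no-op, so instrumented call
// sites need no branching and no #ifdefs when telemetry is disabled.
struct NullSpan final : Span
{
    void setAttribute(std::string_view, std::string_view) override {}
};

struct NullTelemetry final : Telemetry
{
    std::unique_ptr<Span> startSpan(std::string_view) override
    {
        return std::make_unique<NullSpan>();
    }
};
```

Combined with the compile-time macro path in §3.7.3, this gives two disable levels: the macro removes the call entirely, while `NullTelemetry` handles runtime `enabled=0` without recompiling.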
---
## 3.3 Performance Overhead Summary
> **OTLP** = OpenTelemetry Protocol
| Metric | Overhead | Notes |
| ------------- | ---------- | ------------------------------------------------ |
| CPU | 1-3% | Of per-transaction CPU cost (~200μs baseline) |
| Memory | ~10 MB | SDK statics + batch buffer + worker thread stack |
| Network | 10-50 KB/s | Compressed OTLP export to collector |
| Latency (p99) | <2% | With proper sampling configuration |
---
## 3.4 Detailed CPU Overhead Analysis
### 3.4.1 Per-Operation Costs
> **Note on hardware assumptions**: The costs below are based on the official OTel C++ SDK CI benchmarks
> (969 runs on GitHub Actions 2-core shared runners). On production server hardware (3+ GHz Xeon),
> expect costs at the **lower end** of each range (~30-50% improvement over CI hardware).
| Operation | Time (ns) | Frequency | Impact |
| --------------------- | --------- | ---------------------- | ---------- |
| Span creation | 500-1000 | Every traced operation | Low |
| Span end | 100-200 | Every traced operation | Low |
| SetAttribute (string) | 80-120 | 3-5 per span | Low |
| SetAttribute (int) | 40-60 | 2-3 per span | Negligible |
| AddEvent | 100-200 | 0-2 per span | Low |
| Context injection | 150-250 | Per outgoing message | Low |
| Context extraction | 100-180 | Per incoming message | Low |
| GetCurrent context | 10-20 | Thread-local access | Negligible |
**Source**: Span creation based on OTel C++ SDK `BM_SpanCreation` benchmark (AlwaysOnSampler +
SimpleSpanProcessor + InMemoryExporter), median ~1,000 ns on CI hardware. AddEvent includes
timestamp read + string copy + vector push + mutex acquisition. Context injection/extraction
confirmed by `BM_SpanCreationWithScope` benchmark delta (~160 ns).
### 3.4.2 Transaction Processing Overhead
<div align="center">
```mermaid
%%{init: {'pie': {'textPosition': 0.75}}}%%
pie showData
"tx.receive (1400ns)" : 1400
"tx.validate (1200ns)" : 1200
"tx.relay (1200ns)" : 1200
"Context inject (200ns)" : 200
```
**Transaction Tracing Overhead (~4.0μs total)**
</div>
**Overhead percentage**: 4.0 μs / 200 μs (avg tx processing) = **~2.0%**
> **Breakdown**: Each span (tx.receive, tx.validate, tx.relay) costs ~1,000 ns for creation plus
> ~200-400 ns for 3-5 attribute sets. Context injection is ~200 ns (confirmed by benchmarks).
> On production hardware, expect ~2.6 μs total (~1.3% overhead) due to faster span creation (~500-600 ns).
### 3.4.3 Consensus Round Overhead
| Operation | Count | Cost (ns) | Total |
| ---------------------- | ----- | --------- | ---------- |
| consensus.round span | 1 | ~1200 | ~1.2 μs |
| consensus.phase spans | 3 | ~1100 | ~3.3 μs |
| proposal.receive spans | ~20 | ~1100 | ~22 μs |
| proposal.send spans | ~3 | ~1100 | ~3.3 μs |
| Context operations | ~30 | ~200 | ~6 μs |
| **TOTAL** | | | **~36 μs** |
> **Why higher**: Each span costs ~1,000 ns creation + ~100-200 ns for 1-2 attributes, totaling ~1,100-1,200 ns.
> Context operations remain ~200 ns (confirmed by benchmarks). On production hardware, expect ~24 μs total.
**Overhead percentage**: 36 μs / 3s (typical round) = **~0.001%** (negligible)
### 3.4.4 RPC Request Overhead
| Operation | Cost (ns) |
| ---------------- | ------------ |
| rpc.request span | ~1200 |
| rpc.command span | ~1100 |
| Context extract | ~250 |
| Context inject | ~200 |
| **TOTAL** | **~2.75 μs** |
> **Why higher**: Each span costs ~1,000 ns creation + ~100-200 ns for attributes (command name,
> version, role). Context extract/inject costs are confirmed by OTel C++ benchmarks.
- Fast RPC (1ms): 2.75 μs / 1ms = **~0.275%**
- Slow RPC (100ms): 2.75 μs / 100ms = **~0.003%**
---
## 3.5 Memory Overhead Analysis
> **OTLP** = OpenTelemetry Protocol
### 3.5.1 Static Memory
| Component | Size | Allocated |
| ------------------------------------ | ----------- | ---------- |
| TracerProvider singleton | ~64 KB | At startup |
| BatchSpanProcessor (circular buffer) | ~16 KB | At startup |
| BatchSpanProcessor (worker thread) | ~8 MB | At startup |
| OTLP exporter (gRPC channel init) | ~256 KB | At startup |
| Propagator registry | ~8 KB | At startup |
| **Total static** | **~8.3 MB** | |
> **Why higher than earlier estimate**: The BatchSpanProcessor's circular buffer itself is only ~16 KB
> (2049 x 8-byte `AtomicUniquePtr` entries), but it spawns a dedicated worker thread whose default
> stack size on Linux is ~8 MB. The OTLP gRPC exporter allocates memory for channel stubs and TLS
> initialization. The worker thread stack dominates the static footprint.
### 3.5.2 Dynamic Memory
| Component | Size per unit | Max units | Peak |
| -------------------- | -------------- | ---------- | --------------- |
| Active span | ~500-800 bytes | 1000 | ~500-800 KB |
| Queued span (export) | ~500 bytes | 2048 | ~1 MB |
| Attribute storage | ~80 bytes | 5 per span | Included |
| Context storage | ~64 bytes | Per thread | ~6.4 KB |
| **Total dynamic** | | | **~1.5-1.8 MB** |
> **Why active spans are larger**: An active `Span` object includes the wrapper (~88 bytes: shared_ptr,
> mutex, unique_ptr to Recordable) plus `SpanData` (~250 bytes: SpanContext, timestamps, name, status,
> empty containers) plus attribute storage (~200-500 bytes for 3-5 string attributes in a `std::map`).
> Source: `sdk/src/trace/span.h` and `sdk/include/opentelemetry/sdk/trace/span_data.h`.
> Queued spans release the wrapper, keeping only `SpanData` + attributes (~500 bytes).
### 3.5.3 Memory Growth Characteristics
```mermaid
---
config:
xyChart:
width: 700
height: 400
---
xychart-beta
title "Memory Usage vs Span Rate (bounded by queue limit)"
x-axis "Spans/second" [0, 200, 400, 600, 800, 1000]
y-axis "Memory (MB)" 0 --> 12
line [8.5, 9.2, 9.6, 9.9, 10.0, 10.0]
```
**Notes**:
- Memory increases with span rate but **plateaus at queue capacity** (default 2048 spans)
- Batch export prevents unbounded growth
- At queue limit, oldest spans are dropped (not blocked)
- Maximum memory is bounded: ~8.3 MB static (dominated by worker thread stack) + 2048 queued spans x ~500 bytes (~1 MB) + active spans (~0.8 MB) ≈ **~10 MB ceiling**
- The worker thread stack (~8 MB) is virtual memory; actual RSS depends on stack usage (typically much less)
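The ceiling arithmetic from the bullets above can be reproduced directly (all constants are the estimates stated in §3.5.1-3.5.3, not measured values):

```cpp
#include <cassert>

// Components of the memory ceiling, per the tables above.
constexpr double staticMB = 8.3;                       // SDK + worker thread stack
constexpr double queuedMB = 2048 * 500.0 / 1'000'000;  // full queue at ~500 B/span
constexpr double activeMB = 0.8;                       // upper bound on live spans

constexpr double ceilingMB = staticMB + queuedMB + activeMB;  // ~10.1 MB
```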
### 3.5.4 Performance Data Sources
The overhead estimates in Sections 3.3-3.5 are derived from the following sources:
| Source | What it covers | URL |
| ------------------------------------------------ | ----------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------ |
| OTel C++ SDK CI benchmarks (969 runs) | Span creation, context activation, sampler overhead | [Benchmark Dashboard](https://open-telemetry.github.io/opentelemetry-cpp/benchmarks/) |
| `api/test/trace/span_benchmark.cc` | API-level span creation (~22 ns no-op) | [Source](https://github.com/open-telemetry/opentelemetry-cpp/blob/main/api/test/trace/span_benchmark.cc) |
| `sdk/test/trace/sampler_benchmark.cc` | SDK span creation with samplers (~1,000 ns AlwaysOn) | [Source](https://github.com/open-telemetry/opentelemetry-cpp/blob/main/sdk/test/trace/sampler_benchmark.cc) |
| `sdk/include/.../span_data.h` | SpanData memory layout (~250 bytes base) | [Source](https://github.com/open-telemetry/opentelemetry-cpp/blob/main/sdk/include/opentelemetry/sdk/trace/span_data.h) |
| `sdk/src/trace/span.h` | Span wrapper memory layout (~88 bytes) | [Source](https://github.com/open-telemetry/opentelemetry-cpp/blob/main/sdk/src/trace/span.h) |
| `sdk/include/.../batch_span_processor_options.h` | Default queue size (2048), batch size (512) | [Source](https://github.com/open-telemetry/opentelemetry-cpp/blob/main/sdk/include/opentelemetry/sdk/trace/batch_span_processor_options.h) |
| `sdk/include/.../circular_buffer.h` | CircularBuffer implementation (AtomicUniquePtr array) | [Source](https://github.com/open-telemetry/opentelemetry-cpp/blob/main/sdk/include/opentelemetry/sdk/common/circular_buffer.h) |
| OTLP proto definition | Serialized span size estimation | [Proto](https://github.com/open-telemetry/opentelemetry-proto/blob/main/opentelemetry/proto/trace/v1/trace.proto) |
---
## 3.6 Network Overhead Analysis
### 3.6.1 Export Bandwidth
> **Bytes per span**: Estimates use ~500 bytes/span (conservative upper bound). OTLP protobuf analysis
> shows a typical span with 3-5 string attributes serializes to ~200-300 bytes raw; with gzip
> compression (~60-70% of raw) and batching (amortized headers), ~350 bytes/span is more realistic.
> The table uses the conservative estimate for capacity planning.
| Sampling Rate | Spans/sec | Bandwidth | Notes |
| ------------- | --------- | --------- | ---------------- |
| 100% | ~500 | ~250 KB/s | Development only |
| 10% | ~50 | ~25 KB/s | Staging |
| 1% | ~5 | ~2.5 KB/s | Production |
| Error-only | ~1 | ~0.5 KB/s | Minimal overhead |
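The bandwidth column is straight multiplication: spans/sec times the conservative ~500 bytes/span estimate. A sketch reproducing the table's figures:

```cpp
#include <cassert>

// Conservative upper bound from the note above; compressed/batched exports
// are typically closer to ~350 bytes/span.
constexpr double bytesPerSpan = 500.0;

constexpr double bandwidthKBs(double spansPerSec)
{
    return spansPerSec * bytesPerSpan / 1000.0;  // KB/s
}
```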
### 3.6.2 Trace Context Propagation
| Message Type | Context Size | Messages/sec | Overhead |
| ---------------------- | ------------ | ------------ | ----------- |
| TMTransaction | 25 bytes | ~100 | ~2.5 KB/s |
| TMProposeSet | 25 bytes | ~10 | ~250 B/s |
| TMValidation | 25 bytes | ~50 | ~1.25 KB/s |
| **Total P2P overhead** | | | **~4 KB/s** |
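The 25-byte figure is the raw context payload: 16-byte trace_id + 8-byte span_id + 1-byte flags. A fixed-layout sketch makes this concrete; the actual protobuf `TraceContext` encoding adds a few bytes of field tags, so 25 bytes is a per-message approximation.

```cpp
#include <array>
#include <cassert>
#include <cstdint>
#include <vector>

// Illustrative flat layout, not the actual protobuf wire format.
struct WireTraceContext
{
    std::array<std::uint8_t, 16> traceId;  // W3C trace_id
    std::array<std::uint8_t, 8> spanId;    // parent span_id
    std::uint8_t flags;                    // sampled bit etc.
};

std::vector<std::uint8_t> serialize(WireTraceContext const& ctx)
{
    std::vector<std::uint8_t> out;
    out.insert(out.end(), ctx.traceId.begin(), ctx.traceId.end());
    out.insert(out.end(), ctx.spanId.begin(), ctx.spanId.end());
    out.push_back(ctx.flags);
    return out;  // 16 + 8 + 1 = 25 bytes
}
```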
---
## 3.7 Optimization Strategies
### 3.7.1 Sampling Strategies
#### Tail Sampling
```mermaid
flowchart TD
trace["New Trace"]
trace --> errors{"Is Error?"}
errors -->|Yes| sample["SAMPLE"]
errors -->|No| consensus{"Is Consensus?"}
consensus -->|Yes| sample
consensus -->|No| slow{"Is Slow?"}
slow -->|Yes| sample
slow -->|No| prob{"Random < 10%?"}
prob -->|Yes| sample
prob -->|No| drop["DROP"]
style sample fill:#4caf50,stroke:#388e3c,color:#fff
style drop fill:#f44336,stroke:#c62828,color:#fff
```
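The decision flow above collapses to a single predicate. This is a sketch only: the `TraceSummary` type, the slow threshold, and the 10% probability are illustrative assumptions, not the collector's actual tail-sampling policy API.

```cpp
#include <cassert>

// Assumed per-trace summary available when the trace completes.
struct TraceSummary
{
    bool hasError = false;
    bool touchesConsensus = false;
    double durationMs = 0.0;
    double randomUniform = 1.0;  // pre-drawn uniform value in [0, 1)
};

bool shouldSample(
    TraceSummary const& t,
    double slowThresholdMs = 100.0,
    double probability = 0.10)
{
    if (t.hasError)
        return true;  // always keep failed traces
    if (t.touchesConsensus)
        return true;  // always keep consensus traces
    if (t.durationMs >= slowThresholdMs)
        return true;  // always keep slow traces
    return t.randomUniform < probability;  // sample 10% of the rest
}
```

Because the error/consensus/slow checks run first, interesting traces are never lost to the probabilistic drop, which is the point of tail (rather than head) sampling.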
### 3.7.2 Batch Tuning Recommendations
| Environment | Batch Size | Batch Delay | Max Queue |
| ------------------ | ---------- | ----------- | --------- |
| Low-latency | 128 | 1000ms | 512 |
| High-throughput | 1024 | 10000ms | 8192 |
| Memory-constrained | 256 | 2000ms | 512 |
### 3.7.3 Conditional Instrumentation
```cpp
// Compile-time feature flag
#ifndef XRPL_ENABLE_TELEMETRY
// Zero-cost when disabled
#define XRPL_TRACE_SPAN(t, n) ((void)0)
#endif
// Runtime component filtering
if (telemetry.shouldTracePeer())
{
XRPL_TRACE_SPAN(telemetry, "peer.message.receive");
// ... instrumentation
}
// No overhead when component tracing disabled
```
---
## 3.8 Links to Detailed Documentation
- **[Code Samples](./04-code-samples.md)**: Complete implementation code for all components
- **[Configuration Reference](./05-configuration-reference.md)**: Configuration options and collector setup
- **[Implementation Phases](./06-implementation-phases.md)**: Detailed timeline and milestones
---
## 3.9 Code Intrusiveness Assessment
> **TxQ** = Transaction Queue
This section provides a detailed assessment of how intrusive the OpenTelemetry integration is to the existing rippled codebase.
### 3.9.1 Files Modified Summary
| Component | Files Modified | Lines Added | Lines Changed | Architectural Impact |
| --------------------- | -------------- | ----------- | ------------- | -------------------- |
| **Core Telemetry** | 5 new files | ~800 | 0 | None (new module) |
| **Application Init** | 2 files | ~30 | ~5 | Minimal |
| **RPC Layer** | 3 files | ~80 | ~20 | Minimal |
| **Transaction Relay** | 4 files | ~120 | ~40 | Low |
| **Consensus** | 3 files | ~100 | ~30 | Low-Medium |
| **Protocol Buffers** | 1 file | ~25 | 0 | Low |
| **CMake/Build** | 3 files | ~50 | ~10 | Minimal |
| **PathFinding** | 2 files | ~80 | ~5 | Minimal |
| **TxQ/Fee** | 2 files | ~60 | ~5 | Minimal |
| **Validator/Amend** | 3 files | ~40 | ~5 | Minimal |
| **Total** | **~28 files** | **~1,385** | **~120** | **Low** |
### 3.9.2 Detailed File Impact
```mermaid
pie title Code Changes by Component
"New Telemetry Module" : 800
"Transaction Relay" : 160
"Consensus" : 130
"RPC Layer" : 100
"PathFinding" : 80
"TxQ/Fee" : 60
"Validator/Amendment" : 40
"Application Init" : 35
"Protocol Buffers" : 25
"Build System" : 60
```
#### New Files (No Impact on Existing Code)
| File | Lines | Purpose |
| ---------------------------------------------- | ----- | -------------------- |
| `include/xrpl/telemetry/Telemetry.h` | ~160 | Main interface |
| `include/xrpl/telemetry/SpanGuard.h` | ~120 | RAII wrapper |
| `include/xrpl/telemetry/TraceContext.h` | ~80 | Context propagation |
| `src/xrpld/telemetry/TracingInstrumentation.h` | ~60 | Macros |
| `src/libxrpl/telemetry/Telemetry.cpp` | ~200 | Implementation |
| `src/libxrpl/telemetry/TelemetryConfig.cpp` | ~60 | Config parsing |
| `src/libxrpl/telemetry/NullTelemetry.cpp` | ~40 | No-op implementation |
#### Modified Files (Existing Rippled Code)
| File | Lines Added | Lines Changed | Risk Level |
| ------------------------------------------------- | ----------- | ------------- | ---------- |
| `src/xrpld/app/main/Application.cpp` | ~15 | ~3 | Low |
| `include/xrpl/app/main/Application.h` | ~5 | ~2 | Low |
| `src/xrpld/rpc/detail/ServerHandler.cpp` | ~40 | ~10 | Low |
| `src/xrpld/rpc/handlers/*.cpp` | ~30 | ~8 | Low |
| `src/xrpld/overlay/detail/PeerImp.cpp` | ~60 | ~15 | Medium |
| `src/xrpld/overlay/detail/OverlayImpl.cpp` | ~30 | ~10 | Medium |
| `src/xrpld/app/consensus/RCLConsensus.cpp` | ~50 | ~15 | Medium |
| `src/xrpld/app/consensus/RCLConsensusAdaptor.cpp` | ~40 | ~12 | Medium |
| `src/xrpld/core/JobQueue.cpp` | ~20 | ~5 | Low |
| `src/xrpld/app/paths/PathRequest.cpp` | ~40 | ~3 | Low |
| `src/xrpld/app/paths/Pathfinder.cpp` | ~40 | ~2 | Low |
| `src/xrpld/app/misc/TxQ.cpp` | ~40 | ~3 | Low |
| `src/xrpld/app/main/LoadManager.cpp` | ~20 | ~2 | Low |
| `src/xrpld/app/misc/ValidatorList.cpp` | ~20 | ~2 | Low |
| `src/xrpld/app/misc/AmendmentTable.cpp` | ~10 | ~2 | Low |
| `src/xrpld/app/misc/Manifest.cpp` | ~10 | ~1 | Low |
| `src/xrpld/shamap/SHAMap.cpp` | ~20 | ~3 | Low |
| `src/xrpld/overlay/detail/ripple.proto` | ~25 | 0 | Low |
| `CMakeLists.txt` | ~40 | ~8 | Low |
| `cmake/FindOpenTelemetry.cmake` | ~50 | 0 | None (new) |
### 3.9.3 Risk Assessment by Component
<div align="center">
**Do First** ↖ ↗ **Plan Carefully**
```mermaid
quadrantChart
title Code Intrusiveness Risk Matrix
x-axis Low Risk --> High Risk
y-axis Low Value --> High Value
RPC Tracing: [0.2, 0.55]
Transaction Relay: [0.55, 0.85]
Consensus Tracing: [0.75, 0.92]
Peer Message Tracing: [0.85, 0.35]
JobQueue Context: [0.3, 0.42]
Ledger Acquisition: [0.48, 0.65]
PathFinding: [0.38, 0.72]
TxQ and Fees: [0.25, 0.62]
Validator Mgmt: [0.15, 0.35]
```
**Optional** ↙ ↘ **Avoid**
</div>
#### Risk Level Definitions
| Risk Level | Definition | Mitigation |
| ---------- | ---------------------------------------------------------------- | ---------------------------------- |
| **Low** | Additive changes only; no modification to existing logic | Standard code review |
| **Medium** | Minor modifications to existing functions; clear boundaries | Comprehensive unit tests |
| **High** | Changes to core logic or data structures; potential side effects | Integration tests + staged rollout |
### 3.9.4 Architectural Impact Assessment
| Aspect | Impact | Justification |
| -------------------- | ------- | -------------------------------------------------------------------------------- |
| **Data Flow** | Minimal | Read-only instrumentation; no modification to consensus or transaction data flow |
| **Threading Model** | Minimal | Context propagation uses thread-local storage (standard OTel pattern) |
| **Memory Model** | Low | Bounded queues prevent unbounded growth; RAII ensures cleanup |
| **Network Protocol** | Low | Optional fields in protobuf (high field numbers); backward compatible |
| **Configuration** | None | New config section; existing configs unaffected |
| **Build System** | Low | Optional CMake flag; builds work without OpenTelemetry |
| **Dependencies** | Low | OpenTelemetry SDK is optional; null implementation when disabled |
### 3.9.5 Backward Compatibility
| Compatibility | Status | Notes |
| --------------- | ------- | ----------------------------------------------------- |
| **Config File** | ✅ Full | New `[telemetry]` section is optional |
| **Protocol** | ✅ Full | Optional protobuf fields with high field numbers |
| **Build** | ✅ Full | `XRPL_ENABLE_TELEMETRY=OFF` produces identical binary |
| **Runtime** | ✅ Full | `enabled=0` produces zero overhead |
| **API** | ✅ Full | No changes to public RPC or P2P APIs |
### 3.9.6 Rollback Strategy
If issues are discovered after deployment:
1. **Immediate**: Set `enabled=0` in config and restart (zero code change)
2. **Quick**: Rebuild with `XRPL_ENABLE_TELEMETRY=OFF`
3. **Complete**: Revert telemetry commits (clean separation makes this easy)
### 3.9.7 Code Change Examples
**Minimal RPC Instrumentation (Low Intrusiveness):**
```cpp
// Before
void ServerHandler::onRequest(...) {
auto result = processRequest(req);
send(result);
}
// After (only ~10 lines added)
void ServerHandler::onRequest(...) {
XRPL_TRACE_RPC(app_.getTelemetry(), "rpc.request"); // +1 line
XRPL_TRACE_SET_ATTR("xrpl.rpc.command", command); // +1 line
auto result = processRequest(req);
XRPL_TRACE_SET_ATTR("xrpl.rpc.status", status); // +1 line
send(result);
}
```
**Consensus Instrumentation (Medium Intrusiveness):**
```cpp
// Before
void RCLConsensusAdaptor::startRound(...) {
// ... existing logic
}
// After (context storage required)
void RCLConsensusAdaptor::startRound(...) {
XRPL_TRACE_CONSENSUS(app_.getTelemetry(), "consensus.round");
XRPL_TRACE_SET_ATTR("xrpl.consensus.ledger.seq", seq);
// Store context for child spans in phase transitions
currentRoundContext_ = _xrpl_guard_->context(); // New member variable
// ... existing logic unchanged
}
```
---
_Previous: [Design Decisions](./02-design-decisions.md)_ | _Next: [Code Samples](./04-code-samples.md)_ | _Back to: [Overview](./OpenTelemetryPlan.md)_

# Configuration Reference
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Related**: [Code Samples](./04-code-samples.md) | [Implementation Phases](./06-implementation-phases.md)
---
## 5.1 rippled Configuration
> **OTLP** = OpenTelemetry Protocol | **TxQ** = Transaction Queue
### 5.1.1 Configuration File Section
Add to `cfg/xrpld-example.cfg`:
```ini
# ═══════════════════════════════════════════════════════════════════════════════
# TELEMETRY (OpenTelemetry Distributed Tracing)
# ═══════════════════════════════════════════════════════════════════════════════
#
# Enables distributed tracing for transaction flow, consensus, and RPC calls.
# Traces are exported to an OpenTelemetry Collector using OTLP protocol.
#
# [telemetry]
#
# # Enable/disable telemetry (default: 0 = disabled)
# enabled=1
#
# # Exporter type: "otlp_grpc" (default), "otlp_http", or "none"
# exporter=otlp_grpc
#
# # OTLP endpoint (default: localhost:4317 for gRPC, localhost:4318 for HTTP)
# endpoint=localhost:4317
#
# # Use TLS for exporter connection (default: 0)
# use_tls=0
#
# # Path to CA certificate for TLS (optional)
# # tls_ca_cert=/path/to/ca.crt
#
# # Sampling ratio: 0.0-1.0 (default: 1.0 = all traces sampled)
# # For high-throughput production deployments, 0.1 (10%) is recommended
# # to reduce overhead. See Section 7.4.2 for sampling strategy details.
# sampling_ratio=0.1
#
# # Batch processor settings
# batch_size=512 # Spans per batch (default: 512)
# batch_delay_ms=5000 # Max delay before sending batch (default: 5000)
# max_queue_size=2048 # Max queued spans (default: 2048)
#
# # Component-specific tracing (default: all enabled except peer)
# trace_transactions=1 # Transaction relay and processing
# trace_consensus=1 # Consensus rounds and proposals
# trace_rpc=1 # RPC request handling
# trace_peer=0 # Peer messages (high volume, disabled by default)
# trace_ledger=1 # Ledger acquisition and building
# trace_pathfind=1 # Path computation (can be expensive)
# trace_txq=1 # Transaction queue and fee escalation
# trace_validator=0 # Validator list and manifest updates (low volume)
# trace_amendment=0 # Amendment voting (very low volume)
#
# # Service identification (automatically detected if not specified)
# # service_name=rippled
# # service_instance_id=<node_public_key>
[telemetry]
enabled=0
```
### 5.1.2 Configuration Options Summary
| Option | Type | Default | Description |
| --------------------- | ------ | ---------------- | ----------------------------------------- |
| `enabled` | bool | `false` | Enable/disable telemetry |
| `exporter` | string | `"otlp_grpc"` | Exporter type: otlp_grpc, otlp_http, none |
| `endpoint` | string | `localhost:4317` | OTLP collector endpoint |
| `use_tls` | bool | `false` | Enable TLS for exporter connection |
| `tls_ca_cert` | string | `""` | Path to CA certificate file |
| `sampling_ratio` | float | `1.0` | Sampling ratio (0.0-1.0) |
| `batch_size` | uint | `512` | Spans per export batch |
| `batch_delay_ms` | uint | `5000` | Max delay before sending batch (ms) |
| `max_queue_size` | uint | `2048` | Maximum queued spans |
| `trace_transactions` | bool | `true` | Enable transaction tracing |
| `trace_consensus` | bool | `true` | Enable consensus tracing |
| `trace_rpc` | bool | `true` | Enable RPC tracing |
| `trace_peer` | bool | `false` | Enable peer message tracing (high volume) |
| `trace_ledger` | bool | `true` | Enable ledger tracing |
| `trace_pathfind` | bool | `true` | Enable path computation tracing |
| `trace_txq` | bool | `true` | Enable transaction queue tracing |
| `trace_validator` | bool | `false` | Enable validator list/manifest tracing |
| `trace_amendment` | bool | `false` | Enable amendment voting tracing |
| `service_name` | string | `"rippled"` | Service name for traces |
| `service_instance_id` | string | `<node_public_key>` | Instance identifier |
---
## 5.2 Configuration Parser
> **TxQ** = Transaction Queue
```cpp
// src/libxrpl/telemetry/TelemetryConfig.cpp
#include <xrpl/telemetry/Telemetry.h>
#include <xrpl/basics/Log.h>
namespace xrpl {
namespace telemetry {
Telemetry::Setup
setup_Telemetry(
Section const& section,
std::string const& nodePublicKey,
std::string const& version)
{
Telemetry::Setup setup;
// Basic settings
setup.enabled = section.value_or("enabled", false);
setup.serviceName = section.value_or("service_name", "rippled");
setup.serviceVersion = version;
setup.serviceInstanceId = section.value_or(
"service_instance_id", nodePublicKey);
// Exporter settings
setup.exporterType = section.value_or("exporter", "otlp_grpc");
if (setup.exporterType == "otlp_grpc")
setup.exporterEndpoint = section.value_or("endpoint", "localhost:4317");
else if (setup.exporterType == "otlp_http")
setup.exporterEndpoint = section.value_or("endpoint", "localhost:4318");
setup.useTls = section.value_or("use_tls", false);
setup.tlsCertPath = section.value_or("tls_ca_cert", "");
// Sampling
setup.samplingRatio = section.value_or("sampling_ratio", 1.0);
if (setup.samplingRatio < 0.0 || setup.samplingRatio > 1.0)
{
Throw<std::runtime_error>(
"telemetry.sampling_ratio must be between 0.0 and 1.0");
}
// Batch processor
setup.batchSize = section.value_or("batch_size", 512u);
setup.batchDelay = std::chrono::milliseconds{
section.value_or("batch_delay_ms", 5000u)};
setup.maxQueueSize = section.value_or("max_queue_size", 2048u);
// Component filtering
setup.traceTransactions = section.value_or("trace_transactions", true);
setup.traceConsensus = section.value_or("trace_consensus", true);
setup.traceRpc = section.value_or("trace_rpc", true);
setup.tracePeer = section.value_or("trace_peer", false);
setup.traceLedger = section.value_or("trace_ledger", true);
setup.tracePathfind = section.value_or("trace_pathfind", true);
setup.traceTxQ = section.value_or("trace_txq", true);
setup.traceValidator = section.value_or("trace_validator", false);
setup.traceAmendment = section.value_or("trace_amendment", false);
return setup;
}
} // namespace telemetry
} // namespace xrpl
```
---
## 5.3 Application Integration
### 5.3.1 ApplicationImp Changes
```cpp
// src/xrpld/app/main/Application.cpp (modified)
#include <xrpl/telemetry/Telemetry.h>
class ApplicationImp : public Application
{
// ... existing members ...
// Telemetry (must be constructed early, destroyed late)
std::unique_ptr<telemetry::Telemetry> telemetry_;
public:
ApplicationImp(...)
{
// Initialize telemetry early (before other components)
auto telemetrySection = config_->section("telemetry");
auto telemetrySetup = telemetry::setup_Telemetry(
telemetrySection,
toBase58(TokenType::NodePublic, nodeIdentity_.publicKey()),
BuildInfo::getVersionString());
// Set network attributes
telemetrySetup.networkId = config_->NETWORK_ID;
telemetrySetup.networkType = [&]() {
if (config_->NETWORK_ID == 0) return "mainnet";
if (config_->NETWORK_ID == 1) return "testnet";
if (config_->NETWORK_ID == 2) return "devnet";
return "custom";
}();
telemetry_ = telemetry::make_Telemetry(
telemetrySetup,
logs_->journal("Telemetry"));
// ... rest of initialization ...
}
void start() override
{
// Start telemetry first
if (telemetry_)
telemetry_->start();
// ... existing start code ...
}
void stop() override
{
// ... existing stop code ...
// Stop telemetry last (to capture shutdown spans)
if (telemetry_)
telemetry_->stop();
}
telemetry::Telemetry& getTelemetry() override
{
assert(telemetry_);
return *telemetry_;
}
};
```
### 5.3.2 Application Interface Addition
```cpp
// include/xrpl/app/main/Application.h (modified)
namespace telemetry { class Telemetry; }
class Application
{
public:
// ... existing virtual methods ...
/** Get the telemetry system for distributed tracing */
virtual telemetry::Telemetry& getTelemetry() = 0;
};
```
---
## 5.4 CMake Integration
> **OTLP** = OpenTelemetry Protocol
### 5.4.1 Find OpenTelemetry Module
```cmake
# cmake/FindOpenTelemetry.cmake
# Find OpenTelemetry C++ SDK
#
# This module defines:
# OpenTelemetry_FOUND - System has OpenTelemetry
# OpenTelemetry::api - API library target
# OpenTelemetry::sdk - SDK library target
# OpenTelemetry::otlp_grpc_exporter - OTLP gRPC exporter target
# OpenTelemetry::otlp_http_exporter - OTLP HTTP exporter target
find_package(opentelemetry-cpp CONFIG QUIET)
if(opentelemetry-cpp_FOUND)
set(OpenTelemetry_FOUND TRUE)
# Create imported targets if not already created by config
if(NOT TARGET OpenTelemetry::api)
add_library(OpenTelemetry::api ALIAS opentelemetry-cpp::api)
endif()
if(NOT TARGET OpenTelemetry::sdk)
add_library(OpenTelemetry::sdk ALIAS opentelemetry-cpp::sdk)
endif()
if(NOT TARGET OpenTelemetry::otlp_grpc_exporter)
add_library(OpenTelemetry::otlp_grpc_exporter ALIAS
opentelemetry-cpp::otlp_grpc_exporter)
endif()
else()
# Try pkg-config fallback
find_package(PkgConfig QUIET)
if(PKG_CONFIG_FOUND)
pkg_check_modules(OTEL opentelemetry-cpp QUIET)
if(OTEL_FOUND)
set(OpenTelemetry_FOUND TRUE)
# Create imported targets from pkg-config
add_library(OpenTelemetry::api INTERFACE IMPORTED)
target_include_directories(OpenTelemetry::api INTERFACE
${OTEL_INCLUDE_DIRS})
endif()
endif()
endif()
include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(OpenTelemetry
REQUIRED_VARS OpenTelemetry_FOUND)
```
### 5.4.2 CMakeLists.txt Changes
```cmake
# CMakeLists.txt (additions)
# ═══════════════════════════════════════════════════════════════════════════════
# TELEMETRY OPTIONS
# ═══════════════════════════════════════════════════════════════════════════════
option(XRPL_ENABLE_TELEMETRY
"Enable OpenTelemetry distributed tracing support" OFF)
if(XRPL_ENABLE_TELEMETRY)
find_package(OpenTelemetry REQUIRED)
# Define compile-time flag
add_compile_definitions(XRPL_ENABLE_TELEMETRY)
message(STATUS "OpenTelemetry tracing: ENABLED")
else()
message(STATUS "OpenTelemetry tracing: DISABLED")
endif()
# ═══════════════════════════════════════════════════════════════════════════════
# TELEMETRY LIBRARY
# ═══════════════════════════════════════════════════════════════════════════════
if(XRPL_ENABLE_TELEMETRY)
add_library(xrpl_telemetry
src/libxrpl/telemetry/Telemetry.cpp
src/libxrpl/telemetry/TelemetryConfig.cpp
src/libxrpl/telemetry/TraceContext.cpp
)
target_include_directories(xrpl_telemetry
PUBLIC
${CMAKE_CURRENT_SOURCE_DIR}/include
)
target_link_libraries(xrpl_telemetry
PUBLIC
OpenTelemetry::api
OpenTelemetry::sdk
OpenTelemetry::otlp_grpc_exporter
PRIVATE
xrpl_basics
)
# Add to main library dependencies
target_link_libraries(xrpld PRIVATE xrpl_telemetry)
else()
# Create null implementation library
add_library(xrpl_telemetry
src/libxrpl/telemetry/NullTelemetry.cpp
)
target_include_directories(xrpl_telemetry
PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include
)
endif()
```
---
## 5.5 OpenTelemetry Collector Configuration
> **OTLP** = OpenTelemetry Protocol | **APM** = Application Performance Monitoring
### 5.5.1 Development Configuration
```yaml
# otel-collector-dev.yaml
# Minimal configuration for local development
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
processors:
batch:
timeout: 1s
send_batch_size: 100
exporters:
# Console output for debugging
logging:
verbosity: detailed
sampling_initial: 5
sampling_thereafter: 200
  # Grafana Tempo for trace storage and visualization
  otlp/tempo:
    endpoint: tempo:4317
    tls:
      insecure: true
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging, otlp/tempo]
```
### 5.5.2 Production Configuration
```yaml
# otel-collector-prod.yaml
# Production configuration with filtering, sampling, and multiple backends
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
tls:
cert_file: /etc/otel/server.crt
key_file: /etc/otel/server.key
ca_file: /etc/otel/ca.crt
processors:
# Memory limiter to prevent OOM
memory_limiter:
check_interval: 1s
limit_mib: 1000
spike_limit_mib: 200
# Batch processing for efficiency
batch:
timeout: 5s
send_batch_size: 512
send_batch_max_size: 1024
# Tail-based sampling (keep errors and slow traces)
tail_sampling:
decision_wait: 10s
num_traces: 100000
expected_new_traces_per_sec: 1000
policies:
# Always keep error traces
- name: errors
type: status_code
status_code:
status_codes: [ERROR]
# Keep slow consensus rounds (>5s)
- name: slow-consensus
type: latency
latency:
threshold_ms: 5000
# Keep slow RPC requests (>1s)
- name: slow-rpc
type: and
and:
and_sub_policy:
- name: rpc-spans
type: string_attribute
string_attribute:
key: xrpl.rpc.command
values: [".*"]
enabled_regex_matching: true
- name: latency
type: latency
latency:
threshold_ms: 1000
# Probabilistic sampling for the rest
- name: probabilistic
type: probabilistic
probabilistic:
sampling_percentage: 10
# Attribute processing
attributes:
actions:
# Hash sensitive data
- key: xrpl.tx.account
action: hash
# Add deployment info
- key: deployment.environment
value: production
action: upsert
exporters:
# Grafana Tempo for long-term storage
otlp/tempo:
endpoint: tempo.monitoring:4317
tls:
insecure: false
ca_file: /etc/otel/tempo-ca.crt
# Elastic APM for correlation with logs
otlp/elastic:
endpoint: apm.elastic:8200
headers:
Authorization: "Bearer ${ELASTIC_APM_TOKEN}"
extensions:
health_check:
endpoint: 0.0.0.0:13133
zpages:
endpoint: 0.0.0.0:55679
service:
extensions: [health_check, zpages]
pipelines:
traces:
receivers: [otlp]
processors: [memory_limiter, tail_sampling, attributes, batch]
exporters: [otlp/tempo, otlp/elastic]
```
---
## 5.6 Docker Compose Development Environment
> **OTLP** = OpenTelemetry Protocol
```yaml
# docker-compose-telemetry.yaml
version: "3.8"
services:
# OpenTelemetry Collector
otel-collector:
image: otel/opentelemetry-collector-contrib:0.92.0
container_name: otel-collector
command: ["--config=/etc/otel-collector-config.yaml"]
volumes:
- ./otel-collector-dev.yaml:/etc/otel-collector-config.yaml:ro
ports:
- "4317:4317" # OTLP gRPC
- "4318:4318" # OTLP HTTP
- "13133:13133" # Health check
depends_on:
- tempo
  # Grafana Tempo for trace storage and visualization
  tempo:
    image: grafana/tempo:2.7.2
    container_name: tempo
    command: ["-config.file=/etc/tempo.yaml"]
    volumes:
      - ./tempo.yaml:/etc/tempo.yaml:ro
      - tempo-data:/var/tempo
    ports:
      - "3200:3200" # Tempo HTTP API
      - "4317" # OTLP gRPC (internal)
# Grafana for dashboards
grafana:
image: grafana/grafana:10.2.3
container_name: grafana
environment:
- GF_AUTH_ANONYMOUS_ENABLED=true
- GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
volumes:
- ./grafana/provisioning:/etc/grafana/provisioning:ro
- ./grafana/dashboards:/var/lib/grafana/dashboards:ro
ports:
- "3000:3000"
    depends_on:
      - tempo
# Prometheus for metrics (optional, for correlation)
prometheus:
image: prom/prometheus:v2.48.1
container_name: prometheus
volumes:
- ./prometheus.yaml:/etc/prometheus/prometheus.yml:ro
ports:
- "9090:9090"
volumes:
  tempo-data:
networks:
  default:
    name: rippled-telemetry
```
---
## 5.7 Configuration Architecture
> **OTLP** = OpenTelemetry Protocol
```mermaid
flowchart TB
subgraph config["Configuration Sources"]
cfgFile["xrpld.cfg<br/>[telemetry] section"]
cmake["CMake<br/>XRPL_ENABLE_TELEMETRY"]
end
subgraph init["Initialization"]
parse["setup_Telemetry()"]
factory["make_Telemetry()"]
end
subgraph runtime["Runtime Components"]
tracer["TracerProvider"]
exporter["OTLP Exporter"]
processor["BatchProcessor"]
end
subgraph collector["Collector Pipeline"]
recv["Receivers"]
proc["Processors"]
exp["Exporters"]
end
cfgFile --> parse
cmake -->|"compile flag"| parse
parse --> factory
factory --> tracer
tracer --> processor
processor --> exporter
exporter -->|"OTLP"| recv
recv --> proc
proc --> exp
style config fill:#e3f2fd,stroke:#1976d2
style runtime fill:#e8f5e9,stroke:#388e3c
style collector fill:#fff3e0,stroke:#ff9800
```
**Reading the diagram:**
- **Configuration Sources**: `xrpld.cfg` provides runtime settings (endpoint, sampling) while the CMake flag controls whether telemetry is compiled in at all.
- **Initialization**: `setup_Telemetry()` parses config values, then `make_Telemetry()` constructs the provider, processor, and exporter objects.
- **Runtime Components**: The `TracerProvider` creates spans, the `BatchProcessor` buffers them, and the `OTLP Exporter` serializes and sends them over the wire.
- **OTLP arrow to Collector**: Trace data leaves the rippled process via OTLP (gRPC or HTTP) and enters the external Collector pipeline.
- **Collector Pipeline**: `Receivers` ingest OTLP data, `Processors` apply sampling/filtering/enrichment, and `Exporters` forward traces to storage backends (Tempo, etc.).
---
## 5.8 Grafana Integration
> **APM** = Application Performance Monitoring
Step-by-step instructions for integrating rippled traces with Grafana.
### 5.8.1 Data Source Configuration
#### Tempo (Recommended)
```yaml
# grafana/provisioning/datasources/tempo.yaml
apiVersion: 1
datasources:
- name: Tempo
type: tempo
access: proxy
url: http://tempo:3200
jsonData:
httpMethod: GET
tracesToLogs:
datasourceUid: loki
tags: ["service.name", "xrpl.tx.hash"]
mappedTags: [{ key: "trace_id", value: "traceID" }]
mapTagNamesEnabled: true
filterByTraceID: true
serviceMap:
datasourceUid: prometheus
nodeGraph:
enabled: true
search:
hide: false
lokiSearch:
datasourceUid: loki
```
#### Elastic APM
```yaml
# grafana/provisioning/datasources/elastic-apm.yaml
apiVersion: 1
datasources:
- name: Elasticsearch-APM
type: elasticsearch
access: proxy
url: http://elasticsearch:9200
database: "apm-*"
jsonData:
esVersion: "8.0.0"
timeField: "@timestamp"
logMessageField: message
logLevelField: log.level
```
### 5.8.2 Dashboard Provisioning
```yaml
# grafana/provisioning/dashboards/dashboards.yaml
apiVersion: 1
providers:
- name: "rippled-dashboards"
orgId: 1
folder: "rippled"
folderUid: "rippled"
type: file
disableDeletion: false
updateIntervalSeconds: 30
options:
path: /var/lib/grafana/dashboards/rippled
```
### 5.8.3 Example Dashboard: RPC Performance
```json
{
"title": "rippled RPC Performance",
"uid": "rippled-rpc-performance",
"panels": [
{
"title": "RPC Latency by Command",
"type": "heatmap",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && span.xrpl.rpc.command != \"\"} | histogram_over_time(duration) by (span.xrpl.rpc.command)"
}
],
"gridPos": { "h": 8, "w": 12, "x": 0, "y": 0 }
},
{
"title": "RPC Error Rate",
"type": "timeseries",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && status.code=error} | rate() by (span.xrpl.rpc.command)"
}
],
"gridPos": { "h": 8, "w": 12, "x": 12, "y": 0 }
},
{
"title": "Top 10 Slowest RPC Commands",
"type": "table",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && span.xrpl.rpc.command != \"\"} | avg(duration) by (span.xrpl.rpc.command) | topk(10)"
}
],
"gridPos": { "h": 8, "w": 24, "x": 0, "y": 8 }
},
{
"title": "Recent Traces",
"type": "table",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\"}"
}
],
"gridPos": { "h": 8, "w": 24, "x": 0, "y": 16 }
}
]
}
```
### 5.8.4 Example Dashboard: Transaction Tracing
```json
{
"title": "rippled Transaction Tracing",
"uid": "rippled-tx-tracing",
"panels": [
{
"title": "Transaction Throughput",
"type": "stat",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=\"tx.receive\"} | rate()"
}
],
"gridPos": { "h": 4, "w": 6, "x": 0, "y": 0 }
},
{
"title": "Cross-Node Relay Count",
"type": "timeseries",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=\"tx.relay\"} | avg(span.xrpl.tx.relay_count)"
}
],
"gridPos": { "h": 8, "w": 12, "x": 0, "y": 4 }
},
{
"title": "Transaction Validation Errors",
"type": "table",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=\"tx.validate\" && status.code=error}"
}
],
"gridPos": { "h": 8, "w": 12, "x": 12, "y": 4 }
}
]
}
```
### 5.8.5 TraceQL Query Examples
Common queries for rippled traces:
```
# Find all traces for a specific transaction hash
{resource.service.name="rippled" && span.xrpl.tx.hash="ABC123..."}
# Find slow RPC commands (>100ms)
{resource.service.name="rippled" && name=~"rpc.command.*"} | duration > 100ms
# Find consensus rounds taking >5 seconds
{resource.service.name="rippled" && name="consensus.round"} | duration > 5s
# Find failed transactions with error details
{resource.service.name="rippled" && name="tx.validate" && status.code=error}
# Find transactions relayed to many peers
{resource.service.name="rippled" && name="tx.relay"} | span.xrpl.tx.relay_count > 10
# Compare latency across nodes
{resource.service.name="rippled" && name="rpc.command.account_info"} | avg(duration) by (resource.service.instance.id)
```
### 5.8.6 Correlation with PerfLog
To correlate OpenTelemetry traces with existing PerfLog data:
**Step 1: Configure Loki to ingest PerfLog**
```yaml
# promtail-config.yaml
scrape_configs:
- job_name: rippled-perflog
static_configs:
- targets:
- localhost
labels:
job: rippled
__path__: /var/log/rippled/perf*.log
pipeline_stages:
- json:
expressions:
trace_id: trace_id
ledger_seq: ledger_seq
tx_hash: tx_hash
- labels:
trace_id:
ledger_seq:
tx_hash:
```
**Step 2: Add trace_id to PerfLog entries**
Modify PerfLog to include trace_id when available:
```cpp
// In PerfLog output, add trace_id from current span context
void logPerf(Json::Value& entry) {
auto span = opentelemetry::trace::GetSpan(
opentelemetry::context::RuntimeContext::GetCurrent());
if (span && span->GetContext().IsValid()) {
        // ToLowerBase16 fills a buffer of exactly 32 chars (no NUL)
        char traceIdHex[32];
        span->GetContext().trace_id().ToLowerBase16(traceIdHex);
        entry["trace_id"] = std::string(traceIdHex, 32);
}
// ... existing logging
}
```
**Step 3: Configure Grafana trace-to-logs link**
In Tempo data source configuration, set up the derived field:
```yaml
jsonData:
tracesToLogs:
datasourceUid: loki
tags: ["trace_id", "xrpl.tx.hash"]
filterByTraceID: true
filterBySpanID: false
```
### 5.8.7 Correlation with Insight/OTel System Metrics
To correlate traces with Beast Insight system metrics:
**Step 1: Export Insight metrics to Prometheus**
Beast Insight metrics are exported natively via OTLP to the OTel Collector,
which exposes them on the Prometheus endpoint alongside spanmetrics. No
separate StatsD exporter is needed when using `server=otel`.
```ini
# xrpld.cfg — native OTel metrics (recommended)
[insight]
server=otel
endpoint=http://localhost:4318/v1/metrics
prefix=rippled
```
**Step 2: Add exemplars to metrics**
OpenTelemetry SDK automatically adds exemplars (trace IDs) to metrics when using the Prometheus exporter. This links metrics spikes to specific traces.
**Step 3: Configure Grafana metric-to-trace link**
```yaml
# In Prometheus data source
jsonData:
exemplarTraceIdDestinations:
- name: trace_id
datasourceUid: tempo
```
**Step 4: Dashboard panel with exemplars**
```json
{
"title": "RPC Latency with Trace Links",
"type": "timeseries",
"datasource": "Prometheus",
"targets": [
{
"expr": "histogram_quantile(0.99, rate(rippled_rpc_duration_seconds_bucket[5m]))",
"exemplar": true
}
]
}
```
This allows clicking on metric data points to jump directly to the related trace.
---
_Previous: [Code Samples](./04-code-samples.md)_ | _Next: [Implementation Phases](./06-implementation-phases.md)_ | _Back to: [Overview](./OpenTelemetryPlan.md)_
# Observability Backend Recommendations
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Related**: [Implementation Phases](./06-implementation-phases.md) | [Appendix](./08-appendix.md)
---
## 7.1 Development/Testing Backends
> **OTLP** = OpenTelemetry Protocol
| Backend | Pros | Cons | Use Case |
| ---------- | ----------------------------------- | ---------------------- | ------------------- |
| **Tempo** | Cost-effective, Grafana integration | Requires Grafana stack | Local dev, CI, Prod |
| **Zipkin** | Simple, lightweight | Basic features | Quick prototyping |
### Quick Start with Tempo
```bash
# Start Tempo with OTLP support
docker run -d --name tempo \
-p 3200:3200 \
-p 4317:4317 \
-p 4318:4318 \
  grafana/tempo:2.7.2
```
---
## 7.2 Production Backends
> **APM** = Application Performance Monitoring
| Backend | Pros | Cons | Use Case |
| ----------------- | ----------------------------------------- | ---------------------- | --------------------------- |
| **Grafana Tempo** | Cost-effective, Grafana integration | Requires Grafana stack | Most production deployments |
| **Elastic APM** | Full observability stack, log correlation | Resource intensive | Existing Elastic users |
| **Honeycomb** | Excellent query, high cardinality | SaaS cost | Deep debugging needs |
| **Datadog APM** | Full platform, easy setup | SaaS cost | Enterprise with budget |
### Backend Selection Flowchart
```mermaid
flowchart TD
start[Select Backend] --> budget{Budget<br/>Constraints?}
budget -->|Yes| oss[Open Source]
budget -->|No| saas{Prefer<br/>SaaS?}
oss --> existing{Existing<br/>Stack?}
existing -->|Grafana| tempo[Grafana Tempo]
existing -->|Elastic| elastic[Elastic APM]
existing -->|None| tempo
saas -->|Yes| enterprise{Enterprise<br/>Support?}
saas -->|No| oss
enterprise -->|Yes| datadog[Datadog APM]
enterprise -->|No| honeycomb[Honeycomb]
tempo --> final[Configure Collector]
elastic --> final
honeycomb --> final
datadog --> final
style start fill:#0f172a,stroke:#020617,color:#fff
style budget fill:#334155,stroke:#1e293b,color:#fff
style oss fill:#1e293b,stroke:#0f172a,color:#fff
style existing fill:#334155,stroke:#1e293b,color:#fff
style saas fill:#334155,stroke:#1e293b,color:#fff
style enterprise fill:#334155,stroke:#1e293b,color:#fff
style final fill:#0f172a,stroke:#020617,color:#fff
style tempo fill:#1b5e20,stroke:#0d3d14,color:#fff
style elastic fill:#bf360c,stroke:#8c2809,color:#fff
style honeycomb fill:#0d47a1,stroke:#082f6a,color:#fff
style datadog fill:#4a148c,stroke:#2e0d57,color:#fff
```
**Reading the diagram:**
- **Budget Constraints? (Yes)**: Leads to open-source options. If you already run Grafana or Elastic, pick the matching backend; otherwise default to Grafana Tempo.
- **Budget Constraints? (No) → Prefer SaaS?**: If you want a managed service, choose between Datadog (enterprise support) and Honeycomb (developer-focused). If not, fall back to open-source.
- **Terminal nodes (Tempo / Elastic / Honeycomb / Datadog)**: Each represents a concrete backend choice, all of which feed into the same final step.
- **Configure Collector**: Regardless of backend, you always finish by configuring the OTel Collector to export to your chosen destination.
---
## 7.3 Recommended Production Architecture
> **OTLP** = OpenTelemetry Protocol | **APM** = Application Performance Monitoring | **HA** = High Availability
```mermaid
flowchart TB
subgraph validators["Validator Nodes"]
v1[rippled<br/>Validator 1]
v2[rippled<br/>Validator 2]
end
subgraph stock["Stock Nodes"]
s1[rippled<br/>Stock 1]
s2[rippled<br/>Stock 2]
end
subgraph collector["OTel Collector Cluster"]
c1[Collector<br/>DC1]
c2[Collector<br/>DC2]
end
subgraph backends["Storage Backends"]
tempo[(Grafana<br/>Tempo)]
elastic[(Elastic<br/>APM)]
archive[(S3/GCS<br/>Archive)]
end
subgraph ui["Visualization"]
grafana[Grafana<br/>Dashboards]
end
v1 -->|OTLP| c1
v2 -->|OTLP| c1
s1 -->|OTLP| c2
s2 -->|OTLP| c2
c1 --> tempo
c1 --> elastic
c2 --> tempo
c2 --> archive
tempo --> grafana
elastic --> grafana
%% Note: simplified single-collector-per-DC topology shown for clarity
style validators fill:#b71c1c,stroke:#7f1d1d,color:#ffffff
style stock fill:#0d47a1,stroke:#082f6a,color:#ffffff
style collector fill:#bf360c,stroke:#8c2809,color:#ffffff
style backends fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style ui fill:#4a148c,stroke:#2e0d57,color:#ffffff
```
**Reading the diagram:**
- **Validator / Stock Nodes**: All rippled nodes emit trace data via OTLP. Validators and stock nodes are grouped separately because they may reside in different network zones.
- **Collector Cluster (DC1, DC2)**: Regional collectors receive OTLP from nodes in their datacenter, apply processing (sampling, enrichment), and fan out to multiple backends.
- **Storage Backends**: Tempo and Elastic provide queryable trace storage; S3/GCS Archive provides long-term cold storage for compliance or post-incident analysis.
- **Grafana Dashboards**: The single visualization layer that queries both Tempo and Elastic, giving operators a unified view of all traces.
- **Data flow direction**: Nodes → Collectors → Storage → Grafana. Each arrow represents a network hop; minimizing collector-to-backend hops reduces latency.
> **Note**: Production deployments should use multiple collector instances behind a load balancer for high availability. The diagram shows a simplified single-collector topology for clarity.
---
## 7.4 Architecture Considerations
### 7.4.1 Collector Placement
| Strategy | Description | Pros | Cons |
| ------------- | -------------------- | ------------------------ | ----------------------- |
| **Sidecar** | Collector per node | Isolation, simple config | Resource overhead |
| **DaemonSet** | Collector per host | Shared resources | Complexity |
| **Gateway** | Central collector(s) | Centralized processing | Single point of failure |
**Recommendation**: Use **Gateway** pattern with regional collectors for rippled networks:
- One collector cluster per datacenter/region
- Tail-based sampling at collector level
- Multiple export destinations for redundancy
### 7.4.2 Sampling Strategy
```mermaid
flowchart LR
subgraph head["Head Sampling (Node)"]
hs[Node-level head sampling<br/>configurable, default: 100%<br/>recommended production: 10%]
end
subgraph tail["Tail Sampling (Collector)"]
ts1[Keep all errors]
ts2[Keep slow >5s]
ts3[Keep 10% rest]
end
head --> tail
ts1 --> final[Final Traces]
ts2 --> final
ts3 --> final
style head fill:#0d47a1,stroke:#082f6a,color:#fff
style tail fill:#1b5e20,stroke:#0d3d14,color:#fff
style hs fill:#0d47a1,stroke:#082f6a,color:#fff
style ts1 fill:#1b5e20,stroke:#0d3d14,color:#fff
style ts2 fill:#1b5e20,stroke:#0d3d14,color:#fff
style ts3 fill:#1b5e20,stroke:#0d3d14,color:#fff
style final fill:#bf360c,stroke:#8c2809,color:#fff
```
**Reading the diagram:**
- **Head Sampling (Node)**: The first filter -- each rippled node decides whether to sample a trace at creation time (default 100%, recommended 10% in production). This controls the volume leaving the node.
- **Tail Sampling (Collector)**: The second filter -- the collector inspects completed traces and applies rules: keep all errors, keep anything slower than 5 seconds, and keep 10% of the remainder.
- **Arrow head → tail**: All head-sampled traces flow to the collector, where tail sampling further reduces volume while preserving the most valuable data.
- **Final Traces**: The output after both sampling stages; this is what gets stored and queried. The two-stage approach balances cost with debuggability.
### 7.4.3 Data Retention
| Environment | Hot Storage | Warm Storage | Cold Archive |
| ----------- | ----------- | ------------ | ------------ |
| Development | 24 hours | N/A | N/A |
| Staging | 7 days | N/A | N/A |
| Production  | 7 days      | 30 days      | 1+ years (S3/GCS) |
---
## 7.5 Integration Checklist
- [ ] Choose primary backend (Tempo recommended for cost/features)
- [ ] Deploy collector cluster with high availability
- [ ] Configure tail-based sampling for error/latency traces
- [ ] Set up Grafana dashboards for trace visualization
- [ ] Configure alerts for trace anomalies
- [ ] Establish data retention policies
- [ ] Test trace correlation with logs and metrics
---
## 7.6 Grafana Dashboard Examples
Pre-built dashboards for rippled observability.
### 7.6.1 Consensus Health Dashboard
```json
{
"title": "rippled Consensus Health",
"uid": "rippled-consensus-health",
"tags": ["rippled", "consensus", "tracing"],
"panels": [
{
"title": "Consensus Round Duration",
"type": "timeseries",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=\"consensus.round\"} | avg(duration) by (resource.service.instance.id)"
}
],
"fieldConfig": {
"defaults": {
"unit": "ms",
"thresholds": {
"steps": [
{ "color": "green", "value": null },
{ "color": "yellow", "value": 4000 },
{ "color": "red", "value": 5000 }
]
}
}
},
"gridPos": { "h": 8, "w": 12, "x": 0, "y": 0 }
},
{
"title": "Phase Duration Breakdown",
"type": "barchart",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=~\"consensus.phase.*\"} | avg(duration) by (name)"
}
],
"gridPos": { "h": 8, "w": 12, "x": 12, "y": 0 }
},
{
"title": "Proposers per Round",
"type": "stat",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=\"consensus.round\"} | avg(span.xrpl.consensus.proposers)"
}
],
"gridPos": { "h": 4, "w": 6, "x": 0, "y": 8 }
},
{
"title": "Recent Slow Rounds (>5s)",
"type": "table",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=\"consensus.round\"} | duration > 5s"
}
],
"gridPos": { "h": 8, "w": 24, "x": 0, "y": 12 }
}
]
}
```
### 7.6.2 Node Overview Dashboard
```json
{
"title": "rippled Node Overview",
"uid": "rippled-node-overview",
"panels": [
{
"title": "Active Nodes",
"type": "stat",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\"} | count_over_time() by (resource.service.instance.id) | count()"
}
],
"gridPos": { "h": 4, "w": 4, "x": 0, "y": 0 }
},
{
"title": "Total Transactions (1h)",
"type": "stat",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=\"tx.receive\"} | count()"
}
],
"gridPos": { "h": 4, "w": 4, "x": 4, "y": 0 }
},
{
"title": "Error Rate",
"type": "gauge",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && status.code=error} | rate() / {resource.service.name=\"rippled\"} | rate() * 100"
}
],
"fieldConfig": {
"defaults": {
"unit": "percent",
"max": 10,
"thresholds": {
"steps": [
{ "color": "green", "value": null },
{ "color": "yellow", "value": 1 },
{ "color": "red", "value": 5 }
]
}
}
},
"gridPos": { "h": 4, "w": 4, "x": 8, "y": 0 }
},
{
"title": "Service Map",
"type": "nodeGraph",
"datasource": "Tempo",
"gridPos": { "h": 12, "w": 12, "x": 12, "y": 0 }
}
]
}
```
### 7.6.3 Alert Rules
```yaml
# grafana/provisioning/alerting/rippled-alerts.yaml
apiVersion: 1
groups:
- name: rippled-tracing-alerts
folder: rippled
interval: 1m
rules:
- uid: consensus-slow
title: Consensus Round Slow
condition: A
data:
- refId: A
datasourceUid: tempo
model:
queryType: traceql
query: '{resource.service.name="rippled" && name="consensus.round"} | avg(duration) > 5s'
# Note: Verify TraceQL aggregate queries are supported by your
# Tempo version. Aggregate alerting (e.g., avg(duration)) requires
# Tempo 2.3+ with TraceQL metrics enabled.
for: 5m
annotations:
summary: Consensus rounds taking >5 seconds
description: "Consensus duration: {{ $value }}ms"
labels:
severity: warning
- uid: rpc-error-spike
title: RPC Error Rate Spike
condition: B
data:
- refId: B
datasourceUid: tempo
model:
queryType: traceql
query: '{resource.service.name="rippled" && name=~"rpc.command.*" && status.code=error} | rate() > 0.05'
# Note: Verify TraceQL aggregate queries are supported by your
# Tempo version. Aggregate alerting (e.g., rate()) requires
# Tempo 2.3+ with TraceQL metrics enabled.
for: 2m
annotations:
summary: RPC error rate >5%
labels:
severity: critical
- uid: tx-throughput-drop
title: Transaction Throughput Drop
condition: C
data:
- refId: C
datasourceUid: tempo
model:
queryType: traceql
query: '{resource.service.name="rippled" && name="tx.receive"} | rate() < 10'
for: 10m
annotations:
summary: Transaction throughput below threshold
labels:
severity: warning
```
---
## 7.7 PerfLog and Insight Correlation
> **OTLP** = OpenTelemetry Protocol
How to correlate OpenTelemetry traces with existing rippled observability.
### 7.7.1 Correlation Architecture
```mermaid
flowchart TB
subgraph rippled["rippled Node"]
otel[OpenTelemetry<br/>Spans]
perflog[PerfLog<br/>JSON Logs]
insight[Beast Insight<br/>StatsD Metrics]
end
subgraph collectors["Data Collection"]
otelc[OTel Collector]
promtail[Promtail/Fluentd]
statsd[StatsD Exporter]
end
subgraph storage["Storage"]
tempo[(Tempo)]
loki[(Loki)]
prom[(Prometheus)]
end
subgraph grafana["Grafana"]
traces[Trace View]
logs[Log View]
metrics[Metrics View]
corr[Correlation<br/>Panel]
end
otel -->|OTLP| otelc --> tempo
perflog -->|JSON| promtail --> loki
insight -->|StatsD| statsd --> prom
tempo --> traces
loki --> logs
prom --> metrics
traces --> corr
logs --> corr
metrics --> corr
style rippled fill:#0d47a1,stroke:#082f6a,color:#fff
style collectors fill:#bf360c,stroke:#8c2809,color:#fff
style storage fill:#1b5e20,stroke:#0d3d14,color:#fff
style grafana fill:#4a148c,stroke:#2e0d57,color:#fff
style otel fill:#0d47a1,stroke:#082f6a,color:#fff
style perflog fill:#0d47a1,stroke:#082f6a,color:#fff
style insight fill:#0d47a1,stroke:#082f6a,color:#fff
style otelc fill:#bf360c,stroke:#8c2809,color:#fff
style promtail fill:#bf360c,stroke:#8c2809,color:#fff
style statsd fill:#bf360c,stroke:#8c2809,color:#fff
style tempo fill:#1b5e20,stroke:#0d3d14,color:#fff
style loki fill:#1b5e20,stroke:#0d3d14,color:#fff
style prom fill:#1b5e20,stroke:#0d3d14,color:#fff
style traces fill:#4a148c,stroke:#2e0d57,color:#fff
style logs fill:#4a148c,stroke:#2e0d57,color:#fff
style metrics fill:#4a148c,stroke:#2e0d57,color:#fff
style corr fill:#4a148c,stroke:#2e0d57,color:#fff
```
**Reading the diagram:**
- **rippled Node (three sources)**: A single node emits three independent data streams -- OpenTelemetry spans, PerfLog JSON logs, and Beast Insight StatsD metrics.
- **Data Collection layer**: Each stream has its own collector -- OTel Collector for spans, Promtail/Fluentd for logs, and a StatsD exporter for metrics. They operate independently.
- **Storage layer (Tempo, Loki, Prometheus)**: Each data type lands in a purpose-built store optimized for its query patterns (trace search, log grep, metric aggregation).
- **Grafana Correlation Panel**: The key integration point -- Grafana queries all three stores and links them via shared fields (`trace_id`, `xrpl.tx.hash`, `ledger_seq`), enabling a single-pane debugging experience.
### 7.7.2 Correlation Fields
| Source | Field | Link To | Purpose |
| ----------- | --------------------------- | ------------- | -------------------------- |
| **Trace** | `trace_id` | Logs | Find log entries for trace |
| **Trace** | `xrpl.tx.hash` | Logs, Metrics | Find TX-related data |
| **Trace** | `xrpl.consensus.ledger.seq` | Logs | Find ledger-related logs |
| **PerfLog** | `trace_id` (new) | Traces | Jump to trace from log |
| **PerfLog** | `ledger_seq` | Traces | Find consensus trace |
| **Insight** | `exemplar.trace_id` | Traces | Jump from metric spike |
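The PerfLog `trace_id` field (marked "new" in the table) amounts to a small enrichment step at log-emission time. A minimal sketch, assuming a JSON-lines log format; the field names mirror the correlation table above and are illustrative, not rippled's actual PerfLog schema:

```python
import json

def enrich_perflog_record(record: dict, trace_id: str, span_id: str) -> str:
    """Attach W3C trace identifiers to a PerfLog-style JSON record.

    Illustrative only: `trace_id`/`span_id` field names are assumptions
    mirroring the correlation table, not the real PerfLog schema.
    """
    enriched = dict(record)
    enriched["trace_id"] = trace_id  # 32 hex chars (128-bit trace ID)
    enriched["span_id"] = span_id    # 16 hex chars (64-bit span ID)
    return json.dumps(enriched, sort_keys=True)

line = enrich_perflog_record(
    {"job": "rippled", "ledger_seq": 91234567, "method": "server_info"},
    trace_id="4bf92f3577b34da6a3ce929d0e0e4736",
    span_id="00f067aa0ba902b7",
)
print(line)
```

Because the trace ID is embedded verbatim in the log line, the Loki line-filter query shown in §7.7.3 (`|= "<trace_id>"`) finds it without any parsing stage.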
### 7.7.3 Example: Debugging a Slow Transaction
**Step 1: Find the trace**
```
# In Grafana Explore with Tempo
{resource.service.name="rippled" && span.xrpl.tx.hash="ABC123..."}
```
**Step 2: Get the trace_id from the trace view**
```
Trace ID: 4bf92f3577b34da6a3ce929d0e0e4736
```
**Step 3: Find related PerfLog entries**
```
# In Grafana Explore with Loki
{job="rippled"} |= "4bf92f3577b34da6a3ce929d0e0e4736"
```
**Step 4: Check Insight metrics for the time window**
```
# In Grafana with Prometheus. The PromQL `@` modifier pins the query to a
# Unix timestamp; substitute the trace's start time.
rate(rippled_tx_applied_total[1m]) @ <trace_start_unix_ts>
```
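The query-building parts of the four steps above can be scripted. A hedged sketch that only constructs the TraceQL and LogQL strings (the Tempo/Loki HTTP endpoints, API paths, and auth are deployment-specific and deliberately omitted):

```python
def traceql_for_tx(tx_hash: str) -> str:
    """Step 1: TraceQL query locating the trace for a transaction hash."""
    return f'{{resource.service.name="rippled" && span.xrpl.tx.hash="{tx_hash}"}}'

def logql_for_trace(trace_id: str, job: str = "rippled") -> str:
    """Step 3: LogQL line-filter query finding PerfLog entries for a trace."""
    return f'{{job="{job}"}} |= "{trace_id}"'

q1 = traceql_for_tx("ABC123")
q3 = logql_for_trace("4bf92f3577b34da6a3ce929d0e0e4736")
```

Step 2 (extracting the `trace_id` from the Tempo search response) depends on the Tempo API version in use, so it is left to the operator.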
### 7.7.4 Unified Dashboard Example
```json
{
"title": "rippled Unified Observability",
"uid": "rippled-unified",
"panels": [
{
"title": "Transaction Latency (Traces)",
"type": "timeseries",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=\"tx.receive\"} | histogram_over_time(duration)"
}
],
"gridPos": { "h": 6, "w": 8, "x": 0, "y": 0 }
},
{
"title": "Transaction Rate (Metrics)",
"type": "timeseries",
"datasource": "Prometheus",
"targets": [
{
"expr": "rate(rippled_tx_received_total[5m])",
"legendFormat": "{{ instance }}"
}
],
"fieldConfig": {
"defaults": {
"links": [
{
"title": "View traces",
"url": "/explore?left={\"datasource\":\"Tempo\",\"query\":\"{resource.service.name=\\\"rippled\\\" && name=\\\"tx.receive\\\"}\"}"
}
]
}
},
"gridPos": { "h": 6, "w": 8, "x": 8, "y": 0 }
},
{
"title": "Recent Logs",
"type": "logs",
"datasource": "Loki",
"targets": [
{
"expr": "{job=\"rippled\"} | json"
}
],
"gridPos": { "h": 6, "w": 8, "x": 16, "y": 0 }
},
{
"title": "Trace Search",
"type": "table",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\"}"
}
],
"fieldConfig": {
"overrides": [
{
"matcher": { "id": "byName", "options": "traceID" },
"properties": [
{
"id": "links",
"value": [
{
"title": "View trace",
"url": "/explore?left={\"datasource\":\"Tempo\",\"query\":\"${__value.raw}\"}"
},
{
"title": "View logs",
"url": "/explore?left={\"datasource\":\"Loki\",\"query\":\"{job=\\\"rippled\\\"} |= \\\"${__value.raw}\\\"\"}"
}
]
}
]
}
]
},
"gridPos": { "h": 12, "w": 24, "x": 0, "y": 6 }
}
]
}
```
---
_Previous: [Implementation Phases](./06-implementation-phases.md)_ | _Next: [Appendix](./08-appendix.md)_ | _Back to: [Overview](./OpenTelemetryPlan.md)_


@@ -0,0 +1,265 @@
# Appendix
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Related**: [Observability Backends](./07-observability-backends.md)
---
## 8.1 Glossary
> **OTLP** = OpenTelemetry Protocol | **TxQ** = Transaction Queue
| Term | Definition |
| --------------------- | ---------------------------------------------------------- |
| **Span** | A unit of work with start/end time, name, and attributes |
| **Trace** | A collection of spans representing a complete request flow |
| **Trace ID** | 128-bit unique identifier for a trace |
| **Span ID** | 64-bit unique identifier for a span within a trace |
| **Context** | Carrier for trace/span IDs across boundaries |
| **Propagator** | Component that injects/extracts context |
| **Sampler** | Decides which traces to record |
| **Exporter** | Sends spans to backend |
| **Collector** | Receives, processes, and forwards telemetry |
| **OTLP** | OpenTelemetry Protocol (wire format) |
| **W3C Trace Context** | Standard HTTP headers for trace propagation |
| **Baggage** | Key-value pairs propagated across service boundaries |
| **Resource** | Entity producing telemetry (service, host, etc.) |
| **Instrumentation** | Code that creates telemetry data |
### rippled-Specific Terms
| Term | Definition |
| ----------------- | ------------------------------------------------------------- |
| **Overlay** | P2P network layer managing peer connections |
| **Consensus** | XRP Ledger consensus algorithm (RCL) |
| **Proposal** | Validator's suggested transaction set for a ledger |
| **Validation** | Validator's signature on a closed ledger |
| **HashRouter** | Component for transaction deduplication |
| **JobQueue** | Thread pool for asynchronous task execution |
| **PerfLog** | Existing performance logging system in rippled |
| **Beast Insight** | Existing metrics framework in rippled |
| **PathFinding** | Payment path computation engine for cross-currency payments |
| **TxQ** | Transaction queue managing fee-based prioritization |
| **LoadManager** | Dynamic fee escalation based on network load |
| **SHAMap** | SHA-256 hash-based map (Merkle trie variant) for ledger state |
### Phase 9–11 Terms
| Term | Definition |
| --------------------------- | ------------------------------------------------------------------------- |
| **MetricsRegistry** | Centralized class for OTel async gauge registrations (Phase 9) |
| **ObservableGauge** | OTel Metrics SDK async instrument polled via callback at fixed intervals |
| **PeriodicMetricReader** | OTel SDK component that invokes gauge callbacks at configurable intervals |
| **CountedObject** | rippled template that tracks live instance counts via atomic counters |
| **TxQ** | Transaction queue managing fee escalation and ordering |
| **Load Factor** | Combined multiplier affecting transaction cost (local, cluster, network) |
| **OTel Collector Receiver** | Custom Go plugin that polls rippled RPC and emits OTel metrics (Phase 11) |
---
## 8.2 Span Hierarchy Visualization
> **TxQ** = Transaction Queue
```mermaid
flowchart TB
subgraph trace["Trace: Transaction Lifecycle"]
rpc["rpc.request<br/>(entry point)"]
validate["tx.validate"]
relay["tx.relay<br/>(parent span)"]
subgraph peers["Peer Spans"]
p1["peer.send<br/>Peer A"]
p2["peer.send<br/>Peer B"]
p3["peer.send<br/>Peer C"]
end
subgraph pathfinding["PathFinding Spans"]
pathfind["pathfind.request"]
pathcomp["pathfind.compute"]
end
consensus["consensus.round"]
apply["tx.apply"]
subgraph txqueue["TxQ Spans"]
txq["txq.enqueue"]
txqApply["txq.apply"]
end
feeCalc["fee.escalate"]
end
subgraph validators["Validator Spans"]
valFetch["validator.list.fetch"]
valManifest["validator.manifest"]
end
rpc --> validate
rpc --> pathfind
pathfind --> pathcomp
validate --> relay
relay --> p1
relay --> p2
relay --> p3
p1 -.->|"context propagation"| consensus
consensus --> apply
apply --> txq
txq --> txqApply
txq --> feeCalc
style trace fill:#0f172a,stroke:#020617,color:#fff
style peers fill:#1e3a8a,stroke:#172554,color:#fff
style pathfinding fill:#134e4a,stroke:#0f766e,color:#fff
style txqueue fill:#064e3b,stroke:#047857,color:#fff
style validators fill:#4c1d95,stroke:#6d28d9,color:#fff
style rpc fill:#1d4ed8,stroke:#1e40af,color:#fff
style validate fill:#047857,stroke:#064e3b,color:#fff
style relay fill:#047857,stroke:#064e3b,color:#fff
style p1 fill:#0e7490,stroke:#155e75,color:#fff
style p2 fill:#0e7490,stroke:#155e75,color:#fff
style p3 fill:#0e7490,stroke:#155e75,color:#fff
style consensus fill:#fef3c7,stroke:#fde68a,color:#1e293b
style apply fill:#047857,stroke:#064e3b,color:#fff
style pathfind fill:#0e7490,stroke:#155e75,color:#fff
style pathcomp fill:#0e7490,stroke:#155e75,color:#fff
style txq fill:#047857,stroke:#064e3b,color:#fff
style txqApply fill:#047857,stroke:#064e3b,color:#fff
style feeCalc fill:#047857,stroke:#064e3b,color:#fff
style valFetch fill:#6d28d9,stroke:#4c1d95,color:#fff
style valManifest fill:#6d28d9,stroke:#4c1d95,color:#fff
```
**Reading the diagram:**
- **rpc.request (blue, top)**: The entry point — every traced transaction starts as an RPC call; this root span is the parent of all downstream work.
- **tx.validate and pathfind.request (green/teal, first fork)**: The RPC request fans out into transaction validation and, for cross-currency payments, a PathFinding branch (`pathfind.request` -> `pathfind.compute`).
- **tx.relay -> Peer Spans (teal, middle)**: After validation, the transaction is relayed to peers A, B, and C in parallel; each `peer.send` is a sibling child span showing fan-out across the network.
- **context propagation (dashed arrow)**: The dotted line from `peer.send Peer A` to `consensus.round` represents the trace context crossing a node boundary — the receiving validator picks up the same `trace_id` and continues the trace.
- **consensus.round -> tx.apply -> TxQ Spans (green, lower)**: Once consensus accepts the transaction, it is applied to the ledger; the TxQ spans (`txq.enqueue`, `txq.apply`, `fee.escalate`) capture queue depth and fee escalation behavior.
- **Validator Spans (purple, detached)**: `validator.list.fetch` and `validator.manifest` are independent workflows for UNL management — they run on their own traces and are linked to consensus via Span Links, not parent-child relationships.
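The parent-child vs. span-link distinction in the last two bullets can be modeled in a few lines. This is a toy data model for illustration, not the OpenTelemetry SDK API; the second trace ID is invented for the example:

```python
from dataclasses import dataclass, field

@dataclass
class Span:
    """Minimal span model: shows relationship semantics only."""
    name: str
    trace_id: str
    parent: "Span | None" = None                        # parent-child: same trace
    links: list["Span"] = field(default_factory=list)   # links: may cross traces

tx_trace = "4bf92f3577b34da6a3ce929d0e0e4736"
unl_trace = "9c2f8a1d0b3e4c5f6a7b8c9d0e1f2a3b"   # hypothetical second trace

consensus = Span("consensus.round", tx_trace)
apply_ = Span("tx.apply", tx_trace, parent=consensus)  # child inherits the trace
val_fetch = Span("validator.list.fetch", unl_trace)    # independent UNL workflow
consensus.links.append(val_fetch)                      # linked, not parented
```

A child span always shares its parent's `trace_id`; a linked span keeps its own, which is why the validator workflows appear as separate traces in Tempo.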
---
## 8.3 References
> **OTLP** = OpenTelemetry Protocol
### OpenTelemetry Resources
1. [OpenTelemetry C++ SDK](https://github.com/open-telemetry/opentelemetry-cpp)
2. [OpenTelemetry Specification](https://opentelemetry.io/docs/specs/otel/)
3. [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/)
4. [OTLP Protocol Specification](https://opentelemetry.io/docs/specs/otlp/)
### Standards
5. [W3C Trace Context](https://www.w3.org/TR/trace-context/)
6. [W3C Baggage](https://www.w3.org/TR/baggage/)
7. [Protocol Buffers](https://protobuf.dev/)
### rippled Resources
8. [rippled Source Code](https://github.com/XRPLF/rippled)
9. [XRP Ledger Documentation](https://xrpl.org/docs/)
10. [rippled Overlay README](https://github.com/XRPLF/rippled/blob/develop/src/xrpld/overlay/README.md)
11. [rippled RPC README](https://github.com/XRPLF/rippled/blob/develop/src/xrpld/rpc/README.md)
12. [rippled Consensus README](https://github.com/XRPLF/rippled/blob/develop/src/xrpld/app/consensus/README.md)
---
## 8.4 Version History
| Version | Date | Author | Changes |
| ------- | ---------- | ------ | -------------------------------------------------------------- |
| 1.0 | 2026-02-12 | - | Initial implementation plan |
| 1.1 | 2026-02-13 | - | Refactored into modular documents |
| 1.2 | 2026-03-09 | - | Added Phases 9–11 (future enhancement plans) |
| 1.3 | 2026-03-24 | - | Review fixes: accuracy corrections, cross-document consistency |
---
## 8.5 Document Index
### Plan Documents
| Document | Description |
| -------------------------------------------------------------------- | -------------------------------------------- |
| [OpenTelemetryPlan.md](./OpenTelemetryPlan.md) | Master overview and executive summary |
| [00-tracing-fundamentals.md](./00-tracing-fundamentals.md) | Distributed tracing concepts and OTel primer |
| [01-architecture-analysis.md](./01-architecture-analysis.md) | rippled architecture and trace points |
| [02-design-decisions.md](./02-design-decisions.md) | SDK selection, exporters, span conventions |
| [03-implementation-strategy.md](./03-implementation-strategy.md) | Directory structure, performance analysis |
| [04-code-samples.md](./04-code-samples.md) | C++ code examples for all components |
| [05-configuration-reference.md](./05-configuration-reference.md) | rippled config, CMake, Collector configs |
| [06-implementation-phases.md](./06-implementation-phases.md) | Timeline, tasks, risks, success metrics |
| [07-observability-backends.md](./07-observability-backends.md) | Backend selection and architecture |
| [08-appendix.md](./08-appendix.md) | Glossary, references, version history |
| [09-data-collection-reference.md](./09-data-collection-reference.md) | Span/metric/dashboard inventory |
| [presentation.md](./presentation.md) | Slide deck for OTel plan overview |
### Task Lists
| Document | Description |
| -------------------------------------------------------------------------- | --------------------------------------------------- |
| [POC_taskList.md](./POC_taskList.md) | Proof-of-concept telemetry integration |
| [Phase2_taskList.md](./Phase2_taskList.md) | RPC layer trace instrumentation |
| [Phase3_taskList.md](./Phase3_taskList.md) | Peer overlay & consensus tracing |
| [Phase4_taskList.md](./Phase4_taskList.md) | Transaction lifecycle tracing |
| [Phase5_taskList.md](./Phase5_taskList.md) | Ledger processing & advanced tracing |
| [Phase5_IntegrationTest_taskList.md](./Phase5_IntegrationTest_taskList.md) | Observability stack integration tests |
| [Phase7_taskList.md](./Phase7_taskList.md) | Native OTel metrics migration |
| [Phase8_taskList.md](./Phase8_taskList.md) | Log-trace correlation |
| [Phase9_taskList.md](./Phase9_taskList.md) | Internal metric instrumentation gap fill (future) |
| [Phase10_taskList.md](./Phase10_taskList.md) | Synthetic workload generation & validation (future) |
| [Phase11_taskList.md](./Phase11_taskList.md) | Third-party data collection pipelines (future) |
| [presentation.md](./presentation.md) | Presentation slides for OpenTelemetry plan overview |
> **Note**: Phases 1 and 6 do not have separate task list files. Phase 1 tasks are documented in [06-implementation-phases.md §6.2](./06-implementation-phases.md). Phase 6 tasks are documented in [06-implementation-phases.md §6.7](./06-implementation-phases.md).
---
## 8.6 Phase 9–11 Cross-Reference Guide
This guide maps Phase 9–11 content to its location across the documentation.
### Phase 9: Internal Metric Instrumentation Gap Fill
| Content | Location |
| ------------------------------- | ------------------------------------------------------------------------ |
| Plan & architecture | [06-implementation-phases.md §6.8.2](./06-implementation-phases.md) |
| Task list (10 tasks) | [Phase9_taskList.md](./Phase9_taskList.md) |
| Future metric definitions (~50) | [09-data-collection-reference.md §5b](./09-data-collection-reference.md) |
| New class: `MetricsRegistry` | `src/xrpld/telemetry/MetricsRegistry.h/.cpp` (planned) |
| New dashboards | `rippled-fee-market`, `rippled-job-queue` (planned) |
**Metric categories**: NodeStore I/O, Cache Hit Rates, TxQ, PerfLog Per-RPC, PerfLog Per-Job, Counted Objects, Fee Escalation & Load Factors.
### Phase 10: Synthetic Workload Generation & Telemetry Validation
| Content | Location |
| -------------------- | ------------------------------------------------------------------------ |
| Plan & architecture | [06-implementation-phases.md §6.8.3](./06-implementation-phases.md) |
| Task list (7 tasks) | [Phase10_taskList.md](./Phase10_taskList.md) |
| Validation inventory | [09-data-collection-reference.md §5c](./09-data-collection-reference.md) |
| Test harness | `docker/telemetry/docker-compose.workload.yaml` (planned) |
| CI workflow | `.github/workflows/telemetry-validation.yml` (planned) |
**Validates**: 16 spans, 22 attributes, 300+ metrics, 10 dashboards, log-trace correlation.
### Phase 11: Third-Party Data Collection Pipelines
| Content | Location |
| --------------------------------- | ------------------------------------------------------------------------ |
| Plan & architecture | [06-implementation-phases.md §6.8.4](./06-implementation-phases.md) |
| Task list (11 tasks) | [Phase11_taskList.md](./Phase11_taskList.md) |
| External metric definitions (~30) | [09-data-collection-reference.md §5d](./09-data-collection-reference.md) |
| Custom OTel Collector receiver | `docker/telemetry/otel-rippled-receiver/` (planned) |
| Prometheus alerting rules (11) | [09-data-collection-reference.md §5d](./09-data-collection-reference.md) |
| New dashboards (4) | Validator Health, Network Topology, Fee Market (External), DEX & AMM |
**Consumer categories**: Exchanges, Payment Processors, DeFi/AMM, NFT Marketplaces, Analytics Providers, Wallets, Compliance, Academic Researchers, Institutional Custody, CBDC Bridge Operators.
---
_Previous: [Observability Backends](./07-observability-backends.md)_ | _Back to: [Overview](./OpenTelemetryPlan.md)_

File diff suppressed because it is too large.


@@ -0,0 +1,245 @@
# [OpenTelemetry](00-tracing-fundamentals.md) Distributed Tracing Implementation Plan for rippled (xrpld)
## Executive Summary
> **OTLP** = OpenTelemetry Protocol
This document provides a comprehensive implementation plan for integrating OpenTelemetry distributed tracing into the rippled XRP Ledger node software. The plan addresses the unique challenges of a decentralized peer-to-peer system where trace context must propagate across network boundaries between independent nodes.
### Key Benefits
- **End-to-end transaction visibility**: Track transactions from submission through consensus to ledger inclusion
- **Consensus round analysis**: Understand timing and behavior of consensus phases across validators
- **RPC performance insights**: Identify slow handlers and optimize response times
- **Network topology understanding**: Visualize message propagation patterns between peers
- **Incident debugging**: Correlate events across distributed nodes during issues
### Estimated Performance Overhead
| Metric | Overhead | Notes |
| ------------- | ---------- | ----------------------------------- |
| CPU | 1-3% | Span creation and attribute setting |
| Memory | 2-5 MB | Batch buffer for pending spans |
| Network | 10-50 KB/s | Compressed OTLP export to collector |
| Latency (p99) | <2% | With proper sampling configuration |
---
## Document Structure
This implementation plan is organized into modular documents for easier navigation:
<div align="center">
```mermaid
flowchart TB
overview["📋 OpenTelemetryPlan.md<br/>(This Document)"]
subgraph fundamentals["Fundamentals"]
fund["00-tracing-fundamentals.md"]
end
subgraph analysis["Analysis & Design"]
arch["01-architecture-analysis.md"]
design["02-design-decisions.md"]
end
subgraph impl["Implementation"]
strategy["03-implementation-strategy.md"]
code["04-code-samples.md"]
config["05-configuration-reference.md"]
end
subgraph deploy["Deployment & Planning"]
phases["06-implementation-phases.md"]
backends["07-observability-backends.md"]
appendix["08-appendix.md"]
poc["POC_taskList.md"]
dataref["09-data-collection-reference.md"]
end
overview --> fundamentals
overview --> analysis
overview --> impl
overview --> deploy
fund --> arch
arch --> design
design --> strategy
strategy --> code
code --> config
config --> phases
phases --> backends
backends --> appendix
phases --> poc
appendix --> dataref
style overview fill:#1b5e20,stroke:#0d3d14,color:#fff,stroke-width:2px
style fundamentals fill:#00695c,stroke:#004d40,color:#fff
style fund fill:#00695c,stroke:#004d40,color:#fff
style analysis fill:#0d47a1,stroke:#082f6a,color:#fff
style impl fill:#bf360c,stroke:#8c2809,color:#fff
style deploy fill:#4a148c,stroke:#2e0d57,color:#fff
style arch fill:#0d47a1,stroke:#082f6a,color:#fff
style design fill:#0d47a1,stroke:#082f6a,color:#fff
style strategy fill:#bf360c,stroke:#8c2809,color:#fff
style code fill:#bf360c,stroke:#8c2809,color:#fff
style config fill:#bf360c,stroke:#8c2809,color:#fff
style phases fill:#4a148c,stroke:#2e0d57,color:#fff
style backends fill:#4a148c,stroke:#2e0d57,color:#fff
style appendix fill:#4a148c,stroke:#2e0d57,color:#fff
style poc fill:#4a148c,stroke:#2e0d57,color:#fff
style dataref fill:#4a148c,stroke:#2e0d57,color:#fff
```
</div>
---
## Table of Contents
| Section | Document | Description |
| ------- | -------------------------------------------------------------- | ---------------------------------------------------------------------- |
| **0** | [Tracing Fundamentals](./00-tracing-fundamentals.md) | Distributed tracing concepts, span relationships, context propagation |
| **1** | [Architecture Analysis](./01-architecture-analysis.md) | rippled component analysis, trace points, instrumentation priorities |
| **2** | [Design Decisions](./02-design-decisions.md) | SDK selection, exporters, span naming, attributes, context propagation |
| **3** | [Implementation Strategy](./03-implementation-strategy.md) | Directory structure, key principles, performance optimization |
| **4** | [Code Samples](./04-code-samples.md) | C++ implementation examples for core infrastructure and key modules |
| **5** | [Configuration Reference](./05-configuration-reference.md) | rippled config, CMake integration, Collector configurations |
| **6** | [Implementation Phases](./06-implementation-phases.md) | 5-phase timeline, tasks, risks, success metrics |
| **7** | [Observability Backends](./07-observability-backends.md) | Backend selection guide and production architecture |
| **8** | [Appendix](./08-appendix.md) | Glossary, references, version history |
| **9** | [Data Collection Reference](./09-data-collection-reference.md) | Complete inventory of spans, attributes, metrics, and dashboards |
| **POC** | [POC Task List](./POC_taskList.md) | Proof of concept tasks for RPC tracing end-to-end demo |
---
## 0. Tracing Fundamentals
This document introduces distributed tracing concepts for readers unfamiliar with the domain. It covers what traces and spans are, how parent-child and follows-from relationships model causality, how context propagates across service boundaries, and how sampling controls data volume. It also maps these concepts to rippled-specific scenarios like transaction relay and consensus.
➡️ **[Read Tracing Fundamentals](./00-tracing-fundamentals.md)**
---
## 1. Architecture Analysis
> **WS** = WebSocket | **TxQ** = Transaction Queue
The rippled node consists of several key components that require instrumentation for comprehensive distributed tracing. The main areas include the RPC server (HTTP/WebSocket), Overlay P2P network, Consensus mechanism (RCLConsensus), JobQueue for async task execution, PathFinding, Transaction Queue (TxQ), fee escalation (LoadManager), ledger acquisition, validator management, and existing observability infrastructure (PerfLog, Insight/StatsD, Journal logging).
Key trace points span across transaction submission via RPC, peer-to-peer message propagation, consensus round execution, ledger building, path computation, transaction queue behavior, fee escalation, and validator health. The implementation prioritizes high-value, low-risk components first: RPC handlers provide immediate value with minimal risk, while consensus tracing requires careful implementation to avoid timing impacts.
➡️ **[Read full Architecture Analysis](./01-architecture-analysis.md)**
---
## 2. Design Decisions
> **OTLP** = OpenTelemetry Protocol | **CNCF** = Cloud Native Computing Foundation
The OpenTelemetry C++ SDK is selected for its CNCF backing, active development, and native performance characteristics. Traces are exported via OTLP/gRPC (primary) or OTLP/HTTP (fallback) to an OpenTelemetry Collector, which provides flexible routing and sampling.
Span naming follows a hierarchical `<component>.<operation>` convention (e.g., `rpc.submit`, `tx.relay`, `consensus.round`). Context propagation uses W3C Trace Context headers for HTTP and embedded Protocol Buffer fields for P2P messages. The implementation coexists with existing PerfLog and Insight observability systems through correlation IDs.
**Data Collection & Privacy**: Telemetry collects only operational metadata (timing, counts, hashes) — never sensitive content (private keys, balances, amounts, raw payloads). Privacy protection includes account hashing, configurable redaction, sampling, and collector-level filtering. Node operators retain full control over telemetry configuration.
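The account-hashing redaction mentioned above can be sketched as a salted, truncated digest. This is a minimal illustration under stated assumptions: the salt handling, digest length, and `acct:` prefix are invented for the example, not rippled's actual scheme.

```python
import hashlib

def redact_account(account: str, salt: str = "node-local-salt") -> str:
    """Replace an account address with a salted, truncated SHA-256 digest.

    The result is stable per node (useful for correlating spans) but not
    reversible to the original address without the salt.
    """
    digest = hashlib.sha256((salt + account).encode()).hexdigest()
    return "acct:" + digest[:16]

tag = redact_account("rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh")
```

A per-node random salt prevents cross-operator correlation of redacted accounts; a shared salt would allow it, which is a policy choice left to operators.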
➡️ **[Read full Design Decisions](./02-design-decisions.md)**
---
## 3. Implementation Strategy
The telemetry code is organized under `include/xrpl/telemetry/` for headers and `src/libxrpl/telemetry/` for implementation. Key principles include RAII-based span management via `SpanGuard`, conditional compilation with `XRPL_ENABLE_TELEMETRY`, and minimal runtime overhead through batch processing and efficient sampling.
Performance optimization strategies include probabilistic head sampling (10% default), tail-based sampling at the collector for errors and slow traces, batch export to reduce network overhead, and conditional instrumentation that compiles to no-ops when disabled.
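The head-sampling decision can be expressed as a pure function of the trace ID, in the spirit of OTel's `TraceIdRatioBased` sampler (the SDK's exact algorithm differs; this is a sketch). Deriving the decision from the trace ID keeps it consistent for every span in the trace, including spans created on other nodes:

```python
def sample_trace(trace_id_hex: str, ratio: float = 0.10) -> bool:
    """Decide whether to record a trace, given its 32-hex-char trace ID.

    Treats the upper 64 bits of the trace ID as a uniform random value and
    keeps the trace when it falls below ratio * 2^64.
    """
    threshold = int(ratio * (1 << 64))
    return int(trace_id_hex[:16], 16) < threshold
```

Because the function is deterministic, re-evaluating it on a downstream node yields the same answer, so a trace is never half-sampled across the cluster.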
➡️ **[Read full Implementation Strategy](./03-implementation-strategy.md)**
---
## 4. Code Samples
C++ implementation examples are provided for the core telemetry infrastructure and key modules:
- `Telemetry.h` - Core interface for tracer access and span creation
- `SpanGuard.h` - RAII wrapper for automatic span lifecycle management
- `TracingInstrumentation.h` - Macros for conditional instrumentation
- Protocol Buffer extensions for trace context propagation
- Module-specific instrumentation (RPC, Consensus, P2P, JobQueue)
- Remaining modules (PathFinding, TxQ, Validator, etc.) follow the same patterns
➡️ **[View all Code Samples](./04-code-samples.md)**
---
## 5. Configuration Reference
> **OTLP** = OpenTelemetry Protocol | **APM** = Application Performance Monitoring
Configuration is handled through the `[telemetry]` section in `xrpld.cfg` with options for enabling/disabling, exporter selection, endpoint configuration, sampling ratios, and component-level filtering. CMake integration includes a `XRPL_ENABLE_TELEMETRY` option for compile-time control.
OpenTelemetry Collector configurations are provided for development and production (with tail-based sampling, Tempo, and Elastic APM). Docker Compose examples enable quick local development environment setup.
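To give a feel for the shape of the `[telemetry]` stanza, here is an illustrative sketch. The option names are assumptions based on the capabilities listed above; the configuration reference document holds the authoritative schema.

```ini
# Illustrative [telemetry] stanza -- option names are assumptions, see
# 05-configuration-reference.md for the authoritative schema.
[telemetry]
enabled = 1
exporter = otlp_grpc            ; or otlp_http as fallback
endpoint = localhost:4317
sample_ratio = 0.10             ; probabilistic head sampling
components = rpc,overlay,consensus
```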
➡️ **[View full Configuration Reference](./05-configuration-reference.md)**
---
## 6. Implementation Phases
The implementation spans 13 weeks across 8 phases:
| Phase | Duration | Focus | Key Deliverables |
| ----- | ----------- | --------------------- | ----------------------------------------------------------- |
| 1 | Weeks 1-2 | Core Infrastructure | SDK integration, Telemetry interface, Configuration |
| 2 | Weeks 3-4 | RPC Tracing | HTTP context extraction, Handler instrumentation |
| 3 | Weeks 5-6 | Transaction Tracing | Protocol Buffer context, Relay propagation |
| 4 | Weeks 7-8 | Consensus Tracing | Round spans, Proposal/validation tracing |
| 5 | Week 9 | Documentation | Runbook, Dashboards, Training |
| 6 | Week 10 | StatsD Metrics Bridge | OTel Collector StatsD receiver, 3 Grafana dashboards |
| 7 | Weeks 11-12 | Native OTel Metrics | OTelCollector impl, OTLP metrics export, StatsD deprecation |
| 8 | Week 13 | Log-Trace Correlation | trace_id in logs, Loki ingestion, Tempo↔Loki linking |
**Total Effort**: 65.1 developer-days with 2 developers
➡️ **[View full Implementation Phases](./06-implementation-phases.md)**
---
## 7. Observability Backends
> **APM** = Application Performance Monitoring | **GCS** = Google Cloud Storage
Grafana Tempo is recommended for all environments due to its cost-effectiveness and Grafana integration, while Elastic APM is ideal for organizations with existing Elastic infrastructure.
The recommended production architecture uses a gateway collector pattern with regional collectors performing tail-based sampling, routing traces to multiple backends (Tempo for primary storage, Elastic for log correlation, S3/GCS for long-term archive).
➡️ **[View Observability Backend Recommendations](./07-observability-backends.md)**
---
## 8. Appendix
The appendix contains a glossary of OpenTelemetry and rippled-specific terms, references to external documentation and specifications, version history for this implementation plan, and a complete document index.
➡️ **[View Appendix](./08-appendix.md)**
---
## 9. Data Collection Reference
A single-source-of-truth reference documenting every piece of telemetry data collected by rippled. Covers all 16 OpenTelemetry spans with their 22 attributes, all StatsD metrics (gauges, counters, histograms, overlay traffic), SpanMetrics-derived Prometheus metrics, and all 8 Grafana dashboards. Includes Jaeger search guides and Prometheus query examples.
➡️ **[View Data Collection Reference](./09-data-collection-reference.md)**
---
## POC Task List
A step-by-step task list for building a minimal end-to-end proof of concept that demonstrates distributed tracing in rippled. The POC scope is limited to RPC tracing — showing request traces flowing from rippled through an OpenTelemetry Collector into Tempo, viewable in Grafana.
➡️ **[View POC Task List](./POC_taskList.md)**
---
_This document provides a comprehensive implementation plan for integrating OpenTelemetry distributed tracing into the rippled XRP Ledger node software. For detailed information on any section, follow the links to the corresponding sub-documents._


@@ -0,0 +1,620 @@
# OpenTelemetry POC Task List
> **Goal**: Build a minimal end-to-end proof of concept that demonstrates distributed tracing in rippled. A successful POC will show RPC request traces flowing from rippled through an OTel Collector into Tempo, viewable in Grafana.
>
> **Scope**: RPC tracing only (highest value, lowest risk per the [CRAWL phase](./06-implementation-phases.md#6102-quick-wins-immediate-value) in the implementation phases). No cross-node P2P context propagation or consensus tracing in the POC.
### Related Plan Documents
| Document | Relevance to POC |
| ---------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [00-tracing-fundamentals.md](./00-tracing-fundamentals.md) | Core concepts: traces, spans, context propagation, sampling |
| [01-architecture-analysis.md](./01-architecture-analysis.md) | RPC request flow (§1.5), key trace points (§1.6), instrumentation priority (§1.7) |
| [02-design-decisions.md](./02-design-decisions.md) | SDK selection (§2.1), exporter config (§2.2), span naming (§2.3), attribute schema (§2.4), coexistence with PerfLog/Insight (§2.6) |
| [03-implementation-strategy.md](./03-implementation-strategy.md) | Directory structure (§3.1), key principles (§3.2), performance overhead (§3.3-3.6), conditional compilation (§3.7.3), code intrusiveness (§3.9) |
| [04-code-samples.md](./04-code-samples.md) | Telemetry interface (§4.1), SpanGuard (§4.2), macros (§4.3), RPC instrumentation (§4.5.3) |
| [05-configuration-reference.md](./05-configuration-reference.md) | rippled config (§5.1), config parser (§5.2), Application integration (§5.3), CMake (§5.4), Collector config (§5.5), Docker Compose (§5.6), Grafana (§5.8) |
| [06-implementation-phases.md](./06-implementation-phases.md) | Phase 1 core tasks (§6.2), Phase 2 RPC tasks (§6.3), quick wins (§6.10), definition of done (§6.11) |
| [07-observability-backends.md](./07-observability-backends.md) | Tempo dev setup (§7.1), Grafana dashboards (§7.6), alert rules (§7.6.3) |
---
## Task 0: Docker Observability Stack Setup
> **OTLP** = OpenTelemetry Protocol
**Objective**: Stand up the backend infrastructure to receive, store, and display traces.
**What to do**:
- Create `docker/telemetry/docker-compose.yml` in the repo with three services:
1. **OpenTelemetry Collector** (`otel/opentelemetry-collector-contrib:0.92.0`)
- Expose ports `4317` (OTLP gRPC) and `4318` (OTLP HTTP)
- Expose port `13133` (health check)
- Mount a config file `docker/telemetry/otel-collector-config.yaml`
2. **Tempo** (`grafana/tempo:2.6.1`)
- Expose port `3200` (HTTP API) and `4317` (OTLP gRPC, internal)
3. **Grafana** (`grafana/grafana:latest`) — optional but useful
- Expose port `3000`
- Enable anonymous admin access for local dev (`GF_AUTH_ANONYMOUS_ENABLED=true`, `GF_AUTH_ANONYMOUS_ORG_ROLE=Admin`)
- Provision Tempo as a data source via `docker/telemetry/grafana/provisioning/datasources/tempo.yaml`
- Create `docker/telemetry/otel-collector-config.yaml`:
```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
processors:
  batch:
    timeout: 1s
    send_batch_size: 100
exporters:
  debug:
    verbosity: detailed
  otlp/tempo:
    endpoint: tempo:4317
    tls:
      insecure: true
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug, otlp/tempo]
```
- Create Grafana Tempo datasource provisioning file at `docker/telemetry/grafana/provisioning/datasources/tempo.yaml`:
```yaml
apiVersion: 1
datasources:
  - name: Tempo
    type: tempo
    access: proxy
    url: http://tempo:3200
```
**Verification**: Run `docker compose -f docker/telemetry/docker-compose.yml up -d`, then:
- `curl http://localhost:13133` returns healthy (Collector)
- `http://localhost:3000` opens Grafana (Tempo datasource available, no traces yet)
**Reference**:
- [05-configuration-reference.md §5.5](./05-configuration-reference.md) — Collector config (dev YAML with Tempo exporter)
- [05-configuration-reference.md §5.6](./05-configuration-reference.md) — Docker Compose development environment
- [07-observability-backends.md §7.1](./07-observability-backends.md) — Tempo quick start and backend selection
- [05-configuration-reference.md §5.8](./05-configuration-reference.md) — Grafana datasource provisioning and dashboards
---
## Task 1: Add OpenTelemetry C++ SDK Dependency
**Objective**: Make `opentelemetry-cpp` available to the build system.
**What to do**:
- Edit `conanfile.py` to add `opentelemetry-cpp` as an **optional** dependency. The gRPC otel plugin flag (`"grpc/*:otel_plugin": False`) in the existing conanfile may need to remain false — we pull the OTel SDK separately.
- Add a Conan option: `with_telemetry = [True, False]` defaulting to `False`
- When `with_telemetry` is `True`, add `opentelemetry-cpp` to `self.requires()`
- Required OTel Conan components: `opentelemetry-cpp` (which bundles api, sdk, and exporters). If the package isn't in Conan Center, consider using `FetchContent` in CMake or building from source as a fallback.
- Edit `CMakeLists.txt`:
- Add option: `option(XRPL_ENABLE_TELEMETRY "Enable OpenTelemetry tracing" OFF)`
- When ON, `find_package(opentelemetry-cpp CONFIG REQUIRED)` and add compile definition `XRPL_ENABLE_TELEMETRY`
- When OFF, do nothing (zero build impact)
- Verify the build succeeds with `-DXRPL_ENABLE_TELEMETRY=OFF` (no regressions) and with `-DXRPL_ENABLE_TELEMETRY=ON` (SDK links successfully).
**Key files**:
- `conanfile.py`
- `CMakeLists.txt`
**Reference**:
- [05-configuration-reference.md §5.4](./05-configuration-reference.md) — CMake integration, `FindOpenTelemetry.cmake`, `XRPL_ENABLE_TELEMETRY` option
- [03-implementation-strategy.md §3.2](./03-implementation-strategy.md) — Key principle: zero-cost when disabled via compile-time flags
- [02-design-decisions.md §2.1](./02-design-decisions.md) — SDK selection rationale and required OTel components
---
## Task 2: Create Core Telemetry Interface and NullTelemetry
**Objective**: Define the `Telemetry` abstract interface and a no-op implementation so the rest of the codebase can reference telemetry without hard-depending on the OTel SDK.
**What to do**:
- Create `include/xrpl/telemetry/Telemetry.h`:
- Define `namespace xrpl::telemetry`
- Define `struct Telemetry::Setup` holding: `enabled`, `exporterEndpoint`, `samplingRatio`, `serviceName`, `serviceVersion`, `serviceInstanceId`, `traceRpc`, `traceTransactions`, `traceConsensus`, `tracePeer`
- Define abstract `class Telemetry` with:
- `virtual void start() = 0;`
- `virtual void stop() = 0;`
- `virtual bool isEnabled() const = 0;`
- `virtual nostd::shared_ptr<Tracer> getTracer(string_view name = "rippled") = 0;`
- `virtual nostd::shared_ptr<Span> startSpan(string_view name, SpanKind kind = kInternal) = 0;`
- `virtual nostd::shared_ptr<Span> startSpan(string_view name, Context const& parentContext, SpanKind kind = kInternal) = 0;`
- `virtual bool shouldTraceRpc() const = 0;`
- `virtual bool shouldTraceTransactions() const = 0;`
- `virtual bool shouldTraceConsensus() const = 0;`
- Factory: `std::unique_ptr<Telemetry> make_Telemetry(Setup const&, beast::Journal);`
- Config parser: `Telemetry::Setup setup_Telemetry(Section const&, std::string const& nodePublicKey, std::string const& version);`
- Create `include/xrpl/telemetry/SpanGuard.h`:
- RAII guard that takes a `nostd::shared_ptr<Span>`, creates a `Scope`, and calls `span->End()` in its destructor.
- Convenience: `setAttribute()`, `setOk()`, `setStatus()`, `addEvent()`, `recordException()`, `context()`
- See [04-code-samples.md](./04-code-samples.md) §4.2 for the full implementation.
- Create `src/libxrpl/telemetry/NullTelemetry.cpp`:
- Implements `Telemetry` with all no-ops.
- `isEnabled()` returns `false`, `startSpan()` returns a noop span.
- This is used when `XRPL_ENABLE_TELEMETRY` is OFF or `enabled=0` in config.
- Guard all OTel SDK headers behind `#ifdef XRPL_ENABLE_TELEMETRY`. The `NullTelemetry` implementation should compile without the OTel SDK present.
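The RAII shape of `SpanGuard` can be sketched with a simplified stand-in `Span` type; the real guard wraps `nostd::shared_ptr<opentelemetry::trace::Span>` and also manages a `Scope` (see §4.2 for the full version), so treat this as illustration only:

```cpp
#include <memory>
#include <string>
#include <utility>

// Simplified stand-in for opentelemetry::trace::Span — just enough
// surface to show the guard's contract.
struct Span
{
    bool ended = false;
    void End() { ended = true; }
    void SetAttribute(std::string const&, std::string const&) {}
};

// RAII guard: ends the span when the enclosing scope exits, including
// on early returns and exceptions.
class SpanGuard
{
public:
    explicit SpanGuard(std::shared_ptr<Span> span) : span_(std::move(span)) {}

    ~SpanGuard()
    {
        if (span_)
            span_->End();
    }

    void setAttribute(std::string const& key, std::string const& value)
    {
        if (span_)
            span_->SetAttribute(key, value);
    }

private:
    std::shared_ptr<Span> span_;
};
```

The value of the guard is that instrumentation points never need an explicit `End()` call, so spans cannot leak on error paths.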
**Key new files**:
- `include/xrpl/telemetry/Telemetry.h`
- `include/xrpl/telemetry/SpanGuard.h`
- `src/libxrpl/telemetry/NullTelemetry.cpp`
**Reference**:
- [04-code-samples.md §4.1](./04-code-samples.md) — Full `Telemetry` interface with `Setup` struct, lifecycle, tracer access, span creation, and component filtering methods
- [04-code-samples.md §4.2](./04-code-samples.md) — Full `SpanGuard` RAII implementation and `NullSpanGuard` no-op class
- [03-implementation-strategy.md §3.1](./03-implementation-strategy.md) — Directory structure: `include/xrpl/telemetry/` for headers, `src/libxrpl/telemetry/` for implementation
- [03-implementation-strategy.md §3.7.3](./03-implementation-strategy.md) — Conditional instrumentation and zero-cost compile-time disabled pattern
---
## Task 3: Implement OTel-Backed Telemetry
> **OTLP** = OpenTelemetry Protocol
**Objective**: Implement the real `Telemetry` class that initializes the OTel SDK, configures the OTLP exporter and batch processor, and creates tracers/spans.
**What to do**:
- Create `src/libxrpl/telemetry/Telemetry.cpp` (compiled only when `XRPL_ENABLE_TELEMETRY=ON`):
- `class TelemetryImpl : public Telemetry` that:
- In `start()`: creates a `TracerProvider` with:
- Resource attributes: `service.name`, `service.version`, `service.instance.id`
- An `OtlpHttpExporter` pointed at `setup.exporterEndpoint` (default `localhost:4318`)
- A `BatchSpanProcessor` with configurable batch size and delay
- A `TraceIdRatioBasedSampler` using `setup.samplingRatio`
- Sets the global `TracerProvider`
- In `stop()`: calls `ForceFlush()` then shuts down the provider
- In `startSpan()`: delegates to `getTracer()->StartSpan(name, ...)`
- `shouldTraceRpc()` etc. read from `Setup` fields
- Create `src/libxrpl/telemetry/TelemetryConfig.cpp`:
- `setup_Telemetry()` parses the `[telemetry]` config section from `xrpld.cfg`
- Maps config keys: `enabled`, `exporter`, `endpoint`, `sampling_ratio`, `trace_rpc`, `trace_transactions`, `trace_consensus`, `trace_peer`
- Wire `make_Telemetry()` factory:
- If `setup.enabled` is true AND `XRPL_ENABLE_TELEMETRY` is defined: return `TelemetryImpl`
- Otherwise: return `NullTelemetry`
- Add telemetry source files to CMake. When `XRPL_ENABLE_TELEMETRY=ON`, compile `Telemetry.cpp` and `TelemetryConfig.cpp` and link against the umbrella Conan target `opentelemetry-cpp::opentelemetry-cpp` (per the POC lessons below, the per-component targets such as `opentelemetry-cpp::api` are not exported by the Conan package). When OFF, compile only `NullTelemetry.cpp`.
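The factory dispatch can be sketched with simplified types — the names (`Setup`, `NullTelemetry`, `TelemetryImpl`, `make_Telemetry`) follow the plan, but the bodies here are placeholders, not the real implementations:

```cpp
#include <memory>

struct Setup
{
    bool enabled = false;  // parsed from [telemetry] enabled=0/1
};

struct Telemetry
{
    virtual ~Telemetry() = default;
    virtual bool isEnabled() const = 0;
};

// No-op implementation: always available, no OTel SDK dependency.
struct NullTelemetry : Telemetry
{
    bool isEnabled() const override { return false; }
};

#ifdef XRPL_ENABLE_TELEMETRY
// Real implementation: only compiled when the SDK is linked in.
struct TelemetryImpl : Telemetry
{
    bool isEnabled() const override { return true; }
};
#endif

// The OTel-backed impl is selected only when BOTH the compile-time flag
// and the runtime config switch are on; every other combination yields
// the no-op implementation.
std::unique_ptr<Telemetry> make_Telemetry(Setup const& setup)
{
#ifdef XRPL_ENABLE_TELEMETRY
    if (setup.enabled)
        return std::make_unique<TelemetryImpl>();
#endif
    return std::make_unique<NullTelemetry>();
}
```

Call sites never branch on the flag themselves; they hold a `Telemetry&` and the factory decides once at startup.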
**Key new files**:
- `src/libxrpl/telemetry/Telemetry.cpp`
- `src/libxrpl/telemetry/TelemetryConfig.cpp`
**Key modified files**:
- `CMakeLists.txt` (add telemetry library target)
**Reference**:
- [04-code-samples.md §4.1](./04-code-samples.md) — `Telemetry` interface that `TelemetryImpl` must implement
- [05-configuration-reference.md §5.2](./05-configuration-reference.md) — `setup_Telemetry()` config parser implementation
- [02-design-decisions.md §2.2](./02-design-decisions.md) — OTLP/gRPC exporter config (endpoint, TLS options)
- [02-design-decisions.md §2.4.1](./02-design-decisions.md) — Resource attributes: `service.name`, `service.version`, `service.instance.id`, `xrpl.network.id`
- [03-implementation-strategy.md §3.4](./03-implementation-strategy.md) — Per-operation CPU costs and overhead budget for span creation
- [03-implementation-strategy.md §3.5](./03-implementation-strategy.md) — Memory overhead: static (~456 KB) and dynamic (~1.2 MB) budgets
---
## Task 4: Integrate Telemetry into Application Lifecycle
**Objective**: Wire the `Telemetry` object into `Application` so all components can access it.
**What to do**:
- Edit `src/xrpld/app/main/Application.h`:
- Forward-declare `namespace xrpl::telemetry { class Telemetry; }`
- Add pure virtual method: `virtual telemetry::Telemetry& getTelemetry() = 0;`
- Edit `src/xrpld/app/main/Application.cpp` (the `ApplicationImp` class):
- Add member: `std::unique_ptr<telemetry::Telemetry> telemetry_;`
- In the constructor, after config is loaded and node identity is known:
```cpp
auto const telemetrySection = config_->section("telemetry");
auto telemetrySetup = telemetry::setup_Telemetry(
    telemetrySection,
    toBase58(TokenType::NodePublic, nodeIdentity_.publicKey()),
    BuildInfo::getVersionString());
telemetry_ = telemetry::make_Telemetry(telemetrySetup, logs_->journal("Telemetry"));
```
- In `start()`: call `telemetry_->start()` early
- In `stop()` or destructor: call `telemetry_->stop()` late (to flush pending spans)
- Implement `getTelemetry()` override: return `*telemetry_`
- Add `[telemetry]` section to the example config `cfg/rippled-example.cfg`:
```ini
# [telemetry]
# enabled=1
# endpoint=localhost:4318
# sampling_ratio=1.0
# trace_rpc=1
```
**Key modified files**:
- `src/xrpld/app/main/Application.h`
- `src/xrpld/app/main/Application.cpp`
- `cfg/rippled-example.cfg` (or equivalent example config)
**Reference**:
- [05-configuration-reference.md §5.3](./05-configuration-reference.md) — `ApplicationImp` changes: member declaration, constructor init, `start()`/`stop()` wiring, `getTelemetry()` override
- [05-configuration-reference.md §5.1](./05-configuration-reference.md) — `[telemetry]` config section format and all option defaults
- [03-implementation-strategy.md §3.9.2](./03-implementation-strategy.md) — File impact assessment: `Application.cpp` ~15 lines added, ~3 changed (Low risk)
---
## Task 5: Create Instrumentation Macros
**Objective**: Define convenience macros that make instrumenting code one-liners, and that compile to zero-cost no-ops when telemetry is disabled.
**What to do**:
- Create `src/xrpld/telemetry/TracingInstrumentation.h`:
- When `XRPL_ENABLE_TELEMETRY` is defined:
```cpp
// Parameter names deliberately avoid the bare token `telemetry`, which
// would be substituted inside ::xrpl::telemetry:: in the macro body.
#define XRPL_TRACE_SPAN(_tel_obj_, _span_name_)                   \
    auto _xrpl_span_ = (_tel_obj_).startSpan(_span_name_);        \
    ::xrpl::telemetry::SpanGuard _xrpl_guard_(_xrpl_span_)

#define XRPL_TRACE_RPC(_tel_obj_, _span_name_)                    \
    std::optional<::xrpl::telemetry::SpanGuard> _xrpl_guard_;     \
    if ((_tel_obj_).shouldTraceRpc()) {                           \
        _xrpl_guard_.emplace((_tel_obj_).startSpan(_span_name_)); \
    }

#define XRPL_TRACE_SET_ATTR(key, value)         \
    if (_xrpl_guard_.has_value()) {             \
        _xrpl_guard_->setAttribute(key, value); \
    }

#define XRPL_TRACE_EXCEPTION(e)           \
    if (_xrpl_guard_.has_value()) {       \
        _xrpl_guard_->recordException(e); \
    }
```
- When `XRPL_ENABLE_TELEMETRY` is NOT defined, all macros expand to `((void)0)`
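A sketch of that disabled branch, using the same macro names (see §4.3 for the full definitions), plus a hypothetical call site showing that instrumented code still compiles and runs with telemetry compiled out:

```cpp
// Disabled branch: every macro collapses to a no-op statement, so call
// sites compile identically whether or not the SDK is present.
#ifndef XRPL_ENABLE_TELEMETRY
#define XRPL_TRACE_SPAN(_tel_obj_, _span_name_) ((void)0)
#define XRPL_TRACE_RPC(_tel_obj_, _span_name_) ((void)0)
#define XRPL_TRACE_SET_ATTR(key, value) ((void)0)
#define XRPL_TRACE_EXCEPTION(e) ((void)0)
#endif

// Hypothetical handler: the macro arguments (including the telemetry
// object) are discarded entirely by the preprocessor, so no telemetry
// symbols need to exist in this translation unit.
int handleRequest()
{
    XRPL_TRACE_RPC(dummyTelemetry, "rpc.request");
    XRPL_TRACE_SET_ATTR("xrpl.rpc.command", "server_info");
    return 42;
}
```

Because the arguments vanish during preprocessing, the disabled build has literally zero runtime cost at these call sites.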
**Key new file**:
- `src/xrpld/telemetry/TracingInstrumentation.h`
**Reference**:
- [04-code-samples.md §4.3](./04-code-samples.md) — Full macro definitions for `XRPL_TRACE_SPAN`, `XRPL_TRACE_RPC`, `XRPL_TRACE_CONSENSUS`, `XRPL_TRACE_SET_ATTR`, `XRPL_TRACE_EXCEPTION` with both enabled and disabled branches
- [03-implementation-strategy.md §3.7.3](./03-implementation-strategy.md) — Conditional instrumentation pattern: compile-time `#ifndef` and runtime `shouldTrace*()` checks
- [03-implementation-strategy.md §3.9.7](./03-implementation-strategy.md) — Before/after code examples showing minimal intrusiveness (~1-3 lines per instrumentation point)
---
## Task 6: Instrument RPC ServerHandler
> **WS** = WebSocket
**Objective**: Add tracing to the HTTP RPC entry point so every incoming RPC request creates a span.
**What to do**:
- Edit `src/xrpld/rpc/detail/ServerHandler.cpp`:
- `#include` the `TracingInstrumentation.h` header
- In `ServerHandler::onRequest(Session& session)`:
- At the top of the method, add: `XRPL_TRACE_RPC(app_.getTelemetry(), "rpc.request");`
- After the RPC command name is extracted, set attribute: `XRPL_TRACE_SET_ATTR("xrpl.rpc.command", command);`
- After the response status is known, set: `XRPL_TRACE_SET_ATTR("http.status_code", static_cast<int64_t>(statusCode));`
- Wrap error paths with: `XRPL_TRACE_EXCEPTION(e);`
- In `ServerHandler::processRequest(...)`:
- Add a child span: `XRPL_TRACE_RPC(app_.getTelemetry(), "rpc.process");`
- Set method attribute: `XRPL_TRACE_SET_ATTR("xrpl.rpc.method", request_method);`
- In `ServerHandler::onWSMessage(...)` (WebSocket path):
- Add: `XRPL_TRACE_RPC(app_.getTelemetry(), "rpc.ws.message");`
- The goal is to see spans like:
```
rpc.request
└── rpc.process
```
in Tempo/Grafana for every HTTP RPC call.
**Key modified file**:
- `src/xrpld/rpc/detail/ServerHandler.cpp` (~15-25 lines added)
**Reference**:
- [04-code-samples.md §4.5.3](./04-code-samples.md) — Complete `ServerHandler::onRequest()` instrumented code sample with W3C header extraction, span creation, attribute setting, and error handling
- [01-architecture-analysis.md §1.5](./01-architecture-analysis.md) — RPC request flow diagram: HTTP request -> attributes -> jobqueue.enqueue -> rpc.command -> response
- [01-architecture-analysis.md §1.6](./01-architecture-analysis.md) — Key trace points table: `rpc.request` in `ServerHandler.cpp::onRequest()` (Priority: High)
- [02-design-decisions.md §2.3](./02-design-decisions.md) — Span naming convention: `rpc.request`, `rpc.command.*`
- [02-design-decisions.md §2.4.2](./02-design-decisions.md) — RPC span attributes: `xrpl.rpc.command`, `xrpl.rpc.version`, `xrpl.rpc.role`, `xrpl.rpc.params`
- [03-implementation-strategy.md §3.9.2](./03-implementation-strategy.md) — File impact: `ServerHandler.cpp` ~40 lines added, ~10 changed (Low risk)
---
## Task 7: Instrument RPC Command Execution
**Objective**: Add per-command tracing inside the RPC handler so each command (e.g., `submit`, `account_info`, `server_info`) gets its own child span.
**What to do**:
- Edit `src/xrpld/rpc/detail/RPCHandler.cpp`:
- `#include` the `TracingInstrumentation.h` header
- In `doCommand(RPC::JsonContext& context, Json::Value& result)`:
- At the top: `XRPL_TRACE_RPC(context.app.getTelemetry(), "rpc.command." + context.method);`
- Set attributes:
- `XRPL_TRACE_SET_ATTR("xrpl.rpc.command", context.method);`
- `XRPL_TRACE_SET_ATTR("xrpl.rpc.version", static_cast<int64_t>(context.apiVersion));`
- `XRPL_TRACE_SET_ATTR("xrpl.rpc.role", (context.role == Role::ADMIN) ? "admin" : "user");`
- On success: `XRPL_TRACE_SET_ATTR("xrpl.rpc.status", "success");`
- On error: `XRPL_TRACE_SET_ATTR("xrpl.rpc.status", "error");` and set the error message
- After this, traces in Tempo/Grafana should look like:
```
rpc.request (xrpl.rpc.command=account_info)
└── rpc.process
└── rpc.command.account_info (xrpl.rpc.version=2, xrpl.rpc.role=user, xrpl.rpc.status=success)
```
**Key modified file**:
- `src/xrpld/rpc/detail/RPCHandler.cpp` (~15-20 lines added)
**Reference**:
- [04-code-samples.md §4.5.3](./04-code-samples.md) — `ServerHandler::onRequest()` code sample (includes child span pattern for `rpc.command.*`)
- [02-design-decisions.md §2.3](./02-design-decisions.md) — Span naming: `rpc.command.*` pattern with dynamic command name (e.g., `rpc.command.server_info`)
- [02-design-decisions.md §2.4.2](./02-design-decisions.md) — RPC attribute schema: `xrpl.rpc.command`, `xrpl.rpc.version`, `xrpl.rpc.role`, `xrpl.rpc.status`
- [01-architecture-analysis.md §1.6](./01-architecture-analysis.md) — Key trace points table: `rpc.command.*` in `RPCHandler.cpp::doCommand()` (Priority: High)
- [02-design-decisions.md §2.6.5](./02-design-decisions.md) — Correlation with PerfLog: how `doCommand()` can link trace_id with existing PerfLog entries
- [03-implementation-strategy.md §3.4.4](./03-implementation-strategy.md) — RPC request overhead budget: ~1.75 μs total per request
---
## Task 8: Build, Run, and Verify End-to-End
> **OTLP** = OpenTelemetry Protocol
**Objective**: Prove the full pipeline works: rippled emits traces -> OTel Collector receives them -> Tempo stores them for Grafana visualization.
**What to do**:
1. **Start the Docker stack**:
```bash
docker compose -f docker/telemetry/docker-compose.yml up -d
```
Verify Collector health: `curl http://localhost:13133`
2. **Build rippled with telemetry**:
```bash
# Adjust for your actual build workflow
conan install . --build=missing -o with_telemetry=True
cmake --preset default -DXRPL_ENABLE_TELEMETRY=ON
cmake --build --preset default
```
3. **Configure rippled**:
Add to `rippled.cfg` (or your local test config):
```ini
[telemetry]
enabled=1
endpoint=localhost:4318
sampling_ratio=1.0
trace_rpc=1
```
4. **Start rippled** in standalone mode:
```bash
./rippled --conf rippled.cfg -a --start
```
5. **Generate RPC traffic**:
```bash
# server_info
curl -s -X POST http://localhost:5005 \
-H "Content-Type: application/json" \
-d '{"method":"server_info","params":[{}]}'
# ledger
curl -s -X POST http://localhost:5005 \
-H "Content-Type: application/json" \
-d '{"method":"ledger","params":[{"ledger_index":"current"}]}'
# account_info (will error in standalone, that's fine — we trace errors too)
curl -s -X POST http://localhost:5005 \
-H "Content-Type: application/json" \
-d '{"method":"account_info","params":[{"account":"rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh"}]}'
```
6. **Verify in Grafana (Tempo)**:
- Open `http://localhost:3000`
- Navigate to Explore → select Tempo datasource
- Search for service `rippled`
- Confirm you see traces with spans: `rpc.request` -> `rpc.process` -> `rpc.command.server_info`
- Click into a trace and verify attributes: `xrpl.rpc.command`, `xrpl.rpc.status`, `xrpl.rpc.version`
7. **Verify zero-overhead when disabled**:
- Rebuild with `XRPL_ENABLE_TELEMETRY=OFF`, or set `enabled=0` in config
- Run the same RPC calls
- Confirm no new traces appear and no errors in rippled logs
**Verification Checklist**:
- [ ] Docker stack starts without errors
- [ ] rippled builds with `-DXRPL_ENABLE_TELEMETRY=ON`
- [ ] rippled starts and connects to OTel Collector (check rippled logs for telemetry messages)
- [ ] Traces appear in Grafana/Tempo under service "rippled"
- [ ] Span hierarchy is correct (parent-child relationships)
- [ ] Span attributes are populated (`xrpl.rpc.command`, `xrpl.rpc.status`, etc.)
- [ ] Error spans show error status and message
- [ ] Building with `XRPL_ENABLE_TELEMETRY=OFF` produces no regressions
- [ ] Setting `enabled=0` at runtime produces no traces and no errors
**Reference**:
- [06-implementation-phases.md §6.11.1](./06-implementation-phases.md) — Phase 1 definition of done: SDK compiles, runtime toggle works, span creation verified in Tempo, config validation passes
- [06-implementation-phases.md §6.11.2](./06-implementation-phases.md#6112-phase-2-rpc-tracing) — Phase 2 definition of done: 100% RPC coverage, traceparent propagation, <1ms p99 overhead, dashboard deployed
- [06-implementation-phases.md §6.8](./06-implementation-phases.md) — Success metrics: trace coverage >95%, CPU overhead <3%, memory <5 MB, latency impact <2%
- [03-implementation-strategy.md §3.9.5](./03-implementation-strategy.md) — Backward compatibility: config optional, protocol unchanged, `XRPL_ENABLE_TELEMETRY=OFF` produces identical binary
- [01-architecture-analysis.md §1.8](./01-architecture-analysis.md) — Observable outcomes: what traces, metrics, and dashboards to expect
---
## Task 9: Document POC Results and Next Steps
> **OTLP** = OpenTelemetry Protocol | **WS** = WebSocket
**Objective**: Capture findings, screenshots, and remaining work for the team.
**What to do**:
- Take screenshots of Grafana/Tempo showing:
- The service list with "rippled"
- A trace with the full span tree
- Span detail view showing attributes
- Document any issues encountered (build issues, SDK quirks, missing attributes)
- Note performance observations (build time impact, any noticeable runtime overhead)
- Write a short summary of what the POC proves and what it doesn't cover yet:
- **Proves**: OTel SDK integrates with rippled, OTLP export works, RPC traces visible
- **Doesn't cover**: Cross-node P2P context propagation, consensus tracing, protobuf trace context, W3C traceparent header extraction, tail-based sampling, production deployment
- Outline next steps (mapping to the full plan phases):
- [Phase 2](./06-implementation-phases.md) completion: [W3C header extraction](./02-design-decisions.md) (§2.5), WebSocket tracing, all [RPC handlers](./01-architecture-analysis.md) (§1.6)
- [Phase 3](./06-implementation-phases.md): [Protobuf `TraceContext` message](./04-code-samples.md) (§4.4), [transaction relay tracing](./04-code-samples.md) (§4.5.1) across nodes
- [Phase 4](./06-implementation-phases.md): [Consensus round and phase tracing](./04-code-samples.md) (§4.5.2)
- [Phase 5](./06-implementation-phases.md): [Production collector config](./05-configuration-reference.md) (§5.5.2), [Grafana dashboards](./07-observability-backends.md) (§7.6), [alerting](./07-observability-backends.md) (§7.6.3)
**Reference**:
- [06-implementation-phases.md §6.1](./06-implementation-phases.md) — Full 5-phase timeline overview and Gantt chart
- [06-implementation-phases.md §6.10](./06-implementation-phases.md) — Crawl-Walk-Run strategy: POC is the CRAWL phase, next steps are WALK and RUN
- [06-implementation-phases.md §6.12](./06-implementation-phases.md) — Recommended implementation order (14 steps across 9 weeks)
- [03-implementation-strategy.md §3.9](./03-implementation-strategy.md) — Code intrusiveness assessment and risk matrix for each remaining component
- [07-observability-backends.md §7.2](./07-observability-backends.md) — Production backend selection (Tempo, Elastic APM, Honeycomb, Datadog)
- [02-design-decisions.md §2.5](./02-design-decisions.md) — Context propagation design: W3C HTTP headers, protobuf P2P, JobQueue internal
- [00-tracing-fundamentals.md](./00-tracing-fundamentals.md) — Reference for team onboarding on distributed tracing concepts
---
## Summary
| Task | Description | New Files | Modified Files | Depends On |
| ---- | ------------------------------------ | --------- | -------------- | ---------- |
| 0 | Docker observability stack | 4 | 0 | — |
| 1 | OTel C++ SDK dependency | 0 | 2 | — |
| 2 | Core Telemetry interface + NullImpl | 3 | 0 | 1 |
| 3 | OTel-backed Telemetry implementation | 2 | 1 | 1, 2 |
| 4 | Application lifecycle integration | 0 | 3 | 2, 3 |
| 5 | Instrumentation macros | 1 | 0 | 2 |
| 6 | Instrument RPC ServerHandler | 0 | 1 | 4, 5 |
| 7 | Instrument RPC command execution | 0 | 1 | 4, 5 |
| 8 | End-to-end verification | 0 | 0 | 0-7 |
| 9 | Document results and next steps | 1 | 0 | 8 |
**Parallel work**: Tasks 0 and 1 can run in parallel. Tasks 2 and 5 have no dependency on each other. Tasks 6 and 7 can be done in parallel once Tasks 4 and 5 are complete.
---
## Next Steps (Post-POC)
> **OTLP** = OpenTelemetry Protocol | **WS** = WebSocket
### Metrics Pipeline for Grafana Dashboards
The current POC exports **traces only**. Grafana's Explore view can query Tempo for individual traces, but time-series charts (latency histograms, request throughput, error rates) require a **metrics pipeline**. To enable this:
1. **Add a `spanmetrics` connector** to the OTel Collector config that derives RED metrics (Rate, Errors, Duration) from trace spans automatically:
```yaml
connectors:
  spanmetrics:
    histogram:
      explicit:
        buckets: [1ms, 5ms, 10ms, 25ms, 50ms, 100ms, 250ms, 500ms, 1s, 5s]
    dimensions:
      - name: xrpl.rpc.command
      - name: xrpl.rpc.status
exporters:
  prometheus:
    endpoint: 0.0.0.0:8889
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug, otlp/tempo, spanmetrics]
    metrics:
      receivers: [spanmetrics]
      exporters: [prometheus]
```
2. **Add Prometheus** to the Docker Compose stack to scrape the collector's metrics endpoint.
3. **Add Prometheus as a Grafana datasource** and build dashboards for:
- RPC request latency (p50/p95/p99) by command
- RPC throughput (requests/sec) by command
- Error rate by command
- Span duration distribution
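For step 2, a minimal Prometheus scrape config sketch (the `otel-collector` service name is an assumption; adjust it to whatever the compose file names the collector service):

```yaml
# prometheus.yml — scrape the collector's spanmetrics endpoint
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: otel-collector-spanmetrics
    static_configs:
      - targets: ['otel-collector:8889']
```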
### Additional Instrumentation
- **W3C `traceparent` header extraction** in `ServerHandler` to support cross-service context propagation from external callers
- **WebSocket RPC tracing** in `ServerHandler::onWSMessage()`
- **Transaction relay tracing** across nodes using protobuf `TraceContext` messages
- **Consensus round and phase tracing** for validator coordination visibility
- **Ledger close tracing** to measure close-to-validated latency
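The `traceparent` header format is small enough to sketch a validator for. A production implementation would use the OTel SDK's W3C propagator rather than hand-rolled parsing; `parseTraceParent` below is a hypothetical helper that only checks the field layout defined by the W3C Trace Context spec:

```cpp
#include <cctype>
#include <string>

// W3C traceparent: "00-<32 hex trace-id>-<16 hex span-id>-<2 hex flags>",
// 55 characters total (2 + 1 + 32 + 1 + 16 + 1 + 2).
struct TraceParent
{
    std::string traceId;
    std::string spanId;
    bool sampled = false;
};

static bool isHex(std::string const& s)
{
    for (char c : s)
        if (!std::isxdigit(static_cast<unsigned char>(c)))
            return false;
    return !s.empty();
}

bool parseTraceParent(std::string const& header, TraceParent& out)
{
    if (header.size() != 55 || header[2] != '-' || header[35] != '-' ||
        header[52] != '-')
        return false;
    auto const version = header.substr(0, 2);
    auto const traceId = header.substr(3, 32);
    auto const spanId = header.substr(36, 16);
    auto const flags = header.substr(53, 2);
    if (version != "00" || !isHex(traceId) || !isHex(spanId) || !isHex(flags))
        return false;
    // All-zero trace-id or span-id are invalid per the spec.
    if (traceId == std::string(32, '0') || spanId == std::string(16, '0'))
        return false;
    out.traceId = traceId;
    out.spanId = spanId;
    // Simplification: only the sampled bit (0x01) is defined by the spec.
    out.sampled = (flags == "01");
    return true;
}
```

On a valid header, `ServerHandler` would use the extracted trace-id/span-id as the remote parent context; on a malformed one it should start a fresh root span rather than reject the request.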
### Production Hardening
- **Tail-based sampling** in the OTel Collector to reduce volume while retaining error/slow traces
- **TLS configuration** for the OTLP exporter in production deployments
- **Resource limits** on the batch processor queue to prevent unbounded memory growth
- **Health monitoring** for the telemetry pipeline itself (collector lag, export failures)
### POC Lessons Learned
Issues encountered during POC implementation that inform future work:
| Issue | Resolution | Impact on Future Work |
| -------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------- | ---------------------------------------------------------------- |
| Conan lockfile rejected `opentelemetry-cpp/1.18.0` | Used `--lockfile=""` to bypass | Lockfile must be regenerated when adding new dependencies |
| Conan package only builds OTLP HTTP exporter, not gRPC | Switched from gRPC to HTTP exporter (`localhost:4318/v1/traces`) | HTTP exporter is the default; gRPC requires custom Conan profile |
| CMake target `opentelemetry-cpp::api` etc. don't exist in Conan package | Use umbrella target `opentelemetry-cpp::opentelemetry-cpp` | Conan targets differ from upstream CMake targets |
| OTel Collector `logging` exporter deprecated | Renamed to `debug` exporter | Use `debug` in all collector configs going forward |
| Macro parameter `telemetry` collided with `::xrpl::telemetry::` namespace | Renamed macro params to `_tel_obj_`, `_span_name_` | Avoid common words as macro parameter names |
| `opentelemetry::trace::Scope` creates new context on move | Store scope as member, create once in constructor | SpanGuard move semantics need care with Scope lifecycle |
| `TracerProviderFactory::Create` returns `unique_ptr<sdk::TracerProvider>`, not `nostd::shared_ptr` | Use `std::shared_ptr` member, wrap in `nostd::shared_ptr` for global provider | OTel SDK factory return types don't match API provider types |

# Phase 10: Synthetic Workload Generation & Telemetry Validation — Task List
> **Status**: In Progress
>
> **Goal**: Build tools that generate realistic XRPL traffic to validate the full Phases 1-9 telemetry stack end-to-end — all spans, attributes, metrics, dashboards, and log-trace correlation — under controlled load.
>
> **Scope**: Python/shell test harness + multi-node docker-compose environment + automated validation scripts + performance benchmarks.
>
> **Branch**: `pratik/otel-phase10-workload-validation` (from `pratik/otel-phase9-metric-gap-fill`)
>
> **Depends on**: Phase 9 (internal metric gap fill) — validates the full metric surface
### Related Plan Documents
| Document | Relevance |
| -------------------------------------------------------------------- | --------------------------------------------------------------- |
| [06-implementation-phases.md](./06-implementation-phases.md) | Phase 10 plan: motivation, architecture, exit criteria (§6.8.3) |
| [09-data-collection-reference.md](./09-data-collection-reference.md) | Defines the full inventory of spans/metrics to validate |
| [Phase9_taskList.md](./Phase9_taskList.md) | Prerequisite — all internal metrics must be emitting |
### Why This Phase Exists
Before Phases 1-9 can be considered production-ready, we need proof that:
1. All 17 spans fire with correct attributes under real transaction workloads
2. All 255+ StatsD metrics + ~50 Phase 9 metrics appear in Prometheus with non-zero values
3. Log-trace correlation (Phase 8) produces clickable trace_id links in Loki
4. All 12 Grafana dashboards render meaningful data (no empty panels)
5. Performance overhead stays within bounds (< 3% CPU, < 5MB memory)
6. The telemetry stack survives sustained load without data loss or queue backpressure
---
## Task 10.1: Multi-Node Test Harness
**Objective**: Create a docker-compose environment with validator nodes that produces real consensus rounds.
**Implementation notes**:
- Uses a **6-node** validator cluster for full consensus coverage and per-node metric differentiation.
- Nodes run as **local processes** (not in containers); the Docker Compose stack only hosts telemetry backends.
- `run-full-validation.sh` orchestrates: starts telemetry stack -> generates validator keys -> starts nodes -> runs workloads -> validates -> cleans up.
- Node config requires:
- `[signing_support] true` for server-side transaction signing
- `[ips]` (not `[ips_fixed]`) so peer connections are counted in `Peer_Finder_Active_*` metrics (fixed peers are excluded from these counters by design in `Counts.h`)
- All trace categories: `trace_rpc=1`, `trace_consensus=1`, `trace_transactions=1`
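The notes above can be summarized as a config sketch. The peer IP/port and the exact `[telemetry]` key layout are illustrative; the `trace_*` flags and `enabled` key follow this plan's Phase 1-9 config surface, not stock rippled options.

```ini
; Per-node rippled.cfg stanzas required by Task 10.1 (sketch)
[signing_support]
true

; Use [ips], not [ips_fixed], so peers count toward Peer_Finder_Active_* metrics
[ips]
127.0.0.1 51235

[telemetry]
enabled=1
trace_rpc=1
trace_consensus=1
trace_transactions=1
```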
**Key files**:
- `docker/telemetry/docker-compose.workload.yaml`: OTel Collector, Jaeger, Prometheus, Grafana
- `docker/telemetry/workload/generate-validator-keys.sh`: generates keys and UNL for N nodes
- `docker/telemetry/workload/run-full-validation.sh`: main orchestrator
---
## Task 10.2: RPC Load Generator
**Objective**: Configurable tool that fires all traced RPC commands at controlled rates.
**What to do**:
- Create `docker/telemetry/workload/rpc_load_generator.py`:
- Connects to one or more rippled WebSocket endpoints
- Fires all RPC commands that have trace spans: `server_info`, `ledger`, `tx`, `account_info`, `account_lines`, `fee`, `submit`, etc.
- Configurable parameters: rate (RPS), duration, command distribution weights
- Injects `traceparent` HTTP headers to test W3C context propagation
- Logs progress and errors to stdout
- Command distribution should match realistic production ratios:
- 40% `server_info` / `fee` (health checks)
- 30% `account_info` / `account_lines` / `account_objects` (wallet queries)
- 15% `ledger` / `ledger_data` (explorer queries)
- 10% `tx` / `account_tx` (transaction lookups)
- 5% `book_offers` / `amm_info` (DEX queries)
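The weighted sampling and `traceparent` injection described above can be sketched as follows. Weights mirror the distribution percentages; the function names are illustrative, not the script's actual internals.

```python
# Sketch: weighted RPC command sampling + W3C traceparent generation
# for rpc_load_generator.py (illustrative names).
import os
import random

COMMAND_WEIGHTS = {
    "server_info": 20, "fee": 20,                                    # 40% health checks
    "account_info": 10, "account_lines": 10, "account_objects": 10,  # 30% wallet queries
    "ledger": 8, "ledger_data": 7,                                   # 15% explorer queries
    "tx": 5, "account_tx": 5,                                        # 10% tx lookups
    "book_offers": 3, "amm_info": 2,                                 # 5% DEX queries
}

def pick_command(rng: random.Random) -> str:
    """Sample one RPC command according to the production-like weights."""
    commands = list(COMMAND_WEIGHTS)
    weights = [COMMAND_WEIGHTS[c] for c in commands]
    return rng.choices(commands, weights=weights, k=1)[0]

def make_traceparent() -> str:
    """Build a W3C traceparent header: version-traceid-spanid-flags."""
    trace_id = os.urandom(16).hex()  # 32 hex chars
    span_id = os.urandom(8).hex()    # 16 hex chars
    return f"00-{trace_id}-{span_id}-01"
```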
**Key files**:
- New: `docker/telemetry/workload/rpc_load_generator.py`
- New: `docker/telemetry/workload/requirements.txt`
---
## Task 10.3: Transaction Submitter
**Objective**: Generate diverse transaction types to exercise `tx.*` and `ledger.*` spans.
**Implementation notes**:
- Uses rippled's **native WebSocket command format** (`{"command": "submit", "secret": ..., "tx_json": {...}}`), not JSON-RPC format.
- Response structure: `{"status": "success", "result": {"engine_result": "tesSUCCESS", ...}, "type": "response"}`
- The `ws_request()` helper unwraps `result` so callers read fields directly.
- Error responses have `"status": "error"` at the top level with `error` and `error_message` fields.
- Pre-funds test accounts from the genesis account (`rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh`).
- Requires `[signing_support] true` in node config for server-side signing.
- Supported transaction types: Payment, OfferCreate, OfferCancel, TrustSet, NFTokenMint, NFTokenCreateOffer, EscrowCreate, EscrowFinish, AMMCreate, AMMDeposit.
- Exercises spans: `tx.process` (local submit), `tx.receive` (peer propagation), `tx.apply` (ledger application).
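The envelope-unwrapping behavior of `ws_request()` can be sketched as a pure function (the WebSocket send/receive is elided; only the response-envelope logic from the notes above is shown, and the helper names are illustrative):

```python
# Sketch: unwrapping rippled's native WebSocket response envelope.
from typing import Any

class RippledError(Exception):
    """Raised for top-level {"status": "error"} responses."""

def unwrap(response: dict[str, Any]) -> dict[str, Any]:
    """Return the inner result of a native-WebSocket rippled response.

    Success: {"status": "success", "result": {...}, "type": "response"}
    Error:   {"status": "error", "error": ..., "error_message": ...}
    """
    if response.get("status") == "error":
        raise RippledError(
            f"{response.get('error')}: {response.get('error_message')}"
        )
    return response.get("result", {})
```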
**Key files**:
- `docker/telemetry/workload/tx_submitter.py`
- `docker/telemetry/workload/test_accounts.json` (test account role definitions)
---
## Task 10.4: Telemetry Validation Suite
**Objective**: Automated scripts that verify all expected telemetry data exists after a workload run.
**Implementation notes**:
- `validate_telemetry.py` runs **73 checks** and produces a JSON report.
The 73 checks break down as:
| Category | Count | Source |
| -------------------- | ----- | ----------------------------------------- |
| Service registration | 1 | Jaeger: `rippled` service exists |
| Span existence | 17 | 17 span types from `expected_spans.json` |
| Span attributes | 14 | Spans with `required_attributes` |
| Span hierarchies | 2 | Parent-child relationships (1 skipped) |
| Span durations | 1 | All spans > 0 and < 60 s |
| Metric existence | 26 | 26 metric names from `expected_metrics.json` |
| Dashboard loads | 12 | 12 Grafana UIDs from `expected_metrics.json` |
**Span validation** (queries Jaeger API):
- Lists all registered operations as diagnostics
- Asserts 17 span names from `expected_spans.json` appear in traces
- Validates required attributes on the 14 spans that define them
- Validates 2 active parent-child span hierarchies (1 skipped cross-thread)
- Asserts all span durations are within bounds (> 0, < 60 s)
**Metric validation** (queries Prometheus API):
- Lists all metric names as diagnostics (helps debug naming issues)
- All 26 metrics in `expected_metrics.json` must have > 0 Prometheus series — absence is a FAIL
- Uses the Prometheus `/api/v1/series` endpoint (not instant queries) to avoid false negatives from stale gauges: beast::insight StatsD gauges emit only on value changes, so a gauge whose value stabilizes goes stale in Prometheus after ~5 minutes
- Validates: 4 SpanMetrics, 6 StatsD gauges, 2 StatsD counters, 3 StatsD histograms, 4 overlay traffic, 7 Phase 9 OTLP metrics (nodestore, cache, txq, rpc_method, object_count, load_factor)
**Dashboard validation**:
- Queries Grafana API for each of the 12 dashboard UIDs
- Asserts dashboards load and have panels
- Output: `validation-report.json` with per-check pass/fail, suitable for CI.
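The `/api/v1/series` existence check can be sketched as follows. `series_count()` parses the API JSON; `check_metric()` wires it to HTTP. The URL layout follows Prometheus's documented series endpoint; the helper names are illustrative.

```python
# Sketch: metric-existence check via Prometheus /api/v1/series.
import json
import urllib.parse
import urllib.request

def series_count(api_response: dict) -> int:
    """Number of series returned by a /api/v1/series query."""
    if api_response.get("status") != "success":
        return 0
    return len(api_response.get("data", []))

def check_metric(prom_url: str, metric: str) -> bool:
    """True when the metric has > 0 series (absence is a FAIL)."""
    qs = urllib.parse.urlencode({"match[]": metric})
    with urllib.request.urlopen(f"{prom_url}/api/v1/series?{qs}") as resp:
        return series_count(json.load(resp)) > 0
```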
**Key files**:
- `docker/telemetry/workload/validate_telemetry.py`
- `docker/telemetry/workload/expected_spans.json` (span inventory with attributes and hierarchies)
- `docker/telemetry/workload/expected_metrics.json` (metric inventory — all required)
---
## Task 10.5: Performance Benchmark Suite
**Objective**: Measure CPU/memory/latency overhead of the telemetry stack.
**What to do**:
- Create `docker/telemetry/workload/benchmark.sh`:
- **Baseline run**: Start cluster with `[telemetry] enabled=0`, run transaction workload for 5 minutes, record metrics
- **Telemetry run**: Start cluster with full telemetry enabled, run identical workload, record metrics
- **Comparison**: Calculate deltas for:
- CPU usage (per-node average)
- Memory RSS (per-node peak)
- RPC p99 latency
- Transaction throughput (TPS)
- Consensus round time p95
- Ledger close time p95
- Output: Markdown table comparing baseline vs. telemetry, with pass/fail against targets:
- CPU overhead < 3%
- Memory overhead < 5MB
- RPC latency impact < 2ms p99
- Throughput impact < 5%
- Consensus impact < 1%
- Store results in `docker/telemetry/workload/benchmark-results/` for historical tracking.
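The comparison step can be sketched as a delta calculation against the targets above (field names are illustrative; `benchmark.sh` itself does not exist yet):

```python
# Sketch: baseline-vs-telemetry overhead deltas with pass/fail targets.
def overhead_rows(baseline: dict, telemetry: dict) -> list[tuple[str, float, bool]]:
    """Return (metric, delta, passed) rows for the Markdown report."""
    cpu = telemetry["cpu_pct"] - baseline["cpu_pct"]
    mem = telemetry["rss_mb"] - baseline["rss_mb"]
    lat = telemetry["rpc_p99_ms"] - baseline["rpc_p99_ms"]
    tps = (baseline["tps"] - telemetry["tps"]) / baseline["tps"] * 100
    return [
        ("CPU overhead (%)", cpu, cpu < 3.0),
        ("Memory overhead (MB)", mem, mem < 5.0),
        ("RPC p99 latency impact (ms)", lat, lat < 2.0),
        ("Throughput impact (%)", tps, tps < 5.0),
    ]
```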
**Key files**:
- New: `docker/telemetry/workload/benchmark.sh`
- New: `docker/telemetry/workload/collect_system_metrics.sh`
---
## Task 10.6: CI Integration
**Objective**: Wire the validation suite into CI for regression detection.
**What to do**:
- Create a CI workflow (GitHub Actions or equivalent) that:
1. Builds rippled with `-DXRPL_ENABLE_TELEMETRY=ON`
2. Starts the multi-node workload harness
3. Runs the RPC load generator + transaction submitter for 2 minutes
4. Runs the validation suite
5. Runs the benchmark suite
6. Fails the build if any validation check fails or benchmark exceeds thresholds
7. Archives the validation report and benchmark results as artifacts
- This should be a separate workflow (not part of the main CI), triggered manually or on telemetry-related branch changes.
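A minimal workflow sketch, assuming standard GitHub Actions syntax (job names, branch filter, and artifact paths are illustrative):

```yaml
# Sketch of .github/workflows/telemetry-validation.yml
name: telemetry-validation
on:
  workflow_dispatch:
  push:
    branches: ["pratik/otel-phase10-*"]
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build rippled with telemetry
        run: cmake -B build -DXRPL_ENABLE_TELEMETRY=ON && cmake --build build
      - name: Run workload and validation
        run: ./docker/telemetry/workload/run-full-validation.sh
      - name: Archive reports
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: validation-report
          path: docker/telemetry/workload/validation-report.json
```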
**Key files**:
- New: `.github/workflows/telemetry-validation.yml`
- New: `docker/telemetry/workload/run-full-validation.sh` (orchestrator script)
---
## Task 10.7: Documentation
**Objective**: Document the workload tools and validation process.
**What to do**:
- Create `docker/telemetry/workload/README.md`:
- Quick start guide for running workload harness
- Configuration options for load generator and tx submitter
- How to read validation reports
- How to run benchmarks and interpret results
- Update `docs/telemetry-runbook.md`:
- Add "Validating Telemetry Stack" section
- Add "Performance Benchmarking" section
- Update `OpenTelemetryPlan/09-data-collection-reference.md`:
- Add "Validation" section with expected metric/span counts
---
## Task 10.8: External Dashboard Parity Validation Checks
> **Source**: [External Dashboard Parity](../docs/superpowers/specs/2026-03-30-external-dashboard-parity-design.md) — validates all additions from Phases 2-4 (span attributes), Phase 7 (metrics), and Phase 9 (dashboards).
>
> **Upstream**: Phase 7 Tasks 7.9-7.16 (metrics emitting), Phase 9 Tasks 9.11-9.13 (dashboards created).
> **Downstream**: Phase 11 (alert rules reference these metrics).
**Objective**: Add ~28 new checks to `validate_telemetry.py` for all span attributes, metrics, and dashboards introduced by the external dashboard parity work.
**New span attribute checks (8)**:
| Span Name | New Attribute | Expected Value |
| --------------------------- | ------------------------------------ | ------------------ |
| `rpc.command.server_info` | `xrpl.node.amendment_blocked` | boolean |
| `rpc.command.server_info` | `xrpl.node.server_state` | non-empty string |
| `tx.receive` | `xrpl.peer.version` | non-empty string |
| `consensus.validation.send` | `xrpl.validation.ledger_hash` | 64-char hex string |
| `consensus.validation.send` | `xrpl.validation.full` | boolean |
| `peer.validation.receive` | `xrpl.peer.validation.ledger_hash` | 64-char hex string |
| `consensus.accept` | `xrpl.consensus.validation_quorum` | int64 > 0 |
| `consensus.accept` | `xrpl.consensus.proposers_validated` | int64 > 0 |
**Implementation**: Add these attributes to `required_attributes` in `expected_spans.json` for the corresponding span entries.
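The expected-value column maps naturally onto shape predicates. A minimal sketch, with illustrative checker names (the actual `validate_telemetry.py` wiring may differ):

```python
# Sketch: span attribute value-shape checks for Task 10.8.
import re

_HEX64 = re.compile(r"^[0-9A-Fa-f]{64}$")

CHECKERS = {
    "hex64": lambda v: isinstance(v, str) and bool(_HEX64.match(v)),
    "boolean": lambda v: isinstance(v, bool),
    "nonempty_string": lambda v: isinstance(v, str) and len(v) > 0,
    "positive_int": lambda v: isinstance(v, int) and not isinstance(v, bool) and v > 0,
}

def check_attribute(kind: str, value) -> bool:
    """Validate one span attribute value against its expected shape."""
    return CHECKERS[kind](value)
```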
**New metric existence checks (13)**:
| Metric Name | Category |
| ---------------------------------------------------------- | ---------------- |
| `rippled_validation_agreement{metric="agreement_pct_1h"}` | Phase 7+ OTLP |
| `rippled_validation_agreement{metric="agreement_pct_24h"}` | Phase 7+ OTLP |
| `rippled_validator_health{metric="amendment_blocked"}` | Phase 7+ OTLP |
| `rippled_validator_health{metric="unl_expiry_days"}` | Phase 7+ OTLP |
| `rippled_peer_quality{metric="peer_latency_p90_ms"}` | Phase 7+ OTLP |
| `rippled_peer_quality{metric="peers_insane_count"}` | Phase 7+ OTLP |
| `rippled_ledger_economy{metric="base_fee_xrp"}` | Phase 7+ OTLP |
| `rippled_ledger_economy{metric="transaction_rate"}` | Phase 7+ OTLP |
| `rippled_state_tracking{metric="state_value"}` | Phase 7+ OTLP |
| `rippled_ledgers_closed_total` | Phase 7+ Counter |
| `rippled_validations_sent_total` | Phase 7+ Counter |
| `rippled_state_changes_total` | Phase 7+ Counter |
| `rippled_storage_detail{metric="nudb_bytes"}` | Phase 7+ OTLP |
**Implementation**: Add these to `expected_metrics.json` entries.
**New dashboard load checks (3)**:
| Dashboard UID | Dashboard Name |
| ---------------------------- | -------------------------------------------------------------- |
| `rippled-validator-health` | Validator Health |
| `rippled-peer-quality` | Peer Quality |
| `rippled-system-node-health` | System Node Health (updated — verify Ledger Economy row loads) |
**Implementation**: Add UIDs to the dashboard list in `expected_metrics.json`.
**New metric value sanity checks (4)**:
| Check | Condition |
| --------------------- | -------------------------------------- |
| `agreement_pct_1h` | Value in [0.0, 100.0] |
| `unl_expiry_days` | Value > 0 (UNL not expired) |
| `peer_latency_p90_ms` | Value > 0 (peers exist and responding) |
| `state_value` | Value in [0, 6] |
**Implementation**: Add sanity check functions in `validate_telemetry.py` after metric existence checks.
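The four sanity checks reduce to range predicates keyed by metric label. A minimal sketch (the Prometheus query plumbing is elided; names follow the table above):

```python
# Sketch: metric value sanity checks for Task 10.8.
SANITY_BOUNDS = {
    "agreement_pct_1h": (0.0, 100.0),    # percentage, inclusive range
    "unl_expiry_days": (0.0, None),      # > 0 means UNL not expired
    "peer_latency_p90_ms": (0.0, None),  # > 0 means peers responding
    "state_value": (0.0, 6.0),           # server state enum range
}

def sanity_check(metric: str, value: float) -> bool:
    """True when the sampled value lies within the expected bounds."""
    lo, hi = SANITY_BOUNDS[metric]
    if hi is None:
        return value > lo       # strict lower bound only
    return lo <= value <= hi    # inclusive range
```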
**Total**: ~28 new checks, bringing the validation suite from 73 to ~101 checks.
**Key modified files**:
- `docker/telemetry/workload/validate_telemetry.py`
- `docker/telemetry/workload/expected_spans.json`
- `docker/telemetry/workload/expected_metrics.json`
**Exit Criteria**:
- [ ] All 8 new span attribute checks pass
- [ ] All 13 new metric existence checks pass
- [ ] All 3 new dashboard load checks pass
- [ ] All 4 sanity checks pass
- [ ] Total check count updated in script output and documentation
---
## What "All 73 Checks" Means — Complete Enumeration
The validation suite (`validate_telemetry.py`) runs exactly **73 checks** grouped into 7 categories. Every item below is validated by name in CI. Nothing is optional — failure of any single check fails the entire suite.
### 1. Service Registration (1 check)
| # | Check | Backend |
| --- | ---------------------------------- | ---------------------- |
| 1 | `rippled` service exists in Jaeger | Jaeger `/api/services` |
### 2. Span Existence (17 checks)
Each span name must appear at least once in Jaeger traces for the `rippled` service.
| # | Span Name | Category | Config Flag |
| --- | --------------------------- | ----------- | ---------------------- |
| 2 | `rpc.request` | RPC | `trace_rpc=1` |
| 3 | `rpc.process` | RPC | `trace_rpc=1` |
| 4 | `rpc.ws_message` | RPC | `trace_rpc=1` |
| 5 | `rpc.command.*` | RPC | `trace_rpc=1` |
| 6 | `tx.process` | Transaction | `trace_transactions=1` |
| 7 | `tx.receive` | Transaction | `trace_transactions=1` |
| 8 | `tx.apply` | Transaction | `trace_transactions=1` |
| 9 | `consensus.proposal.send` | Consensus | `trace_consensus=1` |
| 10 | `consensus.ledger_close` | Consensus | `trace_consensus=1` |
| 11 | `consensus.accept` | Consensus | `trace_consensus=1` |
| 12 | `consensus.validation.send` | Consensus | `trace_consensus=1` |
| 13 | `consensus.accept.apply` | Consensus | `trace_consensus=1` |
| 14 | `ledger.build` | Ledger | `trace_ledger=1` |
| 15 | `ledger.validate` | Ledger | `trace_ledger=1` |
| 16 | `ledger.store` | Ledger | `trace_ledger=1` |
| 17 | `peer.proposal.receive` | Peer | `trace_peer=1` |
| 18 | `peer.validation.receive` | Peer | `trace_peer=1` |
### 3. Span Attribute Validation (14 checks)
14 of the 17 spans define `required_attributes`. Each check asserts all listed attributes are present on at least one instance of that span.
| # | Span Name | Required Attributes |
| --- | --------------------------- | -------------------------------------------------------------------------------------------------- |
| 19 | `rpc.command.*` | `xrpl.rpc.command`, `xrpl.rpc.version`, `xrpl.rpc.role`, `xrpl.rpc.status`, `xrpl.rpc.duration_ms` |
| 20 | `tx.process` | `xrpl.tx.hash`, `xrpl.tx.local`, `xrpl.tx.path` |
| 21 | `tx.receive` | `xrpl.peer.id`, `xrpl.tx.hash`, `xrpl.tx.suppressed`, `xrpl.tx.status` |
| 22 | `tx.apply` | `xrpl.ledger.seq`, `xrpl.ledger.tx_count`, `xrpl.ledger.tx_failed` |
| 23 | `consensus.proposal.send` | `xrpl.consensus.round` |
| 24 | `consensus.ledger_close` | `xrpl.consensus.ledger.seq`, `xrpl.consensus.mode` |
| 25 | `consensus.accept` | `xrpl.consensus.proposers` |
| 26 | `consensus.validation.send` | `xrpl.consensus.ledger.seq`, `xrpl.consensus.proposing` |
| 27 | `consensus.accept.apply` | `xrpl.consensus.close_time`, `xrpl.consensus.ledger.seq` |
| 28 | `ledger.build` | `xrpl.ledger.seq`, `xrpl.ledger.tx_count`, `xrpl.ledger.tx_failed` |
| 29 | `ledger.validate` | `xrpl.ledger.seq`, `xrpl.ledger.validations` |
| 30 | `ledger.store` | `xrpl.ledger.seq` |
| 31 | `peer.proposal.receive` | `xrpl.peer.id`, `xrpl.peer.proposal.trusted` |
| 32 | `peer.validation.receive` | `xrpl.peer.id`, `xrpl.peer.validation.trusted` |
### 4. Span Parent-Child Hierarchies (2 checks)
| # | Parent | Child | Status | Notes |
| --- | -------------- | --------------- | ------- | ---------------------------------------------------------------------------------------------------------------------------- |
| 33 | `rpc.process` | `rpc.command.*` | Active | Same thread — always valid |
| 34 | `ledger.build` | `tx.apply` | Active | Same thread — always valid |
| -- | `rpc.request` | `rpc.process` | Skipped | Cross-thread: `onRequest` posts to JobQueue coroutine. Span context not propagated across thread boundary. Requires C++ fix. |
### 5. Span Duration Bounds (1 check)
| # | Check | Criteria |
| --- | ------------------------------ | ---------------------------------------- |
| 35 | All spans have valid durations | Every span duration > 0 and < 60 seconds |
### 6. Metric Existence (26 checks)
Each metric name must have > 0 series in Prometheus (queried via `/api/v1/series` to avoid stale-gauge false negatives).
| # | Metric Name | Category | Source |
| --- | -------------------------------------------------- | ---------------- | ------------------------------------ |
| 36 | `traces_span_metrics_calls_total` | SpanMetrics | OTel Collector spanmetrics connector |
| 37 | `traces_span_metrics_duration_milliseconds_bucket` | SpanMetrics | OTel Collector spanmetrics connector |
| 38 | `traces_span_metrics_duration_milliseconds_count` | SpanMetrics | OTel Collector spanmetrics connector |
| 39 | `traces_span_metrics_duration_milliseconds_sum` | SpanMetrics | OTel Collector spanmetrics connector |
| 40 | `rippled_LedgerMaster_Validated_Ledger_Age` | StatsD Gauge | beast::insight via StatsD UDP |
| 41 | `rippled_LedgerMaster_Published_Ledger_Age` | StatsD Gauge | beast::insight via StatsD UDP |
| 42 | `rippled_State_Accounting_Full_duration` | StatsD Gauge | beast::insight via StatsD UDP |
| 43 | `rippled_Peer_Finder_Active_Inbound_Peers` | StatsD Gauge | beast::insight via StatsD UDP |
| 44 | `rippled_Peer_Finder_Active_Outbound_Peers` | StatsD Gauge | beast::insight via StatsD UDP |
| 45 | `rippled_jobq_job_count` | StatsD Gauge | beast::insight via StatsD UDP |
| 46 | `rippled_rpc_requests_total` | StatsD Counter | beast::insight via StatsD UDP |
| 47 | `rippled_ledger_fetches_total` | StatsD Counter | beast::insight via StatsD UDP |
| 48 | `rippled_rpc_time` | StatsD Histogram | beast::insight via StatsD UDP |
| 49 | `rippled_rpc_size` | StatsD Histogram | beast::insight via StatsD UDP |
| 50 | `rippled_ios_latency` | StatsD Histogram | beast::insight via StatsD UDP |
| 51 | `rippled_total_Bytes_In` | Overlay Traffic | beast::insight via StatsD UDP |
| 52 | `rippled_total_Bytes_Out` | Overlay Traffic | beast::insight via StatsD UDP |
| 53 | `rippled_total_Messages_In` | Overlay Traffic | beast::insight via StatsD UDP |
| 54 | `rippled_total_Messages_Out` | Overlay Traffic | beast::insight via StatsD UDP |
| 55 | `rippled_nodestore_state` | Phase 9 OTLP | MetricsRegistry via OTLP |
| 56 | `rippled_cache_metrics` | Phase 9 OTLP | MetricsRegistry via OTLP |
| 57 | `rippled_txq_metrics` | Phase 9 OTLP | MetricsRegistry via OTLP |
| 58 | `rippled_rpc_method_started_total` | Phase 9 OTLP | MetricsRegistry via OTLP |
| 59 | `rippled_rpc_method_finished_total` | Phase 9 OTLP | MetricsRegistry via OTLP |
| 60 | `rippled_object_count` | Phase 9 OTLP | MetricsRegistry via OTLP |
| 61 | `rippled_load_factor_metrics` | Phase 9 OTLP | MetricsRegistry via OTLP |
### 7. Dashboard Loads (12 checks)
Each Grafana dashboard must load successfully and contain at least one panel.
| # | Dashboard UID | Dashboard Name |
| --- | ------------------------------- | --------------------------------------- |
| 62 | `rippled-rpc-perf` | RPC Performance (OTel) |
| 63 | `rippled-transactions` | Transaction Overview |
| 64 | `rippled-consensus` | Consensus Health |
| 65 | `rippled-ledger-ops` | Ledger Operations |
| 66 | `rippled-peer-net` | Peer Network |
| 67 | `rippled-fee-market` | Fee Market & TxQ |
| 68 | `rippled-job-queue` | Job Queue Analysis |
| 69 | `rippled-system-node-health` | Node Health (System Metrics) |
| 70 | `rippled-system-network` | Network Traffic (System Metrics) |
| 71 | `rippled-system-rpc` | RPC & Pathfinding (System Metrics) |
| 72 | `rippled-system-overlay-detail` | Overlay Traffic Detail (System Metrics) |
| 73 | `rippled-system-ledger-sync` | Ledger Data & Sync (System Metrics) |
---
## Current Status: What Is Working vs. What Is Not
### Working (validated in CI run 23144741908 — 73/73 PASS)
1. **All 17 spans fire** with correct attributes under real workload (RPC + transaction + consensus)
2. **All 26 metrics exist** in Prometheus with non-zero series counts and per-node `exported_instance` labels (Node-1 through Node-6)
3. **All 12 Grafana dashboards** load and render panels (including Phase 9 Fee Market & TxQ, Job Queue Analysis)
4. **All 14 span attribute checks** pass, including `tx.receive` (fixed: default attributes on span creation)
5. **Both parent-child hierarchies** validate (`rpc.process` -> `rpc.command.*`, `ledger.build` -> `tx.apply`)
6. **All span durations** are within bounds (> 0, < 60 s)
7. **RPC load generator** fires 11 command types with < 50% error rate (native WS format)
8. **Transaction submitter** generates 10 transaction types at configurable TPS
9. **6-node validator cluster** starts and reaches consensus; all nodes emit distinct per-node metrics
10. **CI workflow** (`telemetry-validation.yml`) runs on push to `pratik/otel-phase10-*` branches and on `workflow_dispatch`
11. **Validation report** is JSON with exit codes, suitable for CI gating
12. **MetricsRegistry metrics** carry `rippled_` prefix and Resource attributes (`service.name`, `service.instance.id`) for Prometheus per-node filtering
13. **beast::insight OTel metrics** carry `service_instance_id` from `[insight]` config for per-node `exported_instance` labels
### Not Working / Not Available in CI / Not Implemented Yet
1. **Performance benchmark suite** (`benchmark.sh`, `collect_system_metrics.sh`) **not implemented**. Task 10.5 is not started. The exit criterion "Benchmark shows < 3% CPU overhead, < 5MB memory overhead" is **not met**.
2. **`rpc.request` -> `rpc.process` parent-child hierarchy** — **skipped** (not validated). Cross-thread span context propagation is broken: `onRequest` posts a coroutine to the JobQueue for `processRequest`, but the span context is not forwarded through the `std::function` lambda. Requires a C++ fix to capture and inject the parent span into the coroutine.
3. **Log-trace correlation validation** (Phase 8 Loki `trace_id` links) — **not included** in the 73 checks. The validation suite does not query Loki. This was listed in "Why This Phase Exists" item 3 but is not covered by the current validation.
4. **Full StatsD metric coverage** — the validation checks 26 representative metrics, not the full 255+ beast::insight StatsD metrics. Covering all 255+ would require a complete metric enumeration and significantly longer workload runs to trigger every code path.
5. **Sustained load / backpressure testing** — listed in "Why This Phase Exists" item 6 ("telemetry stack survives sustained load without data loss") but **not implemented**. The current workload runs for ~2 minutes, not long enough to test queue saturation.
6. **`docs/telemetry-runbook.md` updates** — Task 10.7 mentions adding "Validating Telemetry Stack" and "Performance Benchmarking" sections. The runbook has **not been updated**.
7. **`09-data-collection-reference.md` updates** — Task 10.7 mentions adding a "Validation" section with expected metric/span counts. This has **not been updated**.
---
## Exit Criteria
- [x] 6-node validator cluster starts and reaches consensus with per-node metric differentiation
- [x] RPC load generator fires all traced RPC commands at configurable rates
- [x] Transaction submitter generates 10 transaction types at configurable TPS
- [x] Validation suite confirms all spans, attributes, and metrics pass (73/73 checks)
- [ ] External dashboard parity checks pass (~28 new checks, Task 10.8)
- [x] All 12 Grafana dashboards render data with `$node` filter showing Node-1 through Node-6
- [ ] Benchmark shows < 3% CPU overhead, < 5MB memory overhead
- [x] CI workflow runs validation on telemetry branch changes
- [x] Validation report output is CI-parseable (JSON with exit codes)

# Phase 11: Third-Party Data Collection Pipelines — Task List
> **Status**: Future Enhancement
>
> **Goal**: Build a custom OTel Collector receiver that periodically polls rippled's admin RPCs and exports structured metrics for external consumers — making all XRPL health, validator, peer, fee, and DEX data available as Prometheus/OTLP metrics without rippled code changes.
>
> **Scope**: Go-based OTel Collector receiver plugin + Grafana dashboards + Prometheus alerting rules.
>
> **Branch**: `pratik/otel-phase11-third-party-collection` (from `pratik/otel-phase10-workload-validation`)
>
> **Depends on**: Phase 10 (validation harness for testing the new receiver)
### Related Plan Documents
| Document | Relevance |
| -------------------------------------------------------------------- | --------------------------------------------------------------- |
| [06-implementation-phases.md](./06-implementation-phases.md) | Phase 11 plan: motivation, architecture, exit criteria (§6.8.4) |
| [09-data-collection-reference.md](./09-data-collection-reference.md) | Defines full metric inventory including third-party metrics |
| [Phase10_taskList.md](./Phase10_taskList.md) | Prerequisite — validation harness for testing |
### Third-Party Consumer Gap Analysis
This phase addresses the cross-cutting gap identified during research: **rippled has no native Prometheus/OTLP metrics export for data accessible only via RPC**. Every consumer (exchanges, payment processors, analytics providers, validators, researchers, compliance firms, custodians) must build custom JSON-RPC polling and conversion. This receiver centralizes that work.
| Consumer Category | Data Unlocked by This Phase |
| -------------------------- | ------------------------------------------------------------------ |
| **Exchanges** | Real-time fee estimates, TxQ capacity, server health scores |
| **Payment Processors** | Settlement latency percentiles, corridor health, path availability |
| **Analytics Providers** | Validator metrics, network topology, amendment voting status |
| **DeFi / AMM** | AMM pool TVL, DEX order book depth, trade volumes |
| **Validators / Operators** | Per-peer latency, version distribution, UNL health, alerting |
| **Compliance** | Transaction volume trends, network growth metrics |
| **Academic Researchers** | Consensus performance time-series, decentralization metrics |
| **CBDC / Tokenization** | Token supply tracking, trust line adoption, freeze status |
| **Institutional Custody** | Multi-sig status, escrow tracking, reserve calculations |
| **Wallet Providers** | Server health for node selection, fee prediction data |
---
## Task 11.1: OTel Collector Receiver Scaffold
**Objective**: Create the Go project structure for a custom OTel Collector receiver that polls rippled JSON-RPC.
**What to do**:
- Create `docker/telemetry/otel-rippled-receiver/`:
- `receiver.go` — implements `receiver.Metrics` interface
- `config.go` — configuration struct (endpoint, poll interval, enabled RPCs)
- `factory.go` — receiver factory registration
- `go.mod` / `go.sum` — Go module with OTel Collector SDK dependency
- Configuration model:
```yaml
rippled_receiver:
endpoint: "http://localhost:5005" # rippled admin RPC
poll_interval: 30s # how often to poll
enabled_collectors:
- server_info
- get_counts
- fee
- peers
- validators
- feature
- server_state
amm_pools: [] # optional: AMM pool IDs to track
book_offers_pairs: [] # optional: currency pairs for DEX depth
```
- Build a custom OTel Collector binary that includes this receiver alongside the standard receivers.
**Key files**:
- New: `docker/telemetry/otel-rippled-receiver/receiver.go`
- New: `docker/telemetry/otel-rippled-receiver/config.go`
- New: `docker/telemetry/otel-rippled-receiver/factory.go`
- New: `docker/telemetry/otel-rippled-receiver/go.mod`
- New: `docker/telemetry/otel-rippled-receiver/Dockerfile`
---
## Task 11.2: server_info / server_state Collector
**Objective**: Poll `server_info` and `server_state` and export all fields as OTel metrics.
**What to do**:
- Implement `serverInfoCollector` that calls `server_info` (admin) and extracts:
**Node Health Gauges:**
- `xrpl_server_state` (enum → int: disconnected=0, connected=1, syncing=2, tracking=3, full=4, proposing=5)
- `xrpl_server_state_duration_seconds`
- `xrpl_uptime_seconds`
- `xrpl_io_latency_ms`
- `xrpl_amendment_blocked` (0 or 1)
- `xrpl_peers_count`
- `xrpl_peer_disconnects_total`
- `xrpl_peer_disconnects_resources_total`
- `xrpl_jq_trans_overflow_total`
**Consensus Gauges:**
- `xrpl_last_close_proposers`
- `xrpl_last_close_converge_time_seconds`
- `xrpl_validation_quorum`
**Ledger Gauges:**
- `xrpl_validated_ledger_seq`
- `xrpl_validated_ledger_age_seconds`
- `xrpl_validated_ledger_base_fee_drops`
- `xrpl_validated_ledger_reserve_base_drops`
- `xrpl_validated_ledger_reserve_inc_drops`
- `xrpl_close_time_offset_seconds` (0 when absent)
**Load Factor Gauges:**
- `xrpl_load_factor`
- `xrpl_load_factor_server`
- `xrpl_load_factor_fee_escalation`
- `xrpl_load_factor_fee_queue`
- `xrpl_load_factor_local`
- `xrpl_load_factor_net`
- `xrpl_load_factor_cluster`
**State Accounting Gauges** (per state: disconnected, connected, syncing, tracking, full):
- `xrpl_state_duration_seconds{state="<name>"}`
- `xrpl_state_transitions_total{state="<name>"}`
**Validator Info** (when node is a validator):
- `xrpl_validator_list_count`
- `xrpl_validator_list_expiration_seconds` (epoch)
- `xrpl_validator_list_active` (0 or 1)
**Key files**:
- New: `docker/telemetry/otel-rippled-receiver/collectors/server_info.go`
---
## Task 11.3: get_counts Collector
**Objective**: Poll `get_counts` and export internal object counts and NodeStore stats.
**What to do**:
- Implement `getCountsCollector`:
**Database Gauges:**
- `xrpl_db_size_kb{db="total"}`, `xrpl_db_size_kb{db="ledger"}`, `xrpl_db_size_kb{db="transaction"}`
**NodeStore Gauges:**
- `xrpl_nodestore_reads_total`, `xrpl_nodestore_reads_hit`, `xrpl_nodestore_writes_total`
- `xrpl_nodestore_read_bytes`, `xrpl_nodestore_written_bytes`
- `xrpl_nodestore_read_duration_us`, `xrpl_nodestore_write_load`
- `xrpl_nodestore_read_queue`, `xrpl_nodestore_read_threads_running`
**Cache Gauges:**
- `xrpl_cache_hit_rate{cache="SLE"}`, `xrpl_cache_hit_rate{cache="ledger"}`, `xrpl_cache_hit_rate{cache="accepted_ledger"}`
- `xrpl_cache_size{cache="treenode"}`, `xrpl_cache_size{cache="fullbelow"}`, `xrpl_cache_size{cache="accepted_ledger"}`
**Object Count Gauges:**
- `xrpl_object_count{type="<name>"}` for each counted object type (Transaction, Ledger, NodeObject, STTx, STLedgerEntry, InboundLedger, Pathfinder, etc.)
**Rates:**
- `xrpl_historical_fetch_per_minute`
- `xrpl_local_txs`
**Key files**:
- New: `docker/telemetry/otel-rippled-receiver/collectors/get_counts.go`
---
## Task 11.4: Peer Topology Collector
**Objective**: Poll `peers` and export per-peer and aggregate network metrics.
**What to do**:
- Implement `peersCollector`:
**Aggregate Gauges:**
- `xrpl_peers_inbound_count`
- `xrpl_peers_outbound_count`
- `xrpl_peers_cluster_count`
**Per-Peer Gauges** (with labels `peer_key` truncated to 8 chars for cardinality control):
- `xrpl_peer_latency_ms{peer="<key>", version="<ver>", inbound="<bool>"}`
- `xrpl_peer_uptime_seconds{peer="<key>"}`
- `xrpl_peer_load{peer="<key>"}`
**Distribution Gauges** (aggregated across all peers):
- `xrpl_peer_latency_p50_ms`, `xrpl_peer_latency_p95_ms`, `xrpl_peer_latency_p99_ms`
- `xrpl_peer_version_count{version="<semver>"}` — count of peers per software version
**Tracking Status:**
- `xrpl_peer_diverged_count` — peers with `track=diverged`
- `xrpl_peer_unknown_count` — peers with `track=unknown`
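The aggregate distribution gauges can be sketched in Python (the receiver itself is Go); this uses nearest-rank percentiles, which is one reasonable choice, not necessarily the collector's:

```python
# Sketch: aggregate peer-latency percentile gauges (nearest-rank method).
def latency_percentile(latencies_ms: list[float], pct: float) -> float:
    """Nearest-rank percentile over per-peer latency samples."""
    if not latencies_ms:
        return 0.0
    ordered = sorted(latencies_ms)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

def peer_latency_gauges(latencies_ms: list[float]) -> dict[str, float]:
    """Emit the three aggregate gauges from one peers-RPC sample."""
    return {
        "xrpl_peer_latency_p50_ms": latency_percentile(latencies_ms, 50),
        "xrpl_peer_latency_p95_ms": latency_percentile(latencies_ms, 95),
        "xrpl_peer_latency_p99_ms": latency_percentile(latencies_ms, 99),
    }
```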
**Key files**:
- New: `docker/telemetry/otel-rippled-receiver/collectors/peers.go`
**Cardinality note**: Per-peer metrics use truncated keys. For large peer sets (50+), the aggregate distribution gauges are preferred over per-peer labels.
---
## Task 11.5: Validator & Amendment Collector
**Objective**: Poll `validators` and `feature` to export validator health and amendment voting status.
**What to do**:
- Implement `validatorCollector`:
**From `validators` RPC:**
- `xrpl_trusted_validators_count`
- `xrpl_validator_signing` (0 or 1 — whether local validator is signing)
**From `feature` RPC:**
- `xrpl_amendment_enabled_count` — total enabled amendments
- `xrpl_amendment_majority_count` — amendments with majority but not yet enabled
- `xrpl_amendment_vetoed_count` — locally vetoed amendments
- `xrpl_amendment_unsupported_majority` (0 or 1) — any unsupported amendment has majority (critical alert)
**Per-amendment with majority** (limited cardinality — only amendments with `majority` set):
- `xrpl_amendment_majority_time{name="<amendment>"}` — epoch time when majority was gained
- `xrpl_amendment_votes{name="<amendment>"}` — current vote count
- `xrpl_amendment_threshold{name="<amendment>"}` — votes needed
**Key files**:
- New: `docker/telemetry/otel-rippled-receiver/collectors/validators.go`
---
## Task 11.6: Fee & TxQ Collector
**Objective**: Poll `fee` RPC and export real-time fee market data.
**What to do**:
- Implement `feeCollector` that calls the public `fee` RPC:
**Fee Level Gauges:**
- `xrpl_fee_current_ledger_size` — transactions in current open ledger
- `xrpl_fee_expected_ledger_size` — expected transactions at close
- `xrpl_fee_max_queue_size` — maximum transaction queue size
- `xrpl_fee_open_ledger_fee_drops` — minimum fee for open ledger inclusion
- `xrpl_fee_median_fee_drops` — median fee level
- `xrpl_fee_minimum_fee_drops` — base reference fee
- `xrpl_fee_queue_size` — current queue depth
- This overlaps with Phase 9's internal TxQ metrics but provides an external-only collection path that doesn't require rippled code changes.
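The `fee` RPC reports values both in drops and in TxQ "fee levels", where level 256 corresponds to the reference (base) transaction fee. Should the collector ever need to derive a drops gauge from a level, the conversion is a scaled division — a sketch assuming the 256 reference level (the helper name is illustrative):

```cpp
#include <cstdint>

// rippled's TxQ scales fees as "levels": level 256 == the reference
// (base) transaction fee. Converting a level back to drops is:
//   drops = baseFeeDrops * feeLevel / 256, rounded to the nearest drop.
constexpr std::uint64_t referenceFeeLevel = 256;

inline std::uint64_t
feeLevelToDrops(std::uint64_t baseFeeDrops, std::uint64_t feeLevel)
{
    // add half the divisor before dividing to round rather than truncate
    return (baseFeeDrops * feeLevel + referenceFeeLevel / 2) /
        referenceFeeLevel;
}
```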
**Key files**:
- New: `docker/telemetry/otel-rippled-receiver/collectors/fee.go`
---
## Task 11.7: DEX & AMM Collector (Optional)
**Objective**: Periodically poll configured AMM pools and order book pairs for DeFi metrics.
**What to do**:
- Implement `dexCollector` (enabled only when `amm_pools` or `book_offers_pairs` are configured):
**AMM Pool Gauges** (per configured pool):
- `xrpl_amm_reserve{pool="<id>", asset="<currency>"}` — pool reserve amount
- `xrpl_amm_lp_token_supply{pool="<id>"}` — outstanding LP tokens
- `xrpl_amm_trading_fee{pool="<id>"}` — pool trading fee (basis points)
- `xrpl_amm_tvl_drops{pool="<id>"}` — total value locked (XRP-denominated)
**Order Book Gauges** (per configured pair):
- `xrpl_orderbook_bid_depth{pair="<base>/<quote>"}` — total bid volume
- `xrpl_orderbook_ask_depth{pair="<base>/<quote>"}` — total ask volume
- `xrpl_orderbook_spread{pair="<base>/<quote>"}` — best bid-ask spread
- `xrpl_orderbook_offer_count{pair="<base>/<quote>", side="bid|ask"}` — number of offers
**Key files**:
- New: `docker/telemetry/otel-rippled-receiver/collectors/dex.go`
**Note**: This is optional because it requires explicit configuration of which pools/pairs to track. Default configuration tracks no DEX data.
---
## Task 11.8: Prometheus Alerting Rules
**Objective**: Create production-ready alerting rules for the metrics exported by this receiver.
**What to do**:
- Create `docker/telemetry/prometheus/rippled-alerts.yml`:
**Tier 1 — Critical (page immediately):**
```yaml
- alert: XRPLServerNotFull
expr: xrpl_server_state < 4
for: 15m
- alert: XRPLAmendmentBlocked
expr: xrpl_amendment_blocked == 1
for: 1m
- alert: XRPLNoPeers
expr: xrpl_peers_count == 0
for: 5m
- alert: XRPLLedgerStale
expr: xrpl_validated_ledger_age_seconds > 120
for: 2m
- alert: XRPLHighIOLatency
expr: xrpl_io_latency_ms > 100
for: 5m
- alert: XRPLUnsupportedAmendmentMajority
expr: xrpl_amendment_unsupported_majority == 1
for: 1m
```
**Tier 2 — Warning (investigate within hours):**
```yaml
- alert: XRPLLowPeerCount
expr: xrpl_peers_count < 10
for: 15m
- alert: XRPLHighLoadFactor
expr: xrpl_load_factor > 10
for: 10m
- alert: XRPLSlowConsensus
expr: xrpl_last_close_converge_time_seconds > 6
for: 5m
- alert: XRPLValidatorListExpiring
expr: (xrpl_validator_list_expiration_seconds - time()) < 86400
for: 1h
- alert: XRPLClockDrift
expr: xrpl_close_time_offset_seconds > 0
for: 5m
- alert: XRPLStateFlapping
expr: rate(xrpl_state_transitions_total{state="full"}[1h]) > 2
for: 30m
```
**Key files**:
- New: `docker/telemetry/prometheus/rippled-alerts.yml`
- Update: `docker/telemetry/prometheus/prometheus.yml` (add rule_files reference)
---
## Task 11.9: New Grafana Dashboards
**Objective**: Create 4 new dashboards for the data exported by the receiver.
**What to do**:
- **Validator Health** (`rippled-validator-health`):
- Server state timeline, state duration breakdown
- Proposer count trend, converge time trend, validation quorum
- Validator list expiration countdown
- Amendment voting status (majority/enabled/vetoed)
- **Network Topology** (`rippled-network-topology`):
- Peer count (inbound/outbound/cluster), peer version distribution
- Peer latency distribution (p50/p95/p99), diverged peer count
- Geographic distribution (if enriched with GeoIP)
- Peer uptime distribution
- **Fee Market** (`rippled-fee-market-external`):
- Current fee levels (open ledger, median, minimum), fee escalation timeline
- Queue depth vs. capacity, transactions per ledger
- Load factor breakdown (server/network/cluster/escalation)
- **DEX & AMM Overview** (`rippled-dex-amm`) (only populated when DEX collectors are configured):
- AMM pool TVL, reserve ratios, LP token supply
- Order book depth per pair, spread trends
- Trading fee revenue estimates
**Key files**:
- New: `docker/telemetry/grafana/dashboards/rippled-validator-health.json`
- New: `docker/telemetry/grafana/dashboards/rippled-network-topology.json`
- New: `docker/telemetry/grafana/dashboards/rippled-fee-market-external.json`
- New: `docker/telemetry/grafana/dashboards/rippled-dex-amm.json`
---
## Task 11.10: Integration with Phase 10 Validation
**Objective**: Extend the Phase 10 validation suite to verify this receiver's metrics.
**What to do**:
- Update `docker/telemetry/workload/validate_telemetry.py`:
- Add assertions for all `xrpl_*` metrics produced by the receiver
- Verify metric labels have expected values
- Verify alerting rules fire correctly (inject a "bad" state and check alert)
- Update `docker/telemetry/docker-compose.workload.yaml`:
- Add the custom OTel Collector build with the rippled receiver
- Configure the receiver to poll one of the test nodes
**Key files**:
- Update: `docker/telemetry/workload/validate_telemetry.py`
- Update: `docker/telemetry/docker-compose.workload.yaml`
- Update: `docker/telemetry/workload/expected_metrics.json`
---
## Task 11.11: Documentation
**Objective**: Document the receiver, its metrics, deployment, and alerting.
**What to do**:
- Create `docker/telemetry/otel-rippled-receiver/README.md`:
- Architecture overview (how the receiver fits into the OTel Collector)
- Configuration reference (all config options with defaults)
- Metric reference table (all exported metrics with types and labels)
- Deployment guide (building custom collector binary, docker-compose integration)
- Update `OpenTelemetryPlan/09-data-collection-reference.md`:
- Add "Third-Party Metrics (OTel Collector Receiver)" section
- Add new Grafana dashboard reference (4 dashboards)
- Add alerting rules reference
- Update `docs/telemetry-runbook.md`:
- Add "Third-Party Metrics Receiver" troubleshooting section
- Add alerting playbook (what to do for each Tier 1/Tier 2 alert)
---
## Task 11.12: Alert Rules for External Dashboard Parity Metrics
> **Source**: [External Dashboard Parity](../docs/superpowers/specs/2026-03-30-external-dashboard-parity-design.md) — 18 alert rules ported from the community [xrpl-validator-dashboard](https://github.com/realgrapedrop/xrpl-validator-dashboard).
>
> **Upstream**: Phase 7 Tasks 7.9-7.16 (metrics), Phase 9 Tasks 9.11-9.13 (dashboards).
> **Downstream**: None — terminal task in the parity chain.
**Objective**: Add Grafana alerting rules for the Phase 7+ parity metrics (validation agreement, validator health, peer quality, state tracking, ledger economy). These complement Task 11.8's `xrpl_*` alerts by covering the `rippled_*` internal metrics.
**Critical Group** (8 rules, eval interval 10s):
| Rule | Condition | For |
| ------------------- | --------------------------------------------------------------- | --- |
| Agreement Below 90% | `rippled_validation_agreement{metric="agreement_pct_24h"} < 90` | 30s |
| Not Proposing | `rippled_state_tracking{metric="state_value"} < 6` | 10s |
| Unhealthy State | `rippled_state_tracking{metric="state_value"} < 4` | 10s |
| Amendment Blocked | `rippled_validator_health{metric="amendment_blocked"} == 1` | 1m |
| UNL Expiring | `rippled_validator_health{metric="unl_expiry_days"} < 14` | 1h |
| High IO Latency | `histogram_quantile(0.95, rippled_ios_latency_bucket) > 50` | 1m |
| High Load Factor | `rippled_load_factor_metrics{metric="load_factor"} > 1000` | 1m |
| Peer Count Critical | `rippled_server_info{metric="peers"} < 5` | 1m |
**Network Group** (3 rules, eval interval 10s):
| Rule | Condition | For |
| ------------------------- | ------------------------------------------------------------------- | --- |
| Peer Drop >10% | `delta(rippled_server_info{metric="peers"}[30s]) / ... * 100 < -10` | 30s |
| Peer Drop >30% | Same formula, threshold -30 | 30s |
| P90 Latency + Disconnects | `peer_latency_p90_ms > 500 AND rate(disconnects) > 0` | 2m |
**Performance Group** (7 rules, eval interval 10s):
| Rule | Condition | For |
| ------------------- | -------------------------------------------------------------- | --- |
| CPU High | Per-core CPU > 80% (requires node_exporter) | 2m |
| Memory Critical | Memory usage > 90% (requires node_exporter) | 1m |
| Disk Warning | Disk usage > 85% (requires node_exporter) | 2m |
| Job Queue Overflow | `rate(rippled_jq_trans_overflow_total[5m]) > 0` | 1m |
| Upgrade Recommended | `rippled_peer_quality{metric="peers_higher_version_pct"} > 60` | 1m |
| TX Rate Drop | Transaction rate dropped > 50% in 5m window | 5m |
| Stale Ledger | `rippled_ledger_economy{metric="ledger_age_seconds"} > 30` | 1m |
**Notification channel templates**: Email/SMTP, Discord, Slack, PagerDuty.
**Key files**:
- New/extend: `docker/telemetry/grafana/alerting/alert-rules-parity.yaml`
- New: `docker/telemetry/grafana/alerting/contact-points.yaml` (template configs)
- New: `docker/telemetry/grafana/alerting/notification-policies.yaml`
**Exit Criteria**:
- [ ] All 18 rules evaluate without errors in Grafana alerting UI
- [ ] Critical rules fire within expected timeframe when conditions are met
- [ ] Notification channel templates are documented (not hard-coded to any service)
---
## Task 11.13: Dual-Datasource Architecture Documentation
> **Source**: [External Dashboard Parity](../docs/superpowers/specs/2026-03-30-external-dashboard-parity-design.md)
**Objective**: Document the external dashboard's "fast path" pattern as a future optimization for real-time panels.
**Pattern**: A lightweight Prometheus scrape endpoint (separate from OTLP pipeline) that polls critical metrics every 2-5s, bypassing the 10s OTLP metric reader interval and Prometheus scrape interval.
**Use case**: Real-time state panels (server state, ledger age, peer count) where 10-15s latency is too slow for operational dashboards.
**Decision**: Document as a future option; do not implement it now. The current 10s interval is acceptable for v1. The external dashboard achieves 2-5s freshness by polling RPC directly, which is what the Phase 11 receiver already does. Adding a separate scrape endpoint to rippled would only be needed if sub-second metric freshness is required from the internal metrics pipeline.
**What to document**:
- Architecture comparison: OTLP pipeline (10-15s) vs. direct scrape (2-5s) vs. push gateway
- When to consider: operator feedback indicating 10s is insufficient for alerting SLOs
- How to implement if needed: add `/metrics` HTTP endpoint to rippled with Prometheus client library
- Trade-offs: additional port, additional dependency, duplication with OTLP metrics
**Key files**:
- Update: `OpenTelemetryPlan/09-data-collection-reference.md` (add "Future: Dual-Datasource Architecture" section)
- Update: `docs/telemetry-runbook.md` (add brief note in performance tuning section)
**Exit Criteria**:
- [ ] Architecture comparison documented with clear trade-offs
- [ ] Decision rationale recorded (why deferred, when to revisit)
---
## Exit Criteria
- [ ] Custom OTel Collector receiver builds and starts without errors
- [ ] All `xrpl_*` metrics from server_info, get_counts, peers, validators, fee appear in Prometheus
- [ ] Metrics update at configured poll interval (default 30s)
- [ ] 4 new Grafana dashboards operational with data
- [ ] Prometheus alerting rules fire correctly for simulated failure conditions
- [ ] DEX/AMM collector works when configured (optional — not required for base exit criteria)
- [ ] Phase 10 validation suite passes with receiver metrics included
- [ ] Receiver handles rippled restart/unavailability gracefully (no crash, logs warning, retries)
- [ ] Documentation complete: receiver README, metric reference, alerting playbook
- [ ] Go receiver has unit tests with >80% coverage
- [ ] 18 Grafana alert rules for Phase 7+ parity metrics evaluate correctly (Task 11.12)
- [ ] Dual-datasource architecture documented with trade-offs (Task 11.13)

# Phase 2: RPC Tracing Completion Task List
> **Goal**: Complete full RPC tracing coverage with W3C Trace Context propagation, unit tests, and performance validation. Build on the POC foundation to achieve production-quality RPC observability.
>
> **Scope**: W3C header extraction, TraceContext propagation utilities, unit tests for core telemetry, integration tests for RPC tracing, and performance benchmarks.
>
> **Branch**: `pratik/otel-phase2-rpc-tracing` (from `pratik/OpenTelemetry_and_DistributedTracing_planning`)
### Related Plan Documents
| Document | Relevance |
| ------------------------------------------------------------ | ------------------------------------------------------------- |
| [04-code-samples.md](./04-code-samples.md) | TraceContextPropagator (§4.4.2), RPC instrumentation (§4.5.3) |
| [02-design-decisions.md](./02-design-decisions.md) | W3C Trace Context (§2.5), span attributes (§2.4.2) |
| [06-implementation-phases.md](./06-implementation-phases.md) | Phase 2 tasks (§6.3), definition of done (§6.11.2) |
---
## Task 2.1: Implement W3C Trace Context HTTP Header Extraction
**Objective**: Extract `traceparent` and `tracestate` headers from incoming HTTP RPC requests so external callers can propagate their trace context into rippled.
**What to do**:
- Create `include/xrpl/telemetry/TraceContextPropagator.h`:
- `extractFromHeaders(headerGetter)` - extract W3C traceparent/tracestate from HTTP headers
- `injectToHeaders(ctx, headerSetter)` - inject trace context into response headers
- Use OTel's `TextMapPropagator` with `W3CTraceContextPropagator` for standards compliance
- Only compiled when `XRPL_ENABLE_TELEMETRY` is defined
- Create `src/libxrpl/telemetry/TraceContextPropagator.cpp`:
- Implement a simple `TextMapCarrier` adapter for HTTP headers
- Use `opentelemetry::context::propagation::GlobalTextMapPropagator` for extraction/injection
- Register the W3C propagator in `TelemetryImpl::start()`
- Modify `src/xrpld/rpc/detail/ServerHandler.cpp`:
- In the HTTP request handler, extract parent context from headers before creating span
- Pass extracted context to `startSpan()` as parent
- Inject trace context into response headers
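For reference, the `traceparent` wire format the propagator consumes is `version-traceid-spanid-flags` in lowercase hex (W3C Trace Context). The real implementation should delegate to OTel's `W3CTraceContextPropagator`; the sketch below is a hypothetical standalone helper, not part of the plan's API, and only illustrates the validation the header requires:

```cpp
#include <cctype>
#include <cstdint>
#include <optional>
#include <string>

// Parsed form of: traceparent: 00-<32 hex trace-id>-<16 hex span-id>-<2 hex flags>
struct ParsedTraceParent
{
    std::string traceId;  // 32 lowercase hex chars
    std::string spanId;   // 16 lowercase hex chars
    std::uint8_t flags;   // bit 0 = sampled
};

inline std::optional<ParsedTraceParent>
parseTraceParent(std::string const& header)
{
    // version(2) '-' traceid(32) '-' spanid(16) '-' flags(2) == 55 chars
    if (header.size() != 55 || header[2] != '-' || header[35] != '-' ||
        header[52] != '-')
        return std::nullopt;
    auto isHex = [](std::string const& s) {
        for (char c : s)
            if (!std::isxdigit(static_cast<unsigned char>(c)))
                return false;
        return true;
    };
    ParsedTraceParent out;
    out.traceId = header.substr(3, 32);
    out.spanId = header.substr(36, 16);
    auto flagsHex = header.substr(53, 2);
    if (!isHex(header.substr(0, 2)) || !isHex(out.traceId) ||
        !isHex(out.spanId) || !isHex(flagsHex))
        return std::nullopt;
    // all-zero trace-id or span-id is invalid per the spec
    if (out.traceId == std::string(32, '0') ||
        out.spanId == std::string(16, '0'))
        return std::nullopt;
    out.flags = static_cast<std::uint8_t>(std::stoul(flagsHex, nullptr, 16));
    return out;
}
```

Malformed headers must be ignored (new root span started), never rejected at the HTTP layer.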
**Key new files**:
- `include/xrpl/telemetry/TraceContextPropagator.h`
- `src/libxrpl/telemetry/TraceContextPropagator.cpp`
**Key modified files**:
- `src/xrpld/rpc/detail/ServerHandler.cpp`
- `src/libxrpl/telemetry/Telemetry.cpp` (register W3C propagator)
**Reference**:
- [04-code-samples.md §4.4.2](./04-code-samples.md) — TraceContextPropagator with extractFromHeaders/injectToHeaders
- [02-design-decisions.md §2.5](./02-design-decisions.md) — W3C Trace Context propagation design
---
## Task 2.2: Add XRPL_TRACE_PEER Macro
**Objective**: Add the missing peer-tracing macro for future Phase 3 use and ensure macro completeness.
**What to do**:
- Edit `src/xrpld/telemetry/TracingInstrumentation.h`:
- Add `XRPL_TRACE_PEER(_tel_obj_, _span_name_)` macro that checks `shouldTracePeer()`
- Add `XRPL_TRACE_LEDGER(_tel_obj_, _span_name_)` macro (for future ledger tracing)
- Ensure disabled variants expand to `((void)0)`
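The conditional-macro pattern can be sketched as follows. `SpanGuard` and `Telemetry` here are minimal stand-ins so the example is self-contained; the real definitions live in `TracingInstrumentation.h` and the telemetry library, and only the enabled variant is shown (disabled builds define the same macro as `((void)0)`):

```cpp
#include <optional>
#include <string>

// Hypothetical stand-ins for the real telemetry types.
struct SpanGuard
{
    std::string name;
};
struct Telemetry
{
    bool tracePeers = true;
    bool shouldTracePeer() const { return tracePeers; }
    SpanGuard startSpan(std::string n) const { return SpanGuard{std::move(n)}; }
};

// Enabled variant: declare an empty optional guard, then fill it only when
// the per-category flag is on. Call sites need no #ifdef of their own.
#define XRPL_TRACE_PEER(_tel_obj_, _span_name_)                 \
    std::optional<SpanGuard> _xrpl_guard_;                      \
    if ((_tel_obj_).shouldTracePeer())                          \
        _xrpl_guard_.emplace((_tel_obj_).startSpan(_span_name_))

inline bool
demoTracePeer(Telemetry const& tel)
{
    XRPL_TRACE_PEER(tel, "peer.demo");
    return _xrpl_guard_.has_value();  // span active iff tracing enabled
}
```

Wrapping the guard in `std::optional` keeps the pattern compatible with attribute macros that call `_xrpl_guard_.has_value()` (see the Known Issues note on macro compatibility).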
**Key modified file**:
- `src/xrpld/telemetry/TracingInstrumentation.h`
---
## Task 2.3: Add shouldTraceLedger() to Telemetry Interface
**Objective**: The `Setup` struct has a `traceLedger` field but there's no corresponding virtual method. Add it for interface completeness.
**What to do**:
- Edit `include/xrpl/telemetry/Telemetry.h`:
- Add `virtual bool shouldTraceLedger() const = 0;`
- Update all implementations:
- `src/libxrpl/telemetry/Telemetry.cpp` (TelemetryImpl, NullTelemetryOtel)
- `src/libxrpl/telemetry/NullTelemetry.cpp` (NullTelemetry)
**Key modified files**:
- `include/xrpl/telemetry/Telemetry.h`
- `src/libxrpl/telemetry/Telemetry.cpp`
- `src/libxrpl/telemetry/NullTelemetry.cpp`
---
## Task 2.4: Unit Tests for Core Telemetry Infrastructure
**Objective**: Add unit tests for the core telemetry abstractions to validate correctness and catch regressions.
**What to do**:
- Create `src/test/telemetry/Telemetry_test.cpp`:
- Test NullTelemetry: verify all methods return expected no-op values
- Test Setup defaults: verify all Setup fields have correct defaults
- Test setup_Telemetry config parser: verify parsing of [telemetry] section
- Test enabled/disabled factory paths
- Test shouldTrace\* methods respect config flags
- Create `src/test/telemetry/SpanGuard_test.cpp`:
- Test SpanGuard RAII lifecycle (span ends on destruction)
- Test move constructor works correctly
- Test setAttribute, setOk, setStatus, addEvent, recordException
- Test context() returns valid context
- Add test files to CMake build
**Key new files**:
- `src/test/telemetry/Telemetry_test.cpp`
- `src/test/telemetry/SpanGuard_test.cpp`
**Reference**:
- [06-implementation-phases.md §6.11.1](./06-implementation-phases.md) — Phase 1 exit criteria (unit tests passing)
---
## Task 2.5: Enhance RPC Span Attributes
**Objective**: Add additional attributes to RPC spans per the semantic conventions defined in the plan.
**What to do**:
- Edit `src/xrpld/rpc/detail/ServerHandler.cpp`:
- Add `http.method` attribute for HTTP requests
- Add `http.status_code` attribute for responses
- Add `net.peer.ip` attribute for client IP (if available)
- Edit `src/xrpld/rpc/detail/RPCHandler.cpp`:
- Add `xrpl.rpc.duration_ms` attribute on completion
- Add error message attribute on failure: `xrpl.rpc.error_message`
**Key modified files**:
- `src/xrpld/rpc/detail/ServerHandler.cpp`
- `src/xrpld/rpc/detail/RPCHandler.cpp`
**Reference**:
- [02-design-decisions.md §2.4.2](./02-design-decisions.md) — RPC attribute schema
---
## Task 2.6: Build Verification and Performance Baseline
**Objective**: Verify the build succeeds with and without telemetry, and establish a performance baseline.
**What to do**:
1. Build with `telemetry=ON` and verify no compilation errors
2. Build with `telemetry=OFF` and verify no regressions
3. Run existing unit tests to verify no breakage
4. Document any build issues in lessons.md
**Verification Checklist**:
- [ ] `conan install . --build=missing -o telemetry=True` succeeds
- [ ] `cmake --preset default -Dtelemetry=ON` configures correctly
- [ ] Build succeeds with telemetry ON
- [ ] Build succeeds with telemetry OFF
- [ ] Existing tests pass with telemetry ON
- [ ] Existing tests pass with telemetry OFF
---
## Task 2.8: RPC Span Attribute Enrichment — Node Health Context
> **Source**: [External Dashboard Parity](../docs/superpowers/specs/2026-03-30-external-dashboard-parity-design.md) — adds node-level health context inspired by the community [xrpl-validator-dashboard](https://github.com/realgrapedrop/xrpl-validator-dashboard).
>
> **Downstream**: Phase 7 (MetricsRegistry uses these attributes for alerting context), Phase 10 (validation checks for these attributes).
**Objective**: Add node-level health state to every `rpc.command.*` span so operators can correlate RPC behavior with node state in Jaeger/Tempo.
**What to do**:
- Edit `src/xrpld/rpc/detail/RPCHandler.cpp`:
- In the `rpc.command.*` span creation block (after existing `setAttribute` calls for `xrpl.rpc.command`, `xrpl.rpc.version`, etc.):
- Add `xrpl.node.amendment_blocked` (bool) — from `context.app.getOPs().isAmendmentBlocked()`
- Add `xrpl.node.server_state` (string) — from `context.app.getOPs().strOperatingMode()`
**New span attributes**:
| Attribute | Type | Source | Example |
| ----------------------------- | ------ | ------------------------------------------- | -------- |
| `xrpl.node.amendment_blocked` | bool | `context.app.getOPs().isAmendmentBlocked()` | `true` |
| `xrpl.node.server_state` | string | `context.app.getOPs().strOperatingMode()` | `"full"` |
**Rationale**: When a node is amendment-blocked or in a degraded state, every RPC response is suspect. Tagging spans with this state enables trace-backend (TraceQL-style) queries such as:
```
{name=~"rpc.command.*"} | xrpl.node.amendment_blocked = true
```
This surfaces all RPCs served during a blocked period — critical for post-incident analysis.
**Key modified files**:
- `src/xrpld/rpc/detail/RPCHandler.cpp`
**Exit Criteria**:
- [ ] `rpc.command.server_info` spans carry `xrpl.node.amendment_blocked` and `xrpl.node.server_state` attributes
- [ ] No measurable latency impact (attribute values are cached atomics, not computed per-call)
- [ ] Attributes appear in Jaeger span detail view
---
## Summary
| Task | Description | New Files | Modified Files | Depends On |
| ---- | ------------------------------------------- | --------- | -------------- | ---------- |
| 2.1 | W3C Trace Context header extraction | 2 | 2 | POC |
| 2.2 | Add XRPL_TRACE_PEER/LEDGER macros | 0 | 1 | POC |
| 2.3 | Add shouldTraceLedger() interface method | 0 | 3 | POC |
| 2.4 | Unit tests for core telemetry | 2 | 1 | POC |
| 2.5 | Enhanced RPC span attributes | 0 | 2 | POC |
| 2.6 | Build verification and performance baseline | 0 | 0 | 2.1-2.5 |
| 2.8 | RPC span attribute enrichment (node health) | 0 | 1 | 2.5 |
**Parallel work**: Tasks 2.1, 2.2, 2.3 can run in parallel. Task 2.4 depends on 2.3. Task 2.5 can run in parallel with 2.4. Task 2.6 depends on all others. Task 2.8 depends on 2.5 (existing span creation must be in place).
---
## Known Issues / Future Work
### Thread safety of TelemetryImpl::stop() vs startSpan()
`TelemetryImpl::stop()` resets `sdkProvider_` (a `std::shared_ptr`) without
synchronization. `getTracer()` reads the same member from RPC handler threads.
This is a data race if any thread calls `startSpan()` concurrently with `stop()`.
**Current mitigation**: `Application::stop()` shuts down `serverHandler_`,
`overlay_`, and `jobQueue_` before calling `telemetry_->stop()`, so no callers
remain. See comments in `Telemetry.cpp:stop()` and `Application.cpp`.
**TODO**: Add an `std::atomic<bool> stopped_` flag checked in `getTracer()` to
make this robust against future shutdown order changes.
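A minimal sketch of that TODO, with hypothetical stand-in types (`TelemetrySketch`, `Provider`) in place of `TelemetryImpl` and the OTel provider. Note that the flag narrows the window but does not close the race by itself — the flag check and the subsequent pointer read are two separate operations — so the shutdown-ordering guarantee above remains the primary defense:

```cpp
#include <atomic>
#include <memory>

struct Provider
{
};  // stand-in for the OTel TracerProvider

class TelemetrySketch
{
    std::shared_ptr<Provider> sdkProvider_ = std::make_shared<Provider>();
    std::atomic<bool> stopped_{false};

public:
    std::shared_ptr<Provider>
    getTracer() const
    {
        // Checked before every span start; a late caller during shutdown
        // gets nullptr and falls back to a no-op span. release/acquire
        // pairs with stop(), but shutdown ordering must still guarantee
        // no caller races the reset below.
        if (stopped_.load(std::memory_order_acquire))
            return nullptr;
        return sdkProvider_;
    }

    void
    stop()
    {
        stopped_.store(true, std::memory_order_release);
        sdkProvider_.reset();
    }
};
```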
### Macro incompatibility: XRPL_TRACE_SPAN vs XRPL_TRACE_SET_ATTR
`XRPL_TRACE_SPAN` and `XRPL_TRACE_SPAN_KIND` declare `_xrpl_guard_` as a bare
`SpanGuard`, but `XRPL_TRACE_SET_ATTR` and `XRPL_TRACE_EXCEPTION` call
`_xrpl_guard_.has_value()` which requires `std::optional<SpanGuard>`. Using
`XRPL_TRACE_SPAN` followed by `XRPL_TRACE_SET_ATTR` in the same scope would
fail to compile.
**Current mitigation**: No call site currently uses `XRPL_TRACE_SPAN` — all
production code uses the conditional macros (`XRPL_TRACE_RPC`, `XRPL_TRACE_TX`,
etc.) which correctly wrap the guard in `std::optional`.
**TODO**: Either make `XRPL_TRACE_SPAN`/`XRPL_TRACE_SPAN_KIND` also wrap in
`std::optional`, or document that `XRPL_TRACE_SET_ATTR` is only compatible with
the conditional macros.

# Phase 3: Transaction Tracing Task List
> **Goal**: Trace the full transaction lifecycle from RPC submission through peer relay, including cross-node context propagation via Protocol Buffer extensions. This is the WALK phase that demonstrates true distributed tracing.
>
> **Scope**: Protocol Buffer `TraceContext` message, context serialization, PeerImp transaction instrumentation, NetworkOPs processing instrumentation, HashRouter visibility, and multi-node relay context propagation.
>
> **Branch**: `pratik/otel-phase3-tx-tracing` (from `pratik/otel-phase2-rpc-tracing`)
### Related Plan Documents
| Document | Relevance |
| ------------------------------------------------------------ | ------------------------------------------------------------------------------------------------ |
| [04-code-samples.md](./04-code-samples.md) | TraceContext protobuf (§4.4.1), PeerImp instrumentation (§4.5.1), context serialization (§4.4.2) |
| [01-architecture-analysis.md](./01-architecture-analysis.md) | Transaction flow (§1.3), key trace points (§1.6) |
| [06-implementation-phases.md](./06-implementation-phases.md) | Phase 3 tasks (§6.4), definition of done (§6.11.3) |
| [02-design-decisions.md](./02-design-decisions.md) | Context propagation design (§2.5), attribute schema (§2.4.3) |
---
## Task 3.1: Define TraceContext Protocol Buffer Message
**Objective**: Add trace context fields to the P2P protocol messages so trace IDs can propagate across nodes.
**What to do**:
- Edit `include/xrpl/proto/xrpl.proto` (or `src/ripple/proto/ripple.proto`, whichever path the repository uses):
- Add `TraceContext` message definition:
```protobuf
message TraceContext {
bytes trace_id = 1; // 16-byte trace identifier
bytes span_id = 2; // 8-byte span identifier
uint32 trace_flags = 3; // bit 0 = sampled
string trace_state = 4; // W3C tracestate value
}
```
- Add `optional TraceContext trace_context = 1001;` to:
- `TMTransaction`
- `TMProposeSet` (for Phase 4 use)
- `TMValidation` (for Phase 4 use)
- Use high field numbers (1001+) to avoid conflicts with existing fields
- Regenerate protobuf C++ code
**Key modified files**:
- `include/xrpl/proto/xrpl.proto` (or equivalent)
**Reference**:
- [04-code-samples.md §4.4.1](./04-code-samples.md) — TraceContext message definition
- [02-design-decisions.md §2.5.2](./02-design-decisions.md) — Protocol buffer context propagation design
---
## Task 3.2: Implement Protobuf Context Serialization
**Objective**: Create utilities to serialize/deserialize OTel trace context to/from protobuf `TraceContext` messages.
**What to do**:
- Create `include/xrpl/telemetry/TraceContextPropagator.h` (or extend the Phase 2 version, if it exists):
- Add protobuf-specific methods:
- `static Context extractFromProtobuf(protocol::TraceContext const& proto)` — reconstruct OTel context from protobuf fields
- `static void injectToProtobuf(Context const& ctx, protocol::TraceContext& proto)` — serialize current span context into protobuf fields
- Both methods guard behind `#ifdef XRPL_ENABLE_TELEMETRY`
- Create/extend `src/libxrpl/telemetry/TraceContextPropagator.cpp`:
- Implement extraction: read trace_id (16 bytes), span_id (8 bytes), trace_flags from protobuf, construct `SpanContext`, wrap in `Context`
- Implement injection: get current span from context, serialize its TraceId, SpanId, and TraceFlags into protobuf fields
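The byte-level contract can be sketched without the OTel or protobuf types — `std::string` stands in for the protobuf `bytes` field, and the helper names are illustrative. The OTel SDK represents trace and span ids as fixed-size byte arrays (16 and 8 bytes), so injection is a straight copy and extraction must length-check before reconstructing a `SpanContext`:

```cpp
#include <algorithm>
#include <array>
#include <cstddef>
#include <cstdint>
#include <string>

constexpr std::size_t traceIdSize = 16;  // span ids follow the same pattern with 8

// Injection direction: fixed-size id -> protobuf `bytes` field.
inline std::string
traceIdToBytes(std::array<std::uint8_t, traceIdSize> const& id)
{
    return std::string(reinterpret_cast<char const*>(id.data()), id.size());
}

// Extraction direction: a wrong-length field is treated as missing/corrupt,
// and the caller starts a new root span instead of a child.
inline bool
traceIdFromBytes(
    std::string const& bytes,
    std::array<std::uint8_t, traceIdSize>& out)
{
    if (bytes.size() != traceIdSize)
        return false;
    std::copy(bytes.begin(), bytes.end(), out.begin());
    return true;
}
```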
**Key new/modified files**:
- `include/xrpl/telemetry/TraceContextPropagator.h`
- `src/libxrpl/telemetry/TraceContextPropagator.cpp`
**Reference**:
- [04-code-samples.md §4.4.2](./04-code-samples.md) — Full extract/inject implementation
---
## Task 3.3: Instrument PeerImp Transaction Handling
**Objective**: Add trace spans to the peer-level transaction receive and relay path.
**What to do**:
- Edit `src/xrpld/overlay/detail/PeerImp.cpp`:
- In `onMessage(TMTransaction)` / `handleTransaction()`:
- Extract parent trace context from incoming `TMTransaction::trace_context` field (if present)
- Create `tx.receive` span as child of extracted context (or new root if none)
- Set attributes: `xrpl.tx.hash`, `xrpl.peer.id`, `xrpl.tx.status`
- On HashRouter suppression (duplicate): set `xrpl.tx.suppressed=true`, add `tx.duplicate` event
- Wrap validation call with child span `tx.validate`
- Wrap relay with `tx.relay` span
- When relaying to peers:
- Inject current trace context into outgoing `TMTransaction::trace_context`
- Set `xrpl.tx.relay_count` attribute
- Include `TracingInstrumentation.h` and use `XRPL_TRACE_TX` macro
**Key modified files**:
- `src/xrpld/overlay/detail/PeerImp.cpp`
**Reference**:
- [04-code-samples.md §4.5.1](./04-code-samples.md) — Full PeerImp instrumentation example
- [01-architecture-analysis.md §1.3](./01-architecture-analysis.md) — Transaction flow diagram
- [01-architecture-analysis.md §1.6](./01-architecture-analysis.md) — tx.receive trace point
---
## Task 3.4: Instrument NetworkOPs Transaction Processing
**Objective**: Trace the transaction processing pipeline in NetworkOPs, covering both sync and async paths.
**What to do**:
- Edit `src/xrpld/app/misc/NetworkOPs.cpp`:
- In `processTransaction()`:
- Create `tx.process` span
- Set attributes: `xrpl.tx.hash`, `xrpl.tx.type`, `xrpl.tx.local` (whether from RPC or peer)
- Record whether sync or async path is taken
- In `doTransactionAsync()`:
- Capture parent context before queuing
- Create `tx.queue` span with queue depth attribute
- Add event when transaction is dequeued for processing
- In `doTransactionSync()`:
- Create `tx.process_sync` span
- Record result (applied, queued, rejected)
**Key modified files**:
- `src/xrpld/app/misc/NetworkOPs.cpp`
**Reference**:
- [01-architecture-analysis.md §1.6](./01-architecture-analysis.md) — tx.validate and tx.process trace points
- [02-design-decisions.md §2.4.3](./02-design-decisions.md) — Transaction attribute schema
---
## Task 3.5: Instrument HashRouter for Dedup Visibility
**Objective**: Make transaction deduplication visible in traces by recording HashRouter decisions as span attributes/events.
**What to do**:
- Edit `src/xrpld/overlay/detail/PeerImp.cpp` (in handleTransaction):
- After calling `HashRouter::shouldProcess()` or `addSuppressionPeer()`:
- Record `xrpl.tx.suppressed` attribute (true/false)
- Record `xrpl.tx.flags` showing current HashRouter state (SAVED, TRUSTED, etc.)
- Add `tx.first_seen` or `tx.duplicate` event
- This is NOT a modification to HashRouter itself — just recording its decisions as span attributes in the existing PeerImp instrumentation from Task 3.3.
**Key modified files**:
- `src/xrpld/overlay/detail/PeerImp.cpp` (same changes as 3.3, logically grouped)
---
## Task 3.6: Context Propagation in Transaction Relay
**Objective**: Ensure trace context flows correctly when transactions are relayed between peers, creating linked spans across nodes.
**What to do**:
- Verify the relay path injects trace context:
- When `PeerImp` relays a transaction, the `TMTransaction` message should carry `trace_context`
- When a remote peer receives it, the context is extracted and used as parent
- Test context propagation:
- Manually verify with 2+ node setup that trace IDs match across nodes
- Confirm parent-child span relationships are correct in Jaeger
- Handle edge cases:
- Missing trace context (older peers): create new root span
- Corrupted trace context: log warning, create new root span
- Sampled-out traces: respect trace flags
**Key modified files**:
- `src/xrpld/overlay/detail/PeerImp.cpp`
- `src/xrpld/overlay/detail/OverlayImpl.cpp` (if relay method needs context param)
**Reference**:
- [02-design-decisions.md §2.5](./02-design-decisions.md) — Context propagation design
- [04-code-samples.md §4.5.1](./04-code-samples.md) — Relay context injection pattern
---
## Task 3.7: Build Verification and Testing
**Objective**: Verify all Phase 3 changes compile and work correctly.
**What to do**:
1. Build with `telemetry=ON` — verify no compilation errors
2. Build with `telemetry=OFF` — verify no regressions
3. Run existing unit tests
4. Verify protobuf regeneration produces correct C++ code
5. Document any issues encountered
**Verification Checklist**:
- [ ] Protobuf changes generate valid C++
- [ ] Build succeeds with telemetry ON
- [ ] Build succeeds with telemetry OFF
- [ ] Existing tests pass
- [ ] No undefined symbols from new telemetry calls
---
## Task 3.8: Transaction Span Peer Version Attribute
> **Source**: [External Dashboard Parity](../docs/superpowers/specs/2026-03-30-external-dashboard-parity-design.md) — adds peer version context inspired by the community [xrpl-validator-dashboard](https://github.com/realgrapedrop/xrpl-validator-dashboard).
>
> **Upstream**: Phase 2 (RPC span infrastructure must exist).
> **Downstream**: Phase 10 (validation checks for this attribute).
**Objective**: Add the relaying peer's rippled version to `tx.receive` spans so operators can correlate transaction issues with peer version mismatches during network upgrades.
**What to do**:
- Edit `src/xrpld/overlay/detail/PeerImp.cpp`:
- In the `tx.receive` span block (after existing `xrpl.peer.id` setAttribute call):
- Add `xrpl.peer.version` (string) — from `this->getVersion()`
- Only set if `getVersion()` returns a non-empty string (avoid empty-string attributes)
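The non-empty guard can be sketched as follows; the `std::map` stands in for span attributes, and `setPeerVersionAttr` is a hypothetical helper (the real code calls `SetAttribute` on the `tx.receive` span guard):
```cpp
#include <map>
#include <string>

// Minimal model of the "omit empty version" rule: only attach the attribute
// when getVersion() returned something. The map here stands in for the span's
// attribute set.
void
setPeerVersionAttr(
    std::map<std::string, std::string>& attrs,
    std::string const& version)
{
    if (!version.empty())
        attrs["xrpl.peer.version"] = version;
}
```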
**New span attribute**:
| Attribute | Type | Source | Example |
| ------------------- | ------ | -------------------- | ----------------- |
| `xrpl.peer.version` | string | `peer->getVersion()` | `"rippled-2.4.0"` |
**Rationale**: Transaction relay is where version mismatches cause subtle serialization or validation bugs. Tracing "this tx came from a v2.3.0 peer" helps diagnose compatibility issues. The community dashboard tracks peer versions externally; this brings version awareness into the trace itself.
**Key modified files**:
- `src/xrpld/overlay/detail/PeerImp.cpp`
**Exit Criteria**:
- [ ] `tx.receive` spans carry `xrpl.peer.version` attribute with a non-empty version string
- [ ] Attribute is omitted (not set to empty string) when `getVersion()` returns empty
- [ ] Attribute visible in Jaeger span detail view
---
## Summary
| Task | Description | New Files | Modified Files | Depends On |
| ---- | ----------------------------------- | --------- | -------------- | ---------- |
| 3.1 | TraceContext protobuf message | 0 | 1 | Phase 2 |
| 3.2 | Protobuf context serialization | 1-2 | 0 | 3.1 |
| 3.3 | PeerImp transaction instrumentation | 0 | 1 | 3.2 |
| 3.4 | NetworkOPs transaction processing | 0 | 1 | Phase 2 |
| 3.5 | HashRouter dedup visibility | 0 | 1 | 3.3 |
| 3.6 | Relay context propagation | 0 | 1-2 | 3.3, 3.5 |
| 3.7 | Build verification and testing | 0 | 0 | 3.1-3.6 |
| 3.8 | TX span peer version attribute | 0 | 1 | 3.3 |
**Parallel work**: Tasks 3.1 and 3.4 can start in parallel. Task 3.2 depends on 3.1, Task 3.3 on 3.2, and Task 3.5 on 3.3. Task 3.6 depends on 3.3 and 3.5. Task 3.8 depends on 3.3 (span must exist).
**Exit Criteria** (from [06-implementation-phases.md §6.11.3](./06-implementation-phases.md)):
- [ ] Transaction traces span across nodes
- [ ] Trace context in Protocol Buffer messages
- [ ] HashRouter deduplication visible in traces
- [ ] <5% overhead on transaction throughput
---
## Known Issues / Future Work
### Propagation utilities not yet wired into P2P flow
`extractFromProtobuf()` and `injectToProtobuf()` in `TraceContextPropagator.h`
are implemented and tested but not called from production code. To enable
cross-node distributed traces:
- Call `injectToProtobuf()` in `PeerImp` when sending `TMTransaction` /
`TMProposeSet` messages
- Call `extractFromProtobuf()` in the corresponding message handlers to
reconstruct the parent span context, then pass it to `startSpan()` as the
parent
This was deferred to validate single-node tracing performance first.
### Unused trace_state proto field
The `TraceContext.trace_state` field (field 4) in `xrpl.proto` is reserved for
W3C `tracestate` vendor-specific key-value pairs but is not read or written by
`TraceContextPropagator`. Wire it when cross-vendor trace propagation is needed.
No wire cost since proto `optional` fields are zero-cost when absent.

---
# Phase 4: Consensus Tracing Task List
> **Goal**: Full observability into consensus rounds — track round lifecycle, phase transitions, proposal handling, and validation. This is the RUN phase that completes the distributed tracing story.
>
> **Scope**: RCLConsensus instrumentation for round starts, phase transitions (open/establish/accept), proposal send/receive, validation handling, and correlation with transaction traces from Phase 3.
>
> **Branch**: `pratik/otel-phase4-consensus-tracing` (from `pratik/otel-phase3-tx-tracing`)
### Related Plan Documents
| Document | Relevance |
| ------------------------------------------------------------ | ----------------------------------------------------------- |
| [04-code-samples.md](./04-code-samples.md) | Consensus instrumentation (§4.5.2), consensus span patterns |
| [01-architecture-analysis.md](./01-architecture-analysis.md) | Consensus round flow (§1.4), key trace points (§1.6) |
| [06-implementation-phases.md](./06-implementation-phases.md) | Phase 4 tasks (§6.5), definition of done (§6.11.4) |
| [02-design-decisions.md](./02-design-decisions.md) | Consensus attribute schema (§2.4.4) |
---
## Task 4.1: Instrument Consensus Round Start
**Objective**: Create a root span for each consensus round that captures the round's key parameters.
**What to do**:
- Edit `src/xrpld/app/consensus/RCLConsensus.cpp`:
- In `RCLConsensus::startRound()` (or the Adaptor's startRound):
- Create `consensus.round` span using `XRPL_TRACE_CONSENSUS` macro
- Set attributes:
- `xrpl.consensus.ledger.prev` — previous ledger hash
- `xrpl.consensus.ledger.seq` — target ledger sequence
- `xrpl.consensus.proposers` — number of trusted proposers
- `xrpl.consensus.mode` — "proposing" or "observing"
- Store the span context for use by child spans in phase transitions
- Add a member to hold current round trace context:
- `opentelemetry::context::Context currentRoundContext_` (guarded by `#ifdef`)
- Updated at round start, used by phase transition spans
**Key modified files**:
- `src/xrpld/app/consensus/RCLConsensus.cpp`
- `src/xrpld/app/consensus/RCLConsensus.h` (add context member)
**Reference**:
- [04-code-samples.md §4.5.2](./04-code-samples.md) — startRound instrumentation example
- [01-architecture-analysis.md §1.4](./01-architecture-analysis.md) — Consensus round flow
---
## Task 4.2: Instrument Phase Transitions
**Objective**: Create child spans for each consensus phase (open, establish, accept) to show timing breakdown.
**What to do**:
- Edit `src/xrpld/app/consensus/RCLConsensus.cpp`:
- Identify where phase transitions occur (the `Consensus<Adaptor>` template drives this)
- For each phase entry:
- Create span as child of `currentRoundContext_`: `consensus.phase.open`, `consensus.phase.establish`, `consensus.phase.accept`
- Set `xrpl.consensus.phase` attribute
- Add `phase.enter` event at start, `phase.exit` event at end
- Record phase duration in milliseconds
- In the `onClose` adaptor method:
- Create `consensus.ledger_close` span
- Set attributes: close_time, mode, transaction count in initial position
- Note: The Consensus template class in `src/xrpld/consensus/Consensus.h` drives phase transitions — Phase 4a instruments directly in the template
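The phase-duration measurement above can be sketched with a monotonic timer captured at phase entry (the `PhaseTimer` name is hypothetical; the real code would record the result as `xrpl.consensus.phase_duration_ms` when the span closes):
```cpp
#include <chrono>
#include <cstdint>

// Hypothetical helper: capture phase entry time, report elapsed milliseconds
// at phase exit for the xrpl.consensus.phase_duration_ms attribute.
class PhaseTimer
{
    std::chrono::steady_clock::time_point start_ =
        std::chrono::steady_clock::now();

public:
    std::int64_t
    elapsedMs() const
    {
        return std::chrono::duration_cast<std::chrono::milliseconds>(
                   std::chrono::steady_clock::now() - start_)
            .count();
    }
};
```
Using `steady_clock` (rather than `system_clock`) keeps the measurement immune to wall-clock adjustments, which matters for close-time-sensitive consensus code.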
**Key modified files**:
- `src/xrpld/app/consensus/RCLConsensus.cpp`
- Possibly `src/xrpld/consensus/Consensus.h` (for template-level phase tracking)
**Reference**:
- [04-code-samples.md §4.5.2](./04-code-samples.md) — phaseTransition instrumentation
---
## Task 4.3: Instrument Proposal Handling
**Objective**: Trace proposal send and receive to show validator coordination.
**What to do**:
- Edit `src/xrpld/app/consensus/RCLConsensus.cpp`:
- In `Adaptor::propose()`:
- Create `consensus.proposal.send` span
- Set attributes: `xrpl.consensus.round` (proposal sequence), proposal hash
- Inject trace context into outgoing `TMProposeSet::trace_context` (from Phase 3 protobuf)
- In `Adaptor::peerProposal()` (or wherever peer proposals are received):
- Extract trace context from incoming `TMProposeSet::trace_context`
- Create `consensus.proposal.receive` span as child of extracted context
- Set attributes: `xrpl.consensus.proposer` (node ID), `xrpl.consensus.round`
- In `Adaptor::share(RCLCxPeerPos)`:
- Create `consensus.proposal.relay` span for relaying peer proposals
**Key modified files**:
- `src/xrpld/app/consensus/RCLConsensus.cpp`
**Reference**:
- [04-code-samples.md §4.5.2](./04-code-samples.md) — peerProposal instrumentation
- [02-design-decisions.md §2.4.4](./02-design-decisions.md) — Consensus attribute schema
---
## Task 4.4: Instrument Validation Handling
**Objective**: Trace validation send and receive to show ledger validation flow.
**What to do**:
- Edit `src/xrpld/app/consensus/RCLConsensus.cpp` (or the validation handler):
- When sending our validation:
- Create `consensus.validation.send` span
- Set attributes: validated ledger hash, sequence, signing time
- When receiving a peer validation:
- Extract trace context from `TMValidation::trace_context` (if present)
- Create `consensus.validation.receive` span
- Set attributes: `xrpl.consensus.validator` (node ID), ledger hash
**Key modified files**:
- `src/xrpld/app/consensus/RCLConsensus.cpp`
- `src/xrpld/app/misc/NetworkOPs.cpp` (if validation handling is here)
---
## Task 4.5: Add Consensus-Specific Attributes
**Objective**: Enrich consensus spans with detailed attributes for debugging and analysis.
**What to do**:
- Review all consensus spans and ensure they include:
- `xrpl.consensus.ledger.seq` — target ledger sequence number
- `xrpl.consensus.round` — consensus round number
- `xrpl.consensus.mode` — proposing/observing/wrongLedger
- `xrpl.consensus.phase` — current phase name
- `xrpl.consensus.phase_duration_ms` — time spent in phase
- `xrpl.consensus.proposers` — number of trusted proposers
- `xrpl.consensus.tx_count` — transactions in proposed set
- `xrpl.consensus.disputes` — number of disputed transactions
- `xrpl.consensus.converge_percent` — convergence percentage
**Key modified files**:
- `src/xrpld/app/consensus/RCLConsensus.cpp`
---
## Task 4.6: Correlate Transaction and Consensus Traces
**Objective**: Link transaction traces from Phase 3 with consensus traces so you can follow a transaction from submission through consensus into the ledger.
**What to do**:
- In `onClose()` or `onAccept()`:
- When building the consensus position, link the round span to individual transaction spans using span links (if OTel SDK supports it) or events
- At minimum, record the transaction hashes included in the consensus set as span events: `tx.included` with `xrpl.tx.hash` attribute
- In `processTransactionSet()` (NetworkOPs):
- If the consensus round span context is available, create child spans for each transaction applied to the ledger
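The minimum-viable correlation can be modeled as one `tx.included` event per transaction hash in the agreed set. A hedged sketch (the `SpanEvent` struct is a stand-in; the real code would call `SpanGuard::addEvent` with an `xrpl.tx.hash` attribute):
```cpp
#include <string>
#include <vector>

// Hypothetical model of the per-transaction span events described above.
struct SpanEvent
{
    std::string name;
    std::string txHash;
};

std::vector<SpanEvent>
recordIncludedTxs(std::vector<std::string> const& txHashes)
{
    std::vector<SpanEvent> events;
    events.reserve(txHashes.size());
    for (auto const& hash : txHashes)
        events.push_back({"tx.included", hash});
    return events;
}
```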
**Key modified files**:
- `src/xrpld/app/consensus/RCLConsensus.cpp`
- `src/xrpld/app/misc/NetworkOPs.cpp`
---
## Task 4.7: Build Verification and Testing
**Objective**: Verify all Phase 4 changes compile and don't affect consensus timing.
**What to do**:
1. Build with `telemetry=ON` — verify no compilation errors
2. Build with `telemetry=OFF` — verify no regressions (critical for consensus code)
3. Run existing consensus-related unit tests
4. Verify that all macros expand to no-ops when disabled
5. Check that no consensus-critical code paths are affected by instrumentation overhead
**Verification Checklist**:
- [ ] Build succeeds with telemetry ON
- [ ] Build succeeds with telemetry OFF
- [ ] Existing consensus tests pass
- [ ] No new includes in consensus headers when telemetry is OFF
- [ ] Phase timing instrumentation doesn't use blocking operations
---
## Task 4.8: Consensus Validation Span Enrichment — External Dashboard Parity
> **Source**: [External Dashboard Parity](../docs/superpowers/specs/2026-03-30-external-dashboard-parity-design.md) — adds validation agreement context inspired by the community [xrpl-validator-dashboard](https://github.com/realgrapedrop/xrpl-validator-dashboard).
>
> **Upstream**: Phase 4 tasks 4.1-4.4 (span creation must exist).
> **Downstream**: Phase 7 (ValidationTracker reads these attributes), Phase 10 (validation checks).
**Objective**: Add ledger hash, validation type, and quorum data to consensus validation spans on both send and receive paths. This enables trace-level validation agreement analysis — filter by ledger hash to see which validators agreed for a given ledger.
**What to do**:
- Edit `src/xrpld/app/consensus/RCLConsensus.cpp`:
- On the `consensus.validation.send` span (in `validate()` / `doAccept()`):
- Add `xrpl.validation.ledger_hash` (string) — the ledger hash being validated
- Add `xrpl.validation.full` (bool) — whether this is a full validation (not partial)
- On the `consensus.accept` span (in `onAccept()`):
- Add `xrpl.consensus.validation_quorum` (int64) — from `app_.validators().quorum()`
- Add `xrpl.consensus.proposers_validated` (int64) — from `result.proposers`
- Edit `src/xrpld/overlay/detail/PeerImp.cpp`:
- On the `peer.validation.receive` span:
- Add `xrpl.peer.validation.ledger_hash` (string) — from deserialized `STValidation` object
- Add `xrpl.peer.validation.full` (bool) — from `STValidation` flags
**New span attributes**:
| Span | Attribute | Type | Source |
| --------------------------- | ------------------------------------ | ------ | --------------------------------- |
| `consensus.validation.send` | `xrpl.validation.ledger_hash` | string | Ledger hash from validate() args |
| `consensus.validation.send` | `xrpl.validation.full` | bool | Full vs partial validation |
| `peer.validation.receive` | `xrpl.peer.validation.ledger_hash` | string | From STValidation deserialization |
| `peer.validation.receive` | `xrpl.peer.validation.full` | bool | From STValidation flags |
| `consensus.accept` | `xrpl.consensus.validation_quorum` | int64 | `app_.validators().quorum()` |
| `consensus.accept` | `xrpl.consensus.proposers_validated` | int64 | `result.proposers` |
**Rationale**: The external dashboard's most valuable feature is validation agreement tracking. By recording the ledger hash on both outgoing and incoming validation spans, we create the raw data for agreement analysis at the trace level. Example Tempo query:
```
{ name = "consensus.validation.send" && span.xrpl.validation.ledger_hash = "A1B2C3..." }
```
Phase 7's `ValidationTracker` builds metric-level aggregation (1h/24h agreement %) on top of this data.
**Key modified files**:
- `src/xrpld/app/consensus/RCLConsensus.cpp`
- `src/xrpld/overlay/detail/PeerImp.cpp`
**Exit Criteria**:
- [ ] `consensus.validation.send` spans carry `xrpl.validation.ledger_hash` and `xrpl.validation.full`
- [ ] `peer.validation.receive` spans carry `xrpl.peer.validation.ledger_hash` and `xrpl.peer.validation.full`
- [ ] `consensus.accept` spans carry `xrpl.consensus.validation_quorum` and `xrpl.consensus.proposers_validated`
- [ ] Ledger hash attributes match between send and receive for the same ledger
- [ ] No impact on consensus performance
---
## Summary
| Task | Description | New Files | Modified Files | Depends On |
| ---- | ------------------------------------------- | --------- | -------------- | ------------- |
| 4.1 | Consensus round start instrumentation | 0 | 2 | Phase 3 |
| 4.2 | Phase transition instrumentation | 0 | 1-2 | 4.1 |
| 4.3 | Proposal handling instrumentation | 0 | 1 | 4.1 |
| 4.4 | Validation handling instrumentation | 0 | 1-2 | 4.1 |
| 4.5 | Consensus-specific attributes | 0 | 1 | 4.2, 4.3, 4.4 |
| 4.6 | Transaction-consensus correlation | 0 | 2 | 4.2, Phase 3 |
| 4.7 | Build verification and testing | 0 | 0 | 4.1-4.6 |
| 4.8 | Validation span enrichment (ext. dashboard) | 0 | 2 | 4.4 |
**Parallel work**: Tasks 4.2, 4.3, and 4.4 can run in parallel after 4.1 is complete. Task 4.5 depends on all three. Task 4.6 depends on 4.2 and Phase 3. Task 4.8 depends on 4.4 (validation spans must exist).
### Implemented Spans
| Span Name | Method | Key Attributes |
| --------------------------- | ---------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `consensus.proposal.send` | `Adaptor::propose` | `xrpl.consensus.round` |
| `consensus.ledger_close` | `Adaptor::onClose` | `xrpl.consensus.ledger.seq`, `xrpl.consensus.mode` |
| `consensus.accept` | `Adaptor::onAccept` | `xrpl.consensus.proposers`, `xrpl.consensus.round_time_ms` |
| `consensus.accept.apply` | `Adaptor::doAccept` | `xrpl.consensus.close_time`, `close_time_correct`, `close_resolution_ms`, `state`, `proposing`, `round_time_ms`, `ledger.seq`, `parent_close_time`, `close_time_self`, `close_time_vote_bins`, `resolution_direction` |
| `consensus.validation.send` | `Adaptor::onAccept` (via validate) | `xrpl.consensus.proposing` |
#### Close Time Attributes (consensus.accept.apply)
The `consensus.accept.apply` span captures ledger close time agreement details
driven by `avCT_CONSENSUS_PCT` (75% validator agreement threshold):
- **`xrpl.consensus.close_time`** — Agreed-upon ledger close time (epoch seconds). When validators disagree (`consensusCloseTime == epoch`), this is synthetically set to `prevCloseTime + 1s`.
- **`xrpl.consensus.close_time_correct`** — `true` if validators reached agreement, `false` if they "agreed to disagree" (close time forced to prev+1s).
- **`xrpl.consensus.close_resolution_ms`** — Rounding granularity for close time (starts at 30s, decreases as ledger interval stabilizes).
- **`xrpl.consensus.state`** — `"finished"` (normal) or `"moved_on"` (consensus failed, adopted best available).
- **`xrpl.consensus.proposing`** — Whether this node was proposing.
- **`xrpl.consensus.round_time_ms`** — Total consensus round duration.
- **`xrpl.consensus.parent_close_time`** — Previous ledger's close time (epoch seconds). Enables computing close-time deltas across consecutive rounds without correlating separate spans.
- **`xrpl.consensus.close_time_self`** — This node's own proposed close time before consensus voting.
- **`xrpl.consensus.close_time_vote_bins`** — Number of distinct close-time vote bins from peer proposals. Higher values indicate less agreement among validators.
- **`xrpl.consensus.resolution_direction`** — Whether close-time resolution `"increased"` (coarser), `"decreased"` (finer), or stayed `"unchanged"` relative to the previous ledger.
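The synthetic close-time rule behind `close_time` and `close_time_correct` can be sketched as a pure function (names hypothetical; the epoch sentinel is modeled as 0):
```cpp
#include <cstdint>
#include <utility>

// Minimal model of the "agreed to disagree" rule: when validators reached no
// agreement (consensus close time == epoch, modeled as 0), the effective
// close time is forced to prevCloseTime + 1s and close_time_correct is false.
std::pair<std::uint64_t, bool>
effectiveCloseTime(
    std::uint64_t consensusCloseTime,
    std::uint64_t prevCloseTime)
{
    if (consensusCloseTime == 0)
        return {prevCloseTime + 1, false};
    return {consensusCloseTime, true};
}
```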
**Exit Criteria** (from [06-implementation-phases.md §6.11.4](./06-implementation-phases.md)):
- [x] Complete consensus round traces
- [x] Phase transitions visible
- [x] Proposals and validations traced
- [x] Close time agreement tracked (per `avCT_CONSENSUS_PCT`)
- [x] No impact on consensus timing
---
# Phase 4a: Establish-Phase Gap Fill & Cross-Node Correlation
> **Goal**: Fill tracing gaps in the consensus establish phase (disputes, convergence,
> threshold escalation, mode changes) and establish cross-node correlation using a
> deterministic shared trace ID derived from `previousLedger.id()`.
>
> **Approach**: Direct instrumentation in `Consensus.h` — the generic consensus
> template has full access to internal state (`convergePercent_`, `result_->disputes`,
> `mode_`, threshold logic). Telemetry access comes via a single new adaptor
> method `getTelemetry()`. Long-lived spans (round, establish) are stored as
> class members using `SpanGuard` directly — NOT the `XRPL_TRACE_*` convenience
> macros (which create local variables named `_xrpl_guard_`). Short-lived
> scoped spans (update_positions, check) can use the macros. All code compiles
> to no-ops when `XRPL_ENABLE_TELEMETRY` is not defined.
>
> **Branch**: `pratik/otel-phase4-consensus-tracing`
## Design: Switchable Correlation Strategy
Two strategies for cross-node trace correlation, switchable via config:
### Strategy A — Deterministic Trace ID (Default)
Derive `trace_id = SHA256(previousLedger.id())[0:16]` so all nodes in the same
consensus round share the same trace_id without P2P context propagation.
- **Pros**: All nodes appear in the same trace in Tempo/Jaeger automatically.
No collector-side post-processing needed.
- **Cons**: Overrides OTel's random trace_id generation; requires custom
`IdGenerator` or manual span context construction.
### Strategy B — Attribute-Based Correlation
Use normal random trace_id but attach `xrpl.consensus.ledger_id` as an attribute
on every consensus span. Correlation happens at query time via Tempo/Grafana
`by attribute` queries.
- **Pros**: Standard OTel trace_id semantics; no SDK customization.
- **Cons**: Cross-node correlation requires query-time joins, not automatic.
### Config
```ini
[telemetry]
# "deterministic" (default) or "attribute"
consensus_trace_strategy=deterministic
```
### Implementation
In `RCLConsensus::Adaptor::startRound()`:
- If `deterministic`:
1. Compute `trace_id_bytes = SHA256(prevLedgerID)[0:16]`
2. Construct `opentelemetry::trace::TraceId(trace_id_bytes)`
3. Create a synthetic `SpanContext` with this trace_id and a random span_id:
```cpp
auto traceId = opentelemetry::trace::TraceId(trace_id_bytes);
auto spanId = opentelemetry::trace::SpanId(random_8_bytes);
auto syntheticCtx = opentelemetry::trace::SpanContext(
    traceId,
    spanId,
    opentelemetry::trace::TraceFlags(
        opentelemetry::trace::TraceFlags::kIsSampled),
    false);  // is_remote = false: synthetic local context
```
4. Wrap in `opentelemetry::context::Context` via
`opentelemetry::trace::SetSpan(context, syntheticSpan)`
5. Call `startSpan("consensus.round", parentContext)` so the new span
inherits the deterministic trace_id.
- If `attribute`: start a normal `consensus.round` span, set
`xrpl.consensus.ledger_id = previousLedger.id()` as attribute.
Both strategies always set `xrpl.consensus.round_id` (round number) and
`xrpl.consensus.ledger_id` (previous ledger hash) as attributes.
---
## Design: Span Hierarchy
```
consensus.round (root — created in RCLConsensus::startRound, closed at accept)
│ link → previous round's SpanContext (follows-from)
├── consensus.establish (phaseEstablish → acceptance, in Consensus.h)
│ ├── consensus.update_positions (each updateOurPositions call)
│ │ └── consensus.dispute.resolve (per-tx dispute resolution event)
│ ├── consensus.check (each haveConsensus call)
│ └── consensus.mode_change (short-lived span in adaptor on mode transition)
├── consensus.accept (existing onAccept span — reparented under round)
└── consensus.validation.send (existing — reparented, follows-from link to round)
```
### Span Links (follows-from relationships)
| Link Source | Link Target | Rationale |
| ----------------------------------------- | -------------------------- | ------------------------------------------------------------------------------ |
| `consensus.round` (N+1) | `consensus.round` (N) | Causal chain: round N+1 exists because round N accepted |
| `consensus.validation.send` | `consensus.round` | Validation follows from the round that produced it; may outlive the round span |
| _(Phase 4b)_ Received proposal processing | Sender's `consensus.round` | Cross-node causal link via P2P context propagation |
---
## Task 4a.0: Prerequisites — Extend SpanGuard and Telemetry APIs
**Objective**: Add missing API surface needed by later tasks.
**What to do**:
1. **Add `SpanGuard::addEvent()` with attributes** (needed by Task 4a.5):
The current `addEvent(string_view name)` only accepts a name. Add an
overload that accepts key-value attributes:
```cpp
void addEvent(std::string_view name,
std::initializer_list<
std::pair<opentelemetry::nostd::string_view,
opentelemetry::common::AttributeValue>> attributes)
{
span_->AddEvent(std::string(name), attributes);
}
```
2. **Add a `Telemetry::startSpan()` overload that accepts span links** (needed by Tasks 4a.2, 4a.8):
The current `startSpan()` has no span link support. Add an overload that
accepts a vector of `SpanContext` links for follows-from relationships:
```cpp
virtual opentelemetry::nostd::shared_ptr<opentelemetry::trace::Span>
startSpan(
std::string_view name,
opentelemetry::context::Context const& parentContext,
std::vector<opentelemetry::trace::SpanContext> const& links,
opentelemetry::trace::SpanKind kind = opentelemetry::trace::SpanKind::kInternal) = 0;
```
3. **Add `XRPL_TRACE_ADD_EVENT` macro** (needed by Task 4a.5):
Add to `TracingInstrumentation.h` to expose `addEvent(name, attrs)` through
the macro interface (consistent with `XRPL_TRACE_SET_ATTR` pattern):
```cpp
#ifdef XRPL_ENABLE_TELEMETRY
#define XRPL_TRACE_ADD_EVENT(name, ...)                    \
    do                                                     \
    {                                                      \
        if (_xrpl_guard_.has_value())                      \
            _xrpl_guard_->addEvent(name, __VA_ARGS__);     \
    } while (0)
#else
#define XRPL_TRACE_ADD_EVENT(name, ...) ((void)0)
#endif
```
**Key modified files**:
- `include/xrpl/telemetry/SpanGuard.h` — add `addEvent()` overload
- `include/xrpl/telemetry/Telemetry.h` — add `startSpan()` with links
- `src/xrpld/telemetry/Telemetry.cpp` — implement new overload
- `src/xrpld/telemetry/NullTelemetry.cpp` — no-op implementation
- `src/xrpld/telemetry/TracingInstrumentation.h` — add `XRPL_TRACE_ADD_EVENT` macro
---
## Task 4a.1: Adaptor `getTelemetry()` Method
**Objective**: Give `Consensus.h` access to the telemetry subsystem without
coupling the generic template to OTel headers.
**What to do**:
- Add `getTelemetry()` method to the Adaptor concept (returns
`xrpl::telemetry::Telemetry&`). The return type is already forward-declared
behind `#ifdef XRPL_ENABLE_TELEMETRY`.
- Implement in `RCLConsensus::Adaptor` — delegates to `app_.getTelemetry()`.
- In `Consensus.h`, the `XRPL_TRACE_*` macros call
`adaptor_.getTelemetry()` — when telemetry is disabled, the macros expand to
`((void)0)` and the method is never called.
**Key modified files**:
- `src/xrpld/app/consensus/RCLConsensus.h` — declare `getTelemetry()`
- `src/xrpld/app/consensus/RCLConsensus.cpp` — implement `getTelemetry()`
---
## Task 4a.2: Switchable Round Span with Deterministic Trace ID
**Objective**: Create a `consensus.round` root span in `startRound()` that uses
the switchable correlation strategy. Store span context as a member for child
spans in `Consensus.h`.
**What to do**:
- In `RCLConsensus::Adaptor::startRound()` (or a new helper):
- Read `consensus_trace_strategy` from config.
- **Deterministic**: compute `trace_id = SHA256(prevLedgerID)[0:16]`.
Construct a `SpanContext` with this trace_id, then start
`consensus.round` span as child of that context.
- **Attribute**: start normal `consensus.round` span.
- Set attributes on both: `xrpl.consensus.round_id`,
`xrpl.consensus.ledger_id`, `xrpl.consensus.ledger.seq`,
`xrpl.consensus.mode`.
- Store the round span in `Consensus` as a member (see Task 4a.3).
- If a previous round's span context is available, add a **span link**
(follows-from) to establish the round chain.
- Add `createDeterministicTraceId(hash)` utility to
`include/xrpl/telemetry/Telemetry.h` (returns 16-byte trace ID from a
256-bit hash by truncation).
- Add `consensus_trace_strategy` to `Telemetry::Setup` and
`TelemetryConfig.cpp` parser:
```cpp
/** Cross-node correlation strategy: "deterministic" or "attribute". */
std::string consensusTraceStrategy = "deterministic";
```
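The truncation inside `createDeterministicTraceId()` is simple once the digest is available. A minimal sketch, assuming the 256-bit SHA-256 digest of the previous ledger ID has already been computed (the function name here is illustrative):
```cpp
#include <algorithm>
#include <array>
#include <cstdint>

// Sketch of the truncation step: take the first 16 bytes of the 32-byte
// SHA-256 digest of the previous ledger ID as the OTel trace ID. Hashing
// itself is out of scope here; the caller supplies the digest.
std::array<std::uint8_t, 16>
deterministicTraceId(std::array<std::uint8_t, 32> const& sha256Digest)
{
    std::array<std::uint8_t, 16> traceId{};
    std::copy_n(sha256Digest.begin(), traceId.size(), traceId.begin());
    return traceId;
}
```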
**Key modified files**:
- `src/xrpld/app/consensus/RCLConsensus.cpp`
- `include/xrpl/telemetry/Telemetry.h` — `createDeterministicTraceId()`
- `src/xrpld/telemetry/TelemetryConfig.cpp` — parse new config option
---
## Task 4a.3: Span Members in `Consensus.h`
**Objective**: Add span storage to the `Consensus` class so that spans created
in `startRound()` (adaptor) are accessible from `phaseEstablish()`,
`updateOurPositions()`, and `haveConsensus()` (template methods).
**What to do**:
- Add to `Consensus` private members (guarded by `#ifdef XRPL_ENABLE_TELEMETRY`):
```cpp
#ifdef XRPL_ENABLE_TELEMETRY
std::optional<xrpl::telemetry::SpanGuard> roundSpan_;
std::optional<xrpl::telemetry::SpanGuard> establishSpan_;
opentelemetry::context::Context prevRoundContext_;
#endif
```
- `roundSpan_` is created in `startRound()` via the adaptor and stored.
Its `SpanGuard::Scope` member keeps the span active on the thread context
for the entire round lifetime.
- `establishSpan_` is created when entering `phaseEstablish()` and cleared on accept.
It becomes a child of `roundSpan_` via OTel's thread-local context propagation.
- `prevRoundContext_` stores the previous round's context for follows-from links.
**Threading assumption**: `startRound()`, `phaseEstablish()`, `updateOurPositions()`,
and `haveConsensus()` all run on the same thread (the consensus job queue thread).
This is required for the `SpanGuard::Scope`-based parent-child hierarchy to work.
The `Consensus` class documentation confirms it is NOT thread-safe and calls are
serialized by the application.
- Add conditional include at top of `Consensus.h`:
```cpp
#ifdef XRPL_ENABLE_TELEMETRY
#include <xrpl/telemetry/SpanGuard.h>
#include <xrpld/telemetry/TracingInstrumentation.h>
#endif
```
**Key modified files**:
- `src/xrpld/consensus/Consensus.h`
---
## Task 4a.4: Instrument `phaseEstablish()`
**Objective**: Create `consensus.establish` span wrapping the establish phase,
with attributes for convergence progress.
**What to do**:
- At the start of `phaseEstablish()` (line 1298), if `establishSpan_` is not
yet created, create it as child of `roundSpan_` using the **direct API**
(NOT the `XRPL_TRACE_CONSENSUS` macro, which creates a local variable):
```cpp
#ifdef XRPL_ENABLE_TELEMETRY
if (!establishSpan_ && adaptor_.getTelemetry().shouldTraceConsensus())
{
establishSpan_.emplace(
adaptor_.getTelemetry().startSpan("consensus.establish"));
}
#endif
```
- Set attributes on each call:
- `xrpl.consensus.converge_percent` — `convergePercent_`
- `xrpl.consensus.establish_count` — `establishCounter_`
- `xrpl.consensus.proposers` — `currPeerPositions_.size()`
- On phase exit (transition to accept), close the establish span and record
final duration.
**Key modified files**:
- `src/xrpld/consensus/Consensus.h` — `phaseEstablish()` method
---
## Task 4a.5: Instrument `updateOurPositions()`
**Objective**: Trace each position update cycle including dispute resolution
details.
**What to do**:
- At the start of `updateOurPositions()` (line 1418), create a scoped child
span. This method is called and returns within a single `phaseEstablish()`
call, so the `XRPL_TRACE_CONSENSUS` macro works here (scoped local):
```cpp
XRPL_TRACE_CONSENSUS(adaptor_.getTelemetry(), "consensus.update_positions");
```
- Set attributes:
- `xrpl.consensus.disputes_count` — `result_->disputes.size()`
- `xrpl.consensus.converge_percent` — current convergence
- `xrpl.consensus.proposers_agreed` — count of peers with same position
- `xrpl.consensus.proposers_total` — total peer positions
- Inside the dispute resolution loop, for each dispute that changes our vote,
add an **event** with attributes using `XRPL_TRACE_ADD_EVENT` (from Task 4a.0):
```cpp
XRPL_TRACE_ADD_EVENT("dispute.resolve", {
{"xrpl.tx.id", std::string(tx_id)},
{"xrpl.dispute.our_vote", our_vote},
{"xrpl.dispute.yays", static_cast<int64_t>(yays)},
{"xrpl.dispute.nays", static_cast<int64_t>(nays)}
});
```
**Key modified files**:
- `src/xrpld/consensus/Consensus.h` — `updateOurPositions()` method
---
## Task 4a.6: Instrument `haveConsensus()` (Threshold & Convergence)
**Objective**: Trace consensus checking including threshold escalation
(`ConsensusParms::AvalancheState::{init, mid, late, stuck}`).
**What to do**:
- At the start of `haveConsensus()` (line 1598), create a scoped child span:
```cpp
XRPL_TRACE_CONSENSUS(adaptor_.getTelemetry(), "consensus.check");
```
- Set attributes:
- `xrpl.consensus.agree_count` — peers that agree with our position
- `xrpl.consensus.disagree_count` — peers that disagree
- `xrpl.consensus.converge_percent` — convergence percentage
- `xrpl.consensus.result` — ConsensusState result (Yes/No/MovedOn)
- The free function `checkConsensus()` in `Consensus.cpp` (line 151) determines
thresholds based on `currentAgreeTime`. Threshold values come from
`ConsensusParms::avalancheCutoffs` (defined in `ConsensusParms.h`).
The escalation states are `ConsensusParms::AvalancheState::{init, mid, late, stuck}`.
Record the effective threshold as an attribute on the span:
- `xrpl.consensus.threshold_percent` — current threshold from `avalancheCutoffs`
**Key modified files**:
- `src/xrpld/consensus/Consensus.h` — `haveConsensus()` method
---
## Task 4a.7: Instrument Mode Changes
**Objective**: Trace consensus mode transitions (proposing ↔ observing,
wrongLedger, switchedLedger).
**What to do**:
Mode changes are rare (typically 0-1 per round), so a **standalone short-lived
span** is appropriate (not an event). This captures timing of the mode change
itself.
- In `RCLConsensus::Adaptor::onModeChange()`, create a scoped span:
```cpp
XRPL_TRACE_CONSENSUS(app_.getTelemetry(), "consensus.mode_change");
XRPL_TRACE_SET_ATTR("xrpl.consensus.mode.old", to_string(before).c_str());
XRPL_TRACE_SET_ATTR("xrpl.consensus.mode.new", to_string(after).c_str());
```
- Note: `MonitoredMode::set()` (line 304 in `Consensus.h`) calls
`adaptor_.onModeChange(before, after)` — so the span is created in the
adaptor, which already has telemetry access. No instrumentation needed
in `Consensus.h` for this task.
**Key modified files**:
- `src/xrpld/app/consensus/RCLConsensus.cpp` — `onModeChange()`
---
## Task 4a.8: Reparent Existing Spans Under Round
**Objective**: Make existing consensus spans (`consensus.accept`,
`consensus.accept.apply`, `consensus.validation.send`) children of the
`consensus.round` root span instead of being standalone.
**What to do**:
- The existing spans in `onAccept()`, `doAccept()`, and `validate()` use
`XRPL_TRACE_CONSENSUS(app_.getTelemetry(), ...)` which creates standalone
spans on the current thread's context.
- After Task 4a.2 creates the round span and stores it, these methods run on
the same thread within the round span's scope, so they automatically become
children. Verify this works correctly.
- For `consensus.validation.send`: add a **span link** (follows-from) to the
round span context, since the validation may be processed after the round
completes.
**Key modified files**:
- `src/xrpld/app/consensus/RCLConsensus.cpp` — verify parent-child hierarchy
---
## Task 4a.9: Build Verification and Testing
**Objective**: Verify all Phase 4a changes compile cleanly with telemetry ON
and OFF, and don't affect consensus timing.
**What to do**:
1. Build with `telemetry=ON` — verify no compilation errors
2. Build with `telemetry=OFF` — verify macros expand to no-ops, no new includes
leak into `Consensus.h` when disabled
3. Run existing consensus unit tests
4. Verify `#ifdef XRPL_ENABLE_TELEMETRY` guards on all new members in
`Consensus.h`
5. Run `pccl` pre-commit checks
**Verification Checklist**:
- [x] Build succeeds with telemetry ON
- [x] Build succeeds with telemetry OFF
- [x] Existing consensus tests pass
- [x] `Consensus.h` has zero OTel includes when telemetry is OFF
- [x] No new virtual calls in hot consensus paths
- [x] `pccl` passes
---
## Phase 4a Summary
| Task | Description | New Files | Modified Files | Depends On |
| ---- | ------------------------------------------------ | --------- | -------------- | ---------- |
| 4a.0 | Prerequisites: extend SpanGuard & Telemetry APIs | 0 | 4 | Phase 4 |
| 4a.1 | Adaptor `getTelemetry()` method | 0 | 2 | Phase 4 |
| 4a.2 | Switchable round span with deterministic traceID | 0 | 3 | 4a.0, 4a.1 |
| 4a.3 | Span members in `Consensus.h` | 0 | 1 | 4a.1 |
| 4a.4 | Instrument `phaseEstablish()` | 0 | 1 | 4a.3 |
| 4a.5 | Instrument `updateOurPositions()` | 0 | 1 | 4a.0, 4a.3 |
| 4a.6 | Instrument `haveConsensus()` (thresholds) | 0 | 1 | 4a.3 |
| 4a.7 | Instrument mode changes | 0 | 1 | 4a.1 |
| 4a.8 | Reparent existing spans under round | 0 | 1 | 4a.0, 4a.2 |
| 4a.9 | Build verification and testing | 0 | 0 | 4a.0-4a.8 |
**Parallel work**: Tasks 4a.0 and 4a.1 can run in parallel. Tasks 4a.4, 4a.5, 4a.6, and 4a.7 can run in parallel after 4a.3 (and 4a.0 for 4a.5).
### New Spans (Phase 4a)
| Span Name | Location | Key Attributes |
| ---------------------------- | ------------------ | ---------------------------------------------------------------------------------- |
| `consensus.round` | `RCLConsensus.cpp` | `round_id`, `ledger_id`, `ledger.seq`, `mode`; link → prev round |
| `consensus.establish` | `Consensus.h` | `converge_percent`, `establish_count`, `proposers` |
| `consensus.update_positions` | `Consensus.h` | `disputes_count`, `converge_percent`, `proposers_agreed`, `proposers_total` |
| `consensus.check` | `Consensus.h` | `agree_count`, `disagree_count`, `converge_percent`, `result`, `threshold_percent` |
| `consensus.mode_change` | `RCLConsensus.cpp` | `mode.old`, `mode.new` |
### New Events (Phase 4a)
| Event Name | Parent Span | Attributes |
| ----------------- | ---------------------------- | ----------------------------------- |
| `dispute.resolve` | `consensus.update_positions` | `tx_id`, `our_vote`, `yays`, `nays` |
### New Attributes (Phase 4a)
```cpp
// Round-level (on consensus.round)
"xrpl.consensus.round_id" = int64 // Consensus round number
"xrpl.consensus.ledger_id" = string // previousLedger.id() hash
"xrpl.consensus.trace_strategy" = string // "deterministic" or "attribute"
// Establish-level
"xrpl.consensus.converge_percent" = int64 // Convergence % (0-100+)
"xrpl.consensus.establish_count" = int64 // Number of establish iterations
"xrpl.consensus.disputes_count" = int64 // Active disputes
"xrpl.consensus.proposers_agreed" = int64 // Peers agreeing with us
"xrpl.consensus.proposers_total" = int64 // Total peer positions
"xrpl.consensus.agree_count" = int64 // Peers that agree (haveConsensus)
"xrpl.consensus.disagree_count" = int64 // Peers that disagree
"xrpl.consensus.threshold_percent" = int64 // Current threshold (50/65/70/95)
"xrpl.consensus.result" = string // "yes", "no", "moved_on"
// Mode change
"xrpl.consensus.mode.old" = string // Previous mode
"xrpl.consensus.mode.new" = string // New mode
```
### Implementation Notes
- **Separation of concerns**: All non-trivial telemetry code extracted to private
helpers (`startRoundTracing`, `createValidationSpan`, `startEstablishTracing`,
`updateEstablishTracing`, `endEstablishTracing`). Business logic methods contain
only single-line `#ifdef` blocks calling these helpers.
- **Thread safety**: `createValidationSpan()` runs on the jtACCEPT worker thread.
Instead of accessing `roundSpan_` across threads, a `roundSpanContext_` snapshot
(lightweight `SpanContext` value type) is captured on the consensus thread in
`startRoundTracing()` and read by `createValidationSpan()`. The job queue
provides the happens-before guarantee.
- **Macro safety**: `XRPL_TRACE_ADD_EVENT` uses `do { } while (0)` to prevent
dangling-else issues.
- **Config validation**: `consensus_trace_strategy` is validated to be either
`"deterministic"` or `"attribute"`, falling back to `"deterministic"` for
unrecognised values.
- **Plan deviation**: `roundSpan_` is stored in `RCLConsensus::Adaptor` (not
`Consensus.h`) because the adaptor has access to telemetry config and can
implement the deterministic trace ID strategy. `establishSpan_` is correctly
in `Consensus.h` as planned.
---
# Phase 4b: Cross-Node Propagation (Future — Documentation Only)
> **Goal**: Wire `TraceContextPropagator` for P2P messages so that proposals
> and validations carry trace context between nodes. This enables true
> distributed tracing where a proposal sent by Node A creates a child span
> on Node B.
>
> **Status**: NOT IMPLEMENTED. The protobuf fields and propagator class exist
> but are not wired. This section documents the design for future work.
## Architecture
```
Node A (proposing) Node B (receiving)
───────────────── ──────────────────
consensus.round consensus.round
├── propose() ├── peerProposal()
│ └── TraceContextPropagator │ └── TraceContextPropagator
│ ::injectToProtobuf( │ ::extractFromProtobuf(
│ TMProposeSet.trace_context) │ TMProposeSet.trace_context)
│ │ └── span link → Node A's context
└── validate() └── onValidation()
└── inject into TMValidation └── extract from TMValidation
```
## Wiring Points
| Message | Inject Location | Extract Location | Protobuf Field |
| --------------- | ---------------------------------- | ----------------------------------- | -------------------------- |
| `TMProposeSet` | `Adaptor::propose()` | `PeerImp::onMessage(TMProposeSet)` | field 1001: `TraceContext` |
| `TMValidation` | `Adaptor::validate()` | `PeerImp::onMessage(TMValidation)` | field 1001: `TraceContext` |
| `TMTransaction` | `NetworkOPs::processTransaction()` | `PeerImp::onMessage(TMTransaction)` | field 1001: `TraceContext` |
## Span Link Semantics
Received messages use **span links** (follows-from), NOT parent-child:
- The receiver's processing span links to the sender's context
- This preserves each node's independent trace tree
- Cross-node correlation visible via linked traces in Tempo/Jaeger
## Interaction with Deterministic Trace ID (Strategy A)
When using deterministic trace_id (Phase 4a default), cross-node spans already
share the same trace_id. P2P propagation adds **span-level** linking:
- Without propagation: spans from different nodes appear in the same trace
(same trace_id) but without parent-child or follows-from relationships.
- With propagation: spans have explicit links showing which proposal/validation
from Node A caused processing on Node B.
## Prerequisites
- Phase 4a (this task list) — establish phase tracing must be in place
- `TraceContextPropagator` class (already exists in
`include/xrpl/telemetry/TraceContextPropagator.h`)
- Protobuf `TraceContext` message (already exists, field 1001)

# Phase 5: Integration Test Task List
> **Goal**: End-to-end verification of the complete telemetry pipeline using a
> 6-node consensus network. Proves that RPC, transaction, and consensus spans
> flow through the observability stack (otel-collector, Jaeger, Prometheus,
> Grafana) under realistic conditions.
>
> **Scope**: Integration test script, manual testing plan, 6-node local network
> setup, Jaeger/Prometheus/Grafana verification.
>
> **Branch**: `pratik/otel-phase5-docs-deployment`
### Related Plan Documents
| Document | Relevance |
| ---------------------------------------------------------------- | ------------------------------------------ |
| [07-observability-backends.md](./07-observability-backends.md) | Jaeger, Grafana, Prometheus setup |
| [05-configuration-reference.md](./05-configuration-reference.md) | Collector config, Docker Compose |
| [06-implementation-phases.md](./06-implementation-phases.md) | Phase 5 tasks, definition of done |
| [Phase5_taskList.md](./Phase5_taskList.md) | Phase 5 main task list (5.6 = integration) |
---
## Task IT.1: Create Integration Test Script
**Objective**: Automated bash script that stands up a 6-node xrpld network
with telemetry, exercises all span categories, and verifies data in
Jaeger/Prometheus.
**What to do**:
- Create `docker/telemetry/integration-test.sh`:
- Prerequisites check (docker, xrpld binary, curl, jq)
- Start observability stack via `docker compose`
- Generate 6 validator key pairs via temp standalone xrpld
- Generate 6 node configs + shared `validators.txt`
- Start 6 xrpld nodes in consensus mode (`--start`, no `-a`)
- Wait for all nodes to reach `"proposing"` state (120s timeout)
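The "wait for proposing" step above can be sketched as follows. This is a hedged sketch, not the script itself: `node_state` and `wait_for_proposing` are hypothetical helper names, while port 5005 and the `jq` prerequisite come from this task list.

```shell
#!/usr/bin/env bash
# Hedged sketch of the "wait for proposing" step. Helper names are
# hypothetical; port 5005 and the jq prerequisite come from this plan.
node_state() {            # reads a server_info JSON response on stdin
  jq -r '.result.info.server_state // empty'
}

wait_for_proposing() {    # $1 = RPC URL, e.g. http://localhost:5005/
  local url=$1 i state
  for i in $(seq 1 120); do
    state=$(curl -s -m 2 -d '{"method":"server_info"}' "$url" | node_state)
    [ "$state" = "proposing" ] && return 0
    sleep 1
  done
  echo "timed out waiting for proposing state" >&2
  return 1
}

# The parsing works standalone, without a running node:
echo '{"result":{"info":{"server_state":"proposing"}}}' | node_state
```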
**Key new file**: `docker/telemetry/integration-test.sh`
**Verification**:
- [ ] Script starts without errors
- [ ] All 6 nodes reach "proposing" state
- [ ] Observability stack is healthy (otel-collector, Jaeger, Prometheus, Grafana)
---
## Task IT.2: RPC Span Verification (Phase 2)
**Objective**: Verify RPC spans flow through the telemetry pipeline.
**What to do**:
- Send `server_info`, `server_state`, `ledger` RPCs to node1 (port 5005)
- Wait for batch export (5s)
- Query Jaeger API for:
- `rpc.request` spans (ServerHandler::onRequest)
- `rpc.process` spans (ServerHandler::processRequest)
- `rpc.command.server_info` spans (callMethod)
- `rpc.command.server_state` spans (callMethod)
- `rpc.command.ledger` spans (callMethod)
- Verify `xrpl.rpc.command` attribute present on `rpc.command.*` spans
**Verification**:
- [ ] Jaeger shows `rpc.request` traces
- [ ] Jaeger shows `rpc.process` traces
- [ ] Jaeger shows `rpc.command.*` traces with correct attributes
---
## Task IT.3: Transaction Span Verification (Phase 3)
**Objective**: Verify transaction spans flow through the telemetry pipeline.
**What to do**:
- Get genesis account sequence via `account_info` RPC
- Submit Payment transaction using genesis seed (`snoPBrXtMeMyMHUVTgbuqAfg1SUTb`)
- Wait for consensus inclusion (10s)
- Query Jaeger API for:
- `tx.process` spans (NetworkOPsImp::processTransaction) on submitting node
- `tx.receive` spans (PeerImp::handleTransaction) on peer nodes
- Verify `xrpl.tx.hash` attribute on `tx.process` spans
- Verify `xrpl.peer.id` attribute on `tx.receive` spans
**Verification**:
- [ ] Jaeger shows `tx.process` traces with `xrpl.tx.hash`
- [ ] Jaeger shows `tx.receive` traces with `xrpl.peer.id`
---
## Task IT.4: Consensus Span Verification (Phase 4)
**Objective**: Verify consensus spans flow through the telemetry pipeline.
**What to do**:
- Consensus runs automatically in 6-node network
- Query Jaeger API for:
- `consensus.proposal.send` (Adaptor::propose)
- `consensus.ledger_close` (Adaptor::onClose)
- `consensus.accept` (Adaptor::onAccept)
- `consensus.validation.send` (Adaptor::validate)
- Verify attributes:
- `xrpl.consensus.mode` on `consensus.ledger_close`
- `xrpl.consensus.proposers` on `consensus.accept`
- `xrpl.consensus.ledger.seq` on `consensus.validation.send`
**Verification**:
- [ ] Jaeger shows `consensus.ledger_close` traces with `xrpl.consensus.mode`
- [ ] Jaeger shows `consensus.accept` traces with `xrpl.consensus.proposers`
- [ ] Jaeger shows `consensus.proposal.send` traces
- [ ] Jaeger shows `consensus.validation.send` traces
---
## Task IT.5: Spanmetrics Verification (Phase 5)
**Objective**: Verify spanmetrics connector derives RED metrics from spans.
**What to do**:
- Query Prometheus for `traces_span_metrics_calls_total`
- Query Prometheus for `traces_span_metrics_duration_milliseconds_count`
- Verify Grafana loads at `http://localhost:3000`
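A sketch of the Prometheus check, assuming the stack from IT.1 is running. `series_count` is a hypothetical helper; the metric name and port 9090 come from this task list, and the query endpoint is Prometheus's standard instant-query HTTP API.

```shell
#!/usr/bin/env bash
# Hedged sketch of the spanmetrics check. `series_count` is a hypothetical
# helper; metric name and port come from this plan.
series_count() {   # reads a Prometheus instant-query JSON response on stdin
  jq -r '.data.result | length'
}

# Live check (stack from IT.1 must be running):
# curl -s 'http://localhost:9090/api/v1/query?query=traces_span_metrics_calls_total' \
#   | series_count   # a non-zero count means spanmetrics are flowing

# The parsing itself works standalone:
echo '{"status":"success","data":{"result":[{"metric":{"span_name":"rpc.request"}}]}}' \
  | series_count
```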
**Verification**:
- [ ] Prometheus returns non-empty results for `traces_span_metrics_calls_total`
- [ ] Prometheus returns non-empty results for duration histogram
- [ ] Grafana UI accessible with dashboards visible
---
## Task IT.6: Manual Testing Plan
**Objective**: Document how to run tests manually for future reference.
**What to do**:
- Create `docker/telemetry/TESTING.md` with:
- Prerequisites section
- Single-node standalone test (quick verification)
- 6-node consensus test (full verification)
- Expected span catalog (all 12 span names with attributes)
- Verification queries (Jaeger API, Prometheus API)
- Troubleshooting guide
**Key new file**: `docker/telemetry/TESTING.md`
**Verification**:
- [ ] Document covers both single-node and multi-node testing
- [ ] All 12 span names documented with source file and attributes
- [ ] Troubleshooting section covers common failure modes
---
## Task IT.7: Run and Verify
**Objective**: Execute the integration test and validate results.
**What to do**:
- Run `docker/telemetry/integration-test.sh` locally
- Debug any failures
- Leave stack running for manual verification
- Share URLs:
- Jaeger: `http://localhost:16686`
- Grafana: `http://localhost:3000`
- Prometheus: `http://localhost:9090`
**Verification**:
- [ ] Script completes with all checks passing
- [ ] Jaeger UI shows rippled service with all expected span names
- [ ] Grafana dashboards load and show data
---
## Task IT.8: Commit
**Objective**: Commit all new files to Phase 5 branch.
**What to do**:
- Run `pcc` (pre-commit checks)
- Commit 3 new files to `pratik/otel-phase5-docs-deployment`
**Verification**:
- [ ] `pcc` passes
- [ ] Commit created on Phase 5 branch
---
## Summary
| Task | Description | New Files | Depends On |
| ---- | ----------------------------- | --------- | ---------- |
| IT.1 | Integration test script | 1 | Phase 5 |
| IT.2 | RPC span verification | 0 | IT.1 |
| IT.3 | Transaction span verification | 0 | IT.1 |
| IT.4 | Consensus span verification | 0 | IT.1 |
| IT.5 | Spanmetrics verification | 0 | IT.1 |
| IT.6 | Manual testing plan | 1 | -- |
| IT.7 | Run and verify | 0 | IT.1-IT.6 |
| IT.8 | Commit | 0 | IT.7 |
**Exit Criteria**:
- [ ] All 6 xrpld nodes reach "proposing" state
- [ ] All expected span names (per the span catalog in IT.6) visible in Jaeger
- [ ] Spanmetrics available in Prometheus
- [ ] Grafana dashboards show data
- [ ] Manual testing plan document complete

# Phase 5: Documentation & Deployment Task List
> **Goal**: Production readiness — Grafana dashboards, spanmetrics pipeline, operator runbook, alert definitions, and final integration testing. This phase ensures the telemetry system is useful and maintainable in production.
>
> **Scope**: Grafana dashboard definitions, OTel Collector spanmetrics connector, Prometheus integration, alert rules, operator documentation, and production-ready Docker Compose stack.
>
> **Branch**: `pratik/otel-phase5-docs-deployment` (from `pratik/otel-phase4-consensus-tracing`)
### Related Plan Documents
| Document | Relevance |
| ---------------------------------------------------------------- | -------------------------------------------------------------------------- |
| [07-observability-backends.md](./07-observability-backends.md) | Jaeger setup (§7.1), Grafana dashboards (§7.6), alerts (§7.6.3) |
| [05-configuration-reference.md](./05-configuration-reference.md) | Collector config (§5.5), production config (§5.5.2), Docker Compose (§5.6) |
| [06-implementation-phases.md](./06-implementation-phases.md) | Phase 5 tasks (§6.6), definition of done (§6.11.5) |
---
## Task 5.1: Add Spanmetrics Connector to OTel Collector
**Objective**: Derive RED metrics (Rate, Errors, Duration) from trace spans automatically, enabling Grafana time-series dashboards.
**What to do**:
- Edit `docker/telemetry/otel-collector-config.yaml`:
- Add `spanmetrics` connector:
```yaml
connectors:
spanmetrics:
histogram:
explicit:
buckets: [1ms, 5ms, 10ms, 25ms, 50ms, 100ms, 250ms, 500ms, 1s, 5s]
dimensions:
- name: xrpl.rpc.command
- name: xrpl.rpc.status
- name: xrpl.consensus.phase
- name: xrpl.tx.type
```
- Add `prometheus` exporter:
```yaml
exporters:
prometheus:
endpoint: 0.0.0.0:8889
```
- Wire the pipeline:
```yaml
service:
pipelines:
traces:
receivers: [otlp]
processors: [batch]
exporters: [debug, otlp/jaeger, spanmetrics]
metrics:
receivers: [spanmetrics]
exporters: [prometheus]
```
- Edit `docker/telemetry/docker-compose.yml`:
- Expose port `8889` on the collector for Prometheus scraping
- Add Prometheus service
- Add Prometheus as Grafana datasource
**Key modified files**:
- `docker/telemetry/otel-collector-config.yaml`
- `docker/telemetry/docker-compose.yml`
**Key new files**:
- `docker/telemetry/prometheus.yml` (Prometheus scrape config)
- `docker/telemetry/grafana/provisioning/datasources/prometheus.yaml`
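A minimal sketch of what `prometheus.yml` could contain, assuming the collector's `8889` endpoint configured above; the job name and intervals here are illustrative, not prescribed by the plan:

```yaml
# Hedged sketch of docker/telemetry/prometheus.yml. Job name and intervals
# are illustrative; the 8889 target matches the prometheus exporter above.
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: otel-collector
    scrape_interval: 5s
    static_configs:
      - targets: ['otel-collector:8889']
```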
**Reference**:
- [POC_taskList.md §Next Steps](./POC_taskList.md) — Metrics pipeline for Grafana dashboards
---
## Task 5.2: Create Grafana Dashboards
**Objective**: Provide pre-built Grafana dashboards for RPC performance, transaction lifecycle, and consensus health.
**What to do**:
- Create `docker/telemetry/grafana/provisioning/dashboards/dashboards.yaml` (provisioning config)
- Create dashboard JSON files:
1. **RPC Performance Dashboard** (`rpc-performance.json`):
- RPC request latency (p50/p95/p99) by command — histogram panel
- RPC throughput (requests/sec) by command — time series
- RPC error rate by command — bar gauge
- Top slowest RPC commands — table
2. **Transaction Overview Dashboard** (`transaction-overview.json`):
- Transaction processing rate — time series
- Transaction latency distribution — histogram
- Suppression rate (duplicates) — stat panel
- Transaction processing path (sync vs async) — pie chart
3. **Consensus Health Dashboard** (`consensus-health.json`):
- Consensus round duration — time series
- Phase duration breakdown (open/establish/accept) — stacked bar
- Proposals sent/received per round — stat panel
- Consensus mode distribution (proposing/observing) — pie chart
- Store dashboards in `docker/telemetry/grafana/dashboards/`
**Key new files**:
- `docker/telemetry/grafana/provisioning/dashboards/dashboards.yaml`
- `docker/telemetry/grafana/dashboards/rpc-performance.json`
- `docker/telemetry/grafana/dashboards/transaction-overview.json`
- `docker/telemetry/grafana/dashboards/consensus-health.json`
**Reference**:
- [07-observability-backends.md §7.6](./07-observability-backends.md) — Grafana dashboard specifications
- [01-architecture-analysis.md §1.8.3](./01-architecture-analysis.md) — Dashboard panel examples
---
## Task 5.3: Define Alert Rules
**Objective**: Create alert definitions for key telemetry anomalies.
**What to do**:
- Create `docker/telemetry/grafana/provisioning/alerting/alerts.yaml`:
- **RPC Latency Alert**: p99 latency > 1s for any command over 5 minutes
- **RPC Error Rate Alert**: Error rate > 5% for any command over 5 minutes
- **Consensus Duration Alert**: Round duration > 10s (warn), > 30s (critical)
- **Transaction Processing Alert**: Processing rate drops below threshold
- **Telemetry Pipeline Health**: No spans received for > 2 minutes
**Key new files**:
- `docker/telemetry/grafana/provisioning/alerting/alerts.yaml`
**Reference**:
- [07-observability-backends.md §7.6.3](./07-observability-backends.md) — Alert rule definitions
---
## Task 5.4: Production Collector Configuration
**Objective**: Create a production-ready OTel Collector configuration with tail-based sampling and resource limits.
**What to do**:
- Create `docker/telemetry/otel-collector-config-production.yaml`:
- Tail-based sampling policy:
- Always sample errors and slow traces
- 10% base sampling rate for normal traces
- Always sample first trace for each unique RPC command
- Resource limits:
- Memory limiter processor (80% of available memory)
- Queued retry for export failures
- TLS configuration for production endpoints
- Health check endpoint
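The sampling and memory-limit pieces above can be sketched with the collector's standard `tail_sampling` and `memory_limiter` processors. Policy names and thresholds below are illustrative; the "first trace per unique RPC command" policy would need a custom policy combination and is not shown.

```yaml
# Hedged sketch of the production processors. Names and thresholds are
# illustrative; tail_sampling and memory_limiter are stock processors.
processors:
  memory_limiter:
    check_interval: 1s
    limit_percentage: 80
    spike_limit_percentage: 20
  tail_sampling:
    decision_wait: 10s
    policies:
      - name: keep-errors
        type: status_code
        status_code: {status_codes: [ERROR]}
      - name: keep-slow
        type: latency
        latency: {threshold_ms: 1000}
      - name: baseline
        type: probabilistic
        probabilistic: {sampling_percentage: 10}
```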
**Key new files**:
- `docker/telemetry/otel-collector-config-production.yaml`
**Reference**:
- [05-configuration-reference.md §5.5.2](./05-configuration-reference.md) — Production collector config
---
## Task 5.5: Operator Runbook
**Objective**: Create operator documentation for managing the telemetry system in production.
**What to do**:
- Create `docs/telemetry-runbook.md`:
- **Setup**: How to enable telemetry in rippled
- **Configuration**: All config options with descriptions
- **Collector Deployment**: Docker Compose vs. Kubernetes vs. bare metal
- **Troubleshooting**: Common issues and resolutions
- No traces appearing
- High memory usage from telemetry
- Collector connection failures
- Sampling configuration tuning
- **Performance Tuning**: Batch size, queue size, sampling ratio guidelines
- **Upgrading**: How to upgrade OTel SDK and Collector versions
**Key new files**:
- `docs/telemetry-runbook.md`
---
## Task 5.6: Final Integration Testing
**Objective**: Validate the complete telemetry stack end-to-end.
**What to do**:
1. Start full Docker stack (Collector, Jaeger, Grafana, Prometheus)
2. Build rippled with `telemetry=ON`
3. Run in standalone mode with telemetry enabled
4. Generate RPC traffic and verify traces in Jaeger
5. Verify dashboards populate in Grafana
6. Verify alerts trigger correctly
7. Test telemetry OFF path (no regressions)
8. Run full test suite
**Verification Checklist**:
- [ ] Docker stack starts without errors
- [ ] Traces appear in Jaeger with correct hierarchy
- [ ] Grafana dashboards show metrics derived from spans
- [ ] Prometheus scrapes spanmetrics successfully
- [ ] Alerts can be triggered by simulated conditions
- [ ] Build succeeds with telemetry ON and OFF
- [ ] Full test suite passes
---
## Summary
| Task | Description | New Files | Modified Files | Depends On |
| ---- | ---------------------------------- | --------- | -------------- | ---------- |
| 5.1 | Spanmetrics connector + Prometheus | 2 | 2 | Phase 4 |
| 5.2 | Grafana dashboards | 4 | 0 | 5.1 |
| 5.3 | Alert definitions | 1 | 0 | 5.1 |
| 5.4 | Production collector config | 1 | 0 | Phase 4 |
| 5.5 | Operator runbook | 1 | 0 | Phase 4 |
| 5.6 | Final integration testing | 0 | 0 | 5.1-5.5 |
**Parallel work**: Tasks 5.1, 5.4, and 5.5 can run in parallel. Tasks 5.2 and 5.3 depend on 5.1. Task 5.6 depends on all others.
**Exit Criteria** (from [06-implementation-phases.md §6.11.5](./06-implementation-phases.md)):
- [ ] Dashboards deployed and showing data
- [ ] Alerts configured and tested
- [ ] Operator documentation complete
- [ ] Production collector config ready
- [ ] Full test suite passes

# Phase 7: Native OTel Metrics Migration — Task List
> **Goal**: Replace `StatsDCollector` with a native OpenTelemetry Metrics SDK implementation behind the existing `beast::insight::Collector` interface, eliminating the StatsD UDP dependency.
>
> **Scope**: New `OTelCollectorImpl` class, `CollectorManager` config change, OTel Collector pipeline update, Grafana dashboard metric name migration, integration tests.
>
> **Branch**: `pratik/otel-phase7-native-metrics` (from `pratik/otel-phase6-statsd`)
### Related Plan Documents
| Document | Relevance |
| -------------------------------------------------------------------- | --------------------------------------------------------------- |
| [06-implementation-phases.md](./06-implementation-phases.md) | Phase 7 plan: motivation, architecture, exit criteria (§6.8) |
| [02-design-decisions.md](./02-design-decisions.md) | Collector interface design, beast::insight coexistence strategy |
| [05-configuration-reference.md](./05-configuration-reference.md) | `[insight]` and `[telemetry]` config sections |
| [09-data-collection-reference.md](./09-data-collection-reference.md) | Complete metric inventory that must be preserved |
---
## Task 7.1: Add OTel Metrics SDK to Build Dependencies
**Objective**: Enable the OTel C++ Metrics SDK components in the build system.
**What to do**:
- Edit `conanfile.py`:
- Add OTel metrics SDK components to the dependency list when `telemetry=True`
- Components needed: `opentelemetry-cpp::metrics`, `opentelemetry-cpp::otlp_http_metric_exporter`
- Edit `CMakeLists.txt` (telemetry section):
- Link `opentelemetry::metrics` and `opentelemetry::otlp_http_metric_exporter` targets
**Key modified files**:
- `conanfile.py`
- `CMakeLists.txt` (or the relevant telemetry cmake target)
**Reference**: [05-configuration-reference.md §5.3](./05-configuration-reference.md) — CMake integration
---
## Task 7.2: Implement OTelCollector Class
**Objective**: Create the core `OTelCollector` implementation that maps beast::insight instruments to OTel Metrics SDK instruments.
**What to do**:
- Create `include/xrpl/beast/insight/OTelCollector.h`:
- Public factory: `static std::shared_ptr<OTelCollector> New(std::string const& endpoint, std::string const& prefix, beast::Journal journal)`
- Derives from `StatsDCollector` (or directly from `Collector` — TBD based on shared code)
- Create `src/libxrpl/beast/insight/OTelCollector.cpp` (~400-500 lines):
- **OTelCounterImpl**: Wraps `opentelemetry::metrics::Counter<int64_t>`. `increment(amount)` calls `counter->Add(amount)`.
- **OTelGaugeImpl**: Uses `opentelemetry::metrics::ObservableGauge<uint64_t>` with an async callback. `set(value)` stores value atomically; callback reads it during collection.
- **OTelMeterImpl**: Wraps `opentelemetry::metrics::Counter<uint64_t>`. `increment(amount)` calls `counter->Add(amount)`. Semantically identical to Counter but unsigned.
- **OTelEventImpl**: Wraps `opentelemetry::metrics::Histogram<double>`. `notify(duration)` calls `histogram->Record(duration.count())`. Uses explicit bucket boundaries matching SpanMetrics: [1, 5, 10, 25, 50, 100, 250, 500, 1000, 5000] ms.
  - **OTelHookImpl**: Stores the handler function. Called during periodic metric collection (same 1s cadence via `PeriodicExportingMetricReader`).
- **OTelCollectorImp**: Main class.
    - Creates `MeterProvider` with `PeriodicExportingMetricReader` (1s export interval)
- Creates `OtlpHttpMetricExporter` pointing to `[telemetry]` endpoint
- Sets resource attributes (service.name, service.instance.id) matching trace exporter
- Implements all `make_*()` factory methods
- Prefixes metric names with `[insight] prefix=` value
- Guard all OTel SDK includes with `#ifdef XRPL_ENABLE_TELEMETRY` to compile to `NullCollector` equivalents when telemetry disabled.
**Key new files**:
- `include/xrpl/beast/insight/OTelCollector.h`
- `src/libxrpl/beast/insight/OTelCollector.cpp`
**Key patterns to follow**:
- Match `StatsDCollector.cpp` structure: private impl classes, intrusive list for metrics, strand-based thread safety
- Match existing telemetry code style from `src/libxrpl/telemetry/Telemetry.cpp`
- Use RAII for MeterProvider lifecycle (shutdown on destructor)
**Reference**: [04-code-samples.md](./04-code-samples.md) — code style and patterns
---
## Task 7.3: Update CollectorManager
**Objective**: Add `server=otel` config option to route metric creation to the new OTel backend.
**What to do**:
- Edit `src/xrpld/app/main/CollectorManager.cpp`:
- In the constructor, add a third branch after `server == "statsd"`:
```cpp
else if (server == "otel")
{
// Read endpoint from [telemetry] section
auto const endpoint = get(telemetryParams, "endpoint",
"http://localhost:4318/v1/metrics");
std::string const& prefix(get(params, "prefix"));
m_collector = beast::insight::OTelCollector::New(
endpoint, prefix, journal);
}
```
- This requires access to the `[telemetry]` config section — may need to pass it as a parameter or read from Application config.
- Edit `src/xrpld/app/main/CollectorManager.h`:
- Add `#include <xrpl/beast/insight/OTelCollector.h>`
**Key modified files**:
- `src/xrpld/app/main/CollectorManager.cpp`
- `src/xrpld/app/main/CollectorManager.h`
---
## Task 7.4: Update OTel Collector Configuration
**Objective**: Add a metrics pipeline to the OTLP receiver and remove the StatsD receiver dependency.
**What to do**:
- Edit `docker/telemetry/otel-collector-config.yaml`:
- Remove `statsd` receiver (no longer needed when `server=otel`)
- Add metrics pipeline under `service.pipelines`:
```yaml
metrics:
receivers: [otlp, spanmetrics]
processors: [batch]
exporters: [prometheus]
```
- The OTLP receiver already listens on :4318 — it just needs to be added to the metrics pipeline receivers.
- Keep `spanmetrics` connector in the metrics pipeline so span-derived RED metrics continue working.
- Edit `docker/telemetry/docker-compose.yml`:
- Remove UDP :8125 port mapping from otel-collector service
- Update rippled service config: change `[insight] server=statsd` to `server=otel`
**Key modified files**:
- `docker/telemetry/otel-collector-config.yaml`
- `docker/telemetry/docker-compose.yml`
**Note**: Keep a commented-out `statsd` receiver block for operators who need backward compatibility.
---
## Task 7.5: Preserve Metric Names in Prometheus
**Objective**: Ensure existing Grafana dashboards continue working with identical metric names.
**What to do**:
- In `OTelCollector.cpp`, construct OTel instrument names to match existing Prometheus metric names:
- beast::insight `make_gauge("LedgerMaster", "Validated_Ledger_Age")` → OTel instrument name: `rippled_LedgerMaster_Validated_Ledger_Age`
- The prefix + group + name concatenation must produce the same string as `StatsDCollector`'s format
- Use underscores as separators (matching StatsD convention)
- Verify in integration test that key Prometheus queries still return data:
- `rippled_LedgerMaster_Validated_Ledger_Age`
- `rippled_Peer_Finder_Active_Inbound_Peers`
- `rippled_rpc_requests`
**Key consideration**: OTel Prometheus exporter may normalize metric names differently than StatsD receiver. Test this early (Task 7.2) and adjust naming strategy if needed. The OTel SDK's Prometheus exporter adds `_total` suffix to counters and converts dots to underscores — match existing conventions.
---
## Task 7.6: Update Grafana Dashboards
**Objective**: Update the 3 StatsD dashboards if any metric names change due to OTLP export format differences.
**What to do**:
- If Task 7.5 confirms metric names are preserved exactly, no dashboard changes needed.
- If OTLP export produces different names (e.g., `_total` suffix on counters), update:
- `docker/telemetry/grafana/dashboards/statsd-node-health.json`
- `docker/telemetry/grafana/dashboards/statsd-network-traffic.json`
- `docker/telemetry/grafana/dashboards/statsd-rpc-pathfinding.json`
- Rename dashboard titles from "StatsD" to "System Metrics" or similar (since they're no longer StatsD-sourced).
**Key modified files**:
- `docker/telemetry/grafana/dashboards/statsd-*.json` (3 files, conditionally)
---
## Task 7.7: Update Integration Tests
**Objective**: Verify the full OTLP metrics pipeline end-to-end.
**What to do**:
- Edit `docker/telemetry/integration-test.sh`:
- Update test config to use `[insight] server=otel`
- Verify metrics arrive in Prometheus via OTLP (not StatsD)
- Add check that StatsD receiver is no longer required
- Preserve all existing metric presence checks
**Key modified files**:
- `docker/telemetry/integration-test.sh`
---
## Task 7.8: Update Documentation
**Objective**: Update all plan docs, runbook, and reference docs to reflect the migration.
**What to do**:
- Edit `docs/telemetry-runbook.md`:
- Update `[insight]` config examples to show `server=otel`
- Update troubleshooting section (no more StatsD UDP debugging)
- Edit `OpenTelemetryPlan/09-data-collection-reference.md`:
- Update Data Flow Overview diagram (remove StatsD receiver)
- Update Section 2 header from "StatsD Metrics" to "System Metrics (OTel native)"
- Update config examples
- Edit `OpenTelemetryPlan/05-configuration-reference.md`:
- Add `server=otel` option to `[insight]` section docs
- Edit `docker/telemetry/TESTING.md`:
- Update setup instructions to use `server=otel`
**Key modified files**:
- `docs/telemetry-runbook.md`
- `OpenTelemetryPlan/09-data-collection-reference.md`
- `OpenTelemetryPlan/05-configuration-reference.md`
- `docker/telemetry/TESTING.md`
---
## Task 7.9: ValidationTracker — Validation Agreement Computation
> **Source**: [External Dashboard Parity](../docs/superpowers/specs/2026-03-30-external-dashboard-parity-design.md) — the most valuable metric from the community [xrpl-validator-dashboard](https://github.com/realgrapedrop/xrpl-validator-dashboard).
>
> **Upstream**: Phase 4 Task 4.8 (validation span attributes provide ledger hash context).
> **Downstream**: Phase 9 (Validator Health dashboard), Phase 10 (validation checks), Phase 11 (agreement alert rules).
**Objective**: Implement a stateful class that tracks whether our validator's validations agree with network consensus, maintaining rolling 1h and 24h windows with an 8-second grace period and 5-minute late repair window.
**Architecture**:
```
consensus.validation.send ────> ValidationTracker ────> MetricsRegistry
  (records our validation       (reconciles after       (exports agreement
   for ledger X)                 8s grace period)        gauges every 10s)

ledger.validate ──────────────> ValidationTracker
  (records which ledger         (marks ledger X as
   network validated)            agreed or missed)
```
**What to do**:
- Create `src/xrpld/telemetry/ValidationTracker.h`:
- `recordOurValidation(ledgerHash, ledgerSeq)` — called when we send a validation
- `recordNetworkValidation(ledgerHash, ledgerSeq)` — called when a ledger becomes fully validated by the network
- `reconcile()` — called periodically; reconciles pending ledger events after 8s grace period
- Getters: `agreementPct1h()`, `agreementPct24h()`, `agreements1h()`, `missed1h()`, `agreements24h()`, `missed24h()`, `totalAgreements()`, `totalMissed()`, `totalValidationsSent()`, `totalValidationsChecked()`
- Thread-safety: atomics for counters, mutex for window deques
- Create `src/xrpld/telemetry/detail/ValidationTracker.cpp`:
- Reconciliation logic: after 8s grace period, check if `weValidated && networkValidated && sameHash` → agreement; else missed
- Late repair: if a late validation arrives within 5 minutes, correct a false-positive miss
- Sliding window: `std::deque<WindowEvent>` evicts entries older than 1h/24h on each reconciliation pass
- Ring buffer of 1000 `LedgerEvent` structs for pending reconciliation
- Add recording hooks (modifying Phase 4 code from Phase 7 branch):
- `RCLConsensus.cpp` `validate()`: call `tracker.recordOurValidation()`
- `LedgerMaster.cpp` fully-validated path: call `tracker.recordNetworkValidation()`
**Key data structures**:
```cpp
struct LedgerEvent {
uint256 ledgerHash;
LedgerIndex seq;
TimePoint closeTime;
bool weValidated = false;
bool networkValidated = false;
bool reconciled = false;
bool agreed = false;
};
struct WindowEvent {
TimePoint time;
bool agreed;
};
```
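The reconciliation pass described above can be sketched as follows. This is an illustrative simplification, not the final API: `sameHash` collapses the hash comparison that the real class performs by keying events on ledger hash, and the windows/ring-buffer details are reduced to plain containers.

```cpp
#include <chrono>
#include <deque>
#include <vector>

using TimePoint = std::chrono::steady_clock::time_point;
using namespace std::chrono_literals;

// Simplified variants of the structs above.
struct LedgerEvent
{
    TimePoint closeTime;
    bool weValidated = false;
    bool networkValidated = false;
    bool sameHash = false;  // our validation hash == network-validated hash
    bool reconciled = false;
    bool agreed = false;
};

struct WindowEvent
{
    TimePoint time;
    bool agreed;
};

// One reconciliation pass: settle events older than the 8s grace period
// into the rolling window, then evict window entries older than 1h.
void
reconcile(
    std::vector<LedgerEvent>& pending,
    std::deque<WindowEvent>& window1h,
    TimePoint now)
{
    for (auto& ev : pending)
    {
        if (ev.reconciled || now - ev.closeTime < 8s)
            continue;
        ev.agreed = ev.weValidated && ev.networkValidated && ev.sameHash;
        ev.reconciled = true;
        window1h.push_back({now, ev.agreed});
    }
    while (!window1h.empty() && now - window1h.front().time > 1h)
        window1h.pop_front();
}
```

The 5-minute late-repair step would run as a second pass over already-reconciled events, flipping `agreed` when a matching late validation arrives.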
**Key new files**:
- `src/xrpld/telemetry/ValidationTracker.h`
- `src/xrpld/telemetry/detail/ValidationTracker.cpp`
**Key modified files**:
- `src/xrpld/telemetry/MetricsRegistry.h` (add ValidationTracker member)
- `src/xrpld/telemetry/MetricsRegistry.cpp` (add gauge callback reading from tracker)
- `src/xrpld/app/consensus/RCLConsensus.cpp` (add recording hooks)
- `src/xrpld/app/ledger/detail/LedgerMaster.cpp` (add recording hook)
**Exit Criteria**:
- [ ] ValidationTracker correctly tracks agreement with 8s grace period
- [ ] 5-minute late repair corrects false-positive misses
- [ ] Thread-safe (atomics + mutex for window deques)
- [ ] Rolling windows correctly evict stale entries
- [ ] Unit tests: normal agreement, missed validation, late repair, window eviction
---
## Task 7.10: Validator Health Observable Gauges
> **Source**: [External Dashboard Parity](../docs/superpowers/specs/2026-03-30-external-dashboard-parity-design.md)
**Objective**: Export amendment blocked, UNL health, and quorum data as a native OTel observable gauge.
**What to do**:
- In `MetricsRegistry.cpp` `registerAsyncGauges()`, add:
```cpp
validatorHealthGauge_ = meter_->CreateDoubleObservableGauge(
"rippled_validator_health", "Validator health indicators");
```
**Gauge label values**:
| Label `metric=` | Type | Source |
| ------------------- | ------ | ------------------------------------------------- |
| `amendment_blocked` | int64 | `app_.getOPs().isAmendmentBlocked()` → 0/1 |
| `unl_blocked` | int64 | `app_.getOPs().isUNLBlocked()` → 0/1 |
| `unl_expiry_days` | double | `app_.validators().expires()` → days until expiry |
| `validation_quorum` | int64 | `app_.validators().quorum()` |
**Key modified files**: `src/xrpld/telemetry/MetricsRegistry.h/.cpp`
**Exit Criteria**:
- [ ] All 4 label values emitted every 10s
- [ ] `unl_expiry_days` is negative when expired, positive when active
- [ ] Values visible in Prometheus
---
## Task 7.11: Peer Quality Observable Gauges
> **Source**: [External Dashboard Parity](../docs/superpowers/specs/2026-03-30-external-dashboard-parity-design.md)
**Objective**: Export peer health aggregates (latency P90, insane peers, version awareness) as a native OTel observable gauge.
**What to do**:
- In `MetricsRegistry.cpp` `registerAsyncGauges()`, add a callback that iterates `app_.overlay().foreach(...)` to:
- Collect per-peer latency values, sort, compute P90
- Count peers with `tracking_ == diverged` (insane)
- Compare peer `getVersion()` to own version for upgrade awareness
**Gauge label values**:
| Label `metric=` | Type | Source |
| -------------------------- | ------ | ------------------------------------- |
| `peer_latency_p90_ms` | double | P90 from sorted peer latencies |
| `peers_insane_count` | int64 | Peers with diverged tracking status |
| `peers_higher_version_pct` | double | % of peers on newer rippled version |
| `upgrade_recommended` | int64 | 1 if `peers_higher_version_pct > 60%` |
**Implementation note**: The callback runs every 10s on the metrics reader thread. Iterating ~50-200 peers is acceptable overhead.
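The P90 step can be sketched with the nearest-rank method (an assumption; the implementation may interpolate between neighboring samples instead):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Nearest-rank P90: sort the samples, take the value at the
// ceil(0.90 * N)-th position (1-based). Returns 0.0 for no samples.
double
p90(std::vector<double> latencies)
{
    if (latencies.empty())
        return 0.0;
    std::sort(latencies.begin(), latencies.end());
    auto const rank =
        static_cast<std::size_t>(std::ceil(0.90 * latencies.size()));
    return latencies[rank - 1];
}
```

Taking the vector by value keeps the callback's snapshot of per-peer latencies intact while `std::sort` works on the copy.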
**Key modified files**: `src/xrpld/telemetry/MetricsRegistry.h/.cpp`
**Exit Criteria**:
- [ ] P90 latency computed correctly
- [ ] Insane count matches `peers` RPC output
- [ ] Version comparison handles format variations (e.g., "rippled-2.4.0-rc1")
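Version comparison has to tolerate strings like `rippled-2.4.0-rc1`. A hypothetical parser (the helper name and tuple return type are assumptions) that skips any leading product prefix and ignores pre-release suffixes:

```cpp
#include <cstdio>
#include <string>
#include <tuple>

// Extract {major, minor, patch} from strings such as "rippled-2.4.0-rc1"
// or "2.5.1". Missing components parse as 0.
std::tuple<int, int, int>
parseVersion(std::string const& s)
{
    int maj = 0, min = 0, pat = 0;
    // Skip to the first digit, then read a dotted triple; anything after
    // (e.g. "-rc1") is ignored by sscanf.
    auto const pos = s.find_first_of("0123456789");
    if (pos != std::string::npos)
        std::sscanf(s.c_str() + pos, "%d.%d.%d", &maj, &min, &pat);
    return {maj, min, pat};
}
```

Tuple comparison then gives the "peer is on a newer version" predicate directly.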
---
## Task 7.12: Ledger Economy Observable Gauges
> **Source**: [External Dashboard Parity](../docs/superpowers/specs/2026-03-30-external-dashboard-parity-design.md)
**Objective**: Export fee, reserve, ledger age, and transaction rate as a native OTel observable gauge.
**Gauge label values**:
| Label `metric=` | Type | Source |
| -------------------- | ------ | --------------------------------------------------- |
| `base_fee_xrp`       | double | Base fee from validated ledger fee settings, converted from drops to XRP |
| `reserve_base_xrp`   | double | Account reserve from validated ledger, converted from drops to XRP       |
| `reserve_inc_xrp`    | double | Owner reserve increment, converted from drops to XRP                     |
| `ledger_age_seconds` | double | `now - lastValidatedCloseTime` |
| `transaction_rate` | double | Derived: tx count delta / time delta (smoothed) |
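The smoothed `transaction_rate` row could be derived with an exponential moving average; this sketch's struct layout and smoothing constant are assumptions (the plan only requires "smoothed"):

```cpp
#include <cmath>

// Exponentially-smoothed transactions-per-second, fed once per
// observation interval with the tx-count delta and elapsed time.
struct TxRate
{
    double smoothed = 0.0;
    double alpha = 0.2;  // smoothing factor per sample (assumed value)

    double
    sample(double txDelta, double dtSeconds)
    {
        if (dtSeconds <= 0)
            return smoothed;  // no elapsed time: keep previous value
        double const instant = txDelta / dtSeconds;
        smoothed = alpha * instant + (1.0 - alpha) * smoothed;
        return smoothed;
    }
};
```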
**Key modified files**: `src/xrpld/telemetry/MetricsRegistry.h/.cpp`
**Exit Criteria**:
- [ ] Fee values match `server_info` RPC output
- [ ] `ledger_age_seconds` increases monotonically between ledger closes
- [ ] `transaction_rate` is smoothed (rolling average)
---
## Task 7.13: State Tracking Observable Gauges
> **Source**: [External Dashboard Parity](../docs/superpowers/specs/2026-03-30-external-dashboard-parity-design.md)
**Objective**: Export extended state value (0-6 encoding combining OperatingMode + ConsensusMode) and time-in-current-state.
**Gauge label values**:
| Label `metric=` | Type | Source |
| ------------------------------- | ------ | ----------------------------------------------- |
| `state_value` | int64 | 0-6 encoding (see spec for mapping) |
| `time_in_current_state_seconds` | double | `now - lastModeChangeTime` from StateAccounting |
**State value encoding**: 0=disconnected, 1=connected, 2=syncing, 3=tracking, 4=full, 5=validating (full + validating), 6=proposing (full + proposing).
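The encoding above can be sketched as follows. The enum values mirror rippled's `OperatingMode` ordering (0-4) but both enums are re-declared here purely for illustration:

```cpp
// Illustrative re-declarations; rippled's own enums live elsewhere.
enum class OperatingMode { DISCONNECTED, CONNECTED, SYNCING, TRACKING, FULL };
enum class ConsensusPhase { observing, validating, proposing };

// Combine the two into the 0-6 encoding: non-FULL modes map directly to
// 0-3, FULL maps to 4, and FULL plus consensus participation lifts the
// value to 5 (validating) or 6 (proposing).
int
stateValue(OperatingMode om, ConsensusPhase cp)
{
    int v = static_cast<int>(om);  // 0..4
    if (om == OperatingMode::FULL)
    {
        if (cp == ConsensusPhase::proposing)
            v = 6;
        else if (cp == ConsensusPhase::validating)
            v = 5;
    }
    return v;
}
```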
**Key modified files**: `src/xrpld/telemetry/MetricsRegistry.h/.cpp`
**Exit Criteria**:
- [ ] `state_value` correctly combines OperatingMode and ConsensusMode
- [ ] `time_in_current_state_seconds` resets on mode change
---
## Task 7.14: Storage Detail and Sync Info Gauges
> **Source**: [External Dashboard Parity](../docs/superpowers/specs/2026-03-30-external-dashboard-parity-design.md)
**Objective**: Export NuDB-specific storage size and initial sync duration.
**Gauge label values**:
| Gauge Name | Label `metric=` | Type | Source |
| ------------------------ | ------------------------------- | ------ | ----------------------------- |
| `rippled_storage_detail` | `nudb_bytes` | int64 | NuDB backend file size |
| `rippled_sync_info` | `initial_sync_duration_seconds` | double | Time from start to first FULL |
**Key modified files**: `src/xrpld/telemetry/MetricsRegistry.h/.cpp`
**Exit Criteria**:
- [ ] NuDB file size reported in bytes (0 if NuDB not configured)
- [ ] Sync duration captured once and remains stable after reaching FULL
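Capturing the sync duration exactly once might look like this sketch (the struct and member names are hypothetical; a compare-exchange latch keeps later FULL transitions from overwriting the first):

```cpp
#include <atomic>
#include <chrono>

// Record the time from construction (server start) to the first call of
// onFull(); subsequent calls are no-ops.
struct SyncInfo
{
    std::chrono::steady_clock::time_point start =
        std::chrono::steady_clock::now();
    std::atomic<double> syncSeconds{0.0};
    std::atomic<bool> captured{false};

    void
    onFull()
    {
        bool expected = false;
        // Only the first caller wins the latch and stores the duration.
        if (captured.compare_exchange_strong(expected, true))
            syncSeconds = std::chrono::duration<double>(
                              std::chrono::steady_clock::now() - start)
                              .count();
    }
};
```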
---
## Task 7.15: New Synchronous Counters
> **Source**: [External Dashboard Parity](../docs/superpowers/specs/2026-03-30-external-dashboard-parity-design.md)
**Objective**: Add 7 new event counters incremented at their respective instrumentation sites.
| Counter Name | Increment Site | Source File |
| ------------------------------------- | -------------------------------- | --------------------- |
| `rippled_ledgers_closed_total` | `onAccept()` in consensus | RCLConsensus.cpp |
| `rippled_validations_sent_total` | `validate()` in consensus | RCLConsensus.cpp |
| `rippled_validations_checked_total` | Network validation received | LedgerMaster.cpp |
| `rippled_validation_agreements_total` | ValidationTracker reconciliation | ValidationTracker.cpp |
| `rippled_validation_missed_total` | ValidationTracker reconciliation | ValidationTracker.cpp |
| `rippled_state_changes_total` | `setMode()` in NetworkOPs | NetworkOPs.cpp |
| `rippled_jq_trans_overflow_total` | Job queue overflow path | JobQueue.cpp |
**Key modified files**: `src/xrpld/telemetry/MetricsRegistry.h/.cpp` (declarations), plus recording sites in RCLConsensus.cpp, LedgerMaster.cpp, NetworkOPs.cpp, JobQueue.cpp
**Exit Criteria**:
- [ ] All 7 counters monotonically increase during normal operation
- [ ] Counter values match expected rates (e.g., ledgers_closed ≈ 1 per 3-5s)
---
## Task 7.16: Validation Agreement Observable Gauge
> **Source**: [External Dashboard Parity](../docs/superpowers/specs/2026-03-30-external-dashboard-parity-design.md)
**Objective**: Export rolling window agreement stats from `ValidationTracker` (Task 7.9).
**Gauge label values**:
| Gauge Name | Label `metric=` | Type | Source |
| ------------------------------ | ------------------- | ------ | --------------------------- |
| `rippled_validation_agreement` | `agreement_pct_1h` | double | `tracker.agreementPct1h()` |
| | `agreements_1h` | int64 | `tracker.agreements1h()` |
| | `missed_1h` | int64 | `tracker.missed1h()` |
| | `agreement_pct_24h` | double | `tracker.agreementPct24h()` |
| | `agreements_24h` | int64 | `tracker.agreements24h()` |
| | `missed_24h` | int64 | `tracker.missed24h()` |
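The percentage getters reduce to a guarded ratio over the window counters. A sketch, assuming 0.0 as the empty-window placeholder (the real class may choose a different convention before any validations are checked):

```cpp
// agreement percentage in [0.0, 100.0] from window counters.
double
agreementPct(long agreements, long missed)
{
    long const total = agreements + missed;
    if (total == 0)
        return 0.0;  // assumed convention for an empty window
    return 100.0 * static_cast<double>(agreements) / total;
}
```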
**Key modified files**: `src/xrpld/telemetry/MetricsRegistry.cpp`
**Exit Criteria**:
- [ ] Agreement percentages in range [0.0, 100.0]
- [ ] Window stats stabilize after 1h/24h of operation
---
## Summary Table
| Task | Description | New Files | Modified Files | Depends On |
| ---- | -------------------------------------- | --------- | -------------- | ---------- |
| 7.1 | Add OTel Metrics SDK to build deps | 0 | 2 | — |
| 7.2 | Implement OTelCollector class | 2 | 0 | 7.1 |
| 7.3 | Update CollectorManager config routing | 0 | 2 | 7.2 |
| 7.4 | Update OTel Collector YAML and Docker | 0 | 2 | 7.3 |
| 7.5 | Preserve metric names in Prometheus | 0 | 1 | 7.2 |
| 7.6 | Update Grafana dashboards (if needed) | 0 | 3 | 7.5 |
| 7.7 | Update integration tests | 0 | 1 | 7.4 |
| 7.8 | Update documentation | 0 | 4 | 7.6 |
| 7.9 | ValidationTracker (agreement tracking) | 2 | 4 | 7.2, P4.8 |
| 7.10 | Validator health observable gauges | 0 | 2 | 7.2 |
| 7.11 | Peer quality observable gauges | 0 | 2 | 7.2 |
| 7.12 | Ledger economy observable gauges | 0 | 2 | 7.2 |
| 7.13 | State tracking observable gauges | 0 | 2 | 7.2 |
| 7.14 | Storage detail and sync info gauges | 0 | 2 | 7.2 |
| 7.15 | New synchronous counters | 0 | 6 | 7.2 |
| 7.16 | Validation agreement observable gauge | 0 | 1 | 7.9 |
**Parallel work**: Tasks 7.4 and 7.5 can run in parallel after 7.2/7.3 complete. Task 7.6 depends on 7.5's findings. Tasks 7.7 and 7.8 can run in parallel after 7.6. Tasks 7.10-7.14 can all run in parallel after 7.2. Task 7.15 depends on 7.2. Task 7.16 depends on 7.9. Task 7.9 depends on 7.2 and Phase 4 Task 4.8.
**Exit Criteria** (from [06-implementation-phases.md §6.8](./06-implementation-phases.md)):
- [ ] All 255+ metrics visible in Prometheus via OTLP pipeline (no StatsD receiver)
- [ ] `server=otel` is the default in development docker-compose
- [ ] `server=statsd` still works as a fallback
- [ ] Existing Grafana dashboards display data correctly
- [ ] Integration test passes with OTLP-only metrics pipeline
- [ ] No performance regression vs StatsD baseline (< 1% CPU overhead)
- [ ] Deferred Task 6.1 (`|m` wire format) no longer relevant — Meter mapped to OTel Counter
- [ ] ValidationTracker agreement % stabilizes after 1h under normal consensus
- [ ] All new gauges and counters visible in Prometheus with non-zero values

# Phase 8: Log-Trace Correlation and Centralized Log Ingestion — Task List
> **Goal**: Inject trace context (trace_id, span_id) into rippled's Journal log output for log-trace correlation, and add OTel Collector filelog receiver to ingest logs into Grafana Loki for unified observability.
>
> **Scope**: Two independent sub-phases — 8a (code change: trace_id in logs) and 8b (infra only: filelog receiver to Loki). No changes to the `beast::Journal` public API.
>
> **Branch**: `pratik/otel-phase8-log-correlation` (from `pratik/otel-phase7-native-metrics`)
### Related Plan Documents
| Document | Relevance |
| ---------------------------------------------------------------- | -------------------------------------------------------------- |
| [06-implementation-phases.md](./06-implementation-phases.md) | Phase 8 plan: motivation, architecture, exit criteria (§6.8.1) |
| [07-observability-backends.md](./07-observability-backends.md) | Loki backend recommendation, Grafana data source provisioning |
| [Phase7_taskList.md](./Phase7_taskList.md) | Prerequisite — native OTel metrics pipeline must be working |
| [05-configuration-reference.md](./05-configuration-reference.md) | `[telemetry]` config (trace_id injection toggle) |
---
## Task 8.1: Inject trace_id into Logs::format()
**Objective**: Add OTel trace context to every log line that is emitted within an active span.
**What to do**:
- Edit `src/libxrpl/basics/Log.cpp`:
- In `Logs::format()` (around line 346), after severity is appended, check for active OTel span:
```cpp
#ifdef XRPL_ENABLE_TELEMETRY
auto span = opentelemetry::trace::GetSpan(
opentelemetry::context::RuntimeContext::GetCurrent());
auto ctx = span->GetContext();
if (ctx.IsValid())
{
// Append trace context as structured fields
char traceId[32], spanId[16];
ctx.trace_id().ToLowerBase16(traceId);
ctx.span_id().ToLowerBase16(spanId);
output += "trace_id=";
output.append(traceId, 32);
output += " span_id=";
output.append(spanId, 16);
output += ' ';
}
#endif
```
- Add `#include` for OTel context headers, guarded by `#ifdef XRPL_ENABLE_TELEMETRY`
- Edit `include/xrpl/basics/Log.h`:
- No changes needed — format() signature unchanged
**Key modified files**:
- `src/libxrpl/basics/Log.cpp`
**Performance note**: `GetSpan()` and `GetContext()` are thread-local reads with no locking — measured at <10ns per call. With ~1000 JLOG calls/min, this adds <10us/min of overhead.
---
## Task 8.2: Add Loki to Docker Compose Stack
**Objective**: Add Grafana Loki as a log storage backend in the development observability stack.
**What to do**:
- Edit `docker/telemetry/docker-compose.yml`:
- Add Loki service:
```yaml
loki:
image: grafana/loki:2.9.0
ports:
- "3100:3100"
command: -config.file=/etc/loki/local-config.yaml
```
- Add Loki as a Grafana data source in provisioning
- Create `docker/telemetry/grafana/provisioning/datasources/loki.yaml`:
- Configure Loki data source with derived fields linking `trace_id` to Tempo
**Key new files**:
- `docker/telemetry/grafana/provisioning/datasources/loki.yaml`
**Key modified files**:
- `docker/telemetry/docker-compose.yml`
---
## Task 8.3: Add Filelog Receiver to OTel Collector
**Objective**: Configure the OTel Collector to tail rippled's log file and export to Loki.
**What to do**:
- Edit `docker/telemetry/otel-collector-config.yaml`:
- Add `filelog` receiver:
```yaml
receivers:
filelog:
include: [/var/log/rippled/debug.log]
operators:
- type: regex_parser
regex: '^(?P<timestamp>\S+)\s+(?P<partition>\S+):(?P<severity>\S+)\s+(?:trace_id=(?P<trace_id>[a-f0-9]+)\s+span_id=(?P<span_id>[a-f0-9]+)\s+)?(?P<message>.*)$'
timestamp:
parse_from: attributes.timestamp
layout: "%Y-%m-%dT%H:%M:%S.%fZ"
```
- Add logs pipeline:
```yaml
service:
  pipelines:
    logs:
      receivers: [filelog]
      processors: [batch]
      exporters: [loki]
```
- Add Loki exporter (the collector-contrib `loki` exporter; Loki 2.9 has no native OTLP ingest endpoint, so the collector pushes via Loki's HTTP API):
```yaml
exporters:
  loki:
    endpoint: http://loki:3100/loki/api/v1/push
```
- Mount rippled's log directory into the collector container via docker-compose volume
**Key modified files**:
- `docker/telemetry/otel-collector-config.yaml`
- `docker/telemetry/docker-compose.yml`
---
## Task 8.4: Configure Grafana Trace-to-Log Correlation
**Objective**: Enable one-click navigation from Tempo traces to Loki logs in Grafana.
**What to do**:
- Edit Grafana Tempo data source provisioning to add `tracesToLogs` configuration:
```yaml
tracesToLogs:
datasourceUid: loki
filterByTraceID: true
filterBySpanID: false
tags: ["partition", "severity"]
```
- Edit Grafana Loki data source provisioning to add `derivedFields` linking trace_id back to Tempo:
```yaml
derivedFields:
- datasourceUid: tempo
matcherRegex: "trace_id=(\\w+)"
name: TraceID
url: "$${__value.raw}"
```
**Key modified files**:
- `docker/telemetry/grafana/provisioning/datasources/loki.yaml`
- `docker/telemetry/grafana/provisioning/datasources/` (Tempo data source file)
---
## Task 8.5: Update Integration Tests
**Objective**: Verify trace_id appears in logs and Loki correlation works.
**What to do**:
- Edit `docker/telemetry/integration-test.sh`:
- After sending RPC requests (which create spans), grep rippled's log output for `trace_id=`
- Verify trace_id matches a trace visible in Jaeger
- Optionally: query Loki via API to confirm log ingestion
**Key modified files**:
- `docker/telemetry/integration-test.sh`
---
## Task 8.6: Update Documentation
**Objective**: Document the log correlation feature in runbook and reference docs.
**What to do**:
- Edit `docs/telemetry-runbook.md`:
- Add "Log-Trace Correlation" section explaining how to use Grafana Tempo -> Loki linking
- Add LogQL query examples for filtering by trace_id
- Edit `OpenTelemetryPlan/09-data-collection-reference.md`:
- Add new section "3. Log Correlation" between SpanMetrics and StatsD sections
- Document the log format with trace_id injection
- Document Loki as a new backend
- Edit `docker/telemetry/TESTING.md`:
- Add log correlation verification steps
**Key modified files**:
- `docs/telemetry-runbook.md`
- `OpenTelemetryPlan/09-data-collection-reference.md`
- `docker/telemetry/TESTING.md`
---
## Summary Table
| Task | Description | Sub-Phase | New Files | Modified Files | Depends On |
| ---- | ------------------------------------------ | --------- | --------- | -------------- | ---------- |
| 8.1 | Inject trace_id into Logs::format() | 8a | 0 | 1 | Phase 7 |
| 8.2 | Add Loki to Docker Compose stack | 8b | 1 | 1 | -- |
| 8.3 | Add filelog receiver to OTel Collector | 8b | 0 | 2 | 8.1, 8.2 |
| 8.4 | Configure Grafana trace-to-log correlation | 8b | 0 | 2 | 8.3 |
| 8.5 | Update integration tests | 8a + 8b | 0 | 1 | 8.4 |
| 8.6 | Update documentation | 8a + 8b | 0 | 3 | 8.5 |
**Parallel work**: Task 8.2 (Loki infra) can run in parallel with Task 8.1 (code change). Tasks 8.3-8.6 are sequential.
**Exit Criteria** (from [06-implementation-phases.md §6.8.1](./06-implementation-phases.md)):
- [ ] Log lines within active spans contain `trace_id=<hex> span_id=<hex>`
- [ ] Log lines outside spans have no trace context (no empty fields)
- [ ] Loki ingests rippled logs via OTel Collector filelog receiver
- [ ] Grafana Tempo -> Loki one-click correlation works
- [ ] Grafana Loki -> Tempo reverse lookup works via derived field
- [ ] Integration test verifies trace_id presence in logs
- [ ] No performance regression from trace_id injection (< 0.1% overhead)

# Phase 9: Internal Metric Instrumentation Gap Fill — Task List
> **Status**: Future Enhancement
>
> **Goal**: Instrument rippled to emit ~50+ metrics that exist in `get_counts`/`server_info`/TxQ/PerfLog but currently lack time-series export via the OTel or beast::insight pipelines.
>
> **Scope**: Hybrid approach — extend `beast::insight` for metrics near existing registrations, use OTel Metrics SDK `ObservableGauge` callbacks for new categories (TxQ, PerfLog, CountedObjects).
>
> **Branch**: `pratik/otel-phase9-metric-gap-fill` (from `pratik/otel-phase8-log-correlation`)
>
> **Depends on**: Phase 7 (native OTel metrics pipeline) and Phase 8 (log-trace correlation)
### Related Plan Documents
| Document | Relevance |
| -------------------------------------------------------------------- | -------------------------------------------------------------- |
| [06-implementation-phases.md](./06-implementation-phases.md) | Phase 9 plan: motivation, architecture, exit criteria (§6.8.2) |
| [09-data-collection-reference.md](./09-data-collection-reference.md) | Current metric inventory + future metrics section |
| [Phase7_taskList.md](./Phase7_taskList.md) | Prerequisite — OTel Metrics SDK and `OTelCollector` class |
| [Phase8_taskList.md](./Phase8_taskList.md) | Prerequisite — log-trace correlation |
### Third-Party Consumer Context
These metrics serve multiple external consumer categories identified during research:
| Consumer Category | Key Metrics They Need |
| ------------------------- | --------------------------------------------------------------- |
| **Exchanges** | Fee escalation levels, TxQ depth, settlement latency |
| **Payment Processors** | Load factors, io_latency, transaction throughput |
| **Analytics Providers** | NodeStore I/O, cache hit rates, counted objects |
| **Validators/Operators** | Per-job execution times, PerfLog RPC counters, consensus timing |
| **Academic Researchers** | Consensus performance time-series, fee market dynamics |
| **Institutional Custody** | Server health scores, reserve calculations, node availability |
---
## Task 9.1: NodeStore I/O Metrics
**Objective**: Export node store read/write performance as time-series metrics.
**What to do**:
- In `src/libxrpl/nodestore/Database.cpp`, extend existing `beast::insight` registrations to add:
- Gauge: `node_reads_total` (cumulative read operations)
- Gauge: `node_reads_hit` (cache-served reads)
- Gauge: `node_writes` (cumulative write operations)
- Gauge: `node_written_bytes` (cumulative bytes written)
- Gauge: `node_read_bytes` (cumulative bytes read)
- Gauge: `node_reads_duration_us` (cumulative read time in microseconds)
- Gauge: `write_load` (current write load score)
- Gauge: `read_queue` (items in read queue)
- These values are already computed in `Database::getCountsJson()` (line ~236). Wire the same counters to `beast::insight` hooks.
**Key modified files**:
- `src/libxrpl/nodestore/Database.cpp`
- `src/libxrpl/nodestore/Database.h` (add insight members)
**Derived Prometheus metrics**: `rippled_nodestore_reads_total`, `rippled_nodestore_reads_hit`, `rippled_nodestore_write_load`, etc.
**Grafana dashboard**: Add "NodeStore I/O" panel group to _Node Health_ dashboard.
---
## Task 9.2: Cache Hit Rate Metrics
**Objective**: Export SHAMap and ledger cache performance as time-series gauges.
**What to do**:
- Register OTel `ObservableGauge` callbacks (via Phase 7's `OTelCollector`) for:
- `SLE_hit_rate` — SLE cache hit rate (0.0-1.0)
- `ledger_hit_rate` — Ledger object cache hit rate
- `AL_hit_rate` — AcceptedLedger cache hit rate
- `treenode_cache_size` — SHAMap TreeNode cache size (entries)
- `treenode_track_size` — Tracked tree nodes
- `fullbelow_size` — FullBelow cache size
- The callback should read from the same sources as `GetCounts.cpp` handler (line ~43).
- Create a centralized `MetricsRegistry` class that holds all OTel async gauge registrations, polled at 10-second intervals by the SDK's `PeriodicExportingMetricReader`.
**Key modified files**:
- New: `src/xrpld/telemetry/MetricsRegistry.h` / `.cpp`
- `src/xrpld/rpc/handlers/GetCounts.cpp` (extract shared access methods)
- `src/xrpld/app/main/Application.cpp` (register MetricsRegistry at startup)
**Derived Prometheus metrics**: `rippled_cache_SLE_hit_rate`, `rippled_cache_ledger_hit_rate`, `rippled_cache_treenode_size`, etc.
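Each `*_hit_rate` gauge boils down to a guarded division over the counters the caches already track; a minimal sketch (function name is illustrative):

```cpp
#include <cstdint>

// Hit rate in [0.0, 1.0]; 0.0 when no lookups have happened yet.
double
hitRate(std::uint64_t hits, std::uint64_t total)
{
    if (total == 0)
        return 0.0;
    return static_cast<double>(hits) / static_cast<double>(total);
}
```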
---
## Task 9.3: Transaction Queue (TxQ) Metrics
**Objective**: Export TxQ depth, capacity, and fee escalation levels as time-series.
**What to do**:
- Register OTel `ObservableGauge` callbacks for TxQ state (from `TxQ.h` line ~143):
- `txq_count` — Current transactions in queue
- `txq_max_size` — Maximum queue capacity
- `txq_in_ledger` — Transactions in current open ledger
- `txq_per_ledger` — Expected transactions per ledger
- `txq_reference_fee_level` — Reference fee level
- `txq_min_processing_fee_level` — Minimum fee to get processed
- `txq_med_fee_level` — Median fee level in queue
- `txq_open_ledger_fee_level` — Open ledger fee escalation level
- Add to the `MetricsRegistry` (Task 9.2).
**Key modified files**:
- `src/xrpld/telemetry/MetricsRegistry.cpp` (add TxQ callbacks)
- `src/xrpld/app/tx/detail/TxQ.h` (expose metrics accessor if needed)
**Derived Prometheus metrics**: `rippled_txq_count`, `rippled_txq_max_size`, `rippled_txq_open_ledger_fee_level`, etc.
**Grafana dashboard**: New _Fee Market & TxQ_ dashboard (`rippled-fee-market`).
---
## Task 9.4: PerfLog Per-RPC Method Metrics
**Objective**: Export per-RPC-method call counts and latency as OTel metrics.
**What to do**:
- Register OTel instruments for PerfLog RPC counters (from `PerfLogImp.cpp` line ~63):
- Counter: `rippled_rpc_method_started_total{method="<name>"}` — calls started
- Counter: `rippled_rpc_method_finished_total{method="<name>"}` — calls completed
- Counter: `rippled_rpc_method_errored_total{method="<name>"}` — calls errored
- Histogram: `rippled_rpc_method_duration_us{method="<name>"}` — execution time distribution
- Use OTel `Counter<int64_t>` and `Histogram<double>` instruments with `method` attribute label.
- Hook into the existing PerfLog callback mechanism rather than adding new instrumentation points.
**Key modified files**:
- `src/xrpld/perflog/detail/PerfLogImp.cpp` (add OTel instrument updates alongside existing JSON counters)
- `src/xrpld/telemetry/MetricsRegistry.cpp` (register instruments)
**Derived Prometheus metrics**: `rippled_rpc_method_started_total{method="server_info"}`, `rippled_rpc_method_duration_us_bucket{method="ledger"}`, etc.
**Grafana dashboard**: Add "Per-Method RPC Breakdown" panel group to _RPC Performance_ dashboard.
---
## Task 9.5: PerfLog Per-Job-Type Metrics
**Objective**: Export per-job-type queue and execution metrics.
**What to do**:
- Register OTel instruments for PerfLog job counters:
- Counter: `rippled_job_queued_total{job_type="<name>"}` — jobs queued
- Counter: `rippled_job_started_total{job_type="<name>"}` — jobs started
- Counter: `rippled_job_finished_total{job_type="<name>"}` — jobs completed
- Histogram: `rippled_job_queued_duration_us{job_type="<name>"}` — time spent waiting in queue
- Histogram: `rippled_job_running_duration_us{job_type="<name>"}` — execution time distribution
- Hook into PerfLog's existing job tracking alongside Task 9.4.
**Key modified files**:
- `src/xrpld/perflog/detail/PerfLogImp.cpp`
- `src/xrpld/telemetry/MetricsRegistry.cpp`
**Derived Prometheus metrics**: `rippled_job_queued_total{job_type="ledgerData"}`, `rippled_job_running_duration_us_bucket{job_type="transaction"}`, etc.
**Grafana dashboard**: New _Job Queue Analysis_ dashboard (`rippled-job-queue`).
---
## Task 9.6: Counted Object Instance Metrics
**Objective**: Export live instance counts for key internal object types.
**What to do**:
- Register OTel `ObservableGauge` callbacks for `CountedObject<T>` instance counts:
- `rippled_object_count{type="Transaction"}` — live Transaction objects
- `rippled_object_count{type="Ledger"}` — live Ledger objects
- `rippled_object_count{type="NodeObject"}` — live NodeObject instances
- `rippled_object_count{type="STTx"}` — serialized transaction objects
- `rippled_object_count{type="STLedgerEntry"}` — serialized ledger entries
- `rippled_object_count{type="InboundLedger"}` — ledgers being fetched
- `rippled_object_count{type="Pathfinder"}` — active pathfinding computations
- `rippled_object_count{type="PathRequest"}` — active path requests
- `rippled_object_count{type="HashRouterEntry"}` — hash router entries
- The `CountedObject` template already tracks these via atomic counters. The callback just reads the current counts.
**Key modified files**:
- `src/xrpld/telemetry/MetricsRegistry.cpp` (add counted object callbacks)
- `include/xrpl/basics/CountedObject.h` (may need static accessor for iteration)
**Derived Prometheus metrics**: `rippled_object_count{type="Transaction"}`, `rippled_object_count{type="NodeObject"}`, etc.
**Grafana dashboard**: Add "Object Instance Counts" panel to _Node Health_ dashboard.
---
## Task 9.7: Fee Escalation & Load Factor Metrics
**Objective**: Export the full load factor breakdown as time-series.
**What to do**:
- Register OTel `ObservableGauge` callbacks for load factors (from `NetworkOPs.cpp` line ~2694):
- `load_factor` — combined transaction cost multiplier
- `load_factor_server` — server + cluster + network contribution
- `load_factor_local` — local server load only
- `load_factor_net` — network-wide load estimate
- `load_factor_cluster` — cluster peer load
- `load_factor_fee_escalation` — open ledger fee escalation
- `load_factor_fee_queue` — queue entry fee level
- These overlap with some existing StatsD metrics but provide finer granularity (individual factor breakdown vs. combined value).
**Key modified files**:
- `src/xrpld/telemetry/MetricsRegistry.cpp`
- `src/xrpld/app/misc/NetworkOPs.cpp` (expose load factor accessors if needed)
**Derived Prometheus metrics**: `rippled_load_factor`, `rippled_load_factor_fee_escalation`, etc.
**Grafana dashboard**: Add "Load Factor Breakdown" panel to _Fee Market & TxQ_ dashboard.
---
## Task 9.7a: push_metrics.py Parity — Missing Observable Gauges
**Objective**: Fill the remaining metric gaps between the external `push_metrics.py` script (in `ripplex-ansible`) and the internal OTel `MetricsRegistry` observable gauges. After this task, all metrics collected by `push_metrics.py` that CAN be collected internally are covered.
**What was done**:
- Extended existing `cacheHitRateGauge_` callback with `AL_size` (AcceptedLedger cache size)
- Extended existing `nodeStoreGauge_` callback with 4 new metrics from `getCountsJson()`:
- `node_reads_duration_us` (JSON string — uses `std::stoll(asString())`)
- `read_request_bundle` (native JSON int)
- `read_threads_running` (native JSON int)
- `read_threads_total` (native JSON int)
- Added new `rippled_server_info` Int64ObservableGauge with 8 metrics:
- `server_state` — operating mode as int (0=DISCONNECTED .. 4=FULL)
- `uptime` — seconds since server start
- `peers` — total peer count
- `validated_ledger_seq` — validated ledger sequence (atomic read)
- `ledger_current_index` — current open ledger sequence
- `peer_disconnects_resources` — cumulative resource-related disconnects
- `last_close_proposers` — from `getConsensusInfo()["previous_proposers"]`
- `last_close_converge_time_ms` — from `getConsensusInfo()["previous_mseconds"]`
- Added new `rippled_build_info` Int64ObservableGauge (info-style, value=1 with `version` label)
- Added new `rippled_complete_ledgers` Int64ObservableGauge parsing comma-separated ranges into `{bound, index}` pairs
- Added new `rippled_db_metrics` Int64ObservableGauge with 4 metrics:
- `db_kb_total`, `db_kb_ledger`, `db_kb_transaction` (SQLite stat queries)
- `historical_perminute` (historical ledger fetch rate)
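The comma-separated range parsing can be sketched as a standalone helper; `LedgerRange` and `parseCompleteLedgers` are illustrative names, not the actual MetricsRegistry code, and the real callback emits one gauge point per `{bound, index}` label pair rather than returning a vector:
```cpp
#include <sstream>
#include <string>
#include <vector>

// Illustrative sketch: split a complete_ledgers string such as
// "32570-61000,61010-70000" into {start, end} pairs, one per range.
struct LedgerRange
{
    long long start;
    long long end;
};

inline std::vector<LedgerRange> parseCompleteLedgers(std::string const& s)
{
    std::vector<LedgerRange> out;
    std::istringstream ranges(s);
    std::string range;
    while (std::getline(ranges, range, ','))
    {
        // rippled reports "empty" when no complete ledgers exist.
        if (range.empty() || range == "empty")
            continue;
        auto const dash = range.find('-');
        if (dash == std::string::npos)
        {
            // A single ledger, e.g. "12345": start == end.
            auto const v = std::stoll(range);
            out.push_back({v, v});
        }
        else
        {
            out.push_back({std::stoll(range.substr(0, dash)),
                           std::stoll(range.substr(dash + 1))});
        }
    }
    return out;
}
```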
**Key modified files**:
- `src/xrpld/telemetry/MetricsRegistry.h` (4 new gauge members, updated ASCII diagram)
- `src/xrpld/telemetry/MetricsRegistry.cpp` (4 new callback registrations, 2 callback extensions)
**Not implementable inside rippled**:
- `connection_count_51233/51234` — OS-level port connection counts from external shell script (`get_connection.sh`)
**Derived Prometheus metrics**: `rippled_server_info{metric="server_state"}`, `rippled_build_info{version="2.4.0"}`, `rippled_complete_ledgers{bound="start",index="0"}`, `rippled_db_metrics{metric="db_kb_total"}`, etc.
**Grafana dashboard**: New panels added to _Node Health_ dashboard (`system-node-health.json`).
---
## Task 9.8: New Grafana Dashboards
**Objective**: Create Grafana dashboards for the new metric categories.
**What to do**:
- Create 2 new dashboards:
1. **Fee Market & TxQ** (`rippled-fee-market`) — TxQ depth/capacity, fee levels, load factor breakdown, fee escalation timeline
2. **Job Queue Analysis** (`rippled-job-queue`) — Per-job-type rates, queue wait times, execution times, job queue depth
- Update 2 existing dashboards:
1. **Node Health** (`rippled-statsd-node-health`) — Add NodeStore I/O panels, cache hit rate panels, object instance counts
2. **RPC Performance** (`rippled-rpc-perf`) — Add per-method RPC breakdown panels
**Key modified files**:
- New: `docker/telemetry/grafana/dashboards/rippled-fee-market.json`
- New: `docker/telemetry/grafana/dashboards/rippled-job-queue.json`
- `docker/telemetry/grafana/dashboards/rippled-statsd-node-health.json`
- `docker/telemetry/grafana/dashboards/rippled-rpc-perf.json`
---
## Task 9.9: Update Documentation
**Objective**: Update telemetry reference docs with all new metrics.
**What to do**:
- Update `OpenTelemetryPlan/09-data-collection-reference.md`:
- Add new section for OTel SDK-exported metrics (NodeStore, cache, TxQ, PerfLog, CountedObjects, load factors)
- Update Grafana dashboard reference table (add 2 new dashboards)
- Add Prometheus query examples for new metrics
- Update `docs/telemetry-runbook.md`:
- Add alerting rules for new metrics (NodeStore write_load, TxQ capacity, cache hit rate degradation)
- Add troubleshooting entries for new metric categories
**Key modified files**:
- `OpenTelemetryPlan/09-data-collection-reference.md`
- `docs/telemetry-runbook.md`
---
## Task 9.10: Integration Tests
**Objective**: Verify all new metrics appear in Prometheus after a test workload.
**What to do**:
- Extend the existing telemetry integration test:
- Start rippled with `[telemetry] enabled=1` and `[insight] server=otel`
- Submit a batch of RPC calls and transactions
- Query Prometheus for each new metric family
- Assert non-zero values for: NodeStore reads, cache hit rates, TxQ count, PerfLog RPC counters, object counts, load factors
- Add unit tests for the `MetricsRegistry` class:
- Verify callback registration and deregistration
- Verify metric values match `get_counts` JSON output
- Verify graceful behavior when telemetry is disabled
**Key modified files**:
- `src/test/telemetry/MetricsRegistry_test.cpp` (new)
- Existing integration test script (extend assertions)
---
## Task 9.11: Validator Health Dashboard (External Dashboard Parity)
> **Source**: [External Dashboard Parity](../docs/superpowers/specs/2026-03-30-external-dashboard-parity-design.md) — dashboards for Phase 7 metrics inspired by the community [xrpl-validator-dashboard](https://github.com/realgrapedrop/xrpl-validator-dashboard).
>
> **Upstream**: Phase 7 Tasks 7.9-7.16 (metrics must be emitting).
> **Downstream**: Phase 10 (dashboard load checks), Phase 11 (alert rules reference these panels).
**Objective**: Create a Grafana dashboard for validation agreement, amendment/UNL health, and state tracking.
**Dashboard**: `rippled-validator-health.json`
| Panel | Type | PromQL |
| -------------------------- | ---------- | ---------------------------------------------------------------- |
| Agreement % (1h) | stat | `rippled_validation_agreement{metric="agreement_pct_1h"}` |
| Agreement % (24h) | stat | `rippled_validation_agreement{metric="agreement_pct_24h"}` |
| Agreements vs Missed (1h) | bargauge | `agreements_1h` and `missed_1h` side by side |
| Agreements vs Missed (24h) | bargauge | `agreements_24h` and `missed_24h` side by side |
| Validation Rate | stat | `rate(rippled_validations_sent_total[5m]) * 60` |
| Validations Checked Rate | stat | `rate(rippled_validations_checked_total[5m]) * 60` |
| Amendment Blocked | stat | `rippled_validator_health{metric="amendment_blocked"}` |
| UNL Expiry (days) | stat | `rippled_validator_health{metric="unl_expiry_days"}` |
| Validation Quorum | stat | `rippled_validator_health{metric="validation_quorum"}` |
| State Value Timeline | timeseries | `rippled_state_tracking{metric="state_value"}` |
| Time in Current State | stat | `rippled_state_tracking{metric="time_in_current_state_seconds"}` |
| State Changes Rate | stat | `rate(rippled_state_changes_total[1h])` |
| Ledgers Closed Rate | stat | `rate(rippled_ledgers_closed_total[5m]) * 60` |
**Dashboard conventions**: `$node` template variable for `exported_instance` filtering, dark theme, matching existing panel sizes and color schemes.
**Key new files**: `docker/telemetry/grafana/dashboards/rippled-validator-health.json`
**Exit Criteria**:
- [ ] All 13 panels render with non-zero data during normal operation
- [ ] `$node` filter works correctly for multi-node deployments
- [ ] Amendment blocked and UNL expiry panels use color thresholds (red=blocked/expiring)
---
## Task 9.12: Peer Quality Dashboard (External Dashboard Parity)
> **Source**: [External Dashboard Parity](../docs/superpowers/specs/2026-03-30-external-dashboard-parity-design.md)
**Objective**: Create a Grafana dashboard for peer health aggregates.
**Dashboard**: `rippled-peer-quality.json`
| Panel | Type | PromQL |
| ---------------------- | ---------- | ---------------------------------------------------------------- |
| P90 Peer Latency | timeseries | `rippled_peer_quality{metric="peer_latency_p90_ms"}` |
| Insane/Diverged Peers | stat | `rippled_peer_quality{metric="peers_insane_count"}` |
| Higher Version Peers % | stat | `rippled_peer_quality{metric="peers_higher_version_pct"}` |
| Upgrade Recommended | stat | `rippled_peer_quality{metric="upgrade_recommended"}` |
| Resource Disconnects | timeseries | `rippled_Overlay_Peer_Disconnects_Charges` |
| Inbound vs Outbound | bargauge | `rippled_Peer_Finder_Active_Inbound_Peers`, `..._Outbound_Peers` |
**Key new files**: `docker/telemetry/grafana/dashboards/rippled-peer-quality.json`
**Exit Criteria**:
- [ ] All 6 panels render correctly
- [ ] P90 latency panel shows trend over time
- [ ] Upgrade recommended panel uses color threshold (red=1, green=0)
---
## Task 9.13: Ledger Economy Dashboard Panels (External Dashboard Parity)
> **Source**: [External Dashboard Parity](../docs/superpowers/specs/2026-03-30-external-dashboard-parity-design.md)
**Objective**: Add "Ledger Economy" row to the existing `system-node-health.json` dashboard.
| Panel | Type | PromQL |
| -------------------- | ---------- | ----------------------------------------------------- |
| Base Fee (drops) | stat | `rippled_ledger_economy{metric="base_fee_xrp"}` |
| Reserve Base (drops) | stat | `rippled_ledger_economy{metric="reserve_base_xrp"}` |
| Reserve Inc (drops) | stat | `rippled_ledger_economy{metric="reserve_inc_xrp"}` |
| Ledger Age | stat | `rippled_ledger_economy{metric="ledger_age_seconds"}` |
| Transaction Rate | timeseries | `rippled_ledger_economy{metric="transaction_rate"}` |
**Key modified files**: `docker/telemetry/grafana/dashboards/system-node-health.json`
**Exit Criteria**:
- [ ] 5 new panels render correctly in existing dashboard
- [ ] Fee values match `server_info` RPC output
- [ ] Transaction rate shows smooth trend (not spiky)
---
## Exit Criteria
- [ ] All ~50 new metrics visible in Prometheus via OTLP pipeline
- [ ] `MetricsRegistry` class registers/deregisters cleanly with OTel SDK
- [ ] Async gauge callbacks execute at 10s intervals without performance impact
- [ ] 2 new Grafana dashboards operational (Fee Market, Job Queue)
- [ ] 2 existing dashboards updated with new panel groups
- [ ] Integration test validates all new metric families are non-zero
- [ ] No performance regression (< 0.5% CPU overhead from new callbacks)
- [ ] Documentation updated with full new metric inventory
- [ ] Validator Health dashboard renders all 13 panels
- [ ] Peer Quality dashboard renders all 6 panels
- [ ] Ledger Economy panels added to system-node-health dashboard

# OpenTelemetry Distributed Tracing for rippled
---
## Slide 1: Introduction
> **CNCF** = Cloud Native Computing Foundation
### What is OpenTelemetry?
OpenTelemetry is an open-source, CNCF-backed observability framework for distributed tracing, metrics, and logs.
### Why OpenTelemetry for rippled?
- **End-to-End Transaction Visibility**: Track transactions from submission → consensus → ledger inclusion
- **Cross-Node Correlation**: Follow requests across multiple independent nodes using a unique `trace_id`
- **Consensus Round Analysis**: Understand timing and behavior across validators
- **Incident Debugging**: Correlate events across distributed nodes during issues
```mermaid
flowchart LR
A["Node A<br/>tx.receive<br/>trace_id: abc123"] --> B["Node B<br/>tx.relay<br/>trace_id: abc123"] --> C["Node C<br/>tx.validate<br/>trace_id: abc123"] --> D["Node D<br/>ledger.apply<br/>trace_id: abc123"]
style A fill:#1565c0,stroke:#0d47a1,color:#fff
style B fill:#2e7d32,stroke:#1b5e20,color:#fff
style C fill:#2e7d32,stroke:#1b5e20,color:#fff
style D fill:#e65100,stroke:#bf360c,color:#fff
```
**Reading the diagram:**
- **Node A (blue, leftmost)**: The originating node that first receives the transaction and assigns a new `trace_id: abc123`; this ID becomes the correlation key for the entire distributed trace.
- **Node B and Node C (green, middle)**: Relay and validation nodes — each creates its own span but carries the same `trace_id`, so their work is linked to the original submission without any central coordinator.
- **Node D (orange, rightmost)**: The final node that applies the transaction to the ledger; the trace now spans the full lifecycle from submission to ledger inclusion.
- **Left-to-right flow**: The horizontal progression shows the real-world message path — a transaction hops from node to node, and the shared `trace_id` stitches all hops into a single queryable trace.
> **Trace ID: abc123** — All nodes share the same trace, enabling cross-node correlation.
---
## Slide 2: OpenTelemetry vs Open Source Alternatives
> **CNCF** = Cloud Native Computing Foundation
| Feature | OpenTelemetry | Jaeger | Zipkin | SkyWalking | Pinpoint | Prometheus |
| ------------------- | ---------------- | ---------------- | ------------------ | ---------- | ---------- | ---------- |
| **Tracing** | YES | YES | YES | YES | YES | NO |
| **Metrics** | YES | NO | NO | YES | YES | YES |
| **Logs** | YES | NO | NO | YES | NO | NO |
| **C++ SDK** | YES Official | YES (Deprecated) | YES (Unmaintained) | NO | NO | YES |
| **Vendor Neutral** | YES Primary goal | NO | NO | NO | NO | NO |
| **Instrumentation** | Manual + Auto | Manual | Manual | Auto-first | Auto-first | Manual |
| **Backend** | Any (exporters) | Self | Self | Self | Self | Self |
| **CNCF Status** | Incubating | Graduated | NO | Incubating | NO | Graduated |
> **Why OpenTelemetry?** It's the only actively maintained, full-featured C++ option with vendor neutrality — allowing export to Tempo, Prometheus, Grafana, or any commercial backend without changing instrumentation.
---
## Slide 3: Adoption Scope — Traces Only (Current Plan)
OpenTelemetry supports three signal types: **Traces**, **Metrics**, and **Logs**. rippled already captures metrics (StatsD via Beast Insight) and logs (Journal/PerfLog). The question is: how much of OTel do we adopt?
> **Scenario A**: Add distributed tracing. Keep StatsD for metrics and Journal for logs.
```mermaid
flowchart LR
subgraph rippled["rippled Process"]
direction TB
OTel["OTel SDK<br/>(Traces)"]
Insight["Beast Insight<br/>(StatsD Metrics)"]
Journal["Journal + PerfLog<br/>(Logging)"]
end
OTel -->|"OTLP"| Collector["OTel Collector"]
Insight -->|"UDP"| StatsD["StatsD Server"]
Journal -->|"File I/O"| LogFile["perf.log / debug.log"]
Collector --> Tempo["Tempo / Jaeger"]
StatsD --> Graphite["Graphite / Grafana"]
LogFile --> Loki["Loki (optional)"]
style rippled fill:#424242,stroke:#212121,color:#fff
style OTel fill:#2e7d32,stroke:#1b5e20,color:#fff
style Insight fill:#1565c0,stroke:#0d47a1,color:#fff
style Journal fill:#e65100,stroke:#bf360c,color:#fff
style Collector fill:#2e7d32,stroke:#1b5e20,color:#fff
```
| Aspect | Details |
| ------------------------------ | --------------------------------------------------------------------------------------------------------------- |
| **What changes for operators** | Deploy OTel Collector + trace backend. Existing StatsD and log pipelines stay as-is. |
| **Codebase impact** | New `Telemetry` module (~1500 LOC). Beast Insight and Journal untouched. |
| **New capabilities** | Cross-node trace correlation, span-based debugging, request lifecycle visibility. |
| **What we still can't do** | Correlate metrics with specific traces natively. StatsD metrics remain fire-and-forget with no trace exemplars. |
| **Maintenance burden** | Three separate observability systems to maintain (OTel + StatsD + Journal). |
| **Risk** | Lowest — additive change, no existing systems disturbed. |
---
## Slide 4: Future Adoption — Metrics & Logs via OTel
### Scenario B: + OTel Metrics (Replace StatsD)
> Migrate StatsD to OTel Metrics API, exposing Prometheus-compatible metrics. Remove Beast Insight.
```mermaid
flowchart LR
subgraph rippled["rippled Process"]
direction TB
OTel["OTel SDK<br/>(Traces + Metrics)"]
Journal["Journal + PerfLog<br/>(Logging)"]
end
OTel -->|"OTLP"| Collector["OTel Collector"]
Journal -->|"File I/O"| LogFile["perf.log / debug.log"]
Collector --> Tempo["Tempo<br/>(Traces)"]
Collector --> Prom["Prometheus<br/>(Metrics)"]
LogFile --> Loki["Loki (optional)"]
style rippled fill:#424242,stroke:#212121,color:#fff
style OTel fill:#2e7d32,stroke:#1b5e20,color:#fff
style Journal fill:#e65100,stroke:#bf360c,color:#fff
style Collector fill:#2e7d32,stroke:#1b5e20,color:#fff
```
- **Better metrics?** Yes — Prometheus gives native histograms (p50/p95/p99), multi-dimensional labels, and exemplars linking metric spikes to traces.
- **Codebase**: Remove `Beast::Insight` + `StatsDCollector` (~2000 LOC). Single SDK for traces and metrics.
- **Operator effort**: Rewrite dashboards from StatsD/Graphite queries to PromQL. Run both in parallel during transition.
- **Risk**: Medium — operators must migrate monitoring infrastructure.
### Scenario C: + OTel Logs (Full Stack)
> Also replace Journal logging with OTel Logs API. Single SDK for everything.
```mermaid
flowchart LR
subgraph rippled["rippled Process"]
OTel["OTel SDK<br/>(Traces + Metrics + Logs)"]
end
OTel -->|"OTLP"| Collector["OTel Collector"]
Collector --> Tempo["Tempo<br/>(Traces)"]
Collector --> Prom["Prometheus<br/>(Metrics)"]
Collector --> Loki["Loki / Elastic<br/>(Logs)"]
style rippled fill:#424242,stroke:#212121,color:#fff
style OTel fill:#2e7d32,stroke:#1b5e20,color:#fff
style Collector fill:#2e7d32,stroke:#1b5e20,color:#fff
```
- **Structured logging**: OTel Logs API outputs structured records with `trace_id`, `span_id`, severity, and attributes by design.
- **Full correlation**: Every log line carries `trace_id`. Click trace → see logs. Click metric spike → see trace → see logs.
- **Codebase**: Remove Beast Insight (~2000 LOC) + simplify Journal/PerfLog (~3000 LOC). One dependency instead of three.
- **Risk**: Highest — `beast::Journal` is deeply embedded in every component. Large refactor. OTel C++ Logs API is newer (stable since v1.11, less battle-tested).
### Recommendation
```mermaid
flowchart LR
A["Phase 1<br/><b>Traces Only</b><br/>(Current Plan)"] --> B["Phase 2<br/><b>+ Metrics</b><br/>(Replace StatsD)"] --> C["Phase 3<br/><b>+ Logs</b><br/>(Full OTel)"]
style A fill:#2e7d32,stroke:#1b5e20,color:#fff
style B fill:#1565c0,stroke:#0d47a1,color:#fff
style C fill:#e65100,stroke:#bf360c,color:#fff
```
| Phase | Signal | Strategy | Risk |
| -------------------- | --------- | -------------------------------------------------------------- | ------ |
| **Phase 1** (now) | Traces | Add OTel traces. Keep StatsD and Journal. Prove value. | Low |
| **Phase 2** (future) | + Metrics | Migrate StatsD → Prometheus via OTel. Remove Beast Insight. | Medium |
| **Phase 3** (future) | + Logs | Adopt OTel Logs API. Align with structured logging initiative. | High |
> **Key Takeaway**: Start with traces (unique value, lowest risk), then incrementally adopt metrics and logs as the OTel infrastructure proves itself.
---
## Slide 5: Comparison with rippled's Existing Solutions
### Current Observability Stack
| Aspect | PerfLog (JSON) | StatsD (Metrics) | OpenTelemetry (NEW) |
| --------------------- | --------------------- | --------------------- | --------------------------- |
| **Type** | Logging | Metrics | Distributed Tracing |
| **Scope** | Single node | Single node | **Cross-node** |
| **Data** | JSON log entries | Counters, gauges | Spans with context |
| **Correlation** | By timestamp | By metric name | By `trace_id` |
| **Overhead** | Low (file I/O) | Low (UDP) | Low-Medium (configurable) |
| **Question Answered** | "What happened here?" | "How many? How fast?" | **"What was the journey?"** |
### Use Case Matrix
| Scenario | PerfLog | StatsD | OpenTelemetry |
| -------------------------------- | ------- | ------ | ------------- |
| "How many TXs per second?" | ❌ | ✅ | ❌ |
| "Why was this specific TX slow?" | ⚠️ | ❌ | ✅ |
| "Which node delayed consensus?" | ❌ | ❌ | ✅ |
| "Show TX journey across 5 nodes" | ❌ | ❌ | ✅ |
> **Key Insight**: In the **traces-only** approach (Phase 1), OpenTelemetry **complements** existing systems. In future phases, OTel metrics and logs could **replace** StatsD and Journal respectively — see Slides 3-4 for the full adoption roadmap.
---
## Slide 6: Architecture
> **OTLP** = OpenTelemetry Protocol | **WS** = WebSocket
### High-Level Integration Architecture
```mermaid
flowchart TB
subgraph rippled["rippled Node"]
subgraph services["Core Services"]
direction LR
RPC["RPC Server<br/>(HTTP/WS)"] ~~~ Overlay["Overlay<br/>(P2P Network)"] ~~~ Consensus["Consensus<br/>(RCLConsensus)"]
end
Telemetry["Telemetry Module<br/>(OpenTelemetry SDK)"]
services --> Telemetry
end
Telemetry -->|OTLP/gRPC| Collector["OTel Collector"]
Collector --> Tempo["Grafana Tempo"]
Collector --> Elastic["Elastic APM"]
style rippled fill:#424242,stroke:#212121,color:#fff
style services fill:#1565c0,stroke:#0d47a1,color:#fff
style Telemetry fill:#2e7d32,stroke:#1b5e20,color:#fff
style Collector fill:#e65100,stroke:#bf360c,color:#fff
```
**Reading the diagram:**
- **Core Services (blue, top)**: RPC Server, Overlay, and Consensus are the three primary components that generate trace data — they represent the entry points for client requests, peer messages, and consensus rounds respectively.
- **Telemetry Module (green, middle)**: The OpenTelemetry SDK sits below the core services and receives span data from all three; it acts as a single collection point within the rippled process.
- **OTel Collector (orange, center)**: An external process that receives spans over OTLP/gRPC from the Telemetry Module; it decouples rippled from backend choices and handles batching, sampling, and routing.
- **Backends (bottom row)**: Tempo and Elastic APM are interchangeable — the Collector fans out to any combination, so operators can switch backends without modifying rippled code.
- **Top-to-bottom flow**: Data flows from instrumented code down through the SDK, out over the network to the Collector, and finally into storage/visualization backends.
### Context Propagation
```mermaid
sequenceDiagram
participant Client
participant NodeA as Node A
participant NodeB as Node B
Client->>NodeA: Submit TX (no context)
Note over NodeA: Creates trace_id: abc123<br/>span: tx.receive
NodeA->>NodeB: Relay TX<br/>(traceparent: abc123)
Note over NodeB: Links to trace_id: abc123<br/>span: tx.relay
```
- **HTTP/RPC**: W3C Trace Context headers (`traceparent`)
- **P2P Messages**: Protocol Buffer extension fields
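For the HTTP/RPC path, the `traceparent` header has a fixed textual layout defined by W3C Trace Context: `version "-" trace-id "-" parent-id "-" trace-flags` (2, 32, 16, and 2 hex characters respectively). A minimal standalone parser sketch; this is illustrative only, as the real implementation would use the OTel propagator API rather than hand-parsing:
```cpp
#include <optional>
#include <string>

// Illustrative sketch of the W3C traceparent header layout:
//   "00-" + 32 hex (trace-id) + "-" + 16 hex (parent span) + "-" + 2 hex (flags)
struct TraceParent
{
    std::string traceId;  // 16-byte trace id, hex-encoded
    std::string spanId;   // 8-byte span id, hex-encoded
    bool sampled;         // low bit of trace-flags
};

inline std::optional<TraceParent> parseTraceparent(std::string const& h)
{
    // Fixed total length: 3 + 32 + 1 + 16 + 1 + 2 == 55 characters.
    if (h.size() != 55 || h[2] != '-' || h[35] != '-' || h[52] != '-')
        return std::nullopt;
    TraceParent tp;
    tp.traceId = h.substr(3, 32);
    tp.spanId = h.substr(36, 16);
    tp.sampled = (std::stoi(h.substr(53, 2), nullptr, 16) & 0x1) != 0;
    return tp;
}
```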
---
## Slide 7: Implementation Plan
### 5-Phase Rollout (9 Weeks)
> **Note**: Dates shown are relative to project start, not calendar dates.
```mermaid
gantt
title Implementation Timeline
dateFormat YYYY-MM-DD
axisFormat Week %W
section Phase 1
Core Infrastructure :p1, 2024-01-01, 2w
section Phase 2
RPC Tracing :p2, after p1, 2w
section Phase 3
Transaction Tracing :p3, after p2, 2w
section Phase 4
Consensus Tracing :p4, after p3, 2w
section Phase 5
Documentation :p5, after p4, 1w
```
### Phase Details
| Phase | Focus | Key Deliverables | Effort |
| ----- | ------------------- | -------------------------------------------- | ------- |
| 1 | Core Infrastructure | SDK integration, Telemetry interface, Config | 10 days |
| 2 | RPC Tracing | HTTP context extraction, Handler spans | 10 days |
| 3 | Transaction Tracing | Protobuf context, P2P relay propagation | 10 days |
| 4 | Consensus Tracing | Round spans, Proposal/validation tracing | 10 days |
| 5 | Documentation | Runbook, Dashboards, Training | 7 days |
**Total Effort**: ~47 developer-days (2 developers)
> **Future Phases** (not in current scope): After traces are stable, OTel metrics can replace StatsD (~3 weeks), and OTel logs can replace Journal (~4 weeks, aligned with structured logging initiative). See Slides 3-4 for the full adoption roadmap.
---
## Slide 8: Performance Overhead
> **OTLP** = OpenTelemetry Protocol
### Estimated System Impact
| Metric | Overhead | Notes |
| ----------------- | ---------- | ------------------------------------------------ |
| **CPU** | 1-3% | Span creation and attribute setting |
| **Memory** | ~10 MB | SDK statics + batch buffer + worker thread stack |
| **Network** | 10-50 KB/s | Compressed OTLP export to collector |
| **Latency (p99)** | <2% | With proper sampling configuration |
#### How We Arrived at These Numbers
**Assumptions (XRPL mainnet baseline)**:
| Parameter | Value | Source |
| ------------------------- | ---------------------- | --------------------------------------------------------------------------------------------------- |
| Transaction throughput | ~25 TPS (peaks to ~50) | Mainnet average |
| Default peers per node | 21 | `peerfinder/detail/Tuning.h` (`defaultMaxPeers`) |
| Consensus round frequency | ~1 round / 3-4 seconds | `ConsensusParms.h` (`ledgerMIN_CONSENSUS=1950ms`) |
| Proposers per round | ~20-35 | Mainnet UNL size |
| P2P message rate | ~160 msgs/sec | See message breakdown below |
| Avg TX processing time | ~200 μs | Profiled baseline |
| Single span creation cost | 500-1000 ns | OTel C++ SDK benchmarks (see [3.5.4](./03-implementation-strategy.md#354-performance-data-sources)) |
**P2P message breakdown** (per node, mainnet):
| Message Type | Rate | Derivation |
| ------------- | ------------ | --------------------------------------------------------------------- |
| TMTransaction | ~100/sec | ~25 TPS × ~4 relay hops per TX, deduplicated by HashRouter |
| TMValidation  | ~50/sec      | ~35 validators × 1 validation per ~3s round ≈ ~12/sec, plus relay fan-out |
| TMProposeSet  | ~10/sec      | ~35 proposers × 1 proposal per ~3s round ≈ ~12/sec, clustered in the establish phase |
| **Total** | **~160/sec** | **Only traced message types counted** |
**CPU (1-3%) — Calculation**:
Per-transaction tracing cost breakdown:
| Operation | Cost | Notes |
| ----------------------------------------------- | ----------- | ------------------------------------------ |
| `tx.receive` span (create + end + 4 attributes) | ~1400 ns | ~1000ns create + ~200ns end + 4×50ns attrs |
| `tx.validate` span | ~1200 ns | ~1000ns create + ~200ns for 2 attributes |
| `tx.relay` span | ~1200 ns | ~1000ns create + ~200ns for 2 attributes |
| Context injection into P2P message | ~200 ns | Serialize trace_id + span_id into protobuf |
| **Total per TX** | **~4.0 μs** | |
> **CPU overhead**: 4.0 μs / 200 μs baseline = **~2.0% per transaction**. Under high load with consensus + RPC spans overlapping, reaches ~3%. Consensus itself adds only ~36 μs per 3-second round (~0.001%), so the TX path dominates. On production server hardware (3+ GHz Xeon), span creation drops to ~500-600 ns, bringing per-TX cost to ~2.6 μs (~1.3%). See [Section 3.5.4](./03-implementation-strategy.md#354-performance-data-sources) for benchmark sources.
**Memory (~10 MB) — Calculation**:
| Component | Size | Notes |
| --------------------------------------------- | ------------------ | ------------------------------------- |
| TracerProvider + Exporter (gRPC channel init) | ~320 KB | Allocated once at startup |
| BatchSpanProcessor (circular buffer) | ~16 KB | 2049 × 8-byte AtomicUniquePtr entries |
| BatchSpanProcessor (worker thread stack) | ~8 MB | Default Linux thread stack size |
| Active spans (in-flight, max ~1000) | ~500-800 KB | ~500-800 bytes/span × 1000 concurrent |
| Export queue (batch buffer, max 2048 spans) | ~1 MB | ~500 bytes/span × 2048 queue depth |
| Thread-local context storage (~100 threads) | ~6.4 KB | ~64 bytes/thread |
| **Total** | **~10 MB ceiling** | |
> Memory plateaus once the export queue fills — the `max_queue_size=2048` config bounds growth.
> The worker thread stack (~8 MB) dominates the static footprint but is virtual memory; actual RSS
> depends on stack usage (typically much less). Active spans are larger than originally estimated
> (~500-800 bytes) because the OTel SDK `Span` object includes a mutex (~40 bytes), `SpanData`
> recordable (~250 bytes base), and `std::map`-based attribute storage (~200-500 bytes for 3-5
> string attributes). See [Section 3.5.4](./03-implementation-strategy.md#354-performance-data-sources) for source references.
**Network (10-50 KB/s) — Calculation**:
Two sources of network overhead:
**(A) OTLP span export to Collector:**
| Sampling Rate | Effective Spans/sec | Avg Span Size (compressed) | Bandwidth |
| -------------------------- | ------------------- | -------------------------- | ------------ |
| 100% (dev only) | ~500 | ~500 bytes | ~250 KB/s |
| **10% (recommended prod)** | **~50** | **~500 bytes** | **~25 KB/s** |
| 1% (minimal) | ~5 | ~500 bytes | ~2.5 KB/s |
> The ~500 spans/sec at 100% comes from: ~100 TX spans + ~160 P2P context spans + ~23 consensus spans/round + ~50 RPC spans = ~500/sec. OTLP protobuf with gzip compression yields ~500 bytes/span average.
**(B) P2P trace context overhead** (added to existing messages, always-on regardless of sampling):
| Message Type | Rate | Context Size | Bandwidth |
| ------------- | -------- | ------------ | ------------- |
| TMTransaction | ~100/sec | 29 bytes | ~2.9 KB/s |
| TMValidation | ~50/sec | 29 bytes | ~1.5 KB/s |
| TMProposeSet | ~10/sec | 29 bytes | ~0.3 KB/s |
| **Total P2P** | | | **~4.7 KB/s** |
> **Combined**: 25 KB/s (OTLP export at 10%) + 5 KB/s (P2P context) ≈ **~30 KB/s typical**. The 10-50 KB/s range covers 10-20% sampling under normal to peak mainnet load.
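The combined figure can be re-derived from the two tables above (values copied from them; the constant names are illustrative):
```cpp
// Values from the tables above, at the recommended 10% sampling rate.
constexpr double otlpSpansPerSec = 50;   // 10% of ~500 spans/sec
constexpr double bytesPerSpan = 500;     // gzip-compressed OTLP average
constexpr double otlpKBps = otlpSpansPerSec * bytesPerSpan / 1000;  // 25 KB/s

// Always-on P2P context: (100 + 50 + 10) msgs/sec x 29 bytes each.
constexpr double p2pKBps = (100 + 50 + 10) * 29 / 1000.0;  // ~4.6 KB/s
                                                           // (table rounds rows to ~4.7)
constexpr double totalKBps = otlpKBps + p2pKBps;           // ~30 KB/s
```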
**Latency (<2%) — Calculation**:
| Path | Tracing Cost | Baseline | Overhead |
| ------------------------------ | ------------ | -------- | -------- |
| Fast RPC (e.g., `server_info`) | 2.75 μs | ~1 ms | 0.275% |
| Slow RPC (e.g., `path_find`) | 2.75 μs | ~100 ms | 0.003% |
| Transaction processing | 4.0 μs | ~200 μs | 2.0% |
| Consensus round | 36 μs | ~3 sec | 0.001% |
> At p99, even the worst case (TX processing at 2.0%) is within the 1-3% range. RPC and consensus overhead are negligible. On production hardware, TX overhead drops to ~1.3%.
### Per-Message Overhead (Context Propagation)
Each P2P message carries trace context with the following overhead:
| Field | Size | Description |
| ------------- | ------------- | ----------------------------------------- |
| `trace_id` | 16 bytes | Unique identifier for the entire trace |
| `span_id` | 8 bytes | Current span (becomes parent on receiver) |
| `trace_flags` | 1 byte | Sampling decision flags |
| `trace_state` | 0-4 bytes | Optional vendor-specific data |
| **Total** | **~29 bytes** | **Added per traced P2P message** |
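A byte-layout sketch of that context block; this is illustrative, since the actual rippled encoding uses protobuf extension fields, which add their own tag/length framing:
```cpp
#include <array>
#include <cstddef>
#include <cstdint>

// Illustrative worst-case layout of the per-message trace context.
struct WireTraceContext
{
    std::array<std::uint8_t, 16> traceId;  // unique per trace
    std::array<std::uint8_t, 8> spanId;    // becomes parent on the receiver
    std::uint8_t traceFlags;               // sampling decision
    std::array<std::uint8_t, 4> state;     // optional vendor data (max 4)
};

constexpr std::size_t kMaxContextBytes = 16 + 8 + 1 + 4;  // = 29
static_assert(sizeof(WireTraceContext) == kMaxContextBytes);
```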
```mermaid
flowchart LR
subgraph msg["P2P Message with Trace Context"]
A["Original Message<br/>(variable size)"] --> B["+ TraceContext<br/>(~29 bytes)"]
end
subgraph breakdown["Context Breakdown"]
C["trace_id<br/>16 bytes"]
D["span_id<br/>8 bytes"]
E["flags<br/>1 byte"]
F["state<br/>0-4 bytes"]
end
B --> breakdown
style A fill:#424242,stroke:#212121,color:#fff
style B fill:#2e7d32,stroke:#1b5e20,color:#fff
style C fill:#1565c0,stroke:#0d47a1,color:#fff
style D fill:#1565c0,stroke:#0d47a1,color:#fff
style E fill:#e65100,stroke:#bf360c,color:#fff
style F fill:#4a148c,stroke:#2e0d57,color:#fff
```
**Reading the diagram:**
- **Original Message (gray, left)**: The existing P2P message payload, of variable size; it is unchanged, because trace context is appended rather than modifying the original data.
- **+ TraceContext (green, right of message)**: The additional 29-byte context block attached to each traced message; the arrow from the original message shows it is a pure addition.
- **Context Breakdown (right subgraph)**: The four fields `trace_id` (16 bytes), `span_id` (8 bytes), `flags` (1 byte), and `state` (0-4 bytes) show exactly what is added and their individual sizes.
- **Color coding**: Blue fields (`trace_id`, `span_id`) are the core identifiers required for trace correlation; orange (`flags`) controls sampling decisions; purple (`state`) is optional vendor data typically omitted.
> **Note**: 29 bytes represents ~1-6% overhead depending on message size (500B simple TX to 5KB proposal), which is acceptable for the observability benefits provided.
### Mitigation Strategies
```mermaid
flowchart LR
A["Head Sampling<br/>10% default"] --> B["Tail Sampling<br/>Keep errors/slow"] --> C["Batch Export<br/>Reduce I/O"] --> D["Conditional Compile<br/>XRPL_ENABLE_TELEMETRY"]
style A fill:#1565c0,stroke:#0d47a1,color:#fff
style B fill:#2e7d32,stroke:#1b5e20,color:#fff
style C fill:#e65100,stroke:#bf360c,color:#fff
style D fill:#4a148c,stroke:#2e0d57,color:#fff
```
> For a detailed explanation of head vs. tail sampling, see Slide 9.
### Kill Switches (Rollback Options)
1. **Config Disable**: Set `enabled=0` in the config for an instant disable; no restart is needed to change sampling
2. **Rebuild**: Compile with `XRPL_ENABLE_TELEMETRY=OFF` for zero overhead (every telemetry call compiles to a no-op)
3. **Full Revert**: Clean separation allows easy commit reversion
---
## Slide 9: Sampling Strategies — Head vs. Tail
> Sampling controls **which traces are recorded and exported**. Without sampling, every operation generates a trace — at 500+ spans/sec, this overwhelms storage and network. Sampling lets you keep the signal, discard the noise.
### Head Sampling (Decision at Start)
The sampling decision is made **when a trace begins**, before any work is done. A random number is generated; if it falls within the configured ratio, the entire trace is recorded. Otherwise, the trace is silently dropped.
```mermaid
flowchart LR
A["New Request<br/>Arrives"] --> B{"Random < 10%?"}
B -->|"Yes (1 in 10)"| C["Record Entire Trace<br/>(all spans)"]
B -->|"No (9 in 10)"| D["Drop Entire Trace<br/>(zero overhead)"]
style C fill:#2e7d32,stroke:#1b5e20,color:#fff
style D fill:#c62828,stroke:#8c2809,color:#fff
style B fill:#1565c0,stroke:#0d47a1,color:#fff
```
| Aspect | Details |
| ----------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Where it runs** | Inside rippled (SDK-level). Configured via `sampling_ratio` in `rippled.cfg`. |
| **When the decision happens** | At trace creation time, before the first span is even populated. |
| **How it works** | `sampling_ratio=0.1` means each trace has a 10% probability of being recorded. Dropped traces incur near-zero overhead (no spans created, no attributes set, no export). |
| **Propagation** | Once a trace is sampled, the `trace_flags` field (1 byte in the context header) tells downstream nodes to also sample it. Unsampled traces propagate `trace_flags=0`, so downstream nodes skip them too. |
| **Pros** | Lowest overhead. Simple to configure. Predictable resource usage. |
| **Cons** | **Blind**: it doesn't know whether the trace will be interesting. A rare error or slow consensus round has only a 10% chance of being captured. |
| **Best for** | High-volume, steady-state traffic where most traces look similar (e.g., routine RPC requests). |
**rippled configuration**:
```ini
[telemetry]
# Record 10% of traces (recommended for production)
sampling_ratio=0.1
```
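Conceptually, head sampling can be made deterministic by deriving the decision from the trace ID itself (the idea behind OTel's ratio-based sampler), so every hop agrees without coordination. A minimal sketch; this is not the SDK's exact algorithm, and the trace IDs below are made up:

```shell
# Sketch of a trace-ID-based head-sampling decision (conceptual only;
# not the SDK's exact algorithm). The first 8 hex chars of the trace_id
# are compared against ratio * 2^32, so all nodes decide identically.
sample_decision() {
  trace_id=$1; ratio=$2
  prefix=$((16#${trace_id:0:8}))
  threshold=$(awk -v r="$ratio" 'BEGIN { printf "%d", r * 4294967296 }')
  if [ "$prefix" -lt "$threshold" ]; then echo sampled; else echo dropped; fi
}
# Hypothetical trace IDs for illustration:
sample_decision 0a3f9c014b2d86e10f67a9c4d2e81b55 0.1
sample_decision f8d21c77aa0e43b95c1d02e6b7f4a980 0.1
```

With `sampling_ratio=0.1`, roughly one trace in ten falls under the threshold; the `trace_flags` byte then carries that decision to downstream nodes.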
### Tail Sampling (Decision at End)
The sampling decision is made **after the trace completes**, based on its actual content: was it slow? Did it error? Was it a consensus round? This requires buffering complete traces before deciding.
```mermaid
flowchart TB
A["All Traces<br/>Buffered (100%)"] --> B["OTel Collector<br/>Evaluates Rules"]
B --> C{"Error?"}
C -->|Yes| K["KEEP"]
C -->|No| D{"Slow?<br/>(>5s consensus,<br/>>1s RPC)"}
D -->|Yes| K
D -->|No| E{"Random < 10%?"}
E -->|Yes| K
E -->|No| F["DROP"]
style K fill:#2e7d32,stroke:#1b5e20,color:#fff
style F fill:#c62828,stroke:#8c2809,color:#fff
style B fill:#1565c0,stroke:#0d47a1,color:#fff
style C fill:#e65100,stroke:#bf360c,color:#fff
style D fill:#e65100,stroke:#bf360c,color:#fff
style E fill:#4a148c,stroke:#2e0d57,color:#fff
```
| Aspect | Details |
| ----------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Where it runs** | In the **OTel Collector** (external process), not inside rippled. rippled exports 100% of traces; the Collector decides what to keep. |
| **When the decision happens** | After the Collector has received all spans for a trace (waits `decision_wait=10s` for stragglers). |
| **How it works** | Policy rules evaluate the completed trace: keep all errors, keep slow operations above a threshold, keep all consensus rounds, then probabilistically sample the rest at 10%. |
| **Pros** | **Never misses important traces**. Errors, slow requests, and consensus anomalies are always captured regardless of probability. |
| **Cons** | Higher resource usage: rippled must export 100% of spans to the Collector, which buffers them in memory before deciding. The Collector needs more RAM (configured via `num_traces` and `decision_wait`). |
| **Best for** | Production troubleshooting where you can't afford to miss errors or anomalies. |
**Collector configuration** (tail sampling rules for rippled):
```yaml
processors:
tail_sampling:
decision_wait: 10s # Wait for all spans in a trace
num_traces: 100000 # Buffer up to 100K concurrent traces
policies:
- name: errors # Always keep error traces
type: status_code
status_code: { status_codes: [ERROR] }
- name: slow-consensus # Keep consensus rounds >5s
type: latency
latency: { threshold_ms: 5000 }
- name: slow-rpc # Keep slow RPC requests >1s
type: latency
latency: { threshold_ms: 1000 }
- name: probabilistic # Sample 10% of everything else
type: probabilistic
probabilistic: { sampling_percentage: 10 }
```
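The policy list above can be read as a decision cascade; a simplified sketch (the real processor evaluates latency policies independently per trace, so this is an approximation, not the Collector's implementation):

```shell
# Simplified tail-sampling cascade: error > latency > probabilistic.
# Duration thresholds mirror the YAML above; "coin" in [0,100) stands
# in for the probabilistic policy's random draw.
tail_decision() {
  status=$1; duration_ms=$2; coin=$3
  if [ "$status" = "ERROR" ]; then echo "KEEP (errors)"
  elif [ "$duration_ms" -gt 5000 ]; then echo "KEEP (slow-consensus)"
  elif [ "$duration_ms" -gt 1000 ]; then echo "KEEP (slow-rpc)"
  elif [ "$coin" -lt 10 ]; then echo "KEEP (probabilistic)"
  else echo "DROP"
  fi
}
tail_decision ERROR 120 99     # errors are always kept
tail_decision OK 6200 99       # slow consensus round is kept
tail_decision OK 300 42        # routine fast trace is dropped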
### Head vs. Tail — Side-by-Side
| | Head Sampling | Tail Sampling |
| ----------------------------- | ---------------------------------------- | ------------------------------------------------ |
| **Decision point** | Trace start (inside rippled) | Trace end (in OTel Collector) |
| **Knows trace content?** | No (random coin flip) | Yes (evaluates completed trace) |
| **Overhead on rippled** | Lowest (dropped traces = no-op) | Higher (must export 100% to Collector) |
| **Collector resource usage** | Low (receives only sampled traces) | Higher (buffers all traces before deciding) |
| **Captures all errors?** | No (only if trace was randomly selected) | **Yes** (error policy catches them) |
| **Captures slow operations?** | No (random) | **Yes** (latency policy catches them) |
| **Configuration** | `rippled.cfg`: `sampling_ratio=0.1` | `otel-collector.yaml`: `tail_sampling` processor |
| **Best for** | High-throughput steady-state | Troubleshooting & anomaly detection |
### Recommended Strategy for rippled
Use **both** in a layered approach:
```mermaid
flowchart LR
subgraph rippled["rippled (Head Sampling)"]
HS["sampling_ratio=1.0<br/>(export everything)"]
end
subgraph collector["OTel Collector (Tail Sampling)"]
TS["Keep: errors + slow + 10% random<br/>Drop: routine traces"]
end
subgraph storage["Backend Storage"]
ST["Only interesting traces<br/>stored long-term"]
end
rippled -->|"100% of spans"| collector -->|"~15-20% kept"| storage
style rippled fill:#424242,stroke:#212121,color:#fff
style collector fill:#1565c0,stroke:#0d47a1,color:#fff
style storage fill:#2e7d32,stroke:#1b5e20,color:#fff
```
> **Why this works**: rippled exports everything (no blind drops), the Collector applies intelligent filtering (keep errors/slow/anomalies, sample the rest), and only ~15-20% of traces reach storage. If Collector resource usage becomes a concern, add head sampling at `sampling_ratio=0.5` to halve the export volume while still giving the Collector enough data for good tail-sampling decisions.
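The "~15-20% kept" figure follows from the policy mix; a back-of-envelope calculation with assumed rates (2% errors, 5% slow; both numbers are illustrative, not measured):

```shell
# Expected fraction of traces kept under the layered strategy, with
# assumed rates: 2% error, 5% slow, remainder sampled at 10%.
kept=$(awk 'BEGIN { err = 0.02; slow = 0.05; rest = 1 - err - slow
                    printf "%.1f", 100 * (err + slow + 0.10 * rest) }')
echo "expected kept: ${kept}% of all traces"
```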
---
## Slide 10: Data Collection & Privacy
### What Data is Collected
| Category | Attributes Collected | Purpose |
| --------------- | ------------------------------------------------------------------------------------ | --------------------------- |
| **Transaction** | `tx.hash`, `tx.type`, `tx.result`, `tx.fee`, `ledger_index` | Trace transaction lifecycle |
| **Consensus** | `round`, `phase`, `mode`, `proposers` (count of proposing validators), `duration_ms` | Analyze consensus timing |
| **RPC** | `command`, `version`, `status`, `duration_ms` | Monitor RPC performance |
| **Peer** | `peer.id` (public key), `latency_ms`, `message.type`, `message.size` | Network topology analysis |
| **Ledger** | `ledger.hash`, `ledger.index`, `close_time`, `tx_count` | Ledger progression tracking |
| **Job** | `job.type`, `queue_ms`, `worker` | JobQueue performance |
### What is NOT Collected (Privacy Guarantees)
```mermaid
flowchart LR
subgraph notCollected["❌ NOT Collected"]
direction LR
A["Private Keys"] ~~~ B["Account Balances"] ~~~ C["Transaction Amounts"]
end
subgraph alsoNot["❌ Also Excluded"]
direction LR
D["IP Addresses<br/>(configurable)"] ~~~ E["Personal Data"] ~~~ F["Raw TX Payloads"]
end
style A fill:#c62828,stroke:#8c2809,color:#fff
style B fill:#c62828,stroke:#8c2809,color:#fff
style C fill:#c62828,stroke:#8c2809,color:#fff
style D fill:#c62828,stroke:#8c2809,color:#fff
style E fill:#c62828,stroke:#8c2809,color:#fff
style F fill:#c62828,stroke:#8c2809,color:#fff
```
**Reading the diagram:**
- **NOT Collected (top row, red)**: Private Keys, Account Balances, and Transaction Amounts are explicitly excluded; these are financial/security-sensitive fields that telemetry never touches.
- **Also Excluded (bottom row, red)**: IP Addresses (configurable per deployment), Personal Data, and Raw TX Payloads are also excluded; these protect operator and user privacy.
- **All-red styling**: Every box is styled in red to visually reinforce that these are hard exclusions, not options; the telemetry system has no code path to collect any of these fields.
- **Two-row layout**: The split between "NOT Collected" and "Also Excluded" distinguishes between financial data (top) and operational/personal data (bottom), making the privacy boundaries clear to auditors.
### Privacy Protection Mechanisms
| Mechanism | Description |
| -------------------------- | ------------------------------------------------------------- |
| **Account Hashing** | `xrpl.tx.account` is hashed at collector level before storage |
| **Configurable Redaction** | Sensitive fields can be excluded via config |
| **Sampling** | Only 10% of traces recorded by default (reduces exposure) |
| **Local Control** | Node operators control what gets exported |
| **No Raw Payloads** | Transaction content is never recorded, only metadata |
> **Key Principle**: Telemetry collects **operational metadata** (timing, counts, hashes) — never **sensitive content** (keys, balances, amounts).
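As a concrete illustration of the account-hashing mechanism, a sketch of what collector-level hashing could produce (SHA-256 is an assumed choice for illustration; the actual collector processor, and any salting, may differ):

```shell
# Hash an account ID the way a collector-level processor might,
# so the stored attribute cannot be trivially mapped back.
# (SHA-256 is an assumed choice, not the documented processor config.)
account="rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh"   # well-known genesis account
hashed=$(printf '%s' "$account" | sha256sum | cut -d' ' -f1)
echo "xrpl.tx.account => ${hashed}"
```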
---
## Slide 11: External Dashboard Parity (Phase 7+)
### Bridging Community Monitoring into Native OTel
The community [xrpl-validator-dashboard](https://github.com/realgrapedrop/xrpl-validator-dashboard) provides 86 metrics for validator operators. We integrated the 29 missing metrics natively into the OTel pipeline.
### New Metric Categories
```mermaid
graph LR
subgraph "New Observable Gauges"
VH["Validator Health<br/>amendment_blocked, UNL expiry,<br/>quorum"]
PQ["Peer Quality<br/>P90 latency, insane peers,<br/>version awareness"]
LE["Ledger Economy<br/>fees, reserves, tx rate,<br/>ledger age"]
ST["State Tracking<br/>state value 0-6,<br/>time in state"]
VA["Validation Agreement<br/>1h/24h agreement %,<br/>agreements, misses"]
end
subgraph "Counters"
C1["ledgers_closed_total"]
C2["validations_sent_total"]
C3["state_changes_total"]
end
style VH fill:#1565c0,color:#fff
style PQ fill:#2e7d32,color:#fff
style LE fill:#e65100,color:#fff
style ST fill:#6a1b9a,color:#fff
style VA fill:#c62828,color:#fff
style C1 fill:#37474f,color:#fff
style C2 fill:#37474f,color:#fff
style C3 fill:#37474f,color:#fff
```
### ValidationTracker — Agreement Computation
```mermaid
sequenceDiagram
participant C as RCLConsensus
participant VT as ValidationTracker
participant MR as MetricsRegistry
participant P as Prometheus
C->>VT: recordOurValidation(hash, seq)
Note over VT: Stores pending event
C->>VT: recordNetworkValidation(hash, seq)
Note over VT: Marks network validated
MR->>VT: reconcile() [every 10s]
Note over VT: After 8s grace period:<br/>both validated → agreed<br/>only one → missed<br/>5min late repair window
MR->>P: Export agreement_pct_1h/24h
```
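The reconcile() step reduces to a simple ratio over the window; a sketch with assumed 1-hour counts (355 agreed, 5 missed; both numbers are illustrative):

```shell
# Agreement percentage as reconciled over a window: events where both
# "ours" and "network" validations arrived count as agreed; lone ones
# (after the grace period) as missed.
agreed=355; missed=5   # assumed counts for illustration
pct=$(awk -v a="$agreed" -v m="$missed" \
  'BEGIN { printf "%.1f", 100 * a / (a + m) }')
echo "agreement_pct_1h: ${pct}%"
```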
### New Grafana Dashboards
| Dashboard | Key Panels |
| ---------------- | --------------------------------------------------- |
| Validator Health | Agreement %, amendment blocked, quorum, state value |
| Peer Quality | P90 latency, version awareness, upgrade recommended |
| Ledger Economy | Base fee, reserves, ledger age, transaction rate |
---
_End of Presentation_


@@ -1529,3 +1529,46 @@ validators.txt
# set to ssl_verify to 0.
[ssl_verify]
1
#-------------------------------------------------------------------------------
#
# 11. Telemetry (OpenTelemetry Tracing)
#
#-------------------------------------------------------------------------------
#
# Enables distributed tracing via OpenTelemetry. Requires building with
# -DXRPL_ENABLE_TELEMETRY=ON (telemetry Conan option).
#
# [telemetry]
#
# enabled=0
#
# Enable or disable telemetry at runtime. Default: 0 (disabled).
#
# endpoint=http://localhost:4318/v1/traces
#
# The OpenTelemetry Collector endpoint (OTLP/HTTP). Default: http://localhost:4318/v1/traces.
#
# exporter=otlp_http
#
# Exporter type: otlp_http. Default: otlp_http.
#
# sampling_ratio=1.0
#
# Fraction of traces to sample (0.0 to 1.0). Default: 1.0 (all traces).
#
# trace_rpc=1
#
# Enable RPC request tracing. Default: 1.
#
# trace_transactions=1
#
# Enable transaction lifecycle tracing. Default: 1.
#
# trace_consensus=1
#
# Enable consensus round tracing. Default: 1.
#
# trace_peer=0
#
# Enable peer message tracing (high volume). Default: 0.
#


@@ -78,6 +78,13 @@ include(target_link_modules)
# Level 01
add_module(xrpl beast)
target_link_libraries(xrpl.libxrpl.beast PUBLIC xrpl.imports.main)
# OTelCollector in beast/insight uses OTel Metrics SDK when telemetry is enabled.
if(telemetry)
target_link_libraries(
xrpl.libxrpl.beast
PUBLIC opentelemetry-cpp::opentelemetry-cpp
)
endif()
include(GitInfo)
add_module(xrpl git)
@@ -204,6 +211,23 @@ target_link_libraries(
add_module(xrpl tx)
target_link_libraries(xrpl.libxrpl.tx PUBLIC xrpl.libxrpl.ledger)
# Telemetry module — OpenTelemetry distributed tracing support.
# Sources: include/xrpl/telemetry/ (headers), src/libxrpl/telemetry/ (impl).
# When telemetry=ON, links the Conan-provided umbrella target
# opentelemetry-cpp::opentelemetry-cpp (individual component targets like
# ::api, ::sdk are not available in the Conan package).
add_module(xrpl telemetry)
target_link_libraries(
xrpl.libxrpl.telemetry
PUBLIC xrpl.libxrpl.basics xrpl.libxrpl.beast
)
if(telemetry)
target_link_libraries(
xrpl.libxrpl.telemetry
PUBLIC opentelemetry-cpp::opentelemetry-cpp
)
endif()
add_library(xrpl.libxrpl)
set_target_properties(xrpl.libxrpl PROPERTIES OUTPUT_NAME xrpl)
@@ -235,6 +259,7 @@ target_link_modules(
resource
server
shamap
telemetry
tx
)


@@ -27,8 +27,12 @@ file(
src/*.cpp
src/*.md
Builds/*.md
*.md
)
# Add only top-level .md files (README, CONTRIBUTING, etc.) without
# recursing into dot-directories like .claude/ whose files are not
# valid Doxygen/CMake sources.
file(GLOB doxygen_top_md CONFIGURE_DEPENDS "*.md")
list(APPEND doxygen_input ${doxygen_top_md})
list(APPEND doxygen_input external/README.md)
set(dependencies "${doxygen_input}" "${doxyfile}")


@@ -10,10 +10,13 @@
"rocksdb/10.5.1#4a197eca381a3e5ae8adf8cffa5aacd0%1765850186.86",
"re2/20251105#8579cfd0bda4daf0683f9e3898f964b4%1774398111.888",
"protobuf/6.33.5#d96d52ba5baaaa532f47bda866ad87a5%1773224203.27",
"opentelemetry-cpp/1.18.0#efd9851e173f8a13b9c7d35232de8cf1%1750409186.472",
"openssl/3.6.1#e6399de266349245a4542fc5f6c71552%1774458290.139",
"nudb/2.0.9#0432758a24204da08fee953ec9ea03cb%1769436073.32",
"nlohmann_json/3.11.3#45828be26eb619a2e04ca517bb7b828d%1701220705.259",
"lz4/1.10.0#59fc63cac7f10fbe8e05c7e62c2f3504%1765850143.914",
"libiconv/1.17#1e65319e945f2d31941a9d28cc13c058%1765842973.492",
"libcurl/8.18.0#364bc3755cb9ef84ed9a7ae9c7efc1c1%1770984390.024",
"libbacktrace/cci.20210118#a7691bfccd8caaf66309df196790a5a1%1765842973.03",
"libarchive/3.8.1#ffee18995c706e02bf96e7a2f7042e0d%1765850144.736",
"jemalloc/5.3.0#e951da9cf599e956cebc117880d2d9f8%1729241615.244",
@@ -30,9 +33,15 @@
"zlib/1.3.1#cac0f6daea041b0ccf42934163defb20%1774439233.809",
"strawberryperl/5.32.1.1#8d114504d172cfea8ea1662d09b6333e%1774447376.964",
"protobuf/6.33.5#d96d52ba5baaaa532f47bda866ad87a5%1773224203.27",
"pkgconf/2.5.1#93c2051284cba1279494a43a4fcfeae2%1757684701.089",
"opentelemetry-proto/1.4.0#4096a3b05916675ef9628f3ffd571f51%1732731336.11",
"ninja/1.13.2#c8c5dc2a52ed6e4e42a66d75b4717ceb%1764096931.974",
"nasm/2.16.01#31e26f2ee3c4346ecd347911bd126904%1765850144.707",
"msys2/cci.latest#d22fe7b2808f5fd34d0a7923ace9c54f%1770657326.649",
"meson/1.10.0#60786758ea978964c24525de19603cf4%1768294926.103",
"m4/1.4.19#5d7a4994e5875d76faf7acf3ed056036%1774365463.87",
"libtool/2.4.7#14e7739cc128bc1623d2ed318008e47e%1755679003.847",
"gnu-config/cci.20210814#466e9d4d7779e1c142443f7ea44b4284%1762363589.329",
"cmake/4.3.0#b939a42e98f593fb34d3a8c5cc860359%1774439249.183",
"b2/5.4.2#ffd6084a119587e70f11cd45d1a386e2%1774439233.447",
"automake/1.16.5#b91b7c384c3deaa9d535be02da14d04f%1755524470.56",


@@ -22,6 +22,7 @@ class Xrpl(ConanFile):
"rocksdb": [True, False],
"shared": [True, False],
"static": [True, False],
"telemetry": [True, False],
"tests": [True, False],
"unity": [True, False],
"xrpld": [True, False],
@@ -54,6 +55,7 @@ class Xrpl(ConanFile):
"rocksdb": True,
"shared": False,
"static": True,
"telemetry": True,
"tests": False,
"unity": False,
"xrpld": False,
@@ -145,6 +147,10 @@ class Xrpl(ConanFile):
self.requires("jemalloc/5.3.0")
if self.options.rocksdb:
self.requires("rocksdb/10.5.1")
# OpenTelemetry C++ SDK for distributed tracing (optional).
# Provides OTLP/HTTP exporter, batch span processor, and trace API.
if self.options.telemetry:
self.requires("opentelemetry-cpp/1.18.0")
self.requires("xxhash/0.8.3", transitive_headers=True)
exports_sources = (
@@ -173,6 +179,7 @@ class Xrpl(ConanFile):
tc.variables["rocksdb"] = self.options.rocksdb
tc.variables["BUILD_SHARED_LIBS"] = self.options.shared
tc.variables["static"] = self.options.static
tc.variables["telemetry"] = self.options.telemetry
tc.variables["unity"] = self.options.unity
tc.variables["xrpld"] = self.options.xrpld
tc.generate()
@@ -225,3 +232,5 @@ class Xrpl(ConanFile):
]
if self.options.rocksdb:
libxrpl.requires.append("rocksdb::librocksdb")
if self.options.telemetry:
libxrpl.requires.append("opentelemetry-cpp::opentelemetry-cpp")


@@ -87,6 +87,8 @@ words:
- daria
- dcmake
- dearmor
- Dedup
- dedup
- deleteme
- demultiplexer
- deserializaton
@@ -97,6 +99,7 @@ words:
- doxyfile
- dxrpl
- endmacro
- EOCFG
- exceptioned
- Falco
- fcontext
@@ -142,6 +145,7 @@ words:
- libxrpl
- llection
- LOCALGOOD
- logql
- logwstream
- lseq
- lsmf
@@ -182,6 +186,7 @@ words:
- NOLINTNEXTLINE
- nonxrp
- noripple
- nostd
- nudb
- nullptr
- nunl
@@ -194,8 +199,11 @@ words:
- permdex
- perminute
- permissioned
- pgrep
- pkill
- pointee
- populator
- pratik
- preauth
- preauthorization
- preauthorize
@@ -210,7 +218,9 @@ words:
- qalloc
- queuable
- Raphson
- reparent
- replayer
- reqps
- rerere
- retriable
- RIPD
@@ -267,6 +277,7 @@ words:
- txjson
- txn
- txns
- txqueue
- txs
- UBSAN
- ubsan
@@ -307,6 +318,10 @@ words:
- xchain
- ximinez
- EXPECT_STREQ
- Gantt
- gantt
- otelc
- traceql
- XMACRO
- xrpkuwait
- xrpl
@@ -314,3 +329,7 @@ words:
- xrplf
- xxhash
- xxhasher
- xychart
- zpages
- ripplex
- mseconds

docker/telemetry/.gitignore

@@ -0,0 +1,2 @@
# Runtime data generated by xrpld and telemetry stack
data/

docker/telemetry/TESTING.md

@@ -0,0 +1,703 @@
# OpenTelemetry Integration Testing Guide
This document describes how to verify the rippled OpenTelemetry
pipeline end-to-end, from span generation through the observability stack
(otel-collector, Jaeger, Prometheus, Grafana).
---
## Prerequisites
### Build xrpld with telemetry
```bash
conan install . --build=missing -o telemetry=True
cmake --preset default -Dtelemetry=ON
cmake --build --preset default --target xrpld
```
The binary is at `.build/xrpld`.
### Required tools
- **Docker** with `docker compose` (v2)
- **curl**
- **jq** (JSON processor)
### Verify binary
```bash
.build/xrpld --version
```
---
## Test 1: Single-Node Standalone (Quick Verification)
This test verifies RPC and transaction spans in standalone mode. Consensus
spans will not fire because standalone mode does not run consensus.
### Step 1: Start the observability stack
```bash
docker compose -f docker/telemetry/docker-compose.yml up -d
```
Wait for services to be ready:
```bash
# otel-collector health
curl -sf http://localhost:13133/ && echo "collector ready"
# Jaeger UI
curl -sf http://localhost:16686/ > /dev/null && echo "jaeger ready"
```
### Step 2: Start xrpld in standalone mode
```bash
.build/xrpld --conf docker/telemetry/xrpld-telemetry.cfg -a --start
```
Wait a few seconds for the node to initialize.
### Step 3: Exercise RPC spans
```bash
# server_info
curl -s http://localhost:5005 \
-d '{"method":"server_info"}' | jq .result.info.server_state
# server_state
curl -s http://localhost:5005 \
-d '{"method":"server_state"}' | jq .result.state.server_state
# ledger
curl -s http://localhost:5005 \
-d '{"method":"ledger","params":[{"ledger_index":"current"}]}' \
| jq .result.ledger_current_index
```
### Step 4: Submit a transaction
Close the ledger first (required in standalone mode):
```bash
curl -s http://localhost:5005 -d '{"method":"ledger_accept"}'
```
Submit a Payment from the genesis account:
```bash
curl -s http://localhost:5005 -d '{
"method": "submit",
"params": [{
"secret": "snoPBrXtMeMyMHUVTgbuqAfg1SUTb",
"tx_json": {
"TransactionType": "Payment",
"Account": "rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh",
"Destination": "rPMh7Pi9ct699iZUTWzJaUMR1o42VEfGqF",
"Amount": "10000000"
}
}]
}' | jq .result.engine_result
```
Expected result: `"tesSUCCESS"`.
Close the ledger again to finalize:
```bash
curl -s http://localhost:5005 -d '{"method":"ledger_accept"}'
```
### Step 5: Verify traces in Jaeger
Wait 5 seconds for the batch export, then:
```bash
JAEGER="http://localhost:16686"
# Check rippled service is registered
curl -s "$JAEGER/api/services" | jq '.data'
# Check RPC spans
curl -s "$JAEGER/api/traces?service=rippled&operation=rpc.request&limit=5&lookback=1h" \
| jq '.data | length'
curl -s "$JAEGER/api/traces?service=rippled&operation=rpc.process&limit=5&lookback=1h" \
| jq '.data | length'
curl -s "$JAEGER/api/traces?service=rippled&operation=rpc.command.server_info&limit=5&lookback=1h" \
| jq '.data | length'
# Check transaction spans
curl -s "$JAEGER/api/traces?service=rippled&operation=tx.process&limit=5&lookback=1h" \
| jq '.data | length'
```
Or open the Jaeger UI: http://localhost:16686
### Step 6: Teardown
```bash
# Kill xrpld (Ctrl+C, or:)
kill $(pgrep -f 'xrpld.*xrpld-telemetry')
# Stop observability stack
docker compose -f docker/telemetry/docker-compose.yml down
# Clean xrpld data
rm -rf data/
```
### Expected spans (standalone mode)
| Span Name | Expected | Notes |
| --------------------------- | -------- | ----------------------------- |
| `rpc.request` | Yes | Every HTTP RPC call |
| `rpc.process` | Yes | Every RPC processing |
| `rpc.command.server_info` | Yes | server_info RPC |
| `rpc.command.server_state` | Yes | server_state RPC |
| `rpc.command.ledger` | Yes | ledger RPC |
| `rpc.command.submit` | Yes | submit RPC |
| `rpc.command.ledger_accept` | Yes | ledger_accept RPC |
| `tx.process` | Yes | Transaction submission |
| `tx.receive` | No | No peers in standalone |
| `consensus.*` | No | Consensus disabled standalone |
---
## Test 2: 6-Node Consensus Network (Full Verification)
This test verifies ALL span categories including consensus and peer
transaction relay, using a 6-node validator network.
### Automated
Run the integration test script:
```bash
bash docker/telemetry/integration-test.sh
```
The script will:
1. Start the observability stack
2. Generate 6 validator key pairs
3. Create config files for each node
4. Start all 6 nodes
5. Wait for consensus ("proposing" state)
6. Exercise RPC, submit transactions
7. Verify all span categories in Jaeger
8. Verify spanmetrics in Prometheus
9. Print results and leave the stack running
### Manual
If you prefer to run the steps manually:
#### Step 1: Start observability stack
```bash
docker compose -f docker/telemetry/docker-compose.yml up -d
```
#### Step 2: Generate validator keys
Start a temporary standalone xrpld:
```bash
.build/xrpld --conf docker/telemetry/xrpld-telemetry.cfg -a --start &
TEMP_PID=$!
sleep 5
```
Generate 6 key pairs:
```bash
for i in $(seq 1 6); do
curl -s http://localhost:5005 \
-d '{"method":"validation_create"}' | jq '.result'
done
```
Record the `validation_seed` and `validation_public_key` for each.
Kill the temporary node:
```bash
kill $TEMP_PID
rm -rf data/
```
#### Step 3: Create node configs
For each node (1-6), create a config file. Template:
```ini
[server]
port_rpc
port_peer
[port_rpc]
port = {5004 + node_number}
ip = 127.0.0.1
admin = 127.0.0.1
protocol = http
[port_peer]
port = {51234 + node_number}
ip = 0.0.0.0
protocol = peer
[node_db]
type=NuDB
path=/tmp/xrpld-integration/node{N}/nudb
online_delete=256
[database_path]
/tmp/xrpld-integration/node{N}/db
[debug_logfile]
/tmp/xrpld-integration/node{N}/debug.log
[validation_seed]
{seed from step 2}
[validators_file]
/tmp/xrpld-integration/validators.txt
[ips_fixed]
127.0.0.1 51235
127.0.0.1 51236
127.0.0.1 51237
127.0.0.1 51238
127.0.0.1 51239
127.0.0.1 51240
[peer_private]
1
[telemetry]
enabled=1
endpoint=http://localhost:4318/v1/traces
exporter=otlp_http
sampling_ratio=1.0
batch_size=512
batch_delay_ms=2000
max_queue_size=2048
trace_rpc=1
trace_transactions=1
trace_consensus=1
trace_peer=0
trace_ledger=1
[rpc_startup]
{ "command": "log_level", "severity": "warning" }
[ssl_verify]
0
```
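The `{5004 + node_number}` and `{51234 + node_number}` placeholders in the template expand with shell arithmetic; the resulting assignments for nodes 1-6:

```shell
# Port assignments derived from the config template's placeholders:
# RPC = 5004 + N, peer = 51234 + N.
ports=$(for n in 1 2 3 4 5 6; do
  printf "node%d rpc=%d peer=%d\n" "$n" $((5004 + n)) $((51234 + n))
done)
echo "$ports"
```

These match the ports polled in the consensus-wait loop (5005-5010) and the `[ips_fixed]` peer list (51235-51240).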
#### Step 4: Create validators.txt
```ini
[validators]
{public_key_1}
{public_key_2}
{public_key_3}
{public_key_4}
{public_key_5}
{public_key_6}
```
#### Step 5: Start all 6 nodes
```bash
for i in $(seq 1 6); do
.build/xrpld --conf /tmp/xrpld-integration/node$i/xrpld.cfg --start &
echo $! > /tmp/xrpld-integration/node$i/xrpld.pid
done
```
#### Step 6: Wait for consensus
Poll each node until `server_state` = `"proposing"`:
```bash
for port in 5005 5006 5007 5008 5009 5010; do
while true; do
state=$(curl -s http://localhost:$port \
-d '{"method":"server_info"}' \
| jq -r '.result.info.server_state')
echo "Port $port: $state"
[ "$state" = "proposing" ] && break
sleep 5
done
done
```
#### Step 7: Exercise RPC and submit transaction
```bash
# RPC calls
curl -s http://localhost:5005 -d '{"method":"server_info"}'
curl -s http://localhost:5005 -d '{"method":"server_state"}'
curl -s http://localhost:5005 -d '{"method":"ledger","params":[{"ledger_index":"current"}]}'
# Submit transaction
curl -s http://localhost:5005 -d '{
"method": "submit",
"params": [{
"secret": "snoPBrXtMeMyMHUVTgbuqAfg1SUTb",
"tx_json": {
"TransactionType": "Payment",
"Account": "rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh",
"Destination": "rPMh7Pi9ct699iZUTWzJaUMR1o42VEfGqF",
"Amount": "10000000"
}
}]
}'
```
Wait 15 seconds for consensus and batch export.
#### Step 8: Verify in Jaeger
See the "Verification Queries" section below.
---
## Expected Span Catalog
All 17 production span names instrumented across Phases 2-5:
| Span Name | Source File | Phase | Key Attributes | How to Trigger |
| --------------------------- | --------------------- | ----- | ---------------------------------------------------------------------------------------- | ------------------------- |
| `rpc.request` | ServerHandler.cpp:271 | 2 | -- | Any HTTP RPC call |
| `rpc.process` | ServerHandler.cpp:573 | 2 | -- | Any HTTP RPC call |
| `rpc.ws_message` | ServerHandler.cpp:384 | 2 | -- | WebSocket RPC message |
| `rpc.command.<name>` | RPCHandler.cpp:161 | 2 | `xrpl.rpc.command`, `xrpl.rpc.version`, `xrpl.rpc.role` | Any RPC command |
| `tx.process` | NetworkOPs.cpp:1227 | 3 | `xrpl.tx.hash`, `xrpl.tx.local`, `xrpl.tx.path` | Submit transaction |
| `tx.receive` | PeerImp.cpp:1273 | 3 | `xrpl.peer.id` | Peer relays transaction |
| `consensus.proposal.send` | RCLConsensus.cpp:177 | 4 | `xrpl.consensus.round` | Consensus proposing phase |
| `consensus.ledger_close` | RCLConsensus.cpp:282 | 4 | `xrpl.consensus.ledger.seq`, `xrpl.consensus.mode` | Ledger close event |
| `consensus.accept` | RCLConsensus.cpp:395 | 4 | `xrpl.consensus.proposers`, `xrpl.consensus.round_time_ms` | Ledger accepted |
| `consensus.validation.send` | RCLConsensus.cpp:753 | 4 | `xrpl.consensus.ledger.seq`, `xrpl.consensus.proposing` | Validation sent |
| `consensus.accept.apply` | RCLConsensus.cpp:453 | 4 | `xrpl.consensus.close_time`, `close_time_correct`, `close_resolution_ms`, `state` | Ledger apply + close time |
| `tx.apply` | BuildLedger.cpp:88 | 5 | `xrpl.ledger.tx_count`, `xrpl.ledger.tx_failed` | Ledger close (tx set) |
| `ledger.build` | BuildLedger.cpp:31 | 5 | `xrpl.ledger.seq`, `xrpl.ledger.close_time`, `close_time_correct`, `close_resolution_ms` | Ledger build |
| `ledger.validate` | LedgerMaster.cpp:915 | 5 | `xrpl.ledger.seq`, `xrpl.ledger.validations` | Ledger validated |
| `ledger.store` | LedgerMaster.cpp:409 | 5 | `xrpl.ledger.seq` | Ledger stored |
| `peer.proposal.receive` | PeerImp.cpp:1667 | 5 | `xrpl.peer.id`, `xrpl.peer.proposal.trusted` | Peer sends proposal |
| `peer.validation.receive` | PeerImp.cpp:2264 | 5 | `xrpl.peer.id`, `xrpl.peer.validation.trusted` | Peer sends validation |
---
## Verification Queries
### Jaeger API
Base URL: `http://localhost:16686`
```bash
JAEGER="http://localhost:16686"
# List all services
curl -s "$JAEGER/api/services" | jq '.data'
# List operations for rippled
curl -s "$JAEGER/api/services/rippled/operations" | jq '.data'
# Query traces by operation
for op in "rpc.request" "rpc.process" \
"rpc.command.server_info" "rpc.command.server_state" "rpc.command.ledger" \
"tx.process" "tx.receive" "tx.apply" \
"consensus.proposal.send" "consensus.ledger_close" \
"consensus.accept" "consensus.accept.apply" \
"consensus.validation.send" \
"ledger.build" "ledger.validate" "ledger.store" \
"peer.proposal.receive" "peer.validation.receive"; do
count=$(curl -s "$JAEGER/api/traces?service=rippled&operation=$op&limit=5&lookback=1h" \
| jq '.data | length')
printf "%-35s %s traces\n" "$op" "$count"
done
```
### Prometheus API
Base URL: `http://localhost:9090`
```bash
PROM="http://localhost:9090"
# Span call counts (from spanmetrics connector)
curl -s "$PROM/api/v1/query?query=traces_span_metrics_calls_total" \
| jq '.data.result[] | {span: .metric.span_name, count: .value[1]}'
# Latency histogram
curl -s "$PROM/api/v1/query?query=traces_span_metrics_duration_milliseconds_count" \
| jq '.data.result[] | {span: .metric.span_name, count: .value[1]}'
# RPC calls by command
curl -s "$PROM/api/v1/query?query=traces_span_metrics_calls_total{span_name=~\"rpc.command.*\"}" \
| jq '.data.result[] | {command: .metric["xrpl.rpc.command"], count: .value[1]}'
```
### System Metrics (beast::insight via OTel native)
rippled's built-in `beast::insight` framework exports metrics natively via OTLP/HTTP to the OTel Collector
on port 4318 (same endpoint as traces). These appear in Prometheus alongside spanmetrics.
Requires `[insight]` config in `xrpld.cfg`:
```ini
[insight]
server=otel
endpoint=http://localhost:4318/v1/metrics
prefix=rippled
```
Verify system metrics in Prometheus:
```bash
# Ledger age gauge
curl -s "$PROM/api/v1/query?query=rippled_LedgerMaster_Validated_Ledger_Age" | jq '.data.result'
# Peer counts
curl -s "$PROM/api/v1/query?query=rippled_Peer_Finder_Active_Inbound_Peers" | jq '.data.result'
# RPC request counter
curl -s "$PROM/api/v1/query?query=rippled_rpc_requests" | jq '.data.result'
# State accounting
curl -s "$PROM/api/v1/query?query=rippled_State_Accounting_Full_duration" | jq '.data.result'
# Overlay traffic
curl -s "$PROM/api/v1/query?query=rippled_total_Bytes_In" | jq '.data.result'
```
Key system metrics (prefix `rippled_`):
| Metric | Type | Source |
| ------------------------------------- | --------- | ----------------------------------------- |
| `LedgerMaster_Validated_Ledger_Age` | gauge | LedgerMaster.h:373 |
| `LedgerMaster_Published_Ledger_Age` | gauge | LedgerMaster.h:374 |
| `State_Accounting_{Mode}_duration` | gauge | NetworkOPs.cpp:774 |
| `State_Accounting_{Mode}_transitions` | gauge | NetworkOPs.cpp:780 |
| `Peer_Finder_Active_Inbound_Peers` | gauge | PeerfinderManager.cpp:214 |
| `Peer_Finder_Active_Outbound_Peers` | gauge | PeerfinderManager.cpp:215 |
| `Overlay_Peer_Disconnects` | gauge | OverlayImpl.h:557 |
| `job_count` | gauge | JobQueue.cpp:26 |
| `rpc_requests` | counter | ServerHandler.cpp:108 |
| `rpc_time` | histogram | ServerHandler.cpp:110 |
| `rpc_size` | histogram | ServerHandler.cpp:109 |
| `ios_latency` | histogram | Application.cpp:438 |
| `pathfind_fast` | histogram | PathRequests.h:23 |
| `pathfind_full` | histogram | PathRequests.h:24 |
| `ledger_fetches` | counter | InboundLedgers.cpp:44 |
| `ledger_history_mismatch` | counter | LedgerHistory.cpp:16 |
| `warn` | counter | Logic.h:33 |
| `drop` | counter | Logic.h:34 |
| `{category}_Bytes_In/Out` | gauge | OverlayImpl.h:535 (57 traffic categories) |
| `{category}_Messages_In/Out` | gauge | OverlayImpl.h:535 (57 traffic categories) |
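The two templated traffic rows expand to one concrete series per category and direction. A small sketch — `total` appears in the queries above; the other category names here are illustrative placeholders (see OverlayImpl.h for the full set of 57):

```shell
# Expand the templated traffic metrics into concrete Prometheus series
# names. "total" is queried earlier in this doc; "proposal" and
# "validation" are illustrative category names.
for cat in total proposal validation; do
  for dir in In Out; do
    echo "rippled_${cat}_Bytes_${dir}"
  done
done
# → rippled_total_Bytes_In, rippled_total_Bytes_Out, ... (6 lines)
```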
### Grafana
Open http://localhost:3000 (anonymous admin access enabled).
Pre-configured dashboards (span-derived):
- **RPC Performance**: Request rates, latency percentiles by command, top commands, WebSocket rate
- **Transaction Overview**: Transaction processing rates, apply duration, peer relay, failed tx rate
- **Consensus Health**: Consensus round duration, proposer counts, mode tracking, accept heatmap
- **Ledger Operations**: Build/validate/store rates and durations, TX apply metrics
- **Peer Network**: Proposal/validation receive rates, trusted vs untrusted breakdown (requires `trace_peer=1`)
Pre-configured dashboards (system metrics):
- **Node Health (System Metrics)**: Validated/published ledger age, operating mode, I/O latency, job queue
- **Network Traffic (System Metrics)**: Peer counts, disconnects, overlay traffic by category
- **RPC & Pathfinding (System Metrics)**: RPC request rate/time/size, pathfinding duration, resource warnings
Pre-configured datasources:
- **Jaeger**: Trace data at `http://jaeger:16686`
- **Tempo**: Trace data at `http://tempo:3200` (via Grafana Explore)
- **Prometheus**: Metrics at `http://prometheus:9090`
- **Loki**: Log data at `http://loki:3100` (via Grafana Explore)
---
## Test 3: Log-Trace Correlation (Phase 8)
Phase 8 injects `trace_id` and `span_id` into rippled's log output when
a log line is emitted within an active OTel span. This test verifies the
end-to-end log-trace correlation pipeline.
### Step 1: Verify trace_id in log output
After running Test 1 or Test 2 (which generate RPC spans), check the
rippled debug.log for trace context:
```bash
grep 'trace_id=[a-f0-9]\{32\} span_id=[a-f0-9]\{16\}' /path/to/debug.log
```
Expected: log lines with `trace_id=<32hex> span_id=<16hex>` between the
severity code and the message. Example:
```
2024-01-15T10:30:45.123Z RPCHandler:NFO trace_id=abc123def456789012345678abcdef01 span_id=0123456789abcdef Calling server_info
```
Lines emitted outside of an active span (background tasks, startup) will
NOT have trace context — this is expected.
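The expected shape can be checked locally against a sample line (no running node needed); the same pattern then works against a real debug.log:

```shell
# Validate the trace-context log format against a sample line.
# The hex values are illustrative, not real IDs.
line='2024-01-15T10:30:45.123Z RPCHandler:NFO trace_id=abc123def456789012345678abcdef01 span_id=0123456789abcdef Calling server_info'
if echo "$line" | grep -qE 'trace_id=[a-f0-9]{32} span_id=[a-f0-9]{16}'; then
  echo "format ok"
fi
# → format ok
```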
### Step 2: Cross-check trace_id in Jaeger
Extract a `trace_id` from the log and verify it exists in Jaeger:
```bash
TRACE_ID=$(grep -o 'trace_id=[a-f0-9]\{32\}' /path/to/debug.log | head -1 | cut -d= -f2)
echo "Checking trace: $TRACE_ID"
curl -s "http://localhost:16686/api/traces/$TRACE_ID" | jq '.data | length'
```
Expected result: `1` (the trace exists in Jaeger).
### Step 3: Verify Loki log ingestion
The OTel Collector's filelog receiver tails rippled's debug.log and
exports parsed entries to Loki. Verify Loki has received entries:
```bash
# Query Loki for any rippled logs
curl -sG "http://localhost:3100/loki/api/v1/query" \
--data-urlencode 'query={job="rippled"}' \
--data-urlencode 'limit=5' | jq '.data.result | length'
```
Expected: > 0 results.
### Step 4: Verify Grafana Tempo-to-Loki correlation
1. Open Grafana at http://localhost:3000
2. Navigate to **Explore** -> select **Tempo** datasource
3. Search for a trace (e.g., operation `rpc.command.server_info`)
4. Click **"Logs for this trace"** in the trace detail view
5. Verify that Loki log lines appear, filtered by the trace's `trace_id`
### Step 5: Verify Grafana Loki-to-Tempo correlation
1. In Grafana **Explore**, select **Loki** datasource
2. Query: `{job="rippled"} |= "trace_id="`
3. In the log results, click the **TraceID** derived field link
4. Verify it navigates to the full trace in Tempo
### Expected results
| Check | Expected |
| ------------------------------ | ---------------------------------------- |
| `trace_id=` in debug.log | Present in log lines within active spans |
| `span_id=` in debug.log | Present alongside trace_id |
| Logs without active span | No trace_id/span_id fields |
| trace_id in Jaeger | Matches a valid trace |
| Loki log ingestion | Logs visible via LogQL |
| Tempo -> Loki "Logs for trace" | Shows correlated log lines |
| Loki -> Tempo TraceID link | Navigates to correct trace |
---
## Troubleshooting
### No traces in Jaeger
1. Check otel-collector logs:
```bash
docker compose -f docker/telemetry/docker-compose.yml logs otel-collector
```
2. Verify xrpld telemetry config has `enabled=1` and correct endpoint
3. Check that the otel-collector is up. The OTLP root path returns 404 even when the collector is healthy, so query the health-check extension (port 13133) instead:
```bash
curl -sf http://localhost:13133 && echo "reachable"
```
4. Decrease `batch_delay_ms` or `batch_size` in the xrpld config so buffered spans are flushed sooner
### Nodes not reaching "proposing" state
1. Check that all peer ports (51235-51240) are not in use:
```bash
for p in 51235 51236 51237 51238 51239 51240; do
ss -tlnp | grep ":$p " && echo "port $p in use"
done
```
2. Verify `[ips_fixed]` lists all 6 peer ports
3. Verify `validators.txt` has all 6 public keys
4. Check node debug logs: `tail -50 /tmp/xrpld-integration/node1/debug.log`
5. Ensure `[peer_private]` is set to `1` (prevents the nodes from reaching out to the public network)
### Transaction not processing
1. Verify genesis account exists:
```bash
curl -s http://localhost:5005 \
-d '{"method":"account_info","params":[{"account":"rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh"}]}' \
| jq .result.account_data.Balance
```
2. Check submit response for error codes
3. In standalone mode, remember to call `ledger_accept` after submitting
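For reference, the standalone-mode ledger close is just another JSON-RPC call. A payload sketch (port 5005 per the harness defaults):

```shell
# Build the ledger_accept request body; POST it to the admin RPC port
# after each submit, e.g.: curl -s http://localhost:5005 -d "$payload"
payload='{"method":"ledger_accept","params":[{}]}'
echo "$payload" | jq -r .method
# → ledger_accept
```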
### No trace_id in log output (Phase 8)
1. Verify rippled was built with `telemetry=ON` (`-Dtelemetry=ON` in CMake)
2. Verify `enabled=1` in the `[telemetry]` config section
3. Log lines only contain trace context when emitted inside an active span.
Background logs (startup, periodic tasks outside spans) will not have
`trace_id`/`span_id`.
4. Ensure the trace category is enabled (e.g., `trace_rpc=1` for RPC logs)
### No logs in Loki (Phase 8)
1. Verify the log file mount in docker-compose.yml:
```yaml
volumes:
- /tmp/xrpld-integration:/var/log/rippled:ro
```
2. Check OTel Collector logs for filelog receiver errors:
```bash
docker compose -f docker/telemetry/docker-compose.yml logs otel-collector | grep -i "filelog\|loki\|error"
```
3. Verify Loki is running:
```bash
curl -s http://localhost:3100/ready
```
4. Verify the filelog receiver glob pattern matches your log files:
The default pattern is `/var/log/rippled/*/debug.log`
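A shell `case` check confirms whether a given host-side log path lines up with that glob once the mount is applied (the path shown is the integration-test default; adjust to your layout):

```shell
# The collector-side glob /var/log/rippled/*/debug.log corresponds to
# <host mount>/<node dir>/debug.log. Check a candidate host path:
log=/tmp/xrpld-integration/node1/debug.log
case "$log" in
  /tmp/xrpld-integration/*/debug.log) echo "glob matches" ;;
  *) echo "glob does NOT match: $log" ;;
esac
# → glob matches
```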
### Grafana trace-log links not working (Phase 8)
1. Verify `tracesToLogs` is configured in the Tempo datasource provisioning
(`docker/telemetry/grafana/provisioning/datasources/tempo.yaml`)
2. Verify `derivedFields` is configured in the Loki datasource provisioning
(`docker/telemetry/grafana/provisioning/datasources/loki.yaml`)
3. Restart Grafana after changing provisioning files:
```bash
docker compose -f docker/telemetry/docker-compose.yml restart grafana
```
### Spanmetrics not appearing in Prometheus
1. Verify otel-collector config has `spanmetrics` connector
2. Check that the metrics pipeline is configured:
```yaml
service:
pipelines:
metrics:
receivers: [otlp, spanmetrics]
exporters: [prometheus]
```
3. Verify Prometheus can reach collector:
```bash
curl -s http://localhost:9090/api/v1/targets | jq '.data.activeTargets'
```
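If targets are listed but scraping still fails, filter for the unhealthy ones. A canned sample of the `/api/v1/targets` payload (job names illustrative) shows the idea:

```shell
# List any scrape targets that are not healthy. The sample mimics the
# /api/v1/targets response shape; job names are illustrative.
sample='{"data":{"activeTargets":[{"labels":{"job":"otel-collector"},"health":"up"},{"labels":{"job":"rippled"},"health":"down"}]}}'
echo "$sample" | jq -r '.data.activeTargets[] | select(.health != "up") | .labels.job'
# → rippled
```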


@@ -0,0 +1,115 @@
# Docker Compose workload harness for Phase 10 telemetry validation.
#
# Runs a 5-node validator cluster with full OTel telemetry stack:
# - 5 rippled validator nodes (consensus network)
# - OTel Collector (traces + StatsD metrics)
# - Jaeger (trace search UI)
# - Tempo (production trace backend)
# - Prometheus (metrics)
# - Loki (log aggregation for log-trace correlation)
# - Grafana (dashboards + trace/log exploration)
#
# Usage:
# # Start the harness (requires pre-built xrpld image or mount binary):
# docker compose -f docker/telemetry/docker-compose.workload.yaml up -d
#
# # Or use the orchestrator:
# docker/telemetry/workload/run-full-validation.sh
#
# Prerequisites:
# - xrpld binary built with -DXRPL_ENABLE_TELEMETRY=ON
# - Validator keys generated via generate-validator-keys.sh
# - Node configs generated by run-full-validation.sh
#
# Note: No Docker healthchecks are defined here. The orchestrator script
# (run-full-validation.sh) polls each service endpoint directly from the
# host, which avoids issues with missing curl/wget in container images.
services:
# ---------------------------------------------------------------------------
# Telemetry Backend Stack
# ---------------------------------------------------------------------------
otel-collector:
image: otel/opentelemetry-collector-contrib:latest
command: ["--config=/etc/otel-collector-config.yaml"]
ports:
- "4317:4317" # OTLP gRPC
- "4318:4318" # OTLP HTTP
- "8125:8125/udp" # StatsD UDP (beast::insight metrics)
- "8889:8889" # Prometheus metrics endpoint
- "13133:13133" # Health check
volumes:
- ./otel-collector-config.yaml:/etc/otel-collector-config.yaml:ro
# Mount the validation workdir so filelog receiver can tail node logs.
- /tmp/xrpld-validation:/var/log/rippled:ro
depends_on:
- jaeger
- tempo
networks:
- workload-net
jaeger:
image: jaegertracing/all-in-one:latest
environment:
- COLLECTOR_OTLP_ENABLED=true
ports:
- "16686:16686" # Jaeger UI
- "14250:14250" # gRPC
networks:
- workload-net
tempo:
image: grafana/tempo:2.7.2
command: ["-config.file=/etc/tempo.yaml"]
ports:
- "3200:3200" # Tempo HTTP API
volumes:
- ./tempo.yaml:/etc/tempo.yaml:ro
- tempo-data:/var/tempo
networks:
- workload-net
prometheus:
image: prom/prometheus:latest
ports:
- "9090:9090"
volumes:
- ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
depends_on:
- otel-collector
networks:
- workload-net
loki:
image: grafana/loki:3.4.2
ports:
- "3100:3100" # Loki HTTP API
command: ["-config.file=/etc/loki/local-config.yaml"]
networks:
- workload-net
grafana:
image: grafana/grafana:latest
environment:
- GF_AUTH_ANONYMOUS_ENABLED=true
- GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
ports:
- "3000:3000"
volumes:
- ./grafana/provisioning:/etc/grafana/provisioning:ro
- ./grafana/dashboards:/var/lib/grafana/dashboards:ro
depends_on:
- jaeger
- tempo
- prometheus
- loki
networks:
- workload-net
volumes:
tempo-data:
networks:
workload-net:
driver: bridge


@@ -0,0 +1,123 @@
# Docker Compose stack for rippled OpenTelemetry observability.
#
# Provides services for local development:
# - otel-collector: receives OTLP traces from rippled, batches and
# forwards them to Jaeger and Tempo. Also tails rippled log files
# via filelog receiver and exports to Loki. Listens on ports
# 4317 (gRPC), 4318 (HTTP), and 8125 (StatsD UDP).
# - jaeger: all-in-one tracing backend with UI on port 16686.
# - tempo: Grafana Tempo tracing backend, queryable via Grafana Explore
# on port 3000. Recommended for production (S3/GCS storage, TraceQL).
# - loki: Grafana Loki log aggregation backend for centralized log
# ingestion and log-trace correlation (Phase 8).
# - grafana: dashboards on port 3000, pre-configured with Jaeger, Tempo,
# Prometheus, and Loki datasources.
#
# Usage:
# docker compose -f docker/telemetry/docker-compose.yml up -d
#
# Configure rippled to export traces by adding to xrpld.cfg:
# [telemetry]
# enabled=1
# endpoint=http://localhost:4318/v1/traces
version: "3.8"
services:
otel-collector:
image: otel/opentelemetry-collector-contrib:latest
command: ["--config=/etc/otel-collector-config.yaml"]
ports:
- "4317:4317" # OTLP gRPC
- "4318:4318" # OTLP HTTP (traces + native OTel metrics)
- "8889:8889" # Prometheus metrics (spanmetrics + OTLP)
- "13133:13133" # Health check
# StatsD UDP port removed — beast::insight now uses native OTLP.
# Uncomment if using server=statsd fallback:
# - "8125:8125/udp"
volumes:
- ./otel-collector-config.yaml:/etc/otel-collector-config.yaml:ro
# Phase 8: Mount rippled log directory for filelog receiver.
# The integration test writes logs to /tmp/xrpld-integration/;
# mount it read-only so the collector can tail debug.log files.
- /tmp/xrpld-integration:/var/log/rippled:ro
depends_on:
- jaeger
- tempo
- loki
networks:
- rippled-telemetry
jaeger:
image: jaegertracing/all-in-one:latest
environment:
- COLLECTOR_OTLP_ENABLED=true
ports:
- "16686:16686" # Jaeger UI
- "14250:14250" # gRPC
networks:
- rippled-telemetry
tempo:
image: grafana/tempo:2.7.2
command: ["-config.file=/etc/tempo.yaml"]
ports:
- "3200:3200" # Tempo HTTP API (health, query)
volumes:
- ./tempo.yaml:/etc/tempo.yaml:ro
- tempo-data:/var/tempo
networks:
- rippled-telemetry
# Phase 8: Grafana Loki for centralized log ingestion and log-trace
# correlation. Loki 3.x supports native OTLP ingestion, so the OTel
# Collector exports via otlphttp to Loki's /otlp endpoint.
# Query logs via Grafana Explore -> Loki at http://localhost:3000.
loki:
image: grafana/loki:3.4.2
ports:
- "3100:3100"
command: -config.file=/etc/loki/local-config.yaml
volumes:
- loki-data:/loki
networks:
- rippled-telemetry
prometheus:
image: prom/prometheus:latest
ports:
- "9090:9090"
volumes:
- ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
- prometheus-data:/prometheus
depends_on:
- otel-collector
networks:
- rippled-telemetry
grafana:
image: grafana/grafana:latest
environment:
- GF_AUTH_ANONYMOUS_ENABLED=true
- GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
ports:
- "3000:3000"
volumes:
- ./grafana/provisioning:/etc/grafana/provisioning:ro
- ./grafana/dashboards:/var/lib/grafana/dashboards:ro
depends_on:
- jaeger
- tempo
- prometheus
- loki
networks:
- rippled-telemetry
volumes:
tempo-data:
prometheus-data:
loki-data:
networks:
rippled-telemetry:
driver: bridge


@@ -0,0 +1,454 @@
{
"annotations": {
"list": []
},
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 1,
"id": null,
"links": [],
"panels": [
{
"title": "Consensus Round Duration",
"description": "p95 and p50 duration of consensus accept rounds. The consensus.accept span (RCLConsensus.cpp:395) measures the time to process an accepted ledger including transaction application and state finalization. The span carries xrpl.consensus.proposers and xrpl.consensus.round_time_ms attributes. Normal range is 3-6 seconds on mainnet.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.95, sum by (le, exported_instance) (rate(traces_span_metrics_duration_milliseconds_bucket{exported_instance=~\"$node\", span_name=\"consensus.accept\"}[5m])))",
"legendFormat": "P95 Round Duration [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.50, sum by (le, exported_instance) (rate(traces_span_metrics_duration_milliseconds_bucket{exported_instance=~\"$node\", span_name=\"consensus.accept\"}[5m])))",
"legendFormat": "P50 Round Duration [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "ms",
"custom": {
"axisLabel": "Duration (ms)",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Consensus Proposals Sent Rate",
"description": "Rate at which this node sends consensus proposals to the network. Sourced from the consensus.proposal.send span (RCLConsensus.cpp:177) which fires each time the node proposes a transaction set. The span carries xrpl.consensus.round identifying the consensus round number. A healthy proposing node should show steady proposal output.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum by (exported_instance) (rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", span_name=\"consensus.proposal.send\"}[5m]))",
"legendFormat": "Proposals / Sec [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"custom": {
"axisLabel": "Proposals / Sec",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Ledger Close Duration",
"description": "p95 duration of the ledger close event. The consensus.ledger_close span (RCLConsensus.cpp:282) measures the time from when consensus triggers a ledger close to completion. Carries xrpl.consensus.ledger.seq and xrpl.consensus.mode attributes. Compare with Consensus Round Duration to understand how close timing relates to overall round time.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.95, sum by (le, exported_instance) (rate(traces_span_metrics_duration_milliseconds_bucket{exported_instance=~\"$node\", span_name=\"consensus.ledger_close\"}[5m])))",
"legendFormat": "P95 Close Duration [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "ms",
"custom": {
"axisLabel": "Duration (ms)",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Validation Send Rate",
"description": "Rate at which this node sends ledger validations to the network. Sourced from the consensus.validation.send span (RCLConsensus.cpp:753). Each validation confirms the node has fully validated a ledger. The span carries xrpl.consensus.ledger.seq and xrpl.consensus.proposing. Should closely track the ledger close rate when the node is healthy.",
"type": "stat",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum by (exported_instance) (rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", span_name=\"consensus.validation.send\"}[5m]))",
"legendFormat": "Validations / Sec [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops"
},
"overrides": []
}
},
{
"title": "Ledger Apply Duration (doAccept)",
"description": "Time spent applying the consensus result to build a new ledger. Measured by the consensus.accept.apply span in doAccept().",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 16
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.95, sum by (le, exported_instance) (rate(traces_span_metrics_duration_milliseconds_bucket{exported_instance=~\"$node\", span_name=\"consensus.accept.apply\"}[5m])))",
"legendFormat": "P95 Apply Duration [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.50, sum by (le, exported_instance) (rate(traces_span_metrics_duration_milliseconds_bucket{exported_instance=~\"$node\", span_name=\"consensus.accept.apply\"}[5m])))",
"legendFormat": "P50 Apply Duration [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "ms",
"custom": {
"axisLabel": "Duration (ms)",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Close Time Agreement",
"description": "Rate of close time agreement vs disagreement across consensus rounds. Based on xrpl.consensus.close_time_correct attribute (true = validators agreed, false = agreed to disagree per avCT_CONSENSUS_PCT).",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 16
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum by (exported_instance) (rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", span_name=\"consensus.accept.apply\"}[5m]))",
"legendFormat": "Total Rounds / Sec [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"custom": {
"axisLabel": "Rounds / Sec",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Consensus Mode Over Time",
"description": "Breakdown of consensus ledger close events by the node's consensus mode (Proposing, Observing, Wrong Ledger, Switched Ledger). Grouped by the xrpl.consensus.mode span attribute from consensus.ledger_close. A healthy validator should be predominantly in Proposing mode. Frequent Wrong Ledger or Switched Ledger indicates sync issues.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 24
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum by (xrpl_consensus_mode, exported_instance) (rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", xrpl_consensus_mode=~\"$consensus_mode\", span_name=\"consensus.ledger_close\"}[5m]))",
"legendFormat": "{{xrpl_consensus_mode}} [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"custom": {
"axisLabel": "Events / Sec",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Accept vs Close Rate",
"description": "Compares the rate of consensus.accept (ledger accepted after consensus) vs consensus.ledger_close (ledger close initiated). These should track closely in a healthy network. A divergence means some close events are not completing the accept phase, potentially indicating consensus failures or timeouts.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 24
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum by (exported_instance) (rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", span_name=\"consensus.accept\"}[5m]))",
"legendFormat": "Accepts / Sec [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "sum by (exported_instance) (rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", span_name=\"consensus.ledger_close\"}[5m]))",
"legendFormat": "Closes / Sec [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"custom": {
"axisLabel": "Events / Sec",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Validation vs Close Rate",
"description": "Compares the rate of consensus.validation.send vs consensus.ledger_close. Each validated ledger should produce one validation message. If validations lag behind closes, the node may be falling behind on validation or experiencing issues with the validation pipeline.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 32
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum by (exported_instance) (rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", span_name=\"consensus.validation.send\"}[5m]))",
"legendFormat": "Validations / Sec [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "sum by (exported_instance) (rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", span_name=\"consensus.ledger_close\"}[5m]))",
"legendFormat": "Closes / Sec [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"custom": {
"axisLabel": "Events / Sec",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Consensus Accept Duration Heatmap",
"description": "Heatmap showing the distribution of consensus.accept span durations across histogram buckets over time. Each cell represents how many accept events fell into that duration bucket in a 5m window. Useful for detecting outlier consensus rounds that take abnormally long.",
"type": "heatmap",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 32
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
},
"yAxis": {
"axisLabel": "Duration (ms)"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum(increase(traces_span_metrics_duration_milliseconds_bucket{exported_instance=~\"$node\", span_name=\"consensus.accept\"}[5m])) by (le)",
"legendFormat": "{{le}}",
"format": "heatmap"
}
]
}
],
"schemaVersion": 39,
"tags": ["rippled", "consensus", "telemetry"],
"templating": {
"list": [
{
"name": "node",
"label": "Node",
"description": "Filter by rippled node (service.instance.id — e.g. Node-1)",
"type": "query",
"query": "label_values(traces_span_metrics_calls_total, exported_instance)",
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"includeAll": true,
"allValue": ".*",
"current": {
"text": "All",
"value": "$__all"
},
"multi": true,
"refresh": 2,
"sort": 1
},
{
"name": "consensus_mode",
"label": "Consensus Mode",
"description": "Filter by consensus mode (Proposing, Observing, Wrong Ledger, Switched Ledger)",
"type": "query",
"query": "label_values(traces_span_metrics_calls_total{span_name=\"consensus.ledger_close\"}, xrpl_consensus_mode)",
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"includeAll": true,
"allValue": ".*",
"current": {
"text": "All",
"value": "$__all"
},
"multi": true,
"refresh": 2,
"sort": 1
}
]
},
"time": {
"from": "now-1h",
"to": "now"
},
"title": "Consensus Health",
"uid": "rippled-consensus"
}


@@ -0,0 +1,353 @@
{
"annotations": {
"list": []
},
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 1,
"id": null,
"links": [],
"panels": [
{
"title": "Ledger Build Rate",
"description": "Rate at which new ledgers are being built. The ledger.build span (BuildLedger.cpp:31) wraps the entire buildLedgerImpl() function which creates a new ledger from a parent, applies transactions, flushes SHAMap nodes, and sets the accepted state. Should match the consensus close rate (~0.25/sec on mainnet with ~4s rounds).",
"type": "stat",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum by (exported_instance) (rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", span_name=\"ledger.build\"}[5m]))",
"legendFormat": "Builds / Sec [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops"
},
"overrides": []
}
},
{
"title": "Ledger Build Duration",
"description": "p95 and p50 duration of ledger builds. Measures the full buildLedgerImpl() call including transaction application, SHAMap flushing, and ledger acceptance. The span records xrpl.ledger.seq as an attribute. Long build times indicate expensive transaction sets or I/O pressure from SHAMap flushes.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.95, sum by (le, exported_instance) (rate(traces_span_metrics_duration_milliseconds_bucket{exported_instance=~\"$node\", span_name=\"ledger.build\"}[5m])))",
"legendFormat": "P95 Build Duration [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.50, sum by (le, exported_instance) (rate(traces_span_metrics_duration_milliseconds_bucket{exported_instance=~\"$node\", span_name=\"ledger.build\"}[5m])))",
"legendFormat": "P50 Build Duration [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "ms",
"custom": {
"axisLabel": "Duration (ms)",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Ledger Validation Rate",
"description": "Rate at which ledgers pass the validation threshold and are accepted as fully validated. The ledger.validate span (LedgerMaster.cpp:915) fires in checkAccept() only after the ledger receives sufficient trusted validations (>= quorum). Records xrpl.ledger.seq and xrpl.ledger.validations (the number of validations received).",
"type": "stat",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum by (exported_instance) (rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", span_name=\"ledger.validate\"}[5m]))",
"legendFormat": "Validations / Sec [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops"
},
"overrides": []
}
},
{
"title": "Ledger Build Duration Heatmap",
"description": "Heatmap showing the distribution of ledger.build durations across histogram buckets over time. Each cell represents the count of ledger builds that fell into that duration bucket in a 5m window. Useful for spotting occasional slow ledger builds that may not appear in percentile charts.",
"type": "heatmap",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
},
"yAxis": {
"axisLabel": "Duration (ms)"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum(increase(traces_span_metrics_duration_milliseconds_bucket{exported_instance=~\"$node\", span_name=\"ledger.build\"}[5m])) by (le)",
"legendFormat": "{{le}}",
"format": "heatmap"
}
],
"fieldConfig": {
"defaults": {
"unit": "ms"
},
"overrides": []
}
},
{
"title": "Transaction Apply Duration",
"description": "p95 and p50 duration of applying the consensus transaction set during ledger building. The tx.apply span (BuildLedger.cpp:88) wraps applyTransactions() which iterates through the CanonicalTXSet with multiple retry passes. Records xrpl.ledger.tx_count (successful) and xrpl.ledger.tx_failed (failed) as attributes.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.95, sum by (le, exported_instance) (rate(traces_span_metrics_duration_milliseconds_bucket{exported_instance=~\"$node\", span_name=\"tx.apply\"}[5m])))",
"legendFormat": "P95 tx.apply [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.50, sum by (le, exported_instance) (rate(traces_span_metrics_duration_milliseconds_bucket{exported_instance=~\"$node\", span_name=\"tx.apply\"}[5m])))",
"legendFormat": "P50 tx.apply [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "ms",
"custom": {
"axisLabel": "Duration (ms)",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Transaction Apply Rate",
"description": "Rate of tx.apply span invocations, reflecting how frequently the transaction application phase runs during ledger building. Each ledger build triggers one tx.apply call. Should closely match the ledger build rate.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum by (exported_instance) (rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", span_name=\"tx.apply\"}[5m]))",
"legendFormat": "tx.apply / Sec [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"custom": {
"axisLabel": "Operations / Sec",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Ledger Store Rate",
"description": "Rate at which ledgers are stored into the ledger history. The ledger.store span (LedgerMaster.cpp:409) wraps storeLedger() which inserts the ledger into the LedgerHistory cache. Records xrpl.ledger.seq. Should match the ledger build rate under normal operation.",
"type": "stat",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 24
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum by (exported_instance) (rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", span_name=\"ledger.store\"}[5m]))",
"legendFormat": "Stores / Sec [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops"
},
"overrides": []
}
},
{
"title": "Build vs Close Duration",
"description": "Compares p95 durations of ledger.build (the actual ledger construction in BuildLedger.cpp) vs consensus.ledger_close (the consensus close event in RCLConsensus.cpp). Build time is a subset of close time. A large gap between them indicates overhead in the consensus pipeline outside of ledger construction itself.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 24
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.95, sum by (le, exported_instance) (rate(traces_span_metrics_duration_milliseconds_bucket{exported_instance=~\"$node\", span_name=\"ledger.build\"}[5m])))",
"legendFormat": "P95 ledger.build [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.95, sum by (le, exported_instance) (rate(traces_span_metrics_duration_milliseconds_bucket{exported_instance=~\"$node\", span_name=\"consensus.ledger_close\"}[5m])))",
"legendFormat": "P95 consensus.ledger_close [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "ms",
"custom": {
"axisLabel": "Duration (ms)",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
}
],
"schemaVersion": 39,
"tags": ["rippled", "ledger", "telemetry"],
"templating": {
"list": [
{
"name": "node",
"label": "Node",
"description": "Filter by rippled node (service.instance.id \u2014 e.g. Node-1)",
"type": "query",
"query": "label_values(traces_span_metrics_calls_total, exported_instance)",
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"includeAll": true,
"allValue": ".*",
"current": {
"text": "All",
"value": "$__all"
},
"multi": true,
"refresh": 2,
"sort": 1
}
]
},
"time": {
"from": "now-1h",
"to": "now"
},
"title": "Ledger Operations",
"uid": "rippled-ledger-ops"
}
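The p95/p50 panels in this dashboard all use the same `histogram_quantile`-over-`rate` pattern. A simplified Python sketch of the bucket interpolation Prometheus performs (ignoring rate extrapolation and native histograms; the bucket values below are made up for illustration):

```python
def histogram_quantile(q, buckets):
    # buckets: cumulative (upper_bound, count) pairs sorted by bound,
    # ending with (float("inf"), total) as in a classic Prometheus histogram.
    total = buckets[-1][1]
    if total == 0:
        return float("nan")
    rank = q * total
    prev_le, prev_count = 0.0, 0.0
    for le, count in buckets:
        if count >= rank:
            if le == float("inf"):
                # Quantile falls in the +Inf bucket: Prometheus returns
                # the highest finite bucket bound.
                return prev_le
            width = count - prev_count
            if width == 0:
                return le
            # Linear interpolation inside the bucket.
            return prev_le + (le - prev_le) * (rank - prev_count) / width
        prev_le, prev_count = le, count
    return float("nan")

buckets = [(5.0, 10.0), (10.0, 30.0), (50.0, 45.0), (float("inf"), 50.0)]
print(histogram_quantile(0.5, buckets))  # 8.75 (interpolated in the 5-10 ms bucket)
```

This is why per-instance quantiles require `sum by (le, exported_instance)`: merging the `le` buckets first, then interpolating, as the panel queries do.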


@@ -0,0 +1,227 @@
{
"annotations": {
"list": []
},
"description": "Requires trace_peer=1 in the [telemetry] config section.",
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 1,
"id": null,
"links": [],
"panels": [
{
"title": "Peer Proposal Receive Rate",
"description": "Rate of consensus proposals received from network peers. The peer.proposal.receive span (PeerImp.cpp:1667) fires in onMessage(TMProposeSet) for each incoming proposal. Records xrpl.peer.id (sending peer) and xrpl.peer.proposal.trusted (whether the proposer is in our UNL). Requires trace_peer=1 in the telemetry config.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum by (exported_instance) (rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", span_name=\"peer.proposal.receive\"}[5m]))",
"legendFormat": "Proposals Received / Sec [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"custom": {
"axisLabel": "Proposals / Sec",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Peer Validation Receive Rate",
"description": "Rate of ledger validations received from network peers. The peer.validation.receive span (PeerImp.cpp:2264) fires in onMessage(TMValidation) for each incoming validation message. Records xrpl.peer.id (sending peer) and xrpl.peer.validation.trusted (whether the validator is trusted). Requires trace_peer=1 in the telemetry config.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum by (exported_instance) (rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", span_name=\"peer.validation.receive\"}[5m]))",
"legendFormat": "Validations Received / Sec [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"custom": {
"axisLabel": "Validations / Sec",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Proposals Trusted vs Untrusted",
"description": "Pie chart showing the ratio of proposals received from trusted validators (in our UNL) vs untrusted validators. Grouped by the xrpl.peer.proposal.trusted span attribute (true/false). A healthy node connected to a well-configured UNL should see a significant portion of trusted proposals. Note: proposals that fail early validation may not have the trusted attribute set.",
"type": "piechart",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum by (xrpl_peer_proposal_trusted, exported_instance) (rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", xrpl_peer_proposal_trusted=~\"$proposal_trusted\", span_name=\"peer.proposal.receive\"}[5m]))",
"legendFormat": "Trusted = {{xrpl_peer_proposal_trusted}} [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops"
},
"overrides": []
}
},
{
"title": "Validations Trusted vs Untrusted",
"description": "Pie chart showing the ratio of validations received from trusted validators (in our UNL) vs untrusted validators. Grouped by the xrpl.peer.validation.trusted span attribute (true/false). Monitoring this helps detect if the node is receiving validations from the expected set of trusted validators. Note: validations that fail early checks may not have the trusted attribute set.",
"type": "piechart",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum by (xrpl_peer_validation_trusted, exported_instance) (rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", xrpl_peer_validation_trusted=~\"$validation_trusted\", span_name=\"peer.validation.receive\"}[5m]))",
"legendFormat": "Trusted = {{xrpl_peer_validation_trusted}} [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops"
},
"overrides": []
}
}
],
"schemaVersion": 39,
"tags": ["rippled", "peer", "telemetry"],
"templating": {
"list": [
{
"name": "node",
"label": "Node",
"description": "Filter by rippled node (service.instance.id \u2014 e.g. Node-1)",
"type": "query",
"query": "label_values(traces_span_metrics_calls_total, exported_instance)",
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"includeAll": true,
"allValue": ".*",
"current": {
"text": "All",
"value": "$__all"
},
"multi": true,
"refresh": 2,
"sort": 1
},
{
"name": "proposal_trusted",
"label": "Proposal Trusted",
"description": "Filter by proposal trust status (true = from trusted validator)",
"type": "query",
"query": "label_values(traces_span_metrics_calls_total{span_name=\"peer.proposal.receive\"}, xrpl_peer_proposal_trusted)",
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"includeAll": true,
"allValue": ".*",
"current": {
"text": "All",
"value": "$__all"
},
"multi": true,
"refresh": 2,
"sort": 1
},
{
"name": "validation_trusted",
"label": "Validation Trusted",
"description": "Filter by validation trust status (true = from trusted validator)",
"type": "query",
"query": "label_values(traces_span_metrics_calls_total{span_name=\"peer.validation.receive\"}, xrpl_peer_validation_trusted)",
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"includeAll": true,
"allValue": ".*",
"current": {
"text": "All",
"value": "$__all"
},
"multi": true,
"refresh": 2,
"sort": 1
}
]
},
"time": {
"from": "now-1h",
"to": "now"
},
"title": "Peer Network",
"uid": "rippled-peer-net"
}
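The two pie charts above group the receive-rate series by a boolean span attribute. The trusted fraction Grafana ends up rendering can be sketched as follows (the label values mirror the `xrpl_peer_proposal_trusted` attribute in the queries; the helper itself is illustrative, not part of the repo):

```python
def trusted_ratio(rates_by_label):
    # rates_by_label: map from the trusted-attribute value ("true"/"false")
    # to that series' per-second receive rate.
    trusted = rates_by_label.get("true", 0.0)
    total = sum(rates_by_label.values())
    # Guard against an empty window (no proposals/validations received).
    return trusted / total if total else 0.0

print(trusted_ratio({"true": 3.0, "false": 1.0}))  # 0.75
```

Note the caveat in the panel descriptions: series where the attribute was never set (early validation failures) fall outside both slices.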


@@ -0,0 +1,343 @@
{
"annotations": {
"list": []
},
"description": "Fee market dynamics: TxQ depth/capacity, fee escalation levels, and load factor breakdown. Sourced from OTel MetricsRegistry observable gauges (Phase 9).",
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 1,
"id": null,
"links": [],
"panels": [
{
"title": "Transaction Queue Depth",
"description": "Current number of transactions waiting in the queue vs. maximum capacity. Sourced from MetricsRegistry txq_metrics observable gauge with metric=txq_count and metric=txq_max_size.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_txq_metrics{exported_instance=~\"$node\", metric=\"txq_count\"}",
"legendFormat": "Queue Depth [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_txq_metrics{exported_instance=~\"$node\", metric=\"txq_max_size\"}",
"legendFormat": "Max Capacity [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "none",
"custom": {
"axisLabel": "Transactions",
"drawStyle": "line",
"lineWidth": 2,
"fillOpacity": 10
},
"color": {
"mode": "palette-classic"
}
},
"overrides": []
}
},
{
"title": "Transactions Per Ledger",
"description": "Transactions in the current open ledger vs. expected per-ledger count. Sourced from txq_metrics with metric=txq_in_ledger and metric=txq_per_ledger.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_txq_metrics{exported_instance=~\"$node\", metric=\"txq_in_ledger\"}",
"legendFormat": "In Ledger [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_txq_metrics{exported_instance=~\"$node\", metric=\"txq_per_ledger\"}",
"legendFormat": "Expected Per Ledger [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "none",
"custom": {
"axisLabel": "Transactions",
"drawStyle": "line",
"lineWidth": 2,
"fillOpacity": 10
},
"color": {
"mode": "palette-classic"
}
},
"overrides": []
}
},
{
"title": "Fee Escalation Levels",
"description": "Fee levels that control transaction queue admission. Reference fee level is the baseline; open ledger fee level triggers escalation. Sourced from txq_metrics observable gauge.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_txq_metrics{exported_instance=~\"$node\", metric=\"txq_reference_fee_level\"}",
"legendFormat": "Reference Fee Level [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_txq_metrics{exported_instance=~\"$node\", metric=\"txq_min_processing_fee_level\"}",
"legendFormat": "Min Processing Fee Level [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_txq_metrics{exported_instance=~\"$node\", metric=\"txq_med_fee_level\"}",
"legendFormat": "Median Fee Level [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_txq_metrics{exported_instance=~\"$node\", metric=\"txq_open_ledger_fee_level\"}",
"legendFormat": "Open Ledger Fee Level [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "none",
"custom": {
"axisLabel": "Fee Level",
"drawStyle": "line",
"lineWidth": 2,
"fillOpacity": 5,
"scaleDistribution": {
"type": "log",
"log": 2
}
},
"color": {
"mode": "palette-classic"
}
},
"overrides": []
}
},
{
"title": "Load Factor Breakdown",
"description": "Decomposed load factor components: server (max of local, net, cluster), fee escalation, fee queue, and combined. Values are unitless multipliers where 1.0 = no load. Sourced from load_factor_metrics observable gauge.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_load_factor_metrics{exported_instance=~\"$node\", metric=\"load_factor\"}",
"legendFormat": "Combined Load Factor [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_load_factor_metrics{exported_instance=~\"$node\", metric=\"load_factor_server\"}",
"legendFormat": "Server [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_load_factor_metrics{exported_instance=~\"$node\", metric=\"load_factor_fee_escalation\"}",
"legendFormat": "Fee Escalation [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_load_factor_metrics{exported_instance=~\"$node\", metric=\"load_factor_fee_queue\"}",
"legendFormat": "Fee Queue [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "none",
"custom": {
"axisLabel": "Multiplier",
"drawStyle": "line",
"lineWidth": 2,
"fillOpacity": 5
},
"color": {
"mode": "palette-classic"
},
"thresholds": {
"steps": [
{
"color": "green",
"value": null
},
{
"color": "yellow",
"value": 2
},
{
"color": "red",
"value": 10
}
]
}
},
"overrides": []
}
},
{
"title": "Load Factor Components",
        "description": "Individual load factor contributors: local server load, network load, and cluster load. These only differ from 1.0 under load conditions. Sourced from load_factor_metrics observable gauge.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_load_factor_metrics{exported_instance=~\"$node\", metric=\"load_factor_local\"}",
"legendFormat": "Local [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_load_factor_metrics{exported_instance=~\"$node\", metric=\"load_factor_net\"}",
"legendFormat": "Network [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_load_factor_metrics{exported_instance=~\"$node\", metric=\"load_factor_cluster\"}",
"legendFormat": "Cluster [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "none",
"custom": {
"axisLabel": "Multiplier",
"drawStyle": "line",
"lineWidth": 2,
"fillOpacity": 5
},
"color": {
"mode": "palette-classic"
}
},
"overrides": []
}
}
],
"schemaVersion": 39,
"tags": ["rippled", "otel", "fee-market"],
"templating": {
"list": [
{
"name": "node",
"label": "Node",
"description": "Filter by rippled node (service.instance.id)",
"type": "query",
"query": "label_values(exported_instance)",
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"includeAll": true,
"allValue": ".*",
"current": {
"text": "All",
"value": "$__all"
},
"multi": true,
"refresh": 2,
"sort": 1
}
]
},
"time": {
"from": "now-1h",
"to": "now"
},
"timepicker": {},
"timezone": "browser",
"title": "Fee Market & TxQ",
"uid": "rippled-fee-market",
"version": 1
}
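The Load Factor Breakdown panel describes the server component as the max of local, net, and cluster load. A sketch of how the plotted series plausibly relate, assuming a max-style combination throughout (the exact formula lives in rippled and may differ; this is not the implementation):

```python
def combined_load_factor(local, net, cluster, fee_escalation, fee_queue):
    # Assumption mirroring the panel description: the server component is
    # the max of local/net/cluster, and the combined factor is the max of
    # the server and fee-based multipliers. 1.0 means no load.
    server = max(local, net, cluster)
    return max(server, fee_escalation, fee_queue)

print(combined_load_factor(1.0, 1.0, 1.0, 4.0, 1.0))  # 4.0: fee escalation dominates
```

Under this reading, the combined line on the panel should always ride on top of whichever component series is currently highest.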


@@ -0,0 +1,395 @@
{
"annotations": {
"list": []
},
"description": "Job queue analysis: per-job-type throughput rates, queue wait times, and execution times. Sourced from OTel MetricsRegistry synchronous counters and histograms (Phase 9).",
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 1,
"id": null,
"links": [],
"panels": [
{
"title": "Job Throughput Rate (Per Second)",
"description": "Rate of jobs queued, started, and finished across all job types. Computed as rate() over the OTel counter values. High queue rates with low finish rates indicate backlog.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum by (exported_instance) (rate(rippled_job_queued_total{exported_instance=~\"$node\", job_type=~\"$job_type\"}[5m]))",
"legendFormat": "Queued/s [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "sum by (exported_instance) (rate(rippled_job_started_total{exported_instance=~\"$node\", job_type=~\"$job_type\"}[5m]))",
"legendFormat": "Started/s [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "sum by (exported_instance) (rate(rippled_job_finished_total{exported_instance=~\"$node\", job_type=~\"$job_type\"}[5m]))",
"legendFormat": "Finished/s [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"custom": {
"drawStyle": "line",
"lineWidth": 2,
"fillOpacity": 10,
"axisLabel": "Operations / Sec",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
},
"color": {
"mode": "palette-classic"
}
},
"overrides": []
}
},
{
"title": "Per-Job-Type Queued Rate",
"description": "Rate of jobs queued broken down by job_type label. Identifies which job types contribute most to queue activity.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
},
"legend": {
"displayMode": "table",
"placement": "right",
"calcs": ["mean", "max"]
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "topk(10, rate(rippled_job_queued_total{exported_instance=~\"$node\", job_type=~\"$job_type\"}[5m]))",
"legendFormat": "{{job_type}} [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"custom": {
"drawStyle": "line",
"lineWidth": 1,
"fillOpacity": 5,
"axisLabel": "Operations / Sec",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
},
"color": {
"mode": "palette-classic"
}
},
"overrides": []
}
},
{
"title": "Per-Job-Type Finish Rate",
"description": "Rate of jobs completing broken down by job_type. Compare with queued rate to identify backlog per type.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
},
"legend": {
"displayMode": "table",
"placement": "right",
"calcs": ["mean", "max"]
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "topk(10, rate(rippled_job_finished_total{exported_instance=~\"$node\", job_type=~\"$job_type\"}[5m]))",
"legendFormat": "{{job_type}} [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"custom": {
"drawStyle": "line",
"lineWidth": 1,
"fillOpacity": 5,
"axisLabel": "Operations / Sec",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
},
"color": {
"mode": "palette-classic"
}
},
"overrides": []
}
},
{
"title": "Job Queue Wait Time (P50, P95, P99)",
"description": "Histogram quantiles for time jobs spend waiting in the queue before execution starts. High values indicate thread pool saturation.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.50, sum by (le, exported_instance) (rate(rippled_job_queued_duration_us_bucket{exported_instance=~\"$node\", job_type=~\"$job_type\"}[5m])))",
"legendFormat": "P50 [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.95, sum by (le, exported_instance) (rate(rippled_job_queued_duration_us_bucket{exported_instance=~\"$node\", job_type=~\"$job_type\"}[5m])))",
"legendFormat": "P95 [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.99, sum by (le, exported_instance) (rate(rippled_job_queued_duration_us_bucket{exported_instance=~\"$node\", job_type=~\"$job_type\"}[5m])))",
"legendFormat": "P99 [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "us",
"custom": {
"drawStyle": "line",
"lineWidth": 2,
"fillOpacity": 5,
"axisLabel": "Duration (μs)",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
},
"color": {
"mode": "palette-classic"
}
},
"overrides": []
}
},
{
"title": "Job Execution Time (P50, P95, P99)",
"description": "Histogram quantiles for actual job execution time. High values indicate expensive operations or resource contention.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.50, sum by (le, exported_instance) (rate(rippled_job_running_duration_us_bucket{exported_instance=~\"$node\", job_type=~\"$job_type\"}[5m])))",
"legendFormat": "P50 [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.95, sum by (le, exported_instance) (rate(rippled_job_running_duration_us_bucket{exported_instance=~\"$node\", job_type=~\"$job_type\"}[5m])))",
"legendFormat": "P95 [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.99, sum by (le, exported_instance) (rate(rippled_job_running_duration_us_bucket{exported_instance=~\"$node\", job_type=~\"$job_type\"}[5m])))",
"legendFormat": "P99 [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "us",
"custom": {
"drawStyle": "line",
"lineWidth": 2,
"fillOpacity": 5,
"axisLabel": "Duration (μs)",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
},
"color": {
"mode": "palette-classic"
}
},
"overrides": []
}
},
{
"title": "Per-Job-Type Execution Time (P95)",
"description": "95th percentile execution time broken down by job type. Identifies the slowest job types.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 24
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
},
"legend": {
"displayMode": "table",
"placement": "right",
"calcs": ["mean", "max"]
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "topk(10, histogram_quantile(0.95, sum by (le, job_type, exported_instance) (rate(rippled_job_running_duration_us_bucket{exported_instance=~\"$node\", job_type=~\"$job_type\"}[5m]))))",
"legendFormat": "{{job_type}} [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "us",
"custom": {
"drawStyle": "line",
"lineWidth": 1,
"fillOpacity": 5,
"axisLabel": "Duration (μs)",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
},
"color": {
"mode": "palette-classic"
}
},
"overrides": []
}
}
],
"schemaVersion": 39,
"tags": ["rippled", "otel", "job-queue"],
"templating": {
"list": [
{
"name": "node",
"label": "Node",
"description": "Filter by rippled node (service.instance.id)",
"type": "query",
"query": "label_values(exported_instance)",
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"includeAll": true,
"allValue": ".*",
"current": {
"text": "All",
"value": "$__all"
},
"multi": true,
"refresh": 2,
"sort": 1
},
{
"name": "job_type",
"label": "Job Type",
"description": "Filter by job type",
"type": "query",
"query": "label_values(rippled_job_queued_total, job_type)",
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"includeAll": true,
"allValue": ".*",
"current": {
"text": "All",
"value": "$__all"
},
"multi": true,
"refresh": 2,
"sort": 1
}
]
},
"time": {
"from": "now-1h",
"to": "now"
},
"timepicker": {},
"timezone": "browser",
"title": "Job Queue Analysis",
"uid": "rippled-job-queue",
"version": 1
}
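The throughput panels above depend on `rate()` over monotonically increasing OTel counters. A minimal Python approximation of what `rate()` computes per series, assuming made-up sample values (real Prometheus `rate()` additionally extrapolates to the window boundaries):

```python
def simple_rate(samples):
    # samples: (timestamp_s, counter_value) pairs inside the range window.
    # Sums per-step increases, treating any drop as a counter reset
    # (i.e. the counter restarted from zero and climbed back to v).
    (t0, _), (tn, _) = samples[0], samples[-1]
    increase, prev = 0.0, samples[0][1]
    for _, v in samples[1:]:
        increase += v - prev if v >= prev else v
        prev = v
    return increase / (tn - t0)

print(simple_rate([(0, 0), (30, 60), (60, 120)]))  # 2.0 jobs/sec
```

This is also why the first panel's "queued minus finished" comparison works: both are rates over the same window, so a persistent gap means the backlog is growing.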


@@ -0,0 +1,391 @@
{
"annotations": {
"list": []
},
"description": "Peer network quality metrics: latency, divergence, version distribution, upgrade recommendations, disconnects, and connection direction balance. Requires push_metrics.py or equivalent OTel metric source.",
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 1,
"id": null,
"links": [],
"panels": [
{
"title": "P90 Peer Latency",
"description": "90th percentile peer-to-peer latency in milliseconds over time. High latency indicates network congestion or geographically distant peers.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_peer_quality{metric=\"peer_latency_p90_ms\",exported_instance=~\"$node\"}",
"legendFormat": "P90 Latency [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "ms",
"custom": {
"axisLabel": "Latency (ms)",
"drawStyle": "line",
"lineWidth": 2,
"fillOpacity": 10,
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
},
"color": {
"mode": "palette-classic"
},
"thresholds": {
"steps": [
{
"color": "green",
"value": null
},
{
"color": "yellow",
"value": 200
},
{
"color": "red",
"value": 500
}
]
}
},
"overrides": []
}
},
{
"title": "Insane/Diverged Peers",
"description": "Count of peers whose ledger state is considered insane or diverged from the network. Non-zero values suggest those peers are misbehaving or on a fork.",
"type": "stat",
"gridPos": {
"h": 8,
"w": 6,
"x": 12,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_peer_quality{metric=\"peers_insane_count\",exported_instance=~\"$node\"}",
"legendFormat": "Insane Peers [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "none",
"thresholds": {
"steps": [
{
"color": "green",
"value": null
},
{
"color": "yellow",
"value": 1
},
{
"color": "red",
"value": 3
}
]
},
"custom": {}
},
"overrides": []
}
},
{
"title": "Higher Version Peers %",
"description": "Percentage of connected peers running a higher rippled version. A high percentage suggests this node should be upgraded.",
"type": "stat",
"gridPos": {
"h": 8,
"w": 6,
"x": 18,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_peer_quality{metric=\"peers_higher_version_pct\",exported_instance=~\"$node\"}",
"legendFormat": "Higher Version % [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "percent",
"min": 0,
"max": 100,
"thresholds": {
"steps": [
{
"color": "green",
"value": null
},
{
"color": "yellow",
"value": 30
},
{
"color": "red",
"value": 60
}
]
},
"custom": {}
},
"overrides": []
}
},
{
"title": "Upgrade Recommended",
"description": "Whether an upgrade is recommended based on peer version analysis (1=recommended, 0=not needed). Triggered when a majority of peers run a newer version.",
"type": "stat",
"gridPos": {
"h": 8,
"w": 6,
"x": 0,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_peer_quality{metric=\"upgrade_recommended\",exported_instance=~\"$node\"}",
"legendFormat": "Upgrade [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "none",
"mappings": [
{
"type": "value",
"options": {
"0": { "text": "No", "color": "green" }
}
},
{
"type": "value",
"options": {
"1": { "text": "Yes", "color": "red" }
}
}
],
"thresholds": {
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 1
}
]
},
"custom": {}
},
"overrides": []
}
},
{
"title": "Resource Disconnects",
"description": "Cumulative peer disconnections due to resource limit violations over time. Rising values indicate aggressive or misbehaving peers being dropped.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 10,
"x": 6,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_server_info{metric=\"peer_disconnects_resources\",exported_instance=~\"$node\"}",
"legendFormat": "Disconnects [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "none",
"custom": {
"axisLabel": "Disconnects",
"drawStyle": "line",
"lineWidth": 2,
"fillOpacity": 10,
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
},
"color": {
"mode": "palette-classic"
}
},
"overrides": []
}
},
{
"title": "Inbound vs Outbound Peers",
"description": "Comparison of active inbound and outbound peer connections. A healthy node should have a balanced mix. All-inbound may indicate NAT/firewall issues preventing outbound connections.",
"type": "bargauge",
"gridPos": {
"h": 8,
"w": 8,
"x": 16,
"y": 8
},
"options": {
"orientation": "horizontal",
"displayMode": "gradient",
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_Peer_Finder_Active_Inbound_Peers{exported_instance=~\"$node\"}",
"legendFormat": "Inbound [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_Peer_Finder_Active_Outbound_Peers{exported_instance=~\"$node\"}",
"legendFormat": "Outbound [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "none",
"thresholds": {
"steps": [
{
"color": "green",
"value": null
}
]
},
"custom": {}
},
"overrides": [
{
"matcher": {
"id": "byRegexp",
"options": "Inbound.*"
},
"properties": [
{
"id": "color",
"value": {
"mode": "fixed",
"fixedColor": "blue"
}
}
]
},
{
"matcher": {
"id": "byRegexp",
"options": "Outbound.*"
},
"properties": [
{
"id": "color",
"value": {
"mode": "fixed",
"fixedColor": "orange"
}
}
]
}
]
}
}
],
"schemaVersion": 39,
"tags": ["rippled", "otel", "peer", "network", "telemetry"],
"templating": {
"list": [
{
"name": "node",
"label": "Node",
"description": "Filter by rippled node (service.instance.id)",
"type": "query",
"query": "label_values(rippled_peer_quality, exported_instance)",
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"includeAll": true,
"allValue": ".*",
"current": {
"text": "All",
"value": "$__all"
},
"multi": true,
"refresh": 2,
"sort": 1
}
]
},
"time": {
"from": "now-1h",
"to": "now"
},
"title": "Peer Quality",
"uid": "rippled-peer-quality"
}
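Dashboards like these are easy to break when hand-edited: a panel silently renders empty if a target loses its `expr`. A small illustrative lint pass over the JSON (helper name and checks are my own, not part of the repo):

```python
import json

def lint_dashboard(text):
    # Sanity-check a Grafana dashboard: every target needs an expr,
    # and each panel should declare a default unit in fieldConfig.
    dashboard, problems = json.loads(text), []
    for panel in dashboard.get("panels", []):
        title = panel.get("title", "<untitled>")
        for i, target in enumerate(panel.get("targets", [])):
            if not target.get("expr"):
                problems.append(f"{title}: target {i} missing expr")
        defaults = panel.get("fieldConfig", {}).get("defaults", {})
        if "unit" not in defaults:
            problems.append(f"{title}: no default unit")
    return problems
```

Running it over each file in this change before committing would catch the most common copy-paste mistakes.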


@@ -0,0 +1,404 @@
{
"annotations": {
"list": []
},
"description": "Per-RPC-method performance: call rates, error rates, and latency distributions. Sourced from OTel MetricsRegistry synchronous counters and histograms (Phase 9).",
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 1,
"id": null,
"links": [],
"panels": [
{
"title": "RPC Call Rate (All Methods)",
"description": "Aggregate rate of RPC calls started, finished, and errored across all methods. Computed as rate() over OTel counters.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum by (exported_instance) (rate(rippled_rpc_method_started_total{exported_instance=~\"$node\", method=~\"$method\"}[5m]))",
"legendFormat": "Started/s [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "sum by (exported_instance) (rate(rippled_rpc_method_finished_total{exported_instance=~\"$node\", method=~\"$method\"}[5m]))",
"legendFormat": "Finished/s [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "sum by (exported_instance) (rate(rippled_rpc_method_errored_total{exported_instance=~\"$node\", method=~\"$method\"}[5m]))",
"legendFormat": "Errored/s [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"custom": {
"drawStyle": "line",
"lineWidth": 2,
"fillOpacity": 10,
"axisLabel": "Operations / Sec",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
},
"color": {
"mode": "palette-classic"
}
},
"overrides": []
}
},
{
"title": "Per-Method Call Rate (Top 10)",
"description": "Per-method RPC call rate, showing the 10 most active methods. Useful for identifying hot paths.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
},
"legend": {
"displayMode": "table",
"placement": "right",
"calcs": ["mean", "max"]
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "topk(10, rate(rippled_rpc_method_started_total{exported_instance=~\"$node\", method=~\"$method\"}[5m]))",
"legendFormat": "{{method}} [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"custom": {
"drawStyle": "line",
"lineWidth": 1,
"fillOpacity": 5,
"axisLabel": "Operations / Sec",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
},
"color": {
"mode": "palette-classic"
}
},
"overrides": []
}
},
{
"title": "Per-Method Error Rate (Top 10)",
"description": "Per-method RPC error rate. Non-zero values warrant investigation. Common culprits: invalid parameters, resource exhaustion.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
},
"legend": {
"displayMode": "table",
"placement": "right",
"calcs": ["mean", "max"]
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "topk(10, rate(rippled_rpc_method_errored_total{exported_instance=~\"$node\", method=~\"$method\"}[5m]))",
"legendFormat": "{{method}} [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"custom": {
"drawStyle": "line",
"lineWidth": 1,
"fillOpacity": 5,
"axisLabel": "Operations / Sec",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
},
"color": {
"mode": "palette-classic"
}
},
"overrides": []
}
},
{
"title": "RPC Latency (P50, P95, P99) - All Methods",
"description": "Histogram quantiles for RPC execution time across all methods. Sourced from rpc_method_duration_us histogram.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.50, sum by (le, exported_instance) (rate(rippled_rpc_method_duration_us_bucket{exported_instance=~\"$node\", method=~\"$method\"}[5m])))",
"legendFormat": "P50 [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.95, sum by (le, exported_instance) (rate(rippled_rpc_method_duration_us_bucket{exported_instance=~\"$node\", method=~\"$method\"}[5m])))",
"legendFormat": "P95 [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.99, sum by (le, exported_instance) (rate(rippled_rpc_method_duration_us_bucket{exported_instance=~\"$node\", method=~\"$method\"}[5m])))",
"legendFormat": "P99 [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "us",
"custom": {
"drawStyle": "line",
"lineWidth": 2,
"fillOpacity": 5,
"axisLabel": "Duration (μs)",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
},
"color": {
"mode": "palette-classic"
}
},
"overrides": []
}
},
{
"title": "Per-Method Latency P95 (Top 10 Slowest)",
"description": "95th percentile execution time per method. Identifies the slowest RPC endpoints.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
},
"legend": {
"displayMode": "table",
"placement": "right",
"calcs": ["mean", "max"]
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "topk(10, histogram_quantile(0.95, sum by (le, method, exported_instance) (rate(rippled_rpc_method_duration_us_bucket{exported_instance=~\"$node\", method=~\"$method\"}[5m]))))",
"legendFormat": "{{method}} [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "us",
"custom": {
"drawStyle": "line",
"lineWidth": 1,
"fillOpacity": 5,
"axisLabel": "Duration (μs)",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
},
"color": {
"mode": "palette-classic"
}
},
"overrides": []
}
},
{
"title": "RPC Error Ratio by Method",
"description": "Error ratio (errors / total started) per method. Values above 0.05 (5%) warrant investigation.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 24
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
},
"legend": {
"displayMode": "table",
"placement": "right",
"calcs": ["mean", "max"]
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "topk(10, rate(rippled_rpc_method_errored_total{exported_instance=~\"$node\", method=~\"$method\"}[5m]) / (rate(rippled_rpc_method_started_total{exported_instance=~\"$node\", method=~\"$method\"}[5m]) > 0))",
"legendFormat": "{{method}} [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "percentunit",
"min": 0,
"max": 1,
"custom": {
"drawStyle": "line",
"lineWidth": 1,
"fillOpacity": 5,
"axisLabel": "Error Ratio",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
},
"color": {
"mode": "palette-classic"
},
"thresholds": {
"steps": [
{
"color": "green",
"value": null
},
{
"color": "yellow",
"value": 0.05
},
{
"color": "red",
"value": 0.25
}
]
}
},
"overrides": []
}
}
],
"schemaVersion": 39,
"tags": ["rippled", "otel", "rpc"],
"templating": {
"list": [
{
"name": "node",
"label": "Node",
"description": "Filter by rippled node (service.instance.id)",
"type": "query",
"query": "label_values(exported_instance)",
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"includeAll": true,
"allValue": ".*",
"current": {
"text": "All",
"value": "$__all"
},
"multi": true,
"refresh": 2,
"sort": 1
},
{
"name": "method",
"label": "RPC Method",
"description": "Filter by RPC method",
"type": "query",
"query": "label_values(rippled_rpc_method_started_total, method)",
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"includeAll": true,
"allValue": ".*",
"current": {
"text": "All",
"value": "$__all"
},
"multi": true,
"refresh": 2,
"sort": 1
}
]
},
"time": {
"from": "now-1h",
"to": "now"
},
"timepicker": {},
"timezone": "browser",
"title": "RPC Performance (OTel)",
"uid": "rippled-rpc-perf",
"version": 1
}
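The P50/P95/P99 latency panels above rely on PromQL's `histogram_quantile()` over cumulative `_bucket` series. A simplified Python sketch of the interpolation it performs (hypothetical bucket data; PromQL's actual implementation also merges buckets across series via the `sum by (le, ...)` step):

```python
def histogram_quantile(q, buckets):
    """Estimate the q-quantile from cumulative histogram buckets.

    buckets: list of (le_upper_bound, cumulative_count), sorted by le;
    the last bound may be float("inf"). Linear interpolation inside the
    target bucket, mirroring PromQL's histogram_quantile (simplified).
    """
    total = buckets[-1][1]
    if total == 0:
        return float("nan")
    rank = q * total
    prev_le, prev_count = 0.0, 0
    for le, count in buckets:
        if count >= rank:
            if le == float("inf"):
                return prev_le  # quantile falls in the +Inf bucket
            in_bucket = count - prev_count
            if in_bucket == 0:
                return le
            return prev_le + (le - prev_le) * (rank - prev_count) / in_bucket
        prev_le, prev_count = le, count
    return buckets[-1][0]

# 100 samples: 50 under 100 us, 90 under 500 us, all under 1000 us.
print(histogram_quantile(0.95, [(100, 50), (500, 90), (1000, 100)]))  # 750.0
```

This is why quantile accuracy depends on bucket layout: within a bucket the estimate is a straight line, so coarse buckets at the tail blur the P99.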


@@ -0,0 +1,714 @@
{
"annotations": {
"list": []
},
"description": "Validator health metrics: agreement rates, validation counts, amendment status, UNL expiry, server state tracking, and ledger close rates. Requires push_metrics.py or equivalent OTel metric source.",
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 1,
"id": null,
"links": [],
"panels": [
{
"title": "--- Validation Agreement ---",
"type": "row",
"gridPos": {
"h": 1,
"w": 24,
"x": 0,
"y": 0
},
"collapsed": false,
"panels": []
},
{
"title": "Agreement % (1h)",
"description": "Validation agreement percentage over the last 1 hour. Values below 80% indicate the validator is frequently disagreeing with the network consensus.",
"type": "stat",
"gridPos": {
"h": 8,
"w": 6,
"x": 0,
"y": 1
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_validation_agreement{metric=\"agreement_pct_1h\",exported_instance=~\"$node\"}",
"legendFormat": "Agreement 1h [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "percent",
"min": 0,
"max": 100,
"thresholds": {
"steps": [
{
"color": "red",
"value": null
},
{
"color": "yellow",
"value": 80
},
{
"color": "green",
"value": 95
}
]
},
"custom": {}
},
"overrides": []
}
},
{
"title": "Agreement % (24h)",
"description": "Validation agreement percentage over the last 24 hours. A sustained value below 90% may indicate configuration drift or network partition.",
"type": "stat",
"gridPos": {
"h": 8,
"w": 6,
"x": 6,
"y": 1
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_validation_agreement{metric=\"agreement_pct_24h\",exported_instance=~\"$node\"}",
"legendFormat": "Agreement 24h [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "percent",
"min": 0,
"max": 100,
"thresholds": {
"steps": [
{
"color": "red",
"value": null
},
{
"color": "yellow",
"value": 80
},
{
"color": "green",
"value": 95
}
]
},
"custom": {}
},
"overrides": []
}
},
{
"title": "Agreements vs Missed (1h)",
"description": "Comparison of successful agreements and missed validations over 1 hour. High missed count indicates the validator is not participating in consensus rounds.",
"type": "bargauge",
"gridPos": {
"h": 8,
"w": 6,
"x": 12,
"y": 1
},
"options": {
"orientation": "horizontal",
"displayMode": "gradient",
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_validation_agreement{metric=\"agreements_1h\",exported_instance=~\"$node\"}",
"legendFormat": "Agreements 1h [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_validation_agreement{metric=\"missed_1h\",exported_instance=~\"$node\"}",
"legendFormat": "Missed 1h [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "none",
"thresholds": {
"steps": [
{
"color": "green",
"value": null
}
]
},
"custom": {}
},
"overrides": [
{
"matcher": {
"id": "byRegexp",
"options": "Missed.*"
},
"properties": [
{
"id": "color",
"value": {
"mode": "fixed",
"fixedColor": "red"
}
}
]
}
]
}
},
{
"title": "Agreements vs Missed (24h)",
"description": "Comparison of successful agreements and missed validations over 24 hours. Provides a longer-term view of validator reliability.",
"type": "bargauge",
"gridPos": {
"h": 8,
"w": 6,
"x": 18,
"y": 1
},
"options": {
"orientation": "horizontal",
"displayMode": "gradient",
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_validation_agreement{metric=\"agreements_24h\",exported_instance=~\"$node\"}",
"legendFormat": "Agreements 24h [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_validation_agreement{metric=\"missed_24h\",exported_instance=~\"$node\"}",
"legendFormat": "Missed 24h [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "none",
"thresholds": {
"steps": [
{
"color": "green",
"value": null
}
]
},
"custom": {}
},
"overrides": [
{
"matcher": {
"id": "byRegexp",
"options": "Missed.*"
},
"properties": [
{
"id": "color",
"value": {
"mode": "fixed",
"fixedColor": "red"
}
}
]
}
]
}
},
{
"title": "--- Validation Rates ---",
"type": "row",
"gridPos": {
"h": 1,
"w": 24,
"x": 0,
"y": 9
},
"collapsed": false,
"panels": []
},
{
"title": "Validation Rate",
"description": "Rate of validations sent per minute (5m average). Indicates how actively this node is producing validations. Expected ~10-12/min on mainnet (one per ledger close).",
"type": "stat",
"gridPos": {
"h": 8,
"w": 6,
"x": 0,
"y": 10
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rate(rippled_validations_sent_total{exported_instance=~\"$node\"}[5m]) * 60",
"legendFormat": "Sent/min [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "none",
"thresholds": {
"steps": [
{
"color": "red",
"value": null
},
{
"color": "yellow",
"value": 5
},
{
"color": "green",
"value": 8
}
]
},
"custom": {}
},
"overrides": []
}
},
{
"title": "Validations Checked Rate",
"description": "Rate of validations checked (received from peers) per minute (5m average). Indicates how many validations from the network are being processed.",
"type": "stat",
"gridPos": {
"h": 8,
"w": 6,
"x": 6,
"y": 10
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rate(rippled_validations_checked_total{exported_instance=~\"$node\"}[5m]) * 60",
"legendFormat": "Checked/min [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "none",
"custom": {}
},
"overrides": []
}
},
{
"title": "Amendment Blocked",
"description": "Whether the node is amendment-blocked (1=blocked, 0=normal). An amendment-blocked node cannot validate and needs a software upgrade.",
"type": "stat",
"gridPos": {
"h": 8,
"w": 6,
"x": 12,
"y": 10
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_validator_health{metric=\"amendment_blocked\",exported_instance=~\"$node\"}",
"legendFormat": "Blocked [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "none",
"mappings": [
{
"type": "value",
"options": {
"0": { "text": "OK", "color": "green" }
}
},
{
"type": "value",
"options": {
"1": { "text": "BLOCKED", "color": "red" }
}
}
],
"thresholds": {
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 1
}
]
},
"custom": {}
},
"overrides": []
}
},
{
"title": "UNL Expiry (days)",
"description": "Days until the UNL (Unique Node List) expires. A value below 7 requires attention to renew the UNL before it expires and the validator stops trusting peers.",
"type": "stat",
"gridPos": {
"h": 8,
"w": 6,
"x": 18,
"y": 10
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_validator_health{metric=\"unl_expiry_days\",exported_instance=~\"$node\"}",
"legendFormat": "UNL Expiry [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "none",
"thresholds": {
"steps": [
{
"color": "red",
"value": null
},
{
"color": "yellow",
"value": 7
},
{
"color": "green",
"value": 30
}
]
},
"custom": {}
},
"overrides": []
}
},
{
"title": "--- Server State & Consensus ---",
"type": "row",
"gridPos": {
"h": 1,
"w": 24,
"x": 0,
"y": 18
},
"collapsed": false,
"panels": []
},
{
"title": "Validation Quorum",
"description": "Minimum number of trusted validations needed to declare a ledger validated. Tracks the current quorum requirement from the validator list.",
"type": "stat",
"gridPos": {
"h": 8,
"w": 6,
"x": 0,
"y": 19
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_validator_health{metric=\"validation_quorum\",exported_instance=~\"$node\"}",
"legendFormat": "Quorum [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "none",
"custom": {}
},
"overrides": []
}
},
{
"title": "State Value Timeline",
"description": "Numeric encoding of the server operating state over time. Useful for correlating state changes with other metrics.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 18,
"x": 6,
"y": 19
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_state_tracking{metric=\"state_value\",exported_instance=~\"$node\"}",
"legendFormat": "State [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "none",
"custom": {
"axisLabel": "State",
"drawStyle": "line",
"lineWidth": 2,
"fillOpacity": 10,
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
},
"color": {
"mode": "palette-classic"
}
},
"overrides": []
}
},
{
"title": "Time in Current State",
"description": "How long the server has been in its current operating state, in seconds. Short durations with frequent changes indicate instability.",
"type": "stat",
"gridPos": {
"h": 8,
"w": 8,
"x": 0,
"y": 27
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_state_tracking{metric=\"time_in_current_state_seconds\",exported_instance=~\"$node\"}",
"legendFormat": "Time in State [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "s",
"custom": {}
},
"overrides": []
}
},
{
"title": "State Changes Rate",
"description": "Rate of server state changes per hour. A healthy node should have near-zero state changes. Frequent transitions indicate network or configuration issues.",
"type": "stat",
"gridPos": {
"h": 8,
"w": 8,
"x": 8,
"y": 27
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rate(rippled_state_changes_total{exported_instance=~\"$node\"}[1h])",
"legendFormat": "Changes/hr [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "none",
"thresholds": {
"steps": [
{
"color": "green",
"value": null
},
{
"color": "yellow",
"value": 1
},
{
"color": "red",
"value": 5
}
]
},
"custom": {}
},
"overrides": []
}
},
{
"title": "Ledgers Closed Rate",
"description": "Rate of ledgers closed per minute (5m average). On mainnet, expect roughly 10-12/min. Deviations indicate consensus timing issues.",
"type": "stat",
"gridPos": {
"h": 8,
"w": 8,
"x": 16,
"y": 27
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rate(rippled_ledgers_closed_total{exported_instance=~\"$node\"}[5m]) * 60",
"legendFormat": "Closed/min [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "none",
"thresholds": {
"steps": [
{
"color": "red",
"value": null
},
{
"color": "yellow",
"value": 5
},
{
"color": "green",
"value": 8
}
]
},
"custom": {}
},
"overrides": []
}
}
],
"schemaVersion": 39,
"tags": ["rippled", "otel", "validator", "health", "telemetry"],
"templating": {
"list": [
{
"name": "node",
"label": "Node",
"description": "Filter by rippled node (service.instance.id)",
"type": "query",
"query": "label_values(rippled_validation_agreement, exported_instance)",
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"includeAll": true,
"allValue": ".*",
"current": {
"text": "All",
"value": "$__all"
},
"multi": true,
"refresh": 2,
"sort": 1
}
]
},
"time": {
"from": "now-1h",
"to": "now"
},
"title": "Validator Health",
"uid": "rippled-validator-health"
}
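The stat panels above (agreement %, UNL expiry, state-change rate) all color their values via Grafana threshold steps, where the first step has `"value": null` as the base and each later step takes over once the value reaches its bound. A small Python sketch of that rule, using the agreement-percentage steps from this dashboard:

```python
def threshold_color(value, steps):
    """steps: ordered list of (lower_bound, color); the first bound is
    None, matching Grafana's base step ("value": null). The last step
    whose bound is <= value wins."""
    color = steps[0][1]
    for bound, c in steps[1:]:
        if value >= bound:
            color = c
    return color

# Agreement %: red below 80, yellow from 80, green from 95.
agreement_steps = [(None, "red"), (80, "yellow"), (95, "green")]
print(threshold_color(97.5, agreement_steps))  # green
print(threshold_color(82.0, agreement_steps))  # yellow
```

Note the inversion between panels: agreement uses red-to-green steps (low is bad), while the state-change-rate panel uses green-to-red steps (high is bad); the evaluation rule is the same.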


@@ -0,0 +1,376 @@
{
"annotations": {
"list": []
},
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 1,
"id": null,
"links": [],
"panels": [
{
"title": "RPC Request Rate by Command",
"description": "Per-second rate of RPC command executions, broken down by command name (e.g. server_info, submit). Calculated as rate(traces_span_metrics_calls_total{span_name=~\"rpc.command.*\"}) over a 5m window, grouped by the xrpl.rpc.command span attribute.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum by (xrpl_rpc_command, exported_instance) (rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", xrpl_rpc_command=~\"$command\", span_name=~\"rpc.command.*\"}[5m]))",
"legendFormat": "{{xrpl_rpc_command}} [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "reqps",
"custom": {
"axisLabel": "Requests / Sec",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "RPC Latency P95 by Command",
"description": "95th percentile response time for each RPC command. Computed from the spanmetrics duration histogram using histogram_quantile(0.95) over rpc.command.* spans, grouped by xrpl.rpc.command. High values indicate slow commands that may need optimization.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.95, sum by (le, xrpl_rpc_command, exported_instance) (rate(traces_span_metrics_duration_milliseconds_bucket{exported_instance=~\"$node\", xrpl_rpc_command=~\"$command\", span_name=~\"rpc.command.*\"}[5m])))",
"legendFormat": "P95 {{xrpl_rpc_command}} [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "ms",
"custom": {
"axisLabel": "Latency (ms)",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "RPC Error Rate",
"description": "Percentage of RPC commands that completed with an error status, per command. Calculated as (error calls / total calls) * 100, where errors have status_code=STATUS_CODE_ERROR. Thresholds: green < 1%, yellow 1-5%, red > 5%.",
"type": "bargauge",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum by (xrpl_rpc_command, exported_instance) (rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", xrpl_rpc_command=~\"$command\", span_name=~\"rpc.command.*\", status_code=\"STATUS_CODE_ERROR\"}[5m])) / sum by (xrpl_rpc_command, exported_instance) (rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", xrpl_rpc_command=~\"$command\", span_name=~\"rpc.command.*\"}[5m])) * 100",
"legendFormat": "{{xrpl_rpc_command}} [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "percent",
"thresholds": {
"steps": [
{
"color": "green",
"value": null
},
{
"color": "yellow",
"value": 1
},
{
"color": "red",
"value": 5
}
]
}
},
"overrides": []
}
},
{
"title": "RPC Latency Heatmap",
"description": "Distribution of RPC command response times across histogram buckets. Shows the density of requests at each latency level over time. Each cell represents the count of requests that fell into that duration bucket in a 5m window. Useful for spotting bimodal latency patterns.",
"type": "heatmap",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
},
"yAxis": {
"axisLabel": "Duration (ms)"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum(increase(traces_span_metrics_duration_milliseconds_bucket{exported_instance=~\"$node\", xrpl_rpc_command=~\"$command\", span_name=~\"rpc.command.*\"}[5m])) by (le)",
"legendFormat": "{{le}}",
"format": "heatmap"
}
]
},
{
"title": "Overall RPC Throughput",
"description": "Aggregate RPC throughput showing two layers of the request pipeline. rpc.request is the outer HTTP handler (ServerHandler.cpp:271) that accepts incoming connections. rpc.process is the inner processing layer (ServerHandler.cpp:573) that parses and dispatches. A gap between the two indicates requests being queued or rejected before processing.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum by (exported_instance) (rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", xrpl_rpc_command=~\"$command\", span_name=\"rpc.request\"}[5m]))",
"legendFormat": "rpc.request / Sec [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "sum by (exported_instance) (rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", xrpl_rpc_command=~\"$command\", span_name=\"rpc.process\"}[5m]))",
"legendFormat": "rpc.process / Sec [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "reqps",
"custom": {
"axisLabel": "Requests / Sec",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "RPC Success vs Error",
"description": "Aggregate rate of successful vs failed RPC commands across all command types. Success = status_code UNSET (OpenTelemetry default for OK spans). Error = status_code STATUS_CODE_ERROR. A sustained error rate warrants investigation via per-command breakdown above.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum by (exported_instance) (rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", xrpl_rpc_command=~\"$command\", span_name=~\"rpc.command.*\", status_code=\"STATUS_CODE_UNSET\"}[5m]))",
"legendFormat": "Success [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "sum by (exported_instance) (rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", xrpl_rpc_command=~\"$command\", span_name=~\"rpc.command.*\", status_code=\"STATUS_CODE_ERROR\"}[5m]))",
"legendFormat": "Error [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"custom": {
"axisLabel": "Commands / Sec",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Top Commands by Volume",
"description": "Top 10 most frequently called RPC commands by total invocation count over the last 5 minutes. Uses topk(10, increase(calls_total)) to rank commands. Helps identify the hottest API endpoints driving load on the node.",
"type": "bargauge",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 24
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "topk(10, sum by (xrpl_rpc_command, exported_instance) (increase(traces_span_metrics_calls_total{exported_instance=~\"$node\", xrpl_rpc_command=~\"$command\", span_name=~\"rpc.command.*\"}[5m])))",
"legendFormat": "{{xrpl_rpc_command}} [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "none"
},
"overrides": []
}
},
{
"title": "WebSocket Message Rate",
"description": "Rate of incoming WebSocket RPC messages processed by the server. Sourced from the rpc.ws_message span (ServerHandler.cpp:384). Only active when clients connect via WebSocket instead of HTTP. Zero is normal if only HTTP RPC is in use.",
"type": "stat",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 24
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum by (exported_instance) (rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", xrpl_rpc_command=~\"$command\", span_name=\"rpc.ws_message\"}[5m]))",
"legendFormat": "WS Messages / Sec [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops"
},
"overrides": []
}
}
],
"schemaVersion": 39,
"tags": ["rippled", "rpc", "telemetry"],
"templating": {
"list": [
{
"name": "node",
"label": "Node",
"description": "Filter by rippled node (service.instance.id — e.g. Node-1)",
"type": "query",
"query": "label_values(traces_span_metrics_calls_total, exported_instance)",
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"includeAll": true,
"allValue": ".*",
"current": {
"text": "All",
"value": "$__all"
},
"multi": true,
"refresh": 2,
"sort": 1
},
{
"name": "command",
"label": "RPC Command",
"description": "Filter by RPC command name (e.g., server_info, submit)",
"type": "query",
"query": "label_values(traces_span_metrics_calls_total{span_name=~\"rpc.command.*\"}, xrpl_rpc_command)",
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"includeAll": true,
"allValue": ".*",
"current": {
"text": "All",
"value": "$__all"
},
"multi": true,
"refresh": 2,
"sort": 1
}
]
},
"time": {
"from": "now-1h",
"to": "now"
},
"title": "RPC Performance",
"uid": "rippled-rpc-perf"
}
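Grafana identifies a dashboard by its `uid` within an organization, so two provisioned JSON files sharing a uid will silently overwrite each other. A small pre-provisioning sanity check worth running over a dashboard directory (hypothetical layout, stdlib only):

```python
import json
import pathlib
from collections import defaultdict

def find_duplicate_uids(dashboard_dir):
    """Scan *.json dashboards and report any uid used more than once.

    Returns {uid: [filenames]} for every uid that appears in two or
    more files; an empty dict means the directory is safe to provision.
    """
    seen = defaultdict(list)
    for path in sorted(pathlib.Path(dashboard_dir).glob("*.json")):
        uid = json.loads(path.read_text()).get("uid")
        if uid:
            seen[uid].append(path.name)
    return {uid: names for uid, names in seen.items() if len(names) > 1}
```

Wiring this into CI alongside the dashboard JSON keeps uid collisions from reaching the Grafana instance.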


@@ -0,0 +1,470 @@
{
"annotations": { "list": [] },
"description": "Network traffic and peer metrics from beast::insight StatsD. Requires [insight] server=statsd in rippled config.",
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 1,
"id": null,
"links": [],
"panels": [
{
"title": "Active Peers",
"description": "Number of active inbound and outbound peer connections. Sourced from Peer_Finder.Active_Inbound_Peers and Peer_Finder.Active_Outbound_Peers gauges (PeerfinderManager.cpp:214-215). A healthy mainnet node typically has 10-21 outbound and 0-85 inbound peers depending on configuration.",
"type": "timeseries",
"gridPos": { "h": 8, "w": 12, "x": 0, "y": 0 },
"options": {
"tooltip": { "mode": "multi", "sort": "desc" }
},
"targets": [
{
"datasource": { "type": "prometheus" },
"expr": "rippled_Peer_Finder_Active_Inbound_Peers",
"legendFormat": "Inbound Peers"
},
{
"datasource": { "type": "prometheus" },
"expr": "rippled_Peer_Finder_Active_Outbound_Peers",
"legendFormat": "Outbound Peers"
}
],
"fieldConfig": {
"defaults": {
"unit": "short",
"custom": {
"axisLabel": "Peers",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Peer Disconnects",
"description": "Cumulative count of peer disconnections. Sourced from the Overlay.Peer_Disconnects gauge (OverlayImpl.h:557). A rising trend indicates network instability, aggressive peer management, or resource exhaustion causing connection drops.",
"type": "timeseries",
"gridPos": { "h": 8, "w": 12, "x": 12, "y": 0 },
"options": {
"tooltip": { "mode": "multi", "sort": "desc" }
},
"targets": [
{
"datasource": { "type": "prometheus" },
"expr": "rippled_Overlay_Peer_Disconnects",
"legendFormat": "Disconnects"
}
],
"fieldConfig": {
"defaults": {
"unit": "short",
"custom": {
"axisLabel": "Disconnects",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Total Network Bytes",
"description": "Total bytes sent and received across all peer connections. Sourced from the total.Bytes_In and total.Bytes_Out traffic category gauges (OverlayImpl.h:535-548). Provides a high-level view of network bandwidth consumption.",
"type": "timeseries",
"gridPos": { "h": 8, "w": 12, "x": 0, "y": 8 },
"options": {
"tooltip": { "mode": "multi", "sort": "desc" }
},
"targets": [
{
"datasource": { "type": "prometheus" },
"expr": "rippled_total_Bytes_In",
"legendFormat": "Bytes In"
},
{
"datasource": { "type": "prometheus" },
"expr": "rippled_total_Bytes_Out",
"legendFormat": "Bytes Out"
}
],
"fieldConfig": {
"defaults": {
"unit": "decbytes",
"custom": {
"axisLabel": "Bytes",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Total Network Messages",
"description": "Total messages sent and received across all peer connections. Sourced from the total.Messages_In and total.Messages_Out traffic category gauges (OverlayImpl.h:535-548). Shows the overall message throughput of the overlay network.",
"type": "timeseries",
"gridPos": { "h": 8, "w": 12, "x": 12, "y": 8 },
"options": {
"tooltip": { "mode": "multi", "sort": "desc" }
},
"targets": [
{
"datasource": { "type": "prometheus" },
"expr": "rippled_total_Messages_In",
"legendFormat": "Messages In"
},
{
"datasource": { "type": "prometheus" },
"expr": "rippled_total_Messages_Out",
"legendFormat": "Messages Out"
}
],
"fieldConfig": {
"defaults": {
"unit": "short",
"custom": {
"axisLabel": "Messages",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Transaction Traffic",
"description": "Bytes and messages for transaction-related overlay traffic. Includes the transactions traffic category (OverlayImpl/TrafficCount.h). Spikes indicate high transaction volume on the network or transaction flooding.",
"type": "timeseries",
"gridPos": { "h": 8, "w": 12, "x": 0, "y": 16 },
"options": {
"tooltip": { "mode": "multi", "sort": "desc" }
},
"targets": [
{
"datasource": { "type": "prometheus" },
"expr": "rippled_transactions_Messages_In",
"legendFormat": "TX Messages In"
},
{
"datasource": { "type": "prometheus" },
"expr": "rippled_transactions_Messages_Out",
"legendFormat": "TX Messages Out"
},
{
"datasource": { "type": "prometheus" },
"expr": "rippled_transactions_duplicate_Messages_In",
"legendFormat": "TX Duplicate In"
}
],
"fieldConfig": {
"defaults": {
"unit": "short",
"custom": {
"axisLabel": "Messages",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Proposal Traffic",
"description": "Messages for consensus proposal overlay traffic. Includes proposals, proposals_untrusted, and proposals_duplicate categories (TrafficCount.h). High untrusted or duplicate counts may indicate UNL misconfiguration or network spam.",
"type": "timeseries",
"gridPos": { "h": 8, "w": 12, "x": 12, "y": 16 },
"options": {
"tooltip": { "mode": "multi", "sort": "desc" }
},
"targets": [
{
"datasource": { "type": "prometheus" },
"expr": "rippled_proposals_Messages_In",
"legendFormat": "Proposals In"
},
{
"datasource": { "type": "prometheus" },
"expr": "rippled_proposals_Messages_Out",
"legendFormat": "Proposals Out"
},
{
"datasource": { "type": "prometheus" },
"expr": "rippled_proposals_untrusted_Messages_In",
"legendFormat": "Untrusted In"
},
{
"datasource": { "type": "prometheus" },
"expr": "rippled_proposals_duplicate_Messages_In",
"legendFormat": "Duplicate In"
}
],
"fieldConfig": {
"defaults": {
"unit": "short",
"custom": {
"axisLabel": "Messages",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Validation Traffic",
"description": "Messages for validation overlay traffic. Includes validations, validations_untrusted, and validations_duplicate categories (TrafficCount.h). Monitoring trusted vs untrusted validation traffic helps detect UNL health issues.",
"type": "timeseries",
"gridPos": { "h": 8, "w": 12, "x": 0, "y": 24 },
"options": {
"tooltip": { "mode": "multi", "sort": "desc" }
},
"targets": [
{
"datasource": { "type": "prometheus" },
"expr": "rippled_validations_Messages_In",
"legendFormat": "Validations In"
},
{
"datasource": { "type": "prometheus" },
"expr": "rippled_validations_Messages_Out",
"legendFormat": "Validations Out"
},
{
"datasource": { "type": "prometheus" },
"expr": "rippled_validations_untrusted_Messages_In",
"legendFormat": "Untrusted In"
},
{
"datasource": { "type": "prometheus" },
"expr": "rippled_validations_duplicate_Messages_In",
"legendFormat": "Duplicate In"
}
],
"fieldConfig": {
"defaults": {
"unit": "short",
"custom": {
"axisLabel": "Messages",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Overlay Traffic by Category (Bytes In)",
"description": "Top traffic categories by inbound bytes. Includes all 57 overlay traffic categories from TrafficCount.h. Shows which protocol message types consume the most bandwidth. Categories include transactions, proposals, validations, ledger data, getobject, and overlay overhead.",
"type": "bargauge",
"gridPos": { "h": 8, "w": 12, "x": 12, "y": 24 },
"options": {
"tooltip": { "mode": "multi", "sort": "desc" }
},
"targets": [
{
"datasource": { "type": "prometheus" },
"expr": "topk(10, {__name__=~\"rippled_.*_Bytes_In\", __name__!~\"rippled_total_.*\"})",
"legendFormat": "{{__name__}}"
}
],
"fieldConfig": {
"defaults": {
"unit": "decbytes"
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "rippled_transactions_Bytes_In"
},
"properties": [{ "id": "displayName", "value": "Transactions" }]
},
{
"matcher": {
"id": "byName",
"options": "rippled_proposals_Bytes_In"
},
"properties": [{ "id": "displayName", "value": "Proposals" }]
},
{
"matcher": {
"id": "byName",
"options": "rippled_validations_Bytes_In"
},
"properties": [{ "id": "displayName", "value": "Validations" }]
},
{
"matcher": {
"id": "byName",
"options": "rippled_overhead_Bytes_In"
},
"properties": [{ "id": "displayName", "value": "Overhead" }]
},
{
"matcher": {
"id": "byName",
"options": "rippled_overhead_overlay_Bytes_In"
},
"properties": [{ "id": "displayName", "value": "Overhead Overlay" }]
},
{
"matcher": { "id": "byName", "options": "rippled_ping_Bytes_In" },
"properties": [{ "id": "displayName", "value": "Ping" }]
},
{
"matcher": { "id": "byName", "options": "rippled_status_Bytes_In" },
"properties": [{ "id": "displayName", "value": "Status" }]
},
{
"matcher": {
"id": "byName",
"options": "rippled_getObject_Bytes_In"
},
"properties": [{ "id": "displayName", "value": "Get Object" }]
},
{
"matcher": {
"id": "byName",
"options": "rippled_haveTxSet_Bytes_In"
},
"properties": [{ "id": "displayName", "value": "Have Tx Set" }]
},
{
"matcher": {
"id": "byName",
"options": "rippled_ledgerData_Bytes_In"
},
"properties": [{ "id": "displayName", "value": "Ledger Data" }]
},
{
"matcher": {
"id": "byName",
"options": "rippled_ledger_share_Bytes_In"
},
"properties": [{ "id": "displayName", "value": "Ledger Share" }]
},
{
"matcher": {
"id": "byName",
"options": "rippled_ledger_data_get_Bytes_In"
},
"properties": [{ "id": "displayName", "value": "Ledger Data Get" }]
},
{
"matcher": {
"id": "byName",
"options": "rippled_ledger_data_share_Bytes_In"
},
"properties": [
{ "id": "displayName", "value": "Ledger Data Share" }
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_ledger_data_Account_State_Node_get_Bytes_In"
},
"properties": [
{ "id": "displayName", "value": "Account State Node Get" }
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_ledger_data_Account_State_Node_share_Bytes_In"
},
"properties": [
{ "id": "displayName", "value": "Account State Node Share" }
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_ledger_data_Transaction_Node_get_Bytes_In"
},
"properties": [
{ "id": "displayName", "value": "Transaction Node Get" }
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_ledger_data_Transaction_Node_share_Bytes_In"
},
"properties": [
{ "id": "displayName", "value": "Transaction Node Share" }
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_ledger_data_Transaction_Set_candidate_get_Bytes_In"
},
"properties": [
{ "id": "displayName", "value": "Tx Set Candidate Get" }
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_ledger_Account_State_node_share_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Account State Node Share (Legacy)"
}
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_ledger_Transaction_Set_candidate_share_Bytes_In"
},
"properties": [
{ "id": "displayName", "value": "Tx Set Candidate Share" }
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_ledger_Transaction_node_share_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Transaction Node Share (Legacy)"
}
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_set_get_Bytes_In"
},
"properties": [{ "id": "displayName", "value": "Set Get" }]
}
]
}
}
],
"schemaVersion": 39,
"tags": ["rippled", "statsd", "network", "telemetry"],
"templating": { "list": [] },
"time": { "from": "now-1h", "to": "now" },
"title": "Network Traffic (StatsD)",
"uid": "rippled-statsd-network"
}
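The panels above all follow one pattern: each entry in `targets` pairs a Prometheus `expr` with a `legendFormat`, nested under a panel with a `title`. A minimal Python sketch for auditing a dashboard file, extracting every (panel, query) pair so the PromQL can be reviewed in one place (the inline JSON is an abbreviated stand-in for a full dashboard file, not the dashboard above verbatim):

```python
import json

# Abbreviated stand-in for a dashboard file such as "Network Traffic (StatsD)".
dashboard_json = """
{
  "title": "Network Traffic (StatsD)",
  "panels": [
    {
      "title": "Transaction Traffic",
      "targets": [
        {"expr": "rippled_transactions_Messages_In", "legendFormat": "TX Messages In"},
        {"expr": "rippled_transactions_Messages_Out", "legendFormat": "TX Messages Out"}
      ]
    }
  ]
}
"""

def list_queries(dashboard: dict) -> list[tuple[str, str]]:
    """Return (panel title, PromQL expr) pairs for every target in the dashboard."""
    pairs = []
    for panel in dashboard.get("panels", []):
        for target in panel.get("targets", []):
            if "expr" in target:
                pairs.append((panel["title"], target["expr"]))
    return pairs

queries = list_queries(json.loads(dashboard_json))
for title, expr in queries:
    print(f"{title}: {expr}")
```

The same walk works for any of the dashboard files in this change, since they share the `panels[].targets[].expr` layout.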


@@ -0,0 +1,415 @@
{
"annotations": {
"list": []
},
"description": "Node health metrics from beast::insight StatsD. Requires [insight] server=statsd in rippled config.",
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 1,
"id": null,
"links": [],
"panels": [
{
"title": "Validated Ledger Age",
"description": "Age of the most recently validated ledger in seconds. Sourced from the LedgerMaster.Validated_Ledger_Age gauge (LedgerMaster.h:373) which is updated every collection interval via the insight hook. Values above 20s indicate the node is falling behind the network.",
"type": "stat",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_LedgerMaster_Validated_Ledger_Age",
"legendFormat": "Validated Age"
}
],
"fieldConfig": {
"defaults": {
"unit": "s",
"thresholds": {
"steps": [
{
"color": "green",
"value": null
},
{
"color": "yellow",
"value": 10
},
{
"color": "red",
"value": 20
}
]
}
},
"overrides": []
}
},
{
"title": "Published Ledger Age",
"description": "Age of the most recently published ledger in seconds. Sourced from the LedgerMaster.Published_Ledger_Age gauge (LedgerMaster.h:374). Published ledger age should track close to validated ledger age. A growing gap indicates publish pipeline backlog.",
"type": "stat",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_LedgerMaster_Published_Ledger_Age",
"legendFormat": "Published Age"
}
],
"fieldConfig": {
"defaults": {
"unit": "s",
"thresholds": {
"steps": [
{
"color": "green",
"value": null
},
{
"color": "yellow",
"value": 10
},
{
"color": "red",
"value": 20
}
]
}
},
"overrides": []
}
},
{
"title": "Operating Mode Duration",
"description": "Cumulative time spent in each operating mode (Disconnected, Connected, Syncing, Tracking, Full). Sourced from State_Accounting.*_duration gauges (NetworkOPs.cpp:774-778). A healthy node should spend the vast majority of time in Full mode.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_State_Accounting_Full_duration",
"legendFormat": "Full"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_State_Accounting_Tracking_duration",
"legendFormat": "Tracking"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_State_Accounting_Syncing_duration",
"legendFormat": "Syncing"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_State_Accounting_Connected_duration",
"legendFormat": "Connected"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_State_Accounting_Disconnected_duration",
"legendFormat": "Disconnected"
}
],
"fieldConfig": {
"defaults": {
"unit": "s",
"custom": {
"axisLabel": "Duration (Sec)",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Operating Mode Transitions",
"description": "Count of transitions into each operating mode. Sourced from State_Accounting.*_transitions gauges (NetworkOPs.cpp:780-786). Frequent transitions out of Full mode indicate instability. Transitions to Disconnected or Syncing warrant investigation.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_State_Accounting_Full_transitions",
"legendFormat": "Full"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_State_Accounting_Tracking_transitions",
"legendFormat": "Tracking"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_State_Accounting_Syncing_transitions",
"legendFormat": "Syncing"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_State_Accounting_Connected_transitions",
"legendFormat": "Connected"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_State_Accounting_Disconnected_transitions",
"legendFormat": "Disconnected"
}
],
"fieldConfig": {
"defaults": {
"unit": "short",
"custom": {
"axisLabel": "Transitions",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "I/O Latency",
"description": "P95 and P50 of the I/O service loop latency in milliseconds. Sourced from the ios_latency event (Application.cpp:438) which measures how long it takes for the io_context to process a timer callback. Values above 10ms are logged; above 500ms trigger warnings. High values indicate thread pool saturation or blocking operations.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_ios_latency{quantile=\"0.95\"}",
"legendFormat": "P95 I/O Latency"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_ios_latency{quantile=\"0.5\"}",
"legendFormat": "P50 I/O Latency"
}
],
"fieldConfig": {
"defaults": {
"unit": "ms",
"custom": {
"axisLabel": "Latency (ms)",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Job Queue Depth",
"description": "Current number of jobs waiting in the job queue. Sourced from the job_count gauge (JobQueue.cpp:26). A sustained high value indicates the node cannot process work fast enough \u2014 common during ledger replay or heavy RPC load.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_job_count",
"legendFormat": "Job Queue Depth"
}
],
"fieldConfig": {
"defaults": {
"unit": "short",
"custom": {
"axisLabel": "Jobs",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Ledger Fetch Rate",
"description": "Rate of ledger fetch requests initiated by the node. Sourced from the ledger_fetches counter (InboundLedgers.cpp:44) which increments each time the node requests a ledger from a peer. High rates indicate the node is catching up or missing ledgers.",
"type": "stat",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 24
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rate(rippled_ledger_fetches_total[5m])",
"legendFormat": "Fetches / Sec"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops"
},
"overrides": []
}
},
{
"title": "Ledger History Mismatches",
"description": "Rate of ledger history hash mismatches. Sourced from the ledger.history.mismatch counter (LedgerHistory.cpp:16) which increments when a built ledger hash does not match the expected validated hash. Non-zero values indicate consensus divergence or database corruption.",
"type": "stat",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 24
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rate(rippled_ledger_history_mismatch_total[5m])",
"legendFormat": "Mismatches / Sec"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"thresholds": {
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 0.01
}
]
}
},
"overrides": []
}
}
],
"schemaVersion": 39,
"tags": ["rippled", "statsd", "node-health", "telemetry"],
"templating": {
"list": []
},
"time": {
"from": "now-1h",
"to": "now"
},
"title": "Node Health (StatsD)",
"uid": "rippled-statsd-node-health"
}
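The stat panels above color their values with Grafana threshold steps: a value takes the color of the highest step whose threshold it meets or exceeds, and the first step's `null` threshold acts as the always-matching base. A small sketch of that semantics, using the 10s/20s steps from the Validated Ledger Age panel:

```python
# Grafana threshold steps as (threshold, color); the first step's
# threshold is None (JSON null), meaning it always matches as the base.
STEPS = [
    (None, "green"),   # base color
    (10, "yellow"),    # validated ledger age >= 10s
    (20, "red"),       # validated ledger age >= 20s
]

def threshold_color(value: float, steps=STEPS) -> str:
    """Return the color of the highest step whose threshold <= value."""
    color = steps[0][1]
    for threshold, step_color in steps[1:]:
        if value >= threshold:
            color = step_color
    return color

print(threshold_color(4))    # green  - node keeping up
print(threshold_color(12))   # yellow - lagging
print(threshold_color(25))   # red    - falling behind the network
```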


@@ -0,0 +1,396 @@
{
"annotations": {
"list": []
},
"description": "RPC and pathfinding metrics from beast::insight StatsD. Requires [insight] server=statsd in rippled config.",
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 1,
"id": null,
"links": [],
"panels": [
{
"title": "RPC Request Rate (StatsD)",
"description": "Rate of RPC requests as counted by the beast::insight counter. Sourced from rpc.requests (ServerHandler.cpp:108) which increments on every HTTP and WebSocket RPC request. Compare with the span-based rpc.request rate in the RPC Performance dashboard for cross-validation.",
"type": "stat",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rate(rippled_rpc_requests_total[5m])",
"legendFormat": "Requests / Sec"
}
],
"fieldConfig": {
"defaults": {
"unit": "reqps"
},
"overrides": []
}
},
{
"title": "RPC Response Time (StatsD)",
"description": "P95 and P50 of RPC response time from the beast::insight timer. Sourced from the rpc.time event (ServerHandler.cpp:110) which records elapsed milliseconds for each RPC response. This measures the full HTTP handler time, not just command execution. Compare with span-based rpc.request duration.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_rpc_time{quantile=\"0.95\"}",
"legendFormat": "P95 Response Time"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_rpc_time{quantile=\"0.5\"}",
"legendFormat": "P50 Response Time"
}
],
"fieldConfig": {
"defaults": {
"unit": "ms",
"custom": {
"axisLabel": "Latency (ms)",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "RPC Response Size",
"description": "P95 and P50 of RPC response payload size in bytes. Sourced from the rpc.size event (ServerHandler.cpp:109) which records the byte length of each RPC JSON response. Large responses may indicate expensive queries (e.g. account_tx with many results) or API misuse.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_rpc_size{quantile=\"0.95\"}",
"legendFormat": "P95 Response Size"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_rpc_size{quantile=\"0.5\"}",
"legendFormat": "P50 Response Size"
}
],
"fieldConfig": {
"defaults": {
"unit": "decbytes",
"custom": {
"axisLabel": "Size (Bytes)",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "RPC Response Time Distribution",
"description": "Distribution of RPC response times from the beast::insight timer showing P50, P90, P95, and P99 quantiles. Sourced from the rpc.time event (ServerHandler.cpp:110). Useful for detecting bimodal latency or long-tail requests.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_rpc_time{quantile=\"0.5\"}",
"legendFormat": "P50"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_rpc_time{quantile=\"0.9\"}",
"legendFormat": "P90"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_rpc_time{quantile=\"0.95\"}",
"legendFormat": "P95"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_rpc_time{quantile=\"0.99\"}",
"legendFormat": "P99"
}
],
"fieldConfig": {
"defaults": {
"unit": "ms",
"custom": {
"axisLabel": "Latency (ms)",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Pathfinding Fast Duration",
"description": "P95 and P50 of fast pathfinding execution time. Sourced from the pathfind_fast event (PathRequests.h:23) which records the duration of the fast pathfinding algorithm. Fast pathfinding uses a simplified search that trades accuracy for speed.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_pathfind_fast{quantile=\"0.95\"}",
"legendFormat": "P95 Fast Pathfind"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_pathfind_fast{quantile=\"0.5\"}",
"legendFormat": "P50 Fast Pathfind"
}
],
"fieldConfig": {
"defaults": {
"unit": "ms",
"custom": {
"axisLabel": "Duration (ms)",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Pathfinding Full Duration",
"description": "P95 and P50 of full pathfinding execution time. Sourced from the pathfind_full event (PathRequests.h:24) which records the duration of the exhaustive pathfinding search. Full pathfinding is more expensive and can take significantly longer than fast mode.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_pathfind_full{quantile=\"0.95\"}",
"legendFormat": "P95 Full Pathfind"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_pathfind_full{quantile=\"0.5\"}",
"legendFormat": "P50 Full Pathfind"
}
],
"fieldConfig": {
"defaults": {
"unit": "ms",
"custom": {
"axisLabel": "Duration (ms)",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Resource Warnings Rate",
"description": "Rate of resource warning events from the Resource Manager. Sourced from the warn meter (Logic.h:33) which increments when a consumer (peer or RPC client) exceeds the warning threshold for resource usage. A rising rate indicates aggressive clients that may need throttling. NOTE: This panel will show no data until the |m -> |c fix is applied in StatsDCollector.cpp:706 (Phase 6 Task 6.1).",
"type": "stat",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 24
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rate(rippled_warn_total[5m])",
"legendFormat": "Warnings / Sec"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"thresholds": {
"steps": [
{
"color": "green",
"value": null
},
{
"color": "yellow",
"value": 0.1
},
{
"color": "red",
"value": 1
}
]
}
},
"overrides": []
}
},
{
"title": "Resource Drops Rate",
"description": "Rate of resource drop events from the Resource Manager. Sourced from the drop meter (Logic.h:34) which increments when a consumer is disconnected or blocked due to excessive resource usage. Non-zero values mean the node is actively rejecting abusive connections. NOTE: This panel will show no data until the |m -> |c fix is applied in StatsDCollector.cpp:706 (Phase 6 Task 6.1).",
"type": "stat",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 24
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rate(rippled_drop_total[5m])",
"legendFormat": "Drops / Sec"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"thresholds": {
"steps": [
{
"color": "green",
"value": null
},
{
"color": "yellow",
"value": 0.01
},
{
"color": "red",
"value": 0.1
}
]
}
},
"overrides": []
}
}
],
"schemaVersion": 39,
"tags": ["rippled", "statsd", "rpc", "pathfinding", "telemetry"],
"templating": {
"list": []
},
"time": {
"from": "now-1h",
"to": "now"
},
"title": "RPC & Pathfinding (StatsD)",
"uid": "rippled-statsd-rpc"
}
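Several panels above wrap a counter in `rate(...[5m])`, e.g. `rate(rippled_rpc_requests_total[5m])`. As a rough mental model of what that query yields, here is a simplified per-second rate over a window of counter samples; real PromQL `rate()` additionally extrapolates to the window edges, while this sketch only handles counter resets:

```python
# Simplified per-second rate over (timestamp, counter_value) samples,
# approximating rate(rippled_rpc_requests_total[5m]). Real PromQL rate()
# also extrapolates to the window boundaries; this sketch does not.
def simple_rate(samples: list[tuple[float, float]]) -> float:
    if len(samples) < 2:
        return 0.0
    increase = 0.0
    for (_, prev), (_, cur) in zip(samples, samples[1:]):
        # On a counter reset (value drops, e.g. rippled restart),
        # the new value itself is the increase since the reset.
        increase += cur - prev if cur >= prev else cur
    elapsed = samples[-1][0] - samples[0][0]
    return increase / elapsed if elapsed > 0 else 0.0

# Five samples 60s apart: a sustained 100 requests per minute.
samples = [(0, 0), (60, 100), (120, 200), (180, 300), (240, 400)]
print(simple_rate(samples))  # ~1.67 requests/sec
```

This is also why the request-rate stat panel reads in `reqps` while the underlying StatsD counter is a monotonically increasing total.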


@@ -0,0 +1,527 @@
{
"annotations": {
"list": []
},
"description": "Ledger data exchange and object fetch traffic from beast::insight System Metrics. Covers ledger sync, node data retrieval, and transaction set exchange. Requires [insight] server=otel in rippled config.",
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 1,
"id": null,
"links": [],
"panels": [
{
"title": "Ledger Data Exchange (Bytes In)",
"description": "Inbound bytes for ledger data sub-categories. 'ledger_data' = aggregated ledger data, sub-types include Transaction_Set_candidate (proposed tx sets), Transaction_Node (tx tree nodes), and Account_State_Node (state tree nodes). High Account_State_Node traffic indicates state sync; high Transaction_Set_candidate indicates consensus catch-up. Sourced from TrafficCount.h ledger_data_* categories.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_ledger_data_get_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Ledger Data Get [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_ledger_data_share_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Ledger Data Share [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_ledger_data_Transaction_Set_candidate_get_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "TX Set Candidate Get [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_ledger_data_Transaction_Set_candidate_share_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "TX Set Candidate Share [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_ledger_data_Transaction_Node_get_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "TX Node Get [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_ledger_data_Transaction_Node_share_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "TX Node Share [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_ledger_data_Account_State_Node_get_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Account State Node Get [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_ledger_data_Account_State_Node_share_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Account State Node Share [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "decbytes",
"custom": {
"axisLabel": "Bytes In",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Ledger Share/Get Traffic (Bytes)",
"description": "Legacy ledger share and get traffic by sub-type. These are the older ledger fetch protocol categories (as opposed to ledger_data_* which is the newer protocol). Sub-types: Transaction_Set_candidate, Transaction_node, Account_State_node, plus aggregate ledger_share and ledger_get. Sourced from TrafficCount.h ledger_* categories.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_ledger_share_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Ledger Share In [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_ledger_get_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Ledger Get In [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_ledger_Transaction_Set_candidate_share_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "TX Set Candidate Share [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_ledger_Transaction_Set_candidate_get_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "TX Set Candidate Get [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_ledger_Transaction_node_share_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "TX Node Share [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_ledger_Transaction_node_get_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "TX Node Get [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_ledger_Account_State_node_share_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Account State Share [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_ledger_Account_State_node_get_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Account State Get [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "decbytes",
"custom": {
"axisLabel": "Bytes In",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "GetObject Traffic by Type (Bytes In)",
"description": "Object fetch traffic by object type. GetObject is the protocol for fetching specific SHAMap nodes. Types: Ledger (full ledger headers), Transaction (individual txs), Transaction_node (tx tree nodes), Account_State_node (state tree nodes), CAS (Content Addressable Storage objects), Fetch_Pack (batch fetch during catch-up), Transactions (bulk tx fetch). High Fetch_Pack traffic indicates a node is catching up. Sourced from TrafficCount.h getobject_* categories.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_getobject_Ledger_get_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Ledger Get [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_getobject_Ledger_share_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Ledger Share [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_getobject_Transaction_get_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Transaction Get [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_getobject_Transaction_share_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Transaction Share [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_getobject_Transaction_node_get_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "TX Node Get [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_getobject_Transaction_node_share_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "TX Node Share [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_getobject_Account_State_node_get_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Account State Get [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_getobject_Account_State_node_share_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Account State Share [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "decbytes",
"custom": {
"axisLabel": "Bytes In",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "GetObject Aggregate & Special Types (Bytes In)",
"description": "Aggregate getobject traffic plus special categories: CAS (Content Addressable Storage) for SHAMap node fetch, Fetch_Pack for bulk batch downloads during catch-up, Transactions for bulk tx fetch, and the aggregate getobject_get/getobject_share totals. Sourced from TrafficCount.h getobject_* categories.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_getobject_CAS_get_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "CAS Get [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_getobject_CAS_share_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "CAS Share [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_getobject_Fetch_Pack_share_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Fetch Pack Share [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_getobject_Fetch_Pack_get_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Fetch Pack Get [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_getobject_Transactions_get_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Transactions Get [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_getobject_get_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Aggregate Get [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_getobject_share_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Aggregate Share [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "decbytes",
"custom": {
"axisLabel": "Bytes In",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "GetObject Messages by Type",
"description": "Message counts for object fetch operations. Shows how many individual fetch requests and responses are exchanged per type. High message counts with low byte counts indicate small object fetches; the inverse indicates large batch transfers. Sourced from TrafficCount.h getobject_* categories.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_getobject_Ledger_get_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "Ledger Get [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_getobject_Transaction_get_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "Transaction Get [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_getobject_Transaction_node_get_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "TX Node Get [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_getobject_Account_State_node_get_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "Account State Get [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_getobject_CAS_get_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "CAS Get [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_getobject_Fetch_Pack_get_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "Fetch Pack Get [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_getobject_Transactions_get_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "Transactions Get [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "none",
"custom": {
"axisLabel": "Messages In",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Overlay Traffic Heatmap (All Categories, Bytes In)",
"description": "Bar gauge showing all overlay traffic categories ranked by inbound bytes. Provides a complete at-a-glance view of which protocol message types consume the most bandwidth across all 57+ traffic categories. Sourced from all TrafficCount.h categories via wildcard match.",
"type": "bargauge",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
},
"displayMode": "gradient",
"orientation": "horizontal",
"reduceOptions": {
"calcs": ["lastNotNull"],
"fields": "",
"values": false
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "topk(20, {exported_instance=~\"$node\", __name__=~\"rippled_.*_Bytes_In\", __name__!~\"rippled_total_.*\"})",
"legendFormat": "{{__name__}} [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "decbytes",
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "yellow",
"value": 1048576
},
{
"color": "red",
"value": 104857600
}
]
}
},
"overrides": []
}
}
],
"schemaVersion": 39,
"tags": ["rippled", "statsd", "ledger", "sync", "telemetry"],
"templating": {
"list": [
{
"name": "node",
"label": "Node",
"description": "Filter by rippled node (service.instance.id)",
"type": "query",
"query": "label_values(rippled_ledger_data_get_Bytes_In, exported_instance)",
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"includeAll": true,
"allValue": ".*",
"current": {
"text": "All",
"value": "$__all"
},
"multi": true,
"refresh": 2,
"sort": 1
}
]
},
"time": {
"from": "now-1h",
"to": "now"
},
"title": "Ledger Data & Sync (System Metrics)",
"uid": "rippled-system-ledger-sync"
}
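The dashboards above are plain JSON exports, so a few invariants can be checked mechanically before import: every target has a non-empty `expr`, and every `$var` referenced in a query is declared under `templating.list`. A minimal sketch using only the Python standard library (`lint_dashboard` and the trimmed `SAMPLE` below are illustrative stand-ins, not part of the repository tooling):

```python
import json
import re

# Trimmed stand-in for a real dashboard export like the one above.
SAMPLE = json.dumps({
    "title": "Ledger Data & Sync (System Metrics)",
    "uid": "rippled-system-ledger-sync",
    "schemaVersion": 39,
    "panels": [
        {
            "title": "GetObject Messages by Type",
            "type": "timeseries",
            "targets": [
                {"expr": "rippled_getobject_Ledger_get_Messages_In"
                         "{exported_instance=~\"$node\"}"}
            ],
        }
    ],
    "templating": {"list": [{"name": "node", "type": "query"}]},
})

def lint_dashboard(raw: str) -> list[str]:
    """Return a list of problems found in a dashboard JSON export."""
    dash = json.loads(raw)
    problems = []
    for key in ("title", "uid", "panels", "templating"):
        if key not in dash:
            problems.append(f"missing top-level key: {key}")
    declared = {v["name"] for v in dash.get("templating", {}).get("list", [])}
    for panel in dash.get("panels", []):
        for target in panel.get("targets", []):
            expr = target.get("expr", "")
            if not expr:
                problems.append(f"{panel.get('title')!r}: target with empty expr")
            # Every $var used in a query should be declared in templating
            # ($__-prefixed names are Grafana built-ins and are skipped).
            for var in re.findall(r"\$(\w+)", expr):
                if var not in declared and not var.startswith("__"):
                    problems.append(f"{panel.get('title')!r}: undeclared ${var}")
    return problems

print(lint_dashboard(SAMPLE))  # → []
```

Running this over each file in the commit before merging would catch the kind of variable/query drift that otherwise only shows up as an empty panel in Grafana.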


@@ -0,0 +1,692 @@
{
"annotations": {
"list": []
},
"description": "Network traffic and peer metrics from beast::insight System Metrics. Requires [insight] server=otel in rippled config.",
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 1,
"id": null,
"links": [],
"panels": [
{
"title": "Active Peers",
"description": "Number of active inbound and outbound peer connections. Sourced from Peer_Finder.Active_Inbound_Peers and Peer_Finder.Active_Outbound_Peers gauges (PeerfinderManager.cpp:214-215). A healthy mainnet node typically has 10-21 outbound and 0-85 inbound peers depending on configuration.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_Peer_Finder_Active_Inbound_Peers{exported_instance=~\"$node\"}",
"legendFormat": "Inbound Peers [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_Peer_Finder_Active_Outbound_Peers{exported_instance=~\"$node\"}",
"legendFormat": "Outbound Peers [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "none",
"custom": {
"axisLabel": "Peers",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Peer Disconnects",
"description": "Cumulative count of peer disconnections. Sourced from the Overlay.Peer_Disconnects gauge (OverlayImpl.h:557). A rising trend indicates network instability, aggressive peer management, or resource exhaustion causing connection drops.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_Overlay_Peer_Disconnects{exported_instance=~\"$node\"}",
"legendFormat": "Disconnects [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "none",
"custom": {
"axisLabel": "Disconnects",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Total Network Bytes",
"description": "Total bytes sent and received across all peer connections. Sourced from the total.Bytes_In and total.Bytes_Out traffic category gauges (OverlayImpl.h:535-548). Provides a high-level view of network bandwidth consumption.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_total_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Bytes In [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_total_Bytes_Out{exported_instance=~\"$node\"}",
"legendFormat": "Bytes Out [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "decbytes",
"custom": {
"axisLabel": "Bytes",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Total Network Messages",
"description": "Total messages sent and received across all peer connections. Sourced from the total.Messages_In and total.Messages_Out traffic category gauges (OverlayImpl.h:535-548). Shows the overall message throughput of the overlay network.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_total_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "Messages In [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_total_Messages_Out{exported_instance=~\"$node\"}",
"legendFormat": "Messages Out [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "none",
"custom": {
"axisLabel": "Messages",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Transaction Traffic",
"description": "Bytes and messages for transaction-related overlay traffic. Includes the transactions traffic category (OverlayImpl/TrafficCount.h). Spikes indicate high transaction volume on the network or transaction flooding.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_transactions_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "TX Messages In [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_transactions_Messages_Out{exported_instance=~\"$node\"}",
"legendFormat": "TX Messages Out [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_transactions_duplicate_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "TX Duplicate In [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "none",
"custom": {
"axisLabel": "Messages",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Proposal Traffic",
"description": "Messages for consensus proposal overlay traffic. Includes proposals, proposals_untrusted, and proposals_duplicate categories (TrafficCount.h). High untrusted or duplicate counts may indicate UNL misconfiguration or network spam.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_proposals_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "Proposals In [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_proposals_Messages_Out{exported_instance=~\"$node\"}",
"legendFormat": "Proposals Out [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_proposals_untrusted_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "Untrusted In [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_proposals_duplicate_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "Duplicate In [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "none",
"custom": {
"axisLabel": "Messages",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Validation Traffic",
"description": "Messages for validation overlay traffic. Includes validations, validations_untrusted, and validations_duplicate categories (TrafficCount.h). Monitoring trusted vs untrusted validation traffic helps detect UNL health issues.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 24
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_validations_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "Validations In [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_validations_Messages_Out{exported_instance=~\"$node\"}",
"legendFormat": "Validations Out [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_validations_untrusted_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "Untrusted In [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_validations_duplicate_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "Duplicate In [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "none",
"custom": {
"axisLabel": "Messages",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Overlay Traffic by Category (Bytes In)",
"description": "Top traffic categories by inbound bytes. Includes all 57 overlay traffic categories from TrafficCount.h. Shows which protocol message types consume the most bandwidth. Categories include transactions, proposals, validations, ledger data, getobject, and overlay overhead.",
"type": "bargauge",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 24
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "topk(10, {exported_instance=~\"$node\", __name__=~\"rippled_.*_Bytes_In\", __name__!~\"rippled_total_.*\"})",
"legendFormat": "{{__name__}} [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "decbytes"
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "rippled_transactions_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Transactions"
}
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_proposals_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Proposals"
}
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_validations_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Validations"
}
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_overhead_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Overhead"
}
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_overhead_overlay_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Overhead Overlay"
}
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_ping_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Ping"
}
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_status_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Status"
}
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_getObject_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Get Object"
}
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_haveTxSet_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Have Tx Set"
}
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_ledgerData_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Ledger Data"
}
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_ledger_share_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Ledger Share"
}
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_ledger_data_get_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Ledger Data Get"
}
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_ledger_data_share_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Ledger Data Share"
}
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_ledger_data_Account_State_Node_get_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Account State Node Get"
}
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_ledger_data_Account_State_Node_share_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Account State Node Share"
}
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_ledger_data_Transaction_Node_get_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Transaction Node Get"
}
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_ledger_data_Transaction_Node_share_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Transaction Node Share"
}
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_ledger_data_Transaction_Set_candidate_get_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Tx Set Candidate Get"
}
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_ledger_Account_State_node_share_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Account State Node Share (Legacy)"
}
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_ledger_Transaction_Set_candidate_share_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Tx Set Candidate Share"
}
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_ledger_Transaction_node_share_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Transaction Node Share (Legacy)"
}
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_set_get_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Set Get"
}
]
}
]
}
}
],
"schemaVersion": 39,
"tags": ["rippled", "statsd", "network", "telemetry"],
"templating": {
"list": [
{
"name": "node",
"label": "Node",
"description": "Filter by rippled node (service.instance.id)",
"type": "query",
"query": "label_values(rippled_Peer_Finder_Active_Inbound_Peers, exported_instance)",
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"includeAll": true,
"allValue": ".*",
"current": {
"text": "All",
"value": "$__all"
},
"multi": true,
"refresh": 2,
"sort": 1
}
]
},
"time": {
"from": "now-1h",
"to": "now"
},
"title": "Network Traffic (System Metrics)",
"uid": "rippled-system-network"
}
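Exports like the one above carry `"id": null`, which matters when provisioning them programmatically: Grafana's dashboard-import endpoint (`POST /api/dashboards/db`) rejects or mis-routes uploads whose `id` refers to another installation, and matches create-or-update by `uid` instead. A small helper for building the request body (a sketch of the common wrapping convention; field names beyond `dashboard`/`overwrite` may vary across Grafana versions):

```python
def import_payload(dashboard: dict, overwrite: bool = True) -> dict:
    """Wrap a dashboard export in the body shape expected by Grafana's
    POST /api/dashboards/db endpoint. 'id' is forced to None so Grafana
    creates or updates by uid rather than by a foreign numeric id."""
    body = dict(dashboard)      # shallow copy; don't mutate the caller's export
    body["id"] = None
    return {"dashboard": body, "overwrite": overwrite, "folderId": 0}
```

The resulting payload can then be POSTed with any HTTP client, authenticated with a Grafana API token in the `Authorization: Bearer` header.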

File diff suppressed because it is too large.


@@ -0,0 +1,587 @@
{
"annotations": {
"list": []
},
"description": "Detailed overlay traffic breakdown for categories not covered by the main Network Traffic dashboard. Includes squelch, overhead, validator lists, object fetch, ledger sync, and protocol negotiation traffic. Requires [insight] server=otel in rippled config.",
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 1,
"id": null,
"links": [],
"panels": [
{
"title": "Squelch Traffic (Messages)",
"description": "Squelch-related overlay messages. Squelch is the peer traffic management protocol that suppresses redundant message forwarding. 'squelch' = squelch control messages, 'squelch_suppressed' = messages suppressed by squelch, 'squelch_ignored' = squelch directives that were ignored. High suppressed counts indicate effective bandwidth savings; high ignored counts may indicate misconfigured peers. Sourced from TrafficCount.h squelch categories.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_squelch_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "Squelch In [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_squelch_Messages_Out{exported_instance=~\"$node\"}",
"legendFormat": "Squelch Out [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_squelch_suppressed_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "Suppressed In [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_squelch_suppressed_Messages_Out{exported_instance=~\"$node\"}",
"legendFormat": "Suppressed Out [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_squelch_ignored_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "Ignored In [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_squelch_ignored_Messages_Out{exported_instance=~\"$node\"}",
"legendFormat": "Ignored Out [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "none",
"custom": {
"axisLabel": "Messages",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Overhead Traffic Breakdown (Bytes)",
"description": "Overlay protocol overhead by sub-category. 'overhead' = base protocol overhead (ping, status, etc.), 'overhead_cluster' = intra-cluster communication overhead, 'overhead_manifest' = validator manifest distribution overhead. High cluster overhead may indicate frequent cluster state syncs; high manifest overhead occurs during UNL changes. Sourced from TrafficCount.h overhead categories.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_overhead_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Base Overhead In [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_overhead_Bytes_Out{exported_instance=~\"$node\"}",
"legendFormat": "Base Overhead Out [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_overhead_cluster_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Cluster In [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_overhead_cluster_Bytes_Out{exported_instance=~\"$node\"}",
"legendFormat": "Cluster Out [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_overhead_manifest_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Manifest In [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_overhead_manifest_Bytes_Out{exported_instance=~\"$node\"}",
"legendFormat": "Manifest Out [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "decbytes",
"custom": {
"axisLabel": "Bytes",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Validator List Traffic",
"description": "Validator list (UNL) distribution traffic. Validator lists are exchanged when peers share their trusted validator configurations. Spikes occur during UNL updates or when new peers connect. Sourced from TrafficCount.h validator_lists category.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_validator_lists_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Bytes In [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_validator_lists_Bytes_Out{exported_instance=~\"$node\"}",
"legendFormat": "Bytes Out [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_validator_lists_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "Messages In [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_validator_lists_Messages_Out{exported_instance=~\"$node\"}",
"legendFormat": "Messages Out [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "none",
"custom": {
"axisLabel": "Count",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": [
{
"matcher": {
"id": "byRegexp",
"options": "/Bytes/"
},
"properties": [
{
"id": "custom.axisPlacement",
"value": "right"
},
{
"id": "unit",
"value": "decbytes"
}
]
}
]
}
},
{
"title": "Set Get/Share Traffic (Bytes)",
"description": "Transaction set get and share traffic. 'set_get' = requests to fetch transaction sets (sent during ledger close), 'set_share' = responses sharing transaction sets. High set_get traffic indicates peers frequently requesting missing transaction sets, which may signal sync delays. Sourced from TrafficCount.h set_get/set_share categories.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_set_get_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Set Get In [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_set_get_Bytes_Out{exported_instance=~\"$node\"}",
"legendFormat": "Set Get Out [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_set_share_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Set Share In [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_set_share_Bytes_Out{exported_instance=~\"$node\"}",
"legendFormat": "Set Share Out [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "decbytes",
"custom": {
"axisLabel": "Bytes",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Have/Requested Transactions (Messages)",
"description": "Transaction availability protocol messages. 'have_transactions' = advertisements that a peer has specific transactions available, 'requested_transactions' = explicit requests for transaction data. A high ratio of requested to have may indicate peers are behind on transaction propagation. Sourced from TrafficCount.h have_transactions/requested_transactions categories.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_have_transactions_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "Have TX In [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_have_transactions_Messages_Out{exported_instance=~\"$node\"}",
"legendFormat": "Have TX Out [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_requested_transactions_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "Requested TX In [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_requested_transactions_Messages_Out{exported_instance=~\"$node\"}",
"legendFormat": "Requested TX Out [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "none",
"custom": {
"axisLabel": "Messages",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Unknown / Unclassified Traffic",
"description": "Traffic that does not match any known overlay message category. Non-zero values may indicate protocol version mismatches, corrupted messages, or new message types not yet classified. Sourced from TrafficCount.h unknown category.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_unknown_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Unknown Bytes In [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_unknown_Bytes_Out{exported_instance=~\"$node\"}",
"legendFormat": "Unknown Bytes Out [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_unknown_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "Unknown Messages In [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_unknown_Messages_Out{exported_instance=~\"$node\"}",
"legendFormat": "Unknown Messages Out [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "none",
"custom": {
"axisLabel": "Count",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": [
{
"matcher": {
"id": "byRegexp",
"options": "/Bytes/"
},
"properties": [
{
"id": "custom.axisPlacement",
"value": "right"
},
{
"id": "unit",
"value": "decbytes"
}
]
}
]
}
},
{
"title": "Proof Path Traffic",
"description": "Proof path request/response traffic for ledger state proof exchange. Used by peers to verify specific ledger entries without downloading the full ledger. High request volume may indicate peers validating state during catch-up. Sourced from TrafficCount.h proof_path_request/proof_path_response categories.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 24
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_proof_path_request_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Request Bytes In [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_proof_path_request_Bytes_Out{exported_instance=~\"$node\"}",
"legendFormat": "Request Bytes Out [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_proof_path_response_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Response Bytes In [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_proof_path_response_Bytes_Out{exported_instance=~\"$node\"}",
"legendFormat": "Response Bytes Out [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "decbytes",
"custom": {
"axisLabel": "Bytes",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Replay Delta Traffic",
"description": "Replay delta request/response traffic for ledger replay protocol. Used during catch-up to efficiently replay ledger state changes. Sourced from TrafficCount.h replay_delta_request/replay_delta_response categories.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 24
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_replay_delta_request_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Request Bytes In [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_replay_delta_request_Bytes_Out{exported_instance=~\"$node\"}",
"legendFormat": "Request Bytes Out [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_replay_delta_response_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Response Bytes In [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_replay_delta_response_Bytes_Out{exported_instance=~\"$node\"}",
"legendFormat": "Response Bytes Out [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "decbytes",
"custom": {
"axisLabel": "Bytes",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
}
],
"schemaVersion": 39,
"tags": ["rippled", "statsd", "overlay", "network", "telemetry"],
"templating": {
"list": [
{
"name": "node",
"label": "Node",
"description": "Filter by rippled node (service.instance.id)",
"type": "query",
"query": "label_values(rippled_squelch_Messages_In, exported_instance)",
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"includeAll": true,
"allValue": ".*",
"current": {
"text": "All",
"value": "$__all"
},
"multi": true,
"refresh": 2,
"sort": 1
}
]
},
"time": {
"from": "now-1h",
"to": "now"
},
"title": "Overlay Traffic Detail (System Metrics)",
"uid": "rippled-system-overlay-detail"
}
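The RPC dashboard that follows builds its latency panels from `histogram_quantile(q, rate(rippled_rpc_time_bucket[5m]))`. The mechanics of that function are easy to get wrong when sanity-checking panel output by hand, so here is an illustrative re-implementation (not the PromQL source, just its documented interpolation behaviour) over cumulative `(upper_bound, count)` buckets:

```python
def histogram_quantile(q: float, buckets: list[tuple[float, float]]) -> float:
    """Approximate PromQL histogram_quantile(): 'buckets' are cumulative
    (upper_bound, count) pairs sorted by bound, last bound = +Inf.
    Linear interpolation is applied within the bucket containing the rank."""
    total = buckets[-1][1]
    rank = q * total
    prev_bound, prev_count = 0.0, 0.0
    for bound, count in buckets:
        if count >= rank:
            if bound == float("inf"):
                # Rank falls in the +Inf bucket: return the largest finite bound.
                return prev_bound
            if count == prev_count:
                return bound
            return prev_bound + (bound - prev_bound) * (
                (rank - prev_count) / (count - prev_count)
            )
        prev_bound, prev_count = bound, count
    return prev_bound
```

For example, with buckets `[(0.1, 50), (0.5, 90), (1.0, 100), (inf, 100)]` the P95 lands in the 0.5–1.0 bucket and interpolates to 0.75 ms, which is the kind of value the "P95 Response Time" panel would render.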


@@ -0,0 +1,417 @@
{
"annotations": {
"list": []
},
"description": "RPC and pathfinding metrics from beast::insight System Metrics. Requires [insight] server=otel in rippled config.",
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 1,
"id": null,
"links": [],
"panels": [
{
"title": "RPC Request Rate (System Metrics)",
"description": "Rate of RPC requests as counted by the beast::insight counter. Sourced from rpc.requests (ServerHandler.cpp:108) which increments on every HTTP and WebSocket RPC request. Compare with the span-based rpc.request rate in the RPC Performance dashboard for cross-validation.",
"type": "stat",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rate(rippled_rpc_requests_total{exported_instance=~\"$node\"}[5m])",
"legendFormat": "Requests / Sec [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "reqps"
},
"overrides": []
}
},
{
"title": "RPC Response Time (System Metrics)",
"description": "P95 and P50 of RPC response time from the beast::insight timer. Sourced from the rpc.time event (ServerHandler.cpp:110) which records elapsed milliseconds for each RPC response. This measures the full HTTP handler time, not just command execution. Compare with span-based rpc.request duration.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.95, sum by (le, exported_instance) (rate(rippled_rpc_time_bucket{exported_instance=~\"$node\"}[5m])))",
"legendFormat": "P95 Response Time [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.5, sum by (le, exported_instance) (rate(rippled_rpc_time_bucket{exported_instance=~\"$node\"}[5m])))",
"legendFormat": "P50 Response Time [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "ms",
"custom": {
"axisLabel": "Latency (ms)",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "RPC Response Size",
"description": "P95 and P50 of RPC response payload size in bytes. Sourced from the rpc.size event (ServerHandler.cpp:109) which records the byte length of each RPC JSON response. Large responses may indicate expensive queries (e.g. account_tx with many results) or API misuse.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.95, sum by (le, exported_instance) (rate(rippled_rpc_size_bucket{exported_instance=~\"$node\"}[5m])))",
"legendFormat": "P95 Response Size [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.5, sum by (le, exported_instance) (rate(rippled_rpc_size_bucket{exported_instance=~\"$node\"}[5m])))",
"legendFormat": "P50 Response Size [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "decbytes",
"custom": {
"axisLabel": "Size (Bytes)",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "RPC Response Time Distribution",
"description": "Distribution of RPC response times from the beast::insight timer showing P50, P90, P95, and P99 quantiles. Sourced from the rpc.time event (ServerHandler.cpp:110). Useful for detecting bimodal latency or long-tail requests.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.5, sum by (le, exported_instance) (rate(rippled_rpc_time_bucket{exported_instance=~\"$node\"}[5m])))",
"legendFormat": "P50 [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.9, sum by (le, exported_instance) (rate(rippled_rpc_time_bucket{exported_instance=~\"$node\"}[5m])))",
"legendFormat": "P90 [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.95, sum by (le, exported_instance) (rate(rippled_rpc_time_bucket{exported_instance=~\"$node\"}[5m])))",
"legendFormat": "P95 [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.99, sum by (le, exported_instance) (rate(rippled_rpc_time_bucket{exported_instance=~\"$node\"}[5m])))",
"legendFormat": "P99 [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "ms",
"custom": {
"axisLabel": "Latency (ms)",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Pathfinding Fast Duration",
"description": "P95 and P50 of fast pathfinding execution time. Sourced from the pathfind_fast event (PathRequests.h:23) which records the duration of the fast pathfinding algorithm. Fast pathfinding uses a simplified search that trades accuracy for speed.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.95, sum by (le, exported_instance) (rate(rippled_pathfind_fast_bucket{exported_instance=~\"$node\"}[5m])))",
"legendFormat": "P95 Fast Pathfind [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.5, sum by (le, exported_instance) (rate(rippled_pathfind_fast_bucket{exported_instance=~\"$node\"}[5m])))",
"legendFormat": "P50 Fast Pathfind [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "ms",
"custom": {
"axisLabel": "Duration (ms)",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Pathfinding Full Duration",
"description": "P95 and P50 of full pathfinding execution time. Sourced from the pathfind_full event (PathRequests.h:24) which records the duration of the exhaustive pathfinding search. Full pathfinding is more expensive and can take significantly longer than fast mode.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.95, sum by (le, exported_instance) (rate(rippled_pathfind_full_bucket{exported_instance=~\"$node\"}[5m])))",
"legendFormat": "P95 Full Pathfind [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.5, sum by (le, exported_instance) (rate(rippled_pathfind_full_bucket{exported_instance=~\"$node\"}[5m])))",
"legendFormat": "P50 Full Pathfind [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "ms",
"custom": {
"axisLabel": "Duration (ms)",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Resource Warnings Rate",
"description": "Rate of resource warning events from the Resource Manager. Sourced from the warn meter (Logic.h:33) which increments when a consumer (peer or RPC client) exceeds the warning threshold for resource usage. A rising rate indicates aggressive clients that may need throttling. NOTE: This panel will show no data until the |m -> |c fix is applied in System MetricsCollector.cpp:706 (Phase 6 Task 6.1).",
"type": "stat",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 24
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rate(rippled_warn_total{exported_instance=~\"$node\"}[5m])",
"legendFormat": "Warnings / Sec [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"thresholds": {
"steps": [
{
"color": "green",
"value": null
},
{
"color": "yellow",
"value": 0.1
},
{
"color": "red",
"value": 1
}
]
}
},
"overrides": []
}
},
{
"title": "Resource Drops Rate",
"description": "Rate of resource drop events from the Resource Manager. Sourced from the drop meter (Logic.h:34) which increments when a consumer is disconnected or blocked due to excessive resource usage. Non-zero values mean the node is actively rejecting abusive connections. NOTE: This panel will show no data until the |m -> |c fix is applied in System MetricsCollector.cpp:706 (Phase 6 Task 6.1).",
"type": "stat",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 24
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rate(rippled_drop_total{exported_instance=~\"$node\"}[5m])",
"legendFormat": "Drops / Sec [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"thresholds": {
"steps": [
{
"color": "green",
"value": null
},
{
"color": "yellow",
"value": 0.01
},
{
"color": "red",
"value": 0.1
}
]
}
},
"overrides": []
}
}
],
"schemaVersion": 39,
"tags": ["rippled", "statsd", "rpc", "pathfinding", "telemetry"],
"templating": {
"list": [
{
"name": "node",
"label": "Node",
"description": "Filter by rippled node (service.instance.id)",
"type": "query",
"query": "label_values(rippled_rpc_requests_total, exported_instance)",
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"includeAll": true,
"allValue": ".*",
"current": {
"text": "All",
"value": "$__all"
},
"multi": true,
"refresh": 2,
"sort": 1
}
]
},
"time": {
"from": "now-1h",
"to": "now"
},
"title": "RPC & Pathfinding (System Metrics)",
"uid": "rippled-system-rpc"
}
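The panels above all share the same histogram_quantile-over-rate shape. As a quick way to sanity-check one of those expressions outside Grafana, here is a minimal sketch that runs it as an instant query against Prometheus; the `p95_rpc_time` helper name is hypothetical, and it assumes the stack's Prometheus at localhost:9090.

```shell
#!/usr/bin/env bash
# Sketch: reproduce the "RPC Response Time" panel's P95 expression and
# run it as an instant query against Prometheus. Hypothetical helper;
# assumes Prometheus at localhost:9090 as provisioned by this stack.
PROM="${PROM:-http://localhost:9090}"

p95_rpc_time() {
  # Substitute a node regex for the dashboard's $node variable.
  local node="${1:-.*}"
  printf 'histogram_quantile(0.95, sum by (le, exported_instance) (rate(rippled_rpc_time_bucket{exported_instance=~"%s"}[5m])))' "$node"
}

# Usage (requires the observability stack to be up):
#   curl -sG "$PROM/api/v1/query" --data-urlencode "query=$(p95_rpc_time 'Node-1')" \
#     | jq -r '.data.result[] | "\(.metric.exported_instance): \(.value[1]) ms"'
```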


@@ -0,0 +1,384 @@
{
"annotations": {
"list": []
},
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 1,
"id": null,
"links": [],
"panels": [
{
"title": "Transaction Processing Rate",
"description": "Rate of transactions entering the processing pipeline. tx.process (NetworkOPs.cpp:1227) fires when a transaction is submitted locally or received from a peer and enters processTransaction(). tx.receive (PeerImp.cpp:1273) fires when a raw transaction message arrives from a peer before deduplication.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum by (exported_instance) (rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", span_name=\"tx.process\"}[5m]))",
"legendFormat": "tx.process / Sec [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "sum by (exported_instance) (rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", span_name=\"tx.receive\"}[5m]))",
"legendFormat": "tx.receive / Sec [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"custom": {
"axisLabel": "Transactions / Sec",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Transaction Processing Latency",
"description": "p95 and p50 latency of transaction processing (tx.process span). Measures the time from when a transaction enters processTransaction() to completion. Computed via histogram_quantile() over the spanmetrics duration histogram with a 5m rate window.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.95, sum by (le, exported_instance) (rate(traces_span_metrics_duration_milliseconds_bucket{exported_instance=~\"$node\", span_name=\"tx.process\"}[5m])))",
"legendFormat": "P95 [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.50, sum by (le, exported_instance) (rate(traces_span_metrics_duration_milliseconds_bucket{exported_instance=~\"$node\", span_name=\"tx.process\"}[5m])))",
"legendFormat": "P50 [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "ms",
"custom": {
"axisLabel": "Latency (ms)",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Transaction Path Distribution",
"description": "Breakdown of transactions by origin path. The xrpl.tx.local attribute indicates whether the transaction was submitted locally (true) or received from a peer (false). Helps understand the ratio of locally-originated vs relayed transactions.",
"type": "piechart",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum by (xrpl_tx_local, exported_instance) (rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", xrpl_tx_local=~\"$tx_origin\", span_name=\"tx.process\"}[5m]))",
"legendFormat": "Local = {{xrpl_tx_local}} [{{exported_instance}}]"
}
]
},
{
"title": "Transaction Receive vs Suppressed",
"description": "Total rate of raw transaction messages received from peers (tx.receive span from PeerImp.cpp:1273). This fires before deduplication via the HashRouter, so the difference between tx.receive and tx.process reflects suppressed duplicate transactions.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum by (exported_instance) (rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", span_name=\"tx.receive\"}[5m]))",
"legendFormat": "Total Received [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"custom": {
"axisLabel": "Transactions / Sec",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Transaction Processing Duration Heatmap",
"description": "Heatmap showing the distribution of tx.process span durations across histogram buckets over time. Each cell represents the count of transactions that completed within that latency bucket in a 5m window. Reveals whether processing times are consistent or exhibit multi-modal patterns.",
"type": "heatmap",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
},
"yAxis": {
"axisLabel": "Duration (ms)"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum(increase(traces_span_metrics_duration_milliseconds_bucket{exported_instance=~\"$node\", span_name=\"tx.process\"}[5m])) by (le)",
"legendFormat": "{{le}}",
"format": "heatmap"
}
]
},
{
"title": "Transaction Apply Duration per Ledger",
"description": "p95 and p50 latency of applying the consensus transaction set to a new ledger. The tx.apply span (BuildLedger.cpp:88) wraps the applyTransactions() function that iterates through the CanonicalTXSet and applies each transaction to the OpenView. Long durations indicate heavy transaction sets or expensive transaction processing.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.95, sum by (le, exported_instance) (rate(traces_span_metrics_duration_milliseconds_bucket{exported_instance=~\"$node\", span_name=\"tx.apply\"}[5m])))",
"legendFormat": "P95 tx.apply [{{exported_instance}}]"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.50, sum by (le, exported_instance) (rate(traces_span_metrics_duration_milliseconds_bucket{exported_instance=~\"$node\", span_name=\"tx.apply\"}[5m])))",
"legendFormat": "P50 tx.apply [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "ms",
"custom": {
"axisLabel": "Latency (ms)",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Peer Transaction Receive Rate",
"description": "Rate of transaction messages received from network peers. Sourced from the tx.receive span (PeerImp.cpp:1273) which fires in the onMessage(TMTransaction) handler. High rates may indicate network-wide transaction volume spikes or peer flooding.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 24
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum by (exported_instance) (rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", span_name=\"tx.receive\"}[5m]))",
"legendFormat": "tx.receive / Sec [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"custom": {
"axisLabel": "Transactions / Sec",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Transaction Apply Failed Rate",
"description": "Rate of tx.apply spans completing with error status, indicating transaction application failures during ledger building. The span records xrpl.ledger.tx_failed as an attribute. Thresholds: green < 0.1/sec, yellow 0.1-1/sec, red > 1/sec. Some failures are normal (e.g. conflicting offers) but sustained high rates may indicate issues.",
"type": "stat",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 24
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum by (exported_instance) (rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", span_name=\"tx.apply\", status_code=\"STATUS_CODE_ERROR\"}[5m]))",
"legendFormat": "Failed / Sec [{{exported_instance}}]"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"thresholds": {
"steps": [
{
"color": "green",
"value": null
},
{
"color": "yellow",
"value": 0.1
},
{
"color": "red",
"value": 1
}
]
}
},
"overrides": []
}
}
],
"schemaVersion": 39,
"tags": ["rippled", "transactions", "telemetry"],
"templating": {
"list": [
{
"name": "node",
"label": "Node",
"description": "Filter by rippled node (service.instance.id — e.g. Node-1)",
"type": "query",
"query": "label_values(traces_span_metrics_calls_total, exported_instance)",
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"includeAll": true,
"allValue": ".*",
"current": {
"text": "All",
"value": "$__all"
},
"multi": true,
"refresh": 2,
"sort": 1
},
{
"name": "tx_origin",
"label": "TX Origin",
"description": "Filter by transaction origin (true = local submit, false = peer relay)",
"type": "query",
"query": "label_values(traces_span_metrics_calls_total{span_name=\"tx.process\"}, xrpl_tx_local)",
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"includeAll": true,
"allValue": ".*",
"current": {
"text": "All",
"value": "$__all"
},
"multi": true,
"refresh": 2,
"sort": 1
}
]
},
"time": {
"from": "now-1h",
"to": "now"
},
"title": "Transaction Overview",
"uid": "rippled-transactions"
}
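Every spanmetrics panel in this dashboard filters traces_span_metrics_calls_total (or its duration histogram) by span_name. A minimal sketch for issuing the same call-rate query from the command line; `span_rate_query` is a hypothetical helper, and localhost:9090 matches this stack's Prometheus.

```shell
#!/usr/bin/env bash
# Sketch: build the call-rate PromQL the spanmetrics panels use, for an
# arbitrary span name and node regex. Hypothetical helper; assumes
# Prometheus at localhost:9090 as provisioned by this stack.
PROM="${PROM:-http://localhost:9090}"

span_rate_query() {
  local span="$1" node="${2:-.*}"
  printf 'sum by (exported_instance) (rate(traces_span_metrics_calls_total{exported_instance=~"%s", span_name="%s"}[5m]))' "$node" "$span"
}

# Usage (requires the stack to be up):
#   curl -sG "$PROM/api/v1/query" --data-urlencode "query=$(span_rate_query tx.process)" \
#     | jq '.data.result'
```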


@@ -0,0 +1,12 @@
apiVersion: 1
providers:
- name: rippled-telemetry
orgId: 1
folder: rippled
type: file
disableDeletion: false
editable: true
options:
path: /var/lib/grafana/dashboards
foldersFromFilesStructure: false


@@ -0,0 +1,13 @@
# Grafana datasource provisioning for the rippled telemetry stack.
# Auto-configures Jaeger as a trace data source on Grafana startup.
# Access Grafana at http://localhost:3000, then use Explore -> Jaeger
# to browse rippled traces.
apiVersion: 1
datasources:
- name: Jaeger
type: jaeger
access: proxy
url: http://jaeger:16686
isDefault: false


@@ -0,0 +1,24 @@
# Grafana Loki data source provisioning for rippled log-trace correlation.
#
# Phase 8: Log-Trace Correlation and Centralized Log Ingestion
#
# Loki ingests rippled logs via OTel Collector's filelog receiver.
# The derivedFields config links trace_id values in log lines back to
# Tempo traces, enabling one-click log-to-trace navigation in Grafana.
#
# See: OpenTelemetryPlan/Phase8_taskList.md (Tasks 8.2, 8.4)
apiVersion: 1
datasources:
- name: Loki
type: loki
access: proxy
url: http://loki:3100
uid: loki
jsonData:
derivedFields:
- datasourceUid: tempo
matcherRegex: "trace_id=(\\w+)"
name: TraceID
url: "$${__value.raw}"


@@ -0,0 +1,10 @@
apiVersion: 1
datasources:
- name: Prometheus
type: prometheus
uid: prometheus
access: proxy
url: http://prometheus:9090
isDefault: true
editable: true


@@ -0,0 +1,155 @@
# Grafana datasource provisioning for Grafana Tempo.
# Auto-configures Tempo as a trace data source on Grafana startup.
# Access Grafana at http://localhost:3000, then use Explore -> Tempo
# to browse rippled traces using TraceQL.
#
# Search filters provide pre-configured dropdowns in the Explore UI.
# Each phase adds filters for the span attributes it introduces.
# Phase 1b (infra): Base filters — node identity, service, span name, status.
# Phase 2 (RPC): RPC command, status, role filters.
# Phase 3 (TX): Transaction hash, local/peer origin, status.
# Phase 4 (Cons): Consensus mode, round, ledger sequence, close time.
apiVersion: 1
datasources:
- name: Tempo
type: tempo
access: proxy
url: http://tempo:3200
uid: tempo
jsonData:
nodeGraph:
enabled: true
serviceMap:
datasourceUid: prometheus
# Phase 8: Trace-to-log correlation — enables one-click navigation
# from a Tempo trace to the corresponding Loki log lines. Filters
# by trace_id so only logs from the same trace are shown.
tracesToLogs:
datasourceUid: loki
filterByTraceID: true
filterBySpanID: false
tags: ["partition", "severity"]
tracesToMetrics:
datasourceUid: prometheus
spanStartTimeShift: "-1h"
spanEndTimeShift: "1h"
search:
filters:
# --- Node identification filters ---
# service.name: logical service name (default: "rippled").
# Useful when running multiple service types in the same collector.
- id: service-name
tag: service.name
operator: "="
scope: resource
type: static
# service.instance.id: unique node identifier — configurable via
# the service_instance_id setting in [telemetry], defaults to the
# node's public key. E.g. "Node-1" or "nHB1X37...".
- id: node-id
tag: service.instance.id
operator: "="
scope: resource
type: static
# service.version: rippled build version (e.g., "2.4.0-b1").
# Filter traces from specific software releases.
- id: node-version
tag: service.version
operator: "="
scope: resource
type: dynamic
# xrpl.network.id: numeric network identifier
# (0 = mainnet, 1 = testnet, 2 = devnet, etc.).
- id: network-id
tag: xrpl.network.id
operator: "="
scope: resource
type: dynamic
# xrpl.network.type: human-readable network name
# ("mainnet", "testnet", "devnet", "standalone").
- id: network-type
tag: xrpl.network.type
operator: "="
scope: resource
type: static
# --- Span intrinsic filters ---
- id: span-name
tag: name
operator: "="
scope: intrinsic
type: static
- id: span-status
tag: status
operator: "="
scope: intrinsic
type: static
- id: span-duration
tag: duration
operator: ">"
scope: intrinsic
type: static
# Phase 2: RPC tracing filters
- id: rpc-command
tag: xrpl.rpc.command
operator: "="
scope: span
type: static
- id: rpc-status
tag: xrpl.rpc.status
operator: "="
scope: span
type: dynamic
- id: rpc-role
tag: xrpl.rpc.role
operator: "="
scope: span
type: dynamic
# Phase 3: Transaction tracing filters
- id: tx-hash
tag: xrpl.tx.hash
operator: "="
scope: span
type: static
- id: tx-origin
tag: xrpl.tx.local
operator: "="
scope: span
type: dynamic
- id: tx-status
tag: xrpl.tx.status
operator: "="
scope: span
type: dynamic
# Phase 4: Consensus tracing filters
- id: consensus-mode
tag: xrpl.consensus.mode
operator: "="
scope: span
type: static
- id: consensus-round
tag: xrpl.consensus.round
operator: "="
scope: span
type: dynamic
- id: consensus-ledger-seq
tag: xrpl.consensus.ledger.seq
operator: "="
scope: span
type: static
- id: consensus-close-time-correct
tag: xrpl.consensus.close_time_correct
operator: "="
scope: span
type: dynamic
- id: consensus-state
tag: xrpl.consensus.state
operator: "="
scope: span
type: dynamic
- id: consensus-close-resolution
tag: xrpl.consensus.close_resolution_ms
operator: "="
scope: span
type: dynamic
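The same attributes these search filters expose can be queried programmatically with TraceQL. A hedged sketch (the `traceql_rpc` helper is hypothetical; Tempo's search API at localhost:3200 matches the url above):

```shell
#!/usr/bin/env bash
# Sketch: build a TraceQL query from the filter attributes above and
# run it against Tempo's search API. Hypothetical helper; assumes
# Tempo at localhost:3200 as configured in this datasource.
TEMPO="${TEMPO:-http://localhost:3200}"

traceql_rpc() {
  # All spans for one RPC command on one node.
  local node="$1" command="$2"
  printf '{resource.service.instance.id="%s" && span.xrpl.rpc.command="%s"}' "$node" "$command"
}

# Usage (requires the stack to be up):
#   curl -sG "$TEMPO/api/search" --data-urlencode "q=$(traceql_rpc Node-1 server_info)" \
#     | jq '.traces | length'
```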


@@ -0,0 +1,725 @@
#!/usr/bin/env bash
# Integration test for rippled OpenTelemetry instrumentation.
#
# Launches a 6-node xrpld consensus network with telemetry enabled,
# exercises RPC / transaction / consensus code paths, then verifies
# that the expected spans and metrics appear in Jaeger and Prometheus.
#
# Usage:
# bash docker/telemetry/integration-test.sh
#
# Prerequisites:
# - .build/xrpld built with telemetry=ON
# - docker compose (v2)
# - curl, jq
#
# The script leaves the observability stack and xrpld nodes running
# so you can manually inspect Jaeger (localhost:16686) and Grafana
# (localhost:3000). Run with --cleanup to tear down instead.
set -euo pipefail
# ---------------------------------------------------------------------------
# Configuration
# ---------------------------------------------------------------------------
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
XRPLD="$REPO_ROOT/.build/xrpld"
COMPOSE_FILE="$SCRIPT_DIR/docker-compose.yml"
STANDALONE_CFG="$SCRIPT_DIR/xrpld-telemetry.cfg"
WORKDIR="${WORKDIR:-/tmp/xrpld-integration}"
NUM_NODES=6
PEER_PORT_BASE=51235
RPC_PORT_BASE=5005
CONSENSUS_TIMEOUT=120
GENESIS_ACCOUNT="rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh"
GENESIS_SEED="snoPBrXtMeMyMHUVTgbuqAfg1SUTb"
DEST_ACCOUNT="" # Generated dynamically via wallet_propose
JAEGER="http://localhost:16686"
PROM="http://localhost:9090"
# Counters for pass/fail
PASS=0
FAIL=0
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
log() { printf "\033[1;34m[INFO]\033[0m %s\n" "$*"; }
ok() { printf "\033[1;32m[PASS]\033[0m %s\n" "$*"; PASS=$((PASS + 1)); }
fail() { printf "\033[1;31m[FAIL]\033[0m %s\n" "$*"; FAIL=$((FAIL + 1)); }
die() { printf "\033[1;31m[ERROR]\033[0m %s\n" "$*" >&2; exit 1; }
check_span() {
local op="$1"
local count
count=$(curl -sf "$JAEGER/api/traces?service=rippled&operation=$op&limit=5&lookback=1h" \
| jq '.data | length' 2>/dev/null || echo 0)
count=${count:-0} # guard: jq on empty input prints nothing yet exits 0
if [ "$count" -gt 0 ]; then
ok "$op ($count traces)"
else
fail "$op (0 traces)"
fi
}
# Phase 8: Verify trace_id injection in rippled log output.
# Greps all node debug.log files for the "trace_id=<hex> span_id=<hex>"
# pattern that Logs::format() injects when an active OTel span exists.
# Also cross-checks that a trace_id found in logs matches a trace in Jaeger.
check_log_correlation() {
log "Checking log-trace correlation..."
local total_matches=0
local sample_trace_id=""
for i in $(seq 1 "$NUM_NODES"); do
local logfile="$WORKDIR/node$i/debug.log"
if [ ! -f "$logfile" ]; then
continue
fi
local matches
# grep -c prints "0" and exits 1 when there is no match, so "|| echo 0"
# would emit a second "0" line and break the arithmetic below.
matches=$(grep -c 'trace_id=[a-f0-9]\{32\} span_id=[a-f0-9]\{16\}' "$logfile" 2>/dev/null || true)
matches=${matches:-0}
total_matches=$((total_matches + matches))
# Capture the first trace_id we find for cross-referencing with Jaeger
if [ -z "$sample_trace_id" ] && [ "$matches" -gt 0 ]; then
sample_trace_id=$(grep -o 'trace_id=[a-f0-9]\{32\}' "$logfile" | head -1 | cut -d= -f2)
fi
done
if [ "$total_matches" -gt 0 ]; then
ok "Log correlation: found $total_matches log lines with trace_id"
else
fail "Log correlation: no trace_id found in any node debug.log"
fi
# Cross-check: verify the sample trace_id exists in Jaeger
if [ -n "$sample_trace_id" ]; then
local trace_found
trace_found=$(curl -sf "$JAEGER/api/traces/$sample_trace_id" \
| jq '.data | length' 2>/dev/null || echo 0)
trace_found=${trace_found:-0} # guard: jq on empty input prints nothing yet exits 0
if [ "$trace_found" -gt 0 ]; then
ok "Log-Jaeger cross-check: trace_id=$sample_trace_id found in Jaeger"
else
fail "Log-Jaeger cross-check: trace_id=$sample_trace_id NOT found in Jaeger"
fi
fi
}
cleanup() {
log "Cleaning up..."
# Kill xrpld nodes
for i in $(seq 1 "$NUM_NODES"); do
local pidfile="$WORKDIR/node$i/xrpld.pid"
if [ -f "$pidfile" ]; then
kill "$(cat "$pidfile")" 2>/dev/null || true
rm -f "$pidfile"
fi
done
# Also kill any straggling xrpld processes from our workdir
pkill -f "$WORKDIR" 2>/dev/null || true
# Stop docker stack
docker compose -f "$COMPOSE_FILE" down 2>/dev/null || true
# Remove workdir
rm -rf "$WORKDIR"
log "Cleanup complete."
}
# Handle --cleanup flag
if [ "${1:-}" = "--cleanup" ]; then
cleanup
exit 0
fi
# ---------------------------------------------------------------------------
# Step 0: Prerequisites
# ---------------------------------------------------------------------------
log "Checking prerequisites..."
command -v docker >/dev/null 2>&1 || die "docker not found"
docker compose version >/dev/null 2>&1 || die "docker compose (v2) not found"
command -v curl >/dev/null 2>&1 || die "curl not found"
command -v jq >/dev/null 2>&1 || die "jq not found"
[ -x "$XRPLD" ] || die "xrpld binary not found at $XRPLD (build with telemetry=ON)"
[ -f "$COMPOSE_FILE" ] || die "docker-compose.yml not found at $COMPOSE_FILE"
[ -f "$STANDALONE_CFG" ] || die "xrpld-telemetry.cfg not found at $STANDALONE_CFG"
log "All prerequisites met."
# ---------------------------------------------------------------------------
# Step 1: Clean previous run
# ---------------------------------------------------------------------------
log "Cleaning previous run data..."
for i in $(seq 1 "$NUM_NODES"); do
pidfile="$WORKDIR/node$i/xrpld.pid"
if [ -f "$pidfile" ]; then
kill "$(cat "$pidfile")" 2>/dev/null || true
fi
done
pkill -f "$WORKDIR" 2>/dev/null || true
# Kill any xrpld using the standalone config (from key generation)
pkill -f "xrpld-telemetry.cfg" 2>/dev/null || true
sleep 2
rm -rf "$WORKDIR"
mkdir -p "$WORKDIR"
# ---------------------------------------------------------------------------
# Step 2: Start observability stack
# ---------------------------------------------------------------------------
log "Starting observability stack..."
docker compose -f "$COMPOSE_FILE" up -d
log "Waiting for otel-collector to be ready..."
for attempt in $(seq 1 30); do
# The OTLP HTTP endpoint returns 405 for GET (it expects POST), which
# still means the collector is listening. curl -sf would treat that
# 405 as a failure, so check the status code explicitly: anything
# other than 000 (no connection) counts as ready.
status=$(curl -so /dev/null -w '%{http_code}' http://localhost:4318/ 2>/dev/null || echo 000)
if [ "$status" != "000" ]; then
log "otel-collector ready (attempt $attempt, HTTP $status)."
break
fi
if [ "$attempt" -eq 30 ]; then
die "otel-collector not ready after 30s"
fi
sleep 1
done
log "Waiting for Jaeger to be ready..."
for attempt in $(seq 1 30); do
if curl -sf "$JAEGER/" >/dev/null 2>&1; then
log "Jaeger ready (attempt $attempt)."
break
fi
if [ "$attempt" -eq 30 ]; then
die "Jaeger not ready after 30s"
fi
sleep 1
done
# ---------------------------------------------------------------------------
# Step 3: Generate validator keys
# ---------------------------------------------------------------------------
log "Generating $NUM_NODES validator key pairs..."
# Start a temporary standalone xrpld for key generation
TEMP_DATA="$WORKDIR/temp-keygen"
mkdir -p "$TEMP_DATA"
# Create a minimal temp config for key generation
TEMP_CFG="$TEMP_DATA/xrpld.cfg"
cat > "$TEMP_CFG" <<EOCFG
[server]
port_rpc_temp
[port_rpc_temp]
port = 5099
ip = 127.0.0.1
admin = 127.0.0.1
protocol = http
[node_db]
type=NuDB
path=$TEMP_DATA/nudb
online_delete=256
[database_path]
$TEMP_DATA/db
[debug_logfile]
$TEMP_DATA/debug.log
[ssl_verify]
0
EOCFG
"$XRPLD" --conf "$TEMP_CFG" -a --start > "$TEMP_DATA/stdout.log" 2>&1 &
TEMP_PID=$!
log "Temporary xrpld started (PID $TEMP_PID), waiting for RPC..."
for attempt in $(seq 1 30); do
if curl -sf http://localhost:5099 -d '{"method":"server_info"}' >/dev/null 2>&1; then
log "Temporary xrpld RPC ready (attempt $attempt)."
break
fi
if [ "$attempt" -eq 30 ]; then
kill "$TEMP_PID" 2>/dev/null || true
die "Temporary xrpld RPC not ready after 30s"
fi
sleep 1
done
declare -a SEEDS
declare -a PUBKEYS
for i in $(seq 1 "$NUM_NODES"); do
result=$(curl -sf http://localhost:5099 -d '{"method":"validation_create"}')
seed=$(echo "$result" | jq -r '.result.validation_seed')
pubkey=$(echo "$result" | jq -r '.result.validation_public_key')
if [ -z "$seed" ] || [ "$seed" = "null" ]; then
kill "$TEMP_PID" 2>/dev/null || true
die "Failed to generate key pair $i"
fi
SEEDS+=("$seed")
PUBKEYS+=("$pubkey")
log " Node $i: $pubkey"
done
kill "$TEMP_PID" 2>/dev/null || true
wait "$TEMP_PID" 2>/dev/null || true
rm -rf "$TEMP_DATA"
log "Key generation complete."
# ---------------------------------------------------------------------------
# Step 4: Generate node configs and validators.txt
# ---------------------------------------------------------------------------
log "Generating node configs..."
# Create shared validators.txt
VALIDATORS_FILE="$WORKDIR/validators.txt"
{
echo "[validators]"
for i in $(seq 0 $((NUM_NODES - 1))); do
echo "${PUBKEYS[$i]}"
done
} > "$VALIDATORS_FILE"
# Create per-node configs
for i in $(seq 1 "$NUM_NODES"); do
NODE_DIR="$WORKDIR/node$i"
mkdir -p "$NODE_DIR/nudb" "$NODE_DIR/db"
RPC_PORT=$((RPC_PORT_BASE + i - 1))
PEER_PORT=$((PEER_PORT_BASE + i - 1))
SEED="${SEEDS[$((i - 1))]}"
# Build ips_fixed list (all peers except self)
IPS_FIXED=""
for j in $(seq 1 "$NUM_NODES"); do
if [ "$j" -ne "$i" ]; then
IPS_FIXED="${IPS_FIXED}127.0.0.1 $((PEER_PORT_BASE + j - 1))
"
fi
done
cat > "$NODE_DIR/xrpld.cfg" <<EOCFG
[server]
port_rpc
port_peer
[port_rpc]
port = $RPC_PORT
ip = 127.0.0.1
admin = 127.0.0.1
protocol = http
[port_peer]
port = $PEER_PORT
ip = 0.0.0.0
protocol = peer
[node_db]
type=NuDB
path=$NODE_DIR/nudb
online_delete=256
[database_path]
$NODE_DIR/db
[debug_logfile]
$NODE_DIR/debug.log
[validation_seed]
$SEED
[validators_file]
$VALIDATORS_FILE
[ips_fixed]
${IPS_FIXED}
[peer_private]
1
[telemetry]
enabled=1
service_instance_id=Node-${i}
endpoint=http://localhost:4318/v1/traces
exporter=otlp_http
sampling_ratio=1.0
batch_size=512
batch_delay_ms=2000
max_queue_size=2048
trace_rpc=1
trace_transactions=1
trace_consensus=1
trace_peer=1
trace_ledger=1
metrics_endpoint=http://localhost:4318/v1/metrics
[insight]
server=otel
endpoint=http://localhost:4318/v1/metrics
prefix=rippled
service_instance_id=Node-${i}
[insight]
server=statsd
address=127.0.0.1:8125
prefix=rippled
[rpc_startup]
{ "command": "log_level", "severity": "warning" }
[ssl_verify]
0
EOCFG
log " Node $i config: RPC=$RPC_PORT, Peer=$PEER_PORT"
done
# ---------------------------------------------------------------------------
# Step 5: Start all 6 nodes
# ---------------------------------------------------------------------------
log "Starting $NUM_NODES xrpld nodes..."
for i in $(seq 1 "$NUM_NODES"); do
NODE_DIR="$WORKDIR/node$i"
"$XRPLD" --conf "$NODE_DIR/xrpld.cfg" --start > "$NODE_DIR/stdout.log" 2>&1 &
echo $! > "$NODE_DIR/xrpld.pid"
log " Node $i started (PID $(cat "$NODE_DIR/xrpld.pid"))"
done
# Give nodes a moment to initialize
sleep 5
# ---------------------------------------------------------------------------
# Step 6: Wait for consensus
# ---------------------------------------------------------------------------
log "Waiting for nodes to reach 'proposing' state (timeout: ${CONSENSUS_TIMEOUT}s)..."
start_time=$(date +%s)
nodes_ready=0
while [ "$nodes_ready" -lt "$NUM_NODES" ]; do
elapsed=$(( $(date +%s) - start_time ))
if [ "$elapsed" -ge "$CONSENSUS_TIMEOUT" ]; then
fail "Consensus timeout after ${CONSENSUS_TIMEOUT}s ($nodes_ready/$NUM_NODES nodes ready)"
log "Continuing with partial consensus..."
break
fi
nodes_ready=0
for i in $(seq 1 "$NUM_NODES"); do
RPC_PORT=$((RPC_PORT_BASE + i - 1))
state=$(curl -sf "http://localhost:$RPC_PORT" \
-d '{"method":"server_info"}' 2>/dev/null \
| jq -r '.result.info.server_state' 2>/dev/null || echo "unreachable")
if [ "$state" = "proposing" ]; then
nodes_ready=$((nodes_ready + 1))
fi
done
printf "\r %d/%d nodes proposing (%ds elapsed)..." "$nodes_ready" "$NUM_NODES" "$elapsed"
if [ "$nodes_ready" -lt "$NUM_NODES" ]; then
sleep 3
fi
done
echo ""
if [ "$nodes_ready" -eq "$NUM_NODES" ]; then
ok "All $NUM_NODES nodes reached 'proposing' state"
else
fail "Only $nodes_ready/$NUM_NODES nodes reached 'proposing' state"
fi
# ---------------------------------------------------------------------------
# Step 6b: Wait for validated ledger
# ---------------------------------------------------------------------------
log "Waiting for first validated ledger..."
for attempt in $(seq 1 60); do
val_seq=$(curl -sf "http://localhost:$RPC_PORT_BASE" \
-d '{"method":"server_info"}' 2>/dev/null \
| jq -r '.result.info.validated_ledger.seq // 0' 2>/dev/null || echo 0)
if [ "$val_seq" -gt 2 ] 2>/dev/null; then
ok "First validated ledger: seq $val_seq"
break
fi
if [ "$attempt" -eq 60 ]; then
fail "No validated ledger after 60s"
fi
sleep 1
done
# ---------------------------------------------------------------------------
# Step 7: Exercise RPC spans (Phase 2)
# ---------------------------------------------------------------------------
log "Exercising RPC spans..."
curl -sf "http://localhost:$RPC_PORT_BASE" \
-d '{"method":"server_info"}' > /dev/null
curl -sf "http://localhost:$RPC_PORT_BASE" \
-d '{"method":"server_state"}' > /dev/null
curl -sf "http://localhost:$RPC_PORT_BASE" \
-d '{"method":"ledger","params":[{"ledger_index":"current"}]}' > /dev/null
log "RPC commands sent. Waiting 5s for batch export..."
sleep 5
# ---------------------------------------------------------------------------
# Step 8: Submit transaction (Phase 3)
# ---------------------------------------------------------------------------
log "Submitting Payment transaction..."
# Generate a destination wallet
log " Generating destination wallet..."
wallet_result=$(curl -sf "http://localhost:$RPC_PORT_BASE" \
-d '{"method":"wallet_propose"}')
DEST_ACCOUNT=$(echo "$wallet_result" | jq -r '.result.account_id' 2>/dev/null)
if [ -z "$DEST_ACCOUNT" ] || [ "$DEST_ACCOUNT" = "null" ]; then
fail "Could not generate destination wallet"
DEST_ACCOUNT="rrrrrrrrrrrrrrrrrrrrrhoLvTp" # ACCOUNT_ZERO fallback
fi
log " Destination: $DEST_ACCOUNT"
# Get genesis account info
acct_result=$(curl -sf "http://localhost:$RPC_PORT_BASE" \
-d "{\"method\":\"account_info\",\"params\":[{\"account\":\"$GENESIS_ACCOUNT\"}]}")
seq_num=$(echo "$acct_result" | jq -r '.result.account_data.Sequence' 2>/dev/null || echo "unknown")
log " Genesis account sequence: $seq_num"
# Submit payment
submit_result=$(curl -sf "http://localhost:$RPC_PORT_BASE" -d "{
\"method\": \"submit\",
\"params\": [{
\"secret\": \"$GENESIS_SEED\",
\"tx_json\": {
\"TransactionType\": \"Payment\",
\"Account\": \"$GENESIS_ACCOUNT\",
\"Destination\": \"$DEST_ACCOUNT\",
\"Amount\": \"10000000\"
}
}]
}")
engine_result=$(echo "$submit_result" | jq -r '.result.engine_result' 2>/dev/null || echo "unknown")
tx_hash=$(echo "$submit_result" | jq -r '.result.tx_json.hash' 2>/dev/null || echo "unknown")
if [ "$engine_result" = "tesSUCCESS" ] || [ "$engine_result" = "terQUEUED" ]; then
ok "Transaction submitted: $engine_result (hash: ${tx_hash:0:16}...)"
else
fail "Transaction submission: $engine_result"
log " Full response: $(echo "$submit_result" | jq -c .result 2>/dev/null)"
fi
log "Waiting 15s for consensus round + batch export..."
sleep 15
# ---------------------------------------------------------------------------
# Step 9: Verify Jaeger traces
# ---------------------------------------------------------------------------
log "Verifying spans in Jaeger..."
# Check service registration
services=$(curl -sf "$JAEGER/api/services" | jq -r '.data[]' 2>/dev/null || echo "")
if echo "$services" | grep -q "rippled"; then
ok "Service 'rippled' registered in Jaeger"
else
fail "Service 'rippled' NOT found in Jaeger (found: $services)"
fi
log ""
log "--- Phase 2: RPC Spans ---"
check_span "rpc.request"
check_span "rpc.process"
check_span "rpc.command.server_info"
check_span "rpc.command.server_state"
check_span "rpc.command.ledger"
log ""
log "--- Phase 3: Transaction Spans ---"
check_span "tx.process"
check_span "tx.receive"
check_span "tx.apply"
log ""
log "--- Phase 4: Consensus Spans ---"
check_span "consensus.proposal.send"
check_span "consensus.ledger_close"
check_span "consensus.accept"
check_span "consensus.validation.send"
log ""
log "--- Phase 5: Ledger Spans ---"
check_span "ledger.build"
check_span "ledger.validate"
check_span "ledger.store"
log ""
log "--- Phase 5: Peer Spans (trace_peer=1) ---"
check_span "peer.proposal.receive"
check_span "peer.validation.receive"
# ---------------------------------------------------------------------------
# Step 9b: Verify log-trace correlation (Phase 8)
# ---------------------------------------------------------------------------
log ""
log "--- Phase 8: Log-Trace Correlation ---"
check_log_correlation
# ---------------------------------------------------------------------------
# Step 10: Verify Prometheus spanmetrics
# ---------------------------------------------------------------------------
log ""
log "--- Phase 5: Spanmetrics ---"
log "Waiting 20s for Prometheus scrape cycle..."
sleep 20
calls_count=$(curl -sf "$PROM/api/v1/query?query=traces_span_metrics_calls_total" \
| jq '.data.result | length' 2>/dev/null || echo 0)
if [ "$calls_count" -gt 0 ]; then
ok "Prometheus: traces_span_metrics_calls_total ($calls_count series)"
else
fail "Prometheus: traces_span_metrics_calls_total (0 series)"
fi
duration_count=$(curl -sf "$PROM/api/v1/query?query=traces_span_metrics_duration_milliseconds_count" \
| jq '.data.result | length' 2>/dev/null || echo 0)
if [ "$duration_count" -gt 0 ]; then
ok "Prometheus: duration histogram ($duration_count series)"
else
fail "Prometheus: duration histogram (0 series)"
fi
# Check Grafana
if curl -sf http://localhost:3000/api/health > /dev/null 2>&1; then
ok "Grafana: healthy at localhost:3000"
else
fail "Grafana: not reachable at localhost:3000"
fi
# ---------------------------------------------------------------------------
# Step 10b: Verify native OTel metrics in Prometheus (beast::insight)
# ---------------------------------------------------------------------------
log ""
log "--- Phase 7: Native OTel Metrics (beast::insight via OTLP) ---"
log "Waiting 20s for OTLP metric export + Prometheus scrape..."
sleep 20
check_otel_metric() {
local metric_name="$1"
local result
result=$(curl -sf "$PROM/api/v1/query?query=$metric_name" \
| jq '.data.result | length' 2>/dev/null || echo 0)
if [ "$result" -gt 0 ]; then
ok "OTel: $metric_name ($result series)"
else
fail "OTel: $metric_name (0 series)"
fi
}
# Node health gauges (ObservableGauge — no _total suffix)
check_otel_metric "rippled_LedgerMaster_Validated_Ledger_Age"
check_otel_metric "rippled_LedgerMaster_Published_Ledger_Age"
check_otel_metric "rippled_job_count"
# State accounting
check_otel_metric "rippled_State_Accounting_Full_duration"
# Peer finder
check_otel_metric "rippled_Peer_Finder_Active_Inbound_Peers"
check_otel_metric "rippled_Peer_Finder_Active_Outbound_Peers"
# RPC counters (Counter — Prometheus adds _total suffix automatically)
check_otel_metric "rippled_rpc_requests_total"
# Overlay traffic
check_otel_metric "rippled_total_Bytes_In"
# Verify StatsD receiver is NOT required (no statsd receiver in pipeline)
log ""
log "--- Verify StatsD receiver is not required ---"
statsd_port_check=$(curl -sf "http://localhost:8125" 2>&1 || echo "refused")
if echo "$statsd_port_check" | grep -qi "refused\|error\|connection"; then
ok "StatsD port 8125 is not listening (not required)"
else
fail "StatsD port 8125 appears to be listening (should not be needed)"
fi
# ---------------------------------------------------------------------------
# Step 10c: Verify Phase 9 OTel SDK Metrics
# ---------------------------------------------------------------------------
log ""
log "--- Phase 9: OTel SDK Metrics (MetricsRegistry) ---"
log "Waiting 15s for OTel metric export + Prometheus scrape..."
sleep 15
# check_otel_metric() is already defined in Step 10b; reuse it here.
# Task 9.1: NodeStore I/O
check_otel_metric 'rippled_nodestore_state{metric="node_reads_total"}'
check_otel_metric 'rippled_nodestore_state{metric="write_load"}'
# Task 9.2: Cache hit rates
check_otel_metric 'rippled_cache_metrics{metric="SLE_hit_rate"}'
check_otel_metric 'rippled_cache_metrics{metric="treenode_cache_size"}'
# Task 9.3: TxQ metrics
check_otel_metric 'rippled_txq_metrics{metric="txq_count"}'
check_otel_metric 'rippled_txq_metrics{metric="txq_reference_fee_level"}'
# Task 9.4: Per-RPC metrics
check_otel_metric "rippled_rpc_method_started_total"
check_otel_metric "rippled_rpc_method_finished_total"
# Task 9.5: Per-job metrics
check_otel_metric "rippled_job_queued_total"
check_otel_metric "rippled_job_finished_total"
# Task 9.6: Counted object instances
check_otel_metric "rippled_object_count"
# Task 9.7: Load factor breakdown
check_otel_metric 'rippled_load_factor_metrics{metric="load_factor"}'
check_otel_metric 'rippled_load_factor_metrics{metric="load_factor_server"}'
# ---------------------------------------------------------------------------
# Step 11: Summary
# ---------------------------------------------------------------------------
echo ""
echo "==========================================================="
echo " INTEGRATION TEST RESULTS"
echo "==========================================================="
printf " \033[1;32mPASSED: %d\033[0m\n" "$PASS"
printf " \033[1;31mFAILED: %d\033[0m\n" "$FAIL"
echo "==========================================================="
echo ""
echo " Observability stack is running:"
echo ""
echo " Jaeger UI: http://localhost:16686"
echo " Grafana: http://localhost:3000"
echo " Prometheus: http://localhost:9090"
echo " Loki: http://localhost:3100"
echo ""
echo " xrpld nodes (6) are running:"
for i in $(seq 1 "$NUM_NODES"); do
RPC_PORT=$((RPC_PORT_BASE + i - 1))
PEER_PORT=$((PEER_PORT_BASE + i - 1))
echo " Node $i: RPC=localhost:$RPC_PORT Peer=:$PEER_PORT PID=$(cat "$WORKDIR/node$i/xrpld.pid" 2>/dev/null || echo 'unknown')"
done
echo ""
echo " To tear down:"
echo " bash docker/telemetry/integration-test.sh --cleanup"
echo ""
echo "==========================================================="
if [ "$FAIL" -gt 0 ]; then
exit 1
fi


@@ -0,0 +1,123 @@
# OpenTelemetry Collector configuration for rippled development.
#
# Pipelines:
# traces: OTLP receiver -> batch processor -> debug + Jaeger + Tempo + spanmetrics
# metrics: OTLP + StatsD receivers + spanmetrics connector -> Prometheus exporter
# logs: filelog receiver -> batch processor -> otlphttp/Loki (Phase 8)
#
# rippled sends traces via OTLP/HTTP to port 4318. The collector batches
# them, forwards to both Jaeger and Tempo, and derives RED metrics via the
# spanmetrics connector, which Prometheus scrapes on port 8889.
#
# rippled sends beast::insight metrics natively via OTLP/HTTP to port 4318
# (same endpoint as traces). The OTLP receiver feeds both the traces and
# metrics pipelines. Metrics are exported to Prometheus alongside
# span-derived metrics.
#
# The StatsD receiver accepts beast::insight metrics from xrpld nodes
# configured with server=statsd in [insight]. It listens on UDP port 8125.
#
# Phase 8: The filelog receiver tails rippled's debug.log files under
# /var/log/rippled/ (mounted from the host). A regex_parser operator
# extracts timestamp, partition, severity, and optional trace_id/span_id
# fields injected by Logs::format(). Parsed logs are exported to Grafana
# Loki for log-trace correlation.
extensions:
health_check:
endpoint: 0.0.0.0:13133
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
# StatsD receiver — accepts beast::insight metrics from xrpld nodes
# configured with server=statsd in [insight]. Listens on UDP port 8125.
statsd:
endpoint: "0.0.0.0:8125"
aggregation_interval: 15s
enable_metric_type: true
is_monotonic_counter: true
timer_histogram_mapping:
- statsd_type: "timing"
observer_type: "summary"
summary:
percentiles: [0, 50, 90, 95, 99, 100]
- statsd_type: "histogram"
observer_type: "summary"
summary:
percentiles: [0, 50, 90, 95, 99, 100]
# Phase 8: Filelog receiver tails rippled debug.log files for log-trace
# correlation. Extracts structured fields (timestamp, partition, severity,
# trace_id, span_id, message) via regex. The trace_id and span_id are
# optional — only present when the log was emitted within an active span.
filelog:
include: [/var/log/rippled/*/debug.log]
operators:
- type: regex_parser
regex: '^(?P<timestamp>\S+)\s+(?P<partition>\S+):(?P<severity>\S+)\s+(?:trace_id=(?P<trace_id>[a-f0-9]+)\s+span_id=(?P<span_id>[a-f0-9]+)\s+)?(?P<message>.*)$'
timestamp:
parse_from: attributes.timestamp
layout: "%Y-%b-%d %H:%M:%S.%f"
processors:
batch:
timeout: 1s
send_batch_size: 100
connectors:
spanmetrics:
# Expose service.instance.id (node public key) as a Prometheus label so
# Grafana dashboards can filter metrics by individual node.
resource_metrics_key_attributes:
- service.instance.id
histogram:
explicit:
buckets: [1ms, 5ms, 10ms, 25ms, 50ms, 100ms, 250ms, 500ms, 1s, 5s]
dimensions:
- name: xrpl.rpc.command
- name: xrpl.rpc.status
- name: xrpl.consensus.mode
- name: xrpl.tx.local
- name: xrpl.peer.proposal.trusted
- name: xrpl.peer.validation.trusted
exporters:
debug:
verbosity: detailed
otlp/jaeger:
endpoint: jaeger:4317
tls:
insecure: true
otlp/tempo:
endpoint: tempo:4317
tls:
insecure: true
# Phase 8: Export logs to Grafana Loki via OTLP/HTTP. Loki 3.x supports
# native OTLP ingestion on its /otlp endpoint, replacing the removed
# loki exporter (dropped in otel-collector-contrib v0.147.0).
otlphttp/loki:
endpoint: http://loki:3100/otlp
prometheus:
endpoint: 0.0.0.0:8889
service:
extensions: [health_check]
pipelines:
traces:
receivers: [otlp]
processors: [batch]
exporters: [debug, otlp/jaeger, otlp/tempo, spanmetrics]
metrics:
receivers: [otlp, spanmetrics, statsd]
processors: [batch]
exporters: [prometheus]
# Phase 8: Log pipeline ingests rippled debug.log via filelog receiver,
# batches entries, and exports to Loki for log-trace correlation.
logs:
receivers: [filelog]
processors: [batch]
exporters: [otlphttp/loki]


@@ -0,0 +1,9 @@
# Prometheus configuration for scraping spanmetrics from OTel Collector.
global:
scrape_interval: 15s
evaluation_interval: 15s
scrape_configs:
- job_name: otel-collector
static_configs:
- targets: ["otel-collector:8889"]


@@ -0,0 +1,59 @@
# Grafana Tempo configuration for rippled telemetry stack.
#
# Runs in single-binary mode for local development.
# Receives traces via OTLP/gRPC from the OTel Collector and stores
# them locally. Queryable via Grafana Explore using the Tempo datasource.
#
# Search filters are configured on the Grafana datasource side
# (grafana/provisioning/datasources/tempo.yaml). Tempo auto-indexes
# all span attributes for search in single-binary mode.
#
# For production, replace local storage with S3/GCS backend and adjust
# retention via the compactor settings. See:
# https://grafana.com/docs/tempo/latest/configuration/
stream_over_http_enabled: true
server:
http_listen_port: 3200
distributor:
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
ingester:
max_block_duration: 5m
compactor:
compaction:
block_retention: 1h
# Enable metrics generator for service graph and span metrics.
# Produces RED metrics (rate, errors, duration) per service/span,
# feeding Grafana's service map visualization.
metrics_generator:
registry:
external_labels:
source: tempo
storage:
path: /var/tempo/generator/wal
remote_write:
- url: http://prometheus:9090/api/v1/write
overrides:
defaults:
metrics_generator:
processors:
- service-graphs
- span-metrics
storage:
trace:
backend: local
wal:
path: /var/tempo/wal
local:
path: /var/tempo/blocks


@@ -0,0 +1,325 @@
# Telemetry Workload Tools
Synthetic workload generation and validation tools for rippled's OpenTelemetry-based telemetry stack. These tools validate that all spans, metrics, dashboards, and log-trace correlation work end-to-end under controlled load.
## Quick Start
```bash
# Build rippled with telemetry enabled
conan install . --build=missing -o telemetry=True
cmake --preset default -Dtelemetry=ON
cmake --build --preset default
# Run full validation (starts everything, runs load, validates)
docker/telemetry/workload/run-full-validation.sh --xrpld .build/xrpld
# Cleanup when done
docker/telemetry/workload/run-full-validation.sh --cleanup
```
## Architecture
The validation suite runs a multi-node rippled cluster as local processes alongside
a Docker Compose telemetry stack. The cluster exercises consensus, peer-to-peer
spans (proposals, validations), and all metric pipelines.
```
run-full-validation.sh (shell orchestrator)
|
|-- docker-compose.workload.yaml
| |-- otel-collector (traces via OTLP + StatsD receiver)
| |-- jaeger (trace search API)
| |-- prometheus (metrics scraping)
| |-- grafana (dashboards, provisioned automatically)
|
|-- generate-validator-keys.sh
| -> validator-keys.json, validators.txt
|
|-- Nx xrpld nodes (local processes, full telemetry)
| - Each node: [telemetry] enabled=1, trace_rpc/consensus/transactions
| - [signing_support] true (server-side signing for tx_submitter)
| - Peer discovery via [ips] (not [ips_fixed]) for active peer counts
|
|-- workload_orchestrator.py (phased load execution)
| |-- rpc_load_generator.py (WebSocket RPC traffic)
| |-- tx_submitter.py (transaction diversity)
| -> workload-report.json + per-phase reports
|
|-- validate_telemetry.py (pass/fail checks)
| -> validation-report.json
|
|-- benchmark.sh (baseline vs telemetry comparison)
-> benchmark-report-*.md
```
## Workload Profiles
The workload orchestrator (`workload_orchestrator.py`) reads named profiles
from `workload-profiles.json` and executes sequential load phases. Within
each phase, the RPC generator and TX submitter run concurrently.
### Available Profiles
| Profile | Phases | Duration | Purpose |
| ----------------- | ------ | ---------------------------- | ----------------------------------------------------------- |
| `full-validation` | 6 | ~5 min + 1 min propagation | Full 18-dashboard coverage with burst/idle/plateau patterns |
| `quick-smoke` | 1 | ~30s + 30s propagation | Fast CI smoke test |
| `stress` | 3 | ~3.5 min + 1 min propagation | Heavy sustained load for benchmarking |
### full-validation Phases
| Phase | RPC Rate | TX TPS | Duration | Dashboard Coverage |
| ------------ | -------- | ------ | -------- | ----------------------------------------------- |
| warmup | 5 RPS | — | 30s | Node Health, Validator Health (baseline gauges) |
| steady-state | 30 RPS | 3 TPS | 60s | All dashboards (plateau data) |
| rpc-burst | 100 RPS | — | 30s | Job Queue, RPC Performance (latency spikes) |
| tx-flood | 5 RPS | 20 TPS | 30s | Fee Market & TxQ, Transaction Overview |
| mixed-peak | 50 RPS | 10 TPS | 60s | Consensus Health, Ledger Operations |
| cooldown | 5 RPS | — | 30s | Recovery patterns, state transitions |
### Custom Profiles
Add profiles to `workload-profiles.json`:
```json
{
"profiles": {
"my-custom": {
"description": "Custom profile for specific testing",
"phases": [
{
"name": "phase-name",
"description": "What this phase exercises",
"duration_sec": 60,
"rpc": { "rate": 50, "weights": { "server_info": 80, "fee": 20 } },
"tx": { "tps": 5, "weights": { "Payment": 100 } }
}
],
"propagation_wait_sec": 30
}
}
}
```
Set `"rpc"` or `"tx"` to `null` to skip that generator for a phase.
Custom `"weights"` override the default command/transaction distribution.
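A phase entry can be sanity-checked before a run; a minimal sketch using the field names from the profile format above (`validate_phase` is a hypothetical helper, not part of the orchestrator):

```python
def validate_phase(phase):
    """Check one phase entry against the profile format shown above."""
    # Required keys; "rpc" and "tx" may be null (None) to skip a generator.
    for key in ("name", "duration_sec", "rpc", "tx"):
        if key not in phase:
            raise ValueError(f"phase missing {key!r}")
    if phase["rpc"] is None and phase["tx"] is None:
        raise ValueError(f"phase {phase['name']!r} runs no generator")
    return True

validate_phase(
    {"name": "warmup", "duration_sec": 30, "rpc": {"rate": 5}, "tx": None}
)
```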
## Tools Reference
### run-full-validation.sh
Orchestrates the complete validation pipeline. Starts the telemetry stack, starts a multi-node rippled cluster, generates load, and validates the results.
```bash
# Full validation with defaults (uses full-validation profile)
./run-full-validation.sh --xrpld /path/to/xrpld
# Quick smoke test
./run-full-validation.sh --xrpld /path/to/xrpld --profile quick-smoke
# Stress test with benchmarks
./run-full-validation.sh --xrpld /path/to/xrpld --profile stress --with-benchmark
# Skip Loki checks (if Phase 8 not deployed)
./run-full-validation.sh --xrpld /path/to/xrpld --skip-loki
```
### workload_orchestrator.py
Reads a named profile from `workload-profiles.json` and executes sequential
load phases. Within each phase, `rpc_load_generator.py` and `tx_submitter.py`
run as concurrent subprocesses. Produces per-phase reports and a combined
summary.
```bash
# Run with a specific profile
python3 workload_orchestrator.py --profile full-validation
# Multiple endpoints
python3 workload_orchestrator.py --profile full-validation \
--endpoints ws://localhost:6006 ws://localhost:6007
# Save combined report
python3 workload_orchestrator.py --profile stress --report /tmp/report.json
```
### rpc_load_generator.py
Generates RPC traffic matching realistic production distribution. Uses
rippled's **native WebSocket command format** (`{"command": ...}`) with flat
parameters — the same format as `tx_submitter.py`.
- 40% health checks (server_info, fee)
- 30% wallet queries (account_info, account_lines, account_objects)
- 15% explorer queries (ledger, ledger_data)
- 10% transaction lookups (tx, account_tx)
- 5% DEX queries (book_offers, amm_info)
```bash
# Basic usage
python3 rpc_load_generator.py --endpoints ws://localhost:6006 --rate 50 --duration 120
# Multiple endpoints (round-robin)
python3 rpc_load_generator.py \
--endpoints ws://localhost:6006 ws://localhost:6007 \
--rate 100 --duration 300
# Custom weights
python3 rpc_load_generator.py --endpoints ws://localhost:6006 \
--weights '{"server_info": 80, "account_info": 20}'
```
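The distribution above can be sketched as a weighted picker (weights and helper name are illustrative; the generator's real defaults may differ):

```python
import random

# Illustrative weights matching the percentages above.
WEIGHTS = {
    "server_info": 20, "fee": 20,           # 40% health checks
    "account_info": 10, "account_lines": 10,
    "account_objects": 10,                  # 30% wallet queries
    "ledger": 8, "ledger_data": 7,          # 15% explorer queries
    "tx": 5, "account_tx": 5,               # 10% transaction lookups
    "book_offers": 3, "amm_info": 2,        # 5% DEX queries
}

def pick_command(rng=random):
    # Weighted random choice over the command table.
    commands = list(WEIGHTS)
    return rng.choices(commands, weights=[WEIGHTS[c] for c in commands])[0]
```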
### tx_submitter.py
Submits diverse transaction types to exercise the full span and metric surface.
Uses rippled's **native WebSocket command format** (`{"command": ...}`) rather
than JSON-RPC format. The response payload is inside the `"result"` key, with
`"status"` at the top level.
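The two shapes can be contrasted with a small sketch (all field values below are placeholders, not `tx_submitter.py`'s actual code):

```python
import json

# Native WebSocket format: the command name and parameters are flat,
# unlike JSON-RPC's {"method": ..., "params": [{...}]} nesting.
request = {
    "command": "submit",
    "secret": "s____placeholder____",  # placeholder, not a real seed
    "tx_json": {
        "TransactionType": "Payment",
        "Account": "rrrrrrrrrrrrrrrrrrrrrhoLvTp",     # placeholder account
        "Destination": "rrrrrrrrrrrrrrrrrrrrBZbvji",  # placeholder account
        "Amount": "10000000",
    },
}

# In a response, "status" sits at the top level and the payload
# under "result", as described above.
response = json.loads(
    '{"status": "success", "type": "response",'
    ' "result": {"engine_result": "tesSUCCESS"}}'
)
engine_result = response["result"]["engine_result"]
```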
Supported transaction types:
- Payment (XRP transfers) — exercises `tx.process`, `tx.receive`, `tx.apply`
- OfferCreate / OfferCancel (DEX activity)
- TrustSet (trust line creation)
- NFTokenMint / NFTokenCreateOffer (NFT activity)
- EscrowCreate / EscrowFinish (escrow lifecycle)
- AMMCreate / AMMDeposit (AMM pool operations)
Requires `[signing_support] true` in the node config for server-side signing.
```bash
# Basic usage
python3 tx_submitter.py --endpoint ws://localhost:6006 --tps 5 --duration 120
# Custom mix
python3 tx_submitter.py --endpoint ws://localhost:6006 \
--weights '{"Payment": 60, "OfferCreate": 20, "TrustSet": 20}'
```
### validate_telemetry.py
Automated validation that all expected telemetry data exists. Every metric and span is required — if it doesn't fire, the validation fails.
- **Span validation**: All span types from `expected_spans.json` with required attributes and parent-child hierarchies
- **Metric validation**: All metrics from `expected_metrics.json` — SpanMetrics, StatsD gauges/counters/histograms, Phase 9 OTLP metrics. Every listed metric must have > 0 series. Uses the Prometheus `/api/v1/series` endpoint (not instant queries) to avoid false negatives from stale gauges.
- **Log-trace correlation**: trace_id/span_id in Loki logs (requires Loki)
- **Dashboard validation**: All 10 Grafana dashboards load with panels
```bash
# Run all validations
python3 validate_telemetry.py --report /tmp/report.json
# Skip Loki checks
python3 validate_telemetry.py --skip-loki --report /tmp/report.json
```
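The series-vs-instant-query detail can be sketched as follows (`series_query_url` is a hypothetical helper; the validator's real code may differ):

```python
from urllib.parse import urlencode

def series_query_url(prom_base, selector):
    """Build a /api/v1/series URL for a metric selector.

    The series endpoint matches any series present in the lookback
    window, so stale gauges still count; an instant query drops a
    series roughly five minutes after its last sample.
    """
    return f"{prom_base}/api/v1/series?{urlencode({'match[]': selector})}"

url = series_query_url("http://localhost:9090", "rippled_job_count")
```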
### benchmark.sh
Compares baseline (no telemetry) vs telemetry-enabled performance:
```bash
./benchmark.sh --xrpld /path/to/xrpld --duration 300
```
Thresholds (configurable via environment):
| Metric | Threshold | Env Variable |
| ----------------- | --------- | --------------------------- |
| CPU overhead | < 3% | BENCH_CPU_OVERHEAD_PCT |
| Memory overhead | < 5MB | BENCH_MEM_OVERHEAD_MB |
| RPC p99 latency | < 2ms | BENCH_RPC_LATENCY_IMPACT_MS |
| Throughput impact | < 5% | BENCH_TPS_IMPACT_PCT |
| Consensus impact | < 1% | BENCH_CONSENSUS_IMPACT_PCT |
## Reading Validation Reports
The validation report (`validation-report.json`) is structured as:
```json
{
"summary": {
"total": 45,
"passed": 42,
"failed": 3,
"all_passed": false
},
"checks": [
{
"name": "span.rpc.request",
"category": "span",
"passed": true,
"message": "rpc.request: 15 traces found",
"details": { "trace_count": 15 }
}
]
}
```
Categories:
- **span**: Span type existence and attribute validation
- **metric**: Prometheus metric existence
- **log**: Log-trace correlation checks
- **dashboard**: Grafana dashboard accessibility
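The `summary` block follows directly from the `checks` list; a minimal sketch (check entries are illustrative):

```python
checks = [
    {"name": "span.rpc.request", "category": "span", "passed": True},
    {"name": "metric.rippled_job_count", "category": "metric", "passed": False},
]
passed = sum(c["passed"] for c in checks)
summary = {
    "total": len(checks),
    "passed": passed,
    "failed": len(checks) - passed,
    "all_passed": passed == len(checks),
}
```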
## CI Integration
The validation runs as a GitHub Actions workflow (`.github/workflows/telemetry-validation.yml`):
- Triggered manually or on pushes to telemetry branches
- Builds rippled, starts the full stack, runs load, validates
- Uploads reports as artifacts
- Posts summary to PR
## Configuration Files
| File | Purpose |
| ------------------------ | ------------------------------------------------------------- |
| `workload-profiles.json` | Named load profiles with phase definitions |
| `expected_spans.json` | Span inventory (names, attributes, hierarchies, config flags) |
| `expected_metrics.json`  | Metric inventory (every listed metric must be present)         |
| `test_accounts.json` | Test account roles (keys generated at runtime) |
| `requirements.txt` | Python dependencies |
### expected_metrics.json Format
```json
{
"category_name": {
"description": "Human-readable description.",
"metrics": ["metric_1", "metric_2"]
}
}
```
Every metric listed must produce > 0 Prometheus series during the validation run. If a metric fails to fire, extend the workload generators so the run produces enough load to trigger it.
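A loader for this format might look like the following (category and metric names here are hypothetical):

```python
import json

# Two example categories mirroring the schema above.
doc = json.loads("""
{
  "nodestore": {
    "description": "NodeStore I/O metrics.",
    "metrics": ["rippled_nodestore_state"]
  },
  "rpc": {
    "description": "Per-RPC counters.",
    "metrics": ["rippled_rpc_method_started_total"]
  }
}
""")
# Flatten every category into the list of required metric names.
required = [m for category in doc.values() for m in category["metrics"]]
```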
### expected_spans.json Format
Each span entry defines its name, category, parent (for hierarchy validation),
required attributes, and the `config_flag` that must be enabled:
```json
{
"name": "rpc.request",
"category": "rpc",
"parent": null,
"required_attributes": ["rpc.method", "rpc.grpc.status_code"],
"config_flag": "trace_rpc"
}
```
## Node Configuration Notes
The orchestrator (`run-full-validation.sh`) generates node configs with:
- `[telemetry] enabled=1` with all trace categories (`trace_rpc`, `trace_consensus`, `trace_transactions`)
- `[signing_support] true` — required for `tx_submitter.py` to submit signed transactions via WebSocket
- `[ips]` (not `[ips_fixed]`) — ensures peer connections are counted in `Peer_Finder_Active_Inbound/Outbound_Peers` metrics (fixed peers are excluded from these counters by design)
## StatsD Gauge Behaviour
beast::insight StatsD gauges only emit when their value _changes_ from the previous sample. This can cause two problems in the validation environment:
1. **Initial-zero gauges** — if a gauge value is 0 from startup and never changes, the gauge would never emit. To address this, `StatsDGaugeImpl` initializes `m_dirty = true`, ensuring the first flush always emits the initial value.
2. **Stale gauges** — once a gauge stabilizes (e.g., peer count stays at 1), it stops emitting new data points. Prometheus marks it stale after ~5 minutes. The validation script uses the Prometheus `/api/v1/series` endpoint instead of instant queries to catch such gauges.
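The emit-on-change behaviour and the dirty-flag fix can be modelled in a few lines (a Python sketch with illustrative names, not the C++ `StatsDGaugeImpl` API):

```python
class DirtyGauge:
    """Model of the emit-on-change gauge behaviour described above."""

    def __init__(self):
        self.value = 0
        self.dirty = True  # the fix: the first flush always emits

    def set(self, v):
        if v != self.value:
            self.value = v
            self.dirty = True

    def flush(self):
        if not self.dirty:
            return None  # unchanged since last flush: nothing emitted
        self.dirty = False
        return self.value

g = DirtyGauge()
first = g.flush()   # emits the initial 0 because dirty starts True
second = g.flush()  # value unchanged: nothing emitted (goes stale)
```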


@@ -0,0 +1,379 @@
#!/usr/bin/env bash
# benchmark.sh — Performance benchmark for rippled telemetry overhead.
#
# Runs two identical workloads against a rippled cluster:
# 1. Baseline: telemetry disabled ([telemetry] enabled=0)
# 2. Telemetry: full telemetry enabled (traces + StatsD + all categories)
#
# Compares CPU, memory, RPC latency, TPS, and consensus round time.
# Outputs a Markdown table with pass/fail against configured thresholds.
#
# Usage:
# ./benchmark.sh --xrpld /path/to/xrpld --duration 300
#
# Thresholds (configurable via environment variables):
# BENCH_CPU_OVERHEAD_PCT=3 CPU overhead < 3%
# BENCH_MEM_OVERHEAD_MB=5 Memory overhead < 5MB
# BENCH_RPC_LATENCY_IMPACT_MS=2 RPC p99 latency impact < 2ms
# BENCH_TPS_IMPACT_PCT=5 Throughput impact < 5%
# BENCH_CONSENSUS_IMPACT_PCT=1 Consensus round time impact < 1%
set -euo pipefail
# ---------------------------------------------------------------------------
# Colored output helpers
# ---------------------------------------------------------------------------
log() { printf "\033[1;34m[BENCH]\033[0m %s\n" "$*"; }
ok() { printf "\033[1;32m[BENCH]\033[0m %s\n" "$*"; }
warn() { printf "\033[1;33m[BENCH]\033[0m %s\n" "$*"; }
fail() { printf "\033[1;31m[BENCH]\033[0m %s\n" "$*"; }
die() { printf "\033[1;31m[BENCH]\033[0m %s\n" "$*" >&2; exit 1; }
# ---------------------------------------------------------------------------
# Defaults and thresholds
# ---------------------------------------------------------------------------
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd "$SCRIPT_DIR/../../.." && pwd)"
# Configurable thresholds via environment variables.
CPU_THRESHOLD="${BENCH_CPU_OVERHEAD_PCT:-3}"
MEM_THRESHOLD="${BENCH_MEM_OVERHEAD_MB:-5}"
RPC_THRESHOLD="${BENCH_RPC_LATENCY_IMPACT_MS:-2}"
TPS_THRESHOLD="${BENCH_TPS_IMPACT_PCT:-5}"
CONSENSUS_THRESHOLD="${BENCH_CONSENSUS_IMPACT_PCT:-1}"
XRPLD="${BENCH_XRPLD:-$REPO_ROOT/.build/xrpld}"
DURATION=300
NUM_NODES=3
WORKDIR="/tmp/xrpld-benchmark"
RESULTS_DIR="$SCRIPT_DIR/benchmark-results"
RPC_PORT_BASE=5020
PEER_PORT_BASE=51250
# ---------------------------------------------------------------------------
# Argument parsing
# ---------------------------------------------------------------------------
usage() {
echo "Usage: $0 [OPTIONS]"
echo ""
echo "Options:"
echo " --xrpld PATH Path to xrpld binary (default: \$REPO_ROOT/.build/xrpld)"
echo " --duration SECS Benchmark duration per run (default: 300)"
echo " --nodes NUM Number of validator nodes (default: 3)"
echo " --output DIR Results output directory"
echo " -h, --help Show this help"
exit 0
}
while [ $# -gt 0 ]; do
case "$1" in
--xrpld) XRPLD="$2"; shift 2 ;;
--duration) DURATION="$2"; shift 2 ;;
--nodes) NUM_NODES="$2"; shift 2 ;;
--output) RESULTS_DIR="$2"; shift 2 ;;
-h|--help) usage ;;
*) die "Unknown option: $1" ;;
esac
done
# Validate prerequisites.
[ -x "$XRPLD" ] || die "xrpld not found at $XRPLD"
command -v jq >/dev/null 2>&1 || die "jq not found"
command -v bc >/dev/null 2>&1 || die "bc not found"
command -v curl >/dev/null 2>&1 || die "curl not found"
mkdir -p "$RESULTS_DIR"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
# ---------------------------------------------------------------------------
# Node cluster management
# ---------------------------------------------------------------------------
start_cluster() {
local telemetry_enabled="$1"
local label="$2"
log "Starting $NUM_NODES-node cluster ($label, telemetry=$telemetry_enabled)..."
rm -rf "$WORKDIR"
mkdir -p "$WORKDIR"
# Generate keys using first node.
bash "$SCRIPT_DIR/generate-validator-keys.sh" "$XRPLD" "$NUM_NODES" "$WORKDIR"
# Build per-node configs.
for i in $(seq 1 "$NUM_NODES"); do
local node_dir="$WORKDIR/node$i"
mkdir -p "$node_dir/nudb" "$node_dir/db"
local rpc_port=$((RPC_PORT_BASE + i - 1))
local peer_port=$((PEER_PORT_BASE + i - 1))
local seed
seed=$(jq -r ".[$((i-1))].seed" "$WORKDIR/validator-keys.json")
# Build ips_fixed list.
local ips_fixed=""
for j in $(seq 1 "$NUM_NODES"); do
if [ "$j" -ne "$i" ]; then
ips_fixed="${ips_fixed}127.0.0.1 $((PEER_PORT_BASE + j - 1))
"
fi
done
# Build telemetry section.
local telemetry_section=""
if [ "$telemetry_enabled" = "1" ]; then
telemetry_section="
[telemetry]
enabled=1
service_instance_id=bench-node-${i}
endpoint=http://localhost:4318/v1/traces
exporter=otlp_http
sampling_ratio=1.0
batch_size=512
batch_delay_ms=2000
max_queue_size=2048
trace_rpc=1
trace_transactions=1
trace_consensus=1
trace_peer=1
trace_ledger=1
[insight]
server=statsd
address=127.0.0.1:8125
prefix=rippled"
else
telemetry_section="
[telemetry]
enabled=0"
fi
cat > "$node_dir/xrpld.cfg" <<EOCFG
[server]
port_rpc
port_peer
[port_rpc]
port = $rpc_port
ip = 127.0.0.1
admin = 127.0.0.1
protocol = http
[port_peer]
port = $peer_port
ip = 0.0.0.0
protocol = peer
[node_db]
type=NuDB
path=$node_dir/nudb
online_delete=256
[database_path]
$node_dir/db
[debug_logfile]
$node_dir/debug.log
[validation_seed]
$seed
[validators_file]
$WORKDIR/validators.txt
[ips_fixed]
${ips_fixed}
[peer_private]
1
${telemetry_section}
[rpc_startup]
{ "command": "log_level", "severity": "warning" }
[ssl_verify]
0
EOCFG
"$XRPLD" --conf "$node_dir/xrpld.cfg" --start > "$node_dir/stdout.log" 2>&1 &
echo $! > "$node_dir/xrpld.pid"
done
# Wait for consensus.
log "Waiting for consensus..."
for attempt in $(seq 1 120); do
local ready=0
for i in $(seq 1 "$NUM_NODES"); do
local port=$((RPC_PORT_BASE + i - 1))
local state
state=$(curl -sf "http://localhost:$port" \
-d '{"method":"server_info"}' 2>/dev/null \
| jq -r '.result.info.server_state' 2>/dev/null || echo "")
if [ "$state" = "proposing" ]; then
ready=$((ready + 1))
fi
done
if [ "$ready" -ge "$NUM_NODES" ]; then
ok "All $NUM_NODES nodes proposing (attempt $attempt)"
break
fi
if [ "$attempt" -eq 120 ]; then
warn "Consensus timeout — $ready/$NUM_NODES nodes ready"
fi
sleep 1
done
# Let the cluster stabilize.
sleep 5
}
stop_cluster() {
log "Stopping cluster..."
for i in $(seq 1 "$NUM_NODES"); do
local pidfile="$WORKDIR/node$i/xrpld.pid"
if [ -f "$pidfile" ]; then
kill "$(cat "$pidfile")" 2>/dev/null || true
fi
done
pkill -f "$WORKDIR" 2>/dev/null || true
sleep 3
}
# Build RPC ports CSV string.
rpc_ports_csv() {
local ports=""
for i in $(seq 1 "$NUM_NODES"); do
[ -n "$ports" ] && ports="$ports,"
ports="$ports$((RPC_PORT_BASE + i - 1))"
done
echo "$ports"
}
# ---------------------------------------------------------------------------
# Run benchmark
# ---------------------------------------------------------------------------
log "="
log " rippled Telemetry Performance Benchmark"
log " Nodes: $NUM_NODES | Duration: ${DURATION}s | Binary: $XRPLD"
log "="
# --- Baseline run ---
BASELINE_FILE="$RESULTS_DIR/baseline-${TIMESTAMP}.json"
start_cluster "0" "baseline"
bash "$SCRIPT_DIR/collect_system_metrics.sh" "$(rpc_ports_csv)" "$DURATION" "$BASELINE_FILE"
stop_cluster
# --- Telemetry run ---
TELEMETRY_FILE="$RESULTS_DIR/telemetry-${TIMESTAMP}.json"
start_cluster "1" "telemetry"
bash "$SCRIPT_DIR/collect_system_metrics.sh" "$(rpc_ports_csv)" "$DURATION" "$TELEMETRY_FILE"
stop_cluster
# ---------------------------------------------------------------------------
# Compare results
# ---------------------------------------------------------------------------
log "Comparing results..."
read_metric() {
local file="$1"
local key="$2"
jq -r ".$key // 0" "$file"
}
BASE_CPU=$(read_metric "$BASELINE_FILE" "cpu_pct_avg")
TELE_CPU=$(read_metric "$TELEMETRY_FILE" "cpu_pct_avg")
CPU_DELTA=$(echo "scale=2; $TELE_CPU - $BASE_CPU" | bc 2>/dev/null || echo "0")
BASE_MEM=$(read_metric "$BASELINE_FILE" "memory_rss_mb_peak")
TELE_MEM=$(read_metric "$TELEMETRY_FILE" "memory_rss_mb_peak")
MEM_DELTA=$(echo "scale=2; $TELE_MEM - $BASE_MEM" | bc 2>/dev/null || echo "0")
BASE_RPC=$(read_metric "$BASELINE_FILE" "rpc_p99_ms")
TELE_RPC=$(read_metric "$TELEMETRY_FILE" "rpc_p99_ms")
RPC_DELTA=$(echo "scale=2; $TELE_RPC - $BASE_RPC" | bc 2>/dev/null || echo "0")
BASE_TPS=$(read_metric "$BASELINE_FILE" "tps")
TELE_TPS=$(read_metric "$TELEMETRY_FILE" "tps")
if [ "$(echo "$BASE_TPS > 0" | bc 2>/dev/null)" = "1" ]; then
TPS_IMPACT=$(echo "scale=2; ($BASE_TPS - $TELE_TPS) / $BASE_TPS * 100" | bc 2>/dev/null || echo "0")
else
TPS_IMPACT="0"
fi
BASE_CONS=$(read_metric "$BASELINE_FILE" "consensus_round_p95_ms")
TELE_CONS=$(read_metric "$TELEMETRY_FILE" "consensus_round_p95_ms")
if [ "$(echo "$BASE_CONS > 0" | bc 2>/dev/null)" = "1" ]; then
CONS_IMPACT=$(echo "scale=2; ($TELE_CONS - $BASE_CONS) / $BASE_CONS * 100" | bc 2>/dev/null || echo "0")
else
CONS_IMPACT="0"
fi
# ---------------------------------------------------------------------------
# Pass/fail checks
# ---------------------------------------------------------------------------
PASS_COUNT=0
FAIL_COUNT=0
check_threshold() {
    local name="$1"
    local actual="$2"
    local threshold="$3"
    local unit="$4"
    # Compare: actual <= threshold. Log to stderr so the command
    # substitution at the call site captures only the PASS/FAIL token.
    if [ "$(echo "$actual <= $threshold" | bc 2>/dev/null)" = "1" ]; then
        ok "$name: ${actual}${unit} <= ${threshold}${unit} PASS" >&2
        echo "PASS"
    else
        fail "$name: ${actual}${unit} > ${threshold}${unit} FAIL" >&2
        echo "FAIL"
    fi
}
CPU_RESULT=$(check_threshold "CPU overhead" "$CPU_DELTA" "$CPU_THRESHOLD" "%")
MEM_RESULT=$(check_threshold "Memory overhead" "$MEM_DELTA" "$MEM_THRESHOLD" "MB")
RPC_RESULT=$(check_threshold "RPC p99 impact" "$RPC_DELTA" "$RPC_THRESHOLD" "ms")
TPS_RESULT=$(check_threshold "TPS impact" "$TPS_IMPACT" "$TPS_THRESHOLD" "%")
CONS_RESULT=$(check_threshold "Consensus impact" "$CONS_IMPACT" "$CONSENSUS_THRESHOLD" "%")
# Tally results here rather than inside check_threshold: the function runs
# in a $(...) subshell, so counter increments made there would be lost.
for result in "$CPU_RESULT" "$MEM_RESULT" "$RPC_RESULT" "$TPS_RESULT" "$CONS_RESULT"; do
    if [ "$result" = "PASS" ]; then
        PASS_COUNT=$((PASS_COUNT + 1))
    else
        FAIL_COUNT=$((FAIL_COUNT + 1))
    fi
done
# ---------------------------------------------------------------------------
# Output Markdown table
# ---------------------------------------------------------------------------
REPORT_FILE="$RESULTS_DIR/benchmark-report-${TIMESTAMP}.md"
cat > "$REPORT_FILE" <<EOMD
# Telemetry Performance Benchmark Report
**Date**: $(date -u +"%Y-%m-%d %H:%M:%S UTC")
**Nodes**: $NUM_NODES | **Duration**: ${DURATION}s per run
**Binary**: $XRPLD
## Results
| Metric | Baseline | Telemetry | Delta | Threshold | Result |
|--------|----------|-----------|-------|-----------|--------|
| CPU (avg %) | ${BASE_CPU}% | ${TELE_CPU}% | ${CPU_DELTA}% | <= ${CPU_THRESHOLD}% | ${CPU_RESULT} |
| Memory RSS (peak MB) | ${BASE_MEM} MB | ${TELE_MEM} MB | ${MEM_DELTA} MB | <= ${MEM_THRESHOLD} MB | ${MEM_RESULT} |
| RPC p99 Latency (ms) | ${BASE_RPC} ms | ${TELE_RPC} ms | ${RPC_DELTA} ms | <= ${RPC_THRESHOLD} ms | ${RPC_RESULT} |
| Throughput (TPS) | ${BASE_TPS} | ${TELE_TPS} | ${TPS_IMPACT}% | <= ${TPS_THRESHOLD}% | ${TPS_RESULT} |
| Consensus Round p95 (ms) | ${BASE_CONS} ms | ${TELE_CONS} ms | ${CONS_IMPACT}% | <= ${CONSENSUS_THRESHOLD}% | ${CONS_RESULT} |
## Summary
- **Passed**: $PASS_COUNT / $((PASS_COUNT + FAIL_COUNT))
- **Failed**: $FAIL_COUNT / $((PASS_COUNT + FAIL_COUNT))
## Raw Data
- Baseline: \`$(basename "$BASELINE_FILE")\`
- Telemetry: \`$(basename "$TELEMETRY_FILE")\`
EOMD
ok "Benchmark report written to $REPORT_FILE"
cat "$REPORT_FILE"
# Exit with failure if any check failed.
if [ "$FAIL_COUNT" -gt 0 ]; then
exit 1
fi
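For reference, the impact arithmetic that benchmark.sh pushes through `bc` can be sketched in Python. The function names are illustrative, and the sign convention matches the script: positive percentages mean telemetry made things worse.

```python
def tps_impact_pct(base_tps: float, tele_tps: float) -> float:
    """Percent throughput lost with telemetry enabled (positive = regression)."""
    if base_tps <= 0:
        return 0.0
    return round((base_tps - tele_tps) / base_tps * 100, 2)


def consensus_impact_pct(base_ms: float, tele_ms: float) -> float:
    """Percent increase in consensus round time (positive = regression)."""
    if base_ms <= 0:
        return 0.0
    return round((tele_ms - base_ms) / base_ms * 100, 2)


def check_threshold(actual: float, threshold: float) -> str:
    """Mirror of the shell check: pass when actual <= threshold."""
    return "PASS" if actual <= threshold else "FAIL"
```

A 100 → 95 TPS drop is a 5.0% impact, exactly at the default `BENCH_TPS_IMPACT_PCT` threshold, so it still passes under the `<=` comparison.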


@@ -0,0 +1,233 @@
#!/usr/bin/env bash
# collect_system_metrics.sh — Collect CPU, memory, and RPC latency metrics
# from running xrpld nodes for benchmark comparison.
#
# Samples system metrics at regular intervals and writes a JSON summary.
# Used by benchmark.sh for baseline vs telemetry comparison.
#
# Usage:
# ./collect_system_metrics.sh <rpc_ports_csv> <duration_seconds> <output_file>
#
# Example:
# ./collect_system_metrics.sh "5005,5006,5007" 300 /tmp/metrics-baseline.json
#
# Output JSON format:
# {
# "cpu_pct_avg": 12.5,
# "memory_rss_mb_peak": 450.2,
# "rpc_p99_ms": 15.3,
# "tps": 4.8,
# "consensus_round_p95_ms": 3200,
# "samples": 60
# }
set -euo pipefail
# ---------------------------------------------------------------------------
# Colored output helpers
# ---------------------------------------------------------------------------
log() { printf "\033[1;34m[METRICS]\033[0m %s\n" "$*"; }
ok() { printf "\033[1;32m[METRICS]\033[0m %s\n" "$*"; }
die() { printf "\033[1;31m[METRICS]\033[0m %s\n" "$*" >&2; exit 1; }
# ---------------------------------------------------------------------------
# Argument parsing
# ---------------------------------------------------------------------------
usage() {
echo "Usage: $0 <rpc_ports_csv> <duration_seconds> <output_file>"
echo ""
echo "Arguments:"
echo " rpc_ports_csv Comma-separated RPC ports (e.g., 5005,5006,5007)"
echo " duration_seconds How long to collect metrics"
echo " output_file Path to write JSON results"
exit 1
}
if [ $# -lt 3 ]; then
usage
fi
RPC_PORTS_CSV="$1"
DURATION="$2"
OUTPUT_FILE="$3"
IFS=',' read -ra RPC_PORTS <<< "$RPC_PORTS_CSV"
SAMPLE_INTERVAL=5
SAMPLES=$((DURATION / SAMPLE_INTERVAL))
log "Collecting metrics for ${DURATION}s (${SAMPLES} samples, ${#RPC_PORTS[@]} nodes)..."
# ---------------------------------------------------------------------------
# Temporary files for aggregation
# ---------------------------------------------------------------------------
TMPDIR_METRICS="$(mktemp -d)"
CPU_FILE="$TMPDIR_METRICS/cpu.txt"
MEM_FILE="$TMPDIR_METRICS/mem.txt"
RPC_FILE="$TMPDIR_METRICS/rpc.txt"
LEDGER_FILE="$TMPDIR_METRICS/ledger.txt"
touch "$CPU_FILE" "$MEM_FILE" "$RPC_FILE" "$LEDGER_FILE"
cleanup() {
rm -rf "$TMPDIR_METRICS"
}
trap cleanup EXIT
# ---------------------------------------------------------------------------
# Get initial ledger sequence for TPS calculation
# ---------------------------------------------------------------------------
INITIAL_SEQ=0
INITIAL_TIME=$(date +%s)
for port in "${RPC_PORTS[@]}"; do
seq=$(curl -sf "http://localhost:$port" \
-d '{"method":"server_info"}' 2>/dev/null \
| jq -r '.result.info.validated_ledger.seq // 0' 2>/dev/null || echo 0)
if [ "$seq" -gt "$INITIAL_SEQ" ]; then
INITIAL_SEQ=$seq
fi
done
log "Initial validated ledger seq: $INITIAL_SEQ"
# ---------------------------------------------------------------------------
# Sampling loop
# ---------------------------------------------------------------------------
for sample in $(seq 1 "$SAMPLES"); do
# Collect CPU usage for xrpld processes.
# Uses ps to find all xrpld processes and average their CPU%.
cpu_sum=0
cpu_count=0
while IFS= read -r line; do
cpu_val=$(echo "$line" | awk '{print $1}')
if [ -n "$cpu_val" ] && [ "$cpu_val" != "0.0" ]; then
cpu_sum=$(echo "$cpu_sum + $cpu_val" | bc 2>/dev/null || echo "$cpu_sum")
cpu_count=$((cpu_count + 1))
fi
done < <(ps aux 2>/dev/null | grep '[x]rpld' | awk '{print $3}')
if [ "$cpu_count" -gt 0 ]; then
cpu_avg=$(echo "scale=2; $cpu_sum / $cpu_count" | bc 2>/dev/null || echo "0")
echo "$cpu_avg" >> "$CPU_FILE"
fi
# Collect memory RSS for xrpld processes.
while IFS= read -r line; do
rss_kb=$(echo "$line" | awk '{print $1}')
if [ -n "$rss_kb" ] && [ "$rss_kb" != "0" ]; then
rss_mb=$(echo "scale=2; $rss_kb / 1024" | bc 2>/dev/null || echo "0")
echo "$rss_mb" >> "$MEM_FILE"
fi
done < <(ps aux 2>/dev/null | grep '[x]rpld' | awk '{print $6}')
# Collect RPC latency from each node.
for port in "${RPC_PORTS[@]}"; do
        # %N (nanoseconds) requires GNU date; BSD/macOS date emits a
        # literal 'N' instead.
        start_ns=$(date +%s%N)
        curl -sf "http://localhost:$port" \
            -d '{"method":"server_info"}' > /dev/null 2>&1 || true
        end_ns=$(date +%s%N)
        latency_ms=$(( (end_ns - start_ns) / 1000000 ))
echo "$latency_ms" >> "$RPC_FILE"
done
# Record current validated ledger seq.
for port in "${RPC_PORTS[@]}"; do
seq=$(curl -sf "http://localhost:$port" \
-d '{"method":"server_info"}' 2>/dev/null \
| jq -r '.result.info.validated_ledger.seq // 0' 2>/dev/null || echo 0)
echo "$seq" >> "$LEDGER_FILE"
break # Only need one node's seq per sample.
done
# Progress indicator.
if [ $((sample % 10)) -eq 0 ]; then
log " Sample $sample/$SAMPLES..."
fi
sleep "$SAMPLE_INTERVAL"
done
# ---------------------------------------------------------------------------
# Compute aggregated metrics
# ---------------------------------------------------------------------------
log "Computing aggregated metrics..."
# CPU average.
if [ -s "$CPU_FILE" ]; then
CPU_AVG=$(awk '{ sum += $1; n++ } END { if (n>0) printf "%.2f", sum/n; else print "0" }' "$CPU_FILE")
else
CPU_AVG="0"
fi
# Memory peak RSS (MB).
if [ -s "$MEM_FILE" ]; then
MEM_PEAK=$(sort -n "$MEM_FILE" | tail -1)
else
MEM_PEAK="0"
fi
# RPC latency p99 (ms).
if [ -s "$RPC_FILE" ]; then
RPC_COUNT=$(wc -l < "$RPC_FILE")
P99_INDEX=$(echo "scale=0; $RPC_COUNT * 99 / 100" | bc)
RPC_P99=$(sort -n "$RPC_FILE" | sed -n "${P99_INDEX}p")
[ -z "$RPC_P99" ] && RPC_P99="0"
else
RPC_P99="0"
fi
# TPS calculation from ledger sequence advancement.
FINAL_SEQ=0
for port in "${RPC_PORTS[@]}"; do
seq=$(curl -sf "http://localhost:$port" \
-d '{"method":"server_info"}' 2>/dev/null \
| jq -r '.result.info.validated_ledger.seq // 0' 2>/dev/null || echo 0)
if [ "$seq" -gt "$FINAL_SEQ" ]; then
FINAL_SEQ=$seq
fi
done
FINAL_TIME=$(date +%s)
ELAPSED=$((FINAL_TIME - INITIAL_TIME))
LEDGER_ADVANCE=$((FINAL_SEQ - INITIAL_SEQ))
if [ "$ELAPSED" -gt 0 ] && [ "$LEDGER_ADVANCE" -gt 0 ]; then
# Rough TPS: assume ~avg_txs_per_ledger * ledgers / elapsed.
# Without tx count, use ledger close rate as proxy.
TPS=$(echo "scale=2; $LEDGER_ADVANCE / $ELAPSED" | bc 2>/dev/null || echo "0")
else
TPS="0"
fi
# Consensus round time proxy (reported as p95): mean ledger close interval,
# DURATION / (distinct ledgers - 1) in ms. A true p95 would need per-round
# close timestamps, which this sampler does not collect.
if [ -s "$LEDGER_FILE" ]; then
    UNIQUE_LEDGERS=$(sort -u "$LEDGER_FILE" | wc -l)
    if [ "$UNIQUE_LEDGERS" -gt 1 ]; then
        CONSENSUS_P95=$(echo "scale=0; $DURATION * 1000 / ($UNIQUE_LEDGERS - 1)" | bc 2>/dev/null || echo "0")
    else
        CONSENSUS_P95="0"
    fi
else
    CONSENSUS_P95="0"
fi
# ---------------------------------------------------------------------------
# Write output JSON
# ---------------------------------------------------------------------------
cat > "$OUTPUT_FILE" <<EOF_JSON
{
"cpu_pct_avg": $CPU_AVG,
"memory_rss_mb_peak": $MEM_PEAK,
"rpc_p99_ms": $RPC_P99,
"tps": $TPS,
"consensus_round_p95_ms": $CONSENSUS_P95,
"samples": $SAMPLES,
"duration_seconds": $DURATION,
"node_count": ${#RPC_PORTS[@]},
"initial_ledger_seq": $INITIAL_SEQ,
"final_ledger_seq": $FINAL_SEQ
}
EOF_JSON
ok "Metrics written to $OUTPUT_FILE"
cat "$OUTPUT_FILE"


@@ -0,0 +1,139 @@
{
"description": "Expected metric inventory for rippled telemetry validation. Sourced from 09-data-collection-reference.md.",
"spanmetrics": {
"description": "SpanMetrics-derived RED metrics from the OTel Collector spanmetrics connector.",
"metrics": [
"traces_span_metrics_calls_total",
"traces_span_metrics_duration_milliseconds_bucket",
"traces_span_metrics_duration_milliseconds_count",
"traces_span_metrics_duration_milliseconds_sum"
],
"required_labels": [
"span_name",
"status_code",
"service_name",
"span_kind"
],
"dimension_labels": [
"xrpl_rpc_command",
"xrpl_rpc_status",
"xrpl_consensus_mode",
"xrpl_tx_local",
"xrpl_peer_proposal_trusted",
"xrpl_peer_validation_trusted"
]
},
"statsd_gauges": {
"description": "beast::insight gauges emitted via StatsD UDP.",
"metrics": [
"rippled_LedgerMaster_Validated_Ledger_Age",
"rippled_LedgerMaster_Published_Ledger_Age",
"rippled_State_Accounting_Full_duration",
"rippled_Peer_Finder_Active_Inbound_Peers",
"rippled_Peer_Finder_Active_Outbound_Peers",
"rippled_jobq_job_count"
]
},
"statsd_counters": {
"description": "beast::insight counters emitted via StatsD UDP. The OTel Prometheus exporter appends _total to monotonic counters.",
"metrics": ["rippled_rpc_requests_total", "rippled_ledger_fetches_total"]
},
"statsd_histograms": {
"description": "beast::insight timers/histograms emitted via StatsD UDP.",
"metrics": ["rippled_rpc_time", "rippled_rpc_size"]
},
"overlay_traffic": {
"description": "Overlay traffic metrics (subset — full list has 45+ categories).",
"metrics": [
"rippled_total_Bytes_In",
"rippled_total_Bytes_Out",
"rippled_total_Messages_In",
"rippled_total_Messages_Out"
]
},
"phase9_nodestore": {
"description": "Phase 9 NodeStore I/O observable gauge (MetricsRegistry via OTLP). Single metric with 'metric' label distinguishing sub-metrics.",
"metrics": ["rippled_nodestore_state"]
},
"phase9_cache": {
"description": "Phase 9 cache hit rate observable gauge (MetricsRegistry via OTLP). Single metric with 'metric' label.",
"metrics": ["rippled_cache_metrics"]
},
"phase9_txq": {
"description": "Phase 9 transaction queue observable gauge (MetricsRegistry via OTLP). Single metric with 'metric' label.",
"metrics": ["rippled_txq_metrics"]
},
"phase9_rpc_method": {
"description": "Phase 9 per-RPC-method counters (MetricsRegistry via OTLP).",
"metrics": ["rippled_rpc_method_started_total"]
},
"phase9_objects": {
"description": "Phase 9 counted object instances observable gauge (MetricsRegistry via OTLP).",
"metrics": ["rippled_object_count"]
},
"phase9_load": {
"description": "Phase 9 fee escalation and load factor observable gauge (MetricsRegistry via OTLP).",
"metrics": ["rippled_load_factor_metrics"]
},
"parity_validation_agreement": {
"description": "External dashboard parity: validation agreement percentages (push_metrics.py).",
"metrics": [
"rippled_validation_agreement{metric=\"agreement_pct_1h\"}",
"rippled_validation_agreement{metric=\"agreement_pct_24h\"}"
]
},
"parity_validator_health": {
"description": "External dashboard parity: validator health indicators (push_metrics.py).",
"metrics": [
"rippled_validator_health{metric=\"amendment_blocked\"}",
"rippled_validator_health{metric=\"unl_expiry_days\"}"
]
},
"parity_peer_quality": {
"description": "External dashboard parity: peer quality metrics (push_metrics.py).",
"metrics": [
"rippled_peer_quality{metric=\"peer_latency_p90_ms\"}",
"rippled_peer_quality{metric=\"peers_insane_count\"}"
]
},
"parity_ledger_economy": {
"description": "External dashboard parity: ledger economy metrics (push_metrics.py).",
"metrics": [
"rippled_ledger_economy{metric=\"base_fee_xrp\"}",
"rippled_ledger_economy{metric=\"transaction_rate\"}"
]
},
"parity_state_tracking": {
"description": "External dashboard parity: server state tracking (push_metrics.py).",
"metrics": ["rippled_state_tracking{metric=\"state_value\"}"]
},
"parity_counters": {
"description": "External dashboard parity: monotonic counters (push_metrics.py).",
"metrics": [
"rippled_ledgers_closed_total",
"rippled_validations_sent_total",
"rippled_state_changes_total"
]
},
"parity_storage": {
"description": "External dashboard parity: storage detail metrics (push_metrics.py).",
"metrics": ["rippled_storage_detail{metric=\"nudb_bytes\"}"]
},
"grafana_dashboards": {
"description": "All 13 Grafana dashboards that must render data.",
"uids": [
"rippled-rpc-perf",
"rippled-transactions",
"rippled-consensus",
"rippled-ledger-ops",
"rippled-peer-net",
"rippled-system-node-health",
"rippled-system-network",
"rippled-system-rpc",
"rippled-system-overlay-detail",
"rippled-system-ledger-sync",
"rippled-validator-health",
"rippled-peer-quality"
]
}
}
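One way a validation harness might consume this inventory (a sketch; `expected_metric_names` is a hypothetical helper): walk every section's `metrics` list and strip the `{label="value"}` selector suffix used by the parity entries, leaving bare metric names to check against a Prometheus scrape.

```python
import re


def expected_metric_names(inventory: dict) -> set[str]:
    """Collect bare metric names from each section's `metrics` list,
    dropping any {label="value"} selector suffix (parity_* entries)."""
    names: set[str] = set()
    for section in inventory.values():
        if isinstance(section, dict):  # skip the top-level description string
            for metric in section.get("metrics", []):
                names.add(re.sub(r"\{.*\}$", "", metric))
    return names
```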


@@ -0,0 +1,194 @@
{
"description": "Expected span inventory for rippled telemetry validation. Sourced from 09-data-collection-reference.md.",
"spans": [
{
"name": "rpc.request",
"category": "rpc",
"parent": null,
"required_attributes": [],
"config_flag": "trace_rpc"
},
{
"name": "rpc.process",
"category": "rpc",
"parent": "rpc.request",
"required_attributes": [],
"config_flag": "trace_rpc"
},
{
"name": "rpc.ws_message",
"category": "rpc",
"parent": null,
"required_attributes": [],
"config_flag": "trace_rpc"
},
{
"name": "rpc.command.*",
"category": "rpc",
"parent": "rpc.process",
"required_attributes": [
"xrpl.rpc.command",
"xrpl.rpc.version",
"xrpl.rpc.role",
"xrpl.rpc.status",
"xrpl.rpc.duration_ms",
"xrpl.node.amendment_blocked",
"xrpl.node.server_state"
],
"config_flag": "trace_rpc",
"note": "Wildcard — matches rpc.command.server_info, rpc.command.ledger, etc."
},
{
"name": "tx.process",
"category": "transaction",
"parent": null,
"required_attributes": ["xrpl.tx.hash", "xrpl.tx.local", "xrpl.tx.path"],
"config_flag": "trace_transactions"
},
{
"name": "tx.receive",
"category": "transaction",
"parent": null,
"required_attributes": [
"xrpl.peer.id",
"xrpl.tx.hash",
"xrpl.tx.suppressed",
"xrpl.tx.status"
],
"config_flag": "trace_transactions"
},
{
"name": "tx.apply",
"category": "transaction",
"parent": "ledger.build",
"required_attributes": [
"xrpl.ledger.seq",
"xrpl.ledger.tx_count",
"xrpl.ledger.tx_failed"
],
"config_flag": "trace_transactions"
},
{
"name": "consensus.proposal.send",
"category": "consensus",
"parent": null,
"required_attributes": ["xrpl.consensus.round"],
"config_flag": "trace_consensus"
},
{
"name": "consensus.ledger_close",
"category": "consensus",
"parent": null,
"required_attributes": [
"xrpl.consensus.ledger.seq",
"xrpl.consensus.mode"
],
"config_flag": "trace_consensus"
},
{
"name": "consensus.accept",
"category": "consensus",
"parent": null,
"required_attributes": [
"xrpl.consensus.proposers",
"xrpl.consensus.validation_quorum",
"xrpl.consensus.proposers_validated"
],
"config_flag": "trace_consensus"
},
{
"name": "consensus.validation.send",
"category": "consensus",
"parent": null,
"required_attributes": [
"xrpl.consensus.ledger.seq",
"xrpl.consensus.proposing",
"xrpl.validation.ledger_hash",
"xrpl.validation.full"
],
"config_flag": "trace_consensus"
},
{
"name": "consensus.accept.apply",
"category": "consensus",
"parent": null,
"required_attributes": [
"xrpl.consensus.close_time",
"xrpl.consensus.ledger.seq",
"xrpl.consensus.parent_close_time",
"xrpl.consensus.close_time_self",
"xrpl.consensus.close_time_vote_bins",
"xrpl.consensus.resolution_direction"
],
"config_flag": "trace_consensus"
},
{
"name": "ledger.build",
"category": "ledger",
"parent": null,
"required_attributes": [
"xrpl.ledger.seq",
"xrpl.ledger.tx_count",
"xrpl.ledger.tx_failed",
"xrpl.ledger.close_time",
"xrpl.ledger.close_time_correct",
"xrpl.ledger.close_resolution_ms"
],
"config_flag": "trace_ledger"
},
{
"name": "ledger.validate",
"category": "ledger",
"parent": null,
"required_attributes": ["xrpl.ledger.seq", "xrpl.ledger.validations"],
"config_flag": "trace_ledger"
},
{
"name": "ledger.store",
"category": "ledger",
"parent": null,
"required_attributes": ["xrpl.ledger.seq"],
"config_flag": "trace_ledger"
},
{
"name": "peer.proposal.receive",
"category": "peer",
"parent": null,
"required_attributes": ["xrpl.peer.id", "xrpl.peer.proposal.trusted"],
"config_flag": "trace_peer"
},
{
"name": "peer.validation.receive",
"category": "peer",
"parent": null,
"required_attributes": [
"xrpl.peer.id",
"xrpl.peer.validation.trusted",
"xrpl.peer.validation.ledger_hash",
"xrpl.peer.validation.full"
],
"config_flag": "trace_peer"
}
],
"parent_child_relationships": [
{
"parent": "rpc.request",
"child": "rpc.process",
"description": "RPC request contains processing span",
"skip": true,
"skip_reason": "rpc.request and rpc.process run on different threads (onRequest posts a coroutine to JobQueue for processRequest). Span context is not propagated across the thread boundary. Requires C++ fix to capture and forward the span context through the coroutine lambda."
},
{
"parent": "rpc.process",
"child": "rpc.command.*",
"description": "Processing span contains per-command span"
},
{
"parent": "ledger.build",
"child": "tx.apply",
"description": "Ledger build contains transaction application"
}
],
"total_span_types": 17,
"total_unique_attributes": 37
}


@@ -0,0 +1,150 @@
#!/usr/bin/env bash
# generate-validator-keys.sh — Generate validator key pairs for the workload harness.
#
# Uses a temporary standalone xrpld instance to call `validation_create` RPC
# for each node. Outputs a JSON file mapping node index to seed + public key.
#
# Usage:
# ./generate-validator-keys.sh <xrpld_binary> <num_nodes> <output_dir>
#
# Output:
# <output_dir>/validator-keys.json — JSON array of {index, seed, public_key}
# <output_dir>/validators.txt — [validators] section for xrpld.cfg
set -euo pipefail
# ---------------------------------------------------------------------------
# Colored output helpers
# ---------------------------------------------------------------------------
log() { printf "\033[1;34m[KEYGEN]\033[0m %s\n" "$*"; }
ok() { printf "\033[1;32m[KEYGEN]\033[0m %s\n" "$*"; }
die() { printf "\033[1;31m[KEYGEN]\033[0m %s\n" "$*" >&2; exit 1; }
# ---------------------------------------------------------------------------
# Argument parsing
# ---------------------------------------------------------------------------
usage() {
echo "Usage: $0 <xrpld_binary> <num_nodes> <output_dir>"
echo ""
echo "Arguments:"
echo " xrpld_binary Path to xrpld binary (built with telemetry=ON)"
echo " num_nodes Number of validator key pairs to generate (1-20)"
echo " output_dir Directory to write validator-keys.json and validators.txt"
exit 1
}
if [ $# -lt 3 ]; then
usage
fi
XRPLD="$1"
NUM_NODES="$2"
OUTPUT_DIR="$3"
# Validate arguments
[ -x "$XRPLD" ] || die "xrpld binary not found or not executable: $XRPLD"
[[ "$NUM_NODES" =~ ^[0-9]+$ ]] || die "num_nodes must be a positive integer"
[ "$NUM_NODES" -ge 1 ] && [ "$NUM_NODES" -le 20 ] || die "num_nodes must be between 1 and 20"
mkdir -p "$OUTPUT_DIR"
# ---------------------------------------------------------------------------
# Start a temporary standalone xrpld for key generation
# ---------------------------------------------------------------------------
TEMP_DIR="$(mktemp -d)"
TEMP_PORT=5099
TEMP_CFG="$TEMP_DIR/xrpld.cfg"
log "Starting temporary xrpld for key generation (port $TEMP_PORT)..."
cat > "$TEMP_CFG" <<EOCFG
[server]
port_rpc_keygen
[port_rpc_keygen]
port = $TEMP_PORT
ip = 127.0.0.1
admin = 127.0.0.1
protocol = http
[node_db]
type=NuDB
path=$TEMP_DIR/nudb
online_delete=256
[database_path]
$TEMP_DIR/db
[debug_logfile]
$TEMP_DIR/debug.log
[ssl_verify]
0
EOCFG
"$XRPLD" --conf "$TEMP_CFG" -a --start > "$TEMP_DIR/stdout.log" 2>&1 &
TEMP_PID=$!
# Ensure cleanup on exit
cleanup_temp() {
kill "$TEMP_PID" 2>/dev/null || true
wait "$TEMP_PID" 2>/dev/null || true
rm -rf "$TEMP_DIR"
}
trap cleanup_temp EXIT
# Wait for RPC to become available
for attempt in $(seq 1 30); do
if curl -sf "http://localhost:$TEMP_PORT" \
-d '{"method":"server_info"}' >/dev/null 2>&1; then
log "Temporary xrpld RPC ready (attempt $attempt)."
break
fi
if [ "$attempt" -eq 30 ]; then
die "Temporary xrpld RPC not ready after 30s"
fi
sleep 1
done
# ---------------------------------------------------------------------------
# Generate key pairs
# ---------------------------------------------------------------------------
log "Generating $NUM_NODES validator key pairs..."
KEYS_JSON="["
VALIDATORS_TXT="[validators]"
for i in $(seq 1 "$NUM_NODES"); do
result=$(curl -sf "http://localhost:$TEMP_PORT" \
-d '{"method":"validation_create"}')
seed=$(echo "$result" | jq -r '.result.validation_seed')
pubkey=$(echo "$result" | jq -r '.result.validation_public_key')
if [ -z "$seed" ] || [ "$seed" = "null" ]; then
die "Failed to generate key pair for node $i"
fi
log " Node $i: ${pubkey:0:20}..."
# Build JSON entry
entry="{\"index\": $i, \"seed\": \"$seed\", \"public_key\": \"$pubkey\"}"
if [ "$i" -gt 1 ]; then
KEYS_JSON="$KEYS_JSON,"
fi
KEYS_JSON="$KEYS_JSON$entry"
VALIDATORS_TXT="$VALIDATORS_TXT
$pubkey"
done
KEYS_JSON="$KEYS_JSON]"
# ---------------------------------------------------------------------------
# Write output files
# ---------------------------------------------------------------------------
echo "$KEYS_JSON" | jq '.' > "$OUTPUT_DIR/validator-keys.json"
echo "$VALIDATORS_TXT" > "$OUTPUT_DIR/validators.txt"
ok "Generated $NUM_NODES key pairs:"
ok " Keys: $OUTPUT_DIR/validator-keys.json"
ok " Validators: $OUTPUT_DIR/validators.txt"


@@ -0,0 +1,6 @@
# Python dependencies for Phase 10 workload tools.
#
# Install: pip install -r requirements.txt
websockets>=12.0
aiohttp>=3.9.0


@@ -0,0 +1,453 @@
#!/usr/bin/env python3
"""RPC Load Generator for rippled telemetry validation.
Connects to one or more rippled WebSocket endpoints and fires all traced
RPC commands at configurable rates with realistic production-like
distribution.
Command distribution (default weights):
40% Health checks: server_info, fee
30% Wallet queries: account_info, account_lines, account_objects
15% Explorer: ledger, ledger_data
10% TX lookups: tx, account_tx
5% DEX queries: book_offers, amm_info
Usage:
python3 rpc_load_generator.py --endpoints ws://localhost:6006 --rate 50 --duration 120
# Multiple endpoints (round-robin):
python3 rpc_load_generator.py \\
--endpoints ws://localhost:6006 ws://localhost:6007 \\
--rate 100 --duration 300
# Custom weights:
python3 rpc_load_generator.py --endpoints ws://localhost:6006 \\
--weights '{"server_info":60,"account_info":30,"ledger":10}'
"""
import argparse
import asyncio
import json
import logging
import random
import sys
import time
import uuid
from dataclasses import dataclass, field
from typing import Any
import websockets
# ---------------------------------------------------------------------------
# Configuration
# ---------------------------------------------------------------------------
# Default command distribution matching realistic production ratios.
# Keys are RPC command names; values are relative weights.
DEFAULT_WEIGHTS: dict[str, int] = {
# 40% health checks
"server_info": 25,
"fee": 15,
# 30% wallet queries
"account_info": 15,
"account_lines": 8,
"account_objects": 7,
# 15% explorer
"ledger": 10,
"ledger_data": 5,
# 10% tx lookups
"tx": 5,
"account_tx": 5,
# 5% DEX queries
"book_offers": 3,
"amm_info": 2,
}
# Well-known genesis account for queries that require an account parameter.
GENESIS_ACCOUNT = "rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh"
logger = logging.getLogger("rpc_load_generator")
# ---------------------------------------------------------------------------
# Data classes
# ---------------------------------------------------------------------------
@dataclass
class LoadStats:
"""Tracks request counts and latencies during a load run.
Attributes:
total_sent: Total RPC requests dispatched.
total_success: Requests that returned a valid result.
total_errors: Requests that returned an error or timed out.
latencies: Per-command list of round-trip times in seconds.
command_counts: Per-command request count.
"""
total_sent: int = 0
total_success: int = 0
total_errors: int = 0
latencies: dict[str, list[float]] = field(default_factory=dict)
command_counts: dict[str, int] = field(default_factory=dict)
def record(self, command: str, latency: float, success: bool) -> None:
"""Record the outcome of a single RPC call."""
self.total_sent += 1
if success:
self.total_success += 1
else:
self.total_errors += 1
self.latencies.setdefault(command, []).append(latency)
self.command_counts[command] = self.command_counts.get(command, 0) + 1
def summary(self) -> dict[str, Any]:
"""Return a summary dict suitable for JSON serialization."""
per_command: dict[str, Any] = {}
for cmd, lats in self.latencies.items():
sorted_lats = sorted(lats)
n = len(sorted_lats)
per_command[cmd] = {
"count": self.command_counts.get(cmd, 0),
"p50_ms": round(sorted_lats[n // 2] * 1000, 2) if n else 0,
"p95_ms": (round(sorted_lats[int(n * 0.95)] * 1000, 2) if n else 0),
"p99_ms": (round(sorted_lats[int(n * 0.99)] * 1000, 2) if n else 0),
}
return {
"total_sent": self.total_sent,
"total_success": self.total_success,
"total_errors": self.total_errors,
"error_rate_pct": (
round(self.total_errors / self.total_sent * 100, 2)
if self.total_sent
else 0
),
"per_command": per_command,
}
# ---------------------------------------------------------------------------
# RPC command builders
# ---------------------------------------------------------------------------
def build_rpc_request(command: str) -> dict[str, Any]:
"""Build a native WebSocket command request for the given command.
Uses rippled's native WS format (``{"command": ...}``) with flat
parameters, NOT the JSON-RPC format (``{"method": ..., "params": [...]}``).
Args:
command: The rippled RPC command name.
Returns:
A dict representing the native WebSocket request body.
"""
req: dict[str, Any] = {"command": command}
if command in ("server_info", "fee"):
pass # No params needed.
elif command == "account_info":
req["account"] = GENESIS_ACCOUNT
elif command == "account_lines":
req["account"] = GENESIS_ACCOUNT
elif command == "account_objects":
req["account"] = GENESIS_ACCOUNT
req["limit"] = 10
elif command == "ledger":
req["ledger_index"] = "validated"
elif command == "ledger_data":
req["ledger_index"] = "validated"
req["limit"] = 5
elif command == "tx":
# Use a dummy hash — returns "txnNotFound" error but still exercises
# the full RPC span pipeline (rpc.request -> rpc.process -> rpc.command.tx).
req["transaction"] = "0" * 64
req["binary"] = False
elif command == "account_tx":
req["account"] = GENESIS_ACCOUNT
req["ledger_index_min"] = -1
req["ledger_index_max"] = -1
req["limit"] = 5
elif command == "book_offers":
req["taker_pays"] = {"currency": "XRP"}
req["taker_gets"] = {
"currency": "USD",
"issuer": GENESIS_ACCOUNT,
}
req["limit"] = 5
elif command == "amm_info":
# AMM may not exist — the span is still created on the server side.
req["asset"] = {"currency": "XRP"}
req["asset2"] = {
"currency": "USD",
"issuer": GENESIS_ACCOUNT,
}
return req
def choose_command(weights: dict[str, int]) -> str:
"""Select a random RPC command based on configured weights.
Args:
weights: Mapping of command name to relative weight.
Returns:
A command name string.
"""
commands = list(weights.keys())
w = [weights[c] for c in commands]
return random.choices(commands, weights=w, k=1)[0]
# ---------------------------------------------------------------------------
# WebSocket RPC client
# ---------------------------------------------------------------------------
async def send_rpc(
ws: websockets.WebSocketClientProtocol,
command: str,
stats: LoadStats,
inject_traceparent: bool = True,
) -> None:
"""Send a single RPC request over WebSocket and record the result.
Args:
ws: Open WebSocket connection.
command: RPC command name.
stats: LoadStats instance to record results.
inject_traceparent: If True, add a W3C traceparent header field
to the request for context propagation testing.
"""
request = build_rpc_request(command)
# Inject W3C traceparent for context propagation testing.
# The rippled WebSocket handler extracts this from the JSON body
# when present (Phase 2 context propagation).
if inject_traceparent:
trace_id = uuid.uuid4().hex
span_id = uuid.uuid4().hex[:16]
request["traceparent"] = f"00-{trace_id}-{span_id}-01"
t0 = time.monotonic()
try:
await ws.send(json.dumps(request))
raw = await asyncio.wait_for(ws.recv(), timeout=10.0)
latency = time.monotonic() - t0
response = json.loads(raw)
# Native WS responses have {"status": "success", "result": {...}}
# or {"status": "error", "error": "...", "error_message": "..."}.
success = response.get("status") == "success"
stats.record(command, latency, success)
    except (
        asyncio.TimeoutError,
        json.JSONDecodeError,
        websockets.exceptions.WebSocketException,
    ) as exc:
latency = time.monotonic() - t0
stats.record(command, latency, False)
logger.debug("RPC %s failed: %s", command, exc)
async def run_load(
endpoints: list[str],
rate: float,
duration: float,
weights: dict[str, int],
inject_traceparent: bool,
) -> LoadStats:
"""Run the RPC load generator against the given endpoints.
Distributes requests round-robin across endpoints at the specified
rate (requests per second) for the given duration.
Args:
endpoints: List of WebSocket URLs (ws://host:port).
rate: Target requests per second.
duration: Total run time in seconds.
weights: Command distribution weights.
inject_traceparent: Whether to inject W3C traceparent headers.
Returns:
LoadStats with aggregated results.
"""
stats = LoadStats()
interval = 1.0 / rate if rate > 0 else 0.1
# Open persistent connections to all endpoints.
connections: list[websockets.WebSocketClientProtocol] = []
for ep in endpoints:
try:
ws = await websockets.connect(ep, ping_interval=20, ping_timeout=10)
connections.append(ws)
logger.info("Connected to %s", ep)
except Exception as exc:
logger.error("Failed to connect to %s: %s", ep, exc)
if not connections:
logger.error("No connections established. Aborting.")
return stats
logger.info(
"Starting load: rate=%s RPS, duration=%ss, endpoints=%d",
rate,
duration,
len(connections),
)
start = time.monotonic()
    conn_idx = 0
    background_tasks: set[asyncio.Task] = set()
    try:
        while (time.monotonic() - start) < duration:
            command = choose_command(weights)
            ws = connections[conn_idx % len(connections)]
            conn_idx += 1
            # Pace submissions via sleep; keep a strong reference to each
            # task so it is not garbage-collected before it completes.
            task = asyncio.create_task(
                send_rpc(ws, command, stats, inject_traceparent)
            )
            background_tasks.add(task)
            task.add_done_callback(background_tasks.discard)
            await asyncio.sleep(interval)
# Periodic progress log.
elapsed = time.monotonic() - start
if stats.total_sent % 100 == 0 and stats.total_sent > 0:
actual_rps = stats.total_sent / elapsed if elapsed > 0 else 0
logger.info(
"Progress: %d sent, %d errors, %.1f RPS (%.0fs elapsed)",
stats.total_sent,
stats.total_errors,
actual_rps,
elapsed,
)
except asyncio.CancelledError:
logger.info("Load generation cancelled.")
finally:
# Allow in-flight requests to complete.
await asyncio.sleep(2)
for ws in connections:
await ws.close()
elapsed = time.monotonic() - start
logger.info(
"Load complete: %d sent, %d success, %d errors in %.1fs (%.1f RPS)",
stats.total_sent,
stats.total_success,
stats.total_errors,
elapsed,
stats.total_sent / elapsed if elapsed > 0 else 0,
)
return stats
# ---------------------------------------------------------------------------
# CLI entry point
# ---------------------------------------------------------------------------
def parse_args() -> argparse.Namespace:
"""Parse command-line arguments."""
parser = argparse.ArgumentParser(
description="RPC Load Generator for rippled telemetry validation",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog="""
Examples:
# Basic usage (50 RPS for 2 minutes):
python3 rpc_load_generator.py --endpoints ws://localhost:6006 --rate 50 --duration 120
# Multiple endpoints with custom weights:
python3 rpc_load_generator.py \\
--endpoints ws://localhost:6006 ws://localhost:6007 \\
--rate 100 --duration 300 \\
--weights '{"server_info": 80, "account_info": 20}'
""",
)
parser.add_argument(
"--endpoints",
nargs="+",
default=["ws://localhost:6006"],
help="WebSocket endpoints (default: ws://localhost:6006)",
)
parser.add_argument(
"--rate",
type=float,
default=50.0,
help="Target requests per second (default: 50)",
)
parser.add_argument(
"--duration",
type=float,
default=120.0,
help="Run duration in seconds (default: 120)",
)
parser.add_argument(
"--weights",
type=str,
default=None,
help="JSON string of command weights (overrides defaults)",
)
parser.add_argument(
"--no-traceparent",
action="store_true",
help="Disable W3C traceparent injection",
)
parser.add_argument(
"--output",
type=str,
default=None,
help="Write JSON summary to this file path",
)
parser.add_argument(
"--verbose",
action="store_true",
help="Enable debug logging",
)
return parser.parse_args()
def main() -> None:
"""Main entry point for the RPC load generator."""
args = parse_args()
logging.basicConfig(
level=logging.DEBUG if args.verbose else logging.INFO,
format="%(asctime)s [%(name)s] %(levelname)s %(message)s",
)
# Parse custom weights if provided.
weights = DEFAULT_WEIGHTS.copy()
if args.weights:
try:
custom = json.loads(args.weights)
weights = {k: int(v) for k, v in custom.items()}
logger.info("Using custom weights: %s", weights)
except (json.JSONDecodeError, ValueError) as exc:
logger.error("Invalid --weights JSON: %s", exc)
sys.exit(1)
# Run the load generator.
stats = asyncio.run(
run_load(
endpoints=args.endpoints,
rate=args.rate,
duration=args.duration,
weights=weights,
inject_traceparent=not args.no_traceparent,
)
)
summary = stats.summary()
print(json.dumps(summary, indent=2))
if args.output:
with open(args.output, "w") as f:
json.dump(summary, f, indent=2)
logger.info("Summary written to %s", args.output)
# Exit with error if error rate exceeds 50%.
if summary["error_rate_pct"] > 50:
logger.error("High error rate: %.1f%%", summary["error_rate_pct"])
sys.exit(1)
if __name__ == "__main__":
main()

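The W3C traceparent injected by `send_rpc` above can be sketched standalone; this is a minimal illustration (hypothetical `make_traceparent` helper, not part of the script):

```python
import re
import uuid

def make_traceparent() -> str:
    # W3C trace-context format: version "00", 16-byte trace-id,
    # 8-byte parent-id, flags "01" (sampled), all lowercase hex.
    trace_id = uuid.uuid4().hex        # 32 lowercase hex chars
    span_id = uuid.uuid4().hex[:16]    # 16 lowercase hex chars
    return f"00-{trace_id}-{span_id}-01"

assert re.fullmatch(r"00-[0-9a-f]{32}-[0-9a-f]{16}-01", make_traceparent())
```

Because uuid4 output is already lowercase hex, no further normalization is needed before placing the value in the request body.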

@@ -0,0 +1,395 @@
#!/usr/bin/env bash
# run-full-validation.sh — Orchestrates the full telemetry validation pipeline.
#
# Sequence:
# 1. Start the observability stack (OTel Collector, Jaeger, Tempo, Prometheus, Loki, Grafana)
# 2. Start a multi-node rippled cluster with full telemetry enabled
# 3. Wait for consensus
# 4. Run workload orchestrator (RPC load, TX submission, propagation wait)
# 5. Run the telemetry validation suite
# 6. (Optional) Run the performance benchmark
#
# Usage:
# ./run-full-validation.sh --xrpld /path/to/xrpld
# ./run-full-validation.sh --xrpld /path/to/xrpld --with-benchmark
# ./run-full-validation.sh --cleanup
#
# Exit codes:
# 0 — All validation checks passed
# 1 — One or more validation checks failed
# 2 — Infrastructure error (cluster/stack failed to start)
set -euo pipefail
# ---------------------------------------------------------------------------
# Colored output helpers
# ---------------------------------------------------------------------------
log() { printf "\033[1;34m[VALIDATE]\033[0m %s\n" "$*"; }
ok() { printf "\033[1;32m[VALIDATE]\033[0m %s\n" "$*"; }
warn() { printf "\033[1;33m[VALIDATE]\033[0m %s\n" "$*"; }
fail() { printf "\033[1;31m[VALIDATE]\033[0m %s\n" "$*"; }
die() { printf "\033[1;31m[VALIDATE]\033[0m %s\n" "$*" >&2; exit 2; }
# ---------------------------------------------------------------------------
# Configuration
# ---------------------------------------------------------------------------
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
TELEMETRY_DIR="$(cd "$SCRIPT_DIR/.." && pwd)"
REPO_ROOT="$(cd "$TELEMETRY_DIR/../.." && pwd)"
COMPOSE_FILE="$TELEMETRY_DIR/docker-compose.workload.yaml"
WORKDIR="/tmp/xrpld-validation"
XRPLD="${XRPLD:-$REPO_ROOT/.build/xrpld}"
NUM_NODES=5
RPC_PORT_BASE=5005
WS_PORT_BASE=6006
PEER_PORT_BASE=51235
RPC_RATE=50
RPC_DURATION=120
TX_TPS=5
TX_DURATION=120
WITH_BENCHMARK=false
SKIP_LOKI=false
WORKLOAD_PROFILE="full-validation"
REPORT_DIR="$WORKDIR/reports"
GENESIS_ACCOUNT="rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh"
GENESIS_SEED="snoPBrXtMeMyMHUVTgbuqAfg1SUTb"
# ---------------------------------------------------------------------------
# Argument parsing
# ---------------------------------------------------------------------------
usage() {
echo "Usage: $0 [OPTIONS]"
echo ""
echo "Options:"
echo " --xrpld PATH Path to xrpld binary"
echo " --nodes NUM Number of validator nodes (default: 5)"
echo " --rpc-rate RPS RPC load rate (default: 50)"
echo " --rpc-duration SECS RPC load duration (default: 120)"
echo " --tx-tps TPS Transaction submit rate (default: 5)"
echo " --tx-duration SECS Transaction submit duration (default: 120)"
echo " --profile NAME Workload profile (default: full-validation)"
echo " --with-benchmark Also run performance benchmarks"
echo " --skip-loki Skip Loki log-trace correlation checks"
echo " --cleanup Tear down everything and exit"
echo " -h, --help Show this help"
exit 0
}
while [ $# -gt 0 ]; do
case "$1" in
--xrpld) XRPLD="$2"; shift 2 ;;
--nodes) NUM_NODES="$2"; shift 2 ;;
--rpc-rate) RPC_RATE="$2"; shift 2 ;;
--rpc-duration) RPC_DURATION="$2"; shift 2 ;;
--tx-tps) TX_TPS="$2"; shift 2 ;;
--tx-duration) TX_DURATION="$2"; shift 2 ;;
--profile) WORKLOAD_PROFILE="$2"; shift 2 ;;
--with-benchmark) WITH_BENCHMARK=true; shift ;;
--skip-loki) SKIP_LOKI=true; shift ;;
--cleanup) # Cleanup mode
log "Cleaning up..."
pkill -f "$WORKDIR" 2>/dev/null || true
docker compose -f "$COMPOSE_FILE" down 2>/dev/null || true
rm -rf "$WORKDIR"
ok "Cleanup complete."
exit 0
;;
-h|--help) usage ;;
*) die "Unknown option: $1" ;;
esac
done
# ---------------------------------------------------------------------------
# Prerequisites
# ---------------------------------------------------------------------------
log "Checking prerequisites..."
[ -x "$XRPLD" ] || die "xrpld binary not found: $XRPLD"
command -v docker >/dev/null 2>&1 || die "docker not found"
docker compose version >/dev/null 2>&1 || die "docker compose (v2) not found"
command -v python3 >/dev/null 2>&1 || die "python3 not found"
command -v curl >/dev/null 2>&1 || die "curl not found"
command -v jq >/dev/null 2>&1 || die "jq not found"
[ -f "$COMPOSE_FILE" ] || die "docker-compose.workload.yaml not found"
# Install Python dependencies.
log "Installing Python dependencies..."
pip3 install -q -r "$SCRIPT_DIR/requirements.txt" 2>/dev/null || \
pip install -q -r "$SCRIPT_DIR/requirements.txt" 2>/dev/null || \
warn "Could not install Python dependencies — they may already be present"
ok "Prerequisites verified."
# ---------------------------------------------------------------------------
# Cleanup previous run
# ---------------------------------------------------------------------------
log "Cleaning up previous run..."
pkill -f "$WORKDIR" 2>/dev/null || true
sleep 2
rm -rf "$WORKDIR"
mkdir -p "$WORKDIR" "$REPORT_DIR"
# ---------------------------------------------------------------------------
# Step 1: Start observability stack
# ---------------------------------------------------------------------------
log "Step 1: Starting observability stack..."
docker compose -f "$COMPOSE_FILE" up -d
log "Waiting for OTel Collector..."
for attempt in $(seq 1 30); do
status=$(curl -so /dev/null -w '%{http_code}' http://localhost:4318/ 2>/dev/null || echo 000)
if [ "$status" != "000" ]; then
ok "OTel Collector ready (attempt $attempt)"
break
fi
[ "$attempt" -eq 30 ] && die "OTel Collector not ready after 30s"
sleep 1
done
log "Waiting for Jaeger..."
for attempt in $(seq 1 30); do
if curl -sf "http://localhost:16686/" >/dev/null 2>&1; then
ok "Jaeger ready (attempt $attempt)"
break
fi
[ "$attempt" -eq 30 ] && die "Jaeger not ready after 30s"
sleep 1
done
log "Waiting for Prometheus..."
for attempt in $(seq 1 30); do
if curl -sf "http://localhost:9090/-/healthy" >/dev/null 2>&1; then
ok "Prometheus ready (attempt $attempt)"
break
fi
[ "$attempt" -eq 30 ] && die "Prometheus not ready after 30s"
sleep 1
done
# ---------------------------------------------------------------------------
# Step 2: Generate validator keys and start cluster
# ---------------------------------------------------------------------------
log "Step 2: Starting $NUM_NODES-node validator cluster..."
bash "$SCRIPT_DIR/generate-validator-keys.sh" "$XRPLD" "$NUM_NODES" "$WORKDIR"
for i in $(seq 1 "$NUM_NODES"); do
NODE_DIR="$WORKDIR/node$i"
mkdir -p "$NODE_DIR/nudb" "$NODE_DIR/db"
RPC_PORT=$((RPC_PORT_BASE + i - 1))
WS_PORT=$((WS_PORT_BASE + i - 1))
PEER_PORT=$((PEER_PORT_BASE + i - 1))
SEED=$(jq -r ".[$((i-1))].seed" "$WORKDIR/validator-keys.json")
    # Build the peer address list for this node's [ips] stanza.
IPS_FIXED=""
for j in $(seq 1 "$NUM_NODES"); do
if [ "$j" -ne "$i" ]; then
IPS_FIXED="${IPS_FIXED}127.0.0.1 $((PEER_PORT_BASE + j - 1))
"
fi
done
cat > "$NODE_DIR/xrpld.cfg" <<EOCFG
[server]
port_rpc
port_ws
port_peer
[port_rpc]
port = $RPC_PORT
ip = 127.0.0.1
admin = 127.0.0.1
protocol = http
[port_ws]
port = $WS_PORT
ip = 127.0.0.1
admin = 127.0.0.1
protocol = ws
[port_peer]
port = $PEER_PORT
ip = 0.0.0.0
protocol = peer
[node_db]
type=NuDB
path=$NODE_DIR/nudb
online_delete=256
[database_path]
$NODE_DIR/db
[debug_logfile]
$NODE_DIR/debug.log
[validation_seed]
$SEED
[validators_file]
$WORKDIR/validators.txt
[ips]
${IPS_FIXED}
[telemetry]
enabled=1
service_instance_id=validator-${i}
endpoint=http://localhost:4318/v1/traces
exporter=otlp_http
sampling_ratio=1.0
batch_size=512
batch_delay_ms=2000
max_queue_size=2048
trace_rpc=1
trace_transactions=1
trace_consensus=1
trace_peer=1
trace_ledger=1
[insight]
server=statsd
address=127.0.0.1:8125
prefix=rippled
[rpc_startup]
{ "command": "log_level", "severity": "warning" }
[signing_support]
true
[ssl_verify]
0
EOCFG
"$XRPLD" --conf "$NODE_DIR/xrpld.cfg" --start > "$NODE_DIR/stdout.log" 2>&1 &
echo $! > "$NODE_DIR/xrpld.pid"
log " Node $i: RPC=$RPC_PORT WS=$WS_PORT Peer=$PEER_PORT PID=$!"
done
# ---------------------------------------------------------------------------
# Step 3: Wait for consensus
# ---------------------------------------------------------------------------
log "Step 3: Waiting for consensus..."
for attempt in $(seq 1 120); do
ready=0
for i in $(seq 1 "$NUM_NODES"); do
port=$((RPC_PORT_BASE + i - 1))
state=$(curl -sf "http://localhost:$port" \
-d '{"method":"server_info"}' 2>/dev/null \
| jq -r '.result.info.server_state' 2>/dev/null || echo "")
if [ "$state" = "proposing" ]; then
ready=$((ready + 1))
fi
done
if [ "$ready" -ge "$NUM_NODES" ]; then
ok "All $NUM_NODES nodes proposing (attempt $attempt)"
break
fi
if [ "$attempt" -eq 120 ]; then
warn "Consensus timeout — $ready/$NUM_NODES nodes ready"
fi
printf "\r %d/%d nodes proposing..." "$ready" "$NUM_NODES"
sleep 1
done
echo ""
# Wait for first validated ledger.
log "Waiting for validated ledger..."
for attempt in $(seq 1 60); do
val_seq=$(curl -sf "http://localhost:$RPC_PORT_BASE" \
-d '{"method":"server_info"}' 2>/dev/null \
| jq -r '.result.info.validated_ledger.seq // 0' 2>/dev/null || echo 0)
if [ "$val_seq" -gt 2 ] 2>/dev/null; then
ok "Validated ledger: seq $val_seq"
break
fi
[ "$attempt" -eq 60 ] && warn "No validated ledger after 60s"
sleep 1
done
# ---------------------------------------------------------------------------
# Step 4: Run workload orchestrator
# ---------------------------------------------------------------------------
log "Step 4: Running workload orchestrator (profile: $WORKLOAD_PROFILE)..."
WS_ENDPOINTS=""
for i in $(seq 1 "$NUM_NODES"); do
WS_ENDPOINTS="$WS_ENDPOINTS ws://localhost:$((WS_PORT_BASE + i - 1))"
done
python3 "$SCRIPT_DIR/workload_orchestrator.py" \
--profile "$WORKLOAD_PROFILE" \
--endpoints $WS_ENDPOINTS \
--report "$REPORT_DIR/workload-report.json" \
--report-dir "$REPORT_DIR" || \
warn "Workload orchestrator returned non-zero exit"
ok "Workload orchestration complete."
# ---------------------------------------------------------------------------
# Step 5: Run telemetry validation suite
# ---------------------------------------------------------------------------
log "Step 5: Running telemetry validation suite..."
VALIDATION_ARGS="--report $REPORT_DIR/validation-report.json"
if [ "$SKIP_LOKI" = true ]; then
VALIDATION_ARGS="$VALIDATION_ARGS --skip-loki"
fi
VALIDATION_EXIT=0
python3 "$SCRIPT_DIR/validate_telemetry.py" $VALIDATION_ARGS || VALIDATION_EXIT=$?
if [ "$VALIDATION_EXIT" -eq 0 ]; then
ok "All telemetry validation checks passed!"
else
fail "Some telemetry validation checks failed (exit $VALIDATION_EXIT)"
fi
# ---------------------------------------------------------------------------
# Step 6: (Optional) Run benchmark
# ---------------------------------------------------------------------------
if [ "$WITH_BENCHMARK" = true ]; then
log "Step 6: Running performance benchmark..."
bash "$SCRIPT_DIR/benchmark.sh" \
--xrpld "$XRPLD" \
--duration 120 \
--nodes 3 \
--output "$REPORT_DIR" || \
warn "Benchmark returned non-zero exit"
fi
# ---------------------------------------------------------------------------
# Summary
# ---------------------------------------------------------------------------
echo ""
echo "==========================================================="
echo " FULL VALIDATION RESULTS"
echo "==========================================================="
echo ""
echo " Reports directory: $REPORT_DIR"
echo ""
ls -la "$REPORT_DIR/" 2>/dev/null || true
echo ""
echo " Observability stack is running:"
echo " Jaeger UI: http://localhost:16686"
echo " Grafana: http://localhost:3000"
echo " Prometheus: http://localhost:9090"
echo ""
echo " xrpld nodes ($NUM_NODES) are running:"
for i in $(seq 1 "$NUM_NODES"); do
rpc=$((RPC_PORT_BASE + i - 1))
ws=$((WS_PORT_BASE + i - 1))
pid=$(cat "$WORKDIR/node$i/xrpld.pid" 2>/dev/null || echo 'unknown')
echo " Node $i: RPC=$rpc WS=$ws PID=$pid"
done
echo ""
echo " To tear down:"
echo " $0 --cleanup"
echo ""
echo "==========================================================="
exit "$VALIDATION_EXIT"
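The per-node port layout used throughout the script (base port plus node index minus one) can be sketched as a small helper; this is an illustrative snippet under the script's defaults, not part of the pipeline:

```python
RPC_PORT_BASE = 5005
WS_PORT_BASE = 6006
PEER_PORT_BASE = 51235

def node_ports(i: int) -> dict[str, int]:
    # 1-based node index, matching `seq 1 "$NUM_NODES"` in the script.
    return {
        "rpc": RPC_PORT_BASE + i - 1,
        "ws": WS_PORT_BASE + i - 1,
        "peer": PEER_PORT_BASE + i - 1,
    }

assert node_ports(1) == {"rpc": 5005, "ws": 6006, "peer": 51235}
```

So node 5 listens on WS 6010 and peer 51239, which is why the default cluster never collides with the observability stack's ports (3000, 9090, 16686).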


@@ -0,0 +1,42 @@
{
"genesis": {
"account": "rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh",
"seed": "snoPBrXtMeMyMHUVTgbuqAfg1SUTb",
"description": "Genesis account with all XRP. Used to fund test accounts."
},
"test_accounts": [
{
"name": "alice",
"description": "Primary sender for Payment and OfferCreate transactions."
},
{
"name": "bob",
"description": "Primary receiver for Payment transactions."
},
{
"name": "carol",
"description": "TrustSet and issued currency counterparty."
},
{
"name": "dave",
"description": "NFToken operations (mint, offer, accept)."
},
{
"name": "eve",
"description": "Escrow operations (create, finish)."
},
{
"name": "frank",
"description": "AMM pool operations (create, deposit, withdraw)."
},
{
"name": "grace",
"description": "Additional sender for parallel transaction submission."
},
{
"name": "heidi",
"description": "Additional receiver for payment diversity."
}
],
"note": "Test account keypairs are generated dynamically at runtime via wallet_propose RPC. This file defines the logical roles. Actual keys are stored in the workdir during execution."
}
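A minimal shape check for a roles file like the one above might look like this (trimmed literal and hypothetical validation logic, since real keypairs are generated at runtime via wallet_propose):

```python
import json

# Trimmed copy of the roles file for illustration only.
raw = """
{
  "genesis": {"account": "rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh",
              "seed": "snoPBrXtMeMyMHUVTgbuqAfg1SUTb"},
  "test_accounts": [
    {"name": "alice", "description": "Primary sender."},
    {"name": "bob", "description": "Primary receiver."}
  ]
}
"""
doc = json.loads(raw)
assert {"genesis", "test_accounts"} <= doc.keys()
names = [a["name"] for a in doc["test_accounts"]]
assert len(names) == len(set(names))  # role names must be unique
```

Keeping keys out of this file means it can be committed safely; only the logical role names are fixed across runs.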


@@ -0,0 +1,821 @@
#!/usr/bin/env python3
"""Transaction Submitter for rippled telemetry validation.
Generates diverse transaction types against a rippled cluster to exercise
the full span and metric surface: tx.process, tx.apply, ledger.build,
consensus.*, and all associated attributes.
Pre-funds test accounts from the genesis account, then submits a
configurable mix of transaction types at a target TPS.
Supported transaction types:
- Payment (XRP and issued currencies)
- OfferCreate / OfferCancel (DEX activity)
- TrustSet (trust line creation)
- NFTokenMint / NFTokenCreateOffer / NFTokenAcceptOffer
- EscrowCreate / EscrowFinish
- AMMCreate / AMMDeposit / AMMWithdraw (if amendment enabled)
Usage:
python3 tx_submitter.py --endpoint ws://localhost:6006 --tps 5 --duration 120
# Custom transaction mix:
python3 tx_submitter.py --endpoint ws://localhost:6006 \\
--weights '{"Payment":50,"OfferCreate":20,"TrustSet":10,"NFTokenMint":10,"EscrowCreate":10}'
"""
import argparse
import asyncio
import json
import logging
import random
import sys
import time
from dataclasses import dataclass, field
from typing import Any
import websockets
logger = logging.getLogger("tx_submitter")
# ---------------------------------------------------------------------------
# Constants
# ---------------------------------------------------------------------------
GENESIS_ACCOUNT = "rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh"
GENESIS_SEED = "snoPBrXtMeMyMHUVTgbuqAfg1SUTb"
# Amount to fund each test account (100,000 XRP in drops).
FUND_AMOUNT = "100000000000"
# Default transaction mix weights (relative).
DEFAULT_TX_WEIGHTS: dict[str, int] = {
"Payment": 40,
"OfferCreate": 15,
"OfferCancel": 5,
"TrustSet": 10,
"NFTokenMint": 10,
"NFTokenCreateOffer": 5,
"EscrowCreate": 5,
"EscrowFinish": 5,
"AMMCreate": 3,
"AMMDeposit": 2,
}
# Number of test accounts to create.
NUM_TEST_ACCOUNTS = 8
# ---------------------------------------------------------------------------
# Data classes
# ---------------------------------------------------------------------------
@dataclass
class Account:
"""Represents a funded XRPL test account.
Attributes:
name: Human-readable name (e.g., "alice").
account: Classic address (rXXX...).
seed: Secret seed for signing.
sequence: Next available sequence number.
"""
name: str
account: str
seed: str
sequence: int = 0
@dataclass
class TxStats:
"""Tracks transaction submission results.
Attributes:
total_submitted: Total transactions sent to the network.
total_success: Transactions that returned tesSUCCESS or terQUEUED.
total_errors: Transactions that returned an error engine_result.
by_type: Per-transaction-type count of submissions.
errors_by_type: Per-transaction-type count of errors.
"""
total_submitted: int = 0
total_success: int = 0
total_errors: int = 0
by_type: dict[str, int] = field(default_factory=dict)
errors_by_type: dict[str, int] = field(default_factory=dict)
def record(self, tx_type: str, success: bool) -> None:
"""Record the result of a transaction submission."""
self.total_submitted += 1
self.by_type[tx_type] = self.by_type.get(tx_type, 0) + 1
if success:
self.total_success += 1
else:
self.total_errors += 1
self.errors_by_type[tx_type] = self.errors_by_type.get(tx_type, 0) + 1
def summary(self) -> dict[str, Any]:
"""Return a summary dict suitable for JSON serialization."""
return {
"total_submitted": self.total_submitted,
"total_success": self.total_success,
"total_errors": self.total_errors,
"success_rate_pct": (
round(self.total_success / self.total_submitted * 100, 2)
if self.total_submitted
else 0
),
"by_type": self.by_type,
"errors_by_type": self.errors_by_type,
}
# ---------------------------------------------------------------------------
# WebSocket RPC helpers
# ---------------------------------------------------------------------------
async def ws_request(
ws: websockets.WebSocketClientProtocol,
command: str,
params: dict[str, Any] | None = None,
) -> dict[str, Any]:
"""Send a native WebSocket command and return the result payload.
Uses rippled's native WebSocket format (``command`` key with flat
parameters). The response has ``status`` at the top level and the
actual data payload inside ``result``. This helper unwraps the
``result`` dict so callers can read fields directly.
Args:
ws: Open WebSocket connection.
command: RPC command name (e.g., ``account_info``, ``submit``).
params: Optional flat parameter dict merged into the request.
Returns:
The inner ``result`` dict from the response.
    Raises:
        asyncio.TimeoutError: If no response arrives within 30 seconds.
"""
request: dict[str, Any] = {"command": command}
if params:
request.update(params)
await ws.send(json.dumps(request))
raw = await asyncio.wait_for(ws.recv(), timeout=30.0)
resp = json.loads(raw)
# WS command format: {"status": "success", "result": {...}, "type": "response"}
# On error: {"status": "error", "error": "...", "error_message": "..."}
if resp.get("status") == "error":
logger.warning(
"%s error: %s%s",
command,
resp.get("error", "unknown"),
resp.get("error_message", ""),
)
return resp.get("result", resp)
async def create_account(ws: websockets.WebSocketClientProtocol, name: str) -> Account:
"""Create a new account via wallet_propose RPC.
Args:
ws: Open WebSocket connection.
name: Human-readable name for the account.
Returns:
An Account instance with the generated keypair.
"""
result = await ws_request(ws, "wallet_propose")
if "account_id" not in result:
raise RuntimeError(
f"wallet_propose failed: {json.dumps(result, indent=None)[:300]}"
)
return Account(
name=name,
account=result["account_id"],
seed=result["master_seed"],
)
async def fund_account(
ws: websockets.WebSocketClientProtocol,
dest: Account,
genesis_seq: int,
) -> tuple[bool, int]:
"""Fund a test account from genesis.
Args:
ws: Open WebSocket connection.
dest: Destination account to fund.
genesis_seq: Current genesis account sequence number.
Returns:
Tuple of (success: bool, next_sequence: int).
"""
resp = await ws_request(
ws,
"submit",
{
"secret": GENESIS_SEED,
"tx_json": {
"TransactionType": "Payment",
"Account": GENESIS_ACCOUNT,
"Destination": dest.account,
"Amount": FUND_AMOUNT,
"Sequence": genesis_seq,
},
},
)
engine_result = resp.get("engine_result", "unknown")
success = engine_result in ("tesSUCCESS", "terQUEUED")
if not success:
# Log the full response to help diagnose submit failures in CI.
logger.warning(
"Fund %s failed: engine_result=%s, full response: %s",
dest.name,
engine_result,
json.dumps(resp, indent=None)[:500],
)
return success, genesis_seq + 1
async def get_account_sequence(
ws: websockets.WebSocketClientProtocol, account: str
) -> int:
"""Get the current sequence number for an account.
Args:
ws: Open WebSocket connection.
account: Classic address.
Returns:
Current sequence number.
"""
resp = await ws_request(ws, "account_info", {"account": account})
if "account_data" not in resp:
# Log full response to diagnose WS API format issues.
logger.warning(
"account_info for %s: no account_data, full response: %s",
account[:12],
json.dumps(resp, indent=None)[:500],
)
return 0
return resp["account_data"].get("Sequence", 0)
# ---------------------------------------------------------------------------
# Transaction builders
# ---------------------------------------------------------------------------
def build_payment(sender: Account, receiver: Account) -> dict[str, Any]:
"""Build an XRP Payment transaction.
Args:
sender: Source account.
receiver: Destination account.
Returns:
Transaction JSON and signing secret.
"""
amount = str(random.randint(1000, 1000000)) # 0.001 - 1 XRP
return {
"secret": sender.seed,
"tx_json": {
"TransactionType": "Payment",
"Account": sender.account,
"Destination": receiver.account,
"Amount": amount,
"Sequence": sender.sequence,
},
}
def build_offer_create(sender: Account) -> dict[str, Any]:
"""Build an OfferCreate transaction (XRP/USD pair).
Args:
sender: Account placing the offer.
Returns:
Transaction JSON and signing secret.
"""
return {
"secret": sender.seed,
"tx_json": {
"TransactionType": "OfferCreate",
"Account": sender.account,
"TakerPays": str(random.randint(100000, 10000000)),
"TakerGets": {
"currency": "USD",
"issuer": GENESIS_ACCOUNT,
"value": str(round(random.uniform(0.1, 100.0), 2)),
},
"Sequence": sender.sequence,
},
}
def build_offer_cancel(sender: Account) -> dict[str, Any]:
"""Build an OfferCancel transaction.
Uses a non-existent offer sequence — will fail gracefully but still
exercises the tx.process span pipeline.
Args:
sender: Account cancelling the offer.
Returns:
Transaction JSON and signing secret.
"""
return {
"secret": sender.seed,
"tx_json": {
"TransactionType": "OfferCancel",
"Account": sender.account,
"OfferSequence": max(1, sender.sequence - 1),
"Sequence": sender.sequence,
},
}
def build_trust_set(sender: Account) -> dict[str, Any]:
"""Build a TrustSet transaction for a USD trust line.
Args:
sender: Account setting the trust line.
Returns:
Transaction JSON and signing secret.
"""
return {
"secret": sender.seed,
"tx_json": {
"TransactionType": "TrustSet",
"Account": sender.account,
"LimitAmount": {
"currency": "USD",
"issuer": GENESIS_ACCOUNT,
"value": "1000000",
},
"Sequence": sender.sequence,
},
}
def build_nftoken_mint(sender: Account) -> dict[str, Any]:
"""Build an NFTokenMint transaction.
Args:
sender: Account minting the NFT.
Returns:
Transaction JSON and signing secret.
"""
return {
"secret": sender.seed,
"tx_json": {
"TransactionType": "NFTokenMint",
"Account": sender.account,
"NFTokenTaxon": random.randint(0, 100),
"Flags": 8, # tfTransferable
"Sequence": sender.sequence,
},
}
def build_nftoken_create_offer(sender: Account) -> dict[str, Any]:
"""Build an NFTokenCreateOffer transaction.
Uses a dummy NFTokenID — will fail but exercises the span pipeline.
Args:
sender: Account creating the NFT offer.
Returns:
Transaction JSON and signing secret.
"""
return {
"secret": sender.seed,
"tx_json": {
"TransactionType": "NFTokenCreateOffer",
"Account": sender.account,
"NFTokenID": "0" * 64,
"Amount": str(random.randint(100000, 1000000)),
"Flags": 1, # tfSellNFToken
"Sequence": sender.sequence,
},
}
def build_escrow_create(sender: Account, receiver: Account) -> dict[str, Any]:
"""Build an EscrowCreate transaction.
Creates a time-based escrow that finishes 10 seconds from now.
Args:
sender: Account creating the escrow.
receiver: Destination account for escrow funds.
Returns:
Transaction JSON and signing secret.
"""
# Ripple epoch offset: 946684800 seconds from Unix epoch
ripple_time = int(time.time()) - 946684800
return {
"secret": sender.seed,
"tx_json": {
"TransactionType": "EscrowCreate",
"Account": sender.account,
"Destination": receiver.account,
"Amount": str(random.randint(100000, 1000000)),
"FinishAfter": ripple_time + 10,
"Sequence": sender.sequence,
},
}
def build_escrow_finish(sender: Account, owner: Account) -> dict[str, Any]:
"""Build an EscrowFinish transaction.
Uses a dummy offer sequence — will likely fail but exercises spans.
Args:
sender: Account finishing the escrow.
owner: Account that created the escrow.
Returns:
Transaction JSON and signing secret.
"""
return {
"secret": sender.seed,
"tx_json": {
"TransactionType": "EscrowFinish",
"Account": sender.account,
"Owner": owner.account,
"OfferSequence": max(1, owner.sequence - 2),
"Sequence": sender.sequence,
},
}
def build_amm_create(sender: Account) -> dict[str, Any]:
"""Build an AMMCreate transaction (XRP/USD pool).
Requires the AMM amendment to be enabled on the network.
Args:
sender: Account creating the AMM pool.
Returns:
Transaction JSON and signing secret.
"""
return {
"secret": sender.seed,
"tx_json": {
"TransactionType": "AMMCreate",
"Account": sender.account,
"Amount": str(random.randint(10000000, 100000000)),
"Amount2": {
"currency": "USD",
"issuer": GENESIS_ACCOUNT,
"value": str(round(random.uniform(10.0, 1000.0), 2)),
},
"TradingFee": 500, # 0.5%
"Sequence": sender.sequence,
},
}
def build_amm_deposit(sender: Account) -> dict[str, Any]:
"""Build an AMMDeposit transaction.
Args:
sender: Account depositing into the AMM pool.
Returns:
Transaction JSON and signing secret.
"""
return {
"secret": sender.seed,
"tx_json": {
"TransactionType": "AMMDeposit",
"Account": sender.account,
"Asset": {"currency": "XRP"},
"Asset2": {
"currency": "USD",
"issuer": GENESIS_ACCOUNT,
},
"Amount": str(random.randint(1000000, 10000000)),
"Flags": 0x00080000, # tfSingleAsset
"Sequence": sender.sequence,
},
}
# Transaction type -> builder function mapping.
# Each builder takes (accounts: list[Account]) and returns submit params.
TX_BUILDERS: dict[str, Any] = {
"Payment": lambda accts: build_payment(accts[0], accts[1]),
"OfferCreate": lambda accts: build_offer_create(accts[0]),
"OfferCancel": lambda accts: build_offer_cancel(accts[0]),
"TrustSet": lambda accts: build_trust_set(accts[2]),
"NFTokenMint": lambda accts: build_nftoken_mint(accts[3]),
"NFTokenCreateOffer": lambda accts: build_nftoken_create_offer(accts[3]),
"EscrowCreate": lambda accts: build_escrow_create(accts[4], accts[1]),
"EscrowFinish": lambda accts: build_escrow_finish(accts[4], accts[4]),
"AMMCreate": lambda accts: build_amm_create(accts[5]),
"AMMDeposit": lambda accts: build_amm_deposit(accts[5]),
}
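The dispatch pattern the submitter relies on, a weighted random choice over this mapping, can be sketched in isolation. Stub builders stand in for the real `build_*` functions:

```python
import random

# Stub builders standing in for the real build_* functions.
STUB_BUILDERS = {
    "Payment": lambda: {"TransactionType": "Payment"},
    "TrustSet": lambda: {"TransactionType": "TrustSet"},
}
STUB_WEIGHTS = {"Payment": 70, "TrustSet": 30}

def pick_and_build(rng: random.Random) -> dict:
    """Pick a transaction type by weight and invoke its builder."""
    types = list(STUB_WEIGHTS)
    tx_type = rng.choices(types, weights=[STUB_WEIGHTS[t] for t in types], k=1)[0]
    return STUB_BUILDERS[tx_type]()

tx = pick_and_build(random.Random(42))
assert tx["TransactionType"] in STUB_BUILDERS
```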
# ---------------------------------------------------------------------------
# Main submission loop
# ---------------------------------------------------------------------------
async def setup_accounts(
ws: websockets.WebSocketClientProtocol,
) -> list[Account]:
"""Create and fund test accounts from genesis.
Generates NUM_TEST_ACCOUNTS accounts via wallet_propose, then funds
each with FUND_AMOUNT XRP from genesis.
Args:
ws: Open WebSocket connection to a rippled node.
Returns:
List of funded Account instances.
"""
account_names = ["alice", "bob", "carol", "dave", "eve", "frank", "grace", "heidi"]
logger.info("Creating %d test accounts...", NUM_TEST_ACCOUNTS)
accounts: list[Account] = []
for name in account_names[:NUM_TEST_ACCOUNTS]:
acct = await create_account(ws, name)
accounts.append(acct)
logger.info(" Created %s: %s", name, acct.account)
# Get genesis sequence.
genesis_seq = await get_account_sequence(ws, GENESIS_ACCOUNT)
logger.info("Genesis sequence: %d", genesis_seq)
# Fund all accounts.
logger.info("Funding test accounts...")
for acct in accounts:
success, genesis_seq = await fund_account(ws, acct, genesis_seq)
if success:
logger.info(" Funded %s", acct.name)
else:
logger.warning(" Failed to fund %s", acct.name)
# Wait for funding transactions to be validated.
logger.info("Waiting 10s for funding transactions to validate...")
await asyncio.sleep(10)
# Refresh sequence numbers for all accounts.
for acct in accounts:
try:
acct.sequence = await get_account_sequence(ws, acct.account)
logger.info(" %s sequence: %d", acct.name, acct.sequence)
except Exception as exc:
logger.warning(" Failed to get sequence for %s: %s", acct.name, exc)
return accounts
async def submit_transaction(
ws: websockets.WebSocketClientProtocol,
tx_type: str,
accounts: list[Account],
stats: TxStats,
) -> None:
"""Submit a single transaction of the given type.
Selects the appropriate builder, constructs the transaction, submits
it via the submit RPC, and records the result.
Args:
ws: Open WebSocket connection.
tx_type: Transaction type name (e.g., "Payment").
accounts: List of funded test accounts.
stats: TxStats instance to record results.
"""
builder = TX_BUILDERS.get(tx_type)
if not builder:
logger.warning("Unknown transaction type: %s", tx_type)
return
try:
params = builder(accounts)
# Identify which account is the sender to bump its sequence.
sender_addr = params["tx_json"]["Account"]
sender = next((a for a in accounts if a.account == sender_addr), None)
resp = await ws_request(ws, "submit", params)
engine_result = resp.get("engine_result", "unknown")
success = engine_result in (
"tesSUCCESS",
"terQUEUED",
"tecUNFUNDED_OFFER",
"tecNO_DST_INSUF_XRP",
)
stats.record(tx_type, success)
if sender:
sender.sequence += 1
if not success:
logger.debug(
"%s result: %s (%s)",
tx_type,
engine_result,
resp.get("engine_result_message", ""),
)
except Exception as exc:
stats.record(tx_type, False)
logger.debug("%s error: %s", tx_type, exc)
async def run_submitter(
endpoint: str,
tps: float,
duration: float,
weights: dict[str, int],
) -> TxStats:
"""Run the transaction submitter against a single endpoint.
Args:
endpoint: WebSocket URL (ws://host:port).
tps: Target transactions per second.
duration: Total run time in seconds.
weights: Transaction type distribution weights.
Returns:
TxStats with aggregated results.
"""
stats = TxStats()
interval = 1.0 / tps if tps > 0 else 0.5
ws = await websockets.connect(endpoint, ping_interval=20, ping_timeout=10)
logger.info("Connected to %s", endpoint)
try:
# Setup test accounts.
accounts = await setup_accounts(ws)
if len(accounts) < 6:
logger.error("Need at least 6 funded accounts, got %d", len(accounts))
return stats
# Build weighted command list.
tx_types = list(weights.keys())
tx_weights = [weights[t] for t in tx_types]
logger.info(
"Starting TX submission: tps=%s, duration=%ss, types=%d",
tps,
duration,
len(tx_types),
)
start = time.monotonic()
while (time.monotonic() - start) < duration:
tx_type = random.choices(tx_types, weights=tx_weights, k=1)[0]
await submit_transaction(ws, tx_type, accounts, stats)
await asyncio.sleep(interval)
# Progress logging every 50 transactions.
if stats.total_submitted % 50 == 0 and stats.total_submitted > 0:
elapsed = time.monotonic() - start
actual_tps = stats.total_submitted / elapsed if elapsed > 0 else 0
logger.info(
"Progress: %d submitted, %d success, %d errors, "
"%.1f TPS (%.0fs elapsed)",
stats.total_submitted,
stats.total_success,
stats.total_errors,
actual_tps,
elapsed,
)
finally:
await ws.close()
elapsed = time.monotonic() - start
logger.info(
"Submission complete: %d submitted, %d success, %d errors "
"in %.1fs (%.1f TPS)",
stats.total_submitted,
stats.total_success,
stats.total_errors,
elapsed,
stats.total_submitted / elapsed if elapsed > 0 else 0,
)
return stats
# ---------------------------------------------------------------------------
# CLI entry point
# ---------------------------------------------------------------------------
def parse_args() -> argparse.Namespace:
"""Parse command-line arguments."""
parser = argparse.ArgumentParser(
description="Transaction Submitter for rippled telemetry validation",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog="""
Examples:
# Basic usage (5 TPS for 2 minutes):
python3 tx_submitter.py --endpoint ws://localhost:6006 --tps 5 --duration 120
# Custom transaction mix:
python3 tx_submitter.py --endpoint ws://localhost:6006 \\
--weights '{"Payment": 60, "OfferCreate": 20, "TrustSet": 20}'
""",
)
parser.add_argument(
"--endpoint",
type=str,
default="ws://localhost:6006",
help="WebSocket endpoint (default: ws://localhost:6006)",
)
parser.add_argument(
"--tps",
type=float,
default=5.0,
help="Target transactions per second (default: 5)",
)
parser.add_argument(
"--duration",
type=float,
default=120.0,
help="Run duration in seconds (default: 120)",
)
parser.add_argument(
"--weights",
type=str,
default=None,
help="JSON string of transaction type weights (overrides defaults)",
)
parser.add_argument(
"--output",
type=str,
default=None,
help="Write JSON summary to this file path",
)
parser.add_argument(
"--verbose",
action="store_true",
help="Enable debug logging",
)
return parser.parse_args()
def main() -> None:
"""Main entry point for the transaction submitter."""
args = parse_args()
logging.basicConfig(
level=logging.DEBUG if args.verbose else logging.INFO,
format="%(asctime)s [%(name)s] %(levelname)s %(message)s",
)
# Parse custom weights if provided.
weights = DEFAULT_TX_WEIGHTS.copy()
if args.weights:
try:
custom = json.loads(args.weights)
weights = {k: int(v) for k, v in custom.items()}
logger.info("Using custom weights: %s", weights)
except (json.JSONDecodeError, ValueError) as exc:
logger.error("Invalid --weights JSON: %s", exc)
sys.exit(1)
# Run the submitter.
stats = asyncio.run(
run_submitter(
endpoint=args.endpoint,
tps=args.tps,
duration=args.duration,
weights=weights,
)
)
summary = stats.summary()
print(json.dumps(summary, indent=2))
if args.output:
with open(args.output, "w") as f:
json.dump(summary, f, indent=2)
logger.info("Summary written to %s", args.output)
if __name__ == "__main__":
main()

File diff suppressed because it is too large


@@ -0,0 +1,98 @@
{
"profiles": {
"full-validation": {
"description": "Full 18-dashboard coverage with burst/idle/plateau patterns",
"phases": [
{
"name": "warmup",
"description": "Low load to populate baseline gauges and node health metrics",
"duration_sec": 30,
"rpc": {
"rate": 5,
"weights": { "server_info": 50, "fee": 30, "ledger": 20 }
},
"tx": null
},
{
"name": "steady-state",
"description": "Medium sustained load — plateau data for all dashboards",
"duration_sec": 60,
"rpc": { "rate": 30 },
"tx": { "tps": 3 }
},
{
"name": "rpc-burst",
"description": "Heavy RPC to saturate job queue and spike latency",
"duration_sec": 30,
"rpc": { "rate": 100 },
"tx": null
},
{
"name": "tx-flood",
"description": "High TX rate for fee escalation and TxQ pressure",
"duration_sec": 30,
"rpc": { "rate": 5, "weights": { "server_info": 50, "fee": 50 } },
"tx": {
"tps": 20,
"weights": { "Payment": 70, "OfferCreate": 20, "TrustSet": 10 }
}
},
{
"name": "mixed-peak",
"description": "Realistic peak load — consensus and ledger ops under stress",
"duration_sec": 60,
"rpc": { "rate": 50 },
"tx": { "tps": 10 }
},
{
"name": "cooldown",
"description": "Low load for recovery metrics and state transition data",
"duration_sec": 30,
"rpc": { "rate": 5, "weights": { "server_info": 80, "fee": 20 } },
"tx": null
}
],
"propagation_wait_sec": 60
},
"quick-smoke": {
"description": "Fast smoke test — minimal data for CI quick checks",
"phases": [
{
"name": "smoke",
"description": "Single phase covering all generator types",
"duration_sec": 30,
"rpc": { "rate": 20 },
"tx": { "tps": 3 }
}
],
"propagation_wait_sec": 30
},
"stress": {
"description": "Heavy sustained load for performance benchmarking",
"phases": [
{
"name": "ramp-up",
"description": "Gradually increasing load",
"duration_sec": 30,
"rpc": { "rate": 20 },
"tx": { "tps": 5 }
},
{
"name": "peak",
"description": "Maximum sustained load",
"duration_sec": 120,
"rpc": { "rate": 150 },
"tx": { "tps": 25 }
},
{
"name": "sustain",
"description": "Continued high load for stability check",
"duration_sec": 60,
"rpc": { "rate": 100 },
"tx": { "tps": 15 }
}
],
"propagation_wait_sec": 60
}
}
}
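A profile's wall-clock cost is just the sum of its phase durations plus the trailing propagation wait. A small sketch, assuming the structure shown above:

```python
def total_runtime_sec(profile: dict) -> int:
    """Sum phase durations plus the trailing propagation wait."""
    load = sum(p["duration_sec"] for p in profile["phases"])
    return load + profile.get("propagation_wait_sec", 60)

# The quick-smoke profile above: one 30s phase plus 30s propagation.
quick_smoke = {
    "phases": [{"name": "smoke", "duration_sec": 30}],
    "propagation_wait_sec": 30,
}
assert total_runtime_sec(quick_smoke) == 60
```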


@@ -0,0 +1,503 @@
#!/usr/bin/env python3
"""Workload Orchestrator for rippled telemetry validation.
Reads a named profile from workload-profiles.json and executes sequential
load phases, each with configurable RPC and TX parameters. Produces a
combined report with per-phase results.
Phases run sequentially. Within each phase, the RPC load generator and
transaction submitter run concurrently (if both are configured).
Orchestration Flow::

    workload-profiles.json
               |
               v
    workload_orchestrator.py
               |
    +----------+----------+--- ... ---+
    |  Phase 1 |  Phase 2 |  Phase N  |   (sequential)
    +----------+----------+--- ... ---+
               |
         +-----+------+
         |            |
    +----------+  +----------+
    | rpc_load |  | tx_sub-  |   (concurrent within phase)
    | _gen.py  |  | mitter   |
    +----------+  +----------+
         |            |
         v            v
      per-phase JSON reports
               |
               v
      combined-report.json
Usage:
python3 workload_orchestrator.py --profile full-validation
python3 workload_orchestrator.py --profile quick-smoke --endpoints ws://localhost:6006
python3 workload_orchestrator.py --profile stress --report /tmp/report.json
Profiles are defined in workload-profiles.json in the same directory.
"""
import argparse
import asyncio
import json
import logging
import sys
import time
from dataclasses import dataclass, field
from pathlib import Path
from typing import Any
logger = logging.getLogger("workload_orchestrator")
SCRIPT_DIR = Path(__file__).parent.resolve()
PROFILES_FILE = SCRIPT_DIR / "workload-profiles.json"
# ---------------------------------------------------------------------------
# Data classes
# ---------------------------------------------------------------------------
@dataclass
class PhaseResult:
"""Result of a single workload phase.
Attributes:
name: Phase name from the profile.
duration_sec: Configured duration.
actual_sec: Actual elapsed time.
rpc_summary: JSON summary from rpc_load_generator, or None.
tx_summary: JSON summary from tx_submitter, or None.
errors: List of error messages from subprocess failures.
"""
name: str
duration_sec: int
actual_sec: float = 0.0
rpc_summary: dict[str, Any] | None = None
tx_summary: dict[str, Any] | None = None
errors: list[str] = field(default_factory=list)
# ---------------------------------------------------------------------------
# Profile loading
# ---------------------------------------------------------------------------
def load_profile(profile_name: str) -> dict[str, Any]:
"""Load a named profile from workload-profiles.json.
Args:
profile_name: Key in the profiles dict (e.g., "full-validation").
Returns:
The profile dict with phases and propagation_wait_sec.
Raises:
SystemExit: If the profile file or name is not found.
"""
if not PROFILES_FILE.exists():
logger.error("Profiles file not found: %s", PROFILES_FILE)
sys.exit(2)
with open(PROFILES_FILE) as f:
data = json.load(f)
profiles = data.get("profiles", {})
if profile_name not in profiles:
available = ", ".join(profiles.keys())
logger.error("Profile '%s' not found. Available: %s", profile_name, available)
sys.exit(2)
profile = profiles[profile_name]
# Validate profile schema — fail fast on bad config.
phases = profile.get("phases", [])
if not isinstance(phases, list) or not phases:
logger.error("Profile '%s' has no valid phases", profile_name)
sys.exit(2)
for i, phase in enumerate(phases):
if not isinstance(phase.get("name"), str):
logger.error("Phase %d missing valid 'name'", i)
sys.exit(2)
if (
not isinstance(phase.get("duration_sec"), (int, float))
or phase["duration_sec"] <= 0
):
logger.error(
"Phase %d '%s' has invalid duration_sec",
i,
phase.get("name"),
)
sys.exit(2)
logger.info(
"Loaded profile '%s': %s (%d phases)",
profile_name,
profile.get("description", ""),
len(phases),
)
return profile
# ---------------------------------------------------------------------------
# Subprocess execution
# ---------------------------------------------------------------------------
async def run_subprocess(cmd: list[str], label: str) -> tuple[int, str, str]:
"""Run a subprocess and capture its stdout and stderr.
Args:
cmd: Command and arguments.
label: Human-readable label for logging.
Returns:
Tuple of (return_code, stdout_text, stderr_text).
"""
logger.debug("Starting %s: %s", label, " ".join(cmd))
proc = await asyncio.create_subprocess_exec(
*cmd,
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
)
stdout, stderr = await proc.communicate()
if proc.returncode != 0:
logger.warning(
"%s exited with code %d: %s",
label,
proc.returncode,
stderr.decode().strip()[-500:],
)
return proc.returncode, stdout.decode(), stderr.decode()
# ---------------------------------------------------------------------------
# Phase execution
# ---------------------------------------------------------------------------
def _collect_task_result(
label: str,
returncode: int,
stderr: str,
report_path: Path,
result: PhaseResult,
) -> None:
"""Process the result of a completed subprocess task.
Reads the JSON report file (if it exists) and records any errors.
Args:
label: "rpc" or "tx".
returncode: Subprocess exit code.
stderr: Captured stderr text.
report_path: Path to the JSON report file.
result: PhaseResult to update.
"""
if report_path.exists():
try:
with open(report_path) as f:
summary = json.load(f)
if label == "rpc":
result.rpc_summary = summary
elif label == "tx":
result.tx_summary = summary
except (json.JSONDecodeError, OSError) as exc:
logger.warning("Failed to parse %s report %s: %s", label, report_path, exc)
result.errors.append(f"Failed to parse {label} report: {exc}")
if returncode != 0:
snippet = stderr.strip()[-200:] if stderr else ""
result.errors.append(
f"{label.upper()} generator exited with {returncode}: {snippet}"
)
def _build_rpc_cmd(
endpoints: list[str], rpc_cfg: dict[str, Any], duration: int, output: Path
) -> list[str]:
"""Build the command list for the RPC load generator subprocess."""
cmd = [
sys.executable,
str(SCRIPT_DIR / "rpc_load_generator.py"),
"--endpoints",
*endpoints,
"--rate",
str(rpc_cfg.get("rate", 50)),
"--duration",
str(duration),
"--output",
str(output),
]
weights = rpc_cfg.get("weights")
if weights:
cmd.extend(["--weights", json.dumps(weights)])
return cmd
def _build_tx_cmd(
endpoint: str, tx_cfg: dict[str, Any], duration: int, output: Path
) -> list[str]:
"""Build the command list for the TX submitter subprocess."""
cmd = [
sys.executable,
str(SCRIPT_DIR / "tx_submitter.py"),
"--endpoint",
endpoint,
"--tps",
str(tx_cfg.get("tps", 5)),
"--duration",
str(duration),
"--output",
str(output),
]
weights = tx_cfg.get("weights")
if weights:
cmd.extend(["--weights", json.dumps(weights)])
return cmd
async def run_phase(
phase: dict[str, Any],
endpoints: list[str],
report_dir: Path,
phase_idx: int,
) -> PhaseResult:
"""Execute a single workload phase.
Launches rpc_load_generator.py and/or tx_submitter.py as subprocesses
based on the phase configuration. Both run concurrently if configured.
Args:
phase: Phase dict from the profile.
endpoints: List of WebSocket endpoint URLs.
report_dir: Directory for per-phase JSON reports.
phase_idx: Phase index (for file naming).
Returns:
PhaseResult with subprocess outputs.
"""
name = phase["name"]
duration = phase["duration_sec"]
result = PhaseResult(name=name, duration_sec=duration)
prefix = f"phase{phase_idx + 1}-{name}"
logger.info(
"=== Phase %d: %s (%ds) — %s ===",
phase_idx + 1,
name,
duration,
phase.get("description", ""),
)
tasks: list[tuple[str, Path, asyncio.Task]] = []
t0 = time.monotonic()
rpc_cfg = phase.get("rpc")
if rpc_cfg:
rpc_out = report_dir / f"{prefix}-rpc.json"
cmd = _build_rpc_cmd(endpoints, rpc_cfg, duration, rpc_out)
tasks.append(
("rpc", rpc_out, asyncio.create_task(run_subprocess(cmd, f"RPC [{name}]")))
)
tx_cfg = phase.get("tx")
if tx_cfg:
tx_out = report_dir / f"{prefix}-tx.json"
cmd = _build_tx_cmd(endpoints[0], tx_cfg, duration, tx_out)
tasks.append(
("tx", tx_out, asyncio.create_task(run_subprocess(cmd, f"TX [{name}]")))
)
if not tasks:
logger.warning(
"Phase %d: %s — no workload configured, skipping", phase_idx + 1, name
)
return result
for label, report_path, task in tasks:
returncode, _stdout, stderr = await task
_collect_task_result(label, returncode, stderr, report_path, result)
result.actual_sec = time.monotonic() - t0
logger.info(
"Phase %d complete: %.1fs actual, %d errors",
phase_idx + 1,
result.actual_sec,
len(result.errors),
)
return result
# ---------------------------------------------------------------------------
# Profile execution
# ---------------------------------------------------------------------------
async def run_profile(
profile: dict[str, Any],
endpoints: list[str],
report_dir: Path,
) -> dict[str, Any]:
"""Execute all phases in a profile sequentially.
Args:
profile: Profile dict with phases and propagation_wait_sec.
endpoints: WebSocket endpoints for the rippled cluster.
report_dir: Directory for phase reports.
Returns:
Combined report dict with per-phase results and totals.
"""
phases = profile.get("phases", [])
propagation_wait = profile.get("propagation_wait_sec", 60)
results: list[PhaseResult] = []
total_start = time.monotonic()
for idx, phase in enumerate(phases):
result = await run_phase(phase, endpoints, report_dir, idx)
results.append(result)
# Wait for telemetry data to propagate through the collector pipeline.
logger.info("Waiting %ds for telemetry data to propagate...", propagation_wait)
await asyncio.sleep(propagation_wait)
total_elapsed = time.monotonic() - total_start
# Build combined report from all phase results.
total_rpc_sent = 0
total_rpc_errors = 0
total_tx_submitted = 0
total_tx_errors = 0
phase_reports = []
for r in results:
pr: dict[str, Any] = {
"name": r.name,
"duration_sec": r.duration_sec,
"actual_sec": round(r.actual_sec, 1),
"errors": r.errors,
}
if r.rpc_summary:
pr["rpc"] = r.rpc_summary
total_rpc_sent += r.rpc_summary.get("total_sent", 0)
total_rpc_errors += r.rpc_summary.get("total_errors", 0)
if r.tx_summary:
pr["tx"] = r.tx_summary
total_tx_submitted += r.tx_summary.get("total_submitted", 0)
total_tx_errors += r.tx_summary.get("total_errors", 0)
phase_reports.append(pr)
report = {
"profile": profile.get("description", ""),
"total_elapsed_sec": round(total_elapsed, 1),
"phases": phase_reports,
"totals": {
"rpc_sent": total_rpc_sent,
"rpc_errors": total_rpc_errors,
"tx_submitted": total_tx_submitted,
"tx_errors": total_tx_errors,
},
}
return report
# ---------------------------------------------------------------------------
# CLI entry point
# ---------------------------------------------------------------------------
def parse_args() -> argparse.Namespace:
"""Parse command-line arguments."""
parser = argparse.ArgumentParser(
description="Workload Orchestrator for rippled telemetry validation",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog="""
Profiles:
full-validation Full 18-dashboard coverage (~4 min load + 1 min propagation)
quick-smoke Fast CI smoke test (~30s load + 30s propagation)
stress Heavy sustained load for benchmarking (~3.5 min + 1 min)
Examples:
python3 workload_orchestrator.py --profile full-validation
python3 workload_orchestrator.py --profile quick-smoke --endpoints ws://localhost:6006
python3 workload_orchestrator.py --profile stress --report /tmp/report.json
""",
)
parser.add_argument(
"--profile",
type=str,
required=True,
help="Named profile from workload-profiles.json",
)
parser.add_argument(
"--endpoints",
nargs="+",
default=["ws://localhost:6006"],
help="WebSocket endpoints (default: ws://localhost:6006)",
)
parser.add_argument(
"--report",
type=str,
default=None,
help="Write combined JSON report to this file",
)
parser.add_argument(
"--report-dir",
type=str,
default="/tmp/xrpld-validation/reports",
help="Directory for per-phase reports",
)
parser.add_argument(
"--verbose",
action="store_true",
help="Enable debug logging",
)
return parser.parse_args()
def main() -> None:
"""Main entry point for the workload orchestrator."""
args = parse_args()
logging.basicConfig(
level=logging.DEBUG if args.verbose else logging.INFO,
format="%(asctime)s [%(name)s] %(levelname)s %(message)s",
)
profile = load_profile(args.profile)
report_dir = Path(args.report_dir)
report_dir.mkdir(parents=True, exist_ok=True)
report = asyncio.run(run_profile(profile, args.endpoints, report_dir))
print(json.dumps(report, indent=2))
if args.report:
with open(args.report, "w") as f:
json.dump(report, f, indent=2)
logger.info("Combined report written to %s", args.report)
# Exit with error if either generator had high error rates.
totals = report["totals"]
rpc_err_rate = (
totals["rpc_errors"] / totals["rpc_sent"] * 100 if totals["rpc_sent"] > 0 else 0
)
tx_err_rate = (
totals["tx_errors"] / totals["tx_submitted"] * 100
if totals["tx_submitted"] > 0
else 0
)
if rpc_err_rate > 50 or tx_err_rate > 50:
logger.error(
"High error rates: RPC=%.1f%%, TX=%.1f%%", rpc_err_rate, tx_err_rate
)
sys.exit(1)
if __name__ == "__main__":
main()


@@ -0,0 +1,94 @@
# xrpld validator node configuration template for workload harness.
#
# Placeholders (replaced by docker-compose entrypoint):
# {{NODE_INDEX}} — Node number (1-based)
# {{RPC_PORT}} — HTTP RPC port
# {{WS_PORT}} — WebSocket port
# {{PEER_PORT}} — Peer protocol port
# {{DATA_DIR}} — Node data directory
# {{VALIDATION_SEED}} — Validator seed from key generation
# {{VALIDATORS_FILE}} — Path to shared validators.txt
# {{IPS_FIXED}} — Peer addresses (one per line)
# {{OTEL_ENDPOINT}} — OTel Collector OTLP/HTTP endpoint
# {{STATSD_ADDRESS}} — StatsD UDP address (host:port)
# {{LOG_LEVEL}} — Log level (debug, info, warning, error)
[server]
port_rpc
port_ws
port_peer
[port_rpc]
port = {{RPC_PORT}}
ip = 0.0.0.0
admin = 0.0.0.0
protocol = http
[port_ws]
port = {{WS_PORT}}
ip = 0.0.0.0
admin = 0.0.0.0
protocol = ws
[port_peer]
port = {{PEER_PORT}}
ip = 0.0.0.0
protocol = peer
[node_db]
type=NuDB
path={{DATA_DIR}}/nudb
online_delete=256
[database_path]
{{DATA_DIR}}/db
[debug_logfile]
{{DATA_DIR}}/debug.log
[validation_seed]
{{VALIDATION_SEED}}
[validators_file]
{{VALIDATORS_FILE}}
[ips_fixed]
{{IPS_FIXED}}
[peer_private]
1
# --- OpenTelemetry tracing (all categories enabled) ---
[telemetry]
enabled=1
service_instance_id=validator-{{NODE_INDEX}}
endpoint={{OTEL_ENDPOINT}}
exporter=otlp_http
sampling_ratio=1.0
batch_size=512
batch_delay_ms=2000
max_queue_size=2048
trace_rpc=1
trace_transactions=1
trace_consensus=1
trace_peer=1
trace_ledger=1
# --- StatsD metrics (beast::insight) ---
[insight]
server=statsd
address={{STATSD_ADDRESS}}
prefix=rippled
[rpc_startup]
{ "command": "log_level", "severity": "{{LOG_LEVEL}}" }
[ssl_verify]
0
# --- Network tuning for local cluster ---
[network_id]
0
[sntp_servers]
time.google.com


@@ -0,0 +1,60 @@
# Standalone xrpld configuration with OpenTelemetry enabled.
#
# Usage:
# 1. Start the observability stack:
# docker compose -f docker/telemetry/docker-compose.yml up -d
# 2. Run xrpld in standalone mode:
# ./xrpld --conf docker/telemetry/xrpld-telemetry.cfg -a --start
# 3. Send RPC commands to exercise tracing:
# curl -s http://localhost:5005 -d '{"method":"server_info"}'
# 4. View traces in Jaeger UI: http://localhost:16686
[server]
port_rpc_admin_local
port_ws_admin_local
[port_rpc_admin_local]
port = 5005
ip = 127.0.0.1
admin = 127.0.0.1
protocol = http
[port_ws_admin_local]
port = 6006
ip = 127.0.0.1
admin = 127.0.0.1
protocol = ws
[node_db]
type=NuDB
path=docker/telemetry/data/nudb
online_delete=256
advisory_delete=0
[database_path]
docker/telemetry/data
[debug_logfile]
docker/telemetry/data/debug.log
[rpc_startup]
{ "command": "log_level", "severity": "debug" }
[ssl_verify]
0
# --- OpenTelemetry tracing ---
[telemetry]
enabled=1
service_instance_id=rippled-standalone
endpoint=http://localhost:4318/v1/traces
exporter=otlp_http
sampling_ratio=1.0
batch_size=512
batch_delay_ms=5000
max_queue_size=2048
trace_rpc=1
trace_transactions=1
trace_consensus=1
trace_peer=0
trace_ledger=1

docs/build/telemetry.md

@@ -0,0 +1,278 @@
# OpenTelemetry Tracing for Rippled
This document explains how to build rippled with OpenTelemetry distributed tracing support, configure the runtime telemetry options, and set up the observability backend to view traces.
- [OpenTelemetry Tracing for Rippled](#opentelemetry-tracing-for-rippled)
- [Overview](#overview)
- [Building with Telemetry](#building-with-telemetry)
- [Summary](#summary)
- [Build steps](#build-steps)
- [Install dependencies](#install-dependencies)
- [Call CMake](#call-cmake)
- [Build](#build)
- [Building without telemetry](#building-without-telemetry)
- [Runtime Configuration](#runtime-configuration)
- [Configuration options](#configuration-options)
- [Observability Stack](#observability-stack)
- [Start the stack](#start-the-stack)
- [Verify the stack](#verify-the-stack)
- [View traces in Jaeger](#view-traces-in-jaeger)
- [Running Tests](#running-tests)
- [Troubleshooting](#troubleshooting)
- [No traces appear in Jaeger](#no-traces-appear-in-jaeger)
- [Conan lockfile error](#conan-lockfile-error)
- [CMake target not found](#cmake-target-not-found)
- [Architecture](#architecture)
- [Key files](#key-files)
- [Conditional compilation](#conditional-compilation)
## Overview
Rippled supports optional [OpenTelemetry](https://opentelemetry.io/) distributed tracing.
When enabled, it instruments RPC requests with trace spans that are exported via
OTLP/HTTP to an OpenTelemetry Collector, which forwards them to a tracing backend
such as Jaeger.
Telemetry is **off by default** at both compile time and runtime:
- **Compile time**: The Conan option `telemetry` and CMake option `telemetry` must be set to `True`/`ON`.
When disabled, all tracing macros compile to `((void)0)` with zero overhead.
- **Runtime**: The `[telemetry]` config section must set `enabled=1`.
When disabled at runtime, a no-op implementation is used.
## Building with Telemetry
### Summary
Follow the same instructions as in [BUILD.md](../../BUILD.md), with the following changes:
1. Pass `-o telemetry=True` to `conan install` to pull the `opentelemetry-cpp` dependency.
2. CMake will automatically pick up `telemetry=ON` from the Conan-generated toolchain.
3. Build as usual.
---
### Build steps
```bash
cd /path/to/rippled
rm -rf .build
mkdir .build
cd .build
```
#### Install dependencies
The `telemetry` option adds `opentelemetry-cpp/1.18.0` as a dependency.
If the Conan lockfile does not yet include this package, bypass it with `--lockfile=""`.
```bash
conan install .. \
--output-folder . \
--build missing \
--settings build_type=Debug \
-o telemetry=True \
-o tests=True \
-o xrpld=True \
--lockfile=""
```
> **Note**: The first build with telemetry may take longer as `opentelemetry-cpp`
> and its transitive dependencies are compiled from source.
#### Call CMake
The Conan-generated toolchain file sets `telemetry=ON` automatically.
No additional CMake flags are needed beyond the standard ones.
```bash
cmake .. -G Ninja \
-DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
-DCMAKE_BUILD_TYPE=Debug \
-Dtests=ON -Dxrpld=ON
```
You should see in the CMake output:
```
-- OpenTelemetry tracing enabled
```
#### Build
```bash
cmake --build . --parallel $(nproc)
```
### Building without telemetry
Omit the `-o telemetry=True` option (or pass `-o telemetry=False`).
The `opentelemetry-cpp` dependency will not be downloaded,
the `XRPL_ENABLE_TELEMETRY` preprocessor define will not be set,
and all tracing macros will compile to no-ops.
The resulting binary is identical to one built before telemetry support was added.
## Runtime Configuration
Add a `[telemetry]` section to your `xrpld.cfg` file:
```ini
[telemetry]
enabled=1
service_name=rippled
endpoint=http://localhost:4318/v1/traces
sampling_ratio=1.0
trace_rpc=1
trace_transactions=1
trace_consensus=1
trace_peer=0
```
### Configuration options
| Option | Type | Default | Description |
| --------------------- | ------ | --------------------------------- | -------------------------------------------------- |
| `enabled` | int | `0` | Enable (`1`) or disable (`0`) telemetry at runtime |
| `service_name` | string | `rippled` | Service name reported in traces |
| `service_instance_id` | string | node public key | Unique instance identifier |
| `exporter` | string | `otlp_http` | Exporter type |
| `endpoint` | string | `http://localhost:4318/v1/traces` | OTLP/HTTP collector endpoint |
| `use_tls` | int | `0` | Enable TLS for the exporter connection |
| `tls_ca_cert` | string | (empty) | Path to CA certificate for TLS |
| `sampling_ratio` | double | `1.0` | Fraction of traces to sample (`0.0` to `1.0`) |
| `batch_size` | uint32 | `512` | Maximum spans per export batch |
| `batch_delay_ms` | uint32 | `5000` | Maximum delay (ms) before flushing a batch |
| `max_queue_size` | uint32 | `2048` | Maximum spans queued in memory |
| `trace_rpc` | int | `1` | Enable RPC request tracing |
| `trace_transactions` | int | `1` | Enable transaction lifecycle tracing |
| `trace_consensus` | int | `1` | Enable consensus round tracing |
| `trace_peer` | int | `0` | Enable peer message tracing (high volume) |
| `trace_ledger` | int | `1` | Enable ledger close tracing |
## Observability Stack
A Docker Compose stack is provided in `docker/telemetry/` with three services:
| Service | Port | Purpose |
| ------------------ | ---------------------------------------------- | ---------------------------------------------------- |
| **OTel Collector** | `4317` (gRPC), `4318` (HTTP), `13133` (health) | Receives OTLP spans, batches, and forwards to Jaeger |
| **Jaeger** | `16686` (UI) | Trace storage and visualization |
| **Grafana** | `3000` | Dashboards (Jaeger pre-configured as datasource) |
### Start the stack
```bash
docker compose -f docker/telemetry/docker-compose.yml up -d
```
### Verify the stack
```bash
# Collector health
curl http://localhost:13133
# Jaeger UI
open http://localhost:16686
# Grafana
open http://localhost:3000
```
### View traces in Jaeger
1. Open `http://localhost:16686` in a browser.
2. Select the service name (e.g. `rippled`) from the **Service** dropdown.
3. Click **Find Traces**.
4. Click into any trace to see the span tree and attributes.
Traced RPC operations produce a span hierarchy like:
```
rpc.request
└── rpc.command.server_info (xrpl.rpc.command=server_info, xrpl.rpc.status=success)
```
Each span includes attributes:
- `xrpl.rpc.command` — the RPC method name
- `xrpl.rpc.version` — API version
- `xrpl.rpc.role` — `admin` or `user`
- `xrpl.rpc.status` — `success` or `error`
## Running Tests
Unit tests run with the telemetry-enabled build regardless of whether the
observability stack is running. When no collector is available, the exporter
silently drops spans with no impact on test results.
```bash
# Run all RPC tests
./xrpld --unittest=RPCCall,ServerInfo,AccountTx,LedgerRPC,Transaction --unittest-jobs $(nproc)
# Run the full test suite
./xrpld --unittest --unittest-jobs $(nproc)
```
To generate traces during manual testing, start rippled in standalone mode:
```bash
./xrpld --conf /path/to/xrpld.cfg --standalone --start
```
Then send RPC requests:
```bash
curl -s -X POST http://127.0.0.1:5005/ \
-H "Content-Type: application/json" \
-d '{"method":"server_info","params":[{}]}'
```
## Troubleshooting
### No traces appear in Jaeger
1. Confirm the OTel Collector is running: `docker compose -f docker/telemetry/docker-compose.yml ps`
2. Check collector logs for errors: `docker compose -f docker/telemetry/docker-compose.yml logs otel-collector`
3. Confirm `[telemetry] enabled=1` is set in the rippled config.
4. Confirm `endpoint` points to the correct collector address (`http://localhost:4318/v1/traces`).
5. Wait for the batch delay to elapse (default `5000` ms) before checking Jaeger.
### Conan lockfile error
If you see `ERROR: Requirement 'opentelemetry-cpp/1.18.0' not in lockfile 'requires'`,
the lockfile was generated without the telemetry dependency.
Pass `--lockfile=""` to bypass the lockfile, or regenerate it with telemetry enabled.
### CMake target not found
If CMake reports that `opentelemetry-cpp` targets are not found,
ensure you ran `conan install` with `-o telemetry=True` and that the
Conan-generated toolchain file is being used.
The Conan package provides a single umbrella target
`opentelemetry-cpp::opentelemetry-cpp` (not individual component targets).
## Architecture
### Key files
| File | Purpose |
| ---------------------------------------------- | ----------------------------------------------------------- |
| `include/xrpl/telemetry/Telemetry.h` | Abstract telemetry interface and `Setup` struct |
| `include/xrpl/telemetry/SpanGuard.h` | RAII span guard (activates scope, ends span on destruction) |
| `src/libxrpl/telemetry/Telemetry.cpp` | OTel-backed implementation (`TelemetryImpl`) |
| `src/libxrpl/telemetry/TelemetryConfig.cpp` | Config parser (`setup_Telemetry()`) |
| `src/libxrpl/telemetry/NullTelemetry.cpp` | No-op implementation (used when disabled) |
| `src/xrpld/telemetry/TracingInstrumentation.h` | Convenience macros (`XRPL_TRACE_RPC`, etc.) |
| `src/xrpld/rpc/detail/ServerHandler.cpp` | RPC entry point instrumentation |
| `src/xrpld/rpc/detail/RPCHandler.cpp` | Per-command instrumentation |
| `docker/telemetry/docker-compose.yml` | Observability stack (Collector + Jaeger + Grafana) |
| `docker/telemetry/otel-collector-config.yaml` | OTel Collector pipeline configuration |
### Conditional compilation
All OpenTelemetry SDK headers are guarded behind `#ifdef XRPL_ENABLE_TELEMETRY`.
The instrumentation macros in `TracingInstrumentation.h` compile to `((void)0)` when
the define is absent.
At runtime, if `enabled=0` is set in config (or the section is omitted), a
`NullTelemetry` implementation is used that returns no-op spans.
This two-layer approach gives zero overhead when telemetry is compiled out, and
only the cost of returning no-op spans when it is compiled in but disabled at runtime.
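The compile-time layer can be sketched as follows. This is illustrative only; `g_spansStarted` and `handleServerInfo` are stand-ins for demonstration, not the real macro bodies in `TracingInstrumentation.h`:

```cpp
#include <cassert>

// Illustrative sketch, not the real TracingInstrumentation.h: without
// XRPL_ENABLE_TELEMETRY, the macro expands to a no-op expression and the
// call site carries no telemetry code at all.
inline int g_spansStarted = 0;  // stand-in for "a span was created"

#ifdef XRPL_ENABLE_TELEMETRY
#define XRPL_TRACE_RPC(name) (++g_spansStarted)
#else
#define XRPL_TRACE_RPC(name) ((void)0)
#endif

inline int
handleServerInfo()
{
    XRPL_TRACE_RPC("server_info");  // ((void)0) in a non-telemetry build
    return g_spansStarted;
}
```

In a build without the define, the macro argument is discarded at preprocessing time, so there is no string literal, no function call, and no branch left in the emitted code.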

---
# External Dashboard Parity — Design Spec
> **Date**: 2026-03-30
> **Status**: Draft
> **Source**: [realgrapedrop/xrpl-validator-dashboard](https://github.com/realgrapedrop/xrpl-validator-dashboard)
> **Jira Epic**: RIPD-5060
## Summary
Integrate 29 missing metrics, 18 alert rules, and enriched span attributes from the community `xrpl-validator-dashboard` into rippled's native OpenTelemetry instrumentation. Changes are distributed across phases 2, 3, 4, 6, 7, 9, 10, and 11 of the OTel PR chain.
## Gap Analysis
### Coverage Breakdown (86 external metrics)
| Status | Count | Notes |
| ----------------- | ----- | -------------------------------------------------------------- |
| Already covered | 30 | peer_count, load_factor, io_latency, uptime, overlay traffic |
| Partially covered | 3 | state_value encoding, NuDB granularity, validation_quorum |
| Missing | 29 | Validation agreement, ledger economy, peer quality, UNL health |
| N/A (external) | 24 | Monitor health, realtime duplicates, system metrics |
### Missing Metrics by Category
| Category | Metrics | Count |
| -------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----- |
| Validation Agreement | `validations_sent_total`, `validations_checked_total`, `validation_agreements_total`, `validation_missed_total`, `validation_agreement_pct_1h/24h`, `validation_agreements_1h/24h`, `validation_missed_1h/24h`, `validation_event` | 11 |
| Ledger Economy | `ledgers_closed_total`, `ledger_age_seconds`, `base_fee_xrp`, `reserve_base_xrp`, `reserve_inc_xrp`, `transaction_rate` | 6 |
| State Tracking | `time_in_current_state_seconds`, `state_changes_total`, `validator_state_info` | 3 |
| Peer Quality | `peers_insane`, `peer_latency_p90_ms` | 2 |
| Validator Health | `amendment_blocked`, `unl_expiry_days` | 2 |
| Upgrade Awareness | `peers_higher_version_pct`, `upgrade_recommended` | 2 |
| Storage / Other | `ledger_nudb_bytes`, `jq_trans_overflow_total`, `initial_sync_duration_seconds` | 3 |
### Alert Rules (18 total, from external dashboard)
| Group | Count | Rules |
| ----------- | ----- | ----------------------------------------------------------------------------------------------------------------------- |
| Critical | 8 | Agreement <90%, not proposing, unhealthy state, amendment blocked, UNL expiring, IO latency, load factor, peer count <5 |
| Network | 3 | Peer drop >10%/30%, P90 latency + disconnect correlation |
| Performance | 7 | CPU >80%, memory >90%, disk >85%, job queue overflow, upgrade recommended, tx rate drop, stale ledger |
---
## Branch-to-Change Mapping
### Phase 2 — `pratik/otel-phase2-rpc-tracing`
> **Ref**: Adds to existing Phase 2 task list. Consumed by Phase 7 (MetricsRegistry) and Phase 10 (validation checks).
**Task 2.8: RPC Span Attribute Enrichment**
Add node-level health context to every `rpc.command.*` span so operators can correlate RPC behavior with node state.
New span attributes on `rpc.command.*`:
| Attribute | Type | Source | Value Example |
| ----------------------------- | ------ | ------------------------------------ | --------------------- |
| `xrpl.node.amendment_blocked` | bool | `app_.getOPs().isAmendmentBlocked()` | `true` |
| `xrpl.node.server_state` | string | `app_.getOPs().strOperatingMode()` | `"full"`, `"syncing"` |
**File**: `src/xrpld/rpc/detail/RPCHandler.cpp` (in the `rpc.command.*` span creation block, after existing setAttribute calls)
**Rationale**: RPC is the operator's primary interaction point. When a node is amendment-blocked or degraded, every RPC response is suspect. Tagging spans with this state enables trace queries like `{name=~"rpc.command.*"} | xrpl.node.amendment_blocked = true` to find all RPCs served during a blocked period.
**Exit Criteria**:
- [ ] `rpc.command.server_info` spans carry `xrpl.node.amendment_blocked` and `xrpl.node.server_state` attributes
- [ ] No measurable latency impact (attribute values are cached atomics, not computed per-call)
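The "cached atomics" exit criterion can be sketched as below. The names (`updateHealthCache`, `g_amendmentBlocked`) are illustrative, not rippled's; the point is that a background path stores the flag and the per-RPC span code only performs a cheap atomic load:

```cpp
#include <atomic>

// Sketch of the cached-atomic idea: the health flag is written on state
// change, not recomputed on every RPC. Names here are stand-ins.
std::atomic<bool> g_amendmentBlocked{false};

// Called when node health changes (rare), not per request.
void
updateHealthCache(bool blocked)
{
    g_amendmentBlocked.store(blocked, std::memory_order_relaxed);
}

// Called per RPC when tagging the span: a single relaxed load.
// Relaxed ordering is fine because the value is advisory telemetry,
// not a synchronization point.
bool
amendmentBlockedAttribute()
{
    return g_amendmentBlocked.load(std::memory_order_relaxed);
}
```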
---
### Phase 3 — `pratik/otel-phase3-tx-tracing`
> **Ref**: Adds to existing Phase 3 task list. Consumed by Phase 10 (validation checks).
**Task 3.7: Transaction Span Peer Version Attribute**
Add the relaying peer's rippled version to transaction receive spans to enable version-mismatch correlation.
New span attribute on `tx.receive`:
| Attribute | Type | Source | Value Example |
| ------------------- | ------ | -------------------- | ----------------- |
| `xrpl.peer.version` | string | `peer->getVersion()` | `"rippled-2.4.0"` |
**File**: `src/xrpld/overlay/detail/PeerImp.cpp` (in the `tx.receive` span block, after existing `xrpl.peer.id` setAttribute)
**Rationale**: Transaction relay is where version mismatches cause subtle serialization or validation bugs. Tracing "this tx came from a v2.3.0 peer" helps diagnose compatibility issues during network upgrades.
**Exit Criteria**:
- [ ] `tx.receive` spans carry `xrpl.peer.version` attribute with a non-empty version string
- [ ] Attribute is omitted (not empty-string) when `getVersion()` returns empty
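The omit-when-empty rule can be sketched like this. A plain attribute map stands in for the real OTel span API here; only the guard is the point:

```cpp
#include <map>
#include <string>

// Sketch of the second exit criterion: when the peer reports no version,
// the attribute is left off the span entirely instead of being set to "".
void
setPeerVersionAttribute(std::map<std::string, std::string>& attributes,
                        std::string const& version)
{
    if (version.empty())
        return;  // omit the attribute rather than record an empty string
    attributes["xrpl.peer.version"] = version;
}
```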
---
### Phase 4 — `pratik/otel-phase4-consensus-tracing`
> **Ref**: Adds to existing Phase 4 task list. Provides the span-level foundation that Phase 7 (ValidationTracker) builds upon. Consumed by Phase 10 (validation checks).
**Task 4.8: Consensus Validation Span Enrichment**
Add ledger hash and validation type to validation spans on both send and receive paths. This enables trace-level agreement analysis — filter by ledger hash to see which validators agreed.
New span attributes on `consensus.validation.send`:
| Attribute | Type | Source | Value Example |
| ----------------------------- | ------ | --------------------------------------- | --------------------------- |
| `xrpl.validation.ledger_hash` | string | Ledger hash from `validate()` call args | `"A1B2C3..."` (64-char hex) |
| `xrpl.validation.full` | bool | Whether this is a full validation | `true` |
New span attributes on `peer.validation.receive`:
| Attribute | Type | Source | Value Example |
| ---------------------------------- | ------ | ------------------------------------- | --------------------------- |
| `xrpl.peer.validation.ledger_hash` | string | From deserialized STValidation object | `"A1B2C3..."` (64-char hex) |
| `xrpl.peer.validation.full` | bool | From STValidation flags | `true` |
New span attributes on `consensus.accept`:
| Attribute | Type | Source | Value Example |
| ------------------------------------ | ----- | ---------------------------------------- | ------------- |
| `xrpl.consensus.validation_quorum` | int64 | `app_.validators().quorum()` | `28` |
| `xrpl.consensus.proposers_validated` | int64 | `result.proposers` from consensus result | `35` |
**Files**:
- `src/xrpld/app/consensus/RCLConsensus.cpp` (validation.send and accept spans)
- `src/xrpld/overlay/detail/PeerImp.cpp` (peer.validation.receive span)
**Rationale**: The external dashboard's most valuable feature is validation agreement tracking. By recording the ledger hash on both outgoing and incoming validation spans, we create the raw data for agreement analysis at the trace level. Phase 7's ValidationTracker builds the metric-level aggregation on top of this.
**Exit Criteria**:
- [ ] `consensus.validation.send` spans carry `xrpl.validation.ledger_hash` and `xrpl.validation.full`
- [ ] `peer.validation.receive` spans carry `xrpl.peer.validation.ledger_hash` and `xrpl.peer.validation.full`
- [ ] `consensus.accept` spans carry `xrpl.consensus.validation_quorum` and `xrpl.consensus.proposers_validated`
- [ ] Ledger hash attributes match between send and receive for the same ledger
---
### Phase 6 — `pratik/otel-phase6-statsd`
> **Ref**: Adds to existing Phase 6 scope. No separate task list file exists for Phase 6 per project convention.
**Addition: Bridge `peerDisconnectsCharges_` metric**
The overlay already tracks resource-limit disconnects via `OverlayImpl::Stats::peerDisconnectsCharges_` (a `beast::insight::Gauge`). This metric is registered but not included in the StatsD bridge mapping.
**What to do**:
- Ensure `rippled_Overlay_Peer_Disconnects_Charges` appears in the StatsD-to-Prometheus metric name mapping
- Verify the metric appears in Prometheus after StatsD bridge is active
**File**: `src/xrpld/overlay/detail/OverlayImpl.cpp`
**Prometheus name**: `rippled_Overlay_Peer_Disconnects_Charges`
---
### Phase 7 — `pratik/otel-phase7-native-metrics`
> **Ref**: Adds to existing Phase 7 task list. This is the largest addition. Depends on Phase 4 span attributes for validation tracking context. Consumed by Phase 9 (dashboards), Phase 10 (validation), Phase 11 (alerts).
**Task 7.8: ValidationTracker — Validation Agreement Computation**
The most valuable missing component: a stateful class that tracks whether our validator's validations agree with network consensus, maintaining rolling 1h and 24h windows.
**Architecture**:
```
consensus.validation.send ─────> ValidationTracker ──────> MetricsRegistry
  (records our validation          (reconciles after         (exports agreement
   for ledger X)                    8s grace period)          gauges every 10s)

ledger.validate ───────────────> ValidationTracker
  (records which ledger            (marks ledger X as
   network validated)               agreed or missed)
```
**Design**:
```cpp
/// Tracks validation agreement between this node and network consensus.
///
/// ValidationTracker
/// ├── recordOurValidation(ledgerHash, ledgerSeq) // called when we send
/// ├── recordNetworkValidation(ledgerHash, seq) // called on ledger validate
/// ├── reconcile() // called periodically (timer)
/// ├── agreementPct1h() -> double // 0.0-100.0
/// ├── agreementPct24h() -> double
/// ├── agreements1h() -> uint64_t
/// ├── missed1h() -> uint64_t
/// ├── agreements24h() -> uint64_t
/// ├── missed24h() -> uint64_t
/// ├── totalAgreements() -> uint64_t
/// ├── totalMissed() -> uint64_t
/// ├── totalValidationsSent() -> uint64_t
/// └── totalValidationsChecked() -> uint64_t // all network validations seen
class ValidationTracker
{
// Ring buffer of pending ledger events (max 1000)
struct LedgerEvent {
uint256 ledgerHash;
LedgerIndex seq;
TimePoint closeTime;
bool weValidated = false; // did we send a validation for this ledger?
bool networkValidated = false; // did network validate this ledger?
bool reconciled = false; // has 8s grace period elapsed?
bool agreed = false; // after reconciliation: did we agree?
};
// Sliding window deques for pre-computed window stats
struct WindowEvent {
TimePoint time;
bool agreed;
};
std::deque<WindowEvent> window1h_; // events in last 1 hour
std::deque<WindowEvent> window24h_; // events in last 24 hours
// Reconciliation: 8s grace period after ledger close.
// If our validation hasn't arrived by then, mark as missed.
// 5-minute late repair: if a late validation arrives, correct the miss.
static constexpr auto kGracePeriod = std::chrono::seconds(8);
static constexpr auto kLateRepairWindow = std::chrono::minutes(5);
};
```
**Recording sites** (modifications to consensus code from Phase 7 branch):
| Hook Point | File | What to Record |
| ---------------------------- | ------------------- | ----------------------------------------------------------------------- |
| `validate()` in `doAccept()` | RCLConsensus.cpp | `tracker.recordOurValidation(ledgerHash, seq)` |
| `onValidation()` callback | RCLValidations path | `tracker.recordNetworkValidation(...)` — increment `validationsChecked` |
| LedgerMaster fully-validated | LedgerMaster.cpp | `tracker.recordNetworkValidation(validatedHash, seq)` |
**Key new files**:
- `src/xrpld/telemetry/ValidationTracker.h`
- `src/xrpld/telemetry/detail/ValidationTracker.cpp`
**Key modified files**:
- `src/xrpld/telemetry/MetricsRegistry.h` (add ValidationTracker member)
- `src/xrpld/telemetry/MetricsRegistry.cpp` (add gauge callback reading from tracker)
- `src/xrpld/app/consensus/RCLConsensus.cpp` (add recording hooks)
- `src/xrpld/app/ledger/detail/LedgerMaster.cpp` (add recording hook)
**Exit Criteria**:
- [ ] `ValidationTracker` correctly tracks agreement with 8s grace period
- [ ] 5-minute late repair corrects false-positive misses
- [ ] Thread-safe (atomics + mutex for window deques)
- [ ] Rolling windows correctly evict stale entries
- [ ] Unit tests for: normal agreement, missed validation, late repair, window eviction
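The rolling-window bookkeeping behind `agreementPct1h()` can be sketched as follows. This is a hedged sketch of the eviction logic only; the real `ValidationTracker` adds thread safety, the 8s grace period, and the late-repair pass:

```cpp
#include <chrono>
#include <cstddef>
#include <deque>

// Sketch of the sliding-window stats: WindowEvent mirrors the struct in the
// design above; everything else is illustrative.
using Clock = std::chrono::steady_clock;

struct WindowEvent
{
    Clock::time_point time;
    bool agreed;
};

// Evict events older than `window` from the front (events arrive in time
// order), then return the agreement percentage over what remains, 0-100.
double
agreementPct(std::deque<WindowEvent>& events,
             Clock::time_point now,
             Clock::duration window)
{
    while (!events.empty() && now - events.front().time > window)
        events.pop_front();  // stale entry: outside the rolling window
    if (events.empty())
        return 0.0;
    std::size_t agreed = 0;
    for (auto const& e : events)
        if (e.agreed)
            ++agreed;
    return 100.0 * static_cast<double>(agreed) /
        static_cast<double>(events.size());
}
```

Because reconciled events are appended in close-time order, eviction only ever touches the front of the deque, so each event is pushed and popped at most once.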
---
**Task 7.9: Validator Health Observable Gauges**
New MetricsRegistry observable gauge for amendment, UNL, and quorum health.
| Gauge Name | Label `metric=` | Type | Source |
| -------------------------- | ------------------- | ------ | ------------------------------------------------- |
| `rippled_validator_health` | `amendment_blocked` | int64 | `app_.getOPs().isAmendmentBlocked()` → 0/1 |
| | `unl_blocked` | int64 | `app_.getOPs().isUNLBlocked()` → 0/1 |
| | `unl_expiry_days` | double | `app_.validators().expires()` → days until expiry |
| | `validation_quorum` | int64 | `app_.validators().quorum()` |
**File**: `src/xrpld/telemetry/MetricsRegistry.cpp` (new gauge callback in `registerAsyncGauges()`)
**Exit Criteria**:
- [ ] All 4 label values emitted every 10s
- [ ] `unl_expiry_days` is negative when expired, positive when active
- [ ] Values visible in Prometheus
---
**Task 7.10: Peer Quality Observable Gauges**
New MetricsRegistry observable gauge for peer health aggregates.
| Gauge Name | Label `metric=` | Type | Source |
| ---------------------- | -------------------------- | ------ | ------------------------------------------ |
| `rippled_peer_quality` | `peer_latency_p90_ms` | double | Iterate peers, compute P90 from `latency_` |
| | `peers_insane_count` | int64 | Count peers with `tracking_ == diverged` |
| | `peers_higher_version_pct` | double | Compare `getVersion()` to own version |
| | `upgrade_recommended` | int64 | 1 if `peers_higher_version_pct > 60%` |
**Implementation note**: The callback iterates `app_.overlay().foreach(...)` to collect per-peer latency and version data. This runs every 10s on the metrics reader thread — acceptable overhead for ~50-200 peers.
**File**: `src/xrpld/telemetry/MetricsRegistry.cpp`
**Exit Criteria**:
- [ ] P90 latency computed correctly (sort peer latencies, pick 90th percentile)
- [ ] Insane count matches `peers` RPC output
- [ ] Version comparison handles format variations (e.g., "rippled-2.4.0-rc1")
- [ ] Values visible in Prometheus
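The P90 computation named in the exit criteria can be sketched with nearest-rank selection. The function below is illustrative (the real callback gathers `latenciesMs` via `app_.overlay().foreach(...)`); integer arithmetic is used for the rank so `ceil(0.9 * n)` avoids floating-point edge cases:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Nearest-rank P90 sketch: sort the per-peer latencies and return the value
// at rank ceil(0.9 * n), computed in integers.
double
latencyP90(std::vector<double> latenciesMs)
{
    if (latenciesMs.empty())
        return 0.0;  // no peers connected
    std::sort(latenciesMs.begin(), latenciesMs.end());
    std::size_t const rank = (9 * latenciesMs.size() + 9) / 10;  // ceil(0.9n)
    return latenciesMs[rank - 1];
}
```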
---
**Task 7.11: Ledger Economy Observable Gauges**
New MetricsRegistry observable gauge for fee and ledger metrics.
| Gauge Name | Label `metric=` | Type | Source |
| ------------------------ | -------------------- | ------ | ----------------------------------------- |
| `rippled_ledger_economy` | `base_fee_xrp` | double | `app_.getFeeTrack().getBaseFee()` → drops |
| | `reserve_base_xrp` | double | From validated ledger fee settings |
| | `reserve_inc_xrp` | double | From validated ledger fee settings |
| | `ledger_age_seconds` | double | `now - lastValidatedCloseTime` |
| | `transaction_rate` | double | Derived: tx count delta / time delta |
**File**: `src/xrpld/telemetry/MetricsRegistry.cpp`
**Exit Criteria**:
- [ ] Fee values match `server_info` RPC output
- [ ] `ledger_age_seconds` increases monotonically between ledger closes, resets on close
- [ ] `transaction_rate` is smoothed (rolling average, not instantaneous)
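One way to satisfy the smoothing exit criterion is an exponential moving average over per-interval deltas. The sketch below is an assumption, not the spec's chosen design; the `alpha` value and the cumulative-counter input are illustrative:

```cpp
#include <cstdint>

// EMA sketch for a smoothed transaction rate: feed the cumulative validated
// tx count each sample; the smoothed rate climbs toward the true rate
// instead of jumping with each ledger.
struct SmoothedTxRate
{
    double alpha = 0.2;           // smoothing factor (illustrative)
    double rate = 0.0;            // tx/s, exponentially smoothed
    std::uint64_t lastCount = 0;  // cumulative tx count at last sample

    double
    update(std::uint64_t txCount, double elapsedSeconds)
    {
        double const instantaneous =
            static_cast<double>(txCount - lastCount) / elapsedSeconds;
        lastCount = txCount;
        rate = alpha * instantaneous + (1.0 - alpha) * rate;
        return rate;
    }
};
```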
---
**Task 7.12: State Tracking Observable Gauges**
New MetricsRegistry observable gauge for node state duration.
| Gauge Name | Label `metric=` | Type | Source |
| ------------------------ | ------------------------------- | ------ | ------------------------------------------------ |
| `rippled_state_tracking` | `state_value` | int64 | 0-6 numeric encoding matching external dashboard |
| | `time_in_current_state_seconds` | double | `now - lastModeChangeTime` |
**State value encoding**:
rippled's `OperatingMode` enum maps 0-4 (DISCONNECTED through FULL). The external dashboard extends this to 0-6 by combining operating mode with consensus participation:
| Value | State | Source |
| ----- | ------------ | ----------------------------------------------------------- |
| 0 | disconnected | `OperatingMode::DISCONNECTED` |
| 1 | connected | `OperatingMode::CONNECTED` |
| 2 | syncing | `OperatingMode::SYNCING` |
| 3 | tracking | `OperatingMode::TRACKING` |
| 4 | full | `OperatingMode::FULL` and not validating |
| 5 | validating | `OperatingMode::FULL` and `mConsensus.validating()` is true |
| 6 | proposing | `OperatingMode::FULL` and consensus mode is `proposing` |
**Note**: Values 5-6 require checking both `OperatingMode` and `ConsensusMode`. The callback should derive these from `app_.getOPs().getOperatingMode()` combined with `mConsensus.mode()`. If operating mode is FULL and consensus is proposing → 6; if FULL and validating → 5; otherwise use the raw OperatingMode enum value.
**File**: `src/xrpld/telemetry/MetricsRegistry.cpp`
**Exit Criteria**:
- [ ] `state_value` matches external dashboard encoding
- [ ] `time_in_current_state_seconds` resets on mode change
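The derivation in the note above can be sketched directly. The enum names mirror rippled's `OperatingMode` but this is an illustrative stand-in, not the production callback:

```cpp
// Sketch of the 0-6 state_value encoding: consensus participation extends
// the raw OperatingMode value when the node is FULL.
enum class OperatingMode {
    DISCONNECTED = 0,
    CONNECTED = 1,
    SYNCING = 2,
    TRACKING = 3,
    FULL = 4
};

int
stateValue(OperatingMode mode, bool validating, bool proposing)
{
    if (mode == OperatingMode::FULL)
    {
        if (proposing)
            return 6;  // FULL and consensus mode is proposing
        if (validating)
            return 5;  // FULL and validating, but not proposing
    }
    return static_cast<int>(mode);  // otherwise: raw enum value, 0-4
}
```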
---
**Task 7.13: Storage Detail Observable Gauge**
| Gauge Name | Label `metric=` | Type | Source |
| ------------------------ | --------------- | ----- | ---------------------------------------- |
| `rippled_storage_detail` | `nudb_bytes` | int64 | NuDB backend file size (filesystem stat) |
**File**: `src/xrpld/telemetry/MetricsRegistry.cpp`
**Exit Criteria**:
- [ ] NuDB file size reported in bytes
- [ ] Gracefully returns 0 if NuDB not configured
---
**Task 7.14: New Synchronous Counters**
New counters incremented at event sites. Declared in MetricsRegistry, recording sites added in consensus/overlay/network code.
| Counter Name | Increment Site | Source File |
| ------------------------------------- | -------------------------------- | --------------------- |
| `rippled_ledgers_closed_total` | `onAccept()` in consensus | RCLConsensus.cpp |
| `rippled_validations_sent_total` | `validate()` in consensus | RCLConsensus.cpp |
| `rippled_validations_checked_total` | Network validation received | LedgerMaster.cpp |
| `rippled_validation_agreements_total` | ValidationTracker reconciliation | ValidationTracker.cpp |
| `rippled_validation_missed_total` | ValidationTracker reconciliation | ValidationTracker.cpp |
| `rippled_state_changes_total` | `setMode()` in NetworkOPs | NetworkOPs.cpp |
| `rippled_jq_trans_overflow_total` | Job queue overflow path | JobQueue.cpp |
**Key modified files**:
- `src/xrpld/telemetry/MetricsRegistry.h/.cpp` (counter declarations)
- `src/xrpld/app/consensus/RCLConsensus.cpp` (recording: ledgers_closed, validations_sent)
- `src/xrpld/app/ledger/detail/LedgerMaster.cpp` (recording: validations_checked)
- `src/xrpld/app/misc/NetworkOPs.cpp` (recording: state_changes)
**Exit Criteria**:
- [ ] All 7 counters monotonically increase during normal operation
- [ ] Counter values match expected rates (e.g., ledgers_closed ≈ 1 per 3-5s)
- [ ] Values visible in Prometheus
---
**Task 7.15: Validation Agreement Observable Gauge**
Reads from the `ValidationTracker` (Task 7.8) to export rolling window stats.
| Gauge Name | Label `metric=` | Type | Source |
| ------------------------------ | ------------------- | ------ | --------------------------- |
| `rippled_validation_agreement` | `agreement_pct_1h` | double | `tracker.agreementPct1h()` |
| | `agreements_1h` | int64 | `tracker.agreements1h()` |
| | `missed_1h` | int64 | `tracker.missed1h()` |
| | `agreement_pct_24h` | double | `tracker.agreementPct24h()` |
| | `agreements_24h` | int64 | `tracker.agreements24h()` |
| | `missed_24h` | int64 | `tracker.missed24h()` |
**File**: `src/xrpld/telemetry/MetricsRegistry.cpp`
**Exit Criteria**:
- [ ] Agreement percentages in range [0.0, 100.0]
- [ ] Window stats match manual count from validation counters
- [ ] Percentages stabilize after 1h/24h of operation
---
### Phase 9 — `pratik/otel-phase9-metric-gap-fill`
> **Ref**: Adds to existing Phase 9 task list. Depends on Phase 7 gauges/counters. Consumed by Phase 10 (dashboard load checks).
**Task 9.11: Validator Health Dashboard**
New Grafana dashboard: `rippled-validator-health.json`
| Panel | Type | PromQL |
| -------------------------- | ---------- | ---------------------------------------------------------------- |
| Agreement % (1h) | stat | `rippled_validation_agreement{metric="agreement_pct_1h"}` |
| Agreement % (24h) | stat | `rippled_validation_agreement{metric="agreement_pct_24h"}` |
| Agreements vs Missed (1h) | bargauge | `agreements_1h` and `missed_1h` side by side |
| Agreements vs Missed (24h) | bargauge | `agreements_24h` and `missed_24h` side by side |
| Validation Rate | stat | `rate(rippled_validations_sent_total[5m]) * 60` |
| Validations Checked Rate | stat | `rate(rippled_validations_checked_total[5m]) * 60` |
| Amendment Blocked | stat | `rippled_validator_health{metric="amendment_blocked"}` |
| UNL Expiry (days) | stat | `rippled_validator_health{metric="unl_expiry_days"}` |
| Validation Quorum | stat | `rippled_validator_health{metric="validation_quorum"}` |
| State Value Timeline | timeseries | `rippled_state_tracking{metric="state_value"}` |
| Time in Current State | stat | `rippled_state_tracking{metric="time_in_current_state_seconds"}` |
| State Changes Rate | stat | `rate(rippled_state_changes_total[1h])` |
| Ledgers Closed Rate | stat | `rate(rippled_ledgers_closed_total[5m]) * 60` |
**Dashboard conventions**: `$node` template variable for `exported_instance` filtering, dark theme, matching existing panel sizes and color schemes.
---
**Task 9.12: Peer Quality Dashboard**
New Grafana dashboard: `rippled-peer-quality.json`
| Panel | Type | PromQL |
| ---------------------- | ---------- | ---------------------------------------------------------------- |
| P90 Peer Latency | timeseries | `rippled_peer_quality{metric="peer_latency_p90_ms"}` |
| Insane/Diverged Peers | stat | `rippled_peer_quality{metric="peers_insane_count"}` |
| Higher Version Peers % | stat | `rippled_peer_quality{metric="peers_higher_version_pct"}` |
| Upgrade Recommended | stat | `rippled_peer_quality{metric="upgrade_recommended"}` |
| Resource Disconnects | timeseries | `rippled_Overlay_Peer_Disconnects_Charges` |
| Inbound vs Outbound | bargauge | `rippled_Peer_Finder_Active_Inbound_Peers`, `..._Outbound_Peers` |
---
**Task 9.13: Ledger Economy Dashboard Panels**
Add a "Ledger Economy" row to the existing `system-node-health.json` dashboard:
| Panel | Type | PromQL |
| -------------------- | ---------- | ----------------------------------------------------- |
| Base Fee (drops) | stat | `rippled_ledger_economy{metric="base_fee_xrp"}` |
| Reserve Base (drops) | stat | `rippled_ledger_economy{metric="reserve_base_xrp"}` |
| Reserve Inc (drops) | stat | `rippled_ledger_economy{metric="reserve_inc_xrp"}` |
| Ledger Age | stat | `rippled_ledger_economy{metric="ledger_age_seconds"}` |
| Transaction Rate | timeseries | `rippled_ledger_economy{metric="transaction_rate"}` |
---
### Phase 10 — `pratik/otel-phase10-workload-validation`
> **Ref**: Adds to existing Phase 10 task list. Validates all additions from Phases 2-9.
**Task 10.6: External Dashboard Parity Validation Checks**
Add checks to `validate_telemetry.py` for all new span attributes and metrics.
**New span attribute checks (~8)**:
| Span Name | New Attribute |
| --------------------------- | ------------------------------------ |
| `rpc.command.server_info` | `xrpl.node.amendment_blocked` |
| `rpc.command.server_info` | `xrpl.node.server_state` |
| `tx.receive` | `xrpl.peer.version` |
| `consensus.validation.send` | `xrpl.validation.ledger_hash` |
| `consensus.validation.send` | `xrpl.validation.full` |
| `peer.validation.receive` | `xrpl.peer.validation.ledger_hash` |
| `consensus.accept` | `xrpl.consensus.validation_quorum` |
| `consensus.accept` | `xrpl.consensus.proposers_validated` |
**New metric existence checks (~13)**:
| Metric Name |
| ---------------------------------------------------------- |
| `rippled_validation_agreement{metric="agreement_pct_1h"}` |
| `rippled_validation_agreement{metric="agreement_pct_24h"}` |
| `rippled_validator_health{metric="amendment_blocked"}` |
| `rippled_validator_health{metric="unl_expiry_days"}` |
| `rippled_peer_quality{metric="peer_latency_p90_ms"}` |
| `rippled_peer_quality{metric="peers_insane_count"}` |
| `rippled_ledger_economy{metric="base_fee_xrp"}` |
| `rippled_ledger_economy{metric="transaction_rate"}` |
| `rippled_state_tracking{metric="state_value"}` |
| `rippled_ledgers_closed_total` |
| `rippled_validations_sent_total` |
| `rippled_state_changes_total` |
| `rippled_storage_detail{metric="nudb_bytes"}` |
**New dashboard load checks (~3)**:
| Dashboard |
| ------------------------------ |
| `rippled-validator-health` |
| `rippled-peer-quality` |
| `system-node-health` (updated) |
**New metric value sanity checks (~4)**:
| Check | Condition |
| ----------------------------- | ----------------- |
| `validation_agreement_pct_1h` | in [0, 100] |
| `unl_expiry_days` | > 0 (not expired) |
| `peer_latency_p90_ms` | > 0 (peers exist) |
| `state_value`                 | in [0, 6]         |
**Total new checks: ~28** (bringing total from 73 to ~101)
---
### Phase 11 — (future branch)
> **Ref**: Adds to existing Phase 11 task list. Depends on Phase 7 metrics and Phase 9 dashboards.
**Task 11.9: Alert Rules from External Dashboard**
Port 18 alert rules from the external `xrpl-validator-dashboard` to Grafana alerting provisioning.
**Critical Group** (8 rules, eval interval 10s):
| Rule | Condition | For |
| ------------------- | --------------------------------------------------------------- | --- |
| Agreement Below 90% | `rippled_validation_agreement{metric="agreement_pct_24h"} < 90` | 30s |
| Not Proposing | `rippled_state_tracking{metric="state_value"} < 6` | 10s |
| Unhealthy State | `rippled_state_tracking{metric="state_value"} < 4` | 10s |
| Amendment Blocked | `rippled_validator_health{metric="amendment_blocked"} == 1` | 1m |
| UNL Expiring | `rippled_validator_health{metric="unl_expiry_days"} < 14` | 1h |
| High IO Latency | `histogram_quantile(0.95, rippled_ios_latency_bucket) > 50` | 1m |
| High Load Factor | `rippled_load_factor_metrics{metric="load_factor"} > 1000` | 1m |
| Peer Count Critical | `rippled_server_info{metric="peers"} < 5` | 1m |
**Network Group** (3 rules, eval interval 10s):
| Rule | Condition | For |
| ------------------------- | ------------------------------------------------------------------- | --- |
| Peer Drop >10% | `delta(rippled_server_info{metric="peers"}[30s]) / ... * 100 < -10` | 30s |
| Peer Drop >30% | Same formula, threshold -30 | 30s |
| P90 Latency + Disconnects | `peer_latency_p90_ms > 500 AND rate(disconnects) > 0` | 2m |
**Performance Group** (7 rules, eval interval 10s):
| Rule | Condition | For |
| ------------------- | -------------------------------------------------------------- | --- |
| CPU High | Per-core CPU > 80% | 2m |
| Memory Critical | Memory usage > 90% | 1m |
| Disk Warning | Disk usage > 85% | 2m |
| Job Queue Overflow | `rate(rippled_jq_trans_overflow_total[5m]) > 0` | 1m |
| Upgrade Recommended | `rippled_peer_quality{metric="peers_higher_version_pct"} > 60` | 1m |
| TX Rate Drop | Transaction rate dropped > 50% in 5m window | 5m |
| Stale Ledger | `rippled_ledger_economy{metric="ledger_age_seconds"} > 30` | 1m |
**Notification channels**: Template configs for Email/SMTP, Discord, Slack, PagerDuty.
**Files**:
- `docker/telemetry/grafana/alerting/alert-rules.yaml` (new or extend existing)
- `docker/telemetry/grafana/alerting/contact-points.yaml`
- `docker/telemetry/grafana/alerting/notification-policies.yaml`
---
**Task 11.10: Dual-Datasource Architecture Documentation**
Document the external dashboard's "fast path" pattern as a future optimization for real-time panels:
- **Pattern**: A lightweight Prometheus scrape endpoint (separate from OTLP pipeline) that polls critical metrics every 2-5s, bypassing the 10s OTLP metric reader interval and Prometheus scrape interval.
- **Use case**: Real-time state panels (server state, ledger age, peer count) where 10-15s latency is too slow.
- **Decision**: Document as a future option, not implement now. Current 10s interval is acceptable for v1.
**File**: `OpenTelemetryPlan/Phase11_taskList.md` (documentation task, no code)
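If the fast path is ever implemented, the Prometheus side could be as simple as a second scrape job with a short interval (a sketch only; the `rippled-fast` job name and port 9091 are assumptions, not part of the current stack):

```yaml
scrape_configs:
  - job_name: rippled-fast        # hypothetical lightweight endpoint
    scrape_interval: 2s           # vs. the ~10-15s OTLP pipeline latency
    static_configs:
      - targets: ["rippled:9091"] # assumed port for the fast-path exporter
```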
---
## Documentation Updates
### `docs/telemetry-runbook.md` (on Phase 9 branch)
Add new sections after "Phase 9: OTel Metrics Alerting Rules":
1. **Validator Health Monitoring** — explains agreement tracking, amendment blocked, UNL expiry, with example PromQL queries
2. **Peer Quality Monitoring** — explains P90 latency, insane peers, version awareness
3. **Ledger Economy Monitoring** — explains fee/reserve gauges, transaction rate, ledger age
4. **Validation Agreement Explained** — operator-facing explanation of the reconciliation algorithm (8s grace, 5m late repair), what "missed" means, and when to worry
### `OpenTelemetryPlan/09-data-collection-reference.md` (on Phase 9 branch)
Add new metric tables in a "Phase 7+: External Dashboard Parity" section covering all 29 new metrics with their gauge names, label values, types, and sources.
---
## Cross-Phase Dependency Chain
```
Phase 2 (span attrs: amendment_blocked, server_state)
Phase 3 (span attrs: peer.version)
Phase 4 (span attrs: validation.ledger_hash, validation.full, quorum)
Phase 6 (StatsD bridge: peerDisconnectsCharges)
├── all above rebase into ──>
Phase 7 (ValidationTracker + 7 gauges + 7 counters + agreement gauge)
Phase 9 (3 dashboards + ledger economy panels + runbook + data-collection-ref)
Phase 10 (28 new validation checks in validate_telemetry.py)
Phase 11 (18 alert rules + dual-datasource docs)
```
## Rebase Strategy
After committing changes to each branch (starting from Phase 2):
1. Commit on `pratik/otel-phase2-rpc-tracing`
2. Rebase `phase3` onto `phase2`, resolve conflicts (task list files only — low risk)
3. Commit on `phase3`, rebase `phase4` onto `phase3`
4. Continue through chain: 4 → 5 → 5b → 6 → 7 → 8 → 9 → 10
5. Force-push-with-lease all affected branches
Since these are documentation-only changes (task list .md files), merge conflicts should be minimal — each file is unique to its branch.

docs/telemetry-runbook.md (new file, 792 lines):
# rippled Telemetry Operator Runbook
## Overview
rippled supports OpenTelemetry distributed tracing to provide visibility into RPC requests, transaction processing, and consensus rounds.
## Quick Start
### 1. Start the observability stack
```bash
docker compose -f docker/telemetry/docker-compose.yml up -d
```
This starts:
- **OTel Collector** on ports 4317 (gRPC) and 4318 (HTTP)
- **Jaeger** UI on http://localhost:16686
- **Prometheus** on http://localhost:9090
- **Loki** on http://localhost:3100 (log aggregation)
- **Grafana** on http://localhost:3000
### 2. Enable telemetry in rippled
Add to your `xrpld.cfg`:
```ini
[telemetry]
enabled=1
endpoint=http://localhost:4318/v1/traces
```
### 3. Build with telemetry support
```bash
conan install . --build=missing -o telemetry=True
cmake --preset default -Dtelemetry=ON
cmake --build --preset default
```
## Configuration Reference
| Option | Default | Description |
| -------------------- | --------------------------------- | ----------------------------------------- |
| `enabled` | `0` | Master switch for telemetry |
| `endpoint` | `http://localhost:4318/v1/traces` | OTLP/HTTP endpoint |
| `exporter` | `otlp_http` | Exporter type |
| `sampling_ratio`     | `1.0`                             | Head-based sampling ratio (0.0 to 1.0)    |
| `trace_rpc` | `1` | Enable RPC request tracing |
| `trace_transactions` | `1` | Enable transaction tracing |
| `trace_consensus` | `1` | Enable consensus tracing |
| `trace_peer` | `0` | Enable peer message tracing (high volume) |
| `trace_ledger` | `1` | Enable ledger tracing |
| `batch_size` | `512` | Max spans per batch export |
| `batch_delay_ms` | `5000` | Delay between batch exports |
| `max_queue_size` | `2048` | Max spans queued before dropping |
| `use_tls` | `0` | Use TLS for exporter connection |
| `tls_ca_cert` | (empty) | Path to CA certificate bundle |
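A fuller `[telemetry]` stanza combining the options above (values illustrative):

```ini
[telemetry]
enabled=1
endpoint=http://localhost:4318/v1/traces
exporter=otlp_http
sampling_ratio=0.25
trace_rpc=1
trace_transactions=1
trace_consensus=1
trace_peer=0
trace_ledger=1
batch_size=512
batch_delay_ms=5000
max_queue_size=2048
use_tls=0
```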
## Span Reference
All spans instrumented in rippled, grouped by subsystem:
### RPC Spans (Phase 2)
| Span Name | Source File | Attributes | Description |
| -------------------- | --------------------- | ---------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------- |
| `rpc.request` | ServerHandler.cpp:271 | — | Top-level HTTP RPC request |
| `rpc.process` | ServerHandler.cpp:573 | — | RPC processing (child of rpc.request) |
| `rpc.ws_message` | ServerHandler.cpp:384 | — | WebSocket RPC message |
| `rpc.command.<name>` | RPCHandler.cpp:161 | `xrpl.rpc.command`, `xrpl.rpc.version`, `xrpl.rpc.role`, `xrpl.rpc.status`, `xrpl.rpc.duration_ms`, `xrpl.rpc.error_message` | Per-command span (e.g., `rpc.command.server_info`) |
### Transaction Spans (Phase 3)
| Span Name | Source File | Attributes | Description |
| ------------ | ------------------- | ---------------------------------------------------------------------- | ------------------------------------- |
| `tx.process` | NetworkOPs.cpp:1227 | `xrpl.tx.hash`, `xrpl.tx.local`, `xrpl.tx.path` | Transaction submission and processing |
| `tx.receive` | PeerImp.cpp:1273 | `xrpl.peer.id`, `xrpl.tx.hash`, `xrpl.tx.suppressed`, `xrpl.tx.status` | Transaction received from peer relay |
| `tx.apply` | BuildLedger.cpp:88 | `xrpl.ledger.seq`, `xrpl.ledger.tx_count`, `xrpl.ledger.tx_failed` | Transaction set applied per ledger |
### Consensus Spans (Phase 4)
| Span Name | Source File | Attributes | Description |
| --------------------------- | -------------------- | ----------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------ |
| `consensus.proposal.send` | RCLConsensus.cpp:177 | `xrpl.consensus.round` | Consensus proposal broadcast |
| `consensus.ledger_close` | RCLConsensus.cpp:282 | `xrpl.consensus.ledger.seq`, `xrpl.consensus.mode` | Ledger close event |
| `consensus.accept` | RCLConsensus.cpp:395 | `xrpl.consensus.proposers`, `xrpl.consensus.round_time_ms` | Ledger accepted by consensus |
| `consensus.validation.send` | RCLConsensus.cpp:753 | `xrpl.consensus.ledger.seq`, `xrpl.consensus.proposing` | Validation sent after accept |
| `consensus.accept.apply` | RCLConsensus.cpp:453 | `xrpl.consensus.close_time`, `close_time_correct`, `close_resolution_ms`, `state`, `proposing`, `round_time_ms`, `ledger.seq` | Ledger application with close time details |
#### Close Time Queries (Tempo TraceQL)
```
# Find rounds where validators disagreed on close time
{name="consensus.accept.apply"} | xrpl.consensus.close_time_correct = false
# Find consensus failures (moved_on)
{name="consensus.accept.apply"} | xrpl.consensus.state = "moved_on"
# Find slow ledger applications (>5s)
{name="consensus.accept.apply"} | duration > 5s
# Find specific ledger's consensus details
{name="consensus.accept.apply"} | xrpl.consensus.ledger.seq = 92345678
```
### Ledger Spans (Phase 5)
| Span Name | Source File | Attributes | Description |
| ----------------- | -------------------- | ------------------------------------------------------------------ | ----------------------------- |
| `ledger.build` | BuildLedger.cpp:31 | `xrpl.ledger.seq`, `xrpl.ledger.tx_count`, `xrpl.ledger.tx_failed` | Ledger build during consensus |
| `ledger.validate` | LedgerMaster.cpp:915 | `xrpl.ledger.seq`, `xrpl.ledger.validations` | Ledger promoted to validated |
| `ledger.store` | LedgerMaster.cpp:409 | `xrpl.ledger.seq` | Ledger stored in history |
### Peer Spans (Phase 5)
| Span Name | Source File | Attributes | Description |
| ------------------------- | ---------------- | ---------------------------------------------- | ----------------------------- |
| `peer.proposal.receive` | PeerImp.cpp:1667 | `xrpl.peer.id`, `xrpl.peer.proposal.trusted` | Proposal received from peer |
| `peer.validation.receive` | PeerImp.cpp:2264 | `xrpl.peer.id`, `xrpl.peer.validation.trusted` | Validation received from peer |
## Prometheus Metrics (Spanmetrics)
The OTel Collector's spanmetrics connector automatically derives RED (Rate, Errors, Duration) metrics from every span. No custom metrics code is needed in rippled.
### Generated Metric Names
| Prometheus Metric | Type | Description |
| -------------------------------------------------- | --------- | ---------------------------- |
| `traces_span_metrics_calls_total` | Counter | Total span invocations |
| `traces_span_metrics_duration_milliseconds_bucket` | Histogram | Latency distribution buckets |
| `traces_span_metrics_duration_milliseconds_count` | Histogram | Latency observation count |
| `traces_span_metrics_duration_milliseconds_sum` | Histogram | Cumulative latency |
### Metric Labels
Every metric carries these standard labels:
| Label | Source | Example |
| -------------- | ------------------ | ---------------------------------------- |
| `span_name` | Span name | `rpc.command.server_info` |
| `status_code` | Span status | `STATUS_CODE_UNSET`, `STATUS_CODE_ERROR` |
| `service_name` | Resource attribute | `rippled` |
| `span_kind` | Span kind | `SPAN_KIND_INTERNAL` |
Additionally, span attributes configured as dimensions in the collector become metric labels (dots → underscores):
| Span Attribute | Metric Label | Applies To |
| ------------------------------ | ------------------------------ | ------------------------------- |
| `xrpl.rpc.command` | `xrpl_rpc_command` | `rpc.command.*` spans |
| `xrpl.rpc.status` | `xrpl_rpc_status` | `rpc.command.*` spans |
| `xrpl.consensus.mode` | `xrpl_consensus_mode` | `consensus.ledger_close` spans |
| `xrpl.tx.local` | `xrpl_tx_local` | `tx.process` spans |
| `xrpl.peer.proposal.trusted` | `xrpl_peer_proposal_trusted` | `peer.proposal.receive` spans |
| `xrpl.peer.validation.trusted` | `xrpl_peer_validation_trusted` | `peer.validation.receive` spans |
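For example, the per-command error rate can be derived from these labels (a sketch using the dimensions above; this is the same "error spans / total spans × 100" shape the RPC Performance dashboard uses):

```promql
# % of rpc.command.* spans ending in error, per command, over 5m
100 *
  sum by (xrpl_rpc_command) (rate(traces_span_metrics_calls_total{span_name=~"rpc.command.*", status_code="STATUS_CODE_ERROR"}[5m]))
/
  sum by (xrpl_rpc_command) (rate(traces_span_metrics_calls_total{span_name=~"rpc.command.*"}[5m]))
```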
### Histogram Buckets
Configured in `otel-collector-config.yaml`:
```
1ms, 5ms, 10ms, 25ms, 50ms, 100ms, 250ms, 500ms, 1s, 5s
```
## System Metrics (beast::insight via OTel native)
rippled has a built-in metrics framework (`beast::insight`) that exports metrics natively via OTLP/HTTP. These complement the span-derived RED metrics by providing system-level gauges, counters, and timers that don't map to individual trace spans.
### Configuration
Add to `xrpld.cfg`:
```ini
[insight]
server=otel
endpoint=http://localhost:4318/v1/metrics
prefix=rippled
```
The OTel Collector receives these via the OTLP receiver (same endpoint as traces, port 4318) and exports them to Prometheus alongside spanmetrics.
#### StatsD fallback (backward compatibility)
The legacy StatsD backend is still available:
```ini
[insight]
server=statsd
address=127.0.0.1:8125
prefix=rippled
```
When using StatsD, uncomment the `statsd` receiver in `otel-collector-config.yaml` and add port `8125:8125/udp` to the docker-compose otel-collector service.
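The uncommented receiver block would look roughly like this (a sketch; check the shipped `otel-collector-config.yaml` for the exact keys in use):

```yaml
receivers:
  statsd:
    endpoint: "0.0.0.0:8125"     # matches the 8125/udp docker-compose mapping
    aggregation_interval: 10s
```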
### Metric Reference
#### Gauges
| Prometheus Metric | Source | Description |
| --------------------------------------------- | ------------------------- | -------------------------------------------------------------------------- |
| `rippled_LedgerMaster_Validated_Ledger_Age` | LedgerMaster.h:373 | Age of validated ledger (seconds) |
| `rippled_LedgerMaster_Published_Ledger_Age` | LedgerMaster.h:374 | Age of published ledger (seconds) |
| `rippled_State_Accounting_{Mode}_duration` | NetworkOPs.cpp:774 | Time in each operating mode (Disconnected/Connected/Syncing/Tracking/Full) |
| `rippled_State_Accounting_{Mode}_transitions` | NetworkOPs.cpp:780 | Transition count per mode |
| `rippled_Peer_Finder_Active_Inbound_Peers` | PeerfinderManager.cpp:214 | Active inbound peer connections |
| `rippled_Peer_Finder_Active_Outbound_Peers` | PeerfinderManager.cpp:215 | Active outbound peer connections |
| `rippled_Overlay_Peer_Disconnects` | OverlayImpl.h:557 | Peer disconnect count |
| `rippled_job_count` | JobQueue.cpp:26 | Current job queue depth |
| `rippled_{category}_Bytes_In/Out` | OverlayImpl.h:535 | Overlay traffic bytes per category (57 categories) |
| `rippled_{category}_Messages_In/Out` | OverlayImpl.h:535 | Overlay traffic messages per category |
#### OTel MetricsRegistry Gauges (Phase 9)
These gauges are exported via the OTel Metrics SDK `PeriodicMetricReader` (10s interval), NOT through beast::insight.
| Prometheus Metric | Source | Description |
| ----------------------------------------------------------- | ------------------- | -------------------------------------------- |
| `rippled_server_info{metric="server_state"}` | MetricsRegistry.cpp | Operating mode (0=DISCONNECTED .. 4=FULL) |
| `rippled_server_info{metric="uptime"}` | MetricsRegistry.cpp | Seconds since server start |
| `rippled_server_info{metric="peers"}` | MetricsRegistry.cpp | Total connected peers |
| `rippled_server_info{metric="validated_ledger_seq"}` | MetricsRegistry.cpp | Validated ledger sequence number |
| `rippled_server_info{metric="ledger_current_index"}` | MetricsRegistry.cpp | Current open ledger sequence |
| `rippled_server_info{metric="peer_disconnects_resources"}` | MetricsRegistry.cpp | Cumulative resource-related peer disconnects |
| `rippled_server_info{metric="last_close_proposers"}` | MetricsRegistry.cpp | Proposers in last closed round |
| `rippled_server_info{metric="last_close_converge_time_ms"}` | MetricsRegistry.cpp | Last close convergence time (ms) |
| `rippled_build_info{version="<ver>"}` | MetricsRegistry.cpp | Info-style metric (always 1) |
| `rippled_complete_ledgers{bound="start\|end",index="<N>"}` | MetricsRegistry.cpp | Complete ledger range start/end pairs |
| `rippled_db_metrics{metric="db_kb_total"}` | MetricsRegistry.cpp | Total database size (KB) |
| `rippled_db_metrics{metric="db_kb_ledger"}` | MetricsRegistry.cpp | Ledger database size (KB) |
| `rippled_db_metrics{metric="db_kb_transaction"}` | MetricsRegistry.cpp | Transaction database size (KB) |
| `rippled_db_metrics{metric="historical_perminute"}` | MetricsRegistry.cpp | Historical ledger fetches per minute |
| `rippled_cache_metrics{metric="AL_size"}` | MetricsRegistry.cpp | AcceptedLedger cache size |
| `rippled_nodestore_state{metric="node_reads_duration_us"}` | MetricsRegistry.cpp | Cumulative read time (microseconds) |
| `rippled_nodestore_state{metric="read_request_bundle"}` | MetricsRegistry.cpp | Read request bundle count |
| `rippled_nodestore_state{metric="read_threads_running"}` | MetricsRegistry.cpp | Active read threads |
| `rippled_nodestore_state{metric="read_threads_total"}` | MetricsRegistry.cpp | Total read threads configured |
#### Counters
| Prometheus Metric | Source | Description |
| --------------------------------- | --------------------- | ------------------------------ |
| `rippled_rpc_requests` | ServerHandler.cpp:108 | Total RPC request count |
| `rippled_ledger_fetches` | InboundLedgers.cpp:44 | Ledger fetch request count |
| `rippled_ledger_history_mismatch` | LedgerHistory.cpp:16 | Ledger hash mismatch count |
| `rippled_warn` | Logic.h:33 | Resource manager warning count |
| `rippled_drop` | Logic.h:34 | Resource manager drop count |
#### Histograms (from StatsD timers)
| Prometheus Metric | Source | Description |
| ----------------------- | --------------------- | ------------------------------ |
| `rippled_rpc_time` | ServerHandler.cpp:110 | RPC response time (ms) |
| `rippled_rpc_size` | ServerHandler.cpp:109 | RPC response size (bytes) |
| `rippled_ios_latency` | Application.cpp:438 | I/O service loop latency (ms) |
| `rippled_pathfind_fast` | PathRequests.h:23 | Fast pathfinding duration (ms) |
| `rippled_pathfind_full` | PathRequests.h:24 | Full pathfinding duration (ms) |
## Grafana Dashboards
Fifteen dashboards are pre-provisioned in `docker/telemetry/grafana/dashboards/`:
### RPC Performance (`rippled-rpc-perf`)
| Panel | Type | PromQL | Labels Used |
| --------------------------- | ---------- | -------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------- |
| RPC Request Rate by Command | timeseries | `sum by (xrpl_rpc_command) (rate(traces_span_metrics_calls_total{span_name=~"rpc.command.*"}[5m]))` | `xrpl_rpc_command` |
| RPC Latency p95 by Command | timeseries | `histogram_quantile(0.95, sum by (le, xrpl_rpc_command) (rate(traces_span_metrics_duration_milliseconds_bucket{span_name=~"rpc.command.*"}[5m])))` | `xrpl_rpc_command` |
| RPC Error Rate | bargauge | Error spans / total spans × 100, grouped by `xrpl_rpc_command` | `xrpl_rpc_command`, `status_code` |
| RPC Latency Heatmap | heatmap | `sum(increase(traces_span_metrics_duration_milliseconds_bucket{span_name=~"rpc.command.*"}[5m])) by (le)` | `le` (bucket boundaries) |
| Overall RPC Throughput | timeseries | `rpc.request` + `rpc.process` rate | — |
| RPC Success vs Error | timeseries | by `status_code` (UNSET vs ERROR) | `status_code` |
| Top Commands by Volume | bargauge | `topk(10, ...)` by `xrpl_rpc_command` | `xrpl_rpc_command` |
| WebSocket Message Rate | stat | `rpc.ws_message` rate | — |
### Transaction Overview (`rippled-transactions`)
| Panel | Type | PromQL | Labels Used |
| --------------------------------- | ---------- | -------------------------------------------------------------------------------------------- | --------------- |
| Transaction Processing Rate | timeseries | `rate(traces_span_metrics_calls_total{span_name="tx.process"}[5m])` and `tx.receive` | `span_name` |
| Transaction Processing Latency | timeseries | `histogram_quantile(0.95 / 0.50, ... {span_name="tx.process"})` | — |
| Transaction Path Distribution | piechart | `sum by (xrpl_tx_local) (rate(traces_span_metrics_calls_total{span_name="tx.process"}[5m]))` | `xrpl_tx_local` |
| Transaction Receive vs Suppressed | timeseries | `rate(traces_span_metrics_calls_total{span_name="tx.receive"}[5m])` | — |
| TX Processing Duration Heatmap | heatmap | `tx.process` histogram buckets | `le` |
| TX Apply Duration per Ledger | timeseries | p95/p50 of `tx.apply` | — |
| Peer TX Receive Rate | timeseries | `tx.receive` rate | — |
| TX Apply Failed Rate | stat | `tx.apply` with `STATUS_CODE_ERROR` | `status_code` |
### Consensus Health (`rippled-consensus`)
| Panel | Type | PromQL | Labels Used |
| ----------------------------- | ---------- | ---------------------------------------------------------------------------------- | --------------------- |
| Consensus Round Duration | timeseries | `histogram_quantile(0.95 / 0.50, ... {span_name="consensus.accept"})` | — |
| Consensus Proposals Sent Rate | timeseries | `rate(traces_span_metrics_calls_total{span_name="consensus.proposal.send"}[5m])` | — |
| Ledger Close Duration | timeseries | `histogram_quantile(0.95, ... {span_name="consensus.ledger_close"})` | — |
| Validation Send Rate | stat | `rate(traces_span_metrics_calls_total{span_name="consensus.validation.send"}[5m])` | — |
| Ledger Apply Duration | timeseries | `histogram_quantile(0.95 / 0.50, ... {span_name="consensus.accept.apply"})` | — |
| Close Time Agreement | timeseries | `rate(traces_span_metrics_calls_total{span_name="consensus.accept.apply"}[5m])` | — |
| Consensus Mode Over Time | timeseries | `consensus.ledger_close` by `xrpl_consensus_mode` | `xrpl_consensus_mode` |
| Accept vs Close Rate | timeseries | `consensus.accept` vs `consensus.ledger_close` rate | — |
| Validation vs Close Rate | timeseries | `consensus.validation.send` vs `consensus.ledger_close` | — |
| Accept Duration Heatmap | heatmap | `consensus.accept` histogram buckets | `le` |
### Ledger Operations (`rippled-ledger-ops`)
| Panel | Type | PromQL | Labels Used |
| ----------------------- | ---------- | ---------------------------------------------- | ----------- |
| Ledger Build Rate | stat | `ledger.build` call rate | — |
| Ledger Build Duration | timeseries | p95/p50 of `ledger.build` | — |
| Ledger Validation Rate | stat | `ledger.validate` call rate | — |
| Build Duration Heatmap | heatmap | `ledger.build` histogram buckets | `le` |
| TX Apply Duration | timeseries | p95/p50 of `tx.apply` | — |
| TX Apply Rate | timeseries | `tx.apply` call rate | — |
| Ledger Store Rate | stat | `ledger.store` call rate | — |
| Build vs Close Duration | timeseries | p95 `ledger.build` vs `consensus.ledger_close` | — |
### Peer Network (`rippled-peer-net`)
Requires `trace_peer=1` in the `[telemetry]` config section.
| Panel | Type | PromQL | Labels Used |
| -------------------------------- | ---------- | --------------------------------- | ------------------------------ |
| Proposal Receive Rate | timeseries | `peer.proposal.receive` rate | — |
| Validation Receive Rate | timeseries | `peer.validation.receive` rate | — |
| Proposals Trusted vs Untrusted | piechart | by `xrpl_peer_proposal_trusted` | `xrpl_peer_proposal_trusted` |
| Validations Trusted vs Untrusted | piechart | by `xrpl_peer_validation_trusted` | `xrpl_peer_validation_trusted` |
### Node Health — System Metrics (`rippled-system-node-health`)
| Panel | Type | PromQL | Labels Used |
| -------------------------- | ---------- | ------------------------------------------------------ | ---------------- |
| Validated Ledger Age | stat | `rippled_LedgerMaster_Validated_Ledger_Age` | — |
| Published Ledger Age | stat | `rippled_LedgerMaster_Published_Ledger_Age` | — |
| Operating Mode Duration | timeseries | `rippled_State_Accounting_*_duration` | — |
| Operating Mode Transitions | timeseries | `rippled_State_Accounting_*_transitions` | — |
| I/O Latency | timeseries | `histogram_quantile(0.95, rippled_ios_latency_bucket)` | — |
| Job Queue Depth | timeseries | `rippled_job_count` | — |
| Ledger Fetch Rate | stat | `rate(rippled_ledger_fetches[5m])` | — |
| Ledger History Mismatches | stat | `rate(rippled_ledger_history_mismatch[5m])` | — |
| Server State | stat | `rippled_server_info{metric="server_state"}` | `metric` |
| Uptime | stat | `rippled_server_info{metric="uptime"}` | `metric` |
| Peer Count | stat | `rippled_server_info{metric="peers"}` | `metric` |
| Validated Ledger Seq | stat | `rippled_server_info{metric="validated_ledger_seq"}` | `metric` |
| Build Version | stat | `rippled_build_info` | `version` |
| Complete Ledger Ranges | table | `rippled_complete_ledgers` | `bound`, `index` |
| Database Sizes | timeseries | `rippled_db_metrics{metric=~"db_kb_.*"}` | `metric` |
| Historical Fetch Rate | stat | `rippled_db_metrics{metric="historical_perminute"}` | `metric` |
### Network Traffic — System Metrics (`rippled-system-network`)
| Panel | Type | PromQL | Labels Used |
| ---------------------- | ---------- | -------------------------------------- | ----------- |
| Active Peers | timeseries | `rippled_Peer_Finder_Active_*_Peers` | — |
| Peer Disconnects | timeseries | `rippled_Overlay_Peer_Disconnects` | — |
| Total Network Bytes | timeseries | `rippled_total_Bytes_In/Out` | — |
| Total Network Messages | timeseries | `rippled_total_Messages_In/Out` | — |
| Transaction Traffic | timeseries | `rippled_transactions_Messages_In/Out` | — |
| Proposal Traffic | timeseries | `rippled_proposals_Messages_In/Out` | — |
| Validation Traffic | timeseries | `rippled_validations_Messages_In/Out` | — |
| Traffic by Category | bargauge | `topk(10, rippled_*_Bytes_In)` | — |
### RPC & Pathfinding — System Metrics (`rippled-system-rpc`)
| Panel | Type | PromQL | Labels Used |
| ------------------------- | ---------- | -------------------------------------------------------- | ----------- |
| RPC Request Rate | stat | `rate(rippled_rpc_requests[5m])` | — |
| RPC Response Time | timeseries | `histogram_quantile(0.95, rippled_rpc_time_bucket)` | — |
| RPC Response Size | timeseries | `histogram_quantile(0.95, rippled_rpc_size_bucket)` | — |
| RPC Response Time Heatmap | heatmap | `rippled_rpc_time_bucket` | — |
| Pathfinding Fast Duration | timeseries | `histogram_quantile(0.95, rippled_pathfind_fast_bucket)` | — |
| Pathfinding Full Duration | timeseries | `histogram_quantile(0.95, rippled_pathfind_full_bucket)` | — |
| Resource Warnings Rate | stat | `rate(rippled_warn[5m])` | — |
| Resource Drops Rate | stat | `rate(rippled_drop[5m])` | — |
### Span → Metric → Dashboard Summary
| Span Name | Prometheus Metric Filter | Grafana Dashboard |
| --------------------------- | ----------------------------------------- | --------------------------------------------- |
| `rpc.request` | `{span_name="rpc.request"}` | RPC Performance (Overall Throughput) |
| `rpc.process` | `{span_name="rpc.process"}` | RPC Performance (Overall Throughput) |
| `rpc.ws_message` | `{span_name="rpc.ws_message"}` | RPC Performance (WebSocket Rate) |
| `rpc.command.*` | `{span_name=~"rpc.command.*"}` | RPC Performance (Rate, Latency, Error, Top) |
| `tx.process` | `{span_name="tx.process"}` | Transaction Overview (Rate, Latency, Heatmap) |
| `tx.receive` | `{span_name="tx.receive"}` | Transaction Overview (Rate, Receive) |
| `tx.apply` | `{span_name="tx.apply"}` | Transaction Overview + Ledger Ops (Apply) |
| `consensus.accept` | `{span_name="consensus.accept"}` | Consensus Health (Duration, Rate, Heatmap) |
| `consensus.proposal.send` | `{span_name="consensus.proposal.send"}` | Consensus Health (Proposals Rate) |
| `consensus.ledger_close` | `{span_name="consensus.ledger_close"}` | Consensus Health (Close, Mode) |
| `consensus.validation.send` | `{span_name="consensus.validation.send"}` | Consensus Health (Validation Rate) |
| `consensus.accept.apply` | `{span_name="consensus.accept.apply"}` | Consensus Health (Apply Duration, Close Time) |
| `ledger.build` | `{span_name="ledger.build"}` | Ledger Ops (Build Rate, Duration, Heatmap) |
| `ledger.validate` | `{span_name="ledger.validate"}` | Ledger Ops (Validation Rate) |
| `ledger.store` | `{span_name="ledger.store"}` | Ledger Ops (Store Rate) |
| `peer.proposal.receive` | `{span_name="peer.proposal.receive"}` | Peer Network (Rate, Trusted/Untrusted) |
| `peer.validation.receive` | `{span_name="peer.validation.receive"}` | Peer Network (Rate, Trusted/Untrusted) |
## Log-Trace Correlation (Phase 8)
When rippled is built with `telemetry=ON`, log lines emitted within an active OpenTelemetry span automatically include `trace_id` and `span_id` fields:
```
2024-01-15T10:30:45.123Z LedgerMaster:NFO trace_id=abc123def456789012345678abcdef01 span_id=0123456789abcdef Validated ledger 42
```
This enables bidirectional navigation between logs and traces in Grafana:
- **Tempo -> Loki**: Click "Logs for this trace" on any trace in Grafana Tempo to see all log lines from that trace.
- **Loki -> Tempo**: Click the `TraceID` derived field link on any log line containing `trace_id=` to jump to the full trace in Tempo.
### Log Ingestion Pipeline
Log files are ingested by the OTel Collector's `filelog` receiver, which tails `debug.log` files and parses them with a regex that extracts `timestamp`, `partition`, `severity`, `trace_id`, `span_id`, and `message` fields. Parsed entries are exported to Grafana Loki.
### LogQL Query Examples
```logql
# Find all logs for a specific trace
{job="rippled"} |= "trace_id=abc123def456789012345678abcdef01"
# Error logs with trace context (log lines with ERR severity that have a trace_id)
{job="rippled"} |= "ERR" |= "trace_id="
# All logs from a specific partition that were emitted during a span
{job="rippled"} |= "LedgerMaster" | regexp `trace_id=(?P<trace_id>[a-f0-9]+)` | trace_id != ""
# Logs from the last hour containing trace context
{job="rippled"} |= "trace_id=" | regexp `(?P<partition>\S+):(?P<sev>\S+)\s+trace_id=(?P<tid>[a-f0-9]+)`
# Count of traced log lines over 5m windows
count_over_time({job="rippled"} |= "trace_id=" [5m])
```
### Verifying Log Correlation
1. Start the observability stack and rippled with telemetry enabled.
2. Send an RPC request: `curl http://localhost:5005 -d '{"method":"server_info"}'`
3. Check the debug.log for `trace_id=` entries: `grep trace_id= /path/to/debug.log`
4. Open Grafana at http://localhost:3000 -> Explore -> Loki and search for `{job="rippled"} |= "trace_id="`.
5. Click the TraceID link to navigate to the corresponding trace in Tempo.
## Phase 9: OTel Metrics Alerting Rules
The following alerting rules are recommended for the Phase 9 OTel SDK metrics.
Add to your Prometheus alerting rules configuration.
### NodeStore
| Alert Name | Severity | Condition | For | Description |
| --------------------------- | -------- | ---------------------------------------------------- | --- | ------------------------------------------------------- |
| `NodeStoreHighWriteLoad` | Warning | `rippled_nodestore_state{metric="write_load"} > 100` | 5m | NodeStore backend is under sustained write pressure |
| `NodeStoreReadQueueBacklog` | Warning | `rippled_nodestore_state{metric="read_queue"} > 500` | 5m | Prefetch thread pool is saturated; reads are backing up |
### Cache
| Alert Name | Severity | Condition | For | Description |
| ----------------------- | -------- | ------------------------------------------------------- | --- | ------------------------------------------------------ |
| `SLECacheHitRateLow` | Warning | `rippled_cache_metrics{metric="SLE_hit_rate"} < 0.5` | 10m | SLE cache is thrashing; consider increasing cache size |
| `LedgerCacheHitRateLow` | Warning | `rippled_cache_metrics{metric="ledger_hit_rate"} < 0.5` | 10m | Ledger cache hit rate is degraded |
### Transaction Queue
| Alert Name | Severity | Condition | For | Description |
| ---------------------- | -------- | ---------------------------------------------------------------------------------------------------------------------- | --- | -------------------------------------------------- |
| `TxQNearCapacity` | Warning | `rippled_txq_metrics{metric="txq_count"} / rippled_txq_metrics{metric="txq_max_size"} > 0.8` | 5m | TxQ is >80% full; transactions may be rejected |
| `TxQHighFeeEscalation` | Warning | `rippled_txq_metrics{metric="txq_open_ledger_fee_level"} / rippled_txq_metrics{metric="txq_reference_fee_level"} > 10` | 5m | Fee escalation is 10x above reference; high demand |
### Load Factor
| Alert Name | Severity | Condition | For | Description |
| --------------------- | -------- | -------------------------------------------------------------- | --- | -------------------------------------------------------------- |
| `HighLoadFactor` | Warning | `rippled_load_factor_metrics{metric="load_factor"} > 5` | 10m | Combined load factor is elevated; transactions cost 5x+ normal |
| `HighLocalLoadFactor` | Critical | `rippled_load_factor_metrics{metric="load_factor_local"} > 10` | 5m | Local server load is critically elevated |
### RPC Performance
| Alert Name | Severity | Condition | For | Description |
| ------------------ | -------- | ---------------------------------------------------------------------------------------------------------- | --- | --------------------------------- |
| `HighRPCErrorRate` | Warning | `sum(rate(rippled_rpc_method_errored_total[5m])) / sum(rate(rippled_rpc_method_started_total[5m])) > 0.05` | 5m | >5% of RPC calls are erroring |
| `SlowRPCLatency` | Warning | `histogram_quantile(0.95, sum by (le) (rate(rippled_rpc_method_duration_us_bucket[5m]))) > 5000000` | 5m | RPC p95 latency exceeds 5 seconds |
### Job Queue
| Alert Name | Severity | Condition | For | Description |
| ------------------ | -------- | ----------------------------------------------------------------------------------------------------- | --- | ---------------------------------------------------- |
| `JobQueueBacklog` | Warning | `sum(rate(rippled_job_queued_total[5m])) - sum(rate(rippled_job_finished_total[5m])) > 100` | 5m | Jobs are being queued faster than they're completing |
| `SlowJobExecution` | Warning | `histogram_quantile(0.95, sum by (le) (rate(rippled_job_running_duration_us_bucket[5m]))) > 10000000` | 5m | Job execution p95 exceeds 10 seconds |
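Translated into Prometheus alerting-rule syntax, two of the conditions above look like this (a sketch only; the group name and annotation wording are illustrative, while the expressions are taken verbatim from the tables):

```yaml
# Sketch: rule-group name and labels are illustrative.
groups:
  - name: rippled-telemetry
    rules:
      - alert: TxQNearCapacity
        expr: rippled_txq_metrics{metric="txq_count"} / rippled_txq_metrics{metric="txq_max_size"} > 0.8
        for: 5m
        labels:
          severity: warning
        annotations:
          description: "TxQ is >80% full; transactions may be rejected"
      - alert: HighRPCErrorRate
        expr: sum(rate(rippled_rpc_method_errored_total[5m])) / sum(rate(rippled_rpc_method_started_total[5m])) > 0.05
        for: 5m
        labels:
          severity: warning
        annotations:
          description: ">5% of RPC calls are erroring"
```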
## Validator Health Monitoring (Phase 7+)
Phase 7 introduces native metrics for validator health, validation agreement, peer quality, ledger economy, and state tracking — inspired by the community [xrpl-validator-dashboard](https://github.com/realgrapedrop/xrpl-validator-dashboard). These metrics are exported via the OTel Metrics SDK `PeriodicMetricReader` (10s interval).
### Validation Agreement
The `ValidationTracker` class computes rolling validation agreement between this node and network consensus. It maintains 1h and 24h sliding windows with an 8-second grace period and a 5-minute late-repair window.
| Prometheus Metric | Description |
| ---------------------------------------------------------- | ------------------------------ |
| `rippled_validation_agreement{metric="agreement_pct_1h"}` | Agreement % over last 1 hour |
| `rippled_validation_agreement{metric="agreement_pct_24h"}` | Agreement % over last 24 hours |
| `rippled_validation_agreement{metric="agreements_1h"}` | Agreed validations in 1h |
| `rippled_validation_agreement{metric="missed_1h"}` | Missed validations in 1h |
| `rippled_validation_agreement{metric="agreements_24h"}` | Agreed validations in 24h |
| `rippled_validation_agreement{metric="missed_24h"}` | Missed validations in 24h |
| `rippled_validations_sent_total` | Total validations sent |
| `rippled_validations_checked_total` | Total network validations seen |
| `rippled_validation_agreements_total` | Cumulative agreements |
| `rippled_validation_missed_total` | Cumulative misses |
**How reconciliation works**:
1. When the node sends a validation for ledger X, the tracker records `weValidated=true`
2. When the network validates a ledger, the tracker records `networkValidated=true`
3. After an 8-second grace period, the tracker reconciles: if both are true for the same ledger hash, it's an agreement; otherwise, a miss
4. If a late validation arrives within 5 minutes, a previous miss can be corrected (late repair)
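The four steps above can be sketched as a toy model (class and method names here are illustrative, not the actual `ValidationTracker` API; timing is reduced to an explicit `reconcile()` call standing in for the grace-period expiry):

```python
class AgreementTracker:
    """Toy sketch of the reconciliation logic described above."""

    def __init__(self):
        self.ours = {}      # ledger seq -> hash we validated
        self.network = {}   # ledger seq -> hash the network validated
        self.result = {}    # ledger seq -> "agreement" | "miss"
        self.agreements = 0
        self.missed = 0

    def we_validated(self, seq, ledger_hash):
        self.ours[seq] = ledger_hash
        # Late repair: a validation arriving after reconciliation (but within
        # the repair window) can convert a recorded miss into an agreement.
        if self.result.get(seq) == "miss" and self.network.get(seq) == ledger_hash:
            self.result[seq] = "agreement"
            self.missed -= 1
            self.agreements += 1

    def network_validated(self, seq, ledger_hash):
        self.network[seq] = ledger_hash

    def reconcile(self, seq):
        # Called once the grace period for `seq` has elapsed.
        if seq in self.result:
            return
        ours = self.ours.get(seq)
        if ours is not None and ours == self.network.get(seq):
            self.result[seq] = "agreement"
            self.agreements += 1
        else:
            self.result[seq] = "miss"
            self.missed += 1
```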
**When to worry**: Agreement below 90% over 24h indicates the node is missing network consensus — check connectivity, clock sync, and whether the node is in `Full` mode.
```promql
# Agreement percentage over 24 hours
rippled_validation_agreement{metric="agreement_pct_24h"}
# Validation send rate per minute (~12-20, i.e. one every 3-5s, in normal operation)
rate(rippled_validations_sent_total[5m]) * 60
# Ratio of agreements to total reconciled
rippled_validation_agreements_total / (rippled_validation_agreements_total + rippled_validation_missed_total)
```
### Validator Health Gauges
| Prometheus Metric | Description | Healthy Value |
| ------------------------------------------------------ | ----------------------------------- | ----------------------- |
| `rippled_validator_health{metric="amendment_blocked"}` | 1 if amendment-blocked, 0 if not | 0 |
| `rippled_validator_health{metric="unl_blocked"}` | 1 if UNL-blocked, 0 if not | 0 |
| `rippled_validator_health{metric="unl_expiry_days"}` | Days until UNL list expires | > 14 |
| `rippled_validator_health{metric="validation_quorum"}` | Current validation quorum threshold | Network-dependent (~28) |
```promql
# Alert if amendment blocked
rippled_validator_health{metric="amendment_blocked"} == 1
# Alert if UNL expiring within 14 days
rippled_validator_health{metric="unl_expiry_days"} < 14
```
### Peer Quality Monitoring
| Prometheus Metric | Description |
| --------------------------------------------------------- | --------------------------------------- |
| `rippled_peer_quality{metric="peer_latency_p90_ms"}` | P90 peer latency in milliseconds |
| `rippled_peer_quality{metric="peers_insane_count"}` | Peers with diverged/insane tracking |
| `rippled_peer_quality{metric="peers_higher_version_pct"}` | % of peers running a newer version |
| `rippled_peer_quality{metric="upgrade_recommended"}` | 1 if >60% of peers are on newer version |
| `rippled_Overlay_Peer_Disconnects_Charges` | Disconnects due to resource charges |
**Key insight**: If `upgrade_recommended` is 1, the node is running an older version than the majority of the network. This doesn't affect functionality immediately but may cause issues when amendments activate.
```promql
# P90 peer latency trend
rippled_peer_quality{metric="peer_latency_p90_ms"}
# Correlate high latency with disconnects
rippled_peer_quality{metric="peer_latency_p90_ms"} > 500
and rate(rippled_Overlay_Peer_Disconnects_Charges[5m]) > 0
```
### Ledger Economy Monitoring
| Prometheus Metric | Description |
| ----------------------------------------------------- | ---------------------------------- |
| `rippled_ledger_economy{metric="base_fee_xrp"}` | Base fee in drops |
| `rippled_ledger_economy{metric="reserve_base_xrp"}` | Account reserve in drops |
| `rippled_ledger_economy{metric="reserve_inc_xrp"}` | Owner reserve increment in drops |
| `rippled_ledger_economy{metric="ledger_age_seconds"}` | Seconds since last validated close |
| `rippled_ledger_economy{metric="transaction_rate"}` | Smoothed transaction rate |
| `rippled_ledgers_closed_total` | Total ledgers closed |
```promql
# Fee values (should match server_info output)
rippled_ledger_economy{metric="base_fee_xrp"}
# Ledger age — should reset to ~0 every 3-5s
rippled_ledger_economy{metric="ledger_age_seconds"}
# Ledger close rate (should be ~12-20 per minute)
rate(rippled_ledgers_closed_total[5m]) * 60
```
### State Tracking
| Prometheus Metric | Description |
| ---------------------------------------------------------------- | ------------------------------ |
| `rippled_state_tracking{metric="state_value"}` | Numeric state (0-6, see table) |
| `rippled_state_tracking{metric="time_in_current_state_seconds"}` | Duration in current state |
| `rippled_state_changes_total` | Total state transitions |
**State value encoding**:
| Value | State | Meaning |
| ----- | ------------ | ---------------------------------------------------- |
| 0 | disconnected | No network connectivity |
| 1 | connected | Connected but not syncing |
| 2 | syncing | Fetching ledger history |
| 3 | tracking | Following network but not fully validated |
| 4 | full | Fully synced, not validating |
| 5 | validating | Fully synced and validating |
| 6 | proposing | Fully synced, validating, and proposing in consensus |
Values 5-6 combine `OperatingMode` (0-4) with `ConsensusMode` (validating/proposing) to give a richer picture of node participation.
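That combination can be written down directly. In this sketch the numeric operating mode and the consensus-mode strings are hypothetical stand-ins for the actual enums:

```python
def state_value(operating_mode, consensus_mode=None):
    """Collapse OperatingMode (0-4) and ConsensusMode into the 0-6 encoding above.

    operating_mode: 0=disconnected .. 4=full (hypothetical numeric stand-in)
    consensus_mode: None, "validating", or "proposing"
    """
    if operating_mode < 4:  # disconnected/connected/syncing/tracking pass through
        return operating_mode
    if consensus_mode == "proposing":
        return 6
    if consensus_mode == "validating":
        return 5
    return 4  # full, but not validating
```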
```promql
# State timeline (should stay at 5 or 6 for validators)
rippled_state_tracking{metric="state_value"}
# Alert on frequent state changes (flapping)
rate(rippled_state_changes_total[1h]) > 2
```
### Grafana Dashboards (Phase 9)
| Dashboard | UID | Panels | Key Metrics |
| ------------------ | -------------------------- | ------ | --------------------------------------------------------- |
| Validator Health | `rippled-validator-health` | 13 | Agreement %, validation rate, amendment/UNL health, state |
| Peer Quality | `rippled-peer-quality` | 6 | P90 latency, insane peers, version awareness |
| System Node Health | (updated) | +5 | Ledger economy row: fee, reserves, age, tx rate |
---
## Troubleshooting
### No OTel SDK metrics in Prometheus
1. Verify `enabled=1` in the `[telemetry]` config section
2. Check that `metrics_endpoint` points to the OTel Collector's HTTP receiver
(default: `http://localhost:4318/v1/metrics`)
3. Check rippled logs for `MetricsRegistry: started successfully` message
4. Verify the OTel Collector is configured with an OTLP receiver and Prometheus exporter
5. Check Prometheus targets page for the collector scrape target
### Cache hit rates are zero
Cache hit rates may be zero during startup before caches are warmed. Wait for the
node to reach `Full` operating mode and process several ledgers before investigating.
### NodeStore I/O counters not incrementing
NodeStore counters are cumulative and may appear flat if the node is idle. Submit
some transactions or RPC requests to generate I/O activity.
### No traces appearing in Jaeger
1. Check rippled logs for `Telemetry starting` message
2. Verify `enabled=1` in the `[telemetry]` config section
3. Test collector connectivity: `curl -v http://localhost:4318/v1/traces`
4. Check collector logs: `docker compose logs otel-collector`
### No system metrics in Prometheus
1. Check rippled logs for `OTelCollector starting` message
2. Verify `server=otel` in the `[insight]` config section
3. Verify the endpoint in `[insight]` points to the OTLP/HTTP port (default: `http://localhost:4318/v1/metrics`)
4. Check that the `otlp` receiver is in the metrics pipeline receivers in `otel-collector-config.yaml`
5. Query Prometheus directly: `curl 'http://localhost:9090/api/v1/query?query=rippled_job_count'`
### Server info gauge shows server_state=0
This is normal during startup. The server starts in DISCONNECTED mode (0) and
progresses through CONNECTED (1), SYNCING (2), TRACKING (3), to FULL (4).
Wait for the node to sync with the network.
### Database metrics showing zero
The `getKBUsed*()` methods require SQLite databases to exist. If running with
`--standalone` or before the first ledger is stored, these will be zero.
### High memory usage
- Reduce `sampling_ratio` (e.g., `0.1` for 10% sampling)
- Reduce `max_queue_size` and `batch_size`
- Disable high-volume trace categories: `trace_peer=0`
### Collector connection failures
- Verify endpoint URL matches collector address
- Check firewall rules for ports 4317/4318
- If using TLS, verify certificate path with `tls_ca_cert`
### No trace_id in log output
- Verify rippled was built with `telemetry=ON` (the `XRPL_ENABLE_TELEMETRY` preprocessor flag)
- Verify `enabled=1` in the `[telemetry]` config section
- Log lines only contain `trace_id`/`span_id` when emitted inside an active span — background logs outside of RPC/consensus/transaction processing will not have trace context
- Check that the specific trace category is enabled (e.g., `trace_rpc=1`)
### No logs in Loki
- Verify the log file mount in docker-compose.yml points to the correct rippled log directory
- Check OTel Collector logs for filelog receiver errors: `docker compose logs otel-collector`
- Verify Loki is running: `curl http://localhost:3100/ready`
- Check the filelog receiver glob pattern matches your log file paths
## Performance Tuning
| Scenario | Recommendation |
| ------------------------ | ------------------------------------------------- |
| Production mainnet | `sampling_ratio=0.01`, `trace_peer=0` |
| Testnet/devnet | `sampling_ratio=1.0` (full tracing) |
| Debugging specific issue | `sampling_ratio=1.0` temporarily |
| High-throughput node | Increase `batch_size=1024`, `max_queue_size=4096` |
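For the production-mainnet row, the `[telemetry]` section would look something like this (a sketch using only option names that appear in this document; anything omitted keeps its default):

```ini
# Production mainnet sketch
[telemetry]
enabled=1
sampling_ratio=0.01
trace_peer=0
# For a high-throughput node, additionally raise the export buffers:
# batch_size=1024
# max_queue_size=4096
```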
## Disabling Telemetry
Set `enabled=0` in config (runtime disable) or build without the flag:
```bash
cmake --preset default -Dtelemetry=OFF
```
When telemetry is compiled out, all trace macros expand to no-ops with zero overhead.
## Validating Telemetry Stack
After deploying telemetry, use the Phase 10 workload tools to validate the full stack end-to-end.
### Quick Validation
```bash
# Run the full validation suite (starts cluster, generates load, validates):
docker/telemetry/workload/run-full-validation.sh --xrpld .build/xrpld
# Check the report:
cat /tmp/xrpld-validation/reports/validation-report.json | jq '.summary'
```
### What Gets Validated
| Category | Checks | Description |
| ---------- | -------------- | -------------------------------------------------------- |
| Spans | 16+ span types | All span names appear in Jaeger with required attributes |
| Metrics | 30+ metrics | SpanMetrics, StatsD gauges/counters, Phase 9 metrics |
| Logs | 2 checks | trace_id/span_id present in Loki, cross-reference works |
| Dashboards | 10 dashboards | All Grafana dashboards load without errors |
### Running Individual Tools
```bash
# RPC load only:
python3 docker/telemetry/workload/rpc_load_generator.py \
--endpoints ws://localhost:6006 --rate 50 --duration 120
# Transaction mix only:
python3 docker/telemetry/workload/tx_submitter.py \
--endpoint ws://localhost:6006 --tps 5 --duration 120
# Validation only (assumes load already ran):
python3 docker/telemetry/workload/validate_telemetry.py \
--report /tmp/report.json
```
### Interpreting Failures
- **Span failures**: Check that the relevant trace category is enabled in `[telemetry]` config (e.g., `trace_rpc=1`).
- **Metric failures**: Verify the OTel Collector is running and Prometheus is scraping port 8889. Check `docker compose logs otel-collector`.
- **Dashboard failures**: Ensure Grafana provisioning is mounted correctly. Check `docker compose logs grafana`.
## Performance Benchmarking
Measure the overhead of the telemetry stack against a baseline:
```bash
docker/telemetry/workload/benchmark.sh --xrpld .build/xrpld --duration 300
```
### Benchmark Thresholds
| Metric | Target | Description |
| ----------------- | ------ | -------------------------------------- |
| CPU overhead | < 3% | Average CPU increase across nodes |
| Memory overhead | < 5MB | Peak RSS increase per node |
| RPC p99 latency | < 2ms | Additional p99 latency for server_info |
| Throughput impact | < 5% | Reduction in ledger close rate |
| Consensus impact | < 1% | Increase in consensus round time |
### Tuning for Production
If benchmarks exceed thresholds:
1. **Reduce sampling**: `sampling_ratio=0.01` (1% of traces)
2. **Disable peer tracing**: `trace_peer=0` (highest volume category)
3. **Increase batch delay**: `batch_delay_ms=10000` (less frequent exports)
4. **Reduce queue size**: `max_queue_size=1024` (back-pressure earlier)
See `docker/telemetry/workload/README.md` for full documentation.

View File

@@ -1,73 +0,0 @@
#pragma once
#include <xrpl/beast/utility/Journal.h>
#include <chrono>
#include <cstdint>
#include <string_view>
namespace xrpl {
// cSpell:ignore ptmalloc
// -----------------------------------------------------------------------------
// Allocator interaction note:
// - This facility invokes glibc's malloc_trim(0) on Linux/glibc to request that
// ptmalloc return free heap pages to the OS.
// - If an alternative allocator (e.g. jemalloc or tcmalloc) is linked or
// preloaded (LD_PRELOAD), calling glibc's malloc_trim typically has no effect
// on the *active* heap. The call is harmless but may not reclaim memory
// because those allocators manage their own arenas.
// - Only glibc sbrk/arena space is eligible for trimming; large mmap-backed
// allocations are usually returned to the OS on free regardless of trimming.
// - Call at known reclamation points (e.g., after cache sweeps / online delete)
// and consider rate limiting to avoid churn.
// -----------------------------------------------------------------------------
struct MallocTrimReport
{
bool supported{false};
int trimResult{-1};
std::int64_t rssBeforeKB{-1};
std::int64_t rssAfterKB{-1};
std::chrono::microseconds durationUs{-1};
std::int64_t minfltDelta{-1};
std::int64_t majfltDelta{-1};
[[nodiscard]] std::int64_t
deltaKB() const noexcept
{
if (rssBeforeKB < 0 || rssAfterKB < 0)
return 0;
return rssAfterKB - rssBeforeKB;
}
};
/**
* @brief Attempt to return freed memory to the operating system.
*
* On Linux with glibc malloc, this issues ::malloc_trim(0), which may release
* free space from ptmalloc arenas back to the kernel. On other platforms, or if
* a different allocator is in use, this function is a no-op and the report will
* indicate that trimming is unsupported or had no effect.
*
* @param tag Identifier for logging/debugging purposes.
* @param journal Journal for diagnostic logging.
* @return Report containing before/after metrics and the trim result.
*
* @note If an alternative allocator (jemalloc/tcmalloc) is linked or preloaded,
* calling glibc's malloc_trim may have no effect on the active heap. The
* call is harmless but typically does not reclaim memory under those
* allocators.
*
* @note Only memory served from glibc's sbrk/arena heaps is eligible for trim.
* Large allocations satisfied via mmap are usually returned on free
* independently of trimming.
*
* @note Intended for use after operations that free significant memory (e.g.,
* cache sweeps, ledger cleanup, online delete). Consider rate limiting.
*/
MallocTrimReport
mallocTrim(std::string_view tag, beast::Journal journal);
} // namespace xrpl

View File

@@ -12,4 +12,5 @@
#include <xrpl/beast/insight/Hook.h>
#include <xrpl/beast/insight/HookImpl.h>
#include <xrpl/beast/insight/NullCollector.h>
#include <xrpl/beast/insight/OTelCollector.h>
#include <xrpl/beast/insight/StatsDCollector.h>

View File

@@ -0,0 +1,92 @@
#pragma once
/**
* @file OTelCollector.h
* @brief OpenTelemetry-based implementation of the beast::insight::Collector
* interface for native OTLP metric export.
*
* When XRPL_ENABLE_TELEMETRY is defined, OTelCollector maps each
* beast::insight instrument type (Counter, Gauge, Event, Meter, Hook) to
* the corresponding OpenTelemetry Metrics SDK instrument and exports
* them via OTLP/HTTP to an OpenTelemetry Collector.
*
* When XRPL_ENABLE_TELEMETRY is NOT defined, OTelCollector::New() returns
* a NullCollector so the binary compiles without OTel dependencies.
*
* Dependency diagram:
*
* +-----------------+ +-------------------+
* | Collector (ABC) |<----| OTelCollector |
* +-----------------+ | (public header) |
* ^ +-------------------+
* | |
* +-----------------+ +-------------------+
* | NullCollector | | OTelCollectorImp |
* | (fallback when | | (impl in .cpp, |
* | no telemetry) | | uses OTel SDK) |
* +-----------------+ +-------------------+
* |
* +-------------------+
* | OTel Metrics SDK |
* | MeterProvider |
* | OTLP HTTP Metric |
* | Exporter |
* +-------------------+
*/
#include <xrpl/beast/insight/Collector.h>
#include <xrpl/beast/utility/Journal.h>
#include <memory>
#include <string>
namespace beast {
namespace insight {
/**
* @brief A Collector that exports metrics via OpenTelemetry OTLP/HTTP.
*
* Replaces StatsD-based metric collection with native OTel Metrics SDK
* instruments. Each beast::insight instrument maps to an OTel equivalent:
*
* - Counter -> OTel Counter<int64_t>
* - Gauge -> OTel ObservableGauge<int64_t> (async callback)
* - Event -> OTel Histogram<double> (duration in milliseconds)
* - Meter -> OTel Counter<uint64_t> (monotonic, unsigned)
* - Hook -> Called by PeriodicMetricReader at collection time
*
* @see StatsDCollector for the StatsD-based alternative.
* @see NullCollector for the no-op fallback.
*/
class OTelCollector : public Collector
{
public:
explicit OTelCollector() = default;
/**
* @brief Factory method to create an OTelCollector instance.
*
* When XRPL_ENABLE_TELEMETRY is defined, creates a real OTel-backed
* collector that exports metrics via OTLP/HTTP. When telemetry is
* disabled at compile time, returns a NullCollector.
*
* @param endpoint OTLP/HTTP metrics endpoint URL
* (e.g. "http://localhost:4318/v1/metrics").
* @param prefix Prefix prepended to all metric names
* (e.g. "rippled").
* @param instanceId Unique identifier for this node instance,
* emitted as the `service.instance.id` OTel
* resource attribute. Defaults to empty string
* (attribute omitted when empty).
* @param journal Journal for logging.
* @return Shared pointer to the created Collector.
*/
static std::shared_ptr<Collector>
New(std::string const& endpoint,
std::string const& prefix,
std::string const& instanceId,
Journal journal);
};
} // namespace insight
} // namespace beast

View File

@@ -70,14 +70,24 @@ JobQueue::Coro::resume()
running_ = true;
}
{
std::lock_guard lock(jq_.m_mutex);
std::lock_guard lk(jq_.m_mutex);
--jq_.nSuspend_;
}
auto saved = detail::getLocalValues().release();
detail::getLocalValues().reset(&lvs_);
std::lock_guard lock(mutex_);
XRPL_ASSERT(static_cast<bool>(coro_), "xrpl::JobQueue::Coro::resume : is runnable");
coro_();
// A late resume() can arrive after the coroutine has already completed.
// This is an expected (if rare) outcome of the race condition documented
// in JobQueue.h:354-377 where post() schedules a resume job before the
// coroutine yields — the mutex serializes access, but by the time this
// resume() acquires the lock the coroutine may have already run to
// completion. Calling operator() on a completed boost::coroutine2 is
// undefined behavior, so we must check and skip invoking the coroutine
// body if it has already completed.
if (coro_)
{
coro_();
}
detail::getLocalValues().release();
detail::getLocalValues().reset(saved);
std::lock_guard lk(mutex_run_);

View File

@@ -99,8 +99,8 @@ public:
Effects:
The coroutine continues execution from where it last left off
using this same thread.
Undefined behavior if called after the coroutine has completed
with a return (as opposed to a yield()).
If the coroutine has already completed, returns immediately
(handles the documented post-before-yield race condition).
Undefined behavior if resume() or post() called consecutively
without a corresponding yield.
*/
@@ -357,8 +357,10 @@ private:
If the post() job were to be executed before yield(), undefined behavior
would occur. The lock ensures that coro_ is not called again until we exit
the coroutine. At which point a scheduled resume() job waiting on the lock
would gain entry, harmlessly call coro_ and immediately return as we have
already completed the coroutine.
would gain entry. resume() checks if the coroutine has already completed
(coro_ converts to false) and, if so, skips invoking operator() since
calling operator() on a completed boost::coroutine2 pull_type is undefined
behavior.
The race condition occurs as follows:

View File

@@ -3,7 +3,6 @@
#include <xrpl/basics/Blob.h>
#include <xrpl/basics/SHAMapHash.h>
#include <xrpl/basics/TaggedCache.h>
#include <xrpl/ledger/CachedSLEs.h>
#include <boost/asio.hpp>
@@ -19,10 +18,28 @@ class Manager;
namespace perf {
class PerfLog;
}
namespace telemetry {
class Telemetry;
class MetricsRegistry;
} // namespace telemetry
// This is temporary until we migrate all code to use ServiceRegistry.
class Application;
template <
class Key,
class T,
bool IsKeyCache,
class SharedWeakUnionPointer,
class SharedPointerType,
class Hash,
class KeyEqual,
class Mutex>
class TaggedCache;
class STLedgerEntry;
using SLE = STLedgerEntry;
using CachedSLEs = TaggedCache<uint256, SLE const>;
// Forward declarations
class AcceptedLedger;
class AmendmentTable;
@@ -89,7 +106,7 @@ public:
getNodeFamily() = 0;
virtual TimeKeeper&
timeKeeper() = 0;
getTimeKeeper() = 0;
virtual JobQueue&
getJobQueue() = 0;
@@ -98,7 +115,7 @@ public:
getTempNodeCache() = 0;
virtual CachedSLEs&
cachedSLEs() = 0;
getCachedSLEs() = 0;
virtual NetworkIDService&
getNetworkIDService() = 0;
@@ -120,26 +137,26 @@ public:
getValidations() = 0;
virtual ValidatorList&
validators() = 0;
getValidators() = 0;
virtual ValidatorSite&
validatorSites() = 0;
getValidatorSites() = 0;
virtual ManifestCache&
validatorManifests() = 0;
getValidatorManifests() = 0;
virtual ManifestCache&
publisherManifests() = 0;
getPublisherManifests() = 0;
// Network services
virtual Overlay&
overlay() = 0;
getOverlay() = 0;
virtual Cluster&
cluster() = 0;
getCluster() = 0;
virtual PeerReservationTable&
peerReservations() = 0;
getPeerReservations() = 0;
virtual Resource::Manager&
getResourceManager() = 0;
@@ -174,13 +191,13 @@ public:
getLedgerReplayer() = 0;
virtual PendingSaves&
pendingSaves() = 0;
getPendingSaves() = 0;
virtual OpenLedger&
openLedger() = 0;
getOpenLedger() = 0;
virtual OpenLedger const&
openLedger() const = 0;
getOpenLedger() const = 0;
// Transaction and operation services
virtual NetworkOPs&
@@ -205,21 +222,30 @@ public:
virtual perf::PerfLog&
getPerfLog() = 0;
virtual telemetry::Telemetry&
getTelemetry() = 0;
/** Return the MetricsRegistry, or nullptr if telemetry is disabled.
Used by PerfLog and other hot paths to record OTel metrics.
*/
virtual telemetry::MetricsRegistry*
getMetricsRegistry() = 0;
// Configuration and state
virtual bool
isStopping() const = 0;
virtual beast::Journal
journal(std::string const& name) = 0;
getJournal(std::string const& name) = 0;
virtual boost::asio::io_context&
getIOContext() = 0;
virtual Logs&
logs() = 0;
getLogs() = 0;
virtual std::optional<uint256> const&
trapTxID() const = 0;
getTrapTxID() const = 0;
/** Retrieve the "wallet database" */
virtual DatabaseCon&
@@ -228,7 +254,7 @@ public:
// Temporary: Get the underlying Application for functions that haven't
// been migrated yet. This should be removed once all code is migrated.
virtual Application&
app() = 0;
getApp() = 0;
};
} // namespace xrpl

View File

@@ -77,16 +77,16 @@ public:
If the object is not found or an error is encountered, the
result will indicate the condition.
@note This will be called concurrently.
@param hash The hash of the object.
@param key A pointer to the key data.
@param pObject [out] The created object if successful.
@return The result of the operation.
*/
virtual Status
fetch(uint256 const& hash, std::shared_ptr<NodeObject>* pObject) = 0;
fetch(void const* key, std::shared_ptr<NodeObject>* pObject) = 0;
/** Fetch a batch synchronously. */
virtual std::pair<std::vector<std::shared_ptr<NodeObject>>, Status>
fetchBatch(std::vector<uint256> const& hashes) = 0;
fetchBatch(std::vector<uint256 const*> const& hashes) = 0;
/** Store a single object.
Depending on the implementation this may happen immediately

View File

@@ -85,6 +85,19 @@ message TMPublicKey {
// If you want to send an amount that is greater than any single address of yours
// you must first combine coins from one address to another.
// Trace context for OpenTelemetry distributed tracing across nodes.
// Uses W3C Trace Context format internally.
message TraceContext {
optional bytes trace_id = 1; // 16-byte trace identifier
optional bytes span_id = 2; // 8-byte parent span identifier
optional uint32 trace_flags = 3; // bit 0 = sampled
// TODO: trace_state is reserved for W3C tracestate vendor-specific
// key-value pairs but is not yet read or written by
// TraceContextPropagator. Wire it when cross-vendor trace
// propagation is needed.
optional string trace_state = 4; // W3C tracestate header value
}
enum TransactionStatus {
tsNEW = 1; // origin node did/could not validate
tsCURRENT = 2; // scheduled to go in this ledger
@@ -101,6 +114,9 @@ message TMTransaction {
required TransactionStatus status = 2;
optional uint64 receiveTimestamp = 3;
optional bool deferred = 4; // not applied to open ledger
// Optional trace context for OpenTelemetry distributed tracing
optional TraceContext trace_context = 1001;
}
message TMTransactions {
@@ -149,6 +165,9 @@ message TMProposeSet {
// Number of hops traveled
optional uint32 hops = 12 [deprecated = true];
// Optional trace context for OpenTelemetry distributed tracing
optional TraceContext trace_context = 1001;
}
enum TxSetStatus {
@@ -194,6 +213,9 @@ message TMValidation {
// Number of hops traveled
optional uint32 hops = 3 [deprecated = true];
// Optional trace context for OpenTelemetry distributed tracing
optional TraceContext trace_context = 1001;
}
// An array of Endpoint messages

View File

@@ -19,7 +19,7 @@
XRPL_FIX (Security3_1_3, Supported::no, VoteBehavior::DefaultNo)
XRPL_FIX (PermissionedDomainInvariant, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FIX (ExpiredNFTokenOfferRemoval, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FIX (BatchInnerSigs, Supported::no, VoteBehavior::DefaultNo)
XRPL_FIX (BatchInnerSigs, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FEATURE(LendingProtocol, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FEATURE(PermissionDelegationV1_1, Supported::no, VoteBehavior::DefaultNo)
XRPL_FIX (DirectoryLimit, Supported::yes, VoteBehavior::DefaultNo)
@@ -33,7 +33,7 @@ XRPL_FEATURE(TokenEscrow, Supported::yes, VoteBehavior::DefaultNo
XRPL_FIX (EnforceNFTokenTrustlineV2, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FIX (AMMv1_3, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FEATURE(PermissionedDEX, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FEATURE(Batch, Supported::no, VoteBehavior::DefaultNo)
XRPL_FEATURE(Batch, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FEATURE(SingleAssetVault, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FIX (PayChanCancelAfter, Supported::yes, VoteBehavior::DefaultNo)
// Check flags in Credential transactions

Some files were not shown because too many files have changed in this diff.