Compare commits

...

81 Commits

Author SHA1 Message Date
Pratik Mankawde
252acebf59 Fix CI: align telemetry workflow build with main CI pipeline
Rewrite the build steps to mirror the main CI (reusable-build-test-config):
- Use build-deps action (conan install with --options:host='&:xrpld=True')
  so the generated toolchain sets xrpld=ON and telemetry=True automatically
- Separate configure and build steps matching main CI pattern
- Run cmake from within build/ dir pointing to .. (same as main CI)
- Remove manual -Dtelemetry=ON -Dxrpld=ON (come from toolchain)
- Remove Docker DinD service (ubuntu-latest has Docker pre-installed)
- Install ninja-build system package for the Ninja generator
- Increase timeout to 90 min for full build + validation

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 18:45:46 +00:00
Pratik Mankawde
ea60b461e0 minor change
Signed-off-by: Pratik Mankawde <3397372+pratikmankawde@users.noreply.github.com>
2026-03-11 16:58:35 +00:00
Pratik Mankawde
502e4f8ac3 Fix CI: align telemetry workflow build with main CI approach
The telemetry-validation workflow used --output-folder=build with conan
install, which overrides the conanfile.py layout() and breaks include
paths for Conan-managed protobuf (runtime_version.h not found). Aligned
the build steps with the main CI's reusable-build-test-config.yml: drop
--output-folder, add Ninja generator, and explicit build_type setting.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 16:58:35 +00:00
Pratik Mankawde
c273b15127 Fix CI: cspell EOJSON delimiter and telemetry workflow conan setup
- Rename EOJSON heredoc delimiter to EOF_JSON to avoid cspell unknown word
- Add conan installation step (pip3 install conan) to telemetry-validation workflow
- Use shared setup-conan action for proper Conan profile/remote configuration
- Align build commands with reusable-build-test-config.yml conventions

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 16:58:35 +00:00
Pratik Mankawde
cac4bbc8cf Phase 10: Synthetic workload generation and telemetry validation tools
Add comprehensive workload harness for end-to-end validation of the
Phases 1-9 telemetry stack:

Task 10.1 — Multi-node test harness:
  - docker-compose.workload.yaml with full OTel stack (Collector, Jaeger,
    Tempo, Prometheus, Loki, Grafana)
  - generate-validator-keys.sh for automated key generation
  - xrpld-validator.cfg.template for node configuration

Task 10.2 — RPC load generator:
  - rpc_load_generator.py with WebSocket client, configurable rates,
    realistic command distribution (40% health, 30% wallet, 15% explorer,
    10% tx lookups, 5% DEX), W3C traceparent injection

Task 10.3 — Transaction submitter:
  - tx_submitter.py with 10 transaction types (Payment, OfferCreate,
    OfferCancel, TrustSet, NFTokenMint, NFTokenCreateOffer, EscrowCreate,
    EscrowFinish, AMMCreate, AMMDeposit), auto-funded test accounts

Task 10.4 — Telemetry validation suite:
  - validate_telemetry.py checking spans (Jaeger), metrics (Prometheus),
    log-trace correlation (Loki), dashboards (Grafana)
  - expected_spans.json (17 span types, 22 attributes, 3 hierarchies)
  - expected_metrics.json (SpanMetrics, StatsD, Phase 9, dashboards)

Task 10.5 — Performance benchmark suite:
  - benchmark.sh for baseline vs telemetry comparison
  - collect_system_metrics.sh for CPU/memory/latency sampling
  - Thresholds: <3% CPU, <5MB memory, <2ms RPC p99, <5% TPS, <1% consensus

Task 10.6 — CI integration:
  - telemetry-validation.yml GitHub Actions workflow
  - run-full-validation.sh orchestrator script
  - Manual trigger + telemetry branch auto-trigger

Task 10.7 — Documentation:
  - workload/README.md with quick start and tool reference
  - Updated telemetry-runbook.md with validation and benchmark sections
  - Updated 09-data-collection-reference.md with validation inventory

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 16:58:35 +00:00
Pratik Mankawde
c319b93522 Fix clang-tidy: remove unused alias, annotate empty catches in MetricsRegistry
- Remove unused `metric_api` namespace alias (misc-unused-alias-decls)
- Add NOLINT(bugprone-empty-catch) to 5 observable gauge callback catches
  that intentionally swallow exceptions when services aren't ready yet
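
A minimal C++ sketch of the annotated callback shape described above; the gauge registration helper and callback body are illustrative, not the actual MetricsRegistry code:

```cpp
#include <opentelemetry/metrics/async_instruments.h>
#include <opentelemetry/metrics/observer_result.h>

// Illustrative only: register an observable gauge callback that intentionally
// swallows exceptions while application services are still starting up.
void
registerGaugeCallback(
    opentelemetry::metrics::ObservableInstrument& gauge,
    void* appState)
{
    gauge.AddCallback(
        [](opentelemetry::metrics::ObserverResult, void* state) {
            try
            {
                // `state` points at the Application; poll LedgerMaster/TxQ here
                // and report the sampled value through the ObserverResult.
                static_cast<void>(state);
            }
            catch (...)  // NOLINT(bugprone-empty-catch): services not ready during startup
            {
            }
        },
        appState);
}
```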

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 16:57:50 +00:00
Pratik Mankawde
af9ce4aa66 Fix CI: guard MetricsRegistry GTest for telemetry=OFF only
When telemetry=ON, XRPL_ENABLE_TELEMETRY is globally defined, causing
MetricsRegistry.cpp to compile its full OTel path which references
xrpld symbols (LedgerMaster, TxQ, OpenLedger) that cannot be linked
into the standalone GTest binary. Guard the test with #ifndef and only
add MetricsRegistry.cpp as a source when telemetry is OFF.
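
A hedged sketch of the guard shape, with a placeholder test body (the real fixture and assertions differ):

```cpp
#include <gtest/gtest.h>

#ifndef XRPL_ENABLE_TELEMETRY
// Only compiled when telemetry is OFF: exercises the no-op path and needs no
// xrpld symbols to link.
TEST(MetricsRegistry, DisabledPathIsNoop)
{
    // Construct the registry and call into it; every operation should be a
    // harmless no-op in this configuration. (Placeholder body.)
    SUCCEED();
}
#endif  // XRPL_ENABLE_TELEMETRY
```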

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:35:44 +00:00
Pratik Mankawde
598e5b548b Update levelization results for MetricsRegistry GTest migration
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:08:25 +00:00
Pratik Mankawde
2f26ad09b8 Convert MetricsRegistry test from Beast to GTest format
Move test from src/test/telemetry/ (Beast unit_test::suite) to
src/tests/libxrpl/telemetry/ (GTest TEST_F). The test exercises the
no-op/disabled path only, which compiles without XRPL_ENABLE_TELEMETRY
and has no xrpld link dependencies beyond MetricsRegistry.cpp itself.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:10 +00:00
Pratik Mankawde
9570054802 Fix CI: use xrpl namespace in MetricsRegistry test suite definition
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:10 +00:00
Pratik Mankawde
86339cf214 Fix MetricsRegistry: add missing OpenLedger.h and Histogram::Record context arg
- Added missing #include <xrpld/app/ledger/OpenLedger.h> for
  app.openLedger().current() calls in observable gauge callbacks.
- Added opentelemetry::context::Context{} as third argument to
  Histogram::Record() calls — the initializer_list overload requires
  an explicit Context parameter in the installed OTel C++ SDK version.
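
A hedged sketch of the corrected call shape; the histogram, attribute key, and value are illustrative, and the exact overload set depends on the installed opentelemetry-cpp version:

```cpp
#include <opentelemetry/context/context.h>
#include <opentelemetry/metrics/sync_instruments.h>

// Record an RPC duration with one attribute. The initializer_list overload
// requires the context to be passed explicitly as the third argument.
void
recordRpcDuration(opentelemetry::metrics::Histogram<double>& histogram, double ms)
{
    histogram.Record(
        ms,
        {{"method", "server_info"}},
        opentelemetry::context::Context{});
}
```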

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:10 +00:00
Pratik Mankawde
6a7410366c Update levelization results for xrpld.telemetry module
Regenerate loops.txt and ordering.txt to account for the bidirectional
dependency between xrpld.app and xrpld.telemetry introduced in Phase 9.
MetricsRegistry.cpp reads metrics from xrpld.app services (LedgerMaster,
TxQ, AcceptedLedger) while Application.cpp wires MetricsRegistry into
the app lifecycle — a pattern consistent with existing accepted loops
(overlay, peerfinder, rpc, shamap).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:10 +00:00
Pratik Mankawde
55510a8260 Phase 9: Add node template variable and instance filters to Grafana dashboards
Add $node template variable (exported_instance) to rippled-fee-market,
rippled-job-queue, and rippled-rpc-perf dashboards enabling multi-node
filtering. Add $job_type variable to job-queue and $method variable to
rpc-perf dashboards. Inject exported_instance=~"$node" filter into all
PromQL queries across these dashboards including rate(), histogram_quantile(),
topk(), and sum() expressions. Also add the instance filter to Phase 9
panels (NodeStore, Cache, CountedObjects) in system-node-health dashboard.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:10 +00:00
Pratik Mankawde
919908b2b4 Phase 9: Internal Metric Instrumentation Gap Fill (Tasks 9.1-9.10)
Implement ~50 OTel metrics covering NodeStore I/O, cache hit rates,
TxQ state, PerfLog per-RPC/per-job counters, CountedObject instances,
and load factor breakdown via MetricsRegistry.

Core implementation:
- MetricsRegistry class with synchronous instruments (Counter, Histogram)
  for RPC and Job metrics, and ObservableGauge callbacks for cache, TxQ,
  CountedObject, LoadFactor, and NodeStore state polling.
- ServiceRegistry extended with getMetricsRegistry() virtual method.
- Application wires MetricsRegistry lifecycle (create/start/stop).
- PerfLogImp instrumented to emit OTel metrics on RPC and Job events.

Dashboards & observability:
- 3 new Grafana dashboards: RPC Performance, Job Queue, Fee Market/TxQ.
- Extended statsd-node-health dashboard with NodeStore, Cache, and
  CountedObject panels.
- 10 alerting rules added to telemetry-runbook.md.
- Integration test extended with 12 OTel metric validation checks.

Documentation:
- 09-data-collection-reference.md updated with Phase 9 metric tables.
- Unit tests for MetricsRegistry disabled-path (no-op) behavior.

All OTel SDK code guarded with #ifdef XRPL_ENABLE_TELEMETRY.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:10 +00:00
Pratik Mankawde
494624ea1e Phase 9-11: Future enhancement plans for metric gap fill, workload validation, and third-party pipelines
- Phase 9: Internal Metric Instrumentation Gap Fill (10 tasks, 12d)
  - MetricsRegistry class, NodeStore I/O, cache, TxQ, PerfLog, CountedObjects, load factors
- Phase 10: Synthetic Workload Generation & Telemetry Validation (7 tasks, 10d)
  - Multi-node harness, RPC/tx generators, validation suite, benchmarks, CI
- Phase 11: Third-Party Data Collection Pipelines (11 tasks, 15d)
  - Custom OTel Collector receiver (Go), 30 external metrics, alerting rules, 4 dashboards
- Updated 06-implementation-phases.md with plan sections §6.8.2-§6.8.4, gantt, effort summary
- Updated 09-data-collection-reference.md with §5b-§5d future metric definitions
- Updated 08-appendix.md with Phase 9-11 glossary, task list entries, cross-reference guide, effort summary

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:10 +00:00
Pratik Mankawde
748ea403d1 Fix Log.cpp: ToLowerBase16 requires nostd::span, not raw char arrays
The OTel SDK's TraceId::ToLowerBase16 and SpanId::ToLowerBase16 expect
opentelemetry::nostd::span<char, N> rather than raw char arrays. Also
corrected array sizes from 33/17 to 32/16 (no null terminator needed
since we use output.append(buf, N)).
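
A hedged sketch of the corrected conversion; `ctx` is the active span's SpanContext and `output` is the log line being built:

```cpp
#include <opentelemetry/nostd/span.h>
#include <opentelemetry/trace/span_context.h>

#include <string>

void
appendTraceIds(opentelemetry::trace::SpanContext const& ctx, std::string& output)
{
    char traceBuf[32];  // 16-byte trace id -> 32 hex chars, no null terminator
    char spanBuf[16];   // 8-byte span id  -> 16 hex chars
    ctx.trace_id().ToLowerBase16(opentelemetry::nostd::span<char, 32>{traceBuf});
    ctx.span_id().ToLowerBase16(opentelemetry::nostd::span<char, 16>{spanBuf});
    output.append(" trace_id=").append(traceBuf, 32);
    output.append(" span_id=").append(spanBuf, 16);
}
```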

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:10 +00:00
Pratik Mankawde
a602844986 Phase 8: Fix CI — add missing OTel trace/context.h include and cspell logql
Add opentelemetry/trace/context.h to Log.cpp so that
opentelemetry::trace::GetSpan() resolves correctly.
Add 'logql' to cspell dictionary to silence unknown-word warnings.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:10 +00:00
Pratik Mankawde
9b67d29c12 Phase 8: Fix Loki exporter for otel-collector-contrib v0.147+
Upgrade Loki from 2.9.0 to 3.4.2 which supports native OTLP ingestion.
Replace removed `loki` exporter with `otlphttp/loki` pointed at Loki's
/otlp endpoint. The `loki` exporter was dropped in otel-collector-contrib
v0.147.0.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:10 +00:00
Pratik Mankawde
cf012907e7 Phase 8: Update documentation for log-trace correlation
Task 8.6: Add Log-Trace Correlation section to telemetry-runbook.md
with LogQL examples, verification steps, and troubleshooting guidance.
Update 09-data-collection-reference.md section 5a from "Future" to
actual implementation docs covering log format, ingestion pipeline,
Grafana correlation config, and Loki backend. Add Phase 8 log
correlation test section and troubleshooting to TESTING.md.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:10 +00:00
Pratik Mankawde
423dc77c62 Phase 8: Implement log-trace correlation and Loki log ingestion
Task 8.1: Inject trace_id/span_id into Logs::format() when an active
OTel span exists, guarded by #ifdef XRPL_ENABLE_TELEMETRY. Uses
thread-local GetSpan()/GetContext() with <10ns overhead per call.
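
A hedged sketch of the lookup described in Task 8.1; the surrounding Logs::format() plumbing is omitted:

```cpp
#include <opentelemetry/context/runtime_context.h>
#include <opentelemetry/trace/context.h>

#include <string>

// Fetch the active span from the thread-local runtime context; only append
// ids when a valid span context exists.
void
appendActiveTraceIds([[maybe_unused]] std::string& output)
{
    auto span = opentelemetry::trace::GetSpan(
        opentelemetry::context::RuntimeContext::GetCurrent());
    auto ctx = span->GetContext();
    if (!ctx.IsValid())
        return;
    // Convert ctx.trace_id() / ctx.span_id() to lowercase hex and append,
    // e.g. " trace_id=<32 hex> span_id=<16 hex>" (see the ToLowerBase16 fix
    // earlier in this commit list for the conversion itself).
}
```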

Task 8.2: Add Grafana Loki service (grafana/loki:2.9.0) to the Docker
Compose stack with port 3100 exposed. Add Loki as a dependency for
otel-collector and grafana services.

Task 8.3: Add filelog receiver to OTel Collector config to tail rippled
debug.log files with regex_parser extracting timestamp, partition,
severity, trace_id, span_id, and message fields. Add loki exporter and
logs pipeline. Mount rippled log directory into collector container.

Task 8.4: Add tracesToLogs config in Tempo datasource provisioning
pointing to Loki with filterByTraceID enabled, enabling one-click
trace-to-log navigation in Grafana.

Task 8.5: Add check_log_correlation() function to integration-test.sh
that greps debug.log files for trace_id pattern and cross-checks the
trace_id exists in Jaeger.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:10 +00:00
Pratik Mankawde
75603c8a27 Fix prettier markdown table alignment
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:10 +00:00
Pratik Mankawde
e87a8f94e5 Appendix: add Phase8_taskList.md to document index
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:10 +00:00
Pratik Mankawde
e39ae6fc1b Phase 8: Log-trace correlation plan docs and task list
- Add §6.8.1 to 06-implementation-phases.md with full Phase 8 plan
  (motivation, architecture, Mermaid diagrams, tasks table, exit criteria)
- Add Phase8_taskList.md with per-task breakdown (8.1-8.6)
- Add §5a log-trace correlation section to 09-data-collection-reference.md
- Add Phase 8 row to OpenTelemetryPlan.md, update totals to 13 weeks / 8 phases
- Add Phases 6-8 to Gantt chart in 06-implementation-phases.md

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:10 +00:00
Pratik Mankawde
ccafc0d0a5 Phase 8: Add Loki datasource provisioning for log-trace correlation
Adds Grafana Loki data source with derivedFields config linking
trace_id values in log lines to Tempo traces. This enables one-click
log-to-trace navigation in Grafana Explore.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:10 +00:00
Pratik Mankawde
6fe0240864 Fix OTelCollector: Journal::info is a method, not a bool member
The beast::Journal stream accessors (info, warn, etc.) are methods that
return a Stream object. They must be called with () to test if the log
level is active.
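
A hedged sketch of the call pattern (log text and header path per the current rippled layout; illustrative only):

```cpp
#include <xrpl/beast/utility/Journal.h>

// The stream accessor must be invoked; its result converts to bool to test
// whether the severity level is active.
void
logExporterStart(beast::Journal const& j)
{
    if (auto stream = j.info())
        stream << "OTLP metric exporter started";
    // Equivalent via rippled's macro: JLOG(j.info()) << "...";
}
```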

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:10 +00:00
Pratik Mankawde
43f7449f21 Fix system dashboard $node template variable queries
The label_values() PromQL function requires a metric name as the first
argument. Without it, Prometheus returns raw label hashes instead of
readable node names like "validator-0:6006".

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:10 +00:00
Pratik Mankawde
f2a1a61f73 Fix OTelCollector.cpp compilation errors against OTel C++ SDK
- CreateInt64Counter -> CreateUInt64Counter (no int64 counter in API)
- Counter<int64_t> member -> Counter<uint64_t> with static_cast in Add()
- Add async_instruments.h include for ObservableInstrument definition
- Replace JLOG() macro with beast::Journal if(info)/info() pattern
- MeterProviderFactory::Create(reader,resource) -> Create(views,resource)
  + AddMetricReader() (no 2-arg reader overload exists)
- HistogramAggregationConfig: std::list<double> -> std::vector<double>
  using boundaries_ member instead of aggregate initialization
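
A hedged sketch of the corrected instrument creation and use (the metric name is illustrative):

```cpp
#include <opentelemetry/metrics/provider.h>

#include <cstdint>

// The API exposes CreateUInt64Counter (there is no int64 counter), so signed
// beast::insight deltas are cast before Add().
void
bumpCounter(std::int64_t delta)
{
    auto provider = opentelemetry::metrics::Provider::GetMeterProvider();
    auto meter = provider->GetMeter("xrpld.insight");
    auto counter = meter->CreateUInt64Counter("xrpld_jobs_processed");
    counter->Add(static_cast<std::uint64_t>(delta));
}
```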

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:10 +00:00
Pratik Mankawde
d70dacd3df Phase 7: Add node filters to system dashboards + OTelCollector instanceId
- Add $node template variable to all 5 system-* Grafana dashboards
  with exported_instance=~"$node" filter on all PromQL queries
- Add instanceId parameter to OTelCollector::New() factory to set
  service.instance.id resource attribute on metrics (matches trace
  exporter behavior for human-friendly node names in Grafana)
- CollectorManager reads service_instance_id from [insight] config

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:10 +00:00
Pratik Mankawde
598ff8b108 Phase 7: Native OTel metrics migration (Tasks 7.1-7.7)
Replace StatsD UDP metric transport with native OpenTelemetry Metrics SDK
export via OTLP/HTTP behind the existing beast::insight::Collector interface.

- Task 7.1: Link opentelemetry-cpp to beast module in CMake when telemetry=ON
- Task 7.2: New OTelCollector class mapping beast::insight instruments to OTel
  SDK (Counter, ObservableGauge, Histogram, Counter<uint64>) with OTLP/HTTP
  export via PeriodicMetricReader at 1s intervals
- Task 7.3: Add server=otel branch to CollectorManager with endpoint config
- Task 7.4: Update otel-collector-config.yaml to use OTLP receiver for metrics
  pipeline (StatsD receiver commented out for backward compat)
- Task 7.5: Metric names preserved via dot-to-underscore formatting matching
  StatsD->Prometheus conventions
- Task 7.6: Rename Grafana dashboards from statsd-* to system-*, update titles
  and UIDs from "StatsD" to "System Metrics"
- Task 7.7: Update integration test to use server=otel, verify OTLP metrics
- Task 7.8: Update runbook, TESTING.md, config reference, and data collection
  reference docs

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:10 +00:00
Pratik Mankawde
5bf0457354 Appendix: add Phase7_taskList.md to document index
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:10 +00:00
Pratik Mankawde
5f24f79dc1 Separate plan from tasks: move Phase 7 plan into 06-implementation-phases.md, remove Phase 8 content
- Move Phase 7 motivation (gains/losses/decision) and architecture (class
  hierarchy, data flow diagram, config) from Phase7_taskList.md into
  06-implementation-phases.md §6.8
- Strip Phase7_taskList.md to tasks only (7.1-7.8 + summary table)
- Remove Phase8_taskList.md — belongs on Phase 8 branch
- Remove §6.8.1 (Phase 8) from 06-implementation-phases.md
- Remove §5a (Phase 8 log correlation) from 09-data-collection-reference.md
- Remove Phase 8 row from OpenTelemetryPlan.md phase table

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:10 +00:00
Pratik Mankawde
b3246dff58 Phase 7-8: Plan docs for native OTel metrics migration and log-trace correlation
Phase 7 (native metrics): Replace StatsDCollector with OTelCollectorImpl
behind the existing beast::insight::Collector interface. Maps Counter,
Gauge, Meter, Event to OTel SDK instruments. Exports via OTLP/HTTP to
same collector endpoint as traces. Eliminates StatsD UDP dependency.
Resolves deferred Phase 6 Task 6.1 (|m wire format).

Phase 8 (log correlation): Inject trace_id/span_id into JLOG output
via Logs::format() thread-local span context read. Add Grafana Loki
with OTel Collector filelog receiver for centralized log ingestion.
Enable bidirectional Tempo-Loki correlation in Grafana.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:10 +00:00
Pratik Mankawde
4ae30eff7c Remove 'rippled' prefix from dashboard titles, add new dashboards to doc
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:10 +00:00
Pratik Mankawde
2c73791b23 Fix markdown formatting in data collection reference
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:10 +00:00
Pratik Mankawde
a3965703db Add StatsD dashboards for overlay traffic detail and ledger data sync
Two new Grafana dashboards covering previously uncovered beast::insight
StatsD metrics:

Overlay Traffic Detail (8 panels):
- Squelch traffic (squelch, suppressed, ignored)
- Overhead breakdown (base, cluster, manifest)
- Validator list distribution traffic
- Set get/share (transaction set exchange)
- Have/requested transactions protocol
- Unknown/unclassified traffic
- Proof path request/response
- Replay delta request/response

Ledger Data & Sync (6 panels):
- Ledger data exchange by sub-type (TX set, TX node, account state)
- Legacy ledger share/get traffic
- GetObject by type (ledger, transaction, account state, CAS, fetch pack)
- GetObject aggregate and special types
- GetObject message counts
- All-categories ranked bar gauge (top 20)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:10 +00:00
Pratik Mankawde
adfeb02056 formatting
Signed-off-by: Pratik Mankawde <3397372+pratikmankawde@users.noreply.github.com>
2026-03-11 15:02:10 +00:00
Pratik Mankawde
de0a9f4a9c Appendix: add 09-data-collection-reference.md to document index
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:10 +00:00
Pratik Mankawde
b60e584ea2 Add consensus.accept.apply span to data collection reference
Add the close time span and its 6 attributes to the Phase 4 consensus
span table and attribute table in 09-data-collection-reference.md.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:10 +00:00
Pratik Mankawde
19b0a4fb95 document updates
Signed-off-by: Pratik Mankawde <3397372+pratikmankawde@users.noreply.github.com>
2026-03-11 15:02:10 +00:00
Pratik Mankawde
a8cd756e6a Phase 6: Integrate beast::insight StatsD metrics into telemetry pipeline
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:10 +00:00
Pratik Mankawde
9f699307a0 formatting changes
Signed-off-by: Pratik Mankawde <3397372+pratikmankawde@users.noreply.github.com>
2026-03-11 15:02:10 +00:00
Pratik Mankawde
d8aa2af511 Phase 5b: Add ledger/peer/tx spans + expand Grafana dashboards
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:10 +00:00
Pratik Mankawde
f2cc295a97 Appendix: add Phase5_IntegrationTest_taskList.md to document index
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:09 +00:00
Pratik Mankawde
9048a3f6b5 Use human-friendly node names in integration test telemetry config
Set service_instance_id=rippled-node-N in each test node's [telemetry]
section so Grafana/Tempo filters show readable names instead of base58
public keys. Update dashboard descriptions and Tempo datasource comments.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:09 +00:00
Pratik Mankawde
d79eacb66e Fix Grafana node filter: use exported_instance instead of service_instance_id
Prometheus renames the OTel resource attribute service.instance.id to
'instance', which then conflicts with the built-in scrape 'instance'
label. Prometheus resolves this by prefixing it as 'exported_instance'.
Update all dashboard PromQL queries and template variable queries to
use exported_instance so the Node dropdown correctly populates.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:09 +00:00
Pratik Mankawde
96cc33f8b9 Set explicit uid on Prometheus datasource for dashboard compatibility
Dashboard template variables reference datasource uid "prometheus" but
the provisioning config had no uid set, causing Grafana to auto-assign
a random one and break dashboard panel queries.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:09 +00:00
Pratik Mankawde
915ebcff69 Add Grafana dashboard template variables for node and metric filtering
- Add resource_metrics_key_attributes to spanmetrics connector so
  service.instance.id becomes a Prometheus label for per-node filtering
- Add 'node' dropdown (service_instance_id) to all 3 dashboards
- Add 'command' dropdown (xrpl_rpc_command) to RPC Performance
- Add 'tx_origin' dropdown (xrpl_tx_local) to Transaction Overview
- Add 'consensus_mode' dropdown (xrpl_consensus_mode) to Consensus Health
- Update all panel PromQL queries to include $node filter

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:09 +00:00
Pratik Mankawde
b9809f0630 Add consensus close time panels and docs for accept.apply span
- Add Ledger Apply Duration and Close Time Agreement panels to
  consensus-health dashboard
- Add consensus.accept.apply to telemetry runbook with TraceQL
  queries for close time disagreements and consensus failures
- Add span to TESTING.md expected span catalog and verification loop

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:09 +00:00
Pratik Mankawde
ec76a6a975 Phase 5: Observability stack — spanmetrics, dashboards, runbook
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:09 +00:00
Pratik Mankawde
b01b29f6f1 Add consensus.accept.apply span with ledger close time attributes
Add a new trace span in doAccept() capturing ledger close time details:
- xrpl.consensus.close_time: agreed-upon close time (epoch seconds)
- xrpl.consensus.close_time_correct: whether validators converged
  (per avCT_CONSENSUS_PCT = 75% threshold)
- xrpl.consensus.close_resolution_ms: time rounding granularity
- xrpl.consensus.state: "finished" or "moved_on" (consensus failure)
- xrpl.consensus.proposing: whether this node was proposing
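
A hedged sketch of how such attributes are attached to a span; the span handle and value sources are placeholders for the real results computed in doAccept():

```cpp
#include <opentelemetry/trace/span.h>

#include <cstdint>

void
tagCloseTime(
    opentelemetry::trace::Span& span,
    std::int64_t closeTimeSeconds,
    bool closeTimeCorrect,
    bool movedOn)
{
    span.SetAttribute("xrpl.consensus.close_time", closeTimeSeconds);
    span.SetAttribute("xrpl.consensus.close_time_correct", closeTimeCorrect);
    span.SetAttribute("xrpl.consensus.state", movedOn ? "moved_on" : "finished");
}
```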

Update Tempo datasource with close time filters, plan docs with
new span inventory, and add test coverage for the attribute pattern.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:09 +00:00
Pratik Mankawde
7c424bc406 Add consensus trace filters to Grafana Tempo datasource
Add xrpl.consensus.mode (static), xrpl.consensus.round (dynamic),
and xrpl.consensus.ledger.seq (static) search filters for Phase 4
consensus span attributes.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:09 +00:00
Pratik Mankawde
f3905d9951 Phase 4: Consensus tracing — round lifecycle, proposals, validations
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:09 +00:00
Pratik Mankawde
571be8de5b Add transaction trace filters to Grafana Tempo datasource
Add xrpl.tx.hash (static), xrpl.tx.local and xrpl.tx.status
(dynamic) search filters for Phase 3 transaction span attributes.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:09 +00:00
Pratik Mankawde
89c0b16c92 Phase 3: Transaction tracing — protobuf context, PeerImp, NetworkOPs
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:09 +00:00
Pratik Mankawde
bf674d968d Fix gersemi cmake formatting in test CMakeLists
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:09 +00:00
Pratik Mankawde
49cc18b2ed Appendix: add Phase 2-5 task lists to document index
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:09 +00:00
Pratik Mankawde
61564c9131 Add RPC trace filters to Grafana Tempo datasource
Add xrpl.rpc.command (static), xrpl.rpc.status and xrpl.rpc.role
(dynamic) search filters for Phase 2 RPC span attributes.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:09 +00:00
Pratik Mankawde
964d823295 Phase 2: Complete RPC tracing — interface, macros, attributes, tests
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:02:09 +00:00
Pratik Mankawde
f9338b8919 Phase 1c: RPC layer telemetry integration
Add tracing instrumentation to the RPC request handling layer:

TracingInstrumentation.h:
- Convenience macros (XRPL_TRACE_RPC, XRPL_TRACE_TX, etc.) that
  create RAII SpanGuard objects when telemetry is enabled
- XRPL_TRACE_SET_ATTR / XRPL_TRACE_EXCEPTION for span enrichment
- Zero-overhead no-ops when XRPL_ENABLE_TELEMETRY is not defined
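
A hedged sketch of the enable/disable shape; the real macro bodies and the SpanGuard constructor in TracingInstrumentation.h may differ in detail:

```cpp
#ifdef XRPL_ENABLE_TELEMETRY
// Telemetry on: the macro declares an RAII guard that opens a span and ends
// it when the enclosing scope exits.
#define XRPL_TRACE_RPC(telemetry, name) \
    ::xrpl::telemetry::SpanGuard xrplTraceGuard_{(telemetry), (name)}
#define XRPL_TRACE_SET_ATTR(key, value) \
    xrplTraceGuard_.setAttribute((key), (value))
#else
// Telemetry off: everything expands to nothing, so there is zero overhead.
#define XRPL_TRACE_RPC(telemetry, name) ((void)0)
#define XRPL_TRACE_SET_ATTR(key, value) ((void)0)
#endif
```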

RPCHandler.cpp:
- Trace each RPC command with span name "rpc.command.<method>"
- Record command name, API version, role, and status as attributes
- Capture exceptions on the span for error visibility

ServerHandler.cpp:
- Trace HTTP requests ("rpc.request"), WebSocket messages
  ("rpc.ws_message"), and processRequest ("rpc.process")

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:01:54 +00:00
Pratik Mankawde
665660360c Phase 1a: OpenTelemetry plan documentation
Add comprehensive planning documentation for the OpenTelemetry
distributed tracing integration:

- Tracing fundamentals and concepts
- Architecture analysis of rippled's tracing surface area
- Design decisions and trade-offs
- Implementation strategy and code samples
- Configuration reference
- Implementation phases roadmap
- Observability backend comparison
- POC task list and presentation materials

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:01:54 +00:00
Pratik Mankawde
819dabdad0 Fix gersemi cmake formatting (if/endif spacing, target_link_libraries wrapping)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:00:54 +00:00
Pratik Mankawde
46794d549b Appendix: add presentation.md to document index
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:00:54 +00:00
Pratik Mankawde
84a9627298 Move presentation.md into OpenTelemetryPlan directory
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:00:54 +00:00
Pratik Mankawde
05cc50d728 Respect custom service_instance_id when set in [telemetry] config
Only override serviceInstanceId with the node public key when the user
hasn't explicitly set service_instance_id in the [telemetry] section.
This allows operators to assign human-friendly names (e.g. "validator-1")
while still defaulting to the node's base58 public key.
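
A hedged sketch of the precedence check; the config member names are stand-ins for the real TelemetryConfig fields:

```cpp
// Runs in Application::setup(), after nodeIdentity_ is known.
if (telemetryConfig.serviceInstanceId.empty())
{
    // No service_instance_id in [telemetry]: default to the node's base58
    // public key so each node still gets a unique, stable identity.
    telemetryConfig.serviceInstanceId =
        toBase58(TokenType::NodePublic, nodeIdentity_.first);
}
```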

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:00:54 +00:00
Pratik Mankawde
4357f176b2 Fix empty service.instance.id by deferring node identity injection
The Telemetry object is constructed in ApplicationImp's member initializer
list where nodeIdentity_ is not yet available, resulting in an empty
service.instance.id resource attribute. Add setServiceInstanceId() virtual
method that Application::setup() calls after nodeIdentity_ is known but
before telemetry_->start() creates the OTel resource.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:00:54 +00:00
Pratik Mankawde
1442465e1e Add node identification filters to Grafana Tempo datasource
Add resource-level search filters for node identity:
- service.instance.id (node public key) — unique node identifier
- service.version (rippled build version)
- xrpl.network.id (numeric network ID)
- xrpl.network.type (mainnet/testnet/devnet/standalone)

These enable filtering traces by specific nodes in multi-node
deployments and by network in mixed environments.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:00:54 +00:00
Pratik Mankawde
96ac67a6ea Add Tempo search filters and metrics generator for trace exploration
Configure Grafana Tempo datasource with pre-built search filters
(service.name, span name, status, duration) for the Explore UI.
Enable Tempo metrics_generator with service-graphs and span-metrics
processors to power Grafana's service map visualization.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:00:54 +00:00
Pratik Mankawde
2a95ea1f23 Add Grafana Tempo as second trace backend alongside Jaeger
- Add Tempo 2.7.2 service to docker-compose with local storage
- Add otlp/tempo exporter to OTel Collector traces pipeline
- Add Tempo Grafana datasource provisioning with node graph
- Update 05-configuration-reference.md examples with Tempo
- OTel Collector fans traces to both Jaeger and Tempo simultaneously

Jaeger provides a standalone UI at :16686 for quick lookups.
Tempo is queryable via Grafana Explore using TraceQL and is the
recommended backend for production (supports S3/GCS storage).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:00:54 +00:00
Pratik Mankawde
1c72714e02 Phase 1b: Telemetry core infrastructure
Add the OpenTelemetry telemetry library and supporting infrastructure:

Build system:
- Conan opentelemetry-cpp dependency with OTLP/gRPC exporter
- CMake integration for xrpl_telemetry library target
- Levelization ordering updates

Core library (libxrpl):
- Telemetry class: provider lifecycle, span creation, sampling config
- SpanGuard: RAII span management with attribute/exception helpers
- TelemetryConfig: parse [telemetry] config section
- NullTelemetry: no-op implementation when telemetry is disabled

Application integration:
- Telemetry member in ApplicationImp with start/stop lifecycle
- getTelemetry() interface on Application
- ServiceRegistry telemetry accessor

Docker observability stack:
- OTel Collector, Jaeger, Grafana docker-compose setup
- Collector config with OTLP gRPC receiver and Jaeger exporter

Config and docs:
- Example telemetry config section in xrpld-example.cfg
- Build documentation for telemetry setup

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 15:00:54 +00:00
Pratik Mankawde
fd18cf9e01 Merge remote-tracking branch 'origin/develop' into pratik/otel-phase1a-plan-docs 2026-03-11 14:58:44 +00:00
Alex Kremer
7b3724b7a3 fix: Add missed clang-tidy bugprone-inc-dec-conditions check (#6526) 2026-03-11 14:04:26 +00:00
Ayaz Salikhov
bee2d112c6 ci: Fix how clang-tidy is run when .clang-tidy is changed (#6521) 2026-03-11 14:18:18 +01:00
Bart
01c977bbfe ci: Fix rules used to determine when to upload Conan recipes (#6524)
The refs as previously used pointed to the source branch, not the target branch. However, determining the target branch is different depending on the GitHub event. The pull request logic was incorrect and needed to be fixed, and the logic inside the workflow could be simplified. Both modifications have been made in this commit.
2026-03-11 13:43:58 +01:00
Bart
3baf5454f2 ci: Only upload artifacts in the XRPLF/rippled repository (#6523)
This change will only attempt to upload artifacts for CI runs performed in the XRPLF/rippled repository.
2026-03-11 11:48:40 +01:00
Michael Legleux
24a5cbaa93 chore: Build voidstar on amd64 only (#6481)
* chore: Build voidstar on amd64 only

* Fatal error if configuring voidstar on non-x86
2026-03-10 23:59:43 +00:00
Michael Legleux
eb7c8c6c7a chore: Use CMake components for install (#6485)
* chore: Use components for install

* rm CMake export targets

* reformat
2026-03-10 23:38:43 +00:00
Alex Kremer
f27d8f3890 chore: Enable clang-tidy bugprone-inc-dec-in-conditions check (#6455) 2026-03-10 20:12:15 +00:00
Alex Kremer
8345cd77df chore: Enable clang-tidy bugprone-unused-raii check (#6505) 2026-03-10 19:48:56 +00:00
Alex Kremer
c38aabdaee chore: Enable clang-tidy bugprone-unhandled-self-assignment check (#6504) 2026-03-10 17:42:49 +00:00
Pratik Mankawde
d6bf13394e Appendix: add 00-tracing-fundamentals.md and POC_taskList.md to document index
Split document index into Plan Documents and Task Lists sections.
These files were introduced in this branch but missing from the index.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 19:03:09 +00:00
Pratik Mankawde
34243e0cc2 Phase 1a: OpenTelemetry plan documentation
Add comprehensive planning documentation for the OpenTelemetry
distributed tracing integration:

- Tracing fundamentals and concepts
- Architecture analysis of rippled's tracing surface area
- Design decisions and trade-offs
- Implementation strategy and code samples
- Configuration reference
- Implementation phases roadmap
- Observability backend comparison
- POC task list and presentation materials

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 19:03:09 +00:00
142 changed files with 27715 additions and 728 deletions

View File

@@ -14,6 +14,7 @@ Checks: "-*,
bugprone-fold-init-type,
bugprone-forward-declaration-namespace,
bugprone-inaccurate-erase,
bugprone-inc-dec-in-conditions,
bugprone-incorrect-enable-if,
bugprone-incorrect-roundings,
bugprone-infinite-loop,
@@ -64,8 +65,10 @@ Checks: "-*,
bugprone-undefined-memory-manipulation,
bugprone-undelegated-constructor,
bugprone-unhandled-exception-at-new,
bugprone-unhandled-self-assignment,
bugprone-unique-ptr-array-mismatch,
bugprone-unsafe-functions,
bugprone-unused-raii,
bugprone-unused-local-non-trivial-variable,
bugprone-virtual-near-miss,
cppcoreguidelines-no-suspend-with-lock,
@@ -95,13 +98,11 @@ Checks: "-*,
# checks that have some issues that need to be resolved:
#
# bugprone-crtp-constructor-accessibility,
# bugprone-inc-dec-in-conditions,
# bugprone-move-forwarding-reference,
# bugprone-switch-missing-default-case,
# bugprone-unused-raii,
# bugprone-unused-return-value,
# bugprone-use-after-move,
# bugprone-unhandled-self-assignment,
# bugprone-unused-raii,
#
# cppcoreguidelines-misleading-capture-default-by-value,
# cppcoreguidelines-init-variables,

View File

@@ -16,6 +16,9 @@ Loop: xrpld.app xrpld.rpc
Loop: xrpld.app xrpld.shamap
xrpld.shamap ~= xrpld.app
Loop: xrpld.app xrpld.telemetry
xrpld.telemetry ~= xrpld.app
Loop: xrpld.overlay xrpld.rpc
xrpld.rpc ~= xrpld.overlay

View File

@@ -32,6 +32,8 @@ libxrpl.server > xrpl.server
libxrpl.shamap > xrpl.basics
libxrpl.shamap > xrpl.protocol
libxrpl.shamap > xrpl.shamap
libxrpl.telemetry > xrpl.basics
libxrpl.telemetry > xrpl.telemetry
libxrpl.tx > xrpl.basics
libxrpl.tx > xrpl.conditions
libxrpl.tx > xrpl.core
@@ -172,8 +174,11 @@ test.toplevel > test.csf
test.toplevel > xrpl.json
test.unit_test > xrpl.basics
tests.libxrpl > xrpl.basics
tests.libxrpl > xrpl.core
tests.libxrpl > xrpld.telemetry
tests.libxrpl > xrpl.json
tests.libxrpl > xrpl.net
tests.libxrpl > xrpl.telemetry
xrpl.conditions > xrpl.basics
xrpl.conditions > xrpl.protocol
xrpl.core > xrpl.basics
@@ -206,6 +211,7 @@ xrpl.server > xrpl.shamap
xrpl.shamap > xrpl.basics
xrpl.shamap > xrpl.nodestore
xrpl.shamap > xrpl.protocol
xrpl.telemetry > xrpl.basics
xrpl.tx > xrpl.basics
xrpl.tx > xrpl.core
xrpl.tx > xrpl.ledger
@@ -224,6 +230,7 @@ xrpld.app > xrpl.rdb
xrpld.app > xrpl.resource
xrpld.app > xrpl.server
xrpld.app > xrpl.shamap
xrpld.app > xrpl.telemetry
xrpld.app > xrpl.tx
xrpld.consensus > xrpl.basics
xrpld.consensus > xrpl.json
@@ -238,6 +245,7 @@ xrpld.overlay > xrpl.basics
xrpld.overlay > xrpl.core
xrpld.overlay > xrpld.core
xrpld.overlay > xrpld.peerfinder
xrpld.overlay > xrpld.telemetry
xrpld.overlay > xrpl.json
xrpld.overlay > xrpl.protocol
xrpld.overlay > xrpl.rdb
@@ -251,10 +259,12 @@ xrpld.peerfinder > xrpl.rdb
xrpld.perflog > xrpl.basics
xrpld.perflog > xrpl.core
xrpld.perflog > xrpld.rpc
xrpld.perflog > xrpld.telemetry
xrpld.perflog > xrpl.json
xrpld.rpc > xrpl.basics
xrpld.rpc > xrpl.core
xrpld.rpc > xrpld.core
xrpld.rpc > xrpld.telemetry
xrpld.rpc > xrpl.json
xrpld.rpc > xrpl.ledger
xrpld.rpc > xrpl.net
@@ -265,3 +275,8 @@ xrpld.rpc > xrpl.resource
xrpld.rpc > xrpl.server
xrpld.rpc > xrpl.tx
xrpld.shamap > xrpl.shamap
xrpld.telemetry > xrpl.basics
xrpld.telemetry > xrpl.core
xrpld.telemetry > xrpl.nodestore
xrpld.telemetry > xrpl.server
xrpld.telemetry > xrpl.telemetry

View File

@@ -55,7 +55,7 @@ def generate_strategy_matrix(all: bool, config: Config) -> list:
# fee to 500.
# - Bookworm using GCC 15: Debug on linux/amd64, enable code
# coverage (which will be done below).
# - Bookworm using Clang 16: Debug on linux/arm64, enable voidstar.
# - Bookworm using Clang 16: Debug on linux/amd64, enable voidstar.
# - Bookworm using Clang 17: Release on linux/amd64, set the
# reference fee to 1000.
# - Bookworm using Clang 20: Debug on linux/amd64.
@@ -78,7 +78,7 @@ def generate_strategy_matrix(all: bool, config: Config) -> list:
if (
f"{os['compiler_name']}-{os['compiler_version']}" == "clang-16"
and build_type == "Debug"
and architecture["platform"] == "linux/arm64"
and architecture["platform"] == "linux/amd64"
):
cmake_args = f"-Dvoidstar=ON {cmake_args}"
skip = False

View File

@@ -141,9 +141,8 @@ jobs:
needs:
- should-run
- build-test
# Only run when committing to a PR that targets a release branch in the
# XRPLF repository.
if: ${{ github.repository_owner == 'XRPLF' && needs.should-run.outputs.go == 'true' && startsWith(github.ref, 'refs/heads/release') }}
# Only run when committing to a PR that targets a release branch.
if: ${{ github.repository == 'XRPLF/rippled' && needs.should-run.outputs.go == 'true' && github.event_name == 'pull_request' && startsWith(github.event.pull_request.base.ref, 'release') }}
uses: ./.github/workflows/reusable-upload-recipe.yml
secrets:
remote_username: ${{ secrets.CONAN_REMOTE_USERNAME }}

View File

@@ -17,8 +17,7 @@ defaults:
jobs:
upload-recipe:
# Only run when a tag is pushed to the XRPLF repository.
if: ${{ github.repository_owner == 'XRPLF' }}
if: ${{ github.repository == 'XRPLF/rippled' }}
uses: ./.github/workflows/reusable-upload-recipe.yml
secrets:
remote_username: ${{ secrets.CONAN_REMOTE_USERNAME }}

View File

@@ -92,8 +92,8 @@ jobs:
upload-recipe:
needs: build-test
# Only run when pushing to the develop branch in the XRPLF repository.
if: ${{ github.repository_owner == 'XRPLF' && github.event_name == 'push' && github.ref == 'refs/heads/develop' }}
# Only run when pushing to the develop branch.
if: ${{ github.repository == 'XRPLF/rippled' && github.event_name == 'push' && github.ref == 'refs/heads/develop' }}
uses: ./.github/workflows/reusable-upload-recipe.yml
secrets:
remote_username: ${{ secrets.CONAN_REMOTE_USERNAME }}

View File

@@ -101,7 +101,7 @@ jobs:
steps:
- name: Cleanup workspace (macOS and Windows)
if: ${{ runner.os == 'macOS' || runner.os == 'Windows' }}
uses: XRPLF/actions/cleanup-workspace@c7d9ce5ebb03c752a354889ecd870cadfc2b1cd4
uses: XRPLF/actions/cleanup-workspace@cf0433aa74563aead044a1e395610c96d65a37cf
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
@@ -176,7 +176,7 @@ jobs:
fi
- name: Upload the binary (Linux)
if: ${{ github.repository_owner == 'XRPLF' && runner.os == 'Linux' }}
if: ${{ github.repository == 'XRPLF/rippled' && runner.os == 'Linux' }}
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
with:
name: xrpld-${{ inputs.config_name }}
@@ -229,21 +229,8 @@ jobs:
env:
BUILD_NPROC: ${{ steps.nproc.outputs.nproc }}
run: |
set -o pipefail
./xrpld --unittest --unittest-jobs "${BUILD_NPROC}" 2>&1 | tee unittest.log
./xrpld --unittest --unittest-jobs "${BUILD_NPROC}"
- name: Show test failure summary
if: ${{ failure() && !inputs.build_only }}
working-directory: ${{ runner.os == 'Windows' && format('{0}/{1}', env.BUILD_DIR, inputs.build_type) || env.BUILD_DIR }}
run: |
if [ ! -f unittest.log ]; then
echo "unittest.log not found; embedded tests may not have run."
exit 0
fi
if ! grep -E "failed" unittest.log; then
echo "Log present but no failure lines found in unittest.log."
fi
- name: Debug failure (Linux)
if: ${{ failure() && runner.os == 'Linux' && !inputs.build_only }}
run: |
@@ -266,7 +253,7 @@ jobs:
--target coverage
- name: Upload coverage report
if: ${{ github.repository_owner == 'XRPLF' && !inputs.build_only && env.COVERAGE_ENABLED == 'true' }}
if: ${{ github.repository == 'XRPLF/rippled' && !inputs.build_only && env.COVERAGE_ENABLED == 'true' }}
uses: codecov/codecov-action@671740ac38dd9b0130fbe1cec585b89eea48d3de # v5.5.2
with:
disable_search: true

View File

@@ -78,9 +78,9 @@ jobs:
id: run_clang_tidy
continue-on-error: true
env:
TARGETS: ${{ inputs.files != '' && inputs.files || 'src tests' }}
FILES: ${{ inputs.files }}
run: |
run-clang-tidy -j ${{ steps.nproc.outputs.nproc }} -p "${BUILD_DIR}" ${TARGETS} 2>&1 | tee clang-tidy-output.txt
run-clang-tidy -j ${{ steps.nproc.outputs.nproc }} -p "$BUILD_DIR" $FILES 2>&1 | tee clang-tidy-output.txt
- name: Upload clang-tidy output
if: steps.run_clang_tidy.outcome != 'success'

View File

@@ -51,5 +51,5 @@ jobs:
if: ${{ always() && !cancelled() && (!inputs.check_only_changed || needs.determine-files.outputs.any_cpp_changed == 'true' || needs.determine-files.outputs.clang_tidy_config_changed == 'true') }}
uses: ./.github/workflows/reusable-clang-tidy-files.yml
with:
files: ${{ (needs.determine-files.outputs.clang_tidy_config_changed == 'true' && '') || (inputs.check_only_changed && needs.determine-files.outputs.all_changed_files || '') }}
files: ${{ needs.determine-files.outputs.clang_tidy_config_changed == 'true' && '' || (inputs.check_only_changed && needs.determine-files.outputs.all_changed_files || '') }}
create_issue_on_failure: ${{ inputs.create_issue_on_failure }}

View File

@@ -69,22 +69,28 @@ jobs:
conan export . --version=${{ steps.version.outputs.version }}
conan upload --confirm --check --remote="${REMOTE_NAME}" xrpl/${{ steps.version.outputs.version }}
# When this workflow is triggered by a push event, it will always be when merging into the
# 'develop' branch, see on-trigger.yml.
- name: Upload Conan recipe (develop)
if: ${{ github.ref == 'refs/heads/develop' }}
if: ${{ github.event_name == 'push' }}
env:
REMOTE_NAME: ${{ inputs.remote_name }}
run: |
conan export . --version=develop
conan upload --confirm --check --remote="${REMOTE_NAME}" xrpl/develop
# When this workflow is triggered by a pull request event, it will always be when merging into
# one of the 'release' branches, see on-pr.yml.
- name: Upload Conan recipe (rc)
if: ${{ startsWith(github.ref, 'refs/heads/release') }}
if: ${{ github.event_name == 'pull_request' }}
env:
REMOTE_NAME: ${{ inputs.remote_name }}
run: |
conan export . --version=rc
conan upload --confirm --check --remote="${REMOTE_NAME}" xrpl/rc
# When this workflow is triggered by a tag event, it will always be when tagging a final
# release, see on-tag.yml.
- name: Upload Conan recipe (release)
if: ${{ github.event_name == 'tag' }}
env:

View File

@@ -0,0 +1,194 @@
# Telemetry Validation CI Workflow
#
# Builds rippled with telemetry enabled, runs the multi-node workload
# harness, validates all telemetry data, and runs performance benchmarks.
#
# This is a separate workflow from the main CI. It runs:
# - On manual dispatch (workflow_dispatch)
# - On pushes to telemetry-related branches
#
# The workflow is intentionally heavyweight (builds rippled, starts Docker
# services, runs a multi-node cluster) — it validates the full telemetry
# stack end-to-end rather than individual unit tests.
#
# The build steps mirror the main CI pipeline (reusable-build-test-config.yml):
# - setup-conan action → build-deps action → cmake configure → cmake build
# This ensures dependency resolution, toolchain generation, and compiler
# flags are identical to what the PR workflow uses.
name: Telemetry Validation
on:
workflow_dispatch:
inputs:
rpc_rate:
description: "RPC load rate (requests per second)"
required: false
default: "50"
rpc_duration:
description: "RPC load duration (seconds)"
required: false
default: "120"
tx_tps:
description: "Transaction submit rate (TPS)"
required: false
default: "5"
tx_duration:
description: "Transaction submit duration (seconds)"
required: false
default: "120"
run_benchmark:
description: "Run performance benchmarks"
required: false
type: boolean
default: false
push:
branches:
- "pratik/otel-phase*"
- "feature/otel-*"
- "feature/telemetry-*"
paths:
- "docker/telemetry/**"
- "include/xrpl/basics/Telemetry*.h"
- "src/xrpld/app/misc/Telemetry*"
concurrency:
group: telemetry-validation-${{ github.ref }}
cancel-in-progress: true
env:
BUILD_DIR: build
jobs:
validate-telemetry:
name: Telemetry Stack Validation
runs-on: ubuntu-latest
timeout-minutes: 90
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Install system dependencies
run: |
sudo apt-get update
sudo apt-get install -y curl jq bc python3 python3-pip ninja-build
- name: Install Python dependencies
run: pip3 install -r docker/telemetry/workload/requirements.txt
- name: Install Conan
run: pip3 install conan
# ── Build steps (mirrors main CI: setup-conan → build-deps → cmake) ──
- name: Set up Conan
uses: ./.github/actions/setup-conan
- name: Cache Conan packages
uses: actions/cache@v4
with:
path: ~/.conan2/p
key: telemetry-conan-${{ runner.os }}-${{ hashFiles('conanfile.py') }}
restore-keys: |
telemetry-conan-${{ runner.os }}-
# Use the same build-deps action as the main CI pipeline.
# This runs conan install with --options:host='&:xrpld=True' which
# sets xrpld=ON and telemetry=True (default) in the generated
# CMake toolchain — no manual -D flags needed.
- name: Build dependencies
uses: ./.github/actions/build-deps
with:
build_nproc: 4
build_type: Release
- name: Configure CMake
working-directory: ${{ env.BUILD_DIR }}
run: |
cmake \
-G Ninja \
-DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
-DCMAKE_BUILD_TYPE=Release \
..
- name: Build xrpld
working-directory: ${{ env.BUILD_DIR }}
run: |
cmake \
--build . \
--config Release \
--parallel $(nproc) \
--target xrpld
# ── Telemetry validation steps ──
- name: Make scripts executable
run: chmod +x docker/telemetry/workload/*.sh
- name: Run full telemetry validation
id: validation
env:
XRPLD: ${{ env.BUILD_DIR }}/xrpld
run: |
ARGS="--xrpld ${{ env.BUILD_DIR }}/xrpld --skip-loki"
ARGS="$ARGS --rpc-rate ${{ github.event.inputs.rpc_rate || '50' }}"
ARGS="$ARGS --rpc-duration ${{ github.event.inputs.rpc_duration || '120' }}"
ARGS="$ARGS --tx-tps ${{ github.event.inputs.tx_tps || '5' }}"
ARGS="$ARGS --tx-duration ${{ github.event.inputs.tx_duration || '120' }}"
if [ "${{ github.event.inputs.run_benchmark }}" = "true" ]; then
ARGS="$ARGS --with-benchmark"
fi
docker/telemetry/workload/run-full-validation.sh $ARGS
continue-on-error: true
- name: Upload validation reports
if: always()
uses: actions/upload-artifact@v4
with:
name: telemetry-validation-reports
path: /tmp/xrpld-validation/reports/
retention-days: 30
- name: Upload node logs
if: failure()
uses: actions/upload-artifact@v4
with:
name: xrpld-node-logs
path: /tmp/xrpld-validation/node*/debug.log
retention-days: 7
- name: Print validation summary
if: always()
run: |
REPORT="/tmp/xrpld-validation/reports/validation-report.json"
if [ -f "$REPORT" ]; then
echo "## Telemetry Validation Results" >> "$GITHUB_STEP_SUMMARY"
echo "" >> "$GITHUB_STEP_SUMMARY"
TOTAL=$(jq '.summary.total' "$REPORT")
PASSED=$(jq '.summary.passed' "$REPORT")
FAILED=$(jq '.summary.failed' "$REPORT")
echo "| Metric | Value |" >> "$GITHUB_STEP_SUMMARY"
echo "|--------|-------|" >> "$GITHUB_STEP_SUMMARY"
echo "| Total Checks | $TOTAL |" >> "$GITHUB_STEP_SUMMARY"
echo "| Passed | $PASSED |" >> "$GITHUB_STEP_SUMMARY"
echo "| Failed | $FAILED |" >> "$GITHUB_STEP_SUMMARY"
echo "" >> "$GITHUB_STEP_SUMMARY"
if [ "$FAILED" -gt 0 ]; then
echo "### Failed Checks" >> "$GITHUB_STEP_SUMMARY"
echo "" >> "$GITHUB_STEP_SUMMARY"
jq -r '.checks[] | select(.passed == false) | "- **\(.name)**: \(.message)"' "$REPORT" >> "$GITHUB_STEP_SUMMARY"
fi
fi
- name: Cleanup
if: always()
run: |
docker/telemetry/workload/run-full-validation.sh --cleanup 2>/dev/null || true
- name: Check validation result
if: steps.validation.outcome == 'failure'
run: |
echo "Telemetry validation failed. Check the uploaded reports for details."
exit 1

View File

@@ -64,7 +64,7 @@ jobs:
steps:
- name: Cleanup workspace (macOS and Windows)
if: ${{ runner.os == 'macOS' || runner.os == 'Windows' }}
uses: XRPLF/actions/cleanup-workspace@c7d9ce5ebb03c752a354889ecd870cadfc2b1cd4
uses: XRPLF/actions/cleanup-workspace@cf0433aa74563aead044a1e395610c96d65a37cf
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
@@ -103,11 +103,11 @@ jobs:
sanitizers: ${{ matrix.sanitizers }}
- name: Log into Conan remote
if: ${{ github.repository_owner == 'XRPLF' && (github.event_name == 'push' || github.event_name == 'workflow_dispatch') }}
if: ${{ github.repository == 'XRPLF/rippled' && (github.event_name == 'push' || github.event_name == 'workflow_dispatch') }}
run: conan remote login "${CONAN_REMOTE_NAME}" "${{ secrets.CONAN_REMOTE_USERNAME }}" --password "${{ secrets.CONAN_REMOTE_PASSWORD }}"
- name: Upload Conan packages
if: ${{ github.repository_owner == 'XRPLF' && (github.event_name == 'push' || github.event_name == 'workflow_dispatch') }}
if: ${{ github.repository == 'XRPLF/rippled' && (github.event_name == 'push' || github.event_name == 'workflow_dispatch') }}
env:
FORCE_OPTION: ${{ github.event.inputs.force_upload == 'true' && '--force' || '' }}
run: conan upload "*" --remote="${CONAN_REMOTE_NAME}" --confirm ${FORCE_OPTION}

View File

@@ -117,6 +117,18 @@ if(rocksdb)
target_link_libraries(xrpl_libs INTERFACE RocksDB::rocksdb)
endif()
# OpenTelemetry distributed tracing (optional).
# When ON, links against opentelemetry-cpp and defines XRPL_ENABLE_TELEMETRY
# so that tracing macros in TracingInstrumentation.h are compiled in.
# When OFF (default), all tracing code compiles to no-ops with zero overhead.
# Enable via: conan install -o telemetry=True, or cmake -Dtelemetry=ON.
option(telemetry "Enable OpenTelemetry tracing" OFF)
if(telemetry)
find_package(opentelemetry-cpp CONFIG REQUIRED)
add_compile_definitions(XRPL_ENABLE_TELEMETRY)
message(STATUS "OpenTelemetry tracing enabled")
endif()
# Work around changes to Conan recipe for now.
if(TARGET nudb::core)
set(nudb nudb::core)
@@ -131,7 +143,6 @@ if(coverage)
include(XrplCov)
endif()
set(PROJECT_EXPORT_SET XrplExports)
include(XrplCore)
include(XrplInstall)
include(XrplValidatorKeys)

View File

@@ -251,29 +251,6 @@ pip3 install pre-commit
pre-commit install
```
## Clang-tidy
All code must pass `clang-tidy` checks according to the settings in [`.clang-tidy`](./.clang-tidy).
There is a Continuous Integration job that runs clang-tidy on pull requests. The CI will check:
- All changed C++ files (`.cpp`, `.h`, `.ipp`) when only code files are modified
- **All files in the repository** when the `.clang-tidy` configuration file is changed
This ensures that configuration changes don't introduce new warnings across the codebase.
### Running clang-tidy locally
Before running clang-tidy, you must build the project to generate required files (particularly protobuf headers). Refer to [`BUILD.md`](./BUILD.md) for build instructions.
Then run clang-tidy on your local changes:
```
run-clang-tidy -p build src tests
```
This will check all source files in the `src` and `tests` directories using the compile commands from your `build` directory.
## Contracts and instrumentation
We are using [Antithesis](https://antithesis.com/) for continuous fuzzing,

View File

@@ -0,0 +1,244 @@
# Distributed Tracing Fundamentals
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Next**: [Architecture Analysis](./01-architecture-analysis.md)
---
## What is Distributed Tracing?
Distributed tracing is a method for tracking a request or data object as it flows through a distributed system. In a network like the XRP Ledger, a single transaction touches multiple independent nodes that share neither memory nor logs. Distributed tracing connects these dots.
**Without tracing:** You see isolated logs on each node with no way to correlate them.
**With tracing:** You see the complete journey of a transaction or an event across all nodes it touched.
---
## Core Concepts
### 1. Trace
A **trace** represents the entire journey of a request through the system. It has a unique `trace_id` that stays constant across all nodes.
```
Trace ID: abc123
├── Node A: received transaction
├── Node B: relayed transaction
├── Node C: included in consensus
└── Node D: applied to ledger
```
### 2. Span
A **span** represents a single unit of work within a trace. Each span has:
| Attribute | Description | Example |
| ---------------- | --------------------- | -------------------------- |
| `trace_id` | Links to parent trace | `abc123` |
| `span_id` | Unique identifier | `span456` |
| `parent_span_id` | Parent span (if any) | `p_span123` |
| `name` | Operation name | `rpc.submit` |
| `start_time` | When work began | `2024-01-15T10:30:00Z` |
| `end_time` | When work completed | `2024-01-15T10:30:00.050Z` |
| `attributes` | Key-value metadata | `tx.hash=ABC...` |
| `status` | `OK` or `ERROR` (with message) | `OK` |
### 3. Trace Context
**Trace context** is the data that propagates between services to link spans together. It contains:
- `trace_id` - The trace this span belongs to
- `span_id` - The current span (becomes parent for child spans)
- `trace_flags` - Sampling decisions
---
## How Spans Form a Trace
Spans have parent-child relationships forming a tree structure:
```mermaid
flowchart TB
subgraph trace["Trace: abc123"]
A["tx.submit<br/>span_id: 001<br/>50ms"] --> B["tx.validate<br/>span_id: 002<br/>5ms"]
A --> C["tx.relay<br/>span_id: 003<br/>10ms"]
A --> D["tx.apply<br/>span_id: 004<br/>30ms"]
D --> E["ledger.update<br/>span_id: 005<br/>20ms"]
end
style A fill:#0d47a1,stroke:#082f6a,color:#ffffff
style B fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style C fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style D fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style E fill:#bf360c,stroke:#8c2809,color:#ffffff
```
The same trace visualized as a **timeline (Gantt chart)**:
```
Time → 0ms 10ms 20ms 30ms 40ms 50ms
├───────────────────────────────────────────┤
tx.submit│▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓│
├─────┤
tx.valid │▓▓▓▓▓│
│ ├──────────┤
tx.relay │ │▓▓▓▓▓▓▓▓▓▓│
│ ├────────────────────────────┤
tx.apply │ │▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓│
│ ├──────────────────┤
ledger │ │▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓│
```
---
## Distributed Traces Across Nodes
In distributed systems like rippled, traces span **multiple independent nodes**. The trace context must be propagated in network messages:
```mermaid
sequenceDiagram
participant Client
participant NodeA as Node A
participant NodeB as Node B
participant NodeC as Node C
Client->>NodeA: Submit TX<br/>(no trace context)
Note over NodeA: Creates new trace<br/>trace_id: abc123<br/>span: tx.receive
NodeA->>NodeB: Relay TX<br/>(trace_id: abc123, parent: 001)
Note over NodeB: Creates child span<br/>span: tx.relay<br/>parent_span_id: 001
NodeA->>NodeC: Relay TX<br/>(trace_id: abc123, parent: 001)
Note over NodeC: Creates child span<br/>span: tx.relay<br/>parent_span_id: 001
Note over NodeA,NodeC: All spans share trace_id: abc123<br/>enabling correlation across nodes
```
---
## Context Propagation
For traces to work across nodes, **trace context must be propagated** in messages.
### What's in the Context (~32 bytes)
| Field | Size | Description |
| ------------- | ---------- | ------------------------------------------------------- |
| `trace_id` | 16 bytes | Identifies the entire trace (constant across all nodes) |
| `span_id` | 8 bytes | The sender's current span (becomes parent on receiver) |
| `trace_flags` | 4 bytes | Sampling decision flags |
| `trace_state` | ~0-4 bytes | Optional vendor-specific data |
### How span_id Changes at Each Hop
Only **one** `span_id` travels in the context - the sender's current span. Each node:
1. Extracts the received `span_id` and uses it as the `parent_span_id`
2. Creates a **new** `span_id` for its own span
3. Sends its own `span_id` as the parent when forwarding
```
Node A Node B Node C
────── ────── ──────
Span AAA Span BBB Span CCC
│ │ │
▼ ▼ ▼
Context out: Context out: Context out:
├─ trace_id: abc123 ├─ trace_id: abc123 ├─ trace_id: abc123
├─ span_id: AAA ──────────► ├─ span_id: BBB ──────────► ├─ span_id: CCC ──────►
└─ flags: 01 └─ flags: 01 └─ flags: 01
│ │
parent = AAA parent = BBB
```
The `trace_id` stays constant, but `span_id` **changes at every hop** to maintain the parent-child chain.
### Propagation Formats
There are two patterns:
#### HTTP/RPC Headers (W3C Trace Context)
```
traceparent: 00-abc123def456-span789-01
│ │ │ │
│ │ │ └── Flags (sampled)
│ │ └── Parent span ID
│ └── Trace ID
└── Version
```
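A small, self-contained sketch of the per-hop rule described above: the outgoing `traceparent` keeps the incoming trace ID and flags, but carries a freshly generated span ID for the local span. The random ID generation here is purely illustrative; a real tracer uses its SDK's ID generator.
```cpp
#include <cstddef>
#include <cstdio>
#include <random>
#include <string>

// Illustrative only: generate a random lowercase-hex ID of `bytes` bytes.
std::string randomHexId(std::size_t bytes)
{
    static std::mt19937_64 rng{std::random_device{}()};
    std::string out;
    for (std::size_t i = 0; i < bytes; ++i)
    {
        char buf[3];
        std::snprintf(buf, sizeof(buf), "%02x", static_cast<unsigned>(rng() & 0xff));
        out += buf;
    }
    return out;
}

// Given a well-formed incoming header "00-<trace_id>-<parent_span_id>-<flags>",
// build the outgoing one: same trace_id and flags, new local span_id for this hop.
std::string nextHopTraceparent(std::string const& incoming)
{
    auto p1 = incoming.find('-');          // end of version
    auto p2 = incoming.find('-', p1 + 1);  // end of trace_id
    auto p3 = incoming.find('-', p2 + 1);  // end of parent span_id
    std::string traceId = incoming.substr(p1 + 1, p2 - p1 - 1);
    std::string flags = incoming.substr(p3 + 1);
    return "00-" + traceId + "-" + randomHexId(8) + "-" + flags;
}
```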
#### Protocol Buffers (rippled P2P messages)
```protobuf
message TMTransaction {
bytes rawTransaction = 1;
// ... existing fields ...
// Trace context extension
bytes trace_parent = 100; // W3C traceparent
bytes trace_state = 101; // W3C tracestate
}
```
---
## Sampling
Not every trace needs to be recorded. **Sampling** reduces overhead:
### Head Sampling (at trace start)
```
Request arrives → Random 10% chance → Record or skip entire trace
```
- ✅ Low overhead
- ❌ May miss interesting traces
### Tail Sampling (after trace completes)
```
Trace completes → Collector evaluates:
- Error? → KEEP
- Slow? → KEEP
- Normal? → Sample 10%
```
- ✅ Never loses important traces
- ❌ Higher memory usage at collector
---
## Key Benefits for rippled
| Challenge | How Tracing Helps |
| ---------------------------------- | ---------------------------------------- |
| "Where is my transaction?" | Follow trace across all nodes it touched |
| "Why was consensus slow?" | See timing breakdown of each phase |
| "Which node is the bottleneck?" | Compare span durations across nodes |
| "What happened during the outage?" | Correlate errors across the network |
---
## Glossary
| Term | Definition |
| ------------------- | --------------------------------------------------------------- |
| **Trace** | Complete journey of a request, identified by `trace_id` |
| **Span** | Single operation within a trace |
| **Context** | Data propagated between services (`trace_id`, `span_id`, flags) |
| **Instrumentation** | Code that creates spans and propagates context |
| **Collector** | Service that receives, processes, and exports traces |
| **Backend** | Storage/visualization system (Jaeger, Tempo, etc.) |
| **Head Sampling** | Sampling decision at trace start |
| **Tail Sampling** | Sampling decision after trace completes |
---
_Next: [Architecture Analysis](./01-architecture-analysis.md)_ | _Back to: [Overview](./OpenTelemetryPlan.md)_

View File

@@ -0,0 +1,330 @@
# Architecture Analysis
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Related**: [Design Decisions](./02-design-decisions.md) | [Implementation Strategy](./03-implementation-strategy.md)
---
## 1.1 Current rippled Architecture Overview
The rippled node software consists of several interconnected components that need instrumentation for distributed tracing:
```mermaid
flowchart TB
subgraph rippled["rippled Node"]
subgraph services["Core Services"]
RPC["RPC Server<br/>(HTTP/WS/gRPC)"]
Overlay["Overlay<br/>(P2P Network)"]
Consensus["Consensus<br/>(RCLConsensus)"]
end
JobQueue["JobQueue<br/>(Thread Pool)"]
subgraph processing["Processing Layer"]
NetworkOPs["NetworkOPs<br/>(Tx Processing)"]
LedgerMaster["LedgerMaster<br/>(Ledger Mgmt)"]
NodeStore["NodeStore<br/>(Database)"]
end
subgraph observability["Existing Observability"]
PerfLog["PerfLog<br/>(JSON)"]
Insight["Insight<br/>(StatsD)"]
Logging["Logging<br/>(Journal)"]
end
services --> JobQueue
JobQueue --> processing
end
style rippled fill:#424242,stroke:#212121,color:#ffffff
style services fill:#1565c0,stroke:#0d47a1,color:#ffffff
style processing fill:#2e7d32,stroke:#1b5e20,color:#ffffff
style observability fill:#e65100,stroke:#bf360c,color:#ffffff
```
---
## 1.2 Key Components for Instrumentation
| Component | Location | Purpose | Trace Value |
| ----------------- | ------------------------------------------ | ------------------------ | ---------------------------- |
| **Overlay** | `src/xrpld/overlay/` | P2P communication | Message propagation timing |
| **PeerImp** | `src/xrpld/overlay/detail/PeerImp.cpp` | Individual peer handling | Per-peer latency |
| **RCLConsensus** | `src/xrpld/app/consensus/RCLConsensus.cpp` | Consensus algorithm | Round timing, phase analysis |
| **NetworkOPs** | `src/xrpld/app/misc/NetworkOPs.cpp` | Transaction processing | Tx lifecycle tracking |
| **ServerHandler** | `src/xrpld/rpc/detail/ServerHandler.cpp` | RPC entry point | Request latency |
| **RPCHandler** | `src/xrpld/rpc/detail/RPCHandler.cpp` | Command execution | Per-command timing |
| **JobQueue**      | `src/xrpld/core/JobQueue.h`                | Async task execution     | Queue wait times             |
---
## 1.3 Transaction Flow Diagram
Transaction flow spans multiple nodes in the network. Each node creates linked spans to form a distributed trace:
```mermaid
sequenceDiagram
participant Client
participant PeerA as Peer A (Receive)
participant PeerB as Peer B (Relay)
participant PeerC as Peer C (Validate)
Client->>PeerA: 1. Submit TX
rect rgb(230, 245, 255)
Note over PeerA: tx.receive SPAN START
PeerA->>PeerA: HashRouter Deduplication
PeerA->>PeerA: tx.validate (child span)
end
PeerA->>PeerB: 2. Relay TX (with trace ctx)
rect rgb(230, 245, 255)
Note over PeerB: tx.receive (linked span)
end
PeerB->>PeerC: 3. Relay TX
rect rgb(230, 245, 255)
Note over PeerC: tx.receive (linked span)
PeerC->>PeerC: tx.process
end
Note over Client,PeerC: DISTRIBUTED TRACE (same trace_id: abc123)
```
### Trace Structure
```
trace_id: abc123
├── span: tx.receive (Peer A)
│ ├── span: tx.validate
│ └── span: tx.relay
├── span: tx.receive (Peer B) [parent: Peer A]
│ └── span: tx.relay
└── span: tx.receive (Peer C) [parent: Peer B]
└── span: tx.process
```
---
## 1.4 Consensus Round Flow
Consensus rounds are multi-phase operations that benefit significantly from tracing:
```mermaid
flowchart TB
subgraph round["consensus.round (root span)"]
attrs["Attributes:<br/>xrpl.consensus.ledger.seq = 12345678<br/>xrpl.consensus.mode = proposing<br/>xrpl.consensus.proposers = 35"]
subgraph open["consensus.phase.open"]
open_desc["Duration: ~3s<br/>Waiting for transactions"]
end
subgraph establish["consensus.phase.establish"]
est_attrs["proposals_received = 28<br/>disputes_resolved = 3"]
est_children["├── consensus.proposal.receive (×28)<br/>├── consensus.proposal.send (×1)<br/>└── consensus.dispute.resolve (×3)"]
end
subgraph accept["consensus.phase.accept"]
acc_attrs["transactions_applied = 150<br/>ledger.hash = DEF456..."]
acc_children["├── ledger.build<br/>└── ledger.validate"]
end
attrs --> open
open --> establish
establish --> accept
end
style round fill:#f57f17,stroke:#e65100,color:#ffffff
style open fill:#1565c0,stroke:#0d47a1,color:#ffffff
style establish fill:#2e7d32,stroke:#1b5e20,color:#ffffff
style accept fill:#c2185b,stroke:#880e4f,color:#ffffff
```
---
## 1.5 RPC Request Flow
RPC requests support W3C Trace Context headers for distributed tracing across services:
```mermaid
flowchart TB
subgraph request["rpc.request (root span)"]
http["HTTP Request<br/>POST /<br/>traceparent: 00-abc123...-def456...-01"]
attrs["Attributes:<br/>http.method = POST<br/>net.peer.ip = 192.168.1.100<br/>xrpl.rpc.command = submit"]
subgraph enqueue["jobqueue.enqueue"]
job_attr["xrpl.job.type = jtCLIENT_RPC"]
end
subgraph command["rpc.command.submit"]
cmd_attrs["xrpl.rpc.version = 2<br/>xrpl.rpc.role = user"]
cmd_children["├── tx.deserialize<br/>├── tx.validate_local<br/>└── tx.submit_to_network"]
end
response["Response: 200 OK<br/>Duration: 45ms"]
http --> attrs
attrs --> enqueue
enqueue --> command
command --> response
end
style request fill:#2e7d32,stroke:#1b5e20,color:#ffffff
style enqueue fill:#1565c0,stroke:#0d47a1,color:#ffffff
style command fill:#e65100,stroke:#bf360c,color:#ffffff
```
---
## 1.6 Key Trace Points
The following table identifies priority instrumentation points across the codebase:
| Category | Span Name | File | Method | Priority |
| --------------- | ---------------------- | -------------------- | ---------------------- | -------- |
| **Transaction** | `tx.receive` | `PeerImp.cpp` | `handleTransaction()` | High |
| **Transaction** | `tx.validate` | `NetworkOPs.cpp` | `processTransaction()` | High |
| **Transaction** | `tx.process` | `NetworkOPs.cpp` | `doTransactionSync()` | High |
| **Transaction** | `tx.relay` | `OverlayImpl.cpp` | `relay()` | Medium |
| **Consensus** | `consensus.round` | `RCLConsensus.cpp` | `startRound()` | High |
| **Consensus** | `consensus.phase.*` | `Consensus.h` | `timerEntry()` | High |
| **Consensus** | `consensus.proposal.*` | `RCLConsensus.cpp` | `peerProposal()` | Medium |
| **RPC** | `rpc.request` | `ServerHandler.cpp` | `onRequest()` | High |
| **RPC** | `rpc.command.*` | `RPCHandler.cpp` | `doCommand()` | High |
| **Peer** | `peer.connect` | `OverlayImpl.cpp` | `onHandoff()` | Low |
| **Peer** | `peer.message.*` | `PeerImp.cpp` | `onMessage()` | Low |
| **Ledger** | `ledger.acquire` | `InboundLedgers.cpp` | `acquire()` | Medium |
| **Ledger** | `ledger.build` | `RCLConsensus.cpp` | `buildLCL()` | High |
---
## 1.7 Instrumentation Priority
```mermaid
quadrantChart
title Instrumentation Priority Matrix
x-axis Low Complexity --> High Complexity
y-axis Low Value --> High Value
quadrant-1 Plan Carefully
quadrant-2 Implement First
quadrant-3 Quick Wins
quadrant-4 Consider Later
RPC Tracing: [0.3, 0.85]
Transaction Tracing: [0.65, 0.92]
Consensus Tracing: [0.75, 0.87]
Peer Message Tracing: [0.4, 0.3]
Ledger Acquisition: [0.5, 0.6]
JobQueue Tracing: [0.35, 0.5]
```
---
## 1.8 Observable Outcomes
After implementing OpenTelemetry, operators and developers will gain visibility into the following:
### 1.8.1 What You Will See: Traces
| Trace Type | Description | Example Query in Grafana/Tempo |
| -------------------------- | ------------------------------------------------------------------------------------------- | ------------------------------------------------------ |
| **Transaction Lifecycle** | Full journey from RPC submission through validation, relay, consensus, and ledger inclusion | `{service.name="rippled" && xrpl.tx.hash="ABC123..."}` |
| **Cross-Node Propagation** | Transaction path across multiple rippled nodes with timing | `{xrpl.tx.relay_count > 0}` |
| **Consensus Rounds** | Complete round with all phases (open, establish, accept) | `{span.name=~"consensus.round.*"}` |
| **RPC Request Processing** | Individual command execution with timing breakdown | `{xrpl.rpc.command="account_info"}` |
| **Ledger Acquisition** | Peer-to-peer ledger data requests and responses | `{span.name="ledger.acquire"}` |
### 1.8.2 What You Will See: Metrics (Derived from Traces)
| Metric | Description | Dashboard Panel |
| ----------------------------- | -------------------------------------- | --------------------------- |
| **RPC Latency (p50/p95/p99)** | Response time distribution per command | Heatmap by command |
| **Transaction Throughput** | Transactions processed per second | Time series graph |
| **Consensus Round Duration** | Time to complete consensus phases | Histogram |
| **Cross-Node Latency** | Time for transaction to reach N nodes | Line chart with percentiles |
| **Error Rate** | Failed transactions/RPC calls by type | Stacked bar chart |
### 1.8.3 Concrete Dashboard Examples
**Transaction Trace View (Jaeger/Tempo):**
```
┌────────────────────────────────────────────────────────────────────────────────┐
│ Trace: abc123... (Transaction Submission) Duration: 847ms │
├────────────────────────────────────────────────────────────────────────────────┤
│ ├── rpc.request [ServerHandler] ████░░░░░░ 45ms │
│ │ └── rpc.command.submit [RPCHandler] ████░░░░░░ 42ms │
│ │ └── tx.receive [NetworkOPs] ███░░░░░░░ 35ms │
│ │ ├── tx.validate [TxQ] █░░░░░░░░░ 8ms │
│ │ └── tx.relay [Overlay] ██░░░░░░░░ 15ms │
│ │ ├── tx.receive [Node-B] █████░░░░░ 52ms │
│ │ │ └── tx.relay [Node-B] ██░░░░░░░░ 18ms │
│ │ └── tx.receive [Node-C] ██████░░░░ 65ms │
│ └── consensus.round [RCLConsensus] ████████░░ 720ms │
│ ├── consensus.phase.open ██░░░░░░░░ 180ms │
│ ├── consensus.phase.establish █████░░░░░ 480ms │
│ └── consensus.phase.accept █░░░░░░░░░ 60ms │
└────────────────────────────────────────────────────────────────────────────────┘
```
**RPC Performance Dashboard Panel:**
```
┌─────────────────────────────────────────────────────────────┐
│ RPC Command Latency (Last 1 Hour) │
├─────────────────────────────────────────────────────────────┤
│ Command │ p50 │ p95 │ p99 │ Errors │ Rate │
│──────────────────┼────────┼────────┼────────┼────────┼──────│
│ account_info │ 12ms │ 45ms │ 89ms │ 0.1% │ 150/s│
│ submit │ 35ms │ 120ms │ 250ms │ 2.3% │ 45/s│
│ ledger │ 8ms │ 25ms │ 55ms │ 0.0% │ 80/s│
│ tx │ 15ms │ 50ms │ 100ms │ 0.5% │ 60/s│
│ server_info │ 5ms │ 12ms │ 20ms │ 0.0% │ 200/s│
└─────────────────────────────────────────────────────────────┘
```
**Consensus Health Dashboard Panel:**
```mermaid
---
config:
xyChart:
width: 1200
height: 400
plotReservedSpacePercent: 50
chartOrientation: vertical
themeVariables:
xyChart:
plotColorPalette: "#3498db"
---
xychart-beta
title "Consensus Round Duration (Last 24 Hours)"
x-axis "Time of Day (Hours)" [0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24]
y-axis "Duration (seconds)" 1 --> 5
line [2.1, 2.3, 2.5, 2.4, 2.8, 1.6, 3.2, 3.0, 3.5, 1.3, 3.8, 3.6, 4.0, 3.2, 4.3, 4.1, 4.5, 4.3, 4.2, 2.4, 4.8, 4.6, 4.9, 4.7, 5.0, 4.9, 4.8, 2.6, 4.7, 4.5, 4.2, 4.0, 2.5, 3.7, 3.2, 3.4, 2.9, 3.1, 2.6, 2.8, 2.3, 1.5, 2.7, 2.4, 2.5, 2.3, 2.2, 2.1, 2.0]
```
### 1.8.4 Operator Actionable Insights
| Scenario | What You'll See | Action |
| --------------------- | ---------------------------------------------------------------------------- | -------------------------------- |
| **Slow RPC** | Span showing which phase is slow (parsing, execution, serialization) | Optimize specific code path |
| **Transaction Stuck** | Trace stops at validation; error attribute shows reason | Fix transaction parameters |
| **Consensus Delay** | Phase.establish taking too long; proposer attribute shows missing validators | Investigate network connectivity |
| **Memory Spike** | Large batch of spans correlating with memory increase | Tune batch_size or sampling |
| **Network Partition** | Traces missing cross-node links for specific peer | Check peer connectivity |
### 1.8.5 Developer Debugging Workflow
1. **Find Transaction**: Query by `xrpl.tx.hash` to get full trace
2. **Identify Bottleneck**: Look at span durations to find slowest component
3. **Check Attributes**: Review `xrpl.tx.validity`, `xrpl.rpc.status` for errors
4. **Correlate Logs**: Use `trace_id` to find related PerfLog entries
5. **Compare Nodes**: Filter by `service.instance.id` to compare behavior across nodes
---
_Next: [Design Decisions](./02-design-decisions.md)_ | _Back to: [Overview](./OpenTelemetryPlan.md)_

View File

@@ -0,0 +1,498 @@
# Design Decisions
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Related**: [Architecture Analysis](./01-architecture-analysis.md) | [Code Samples](./04-code-samples.md)
---
## 2.1 OpenTelemetry Components
### 2.1.1 SDK Selection
**Primary Choice**: OpenTelemetry C++ SDK (`opentelemetry-cpp`)
| Component | Purpose | Required |
| --------------------------------------- | ---------------------- | ----------- |
| `opentelemetry-cpp::api` | Tracing API headers | Yes |
| `opentelemetry-cpp::sdk` | SDK implementation | Yes |
| `opentelemetry-cpp::ext` | Extensions (exporters) | Yes |
| `opentelemetry-cpp::otlp_grpc_exporter` | OTLP/gRPC export | Recommended |
| `opentelemetry-cpp::otlp_http_exporter` | OTLP/HTTP export | Alternative |
### 2.1.2 Instrumentation Strategy
**Manual Instrumentation** (recommended):
| Approach | Pros | Cons |
| ---------- | ----------------------------------------------------------------- | ------------------------------------------------------- |
| **Manual** | Precise control, optimized placement, rippled-specific attributes | More development effort |
| **Auto** | Less code, automatic coverage | Less control, potential overhead, limited customization |
---
## 2.2 Exporter Configuration
```mermaid
flowchart TB
subgraph nodes["rippled Nodes"]
node1["rippled<br/>Node 1"]
node2["rippled<br/>Node 2"]
node3["rippled<br/>Node 3"]
end
collector["OpenTelemetry<br/>Collector<br/>(sidecar or standalone)"]
subgraph backends["Observability Backends"]
jaeger["Jaeger<br/>(Dev)"]
tempo["Tempo<br/>(Prod)"]
elastic["Elastic<br/>APM"]
end
node1 -->|"OTLP/gRPC<br/>:4317"| collector
node2 -->|"OTLP/gRPC<br/>:4317"| collector
node3 -->|"OTLP/gRPC<br/>:4317"| collector
collector --> jaeger
collector --> tempo
collector --> elastic
style nodes fill:#0d47a1,stroke:#082f6a,color:#ffffff
style backends fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style collector fill:#bf360c,stroke:#8c2809,color:#ffffff
```
### 2.2.1 OTLP/gRPC (Recommended)
```cpp
// Configuration for OTLP over gRPC
namespace otlp = opentelemetry::exporter::otlp;
otlp::OtlpGrpcExporterOptions opts;
opts.endpoint = "localhost:4317";
opts.use_ssl_credentials = true;
opts.ssl_credentials_cacert_path = "/path/to/ca.crt";
```
### 2.2.2 OTLP/HTTP (Alternative)
```cpp
// Configuration for OTLP over HTTP
namespace otlp = opentelemetry::exporter::otlp;
otlp::OtlpHttpExporterOptions opts;
opts.url = "http://localhost:4318/v1/traces";
opts.content_type = otlp::HttpRequestContentType::kJson; // or kBinary
```
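Either exporter then plugs into the SDK's batch processor and tracer provider. The sketch below shows that wiring; the factory headers and exact signatures vary across opentelemetry-cpp releases, so treat it as illustrative rather than final.
```cpp
#include <opentelemetry/exporters/otlp/otlp_grpc_exporter_factory.h>
#include <opentelemetry/sdk/trace/batch_span_processor_factory.h>
#include <opentelemetry/sdk/trace/batch_span_processor_options.h>
#include <opentelemetry/sdk/trace/tracer_provider_factory.h>
#include <opentelemetry/trace/provider.h>
#include <memory>

void initTracing()
{
    namespace otlp = opentelemetry::exporter::otlp;
    namespace sdktrace = opentelemetry::sdk::trace;

    otlp::OtlpGrpcExporterOptions opts;
    opts.endpoint = "localhost:4317";

    // Exporter -> batch processor -> tracer provider
    auto exporter = otlp::OtlpGrpcExporterFactory::Create(opts);
    sdktrace::BatchSpanProcessorOptions bopts;  // defaults; see 03-implementation-strategy.md §3.7.2 for tuning
    auto processor =
        sdktrace::BatchSpanProcessorFactory::Create(std::move(exporter), bopts);
    std::shared_ptr<opentelemetry::trace::TracerProvider> provider =
        sdktrace::TracerProviderFactory::Create(std::move(processor));

    // Register globally so instrumentation can obtain tracers anywhere.
    opentelemetry::trace::Provider::SetTracerProvider(
        opentelemetry::nostd::shared_ptr<opentelemetry::trace::TracerProvider>(provider));
}
```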
---
## 2.3 Span Naming Conventions
### 2.3.1 Naming Schema
```
<component>.<operation>[.<sub-operation>]
```
**Examples**:
- `tx.receive` - Transaction received from peer
- `consensus.phase.establish` - Consensus establish phase
- `rpc.command.server_info` - server_info RPC command
### 2.3.2 Complete Span Catalog
```yaml
# Transaction Spans
tx:
receive: "Transaction received from network"
validate: "Transaction signature/format validation"
process: "Full transaction processing"
relay: "Transaction relay to peers"
apply: "Apply transaction to ledger"
# Consensus Spans
consensus:
round: "Complete consensus round"
phase:
open: "Open phase - collecting transactions"
establish: "Establish phase - reaching agreement"
accept: "Accept phase - applying consensus"
proposal:
receive: "Receive peer proposal"
send: "Send our proposal"
validation:
receive: "Receive peer validation"
send: "Send our validation"
# RPC Spans
rpc:
request: "HTTP/WebSocket request handling"
command:
"*": "Specific RPC command (dynamic)"
# Peer Spans
peer:
connect: "Peer connection establishment"
disconnect: "Peer disconnection"
message:
send: "Send protocol message"
receive: "Receive protocol message"
# Ledger Spans
ledger:
acquire: "Ledger acquisition from network"
build: "Build new ledger"
validate: "Ledger validation"
close: "Close ledger"
# Job Spans
job:
enqueue: "Job added to queue"
execute: "Job execution"
```
---
## 2.4 Attribute Schema
### 2.4.1 Resource Attributes (Set Once at Startup)
```cpp
// Standard OpenTelemetry semantic conventions
resource::SemanticConventions::SERVICE_NAME = "rippled"
resource::SemanticConventions::SERVICE_VERSION = BuildInfo::getVersionString()
resource::SemanticConventions::SERVICE_INSTANCE_ID = <node_public_key_base58>
// Custom rippled attributes
"xrpl.network.id" = <network_id> // e.g., 0 for mainnet
"xrpl.network.type" = "mainnet" | "testnet" | "devnet" | "standalone"
"xrpl.node.type" = "validator" | "stock" | "reporting"
"xrpl.node.cluster" = <cluster_name> // If clustered
```
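A sketch of how these resource attributes would be assembled with the SDK at startup; the helper and its parameters are placeholders for values rippled already has available:
```cpp
#include <opentelemetry/sdk/resource/resource.h>
#include <cstdint>
#include <string>

// Placeholder helper: the caller supplies the build version, node public key,
// and network ID that rippled already knows at startup.
opentelemetry::sdk::resource::Resource
makeResource(
    std::string const& version,
    std::string const& nodePublicKeyBase58,
    std::int64_t networkId)
{
    return opentelemetry::sdk::resource::Resource::Create({
        {"service.name", "rippled"},
        {"service.version", version.c_str()},
        {"service.instance.id", nodePublicKeyBase58.c_str()},
        {"xrpl.network.id", networkId},
        {"xrpl.network.type", "mainnet"},
        {"xrpl.node.type", "validator"},
    });
}
```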
### 2.4.2 Span Attributes by Category
#### Transaction Attributes
```cpp
"xrpl.tx.hash" = string // Transaction hash (hex)
"xrpl.tx.type" = string // "Payment", "OfferCreate", etc.
"xrpl.tx.account" = string // Source account (redacted in prod)
"xrpl.tx.sequence" = int64 // Account sequence number
"xrpl.tx.fee" = int64 // Fee in drops
"xrpl.tx.result" = string // "tesSUCCESS", "tecPATH_DRY", etc.
"xrpl.tx.ledger_index" = int64 // Ledger containing transaction
```
#### Consensus Attributes
```cpp
"xrpl.consensus.round" = int64 // Round number
"xrpl.consensus.phase" = string // "open", "establish", "accept"
"xrpl.consensus.mode" = string // "proposing", "observing", etc.
"xrpl.consensus.proposers" = int64 // Number of proposers
"xrpl.consensus.ledger.prev" = string // Previous ledger hash
"xrpl.consensus.ledger.seq" = int64 // Ledger sequence
"xrpl.consensus.tx_count" = int64 // Transactions in consensus set
"xrpl.consensus.duration_ms" = float64 // Round duration
```
#### RPC Attributes
```cpp
"xrpl.rpc.command" = string // Command name
"xrpl.rpc.version" = int64 // API version
"xrpl.rpc.role" = string // "admin" or "user"
"xrpl.rpc.params" = string // Sanitized parameters (optional)
```
#### Peer & Message Attributes
```cpp
"xrpl.peer.id" = string // Peer public key (base58)
"xrpl.peer.address" = string // IP:port
"xrpl.peer.latency_ms" = float64 // Measured latency
"xrpl.peer.cluster" = string // Cluster name if clustered
"xrpl.message.type" = string // Protocol message type name
"xrpl.message.size_bytes" = int64 // Message size
"xrpl.message.compressed" = bool // Whether compressed
```
#### Ledger & Job Attributes
```cpp
"xrpl.ledger.hash" = string // Ledger hash
"xrpl.ledger.index" = int64 // Ledger sequence/index
"xrpl.ledger.close_time" = int64 // Close time (epoch)
"xrpl.ledger.tx_count" = int64 // Transaction count
"xrpl.job.type" = string // Job type name
"xrpl.job.queue_ms" = float64 // Time spent in queue
"xrpl.job.worker" = int64 // Worker thread ID
```
### 2.4.3 Data Collection Summary
The following table summarizes what data is collected by category:
| Category | Attributes Collected | Purpose |
| --------------- | -------------------------------------------------------------------- | --------------------------- |
| **Transaction** | `tx.hash`, `tx.type`, `tx.result`, `tx.fee`, `ledger_index` | Trace transaction lifecycle |
| **Consensus** | `round`, `phase`, `mode`, `proposers` (public keys), `duration_ms` | Analyze consensus timing |
| **RPC** | `command`, `version`, `status`, `duration_ms` | Monitor RPC performance |
| **Peer** | `peer.id` (public key), `latency_ms`, `message.type`, `message.size` | Network topology analysis |
| **Ledger** | `ledger.hash`, `ledger.index`, `close_time`, `tx_count` | Ledger progression tracking |
| **Job** | `job.type`, `queue_ms`, `worker` | JobQueue performance |
### 2.4.4 Privacy & Sensitive Data Policy
OpenTelemetry instrumentation is designed to collect **operational metadata only**, never sensitive content.
#### Data NOT Collected
The following data is explicitly **excluded** from telemetry collection:
| Excluded Data | Reason |
| ----------------------- | ----------------------------------------- |
| **Private Keys** | Never exposed; not relevant to tracing |
| **Account Balances** | Financial data; privacy sensitive |
| **Transaction Amounts** | Financial data; privacy sensitive |
| **Raw TX Payloads** | May contain sensitive memo/data fields |
| **Personal Data** | No PII collected |
| **IP Addresses** | Configurable; excluded by default in prod |
#### Privacy Protection Mechanisms
| Mechanism | Description |
| ----------------------------- | ------------------------------------------------------------------------- |
| **Account Hashing** | `xrpl.tx.account` is hashed at collector level before storage |
| **Configurable Redaction** | Sensitive fields can be excluded via `[telemetry]` config section |
| **Sampling** | Only 10% of traces recorded by default, reducing data exposure |
| **Local Control** | Node operators have full control over what gets exported |
| **No Raw Payloads** | Transaction content is never recorded, only metadata (hash, type, result) |
| **Collector-Level Filtering** | Additional redaction/hashing can be configured at OTel Collector |
#### Collector-Level Data Protection
The OpenTelemetry Collector can be configured to hash or redact sensitive attributes before export:
```yaml
processors:
attributes:
actions:
# Hash account addresses before storage
- key: xrpl.tx.account
action: hash
# Remove IP addresses entirely
- key: xrpl.peer.address
action: delete
# Redact specific fields
- key: xrpl.rpc.params
action: delete
```
#### Configuration Options for Privacy
In `rippled.cfg`, operators can control data collection granularity:
```ini
[telemetry]
enabled=1
# Disable collection of specific components
trace_transactions=1
trace_consensus=1
trace_rpc=1
trace_peer=0 # Disable peer tracing (high volume, includes addresses)
# Redact specific attributes
redact_account=1 # Hash account addresses before export
redact_peer_address=1 # Remove peer IP addresses
```
> **Key Principle**: Telemetry collects **operational metadata** (timing, counts, hashes) — never **sensitive content** (keys, balances, amounts, raw payloads).
---
## 2.5 Context Propagation Design
### 2.5.1 Propagation Boundaries
```mermaid
flowchart TB
subgraph http["HTTP/WebSocket (RPC)"]
w3c["W3C Trace Context Headers:<br/>traceparent: 00-{trace_id}-{span_id}-{flags}<br/>tracestate: rippled=<state>"]
end
subgraph protobuf["Protocol Buffers (P2P)"]
proto["message TraceContext {<br/> bytes trace_id = 1; // 16 bytes<br/> bytes span_id = 2; // 8 bytes<br/> uint32 trace_flags = 3;<br/> string trace_state = 4;<br/>}"]
end
subgraph jobqueue["JobQueue (Internal Async)"]
job["Context captured at job creation,<br/>restored at execution<br/><br/>class Job {<br/> opentelemetry::context::Context traceContext_;<br/>};"]
end
style http fill:#0d47a1,stroke:#082f6a,color:#ffffff
style protobuf fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style jobqueue fill:#bf360c,stroke:#8c2809,color:#ffffff
```
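For the JobQueue case, the capture/restore pattern in the diagram can be expressed as a small wrapper around the job's work function. This is a hypothetical helper (not existing rippled code): the context active at enqueue time is captured by value and re-attached on the worker thread just before the job body runs.
```cpp
#include <opentelemetry/context/runtime_context.h>
#include <utility>

// Hypothetical helper: wrap a callable so the trace context from the
// enqueueing thread is restored for the duration of the call.
template <typename Fn>
auto withCapturedTraceContext(Fn&& fn)
{
    // Capture the current (enqueue-side) context by value.
    auto captured = opentelemetry::context::RuntimeContext::GetCurrent();
    return [captured, fn = std::forward<Fn>(fn)]() mutable {
        // Attach returns a token; the context is detached again when the
        // token goes out of scope at the end of this call.
        auto token = opentelemetry::context::RuntimeContext::Attach(captured);
        fn();
    };
}
```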
---
## 2.6 Integration with Existing Observability
### 2.6.1 Existing Frameworks Comparison
rippled already has two observability mechanisms. OpenTelemetry complements (not replaces) them:
| Aspect | PerfLog | Beast Insight (StatsD) | OpenTelemetry |
| --------------------- | ----------------------------- | ---------------------------- | ------------------------- |
| **Type** | Logging | Metrics | Distributed Tracing |
| **Data** | JSON log entries | Counters, gauges, histograms | Spans with context |
| **Scope** | Single node | Single node | **Cross-node** |
| **Output** | `perf.log` file | StatsD server | OTLP Collector |
| **Question answered** | "What happened on this node?" | "How many? How fast?" | "What was the journey?" |
| **Correlation** | By timestamp | By metric name | By `trace_id` |
| **Overhead** | Low (file I/O) | Low (UDP packets) | Low-Medium (configurable) |
### 2.6.2 What Each Framework Does Best
#### PerfLog
- **Purpose**: Detailed local event logging for RPC and job execution
- **Strengths**:
- Rich JSON output with timing data
- Already integrated in RPC handlers
- File-based, no external dependencies
- **Limitations**:
- Single-node only (no cross-node correlation)
- No parent-child relationships between events
- Manual log parsing required
```json
// Example PerfLog entry
{
"time": "2024-01-15T10:30:00.123Z",
"method": "submit",
"duration_us": 1523,
"result": "tesSUCCESS"
}
```
#### Beast Insight (StatsD)
- **Purpose**: Real-time metrics for monitoring dashboards
- **Strengths**:
- Aggregated metrics (counters, gauges, histograms)
- Low overhead (UDP, fire-and-forget)
- Good for alerting thresholds
- **Limitations**:
- No request-level detail
- No causal relationships
- Single-node perspective
```cpp
// Example StatsD usage in rippled
insight.increment("rpc.submit.count");
insight.gauge("ledger.age", age);
insight.timing("consensus.round", duration);
```
#### OpenTelemetry (NEW)
- **Purpose**: Distributed request tracing across nodes
- **Strengths**:
- **Cross-node correlation** via `trace_id`
- Parent-child span relationships
- Rich attributes per span
- Industry standard (CNCF)
- **Limitations**:
- Requires collector infrastructure
- Higher complexity than logging
```cpp
// Example OpenTelemetry span
auto span = telemetry.startSpan("tx.relay");
span->SetAttribute("tx.hash", hash);
span->SetAttribute("peer.id", peerId);
// Span automatically linked to parent via context
```
### 2.6.3 When to Use Each
| Scenario | PerfLog | StatsD | OpenTelemetry |
| --------------------------------------- | ---------- | ------ | ------------- |
| "How many TXs per second?" | ❌ | ✅ | ❌ |
| "What's the p99 RPC latency?" | ❌ | ✅ | ✅ |
| "Why was this specific TX slow?" | ⚠️ partial | ❌ | ✅ |
| "Which node delayed consensus?" | ❌ | ❌ | ✅ |
| "What happened on node X at time T?" | ✅ | ❌ | ✅ |
| "Show me the TX journey across 5 nodes" | ❌ | ❌ | ✅ |
### 2.6.4 Coexistence Strategy
> **Note**: Phase 7 replaces the StatsD bridge with native OTel Metrics SDK export. The diagram below shows the Phase 6 intermediate state. See [Phase7_taskList.md](./Phase7_taskList.md) for the migration design where Beast Insight emits via OTLP instead of StatsD.
```mermaid
flowchart TB
subgraph rippled["rippled Process"]
perflog["PerfLog<br/>(JSON to file)"]
insight["Beast Insight<br/>(StatsD)"]
otel["OpenTelemetry<br/>(Tracing)"]
end
perflog --> perffile["perf.log"]
insight --> statsd["StatsD Server"]
otel --> collector["OTLP Collector"]
perffile --> grafana["Grafana<br/>(Unified UI)"]
statsd --> grafana
collector --> grafana
style rippled fill:#212121,stroke:#0a0a0a,color:#ffffff
style grafana fill:#bf360c,stroke:#8c2809,color:#ffffff
```
**Phase 7 target state**: Beast Insight routes to `OTelCollector` (new `Collector` implementation) which exports via OTLP/HTTP to the same collector endpoint as traces. StatsD UDP path becomes a deprecated fallback (`[insight] server=statsd`). See [06-implementation-phases.md §6.8](./06-implementation-phases.md) and [Phase7_taskList.md](./Phase7_taskList.md) for details.
### 2.6.5 Correlation with PerfLog
Trace IDs can be correlated with existing PerfLog entries for comprehensive debugging:
```cpp
// In RPCHandler.cpp - correlate trace with PerfLog
Status doCommand(RPC::JsonContext& context, Json::Value& result)
{
// Start OpenTelemetry span
auto span = context.app.getTelemetry().startSpan(
"rpc.command." + context.method);
// Get trace ID for correlation
auto traceId = span->GetContext().trace_id().IsValid()
? toHex(span->GetContext().trace_id())
: "";
// Use existing PerfLog with trace correlation
auto const curId = context.app.getPerfLog().currentId();
context.app.getPerfLog().rpcStart(context.method, curId);
// Future: Add trace ID to PerfLog entry
// context.app.getPerfLog().setTraceId(curId, traceId);
try {
auto ret = handler(context, result);
context.app.getPerfLog().rpcFinish(context.method, curId);
span->SetStatus(opentelemetry::trace::StatusCode::kOk);
return ret;
} catch (std::exception const& e) {
context.app.getPerfLog().rpcError(context.method, curId);
span->RecordException(e);
span->SetStatus(opentelemetry::trace::StatusCode::kError, e.what());
throw;
}
}
```
---
_Previous: [Architecture Analysis](./01-architecture-analysis.md)_ | _Next: [Implementation Strategy](./03-implementation-strategy.md)_ | _Back to: [Overview](./OpenTelemetryPlan.md)_

View File

@@ -0,0 +1,451 @@
# Implementation Strategy
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Related**: [Code Samples](./04-code-samples.md) | [Configuration Reference](./05-configuration-reference.md)
---
## 3.1 Directory Structure
The telemetry implementation follows rippled's existing code organization pattern:
```
include/xrpl/
├── telemetry/
│ ├── Telemetry.h # Main telemetry interface
│ ├── TelemetryConfig.h # Configuration structures
│ ├── TraceContext.h # Context propagation utilities
│ ├── SpanGuard.h # RAII span management
│ └── SpanAttributes.h # Attribute helper functions
src/libxrpl/
├── telemetry/
│ ├── Telemetry.cpp # Implementation
│ ├── TelemetryConfig.cpp # Config parsing
│ ├── TraceContext.cpp # Context serialization
│ └── NullTelemetry.cpp # No-op implementation
src/xrpld/
├── telemetry/
│ ├── TracingInstrumentation.h # Instrumentation macros
│ └── TracingInstrumentation.cpp
```
---
## 3.2 Implementation Approach
<div align="center">
```mermaid
%%{init: {'flowchart': {'nodeSpacing': 20, 'rankSpacing': 30}}}%%
flowchart TB
subgraph phase1["Phase 1: Core"]
direction LR
sdk["SDK Integration"] ~~~ interface["Telemetry Interface"] ~~~ config["Configuration"]
end
subgraph phase2["Phase 2: RPC"]
direction LR
http["HTTP Context"] ~~~ rpc["RPC Handlers"]
end
subgraph phase3["Phase 3: P2P"]
direction LR
proto["Protobuf Context"] ~~~ tx["Transaction Relay"]
end
subgraph phase4["Phase 4: Consensus"]
direction LR
consensus["Consensus Rounds"] ~~~ proposals["Proposals"]
end
phase1 --> phase2 --> phase3 --> phase4
style phase1 fill:#1565c0,stroke:#0d47a1,color:#ffffff
style phase2 fill:#2e7d32,stroke:#1b5e20,color:#ffffff
style phase3 fill:#e65100,stroke:#bf360c,color:#ffffff
style phase4 fill:#c2185b,stroke:#880e4f,color:#ffffff
```
</div>
### Key Principles
1. **Minimal Intrusion**: Instrumentation should not alter existing control flow
2. **Zero-Cost When Disabled**: Use compile-time flags and no-op implementations
3. **Backward Compatibility**: Protocol Buffer extensions use high field numbers
4. **Graceful Degradation**: Tracing failures must not affect node operation
---
## 3.3 Performance Overhead Summary
| Metric | Overhead | Notes |
| ------------- | ---------- | ----------------------------------- |
| CPU | 1-3% | Span creation and attribute setting |
| Memory | 2-5 MB | Batch buffer for pending spans |
| Network | 10-50 KB/s | Compressed OTLP export to collector |
| Latency (p99) | <2% | With proper sampling configuration |
---
## 3.4 Detailed CPU Overhead Analysis
### 3.4.1 Per-Operation Costs
| Operation | Time (ns) | Frequency | Impact |
| --------------------- | --------- | ---------------------- | ---------- |
| Span creation | 200-500 | Every traced operation | Low |
| Span end | 100-200 | Every traced operation | Low |
| SetAttribute (string) | 80-120 | 3-5 per span | Low |
| SetAttribute (int) | 40-60 | 2-3 per span | Negligible |
| AddEvent | 50-80 | 0-2 per span | Negligible |
| Context injection | 150-250 | Per outgoing message | Low |
| Context extraction | 100-180 | Per incoming message | Low |
| GetCurrent context | 10-20 | Thread-local access | Negligible |
### 3.4.2 Transaction Processing Overhead
<div align="center">
```mermaid
%%{init: {'pie': {'textPosition': 0.75}}}%%
pie showData
"tx.receive (800ns)" : 800
"tx.validate (500ns)" : 500
"tx.relay (500ns)" : 500
"Context inject (600ns)" : 600
```
**Transaction Tracing Overhead (~2.4μs total)**
</div>
**Overhead percentage**: 2.4 μs / 200 μs (avg tx processing) = **~1.2%**
### 3.4.3 Consensus Round Overhead
| Operation | Count | Cost (ns) | Total |
| ---------------------- | ----- | --------- | ---------- |
| consensus.round span | 1 | ~1000 | ~1 μs |
| consensus.phase spans | 3 | ~700 | ~2.1 μs |
| proposal.receive spans | ~20 | ~600 | ~12 μs |
| proposal.send spans | ~3 | ~600 | ~1.8 μs |
| Context operations | ~30 | ~200 | ~6 μs |
| **TOTAL** | | | **~23 μs** |
**Overhead percentage**: 23 μs / 3s (typical round) = **~0.0008%** (negligible)
### 3.4.4 RPC Request Overhead
| Operation | Cost (ns) |
| ---------------- | ------------ |
| rpc.request span | ~700 |
| rpc.command span | ~600 |
| Context extract | ~250 |
| Context inject | ~200 |
| **TOTAL** | **~1.75 μs** |
- Fast RPC (1ms): 1.75 μs / 1ms = **~0.175%**
- Slow RPC (100ms): 1.75 μs / 100ms = **~0.002%**
---
## 3.5 Memory Overhead Analysis
### 3.5.1 Static Memory
| Component | Size | Allocated |
| ------------------------ | ----------- | ---------- |
| TracerProvider singleton | ~64 KB | At startup |
| BatchSpanProcessor | ~128 KB | At startup |
| OTLP exporter | ~256 KB | At startup |
| Propagator registry | ~8 KB | At startup |
| **Total static** | **~456 KB** | |
### 3.5.2 Dynamic Memory
| Component | Size per unit | Max units | Peak |
| -------------------- | ------------- | ---------- | ----------- |
| Active span | ~200 bytes | 1000 | ~200 KB |
| Queued span (export) | ~500 bytes | 2048 | ~1 MB |
| Attribute storage | ~50 bytes | 5 per span | Included |
| Context storage | ~64 bytes | Per thread | ~6.4 KB |
| **Total dynamic** | | | **~1.2 MB** |
### 3.5.3 Memory Growth Characteristics
```mermaid
---
config:
xyChart:
width: 700
height: 400
---
xychart-beta
title "Memory Usage vs Span Rate"
x-axis "Spans/second" [0, 200, 400, 600, 800, 1000]
y-axis "Memory (MB)" 0 --> 6
line [1, 1.8, 2.6, 3.4, 4.2, 5]
```
**Notes**:
- Memory increases linearly with span rate
- Batch export prevents unbounded growth
- Queue size is configurable (default 2048 spans)
- At queue limit, oldest spans are dropped (not blocked)
---
## 3.6 Network Overhead Analysis
### 3.6.1 Export Bandwidth
| Sampling Rate | Spans/sec | Bandwidth | Notes |
| ------------- | --------- | --------- | ---------------- |
| 100% | ~500 | ~250 KB/s | Development only |
| 10% | ~50 | ~25 KB/s | Staging |
| 1% | ~5 | ~2.5 KB/s | Production |
| Error-only | ~1 | ~0.5 KB/s | Minimal overhead |
### 3.6.2 Trace Context Propagation
| Message Type | Context Size | Messages/sec | Overhead |
| ---------------------- | ------------ | ------------ | ----------- |
| TMTransaction | 32 bytes | ~100 | ~3.2 KB/s |
| TMProposeSet | 32 bytes | ~10 | ~320 B/s |
| TMValidation | 32 bytes | ~50 | ~1.6 KB/s |
| **Total P2P overhead** | | | **~5 KB/s** |
---
## 3.7 Optimization Strategies
### 3.7.1 Sampling Strategies
```mermaid
flowchart TD
trace["New Trace"]
trace --> errors{"Is Error?"}
errors -->|Yes| sample["SAMPLE"]
errors -->|No| consensus{"Is Consensus?"}
consensus -->|Yes| sample
consensus -->|No| slow{"Is Slow?"}
slow -->|Yes| sample
slow -->|No| prob{"Random < 10%?"}
prob -->|Yes| sample
prob -->|No| drop["DROP"]
style sample fill:#4caf50,stroke:#388e3c,color:#fff
style drop fill:#f44336,stroke:#c62828,color:#fff
```
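The error/slow/consensus "keep" rules above are easiest to enforce as tail sampling in the Collector; the final probabilistic branch can run in-process. A minimal sketch of the in-process part using the SDK's ratio sampler follows (header paths and constructor signatures may differ between opentelemetry-cpp releases):
```cpp
#include <opentelemetry/sdk/trace/samplers/parent.h>
#include <opentelemetry/sdk/trace/samplers/trace_id_ratio.h>
#include <memory>

// Head sampler: respect the parent's decision when a trace context was
// propagated to us; otherwise sample ~10% of new traces by trace_id.
std::unique_ptr<opentelemetry::sdk::trace::Sampler>
makeHeadSampler(double ratio = 0.10)
{
    auto delegate =
        std::make_shared<opentelemetry::sdk::trace::TraceIdRatioBasedSampler>(ratio);
    return std::make_unique<opentelemetry::sdk::trace::ParentBasedSampler>(delegate);
}
```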
### 3.7.2 Batch Tuning Recommendations
| Environment | Batch Size | Batch Delay | Max Queue |
| ------------------ | ---------- | ----------- | --------- |
| Low-latency | 128 | 1000ms | 512 |
| High-throughput | 1024 | 10000ms | 8192 |
| Memory-constrained | 256 | 2000ms | 512 |
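As a sketch, the "High-throughput" row maps onto the SDK's batch processor options roughly as follows (field names as in opentelemetry-cpp; worth verifying against the SDK version in use):
```cpp
#include <opentelemetry/sdk/trace/batch_span_processor_options.h>
#include <chrono>

opentelemetry::sdk::trace::BatchSpanProcessorOptions
highThroughputOptions()
{
    opentelemetry::sdk::trace::BatchSpanProcessorOptions bopts;
    bopts.max_export_batch_size = 1024;                              // Batch Size
    bopts.schedule_delay_millis = std::chrono::milliseconds{10000};  // Batch Delay
    bopts.max_queue_size = 8192;                                     // Max Queue
    return bopts;
}
```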
### 3.7.3 Conditional Instrumentation
```cpp
// Compile-time feature flag
#ifndef XRPL_ENABLE_TELEMETRY
// Zero-cost when disabled
#define XRPL_TRACE_SPAN(t, n) ((void)0)
#endif
// Runtime component filtering
if (telemetry.shouldTracePeer())
{
XRPL_TRACE_SPAN(telemetry, "peer.message.receive");
// ... instrumentation
}
// No overhead when component tracing disabled
```
---
## 3.8 Links to Detailed Documentation
- **[Code Samples](./04-code-samples.md)**: Complete implementation code for all components
- **[Configuration Reference](./05-configuration-reference.md)**: Configuration options and collector setup
- **[Implementation Phases](./06-implementation-phases.md)**: Detailed timeline and milestones
---
## 3.9 Code Intrusiveness Assessment
This section provides a detailed assessment of how intrusive the OpenTelemetry integration is to the existing rippled codebase.
### 3.9.1 Files Modified Summary
| Component | Files Modified | Lines Added | Lines Changed | Architectural Impact |
| --------------------- | -------------- | ----------- | ------------- | -------------------- |
| **Core Telemetry** | 5 new files | ~800 | 0 | None (new module) |
| **Application Init** | 2 files | ~30 | ~5 | Minimal |
| **RPC Layer** | 3 files | ~80 | ~20 | Minimal |
| **Transaction Relay** | 4 files | ~120 | ~40 | Low |
| **Consensus** | 3 files | ~100 | ~30 | Low-Medium |
| **Protocol Buffers** | 1 file | ~25 | 0 | Low |
| **CMake/Build** | 3 files | ~50 | ~10 | Minimal |
| **Total** | **~21 files** | **~1,205** | **~105** | **Low** |
### 3.9.2 Detailed File Impact
```mermaid
pie title Code Changes by Component
"New Telemetry Module" : 800
"Transaction Relay" : 160
"Consensus" : 130
"RPC Layer" : 100
"Application Init" : 35
"Protocol Buffers" : 25
"Build System" : 60
```
#### New Files (No Impact on Existing Code)
| File | Lines | Purpose |
| ---------------------------------------------- | ----- | -------------------- |
| `include/xrpl/telemetry/Telemetry.h` | ~160 | Main interface |
| `include/xrpl/telemetry/SpanGuard.h` | ~120 | RAII wrapper |
| `include/xrpl/telemetry/TraceContext.h` | ~80 | Context propagation |
| `src/xrpld/telemetry/TracingInstrumentation.h` | ~60 | Macros |
| `src/libxrpl/telemetry/Telemetry.cpp` | ~200 | Implementation |
| `src/libxrpl/telemetry/TelemetryConfig.cpp` | ~60 | Config parsing |
| `src/libxrpl/telemetry/NullTelemetry.cpp` | ~40 | No-op implementation |
#### Modified Files (Existing Rippled Code)
| File | Lines Added | Lines Changed | Risk Level |
| ------------------------------------------------- | ----------- | ------------- | ---------- |
| `src/xrpld/app/main/Application.cpp` | ~15 | ~3 | Low |
| `include/xrpl/app/main/Application.h` | ~5 | ~2 | Low |
| `src/xrpld/rpc/detail/ServerHandler.cpp` | ~40 | ~10 | Low |
| `src/xrpld/rpc/handlers/*.cpp` | ~30 | ~8 | Low |
| `src/xrpld/overlay/detail/PeerImp.cpp` | ~60 | ~15 | Medium |
| `src/xrpld/overlay/detail/OverlayImpl.cpp` | ~30 | ~10 | Medium |
| `src/xrpld/app/consensus/RCLConsensus.cpp` | ~50 | ~15 | Medium |
| `src/xrpld/app/consensus/RCLConsensusAdaptor.cpp` | ~40 | ~12 | Medium |
| `src/xrpld/core/JobQueue.cpp` | ~20 | ~5 | Low |
| `src/xrpld/overlay/detail/ripple.proto` | ~25 | 0 | Low |
| `CMakeLists.txt` | ~40 | ~8 | Low |
| `cmake/FindOpenTelemetry.cmake` | ~50 | 0 | None (new) |
### 3.9.3 Risk Assessment by Component
<div align="center">
**Do First** ↖ ↗ **Plan Carefully**
```mermaid
quadrantChart
title Code Intrusiveness Risk Matrix
x-axis Low Risk --> High Risk
y-axis Low Value --> High Value
RPC Tracing: [0.2, 0.8]
Transaction Relay: [0.5, 0.9]
Consensus Tracing: [0.7, 0.95]
Peer Message Tracing: [0.8, 0.4]
JobQueue Context: [0.4, 0.5]
Ledger Acquisition: [0.5, 0.6]
```
**Optional** ↙ ↘ **Avoid**
</div>
#### Risk Level Definitions
| Risk Level | Definition | Mitigation |
| ---------- | ---------------------------------------------------------------- | ---------------------------------- |
| **Low** | Additive changes only; no modification to existing logic | Standard code review |
| **Medium** | Minor modifications to existing functions; clear boundaries | Comprehensive unit tests |
| **High** | Changes to core logic or data structures; potential side effects | Integration tests + staged rollout |
### 3.9.4 Architectural Impact Assessment
| Aspect | Impact | Justification |
| -------------------- | ------- | --------------------------------------------------------------------- |
| **Data Flow** | None | Tracing is purely observational; no business logic changes |
| **Threading Model** | Minimal | Context propagation uses thread-local storage (standard OTel pattern) |
| **Memory Model** | Low | Bounded queues prevent unbounded growth; RAII ensures cleanup |
| **Network Protocol** | Low | Optional fields in protobuf (high field numbers); backward compatible |
| **Configuration** | None | New config section; existing configs unaffected |
| **Build System** | Low | Optional CMake flag; builds work without OpenTelemetry |
| **Dependencies** | Low | OpenTelemetry SDK is optional; null implementation when disabled |
### 3.9.5 Backward Compatibility
| Compatibility | Status | Notes |
| --------------- | ------- | ----------------------------------------------------- |
| **Config File** | ✅ Full | New `[telemetry]` section is optional |
| **Protocol** | ✅ Full | Optional protobuf fields with high field numbers |
| **Build** | ✅ Full | `XRPL_ENABLE_TELEMETRY=OFF` produces identical binary |
| **Runtime** | ✅ Full | `enabled=0` produces zero overhead |
| **API** | ✅ Full | No changes to public RPC or P2P APIs |
### 3.9.6 Rollback Strategy
If issues are discovered after deployment:
1. **Immediate**: Set `enabled=0` in config and restart (zero code change)
2. **Quick**: Rebuild with `XRPL_ENABLE_TELEMETRY=OFF`
3. **Complete**: Revert telemetry commits (clean separation makes this easy)
### 3.9.7 Code Change Examples
**Minimal RPC Instrumentation (Low Intrusiveness):**
```cpp
// Before
void ServerHandler::onRequest(...) {
auto result = processRequest(req);
send(result);
}
// After (only ~10 lines added)
void ServerHandler::onRequest(...) {
XRPL_TRACE_RPC(app_.getTelemetry(), "rpc.request"); // +1 line
XRPL_TRACE_SET_ATTR("xrpl.rpc.command", command); // +1 line
auto result = processRequest(req);
XRPL_TRACE_SET_ATTR("xrpl.rpc.status", status); // +1 line
send(result);
}
```
**Consensus Instrumentation (Medium Intrusiveness):**
```cpp
// Before
void RCLConsensusAdaptor::startRound(...) {
// ... existing logic
}
// After (context storage required)
void RCLConsensusAdaptor::startRound(...) {
XRPL_TRACE_CONSENSUS(app_.getTelemetry(), "consensus.round");
XRPL_TRACE_SET_ATTR("xrpl.consensus.ledger.seq", seq);
// Store context for child spans in phase transitions
currentRoundContext_ = _xrpl_guard_->context(); // New member variable
// ... existing logic unchanged
}
```
---
_Previous: [Design Decisions](./02-design-decisions.md)_ | _Next: [Code Samples](./04-code-samples.md)_ | _Back to: [Overview](./OpenTelemetryPlan.md)_

View File

@@ -0,0 +1,982 @@
# Code Samples
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Related**: [Implementation Strategy](./03-implementation-strategy.md) | [Configuration Reference](./05-configuration-reference.md)
---
## 4.1 Core Interfaces
### 4.1.1 Main Telemetry Interface
```cpp
// include/xrpl/telemetry/Telemetry.h
#pragma once
#include <xrpl/telemetry/TelemetryConfig.h>
#include <opentelemetry/trace/tracer.h>
#include <opentelemetry/trace/span.h>
#include <opentelemetry/context/context.h>
#include <memory>
#include <string>
#include <string_view>
namespace xrpl {
namespace telemetry {
/**
* Main telemetry interface for OpenTelemetry integration.
*
* This class provides the primary API for distributed tracing in rippled.
* It manages the OpenTelemetry SDK lifecycle and provides convenience
* methods for creating spans and propagating context.
*/
class Telemetry
{
public:
/**
* Configuration for the telemetry system.
*/
struct Setup
{
bool enabled = false;
std::string serviceName = "rippled";
std::string serviceVersion;
std::string serviceInstanceId; // Node public key
// Exporter configuration
std::string exporterType = "otlp_grpc"; // "otlp_grpc", "otlp_http", "none"
std::string exporterEndpoint = "localhost:4317";
bool useTls = false;
std::string tlsCertPath;
// Sampling configuration
double samplingRatio = 1.0; // 1.0 = 100% sampling
// Batch processor settings
std::uint32_t batchSize = 512;
std::chrono::milliseconds batchDelay{5000};
std::uint32_t maxQueueSize = 2048;
// Network attributes
std::uint32_t networkId = 0;
std::string networkType = "mainnet";
// Component filtering
bool traceTransactions = true;
bool traceConsensus = true;
bool traceRpc = true;
bool tracePeer = false; // High volume, disabled by default
bool traceLedger = true;
};
virtual ~Telemetry() = default;
// ═══════════════════════════════════════════════════════════════════════
// LIFECYCLE
// ═══════════════════════════════════════════════════════════════════════
/** Start the telemetry system (call after configuration) */
virtual void start() = 0;
/** Stop the telemetry system (flushes pending spans) */
virtual void stop() = 0;
/** Check if telemetry is enabled */
virtual bool isEnabled() const = 0;
// ═══════════════════════════════════════════════════════════════════════
// TRACER ACCESS
// ═══════════════════════════════════════════════════════════════════════
/** Get the tracer for creating spans */
virtual opentelemetry::nostd::shared_ptr<opentelemetry::trace::Tracer>
getTracer(std::string_view name = "rippled") = 0;
// ═══════════════════════════════════════════════════════════════════════
// SPAN CREATION (Convenience Methods)
// ═══════════════════════════════════════════════════════════════════════
/** Start a new span with default options */
virtual opentelemetry::nostd::shared_ptr<opentelemetry::trace::Span>
startSpan(
std::string_view name,
opentelemetry::trace::SpanKind kind =
opentelemetry::trace::SpanKind::kInternal) = 0;
/** Start a span as child of given context */
virtual opentelemetry::nostd::shared_ptr<opentelemetry::trace::Span>
startSpan(
std::string_view name,
opentelemetry::context::Context const& parentContext,
opentelemetry::trace::SpanKind kind =
opentelemetry::trace::SpanKind::kInternal) = 0;
// ═══════════════════════════════════════════════════════════════════════
// CONTEXT PROPAGATION
// ═══════════════════════════════════════════════════════════════════════
/** Serialize context for network transmission */
virtual std::string serializeContext(
opentelemetry::context::Context const& ctx) = 0;
/** Deserialize context from network data */
virtual opentelemetry::context::Context deserializeContext(
std::string const& serialized) = 0;
// ═══════════════════════════════════════════════════════════════════════
// COMPONENT FILTERING
// ═══════════════════════════════════════════════════════════════════════
/** Check if transaction tracing is enabled */
virtual bool shouldTraceTransactions() const = 0;
/** Check if consensus tracing is enabled */
virtual bool shouldTraceConsensus() const = 0;
/** Check if RPC tracing is enabled */
virtual bool shouldTraceRpc() const = 0;
/** Check if peer message tracing is enabled */
virtual bool shouldTracePeer() const = 0;
};
// Factory functions
std::unique_ptr<Telemetry>
make_Telemetry(
Telemetry::Setup const& setup,
beast::Journal journal);
Telemetry::Setup
setup_Telemetry(
Section const& section,
std::string const& nodePublicKey,
std::string const& version);
} // namespace telemetry
} // namespace xrpl
```
---
## 4.2 RAII Span Guard
```cpp
// include/xrpl/telemetry/SpanGuard.h
#pragma once
#include <opentelemetry/trace/span.h>
#include <opentelemetry/trace/scope.h>
#include <opentelemetry/trace/status.h>
#include <opentelemetry/context/runtime_context.h>
#include <string_view>
#include <exception>
namespace xrpl {
namespace telemetry {
/**
* RAII guard for OpenTelemetry spans.
*
* Automatically ends the span on destruction and makes it the current
* span in the thread-local context.
*/
class SpanGuard
{
opentelemetry::nostd::shared_ptr<opentelemetry::trace::Span> span_;
opentelemetry::trace::Scope scope_;
public:
/**
* Construct guard with span.
* The span becomes the current span in thread-local context.
*/
explicit SpanGuard(
opentelemetry::nostd::shared_ptr<opentelemetry::trace::Span> span)
: span_(std::move(span))
, scope_(span_)
{
}
// Non-copyable, non-movable
SpanGuard(SpanGuard const&) = delete;
SpanGuard& operator=(SpanGuard const&) = delete;
SpanGuard(SpanGuard&&) = delete;
SpanGuard& operator=(SpanGuard&&) = delete;
~SpanGuard()
{
if (span_)
span_->End();
}
/** Access the underlying span */
opentelemetry::trace::Span& span() { return *span_; }
opentelemetry::trace::Span const& span() const { return *span_; }
/** Set span status to OK */
void setOk()
{
span_->SetStatus(opentelemetry::trace::StatusCode::kOk);
}
/** Set span status with code and description */
void setStatus(
opentelemetry::trace::StatusCode code,
std::string_view description = "")
{
span_->SetStatus(code, std::string(description));
}
/** Set an attribute on the span */
template<typename T>
void setAttribute(std::string_view key, T&& value)
{
span_->SetAttribute(
opentelemetry::nostd::string_view(key.data(), key.size()),
std::forward<T>(value));
}
/** Add an event to the span */
void addEvent(std::string_view name)
{
span_->AddEvent(std::string(name));
}
/** Record an exception on the span */
void recordException(std::exception const& e)
{
span_->RecordException(e);
span_->SetStatus(
opentelemetry::trace::StatusCode::kError,
e.what());
}
/** Get the current trace context */
opentelemetry::context::Context context() const
{
return opentelemetry::context::RuntimeContext::GetCurrent();
}
};
/**
* No-op span guard for when tracing is disabled.
* Provides the same interface but does nothing.
*/
class NullSpanGuard
{
public:
NullSpanGuard() = default;
void setOk() {}
void setStatus(opentelemetry::trace::StatusCode, std::string_view = "") {}
template<typename T>
void setAttribute(std::string_view, T&&) {}
void addEvent(std::string_view) {}
void recordException(std::exception const&) {}
};
} // namespace telemetry
} // namespace xrpl
```
---
## 4.3 Instrumentation Macros
```cpp
// src/xrpld/telemetry/TracingInstrumentation.h
#pragma once
#include <xrpl/telemetry/Telemetry.h>
#include <xrpl/telemetry/SpanGuard.h>
#include <optional>
namespace xrpl {
namespace telemetry {
// ═══════════════════════════════════════════════════════════════════════════
// INSTRUMENTATION MACROS
// ═══════════════════════════════════════════════════════════════════════════
#ifdef XRPL_ENABLE_TELEMETRY
// Start a span that is automatically ended when the guard goes out of scope.
// Uses std::optional so XRPL_TRACE_SET_ATTR / XRPL_TRACE_EXCEPTION below work
// uniformly with the conditional component variants.
#define XRPL_TRACE_SPAN(telemetry, name) \
    std::optional<::xrpl::telemetry::SpanGuard> _xrpl_guard_; \
    _xrpl_guard_.emplace((telemetry).startSpan(name))
// Start a span with a specific kind
#define XRPL_TRACE_SPAN_KIND(telemetry, name, kind) \
    std::optional<::xrpl::telemetry::SpanGuard> _xrpl_guard_; \
    _xrpl_guard_.emplace((telemetry).startSpan(name, kind))
// Conditional span based on component
#define XRPL_TRACE_TX(telemetry, name) \
std::optional<::xrpl::telemetry::SpanGuard> _xrpl_guard_; \
if ((telemetry).shouldTraceTransactions()) { \
_xrpl_guard_.emplace((telemetry).startSpan(name)); \
}
#define XRPL_TRACE_CONSENSUS(telemetry, name) \
std::optional<::xrpl::telemetry::SpanGuard> _xrpl_guard_; \
if ((telemetry).shouldTraceConsensus()) { \
_xrpl_guard_.emplace((telemetry).startSpan(name)); \
}
#define XRPL_TRACE_RPC(telemetry, name) \
std::optional<::xrpl::telemetry::SpanGuard> _xrpl_guard_; \
if ((telemetry).shouldTraceRpc()) { \
_xrpl_guard_.emplace((telemetry).startSpan(name)); \
}
// Set attribute on current span (if exists)
#define XRPL_TRACE_SET_ATTR(key, value) \
if (_xrpl_guard_.has_value()) { \
_xrpl_guard_->setAttribute(key, value); \
}
// Record exception on current span
#define XRPL_TRACE_EXCEPTION(e) \
if (_xrpl_guard_.has_value()) { \
_xrpl_guard_->recordException(e); \
}
#else // XRPL_ENABLE_TELEMETRY not defined
#define XRPL_TRACE_SPAN(telemetry, name) ((void)0)
#define XRPL_TRACE_SPAN_KIND(telemetry, name, kind) ((void)0)
#define XRPL_TRACE_TX(telemetry, name) ((void)0)
#define XRPL_TRACE_CONSENSUS(telemetry, name) ((void)0)
#define XRPL_TRACE_RPC(telemetry, name) ((void)0)
#define XRPL_TRACE_SET_ATTR(key, value) ((void)0)
#define XRPL_TRACE_EXCEPTION(e) ((void)0)
#endif // XRPL_ENABLE_TELEMETRY
} // namespace telemetry
} // namespace xrpl
```
---
## 4.4 Protocol Buffer Extensions
### 4.4.1 TraceContext Message Definition
Add to `src/xrpld/overlay/detail/ripple.proto`:
```protobuf
// Trace context for distributed tracing across nodes
// Uses W3C Trace Context format internally
message TraceContext {
// 16-byte trace identifier (required for valid context)
bytes trace_id = 1;
// 8-byte span identifier of parent span
bytes span_id = 2;
// Trace flags (bit 0 = sampled)
uint32 trace_flags = 3;
// W3C tracestate header value for vendor-specific data
string trace_state = 4;
}
// Extend existing messages with optional trace context
// High field numbers (1000+) to avoid conflicts
message TMTransaction {
// ... existing fields ...
// Optional trace context for distributed tracing
optional TraceContext trace_context = 1001;
}
message TMProposeSet {
// ... existing fields ...
optional TraceContext trace_context = 1001;
}
message TMValidation {
// ... existing fields ...
optional TraceContext trace_context = 1001;
}
message TMGetLedger {
// ... existing fields ...
optional TraceContext trace_context = 1001;
}
message TMLedgerData {
// ... existing fields ...
optional TraceContext trace_context = 1001;
}
```
### 4.4.2 Context Serialization/Deserialization
```cpp
// include/xrpl/telemetry/TraceContext.h
#pragma once
#include <opentelemetry/context/context.h>
#include <opentelemetry/trace/span_context.h>
#include <protocol/messages.h> // Generated protobuf
#include <functional>
#include <optional>
#include <string>
#include <string_view>
namespace xrpl {
namespace telemetry {
/**
* Utilities for trace context serialization and propagation.
*/
class TraceContextPropagator
{
public:
/**
* Extract trace context from Protocol Buffer message.
* Returns empty context if no trace info present.
*/
static opentelemetry::context::Context
extract(protocol::TraceContext const& proto);
/**
* Inject current trace context into Protocol Buffer message.
*/
static void
inject(
opentelemetry::context::Context const& ctx,
protocol::TraceContext& proto);
/**
* Extract trace context from HTTP headers (for RPC).
* Supports W3C Trace Context (traceparent, tracestate).
*/
static opentelemetry::context::Context
extractFromHeaders(
std::function<std::optional<std::string>(std::string_view)> headerGetter);
/**
* Inject trace context into HTTP headers (for RPC responses).
*/
static void
injectToHeaders(
opentelemetry::context::Context const& ctx,
std::function<void(std::string_view, std::string_view)> headerSetter);
};
// ═══════════════════════════════════════════════════════════════════════════
// IMPLEMENTATION
// ═══════════════════════════════════════════════════════════════════════════
inline opentelemetry::context::Context
TraceContextPropagator::extract(protocol::TraceContext const& proto)
{
using namespace opentelemetry::trace;
if (proto.trace_id().size() != 16 || proto.span_id().size() != 8)
return opentelemetry::context::Context{}; // Invalid, return empty
// Construct TraceId and SpanId from bytes
TraceId traceId(opentelemetry::nostd::span<uint8_t const, TraceId::kSize>(
reinterpret_cast<uint8_t const*>(proto.trace_id().data()), TraceId::kSize));
SpanId spanId(opentelemetry::nostd::span<uint8_t const, SpanId::kSize>(
reinterpret_cast<uint8_t const*>(proto.span_id().data()), SpanId::kSize));
TraceFlags flags(static_cast<uint8_t>(proto.trace_flags()));
// Create SpanContext from extracted data
SpanContext spanContext(traceId, spanId, flags, /* remote = */ true);
// Create context with extracted span as parent
return opentelemetry::context::Context{}.SetValue(
opentelemetry::trace::kSpanKey,
opentelemetry::nostd::shared_ptr<Span>(
new DefaultSpan(spanContext)));
}
inline void
TraceContextPropagator::inject(
opentelemetry::context::Context const& ctx,
protocol::TraceContext& proto)
{
using namespace opentelemetry::trace;
// Get current span from context
auto span = GetSpan(ctx);
if (!span)
return;
auto const& spanCtx = span->GetContext();
if (!spanCtx.IsValid())
return;
// Serialize trace_id (16 bytes)
auto const& traceId = spanCtx.trace_id();
proto.set_trace_id(traceId.Id().data(), TraceId::kSize);
// Serialize span_id (8 bytes)
auto const& spanId = spanCtx.span_id();
proto.set_span_id(spanId.Id().data(), SpanId::kSize);
// Serialize flags
proto.set_trace_flags(spanCtx.trace_flags().flags());
// Note: tracestate not implemented yet
}
} // namespace telemetry
} // namespace xrpl
```
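The class above also declares `extractFromHeaders` and `injectToHeaders`, but only the Protocol Buffer path is implemented here. For orientation, a minimal sketch of the W3C `traceparent` parsing side is shown below; the offsets and helper lambda are illustrative assumptions, not the final implementation, and the function would sit alongside the other inline definitions inside `namespace xrpl::telemetry`:
```cpp
// Illustrative sketch only: parse a W3C traceparent header of the form
//   00-<32 hex trace-id>-<16 hex span-id>-<2 hex flags>
// and wrap it as a remote parent context. Error handling is intentionally minimal.
inline opentelemetry::context::Context
TraceContextPropagator::extractFromHeaders(
    std::function<std::optional<std::string>(std::string_view)> headerGetter)
{
    using namespace opentelemetry::trace;
    auto const header = headerGetter("traceparent");
    if (!header || header->size() < 55 || (*header)[2] != '-' ||
        (*header)[35] != '-' || (*header)[52] != '-')
        return opentelemetry::context::Context{};

    // Decode n bytes from 2n lowercase hex characters; returns false on invalid input.
    auto const hexToBytes = [](std::string_view hex, uint8_t* out, std::size_t n) {
        auto nibble = [](char c) -> int {
            if (c >= '0' && c <= '9') return c - '0';
            if (c >= 'a' && c <= 'f') return c - 'a' + 10;
            return -1;
        };
        for (std::size_t i = 0; i < n; ++i)
        {
            int const hi = nibble(hex[2 * i]);
            int const lo = nibble(hex[2 * i + 1]);
            if (hi < 0 || lo < 0)
                return false;
            out[i] = static_cast<uint8_t>((hi << 4) | lo);
        }
        return true;
    };

    uint8_t traceIdBytes[16], spanIdBytes[8], flagsByte[1];
    std::string_view const sv{*header};
    if (!hexToBytes(sv.substr(3, 32), traceIdBytes, 16) ||
        !hexToBytes(sv.substr(36, 16), spanIdBytes, 8) ||
        !hexToBytes(sv.substr(53, 2), flagsByte, 1))
        return opentelemetry::context::Context{};

    SpanContext const spanContext(
        TraceId{traceIdBytes},
        SpanId{spanIdBytes},
        TraceFlags{flagsByte[0]},
        /* remote = */ true);
    return opentelemetry::context::Context{}.SetValue(
        kSpanKey,
        opentelemetry::nostd::shared_ptr<Span>(new DefaultSpan(spanContext)));
}
```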
---
## 4.5 Module-Specific Instrumentation
### 4.5.1 Transaction Relay Instrumentation
```cpp
// src/xrpld/overlay/detail/PeerImp.cpp (modified)
#include <xrpl/telemetry/TracingInstrumentation.h>
void
PeerImp::handleTransaction(
std::shared_ptr<protocol::TMTransaction> const& m)
{
// Extract trace context from incoming message
opentelemetry::context::Context parentCtx;
if (m->has_trace_context())
{
parentCtx = telemetry::TraceContextPropagator::extract(
m->trace_context());
}
// Start span as child of remote span (cross-node link)
auto span = app_.getTelemetry().startSpan(
"tx.receive",
parentCtx,
opentelemetry::trace::SpanKind::kServer);
telemetry::SpanGuard guard(span);
try
{
// Parse and validate transaction
SerialIter sit(makeSlice(m->rawtransaction()));
auto stx = std::make_shared<STTx const>(sit);
// Add transaction attributes
guard.setAttribute("xrpl.tx.hash", to_string(stx->getTransactionID()));
guard.setAttribute("xrpl.tx.type", stx->getTxnType());
guard.setAttribute("xrpl.peer.id", remote_address_.to_string());
// Check if we've seen this transaction (HashRouter)
auto const [flags, suppressed] =
app_.getHashRouter().addSuppressionPeer(
stx->getTransactionID(),
id_);
if (suppressed)
{
guard.setAttribute("xrpl.tx.suppressed", true);
guard.addEvent("tx.duplicate");
return; // Already processing this transaction
}
// Create child span for validation
{
auto validateSpan = app_.getTelemetry().startSpan("tx.validate");
telemetry::SpanGuard validateGuard(validateSpan);
auto [validity, reason] = checkTransaction(stx);
validateGuard.setAttribute("xrpl.tx.validity",
validity == Validity::Valid ? "valid" : "invalid");
if (validity != Validity::Valid)
{
validateGuard.setStatus(
opentelemetry::trace::StatusCode::kError,
reason);
return;
}
}
// Relay to other peers (capture context for propagation)
auto ctx = guard.context();
// Create child span for relay
auto relaySpan = app_.getTelemetry().startSpan(
"tx.relay",
ctx,
opentelemetry::trace::SpanKind::kClient);
{
telemetry::SpanGuard relayGuard(relaySpan);
// Inject context into outgoing message
protocol::TraceContext protoCtx;
telemetry::TraceContextPropagator::inject(
relayGuard.context(), protoCtx);
// Relay to other peers ('exclusions' and 'relayCount' come from the existing relay logic)
app_.overlay().relay(
stx->getTransactionID(),
*m,
protoCtx, // Pass trace context
exclusions);
relayGuard.setAttribute("xrpl.tx.relay_count",
static_cast<int64_t>(relayCount));
}
guard.setOk();
}
catch (std::exception const& e)
{
guard.recordException(e);
JLOG(journal_.warn()) << "Transaction handling failed: " << e.what();
}
}
```
### 4.5.2 Consensus Instrumentation
```cpp
// src/xrpld/app/consensus/RCLConsensus.cpp (modified)
#include <xrpl/telemetry/TracingInstrumentation.h>
void
RCLConsensusAdaptor::startRound(
NetClock::time_point const& now,
RCLCxLedger::ID const& prevLedgerHash,
RCLCxLedger const& prevLedger,
hash_set<NodeID> const& peers,
bool proposing)
{
XRPL_TRACE_CONSENSUS(app_.getTelemetry(), "consensus.round");
XRPL_TRACE_SET_ATTR("xrpl.consensus.ledger.prev", to_string(prevLedgerHash));
XRPL_TRACE_SET_ATTR("xrpl.consensus.ledger.seq",
static_cast<int64_t>(prevLedger.seq() + 1));
XRPL_TRACE_SET_ATTR("xrpl.consensus.proposers",
static_cast<int64_t>(peers.size()));
XRPL_TRACE_SET_ATTR("xrpl.consensus.mode",
proposing ? "proposing" : "observing");
// Store trace context for use in phase transitions
// (_xrpl_guard_ only exists when the tracing macros are compiled in)
#ifdef XRPL_ENABLE_TELEMETRY
currentRoundContext_ = _xrpl_guard_.has_value()
? _xrpl_guard_->context()
: opentelemetry::context::Context{};
#endif
// ... existing implementation ...
}
ConsensusPhase
RCLConsensusAdaptor::phaseTransition(ConsensusPhase newPhase)
{
// Create span for phase transition
auto span = app_.getTelemetry().startSpan(
"consensus.phase." + to_string(newPhase),
currentRoundContext_);
telemetry::SpanGuard guard(span);
guard.setAttribute("xrpl.consensus.phase", to_string(newPhase));
guard.addEvent("phase.enter");
auto const startTime = std::chrono::steady_clock::now();
try
{
auto result = doPhaseTransition(newPhase);
auto const duration = std::chrono::steady_clock::now() - startTime;
guard.setAttribute("xrpl.consensus.phase_duration_ms",
std::chrono::duration<double, std::milli>(duration).count());
guard.setOk();
return result;
}
catch (std::exception const& e)
{
guard.recordException(e);
throw;
}
}
void
RCLConsensusAdaptor::peerProposal(
NetClock::time_point const& now,
RCLCxPeerPos const& proposal)
{
// Extract trace context from proposal message
opentelemetry::context::Context parentCtx;
if (proposal.hasTraceContext())
{
parentCtx = telemetry::TraceContextPropagator::extract(
proposal.traceContext());
}
auto span = app_.getTelemetry().startSpan(
"consensus.proposal.receive",
parentCtx,
opentelemetry::trace::SpanKind::kServer);
telemetry::SpanGuard guard(span);
guard.setAttribute("xrpl.consensus.proposer",
toBase58(TokenType::NodePublic, proposal.nodeId()));
guard.setAttribute("xrpl.consensus.round",
static_cast<int64_t>(proposal.proposal().proposeSeq()));
// ... existing implementation ...
guard.setOk();
}
```
### 4.5.3 RPC Handler Instrumentation
```cpp
// src/xrpld/rpc/detail/ServerHandler.cpp (modified)
#include <xrpl/telemetry/TracingInstrumentation.h>
void
ServerHandler::onRequest(
http_request_type&& req,
std::function<void(http_response_type&&)>&& send)
{
// Extract trace context from HTTP headers (W3C Trace Context)
auto parentCtx = telemetry::TraceContextPropagator::extractFromHeaders(
[&req](std::string_view name) -> std::optional<std::string> {
auto it = req.find(boost::beast::string_view(name.data(), name.size()));
if (it != req.end())
return std::string(it->value());
return std::nullopt;
});
// Start request span
auto span = app_.getTelemetry().startSpan(
"rpc.request",
parentCtx,
opentelemetry::trace::SpanKind::kServer);
telemetry::SpanGuard guard(span);
// Add HTTP attributes
guard.setAttribute("http.method", std::string(req.method_string()));
guard.setAttribute("http.target", std::string(req.target()));
guard.setAttribute("http.user_agent",
std::string(req[boost::beast::http::field::user_agent]));
auto const startTime = std::chrono::steady_clock::now();
try
{
// Parse and process request
auto const& body = req.body();
Json::Value jv;
Json::Reader reader;
if (!reader.parse(body, jv))
{
guard.setStatus(
opentelemetry::trace::StatusCode::kError,
"Invalid JSON");
sendError(send, "Invalid JSON");
return;
}
// Extract command name
std::string command = jv.isMember("command")
? jv["command"].asString()
: jv.isMember("method")
? jv["method"].asString()
: "unknown";
guard.setAttribute("xrpl.rpc.command", command);
// Create child span for command execution
auto cmdSpan = app_.getTelemetry().startSpan(
"rpc.command." + command);
{
telemetry::SpanGuard cmdGuard(cmdSpan);
// Execute RPC command
auto result = processRequest(jv);
// Record result attributes
if (result.isMember("status"))
{
cmdGuard.setAttribute("xrpl.rpc.status",
result["status"].asString());
}
if (result["status"].asString() == "error")
{
cmdGuard.setStatus(
opentelemetry::trace::StatusCode::kError,
result.isMember("error_message")
? result["error_message"].asString()
: "RPC error");
}
else
{
cmdGuard.setOk();
}
}
auto const duration = std::chrono::steady_clock::now() - startTime;
guard.setAttribute("http.duration_ms",
std::chrono::duration<double, std::milli>(duration).count());
// Inject trace context into response headers
http_response_type resp;
telemetry::TraceContextPropagator::injectToHeaders(
guard.context(),
[&resp](std::string_view name, std::string_view value) {
resp.set(std::string(name), std::string(value));
});
guard.setOk();
send(std::move(resp));
}
catch (std::exception const& e)
{
guard.recordException(e);
JLOG(journal_.error()) << "RPC request failed: " << e.what();
sendError(send, e.what());
}
}
```
### 4.5.4 JobQueue Context Propagation
```cpp
// src/xrpld/core/JobQueue.h (modified)
#include <opentelemetry/context/context.h>
class Job
{
// ... existing members ...
// Captured trace context at job creation
opentelemetry::context::Context traceContext_;
public:
// Constructor captures current trace context
Job(JobType type, std::function<void()> func, ...)
: type_(type)
, func_(std::move(func))
, traceContext_(opentelemetry::context::RuntimeContext::GetCurrent())
// ... other initializations ...
{
}
// Get trace context for restoration during execution
opentelemetry::context::Context const&
traceContext() const { return traceContext_; }
};
// src/xrpld/core/JobQueue.cpp (modified)
void
Worker::run()
{
while (auto job = getJob())
{
// Restore trace context from job creation
auto token = opentelemetry::context::RuntimeContext::Attach(
job->traceContext());
// Start execution span
auto span = app_.getTelemetry().startSpan("job.execute");
telemetry::SpanGuard guard(span);
guard.setAttribute("xrpl.job.type", to_string(job->type()));
guard.setAttribute("xrpl.job.queue_ms", job->queueTimeMs());
guard.setAttribute("xrpl.job.worker", workerId_);
try
{
job->execute();
guard.setOk();
}
catch (std::exception const& e)
{
guard.recordException(e);
JLOG(journal_.error()) << "Job execution failed: " << e.what();
}
}
}
```
---
## 4.6 Span Flow Visualization
<div align="center">
```mermaid
flowchart TB
subgraph Client["External Client"]
submit["Submit TX"]
end
subgraph NodeA["rippled Node A"]
rpcA["rpc.request"]
cmdA["rpc.command.submit"]
txRecvA["tx.receive"]
txValA["tx.validate"]
txRelayA["tx.relay"]
end
subgraph NodeB["rippled Node B"]
txRecvB["tx.receive"]
txValB["tx.validate"]
txRelayB["tx.relay"]
end
subgraph NodeC["rippled Node C"]
txRecvC["tx.receive"]
consensusC["consensus.round"]
phaseC["consensus.phase.establish"]
end
submit --> rpcA
rpcA --> cmdA
cmdA --> txRecvA
txRecvA --> txValA
txValA --> txRelayA
txRelayA -.->|"TraceContext"| txRecvB
txRecvB --> txValB
txValB --> txRelayB
txRelayB -.->|"TraceContext"| txRecvC
txRecvC --> consensusC
consensusC --> phaseC
style Client fill:#334155,stroke:#1e293b,color:#fff
style NodeA fill:#1e3a8a,stroke:#172554,color:#fff
style NodeB fill:#064e3b,stroke:#022c22,color:#fff
style NodeC fill:#78350f,stroke:#451a03,color:#fff
style submit fill:#e2e8f0,stroke:#cbd5e1,color:#1e293b
style rpcA fill:#1d4ed8,stroke:#1e40af,color:#fff
style cmdA fill:#1d4ed8,stroke:#1e40af,color:#fff
style txRecvA fill:#047857,stroke:#064e3b,color:#fff
style txValA fill:#047857,stroke:#064e3b,color:#fff
style txRelayA fill:#047857,stroke:#064e3b,color:#fff
style txRecvB fill:#047857,stroke:#064e3b,color:#fff
style txValB fill:#047857,stroke:#064e3b,color:#fff
style txRelayB fill:#047857,stroke:#064e3b,color:#fff
style txRecvC fill:#047857,stroke:#064e3b,color:#fff
style consensusC fill:#fef3c7,stroke:#fde68a,color:#1e293b
style phaseC fill:#fef3c7,stroke:#fde68a,color:#1e293b
```
</div>
---
_Previous: [Implementation Strategy](./03-implementation-strategy.md)_ | _Next: [Configuration Reference](./05-configuration-reference.md)_ | _Back to: [Overview](./OpenTelemetryPlan.md)_

View File

@@ -0,0 +1,958 @@
# Configuration Reference
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Related**: [Code Samples](./04-code-samples.md) | [Implementation Phases](./06-implementation-phases.md)
---
## 5.1 rippled Configuration
### 5.1.1 Configuration File Section
Add to `cfg/xrpld-example.cfg`:
```ini
# ═══════════════════════════════════════════════════════════════════════════════
# TELEMETRY (OpenTelemetry Distributed Tracing)
# ═══════════════════════════════════════════════════════════════════════════════
#
# Enables distributed tracing for transaction flow, consensus, and RPC calls.
# Traces are exported to an OpenTelemetry Collector using OTLP protocol.
#
# [telemetry]
#
# # Enable/disable telemetry (default: 0 = disabled)
# enabled=1
#
# # Exporter type: "otlp_grpc" (default), "otlp_http", or "none"
# exporter=otlp_grpc
#
# # OTLP endpoint (default: localhost:4317 for gRPC, localhost:4318 for HTTP)
# endpoint=localhost:4317
#
# # Use TLS for exporter connection (default: 0)
# use_tls=0
#
# # Path to CA certificate for TLS (optional)
# # tls_ca_cert=/path/to/ca.crt
#
# # Sampling ratio: 0.0-1.0 (default: 1.0 = 100% sampling)
# # Use lower values in production to reduce overhead
# sampling_ratio=0.1
#
# # Batch processor settings
# batch_size=512 # Spans per batch (default: 512)
# batch_delay_ms=5000 # Max delay before sending batch (default: 5000)
# max_queue_size=2048 # Max queued spans (default: 2048)
#
# # Component-specific tracing (default: all enabled except peer)
# trace_transactions=1 # Transaction relay and processing
# trace_consensus=1 # Consensus rounds and proposals
# trace_rpc=1 # RPC request handling
# trace_peer=0 # Peer messages (high volume, disabled by default)
# trace_ledger=1 # Ledger acquisition and building
#
# # Service identification (automatically detected if not specified)
# # service_name=rippled
# # service_instance_id=<node_public_key>
[telemetry]
enabled=0
```
### 5.1.2 Configuration Options Summary
| Option | Type | Default | Description |
| --------------------- | ------ | ---------------- | ----------------------------------------- |
| `enabled` | bool | `false` | Enable/disable telemetry |
| `exporter` | string | `"otlp_grpc"` | Exporter type: otlp_grpc, otlp_http, none |
| `endpoint` | string | `localhost:4317` | OTLP collector endpoint |
| `use_tls` | bool | `false` | Enable TLS for exporter connection |
| `tls_ca_cert` | string | `""` | Path to CA certificate file |
| `sampling_ratio` | float | `1.0` | Sampling ratio (0.0-1.0) |
| `batch_size` | uint | `512` | Spans per export batch |
| `batch_delay_ms` | uint | `5000` | Max delay before sending batch (ms) |
| `max_queue_size` | uint | `2048` | Maximum queued spans |
| `trace_transactions` | bool | `true` | Enable transaction tracing |
| `trace_consensus` | bool | `true` | Enable consensus tracing |
| `trace_rpc` | bool | `true` | Enable RPC tracing |
| `trace_peer` | bool | `false` | Enable peer message tracing (high volume) |
| `trace_ledger` | bool | `true` | Enable ledger tracing |
| `service_name` | string | `"rippled"` | Service name for traces |
| `service_instance_id` | string | `<node_pubkey>` | Instance identifier |
---
## 5.2 Configuration Parser
```cpp
// src/libxrpl/telemetry/TelemetryConfig.cpp
#include <xrpl/telemetry/Telemetry.h>
#include <xrpl/basics/Log.h>
namespace xrpl {
namespace telemetry {
Telemetry::Setup
setup_Telemetry(
Section const& section,
std::string const& nodePublicKey,
std::string const& version)
{
Telemetry::Setup setup;
// Basic settings
setup.enabled = section.value_or("enabled", false);
setup.serviceName = section.value_or("service_name", "rippled");
setup.serviceVersion = version;
setup.serviceInstanceId = section.value_or(
"service_instance_id", nodePublicKey);
// Exporter settings
setup.exporterType = section.value_or("exporter", "otlp_grpc");
if (setup.exporterType == "otlp_grpc")
setup.exporterEndpoint = section.value_or("endpoint", "localhost:4317");
else if (setup.exporterType == "otlp_http")
setup.exporterEndpoint = section.value_or("endpoint", "localhost:4318");
setup.useTls = section.value_or("use_tls", false);
setup.tlsCertPath = section.value_or("tls_ca_cert", "");
// Sampling
setup.samplingRatio = section.value_or("sampling_ratio", 1.0);
if (setup.samplingRatio < 0.0 || setup.samplingRatio > 1.0)
{
Throw<std::runtime_error>(
"telemetry.sampling_ratio must be between 0.0 and 1.0");
}
// Batch processor
setup.batchSize = section.value_or("batch_size", 512u);
setup.batchDelay = std::chrono::milliseconds{
section.value_or("batch_delay_ms", 5000u)};
setup.maxQueueSize = section.value_or("max_queue_size", 2048u);
// Component filtering
setup.traceTransactions = section.value_or("trace_transactions", true);
setup.traceConsensus = section.value_or("trace_consensus", true);
setup.traceRpc = section.value_or("trace_rpc", true);
setup.tracePeer = section.value_or("trace_peer", false);
setup.traceLedger = section.value_or("trace_ledger", true);
return setup;
}
} // namespace telemetry
} // namespace xrpl
```
---
## 5.3 Application Integration
### 5.3.1 ApplicationImp Changes
```cpp
// src/xrpld/app/main/Application.cpp (modified)
#include <xrpl/telemetry/Telemetry.h>
class ApplicationImp : public Application
{
// ... existing members ...
// Telemetry (must be constructed early, destroyed late)
std::unique_ptr<telemetry::Telemetry> telemetry_;
public:
ApplicationImp(...)
{
// Initialize telemetry early (before other components)
auto telemetrySection = config_->section("telemetry");
auto telemetrySetup = telemetry::setup_Telemetry(
telemetrySection,
toBase58(TokenType::NodePublic, nodeIdentity_.publicKey()),
BuildInfo::getVersionString());
// Set network attributes
telemetrySetup.networkId = config_->NETWORK_ID;
telemetrySetup.networkType = [&]() {
if (config_->NETWORK_ID == 0) return "mainnet";
if (config_->NETWORK_ID == 1) return "testnet";
if (config_->NETWORK_ID == 2) return "devnet";
return "custom";
}();
telemetry_ = telemetry::make_Telemetry(
telemetrySetup,
logs_->journal("Telemetry"));
// ... rest of initialization ...
}
void start() override
{
// Start telemetry first
if (telemetry_)
telemetry_->start();
// ... existing start code ...
}
void stop() override
{
// ... existing stop code ...
// Stop telemetry last (to capture shutdown spans)
if (telemetry_)
telemetry_->stop();
}
telemetry::Telemetry& getTelemetry() override
{
assert(telemetry_);
return *telemetry_;
}
};
```
### 5.3.2 Application Interface Addition
```cpp
// include/xrpl/app/main/Application.h (modified)
namespace telemetry { class Telemetry; }
class Application
{
public:
// ... existing virtual methods ...
/** Get the telemetry system for distributed tracing */
virtual telemetry::Telemetry& getTelemetry() = 0;
};
```
---
## 5.4 CMake Integration
### 5.4.1 Find OpenTelemetry Module
```cmake
# cmake/FindOpenTelemetry.cmake
# Find OpenTelemetry C++ SDK
#
# This module defines:
# OpenTelemetry_FOUND - System has OpenTelemetry
# OpenTelemetry::api - API library target
# OpenTelemetry::sdk - SDK library target
# OpenTelemetry::otlp_grpc_exporter - OTLP gRPC exporter target
# OpenTelemetry::otlp_http_exporter - OTLP HTTP exporter target
find_package(opentelemetry-cpp CONFIG QUIET)
if(opentelemetry-cpp_FOUND)
set(OpenTelemetry_FOUND TRUE)
# Create imported targets if not already created by config
if(NOT TARGET OpenTelemetry::api)
add_library(OpenTelemetry::api ALIAS opentelemetry-cpp::api)
endif()
if(NOT TARGET OpenTelemetry::sdk)
add_library(OpenTelemetry::sdk ALIAS opentelemetry-cpp::sdk)
endif()
if(NOT TARGET OpenTelemetry::otlp_grpc_exporter)
add_library(OpenTelemetry::otlp_grpc_exporter ALIAS
opentelemetry-cpp::otlp_grpc_exporter)
endif()
else()
# Try pkg-config fallback
find_package(PkgConfig QUIET)
if(PKG_CONFIG_FOUND)
pkg_check_modules(OTEL opentelemetry-cpp QUIET)
if(OTEL_FOUND)
set(OpenTelemetry_FOUND TRUE)
# Create imported targets from pkg-config
add_library(OpenTelemetry::api INTERFACE IMPORTED)
target_include_directories(OpenTelemetry::api INTERFACE
${OTEL_INCLUDE_DIRS})
endif()
endif()
endif()
include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(OpenTelemetry
REQUIRED_VARS OpenTelemetry_FOUND)
```
### 5.4.2 CMakeLists.txt Changes
```cmake
# CMakeLists.txt (additions)
# ═══════════════════════════════════════════════════════════════════════════════
# TELEMETRY OPTIONS
# ═══════════════════════════════════════════════════════════════════════════════
option(XRPL_ENABLE_TELEMETRY
"Enable OpenTelemetry distributed tracing support" OFF)
if(XRPL_ENABLE_TELEMETRY)
find_package(OpenTelemetry REQUIRED)
# Define compile-time flag
add_compile_definitions(XRPL_ENABLE_TELEMETRY)
message(STATUS "OpenTelemetry tracing: ENABLED")
else()
message(STATUS "OpenTelemetry tracing: DISABLED")
endif()
# ═══════════════════════════════════════════════════════════════════════════════
# TELEMETRY LIBRARY
# ═══════════════════════════════════════════════════════════════════════════════
if(XRPL_ENABLE_TELEMETRY)
add_library(xrpl_telemetry
src/libxrpl/telemetry/Telemetry.cpp
src/libxrpl/telemetry/TelemetryConfig.cpp
src/libxrpl/telemetry/TraceContext.cpp
)
target_include_directories(xrpl_telemetry
PUBLIC
${CMAKE_CURRENT_SOURCE_DIR}/include
)
target_link_libraries(xrpl_telemetry
PUBLIC
OpenTelemetry::api
OpenTelemetry::sdk
OpenTelemetry::otlp_grpc_exporter
PRIVATE
xrpl_basics
)
# Add to main library dependencies
target_link_libraries(xrpld PRIVATE xrpl_telemetry)
else()
# Create null implementation library
add_library(xrpl_telemetry
src/libxrpl/telemetry/NullTelemetry.cpp
)
target_include_directories(xrpl_telemetry
PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include
)
endif()
```
---
## 5.5 OpenTelemetry Collector Configuration
### 5.5.1 Development Configuration
```yaml
# otel-collector-dev.yaml
# Minimal configuration for local development
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
processors:
batch:
timeout: 1s
send_batch_size: 100
exporters:
# Console output for debugging
logging:
verbosity: detailed
sampling_initial: 5
sampling_thereafter: 200
# Jaeger for trace visualization (via Jaeger's native OTLP receiver;
# the collector's dedicated jaeger exporter was removed in recent releases)
otlp/jaeger:
endpoint: jaeger:4317
tls:
insecure: true
# Grafana Tempo for trace storage
otlp/tempo:
endpoint: tempo:4317
tls:
insecure: true
service:
pipelines:
traces:
receivers: [otlp]
processors: [batch]
exporters: [logging, otlp/jaeger, otlp/tempo]
```
### 5.5.2 Production Configuration
```yaml
# otel-collector-prod.yaml
# Production configuration with filtering, sampling, and multiple backends
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
tls:
cert_file: /etc/otel/server.crt
key_file: /etc/otel/server.key
ca_file: /etc/otel/ca.crt
processors:
# Memory limiter to prevent OOM
memory_limiter:
check_interval: 1s
limit_mib: 1000
spike_limit_mib: 200
# Batch processing for efficiency
batch:
timeout: 5s
send_batch_size: 512
send_batch_max_size: 1024
# Tail-based sampling (keep errors and slow traces)
tail_sampling:
decision_wait: 10s
num_traces: 100000
expected_new_traces_per_sec: 1000
policies:
# Always keep error traces
- name: errors
type: status_code
status_code:
status_codes: [ERROR]
# Keep slow consensus rounds (>5s)
- name: slow-consensus
type: latency
latency:
threshold_ms: 5000
# Keep slow RPC requests (>1s)
- name: slow-rpc
type: and
and:
and_sub_policy:
- name: rpc-spans
type: string_attribute
string_attribute:
key: xrpl.rpc.command
values: [".*"]
enabled_regex_matching: true
- name: latency
type: latency
latency:
threshold_ms: 1000
# Probabilistic sampling for the rest
- name: probabilistic
type: probabilistic
probabilistic:
sampling_percentage: 10
# Attribute processing
attributes:
actions:
# Hash sensitive data
- key: xrpl.tx.account
action: hash
# Add deployment info
- key: deployment.environment
value: production
action: upsert
exporters:
# Grafana Tempo for long-term storage
otlp/tempo:
endpoint: tempo.monitoring:4317
tls:
insecure: false
ca_file: /etc/otel/tempo-ca.crt
# Elastic APM for correlation with logs
otlp/elastic:
endpoint: apm.elastic:8200
headers:
Authorization: "Bearer ${ELASTIC_APM_TOKEN}"
extensions:
health_check:
endpoint: 0.0.0.0:13133
zpages:
endpoint: 0.0.0.0:55679
service:
extensions: [health_check, zpages]
pipelines:
traces:
receivers: [otlp]
processors: [memory_limiter, tail_sampling, attributes, batch]
exporters: [otlp/tempo, otlp/elastic]
```
---
## 5.6 Docker Compose Development Environment
```yaml
# docker-compose-telemetry.yaml
version: "3.8"
services:
# OpenTelemetry Collector
otel-collector:
image: otel/opentelemetry-collector-contrib:0.92.0
container_name: otel-collector
command: ["--config=/etc/otel-collector-config.yaml"]
volumes:
- ./otel-collector-dev.yaml:/etc/otel-collector-config.yaml:ro
ports:
- "4317:4317" # OTLP gRPC
- "4318:4318" # OTLP HTTP
- "13133:13133" # Health check
depends_on:
- jaeger
# Jaeger for trace visualization
jaeger:
image: jaegertracing/all-in-one:1.53
container_name: jaeger
environment:
- COLLECTOR_OTLP_ENABLED=true
ports:
- "16686:16686" # UI
- "14250:14250" # gRPC
# Grafana Tempo for trace storage (recommended for production)
tempo:
image: grafana/tempo:2.7.2
container_name: tempo
command: ["-config.file=/etc/tempo.yaml"]
volumes:
- ./tempo.yaml:/etc/tempo.yaml:ro
- tempo-data:/var/tempo
ports:
- "3200:3200" # HTTP API
# Grafana for dashboards
grafana:
image: grafana/grafana:10.2.3
container_name: grafana
environment:
- GF_AUTH_ANONYMOUS_ENABLED=true
- GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
volumes:
- ./grafana/provisioning:/etc/grafana/provisioning:ro
- ./grafana/dashboards:/var/lib/grafana/dashboards:ro
ports:
- "3000:3000"
depends_on:
- jaeger
- tempo
# Prometheus for metrics (optional, for correlation)
prometheus:
image: prom/prometheus:v2.48.1
container_name: prometheus
volumes:
- ./prometheus.yaml:/etc/prometheus/prometheus.yml:ro
ports:
- "9090:9090"
networks:
default:
name: rippled-telemetry
```
---
## 5.7 Configuration Architecture
```mermaid
flowchart TB
subgraph config["Configuration Sources"]
cfgFile["xrpld.cfg<br/>[telemetry] section"]
cmake["CMake<br/>XRPL_ENABLE_TELEMETRY"]
end
subgraph init["Initialization"]
parse["setup_Telemetry()"]
factory["make_Telemetry()"]
end
subgraph runtime["Runtime Components"]
tracer["TracerProvider"]
exporter["OTLP Exporter"]
processor["BatchProcessor"]
end
subgraph collector["Collector Pipeline"]
recv["Receivers"]
proc["Processors"]
exp["Exporters"]
end
cfgFile --> parse
cmake -->|"compile flag"| parse
parse --> factory
factory --> tracer
tracer --> processor
processor --> exporter
exporter -->|"OTLP"| recv
recv --> proc
proc --> exp
style config fill:#e3f2fd,stroke:#1976d2
style runtime fill:#e8f5e9,stroke:#388e3c
style collector fill:#fff3e0,stroke:#ff9800
```
---
## 5.8 Grafana Integration
Step-by-step instructions for integrating rippled traces with Grafana.
### 5.8.1 Data Source Configuration
#### Tempo (Recommended)
```yaml
# grafana/provisioning/datasources/tempo.yaml
apiVersion: 1
datasources:
- name: Tempo
type: tempo
access: proxy
url: http://tempo:3200
jsonData:
httpMethod: GET
tracesToLogs:
datasourceUid: loki
tags: ["service.name", "xrpl.tx.hash"]
mappedTags: [{ key: "trace_id", value: "traceID" }]
mapTagNamesEnabled: true
filterByTraceID: true
serviceMap:
datasourceUid: prometheus
nodeGraph:
enabled: true
search:
hide: false
lokiSearch:
datasourceUid: loki
```
#### Jaeger
```yaml
# grafana/provisioning/datasources/jaeger.yaml
apiVersion: 1
datasources:
- name: Jaeger
type: jaeger
access: proxy
url: http://jaeger:16686
jsonData:
tracesToLogs:
datasourceUid: loki
tags: ["service.name"]
```
#### Elastic APM
```yaml
# grafana/provisioning/datasources/elastic-apm.yaml
apiVersion: 1
datasources:
- name: Elasticsearch-APM
type: elasticsearch
access: proxy
url: http://elasticsearch:9200
database: "apm-*"
jsonData:
esVersion: "8.0.0"
timeField: "@timestamp"
logMessageField: message
logLevelField: log.level
```
### 5.8.2 Dashboard Provisioning
```yaml
# grafana/provisioning/dashboards/dashboards.yaml
apiVersion: 1
providers:
- name: "rippled-dashboards"
orgId: 1
folder: "rippled"
folderUid: "rippled"
type: file
disableDeletion: false
updateIntervalSeconds: 30
options:
path: /var/lib/grafana/dashboards/rippled
```
### 5.8.3 Example Dashboard: RPC Performance
```json
{
"title": "rippled RPC Performance",
"uid": "rippled-rpc-performance",
"panels": [
{
"title": "RPC Latency by Command",
"type": "heatmap",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && span.xrpl.rpc.command != \"\"} | histogram_over_time(duration) by (span.xrpl.rpc.command)"
}
],
"gridPos": { "h": 8, "w": 12, "x": 0, "y": 0 }
},
{
"title": "RPC Error Rate",
"type": "timeseries",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && status.code=error} | rate() by (span.xrpl.rpc.command)"
}
],
"gridPos": { "h": 8, "w": 12, "x": 12, "y": 0 }
},
{
"title": "Top 10 Slowest RPC Commands",
"type": "table",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && span.xrpl.rpc.command != \"\"} | avg(duration) by (span.xrpl.rpc.command) | topk(10)"
}
],
"gridPos": { "h": 8, "w": 24, "x": 0, "y": 8 }
},
{
"title": "Recent Traces",
"type": "table",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\"}"
}
],
"gridPos": { "h": 8, "w": 24, "x": 0, "y": 16 }
}
]
}
```
### 5.8.4 Example Dashboard: Transaction Tracing
```json
{
"title": "rippled Transaction Tracing",
"uid": "rippled-tx-tracing",
"panels": [
{
"title": "Transaction Throughput",
"type": "stat",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=\"tx.receive\"} | rate()"
}
],
"gridPos": { "h": 4, "w": 6, "x": 0, "y": 0 }
},
{
"title": "Cross-Node Relay Count",
"type": "timeseries",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=\"tx.relay\"} | avg(span.xrpl.tx.relay_count)"
}
],
"gridPos": { "h": 8, "w": 12, "x": 0, "y": 4 }
},
{
"title": "Transaction Validation Errors",
"type": "table",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=\"tx.validate\" && status.code=error}"
}
],
"gridPos": { "h": 8, "w": 12, "x": 12, "y": 4 }
}
]
}
```
### 5.8.5 TraceQL Query Examples
Common queries for rippled traces:
```
# Find all traces for a specific transaction hash
{resource.service.name="rippled" && span.xrpl.tx.hash="ABC123..."}
# Find slow RPC commands (>100ms)
{resource.service.name="rippled" && name=~"rpc.command.*"} | duration > 100ms
# Find consensus rounds taking >5 seconds
{resource.service.name="rippled" && name="consensus.round"} | duration > 5s
# Find failed transactions with error details
{resource.service.name="rippled" && name="tx.validate" && status.code=error}
# Find transactions relayed to many peers
{resource.service.name="rippled" && name="tx.relay"} | span.xrpl.tx.relay_count > 10
# Compare latency across nodes
{resource.service.name="rippled" && name="rpc.command.account_info"} | avg(duration) by (resource.service.instance.id)
```
### 5.8.6 Correlation with PerfLog
To correlate OpenTelemetry traces with existing PerfLog data:
**Step 1: Configure Loki to ingest PerfLog**
```yaml
# promtail-config.yaml
scrape_configs:
- job_name: rippled-perflog
static_configs:
- targets:
- localhost
labels:
job: rippled
__path__: /var/log/rippled/perf*.log
pipeline_stages:
- json:
expressions:
trace_id: trace_id
ledger_seq: ledger_seq
tx_hash: tx_hash
- labels:
trace_id:
ledger_seq:
tx_hash:
```
**Step 2: Add trace_id to PerfLog entries**
Modify PerfLog to include trace_id when available:
```cpp
// In PerfLog output, add trace_id from current span context
void logPerf(Json::Value& entry) {
auto span = opentelemetry::trace::GetSpan(
opentelemetry::context::RuntimeContext::GetCurrent());
if (span && span->GetContext().IsValid()) {
char traceIdHex[32];  // ToLowerBase16 expects a span of exactly 32 chars
span->GetContext().trace_id().ToLowerBase16(traceIdHex);
entry["trace_id"] = std::string(traceIdHex, 32);
}
// ... existing logging
}
```
**Step 3: Configure Grafana trace-to-logs link**
In Tempo data source configuration, set up the derived field:
```yaml
jsonData:
tracesToLogs:
datasourceUid: loki
tags: ["trace_id", "xrpl.tx.hash"]
filterByTraceID: true
filterBySpanID: false
```
### 5.8.7 Correlation with Insight/OTel System Metrics
To correlate traces with Beast Insight system metrics:
**Step 1: Export Insight metrics to Prometheus**
Beast Insight metrics are exported natively via OTLP to the OTel Collector,
which exposes them on the Prometheus endpoint alongside spanmetrics. No
separate StatsD exporter is needed when using `server=otel`.
```ini
# xrpld.cfg — native OTel metrics (recommended)
[insight]
server=otel
endpoint=http://localhost:4318/v1/metrics
prefix=rippled
```
**Step 2: Add exemplars to metrics**
The OpenTelemetry SDK can attach exemplars (trace IDs) to metrics exposed through the Prometheus exporter, linking metric spikes to the specific traces that produced them.
**Step 3: Configure Grafana metric-to-trace link**
```yaml
# In Prometheus data source
jsonData:
exemplarTraceIdDestinations:
- name: trace_id
datasourceUid: tempo
```
**Step 4: Dashboard panel with exemplars**
```json
{
"title": "RPC Latency with Trace Links",
"type": "timeseries",
"datasource": "Prometheus",
"targets": [
{
"expr": "histogram_quantile(0.99, rate(rippled_rpc_duration_seconds_bucket[5m]))",
"exemplar": true
}
]
}
```
This allows clicking on metric data points to jump directly to the related trace.
---
_Previous: [Code Samples](./04-code-samples.md)_ | _Next: [Implementation Phases](./06-implementation-phases.md)_ | _Back to: [Overview](./OpenTelemetryPlan.md)_

File diff suppressed because it is too large

View File

@@ -0,0 +1,595 @@
# Observability Backend Recommendations
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Related**: [Implementation Phases](./06-implementation-phases.md) | [Appendix](./08-appendix.md)
---
## 7.1 Development/Testing Backends
| Backend | Pros | Cons | Use Case |
| ---------- | ------------------- | ----------------- | ----------------- |
| **Jaeger** | Easy setup, good UI | Limited retention | Local dev, CI |
| **Zipkin** | Simple, lightweight | Basic features | Quick prototyping |
### Quick Start with Jaeger
```bash
# Start Jaeger with OTLP support
docker run -d --name jaeger \
-e COLLECTOR_OTLP_ENABLED=true \
-p 16686:16686 \
-p 4317:4317 \
-p 4318:4318 \
jaegertracing/all-in-one:latest
```
---
## 7.2 Production Backends
| Backend | Pros | Cons | Use Case |
| ----------------- | ----------------------------------------- | ------------------ | --------------------------- |
| **Grafana Tempo** | Cost-effective, Grafana integration | Newer project | Most production deployments |
| **Elastic APM** | Full observability stack, log correlation | Resource intensive | Existing Elastic users |
| **Honeycomb** | Excellent query, high cardinality | SaaS cost | Deep debugging needs |
| **Datadog APM** | Full platform, easy setup | SaaS cost | Enterprise with budget |
### Backend Selection Flowchart
```mermaid
flowchart TD
start[Select Backend] --> budget{Budget<br/>Constraints?}
budget -->|Yes| oss[Open Source]
budget -->|No| saas{Prefer<br/>SaaS?}
oss --> existing{Existing<br/>Stack?}
existing -->|Grafana| tempo[Grafana Tempo]
existing -->|Elastic| elastic[Elastic APM]
existing -->|None| tempo
saas -->|Yes| enterprise{Enterprise<br/>Support?}
saas -->|No| oss
enterprise -->|Yes| datadog[Datadog APM]
enterprise -->|No| honeycomb[Honeycomb]
tempo --> final[Configure Collector]
elastic --> final
honeycomb --> final
datadog --> final
style start fill:#0f172a,stroke:#020617,color:#fff
style budget fill:#334155,stroke:#1e293b,color:#fff
style oss fill:#1e293b,stroke:#0f172a,color:#fff
style existing fill:#334155,stroke:#1e293b,color:#fff
style saas fill:#334155,stroke:#1e293b,color:#fff
style enterprise fill:#334155,stroke:#1e293b,color:#fff
style final fill:#0f172a,stroke:#020617,color:#fff
style tempo fill:#1b5e20,stroke:#0d3d14,color:#fff
style elastic fill:#bf360c,stroke:#8c2809,color:#fff
style honeycomb fill:#0d47a1,stroke:#082f6a,color:#fff
style datadog fill:#4a148c,stroke:#2e0d57,color:#fff
```
---
## 7.3 Recommended Production Architecture
```mermaid
flowchart TB
subgraph validators["Validator Nodes"]
v1[rippled<br/>Validator 1]
v2[rippled<br/>Validator 2]
end
subgraph stock["Stock Nodes"]
s1[rippled<br/>Stock 1]
s2[rippled<br/>Stock 2]
end
subgraph collector["OTel Collector Cluster"]
c1[Collector<br/>DC1]
c2[Collector<br/>DC2]
end
subgraph backends["Storage Backends"]
tempo[(Grafana<br/>Tempo)]
elastic[(Elastic<br/>APM)]
archive[(S3/GCS<br/>Archive)]
end
subgraph ui["Visualization"]
grafana[Grafana<br/>Dashboards]
end
v1 -->|OTLP| c1
v2 -->|OTLP| c1
s1 -->|OTLP| c2
s2 -->|OTLP| c2
c1 --> tempo
c1 --> elastic
c2 --> tempo
c2 --> archive
tempo --> grafana
elastic --> grafana
style validators fill:#b71c1c,stroke:#7f1d1d,color:#ffffff
style stock fill:#0d47a1,stroke:#082f6a,color:#ffffff
style collector fill:#bf360c,stroke:#8c2809,color:#ffffff
style backends fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style ui fill:#4a148c,stroke:#2e0d57,color:#ffffff
```
---
## 7.4 Architecture Considerations
### 7.4.1 Collector Placement
| Strategy | Description | Pros | Cons |
| ------------- | -------------------- | ------------------------ | ----------------------- |
| **Sidecar** | Collector per node | Isolation, simple config | Resource overhead |
| **DaemonSet** | Collector per host | Shared resources | Complexity |
| **Gateway** | Central collector(s) | Centralized processing | Single point of failure |
**Recommendation**: Use **Gateway** pattern with regional collectors for rippled networks:
- One collector cluster per datacenter/region
- Tail-based sampling at collector level
- Multiple export destinations for redundancy
### 7.4.2 Sampling Strategy
```mermaid
flowchart LR
subgraph head["Head Sampling (Node)"]
hs[10% probabilistic]
end
subgraph tail["Tail Sampling (Collector)"]
ts1[Keep all errors]
ts2[Keep slow >5s]
ts3[Keep 10% rest]
end
head --> tail
ts1 --> final[Final Traces]
ts2 --> final
ts3 --> final
style head fill:#0d47a1,stroke:#082f6a,color:#fff
style tail fill:#1b5e20,stroke:#0d3d14,color:#fff
style hs fill:#0d47a1,stroke:#082f6a,color:#fff
style ts1 fill:#1b5e20,stroke:#0d3d14,color:#fff
style ts2 fill:#1b5e20,stroke:#0d3d14,color:#fff
style ts3 fill:#1b5e20,stroke:#0d3d14,color:#fff
style final fill:#bf360c,stroke:#8c2809,color:#fff
```
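On the node side, the 10% probabilistic head-sampling stage corresponds to a ratio-based sampler installed in the tracer provider. A minimal sketch, assuming the stock OTel SDK samplers (the helper name is illustrative):
```cpp
#include <opentelemetry/sdk/trace/samplers/parent.h>
#include <opentelemetry/sdk/trace/samplers/trace_id_ratio.h>

#include <memory>

// Head-sampling sketch: sample a fixed ratio of locally started traces, but
// follow the parent's decision when a span already has a (remote) parent so
// cross-node traces are not broken up. Tail sampling at the collector then
// keeps errors and slow traces (see the production config in 5.5.2).
std::unique_ptr<opentelemetry::sdk::trace::Sampler>
makeHeadSampler(double ratio /* e.g. 0.10 from [telemetry] sampling_ratio */)
{
    namespace sdktrace = opentelemetry::sdk::trace;
    return std::make_unique<sdktrace::ParentBasedSampler>(
        std::make_shared<sdktrace::TraceIdRatioBasedSampler>(ratio));
}
```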
### 7.4.3 Data Retention
| Environment | Hot Storage | Warm Storage | Cold Archive |
| ----------- | ----------- | ------------ | ------------ |
| Development | 24 hours | N/A | N/A |
| Staging | 7 days | N/A | N/A |
| Production | 7 days | 30 days | many years |
---
## 7.5 Integration Checklist
- [ ] Choose primary backend (Tempo recommended for cost/features)
- [ ] Deploy collector cluster with high availability
- [ ] Configure tail-based sampling for error/latency traces
- [ ] Set up Grafana dashboards for trace visualization
- [ ] Configure alerts for trace anomalies
- [ ] Establish data retention policies
- [ ] Test trace correlation with logs and metrics
---
## 7.6 Grafana Dashboard Examples
Pre-built dashboards for rippled observability.
### 7.6.1 Consensus Health Dashboard
```json
{
"title": "rippled Consensus Health",
"uid": "rippled-consensus-health",
"tags": ["rippled", "consensus", "tracing"],
"panels": [
{
"title": "Consensus Round Duration",
"type": "timeseries",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=\"consensus.round\"} | avg(duration) by (resource.service.instance.id)"
}
],
"fieldConfig": {
"defaults": {
"unit": "ms",
"thresholds": {
"steps": [
{ "color": "green", "value": null },
{ "color": "yellow", "value": 4000 },
{ "color": "red", "value": 5000 }
]
}
}
},
"gridPos": { "h": 8, "w": 12, "x": 0, "y": 0 }
},
{
"title": "Phase Duration Breakdown",
"type": "barchart",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=~\"consensus.phase.*\"} | avg(duration) by (name)"
}
],
"gridPos": { "h": 8, "w": 12, "x": 12, "y": 0 }
},
{
"title": "Proposers per Round",
"type": "stat",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=\"consensus.round\"} | avg(span.xrpl.consensus.proposers)"
}
],
"gridPos": { "h": 4, "w": 6, "x": 0, "y": 8 }
},
{
"title": "Recent Slow Rounds (>5s)",
"type": "table",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=\"consensus.round\"} | duration > 5s"
}
],
"gridPos": { "h": 8, "w": 24, "x": 0, "y": 12 }
}
]
}
```
### 7.6.2 Node Overview Dashboard
```json
{
"title": "rippled Node Overview",
"uid": "rippled-node-overview",
"panels": [
{
"title": "Active Nodes",
"type": "stat",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\"} | count_over_time() by (resource.service.instance.id) | count()"
}
],
"gridPos": { "h": 4, "w": 4, "x": 0, "y": 0 }
},
{
"title": "Total Transactions (1h)",
"type": "stat",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=\"tx.receive\"} | count()"
}
],
"gridPos": { "h": 4, "w": 4, "x": 4, "y": 0 }
},
{
"title": "Error Rate",
"type": "gauge",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && status.code=error} | rate() / {resource.service.name=\"rippled\"} | rate() * 100"
}
],
"fieldConfig": {
"defaults": {
"unit": "percent",
"max": 10,
"thresholds": {
"steps": [
{ "color": "green", "value": null },
{ "color": "yellow", "value": 1 },
{ "color": "red", "value": 5 }
]
}
}
},
"gridPos": { "h": 4, "w": 4, "x": 8, "y": 0 }
},
{
"title": "Service Map",
"type": "nodeGraph",
"datasource": "Tempo",
"gridPos": { "h": 12, "w": 12, "x": 12, "y": 0 }
}
]
}
```
### 7.6.3 Alert Rules
```yaml
# grafana/provisioning/alerting/rippled-alerts.yaml
apiVersion: 1
groups:
- name: rippled-tracing-alerts
folder: rippled
interval: 1m
rules:
- uid: consensus-slow
title: Consensus Round Slow
condition: A
data:
- refId: A
datasourceUid: tempo
model:
queryType: traceql
query: '{resource.service.name="rippled" && name="consensus.round"} | avg(duration) > 5s'
for: 5m
annotations:
summary: Consensus rounds taking >5 seconds
description: "Consensus duration: {{ $value }}ms"
labels:
severity: warning
- uid: rpc-error-spike
title: RPC Error Rate Spike
condition: B
data:
- refId: B
datasourceUid: tempo
model:
queryType: traceql
query: '{resource.service.name="rippled" && name=~"rpc.command.*" && status.code=error} | rate() > 0.05'
for: 2m
annotations:
summary: RPC error rate >5%
labels:
severity: critical
- uid: tx-throughput-drop
title: Transaction Throughput Drop
condition: C
data:
- refId: C
datasourceUid: tempo
model:
queryType: traceql
query: '{resource.service.name="rippled" && name="tx.receive"} | rate() < 10'
for: 10m
annotations:
summary: Transaction throughput below threshold
labels:
severity: warning
```
---
## 7.7 PerfLog and Insight Correlation
How to correlate OpenTelemetry traces with existing rippled observability.
### 7.7.1 Correlation Architecture
```mermaid
flowchart TB
subgraph rippled["rippled Node"]
otel[OpenTelemetry<br/>Spans]
perflog[PerfLog<br/>JSON Logs]
insight[Beast Insight<br/>StatsD Metrics]
end
subgraph collectors["Data Collection"]
otelc[OTel Collector]
promtail[Promtail/Fluentd]
statsd[StatsD Exporter]
end
subgraph storage["Storage"]
tempo[(Tempo)]
loki[(Loki)]
prom[(Prometheus)]
end
subgraph grafana["Grafana"]
traces[Trace View]
logs[Log View]
metrics[Metrics View]
corr[Correlation<br/>Panel]
end
otel -->|OTLP| otelc --> tempo
perflog -->|JSON| promtail --> loki
insight -->|StatsD| statsd --> prom
tempo --> traces
loki --> logs
prom --> metrics
traces --> corr
logs --> corr
metrics --> corr
style rippled fill:#0d47a1,stroke:#082f6a,color:#fff
style collectors fill:#bf360c,stroke:#8c2809,color:#fff
style storage fill:#1b5e20,stroke:#0d3d14,color:#fff
style grafana fill:#4a148c,stroke:#2e0d57,color:#fff
style otel fill:#0d47a1,stroke:#082f6a,color:#fff
style perflog fill:#0d47a1,stroke:#082f6a,color:#fff
style insight fill:#0d47a1,stroke:#082f6a,color:#fff
style otelc fill:#bf360c,stroke:#8c2809,color:#fff
style promtail fill:#bf360c,stroke:#8c2809,color:#fff
style statsd fill:#bf360c,stroke:#8c2809,color:#fff
style tempo fill:#1b5e20,stroke:#0d3d14,color:#fff
style loki fill:#1b5e20,stroke:#0d3d14,color:#fff
style prom fill:#1b5e20,stroke:#0d3d14,color:#fff
style traces fill:#4a148c,stroke:#2e0d57,color:#fff
style logs fill:#4a148c,stroke:#2e0d57,color:#fff
style metrics fill:#4a148c,stroke:#2e0d57,color:#fff
style corr fill:#4a148c,stroke:#2e0d57,color:#fff
```
### 7.7.2 Correlation Fields
| Source | Field | Link To | Purpose |
| ----------- | --------------------------- | ------------- | -------------------------- |
| **Trace** | `trace_id` | Logs | Find log entries for trace |
| **Trace** | `xrpl.tx.hash` | Logs, Metrics | Find TX-related data |
| **Trace** | `xrpl.consensus.ledger.seq` | Logs | Find ledger-related logs |
| **PerfLog** | `trace_id` (new) | Traces | Jump to trace from log |
| **PerfLog** | `ledger_seq` | Traces | Find consensus trace |
| **Insight** | `exemplar.trace_id` | Traces | Jump from metric spike |
### 7.7.3 Example: Debugging a Slow Transaction
**Step 1: Find the trace**
```
# In Grafana Explore with Tempo
{resource.service.name="rippled" && span.xrpl.tx.hash="ABC123..."}
```
**Step 2: Get the trace_id from the trace view**
```
Trace ID: 4bf92f3577b34da6a3ce929d0e0e4736
```
**Step 3: Find related PerfLog entries**
```
# In Grafana Explore with Loki
{job="rippled"} |= "4bf92f3577b34da6a3ce929d0e0e4736"
```
**Step 4: Check Insight metrics for the time window**
```
# In Grafana with Prometheus; pin the query to the trace's start time
# using the PromQL @ modifier (replace <unix_timestamp> with that time)
rate(rippled_tx_applied_total[1m]) @ <unix_timestamp>
```
### 7.7.4 Unified Dashboard Example
```json
{
"title": "rippled Unified Observability",
"uid": "rippled-unified",
"panels": [
{
"title": "Transaction Latency (Traces)",
"type": "timeseries",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=\"tx.receive\"} | histogram_over_time(duration)"
}
],
"gridPos": { "h": 6, "w": 8, "x": 0, "y": 0 }
},
{
"title": "Transaction Rate (Metrics)",
"type": "timeseries",
"datasource": "Prometheus",
"targets": [
{
"expr": "rate(rippled_tx_received_total[5m])",
"legendFormat": "{{ instance }}"
}
],
"fieldConfig": {
"defaults": {
"links": [
{
"title": "View traces",
"url": "/explore?left={\"datasource\":\"Tempo\",\"query\":\"{resource.service.name=\\\"rippled\\\" && name=\\\"tx.receive\\\"}\"}"
}
]
}
},
"gridPos": { "h": 6, "w": 8, "x": 8, "y": 0 }
},
{
"title": "Recent Logs",
"type": "logs",
"datasource": "Loki",
"targets": [
{
"expr": "{job=\"rippled\"} | json"
}
],
"gridPos": { "h": 6, "w": 8, "x": 16, "y": 0 }
},
{
"title": "Trace Search",
"type": "table",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\"}"
}
],
"fieldConfig": {
"overrides": [
{
"matcher": { "id": "byName", "options": "traceID" },
"properties": [
{
"id": "links",
"value": [
{
"title": "View trace",
"url": "/explore?left={\"datasource\":\"Tempo\",\"query\":\"${__value.raw}\"}"
},
{
"title": "View logs",
"url": "/explore?left={\"datasource\":\"Loki\",\"query\":\"{job=\\\"rippled\\\"} |= \\\"${__value.raw}\\\"\"}"
}
]
}
]
}
]
},
"gridPos": { "h": 12, "w": 24, "x": 0, "y": 6 }
}
]
}
```
---
_Previous: [Implementation Phases](./06-implementation-phases.md)_ | _Next: [Appendix](./08-appendix.md)_ | _Back to: [Overview](./OpenTelemetryPlan.md)_

View File

@@ -0,0 +1,231 @@
# Appendix
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Related**: [Observability Backends](./07-observability-backends.md)
---
## 8.1 Glossary
| Term | Definition |
| --------------------- | ---------------------------------------------------------- |
| **Span** | A unit of work with start/end time, name, and attributes |
| **Trace** | A collection of spans representing a complete request flow |
| **Trace ID** | 128-bit unique identifier for a trace |
| **Span ID** | 64-bit unique identifier for a span within a trace |
| **Context** | Carrier for trace/span IDs across boundaries |
| **Propagator** | Component that injects/extracts context |
| **Sampler** | Decides which traces to record |
| **Exporter** | Sends spans to backend |
| **Collector** | Receives, processes, and forwards telemetry |
| **OTLP** | OpenTelemetry Protocol (wire format) |
| **W3C Trace Context** | Standard HTTP headers for trace propagation |
| **Baggage** | Key-value pairs propagated across service boundaries |
| **Resource** | Entity producing telemetry (service, host, etc.) |
| **Instrumentation** | Code that creates telemetry data |
### rippled-Specific Terms
| Term | Definition |
| ----------------- | -------------------------------------------------- |
| **Overlay** | P2P network layer managing peer connections |
| **Consensus** | XRP Ledger consensus algorithm (RCL) |
| **Proposal** | Validator's suggested transaction set for a ledger |
| **Validation** | Validator's signature on a closed ledger |
| **HashRouter** | Component for transaction deduplication |
| **JobQueue** | Thread pool for asynchronous task execution |
| **PerfLog** | Existing performance logging system in rippled |
| **Beast Insight** | Existing metrics framework in rippled |
### Phase 9–11 Terms
| Term | Definition |
| --------------------------- | ------------------------------------------------------------------------- |
| **MetricsRegistry** | Centralized class for OTel async gauge registrations (Phase 9) |
| **ObservableGauge** | OTel Metrics SDK async instrument polled via callback at fixed intervals |
| **PeriodicMetricReader** | OTel SDK component that invokes gauge callbacks at configurable intervals |
| **CountedObject** | rippled template that tracks live instance counts via atomic counters |
| **TxQ** | Transaction queue managing fee escalation and ordering |
| **Load Factor** | Combined multiplier affecting transaction cost (local, cluster, network) |
| **OTel Collector Receiver** | Custom Go plugin that polls rippled RPC and emits OTel metrics (Phase 11) |
---
## 8.2 Span Hierarchy Visualization
```mermaid
flowchart TB
subgraph trace["Trace: Transaction Lifecycle"]
rpc["rpc.submit<br/>(entry point)"]
validate["tx.validate"]
relay["tx.relay<br/>(parent span)"]
subgraph peers["Peer Spans"]
p1["peer.send<br/>Peer A"]
p2["peer.send<br/>Peer B"]
p3["peer.send<br/>Peer C"]
end
consensus["consensus.round"]
apply["tx.apply"]
end
rpc --> validate
validate --> relay
relay --> p1
relay --> p2
relay --> p3
p1 -.->|"context propagation"| consensus
consensus --> apply
style trace fill:#0f172a,stroke:#020617,color:#fff
style peers fill:#1e3a8a,stroke:#172554,color:#fff
style rpc fill:#1d4ed8,stroke:#1e40af,color:#fff
style validate fill:#047857,stroke:#064e3b,color:#fff
style relay fill:#047857,stroke:#064e3b,color:#fff
style p1 fill:#0e7490,stroke:#155e75,color:#fff
style p2 fill:#0e7490,stroke:#155e75,color:#fff
style p3 fill:#0e7490,stroke:#155e75,color:#fff
style consensus fill:#fef3c7,stroke:#fde68a,color:#1e293b
style apply fill:#047857,stroke:#064e3b,color:#fff
```
---
## 8.3 References
### OpenTelemetry Resources
1. [OpenTelemetry C++ SDK](https://github.com/open-telemetry/opentelemetry-cpp)
2. [OpenTelemetry Specification](https://opentelemetry.io/docs/specs/otel/)
3. [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/)
4. [OTLP Protocol Specification](https://opentelemetry.io/docs/specs/otlp/)
### Standards
5. [W3C Trace Context](https://www.w3.org/TR/trace-context/)
6. [W3C Baggage](https://www.w3.org/TR/baggage/)
7. [Protocol Buffers](https://protobuf.dev/)
### rippled Resources
8. [rippled Source Code](https://github.com/XRPLF/rippled)
9. [XRP Ledger Documentation](https://xrpl.org/docs/)
10. [rippled Overlay README](https://github.com/XRPLF/rippled/blob/develop/src/xrpld/overlay/README.md)
11. [rippled RPC README](https://github.com/XRPLF/rippled/blob/develop/src/xrpld/rpc/README.md)
12. [rippled Consensus README](https://github.com/XRPLF/rippled/blob/develop/src/xrpld/app/consensus/README.md)
---
## 8.4 Version History
| Version | Date | Author | Changes |
| ------- | ---------- | ------ | -------------------------------------------- |
| 1.0 | 2026-02-12 | - | Initial implementation plan |
| 1.1 | 2026-02-13 | - | Refactored into modular documents |
| 1.2 | 2026-03-09 | - | Added Phases 9–11 (future enhancement plans) |
---
## 8.5 Document Index
### Plan Documents
| Document | Description |
| -------------------------------------------------------------------- | -------------------------------------------- |
| [OpenTelemetryPlan.md](./OpenTelemetryPlan.md) | Master overview and executive summary |
| [00-tracing-fundamentals.md](./00-tracing-fundamentals.md) | Distributed tracing concepts and OTel primer |
| [01-architecture-analysis.md](./01-architecture-analysis.md) | rippled architecture and trace points |
| [02-design-decisions.md](./02-design-decisions.md) | SDK selection, exporters, span conventions |
| [03-implementation-strategy.md](./03-implementation-strategy.md) | Directory structure, performance analysis |
| [04-code-samples.md](./04-code-samples.md) | C++ code examples for all components |
| [05-configuration-reference.md](./05-configuration-reference.md) | rippled config, CMake, Collector configs |
| [06-implementation-phases.md](./06-implementation-phases.md) | Timeline, tasks, risks, success metrics |
| [07-observability-backends.md](./07-observability-backends.md) | Backend selection and architecture |
| [08-appendix.md](./08-appendix.md) | Glossary, references, version history |
| [09-data-collection-reference.md](./09-data-collection-reference.md) | Span/metric/dashboard inventory |
| [presentation.md](./presentation.md) | Slide deck for OTel plan overview |
### Task Lists
| Document | Description |
| -------------------------------------------------------------------------- | --------------------------------------------------- |
| [POC_taskList.md](./POC_taskList.md) | Proof-of-concept telemetry integration |
| [Phase2_taskList.md](./Phase2_taskList.md) | RPC layer trace instrumentation |
| [Phase3_taskList.md](./Phase3_taskList.md) | Peer overlay & consensus tracing |
| [Phase4_taskList.md](./Phase4_taskList.md) | Transaction lifecycle tracing |
| [Phase5_taskList.md](./Phase5_taskList.md) | Ledger processing & advanced tracing |
| [Phase5_IntegrationTest_taskList.md](./Phase5_IntegrationTest_taskList.md) | Observability stack integration tests |
| [Phase7_taskList.md](./Phase7_taskList.md) | Native OTel metrics migration |
| [Phase8_taskList.md](./Phase8_taskList.md) | Log-trace correlation |
| [Phase9_taskList.md](./Phase9_taskList.md) | Internal metric instrumentation gap fill (future) |
| [Phase10_taskList.md](./Phase10_taskList.md) | Synthetic workload generation & validation (future) |
| [Phase11_taskList.md](./Phase11_taskList.md) | Third-party data collection pipelines (future) |
> **Note**: Phases 1 and 6 do not have separate task list files. Phase 1 tasks are documented in [06-implementation-phases.md §6.2](./06-implementation-phases.md). Phase 6 tasks are documented in [06-implementation-phases.md §6.7](./06-implementation-phases.md).
---
## 8.6 Phase 9–11 Cross-Reference Guide
This guide maps Phase 9–11 content to its location across the documentation.
### Phase 9: Internal Metric Instrumentation Gap Fill
| Content | Location |
| ------------------------------- | ------------------------------------------------------------------------ |
| Plan & architecture | [06-implementation-phases.md §6.8.2](./06-implementation-phases.md) |
| Task list (10 tasks, 12d) | [Phase9_taskList.md](./Phase9_taskList.md) |
| Future metric definitions (~50) | [09-data-collection-reference.md §5b](./09-data-collection-reference.md) |
| New class: `MetricsRegistry` | `src/xrpld/telemetry/MetricsRegistry.h/.cpp` (planned) |
| New dashboards | `rippled-fee-market`, `rippled-job-queue` (planned) |
**Metric categories**: NodeStore I/O, Cache Hit Rates, TxQ, PerfLog Per-RPC, PerfLog Per-Job, Counted Objects, Fee Escalation & Load Factors.
### Phase 10: Synthetic Workload Generation & Telemetry Validation
| Content | Location |
| ------------------------ | ------------------------------------------------------------------------ |
| Plan & architecture | [06-implementation-phases.md §6.8.3](./06-implementation-phases.md) |
| Task list (7 tasks, 10d) | [Phase10_taskList.md](./Phase10_taskList.md) |
| Validation inventory | [09-data-collection-reference.md §5c](./09-data-collection-reference.md) |
| Test harness | `docker/telemetry/docker-compose.workload.yaml` (planned) |
| CI workflow | `.github/workflows/telemetry-validation.yml` (planned) |
**Validates**: 17 spans, 22 attributes, 300+ metrics, 10 dashboards, and log-trace correlation.
### Phase 11: Third-Party Data Collection Pipelines
| Content | Location |
| --------------------------------- | ------------------------------------------------------------------------ |
| Plan & architecture | [06-implementation-phases.md §6.8.4](./06-implementation-phases.md) |
| Task list (11 tasks, 15d) | [Phase11_taskList.md](./Phase11_taskList.md) |
| External metric definitions (~30) | [09-data-collection-reference.md §5d](./09-data-collection-reference.md) |
| Custom OTel Collector receiver | `docker/telemetry/otel-rippled-receiver/` (planned) |
| Prometheus alerting rules (11) | [09-data-collection-reference.md §5d](./09-data-collection-reference.md) |
| New dashboards (4) | Validator Health, Network Topology, Fee Market (External), DEX & AMM |
**Consumer categories**: Exchanges, Payment Processors, DeFi/AMM, NFT Marketplaces, Analytics Providers, Wallets, Compliance, Academic Researchers, Institutional Custody, CBDC Bridge Operators.
---
## 8.7 Effort Summary (All Phases)
| Phase | Description | Effort | Status |
| ----- | -------------------------------- | ---------- | ------------------ |
| 1 | Core SDK integration | 5d | Active |
| 2 | RPC tracing | 5d | Active |
| 3 | Peer & consensus tracing | 8d | Active |
| 4 | Transaction lifecycle | 7d | Active |
| 5 | Ledger & advanced | 7.1d | Active |
| 6 | StatsD → OTel bridge | 8d | Active |
| 7 | Native OTel metrics | 15d | Active |
| 8 | Log-trace correlation | 10d | Active |
| 9 | Internal metric gap fill | 12d | Future Enhancement |
| 10 | Workload generation & validation | 10d | Future Enhancement |
| 11 | Third-party data pipelines | 15d | Future Enhancement |
| | **Total** | **102.1d** | |
---
_Previous: [Observability Backends](./07-observability-backends.md)_ | _Back to: [Overview](./OpenTelemetryPlan.md)_


@@ -0,0 +1,992 @@
# Observability Data Collection Reference
> **Audience**: Developers and operators. This is the single source of truth for all telemetry data collected by rippled's observability stack.
>
> **Related docs**: [docs/telemetry-runbook.md](../docs/telemetry-runbook.md) (operator runbook with alerting and troubleshooting) | [03-implementation-strategy.md](./03-implementation-strategy.md) (code structure and performance optimization) | [04-code-samples.md](./04-code-samples.md) (C++ instrumentation examples)
## Data Flow Overview
```mermaid
graph LR
subgraph rippledNode["rippled Node"]
A["Trace Macros<br/>XRPL_TRACE_SPAN<br/>(OTLP/HTTP exporter)"]
B["beast::insight<br/>OTel native metrics<br/>(OTLP/HTTP exporter)"]
C["MetricsRegistry<br/>OTel SDK metrics<br/>(OTLP/HTTP exporter)"]
end
subgraph collector["OTel Collector :4317 / :4318"]
direction TB
R1["OTLP Receiver<br/>:4317 gRPC | :4318 HTTP<br/>(traces + metrics)"]
BP["Batch Processor<br/>timeout 1s, batch 100"]
SM["SpanMetrics Connector<br/>derives RED metrics<br/>from trace spans"]
R1 --> BP
BP --> SM
end
subgraph backends["Trace Backends (choose one or both)"]
D["Jaeger :16686<br/>Trace search &<br/>visualization"]
T["Grafana Tempo<br/>(preferred for production)<br/>S3/GCS long-term storage"]
end
subgraph metrics["Metrics Stack"]
E["Prometheus :9090<br/>scrapes :8889<br/>span-derived + system metrics"]
end
subgraph viz["Visualization"]
F["Grafana :3000<br/>13 dashboards"]
end
A -->|"OTLP/HTTP :4318<br/>(traces + attributes)"| R1
B -->|"OTLP/HTTP :4318<br/>(gauges, counters, histograms)"| R1
C -->|"OTLP/HTTP :4318<br/>(counters, histograms,<br/>observable gauges)"| R1
BP -->|"OTLP/gRPC :4317"| D
BP -->|"OTLP/gRPC"| T
SM -->|"span_calls_total<br/>span_duration_ms<br/>(6 dimension labels)"| E
R1 -->|"rippled_* gauges<br/>rippled_* counters<br/>rippled_* histograms"| E
E -->|"Prometheus<br/>data source"| F
D -->|"Jaeger<br/>data source"| F
T -->|"Tempo<br/>data source"| F
style A fill:#4a90d9,color:#fff,stroke:#2a6db5
style B fill:#4a90d9,color:#fff,stroke:#2a6db5
style R1 fill:#5cb85c,color:#fff,stroke:#3d8b3d
style BP fill:#449d44,color:#fff,stroke:#2d6e2d
style SM fill:#449d44,color:#fff,stroke:#2d6e2d
style D fill:#f0ad4e,color:#000,stroke:#c78c2e
style T fill:#e8953a,color:#000,stroke:#b5732a
style E fill:#f0ad4e,color:#000,stroke:#c78c2e
style F fill:#5bc0de,color:#000,stroke:#3aa8c1
style rippledNode fill:#1a2633,color:#ccc,stroke:#4a90d9
style collector fill:#1a3320,color:#ccc,stroke:#5cb85c
style backends fill:#332a1a,color:#ccc,stroke:#f0ad4e
style metrics fill:#332a1a,color:#ccc,stroke:#f0ad4e
style viz fill:#1a2d33,color:#ccc,stroke:#5bc0de
```
There are two independent telemetry pipelines entering a single **OTel Collector** via the same OTLP receiver:
1. **OpenTelemetry Traces** — Distributed spans with attributes, exported via OTLP/HTTP (:4318) to the collector's **OTLP Receiver**. The **Batch Processor** groups spans (1s timeout, batch size 100) before forwarding to trace backends. The **SpanMetrics Connector** derives RED metrics (rate, errors, duration) from every span and feeds them into the metrics pipeline.
2. **beast::insight OTel Metrics** — System-level gauges, counters, and histograms exported natively via OTLP/HTTP (:4318) to the same **OTLP Receiver**. These are batched and exported to Prometheus alongside span-derived metrics. The StatsD UDP transport has been replaced by native OTLP; `server=statsd` remains available as a fallback.
**Trace backends** — The collector exports traces via OTLP/gRPC to one or both:
- **Jaeger** (development) — Provides trace search UI at `:16686`. Easy single-binary setup.
- **Grafana Tempo** (production) — Supports S3/GCS object storage for cost-effective long-term trace retention and integrates natively with Grafana.
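Taken together, the collector side of this wiring corresponds to a configuration roughly like the sketch below (illustrative only; the shipped `otel-collector-config.yaml` under `docker/telemetry/` is authoritative and may differ in detail):
```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

processors:
  batch:
    timeout: 1s
    send_batch_size: 100

connectors:
  spanmetrics:
    histogram:
      explicit:
        buckets: [1ms, 5ms, 10ms, 25ms, 50ms, 100ms, 250ms, 500ms, 1s, 5s]
    # extra dimension labels omitted here -- see section 1.3

exporters:
  otlp/jaeger:              # or an otlp exporter pointed at Tempo, or both
    endpoint: jaeger:4317
    tls:
      insecure: true
  prometheus:
    endpoint: 0.0.0.0:8889  # scraped by Prometheus

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/jaeger, spanmetrics]
    metrics:
      receivers: [otlp, spanmetrics]
      processors: [batch]
      exporters: [prometheus]
```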
> **Further reading**: [00-tracing-fundamentals.md](./00-tracing-fundamentals.md) for core OpenTelemetry concepts (traces, spans, context propagation, sampling). [07-observability-backends.md](./07-observability-backends.md) for production backend selection, collector placement, and sampling strategies.
---
## 1. OpenTelemetry Spans
### 1.1 Complete Span Inventory (17 spans)
> **See also**: [02-design-decisions.md §2.3](./02-design-decisions.md#23-span-naming-conventions) for naming conventions and the full span catalog with rationale. [04-code-samples.md §4.6](./04-code-samples.md#46-span-flow-visualization) for span flow diagrams.
#### RPC Spans
Controlled by `trace_rpc=1` in `[telemetry]` config.
| Span Name | Parent | Source File | Description |
| -------------------- | ------------- | ----------------- | ------------------------------------------------------------------------ |
| `rpc.request` | — | ServerHandler.cpp | Top-level HTTP RPC request entry point |
| `rpc.process` | `rpc.request` | ServerHandler.cpp | RPC processing pipeline |
| `rpc.ws_message` | — | ServerHandler.cpp | WebSocket message handling |
| `rpc.command.<name>` | `rpc.process` | RPCHandler.cpp | Per-command span (e.g., `rpc.command.server_info`, `rpc.command.ledger`) |
**Where to find**: Jaeger → Service: `rippled` → Operation: `rpc.request` or `rpc.command.*`
**Grafana dashboard**: _RPC Performance_ (`rippled-rpc-perf`)
#### Transaction Spans
Controlled by `trace_transactions=1` in `[telemetry]` config.
| Span Name | Parent | Source File | Description |
| ------------ | -------------- | --------------- | ----------------------------------------------------------------- |
| `tx.process` | — | NetworkOPs.cpp | Transaction submission entry point (local or peer-relayed) |
| `tx.receive` | — | PeerImp.cpp | Raw transaction received from peer overlay (before deduplication) |
| `tx.apply` | `ledger.build` | BuildLedger.cpp | Transaction set applied to new ledger during consensus |
**Where to find**: Jaeger → Operation: `tx.process` or `tx.receive`
**Grafana dashboard**: _Transaction Overview_ (`rippled-transactions`)
#### Consensus Spans
Controlled by `trace_consensus=1` in `[telemetry]` config.
| Span Name | Parent | Source File | Description |
| --------------------------- | ------ | ---------------- | --------------------------------------------- |
| `consensus.proposal.send` | — | RCLConsensus.cpp | Node broadcasts its transaction set proposal |
| `consensus.ledger_close` | — | RCLConsensus.cpp | Ledger close event triggered by consensus |
| `consensus.accept` | — | RCLConsensus.cpp | Consensus accepts a ledger (round complete) |
| `consensus.validation.send` | — | RCLConsensus.cpp | Validation message sent after ledger accepted |
| `consensus.accept.apply` | — | RCLConsensus.cpp | Ledger application with close time details |
**Where to find**: Jaeger → Operation: `consensus.*`
**Grafana dashboard**: _Consensus Health_ (`rippled-consensus`)
#### Ledger Spans
Controlled by `trace_ledger=1` in `[telemetry]` config.
| Span Name | Parent | Source File | Description |
| ----------------- | ------ | ---------------- | ---------------------------------------------- |
| `ledger.build` | — | BuildLedger.cpp | Build new ledger from accepted transaction set |
| `ledger.validate` | — | LedgerMaster.cpp | Ledger promoted to validated status |
| `ledger.store` | — | LedgerMaster.cpp | Ledger stored to database/history |
**Where to find**: Jaeger → Operation: `ledger.*`
**Grafana dashboard**: _Ledger Operations_ (`rippled-ledger-ops`)
#### Peer Spans
Controlled by `trace_peer=1` in `[telemetry]` config. **Disabled by default** (high volume).
| Span Name | Parent | Source File | Description |
| ------------------------- | ------ | ----------- | ------------------------------------- |
| `peer.proposal.receive` | — | PeerImp.cpp | Consensus proposal received from peer |
| `peer.validation.receive` | — | PeerImp.cpp | Validation message received from peer |
**Where to find**: Jaeger → Operation: `peer.*`
**Grafana dashboard**: _Peer Network_ (`rippled-peer-net`)
---
### 1.2 Complete Attribute Inventory (22 attributes)
> **See also**: [02-design-decisions.md §2.4.2](./02-design-decisions.md#242-span-attributes-by-category) for attribute design rationale and privacy considerations.
Every span can carry key-value attributes that provide context for filtering and aggregation.
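Attributes are attached directly on the active span. The sketch below uses the raw opentelemetry-cpp API for illustration; rippled's actual instrumentation goes through its trace helpers (see [04-code-samples.md](./04-code-samples.md)), and the function name here is hypothetical:
```cpp
// Illustrative only: attach xrpl.rpc.* attributes to a span with the raw
// opentelemetry-cpp API. recordRpcCommand is a hypothetical helper, not
// rippled code.
#include <opentelemetry/trace/provider.h>

#include <cstdint>
#include <string>

void recordRpcCommand(std::string const& command, bool isAdmin)
{
    auto tracer = opentelemetry::trace::Provider::GetTracerProvider()
                      ->GetTracer("rippled");
    auto span = tracer->StartSpan("rpc.command." + command);

    span->SetAttribute("xrpl.rpc.command", command);
    span->SetAttribute("xrpl.rpc.role", isAdmin ? "admin" : "user");
    span->SetAttribute("xrpl.rpc.version", std::int64_t{2});

    // ... run the handler, then set xrpl.rpc.status / duration_ms ...
    span->End();
}
```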
#### RPC Attributes
| Attribute | Type | Set On | Description |
| ------------------------ | ------ | --------------- | ------------------------------------------------ |
| `xrpl.rpc.command` | string | `rpc.command.*` | RPC command name (e.g., `server_info`, `ledger`) |
| `xrpl.rpc.version` | int64 | `rpc.command.*` | API version number |
| `xrpl.rpc.role` | string | `rpc.command.*` | Caller role: `"admin"` or `"user"` |
| `xrpl.rpc.status` | string | `rpc.command.*` | Result: `"success"` or `"error"` |
| `xrpl.rpc.duration_ms` | int64 | `rpc.command.*` | Command execution time in milliseconds |
| `xrpl.rpc.error_message` | string | `rpc.command.*` | Error details (only set on failure) |
**Jaeger query**: Tag `xrpl.rpc.command=server_info` to find all `server_info` calls.
**Prometheus label**: `xrpl_rpc_command` (dots converted to underscores by SpanMetrics).
#### Transaction Attributes
| Attribute | Type | Set On | Description |
| -------------------- | ------- | -------------------------- | ---------------------------------------------------- |
| `xrpl.tx.hash` | string | `tx.process`, `tx.receive` | Transaction hash (hex-encoded) |
| `xrpl.tx.local` | boolean | `tx.process` | `true` if locally submitted, `false` if peer-relayed |
| `xrpl.tx.path` | string | `tx.process` | Submission path: `"sync"` or `"async"` |
| `xrpl.tx.suppressed` | boolean | `tx.receive` | `true` if transaction was suppressed (duplicate) |
| `xrpl.tx.status` | string | `tx.receive` | Transaction status (e.g., `"known_bad"`) |
**Jaeger query**: Tag `xrpl.tx.hash=<hash>` to trace a specific transaction across nodes.
**Prometheus label**: `xrpl_tx_local` (used as SpanMetrics dimension).
#### Consensus Attributes
| Attribute | Type | Set On | Description |
| ------------------------------------ | ------- | --------------------------------------------------------------------------------------------------- | ------------------------------------------------------------- |
| `xrpl.consensus.round` | int64 | `consensus.proposal.send` | Consensus round number |
| `xrpl.consensus.mode` | string | `consensus.proposal.send`, `consensus.ledger_close` | Node mode: `"syncing"`, `"tracking"`, `"full"`, `"proposing"` |
| `xrpl.consensus.proposers` | int64 | `consensus.proposal.send`, `consensus.accept` | Number of proposers in the round |
| `xrpl.consensus.proposing` | boolean | `consensus.validation.send` | Whether this node was a proposer |
| `xrpl.consensus.ledger.seq` | int64 | `consensus.ledger_close`, `consensus.accept`, `consensus.validation.send`, `consensus.accept.apply` | Ledger sequence number |
| `xrpl.consensus.close_time` | int64 | `consensus.accept.apply` | Agreed-upon ledger close time (epoch seconds) |
| `xrpl.consensus.close_time_correct` | boolean | `consensus.accept.apply` | Whether validators reached agreement on close time |
| `xrpl.consensus.close_resolution_ms` | int64 | `consensus.accept.apply` | Close time rounding granularity in milliseconds |
| `xrpl.consensus.state` | string | `consensus.accept.apply` | Consensus outcome: `"finished"` or `"moved_on"` |
| `xrpl.consensus.round_time_ms` | int64 | `consensus.accept.apply` | Total consensus round duration in milliseconds |
**Jaeger query**: Tag `xrpl.consensus.mode=proposing` to find rounds where node was proposing.
**Prometheus label**: `xrpl_consensus_mode` (used as SpanMetrics dimension).
#### Ledger Attributes
| Attribute | Type | Set On | Description |
| ------------------------- | ----- | ------------------------------------------------------------- | ---------------------------------------------- |
| `xrpl.ledger.seq` | int64 | `ledger.build`, `ledger.validate`, `ledger.store`, `tx.apply` | Ledger sequence number |
| `xrpl.ledger.validations` | int64 | `ledger.validate` | Number of validations received for this ledger |
| `xrpl.ledger.tx_count` | int64 | `ledger.build`, `tx.apply` | Transactions in the ledger |
| `xrpl.ledger.tx_failed` | int64 | `ledger.build`, `tx.apply` | Failed transactions in the ledger |
**Jaeger query**: Tag `xrpl.ledger.seq=12345` to find all spans for a specific ledger.
#### Peer Attributes
| Attribute | Type | Set On | Description |
| ------------------------------ | ------- | ---------------------------------------------------------------- | ---------------------------------------------------- |
| `xrpl.peer.id` | int64 | `tx.receive`, `peer.proposal.receive`, `peer.validation.receive` | Peer identifier |
| `xrpl.peer.proposal.trusted` | boolean | `peer.proposal.receive` | Whether the proposal came from a trusted validator |
| `xrpl.peer.validation.trusted` | boolean | `peer.validation.receive` | Whether the validation came from a trusted validator |
**Prometheus labels**: `xrpl_peer_proposal_trusted`, `xrpl_peer_validation_trusted` (SpanMetrics dimensions).
---
### 1.3 SpanMetrics — Derived Prometheus Metrics
> **See also**: [01-architecture-analysis.md](./01-architecture-analysis.md) §1.8.2 for how span-derived metrics map to operational insights.
The OTel Collector's SpanMetrics connector automatically generates RED (Rate, Errors, Duration) metrics from every span. No custom metrics code in rippled is needed.
| Prometheus Metric | Type | Description |
| -------------------------------------------------- | --------- | ------------------------------------------------------------------------------ |
| `traces_span_metrics_calls_total` | Counter | Total span invocations |
| `traces_span_metrics_duration_milliseconds_bucket` | Histogram | Latency distribution (buckets: 1, 5, 10, 25, 50, 100, 250, 500, 1000, 5000 ms) |
| `traces_span_metrics_duration_milliseconds_count` | Histogram | Observation count |
| `traces_span_metrics_duration_milliseconds_sum` | Histogram | Cumulative latency |
**Standard labels on every metric**: `span_name`, `status_code`, `service_name`, `span_kind`
**Additional dimension labels** (configured in `otel-collector-config.yaml`):
| Span Attribute | Prometheus Label | Applies To |
| ------------------------------ | ------------------------------ | ------------------------- |
| `xrpl.rpc.command` | `xrpl_rpc_command` | `rpc.command.*` |
| `xrpl.rpc.status` | `xrpl_rpc_status` | `rpc.command.*` |
| `xrpl.consensus.mode` | `xrpl_consensus_mode` | `consensus.ledger_close` |
| `xrpl.tx.local` | `xrpl_tx_local` | `tx.process` |
| `xrpl.peer.proposal.trusted` | `xrpl_peer_proposal_trusted` | `peer.proposal.receive` |
| `xrpl.peer.validation.trusted` | `xrpl_peer_validation_trusted` | `peer.validation.receive` |
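These labels are produced by the SpanMetrics connector's `dimensions` list in the collector config; a sketch of the relevant fragment (illustrative, the shipped `otel-collector-config.yaml` is authoritative):
```yaml
connectors:
  spanmetrics:
    dimensions:
      - name: xrpl.rpc.command
      - name: xrpl.rpc.status
      - name: xrpl.consensus.mode
      - name: xrpl.tx.local
      - name: xrpl.peer.proposal.trusted
      - name: xrpl.peer.validation.trusted
```
The dots-to-underscores conversion noted above happens on the Prometheus side of the export.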
**Where to query**: Prometheus → `traces_span_metrics_calls_total{span_name="rpc.command.server_info"}`
---
## 2. System Metrics (beast::insight — OTel native)
> **See also**: [02-design-decisions.md](./02-design-decisions.md) for the beast::insight coexistence design. [06-implementation-phases.md](./06-implementation-phases.md) for the Phase 6/7 metric inventory.
>
> **Migration complete**: Phase 7 replaced the StatsD UDP transport with native OTel Metrics SDK export via OTLP/HTTP. The `beast::insight::Collector` interface and all metric names are preserved — only the wire protocol changed. `[insight] server=statsd` remains as a fallback.
These are system-level metrics emitted by rippled's `beast::insight` framework via OTel OTLP/HTTP. They cover operational data that doesn't map to individual trace spans.
### Configuration
```ini
# Recommended: native OTel metrics via OTLP/HTTP
[insight]
server=otel
endpoint=http://localhost:4318/v1/metrics
prefix=rippled
```
Fallback (StatsD):
```ini
[insight]
server=statsd
address=127.0.0.1:8125
prefix=rippled
```
### 2.1 Gauges
| Prometheus Metric | Source File | Description | Typical Range |
| --------------------------------------------------- | --------------------- | ---------------------------------------- | ------------------------------- |
| `rippled_LedgerMaster_Validated_Ledger_Age` | LedgerMaster.h | Seconds since last validated ledger | 0–10 (healthy), >30 (stale) |
| `rippled_LedgerMaster_Published_Ledger_Age` | LedgerMaster.h | Seconds since last published ledger | 0–10 (healthy) |
| `rippled_State_Accounting_Disconnected_duration` | NetworkOPs.cpp | Cumulative seconds in Disconnected state | Monotonic |
| `rippled_State_Accounting_Connected_duration` | NetworkOPs.cpp | Cumulative seconds in Connected state | Monotonic |
| `rippled_State_Accounting_Syncing_duration` | NetworkOPs.cpp | Cumulative seconds in Syncing state | Monotonic |
| `rippled_State_Accounting_Tracking_duration` | NetworkOPs.cpp | Cumulative seconds in Tracking state | Monotonic |
| `rippled_State_Accounting_Full_duration` | NetworkOPs.cpp | Cumulative seconds in Full state | Monotonic (should dominate) |
| `rippled_State_Accounting_Disconnected_transitions` | NetworkOPs.cpp | Count of transitions to Disconnected | Low |
| `rippled_State_Accounting_Connected_transitions` | NetworkOPs.cpp | Count of transitions to Connected | Low |
| `rippled_State_Accounting_Syncing_transitions` | NetworkOPs.cpp | Count of transitions to Syncing | Low |
| `rippled_State_Accounting_Tracking_transitions` | NetworkOPs.cpp | Count of transitions to Tracking | Low |
| `rippled_State_Accounting_Full_transitions` | NetworkOPs.cpp | Count of transitions to Full | Low (should be 1 after startup) |
| `rippled_Peer_Finder_Active_Inbound_Peers` | PeerfinderManager.cpp | Active inbound peer connections | 0–85 |
| `rippled_Peer_Finder_Active_Outbound_Peers` | PeerfinderManager.cpp | Active outbound peer connections | 10–21 |
| `rippled_Overlay_Peer_Disconnects` | OverlayImpl.cpp | Cumulative peer disconnection count | Low growth |
| `rippled_job_count` | JobQueue.cpp | Current job queue depth | 0–100 (healthy) |
**Grafana dashboard**: _Node Health (System Metrics)_ (`rippled-system-node-health`)
### 2.2 Counters
| Prometheus Metric | Source File | Description |
| --------------------------------- | ------------------ | --------------------------------------------- |
| `rippled_rpc_requests` | ServerHandler.cpp | Total RPC requests received |
| `rippled_ledger_fetches` | InboundLedgers.cpp | Inbound ledger fetch attempts |
| `rippled_ledger_history_mismatch` | LedgerHistory.cpp | Ledger hash mismatches detected |
| `rippled_warn` | Logic.h | Resource manager warnings issued |
| `rippled_drop` | Logic.h | Resource manager drops (connections rejected) |
**Note**: With `server=otel`, `rippled_warn` and `rippled_drop` are properly exported as OTel Counter instruments. The previous StatsD `|m` type limitation no longer applies.
**Grafana dashboard**: _RPC & Pathfinding (System Metrics)_ (`rippled-system-rpc`)
### 2.3 Histograms (Event timers)
| Prometheus Metric | Source File | Unit | Description |
| ----------------------- | ----------------- | ----- | ------------------------------ |
| `rippled_rpc_time` | ServerHandler.cpp | ms | RPC response time distribution |
| `rippled_rpc_size` | ServerHandler.cpp | bytes | RPC response size distribution |
| `rippled_ios_latency` | Application.cpp | ms | I/O service loop latency |
| `rippled_pathfind_fast` | PathRequests.h | ms | Fast pathfinding duration |
| `rippled_pathfind_full` | PathRequests.h | ms | Full pathfinding duration |
Quantiles collected: 0th, 50th, 90th, 95th, 99th, and 100th percentiles.
**Grafana dashboards**: _Node Health_ (`ios_latency`), _RPC & Pathfinding_ (`rpc_time`, `rpc_size`, `pathfind_*`)
### 2.4 Overlay Traffic Metrics
For each of the 45+ overlay traffic categories (defined in `TrafficCount.h`), four gauges are emitted:
- `rippled_{category}_Bytes_In`
- `rippled_{category}_Bytes_Out`
- `rippled_{category}_Messages_In`
- `rippled_{category}_Messages_Out`
**Key categories**:
| Category | Description |
| ----------------------------------------------------------------- | -------------------------- |
| `total` | All traffic aggregated |
| `overhead` / `overhead_overlay` | Protocol overhead |
| `transactions` / `transactions_duplicate` | Transaction relay |
| `proposals` / `proposals_untrusted` / `proposals_duplicate` | Consensus proposals |
| `validations` / `validations_untrusted` / `validations_duplicate` | Consensus validations |
| `ledger_data_get` / `ledger_data_share` | Ledger data exchange |
| `ledger_data_Transaction_Node_get/share` | Transaction node data |
| `ledger_data_Account_State_Node_get/share` | Account state node data |
| `ledger_data_Transaction_Set_candidate_get/share` | Transaction set candidates |
| `getObject` / `haveTxSet` / `ledgerData` | Object requests |
| `ping` / `status` | Keepalive and status |
| `set_get` | Set requests |
**Grafana dashboards**: _Network Traffic_ (`rippled-system-network`), _Overlay Traffic Detail_ (`rippled-system-overlay-detail`), _Ledger Data & Sync_ (`rippled-system-ledger-sync`)
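Example queries built from this per-category naming pattern (illustrative):
```promql
# Transaction relay bandwidth, bytes per second (5m rate)
rate(rippled_transactions_Bytes_In[5m])
rate(rippled_transactions_Bytes_Out[5m])
# Duplicate proposal traffic relative to proposal traffic
rate(rippled_proposals_duplicate_Messages_In[5m])
  / rate(rippled_proposals_Messages_In[5m])
```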
---
## 3. Grafana Dashboard Reference
> **See also**: [05-configuration-reference.md](./05-configuration-reference.md) §5.8 for Grafana data source provisioning (Tempo, Jaeger, Prometheus) and TraceQL query examples.
### 3.1 Span-Derived Dashboards (5)
| Dashboard | UID | Data Source | Key Panels |
| -------------------- | ---------------------- | ------------------------ | ---------------------------------------------------------------------------------- |
| RPC Performance | `rippled-rpc-perf` | Prometheus (SpanMetrics) | Request rate by command, p95 latency by command, error rate, heatmap, top commands |
| Transaction Overview | `rippled-transactions` | Prometheus (SpanMetrics) | Processing rate, latency p95/p50, local vs relay split, apply duration, heatmap |
| Consensus Health | `rippled-consensus` | Prometheus (SpanMetrics) | Round duration p95/p50, proposals rate, close duration, mode timeline, heatmap |
| Ledger Operations | `rippled-ledger-ops` | Prometheus (SpanMetrics) | Build rate, build duration, validation rate, store rate, build vs close comparison |
| Peer Network | `rippled-peer-net` | Prometheus (SpanMetrics) | Proposal receive rate, validation receive rate, trusted vs untrusted breakdown |
### 3.2 System Metrics Dashboards (5)
| Dashboard | UID | Data Source | Key Panels |
| ---------------------- | ------------------------------- | ----------------- | --------------------------------------------------------------------------------- |
| Node Health | `rippled-system-node-health` | Prometheus (OTLP) | Ledger age, operating mode, I/O latency, job queue, fetch rate |
| Network Traffic | `rippled-system-network` | Prometheus (OTLP) | Active peers, disconnects, bytes in/out, messages in/out, traffic by category |
| RPC & Pathfinding | `rippled-system-rpc` | Prometheus (OTLP) | RPC rate, response time/size, pathfinding duration, resource warnings/drops |
| Overlay Traffic Detail | `rippled-system-overlay-detail` | Prometheus (OTLP) | Squelch, overhead, validator lists, set get/share, have/requested tx, proof paths |
| Ledger Data & Sync | `rippled-system-ledger-sync` | Prometheus (OTLP) | Ledger data exchange, legacy ledger share/get, getobject by type, traffic heatmap |
### 3.3 Accessing the Dashboards
1. Open Grafana at **http://localhost:3000**
2. Navigate to **Dashboards → rippled** folder
3. All 10 dashboards are auto-provisioned from `docker/telemetry/grafana/dashboards/`
---
## 4. Jaeger Trace Search Guide
> **See also**: [08-appendix.md](./08-appendix.md) §8.2 for span hierarchy visualizations. [05-configuration-reference.md](./05-configuration-reference.md) §5.8.5 for TraceQL examples when using Grafana Tempo instead of Jaeger.
### Finding Traces by Type
| What to Find | Jaeger Search Parameters |
| ------------------------ | ---------------------------------------------------------- |
| All RPC calls | Service: `rippled`, Operation: `rpc.request` |
| Specific RPC command | Operation: `rpc.command.server_info` (or any command name) |
| Slow RPC calls | Operation: `rpc.command.*`, Min Duration: `100ms` |
| Failed RPC calls | Tag: `xrpl.rpc.status=error` |
| Specific transaction | Tag: `xrpl.tx.hash=<hex_hash>` |
| Local transactions only | Tag: `xrpl.tx.local=true` |
| Consensus rounds | Operation: `consensus.accept` |
| Rounds by mode | Tag: `xrpl.consensus.mode=proposing` |
| Specific ledger | Tag: `xrpl.ledger.seq=12345` |
| Peer proposals (trusted) | Tag: `xrpl.peer.proposal.trusted=true` |
### Trace Structure
A typical RPC trace shows the span hierarchy:
```
rpc.request (ServerHandler)
└── rpc.process (ServerHandler)
└── rpc.command.server_info (RPCHandler)
```
A consensus round produces independent spans (not parent-child):
```
consensus.ledger_close (close event)
consensus.proposal.send (broadcast proposal)
ledger.build (build new ledger)
└── tx.apply (apply transaction set)
consensus.accept (accept result)
consensus.validation.send (send validation)
ledger.validate (promote to validated)
ledger.store (persist to DB)
```
---
## 5. Prometheus Query Examples
> **See also**: [05-configuration-reference.md](./05-configuration-reference.md) §5.8.7 for correlating Prometheus system metrics with trace-derived metrics.
### Span-Derived Metrics
```promql
# RPC request rate by command (last 5 minutes)
sum by (xrpl_rpc_command) (rate(traces_span_metrics_calls_total{span_name=~"rpc.command.*"}[5m]))
# RPC p95 latency by command
histogram_quantile(0.95, sum by (le, xrpl_rpc_command) (rate(traces_span_metrics_duration_milliseconds_bucket{span_name=~"rpc.command.*"}[5m])))
# Consensus round duration p95
histogram_quantile(0.95, sum by (le) (rate(traces_span_metrics_duration_milliseconds_bucket{span_name="consensus.accept"}[5m])))
# Transaction processing rate (local vs relay)
sum by (xrpl_tx_local) (rate(traces_span_metrics_calls_total{span_name="tx.process"}[5m]))
# Trusted vs untrusted proposal rate
sum by (xrpl_peer_proposal_trusted) (rate(traces_span_metrics_calls_total{span_name="peer.proposal.receive"}[5m]))
```
### System Metrics (beast::insight)
```promql
# Validated ledger age (should be < 10s)
rippled_LedgerMaster_Validated_Ledger_Age
# Active peer count
rippled_Peer_Finder_Active_Inbound_Peers + rippled_Peer_Finder_Active_Outbound_Peers
# RPC response time p95
histogram_quantile(0.95, rippled_rpc_time_bucket)
# Total network bytes in (rate)
rate(rippled_total_Bytes_In[5m])
# Operating mode (should be "Full" after startup)
rippled_State_Accounting_Full_duration
```
---
## 5a. Log-Trace Correlation (Phase 8)
> **Plan details**: [06-implementation-phases.md §6.8.1](./06-implementation-phases.md) — motivation, architecture, Mermaid diagrams
> **Task breakdown**: [Phase8_taskList.md](./Phase8_taskList.md) — per-task implementation details
Phase 8 injects OTel trace context into rippled's `Logs::format()` output, enabling log-trace correlation. When a log line is emitted within an active OTel span, the trace and span identifiers are automatically appended after the severity field:
### Log Format
```
<timestamp> <partition>:<severity> trace_id=<32hex> span_id=<16hex> <message>
```
Example:
```
2024-01-15T10:30:45.123Z LedgerMaster:NFO trace_id=abc123def456789012345678abcdef01 span_id=0123456789abcdef Validated ledger 42
```
- **`trace_id=<hex32>`** — 32-character lowercase hex trace identifier. Links to the distributed trace in Tempo/Jaeger.
- **`span_id=<hex16>`** — 16-character lowercase hex span identifier. Identifies the specific span within the trace.
- **Only present** when the log is emitted within an active OTel span. Log lines outside of traced code paths have no trace context fields.
### Implementation
The trace context injection is implemented in `Logs::format()` (`src/libxrpl/basics/Log.cpp`), guarded by `#ifdef XRPL_ENABLE_TELEMETRY`. It reads the current span from OTel's thread-local runtime context via `opentelemetry::trace::GetSpan()` and `opentelemetry::context::RuntimeContext::GetCurrent()`. Both calls are lock-free thread-local reads measured at <10ns per call.
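A condensed sketch of that lookup (illustrative; the helper name is hypothetical, and the real code in `Log.cpp` wires this into the existing format path):
```cpp
// Sketch of reading the active span's IDs for log injection, guarded by
// XRPL_ENABLE_TELEMETRY in the real build. currentTraceContextFields() is a
// hypothetical helper name.
#include <opentelemetry/context/runtime_context.h>
#include <opentelemetry/trace/context.h>

#include <string>

static std::string currentTraceContextFields()
{
    namespace trace = opentelemetry::trace;
    namespace context = opentelemetry::context;

    // Lock-free, thread-local read of the active span (noop span if none).
    auto span = trace::GetSpan(context::RuntimeContext::GetCurrent());
    auto spanContext = span->GetContext();
    if (!spanContext.IsValid())
        return {};  // outside a traced code path: emit the line unchanged

    char traceHex[32];
    char spanHex[16];
    spanContext.trace_id().ToLowerBase16(traceHex);
    spanContext.span_id().ToLowerBase16(spanHex);

    return " trace_id=" + std::string(traceHex, sizeof(traceHex)) +
           " span_id=" + std::string(spanHex, sizeof(spanHex));
}
```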
### Log Ingestion Pipeline
```
rippled debug.log -> OTel Collector filelog receiver -> regex_parser -> Loki exporter -> Grafana Loki
```
The OTel Collector's `filelog` receiver tails `debug.log` files and uses a `regex_parser` operator to extract structured fields:
| Field | Type | Description |
| ----------- | -------- | -------------------------------------------------------- |
| `timestamp` | datetime | Log timestamp |
| `partition` | string | Log partition (e.g., `LedgerMaster`, `PeerImp`) |
| `severity` | string | Severity code (`TRC`, `DBG`, `NFO`, `WRN`, `ERR`, `FTL`) |
| `trace_id` | string | 32-hex trace identifier (optional) |
| `span_id` | string | 16-hex span identifier (optional) |
| `message` | string | Log message body |
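A sketch of the corresponding collector fragment (the log path and regex shown are illustrative; the shipped collector config is authoritative):
```yaml
receivers:
  filelog:
    include: [/var/log/rippled/debug.log]   # illustrative path
    operators:
      - type: regex_parser
        regex: '^(?P<timestamp>\S+) (?P<partition>\w+):(?P<severity>\w{3})(?: trace_id=(?P<trace_id>[a-f0-9]{32}) span_id=(?P<span_id>[a-f0-9]{16}))? (?P<message>.*)$'

exporters:
  loki:
    endpoint: http://loki:3100/loki/api/v1/push

service:
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [loki]
```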
### Grafana Correlation
Bidirectional linking between logs and traces is configured via Grafana datasource provisioning:
- **Tempo -> Loki** (`tracesToLogs`): Clicking "Logs for this trace" on a Tempo trace view filters Loki logs by `trace_id`, showing all log lines from that trace.
- **Loki -> Tempo** (`derivedFields`): A regex-based derived field on the Loki datasource extracts `trace_id` from log lines and renders it as a clickable link to the corresponding trace in Tempo.
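In Grafana datasource provisioning terms this looks roughly like the sketch below (datasource UIDs, URLs, and ports are assumptions for illustration):
```yaml
apiVersion: 1
datasources:
  - name: Tempo
    type: tempo
    uid: tempo
    url: http://tempo:3200
    jsonData:
      tracesToLogsV2:
        datasourceUid: loki
        filterByTraceID: true
  - name: Loki
    type: loki
    uid: loki
    url: http://loki:3100
    jsonData:
      derivedFields:
        - name: trace_id
          matcherRegex: 'trace_id=([a-f0-9]{32})'
          datasourceUid: tempo
          url: '$${__value.raw}'   # $$ escapes env expansion in provisioning files
```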
### Loki Backend
Grafana Loki (v2.9.0) serves as the log storage backend. It receives log entries from the OTel Collector's `loki` exporter via the push API at `http://loki:3100/loki/api/v1/push`.
### LogQL Query Examples
```logql
# Find all logs for a specific trace
{job="rippled"} |= "trace_id=abc123def456789012345678abcdef01"
# Error logs with trace context
{job="rippled"} |= "ERR" |= "trace_id="
# Logs from a specific partition with trace context
{job="rippled"} |= "LedgerMaster" | regexp `trace_id=(?P<trace_id>[a-f0-9]+)` | trace_id != ""
# Count traced log lines over time
count_over_time({job="rippled"} |= "trace_id=" [5m])
```
---
## 5b. Future: Internal Metric Gap Fill (Phase 9)
> **Status**: Planned, not yet implemented.
> **Plan details**: [06-implementation-phases.md §6.8.2](./06-implementation-phases.md) — motivation, architecture, third-party context
> **Task breakdown**: [Phase9_taskList.md](./Phase9_taskList.md) — per-task implementation details
Phase 9 adds export for roughly 50 metrics that already exist inside rippled but currently lack time-series export. It uses a hybrid approach: `beast::insight` extensions for NodeStore I/O, and OTel `ObservableGauge` async callbacks for the new categories.
### New Metric Categories
#### NodeStore I/O (via beast::insight)
| Prometheus Metric | Type | Description |
| ------------------------------------ | ----- | ----------------------------------- |
| `rippled_nodestore_reads_total` | Gauge | Cumulative read operations |
| `rippled_nodestore_reads_hit` | Gauge | Cache-served reads |
| `rippled_nodestore_writes` | Gauge | Cumulative write operations |
| `rippled_nodestore_written_bytes` | Gauge | Cumulative bytes written |
| `rippled_nodestore_read_bytes` | Gauge | Cumulative bytes read |
| `rippled_nodestore_read_duration_us` | Gauge | Cumulative read time (microseconds) |
| `rippled_nodestore_write_load` | Gauge | Current write load score |
| `rippled_nodestore_read_queue` | Gauge | Items in read queue |
#### Cache Hit Rates (via OTel MetricsRegistry)
| Prometheus Metric | Type | Description |
| ------------------------------- | ----- | ------------------------------------ |
| `rippled_cache_SLE_hit_rate` | Gauge | SLE cache hit rate (0.0-1.0) |
| `rippled_cache_ledger_hit_rate` | Gauge | Ledger object cache hit rate |
| `rippled_cache_AL_hit_rate` | Gauge | AcceptedLedger cache hit rate |
| `rippled_cache_treenode_size` | Gauge | SHAMap TreeNode cache size (entries) |
| `rippled_cache_fullbelow_size` | Gauge | FullBelow cache size |
#### Transaction Queue (via OTel MetricsRegistry)
| Prometheus Metric | Type | Description |
| -------------------------------------- | ----- | -------------------------------- |
| `rippled_txq_count` | Gauge | Current transactions in queue |
| `rippled_txq_max_size` | Gauge | Maximum queue capacity |
| `rippled_txq_in_ledger` | Gauge | Transactions in open ledger |
| `rippled_txq_per_ledger` | Gauge | Expected transactions per ledger |
| `rippled_txq_open_ledger_fee_level` | Gauge | Open ledger fee escalation level |
| `rippled_txq_med_fee_level` | Gauge | Median fee level in queue |
| `rippled_txq_reference_fee_level` | Gauge | Reference fee level |
| `rippled_txq_min_processing_fee_level` | Gauge | Minimum fee to get processed |
#### PerfLog Per-RPC Method (via OTel Metrics SDK)
| Prometheus Metric | Type | Labels | Description |
| --------------------------------------- | --------- | ----------------- | --------------------------- |
| `rippled_rpc_method_started_total` | Counter | `method="<name>"` | RPC calls started |
| `rippled_rpc_method_finished_total` | Counter | `method="<name>"` | RPC calls completed |
| `rippled_rpc_method_errored_total` | Counter | `method="<name>"` | RPC calls errored |
| `rippled_rpc_method_duration_us_bucket` | Histogram | `method="<name>"` | Execution time distribution |
#### PerfLog Per-Job Type (via OTel Metrics SDK)
| Prometheus Metric | Type | Labels | Description |
| ---------------------------------------- | --------- | ------------------- | --------------- |
| `rippled_job_queued_total` | Counter | `job_type="<name>"` | Jobs queued |
| `rippled_job_started_total` | Counter | `job_type="<name>"` | Jobs started |
| `rippled_job_finished_total` | Counter | `job_type="<name>"` | Jobs completed |
| `rippled_job_queued_duration_us_bucket` | Histogram | `job_type="<name>"` | Queue wait time |
| `rippled_job_running_duration_us_bucket` | Histogram | `job_type="<name>"` | Execution time |
#### Counted Object Instances (via OTel MetricsRegistry)
| Prometheus Metric | Type | Labels | Description |
| ---------------------- | ----- | --------------- | ------------------------------- |
| `rippled_object_count` | Gauge | `type="<name>"` | Live instances of internal type |
Tracked types: `Transaction`, `Ledger`, `NodeObject`, `STTx`, `STLedgerEntry`, `InboundLedger`, `Pathfinder`, `PathRequest`, `HashRouterEntry`
#### Fee Escalation & Load Factors (via OTel MetricsRegistry)
| Prometheus Metric | Type | Description |
| ------------------------------------ | ----- | ------------------------------------ |
| `rippled_load_factor` | Gauge | Combined transaction cost multiplier |
| `rippled_load_factor_server` | Gauge | Server + cluster + network load |
| `rippled_load_factor_local` | Gauge | Local server load only |
| `rippled_load_factor_net` | Gauge | Network-wide load estimate |
| `rippled_load_factor_cluster` | Gauge | Cluster peer load |
| `rippled_load_factor_fee_escalation` | Gauge | Open ledger fee escalation |
| `rippled_load_factor_fee_queue` | Gauge | Queue entry fee level |
### New Grafana Dashboards (Phase 9)
| Dashboard | UID | Data Source | Key Panels |
| ------------------ | -------------------- | ----------- | ----------------------------------------------------------------- |
| Fee Market & TxQ | `rippled-fee-market` | Prometheus | TxQ depth/capacity, fee levels, load factor breakdown, escalation |
| Job Queue Analysis | `rippled-job-queue` | Prometheus | Per-job rates, queue wait times, execution times, queue depth |
---
## 5c. Future: Synthetic Workload Generation & Telemetry Validation (Phase 10)
> **Plan details**: [06-implementation-phases.md §6.8.3](./06-implementation-phases.md) — motivation, architecture
> **Task breakdown**: [Phase10_taskList.md](./Phase10_taskList.md) — per-task implementation details
> **Tools**: [docker/telemetry/workload/](../docker/telemetry/workload/) — RPC load generator, transaction submitter, validation suite, benchmarks
Phase 10 builds a 5-node validator docker-compose harness with RPC load generators, transaction submitters, and automated validation scripts that verify all spans, metrics, dashboards, and log-trace correlation work end-to-end. Includes a benchmark suite comparing telemetry-ON vs telemetry-OFF overhead.
### Running the Validation Suite
```bash
# Full end-to-end validation (start cluster, generate load, validate):
docker/telemetry/workload/run-full-validation.sh --xrpld .build/xrpld
# Validation only (assumes stack and cluster are already running):
python3 docker/telemetry/workload/validate_telemetry.py --report /tmp/report.json
# Performance benchmark (baseline vs telemetry):
docker/telemetry/workload/benchmark.sh --xrpld .build/xrpld --duration 300
```
### Validated Telemetry Inventory
| Category | Expected Count | Validation Method | Config File |
| ------------------ | -------------- | -------------------------------- | ----------------------- |
| Trace spans | 17 | Jaeger/Tempo API query | `expected_spans.json` |
| Span attributes | 22 | Per-span attribute assertion | `expected_spans.json` |
| StatsD metrics | 255+ | Prometheus query | `expected_metrics.json` |
| Phase 9 metrics | 50+ | Prometheus query | `expected_metrics.json` |
| SpanMetrics RED | 4 per span | Prometheus query | `expected_metrics.json` |
| Grafana dashboards | 10 | Dashboard API "no data" check | `expected_metrics.json` |
| Log-trace links | Present | Loki query + Tempo reverse check | — |
### Performance Overhead Targets
| Metric | Target | Measurement Method |
| ----------------- | ------------ | ----------------------------------- |
| CPU overhead | < 3% | ps avg CPU% baseline vs telemetry |
| Memory overhead | < 5MB | ps peak RSS baseline vs telemetry |
| RPC p99 latency | < 2ms impact | server_info round-trip timing |
| Throughput impact | < 5% | Ledger close rate comparison |
| Consensus impact | < 1% | Consensus round time p95 comparison |
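A minimal sampling loop of the kind `collect_system_metrics.sh` implements (illustrative; the real script records more fields and handles multiple nodes):
```bash
# Sample CPU% and RSS of the running node once per second (illustrative).
pid=$(pgrep -f xrpld | head -n 1)
while kill -0 "$pid" 2>/dev/null; do
    ps -o %cpu=,rss= -p "$pid" >> /tmp/xrpld_samples.txt
    sleep 1
done
```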
---
## 5d. Future: Third-Party Data Collection Pipelines (Phase 11)
> **Status**: Planned, not yet implemented.
> **Plan details**: [06-implementation-phases.md §6.8.4](./06-implementation-phases.md) — motivation, architecture, consumer gap analysis
> **Task breakdown**: [Phase11_taskList.md](./Phase11_taskList.md) — per-task implementation details
Phase 11 builds a custom OTel Collector receiver (Go) that polls rippled's admin RPCs and exports `xrpl_*` metrics for external consumers. No rippled code changes are required.
### Exported Metrics (via Custom OTel Collector Receiver)
#### Node Health (from server_info)
| Prometheus Metric | Type | Description |
| --------------------------------------- | ----- | ----------------------------------------------- |
| `xrpl_server_state` | Gauge | Operating mode (0=disconnected ... 5=proposing) |
| `xrpl_server_state_duration_seconds` | Gauge | Seconds in current state |
| `xrpl_uptime_seconds` | Gauge | Consecutive seconds running |
| `xrpl_io_latency_ms` | Gauge | I/O subsystem latency |
| `xrpl_amendment_blocked` | Gauge | 1 if amendment-blocked, 0 otherwise |
| `xrpl_peers_count` | Gauge | Connected peers |
| `xrpl_validated_ledger_seq` | Gauge | Latest validated ledger sequence |
| `xrpl_validated_ledger_age_seconds` | Gauge | Seconds since last validated close |
| `xrpl_last_close_proposers` | Gauge | Proposers in last consensus round |
| `xrpl_last_close_converge_time_seconds` | Gauge | Last consensus round duration |
| `xrpl_load_factor` | Gauge | Transaction cost multiplier |
| `xrpl_state_duration_seconds` | Gauge | Per-state duration (`state` label) |
| `xrpl_state_transitions_total` | Gauge | Per-state transition count (`state` label) |
#### Peer Topology (from peers)
| Prometheus Metric | Type | Description |
| --------------------------- | ----- | ----------------------------------- |
| `xrpl_peers_inbound_count` | Gauge | Inbound peer connections |
| `xrpl_peers_outbound_count` | Gauge | Outbound peer connections |
| `xrpl_peer_latency_p50_ms` | Gauge | Median peer latency |
| `xrpl_peer_latency_p95_ms` | Gauge | p95 peer latency |
| `xrpl_peer_version_count` | Gauge | Peers per version (`version` label) |
| `xrpl_peer_diverged_count` | Gauge | Peers with diverged tracking status |
#### Validator & Amendment (from validators, feature)
| Prometheus Metric | Type | Description |
| ------------------------------------- | ----- | --------------------------------------- |
| `xrpl_trusted_validators_count` | Gauge | UNL validator count |
| `xrpl_amendment_enabled_count` | Gauge | Enabled amendments |
| `xrpl_amendment_majority_count` | Gauge | Amendments with majority |
| `xrpl_amendment_unsupported_majority` | Gauge | 1 if unsupported amendment has majority |
| `xrpl_validator_list_active` | Gauge | 1 if validator list is active |
#### Fee Market (from fee)
| Prometheus Metric | Type | Description |
| -------------------------------- | ----- | ------------------------------------- |
| `xrpl_fee_open_ledger_fee_drops` | Gauge | Minimum fee for open ledger inclusion |
| `xrpl_fee_median_fee_drops` | Gauge | Median fee level |
| `xrpl_fee_queue_size` | Gauge | Current transaction queue depth |
| `xrpl_fee_current_ledger_size` | Gauge | Transactions in current open ledger |
#### DEX & AMM (optional, from book_offers, amm_info)
| Prometheus Metric | Type | Labels | Description |
| -------------------------- | ----- | --------------------- | ---------------------- |
| `xrpl_amm_tvl_drops` | Gauge | `pool="<id>"` | Total value locked |
| `xrpl_amm_trading_fee` | Gauge | `pool="<id>"` | Pool trading fee (bps) |
| `xrpl_orderbook_bid_depth` | Gauge | `pair="<base/quote>"` | Total bid volume |
| `xrpl_orderbook_ask_depth` | Gauge | `pair="<base/quote>"` | Total ask volume |
| `xrpl_orderbook_spread` | Gauge | `pair="<base/quote>"` | Best bid-ask spread |
### Phase 9: OTel SDK-Exported Metrics (MetricsRegistry)
Phase 9 introduces the `MetricsRegistry` class (`src/xrpld/telemetry/MetricsRegistry.h/.cpp`)
which registers metrics directly with the OpenTelemetry Metrics SDK. These are exported
via OTLP/HTTP to the OTel Collector and scraped by Prometheus.
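A minimal sketch of how one of these observable gauges could be registered through the OpenTelemetry C++ metrics API (function names and the value source are placeholders; the planned `MetricsRegistry` may structure this differently):
```cpp
// Illustrative registration of the nodestore_state observable gauge.
// fetchNodeStoreReads() is a placeholder for whatever the planned
// MetricsRegistry reads from the NodeStore backend counters.
#include <opentelemetry/metrics/provider.h>

#include <cstdint>
#include <map>
#include <string>

extern std::int64_t fetchNodeStoreReads();  // placeholder data source

static void observeNodeStore(
    opentelemetry::metrics::ObserverResult result, void* /*state*/)
{
    namespace metrics = opentelemetry::metrics;
    namespace nostd = opentelemetry::nostd;

    if (!nostd::holds_alternative<
            nostd::shared_ptr<metrics::ObserverResultT<std::int64_t>>>(result))
        return;

    auto observer = nostd::get<
        nostd::shared_ptr<metrics::ObserverResultT<std::int64_t>>>(result);

    std::map<std::string, std::string> labels{{"metric", "node_reads_total"}};
    observer->Observe(fetchNodeStoreReads(), labels);
}

// Call once at startup (e.g. from the planned MetricsRegistry constructor).
void registerNodeStoreGauge()
{
    auto meter = opentelemetry::metrics::Provider::GetMeterProvider()
                     ->GetMeter("rippled");

    // The instrument must outlive this call; the callback runs on every
    // metric collection cycle.
    static auto gauge = meter->CreateInt64ObservableGauge(
        "rippled_nodestore_state", "NodeStore state metrics");
    gauge->AddCallback(&observeNodeStore, nullptr);
}
```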
#### NodeStore I/O (Observable Gauge — `nodestore_state`)
| Prometheus Metric | Type | Labels | Description |
| ------------------------------------------------------ | ----- | -------- | ------------------------------------ |
| `rippled_nodestore_state{metric="node_reads_total"}` | Gauge | `metric` | Cumulative NodeStore read operations |
| `rippled_nodestore_state{metric="node_reads_hit"}` | Gauge | `metric` | Reads served from cache |
| `rippled_nodestore_state{metric="node_writes"}` | Gauge | `metric` | Cumulative write operations |
| `rippled_nodestore_state{metric="node_written_bytes"}` | Gauge | `metric` | Cumulative bytes written |
| `rippled_nodestore_state{metric="node_read_bytes"}` | Gauge | `metric` | Cumulative bytes read |
| `rippled_nodestore_state{metric="write_load"}` | Gauge | `metric` | Current write load score |
| `rippled_nodestore_state{metric="read_queue"}` | Gauge | `metric` | Items in read prefetch queue |
#### Cache Hit Rates & Sizes (Observable Gauge — `cache_metrics`)
| Prometheus Metric | Type | Labels | Description |
| ----------------------------------------------------- | ----- | -------- | ----------------------------- |
| `rippled_cache_metrics{metric="SLE_hit_rate"}` | Gauge | `metric` | SLE cache hit rate (0.0-1.0) |
| `rippled_cache_metrics{metric="ledger_hit_rate"}` | Gauge | `metric` | Ledger cache hit rate |
| `rippled_cache_metrics{metric="AL_hit_rate"}` | Gauge | `metric` | AcceptedLedger cache hit rate |
| `rippled_cache_metrics{metric="treenode_cache_size"}` | Gauge | `metric` | SHAMap TreeNode cache entries |
| `rippled_cache_metrics{metric="treenode_track_size"}` | Gauge | `metric` | Tracked tree nodes |
| `rippled_cache_metrics{metric="fullbelow_size"}` | Gauge | `metric` | FullBelow cache entries |
#### Transaction Queue (Observable Gauge — `txq_metrics`)
| Prometheus Metric | Type | Labels | Description |
| ------------------------------------------------------------ | ----- | -------- | -------------------------------- |
| `rippled_txq_metrics{metric="txq_count"}` | Gauge | `metric` | Transactions currently in queue |
| `rippled_txq_metrics{metric="txq_max_size"}` | Gauge | `metric` | Maximum queue capacity |
| `rippled_txq_metrics{metric="txq_in_ledger"}` | Gauge | `metric` | Transactions in open ledger |
| `rippled_txq_metrics{metric="txq_per_ledger"}` | Gauge | `metric` | Expected transactions per ledger |
| `rippled_txq_metrics{metric="txq_reference_fee_level"}` | Gauge | `metric` | Reference fee level |
| `rippled_txq_metrics{metric="txq_min_processing_fee_level"}` | Gauge | `metric` | Minimum fee to get processed |
| `rippled_txq_metrics{metric="txq_med_fee_level"}` | Gauge | `metric` | Median fee level in queue |
| `rippled_txq_metrics{metric="txq_open_ledger_fee_level"}` | Gauge | `metric` | Open ledger fee escalation level |
#### Per-RPC Method Metrics (Synchronous Counters/Histogram)
| Prometheus Metric | Type | Labels | Description |
| ----------------------------------- | --------- | ----------------- | -------------------------------- |
| `rippled_rpc_method_started_total` | Counter | `method="<name>"` | RPC calls started |
| `rippled_rpc_method_finished_total` | Counter | `method="<name>"` | RPC calls completed successfully |
| `rippled_rpc_method_errored_total` | Counter | `method="<name>"` | RPC calls that errored |
| `rippled_rpc_method_duration_us` | Histogram | `method="<name>"` | Execution time distribution (us) |
#### Per-Job-Type Metrics (Synchronous Counters/Histogram)
| Prometheus Metric | Type | Labels | Description |
| --------------------------------- | --------- | ------------------- | --------------------------------- |
| `rippled_job_queued_total` | Counter | `job_type="<name>"` | Jobs enqueued |
| `rippled_job_started_total` | Counter | `job_type="<name>"` | Jobs started |
| `rippled_job_finished_total` | Counter | `job_type="<name>"` | Jobs completed |
| `rippled_job_queued_duration_us` | Histogram | `job_type="<name>"` | Queue wait time distribution (us) |
| `rippled_job_running_duration_us` | Histogram | `job_type="<name>"` | Execution time distribution (us) |
#### Counted Object Instances (Observable Gauge — `object_count`)
| Prometheus Metric | Type | Labels | Description |
| ---------------------------------------------- | ----- | --------------- | ------------------------------ |
| `rippled_object_count{type="Transaction"}` | Gauge | `type="<name>"` | Live Transaction objects |
| `rippled_object_count{type="Ledger"}` | Gauge | `type="<name>"` | Live Ledger objects |
| `rippled_object_count{type="NodeObject"}` | Gauge | `type="<name>"` | Live NodeObject instances |
| `rippled_object_count{type="STTx"}` | Gauge | `type="<name>"` | Serialized transaction objects |
| `rippled_object_count{type="STLedgerEntry"}` | Gauge | `type="<name>"` | Serialized ledger entries |
| `rippled_object_count{type="InboundLedger"}` | Gauge | `type="<name>"` | Ledgers being fetched |
| `rippled_object_count{type="Pathfinder"}` | Gauge | `type="<name>"` | Active pathfinding operations |
| `rippled_object_count{type="PathRequest"}` | Gauge | `type="<name>"` | Active path requests |
| `rippled_object_count{type="HashRouterEntry"}` | Gauge | `type="<name>"` | Hash router entries |
#### Load Factor Breakdown (Observable Gauge — `load_factor_metrics`)
| Prometheus Metric | Type | Labels | Description |
| ------------------------------------------------------------------ | ----- | -------- | --------------------------------------- |
| `rippled_load_factor_metrics{metric="load_factor"}` | Gauge | `metric` | Combined transaction cost multiplier |
| `rippled_load_factor_metrics{metric="load_factor_server"}` | Gauge | `metric` | Server + cluster + network contribution |
| `rippled_load_factor_metrics{metric="load_factor_local"}` | Gauge | `metric` | Local server load only |
| `rippled_load_factor_metrics{metric="load_factor_net"}` | Gauge | `metric` | Network-wide load estimate |
| `rippled_load_factor_metrics{metric="load_factor_cluster"}` | Gauge | `metric` | Cluster peer load |
| `rippled_load_factor_metrics{metric="load_factor_fee_escalation"}` | Gauge | `metric` | Open ledger fee escalation |
| `rippled_load_factor_metrics{metric="load_factor_fee_queue"}` | Gauge | `metric` | Queue entry fee level |
#### Prometheus Query Examples (Phase 9)
```promql
# NodeStore cache hit ratio
rippled_nodestore_state{metric="node_reads_hit"} / rippled_nodestore_state{metric="node_reads_total"}
# RPC error rate for server_info
rate(rippled_rpc_method_errored_total{method="server_info"}[5m])
# Job queue wait time p95
histogram_quantile(0.95, sum by (le) (rate(rippled_job_queued_duration_us_bucket[5m])))
# TxQ utilization percentage
rippled_txq_metrics{metric="txq_count"} / rippled_txq_metrics{metric="txq_max_size"}
# High load factor alert candidate
rippled_load_factor_metrics{metric="load_factor"} > 5
```
### New Grafana Dashboards (Phase 9)
| Dashboard | UID | Data Source | Key Panels |
| ---------------------- | -------------------- | ----------- | --------------------------------------------------------- |
| Fee Market & TxQ | `rippled-fee-market` | Prometheus | TxQ depth/capacity, fee levels, load factor breakdown |
| Job Queue Analysis | `rippled-job-queue` | Prometheus | Per-job rates, queue wait times, execution times |
| RPC Performance (OTel) | `rippled-rpc-perf` | Prometheus | Per-method call rates, error rates, latency distributions |
### Updated Grafana Dashboards (Phase 9)
| Dashboard | UID | New Panels Added |
| -------------------- | ---------------------------- | ------------------------------------------------------ |
| Node Health (System Metrics) | `rippled-system-node-health` | NodeStore I/O, cache hit rates, object instance counts |
### New Grafana Dashboards (Phase 11)
| Dashboard | UID | Data Source | Key Panels |
| ------------------ | ----------------------------- | ----------- | ---------------------------------------------------------------------- |
| Validator Health | `rippled-validator-health` | Prometheus | Server state timeline, proposer count, converge time, amendment voting |
| Network Topology | `rippled-network-topology` | Prometheus | Peer count, version distribution, latency distribution, diverged peers |
| Fee Market (Ext) | `rippled-fee-market-external` | Prometheus | Fee levels, queue depth, load factor breakdown, escalation timeline |
| DEX & AMM Overview | `rippled-dex-amm` | Prometheus | AMM TVL, order book depth, spread trends, trading fee revenue |
### Prometheus Alerting Rules (Phase 11)
| Alert Name | Severity | Condition | For |
| ---------------------------------- | -------- | ----------------------------------------------------------- | --- |
| `XRPLServerNotFull` | Critical | `xrpl_server_state < 4` for 15m | 15m |
| `XRPLAmendmentBlocked` | Critical | `xrpl_amendment_blocked == 1` | 1m |
| `XRPLNoPeers` | Critical | `xrpl_peers_count == 0` | 5m |
| `XRPLLedgerStale` | Critical | `xrpl_validated_ledger_age_seconds > 120` | 2m |
| `XRPLHighIOLatency` | Critical | `xrpl_io_latency_ms > 100` | 5m |
| `XRPLUnsupportedAmendmentMajority` | Critical | `xrpl_amendment_unsupported_majority == 1` | 1m |
| `XRPLLowPeerCount` | Warning | `xrpl_peers_count < 10` | 15m |
| `XRPLHighLoadFactor` | Warning | `xrpl_load_factor > 10` | 10m |
| `XRPLSlowConsensus` | Warning | `xrpl_last_close_converge_time_seconds > 6` | 5m |
| `XRPLValidatorListExpiring` | Warning | `(xrpl_validator_list_expiration_seconds - time()) < 86400` | 1h |
| `XRPLStateFlapping` | Warning | `rate(xrpl_state_transitions_total{state="full"}[1h]) > 2` | 30m |
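Expressed as a Prometheus rule file, the first few rows of this table would look roughly like the following (illustrative sketch):
```yaml
groups:
  - name: xrpl-node-alerts
    rules:
      - alert: XRPLServerNotFull
        expr: xrpl_server_state < 4
        for: 15m
        labels:
          severity: critical
      - alert: XRPLLedgerStale
        expr: xrpl_validated_ledger_age_seconds > 120
        for: 2m
        labels:
          severity: critical
      - alert: XRPLHighLoadFactor
        expr: xrpl_load_factor > 10
        for: 10m
        labels:
          severity: warning
```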
---
## 6. Known Issues
| Issue | Impact | Status |
| ------------------------------------------------------------------ | ------------------------------------------------ | -------------------------------------------------------------------- |
| `warn` and `drop` metrics use the non-standard StatsD `\|m` meter type | Metrics silently dropped by OTel StatsD receiver | Phase 6 Task 6.1 needs the `\|m` → `\|c` change in StatsDCollector.cpp |
| `rippled_job_count` may not emit in standalone mode | Missing from Prometheus in some test configs | Requires active job queue activity |
| `rippled_rpc_requests` depends on `[insight]` config | Zero series if no `[insight]` exporter is configured | Requires `[insight]` (`server=otel` or `server=statsd`) in xrpld.cfg |
| Peer tracing disabled by default | No `peer.*` spans unless `trace_peer=1` | Intentional: peer traffic is high volume on mainnet |
---
## 7. Privacy and Data Collection
The telemetry system is designed with privacy in mind:
- **No private keys** are ever included in spans or metrics
- **No account balances** or financial data is traced
- **Transaction hashes** are included (public on-ledger data) but not transaction contents
- **Peer IDs** are internal identifiers, not IP addresses
- **All telemetry is opt-in**: it is disabled by default at build time (`-Dtelemetry=OFF`)
- **Sampling** reduces data volume: `sampling_ratio=0.01` is recommended for production
- **Data stays local**: the default stack sends data to `localhost` only
---
## 8. Configuration Quick Reference
> **Full reference**: [05-configuration-reference.md](./05-configuration-reference.md) §5.1 for all `[telemetry]` options with defaults, the config parser implementation, and collector YAML configurations (dev and production).
### Minimal Setup (development)
```ini
[telemetry]
enabled=1
[insight]
server=statsd
address=127.0.0.1:8125
prefix=rippled
```
### Production Setup
```ini
[telemetry]
enabled=1
endpoint=http://otel-collector:4318/v1/traces
sampling_ratio=0.01
trace_peer=0
batch_size=1024
max_queue_size=4096
[insight]
server=statsd
address=otel-collector:8125
prefix=rippled
```
### Trace Category Toggle
| Config Key | Default | Controls |
| -------------------- | ------- | ---------------------------- |
| `trace_rpc` | `1` | `rpc.*` spans |
| `trace_transactions` | `1` | `tx.*` spans |
| `trace_consensus` | `1` | `consensus.*` spans |
| `trace_ledger` | `1` | `ledger.*` spans |
| `trace_peer` | `0` | `peer.*` spans (high volume) |

View File

@@ -0,0 +1,205 @@
# [OpenTelemetry](00-tracing-fundamentals.md) Distributed Tracing Implementation Plan for rippled (xrpld)
## Executive Summary
This document provides a comprehensive implementation plan for integrating OpenTelemetry distributed tracing into the rippled XRP Ledger node software. The plan addresses the unique challenges of a decentralized peer-to-peer system where trace context must propagate across network boundaries between independent nodes.
### Key Benefits
- **End-to-end transaction visibility**: Track transactions from submission through consensus to ledger inclusion
- **Consensus round analysis**: Understand timing and behavior of consensus phases across validators
- **RPC performance insights**: Identify slow handlers and optimize response times
- **Network topology understanding**: Visualize message propagation patterns between peers
- **Incident debugging**: Correlate events across distributed nodes during issues
### Estimated Performance Overhead
| Metric | Overhead | Notes |
| ------------- | ---------- | ----------------------------------- |
| CPU | 1-3% | Span creation and attribute setting |
| Memory | 2-5 MB | Batch buffer for pending spans |
| Network | 10-50 KB/s | Compressed OTLP export to collector |
| Latency (p99) | <2% | With proper sampling configuration |
---
## Document Structure
This implementation plan is organized into modular documents for easier navigation:
<div align="center">
```mermaid
flowchart TB
overview["📋 OpenTelemetryPlan.md<br/>(This Document)"]
subgraph analysis["Analysis & Design"]
arch["01-architecture-analysis.md"]
design["02-design-decisions.md"]
end
subgraph impl["Implementation"]
strategy["03-implementation-strategy.md"]
code["04-code-samples.md"]
config["05-configuration-reference.md"]
end
subgraph deploy["Deployment & Planning"]
phases["06-implementation-phases.md"]
backends["07-observability-backends.md"]
appendix["08-appendix.md"]
dataref["09-data-collection-reference.md"]
end
overview --> analysis
overview --> impl
overview --> deploy
arch --> design
design --> strategy
strategy --> code
code --> config
config --> phases
phases --> backends
backends --> appendix
appendix --> dataref
style overview fill:#1b5e20,stroke:#0d3d14,color:#fff,stroke-width:2px
style analysis fill:#0d47a1,stroke:#082f6a,color:#fff
style impl fill:#bf360c,stroke:#8c2809,color:#fff
style deploy fill:#4a148c,stroke:#2e0d57,color:#fff
style arch fill:#0d47a1,stroke:#082f6a,color:#fff
style design fill:#0d47a1,stroke:#082f6a,color:#fff
style strategy fill:#bf360c,stroke:#8c2809,color:#fff
style code fill:#bf360c,stroke:#8c2809,color:#fff
style config fill:#bf360c,stroke:#8c2809,color:#fff
style phases fill:#4a148c,stroke:#2e0d57,color:#fff
style backends fill:#4a148c,stroke:#2e0d57,color:#fff
style appendix fill:#4a148c,stroke:#2e0d57,color:#fff
style dataref fill:#4a148c,stroke:#2e0d57,color:#fff
```
</div>
---
## Table of Contents
| Section | Document | Description |
| ------- | -------------------------------------------------------------- | ---------------------------------------------------------------------- |
| **1** | [Architecture Analysis](./01-architecture-analysis.md) | rippled component analysis, trace points, instrumentation priorities |
| **2** | [Design Decisions](./02-design-decisions.md) | SDK selection, exporters, span naming, attributes, context propagation |
| **3** | [Implementation Strategy](./03-implementation-strategy.md) | Directory structure, key principles, performance optimization |
| **4** | [Code Samples](./04-code-samples.md) | Complete C++ implementation examples for all components |
| **5** | [Configuration Reference](./05-configuration-reference.md) | rippled config, CMake integration, Collector configurations |
| **6** | [Implementation Phases](./06-implementation-phases.md) | 5-phase timeline, tasks, risks, success metrics |
| **7** | [Observability Backends](./07-observability-backends.md) | Backend selection guide and production architecture |
| **8** | [Appendix](./08-appendix.md) | Glossary, references, version history |
| **9** | [Data Collection Reference](./09-data-collection-reference.md) | Complete inventory of spans, attributes, metrics, and dashboards |
---
## 1. Architecture Analysis
The rippled node consists of several key components that require instrumentation for comprehensive distributed tracing. The main areas include the RPC server (HTTP/WebSocket), Overlay P2P network, Consensus mechanism (RCLConsensus), JobQueue for async task execution, and existing observability infrastructure (PerfLog, Insight/StatsD, Journal logging).
Key trace points span across transaction submission via RPC, peer-to-peer message propagation, consensus round execution, and ledger building. The implementation prioritizes high-value, low-risk components first: RPC handlers provide immediate value with minimal risk, while consensus tracing requires careful implementation to avoid timing impacts.
➡️ **[Read full Architecture Analysis](./01-architecture-analysis.md)**
---
## 2. Design Decisions
The OpenTelemetry C++ SDK is selected for its CNCF backing, active development, and native performance characteristics. Traces are exported via OTLP/gRPC (primary) or OTLP/HTTP (fallback) to an OpenTelemetry Collector, which provides flexible routing and sampling.
Span naming follows a hierarchical `<component>.<operation>` convention (e.g., `rpc.submit`, `tx.relay`, `consensus.round`). Context propagation uses W3C Trace Context headers for HTTP and embedded Protocol Buffer fields for P2P messages. The implementation coexists with existing PerfLog and Insight observability systems through correlation IDs.
**Data Collection & Privacy**: Telemetry collects only operational metadata (timing, counts, hashes) — never sensitive content (private keys, balances, amounts, raw payloads). Privacy protection includes account hashing, configurable redaction, sampling, and collector-level filtering. Node operators retain full control over what data is exported (not yet detailed in this document).
➡️ **[Read full Design Decisions](./02-design-decisions.md)**
---
## 3. Implementation Strategy
The telemetry code is organized under `include/xrpl/telemetry/` for headers and `src/libxrpl/telemetry/` for implementation. Key principles include RAII-based span management via `SpanGuard`, conditional compilation with `XRPL_ENABLE_TELEMETRY`, and minimal runtime overhead through batch processing and efficient sampling.
Performance optimization strategies include probabilistic head sampling (10% default), tail-based sampling at the collector for errors and slow traces, batch export to reduce network overhead, and conditional instrumentation that compiles to no-ops when disabled.
➡️ **[Read full Implementation Strategy](./03-implementation-strategy.md)**
---
## 4. Code Samples
Complete C++ implementation examples are provided for all telemetry components:
- `Telemetry.h` - Core interface for tracer access and span creation
- `SpanGuard.h` - RAII wrapper for automatic span lifecycle management
- `TracingInstrumentation.h` - Macros for conditional instrumentation
- Protocol Buffer extensions for trace context propagation
- Module-specific instrumentation (RPC, Consensus, P2P, JobQueue)
➡️ **[View all Code Samples](./04-code-samples.md)**
---
## 5. Configuration Reference
Configuration is handled through the `[telemetry]` section in `xrpld.cfg` with options for enabling/disabling, exporter selection, endpoint configuration, sampling ratios, and component-level filtering. CMake integration includes a `XRPL_ENABLE_TELEMETRY` option for compile-time control.
OpenTelemetry Collector configurations are provided for development (with Jaeger) and production (with tail-based sampling, Tempo, and Elastic APM). Docker Compose examples enable quick local development environment setup.
➡️ **[View full Configuration Reference](./05-configuration-reference.md)**
---
## 6. Implementation Phases
The implementation spans 13 weeks across 8 phases:
| Phase | Duration | Focus | Key Deliverables |
| ----- | ----------- | --------------------- | ----------------------------------------------------------- |
| 1 | Weeks 1-2 | Core Infrastructure | SDK integration, Telemetry interface, Configuration |
| 2 | Weeks 3-4 | RPC Tracing | HTTP context extraction, Handler instrumentation |
| 3 | Weeks 5-6 | Transaction Tracing | Protocol Buffer context, Relay propagation |
| 4 | Weeks 7-8 | Consensus Tracing | Round spans, Proposal/validation tracing |
| 5 | Week 9 | Documentation | Runbook, Dashboards, Training |
| 6 | Week 10 | StatsD Metrics Bridge | OTel Collector StatsD receiver, 3 Grafana dashboards |
| 7 | Weeks 11-12 | Native OTel Metrics | OTelCollector impl, OTLP metrics export, StatsD deprecation |
| 8 | Week 13 | Log-Trace Correlation | trace_id in logs, Loki ingestion, Tempo↔Loki linking |
**Total Effort**: 65.1 developer-days with 2 developers
➡️ **[View full Implementation Phases](./06-implementation-phases.md)**
---
## 7. Observability Backends
For development and testing, Jaeger provides easy setup with a good UI. For production deployments, Grafana Tempo is recommended for its cost-effectiveness and Grafana integration, while Elastic APM is ideal for organizations with existing Elastic infrastructure.
The recommended production architecture uses a gateway collector pattern with regional collectors performing tail-based sampling, routing traces to multiple backends (Tempo for primary storage, Elastic for log correlation, S3/GCS for long-term archive).
➡️ **[View Observability Backend Recommendations](./07-observability-backends.md)**
---
## 8. Appendix
The appendix contains a glossary of OpenTelemetry and rippled-specific terms, references to external documentation and specifications, version history for this implementation plan, and a complete document index.
➡️ **[View Appendix](./08-appendix.md)**
---
## 9. Data Collection Reference
A single-source-of-truth reference documenting every piece of telemetry data collected by rippled. Covers all 16 OpenTelemetry spans with their 22 attributes, all StatsD metrics (gauges, counters, histograms, overlay traffic), SpanMetrics-derived Prometheus metrics, and all 8 Grafana dashboards. Includes Jaeger search guides and Prometheus query examples.
➡️ **[View Data Collection Reference](./09-data-collection-reference.md)**
---
_This document provides a comprehensive implementation plan for integrating OpenTelemetry distributed tracing into the rippled XRP Ledger node software. For detailed information on any section, follow the links to the corresponding sub-documents._

View File

@@ -0,0 +1,610 @@
# OpenTelemetry POC Task List
> **Goal**: Build a minimal end-to-end proof of concept that demonstrates distributed tracing in rippled. A successful POC will show RPC request traces flowing from rippled through an OTel Collector into Jaeger, viewable in a browser UI.
>
> **Scope**: RPC tracing only (highest value, lowest risk per the [CRAWL phase](./06-implementation-phases.md#6102-quick-wins-immediate-value) in the implementation phases). No cross-node P2P context propagation or consensus tracing in the POC.
### Related Plan Documents
| Document | Relevance to POC |
| ---------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [00-tracing-fundamentals.md](./00-tracing-fundamentals.md) | Core concepts: traces, spans, context propagation, sampling |
| [01-architecture-analysis.md](./01-architecture-analysis.md) | RPC request flow (§1.5), key trace points (§1.6), instrumentation priority (§1.7) |
| [02-design-decisions.md](./02-design-decisions.md) | SDK selection (§2.1), exporter config (§2.2), span naming (§2.3), attribute schema (§2.4), coexistence with PerfLog/Insight (§2.6) |
| [03-implementation-strategy.md](./03-implementation-strategy.md) | Directory structure (§3.1), key principles (§3.2), performance overhead (§3.3-3.6), conditional compilation (§3.7.3), code intrusiveness (§3.9) |
| [04-code-samples.md](./04-code-samples.md) | Telemetry interface (§4.1), SpanGuard (§4.2), macros (§4.3), RPC instrumentation (§4.5.3) |
| [05-configuration-reference.md](./05-configuration-reference.md) | rippled config (§5.1), config parser (§5.2), Application integration (§5.3), CMake (§5.4), Collector config (§5.5), Docker Compose (§5.6), Grafana (§5.8) |
| [06-implementation-phases.md](./06-implementation-phases.md) | Phase 1 core tasks (§6.2), Phase 2 RPC tasks (§6.3), quick wins (§6.10), definition of done (§6.11) |
| [07-observability-backends.md](./07-observability-backends.md) | Jaeger dev setup (§7.1), Grafana dashboards (§7.6), alert rules (§7.6.3) |
---
## Task 0: Docker Observability Stack Setup
**Objective**: Stand up the backend infrastructure to receive, store, and display traces.
**What to do**:
- Create `docker/telemetry/docker-compose.yml` in the repo with three services:
1. **OpenTelemetry Collector** (`otel/opentelemetry-collector-contrib:latest`)
- Expose ports `4317` (OTLP gRPC) and `4318` (OTLP HTTP)
- Expose port `13133` (health check)
- Mount a config file `docker/telemetry/otel-collector-config.yaml`
2. **Jaeger** (`jaegertracing/all-in-one:latest`)
- Expose port `16686` (UI) and `14250` (gRPC collector)
- Set env `COLLECTOR_OTLP_ENABLED=true`
3. **Grafana** (`grafana/grafana:latest`) — optional but useful
- Expose port `3000`
- Enable anonymous admin access for local dev (`GF_AUTH_ANONYMOUS_ENABLED=true`, `GF_AUTH_ANONYMOUS_ORG_ROLE=Admin`)
- Provision Jaeger as a data source via `docker/telemetry/grafana/provisioning/datasources/jaeger.yaml`
- Create `docker/telemetry/otel-collector-config.yaml`:
```yaml
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
processors:
batch:
timeout: 1s
send_batch_size: 100
exporters:
logging:
verbosity: detailed
otlp/jaeger:
endpoint: jaeger:4317
tls:
insecure: true
service:
pipelines:
traces:
receivers: [otlp]
processors: [batch]
exporters: [logging, otlp/jaeger]
```
- Create Grafana Jaeger datasource provisioning file at `docker/telemetry/grafana/provisioning/datasources/jaeger.yaml`:
```yaml
apiVersion: 1
datasources:
- name: Jaeger
type: jaeger
access: proxy
url: http://jaeger:16686
```
**Verification**: Run `docker compose -f docker/telemetry/docker-compose.yml up -d`, then:
- `curl http://localhost:13133` returns healthy (Collector)
- `http://localhost:16686` opens Jaeger UI (no traces yet)
- `http://localhost:3000` opens Grafana (optional)
**Reference**:
- [05-configuration-reference.md §5.5](./05-configuration-reference.md) — Collector config (dev YAML with Jaeger exporter)
- [05-configuration-reference.md §5.6](./05-configuration-reference.md) — Docker Compose development environment
- [07-observability-backends.md §7.1](./07-observability-backends.md) — Jaeger quick start and backend selection
- [05-configuration-reference.md §5.8](./05-configuration-reference.md) — Grafana datasource provisioning and dashboards
---
## Task 1: Add OpenTelemetry C++ SDK Dependency
**Objective**: Make `opentelemetry-cpp` available to the build system.
**What to do**:
- Edit `conanfile.py` to add `opentelemetry-cpp` as an **optional** dependency. The gRPC otel plugin flag (`"grpc/*:otel_plugin": False`) in the existing conanfile may need to remain false — we pull the OTel SDK separately.
- Add a Conan option: `with_telemetry = [True, False]` defaulting to `False`
- When `with_telemetry` is `True`, add `opentelemetry-cpp` to `self.requires()`
- Required OTel Conan components: `opentelemetry-cpp` (which bundles api, sdk, and exporters). If the package isn't in Conan Center, consider using `FetchContent` in CMake or building from source as a fallback.
- Edit `CMakeLists.txt`:
- Add option: `option(XRPL_ENABLE_TELEMETRY "Enable OpenTelemetry tracing" OFF)`
- When ON, `find_package(opentelemetry-cpp CONFIG REQUIRED)` and add compile definition `XRPL_ENABLE_TELEMETRY`
- When OFF, do nothing (zero build impact)
- Verify the build succeeds with `-DXRPL_ENABLE_TELEMETRY=OFF` (no regressions) and with `-DXRPL_ENABLE_TELEMETRY=ON` (SDK links successfully).
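A minimal sketch of the `conanfile.py` change described above, in Conan 2 syntax; the recipe class name is hypothetical (fold the option into the existing recipe), and the SDK version shown is the one referenced in the POC lessons learned, so the lockfile must be regenerated when adding it:
```python
from conan import ConanFile

class XrplConan(ConanFile):  # hypothetical class name; merge into the existing recipe
    options = {"with_telemetry": [True, False]}
    default_options = {"with_telemetry": False}

    def requirements(self):
        # Pull the OTel SDK only when telemetry is requested, so the default build is unchanged.
        if self.options.with_telemetry:
            self.requires("opentelemetry-cpp/1.18.0")  # version per POC lessons learned; adjust as needed
```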
**Key files**:
- `conanfile.py`
- `CMakeLists.txt`
**Reference**:
- [05-configuration-reference.md §5.4](./05-configuration-reference.md) — CMake integration, `FindOpenTelemetry.cmake`, `XRPL_ENABLE_TELEMETRY` option
- [03-implementation-strategy.md §3.2](./03-implementation-strategy.md) — Key principle: zero-cost when disabled via compile-time flags
- [02-design-decisions.md §2.1](./02-design-decisions.md) — SDK selection rationale and required OTel components
---
## Task 2: Create Core Telemetry Interface and NullTelemetry
**Objective**: Define the `Telemetry` abstract interface and a no-op implementation so the rest of the codebase can reference telemetry without hard-depending on the OTel SDK.
**What to do**:
- Create `include/xrpl/telemetry/Telemetry.h`:
- Define `namespace xrpl::telemetry`
- Define `struct Telemetry::Setup` holding: `enabled`, `exporterEndpoint`, `samplingRatio`, `serviceName`, `serviceVersion`, `serviceInstanceId`, `traceRpc`, `traceTransactions`, `traceConsensus`, `tracePeer`
- Define abstract `class Telemetry` with:
- `virtual void start() = 0;`
- `virtual void stop() = 0;`
- `virtual bool isEnabled() const = 0;`
- `virtual nostd::shared_ptr<Tracer> getTracer(string_view name = "rippled") = 0;`
- `virtual nostd::shared_ptr<Span> startSpan(string_view name, SpanKind kind = kInternal) = 0;`
- `virtual nostd::shared_ptr<Span> startSpan(string_view name, Context const& parentContext, SpanKind kind = kInternal) = 0;`
- `virtual bool shouldTraceRpc() const = 0;`
- `virtual bool shouldTraceTransactions() const = 0;`
- `virtual bool shouldTraceConsensus() const = 0;`
- Factory: `std::unique_ptr<Telemetry> make_Telemetry(Setup const&, beast::Journal);`
- Config parser: `Telemetry::Setup setup_Telemetry(Section const&, std::string const& nodePublicKey, std::string const& version);`
- Create `include/xrpl/telemetry/SpanGuard.h`:
- RAII guard that takes an `nostd::shared_ptr<Span>`, creates a `Scope`, and calls `span->End()` in destructor.
- Convenience: `setAttribute()`, `setOk()`, `setStatus()`, `addEvent()`, `recordException()`, `context()`
- See [04-code-samples.md](./04-code-samples.md) §4.2 for the full implementation.
- Create `src/libxrpl/telemetry/NullTelemetry.cpp`:
- Implements `Telemetry` with all no-ops.
- `isEnabled()` returns `false`, `startSpan()` returns a noop span.
- This is used when `XRPL_ENABLE_TELEMETRY` is OFF or `enabled=0` in config.
- Guard all OTel SDK headers behind `#ifdef XRPL_ENABLE_TELEMETRY`. The `NullTelemetry` implementation should compile without the OTel SDK present.
**Key new files**:
- `include/xrpl/telemetry/Telemetry.h`
- `include/xrpl/telemetry/SpanGuard.h`
- `src/libxrpl/telemetry/NullTelemetry.cpp`
**Reference**:
- [04-code-samples.md §4.1](./04-code-samples.md) — Full `Telemetry` interface with `Setup` struct, lifecycle, tracer access, span creation, and component filtering methods
- [04-code-samples.md §4.2](./04-code-samples.md) — Full `SpanGuard` RAII implementation and `NullSpanGuard` no-op class
- [03-implementation-strategy.md §3.1](./03-implementation-strategy.md) — Directory structure: `include/xrpl/telemetry/` for headers, `src/libxrpl/telemetry/` for implementation
- [03-implementation-strategy.md §3.7.3](./03-implementation-strategy.md) — Conditional instrumentation and zero-cost compile-time disabled pattern
---
## Task 3: Implement OTel-Backed Telemetry
**Objective**: Implement the real `Telemetry` class that initializes the OTel SDK, configures the OTLP exporter and batch processor, and creates tracers/spans.
**What to do**:
- Create `src/libxrpl/telemetry/Telemetry.cpp` (compiled only when `XRPL_ENABLE_TELEMETRY=ON`):
- `class TelemetryImpl : public Telemetry` that:
- In `start()`: creates a `TracerProvider` with:
- Resource attributes: `service.name`, `service.version`, `service.instance.id`
- An `OtlpGrpcExporter` pointed at `setup.exporterEndpoint` (default `localhost:4317`)
- A `BatchSpanProcessor` with configurable batch size and delay
- A `TraceIdRatioBasedSampler` using `setup.samplingRatio`
- Sets the global `TracerProvider`
- In `stop()`: calls `ForceFlush()` then shuts down the provider
- In `startSpan()`: delegates to `getTracer()->StartSpan(name, ...)`
- `shouldTraceRpc()` etc. read from `Setup` fields
- Create `src/libxrpl/telemetry/TelemetryConfig.cpp`:
- `setup_Telemetry()` parses the `[telemetry]` config section from `xrpld.cfg`
- Maps config keys: `enabled`, `exporter`, `endpoint`, `sampling_ratio`, `trace_rpc`, `trace_transactions`, `trace_consensus`, `trace_peer`
- Wire `make_Telemetry()` factory:
- If `setup.enabled` is true AND `XRPL_ENABLE_TELEMETRY` is defined: return `TelemetryImpl`
- Otherwise: return `NullTelemetry`
- Add telemetry source files to CMake. When `XRPL_ENABLE_TELEMETRY=ON`, compile `Telemetry.cpp` and `TelemetryConfig.cpp` and link against `opentelemetry-cpp::api`, `opentelemetry-cpp::sdk`, `opentelemetry-cpp::otlp_grpc_exporter`. When OFF, compile only `NullTelemetry.cpp`.
**Key new files**:
- `src/libxrpl/telemetry/Telemetry.cpp`
- `src/libxrpl/telemetry/TelemetryConfig.cpp`
**Key modified files**:
- `CMakeLists.txt` (add telemetry library target)
**Reference**:
- [04-code-samples.md §4.1](./04-code-samples.md) — `Telemetry` interface that `TelemetryImpl` must implement
- [05-configuration-reference.md §5.2](./05-configuration-reference.md) — `setup_Telemetry()` config parser implementation
- [02-design-decisions.md §2.2](./02-design-decisions.md) — OTLP/gRPC exporter config (endpoint, TLS options)
- [02-design-decisions.md §2.4.1](./02-design-decisions.md) — Resource attributes: `service.name`, `service.version`, `service.instance.id`, `xrpl.network.id`
- [03-implementation-strategy.md §3.4](./03-implementation-strategy.md) — Per-operation CPU costs and overhead budget for span creation
- [03-implementation-strategy.md §3.5](./03-implementation-strategy.md) — Memory overhead: static (~456 KB) and dynamic (~1.2 MB) budgets
---
## Task 4: Integrate Telemetry into Application Lifecycle
**Objective**: Wire the `Telemetry` object into `Application` so all components can access it.
**What to do**:
- Edit `src/xrpld/app/main/Application.h`:
- Forward-declare `namespace xrpl::telemetry { class Telemetry; }`
- Add pure virtual method: `virtual telemetry::Telemetry& getTelemetry() = 0;`
- Edit `src/xrpld/app/main/Application.cpp` (the `ApplicationImp` class):
- Add member: `std::unique_ptr<telemetry::Telemetry> telemetry_;`
- In the constructor, after config is loaded and node identity is known:
```cpp
auto const telemetrySection = config_->section("telemetry");
auto telemetrySetup = telemetry::setup_Telemetry(
telemetrySection,
toBase58(TokenType::NodePublic, nodeIdentity_.publicKey()),
BuildInfo::getVersionString());
telemetry_ = telemetry::make_Telemetry(telemetrySetup, logs_->journal("Telemetry"));
```
- In `start()`: call `telemetry_->start()` early
- In `stop()` or destructor: call `telemetry_->stop()` late (to flush pending spans)
- Implement `getTelemetry()` override: return `*telemetry_`
- Add `[telemetry]` section to the example config `cfg/rippled-example.cfg`:
```ini
# [telemetry]
# enabled=1
# endpoint=localhost:4317
# sampling_ratio=1.0
# trace_rpc=1
```
**Key modified files**:
- `src/xrpld/app/main/Application.h`
- `src/xrpld/app/main/Application.cpp`
- `cfg/rippled-example.cfg` (or equivalent example config)
**Reference**:
- [05-configuration-reference.md §5.3](./05-configuration-reference.md) — `ApplicationImp` changes: member declaration, constructor init, `start()`/`stop()` wiring, `getTelemetry()` override
- [05-configuration-reference.md §5.1](./05-configuration-reference.md) — `[telemetry]` config section format and all option defaults
- [03-implementation-strategy.md §3.9.2](./03-implementation-strategy.md) — File impact assessment: `Application.cpp` ~15 lines added, ~3 changed (Low risk)
---
## Task 5: Create Instrumentation Macros
**Objective**: Define convenience macros that make instrumenting code one-liners, and that compile to zero-cost no-ops when telemetry is disabled.
**What to do**:
- Create `src/xrpld/telemetry/TracingInstrumentation.h`:
- When `XRPL_ENABLE_TELEMETRY` is defined:
```cpp
#define XRPL_TRACE_SPAN(telemetry, name) \
auto _xrpl_span_ = (telemetry).startSpan(name); \
::xrpl::telemetry::SpanGuard _xrpl_guard_(_xrpl_span_)
#define XRPL_TRACE_RPC(telemetry, name) \
std::optional<::xrpl::telemetry::SpanGuard> _xrpl_guard_; \
if ((telemetry).shouldTraceRpc()) { \
_xrpl_guard_.emplace((telemetry).startSpan(name)); \
}
#define XRPL_TRACE_SET_ATTR(key, value) \
if (_xrpl_guard_.has_value()) { \
_xrpl_guard_->setAttribute(key, value); \
}
#define XRPL_TRACE_EXCEPTION(e) \
if (_xrpl_guard_.has_value()) { \
_xrpl_guard_->recordException(e); \
}
```
- When `XRPL_ENABLE_TELEMETRY` is NOT defined, all macros expand to `((void)0)`
**Key new file**:
- `src/xrpld/telemetry/TracingInstrumentation.h`
**Reference**:
- [04-code-samples.md §4.3](./04-code-samples.md) — Full macro definitions for `XRPL_TRACE_SPAN`, `XRPL_TRACE_RPC`, `XRPL_TRACE_CONSENSUS`, `XRPL_TRACE_SET_ATTR`, `XRPL_TRACE_EXCEPTION` with both enabled and disabled branches
- [03-implementation-strategy.md §3.7.3](./03-implementation-strategy.md) — Conditional instrumentation pattern: compile-time `#ifndef` and runtime `shouldTrace*()` checks
- [03-implementation-strategy.md §3.9.7](./03-implementation-strategy.md) — Before/after code examples showing minimal intrusiveness (~1-3 lines per instrumentation point)
---
## Task 6: Instrument RPC ServerHandler
**Objective**: Add tracing to the HTTP RPC entry point so every incoming RPC request creates a span.
**What to do**:
- Edit `src/xrpld/rpc/detail/ServerHandler.cpp`:
- `#include` the `TracingInstrumentation.h` header
- In `ServerHandler::onRequest(Session& session)`:
- At the top of the method, add: `XRPL_TRACE_RPC(app_.getTelemetry(), "rpc.request");`
- After the RPC command name is extracted, set attribute: `XRPL_TRACE_SET_ATTR("xrpl.rpc.command", command);`
- After the response status is known, set: `XRPL_TRACE_SET_ATTR("http.status_code", static_cast<int64_t>(statusCode));`
- Wrap error paths with: `XRPL_TRACE_EXCEPTION(e);`
- In `ServerHandler::processRequest(...)`:
- Add a child span: `XRPL_TRACE_RPC(app_.getTelemetry(), "rpc.process");`
- Set method attribute: `XRPL_TRACE_SET_ATTR("xrpl.rpc.method", request_method);`
- In `ServerHandler::onWSMessage(...)` (WebSocket path):
- Add: `XRPL_TRACE_RPC(app_.getTelemetry(), "rpc.ws.message");`
- The goal is to see spans like:
```
rpc.request
└── rpc.process
```
in Jaeger for every HTTP RPC call.
**Key modified file**:
- `src/xrpld/rpc/detail/ServerHandler.cpp` (~15-25 lines added)
**Reference**:
- [04-code-samples.md §4.5.3](./04-code-samples.md) — Complete `ServerHandler::onRequest()` instrumented code sample with W3C header extraction, span creation, attribute setting, and error handling
- [01-architecture-analysis.md §1.5](./01-architecture-analysis.md) — RPC request flow diagram: HTTP request -> attributes -> jobqueue.enqueue -> rpc.command -> response
- [01-architecture-analysis.md §1.6](./01-architecture-analysis.md) — Key trace points table: `rpc.request` in `ServerHandler.cpp::onRequest()` (Priority: High)
- [02-design-decisions.md §2.3](./02-design-decisions.md) — Span naming convention: `rpc.request`, `rpc.command.*`
- [02-design-decisions.md §2.4.2](./02-design-decisions.md) — RPC span attributes: `xrpl.rpc.command`, `xrpl.rpc.version`, `xrpl.rpc.role`, `xrpl.rpc.params`
- [03-implementation-strategy.md §3.9.2](./03-implementation-strategy.md) — File impact: `ServerHandler.cpp` ~40 lines added, ~10 changed (Low risk)
---
## Task 7: Instrument RPC Command Execution
**Objective**: Add per-command tracing inside the RPC handler so each command (e.g., `submit`, `account_info`, `server_info`) gets its own child span.
**What to do**:
- Edit `src/xrpld/rpc/detail/RPCHandler.cpp`:
- `#include` the `TracingInstrumentation.h` header
- In `doCommand(RPC::JsonContext& context, Json::Value& result)`:
- At the top: `XRPL_TRACE_RPC(context.app.getTelemetry(), "rpc.command." + context.method);`
- Set attributes:
- `XRPL_TRACE_SET_ATTR("xrpl.rpc.command", context.method);`
- `XRPL_TRACE_SET_ATTR("xrpl.rpc.version", static_cast<int64_t>(context.apiVersion));`
- `XRPL_TRACE_SET_ATTR("xrpl.rpc.role", (context.role == Role::ADMIN) ? "admin" : "user");`
- On success: `XRPL_TRACE_SET_ATTR("xrpl.rpc.status", "success");`
- On error: `XRPL_TRACE_SET_ATTR("xrpl.rpc.status", "error");` and set the error message
- After this, traces in Jaeger should look like:
```
rpc.request (xrpl.rpc.command=account_info)
└── rpc.process
└── rpc.command.account_info (xrpl.rpc.version=2, xrpl.rpc.role=user, xrpl.rpc.status=success)
```
**Key modified file**:
- `src/xrpld/rpc/detail/RPCHandler.cpp` (~15-20 lines added)
**Reference**:
- [04-code-samples.md §4.5.3](./04-code-samples.md) — `ServerHandler::onRequest()` code sample (includes child span pattern for `rpc.command.*`)
- [02-design-decisions.md §2.3](./02-design-decisions.md) — Span naming: `rpc.command.*` pattern with dynamic command name (e.g., `rpc.command.server_info`)
- [02-design-decisions.md §2.4.2](./02-design-decisions.md) — RPC attribute schema: `xrpl.rpc.command`, `xrpl.rpc.version`, `xrpl.rpc.role`, `xrpl.rpc.status`
- [01-architecture-analysis.md §1.6](./01-architecture-analysis.md) — Key trace points table: `rpc.command.*` in `RPCHandler.cpp::doCommand()` (Priority: High)
- [02-design-decisions.md §2.6.5](./02-design-decisions.md) — Correlation with PerfLog: how `doCommand()` can link trace_id with existing PerfLog entries
- [03-implementation-strategy.md §3.4.4](./03-implementation-strategy.md) — RPC request overhead budget: ~1.75 μs total per request
---
## Task 8: Build, Run, and Verify End-to-End
**Objective**: Prove the full pipeline works: rippled emits traces -> OTel Collector receives them -> Jaeger displays them.
**What to do**:
1. **Start the Docker stack**:
```bash
docker compose -f docker/telemetry/docker-compose.yml up -d
```
Verify Collector health: `curl http://localhost:13133`
2. **Build rippled with telemetry**:
```bash
# Adjust for your actual build workflow
conan install . --build=missing -o with_telemetry=True
cmake --preset default -DXRPL_ENABLE_TELEMETRY=ON
cmake --build --preset default
```
3. **Configure rippled**:
Add to `rippled.cfg` (or your local test config):
```ini
[telemetry]
enabled=1
endpoint=localhost:4317
sampling_ratio=1.0
trace_rpc=1
```
4. **Start rippled** in standalone mode:
```bash
./rippled --conf rippled.cfg -a --start
```
5. **Generate RPC traffic**:
```bash
# server_info
curl -s -X POST http://localhost:5005 \
-H "Content-Type: application/json" \
-d '{"method":"server_info","params":[{}]}'
# ledger
curl -s -X POST http://localhost:5005 \
-H "Content-Type: application/json" \
-d '{"method":"ledger","params":[{"ledger_index":"current"}]}'
# account_info (will error in standalone, that's fine — we trace errors too)
curl -s -X POST http://localhost:5005 \
-H "Content-Type: application/json" \
-d '{"method":"account_info","params":[{"account":"rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh"}]}'
```
6. **Verify in Jaeger**:
- Open `http://localhost:16686`
- Select service `rippled` from the dropdown
- Click "Find Traces"
- Confirm you see traces with spans: `rpc.request` -> `rpc.process` -> `rpc.command.server_info`
- Click into a trace and verify attributes: `xrpl.rpc.command`, `xrpl.rpc.status`, `xrpl.rpc.version`
7. **Verify zero-overhead when disabled**:
- Rebuild with `XRPL_ENABLE_TELEMETRY=OFF`, or set `enabled=0` in config
- Run the same RPC calls
- Confirm no new traces appear and no errors in rippled logs
**Verification Checklist**:
- [ ] Docker stack starts without errors
- [ ] rippled builds with `-DXRPL_ENABLE_TELEMETRY=ON`
- [ ] rippled starts and connects to OTel Collector (check rippled logs for telemetry messages)
- [ ] Traces appear in Jaeger UI under service "rippled"
- [ ] Span hierarchy is correct (parent-child relationships)
- [ ] Span attributes are populated (`xrpl.rpc.command`, `xrpl.rpc.status`, etc.)
- [ ] Error spans show error status and message
- [ ] Building with `XRPL_ENABLE_TELEMETRY=OFF` produces no regressions
- [ ] Setting `enabled=0` at runtime produces no traces and no errors
**Reference**:
- [06-implementation-phases.md §6.11.1](./06-implementation-phases.md) — Phase 1 definition of done: SDK compiles, runtime toggle works, span creation verified in Jaeger, config validation passes
- [06-implementation-phases.md §6.11.2](./06-implementation-phases.md) — Phase 2 definition of done: 100% RPC coverage, traceparent propagation, <1ms p99 overhead, dashboard deployed
- [06-implementation-phases.md §6.8](./06-implementation-phases.md) — Success metrics: trace coverage >95%, CPU overhead <3%, memory <5 MB, latency impact <2%
- [03-implementation-strategy.md §3.9.5](./03-implementation-strategy.md) — Backward compatibility: config optional, protocol unchanged, `XRPL_ENABLE_TELEMETRY=OFF` produces identical binary
- [01-architecture-analysis.md §1.8](./01-architecture-analysis.md) — Observable outcomes: what traces, metrics, and dashboards to expect
---
## Task 9: Document POC Results and Next Steps
**Objective**: Capture findings, screenshots, and remaining work for the team.
**What to do**:
- Take screenshots of Jaeger showing:
- The service list with "rippled"
- A trace with the full span tree
- Span detail view showing attributes
- Document any issues encountered (build issues, SDK quirks, missing attributes)
- Note performance observations (build time impact, any noticeable runtime overhead)
- Write a short summary of what the POC proves and what it doesn't cover yet:
- **Proves**: OTel SDK integrates with rippled, OTLP export works, RPC traces visible
- **Doesn't cover**: Cross-node P2P context propagation, consensus tracing, protobuf trace context, W3C traceparent header extraction, tail-based sampling, production deployment
- Outline next steps (mapping to the full plan phases):
- [Phase 2](./06-implementation-phases.md) completion: [W3C header extraction](./02-design-decisions.md) (§2.5), WebSocket tracing, all [RPC handlers](./01-architecture-analysis.md) (§1.6)
- [Phase 3](./06-implementation-phases.md): [Protobuf `TraceContext` message](./04-code-samples.md) (§4.4), [transaction relay tracing](./04-code-samples.md) (§4.5.1) across nodes
- [Phase 4](./06-implementation-phases.md): [Consensus round and phase tracing](./04-code-samples.md) (§4.5.2)
- [Phase 5](./06-implementation-phases.md): [Production collector config](./05-configuration-reference.md) (§5.5.2), [Grafana dashboards](./07-observability-backends.md) (§7.6), [alerting](./07-observability-backends.md) (§7.6.3)
**Reference**:
- [06-implementation-phases.md §6.1](./06-implementation-phases.md) — Full 5-phase timeline overview and Gantt chart
- [06-implementation-phases.md §6.10](./06-implementation-phases.md) — Crawl-Walk-Run strategy: POC is the CRAWL phase, next steps are WALK and RUN
- [06-implementation-phases.md §6.12](./06-implementation-phases.md) — Recommended implementation order (14 steps across 9 weeks)
- [03-implementation-strategy.md §3.9](./03-implementation-strategy.md) — Code intrusiveness assessment and risk matrix for each remaining component
- [07-observability-backends.md §7.2](./07-observability-backends.md) — Production backend selection (Tempo, Elastic APM, Honeycomb, Datadog)
- [02-design-decisions.md §2.5](./02-design-decisions.md) — Context propagation design: W3C HTTP headers, protobuf P2P, JobQueue internal
- [00-tracing-fundamentals.md](./00-tracing-fundamentals.md) — Reference for team onboarding on distributed tracing concepts
---
## Summary
| Task | Description | New Files | Modified Files | Depends On |
| ---- | ------------------------------------ | --------- | -------------- | ---------- |
| 0 | Docker observability stack | 4 | 0 | — |
| 1 | OTel C++ SDK dependency | 0 | 2 | — |
| 2 | Core Telemetry interface + NullImpl | 3 | 0 | 1 |
| 3 | OTel-backed Telemetry implementation | 2 | 1 | 1, 2 |
| 4 | Application lifecycle integration | 0 | 3 | 2, 3 |
| 5 | Instrumentation macros | 1 | 0 | 2 |
| 6 | Instrument RPC ServerHandler | 0 | 1 | 4, 5 |
| 7 | Instrument RPC command execution | 0 | 1 | 4, 5 |
| 8 | End-to-end verification | 0 | 0 | 0-7 |
| 9 | Document results and next steps | 1 | 0 | 8 |
**Parallel work**: Tasks 0 and 1 can run in parallel. Once Task 2 is complete, Tasks 3 and 5 can proceed in parallel. Tasks 6 and 7 can be done in parallel once Tasks 4 and 5 are complete.
---
## Next Steps (Post-POC)
### Metrics Pipeline for Grafana Dashboards
The current POC exports **traces only**. Grafana's Explore view can query Jaeger for individual traces, but time-series charts (latency histograms, request throughput, error rates) require a **metrics pipeline**. To enable this:
1. **Add a `spanmetrics` connector** to the OTel Collector config that derives RED metrics (Rate, Errors, Duration) from trace spans automatically:
```yaml
connectors:
spanmetrics:
histogram:
explicit:
buckets: [1ms, 5ms, 10ms, 25ms, 50ms, 100ms, 250ms, 500ms, 1s, 5s]
dimensions:
- name: xrpl.rpc.command
- name: xrpl.rpc.status
exporters:
prometheus:
endpoint: 0.0.0.0:8889
service:
pipelines:
traces:
receivers: [otlp]
processors: [batch]
exporters: [debug, otlp/jaeger, spanmetrics]
metrics:
receivers: [spanmetrics]
exporters: [prometheus]
```
2. **Add Prometheus** to the Docker Compose stack to scrape the collector's metrics endpoint.
3. **Add Prometheus as a Grafana datasource** and build dashboards for:
- RPC request latency (p50/p95/p99) by command
- RPC throughput (requests/sec) by command
- Error rate by command
- Span duration distribution
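For orientation, a hedged sketch of PromQL the panels above could start from, assuming the spanmetrics metric names used in the Phase 10 validation suite and that the `xrpl.rpc.command` dimension is exported as an `xrpl_rpc_command` label; the exact label sanitization and the error `status_code` value depend on the collector version:
```python
# Example PromQL strings for the dashboard panels listed above (metric/label names are assumptions).
panels = {
    "rpc_p99_latency_ms": (
        "histogram_quantile(0.99, sum(rate("
        "traces_span_metrics_duration_milliseconds_bucket[5m])) by (le, xrpl_rpc_command))"
    ),
    "rpc_throughput_rps": "sum(rate(traces_span_metrics_calls_total[5m])) by (xrpl_rpc_command)",
    "rpc_error_rate": (
        'sum(rate(traces_span_metrics_calls_total{status_code="STATUS_CODE_ERROR"}[5m]))'
        " / sum(rate(traces_span_metrics_calls_total[5m]))"
    ),
}

for name, query in panels.items():
    print(f"{name}: {query}")
```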
### Additional Instrumentation
- **W3C `traceparent` header extraction** in `ServerHandler` to support cross-service context propagation from external callers
- **WebSocket RPC tracing** in `ServerHandler::onWSMessage()`
- **Transaction relay tracing** across nodes using protobuf `TraceContext` messages
- **Consensus round and phase tracing** for validator coordination visibility
- **Ledger close tracing** to measure close-to-validated latency
### Production Hardening
- **Tail-based sampling** in the OTel Collector to reduce volume while retaining error/slow traces
- **TLS configuration** for the OTLP exporter in production deployments
- **Resource limits** on the batch processor queue to prevent unbounded memory growth
- **Health monitoring** for the telemetry pipeline itself (collector lag, export failures)
### POC Lessons Learned
Issues encountered during POC implementation that inform future work:
| Issue | Resolution | Impact on Future Work |
| -------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------- | ---------------------------------------------------------------- |
| Conan lockfile rejected `opentelemetry-cpp/1.18.0` | Used `--lockfile=""` to bypass | Lockfile must be regenerated when adding new dependencies |
| Conan package only builds OTLP HTTP exporter, not gRPC | Switched from gRPC to HTTP exporter (`localhost:4318/v1/traces`) | HTTP exporter is the default; gRPC requires custom Conan profile |
| CMake target `opentelemetry-cpp::api` etc. don't exist in Conan package | Use umbrella target `opentelemetry-cpp::opentelemetry-cpp` | Conan targets differ from upstream CMake targets |
| OTel Collector `logging` exporter deprecated | Renamed to `debug` exporter | Use `debug` in all collector configs going forward |
| Macro parameter `telemetry` collided with `::xrpl::telemetry::` namespace | Renamed macro params to `_tel_obj_`, `_span_name_` | Avoid common words as macro parameter names |
| `opentelemetry::trace::Scope` creates new context on move | Store scope as member, create once in constructor | SpanGuard move semantics need care with Scope lifecycle |
| `TracerProviderFactory::Create` returns `unique_ptr<sdk::TracerProvider>`, not `nostd::shared_ptr` | Use `std::shared_ptr` member, wrap in `nostd::shared_ptr` for global provider | OTel SDK factory return types don't match API provider types |

View File

@@ -0,0 +1,256 @@
# Phase 10: Synthetic Workload Generation & Telemetry Validation — Task List
> **Status**: Future Enhancement
>
> **Goal**: Build tools that generate realistic XRPL traffic to validate the full Phases 1-9 telemetry stack end-to-end — all spans, attributes, metrics, dashboards, and log-trace correlation — under controlled load.
>
> **Scope**: Python/shell test harness + multi-node docker-compose environment + automated validation scripts + performance benchmarks.
>
> **Branch**: `pratik/otel-phase10-workload-validation` (from `pratik/otel-phase9-metric-gap-fill`)
>
> **Depends on**: Phase 9 (internal metric gap fill) — validates the full metric surface
### Related Plan Documents
| Document | Relevance |
| -------------------------------------------------------------------- | --------------------------------------------------------------- |
| [06-implementation-phases.md](./06-implementation-phases.md) | Phase 10 plan: motivation, architecture, exit criteria (§6.8.3) |
| [09-data-collection-reference.md](./09-data-collection-reference.md) | Defines the full inventory of spans/metrics to validate |
| [Phase9_taskList.md](./Phase9_taskList.md) | Prerequisite — all internal metrics must be emitting |
### Why This Phase Exists
Before Phases 1-9 can be considered production-ready, we need proof that:
1. All 16 spans fire with correct attributes under real transaction workloads
2. All 255+ StatsD metrics + ~50 Phase 9 metrics appear in Prometheus with non-zero values
3. Log-trace correlation (Phase 8) produces clickable trace_id links in Loki
4. All 10 Grafana dashboards render meaningful data (no empty panels)
5. Performance overhead stays within bounds (< 3% CPU, < 5MB memory)
6. The telemetry stack survives sustained load without data loss or queue backpressure
---
## Task 10.1: Multi-Node Test Harness
**Objective**: Create a docker-compose environment with 3-5 validator nodes that produces real consensus rounds.
**What to do**:
- Create `docker/telemetry/docker-compose.workload.yaml`:
- 5 rippled validator nodes with UNL configured for each other
- All telemetry enabled: `[telemetry] enabled=1`, `[insight] server=otel`
- Full OTel stack: Collector, Jaeger, Tempo, Prometheus, Loki, Grafana
- Shared network with service discovery
- Each node should:
- Generate validator keys at startup
- Configure all 5 nodes in its UNL
- Enable all trace categories including `trace_peer=1`
- Write logs to a file tailed by the OTel Collector filelog receiver
- Include a `Makefile` target: `make telemetry-workload-up` / `make telemetry-workload-down`
**Key files**:
- New: `docker/telemetry/docker-compose.workload.yaml`
- New: `docker/telemetry/workload/generate-validator-keys.sh`
- New: `docker/telemetry/workload/xrpld-validator.cfg.template`
---
## Task 10.2: RPC Load Generator
**Objective**: Configurable tool that fires all traced RPC commands at controlled rates.
**What to do**:
- Create `docker/telemetry/workload/rpc_load_generator.py`:
- Connects to one or more rippled WebSocket endpoints
- Fires all RPC commands that have trace spans: `server_info`, `ledger`, `tx`, `account_info`, `account_lines`, `fee`, `submit`, etc.
- Configurable parameters: rate (RPS), duration, command distribution weights
- Injects `traceparent` HTTP headers to test W3C context propagation
- Logs progress and errors to stdout
- Command distribution should match realistic production ratios:
- 40% `server_info` / `fee` (health checks)
- 30% `account_info` / `account_lines` / `account_objects` (wallet queries)
- 15% `ledger` / `ledger_data` (explorer queries)
- 10% `tx` / `account_tx` (transaction lookups)
- 5% `book_offers` / `amm_info` (DEX queries)
**Key files**:
- New: `docker/telemetry/workload/rpc_load_generator.py`
- New: `docker/telemetry/workload/requirements.txt`
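A minimal sketch of the weighted command selection and `traceparent` injection described above, using plain HTTP JSON-RPC for brevity (the real generator targets WebSocket endpoints); the endpoint, command list, and weights are assumptions approximating the distribution listed:
```python
import random
import secrets
import requests  # assumed to be listed in requirements.txt

RPC_URL = "http://localhost:5005"  # assumed local RPC port
COMMANDS = ["server_info", "fee", "account_info", "account_lines", "ledger", "tx", "book_offers"]
WEIGHTS = [25, 15, 20, 10, 15, 10, 5]  # roughly the 40/30/15/10/5 split above

def traceparent() -> str:
    # W3C trace context header: version-traceid-spanid-flags, with the sampled flag set.
    return f"00-{secrets.token_hex(16)}-{secrets.token_hex(8)}-01"

def fire_one() -> None:
    method = random.choices(COMMANDS, weights=WEIGHTS, k=1)[0]
    resp = requests.post(
        RPC_URL,
        json={"method": method, "params": [{}]},
        headers={"traceparent": traceparent()},
        timeout=5,
    )
    print(method, resp.status_code)

if __name__ == "__main__":
    for _ in range(100):
        fire_one()
```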
---
## Task 10.3: Transaction Submitter
**Objective**: Generate diverse transaction types to exercise `tx.*` and `ledger.*` spans.
**What to do**:
- Create `docker/telemetry/workload/tx_submitter.py`:
- Pre-funds test accounts from genesis account
- Submits a mix of transaction types:
- `Payment` (XRP and issued currencies) exercises `tx.process`, `tx.apply`
- `OfferCreate` / `OfferCancel` for DEX activity
- `TrustSet` to create trust lines for issued currencies
- `NFTokenMint` / `NFTokenCreateOffer` / `NFTokenAcceptOffer` for NFT activity
- `EscrowCreate` / `EscrowFinish` for the escrow lifecycle
- `AMMCreate` / `AMMDeposit` / `AMMWithdraw` for AMM pool operations (if the amendment is enabled)
- Configurable: TPS target, transaction mix weights, duration
- Monitors submission results and tracks success/failure rates
- The transaction mix ensures the telemetry captures the full range of ledger activity that third parties care about.
**Key files**:
- New: `docker/telemetry/workload/tx_submitter.py`
- New: `docker/telemetry/workload/test_accounts.json` (pre-generated keypairs)
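A small sketch of the TPS pacing and success/failure tracking described above; transaction building, funding, and signing are assumed to be handled by a `submit(tx_type)` callback elsewhere in `tx_submitter.py`, and the mix weights are illustrative:
```python
import collections
import random
import time

# Illustrative weights mirroring the transaction mix above.
TX_MIX = {
    "Payment": 40, "OfferCreate": 15, "OfferCancel": 5, "TrustSet": 10,
    "NFTokenMint": 10, "EscrowCreate": 5, "EscrowFinish": 5, "AMMDeposit": 10,
}

def run(tps: float, duration_s: float, submit) -> dict:
    """Pace submissions to a TPS target and track per-type results.

    `submit(tx_type)` is an assumed callback that builds, signs, and submits one
    transaction, returning True on a successful engine result.
    """
    results = collections.Counter()
    interval = 1.0 / tps
    deadline = time.monotonic() + duration_s
    types, weights = zip(*TX_MIX.items())
    while time.monotonic() < deadline:
        started = time.monotonic()
        tx_type = random.choices(types, weights=weights, k=1)[0]
        ok = submit(tx_type)
        results[f"{tx_type}:{'ok' if ok else 'fail'}"] += 1
        # Sleep off whatever is left of this submission's time slice.
        time.sleep(max(0.0, interval - (time.monotonic() - started)))
    return dict(results)
```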
---
## Task 10.4: Telemetry Validation Suite
**Objective**: Automated scripts that verify all expected telemetry data exists after a workload run.
**What to do**:
- Create `docker/telemetry/workload/validate_telemetry.py`:
**Span validation** (queries Jaeger/Tempo API):
- Assert all 16 span names appear in traces
- Assert each span has its required attributes (22 total attributes across spans)
- Assert parent-child relationships are correct (`rpc.request` -> `rpc.process` -> `rpc.command.*`)
- Assert span durations are reasonable (> 0, < 60s)
**Metric validation** (queries Prometheus API):
- Assert all SpanMetrics-derived metrics are non-zero: `traces_span_metrics_calls_total`, `traces_span_metrics_duration_milliseconds_bucket`
- Assert all StatsD metrics are non-zero: `rippled_LedgerMaster_Validated_Ledger_Age`, `rippled_Peer_Finder_Active_*`, etc.
- Assert all Phase 9 metrics are non-zero: `rippled_nodestore_*`, `rippled_cache_*`, `rippled_txq_*`, `rippled_rpc_method_*`, `rippled_object_count`, `rippled_load_factor*`
- Assert metric label cardinality is within bounds
**Log-trace correlation validation** (queries Loki API):
- Assert logs contain `trace_id=` and `span_id=` fields
- Pick a random trace_id from Jaeger, query Loki for matching logs, and assert results exist
- Assert Grafana derived field links are functional
**Dashboard validation**:
- For each of the 10 Grafana dashboards, query the dashboard API and assert no panels show "No data"
- Output: JSON report with pass/fail per check, suitable for CI.
**Key files**:
- New: `docker/telemetry/workload/validate_telemetry.py`
- New: `docker/telemetry/workload/expected_spans.json` (span inventory for validation)
- New: `docker/telemetry/workload/expected_metrics.json` (metric inventory for validation)
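A condensed sketch of the span and metric checks, assuming the Jaeger and Prometheus HTTP APIs at their default ports from the compose stack; the expected inventories are trimmed here and would really be loaded from `expected_spans.json` / `expected_metrics.json`:
```python
import json
import sys
import requests

JAEGER = "http://localhost:16686"
PROM = "http://localhost:9090"  # assumed Prometheus port
EXPECTED_SPANS = {"rpc.request", "rpc.process"}          # trimmed; full list in expected_spans.json
EXPECTED_METRICS = ["traces_span_metrics_calls_total"]   # trimmed; full list in expected_metrics.json

def seen_span_names() -> set:
    # Jaeger's HTTP query API is internal/unversioned; parameters may vary by version.
    traces = requests.get(
        f"{JAEGER}/api/traces", params={"service": "rippled", "limit": 200}, timeout=10
    ).json()["data"]
    return {span["operationName"] for trace in traces for span in trace["spans"]}

def metric_is_nonzero(name: str) -> bool:
    result = requests.get(f"{PROM}/api/v1/query", params={"query": f"sum({name})"}, timeout=10).json()
    samples = result["data"]["result"]
    return bool(samples) and float(samples[0]["value"][1]) > 0

report = {
    "missing_spans": sorted(EXPECTED_SPANS - seen_span_names()),
    "zero_metrics": [m for m in EXPECTED_METRICS if not metric_is_nonzero(m)],
}
print(json.dumps(report, indent=2))
sys.exit(1 if report["missing_spans"] or report["zero_metrics"] else 0)
```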
---
## Task 10.5: Performance Benchmark Suite
**Objective**: Measure CPU/memory/latency overhead of the telemetry stack.
**What to do**:
- Create `docker/telemetry/workload/benchmark.sh`:
- **Baseline run**: Start cluster with `[telemetry] enabled=0`, run transaction workload for 5 minutes, record metrics
- **Telemetry run**: Start cluster with full telemetry enabled, run identical workload, record metrics
- **Comparison**: Calculate deltas for:
- CPU usage (per-node average)
- Memory RSS (per-node peak)
- RPC p99 latency
- Transaction throughput (TPS)
- Consensus round time p95
- Ledger close time p95
- Output: Markdown table comparing baseline vs. telemetry, with pass/fail against targets:
- CPU overhead < 3%
- Memory overhead < 5MB
- RPC latency impact < 2ms p99
- Throughput impact < 5%
- Consensus impact < 1%
- Store results in `docker/telemetry/workload/benchmark-results/` for historical tracking.
**Key files**:
- New: `docker/telemetry/workload/benchmark.sh`
- New: `docker/telemetry/workload/collect_system_metrics.sh`
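A sketch of the comparison step with hypothetical sample numbers and the pass/fail targets above; the CPU, throughput, and consensus targets are interpreted here as relative deltas, and real values would come from `collect_system_metrics.sh`:
```python
# Hypothetical sample values; real numbers come from collect_system_metrics.sh output.
baseline = {"cpu_pct": 41.0, "mem_rss_mb": 612.0, "rpc_p99_ms": 8.4, "tps": 480.0, "consensus_p95_s": 3.10}
telemetry = {"cpu_pct": 42.1, "mem_rss_mb": 615.8, "rpc_p99_ms": 9.1, "tps": 471.0, "consensus_p95_s": 3.12}

# Thresholds from the pass/fail targets above.
checks = [
    ("CPU overhead < 3%", (telemetry["cpu_pct"] - baseline["cpu_pct"]) / baseline["cpu_pct"] * 100 < 3),
    ("Memory overhead < 5 MB", telemetry["mem_rss_mb"] - baseline["mem_rss_mb"] < 5),
    ("RPC p99 impact < 2 ms", telemetry["rpc_p99_ms"] - baseline["rpc_p99_ms"] < 2),
    ("Throughput impact < 5%", (baseline["tps"] - telemetry["tps"]) / baseline["tps"] * 100 < 5),
    ("Consensus impact < 1%",
     (telemetry["consensus_p95_s"] - baseline["consensus_p95_s"]) / baseline["consensus_p95_s"] * 100 < 1),
]

print("| Check | Result |")
print("| ----- | ------ |")
for name, passed in checks:
    print(f"| {name} | {'PASS' if passed else 'FAIL'} |")
```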
---
## Task 10.6: CI Integration
**Objective**: Wire the validation suite into CI for regression detection.
**What to do**:
- Create a CI workflow (GitHub Actions or equivalent) that:
1. Builds rippled with `-DXRPL_ENABLE_TELEMETRY=ON`
2. Starts the multi-node workload harness
3. Runs the RPC load generator + transaction submitter for 2 minutes
4. Runs the validation suite
5. Runs the benchmark suite
6. Fails the build if any validation check fails or benchmark exceeds thresholds
7. Archives the validation report and benchmark results as artifacts
- This should be a separate workflow (not part of the main CI), triggered manually or on telemetry-related branch changes.
**Key files**:
- New: `.github/workflows/telemetry-validation.yml`
- New: `docker/telemetry/workload/run-full-validation.sh` (orchestrator script)
---
## Task 10.7: Documentation
**Objective**: Document the workload tools and validation process.
**What to do**:
- Create `docker/telemetry/workload/README.md`:
- Quick start guide for running workload harness
- Configuration options for load generator and tx submitter
- How to read validation reports
- How to run benchmarks and interpret results
- Update `docs/telemetry-runbook.md`:
- Add "Validating Telemetry Stack" section
- Add "Performance Benchmarking" section
- Update `OpenTelemetryPlan/09-data-collection-reference.md`:
- Add "Validation" section with expected metric/span counts
---
## Effort Summary
| Task | Description | Effort | Risk |
| ---- | --------------------------- | ------ | ------ |
| 10.1 | Multi-node test harness | 2d | Medium |
| 10.2 | RPC load generator | 1d | Low |
| 10.3 | Transaction submitter | 2d | Medium |
| 10.4 | Telemetry validation suite | 2d | Medium |
| 10.5 | Performance benchmark suite | 1.5d | Low |
| 10.6 | CI integration | 1d | Medium |
| 10.7 | Documentation | 0.5d | Low |
**Total Effort**: 10 days
## Exit Criteria
- [ ] 5-node validator cluster starts and reaches consensus in docker-compose
- [ ] RPC load generator fires all traced RPC commands at configurable rates
- [ ] Transaction submitter generates 6+ transaction types at configurable TPS
- [ ] Validation suite confirms all 16 spans, 22 attributes, 300+ metrics are present
- [ ] Log-trace correlation validated end-to-end (Loki ↔ Tempo)
- [ ] All 10 Grafana dashboards render data (no empty panels)
- [ ] Benchmark shows < 3% CPU overhead, < 5MB memory overhead
- [ ] CI workflow runs validation on telemetry branch changes
- [ ] Validation report output is CI-parseable (JSON with exit codes)

View File

@@ -0,0 +1,471 @@
# Phase 11: Third-Party Data Collection Pipelines — Task List
> **Status**: Future Enhancement
>
> **Goal**: Build a custom OTel Collector receiver that periodically polls rippled's admin RPCs and exports structured metrics for external consumers — making all XRPL health, validator, peer, fee, and DEX data available as Prometheus/OTLP metrics without rippled code changes.
>
> **Scope**: Go-based OTel Collector receiver plugin + Grafana dashboards + Prometheus alerting rules.
>
> **Branch**: `pratik/otel-phase11-third-party-collection` (from `pratik/otel-phase10-workload-validation`)
>
> **Depends on**: Phase 10 (validation harness for testing the new receiver)
### Related Plan Documents
| Document | Relevance |
| -------------------------------------------------------------------- | --------------------------------------------------------------- |
| [06-implementation-phases.md](./06-implementation-phases.md) | Phase 11 plan: motivation, architecture, exit criteria (§6.8.4) |
| [09-data-collection-reference.md](./09-data-collection-reference.md) | Defines full metric inventory including third-party metrics |
| [Phase10_taskList.md](./Phase10_taskList.md) | Prerequisite — validation harness for testing |
### Third-Party Consumer Gap Analysis
This phase addresses the cross-cutting gap identified during research: **rippled has no native Prometheus/OTLP metrics export for data accessible only via RPC**. Every consumer (exchanges, payment processors, analytics providers, validators, researchers, compliance firms, custodians) must build custom JSON-RPC polling and conversion. This receiver centralizes that work.
| Consumer Category | Data Unlocked by This Phase |
| -------------------------- | ------------------------------------------------------------------ |
| **Exchanges** | Real-time fee estimates, TxQ capacity, server health scores |
| **Payment Processors** | Settlement latency percentiles, corridor health, path availability |
| **Analytics Providers** | Validator metrics, network topology, amendment voting status |
| **DeFi / AMM** | AMM pool TVL, DEX order book depth, trade volumes |
| **Validators / Operators** | Per-peer latency, version distribution, UNL health, alerting |
| **Compliance** | Transaction volume trends, network growth metrics |
| **Academic Researchers** | Consensus performance time-series, decentralization metrics |
| **CBDC / Tokenization** | Token supply tracking, trust line adoption, freeze status |
| **Institutional Custody** | Multi-sig status, escrow tracking, reserve calculations |
| **Wallet Providers** | Server health for node selection, fee prediction data |
---
## Task 11.1: OTel Collector Receiver Scaffold
**Objective**: Create the Go project structure for a custom OTel Collector receiver that polls rippled JSON-RPC.
**What to do**:
- Create `docker/telemetry/otel-rippled-receiver/`:
- `receiver.go` — implements `receiver.Metrics` interface
- `config.go` — configuration struct (endpoint, poll interval, enabled RPCs)
- `factory.go` — receiver factory registration
- `go.mod` / `go.sum` — Go module with OTel Collector SDK dependency
- Configuration model:
```yaml
rippled_receiver:
endpoint: "http://localhost:5005" # rippled admin RPC
poll_interval: 30s # how often to poll
enabled_collectors:
- server_info
- get_counts
- fee
- peers
- validators
- feature
- server_state
amm_pools: [] # optional: AMM pool IDs to track
book_offers_pairs: [] # optional: currency pairs for DEX depth
```
- Build a custom OTel Collector binary that includes this receiver alongside the standard receivers.
**Key files**:
- New: `docker/telemetry/otel-rippled-receiver/receiver.go`
- New: `docker/telemetry/otel-rippled-receiver/config.go`
- New: `docker/telemetry/otel-rippled-receiver/factory.go`
- New: `docker/telemetry/otel-rippled-receiver/go.mod`
- New: `docker/telemetry/otel-rippled-receiver/Dockerfile`
---
## Task 11.2: server_info / server_state Collector
**Objective**: Poll `server_info` and `server_state` and export all fields as OTel metrics.
**What to do**:
- Implement `serverInfoCollector` that calls `server_info` (admin) and extracts:
**Node Health Gauges:**
- `xrpl_server_state` (enum → int: disconnected=0, connected=1, syncing=2, tracking=3, full=4, proposing=5)
- `xrpl_server_state_duration_seconds`
- `xrpl_uptime_seconds`
- `xrpl_io_latency_ms`
- `xrpl_amendment_blocked` (0 or 1)
- `xrpl_peers_count`
- `xrpl_peer_disconnects_total`
- `xrpl_peer_disconnects_resources_total`
- `xrpl_jq_trans_overflow_total`
**Consensus Gauges:**
- `xrpl_last_close_proposers`
- `xrpl_last_close_converge_time_seconds`
- `xrpl_validation_quorum`
**Ledger Gauges:**
- `xrpl_validated_ledger_seq`
- `xrpl_validated_ledger_age_seconds`
- `xrpl_validated_ledger_base_fee_drops`
- `xrpl_validated_ledger_reserve_base_drops`
- `xrpl_validated_ledger_reserve_inc_drops`
- `xrpl_close_time_offset_seconds` (0 when absent)
**Load Factor Gauges:**
- `xrpl_load_factor`
- `xrpl_load_factor_server`
- `xrpl_load_factor_fee_escalation`
- `xrpl_load_factor_fee_queue`
- `xrpl_load_factor_local`
- `xrpl_load_factor_net`
- `xrpl_load_factor_cluster`
**State Accounting Gauges** (per state: disconnected, connected, syncing, tracking, full):
- `xrpl_state_duration_seconds{state="<name>"}`
- `xrpl_state_transitions_total{state="<name>"}`
**Validator Info** (when node is a validator):
- `xrpl_validator_list_count`
- `xrpl_validator_list_expiration_seconds` (epoch)
- `xrpl_validator_list_active` (0 or 1)
**Key files**:
- New: `docker/telemetry/otel-rippled-receiver/collectors/server_info.go`
---
## Task 11.3: get_counts Collector
**Objective**: Poll `get_counts` and export internal object counts and NodeStore stats.
**What to do**:
- Implement `getCountsCollector`:
**Database Gauges:**
- `xrpl_db_size_kb{db="total"}`, `xrpl_db_size_kb{db="ledger"}`, `xrpl_db_size_kb{db="transaction"}`
**NodeStore Gauges:**
- `xrpl_nodestore_reads_total`, `xrpl_nodestore_reads_hit`, `xrpl_nodestore_writes_total`
- `xrpl_nodestore_read_bytes`, `xrpl_nodestore_written_bytes`
- `xrpl_nodestore_read_duration_us`, `xrpl_nodestore_write_load`
- `xrpl_nodestore_read_queue`, `xrpl_nodestore_read_threads_running`
**Cache Gauges:**
- `xrpl_cache_hit_rate{cache="SLE"}`, `xrpl_cache_hit_rate{cache="ledger"}`, `xrpl_cache_hit_rate{cache="accepted_ledger"}`
- `xrpl_cache_size{cache="treenode"}`, `xrpl_cache_size{cache="fullbelow"}`, `xrpl_cache_size{cache="accepted_ledger"}`
**Object Count Gauges:**
- `xrpl_object_count{type="<name>"}` for each counted object type (Transaction, Ledger, NodeObject, STTx, STLedgerEntry, InboundLedger, Pathfinder, etc.)
**Rates:**
- `xrpl_historical_fetch_per_minute`
- `xrpl_local_txs`
**Key files**:
- New: `docker/telemetry/otel-rippled-receiver/collectors/get_counts.go`
---
## Task 11.4: Peer Topology Collector
**Objective**: Poll `peers` and export per-peer and aggregate network metrics.
**What to do**:
- Implement `peersCollector`:
**Aggregate Gauges:**
- `xrpl_peers_inbound_count`
- `xrpl_peers_outbound_count`
- `xrpl_peers_cluster_count`
**Per-Peer Gauges** (the `peer` label is the peer's public key truncated to 8 characters for cardinality control):
- `xrpl_peer_latency_ms{peer="<key>", version="<ver>", inbound="<bool>"}`
- `xrpl_peer_uptime_seconds{peer="<key>"}`
- `xrpl_peer_load{peer="<key>"}`
**Distribution Gauges** (aggregated across all peers):
- `xrpl_peer_latency_p50_ms`, `xrpl_peer_latency_p95_ms`, `xrpl_peer_latency_p99_ms`
- `xrpl_peer_version_count{version="<semver>"}` — count of peers per software version
**Tracking Status:**
- `xrpl_peer_diverged_count` — peers with `track=diverged`
- `xrpl_peer_unknown_count` — peers with `track=unknown`
**Key files**:
- New: `docker/telemetry/otel-rippled-receiver/collectors/peers.go`
**Cardinality note**: Per-peer metrics use truncated keys. For large peer sets (50+), the aggregate distribution gauges are preferred over per-peer labels.
---
## Task 11.5: Validator & Amendment Collector
**Objective**: Poll `validators` and `feature` to export validator health and amendment voting status.
**What to do**:
- Implement `validatorCollector`:
**From `validators` RPC:**
- `xrpl_trusted_validators_count`
- `xrpl_validator_signing` (0 or 1 — whether local validator is signing)
**From `feature` RPC:**
- `xrpl_amendment_enabled_count` — total enabled amendments
- `xrpl_amendment_majority_count` — amendments with majority but not yet enabled
- `xrpl_amendment_vetoed_count` — locally vetoed amendments
- `xrpl_amendment_unsupported_majority` (0 or 1) — any unsupported amendment has majority (critical alert)
**Per-amendment with majority** (limited cardinality — only amendments with `majority` set):
- `xrpl_amendment_majority_time{name="<amendment>"}` — epoch time when majority was gained
- `xrpl_amendment_votes{name="<amendment>"}` — current vote count
- `xrpl_amendment_threshold{name="<amendment>"}` — votes needed
**Key files**:
- New: `docker/telemetry/otel-rippled-receiver/collectors/validators.go`
---
## Task 11.6: Fee & TxQ Collector
**Objective**: Poll `fee` RPC and export real-time fee market data.
**What to do**:
- Implement `feeCollector` that calls the public `fee` RPC:
**Fee Level Gauges:**
- `xrpl_fee_current_ledger_size` — transactions in current open ledger
- `xrpl_fee_expected_ledger_size` — expected transactions at close
- `xrpl_fee_max_queue_size` — maximum transaction queue size
- `xrpl_fee_open_ledger_fee_drops` — minimum fee for open ledger inclusion
- `xrpl_fee_median_fee_drops` — median fee level
- `xrpl_fee_minimum_fee_drops` — base reference fee
- `xrpl_fee_queue_size` — current queue depth
- This overlaps with Phase 9's internal TxQ metrics but provides an external-only collection path that doesn't require rippled code changes.
**Key files**:
- New: `docker/telemetry/otel-rippled-receiver/collectors/fee.go`
---
## Task 11.7: DEX & AMM Collector (Optional)
**Objective**: Periodically poll configured AMM pools and order book pairs for DeFi metrics.
**What to do**:
- Implement `dexCollector` (enabled only when `amm_pools` or `book_offers_pairs` are configured):
**AMM Pool Gauges** (per configured pool):
- `xrpl_amm_reserve{pool="<id>", asset="<currency>"}` — pool reserve amount
- `xrpl_amm_lp_token_supply{pool="<id>"}` — outstanding LP tokens
- `xrpl_amm_trading_fee{pool="<id>"}` — pool trading fee (basis points)
- `xrpl_amm_tvl_drops{pool="<id>"}` — total value locked (XRP-denominated)
**Order Book Gauges** (per configured pair):
- `xrpl_orderbook_bid_depth{pair="<base>/<quote>"}` — total bid volume
- `xrpl_orderbook_ask_depth{pair="<base>/<quote>"}` — total ask volume
- `xrpl_orderbook_spread{pair="<base>/<quote>"}` — best bid-ask spread
- `xrpl_orderbook_offer_count{pair="<base>/<quote>", side="bid|ask"}` — number of offers
**Key files**:
- New: `docker/telemetry/otel-rippled-receiver/collectors/dex.go`
**Note**: This is optional because it requires explicit configuration of which pools/pairs to track. Default configuration tracks no DEX data.
---
## Task 11.8: Prometheus Alerting Rules
**Objective**: Create production-ready alerting rules for the metrics exported by this receiver.
**What to do**:
- Create `docker/telemetry/prometheus/rippled-alerts.yml`:
**Tier 1 — Critical (page immediately):**
```yaml
- alert: XRPLServerNotFull
  expr: xrpl_server_state < 4
  for: 15m
- alert: XRPLAmendmentBlocked
  expr: xrpl_amendment_blocked == 1
  for: 1m
- alert: XRPLNoPeers
  expr: xrpl_peers_count == 0
  for: 5m
- alert: XRPLLedgerStale
  expr: xrpl_validated_ledger_age_seconds > 120
  for: 2m
- alert: XRPLHighIOLatency
  expr: xrpl_io_latency_ms > 100
  for: 5m
- alert: XRPLUnsupportedAmendmentMajority
  expr: xrpl_amendment_unsupported_majority == 1
  for: 1m
```
**Tier 2 — Warning (investigate within hours):**
```yaml
- alert: XRPLLowPeerCount
  expr: xrpl_peers_count < 10
  for: 15m
- alert: XRPLHighLoadFactor
  expr: xrpl_load_factor > 10
  for: 10m
- alert: XRPLSlowConsensus
  expr: xrpl_last_close_converge_time_seconds > 6
  for: 5m
- alert: XRPLValidatorListExpiring
  expr: (xrpl_validator_list_expiration_seconds - time()) < 86400
  for: 1h
- alert: XRPLClockDrift
  expr: xrpl_close_time_offset_seconds > 0
  for: 5m
- alert: XRPLStateFlapping
  expr: rate(xrpl_state_transitions_total{state="full"}[1h]) > 2
  for: 30m
```
**Key files**:
- New: `docker/telemetry/prometheus/rippled-alerts.yml`
- Update: `docker/telemetry/prometheus/prometheus.yml` (add rule_files reference)
---
## Task 11.9: New Grafana Dashboards
**Objective**: Create 4 new dashboards for the data exported by the receiver.
**What to do**:
- **Validator Health** (`rippled-validator-health`):
- Server state timeline, state duration breakdown
- Proposer count trend, converge time trend, validation quorum
- Validator list expiration countdown
- Amendment voting status (majority/enabled/vetoed)
- **Network Topology** (`rippled-network-topology`):
- Peer count (inbound/outbound/cluster), peer version distribution
- Peer latency distribution (p50/p95/p99), diverged peer count
- Geographic distribution (if enriched with GeoIP)
- Peer uptime distribution
- **Fee Market** (`rippled-fee-market-external`):
- Current fee levels (open ledger, median, minimum), fee escalation timeline
- Queue depth vs. capacity, transactions per ledger
- Load factor breakdown (server/network/cluster/escalation)
- **DEX & AMM Overview** (`rippled-dex-amm`) (only populated when DEX collectors are configured):
- AMM pool TVL, reserve ratios, LP token supply
- Order book depth per pair, spread trends
- Trading fee revenue estimates
**Key files**:
- New: `docker/telemetry/grafana/dashboards/rippled-validator-health.json`
- New: `docker/telemetry/grafana/dashboards/rippled-network-topology.json`
- New: `docker/telemetry/grafana/dashboards/rippled-fee-market-external.json`
- New: `docker/telemetry/grafana/dashboards/rippled-dex-amm.json`
---
## Task 11.10: Integration with Phase 10 Validation
**Objective**: Extend the Phase 10 validation suite to verify this receiver's metrics.
**What to do**:
- Update `docker/telemetry/workload/validate_telemetry.py`:
- Add assertions for all `xrpl_*` metrics produced by the receiver
- Verify metric labels have expected values
- Verify alerting rules fire correctly (inject a "bad" state and check alert)
- Update `docker/telemetry/docker-compose.workload.yaml`:
- Add the custom OTel Collector build with the rippled receiver
- Configure the receiver to poll one of the test nodes
**Key files**:
- Update: `docker/telemetry/workload/validate_telemetry.py`
- Update: `docker/telemetry/docker-compose.workload.yaml`
- Update: `docker/telemetry/workload/expected_metrics.json`
---
## Task 11.11: Documentation
**Objective**: Document the receiver, its metrics, deployment, and alerting.
**What to do**:
- Create `docker/telemetry/otel-rippled-receiver/README.md`:
- Architecture overview (how the receiver fits into the OTel Collector)
- Configuration reference (all config options with defaults)
- Metric reference table (all exported metrics with types and labels)
- Deployment guide (building custom collector binary, docker-compose integration)
- Update `OpenTelemetryPlan/09-data-collection-reference.md`:
- Add "Third-Party Metrics (OTel Collector Receiver)" section
- Add new Grafana dashboard reference (4 dashboards)
- Add alerting rules reference
- Update `docs/telemetry-runbook.md`:
- Add "Third-Party Metrics Receiver" troubleshooting section
- Add alerting playbook (what to do for each Tier 1/Tier 2 alert)
---
## Effort Summary
| Task | Description | Effort | Risk |
| ----- | ------------------------------------ | ------ | ------ |
| 11.1 | OTel Collector receiver scaffold | 1.5d | Medium |
| 11.2 | server_info / server_state collector | 2d | Low |
| 11.3 | get_counts collector | 1.5d | Low |
| 11.4 | Peer topology collector | 1.5d | Medium |
| 11.5 | Validator & amendment collector | 1d | Low |
| 11.6 | Fee & TxQ collector | 0.5d | Low |
| 11.7 | DEX & AMM collector (optional) | 1.5d | Medium |
| 11.8 | Prometheus alerting rules | 1d | Low |
| 11.9 | New Grafana dashboards (4) | 2d | Low |
| 11.10 | Integration with Phase 10 validation | 1d | Low |
| 11.11 | Documentation | 1d | Low |
**Total Effort**: 14.5 days
## Exit Criteria
- [ ] Custom OTel Collector receiver builds and starts without errors
- [ ] All `xrpl_*` metrics from server_info, get_counts, peers, validators, fee appear in Prometheus
- [ ] Metrics update at configured poll interval (default 30s)
- [ ] 4 new Grafana dashboards operational with data
- [ ] Prometheus alerting rules fire correctly for simulated failure conditions
- [ ] DEX/AMM collector works when configured (optional — not required for base exit criteria)
- [ ] Phase 10 validation suite passes with receiver metrics included
- [ ] Receiver handles rippled restart/unavailability gracefully (no crash, logs warning, retries)
- [ ] Documentation complete: receiver README, metric reference, alerting playbook
- [ ] Go receiver has unit tests with >80% coverage

View File

@@ -0,0 +1,220 @@
# Phase 2: RPC Tracing Completion Task List
> **Goal**: Complete full RPC tracing coverage with W3C Trace Context propagation, unit tests, and performance validation. Build on the POC foundation to achieve production-quality RPC observability.
>
> **Scope**: W3C header extraction, TraceContext propagation utilities, unit tests for core telemetry, integration tests for RPC tracing, and performance benchmarks.
>
> **Branch**: `pratik/otel-phase2-rpc-tracing` (from `pratik/OpenTelemetry_and_DistributedTracing_planning`)
### Related Plan Documents
| Document | Relevance |
| ------------------------------------------------------------ | ------------------------------------------------------------- |
| [04-code-samples.md](./04-code-samples.md) | TraceContextPropagator (§4.4.2), RPC instrumentation (§4.5.3) |
| [02-design-decisions.md](./02-design-decisions.md) | W3C Trace Context (§2.5), span attributes (§2.4.2) |
| [06-implementation-phases.md](./06-implementation-phases.md) | Phase 2 tasks (§6.3), definition of done (§6.11.2) |
---
## Task 2.1: Implement W3C Trace Context HTTP Header Extraction
**Objective**: Extract `traceparent` and `tracestate` headers from incoming HTTP RPC requests so external callers can propagate their trace context into rippled.
**What to do**:
- Create `include/xrpl/telemetry/TraceContextPropagator.h`:
- `extractFromHeaders(headerGetter)` - extract W3C traceparent/tracestate from HTTP headers
- `injectToHeaders(ctx, headerSetter)` - inject trace context into response headers
- Use OTel's `TextMapPropagator` with `W3CTraceContextPropagator` for standards compliance
- Only compiled when `XRPL_ENABLE_TELEMETRY` is defined
- Create `src/libxrpl/telemetry/TraceContextPropagator.cpp`:
- Implement a simple `TextMapCarrier` adapter for HTTP headers
- Use `opentelemetry::context::propagation::GlobalTextMapPropagator` for extraction/injection
- Register the W3C propagator in `TelemetryImpl::start()`
- Modify `src/xrpld/rpc/detail/ServerHandler.cpp`:
- In the HTTP request handler, extract parent context from headers before creating span
- Pass extracted context to `startSpan()` as parent
- Inject trace context into response headers
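A minimal sketch of the extraction path follows (the header-getter signature and the `xrpl::telemetry` namespace are assumptions; only the `TextMapCarrier` adapter and the global-propagator lookup come from the OTel C++ API):
```cpp
// Sketch only; in the real header this is guarded by XRPL_ENABLE_TELEMETRY.
#include <opentelemetry/context/propagation/global_propagator.h>
#include <opentelemetry/context/propagation/text_map_propagator.h>
#include <opentelemetry/context/runtime_context.h>
#include <opentelemetry/nostd/string_view.h>

#include <functional>
#include <string>

namespace xrpl::telemetry {

// Adapts a header-lookup callback to OTel's TextMapCarrier interface.
class HttpHeaderCarrier final
    : public opentelemetry::context::propagation::TextMapCarrier
{
public:
    explicit HttpHeaderCarrier(
        std::function<std::string(std::string const&)> getter)
        : getter_(std::move(getter))
    {
    }

    opentelemetry::nostd::string_view
    Get(opentelemetry::nostd::string_view key) const noexcept override
    {
        cached_ = getter_(std::string{key});
        return cached_;
    }

    void
    Set(opentelemetry::nostd::string_view,
        opentelemetry::nostd::string_view) noexcept override
    {
        // Injection into response headers uses a separate setter-backed carrier.
    }

private:
    std::function<std::string(std::string const&)> getter_;
    mutable std::string cached_;
};

// Returns the current context augmented with any remote parent found in the
// traceparent / tracestate headers.
inline opentelemetry::context::Context
extractFromHeaders(std::function<std::string(std::string const&)> getter)
{
    HttpHeaderCarrier carrier{std::move(getter)};
    auto current = opentelemetry::context::RuntimeContext::GetCurrent();
    return opentelemetry::context::propagation::GlobalTextMapPropagator::
        GetGlobalPropagator()
            ->Extract(carrier, current);
}

}  // namespace xrpl::telemetry
```
ServerHandler would then pass the returned context as the parent when calling `startSpan()`, and use the matching inject path to write `traceparent` back onto the response.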
**Key new files**:
- `include/xrpl/telemetry/TraceContextPropagator.h`
- `src/libxrpl/telemetry/TraceContextPropagator.cpp`
**Key modified files**:
- `src/xrpld/rpc/detail/ServerHandler.cpp`
- `src/libxrpl/telemetry/Telemetry.cpp` (register W3C propagator)
**Reference**:
- [04-code-samples.md §4.4.2](./04-code-samples.md) — TraceContextPropagator with extractFromHeaders/injectToHeaders
- [02-design-decisions.md §2.5](./02-design-decisions.md) — W3C Trace Context propagation design
---
## Task 2.2: Add XRPL_TRACE_PEER Macro
**Objective**: Add the missing peer-tracing macro for future Phase 3 use and ensure macro completeness.
**What to do**:
- Edit `src/xrpld/telemetry/TracingInstrumentation.h`:
- Add `XRPL_TRACE_PEER(_tel_obj_, _span_name_)` macro that checks `shouldTracePeer()`
- Add `XRPL_TRACE_LEDGER(_tel_obj_, _span_name_)` macro (for future ledger tracing)
- Ensure disabled variants expand to `((void)0)`
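A sketch of the two additions, assuming they mirror the existing conditional macros (the `SpanGuard` type and `shouldTrace*()` accessors come from the Phase 1 abstractions):
```cpp
// Illustrative expansion only; keep it consistent with the existing
// XRPL_TRACE_RPC / XRPL_TRACE_TX macros in this header.
#ifdef XRPL_ENABLE_TELEMETRY

#define XRPL_TRACE_PEER(_tel_obj_, _span_name_)                 \
    std::optional<SpanGuard> _xrpl_guard_;                      \
    if ((_tel_obj_).shouldTracePeer())                          \
        _xrpl_guard_.emplace((_tel_obj_).startSpan(_span_name_))

#define XRPL_TRACE_LEDGER(_tel_obj_, _span_name_)               \
    std::optional<SpanGuard> _xrpl_guard_;                      \
    if ((_tel_obj_).shouldTraceLedger())                        \
        _xrpl_guard_.emplace((_tel_obj_).startSpan(_span_name_))

#else

#define XRPL_TRACE_PEER(_tel_obj_, _span_name_) ((void)0)
#define XRPL_TRACE_LEDGER(_tel_obj_, _span_name_) ((void)0)

#endif  // XRPL_ENABLE_TELEMETRY
```
Note that `XRPL_TRACE_LEDGER` assumes the `shouldTraceLedger()` accessor added in Task 2.3, and that wrapping the guard in `std::optional` keeps the new macros compatible with `XRPL_TRACE_SET_ATTR` (see Known Issues below).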
**Key modified file**:
- `src/xrpld/telemetry/TracingInstrumentation.h`
---
## Task 2.3: Add shouldTraceLedger() to Telemetry Interface
**Objective**: The `Setup` struct has a `traceLedger` field but there's no corresponding virtual method. Add it for interface completeness.
**What to do**:
- Edit `include/xrpl/telemetry/Telemetry.h`:
- Add `virtual bool shouldTraceLedger() const = 0;`
- Update all implementations:
- `src/libxrpl/telemetry/Telemetry.cpp` (TelemetryImpl, NullTelemetryOtel)
- `src/libxrpl/telemetry/NullTelemetry.cpp` (NullTelemetry)
**Key modified files**:
- `include/xrpl/telemetry/Telemetry.h`
- `src/libxrpl/telemetry/Telemetry.cpp`
- `src/libxrpl/telemetry/NullTelemetry.cpp`
---
## Task 2.4: Unit Tests for Core Telemetry Infrastructure
**Objective**: Add unit tests for the core telemetry abstractions to validate correctness and catch regressions.
**What to do**:
- Create `src/test/telemetry/Telemetry_test.cpp`:
- Test NullTelemetry: verify all methods return expected no-op values
- Test Setup defaults: verify all Setup fields have correct defaults
- Test setup_Telemetry config parser: verify parsing of [telemetry] section
- Test enabled/disabled factory paths
- Test shouldTrace\* methods respect config flags
- Create `src/test/telemetry/SpanGuard_test.cpp`:
- Test SpanGuard RAII lifecycle (span ends on destruction)
- Test move constructor works correctly
- Test setAttribute, setOk, setStatus, addEvent, recordException
- Test context() returns valid context
- Add test files to CMake build
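A skeleton of the suite shape, following rippled's `beast::unit_test` conventions (the include path, the `Setup` field name, and the `makeNullTelemetry()` factory used here are placeholders for whatever the Phase 1 API actually exposes):
```cpp
// Skeleton only; replace the placeholder field names and factory with the
// real telemetry API before wiring this into the CMake test target.
#include <xrpl/beast/unit_test.h>
#include <xrpl/telemetry/Telemetry.h>

namespace ripple {
namespace test {

class Telemetry_test : public beast::unit_test::suite
{
    void
    testSetupDefaults()
    {
        testcase("Setup defaults");
        telemetry::Setup const setup;
        BEAST_EXPECT(!setup.enabled);  // telemetry must default to off
    }

    void
    testNullTelemetryIsNoOp()
    {
        testcase("NullTelemetry is a no-op");
        auto telemetry = telemetry::makeNullTelemetry();  // hypothetical factory
        BEAST_EXPECT(!telemetry->shouldTraceRpc());
        BEAST_EXPECT(!telemetry->shouldTraceTx());
        BEAST_EXPECT(!telemetry->shouldTraceLedger());
    }

public:
    void
    run() override
    {
        testSetupDefaults();
        testNullTelemetryIsNoOp();
    }
};

BEAST_DEFINE_TESTSUITE(Telemetry, telemetry, ripple);

}  // namespace test
}  // namespace ripple
```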
**Key new files**:
- `src/test/telemetry/Telemetry_test.cpp`
- `src/test/telemetry/SpanGuard_test.cpp`
**Reference**:
- [06-implementation-phases.md §6.11.1](./06-implementation-phases.md) — Phase 1 exit criteria (unit tests passing)
---
## Task 2.5: Enhance RPC Span Attributes
**Objective**: Add additional attributes to RPC spans per the semantic conventions defined in the plan.
**What to do**:
- Edit `src/xrpld/rpc/detail/ServerHandler.cpp`:
- Add `http.method` attribute for HTTP requests
- Add `http.status_code` attribute for responses
- Add `net.peer.ip` attribute for client IP (if available)
- Edit `src/xrpld/rpc/detail/RPCHandler.cpp`:
- Add `xrpl.rpc.duration_ms` attribute on completion
- Add error message attribute on failure: `xrpl.rpc.error_message`
**Key modified files**:
- `src/xrpld/rpc/detail/ServerHandler.cpp`
- `src/xrpld/rpc/detail/RPCHandler.cpp`
**Reference**:
- [02-design-decisions.md §2.4.2](./02-design-decisions.md) — RPC attribute schema
---
## Task 2.6: Build Verification and Performance Baseline
**Objective**: Verify the build succeeds with and without telemetry, and establish a performance baseline.
**What to do**:
1. Build with `telemetry=ON` and verify no compilation errors
2. Build with `telemetry=OFF` and verify no regressions
3. Run existing unit tests to verify no breakage
4. Document any build issues in lessons.md
**Verification Checklist**:
- [ ] `conan install . --build=missing -o telemetry=True` succeeds
- [ ] `cmake --preset default -Dtelemetry=ON` configures correctly
- [ ] Build succeeds with telemetry ON
- [ ] Build succeeds with telemetry OFF
- [ ] Existing tests pass with telemetry ON
- [ ] Existing tests pass with telemetry OFF
---
## Summary
| Task | Description | New Files | Modified Files | Depends On |
| ---- | ------------------------------------------- | --------- | -------------- | ---------- |
| 2.1 | W3C Trace Context header extraction | 2 | 2 | POC |
| 2.2 | Add XRPL_TRACE_PEER/LEDGER macros | 0 | 1 | POC |
| 2.3 | Add shouldTraceLedger() interface method | 0 | 3 | POC |
| 2.4 | Unit tests for core telemetry | 2 | 1 | POC |
| 2.5 | Enhanced RPC span attributes | 0 | 2 | POC |
| 2.6 | Build verification and performance baseline | 0 | 0 | 2.1-2.5 |
**Parallel work**: Tasks 2.1, 2.2, 2.3 can run in parallel. Task 2.4 depends on 2.3. Task 2.5 can run in parallel with 2.4. Task 2.6 depends on all others.
---
## Known Issues / Future Work
### Thread safety of TelemetryImpl::stop() vs startSpan()
`TelemetryImpl::stop()` resets `sdkProvider_` (a `std::shared_ptr`) without
synchronization. `getTracer()` reads the same member from RPC handler threads.
This is a data race if any thread calls `startSpan()` concurrently with `stop()`.
**Current mitigation**: `Application::stop()` shuts down `serverHandler_`,
`overlay_`, and `jobQueue_` before calling `telemetry_->stop()`, so no callers
remain. See comments in `Telemetry.cpp:stop()` and `Application.cpp`.
**TODO**: Add an `std::atomic<bool> stopped_` flag checked in `getTracer()` to
make this robust against future shutdown order changes.
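A minimal sketch of that TODO (the `noopTracer_` member is an assumption; the real fix should reuse whatever no-op fallback `getTracer()` already has):
```cpp
// Sketch: refuse to hand out SDK tracers once shutdown has begun.
// In TelemetryImpl:
std::atomic<bool> stopped_{false};

// In getTracer():
if (stopped_.load(std::memory_order_acquire))
    return noopTracer_;  // pre-built no-op tracer, assumed member
return sdkProvider_->GetTracer("rippled");

// In stop(), before resetting sdkProvider_:
stopped_.store(true, std::memory_order_release);
```
This narrows the window rather than eliminating it entirely, so the shutdown-ordering guarantee in `Application::stop()` should stay documented alongside it.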
### Macro incompatibility: XRPL_TRACE_SPAN vs XRPL_TRACE_SET_ATTR
`XRPL_TRACE_SPAN` and `XRPL_TRACE_SPAN_KIND` declare `_xrpl_guard_` as a bare
`SpanGuard`, but `XRPL_TRACE_SET_ATTR` and `XRPL_TRACE_EXCEPTION` call
`_xrpl_guard_.has_value()` which requires `std::optional<SpanGuard>`. Using
`XRPL_TRACE_SPAN` followed by `XRPL_TRACE_SET_ATTR` in the same scope would
fail to compile.
**Current mitigation**: No call site currently uses `XRPL_TRACE_SPAN` — all
production code uses the conditional macros (`XRPL_TRACE_RPC`, `XRPL_TRACE_TX`,
etc.) which correctly wrap the guard in `std::optional`.
**TODO**: Either make `XRPL_TRACE_SPAN`/`XRPL_TRACE_SPAN_KIND` also wrap in
`std::optional`, or document that `XRPL_TRACE_SET_ATTR` is only compatible with
the conditional macros.

View File

@@ -0,0 +1,263 @@
# Phase 3: Transaction Tracing Task List
> **Goal**: Trace the full transaction lifecycle from RPC submission through peer relay, including cross-node context propagation via Protocol Buffer extensions. This is the WALK phase that demonstrates true distributed tracing.
>
> **Scope**: Protocol Buffer `TraceContext` message, context serialization, PeerImp transaction instrumentation, NetworkOPs processing instrumentation, HashRouter visibility, and multi-node relay context propagation.
>
> **Branch**: `pratik/otel-phase3-tx-tracing` (from `pratik/otel-phase2-rpc-tracing`)
### Related Plan Documents
| Document | Relevance |
| ------------------------------------------------------------ | ------------------------------------------------------------------------------------------------ |
| [04-code-samples.md](./04-code-samples.md) | TraceContext protobuf (§4.4.1), PeerImp instrumentation (§4.5.1), context serialization (§4.4.2) |
| [01-architecture-analysis.md](./01-architecture-analysis.md) | Transaction flow (§1.3), key trace points (§1.6) |
| [06-implementation-phases.md](./06-implementation-phases.md) | Phase 3 tasks (§6.4), definition of done (§6.11.3) |
| [02-design-decisions.md](./02-design-decisions.md) | Context propagation design (§2.5), attribute schema (§2.4.3) |
---
## Task 3.1: Define TraceContext Protocol Buffer Message
**Objective**: Add trace context fields to the P2P protocol messages so trace IDs can propagate across nodes.
**What to do**:
- Edit `include/xrpl/proto/xrpl.proto` (or `src/ripple/proto/ripple.proto`, wherever the proto is):
- Add `TraceContext` message definition:
```protobuf
message TraceContext {
    bytes trace_id = 1;      // 16-byte trace identifier
    bytes span_id = 2;       // 8-byte span identifier
    uint32 trace_flags = 3;  // bit 0 = sampled
    string trace_state = 4;  // W3C tracestate value
}
```
- Add `optional TraceContext trace_context = 1001;` to:
- `TMTransaction`
- `TMProposeSet` (for Phase 4 use)
- `TMValidation` (for Phase 4 use)
- Use high field numbers (1001+) to avoid conflicts with existing fields
- Regenerate protobuf C++ code
**Key modified files**:
- `include/xrpl/proto/xrpl.proto` (or equivalent)
**Reference**:
- [04-code-samples.md §4.4.1](./04-code-samples.md) — TraceContext message definition
- [02-design-decisions.md §2.5.2](./02-design-decisions.md) — Protocol buffer context propagation design
---
## Task 3.2: Implement Protobuf Context Serialization
**Objective**: Create utilities to serialize/deserialize OTel trace context to/from protobuf `TraceContext` messages.
**What to do**:
- Create `include/xrpl/telemetry/TraceContextPropagator.h` (extend from Phase 2 if exists, or add protobuf methods):
- Add protobuf-specific methods:
- `static Context extractFromProtobuf(protocol::TraceContext const& proto)` — reconstruct OTel context from protobuf fields
- `static void injectToProtobuf(Context const& ctx, protocol::TraceContext& proto)` — serialize current span context into protobuf fields
- Both methods guard behind `#ifdef XRPL_ENABLE_TELEMETRY`
- Create/extend `src/libxrpl/telemetry/TraceContextPropagator.cpp`:
- Implement extraction: read trace_id (16 bytes), span_id (8 bytes), trace_flags from protobuf, construct `SpanContext`, wrap in `Context`
- Implement injection: get current span from context, serialize its TraceId, SpanId, and TraceFlags into protobuf fields
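A compact sketch of the round-trip against the Task 3.1 message (shown returning a `SpanContext` directly for brevity; the real utility wraps it in an OTel `Context` and sits behind `XRPL_ENABLE_TELEMETRY`):
```cpp
#include <opentelemetry/nostd/span.h>
#include <opentelemetry/trace/span_context.h>

namespace otel = opentelemetry;

// Rebuild a remote SpanContext from the protobuf fields; malformed payloads
// yield an invalid context so the caller falls back to a new root span.
otel::trace::SpanContext
extractFromProtobuf(protocol::TraceContext const& proto)
{
    using otel::trace::SpanId;
    using otel::trace::TraceId;

    if (proto.trace_id().size() != TraceId::kSize ||
        proto.span_id().size() != SpanId::kSize)
        return otel::trace::SpanContext::GetInvalid();

    TraceId const traceId{otel::nostd::span<uint8_t const, TraceId::kSize>{
        reinterpret_cast<uint8_t const*>(proto.trace_id().data()),
        TraceId::kSize}};
    SpanId const spanId{otel::nostd::span<uint8_t const, SpanId::kSize>{
        reinterpret_cast<uint8_t const*>(proto.span_id().data()),
        SpanId::kSize}};
    otel::trace::TraceFlags const flags{
        static_cast<uint8_t>(proto.trace_flags())};

    return otel::trace::SpanContext{traceId, spanId, flags, /*is_remote=*/true};
}

// Copy the raw trace-id / span-id bytes and the flags into the outgoing message.
void
injectToProtobuf(otel::trace::SpanContext const& ctx, protocol::TraceContext& proto)
{
    auto const tid = ctx.trace_id().Id();  // 16 raw bytes
    auto const sid = ctx.span_id().Id();   //  8 raw bytes
    proto.set_trace_id(reinterpret_cast<char const*>(tid.data()), tid.size());
    proto.set_span_id(reinterpret_cast<char const*>(sid.data()), sid.size());
    proto.set_trace_flags(ctx.trace_flags().flags());
}
```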
**Key new/modified files**:
- `include/xrpl/telemetry/TraceContextPropagator.h`
- `src/libxrpl/telemetry/TraceContextPropagator.cpp`
**Reference**:
- [04-code-samples.md §4.4.2](./04-code-samples.md) — Full extract/inject implementation
---
## Task 3.3: Instrument PeerImp Transaction Handling
**Objective**: Add trace spans to the peer-level transaction receive and relay path.
**What to do**:
- Edit `src/xrpld/overlay/detail/PeerImp.cpp`:
- In `onMessage(TMTransaction)` / `handleTransaction()`:
- Extract parent trace context from incoming `TMTransaction::trace_context` field (if present)
- Create `tx.receive` span as child of extracted context (or new root if none)
- Set attributes: `xrpl.tx.hash`, `xrpl.peer.id`, `xrpl.tx.status`
- On HashRouter suppression (duplicate): set `xrpl.tx.suppressed=true`, add `tx.duplicate` event
- Wrap validation call with child span `tx.validate`
- Wrap relay with `tx.relay` span
- When relaying to peers:
- Inject current trace context into outgoing `TMTransaction::trace_context`
- Set `xrpl.tx.relay_count` attribute
- Include `TracingInstrumentation.h` and use `XRPL_TRACE_TX` macro
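In outline (not a drop-in patch; the `startSpan` overload taking a parent context and the `SpanGuard` setters are assumptions based on the Phase 1/2 abstractions, and the surrounding PeerImp logic is elided):
```cpp
// Outline of the receive-side instrumentation in PeerImp::handleTransaction().
void
PeerImp::handleTransaction(std::shared_ptr<protocol::TMTransaction> const& m)
{
    // txID: the transaction hash computed by the existing handler code.
    // Parent comes from the relaying peer when present, otherwise a new root.
    auto const parent = m->has_trace_context()
        ? telemetry::extractFromProtobuf(m->trace_context())
        : opentelemetry::trace::SpanContext::GetInvalid();

    auto guard = app_.telemetry().startSpan("tx.receive", parent);
    guard.setAttribute("xrpl.tx.hash", to_string(txID));
    guard.setAttribute("xrpl.peer.id", std::to_string(id()));

    bool const suppressed = /* existing HashRouter::shouldProcess() result */ false;
    if (suppressed)
    {
        guard.setAttribute("xrpl.tx.suppressed", true);
        guard.addEvent("tx.duplicate");
        return;
    }

    // ... existing checkTransaction / relay path. Before relaying, inject the
    // active context into the outgoing TMTransaction::trace_context (Task 3.6)
    // so downstream peers continue the same trace.
}
```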
**Key modified files**:
- `src/xrpld/overlay/detail/PeerImp.cpp`
**Reference**:
- [04-code-samples.md §4.5.1](./04-code-samples.md) — Full PeerImp instrumentation example
- [01-architecture-analysis.md §1.3](./01-architecture-analysis.md) — Transaction flow diagram
- [01-architecture-analysis.md §1.6](./01-architecture-analysis.md) — tx.receive trace point
---
## Task 3.4: Instrument NetworkOPs Transaction Processing
**Objective**: Trace the transaction processing pipeline in NetworkOPs, covering both sync and async paths.
**What to do**:
- Edit `src/xrpld/app/misc/NetworkOPs.cpp`:
- In `processTransaction()`:
- Create `tx.process` span
- Set attributes: `xrpl.tx.hash`, `xrpl.tx.type`, `xrpl.tx.local` (whether from RPC or peer)
- Record whether sync or async path is taken
- In `doTransactionAsync()`:
- Capture parent context before queuing
- Create `tx.queue` span with queue depth attribute
- Add event when transaction is dequeued for processing
- In `doTransactionSync()`:
- Create `tx.process_sync` span
- Record result (applied, queued, rejected)
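For the async path, the essential step is capturing the active context before the job is queued so the dequeued work re-parents correctly; a sketch (the JobQueue lambda shape and the parent-taking `startSpan` overload are assumptions):
```cpp
// Sketch of context capture across the JobQueue boundary (not a drop-in patch).
void
NetworkOPsImp::doTransactionAsync(std::shared_ptr<Transaction> transaction)
{
    // Capture the caller's context now; the lambda may run on another thread.
    auto const parentCtx = opentelemetry::context::RuntimeContext::GetCurrent();

    app_.getJobQueue().addJob(
        jtTRANSACTION, "processTransaction", [this, transaction, parentCtx]() {
            // Re-attach to the captured context so tx.process_sync becomes a
            // child of the original tx.process / RPC span.
            auto guard =
                app_.telemetry().startSpan("tx.process_sync", parentCtx);
            guard.setAttribute("xrpl.tx.hash", to_string(transaction->getID()));
            // ... existing synchronous processing logic
        });
}
```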
**Key modified files**:
- `src/xrpld/app/misc/NetworkOPs.cpp`
**Reference**:
- [01-architecture-analysis.md §1.6](./01-architecture-analysis.md) — tx.validate and tx.process trace points
- [02-design-decisions.md §2.4.3](./02-design-decisions.md) — Transaction attribute schema
---
## Task 3.5: Instrument HashRouter for Dedup Visibility
**Objective**: Make transaction deduplication visible in traces by recording HashRouter decisions as span attributes/events.
**What to do**:
- Edit `src/xrpld/overlay/detail/PeerImp.cpp` (in handleTransaction):
- After calling `HashRouter::shouldProcess()` or `addSuppressionPeer()`:
- Record `xrpl.tx.suppressed` attribute (true/false)
- Record `xrpl.tx.flags` showing current HashRouter state (SAVED, TRUSTED, etc.)
- Add `tx.first_seen` or `tx.duplicate` event
- This is NOT a modification to HashRouter itself — just recording its decisions as span attributes in the existing PeerImp instrumentation from Task 3.3.
**Key modified files**:
- `src/xrpld/overlay/detail/PeerImp.cpp` (same changes as 3.3, logically grouped)
---
## Task 3.6: Context Propagation in Transaction Relay
**Objective**: Ensure trace context flows correctly when transactions are relayed between peers, creating linked spans across nodes.
**What to do**:
- Verify the relay path injects trace context:
- When `PeerImp` relays a transaction, the `TMTransaction` message should carry `trace_context`
- When a remote peer receives it, the context is extracted and used as parent
- Test context propagation:
- Manually verify with 2+ node setup that trace IDs match across nodes
- Confirm parent-child span relationships are correct in Jaeger
- Handle edge cases:
- Missing trace context (older peers): create new root span
- Corrupted trace context: log warning, create new root span
- Sampled-out traces: respect trace flags
**Key modified files**:
- `src/xrpld/overlay/detail/PeerImp.cpp`
- `src/xrpld/overlay/detail/OverlayImpl.cpp` (if relay method needs context param)
**Reference**:
- [02-design-decisions.md §2.5](./02-design-decisions.md) — Context propagation design
- [04-code-samples.md §4.5.1](./04-code-samples.md) — Relay context injection pattern
---
## Task 3.7: Build Verification and Testing
**Objective**: Verify all Phase 3 changes compile and work correctly.
**What to do**:
1. Build with `telemetry=ON` — verify no compilation errors
2. Build with `telemetry=OFF` — verify no regressions
3. Run existing unit tests
4. Verify protobuf regeneration produces correct C++ code
5. Document any issues encountered
**Verification Checklist**:
- [ ] Protobuf changes generate valid C++
- [ ] Build succeeds with telemetry ON
- [ ] Build succeeds with telemetry OFF
- [ ] Existing tests pass
- [ ] No undefined symbols from new telemetry calls
---
## Summary
| Task | Description | New Files | Modified Files | Depends On |
| ---- | ----------------------------------- | --------- | -------------- | ---------- |
| 3.1 | TraceContext protobuf message | 0 | 1 | Phase 2 |
| 3.2 | Protobuf context serialization | 1-2 | 0 | 3.1 |
| 3.3 | PeerImp transaction instrumentation | 0 | 1 | 3.2 |
| 3.4 | NetworkOPs transaction processing | 0 | 1 | Phase 2 |
| 3.5 | HashRouter dedup visibility | 0 | 1 | 3.3 |
| 3.6 | Relay context propagation | 0 | 1-2 | 3.3, 3.5 |
| 3.7 | Build verification and testing | 0 | 0 | 3.1-3.6 |
**Parallel work**: Tasks 3.1 and 3.4 can start in parallel. Task 3.2 depends on 3.1. Tasks 3.3 and 3.5 depend on 3.2. Task 3.6 depends on 3.3 and 3.5.
**Exit Criteria** (from [06-implementation-phases.md §6.11.3](./06-implementation-phases.md)):
- [ ] Transaction traces span across nodes
- [ ] Trace context in Protocol Buffer messages
- [ ] HashRouter deduplication visible in traces
- [ ] <5% overhead on transaction throughput
---
## Known Issues / Future Work
### Propagation utilities not yet wired into P2P flow
`extractFromProtobuf()` and `injectToProtobuf()` in `TraceContextPropagator.h`
are implemented and tested but not called from production code. To enable
cross-node distributed traces:
- Call `injectToProtobuf()` in `PeerImp` when sending `TMTransaction` /
`TMProposeSet` messages
- Call `extractFromProtobuf()` in the corresponding message handlers to
reconstruct the parent span context, then pass it to `startSpan()` as the
parent
This was deferred to validate single-node tracing performance first.
### Unused trace_state proto field
The `TraceContext.trace_state` field (field 4) in `xrpl.proto` is reserved for
W3C `tracestate` vendor-specific key-value pairs but is not read or written by
`TraceContextPropagator`. Wire it when cross-vendor trace propagation is needed.
There is no wire cost: proto `optional` fields add no bytes to the message when they are unset.

View File

@@ -0,0 +1,244 @@
# Phase 4: Consensus Tracing Task List
> **Goal**: Full observability into consensus rounds — track round lifecycle, phase transitions, proposal handling, and validation. This is the RUN phase that completes the distributed tracing story.
>
> **Scope**: RCLConsensus instrumentation for round starts, phase transitions (open/establish/accept), proposal send/receive, validation handling, and correlation with transaction traces from Phase 3.
>
> **Branch**: `pratik/otel-phase4-consensus-tracing` (from `pratik/otel-phase3-tx-tracing`)
### Related Plan Documents
| Document | Relevance |
| ------------------------------------------------------------ | ----------------------------------------------------------- |
| [04-code-samples.md](./04-code-samples.md) | Consensus instrumentation (§4.5.2), consensus span patterns |
| [01-architecture-analysis.md](./01-architecture-analysis.md) | Consensus round flow (§1.4), key trace points (§1.6) |
| [06-implementation-phases.md](./06-implementation-phases.md) | Phase 4 tasks (§6.5), definition of done (§6.11.4) |
| [02-design-decisions.md](./02-design-decisions.md) | Consensus attribute schema (§2.4.4) |
---
## Task 4.1: Instrument Consensus Round Start
**Objective**: Create a root span for each consensus round that captures the round's key parameters.
**What to do**:
- Edit `src/xrpld/app/consensus/RCLConsensus.cpp`:
- In `RCLConsensus::startRound()` (or the Adaptor's startRound):
- Create `consensus.round` span using `XRPL_TRACE_CONSENSUS` macro
- Set attributes:
- `xrpl.consensus.ledger.prev` — previous ledger hash
- `xrpl.consensus.ledger.seq` — target ledger sequence
- `xrpl.consensus.proposers` — number of trusted proposers
- `xrpl.consensus.mode` — "proposing" or "observing"
- Store the span context for use by child spans in phase transitions
- Add a member to hold current round trace context:
- `opentelemetry::context::Context currentRoundContext_` (guarded by `#ifdef`)
- Updated at round start, used by phase transition spans
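In outline (the helper name is hypothetical; the real code would live inside the existing `startRound()` path, and the `SpanGuard` / `context()` shapes are assumed from earlier phases):
```cpp
// Outline of round-start instrumentation (not a drop-in patch).
#ifdef XRPL_ENABLE_TELEMETRY
void
RCLConsensus::Adaptor::traceRoundStart(
    uint256 const& prevLedgerHash,
    std::uint32_t targetLedgerSeq,
    std::size_t trustedProposers,
    bool proposing)
{
    auto guard = app_.telemetry().startSpan("consensus.round");
    guard.setAttribute("xrpl.consensus.ledger.prev", to_string(prevLedgerHash));
    guard.setAttribute(
        "xrpl.consensus.ledger.seq", static_cast<int64_t>(targetLedgerSeq));
    guard.setAttribute(
        "xrpl.consensus.proposers", static_cast<int64_t>(trustedProposers));
    guard.setAttribute(
        "xrpl.consensus.mode", proposing ? "proposing" : "observing");

    // Remember the round's context so phase-transition spans (Task 4.2) can
    // attach as children for the lifetime of this round.
    currentRoundContext_ = guard.context();
}
#endif  // XRPL_ENABLE_TELEMETRY
```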
**Key modified files**:
- `src/xrpld/app/consensus/RCLConsensus.cpp`
- `src/xrpld/app/consensus/RCLConsensus.h` (add context member)
**Reference**:
- [04-code-samples.md §4.5.2](./04-code-samples.md) — startRound instrumentation example
- [01-architecture-analysis.md §1.4](./01-architecture-analysis.md) — Consensus round flow
---
## Task 4.2: Instrument Phase Transitions
**Objective**: Create child spans for each consensus phase (open, establish, accept) to show timing breakdown.
**What to do**:
- Edit `src/xrpld/app/consensus/RCLConsensus.cpp`:
- Identify where phase transitions occur (the `Consensus<Adaptor>` template drives this)
- For each phase entry:
- Create span as child of `currentRoundContext_`: `consensus.phase.open`, `consensus.phase.establish`, `consensus.phase.accept`
- Set `xrpl.consensus.phase` attribute
- Add `phase.enter` event at start, `phase.exit` event at end
- Record phase duration in milliseconds
- In the `onClose` adaptor method:
- Create `consensus.ledger_close` span
- Set attributes: close_time, mode, transaction count in initial position
- Note: The Consensus template class in `include/xrpl/consensus/Consensus.h` drives phase transitions — check if instrumentation goes there or in the Adaptor
**Key modified files**:
- `src/xrpld/app/consensus/RCLConsensus.cpp`
- Possibly `include/xrpl/consensus/Consensus.h` (for template-level phase tracking)
**Reference**:
- [04-code-samples.md §4.5.2](./04-code-samples.md) — phaseTransition instrumentation
---
## Task 4.3: Instrument Proposal Handling
**Objective**: Trace proposal send and receive to show validator coordination.
**What to do**:
- Edit `src/xrpld/app/consensus/RCLConsensus.cpp`:
- In `Adaptor::propose()`:
- Create `consensus.proposal.send` span
- Set attributes: `xrpl.consensus.round` (proposal sequence), proposal hash
- Inject trace context into outgoing `TMProposeSet::trace_context` (from Phase 3 protobuf)
- In `Adaptor::peerProposal()` (or wherever peer proposals are received):
- Extract trace context from incoming `TMProposeSet::trace_context`
- Create `consensus.proposal.receive` span as child of extracted context
- Set attributes: `xrpl.consensus.proposer` (node ID), `xrpl.consensus.round`
- In `Adaptor::share(RCLCxPeerPos)`:
- Create `consensus.proposal.relay` span for relaying peer proposals
**Key modified files**:
- `src/xrpld/app/consensus/RCLConsensus.cpp`
**Reference**:
- [04-code-samples.md §4.5.2](./04-code-samples.md) — peerProposal instrumentation
- [02-design-decisions.md §2.4.4](./02-design-decisions.md) — Consensus attribute schema
---
## Task 4.4: Instrument Validation Handling
**Objective**: Trace validation send and receive to show ledger validation flow.
**What to do**:
- Edit `src/xrpld/app/consensus/RCLConsensus.cpp` (or the validation handler):
- When sending our validation:
- Create `consensus.validation.send` span
- Set attributes: validated ledger hash, sequence, signing time
- When receiving a peer validation:
- Extract trace context from `TMValidation::trace_context` (if present)
- Create `consensus.validation.receive` span
- Set attributes: `xrpl.consensus.validator` (node ID), ledger hash
**Key modified files**:
- `src/xrpld/app/consensus/RCLConsensus.cpp`
- `src/xrpld/app/misc/NetworkOPs.cpp` (if validation handling is here)
---
## Task 4.5: Add Consensus-Specific Attributes
**Objective**: Enrich consensus spans with detailed attributes for debugging and analysis.
**What to do**:
- Review all consensus spans and ensure they include:
- `xrpl.consensus.ledger.seq` — target ledger sequence number
- `xrpl.consensus.round` — consensus round number
- `xrpl.consensus.mode` — proposing/observing/wrongLedger
- `xrpl.consensus.phase` — current phase name
- `xrpl.consensus.phase_duration_ms` — time spent in phase
- `xrpl.consensus.proposers` — number of trusted proposers
- `xrpl.consensus.tx_count` — transactions in proposed set
- `xrpl.consensus.disputes` — number of disputed transactions
- `xrpl.consensus.converge_percent` — convergence percentage
**Key modified files**:
- `src/xrpld/app/consensus/RCLConsensus.cpp`
---
## Task 4.6: Correlate Transaction and Consensus Traces
**Objective**: Link transaction traces from Phase 3 with consensus traces so you can follow a transaction from submission through consensus into the ledger.
**What to do**:
- In `onClose()` or `onAccept()`:
- When building the consensus position, link the round span to individual transaction spans using span links (if OTel SDK supports it) or events
- At minimum, record the transaction hashes included in the consensus set as span events: `tx.included` with `xrpl.tx.hash` attribute
- In `processTransactionSet()` (NetworkOPs):
- If the consensus round span context is available, create child spans for each transaction applied to the ledger
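The minimum-viable correlation is an event per included transaction on the round (or accept) span; a sketch, with the attribute-taking `addEvent` overload assumed:
```cpp
// Outline: tie each transaction in the agreed set back to the consensus trace.
for (uint256 const& txHash : includedTxHashes)  // hashes from the accepted set
    roundSpan.addEvent("tx.included", {{"xrpl.tx.hash", to_string(txHash)}});
```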
**Key modified files**:
- `src/xrpld/app/consensus/RCLConsensus.cpp`
- `src/xrpld/app/misc/NetworkOPs.cpp`
---
## Task 4.7: Build Verification and Testing
**Objective**: Verify all Phase 4 changes compile and don't affect consensus timing.
**What to do**:
1. Build with `telemetry=ON` — verify no compilation errors
2. Build with `telemetry=OFF` — verify no regressions (critical for consensus code)
3. Run existing consensus-related unit tests
4. Verify that all macros expand to no-ops when disabled
5. Check that no consensus-critical code paths are affected by instrumentation overhead
**Verification Checklist**:
- [ ] Build succeeds with telemetry ON
- [ ] Build succeeds with telemetry OFF
- [ ] Existing consensus tests pass
- [ ] No new includes in consensus headers when telemetry is OFF
- [ ] Phase timing instrumentation doesn't use blocking operations
---
## Summary
| Task | Description | New Files | Modified Files | Depends On |
| ---- | ------------------------------------- | --------- | -------------- | ------------- |
| 4.1 | Consensus round start instrumentation | 0 | 2 | Phase 3 |
| 4.2 | Phase transition instrumentation | 0 | 1-2 | 4.1 |
| 4.3 | Proposal handling instrumentation | 0 | 1 | 4.1 |
| 4.4 | Validation handling instrumentation | 0 | 1-2 | 4.1 |
| 4.5 | Consensus-specific attributes | 0 | 1 | 4.2, 4.3, 4.4 |
| 4.6 | Transaction-consensus correlation | 0 | 2 | 4.2, Phase 3 |
| 4.7 | Build verification and testing | 0 | 0 | 4.1-4.6 |
**Parallel work**: Tasks 4.2, 4.3, and 4.4 can run in parallel after 4.1 is complete. Task 4.5 depends on all three. Task 4.6 depends on 4.2 and Phase 3.
### Implemented Spans
| Span Name | Method | Key Attributes |
| --------------------------- | ---------------------------------- | ----------------------------------------------------------------------------------------------------------------------------- |
| `consensus.proposal.send` | `Adaptor::propose` | `xrpl.consensus.round` |
| `consensus.ledger_close` | `Adaptor::onClose` | `xrpl.consensus.ledger.seq`, `xrpl.consensus.mode` |
| `consensus.accept` | `Adaptor::onAccept` | `xrpl.consensus.proposers`, `xrpl.consensus.round_time_ms` |
| `consensus.accept.apply` | `Adaptor::doAccept` | `xrpl.consensus.close_time`, `close_time_correct`, `close_resolution_ms`, `state`, `proposing`, `round_time_ms`, `ledger.seq` |
| `consensus.validation.send` | `Adaptor::onAccept` (via validate) | `xrpl.consensus.proposing` |
#### Close Time Attributes (consensus.accept.apply)
The `consensus.accept.apply` span captures ledger close time agreement details
driven by `avCT_CONSENSUS_PCT` (75% validator agreement threshold):
- **`xrpl.consensus.close_time`** — Agreed-upon ledger close time (epoch seconds). When validators disagree (`consensusCloseTime == epoch`), this is synthetically set to `prevCloseTime + 1s`.
- **`xrpl.consensus.close_time_correct`** — `true` if validators reached agreement, `false` if they "agreed to disagree" (close time forced to prev+1s).
- **`xrpl.consensus.close_resolution_ms`** — Rounding granularity for close time (starts at 30s, decreases as ledger interval stabilizes).
- **`xrpl.consensus.state`** — `"finished"` (normal) or `"moved_on"` (consensus failed, adopted best available).
- **`xrpl.consensus.proposing`** — Whether this node was proposing.
- **`xrpl.consensus.round_time_ms`** — Total consensus round duration.
**Exit Criteria** (from [06-implementation-phases.md §6.11.4](./06-implementation-phases.md)):
- [ ] Complete consensus round traces
- [ ] Phase transitions visible
- [ ] Proposals and validations traced
- [ ] Close time agreement tracked (per `avCT_CONSENSUS_PCT`)
- [ ] No impact on consensus timing

View File

@@ -0,0 +1,221 @@
# Phase 5: Integration Test Task List
> **Goal**: End-to-end verification of the complete telemetry pipeline using a
> 6-node consensus network. Proves that RPC, transaction, and consensus spans
> flow through the observability stack (otel-collector, Jaeger, Prometheus,
> Grafana) under realistic conditions.
>
> **Scope**: Integration test script, manual testing plan, 6-node local network
> setup, Jaeger/Prometheus/Grafana verification.
>
> **Branch**: `pratik/otel-phase5-docs-deployment`
### Related Plan Documents
| Document | Relevance |
| ---------------------------------------------------------------- | ------------------------------------------ |
| [07-observability-backends.md](./07-observability-backends.md) | Jaeger, Grafana, Prometheus setup |
| [05-configuration-reference.md](./05-configuration-reference.md) | Collector config, Docker Compose |
| [06-implementation-phases.md](./06-implementation-phases.md) | Phase 5 tasks, definition of done |
| [Phase5_taskList.md](./Phase5_taskList.md) | Phase 5 main task list (5.6 = integration) |
---
## Task IT.1: Create Integration Test Script
**Objective**: Automated bash script that stands up a 6-node xrpld network
with telemetry, exercises all span categories, and verifies data in
Jaeger/Prometheus.
**What to do**:
- Create `docker/telemetry/integration-test.sh`:
- Prerequisites check (docker, xrpld binary, curl, jq)
- Start observability stack via `docker compose`
- Generate 6 validator key pairs via a temporary standalone xrpld instance
- Generate 6 node configs + shared `validators.txt`
- Start 6 xrpld nodes in consensus mode (`--start`, no `-a`)
- Wait for all nodes to reach `"proposing"` state (120s timeout)
**Key new file**: `docker/telemetry/integration-test.sh`
**Verification**:
- [ ] Script starts without errors
- [ ] All 6 nodes reach "proposing" state
- [ ] Observability stack is healthy (otel-collector, Jaeger, Prometheus, Grafana)
---
## Task IT.2: RPC Span Verification (Phase 2)
**Objective**: Verify RPC spans flow through the telemetry pipeline.
**What to do**:
- Send `server_info`, `server_state`, `ledger` RPCs to node1 (port 5005)
- Wait for batch export (5s)
- Query Jaeger API for:
- `rpc.request` spans (ServerHandler::onRequest)
- `rpc.process` spans (ServerHandler::processRequest)
- `rpc.command.server_info` spans (callMethod)
- `rpc.command.server_state` spans (callMethod)
- `rpc.command.ledger` spans (callMethod)
- Verify `xrpl.rpc.command` attribute present on `rpc.command.*` spans
**Verification**:
- [ ] Jaeger shows `rpc.request` traces
- [ ] Jaeger shows `rpc.process` traces
- [ ] Jaeger shows `rpc.command.*` traces with correct attributes
---
## Task IT.3: Transaction Span Verification (Phase 3)
**Objective**: Verify transaction spans flow through the telemetry pipeline.
**What to do**:
- Get genesis account sequence via `account_info` RPC
- Submit Payment transaction using genesis seed (`snoPBrXtMeMyMHUVTgbuqAfg1SUTb`)
- Wait for consensus inclusion (10s)
- Query Jaeger API for:
- `tx.process` spans (NetworkOPsImp::processTransaction) on submitting node
- `tx.receive` spans (PeerImp::handleTransaction) on peer nodes
- Verify `xrpl.tx.hash` attribute on `tx.process` spans
- Verify `xrpl.peer.id` attribute on `tx.receive` spans
**Verification**:
- [ ] Jaeger shows `tx.process` traces with `xrpl.tx.hash`
- [ ] Jaeger shows `tx.receive` traces with `xrpl.peer.id`
---
## Task IT.4: Consensus Span Verification (Phase 4)
**Objective**: Verify consensus spans flow through the telemetry pipeline.
**What to do**:
- Consensus runs automatically in 6-node network
- Query Jaeger API for:
- `consensus.proposal.send` (Adaptor::propose)
- `consensus.ledger_close` (Adaptor::onClose)
- `consensus.accept` (Adaptor::onAccept)
- `consensus.validation.send` (Adaptor::validate)
- Verify attributes:
- `xrpl.consensus.mode` on `consensus.ledger_close`
- `xrpl.consensus.proposers` on `consensus.accept`
- `xrpl.consensus.ledger.seq` on `consensus.validation.send`
**Verification**:
- [ ] Jaeger shows `consensus.ledger_close` traces with `xrpl.consensus.mode`
- [ ] Jaeger shows `consensus.accept` traces with `xrpl.consensus.proposers`
- [ ] Jaeger shows `consensus.proposal.send` traces
- [ ] Jaeger shows `consensus.validation.send` traces
---
## Task IT.5: Spanmetrics Verification (Phase 5)
**Objective**: Verify spanmetrics connector derives RED metrics from spans.
**What to do**:
- Query Prometheus for `traces_span_metrics_calls_total`
- Query Prometheus for `traces_span_metrics_duration_milliseconds_count`
- Verify Grafana loads at `http://localhost:3000`
**Verification**:
- [ ] Prometheus returns non-empty results for `traces_span_metrics_calls_total`
- [ ] Prometheus returns non-empty results for duration histogram
- [ ] Grafana UI accessible with dashboards visible
---
## Task IT.6: Manual Testing Plan
**Objective**: Document how to run tests manually for future reference.
**What to do**:
- Create `docker/telemetry/TESTING.md` with:
- Prerequisites section
- Single-node standalone test (quick verification)
- 6-node consensus test (full verification)
- Expected span catalog (all 12 span names with attributes)
- Verification queries (Jaeger API, Prometheus API)
- Troubleshooting guide
**Key new file**: `docker/telemetry/TESTING.md`
**Verification**:
- [ ] Document covers both single-node and multi-node testing
- [ ] All 12 span names documented with source file and attributes
- [ ] Troubleshooting section covers common failure modes
---
## Task IT.7: Run and Verify
**Objective**: Execute the integration test and validate results.
**What to do**:
- Run `docker/telemetry/integration-test.sh` locally
- Debug any failures
- Leave stack running for manual verification
- Share URLs:
- Jaeger: `http://localhost:16686`
- Grafana: `http://localhost:3000`
- Prometheus: `http://localhost:9090`
**Verification**:
- [ ] Script completes with all checks passing
- [ ] Jaeger UI shows rippled service with all expected span names
- [ ] Grafana dashboards load and show data
---
## Task IT.8: Commit
**Objective**: Commit all new files to Phase 5 branch.
**What to do**:
- Run `pcc` (pre-commit checks)
- Commit 3 new files to `pratik/otel-phase5-docs-deployment`
**Verification**:
- [ ] `pcc` passes
- [ ] Commit created on Phase 5 branch
---
## Summary
| Task | Description | New Files | Depends On |
| ---- | ----------------------------- | --------- | ---------- |
| IT.1 | Integration test script | 1 | Phase 5 |
| IT.2 | RPC span verification | 0 | IT.1 |
| IT.3 | Transaction span verification | 0 | IT.1 |
| IT.4 | Consensus span verification | 0 | IT.1 |
| IT.5 | Spanmetrics verification | 0 | IT.1 |
| IT.6 | Manual testing plan | 1 | -- |
| IT.7 | Run and verify | 0 | IT.1-IT.6 |
| IT.8 | Commit | 0 | IT.7 |
**Exit Criteria**:
- [ ] All 6 xrpld nodes reach "proposing" state
- [ ] All 11 expected span names visible in Jaeger
- [ ] Spanmetrics available in Prometheus
- [ ] Grafana dashboards show data
- [ ] Manual testing plan document complete

View File

@@ -0,0 +1,241 @@
# Phase 5: Documentation & Deployment Task List
> **Goal**: Production readiness — Grafana dashboards, spanmetrics pipeline, operator runbook, alert definitions, and final integration testing. This phase ensures the telemetry system is useful and maintainable in production.
>
> **Scope**: Grafana dashboard definitions, OTel Collector spanmetrics connector, Prometheus integration, alert rules, operator documentation, and production-ready Docker Compose stack.
>
> **Branch**: `pratik/otel-phase5-docs-deployment` (from `pratik/otel-phase4-consensus-tracing`)
### Related Plan Documents
| Document | Relevance |
| ---------------------------------------------------------------- | -------------------------------------------------------------------------- |
| [07-observability-backends.md](./07-observability-backends.md) | Jaeger setup (§7.1), Grafana dashboards (§7.6), alerts (§7.6.3) |
| [05-configuration-reference.md](./05-configuration-reference.md) | Collector config (§5.5), production config (§5.5.2), Docker Compose (§5.6) |
| [06-implementation-phases.md](./06-implementation-phases.md) | Phase 5 tasks (§6.6), definition of done (§6.11.5) |
---
## Task 5.1: Add Spanmetrics Connector to OTel Collector
**Objective**: Derive RED metrics (Rate, Errors, Duration) from trace spans automatically, enabling Grafana time-series dashboards.
**What to do**:
- Edit `docker/telemetry/otel-collector-config.yaml`:
- Add `spanmetrics` connector:
```yaml
connectors:
  spanmetrics:
    histogram:
      explicit:
        buckets: [1ms, 5ms, 10ms, 25ms, 50ms, 100ms, 250ms, 500ms, 1s, 5s]
    dimensions:
      - name: xrpl.rpc.command
      - name: xrpl.rpc.status
      - name: xrpl.consensus.phase
      - name: xrpl.tx.type
```
- Add `prometheus` exporter:
```yaml
exporters:
  prometheus:
    endpoint: 0.0.0.0:8889
```
- Wire the pipeline:
```yaml
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug, otlp/jaeger, spanmetrics]
    metrics:
      receivers: [spanmetrics]
      exporters: [prometheus]
```
- Edit `docker/telemetry/docker-compose.yml`:
- Expose port `8889` on the collector for Prometheus scraping
- Add Prometheus service
- Add Prometheus as Grafana datasource
**Key modified files**:
- `docker/telemetry/otel-collector-config.yaml`
- `docker/telemetry/docker-compose.yml`
**Key new files**:
- `docker/telemetry/prometheus.yml` (Prometheus scrape config)
- `docker/telemetry/grafana/provisioning/datasources/prometheus.yaml`
**Reference**:
- [POC_taskList.md §Next Steps](./POC_taskList.md) — Metrics pipeline for Grafana dashboards
---
## Task 5.2: Create Grafana Dashboards
**Objective**: Provide pre-built Grafana dashboards for RPC performance, transaction lifecycle, and consensus health.
**What to do**:
- Create `docker/telemetry/grafana/provisioning/dashboards/dashboards.yaml` (provisioning config)
- Create dashboard JSON files:
1. **RPC Performance Dashboard** (`rpc-performance.json`):
- RPC request latency (p50/p95/p99) by command — histogram panel
- RPC throughput (requests/sec) by command — time series
- RPC error rate by command — bar gauge
- Top slowest RPC commands — table
2. **Transaction Overview Dashboard** (`transaction-overview.json`):
- Transaction processing rate — time series
- Transaction latency distribution — histogram
- Suppression rate (duplicates) — stat panel
- Transaction processing path (sync vs async) — pie chart
3. **Consensus Health Dashboard** (`consensus-health.json`):
- Consensus round duration — time series
- Phase duration breakdown (open/establish/accept) — stacked bar
- Proposals sent/received per round — stat panel
- Consensus mode distribution (proposing/observing) — pie chart
- Store dashboards in `docker/telemetry/grafana/dashboards/`
**Key new files**:
- `docker/telemetry/grafana/provisioning/dashboards/dashboards.yaml`
- `docker/telemetry/grafana/dashboards/rpc-performance.json`
- `docker/telemetry/grafana/dashboards/transaction-overview.json`
- `docker/telemetry/grafana/dashboards/consensus-health.json`
**Reference**:
- [07-observability-backends.md §7.6](./07-observability-backends.md) — Grafana dashboard specifications
- [01-architecture-analysis.md §1.8.3](./01-architecture-analysis.md) — Dashboard panel examples
---
## Task 5.3: Define Alert Rules
**Objective**: Create alert definitions for key telemetry anomalies.
**What to do**:
- Create `docker/telemetry/grafana/provisioning/alerting/alerts.yaml`:
- **RPC Latency Alert**: p99 latency > 1s for any command over 5 minutes
- **RPC Error Rate Alert**: Error rate > 5% for any command over 5 minutes
- **Consensus Duration Alert**: Round duration > 10s (warn), > 30s (critical)
- **Transaction Processing Alert**: Processing rate drops below threshold
- **Telemetry Pipeline Health**: No spans received for > 2 minutes
**Key new files**:
- `docker/telemetry/grafana/provisioning/alerting/alerts.yaml`
**Reference**:
- [07-observability-backends.md §7.6.3](./07-observability-backends.md) — Alert rule definitions
---
## Task 5.4: Production Collector Configuration
**Objective**: Create a production-ready OTel Collector configuration with tail-based sampling and resource limits.
**What to do**:
- Create `docker/telemetry/otel-collector-config-production.yaml`:
- Tail-based sampling policy:
- Always sample errors and slow traces
- 10% base sampling rate for normal traces
- Always sample first trace for each unique RPC command
- Resource limits:
- Memory limiter processor (80% of available memory)
- Queued retry for export failures
- TLS configuration for production endpoints
- Health check endpoint
**Key new files**:
- `docker/telemetry/otel-collector-config-production.yaml`
**Reference**:
- [05-configuration-reference.md §5.5.2](./05-configuration-reference.md) — Production collector config
---
## Task 5.5: Operator Runbook
**Objective**: Create operator documentation for managing the telemetry system in production.
**What to do**:
- Create `docs/telemetry-runbook.md`:
- **Setup**: How to enable telemetry in rippled
- **Configuration**: All config options with descriptions
- **Collector Deployment**: Docker Compose vs. Kubernetes vs. bare metal
- **Troubleshooting**: Common issues and resolutions
- No traces appearing
- High memory usage from telemetry
- Collector connection failures
- Sampling configuration tuning
- **Performance Tuning**: Batch size, queue size, sampling ratio guidelines
- **Upgrading**: How to upgrade OTel SDK and Collector versions
**Key new files**:
- `docs/telemetry-runbook.md`
---
## Task 5.6: Final Integration Testing
**Objective**: Validate the complete telemetry stack end-to-end.
**What to do**:
1. Start full Docker stack (Collector, Jaeger, Grafana, Prometheus)
2. Build rippled with `telemetry=ON`
3. Run in standalone mode with telemetry enabled
4. Generate RPC traffic and verify traces in Jaeger
5. Verify dashboards populate in Grafana
6. Verify alerts trigger correctly
7. Test telemetry OFF path (no regressions)
8. Run full test suite
**Verification Checklist**:
- [ ] Docker stack starts without errors
- [ ] Traces appear in Jaeger with correct hierarchy
- [ ] Grafana dashboards show metrics derived from spans
- [ ] Prometheus scrapes spanmetrics successfully
- [ ] Alerts can be triggered by simulated conditions
- [ ] Build succeeds with telemetry ON and OFF
- [ ] Full test suite passes
---
## Summary
| Task | Description | New Files | Modified Files | Depends On |
| ---- | ---------------------------------- | --------- | -------------- | ---------- |
| 5.1 | Spanmetrics connector + Prometheus | 2 | 2 | Phase 4 |
| 5.2 | Grafana dashboards | 4 | 0 | 5.1 |
| 5.3 | Alert definitions | 1 | 0 | 5.1 |
| 5.4 | Production collector config | 1 | 0 | Phase 4 |
| 5.5 | Operator runbook | 1 | 0 | Phase 4 |
| 5.6 | Final integration testing | 0 | 0 | 5.1-5.5 |
**Parallel work**: Tasks 5.1, 5.4, and 5.5 can run in parallel. Tasks 5.2 and 5.3 depend on 5.1. Task 5.6 depends on all others.
**Exit Criteria** (from [06-implementation-phases.md §6.11.5](./06-implementation-phases.md)):
- [ ] Dashboards deployed and showing data
- [ ] Alerts configured and tested
- [ ] Operator documentation complete
- [ ] Production collector config ready
- [ ] Full test suite passes

View File

@@ -0,0 +1,256 @@
# Phase 7: Native OTel Metrics Migration — Task List
> **Goal**: Replace `StatsDCollector` with a native OpenTelemetry Metrics SDK implementation behind the existing `beast::insight::Collector` interface, eliminating the StatsD UDP dependency.
>
> **Scope**: New `OTelCollectorImpl` class, `CollectorManager` config change, OTel Collector pipeline update, Grafana dashboard metric name migration, integration tests.
>
> **Branch**: `pratik/otel-phase7-native-metrics` (from `pratik/otel-phase6-statsd`)
### Related Plan Documents
| Document | Relevance |
| -------------------------------------------------------------------- | --------------------------------------------------------------- |
| [06-implementation-phases.md](./06-implementation-phases.md) | Phase 7 plan: motivation, architecture, exit criteria (§6.8) |
| [02-design-decisions.md](./02-design-decisions.md) | Collector interface design, beast::insight coexistence strategy |
| [05-configuration-reference.md](./05-configuration-reference.md) | `[insight]` and `[telemetry]` config sections |
| [09-data-collection-reference.md](./09-data-collection-reference.md) | Complete metric inventory that must be preserved |
---
## Task 7.1: Add OTel Metrics SDK to Build Dependencies
**Objective**: Enable the OTel C++ Metrics SDK components in the build system.
**What to do**:
- Edit `conanfile.py`:
- Add OTel metrics SDK components to the dependency list when `telemetry=True`
- Components needed: `opentelemetry-cpp::metrics`, `opentelemetry-cpp::otlp_http_metric_exporter`
- Edit `CMakeLists.txt` (telemetry section):
- Link `opentelemetry::metrics` and `opentelemetry::otlp_http_metric_exporter` targets
**Key modified files**:
- `conanfile.py`
- `CMakeLists.txt` (or the relevant telemetry cmake target)
**Reference**: [05-configuration-reference.md §5.3](./05-configuration-reference.md) — CMake integration
---
## Task 7.2: Implement OTelCollector Class
**Objective**: Create the core `OTelCollector` implementation that maps beast::insight instruments to OTel Metrics SDK instruments.
**What to do**:
- Create `include/xrpl/beast/insight/OTelCollector.h`:
- Public factory: `static std::shared_ptr<OTelCollector> New(std::string const& endpoint, std::string const& prefix, beast::Journal journal)`
- Derives from `StatsDCollector` (or directly from `Collector` — TBD based on shared code)
- Create `src/libxrpl/beast/insight/OTelCollector.cpp` (~400-500 lines):
- **OTelCounterImpl**: Wraps `opentelemetry::metrics::Counter<int64_t>`. `increment(amount)` calls `counter->Add(amount)`.
- **OTelGaugeImpl**: Uses `opentelemetry::metrics::ObservableGauge<uint64_t>` with an async callback. `set(value)` stores value atomically; callback reads it during collection.
- **OTelMeterImpl**: Wraps `opentelemetry::metrics::Counter<uint64_t>`. `increment(amount)` calls `counter->Add(amount)`. Semantically identical to Counter but unsigned.
- **OTelEventImpl**: Wraps `opentelemetry::metrics::Histogram<double>`. `notify(duration)` calls `histogram->Record(duration.count())`. Uses explicit bucket boundaries matching SpanMetrics: [1, 5, 10, 25, 50, 100, 250, 500, 1000, 5000] ms.
- **OTelHookImpl**: Stores handler function. Called during periodic metric collection (same 1s pattern via PeriodicMetricReader).
- **OTelCollectorImpl**: Main class.
- Creates `MeterProvider` with `PeriodicMetricReader` (1s export interval)
- Creates `OtlpHttpMetricExporter` pointing to `[telemetry]` endpoint
- Sets resource attributes (service.name, service.instance.id) matching trace exporter
- Implements all `make_*()` factory methods
- Prefixes metric names with `[insight] prefix=` value
- Guard all OTel SDK includes with `#ifdef XRPL_ENABLE_TELEMETRY` to compile to `NullCollector` equivalents when telemetry disabled.
**Key new files**:
- `include/xrpl/beast/insight/OTelCollector.h`
- `src/libxrpl/beast/insight/OTelCollector.cpp`
**Key patterns to follow**:
- Match `StatsDCollector.cpp` structure: private impl classes, intrusive list for metrics, strand-based thread safety
- Match existing telemetry code style from `src/libxrpl/telemetry/Telemetry.cpp`
- Use RAII for MeterProvider lifecycle (shutdown on destructor)
**Reference**: [04-code-samples.md](./04-code-samples.md) — code style and patterns
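**Illustrative sketch**: A rough sketch of the provider wiring described above is shown below, assuming the opentelemetry-cpp 1.18 factory APIs (`OtlpHttpMetricExporterFactory`, `PeriodicExportingMetricReaderFactory`); exact signatures and option fields should be confirmed against the Conan-provided headers during implementation. Note the SDK class is named `PeriodicExportingMetricReader`.
```cpp
#include <chrono>
#include <memory>
#include <string>

#include <opentelemetry/exporters/otlp/otlp_http_metric_exporter_factory.h>
#include <opentelemetry/exporters/otlp/otlp_http_metric_exporter_options.h>
#include <opentelemetry/sdk/metrics/export/periodic_exporting_metric_reader_factory.h>
#include <opentelemetry/sdk/metrics/export/periodic_exporting_metric_reader_options.h>
#include <opentelemetry/sdk/metrics/meter_provider.h>
#include <opentelemetry/sdk/metrics/view/view_registry.h>
#include <opentelemetry/sdk/resource/resource.h>

// Build a MeterProvider that exports over OTLP/HTTP once per second.
std::shared_ptr<opentelemetry::sdk::metrics::MeterProvider>
makeMeterProvider(std::string const& endpoint)
{
    namespace otlp = opentelemetry::exporter::otlp;
    namespace metrics_sdk = opentelemetry::sdk::metrics;

    otlp::OtlpHttpMetricExporterOptions exporterOpts;
    exporterOpts.url = endpoint;  // e.g. http://localhost:4318/v1/metrics

    metrics_sdk::PeriodicExportingMetricReaderOptions readerOpts;
    readerOpts.export_interval_millis = std::chrono::milliseconds(1000);

    // Resource attributes must match the trace exporter so spans and metrics correlate.
    auto resource = opentelemetry::sdk::resource::Resource::Create(
        {{"service.name", "rippled"}});

    auto provider = std::make_shared<metrics_sdk::MeterProvider>(
        std::make_unique<metrics_sdk::ViewRegistry>(), resource);
    provider->AddMetricReader(
        metrics_sdk::PeriodicExportingMetricReaderFactory::Create(
            otlp::OtlpHttpMetricExporterFactory::Create(exporterOpts), readerOpts));
    return provider;
}
```
The SpanMetrics-matching histogram bucket boundaries would likely be supplied as a histogram `View` registered on this provider rather than on each instrument.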
---
## Task 7.3: Update CollectorManager
**Objective**: Add `server=otel` config option to route metric creation to the new OTel backend.
**What to do**:
- Edit `src/xrpld/app/main/CollectorManager.cpp`:
- In the constructor, add a third branch after `server == "statsd"`:
```cpp
else if (server == "otel")
{
    // Read endpoint from [telemetry] section
    auto const endpoint = get(telemetryParams, "endpoint",
        "http://localhost:4318/v1/metrics");
    std::string const& prefix(get(params, "prefix"));
    m_collector = beast::insight::OTelCollector::New(
        endpoint, prefix, journal);
}
```
- This requires access to the `[telemetry]` config section — may need to pass it as a parameter or read from Application config.
- Edit `src/xrpld/app/main/CollectorManager.h`:
- Add `#include <xrpl/beast/insight/OTelCollector.h>`
**Key modified files**:
- `src/xrpld/app/main/CollectorManager.cpp`
- `src/xrpld/app/main/CollectorManager.h`
---
## Task 7.4: Update OTel Collector Configuration
**Objective**: Add a metrics pipeline to the OTLP receiver and remove the StatsD receiver dependency.
**What to do**:
- Edit `docker/telemetry/otel-collector-config.yaml`:
- Remove `statsd` receiver (no longer needed when `server=otel`)
- Add metrics pipeline under `service.pipelines`:
```yaml
metrics:
  receivers: [otlp, spanmetrics]
  processors: [batch]
  exporters: [prometheus]
```
- The OTLP receiver already listens on :4318 — it just needs to be added to the metrics pipeline receivers.
- Keep `spanmetrics` connector in the metrics pipeline so span-derived RED metrics continue working.
- Edit `docker/telemetry/docker-compose.yml`:
- Remove UDP :8125 port mapping from otel-collector service
- Update rippled service config: change `[insight] server=statsd` to `server=otel`
**Key modified files**:
- `docker/telemetry/otel-collector-config.yaml`
- `docker/telemetry/docker-compose.yml`
**Note**: Keep a commented-out `statsd` receiver block for operators who need backward compatibility.
---
## Task 7.5: Preserve Metric Names in Prometheus
**Objective**: Ensure existing Grafana dashboards continue working with identical metric names.
**What to do**:
- In `OTelCollector.cpp`, construct OTel instrument names to match existing Prometheus metric names:
- beast::insight `make_gauge("LedgerMaster", "Validated_Ledger_Age")` → OTel instrument name: `rippled_LedgerMaster_Validated_Ledger_Age`
- The prefix + group + name concatenation must produce the same string as `StatsDCollector`'s format
- Use underscores as separators (matching StatsD convention)
- Verify in integration test that key Prometheus queries still return data:
- `rippled_LedgerMaster_Validated_Ledger_Age`
- `rippled_Peer_Finder_Active_Inbound_Peers`
- `rippled_rpc_requests`
**Key consideration**: OTel Prometheus exporter may normalize metric names differently than StatsD receiver. Test this early (Task 7.2) and adjust naming strategy if needed. The OTel SDK's Prometheus exporter adds `_total` suffix to counters and converts dots to underscores — match existing conventions.
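**Illustrative sketch**: As an illustration, a naming helper along these lines could produce the target strings (the exact join rules are an assumption to be verified against the live Prometheus output in the integration test):
```cpp
#include <algorithm>
#include <cctype>
#include <string>

// Join prefix, group, and name with underscores and sanitize to the
// Prometheus-legal character set [a-zA-Z0-9_:].
std::string
makeInstrumentName(
    std::string const& prefix,  // [insight] prefix=, e.g. "rippled"
    std::string const& group,   // e.g. "LedgerMaster"
    std::string const& name)    // e.g. "Validated_Ledger_Age"
{
    std::string out = prefix.empty() ? group : prefix + "_" + group;
    out += "_" + name;
    std::replace_if(
        out.begin(),
        out.end(),
        [](unsigned char c) { return !(std::isalnum(c) || c == '_' || c == ':'); },
        '_');
    return out;
}

// makeInstrumentName("rippled", "LedgerMaster", "Validated_Ledger_Age")
//   == "rippled_LedgerMaster_Validated_Ledger_Age"
```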
---
## Task 7.6: Update Grafana Dashboards
**Objective**: Update the 3 StatsD dashboards if any metric names change due to OTLP export format differences.
**What to do**:
- If Task 7.5 confirms metric names are preserved exactly, no dashboard changes needed.
- If OTLP export produces different names (e.g., `_total` suffix on counters), update:
- `docker/telemetry/grafana/dashboards/statsd-node-health.json`
- `docker/telemetry/grafana/dashboards/statsd-network-traffic.json`
- `docker/telemetry/grafana/dashboards/statsd-rpc-pathfinding.json`
- Rename dashboard titles from "StatsD" to "System Metrics" or similar (since they're no longer StatsD-sourced).
**Key modified files**:
- `docker/telemetry/grafana/dashboards/statsd-*.json` (3 files, conditionally)
---
## Task 7.7: Update Integration Tests
**Objective**: Verify the full OTLP metrics pipeline end-to-end.
**What to do**:
- Edit `docker/telemetry/integration-test.sh`:
- Update test config to use `[insight] server=otel`
- Verify metrics arrive in Prometheus via OTLP (not StatsD)
- Add check that StatsD receiver is no longer required
- Preserve all existing metric presence checks
**Key modified files**:
- `docker/telemetry/integration-test.sh`
---
## Task 7.8: Update Documentation
**Objective**: Update all plan docs, runbook, and reference docs to reflect the migration.
**What to do**:
- Edit `docs/telemetry-runbook.md`:
- Update `[insight]` config examples to show `server=otel`
- Update troubleshooting section (no more StatsD UDP debugging)
- Edit `OpenTelemetryPlan/09-data-collection-reference.md`:
- Update Data Flow Overview diagram (remove StatsD receiver)
- Update Section 2 header from "StatsD Metrics" to "System Metrics (OTel native)"
- Update config examples
- Edit `OpenTelemetryPlan/05-configuration-reference.md`:
- Add `server=otel` option to `[insight]` section docs
- Edit `docker/telemetry/TESTING.md`:
- Update setup instructions to use `server=otel`
**Key modified files**:
- `docs/telemetry-runbook.md`
- `OpenTelemetryPlan/09-data-collection-reference.md`
- `OpenTelemetryPlan/05-configuration-reference.md`
- `docker/telemetry/TESTING.md`
---
## Summary Table
| Task | Description | New Files | Modified Files | Effort | Risk | Depends On |
| ---- | -------------------------------------- | --------- | -------------- | ------ | ------ | ---------- |
| 7.1 | Add OTel Metrics SDK to build deps | 0 | 2 | 0.5d | Low | — |
| 7.2 | Implement OTelCollector class | 2 | 0 | 3d | Medium | 7.1 |
| 7.3 | Update CollectorManager config routing | 0 | 2 | 0.5d | Low | 7.2 |
| 7.4 | Update OTel Collector YAML and Docker | 0 | 2 | 0.5d | Low | 7.3 |
| 7.5 | Preserve metric names in Prometheus | 0 | 1 | 1d | Medium | 7.2 |
| 7.6 | Update Grafana dashboards (if needed) | 0 | 3 | 1d | Low | 7.5 |
| 7.7 | Update integration tests | 0 | 1 | 0.5d | Low | 7.4 |
| 7.8 | Update documentation | 0 | 4 | 1d | Low | 7.6 |
**Total Effort**: 8 days
**Parallel work**: Tasks 7.4 and 7.5 can run in parallel after 7.2/7.3 complete. Task 7.6 depends on 7.5's findings. Tasks 7.7 and 7.8 can run in parallel after 7.6.
**Exit Criteria** (from [06-implementation-phases.md §6.8](./06-implementation-phases.md)):
- [ ] All 255+ metrics visible in Prometheus via OTLP pipeline (no StatsD receiver)
- [ ] `server=otel` is the default in development docker-compose
- [ ] `server=statsd` still works as a fallback
- [ ] Existing Grafana dashboards display data correctly
- [ ] Integration test passes with OTLP-only metrics pipeline
- [ ] No performance regression vs StatsD baseline (< 1% CPU overhead)
- [ ] Deferred Task 6.1 (`|m` wire format) no longer relevant — Meter mapped to OTel Counter


@@ -0,0 +1,234 @@
# Phase 8: Log-Trace Correlation and Centralized Log Ingestion — Task List
> **Goal**: Inject trace context (trace_id, span_id) into rippled's Journal log output for log-trace correlation, and add OTel Collector filelog receiver to ingest logs into Grafana Loki for unified observability.
>
> **Scope**: Two independent sub-phases — 8a (code change: trace_id in logs) and 8b (infra only: filelog receiver to Loki). No changes to the `beast::Journal` public API.
>
> **Branch**: `pratik/otel-phase8-log-correlation` (from `pratik/otel-phase7-native-metrics`)
### Related Plan Documents
| Document | Relevance |
| ---------------------------------------------------------------- | -------------------------------------------------------------- |
| [06-implementation-phases.md](./06-implementation-phases.md) | Phase 8 plan: motivation, architecture, exit criteria (§6.8.1) |
| [07-observability-backends.md](./07-observability-backends.md) | Loki backend recommendation, Grafana data source provisioning |
| [Phase7_taskList.md](./Phase7_taskList.md) | Prerequisite — native OTel metrics pipeline must be working |
| [05-configuration-reference.md](./05-configuration-reference.md) | `[telemetry]` config (trace_id injection toggle) |
---
## Task 8.1: Inject trace_id into Logs::format()
**Objective**: Add OTel trace context to every log line that is emitted within an active span.
**What to do**:
- Edit `src/libxrpl/basics/Log.cpp`:
- In `Logs::format()` (around line 346), after severity is appended, check for active OTel span:
```cpp
#ifdef XRPL_ENABLE_TELEMETRY
    auto span = opentelemetry::trace::GetSpan(
        opentelemetry::context::RuntimeContext::GetCurrent());
    auto ctx = span->GetContext();
    if (ctx.IsValid())
    {
        // Append trace context as structured fields.
        // ToLowerBase16 fills fixed-size buffers; no null terminator is written.
        char traceId[32], spanId[16];
        ctx.trace_id().ToLowerBase16(traceId);
        ctx.span_id().ToLowerBase16(spanId);
        output += "trace_id=";
        output.append(traceId, 32);
        output += " span_id=";
        output.append(spanId, 16);
        output += ' ';
    }
#endif
```
- Add `#include` for OTel context headers, guarded by `#ifdef XRPL_ENABLE_TELEMETRY`
- Edit `include/xrpl/basics/Log.h`:
- No changes needed — format() signature unchanged
**Key modified files**:
- `src/libxrpl/basics/Log.cpp`
**Performance note**: `GetSpan()` and `GetContext()` are thread-local reads with no locking — measured at <10ns per call. With ~1000 JLOG calls/min, this adds <10us/min of overhead.
---
## Task 8.2: Add Loki to Docker Compose Stack
**Objective**: Add Grafana Loki as a log storage backend in the development observability stack.
**What to do**:
- Edit `docker/telemetry/docker-compose.yml`:
- Add Loki service:
```yaml
loki:
  image: grafana/loki:2.9.0
  ports:
    - "3100:3100"
  command: -config.file=/etc/loki/local-config.yaml
```
- Add Loki as a Grafana data source in provisioning
- Create `docker/telemetry/grafana/provisioning/datasources/loki.yaml`:
- Configure Loki data source with derived fields linking `trace_id` to Tempo
**Key new files**:
- `docker/telemetry/grafana/provisioning/datasources/loki.yaml`
**Key modified files**:
- `docker/telemetry/docker-compose.yml`
---
## Task 8.3: Add Filelog Receiver to OTel Collector
**Objective**: Configure the OTel Collector to tail rippled's log file and export to Loki.
**What to do**:
- Edit `docker/telemetry/otel-collector-config.yaml`:
- Add `filelog` receiver:
```yaml
receivers:
  filelog:
    include: [/var/log/rippled/debug.log]
    operators:
      - type: regex_parser
        regex: '^(?P<timestamp>\S+)\s+(?P<partition>\S+):(?P<severity>\S+)\s+(?:trace_id=(?P<trace_id>[a-f0-9]+)\s+span_id=(?P<span_id>[a-f0-9]+)\s+)?(?P<message>.*)$'
        timestamp:
          parse_from: attributes.timestamp
          layout: "%Y-%m-%dT%H:%M:%S.%fZ"
```
- Add logs pipeline:
```yaml
service:
  pipelines:
    logs:
      receivers: [filelog]
      processors: [batch]
      exporters: [otlp/loki]
```
- Add Loki exporter:
```yaml
exporters:
  otlp/loki:
    endpoint: loki:3100
    tls:
      insecure: true
```
- Mount rippled's log directory into the collector container via docker-compose volume
**Key modified files**:
- `docker/telemetry/otel-collector-config.yaml`
- `docker/telemetry/docker-compose.yml`
---
## Task 8.4: Configure Grafana Trace-to-Log Correlation
**Objective**: Enable one-click navigation from Tempo traces to Loki logs in Grafana.
**What to do**:
- Edit Grafana Tempo data source provisioning to add `tracesToLogs` configuration:
```yaml
tracesToLogs:
  datasourceUid: loki
  filterByTraceID: true
  filterBySpanID: false
  tags: ["partition", "severity"]
```
- Edit Grafana Loki data source provisioning to add `derivedFields` linking trace_id back to Tempo:
```yaml
derivedFields:
  - datasourceUid: tempo
    matcherRegex: "trace_id=(\\w+)"
    name: TraceID
    url: "$${__value.raw}"
```
**Key modified files**:
- `docker/telemetry/grafana/provisioning/datasources/loki.yaml`
- `docker/telemetry/grafana/provisioning/datasources/` (Tempo data source file)
---
## Task 8.5: Update Integration Tests
**Objective**: Verify trace_id appears in logs and Loki correlation works.
**What to do**:
- Edit `docker/telemetry/integration-test.sh`:
- After sending RPC requests (which create spans), grep rippled's log output for `trace_id=`
- Verify trace_id matches a trace visible in Jaeger
- Optionally: query Loki via API to confirm log ingestion
**Key modified files**:
- `docker/telemetry/integration-test.sh`
---
## Task 8.6: Update Documentation
**Objective**: Document the log correlation feature in runbook and reference docs.
**What to do**:
- Edit `docs/telemetry-runbook.md`:
- Add "Log-Trace Correlation" section explaining how to use Grafana Tempo -> Loki linking
- Add LogQL query examples for filtering by trace_id
- Edit `OpenTelemetryPlan/09-data-collection-reference.md`:
- Add new section "3. Log Correlation" between SpanMetrics and StatsD sections
- Document the log format with trace_id injection
- Document Loki as a new backend
- Edit `docker/telemetry/TESTING.md`:
- Add log correlation verification steps
**Key modified files**:
- `docs/telemetry-runbook.md`
- `OpenTelemetryPlan/09-data-collection-reference.md`
- `docker/telemetry/TESTING.md`
---
## Summary Table
| Task | Description | Sub-Phase | New Files | Modified Files | Effort | Risk | Depends On |
| ---- | ------------------------------------------ | --------- | --------- | -------------- | ------ | ------ | ---------- |
| 8.1 | Inject trace_id into Logs::format() | 8a | 0 | 1 | 1d | Low | Phase 7 |
| 8.2 | Add Loki to Docker Compose stack | 8b | 1 | 1 | 0.5d | Low | -- |
| 8.3 | Add filelog receiver to OTel Collector | 8b | 0 | 2 | 1d | Medium | 8.1, 8.2 |
| 8.4 | Configure Grafana trace-to-log correlation | 8b | 0 | 2 | 0.5d | Low | 8.3 |
| 8.5 | Update integration tests | 8a + 8b | 0 | 1 | 0.5d | Low | 8.4 |
| 8.6 | Update documentation | 8a + 8b | 0 | 3 | 1d | Low | 8.5 |
**Total Effort**: 4.5 days
**Parallel work**: Task 8.2 (Loki infra) can run in parallel with Task 8.1 (code change). Tasks 8.3-8.6 are sequential.
**Exit Criteria** (from [06-implementation-phases.md §6.8.1](./06-implementation-phases.md)):
- [ ] Log lines within active spans contain `trace_id=<hex> span_id=<hex>`
- [ ] Log lines outside spans have no trace context (no empty fields)
- [ ] Loki ingests rippled logs via OTel Collector filelog receiver
- [ ] Grafana Tempo -> Loki one-click correlation works
- [ ] Grafana Loki -> Tempo reverse lookup works via derived field
- [ ] Integration test verifies trace_id presence in logs
- [ ] No performance regression from trace_id injection (< 0.1% overhead)


@@ -0,0 +1,329 @@
# Phase 9: Internal Metric Instrumentation Gap Fill — Task List
> **Status**: Future Enhancement
>
> **Goal**: Instrument rippled to emit ~50+ metrics that exist in `get_counts`/`server_info`/TxQ/PerfLog but currently lack time-series export via the OTel or beast::insight pipelines.
>
> **Scope**: Hybrid approach — extend `beast::insight` for metrics near existing registrations, use OTel Metrics SDK `ObservableGauge` callbacks for new categories (TxQ, PerfLog, CountedObjects).
>
> **Branch**: `pratik/otel-phase9-metric-gap-fill` (from `pratik/otel-phase8-log-correlation`)
>
> **Depends on**: Phase 7 (native OTel metrics pipeline) and Phase 8 (log-trace correlation)
### Related Plan Documents
| Document | Relevance |
| -------------------------------------------------------------------- | -------------------------------------------------------------- |
| [06-implementation-phases.md](./06-implementation-phases.md) | Phase 9 plan: motivation, architecture, exit criteria (§6.8.2) |
| [09-data-collection-reference.md](./09-data-collection-reference.md) | Current metric inventory + future metrics section |
| [Phase7_taskList.md](./Phase7_taskList.md) | Prerequisite — OTel Metrics SDK and `OTelCollector` class |
| [Phase8_taskList.md](./Phase8_taskList.md) | Prerequisite — log-trace correlation |
### Third-Party Consumer Context
These metrics serve multiple external consumer categories identified during research:
| Consumer Category | Key Metrics They Need |
| ------------------------- | --------------------------------------------------------------- |
| **Exchanges** | Fee escalation levels, TxQ depth, settlement latency |
| **Payment Processors** | Load factors, io_latency, transaction throughput |
| **Analytics Providers** | NodeStore I/O, cache hit rates, counted objects |
| **Validators/Operators** | Per-job execution times, PerfLog RPC counters, consensus timing |
| **Academic Researchers** | Consensus performance time-series, fee market dynamics |
| **Institutional Custody** | Server health scores, reserve calculations, node availability |
---
## Task 9.1: NodeStore I/O Metrics
**Objective**: Export node store read/write performance as time-series metrics.
**What to do**:
- In `src/libxrpl/nodestore/Database.cpp`, extend existing `beast::insight` registrations to add:
- Gauge: `node_reads_total` (cumulative read operations)
- Gauge: `node_reads_hit` (cache-served reads)
- Gauge: `node_writes` (cumulative write operations)
- Gauge: `node_written_bytes` (cumulative bytes written)
- Gauge: `node_read_bytes` (cumulative bytes read)
- Gauge: `node_reads_duration_us` (cumulative read time in microseconds)
- Gauge: `write_load` (current write load score)
- Gauge: `read_queue` (items in read queue)
- These values are already computed in `Database::getCountsJson()` (line ~236). Wire the same counters to `beast::insight` hooks.
**Key modified files**:
- `src/libxrpl/nodestore/Database.cpp`
- `src/libxrpl/nodestore/Database.h` (add insight members)
**Derived Prometheus metrics**: `rippled_nodestore_reads_total`, `rippled_nodestore_reads_hit`, `rippled_nodestore_write_load`, etc.
**Grafana dashboard**: Add "NodeStore I/O" panel group to _Node Health_ dashboard.
---
## Task 9.2: Cache Hit Rate Metrics
**Objective**: Export SHAMap and ledger cache performance as time-series gauges.
**What to do**:
- Register OTel `ObservableGauge` callbacks (via Phase 7's `OTelCollector`) for:
- `SLE_hit_rate` — SLE cache hit rate (0.0 to 1.0)
- `ledger_hit_rate` — Ledger object cache hit rate
- `AL_hit_rate` — AcceptedLedger cache hit rate
- `treenode_cache_size` — SHAMap TreeNode cache size (entries)
- `treenode_track_size` — Tracked tree nodes
- `fullbelow_size` — FullBelow cache size
- The callback should read from the same sources as the `GetCounts.cpp` handler (line ~43).
- Create a centralized `MetricsRegistry` class that holds all OTel async gauge registrations, polled at 10-second intervals by the `PeriodicMetricReader`.
**Key modified files**:
- New: `src/xrpld/telemetry/MetricsRegistry.h` / `.cpp`
- `src/xrpld/rpc/handlers/GetCounts.cpp` (extract shared access methods)
- `src/xrpld/app/main/Application.cpp` (register MetricsRegistry at startup)
**Derived Prometheus metrics**: `rippled_cache_SLE_hit_rate`, `rippled_cache_ledger_hit_rate`, `rippled_cache_treenode_size`, etc.
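**Illustrative sketch**: A minimal sketch of the proposed `MetricsRegistry`, assuming a pull-based model where each gauge is paired with a supplier callable read at collection time (the class shape and `addGauge` name are hypothetical; the OTel calls follow the C++ metrics API):
```cpp
#include <cstdint>
#include <functional>
#include <memory>
#include <string>
#include <utility>
#include <vector>

#include <opentelemetry/metrics/async_instruments.h>
#include <opentelemetry/metrics/meter.h>
#include <opentelemetry/metrics/observer_result.h>

class MetricsRegistry
{
public:
    explicit MetricsRegistry(
        opentelemetry::nostd::shared_ptr<opentelemetry::metrics::Meter> meter)
        : meter_(std::move(meter))
    {
    }

    ~MetricsRegistry()
    {
        for (auto& e : entries_)
            e->instrument->RemoveCallback(&MetricsRegistry::observe, e.get());
    }

    // Register a gauge whose value is pulled from `supplier` on every collection cycle.
    void
    addGauge(std::string const& name, std::function<double()> supplier)
    {
        auto entry = std::make_unique<Entry>();
        entry->supplier = std::move(supplier);
        entry->instrument = meter_->CreateDoubleObservableGauge(name);
        entry->instrument->AddCallback(&MetricsRegistry::observe, entry.get());
        entries_.push_back(std::move(entry));
    }

private:
    struct Entry
    {
        std::function<double()> supplier;
        opentelemetry::nostd::shared_ptr<
            opentelemetry::metrics::ObservableInstrument>
            instrument;
    };

    static void
    observe(opentelemetry::metrics::ObserverResult result, void* state)
    {
        namespace nostd = opentelemetry::nostd;
        using DoubleObserver = nostd::shared_ptr<
            opentelemetry::metrics::ObserverResultT<double>>;
        auto* entry = static_cast<Entry*>(state);
        if (nostd::holds_alternative<DoubleObserver>(result))
            nostd::get<DoubleObserver>(result)->Observe(entry->supplier());
    }

    opentelemetry::nostd::shared_ptr<opentelemetry::metrics::Meter> meter_;
    std::vector<std::unique_ptr<Entry>> entries_;
};
```
Cache hit rates would then be registered with suppliers that read the same accessors `GetCounts.cpp` already uses, e.g. `registry.addGauge("cache_SLE_hit_rate", ...)`.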
---
## Task 9.3: Transaction Queue (TxQ) Metrics
**Objective**: Export TxQ depth, capacity, and fee escalation levels as time-series.
**What to do**:
- Register OTel `ObservableGauge` callbacks for TxQ state (from `TxQ.h` line ~143):
- `txq_count` — Current transactions in queue
- `txq_max_size` — Maximum queue capacity
- `txq_in_ledger` — Transactions in current open ledger
- `txq_per_ledger` — Expected transactions per ledger
- `txq_reference_fee_level` — Reference fee level
- `txq_min_processing_fee_level` — Minimum fee to get processed
- `txq_med_fee_level` — Median fee level in queue
- `txq_open_ledger_fee_level` — Open ledger fee escalation level
- Add to the `MetricsRegistry` (Task 9.2).
**Key modified files**:
- `src/xrpld/telemetry/MetricsRegistry.cpp` (add TxQ callbacks)
- `src/xrpld/app/tx/detail/TxQ.h` (expose metrics accessor if needed)
**Derived Prometheus metrics**: `rippled_txq_count`, `rippled_txq_max_size`, `rippled_txq_open_ledger_fee_level`, etc.
**Grafana dashboard**: New _Fee Market & TxQ_ dashboard (`rippled-fee-market`).
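**Illustrative sketch**: Registration could reuse the `MetricsRegistry` sketch from Task 9.2, pulling values through the existing `TxQ::getMetrics()` accessor; include paths follow the file locations listed above and `addGauge` is the hypothetical helper from Task 9.2:
```cpp
#include <xrpld/app/ledger/OpenLedger.h>
#include <xrpld/app/main/Application.h>
#include <xrpld/app/tx/detail/TxQ.h>

void
registerTxQGauges(MetricsRegistry& registry, ripple::Application& app)
{
    // TxQ::getMetrics() returns the same snapshot the `fee` RPC reports.
    registry.addGauge("txq_count", [&app] {
        auto const m = app.getTxQ().getMetrics(*app.openLedger().current());
        return static_cast<double>(m.txCount);
    });
    registry.addGauge("txq_in_ledger", [&app] {
        auto const m = app.getTxQ().getMetrics(*app.openLedger().current());
        return static_cast<double>(m.txInLedger);
    });
    // txq_per_ledger, txq_max_size and the *_fee_level gauges follow the same pattern.
}
```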
---
## Task 9.4: PerfLog Per-RPC Method Metrics
**Objective**: Export per-RPC-method call counts and latency as OTel metrics.
**What to do**:
- Register OTel instruments for PerfLog RPC counters (from `PerfLogImp.cpp` line ~63):
- Counter: `rpc_method_started_total{method="<name>"}` — calls started
- Counter: `rpc_method_finished_total{method="<name>"}` — calls completed
- Counter: `rpc_method_errored_total{method="<name>"}` — calls errored
- Histogram: `rpc_method_duration_us{method="<name>"}` — execution time distribution
- Use OTel `Counter<int64_t>` and `Histogram<double>` instruments with `method` attribute label.
- Hook into the existing PerfLog callback mechanism rather than adding new instrumentation points.
**Key modified files**:
- `src/xrpld/perflog/detail/PerfLogImp.cpp` (add OTel instrument updates alongside existing JSON counters)
- `src/xrpld/telemetry/MetricsRegistry.cpp` (register instruments)
**Derived Prometheus metrics**: `rippled_rpc_method_started_total{method="server_info"}`, `rippled_rpc_method_duration_us_bucket{method="ledger"}`, etc.
**Grafana dashboard**: Add "Per-Method RPC Breakdown" panel group to _RPC Performance_ dashboard.
---
## Task 9.5: PerfLog Per-Job-Type Metrics
**Objective**: Export per-job-type queue and execution metrics.
**What to do**:
- Register OTel instruments for PerfLog job counters:
- Counter: `job_queued_total{job_type="<name>"}` — jobs queued
- Counter: `job_started_total{job_type="<name>"}` — jobs started
- Counter: `job_finished_total{job_type="<name>"}` — jobs completed
- Histogram: `job_queued_duration_us{job_type="<name>"}` — time spent waiting in queue
- Histogram: `job_running_duration_us{job_type="<name>"}` — execution time distribution
- Hook into PerfLog's existing job tracking alongside Task 9.4.
**Key modified files**:
- `src/xrpld/perflog/detail/PerfLogImp.cpp`
- `src/xrpld/telemetry/MetricsRegistry.cpp`
**Derived Prometheus metrics**: `rippled_job_queued_total{job_type="ledgerData"}`, `rippled_job_running_duration_us_bucket{job_type="transaction"}`, etc.
**Grafana dashboard**: New _Job Queue Analysis_ dashboard (`rippled-job-queue`).
---
## Task 9.6: Counted Object Instance Metrics
**Objective**: Export live instance counts for key internal object types.
**What to do**:
- Register OTel `ObservableGauge` callbacks for `CountedObject<T>` instance counts:
- `object_count{type="Transaction"}` — live Transaction objects
- `object_count{type="Ledger"}` — live Ledger objects
- `object_count{type="NodeObject"}` — live NodeObject instances
- `object_count{type="STTx"}` — serialized transaction objects
- `object_count{type="STLedgerEntry"}` — serialized ledger entries
- `object_count{type="InboundLedger"}` — ledgers being fetched
- `object_count{type="Pathfinder"}` — active pathfinding computations
- `object_count{type="PathRequest"}` — active path requests
- `object_count{type="HashRouterEntry"}` — hash router entries
- The `CountedObject` template already tracks these via atomic counters. The callback just reads the current counts.
**Key modified files**:
- `src/xrpld/telemetry/MetricsRegistry.cpp` (add counted object callbacks)
- `include/xrpl/basics/CountedObject.h` (may need static accessor for iteration)
**Derived Prometheus metrics**: `rippled_object_count{type="Transaction"}`, `rippled_object_count{type="NodeObject"}`, etc.
**Grafana dashboard**: Add "Object Instance Counts" panel to _Node Health_ dashboard.
---
## Task 9.7: Fee Escalation & Load Factor Metrics
**Objective**: Export the full load factor breakdown as time-series.
**What to do**:
- Register OTel `ObservableGauge` callbacks for load factors (from `NetworkOPs.cpp` line ~2694):
- `load_factor` — combined transaction cost multiplier
- `load_factor_server` — server + cluster + network contribution
- `load_factor_local` — local server load only
- `load_factor_net` — network-wide load estimate
- `load_factor_cluster` — cluster peer load
- `load_factor_fee_escalation` — open ledger fee escalation
- `load_factor_fee_queue` — queue entry fee level
- These overlap with some existing StatsD metrics but provide finer granularity (individual factor breakdown vs. combined value).
**Key modified files**:
- `src/xrpld/telemetry/MetricsRegistry.cpp`
- `src/xrpld/app/misc/NetworkOPs.cpp` (expose load factor accessors if needed)
**Derived Prometheus metrics**: `rippled_load_factor`, `rippled_load_factor_fee_escalation`, etc.
**Grafana dashboard**: Add "Load Factor Breakdown" panel to _Fee Market & TxQ_ dashboard.
---
## Task 9.8: New Grafana Dashboards
**Objective**: Create Grafana dashboards for the new metric categories.
**What to do**:
- Create 2 new dashboards:
1. **Fee Market & TxQ** (`rippled-fee-market`) — TxQ depth/capacity, fee levels, load factor breakdown, fee escalation timeline
2. **Job Queue Analysis** (`rippled-job-queue`) — Per-job-type rates, queue wait times, execution times, job queue depth
- Update 2 existing dashboards:
1. **Node Health** (`rippled-statsd-node-health`) — Add NodeStore I/O panels, cache hit rate panels, object instance counts
2. **RPC Performance** (`rippled-rpc-perf`) — Add per-method RPC breakdown panels
**Key modified files**:
- New: `docker/telemetry/grafana/dashboards/rippled-fee-market.json`
- New: `docker/telemetry/grafana/dashboards/rippled-job-queue.json`
- `docker/telemetry/grafana/dashboards/rippled-statsd-node-health.json`
- `docker/telemetry/grafana/dashboards/rippled-rpc-perf.json`
---
## Task 9.9: Update Documentation
**Objective**: Update telemetry reference docs with all new metrics.
**What to do**:
- Update `OpenTelemetryPlan/09-data-collection-reference.md`:
- Add new section for OTel SDK-exported metrics (NodeStore, cache, TxQ, PerfLog, CountedObjects, load factors)
- Update Grafana dashboard reference table (add 2 new dashboards)
- Add Prometheus query examples for new metrics
- Update `docs/telemetry-runbook.md`:
- Add alerting rules for new metrics (NodeStore write_load, TxQ capacity, cache hit rate degradation)
- Add troubleshooting entries for new metric categories
**Key modified files**:
- `OpenTelemetryPlan/09-data-collection-reference.md`
- `docs/telemetry-runbook.md`
---
## Task 9.10: Integration Tests
**Objective**: Verify all new metrics appear in Prometheus after a test workload.
**What to do**:
- Extend the existing telemetry integration test:
- Start rippled with `[telemetry] enabled=1` and `[insight] server=otel`
- Submit a batch of RPC calls and transactions
- Query Prometheus for each new metric family
- Assert non-zero values for: NodeStore reads, cache hit rates, TxQ count, PerfLog RPC counters, object counts, load factors
- Add unit tests for the `MetricsRegistry` class:
- Verify callback registration and deregistration
- Verify metric values match `get_counts` JSON output
- Verify graceful behavior when telemetry is disabled
**Key modified files**:
- `src/test/telemetry/MetricsRegistry_test.cpp` (new)
- Existing integration test script (extend assertions)
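**Illustrative sketch**: A skeleton for the new unit test, assuming the usual `beast::unit_test::suite` pattern used under `src/test/` (the `MetricsRegistry` API is the hypothetical one sketched in Task 9.2):
```cpp
#include <xrpl/beast/unit_test.h>

namespace ripple {
namespace test {

class MetricsRegistry_test : public beast::unit_test::suite
{
    void
    testRegistration()
    {
        testcase("gauge registration and teardown");
        // With telemetry disabled the registry must be a no-op (no OTel SDK
        // calls, no crashes); with telemetry enabled, registered suppliers
        // should be invoked during collection and the observed values should
        // match the corresponding get_counts fields.
        pass();  // placeholder until MetricsRegistry lands
    }

public:
    void
    run() override
    {
        testRegistration();
    }
};

BEAST_DEFINE_TESTSUITE(MetricsRegistry, telemetry, ripple);

}  // namespace test
}  // namespace ripple
```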
---
## Effort Summary
| Task | Description | Effort | Risk |
| ---- | ---------------------------------------- | ------ | ------ |
| 9.1 | NodeStore I/O metrics | 1d | Low |
| 9.2 | Cache hit rate metrics + MetricsRegistry | 2d | Medium |
| 9.3 | TxQ metrics | 1d | Low |
| 9.4 | PerfLog per-RPC metrics | 1.5d | Medium |
| 9.5 | PerfLog per-job metrics | 1d | Low |
| 9.6 | Counted object instance metrics | 0.5d | Low |
| 9.7 | Fee escalation & load factor metrics | 0.5d | Low |
| 9.8 | New Grafana dashboards | 2d | Low |
| 9.9 | Update documentation | 1d | Low |
| 9.10 | Integration tests | 1.5d | Medium |
**Total Effort**: 12 days
## Exit Criteria
- [ ] All ~50 new metrics visible in Prometheus via OTLP pipeline
- [ ] `MetricsRegistry` class registers/deregisters cleanly with OTel SDK
- [ ] Async gauge callbacks execute at 10s intervals without performance impact
- [ ] 2 new Grafana dashboards operational (Fee Market, Job Queue)
- [ ] 2 existing dashboards updated with new panel groups
- [ ] Integration test validates all new metric families are non-zero
- [ ] No performance regression (< 0.5% CPU overhead from new callbacks)
- [ ] Documentation updated with full new metric inventory


@@ -0,0 +1,280 @@
# OpenTelemetry Distributed Tracing for rippled
---
## Slide 1: Introduction
### What is OpenTelemetry?
OpenTelemetry is an open-source, CNCF-backed observability framework for distributed tracing, metrics, and logs.
### Why OpenTelemetry for rippled?
- **End-to-End Transaction Visibility**: Track transactions from submission → consensus → ledger inclusion
- **Cross-Node Correlation**: Follow requests across multiple independent nodes using a unique `trace_id`
- **Consensus Round Analysis**: Understand timing and behavior across validators
- **Incident Debugging**: Correlate events across distributed nodes during issues
```mermaid
flowchart LR
A["Node A<br/>tx.receive<br/>trace_id: abc123"] --> B["Node B<br/>tx.relay<br/>trace_id: abc123"] --> C["Node C<br/>tx.validate<br/>trace_id: abc123"] --> D["Node D<br/>ledger.apply<br/>trace_id: abc123"]
style A fill:#1565c0,stroke:#0d47a1,color:#fff
style B fill:#2e7d32,stroke:#1b5e20,color:#fff
style C fill:#2e7d32,stroke:#1b5e20,color:#fff
style D fill:#e65100,stroke:#bf360c,color:#fff
```
> **Trace ID: abc123** — All nodes share the same trace, enabling cross-node correlation.
---
## Slide 2: OpenTelemetry vs Open Source Alternatives
| Feature | OpenTelemetry | Jaeger | Zipkin | SkyWalking | Pinpoint | Prometheus |
| ------------------- | ---------------- | ---------------- | ------------------ | ---------- | ---------- | ---------- |
| **Tracing** | YES | YES | YES | YES | YES | NO |
| **Metrics** | YES | NO | NO | YES | YES | YES |
| **Logs** | YES | NO | NO | YES | NO | NO |
| **C++ SDK** | YES (Official) | YES (Deprecated) | YES (Unmaintained) | NO | NO | YES |
| **Vendor Neutral** | YES (Primary goal) | NO | NO | NO | NO | NO |
| **Instrumentation** | Manual + Auto | Manual | Manual | Auto-first | Auto-first | Manual |
| **Backend** | Any (exporters) | Self | Self | Self | Self | Self |
| **CNCF Status** | Incubating | Graduated | NO | Incubating | NO | Graduated |
> **Why OpenTelemetry?** It's the only actively maintained, full-featured C++ option with vendor neutrality — allowing export to Jaeger, Prometheus, Grafana, or any commercial backend without changing instrumentation.
---
## Slide 3: Comparison with rippled's Existing Solutions
### Current Observability Stack
| Aspect | PerfLog (JSON) | StatsD (Metrics) | OpenTelemetry (NEW) |
| --------------------- | --------------------- | --------------------- | --------------------------- |
| **Type** | Logging | Metrics | Distributed Tracing |
| **Scope** | Single node | Single node | **Cross-node** |
| **Data** | JSON log entries | Counters, gauges | Spans with context |
| **Correlation** | By timestamp | By metric name | By `trace_id` |
| **Overhead** | Low (file I/O) | Low (UDP) | Low-Medium (configurable) |
| **Question Answered** | "What happened here?" | "How many? How fast?" | **"What was the journey?"** |
### Use Case Matrix
| Scenario | PerfLog | StatsD | OpenTelemetry |
| -------------------------------- | ------- | ------ | ------------- |
| "How many TXs per second?" | ❌ | ✅ | ❌ |
| "Why was this specific TX slow?" | ⚠️ | ❌ | ✅ |
| "Which node delayed consensus?" | ❌ | ❌ | ✅ |
| "Show TX journey across 5 nodes" | ❌ | ❌ | ✅ |
> **Key Insight**: OpenTelemetry **complements** (not replaces) existing systems.
---
## Slide 4: Architecture
### High-Level Integration Architecture
```mermaid
flowchart TB
subgraph rippled["rippled Node"]
subgraph services["Core Services"]
direction LR
RPC["RPC Server<br/>(HTTP/WS)"] ~~~ Overlay["Overlay<br/>(P2P Network)"] ~~~ Consensus["Consensus<br/>(RCLConsensus)"]
end
Telemetry["Telemetry Module<br/>(OpenTelemetry SDK)"]
services --> Telemetry
end
Telemetry -->|OTLP/gRPC| Collector["OTel Collector"]
Collector --> Tempo["Grafana Tempo"]
Collector --> Jaeger["Jaeger"]
Collector --> Elastic["Elastic APM"]
style rippled fill:#424242,stroke:#212121,color:#fff
style services fill:#1565c0,stroke:#0d47a1,color:#fff
style Telemetry fill:#2e7d32,stroke:#1b5e20,color:#fff
style Collector fill:#e65100,stroke:#bf360c,color:#fff
```
### Context Propagation
```mermaid
sequenceDiagram
participant Client
participant NodeA as Node A
participant NodeB as Node B
Client->>NodeA: Submit TX (no context)
Note over NodeA: Creates trace_id: abc123<br/>span: tx.receive
NodeA->>NodeB: Relay TX<br/>(traceparent: abc123)
Note over NodeB: Links to trace_id: abc123<br/>span: tx.relay
```
- **HTTP/RPC**: W3C Trace Context headers (`traceparent`)
- **P2P Messages**: Protocol Buffer extension fields
---
## Slide 5: Implementation Plan
### 5-Phase Rollout (9 Weeks)
```mermaid
gantt
title Implementation Timeline
dateFormat YYYY-MM-DD
axisFormat Week %W
section Phase 1
Core Infrastructure :p1, 2024-01-01, 2w
section Phase 2
RPC Tracing :p2, after p1, 2w
section Phase 3
Transaction Tracing :p3, after p2, 2w
section Phase 4
Consensus Tracing :p4, after p3, 2w
section Phase 5
Documentation :p5, after p4, 1w
```
### Phase Details
| Phase | Focus | Key Deliverables | Effort |
| ----- | ------------------- | -------------------------------------------- | ------- |
| 1 | Core Infrastructure | SDK integration, Telemetry interface, Config | 10 days |
| 2 | RPC Tracing | HTTP context extraction, Handler spans | 10 days |
| 3 | Transaction Tracing | Protobuf context, P2P relay propagation | 10 days |
| 4 | Consensus Tracing | Round spans, Proposal/validation tracing | 10 days |
| 5 | Documentation | Runbook, Dashboards, Training | 7 days |
**Total Effort**: ~47 developer-days (2 developers)
---
## Slide 6: Performance Overhead
### Estimated System Impact
| Metric | Overhead | Notes |
| ----------------- | ---------- | ----------------------------------- |
| **CPU** | 1-3% | Span creation and attribute setting |
| **Memory** | 2-5 MB | Batch buffer for pending spans |
| **Network** | 10-50 KB/s | Compressed OTLP export to collector |
| **Latency (p99)** | <2% | With proper sampling configuration |
### Per-Message Overhead (Context Propagation)
Each P2P message carries trace context with the following overhead:
| Field | Size | Description |
| ------------- | ------------- | ----------------------------------------- |
| `trace_id` | 16 bytes | Unique identifier for the entire trace |
| `span_id` | 8 bytes | Current span (becomes parent on receiver) |
| `trace_flags` | 4 bytes | Sampling decision flags |
| `trace_state` | 0-4 bytes | Optional vendor-specific data |
| **Total** | **~32 bytes** | **Added per traced P2P message** |
```mermaid
flowchart LR
subgraph msg["P2P Message with Trace Context"]
A["Original Message<br/>(variable size)"] --> B["+ TraceContext<br/>(~32 bytes)"]
end
subgraph breakdown["Context Breakdown"]
C["trace_id<br/>16 bytes"]
D["span_id<br/>8 bytes"]
E["flags<br/>4 bytes"]
F["state<br/>0-4 bytes"]
end
B --> breakdown
style A fill:#424242,stroke:#212121,color:#fff
style B fill:#2e7d32,stroke:#1b5e20,color:#fff
style C fill:#1565c0,stroke:#0d47a1,color:#fff
style D fill:#1565c0,stroke:#0d47a1,color:#fff
style E fill:#e65100,stroke:#bf360c,color:#fff
style F fill:#4a148c,stroke:#2e0d57,color:#fff
```
> **Note**: 32 bytes is negligible compared to typical transaction messages (hundreds to thousands of bytes)
### Mitigation Strategies
```mermaid
flowchart LR
A["Head Sampling<br/>10% default"] --> B["Tail Sampling<br/>Keep errors/slow"] --> C["Batch Export<br/>Reduce I/O"] --> D["Conditional Compile<br/>XRPL_ENABLE_TELEMETRY"]
style A fill:#1565c0,stroke:#0d47a1,color:#fff
style B fill:#2e7d32,stroke:#1b5e20,color:#fff
style C fill:#e65100,stroke:#bf360c,color:#fff
style D fill:#4a148c,stroke:#2e0d57,color:#fff
```
### Kill Switches (Rollback Options)
1. **Config Disable**: Set `enabled=0` in config → instant disable; no restart needed for sampling changes
2. **Rebuild**: Compile with `XRPL_ENABLE_TELEMETRY=OFF` → zero overhead (no-op)
3. **Full Revert**: Clean separation allows easy commit reversion
---
## Slide 7: Data Collection & Privacy
### What Data is Collected
| Category | Attributes Collected | Purpose |
| --------------- | ---------------------------------------------------------------------------------- | --------------------------- |
| **Transaction** | `tx.hash`, `tx.type`, `tx.result`, `tx.fee`, `ledger_index` | Trace transaction lifecycle |
| **Consensus** | `round`, `phase`, `mode`, `proposers` (public key or public node id), `duration_ms` | Analyze consensus timing |
| **RPC** | `command`, `version`, `status`, `duration_ms` | Monitor RPC performance |
| **Peer** | `peer.id` (public key), `latency_ms`, `message.type`, `message.size` | Network topology analysis |
| **Ledger** | `ledger.hash`, `ledger.index`, `close_time`, `tx_count` | Ledger progression tracking |
| **Job** | `job.type`, `queue_ms`, `worker` | JobQueue performance |
### What is NOT Collected (Privacy Guarantees)
```mermaid
flowchart LR
subgraph notCollected["❌ NOT Collected"]
direction LR
A["Private Keys"] ~~~ B["Account Balances"] ~~~ C["Transaction Amounts"]
end
subgraph alsoNot["❌ Also Excluded"]
direction LR
D["IP Addresses<br/>(configurable)"] ~~~ E["Personal Data"] ~~~ F["Raw TX Payloads"]
end
style A fill:#c62828,stroke:#8c2809,color:#fff
style B fill:#c62828,stroke:#8c2809,color:#fff
style C fill:#c62828,stroke:#8c2809,color:#fff
style D fill:#c62828,stroke:#8c2809,color:#fff
style E fill:#c62828,stroke:#8c2809,color:#fff
style F fill:#c62828,stroke:#8c2809,color:#fff
```
### Privacy Protection Mechanisms
| Mechanism | Description |
| -------------------------- | ------------------------------------------------------------- |
| **Account Hashing** | `xrpl.tx.account` is hashed at collector level before storage |
| **Configurable Redaction** | Sensitive fields can be excluded via config |
| **Sampling** | Only 10% of traces recorded by default (reduces exposure) |
| **Local Control** | Node operators control what gets exported |
| **No Raw Payloads** | Transaction content is never recorded, only metadata |
> **Key Principle**: Telemetry collects **operational metadata** (timing, counts, hashes) — never **sensitive content** (keys, balances, amounts).
---
_End of Presentation_


@@ -1529,3 +1529,46 @@ validators.txt
# set to ssl_verify to 0.
[ssl_verify]
1
#-------------------------------------------------------------------------------
#
# 11. Telemetry (OpenTelemetry Tracing)
#
#-------------------------------------------------------------------------------
#
# Enables distributed tracing via OpenTelemetry. Requires building with
# -DXRPL_ENABLE_TELEMETRY=ON (telemetry Conan option).
#
# [telemetry]
#
# enabled=0
#
# Enable or disable telemetry at runtime. Default: 0 (disabled).
#
# endpoint=http://localhost:4318/v1/traces
#
# The OpenTelemetry Collector endpoint (OTLP/HTTP). Default: http://localhost:4318/v1/traces.
#
# exporter=otlp_http
#
# Exporter type: otlp_http. Default: otlp_http.
#
# sampling_ratio=1.0
#
# Fraction of traces to sample (0.0 to 1.0). Default: 1.0 (all traces).
#
# trace_rpc=1
#
# Enable RPC request tracing. Default: 1.
#
# trace_transactions=1
#
# Enable transaction lifecycle tracing. Default: 1.
#
# trace_consensus=1
#
# Enable consensus round tracing. Default: 1.
#
# trace_peer=0
#
# Enable peer message tracing (high volume). Default: 0.
#


@@ -1,60 +0,0 @@
include(CMakeFindDependencyMacro)
# need to represent system dependencies of the lib here
#[=========================================================[
Boost
#]=========================================================]
if(static OR APPLE OR MSVC)
set(Boost_USE_STATIC_LIBS ON)
endif()
set(Boost_USE_MULTITHREADED ON)
if(static OR MSVC)
set(Boost_USE_STATIC_RUNTIME ON)
else()
set(Boost_USE_STATIC_RUNTIME OFF)
endif()
find_dependency(
Boost
COMPONENTS
chrono
container
context
coroutine
date_time
filesystem
program_options
regex
system
thread
)
#[=========================================================[
OpenSSL
#]=========================================================]
if(NOT DEFINED OPENSSL_ROOT_DIR)
if(DEFINED ENV{OPENSSL_ROOT})
set(OPENSSL_ROOT_DIR $ENV{OPENSSL_ROOT})
elseif(APPLE)
find_program(homebrew brew)
if(homebrew)
execute_process(
COMMAND ${homebrew} --prefix openssl
OUTPUT_VARIABLE OPENSSL_ROOT_DIR
OUTPUT_STRIP_TRAILING_WHITESPACE
)
endif()
endif()
file(TO_CMAKE_PATH "${OPENSSL_ROOT_DIR}" OPENSSL_ROOT_DIR)
endif()
if(static OR APPLE OR MSVC)
set(OPENSSL_USE_STATIC_LIBS ON)
endif()
set(OPENSSL_MSVC_STATIC_RT ON)
find_dependency(OpenSSL REQUIRED)
find_dependency(ZLIB)
find_dependency(date)
if(TARGET ZLIB::ZLIB)
set_target_properties(
OpenSSL::Crypto
PROPERTIES INTERFACE_LINK_LIBRARIES ZLIB::ZLIB
)
endif()


@@ -78,6 +78,13 @@ include(target_link_modules)
# Level 01
add_module(xrpl beast)
target_link_libraries(xrpl.libxrpl.beast PUBLIC xrpl.imports.main)
# OTelCollector in beast/insight uses OTel Metrics SDK when telemetry is enabled.
if(telemetry)
target_link_libraries(
xrpl.libxrpl.beast
PUBLIC opentelemetry-cpp::opentelemetry-cpp
)
endif()
include(GitInfo)
add_module(xrpl git)
@@ -180,6 +187,23 @@ target_link_libraries(
add_module(xrpl tx)
target_link_libraries(xrpl.libxrpl.tx PUBLIC xrpl.libxrpl.ledger)
# Telemetry module — OpenTelemetry distributed tracing support.
# Sources: include/xrpl/telemetry/ (headers), src/libxrpl/telemetry/ (impl).
# When telemetry=ON, links the Conan-provided umbrella target
# opentelemetry-cpp::opentelemetry-cpp (individual component targets like
# ::api, ::sdk are not available in the Conan package).
add_module(xrpl telemetry)
target_link_libraries(
xrpl.libxrpl.telemetry
PUBLIC xrpl.libxrpl.basics xrpl.libxrpl.beast
)
if(telemetry)
target_link_libraries(
xrpl.libxrpl.telemetry
PUBLIC opentelemetry-cpp::opentelemetry-cpp
)
endif()
add_library(xrpl.libxrpl)
set_target_properties(xrpl.libxrpl PROPERTIES OUTPUT_NAME xrpl)
@@ -210,6 +234,7 @@ target_link_modules(
resource
server
shamap
telemetry
tx
)


@@ -2,100 +2,38 @@
install stuff
#]===================================================================]
include(create_symbolic_link)
include(GNUInstallDirs)
# If no suffix is defined for executables (e.g. Windows uses .exe but Linux
# and macOS use none), then explicitly set it to the empty string.
if(NOT DEFINED suffix)
set(suffix "")
if(is_root_project AND TARGET xrpld)
install(
TARGETS xrpld
RUNTIME DESTINATION "${CMAKE_INSTALL_BINDIR}" COMPONENT runtime
)
install(
FILES "${CMAKE_CURRENT_SOURCE_DIR}/cfg/xrpld-example.cfg"
DESTINATION "${CMAKE_INSTALL_SYSCONFDIR}/xrpld"
RENAME xrpld.cfg
COMPONENT runtime
)
install(
FILES "${CMAKE_CURRENT_SOURCE_DIR}/cfg/validators-example.txt"
DESTINATION "${CMAKE_INSTALL_SYSCONFDIR}/xrpld"
RENAME validators.txt
COMPONENT runtime
)
endif()
install(
TARGETS
common
opts
xrpl_boost
xrpl_libs
xrpl_syslibs
xrpl.imports.main
xrpl.libpb
xrpl.libxrpl
xrpl.libxrpl.basics
xrpl.libxrpl.beast
xrpl.libxrpl.conditions
xrpl.libxrpl.core
xrpl.libxrpl.crypto
xrpl.libxrpl.git
xrpl.libxrpl.json
xrpl.libxrpl.rdb
xrpl.libxrpl.ledger
xrpl.libxrpl.net
xrpl.libxrpl.nodestore
xrpl.libxrpl.protocol
xrpl.libxrpl.resource
xrpl.libxrpl.server
xrpl.libxrpl.shamap
xrpl.libxrpl.tx
antithesis-sdk-cpp
EXPORT XrplExports
LIBRARY DESTINATION lib
ARCHIVE DESTINATION lib
RUNTIME DESTINATION bin
INCLUDES DESTINATION include
TARGETS xrpl.libpb xrpl.libxrpl
LIBRARY DESTINATION "${CMAKE_INSTALL_LIBDIR}" COMPONENT development
ARCHIVE DESTINATION "${CMAKE_INSTALL_LIBDIR}" COMPONENT development
RUNTIME DESTINATION "${CMAKE_INSTALL_BINDIR}" COMPONENT development
)
install(
DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}/include/xrpl"
DESTINATION "${CMAKE_INSTALL_INCLUDEDIR}"
)
install(
EXPORT XrplExports
FILE XrplTargets.cmake
NAMESPACE Xrpl::
DESTINATION lib/cmake/xrpl
)
include(CMakePackageConfigHelpers)
write_basic_package_version_file(
XrplConfigVersion.cmake
VERSION ${xrpld_version}
COMPATIBILITY SameMajorVersion
)
if(is_root_project AND TARGET xrpld)
install(TARGETS xrpld RUNTIME DESTINATION bin)
set_target_properties(xrpld PROPERTIES INSTALL_RPATH_USE_LINK_PATH ON)
# sample configs should not overwrite existing files
# install if-not-exists workaround as suggested by
# https://cmake.org/Bug/view.php?id=12646
install(
CODE
"
macro (copy_if_not_exists SRC DEST NEWNAME)
if (NOT EXISTS \"\$ENV{DESTDIR}\${CMAKE_INSTALL_PREFIX}/\${DEST}/\${NEWNAME}\")
file (INSTALL FILE_PERMISSIONS OWNER_READ OWNER_WRITE DESTINATION \"\${CMAKE_INSTALL_PREFIX}/\${DEST}\" FILES \"\${SRC}\" RENAME \"\${NEWNAME}\")
else ()
message (\"-- Skipping : \$ENV{DESTDIR}\${CMAKE_INSTALL_PREFIX}/\${DEST}/\${NEWNAME}\")
endif ()
endmacro()
copy_if_not_exists(\"${CMAKE_CURRENT_SOURCE_DIR}/cfg/xrpld-example.cfg\" etc xrpld.cfg)
copy_if_not_exists(\"${CMAKE_CURRENT_SOURCE_DIR}/cfg/validators-example.txt\" etc validators.txt)
"
)
install(
CODE
"
set(CMAKE_MODULE_PATH \"${CMAKE_MODULE_PATH}\")
include(create_symbolic_link)
create_symbolic_link(xrpld${suffix} \
\$ENV{DESTDIR}\${CMAKE_INSTALL_PREFIX}/${CMAKE_INSTALL_BINDIR}/rippled${suffix})
"
)
endif()
install(
FILES
${CMAKE_CURRENT_SOURCE_DIR}/cmake/XrplConfig.cmake
${CMAKE_CURRENT_BINARY_DIR}/XrplConfigVersion.cmake
DESTINATION lib/cmake/xrpl
COMPONENT development
)


@@ -50,6 +50,13 @@ if(MSVC AND CMAKE_GENERATOR_PLATFORM STREQUAL "Win32")
message(FATAL_ERROR "Visual Studio 32-bit build is not supported.")
endif()
if(voidstar AND NOT is_amd64)
message(
FATAL_ERROR
"The voidstar library only supported on amd64/x86_64. Detected archictecture was: ${CMAKE_SYSTEM_PROCESSOR}"
)
endif()
if(APPLE AND NOT HOMEBREW)
find_program(HOMEBREW brew)
endif()


@@ -10,10 +10,13 @@
"rocksdb/10.5.1#4a197eca381a3e5ae8adf8cffa5aacd0%1765850186.86",
"re2/20230301#ca3b241baec15bd31ea9187150e0b333%1765850148.103",
"protobuf/6.32.1#f481fd276fc23a33b85a3ed1e898b693%1765850161.038",
"openssl/3.5.5#05a4ac5b7323f7a329b2db1391d9941f%1769599205.414",
"opentelemetry-cpp/1.18.0#efd9851e173f8a13b9c7d35232de8cf1%1750409186.472",
"openssl/3.5.5#05a4ac5b7323f7a329b2db1391d9941f%1770229825.601",
"nudb/2.0.9#0432758a24204da08fee953ec9ea03cb%1769436073.32",
"nlohmann_json/3.11.3#45828be26eb619a2e04ca517bb7b828d%1701220705.259",
"lz4/1.10.0#59fc63cac7f10fbe8e05c7e62c2f3504%1765850143.914",
"libiconv/1.17#1e65319e945f2d31941a9d28cc13c058%1765842973.492",
"libcurl/8.18.0#364bc3755cb9ef84ed9a7ae9c7efc1c1%1770984390.024",
"libbacktrace/cci.20210118#a7691bfccd8caaf66309df196790a5a1%1765842973.03",
"libarchive/3.8.1#ffee18995c706e02bf96e7a2f7042e0d%1765850144.736",
"jemalloc/5.3.0#e951da9cf599e956cebc117880d2d9f8%1729241615.244",
@@ -30,9 +33,15 @@
"zlib/1.3.1#b8bc2603263cf7eccbd6e17e66b0ed76%1765850150.075",
"strawberryperl/5.32.1.1#707032463aa0620fa17ec0d887f5fe41%1765850165.196",
"protobuf/6.32.1#f481fd276fc23a33b85a3ed1e898b693%1765850161.038",
"pkgconf/2.5.1#93c2051284cba1279494a43a4fcfeae2%1757684701.089",
"opentelemetry-proto/1.4.0#4096a3b05916675ef9628f3ffd571f51%1732731336.11",
"ninja/1.13.2#c8c5dc2a52ed6e4e42a66d75b4717ceb%1764096931.974",
"nasm/2.16.01#31e26f2ee3c4346ecd347911bd126904%1765850144.707",
"msys2/cci.latest#eea83308ad7e9023f7318c60d5a9e6cb%1770199879.083",
"meson/1.10.0#60786758ea978964c24525de19603cf4%1768294926.103",
"m4/1.4.19#70dc8bbb33e981d119d2acc0175cf381%1763158052.846",
"libtool/2.4.7#14e7739cc128bc1623d2ed318008e47e%1755679003.847",
"gnu-config/cci.20210814#466e9d4d7779e1c142443f7ea44b4284%1762363589.329",
"cmake/4.2.0#ae0a44f44a1ef9ab68fd4b3e9a1f8671%1765850153.937",
"cmake/3.31.10#313d16a1aa16bbdb2ca0792467214b76%1765850153.479",
"b2/5.3.3#107c15377719889654eb9a162a673975%1765850144.355",


@@ -22,6 +22,7 @@ class Xrpl(ConanFile):
"rocksdb": [True, False],
"shared": [True, False],
"static": [True, False],
"telemetry": [True, False],
"tests": [True, False],
"unity": [True, False],
"xrpld": [True, False],
@@ -54,6 +55,7 @@ class Xrpl(ConanFile):
"rocksdb": True,
"shared": False,
"static": True,
"telemetry": True,
"tests": False,
"unity": False,
"xrpld": False,
@@ -140,6 +142,10 @@ class Xrpl(ConanFile):
self.requires("jemalloc/5.3.0")
if self.options.rocksdb:
self.requires("rocksdb/10.5.1")
# OpenTelemetry C++ SDK for distributed tracing (optional).
# Provides OTLP/HTTP exporter, batch span processor, and trace API.
if self.options.telemetry:
self.requires("opentelemetry-cpp/1.18.0")
self.requires("xxhash/0.8.3", **transitive_headers_opt)
exports_sources = (
@@ -168,6 +174,7 @@ class Xrpl(ConanFile):
tc.variables["rocksdb"] = self.options.rocksdb
tc.variables["BUILD_SHARED_LIBS"] = self.options.shared
tc.variables["static"] = self.options.static
tc.variables["telemetry"] = self.options.telemetry
tc.variables["unity"] = self.options.unity
tc.variables["xrpld"] = self.options.xrpld
tc.generate()
@@ -220,3 +227,5 @@ class Xrpl(ConanFile):
]
if self.options.rocksdb:
libxrpl.requires.append("rocksdb::librocksdb")
if self.options.telemetry:
libxrpl.requires.append("opentelemetry-cpp::opentelemetry-cpp")


@@ -87,6 +87,8 @@ words:
- daria
- dcmake
- dearmor
- Dedup
- dedup
- deleteme
- demultiplexer
- deserializaton
@@ -97,6 +99,7 @@ words:
- doxyfile
- dxrpl
- endmacro
- EOCFG
- exceptioned
- Falco
- finalizers
@@ -140,6 +143,7 @@ words:
- libxrpl
- llection
- LOCALGOOD
- logql
- logwstream
- lseq
- lsmf
@@ -180,6 +184,7 @@ words:
- NOLINTNEXTLINE
- nonxrp
- noripple
- nostd
- nudb
- nullptr
- nunl
@@ -192,8 +197,11 @@ words:
- permdex
- perminute
- permissioned
- pgrep
- pkill
- pointee
- populator
- pratik
- preauth
- preauthorization
- preauthorize
@@ -208,6 +216,7 @@ words:
- queuable
- Raphson
- replayer
- reqps
- rerere
- retriable
- RIPD
@@ -302,6 +311,10 @@ words:
- xchain
- ximinez
- EXPECT_STREQ
- Gantt
- gantt
- otelc
- traceql
- XMACRO
- xrpkuwait
- xrpl
@@ -309,3 +322,5 @@ words:
- xrplf
- xxhash
- xxhasher
- xychart
- zpages

docker/telemetry/TESTING.md

@@ -0,0 +1,703 @@
# OpenTelemetry Integration Testing Guide
This document describes how to verify the rippled OpenTelemetry telemetry
pipeline end-to-end, from span generation through the observability stack
(otel-collector, Jaeger, Prometheus, Grafana).
---
## Prerequisites
### Build xrpld with telemetry
```bash
conan install . --build=missing -o telemetry=True
cmake --preset default -Dtelemetry=ON
cmake --build --preset default --target xrpld
```
The binary is at `.build/xrpld`.
### Required tools
- **Docker** with `docker compose` (v2)
- **curl**
- **jq** (JSON processor)
### Verify binary
```bash
.build/xrpld --version
```
---
## Test 1: Single-Node Standalone (Quick Verification)
This test verifies RPC and transaction spans in standalone mode. Consensus
spans will not fire because standalone mode does not run consensus.
### Step 1: Start the observability stack
```bash
docker compose -f docker/telemetry/docker-compose.yml up -d
```
Wait for services to be ready:
```bash
# otel-collector health
curl -sf http://localhost:13133/ && echo "collector ready"
# Jaeger UI
curl -sf http://localhost:16686/ > /dev/null && echo "jaeger ready"
```
### Step 2: Start xrpld in standalone mode
```bash
.build/xrpld --conf docker/telemetry/xrpld-telemetry.cfg -a --start
```
Wait a few seconds for the node to initialize.
### Step 3: Exercise RPC spans
```bash
# server_info
curl -s http://localhost:5005 \
-d '{"method":"server_info"}' | jq .result.info.server_state
# server_state
curl -s http://localhost:5005 \
-d '{"method":"server_state"}' | jq .result.state.server_state
# ledger
curl -s http://localhost:5005 \
-d '{"method":"ledger","params":[{"ledger_index":"current"}]}' \
| jq .result.ledger_current_index
```
### Step 4: Submit a transaction
Close the ledger first (required in standalone mode):
```bash
curl -s http://localhost:5005 -d '{"method":"ledger_accept"}'
```
Submit a Payment from the genesis account:
```bash
curl -s http://localhost:5005 -d '{
"method": "submit",
"params": [{
"secret": "snoPBrXtMeMyMHUVTgbuqAfg1SUTb",
"tx_json": {
"TransactionType": "Payment",
"Account": "rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh",
"Destination": "rPMh7Pi9ct699iZUTWzJaUMR1o42VEfGqF",
"Amount": "10000000"
}
}]
}' | jq .result.engine_result
```
Expected result: `"tesSUCCESS"`.
Close the ledger again to finalize:
```bash
curl -s http://localhost:5005 -d '{"method":"ledger_accept"}'
```
### Step 5: Verify traces in Jaeger
Wait 5 seconds for the batch export, then:
```bash
JAEGER="http://localhost:16686"
# Check rippled service is registered
curl -s "$JAEGER/api/services" | jq '.data'
# Check RPC spans
curl -s "$JAEGER/api/traces?service=rippled&operation=rpc.request&limit=5&lookback=1h" \
| jq '.data | length'
curl -s "$JAEGER/api/traces?service=rippled&operation=rpc.process&limit=5&lookback=1h" \
| jq '.data | length'
curl -s "$JAEGER/api/traces?service=rippled&operation=rpc.command.server_info&limit=5&lookback=1h" \
| jq '.data | length'
# Check transaction spans
curl -s "$JAEGER/api/traces?service=rippled&operation=tx.process&limit=5&lookback=1h" \
| jq '.data | length'
```
Or open the Jaeger UI: http://localhost:16686
### Step 6: Teardown
```bash
# Kill xrpld (Ctrl+C or)
kill $(pgrep -f 'xrpld.*xrpld-telemetry')
# Stop observability stack
docker compose -f docker/telemetry/docker-compose.yml down
# Clean xrpld data
rm -rf data/
```
### Expected spans (standalone mode)
| Span Name | Expected | Notes |
| --------------------------- | -------- | ----------------------------- |
| `rpc.request` | Yes | Every HTTP RPC call |
| `rpc.process` | Yes | Every RPC processing |
| `rpc.command.server_info` | Yes | server_info RPC |
| `rpc.command.server_state` | Yes | server_state RPC |
| `rpc.command.ledger` | Yes | ledger RPC |
| `rpc.command.submit` | Yes | submit RPC |
| `rpc.command.ledger_accept` | Yes | ledger_accept RPC |
| `tx.process` | Yes | Transaction submission |
| `tx.receive` | No | No peers in standalone |
| `consensus.*` | No | Consensus disabled standalone |
---
## Test 2: 6-Node Consensus Network (Full Verification)
This test verifies ALL span categories including consensus and peer
transaction relay, using a 6-node validator network.
### Automated
Run the integration test script:
```bash
bash docker/telemetry/integration-test.sh
```
The script will:
1. Start the observability stack
2. Generate 6 validator key pairs
3. Create config files for each node
4. Start all 6 nodes
5. Wait for consensus ("proposing" state)
6. Exercise RPC, submit transactions
7. Verify all span categories in Jaeger
8. Verify spanmetrics in Prometheus
9. Print results and leave the stack running
### Manual
If you prefer to run the steps manually:
#### Step 1: Start observability stack
```bash
docker compose -f docker/telemetry/docker-compose.yml up -d
```
#### Step 2: Generate validator keys
Start a temporary standalone xrpld:
```bash
.build/xrpld --conf docker/telemetry/xrpld-telemetry.cfg -a --start &
TEMP_PID=$!
sleep 5
```
Generate 6 key pairs:
```bash
for i in $(seq 1 6); do
curl -s http://localhost:5005 \
-d '{"method":"validation_create"}' | jq '.result'
done
```
Record the `validation_seed` and `validation_public_key` for each.
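If you prefer to capture the keys programmatically instead of copying them by hand, a minimal sketch is shown below. The `seed.txt`/`pubkey.txt` filenames under `/tmp/xrpld-integration/node<N>/` are an assumption of this sketch; the later steps only require that the values are recorded somewhere.
```bash
mkdir -p /tmp/xrpld-integration
for i in $(seq 1 6); do
  mkdir -p /tmp/xrpld-integration/node$i
  # Each validation_create call returns one key pair; capture both fields
  # from the same response.
  resp=$(curl -s http://localhost:5005 -d '{"method":"validation_create"}')
  echo "$resp" | jq -r '.result.validation_seed'       > /tmp/xrpld-integration/node$i/seed.txt
  echo "$resp" | jq -r '.result.validation_public_key' > /tmp/xrpld-integration/node$i/pubkey.txt
done
```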
Kill the temporary node:
```bash
kill $TEMP_PID
rm -rf data/
```
#### Step 3: Create node configs
For each node (1-6), create a config file. Template:
```ini
[server]
port_rpc
port_peer
[port_rpc]
port = {5004 + node_number}
ip = 127.0.0.1
admin = 127.0.0.1
protocol = http
[port_peer]
port = {51234 + node_number}
ip = 0.0.0.0
protocol = peer
[node_db]
type=NuDB
path=/tmp/xrpld-integration/node{N}/nudb
online_delete=256
[database_path]
/tmp/xrpld-integration/node{N}/db
[debug_logfile]
/tmp/xrpld-integration/node{N}/debug.log
[validation_seed]
{seed from step 2}
[validators_file]
/tmp/xrpld-integration/validators.txt
[ips_fixed]
127.0.0.1 51235
127.0.0.1 51236
127.0.0.1 51237
127.0.0.1 51238
127.0.0.1 51239
127.0.0.1 51240
[peer_private]
1
[telemetry]
enabled=1
endpoint=http://localhost:4318/v1/traces
exporter=otlp_http
sampling_ratio=1.0
batch_size=512
batch_delay_ms=2000
max_queue_size=2048
trace_rpc=1
trace_transactions=1
trace_consensus=1
trace_peer=0
trace_ledger=1
[rpc_startup]
{ "command": "log_level", "severity": "warning" }
[ssl_verify]
0
```
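Writing the six configs by hand is tedious. The loop below is a sketch that renders the template for each node; it assumes the seeds were saved to `/tmp/xrpld-integration/node<N>/seed.txt` as in the Step 2 sketch, so adjust the paths if you recorded the keys elsewhere.
```bash
for i in $(seq 1 6); do
  dir=/tmp/xrpld-integration/node$i
  mkdir -p "$dir"
  cat > "$dir/xrpld.cfg" <<EOF
[server]
port_rpc
port_peer

[port_rpc]
port = $((5004 + i))
ip = 127.0.0.1
admin = 127.0.0.1
protocol = http

[port_peer]
port = $((51234 + i))
ip = 0.0.0.0
protocol = peer

[node_db]
type=NuDB
path=$dir/nudb
online_delete=256

[database_path]
$dir/db

[debug_logfile]
$dir/debug.log

[validation_seed]
$(cat "$dir/seed.txt")

[validators_file]
/tmp/xrpld-integration/validators.txt

[ips_fixed]
127.0.0.1 51235
127.0.0.1 51236
127.0.0.1 51237
127.0.0.1 51238
127.0.0.1 51239
127.0.0.1 51240

[peer_private]
1

[telemetry]
enabled=1
endpoint=http://localhost:4318/v1/traces
exporter=otlp_http
sampling_ratio=1.0
batch_size=512
batch_delay_ms=2000
max_queue_size=2048
trace_rpc=1
trace_transactions=1
trace_consensus=1
trace_peer=0
trace_ledger=1

[rpc_startup]
{ "command": "log_level", "severity": "warning" }

[ssl_verify]
0
EOF
done
```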
#### Step 4: Create validators.txt
```ini
[validators]
{public_key_1}
{public_key_2}
{public_key_3}
{public_key_4}
{public_key_5}
{public_key_6}
```
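If you captured the public keys to `pubkey.txt` files as in the earlier sketch, the file can be assembled with:
```bash
{
  echo "[validators]"
  for i in $(seq 1 6); do
    cat /tmp/xrpld-integration/node$i/pubkey.txt
  done
} > /tmp/xrpld-integration/validators.txt
```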
#### Step 5: Start all 6 nodes
```bash
for i in $(seq 1 6); do
.build/xrpld --conf /tmp/xrpld-integration/node$i/xrpld.cfg --start &
echo $! > /tmp/xrpld-integration/node$i/xrpld.pid
done
```
#### Step 6: Wait for consensus
Poll each node until `server_state` = `"proposing"`:
```bash
for port in 5005 5006 5007 5008 5009 5010; do
while true; do
state=$(curl -s http://localhost:$port \
-d '{"method":"server_info"}' \
| jq -r '.result.info.server_state')
echo "Port $port: $state"
[ "$state" = "proposing" ] && break
sleep 5
done
done
```
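The loop above waits indefinitely. For scripted runs, a bounded variant (a sketch; the 5-minute budget per node is an arbitrary choice) fails fast instead of hanging:
```bash
for port in 5005 5006 5007 5008 5009 5010; do
  ok=0
  for _ in $(seq 1 60); do     # up to ~5 minutes per node at 5 s intervals
    state=$(curl -s http://localhost:$port \
      -d '{"method":"server_info"}' \
      | jq -r '.result.info.server_state')
    echo "Port $port: $state"
    if [ "$state" = "proposing" ]; then ok=1; break; fi
    sleep 5
  done
  if [ "$ok" != "1" ]; then
    echo "Port $port never reached proposing" >&2
    exit 1
  fi
done
```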
#### Step 7: Exercise RPC and submit transaction
```bash
# RPC calls
curl -s http://localhost:5005 -d '{"method":"server_info"}'
curl -s http://localhost:5005 -d '{"method":"server_state"}'
curl -s http://localhost:5005 -d '{"method":"ledger","params":[{"ledger_index":"current"}]}'
# Submit transaction
curl -s http://localhost:5005 -d '{
"method": "submit",
"params": [{
"secret": "snoPBrXtMeMyMHUVTgbuqAfg1SUTb",
"tx_json": {
"TransactionType": "Payment",
"Account": "rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh",
"Destination": "rPMh7Pi9ct699iZUTWzJaUMR1o42VEfGqF",
"Amount": "10000000"
}
}]
}'
```
Wait 15 seconds for consensus and batch export.
#### Step 8: Verify in Jaeger
See the "Verification Queries" section below.
---
## Expected Span Catalog
All 17 production span names instrumented across Phases 2-5:
| Span Name | Source File | Phase | Key Attributes | How to Trigger |
| --------------------------- | --------------------- | ----- | --------------------------------------------------------------------------------- | ------------------------- |
| `rpc.request` | ServerHandler.cpp:271 | 2 | -- | Any HTTP RPC call |
| `rpc.process` | ServerHandler.cpp:573 | 2 | -- | Any HTTP RPC call |
| `rpc.ws_message` | ServerHandler.cpp:384 | 2 | -- | WebSocket RPC message |
| `rpc.command.<name>` | RPCHandler.cpp:161 | 2 | `xrpl.rpc.command`, `xrpl.rpc.version`, `xrpl.rpc.role` | Any RPC command |
| `tx.process` | NetworkOPs.cpp:1227 | 3 | `xrpl.tx.hash`, `xrpl.tx.local`, `xrpl.tx.path` | Submit transaction |
| `tx.receive` | PeerImp.cpp:1273 | 3 | `xrpl.peer.id` | Peer relays transaction |
| `consensus.proposal.send` | RCLConsensus.cpp:177 | 4 | `xrpl.consensus.round` | Consensus proposing phase |
| `consensus.ledger_close` | RCLConsensus.cpp:282 | 4 | `xrpl.consensus.ledger.seq`, `xrpl.consensus.mode` | Ledger close event |
| `consensus.accept` | RCLConsensus.cpp:395 | 4 | `xrpl.consensus.proposers`, `xrpl.consensus.round_time_ms` | Ledger accepted |
| `consensus.validation.send` | RCLConsensus.cpp:753 | 4 | `xrpl.consensus.ledger.seq`, `xrpl.consensus.proposing` | Validation sent |
| `consensus.accept.apply` | RCLConsensus.cpp:453 | 4 | `xrpl.consensus.close_time`, `close_time_correct`, `close_resolution_ms`, `state` | Ledger apply + close time |
| `tx.apply` | BuildLedger.cpp:88 | 5 | `xrpl.ledger.tx_count`, `xrpl.ledger.tx_failed` | Ledger close (tx set) |
| `ledger.build` | BuildLedger.cpp:31 | 5 | `xrpl.ledger.seq` | Ledger build |
| `ledger.validate` | LedgerMaster.cpp:915 | 5 | `xrpl.ledger.seq`, `xrpl.ledger.validations` | Ledger validated |
| `ledger.store` | LedgerMaster.cpp:409 | 5 | `xrpl.ledger.seq` | Ledger stored |
| `peer.proposal.receive` | PeerImp.cpp:1667 | 5 | `xrpl.peer.id`, `xrpl.peer.proposal.trusted` | Peer sends proposal |
| `peer.validation.receive` | PeerImp.cpp:2264 | 5 | `xrpl.peer.id`, `xrpl.peer.validation.trusted` | Peer sends validation |
---
## Verification Queries
### Jaeger API
Base URL: `http://localhost:16686`
```bash
JAEGER="http://localhost:16686"
# List all services
curl -s "$JAEGER/api/services" | jq '.data'
# List operations for rippled
curl -s "$JAEGER/api/services/rippled/operations" | jq '.data'
# Query traces by operation
for op in "rpc.request" "rpc.process" \
"rpc.command.server_info" "rpc.command.server_state" "rpc.command.ledger" \
"tx.process" "tx.receive" "tx.apply" \
"consensus.proposal.send" "consensus.ledger_close" \
"consensus.accept" "consensus.accept.apply" \
"consensus.validation.send" \
"ledger.build" "ledger.validate" "ledger.store" \
"peer.proposal.receive" "peer.validation.receive"; do
count=$(curl -s "$JAEGER/api/traces?service=rippled&operation=$op&limit=5&lookback=1h" \
| jq '.data | length')
printf "%-35s %s traces\n" "$op" "$count"
done
```
### Prometheus API
Base URL: `http://localhost:9090`
```bash
PROM="http://localhost:9090"
# Span call counts (from spanmetrics connector)
curl -s "$PROM/api/v1/query?query=traces_span_metrics_calls_total" \
| jq '.data.result[] | {span: .metric.span_name, count: .value[1]}'
# Latency histogram
curl -s "$PROM/api/v1/query?query=traces_span_metrics_duration_milliseconds_count" \
| jq '.data.result[] | {span: .metric.span_name, count: .value[1]}'
# RPC calls by command (use --data-urlencode so the label matcher survives URL
# quoting; the spanmetrics connector exposes the xrpl.rpc.command attribute as
# the xrpl_rpc_command label)
curl -sG "$PROM/api/v1/query" \
--data-urlencode 'query=traces_span_metrics_calls_total{span_name=~"rpc.command.*"}' \
| jq '.data.result[] | {command: .metric.xrpl_rpc_command, count: .value[1]}'
```
### System Metrics (beast::insight via OTel native)
rippled's built-in `beast::insight` framework exports metrics natively via OTLP/HTTP to the OTel Collector
on port 4318 (the same collector port as traces, using the `/v1/metrics` path). These appear in Prometheus
alongside the spanmetrics series. This requires an `[insight]` section in `xrpld.cfg`:
```ini
[insight]
server=otel
endpoint=http://localhost:4318/v1/metrics
prefix=rippled
```
Verify system metrics in Prometheus:
```bash
PROM="http://localhost:9090"
# Ledger age gauge
curl -s "$PROM/api/v1/query?query=rippled_LedgerMaster_Validated_Ledger_Age" | jq '.data.result'
# Peer counts
curl -s "$PROM/api/v1/query?query=rippled_Peer_Finder_Active_Inbound_Peers" | jq '.data.result'
# RPC request counter
curl -s "$PROM/api/v1/query?query=rippled_rpc_requests" | jq '.data.result'
# State accounting
curl -s "$PROM/api/v1/query?query=rippled_State_Accounting_Full_duration" | jq '.data.result'
# Overlay traffic
curl -s "$PROM/api/v1/query?query=rippled_total_Bytes_In" | jq '.data.result'
```
Key system metrics (prefix `rippled_`):
| Metric | Type | Source |
| ------------------------------------- | --------- | ----------------------------------------- |
| `LedgerMaster_Validated_Ledger_Age` | gauge | LedgerMaster.h:373 |
| `LedgerMaster_Published_Ledger_Age` | gauge | LedgerMaster.h:374 |
| `State_Accounting_{Mode}_duration` | gauge | NetworkOPs.cpp:774 |
| `State_Accounting_{Mode}_transitions` | gauge | NetworkOPs.cpp:780 |
| `Peer_Finder_Active_Inbound_Peers` | gauge | PeerfinderManager.cpp:214 |
| `Peer_Finder_Active_Outbound_Peers` | gauge | PeerfinderManager.cpp:215 |
| `Overlay_Peer_Disconnects` | gauge | OverlayImpl.h:557 |
| `job_count` | gauge | JobQueue.cpp:26 |
| `rpc_requests` | counter | ServerHandler.cpp:108 |
| `rpc_time` | histogram | ServerHandler.cpp:110 |
| `rpc_size` | histogram | ServerHandler.cpp:109 |
| `ios_latency` | histogram | Application.cpp:438 |
| `pathfind_fast` | histogram | PathRequests.h:23 |
| `pathfind_full` | histogram | PathRequests.h:24 |
| `ledger_fetches` | counter | InboundLedgers.cpp:44 |
| `ledger_history_mismatch` | counter | LedgerHistory.cpp:16 |
| `warn` | counter | Logic.h:33 |
| `drop` | counter | Logic.h:34 |
| `{category}_Bytes_In/Out` | gauge | OverlayImpl.h:535 (57 traffic categories) |
| `{category}_Messages_In/Out` | gauge | OverlayImpl.h:535 (57 traffic categories) |
### Grafana
Open http://localhost:3000 (anonymous admin access enabled).
Pre-configured dashboards (span-derived):
- **RPC Performance**: Request rates, latency percentiles by command, top commands, WebSocket rate
- **Transaction Overview**: Transaction processing rates, apply duration, peer relay, failed tx rate
- **Consensus Health**: Consensus round duration, proposer counts, mode tracking, accept heatmap
- **Ledger Operations**: Build/validate/store rates and durations, TX apply metrics
- **Peer Network**: Proposal/validation receive rates, trusted vs untrusted breakdown (requires `trace_peer=1`)
Pre-configured dashboards (system metrics):
- **Node Health (System Metrics)**: Validated/published ledger age, operating mode, I/O latency, job queue
- **Network Traffic (System Metrics)**: Peer counts, disconnects, overlay traffic by category
- **RPC & Pathfinding (System Metrics)**: RPC request rate/time/size, pathfinding duration, resource warnings
Pre-configured datasources:
- **Jaeger**: Trace data at `http://jaeger:16686`
- **Tempo**: Trace data at `http://tempo:3200` (via Grafana Explore)
- **Prometheus**: Metrics at `http://prometheus:9090`
- **Loki**: Log data at `http://loki:3100` (via Grafana Explore)
---
## Test 3: Log-Trace Correlation (Phase 8)
Phase 8 injects `trace_id` and `span_id` into rippled's log output when
a log line is emitted within an active OTel span. This test verifies the
end-to-end log-trace correlation pipeline.
### Step 1: Verify trace_id in log output
After running Test 1 or Test 2 (which generate RPC spans), check the
rippled debug.log for trace context:
```bash
grep 'trace_id=[a-f0-9]\{32\} span_id=[a-f0-9]\{16\}' /path/to/debug.log
```
Expected: log lines with `trace_id=<32hex> span_id=<16hex>` between the
severity code and the message. Example:
```
2024-01-15T10:30:45.123Z RPCHandler:NFO trace_id=abc123def456789012345678abcdef01 span_id=0123456789abcdef Calling server_info
```
Lines emitted outside of an active span (background tasks, startup) will
NOT have trace context — this is expected.
### Step 2: Cross-check trace_id in Jaeger
Extract a `trace_id` from the log and verify it exists in Jaeger:
```bash
TRACE_ID=$(grep -o 'trace_id=[a-f0-9]\{32\}' /path/to/debug.log | head -1 | cut -d= -f2)
echo "Checking trace: $TRACE_ID"
curl -s "http://localhost:16686/api/traces/$TRACE_ID" | jq '.data | length'
```
Expected result: `1` (the trace exists in Jaeger).
### Step 3: Verify Loki log ingestion
The OTel Collector's filelog receiver tails rippled's debug.log and
exports parsed entries to Loki. Verify Loki has received entries:
```bash
# Query Loki for any rippled logs
curl -sG "http://localhost:3100/loki/api/v1/query" \
--data-urlencode 'query={job="rippled"}' \
--data-urlencode 'limit=5' | jq '.data.result | length'
```
Expected: > 0 results.
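To close the correlation loop, you can also query Loki for the specific `trace_id` extracted in Step 2 (this assumes `$TRACE_ID` is still set from that step; `query_range` defaults to the last hour):
```bash
curl -sG "http://localhost:3100/loki/api/v1/query_range" \
  --data-urlencode "query={job=\"rippled\"} |= \"trace_id=$TRACE_ID\"" \
  --data-urlencode 'limit=5' | jq '.data.result | length'
```
Expected: > 0, i.e. the log line(s) carrying that trace context are retrievable by trace ID.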
### Step 4: Verify Grafana Tempo-to-Loki correlation
1. Open Grafana at http://localhost:3000
2. Navigate to **Explore** -> select **Tempo** datasource
3. Search for a trace (e.g., operation `rpc.command.server_info`)
4. Click **"Logs for this trace"** in the trace detail view
5. Verify that Loki log lines appear, filtered by the trace's `trace_id`
### Step 5: Verify Grafana Loki-to-Tempo correlation
1. In Grafana **Explore**, select **Loki** datasource
2. Query: `{job="rippled"} |= "trace_id="`
3. In the log results, click the **TraceID** derived field link
4. Verify it navigates to the full trace in Tempo
### Expected results
| Check | Expected |
| ------------------------------ | ---------------------------------------- |
| `trace_id=` in debug.log | Present in log lines within active spans |
| `span_id=` in debug.log | Present alongside trace_id |
| Logs without active span | No trace_id/span_id fields |
| trace_id in Jaeger | Matches a valid trace |
| Loki log ingestion | Logs visible via LogQL |
| Tempo -> Loki "Logs for trace" | Shows correlated log lines |
| Loki -> Tempo TraceID link | Navigates to correct trace |
---
## Troubleshooting
### No traces in Jaeger
1. Check otel-collector logs:
```bash
docker compose -f docker/telemetry/docker-compose.yml logs otel-collector
```
2. Verify xrpld telemetry config has `enabled=1` and correct endpoint
3. Check that the otel-collector OTLP/HTTP port 4318 is open (the root path typically returns 404, so avoid `curl -f` here; a 404 still means the port is reachable):
```bash
curl -s -o /dev/null http://localhost:4318 && echo "port 4318 reachable"
```
4. Decrease `batch_delay_ms` (or `batch_size`) in the xrpld `[telemetry]` config so spans are flushed to the collector sooner
### Nodes not reaching "proposing" state
1. Check that all peer ports (51235-51240) are not in use:
```bash
for p in 51235 51236 51237 51238 51239 51240; do
ss -tlnp | grep ":$p " && echo "port $p in use"
done
```
2. Verify `[ips_fixed]` lists all 6 peer ports
3. Verify `validators.txt` has all 6 public keys
4. Check node debug logs: `tail -50 /tmp/xrpld-integration/node1/debug.log`
5. Ensure `[peer_private]` is set to `1` (prevents the test nodes from reaching out to the public network)
### Transaction not processing
1. Verify genesis account exists:
```bash
curl -s http://localhost:5005 \
-d '{"method":"account_info","params":[{"account":"rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh"}]}' \
| jq .result.account_data.Balance
```
2. Check submit response for error codes
3. In standalone mode, remember to call `ledger_accept` after submitting
### No trace_id in log output (Phase 8)
1. Verify rippled was built with `telemetry=ON` (`-Dtelemetry=ON` in CMake)
2. Verify `enabled=1` in the `[telemetry]` config section
3. Log lines only contain trace context when emitted inside an active span.
Background logs (startup, periodic tasks outside spans) will not have
`trace_id`/`span_id`.
4. Ensure the trace category is enabled (e.g., `trace_rpc=1` for RPC logs)
### No logs in Loki (Phase 8)
1. Verify the log file mount in docker-compose.yml:
```yaml
volumes:
- /tmp/xrpld-integration:/var/log/rippled:ro
```
2. Check OTel Collector logs for filelog receiver errors:
```bash
docker compose -f docker/telemetry/docker-compose.yml logs otel-collector | grep -i "filelog\|loki\|error"
```
3. Verify Loki is running:
```bash
curl -s http://localhost:3100/ready
```
4. Verify the filelog receiver glob pattern matches your log files:
The default pattern is `/var/log/rippled/*/debug.log`
### Grafana trace-log links not working (Phase 8)
1. Verify `tracesToLogs` is configured in the Tempo datasource provisioning
(`docker/telemetry/grafana/provisioning/datasources/tempo.yaml`)
2. Verify `derivedFields` is configured in the Loki datasource provisioning
(`docker/telemetry/grafana/provisioning/datasources/loki.yaml`)
3. Restart Grafana after changing provisioning files:
```bash
docker compose -f docker/telemetry/docker-compose.yml restart grafana
```
### Spanmetrics not appearing in Prometheus
1. Verify otel-collector config has `spanmetrics` connector
2. Check that the metrics pipeline is configured:
```yaml
service:
pipelines:
metrics:
receivers: [otlp, spanmetrics]
exporters: [prometheus]
```
3. Verify Prometheus can reach collector:
```bash
curl -s http://localhost:9090/api/v1/targets | jq '.data.activeTargets'
```

View File

@@ -0,0 +1,137 @@
# Docker Compose workload harness for Phase 10 telemetry validation.
#
# Runs a 5-node validator cluster with full OTel telemetry stack:
# - 5 rippled validator nodes (consensus network)
# - OTel Collector (traces + StatsD metrics)
# - Jaeger (trace search UI)
# - Tempo (production trace backend)
# - Prometheus (metrics)
# - Loki (log aggregation for log-trace correlation)
# - Grafana (dashboards + trace/log exploration)
#
# Usage:
# # Start the harness (requires pre-built xrpld image or mount binary):
# docker compose -f docker/telemetry/docker-compose.workload.yaml up -d
#
# # Or use the orchestrator:
# docker/telemetry/workload/run-full-validation.sh
#
# Prerequisites:
# - xrpld binary built with -DXRPL_ENABLE_TELEMETRY=ON
# - Validator keys generated via generate-validator-keys.sh
# - Node configs generated by run-full-validation.sh
version: "3.8"
services:
# ---------------------------------------------------------------------------
# Telemetry Backend Stack
# ---------------------------------------------------------------------------
otel-collector:
image: otel/opentelemetry-collector-contrib:latest
command: ["--config=/etc/otel-collector-config.yaml"]
ports:
- "4317:4317" # OTLP gRPC
- "4318:4318" # OTLP HTTP
- "8125:8125/udp" # StatsD UDP (beast::insight metrics)
- "8889:8889" # Prometheus metrics endpoint
- "13133:13133" # Health check
volumes:
- ../otel-collector-config.yaml:/etc/otel-collector-config.yaml:ro
depends_on:
- jaeger
- tempo
networks:
- workload-net
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:13133/"]
interval: 5s
timeout: 3s
retries: 10
jaeger:
image: jaegertracing/all-in-one:latest
environment:
- COLLECTOR_OTLP_ENABLED=true
ports:
- "16686:16686" # Jaeger UI
- "14250:14250" # gRPC
networks:
- workload-net
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:16686/"]
interval: 5s
timeout: 3s
retries: 10
tempo:
image: grafana/tempo:2.7.2
command: ["-config.file=/etc/tempo.yaml"]
ports:
- "3200:3200" # Tempo HTTP API
volumes:
- ../tempo.yaml:/etc/tempo.yaml:ro
- tempo-data:/var/tempo
networks:
- workload-net
prometheus:
image: prom/prometheus:latest
ports:
- "9090:9090"
volumes:
- ../prometheus.yml:/etc/prometheus/prometheus.yml:ro
depends_on:
otel-collector:
condition: service_healthy
networks:
- workload-net
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:9090/-/healthy"]
interval: 5s
timeout: 3s
retries: 10
loki:
image: grafana/loki:2.9.4
ports:
- "3100:3100" # Loki HTTP API
command: ["-config.file=/etc/loki/local-config.yaml"]
networks:
- workload-net
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:3100/ready"]
interval: 5s
timeout: 3s
retries: 10
grafana:
image: grafana/grafana:latest
environment:
- GF_AUTH_ANONYMOUS_ENABLED=true
- GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
ports:
- "3000:3000"
volumes:
- ../grafana/provisioning:/etc/grafana/provisioning:ro
- ../grafana/dashboards:/var/lib/grafana/dashboards:ro
depends_on:
- jaeger
- tempo
- prometheus
- loki
networks:
- workload-net
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:3000/api/health"]
interval: 5s
timeout: 3s
retries: 10
volumes:
tempo-data:
networks:
workload-net:
driver: bridge

View File

@@ -0,0 +1,118 @@
# Docker Compose stack for rippled OpenTelemetry observability.
#
# Provides services for local development:
# - otel-collector: receives OTLP traces from rippled, batches and
# forwards them to Jaeger and Tempo. Also tails rippled log files
# via filelog receiver and exports to Loki. Listens on ports
# 4317 (gRPC), 4318 (HTTP), and 8125 (StatsD UDP).
# - jaeger: all-in-one tracing backend with UI on port 16686.
# - tempo: Grafana Tempo tracing backend, queryable via Grafana Explore
# on port 3000. Recommended for production (S3/GCS storage, TraceQL).
# - loki: Grafana Loki log aggregation backend for centralized log
# ingestion and log-trace correlation (Phase 8).
# - grafana: dashboards on port 3000, pre-configured with Jaeger, Tempo,
# Prometheus, and Loki datasources.
#
# Usage:
# docker compose -f docker/telemetry/docker-compose.yml up -d
#
# Configure rippled to export traces by adding to xrpld.cfg:
# [telemetry]
# enabled=1
# endpoint=http://localhost:4318/v1/traces
version: "3.8"
services:
otel-collector:
image: otel/opentelemetry-collector-contrib:latest
command: ["--config=/etc/otel-collector-config.yaml"]
ports:
- "4317:4317" # OTLP gRPC
- "4318:4318" # OTLP HTTP (traces + native OTel metrics)
- "8889:8889" # Prometheus metrics (spanmetrics + OTLP)
- "13133:13133" # Health check
# StatsD UDP port removed — beast::insight now uses native OTLP.
# Uncomment if using server=statsd fallback:
# - "8125:8125/udp"
volumes:
- ./otel-collector-config.yaml:/etc/otel-collector-config.yaml:ro
# Phase 8: Mount rippled log directory for filelog receiver.
# The integration test writes logs to /tmp/xrpld-integration/;
# mount it read-only so the collector can tail debug.log files.
- /tmp/xrpld-integration:/var/log/rippled:ro
depends_on:
- jaeger
- tempo
- loki
networks:
- rippled-telemetry
jaeger:
image: jaegertracing/all-in-one:latest
environment:
- COLLECTOR_OTLP_ENABLED=true
ports:
- "16686:16686" # Jaeger UI
- "14250:14250" # gRPC
networks:
- rippled-telemetry
tempo:
image: grafana/tempo:2.7.2
command: ["-config.file=/etc/tempo.yaml"]
ports:
- "3200:3200" # Tempo HTTP API (health, query)
volumes:
- ./tempo.yaml:/etc/tempo.yaml:ro
- tempo-data:/var/tempo
networks:
- rippled-telemetry
# Phase 8: Grafana Loki for centralized log ingestion and log-trace
# correlation. Loki 3.x supports native OTLP ingestion, so the OTel
# Collector exports via otlphttp to Loki's /otlp endpoint.
# Query logs via Grafana Explore -> Loki at http://localhost:3000.
loki:
image: grafana/loki:3.4.2
ports:
- "3100:3100"
command: -config.file=/etc/loki/local-config.yaml
networks:
- rippled-telemetry
prometheus:
image: prom/prometheus:latest
ports:
- "9090:9090"
volumes:
- ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
depends_on:
- otel-collector
networks:
- rippled-telemetry
grafana:
image: grafana/grafana:latest
environment:
- GF_AUTH_ANONYMOUS_ENABLED=true
- GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
ports:
- "3000:3000"
volumes:
- ./grafana/provisioning:/etc/grafana/provisioning:ro
- ./grafana/dashboards:/var/lib/grafana/dashboards:ro
depends_on:
- jaeger
- tempo
- prometheus
- loki
networks:
- rippled-telemetry
volumes:
tempo-data:
networks:
rippled-telemetry:
driver: bridge

View File

@@ -0,0 +1,440 @@
{
"annotations": {
"list": []
},
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 1,
"id": null,
"links": [],
"panels": [
{
"title": "Consensus Round Duration",
"description": "p95 and p50 duration of consensus accept rounds. The consensus.accept span (RCLConsensus.cpp:395) measures the time to process an accepted ledger including transaction application and state finalization. The span carries xrpl.consensus.proposers and xrpl.consensus.round_time_ms attributes. Normal range is 3-6 seconds on mainnet.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.95, sum by (le) (rate(traces_span_metrics_duration_milliseconds_bucket{exported_instance=~\"$node\", span_name=\"consensus.accept\"}[5m])))",
"legendFormat": "P95 Round Duration"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.50, sum by (le) (rate(traces_span_metrics_duration_milliseconds_bucket{exported_instance=~\"$node\", span_name=\"consensus.accept\"}[5m])))",
"legendFormat": "P50 Round Duration"
}
],
"fieldConfig": {
"defaults": {
"unit": "ms",
"custom": {
"axisLabel": "Duration (ms)",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Consensus Proposals Sent Rate",
"description": "Rate at which this node sends consensus proposals to the network. Sourced from the consensus.proposal.send span (RCLConsensus.cpp:177) which fires each time the node proposes a transaction set. The span carries xrpl.consensus.round identifying the consensus round number. A healthy proposing node should show steady proposal output.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum(rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", span_name=\"consensus.proposal.send\"}[5m]))",
"legendFormat": "Proposals / Sec"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"custom": {
"axisLabel": "Proposals / Sec",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Ledger Close Duration",
"description": "p95 duration of the ledger close event. The consensus.ledger_close span (RCLConsensus.cpp:282) measures the time from when consensus triggers a ledger close to completion. Carries xrpl.consensus.ledger.seq and xrpl.consensus.mode attributes. Compare with Consensus Round Duration to understand how close timing relates to overall round time.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.95, sum by (le) (rate(traces_span_metrics_duration_milliseconds_bucket{exported_instance=~\"$node\", span_name=\"consensus.ledger_close\"}[5m])))",
"legendFormat": "P95 Close Duration"
}
],
"fieldConfig": {
"defaults": {
"unit": "ms",
"custom": {
"axisLabel": "Duration (ms)",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Validation Send Rate",
"description": "Rate at which this node sends ledger validations to the network. Sourced from the consensus.validation.send span (RCLConsensus.cpp:753). Each validation confirms the node has fully validated a ledger. The span carries xrpl.consensus.ledger.seq and xrpl.consensus.proposing. Should closely track the ledger close rate when the node is healthy.",
"type": "stat",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum(rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", span_name=\"consensus.validation.send\"}[5m]))",
"legendFormat": "Validations / Sec"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops"
},
"overrides": []
}
},
{
"title": "Ledger Apply Duration (doAccept)",
"description": "Time spent applying the consensus result to build a new ledger. Measured by the consensus.accept.apply span in doAccept().",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 16
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.95, sum by (le) (rate(traces_span_metrics_duration_milliseconds_bucket{exported_instance=~\"$node\", span_name=\"consensus.accept.apply\"}[5m])))",
"legendFormat": "P95 Apply Duration"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.50, sum by (le) (rate(traces_span_metrics_duration_milliseconds_bucket{exported_instance=~\"$node\", span_name=\"consensus.accept.apply\"}[5m])))",
"legendFormat": "P50 Apply Duration"
}
],
"fieldConfig": {
"defaults": {
"unit": "ms"
},
"overrides": []
}
},
{
"title": "Close Time Agreement",
"description": "Rate of close time agreement vs disagreement across consensus rounds. Based on xrpl.consensus.close_time_correct attribute (true = validators agreed, false = agreed to disagree per avCT_CONSENSUS_PCT).",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 16
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum(rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", span_name=\"consensus.accept.apply\"}[5m]))",
"legendFormat": "Total Rounds / Sec"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops"
},
"overrides": []
}
},
{
"title": "Consensus Mode Over Time",
"description": "Breakdown of consensus ledger close events by the node's consensus mode (proposing, observing, wrongLedger, switchedLedger). Grouped by the xrpl.consensus.mode span attribute from consensus.ledger_close. A healthy validator should be predominantly in 'proposing' mode. Frequent 'wrongLedger' or 'switchedLedger' indicates sync issues.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 24
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum by (xrpl_consensus_mode) (rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", span_name=\"consensus.ledger_close\"}[5m]))",
"legendFormat": "{{xrpl_consensus_mode}}"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"custom": {
"axisLabel": "Events / Sec",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Accept vs Close Rate",
"description": "Compares the rate of consensus.accept (ledger accepted after consensus) vs consensus.ledger_close (ledger close initiated). These should track closely in a healthy network. A divergence means some close events are not completing the accept phase, potentially indicating consensus failures or timeouts.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 24
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum(rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", span_name=\"consensus.accept\"}[5m]))",
"legendFormat": "Accepts / Sec"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "sum(rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", span_name=\"consensus.ledger_close\"}[5m]))",
"legendFormat": "Closes / Sec"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"custom": {
"axisLabel": "Events / Sec",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Validation vs Close Rate",
"description": "Compares the rate of consensus.validation.send vs consensus.ledger_close. Each validated ledger should produce one validation message. If validations lag behind closes, the node may be falling behind on validation or experiencing issues with the validation pipeline.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 32
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum(rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", span_name=\"consensus.validation.send\"}[5m]))",
"legendFormat": "Validations / Sec"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "sum(rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", span_name=\"consensus.ledger_close\"}[5m]))",
"legendFormat": "Closes / Sec"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"custom": {
"axisLabel": "Events / Sec",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Consensus Accept Duration Heatmap",
"description": "Heatmap showing the distribution of consensus.accept span durations across histogram buckets over time. Each cell represents how many accept events fell into that duration bucket in a 5m window. Useful for detecting outlier consensus rounds that take abnormally long.",
"type": "heatmap",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 32
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
},
"yAxis": {
"axisLabel": "Duration (ms)"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum(increase(traces_span_metrics_duration_milliseconds_bucket{exported_instance=~\"$node\", span_name=\"consensus.accept\"}[5m])) by (le)",
"legendFormat": "{{le}}",
"format": "heatmap"
}
]
}
],
"schemaVersion": 39,
"tags": ["rippled", "consensus", "telemetry"],
"templating": {
"list": [
{
"name": "node",
"label": "Node",
"description": "Filter by rippled node (service.instance.id \u2014 e.g. Node-1)",
"type": "query",
"query": "label_values(traces_span_metrics_calls_total, exported_instance)",
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"includeAll": true,
"allValue": ".*",
"current": {
"text": "All",
"value": "$__all"
},
"multi": true,
"refresh": 2,
"sort": 1
},
{
"name": "consensus_mode",
"label": "Consensus Mode",
"description": "Filter by consensus mode (proposing, observing, wrongLedger, switchedLedger)",
"type": "query",
"query": "label_values(traces_span_metrics_calls_total{span_name=\"consensus.ledger_close\"}, xrpl_consensus_mode)",
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"includeAll": true,
"allValue": ".*",
"current": {
"text": "All",
"value": "$__all"
},
"multi": true,
"refresh": 2,
"sort": 1
}
]
},
"time": {
"from": "now-1h",
"to": "now"
},
"title": "Consensus Health",
"uid": "rippled-consensus"
}

View File

@@ -0,0 +1,347 @@
{
"annotations": {
"list": []
},
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 1,
"id": null,
"links": [],
"panels": [
{
"title": "Ledger Build Rate",
"description": "Rate at which new ledgers are being built. The ledger.build span (BuildLedger.cpp:31) wraps the entire buildLedgerImpl() function which creates a new ledger from a parent, applies transactions, flushes SHAMap nodes, and sets the accepted state. Should match the consensus close rate (~0.25/sec on mainnet with ~4s rounds).",
"type": "stat",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum(rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", span_name=\"ledger.build\"}[5m]))",
"legendFormat": "Builds / Sec"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops"
},
"overrides": []
}
},
{
"title": "Ledger Build Duration",
"description": "p95 and p50 duration of ledger builds. Measures the full buildLedgerImpl() call including transaction application, SHAMap flushing, and ledger acceptance. The span records xrpl.ledger.seq as an attribute. Long build times indicate expensive transaction sets or I/O pressure from SHAMap flushes.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.95, sum by (le) (rate(traces_span_metrics_duration_milliseconds_bucket{exported_instance=~\"$node\", span_name=\"ledger.build\"}[5m])))",
"legendFormat": "P95 Build Duration"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.50, sum by (le) (rate(traces_span_metrics_duration_milliseconds_bucket{exported_instance=~\"$node\", span_name=\"ledger.build\"}[5m])))",
"legendFormat": "P50 Build Duration"
}
],
"fieldConfig": {
"defaults": {
"unit": "ms",
"custom": {
"axisLabel": "Duration (ms)",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Ledger Validation Rate",
"description": "Rate at which ledgers pass the validation threshold and are accepted as fully validated. The ledger.validate span (LedgerMaster.cpp:915) fires in checkAccept() only after the ledger receives sufficient trusted validations (>= quorum). Records xrpl.ledger.seq and xrpl.ledger.validations (the number of validations received).",
"type": "stat",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum(rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", span_name=\"ledger.validate\"}[5m]))",
"legendFormat": "Validations / Sec"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops"
},
"overrides": []
}
},
{
"title": "Ledger Build Duration Heatmap",
"description": "Heatmap showing the distribution of ledger.build durations across histogram buckets over time. Each cell represents the count of ledger builds that fell into that duration bucket in a 5m window. Useful for spotting occasional slow ledger builds that may not appear in percentile charts.",
"type": "heatmap",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
},
"yAxis": {
"axisLabel": "Duration (ms)"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum(increase(traces_span_metrics_duration_milliseconds_bucket{exported_instance=~\"$node\", span_name=\"ledger.build\"}[5m])) by (le)",
"legendFormat": "{{le}}",
"format": "heatmap"
}
]
},
{
"title": "Transaction Apply Duration",
"description": "p95 and p50 duration of applying the consensus transaction set during ledger building. The tx.apply span (BuildLedger.cpp:88) wraps applyTransactions() which iterates through the CanonicalTXSet with multiple retry passes. Records xrpl.ledger.tx_count (successful) and xrpl.ledger.tx_failed (failed) as attributes.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.95, sum by (le) (rate(traces_span_metrics_duration_milliseconds_bucket{exported_instance=~\"$node\", span_name=\"tx.apply\"}[5m])))",
"legendFormat": "P95 tx.apply"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.50, sum by (le) (rate(traces_span_metrics_duration_milliseconds_bucket{exported_instance=~\"$node\", span_name=\"tx.apply\"}[5m])))",
"legendFormat": "P50 tx.apply"
}
],
"fieldConfig": {
"defaults": {
"unit": "ms",
"custom": {
"axisLabel": "Duration (ms)",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Transaction Apply Rate",
"description": "Rate of tx.apply span invocations, reflecting how frequently the transaction application phase runs during ledger building. Each ledger build triggers one tx.apply call. Should closely match the ledger build rate.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum(rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", span_name=\"tx.apply\"}[5m]))",
"legendFormat": "tx.apply / Sec"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"custom": {
"axisLabel": "Operations / Sec",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Ledger Store Rate",
"description": "Rate at which ledgers are stored into the ledger history. The ledger.store span (LedgerMaster.cpp:409) wraps storeLedger() which inserts the ledger into the LedgerHistory cache. Records xrpl.ledger.seq. Should match the ledger build rate under normal operation.",
"type": "stat",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 24
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum(rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", span_name=\"ledger.store\"}[5m]))",
"legendFormat": "Stores / Sec"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops"
},
"overrides": []
}
},
{
"title": "Build vs Close Duration",
"description": "Compares p95 durations of ledger.build (the actual ledger construction in BuildLedger.cpp) vs consensus.ledger_close (the consensus close event in RCLConsensus.cpp). Build time is a subset of close time. A large gap between them indicates overhead in the consensus pipeline outside of ledger construction itself.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 24
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.95, sum by (le) (rate(traces_span_metrics_duration_milliseconds_bucket{exported_instance=~\"$node\", span_name=\"ledger.build\"}[5m])))",
"legendFormat": "P95 ledger.build"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.95, sum by (le) (rate(traces_span_metrics_duration_milliseconds_bucket{exported_instance=~\"$node\", span_name=\"consensus.ledger_close\"}[5m])))",
"legendFormat": "P95 consensus.ledger_close"
}
],
"fieldConfig": {
"defaults": {
"unit": "ms",
"custom": {
"axisLabel": "Duration (ms)",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
}
],
"schemaVersion": 39,
"tags": ["rippled", "ledger", "telemetry"],
"templating": {
"list": [
{
"name": "node",
"label": "Node",
"description": "Filter by rippled node (service.instance.id \u2014 e.g. Node-1)",
"type": "query",
"query": "label_values(traces_span_metrics_calls_total, exported_instance)",
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"includeAll": true,
"allValue": ".*",
"current": {
"text": "All",
"value": "$__all"
},
"multi": true,
"refresh": 2,
"sort": 1
}
]
},
"time": {
"from": "now-1h",
"to": "now"
},
"title": "Ledger Operations",
"uid": "rippled-ledger-ops"
}

View File

@@ -0,0 +1,195 @@
{
"annotations": {
"list": []
},
"description": "Requires trace_peer=1 in the [telemetry] config section.",
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 1,
"id": null,
"links": [],
"panels": [
{
"title": "Peer Proposal Receive Rate",
"description": "Rate of consensus proposals received from network peers. The peer.proposal.receive span (PeerImp.cpp:1667) fires in onMessage(TMProposeSet) for each incoming proposal. Records xrpl.peer.id (sending peer) and xrpl.peer.proposal.trusted (whether the proposer is in our UNL). Requires trace_peer=1 in the telemetry config.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum(rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", span_name=\"peer.proposal.receive\"}[5m]))",
"legendFormat": "Proposals Received / Sec"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"custom": {
"axisLabel": "Proposals / Sec",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Peer Validation Receive Rate",
"description": "Rate of ledger validations received from network peers. The peer.validation.receive span (PeerImp.cpp:2264) fires in onMessage(TMValidation) for each incoming validation message. Records xrpl.peer.id (sending peer) and xrpl.peer.validation.trusted (whether the validator is trusted). Requires trace_peer=1 in the telemetry config.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum(rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", span_name=\"peer.validation.receive\"}[5m]))",
"legendFormat": "Validations Received / Sec"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"custom": {
"axisLabel": "Validations / Sec",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Proposals Trusted vs Untrusted",
"description": "Pie chart showing the ratio of proposals received from trusted validators (in our UNL) vs untrusted validators. Grouped by the xrpl.peer.proposal.trusted span attribute (true/false). A healthy node connected to a well-configured UNL should see a significant portion of trusted proposals. Note: proposals that fail early validation may not have the trusted attribute set.",
"type": "piechart",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum by (xrpl_peer_proposal_trusted) (rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", span_name=\"peer.proposal.receive\"}[5m]))",
"legendFormat": "Trusted = {{xrpl_peer_proposal_trusted}}"
}
]
},
{
"title": "Validations Trusted vs Untrusted",
"description": "Pie chart showing the ratio of validations received from trusted validators (in our UNL) vs untrusted validators. Grouped by the xrpl.peer.validation.trusted span attribute (true/false). Monitoring this helps detect if the node is receiving validations from the expected set of trusted validators. Note: validations that fail early checks may not have the trusted attribute set.",
"type": "piechart",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum by (xrpl_peer_validation_trusted) (rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", span_name=\"peer.validation.receive\"}[5m]))",
"legendFormat": "Trusted = {{xrpl_peer_validation_trusted}}"
}
]
}
],
"schemaVersion": 39,
"tags": ["rippled", "peer", "telemetry"],
"templating": {
"list": [
{
"name": "node",
"label": "Node",
"description": "Filter by rippled node (service.instance.id \u2014 e.g. Node-1)",
"type": "query",
"query": "label_values(traces_span_metrics_calls_total, exported_instance)",
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"includeAll": true,
"allValue": ".*",
"current": {
"text": "All",
"value": "$__all"
},
"multi": true,
"refresh": 2,
"sort": 1
},
{
"name": "proposal_trusted",
"label": "Proposal Trusted",
"description": "Filter by proposal trust status (true = from trusted validator)",
"type": "query",
"query": "label_values(traces_span_metrics_calls_total{span_name=\"peer.proposal.receive\"}, xrpl_peer_proposal_trusted)",
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"includeAll": true,
"allValue": ".*",
"current": {
"text": "All",
"value": "$__all"
},
"multi": true,
"refresh": 2,
"sort": 1
}
]
},
"time": {
"from": "now-1h",
"to": "now"
},
"title": "Peer Network",
"uid": "rippled-peer-net"
}

View File

@@ -0,0 +1,338 @@
{
"annotations": {
"list": []
},
"description": "Fee market dynamics: TxQ depth/capacity, fee escalation levels, and load factor breakdown. Sourced from OTel MetricsRegistry observable gauges (Phase 9).",
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 1,
"id": null,
"links": [],
"panels": [
{
"title": "Transaction Queue Depth",
"description": "Current number of transactions waiting in the queue vs. maximum capacity. Sourced from MetricsRegistry txq_metrics observable gauge with metric=txq_count and metric=txq_max_size.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_txq_metrics{exported_instance=~\"$node\", metric=\"txq_count\"}",
"legendFormat": "Queue Depth"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_txq_metrics{exported_instance=~\"$node\", metric=\"txq_max_size\"}",
"legendFormat": "Max Capacity"
}
],
"fieldConfig": {
"defaults": {
"unit": "short",
"custom": {
"drawStyle": "line",
"lineWidth": 2,
"fillOpacity": 10
},
"color": {
"mode": "palette-classic"
}
},
"overrides": []
}
},
{
"title": "Transactions Per Ledger",
"description": "Transactions in the current open ledger vs. expected per-ledger count. Sourced from txq_metrics with metric=txq_in_ledger and metric=txq_per_ledger.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_txq_metrics{exported_instance=~\"$node\", metric=\"txq_in_ledger\"}",
"legendFormat": "In Ledger"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_txq_metrics{exported_instance=~\"$node\", metric=\"txq_per_ledger\"}",
"legendFormat": "Expected Per Ledger"
}
],
"fieldConfig": {
"defaults": {
"unit": "short",
"custom": {
"drawStyle": "line",
"lineWidth": 2,
"fillOpacity": 10
},
"color": {
"mode": "palette-classic"
}
},
"overrides": []
}
},
{
"title": "Fee Escalation Levels",
"description": "Fee levels that control transaction queue admission. Reference fee level is the baseline; open ledger fee level triggers escalation. Sourced from txq_metrics observable gauge.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_txq_metrics{exported_instance=~\"$node\", metric=\"txq_reference_fee_level\"}",
"legendFormat": "Reference Fee Level"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_txq_metrics{exported_instance=~\"$node\", metric=\"txq_min_processing_fee_level\"}",
"legendFormat": "Min Processing Fee Level"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_txq_metrics{exported_instance=~\"$node\", metric=\"txq_med_fee_level\"}",
"legendFormat": "Median Fee Level"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_txq_metrics{exported_instance=~\"$node\", metric=\"txq_open_ledger_fee_level\"}",
"legendFormat": "Open Ledger Fee Level"
}
],
"fieldConfig": {
"defaults": {
"unit": "short",
"custom": {
"drawStyle": "line",
"lineWidth": 2,
"fillOpacity": 5,
"scaleDistribution": {
"type": "log",
"log": 2
}
},
"color": {
"mode": "palette-classic"
}
},
"overrides": []
}
},
{
"title": "Load Factor Breakdown",
"description": "Decomposed load factor components: server (max of local, net, cluster), fee escalation, fee queue, and combined. Values are unitless multipliers where 1.0 = no load. Sourced from load_factor_metrics observable gauge.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_load_factor_metrics{exported_instance=~\"$node\", metric=\"load_factor\"}",
"legendFormat": "Combined Load Factor"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_load_factor_metrics{exported_instance=~\"$node\", metric=\"load_factor_server\"}",
"legendFormat": "Server"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_load_factor_metrics{exported_instance=~\"$node\", metric=\"load_factor_fee_escalation\"}",
"legendFormat": "Fee Escalation"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_load_factor_metrics{exported_instance=~\"$node\", metric=\"load_factor_fee_queue\"}",
"legendFormat": "Fee Queue"
}
],
"fieldConfig": {
"defaults": {
"unit": "short",
"custom": {
"drawStyle": "line",
"lineWidth": 2,
"fillOpacity": 5
},
"color": {
"mode": "palette-classic"
},
"thresholds": {
"steps": [
{
"color": "green",
"value": null
},
{
"color": "yellow",
"value": 2
},
{
"color": "red",
"value": 10
}
]
}
},
"overrides": []
}
},
{
"title": "Load Factor Components",
"description": "Individual load factor contributors: local server load, network load, and cluster load. Only differ from 1.0 under load conditions. Sourced from load_factor_metrics observable gauge.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_load_factor_metrics{exported_instance=~\"$node\", metric=\"load_factor_local\"}",
"legendFormat": "Local"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_load_factor_metrics{exported_instance=~\"$node\", metric=\"load_factor_net\"}",
"legendFormat": "Network"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_load_factor_metrics{exported_instance=~\"$node\", metric=\"load_factor_cluster\"}",
"legendFormat": "Cluster"
}
],
"fieldConfig": {
"defaults": {
"unit": "short",
"custom": {
"drawStyle": "line",
"lineWidth": 2,
"fillOpacity": 5
},
"color": {
"mode": "palette-classic"
}
},
"overrides": []
}
}
],
"schemaVersion": 39,
"tags": ["rippled", "otel", "fee-market"],
"templating": {
"list": [
{
"name": "node",
"label": "Node",
"description": "Filter by rippled node (service.instance.id)",
"type": "query",
"query": "label_values(exported_instance)",
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"includeAll": true,
"allValue": ".*",
"current": {
"text": "All",
"value": "$__all"
},
"multi": true,
"refresh": 2,
"sort": 1
}
]
},
"time": {
"from": "now-1h",
"to": "now"
},
"timepicker": {},
"timezone": "browser",
"title": "rippled - Fee Market & TxQ",
"uid": "rippled-fee-market",
"version": 1
}

View File

@@ -0,0 +1,365 @@
{
"annotations": {
"list": []
},
"description": "Job queue analysis: per-job-type throughput rates, queue wait times, and execution times. Sourced from OTel MetricsRegistry synchronous counters and histograms (Phase 9).",
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 1,
"id": null,
"links": [],
"panels": [
{
"title": "Job Throughput Rate (per second)",
"description": "Rate of jobs queued, started, and finished across all job types. Computed as rate() over the OTel counter values. High queue rates with low finish rates indicate backlog.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum(rate(rippled_job_queued_total{exported_instance=~\"$node\"}[5m]))",
"legendFormat": "Queued/s"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "sum(rate(rippled_job_started_total{exported_instance=~\"$node\"}[5m]))",
"legendFormat": "Started/s"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "sum(rate(rippled_job_finished_total{exported_instance=~\"$node\"}[5m]))",
"legendFormat": "Finished/s"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"custom": {
"drawStyle": "line",
"lineWidth": 2,
"fillOpacity": 10
},
"color": {
"mode": "palette-classic"
}
},
"overrides": []
}
},
{
"title": "Per-Job-Type Queued Rate",
"description": "Rate of jobs queued broken down by job_type label. Identifies which job types contribute most to queue activity.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
},
"legend": {
"displayMode": "table",
"placement": "right",
"calcs": ["mean", "max"]
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "topk(10, rate(rippled_job_queued_total{exported_instance=~\"$node\"}[5m]))",
"legendFormat": "{{job_type}}"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"custom": {
"drawStyle": "line",
"lineWidth": 1,
"fillOpacity": 5
},
"color": {
"mode": "palette-classic"
}
},
"overrides": []
}
},
{
"title": "Per-Job-Type Finish Rate",
"description": "Rate of jobs completing broken down by job_type. Compare with queued rate to identify backlog per type.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
},
"legend": {
"displayMode": "table",
"placement": "right",
"calcs": ["mean", "max"]
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "topk(10, rate(rippled_job_finished_total{exported_instance=~\"$node\"}[5m]))",
"legendFormat": "{{job_type}}"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"custom": {
"drawStyle": "line",
"lineWidth": 1,
"fillOpacity": 5
},
"color": {
"mode": "palette-classic"
}
},
"overrides": []
}
},
{
"title": "Job Queue Wait Time (p50, p95, p99)",
"description": "Histogram quantiles for time jobs spend waiting in the queue before execution starts. High values indicate thread pool saturation.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.50, sum by (le) (rate(rippled_job_queued_duration_us_bucket{exported_instance=~\"$node\"}[5m])))",
"legendFormat": "p50"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.95, sum by (le) (rate(rippled_job_queued_duration_us_bucket{exported_instance=~\"$node\"}[5m])))",
"legendFormat": "p95"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.99, sum by (le) (rate(rippled_job_queued_duration_us_bucket{exported_instance=~\"$node\"}[5m])))",
"legendFormat": "p99"
}
],
"fieldConfig": {
"defaults": {
"unit": "us",
"custom": {
"drawStyle": "line",
"lineWidth": 2,
"fillOpacity": 5
},
"color": {
"mode": "palette-classic"
}
},
"overrides": []
}
},
{
"title": "Job Execution Time (p50, p95, p99)",
"description": "Histogram quantiles for actual job execution time. High values indicate expensive operations or resource contention.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.50, sum by (le) (rate(rippled_job_running_duration_us_bucket{exported_instance=~\"$node\"}[5m])))",
"legendFormat": "p50"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.95, sum by (le) (rate(rippled_job_running_duration_us_bucket{exported_instance=~\"$node\"}[5m])))",
"legendFormat": "p95"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.99, sum by (le) (rate(rippled_job_running_duration_us_bucket{exported_instance=~\"$node\"}[5m])))",
"legendFormat": "p99"
}
],
"fieldConfig": {
"defaults": {
"unit": "us",
"custom": {
"drawStyle": "line",
"lineWidth": 2,
"fillOpacity": 5
},
"color": {
"mode": "palette-classic"
}
},
"overrides": []
}
},
{
"title": "Per-Job-Type Execution Time (p95)",
"description": "95th percentile execution time broken down by job type. Identifies the slowest job types.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 24
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
},
"legend": {
"displayMode": "table",
"placement": "right",
"calcs": ["mean", "max"]
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "topk(10, histogram_quantile(0.95, sum by (le, job_type) (rate(rippled_job_running_duration_us_bucket{exported_instance=~\"$node\"}[5m]))))",
"legendFormat": "{{job_type}}"
}
],
"fieldConfig": {
"defaults": {
"unit": "us",
"custom": {
"drawStyle": "line",
"lineWidth": 1,
"fillOpacity": 5
},
"color": {
"mode": "palette-classic"
}
},
"overrides": []
}
}
],
"schemaVersion": 39,
"tags": ["rippled", "otel", "job-queue"],
"templating": {
"list": [
{
"name": "node",
"label": "Node",
"description": "Filter by rippled node (service.instance.id)",
"type": "query",
"query": "label_values(exported_instance)",
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"includeAll": true,
"allValue": ".*",
"current": {
"text": "All",
"value": "$__all"
},
"multi": true,
"refresh": 2,
"sort": 1
},
{
"name": "job_type",
"label": "Job Type",
"description": "Filter by job type",
"type": "query",
"query": "label_values(rippled_job_queued_total, job_type)",
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"includeAll": true,
"allValue": ".*",
"current": {
"text": "All",
"value": "$__all"
},
"multi": true,
"refresh": 2,
"sort": 1
}
]
},
"time": {
"from": "now-1h",
"to": "now"
},
"timepicker": {},
"timezone": "browser",
"title": "rippled - Job Queue Analysis",
"uid": "rippled-job-queue",
"version": 1
}
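The PromQL used in these panels can be spot-checked outside Grafana via the Prometheus HTTP API. A minimal Python sketch follows; it assumes Prometheus is reachable at http://localhost:9090 (an assumption — adjust the URL for your stack) and that the Phase 9 rippled_job_* metrics are already being scraped.

import requests

PROM_URL = "http://localhost:9090"  # assumption: point this at your Prometheus instance

def instant_query(expr: str) -> list:
    """Run an instant query and return the result vector."""
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": expr}, timeout=10)
    resp.raise_for_status()
    body = resp.json()
    if body.get("status") != "success":
        raise RuntimeError(f"query failed: {body}")
    return body["data"]["result"]

# Same expression as the "Job Queue Wait Time" panel (p95), without the $node filter.
expr = (
    "histogram_quantile(0.95, sum by (le) "
    "(rate(rippled_job_queued_duration_us_bucket[5m])))"
)
for sample in instant_query(expr):
    print(sample["metric"], sample["value"][1], "us")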


@@ -0,0 +1,374 @@
{
"annotations": {
"list": []
},
"description": "Per-RPC-method performance: call rates, error rates, and latency distributions. Sourced from OTel MetricsRegistry synchronous counters and histograms (Phase 9).",
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 1,
"id": null,
"links": [],
"panels": [
{
"title": "RPC Call Rate (all methods)",
"description": "Aggregate rate of RPC calls started, finished, and errored across all methods. Computed as rate() over OTel counters.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum(rate(rippled_rpc_method_started_total{exported_instance=~\"$node\"}[5m]))",
"legendFormat": "Started/s"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "sum(rate(rippled_rpc_method_finished_total{exported_instance=~\"$node\"}[5m]))",
"legendFormat": "Finished/s"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "sum(rate(rippled_rpc_method_errored_total{exported_instance=~\"$node\"}[5m]))",
"legendFormat": "Errored/s"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"custom": {
"drawStyle": "line",
"lineWidth": 2,
"fillOpacity": 10
},
"color": {
"mode": "palette-classic"
}
},
"overrides": []
}
},
{
"title": "Per-Method Call Rate (Top 10)",
"description": "Per-method RPC call rate, showing the 10 most active methods. Useful for identifying hot paths.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
},
"legend": {
"displayMode": "table",
"placement": "right",
"calcs": ["mean", "max"]
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "topk(10, rate(rippled_rpc_method_started_total{exported_instance=~\"$node\"}[5m]))",
"legendFormat": "{{method}}"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"custom": {
"drawStyle": "line",
"lineWidth": 1,
"fillOpacity": 5
},
"color": {
"mode": "palette-classic"
}
},
"overrides": []
}
},
{
"title": "Per-Method Error Rate (Top 10)",
"description": "Per-method RPC error rate. Non-zero values warrant investigation. Common culprits: invalid parameters, resource exhaustion.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
},
"legend": {
"displayMode": "table",
"placement": "right",
"calcs": ["mean", "max"]
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "topk(10, rate(rippled_rpc_method_errored_total{exported_instance=~\"$node\"}[5m]))",
"legendFormat": "{{method}}"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"custom": {
"drawStyle": "line",
"lineWidth": 1,
"fillOpacity": 5
},
"color": {
"mode": "palette-classic"
}
},
"overrides": []
}
},
{
"title": "RPC Latency (p50, p95, p99) - All Methods",
"description": "Histogram quantiles for RPC execution time across all methods. Sourced from rpc_method_duration_us histogram.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.50, sum by (le) (rate(rippled_rpc_method_duration_us_bucket{exported_instance=~\"$node\"}[5m])))",
"legendFormat": "p50"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.95, sum by (le) (rate(rippled_rpc_method_duration_us_bucket{exported_instance=~\"$node\"}[5m])))",
"legendFormat": "p95"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.99, sum by (le) (rate(rippled_rpc_method_duration_us_bucket{exported_instance=~\"$node\"}[5m])))",
"legendFormat": "p99"
}
],
"fieldConfig": {
"defaults": {
"unit": "us",
"custom": {
"drawStyle": "line",
"lineWidth": 2,
"fillOpacity": 5
},
"color": {
"mode": "palette-classic"
}
},
"overrides": []
}
},
{
"title": "Per-Method Latency p95 (Top 10 Slowest)",
"description": "95th percentile execution time per method. Identifies the slowest RPC endpoints.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
},
"legend": {
"displayMode": "table",
"placement": "right",
"calcs": ["mean", "max"]
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "topk(10, histogram_quantile(0.95, sum by (le, method) (rate(rippled_rpc_method_duration_us_bucket{exported_instance=~\"$node\"}[5m]))))",
"legendFormat": "{{method}}"
}
],
"fieldConfig": {
"defaults": {
"unit": "us",
"custom": {
"drawStyle": "line",
"lineWidth": 1,
"fillOpacity": 5
},
"color": {
"mode": "palette-classic"
}
},
"overrides": []
}
},
{
"title": "RPC Error Ratio by Method",
"description": "Error ratio (errors / total started) per method. Values above 0.05 (5%) warrant investigation.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 24
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
},
"legend": {
"displayMode": "table",
"placement": "right",
"calcs": ["mean", "max"]
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "topk(10, rate(rippled_rpc_method_errored_total{exported_instance=~\"$node\"}[5m]) / (rate(rippled_rpc_method_started_total{exported_instance=~\"$node\"}[5m]) > 0))",
"legendFormat": "{{method}}"
}
],
"fieldConfig": {
"defaults": {
"unit": "percentunit",
"min": 0,
"max": 1,
"custom": {
"drawStyle": "line",
"lineWidth": 1,
"fillOpacity": 5
},
"color": {
"mode": "palette-classic"
},
"thresholds": {
"steps": [
{
"color": "green",
"value": null
},
{
"color": "yellow",
"value": 0.05
},
{
"color": "red",
"value": 0.25
}
]
}
},
"overrides": []
}
}
],
"schemaVersion": 39,
"tags": ["rippled", "otel", "rpc"],
"templating": {
"list": [
{
"name": "node",
"label": "Node",
"description": "Filter by rippled node (service.instance.id)",
"type": "query",
"query": "label_values(exported_instance)",
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"includeAll": true,
"allValue": ".*",
"current": {
"text": "All",
"value": "$__all"
},
"multi": true,
"refresh": 2,
"sort": 1
},
{
"name": "method",
"label": "RPC Method",
"description": "Filter by RPC method",
"type": "query",
"query": "label_values(rippled_rpc_method_started_total, method)",
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"includeAll": true,
"allValue": ".*",
"current": {
"text": "All",
"value": "$__all"
},
"multi": true,
"refresh": 2,
"sort": 1
}
]
},
"time": {
"from": "now-1h",
"to": "now"
},
"timepicker": {},
"timezone": "browser",
"title": "rippled - RPC Performance (OTel)",
"uid": "rippled-rpc-perf",
"version": 1
}
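As with the job-queue dashboard, the error-ratio panel can be checked directly. A short sketch under the same assumption (Prometheus at http://localhost:9090); it evaluates the per-method error ratio and flags anything above the 5% threshold used in the panel.

import requests

PROM_URL = "http://localhost:9090"  # assumption

# Same ratio as the "RPC Error Ratio by Method" panel, without the $node filter or topk.
EXPR = (
    "rate(rippled_rpc_method_errored_total[5m]) "
    "/ (rate(rippled_rpc_method_started_total[5m]) > 0)"
)

resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": EXPR}, timeout=10)
resp.raise_for_status()
for sample in resp.json()["data"]["result"]:
    method = sample["metric"].get("method", "<unknown>")
    ratio = float(sample["value"][1])
    print(f"{('WARN' if ratio > 0.05 else 'ok'):4s} {method}: {ratio:.2%}")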


@@ -0,0 +1,376 @@
{
"annotations": {
"list": []
},
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 1,
"id": null,
"links": [],
"panels": [
{
"title": "RPC Request Rate by Command",
"description": "Per-second rate of RPC command executions, broken down by command name (e.g. server_info, submit). Calculated as rate(traces_span_metrics_calls_total{span_name=~\"rpc.command.*\"}) over a 5m window, grouped by the xrpl.rpc.command span attribute.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum by (xrpl_rpc_command) (rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", xrpl_rpc_command=~\"$command\", span_name=~\"rpc.command.*\"}[5m]))",
"legendFormat": "{{xrpl_rpc_command}}"
}
],
"fieldConfig": {
"defaults": {
"unit": "reqps",
"custom": {
"axisLabel": "Requests / Sec",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "RPC Latency p95 by Command",
"description": "95th percentile response time for each RPC command. Computed from the spanmetrics duration histogram using histogram_quantile(0.95) over rpc.command.* spans, grouped by xrpl.rpc.command. High values indicate slow commands that may need optimization.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.95, sum by (le, xrpl_rpc_command) (rate(traces_span_metrics_duration_milliseconds_bucket{exported_instance=~\"$node\", xrpl_rpc_command=~\"$command\", span_name=~\"rpc.command.*\"}[5m])))",
"legendFormat": "P95 {{xrpl_rpc_command}}"
}
],
"fieldConfig": {
"defaults": {
"unit": "ms",
"custom": {
"axisLabel": "Latency (ms)",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "RPC Error Rate",
"description": "Percentage of RPC commands that completed with an error status, per command. Calculated as (error calls / total calls) * 100, where errors have status_code=STATUS_CODE_ERROR. Thresholds: green < 1%, yellow 1-5%, red > 5%.",
"type": "bargauge",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum by (xrpl_rpc_command) (rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", xrpl_rpc_command=~\"$command\", span_name=~\"rpc.command.*\", status_code=\"STATUS_CODE_ERROR\"}[5m])) / sum by (xrpl_rpc_command) (rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", xrpl_rpc_command=~\"$command\", span_name=~\"rpc.command.*\"}[5m])) * 100",
"legendFormat": "{{xrpl_rpc_command}}"
}
],
"fieldConfig": {
"defaults": {
"unit": "percent",
"thresholds": {
"steps": [
{
"color": "green",
"value": null
},
{
"color": "yellow",
"value": 1
},
{
"color": "red",
"value": 5
}
]
}
},
"overrides": []
}
},
{
"title": "RPC Latency Heatmap",
"description": "Distribution of RPC command response times across histogram buckets. Shows the density of requests at each latency level over time. Each cell represents the count of requests that fell into that duration bucket in a 5m window. Useful for spotting bimodal latency patterns.",
"type": "heatmap",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
},
"yAxis": {
"axisLabel": "Duration (ms)"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum(increase(traces_span_metrics_duration_milliseconds_bucket{exported_instance=~\"$node\", xrpl_rpc_command=~\"$command\", span_name=~\"rpc.command.*\"}[5m])) by (le)",
"legendFormat": "{{le}}",
"format": "heatmap"
}
]
},
{
"title": "Overall RPC Throughput",
"description": "Aggregate RPC throughput showing two layers of the request pipeline. rpc.request is the outer HTTP handler (ServerHandler.cpp:271) that accepts incoming connections. rpc.process is the inner processing layer (ServerHandler.cpp:573) that parses and dispatches. A gap between the two indicates requests being queued or rejected before processing.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum(rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", xrpl_rpc_command=~\"$command\", span_name=\"rpc.request\"}[5m]))",
"legendFormat": "rpc.request / Sec"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "sum(rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", xrpl_rpc_command=~\"$command\", span_name=\"rpc.process\"}[5m]))",
"legendFormat": "rpc.process / Sec"
}
],
"fieldConfig": {
"defaults": {
"unit": "reqps",
"custom": {
"axisLabel": "Requests / Sec",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "RPC Success vs Error",
"description": "Aggregate rate of successful vs failed RPC commands across all command types. Success = status_code UNSET (OpenTelemetry default for OK spans). Error = status_code STATUS_CODE_ERROR. A sustained error rate warrants investigation via per-command breakdown above.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum(rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", xrpl_rpc_command=~\"$command\", span_name=~\"rpc.command.*\", status_code=\"STATUS_CODE_UNSET\"}[5m]))",
"legendFormat": "Success"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "sum(rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", xrpl_rpc_command=~\"$command\", span_name=~\"rpc.command.*\", status_code=\"STATUS_CODE_ERROR\"}[5m]))",
"legendFormat": "Error"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"custom": {
"axisLabel": "Commands / Sec",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Top Commands by Volume",
"description": "Top 10 most frequently called RPC commands by total invocation count over the last 5 minutes. Uses topk(10, increase(calls_total)) to rank commands. Helps identify the hottest API endpoints driving load on the node.",
"type": "bargauge",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 24
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "topk(10, sum by (xrpl_rpc_command) (increase(traces_span_metrics_calls_total{exported_instance=~\"$node\", xrpl_rpc_command=~\"$command\", span_name=~\"rpc.command.*\"}[5m])))",
"legendFormat": "{{xrpl_rpc_command}}"
}
],
"fieldConfig": {
"defaults": {
"unit": "short"
},
"overrides": []
}
},
{
"title": "WebSocket Message Rate",
"description": "Rate of incoming WebSocket RPC messages processed by the server. Sourced from the rpc.ws_message span (ServerHandler.cpp:384). Only active when clients connect via WebSocket instead of HTTP. Zero is normal if only HTTP RPC is in use.",
"type": "stat",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 24
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum(rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", xrpl_rpc_command=~\"$command\", span_name=\"rpc.ws_message\"}[5m]))",
"legendFormat": "WS Messages / Sec"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops"
},
"overrides": []
}
}
],
"schemaVersion": 39,
"tags": ["rippled", "rpc", "telemetry"],
"templating": {
"list": [
{
"name": "node",
"label": "Node",
"description": "Filter by rippled node (service.instance.id \u2014 e.g. Node-1)",
"type": "query",
"query": "label_values(traces_span_metrics_calls_total, exported_instance)",
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"includeAll": true,
"allValue": ".*",
"current": {
"text": "All",
"value": "$__all"
},
"multi": true,
"refresh": 2,
"sort": 1
},
{
"name": "command",
"label": "RPC Command",
"description": "Filter by RPC command name (e.g., server_info, submit)",
"type": "query",
"query": "label_values(traces_span_metrics_calls_total{span_name=~\"rpc.command.*\"}, xrpl_rpc_command)",
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"includeAll": true,
"allValue": ".*",
"current": {
"text": "All",
"value": "$__all"
},
"multi": true,
"refresh": 2,
"sort": 1
}
]
},
"time": {
"from": "now-1h",
"to": "now"
},
"title": "RPC Performance",
"uid": "rippled-rpc-perf"
}
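These panels only populate if the Collector's spanmetrics connector is exporting the rpc.* spans. One quick way to confirm that, under the same localhost:9090 assumption, is to list the span_name label values behind traces_span_metrics_calls_total:

import requests

PROM_URL = "http://localhost:9090"  # assumption

resp = requests.get(
    f"{PROM_URL}/api/v1/label/span_name/values",
    params={"match[]": "traces_span_metrics_calls_total"},
    timeout=10,
)
resp.raise_for_status()
rpc_spans = [v for v in resp.json()["data"] if v.startswith("rpc.")]
print("rpc.* span names with metrics:", rpc_spans or "none (check the spanmetrics connector)")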


@@ -0,0 +1,527 @@
{
"annotations": {
"list": []
},
"description": "Ledger data exchange and object fetch traffic from beast::insight System Metrics. Covers ledger sync, node data retrieval, and transaction set exchange. Requires [insight] server=otel in rippled config.",
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 1,
"id": null,
"links": [],
"panels": [
{
"title": "Ledger Data Exchange (Bytes In)",
"description": "Inbound bytes for ledger data sub-categories. 'ledger_data' = aggregated ledger data, sub-types include Transaction_Set_candidate (proposed tx sets), Transaction_Node (tx tree nodes), and Account_State_Node (state tree nodes). High Account_State_Node traffic indicates state sync; high Transaction_Set_candidate indicates consensus catch-up. Sourced from TrafficCount.h ledger_data_* categories.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_ledger_data_get_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Ledger Data Get"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_ledger_data_share_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Ledger Data Share"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_ledger_data_Transaction_Set_candidate_get_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "TX Set Candidate Get"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_ledger_data_Transaction_Set_candidate_share_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "TX Set Candidate Share"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_ledger_data_Transaction_Node_get_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "TX Node Get"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_ledger_data_Transaction_Node_share_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "TX Node Share"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_ledger_data_Account_State_Node_get_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Account State Node Get"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_ledger_data_Account_State_Node_share_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Account State Node Share"
}
],
"fieldConfig": {
"defaults": {
"unit": "decbytes",
"custom": {
"axisLabel": "Bytes In",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Ledger Share/Get Traffic (Bytes)",
"description": "Legacy ledger share and get traffic by sub-type. These are the older ledger fetch protocol categories (as opposed to ledger_data_* which is the newer protocol). Sub-types: Transaction_Set_candidate, Transaction_node, Account_State_node, plus aggregate ledger_share and ledger_get. Sourced from TrafficCount.h ledger_* categories.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_ledger_share_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Ledger Share In"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_ledger_get_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Ledger Get In"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_ledger_Transaction_Set_candidate_share_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "TX Set Candidate Share"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_ledger_Transaction_Set_candidate_get_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "TX Set Candidate Get"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_ledger_Transaction_node_share_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "TX Node Share"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_ledger_Transaction_node_get_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "TX Node Get"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_ledger_Account_State_node_share_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Account State Share"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_ledger_Account_State_node_get_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Account State Get"
}
],
"fieldConfig": {
"defaults": {
"unit": "decbytes",
"custom": {
"axisLabel": "Bytes In",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "GetObject Traffic by Type (Bytes In)",
"description": "Object fetch traffic by object type. GetObject is the protocol for fetching specific SHAMap nodes. Types: Ledger (full ledger headers), Transaction (individual txs), Transaction_node (tx tree nodes), Account_State_node (state tree nodes), CAS (Content Addressable Storage objects), Fetch_Pack (batch fetch during catch-up), Transactions (bulk tx fetch). High Fetch_Pack traffic indicates a node is catching up. Sourced from TrafficCount.h getobject_* categories.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_getobject_Ledger_get_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Ledger Get"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_getobject_Ledger_share_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Ledger Share"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_getobject_Transaction_get_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Transaction Get"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_getobject_Transaction_share_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Transaction Share"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_getobject_Transaction_node_get_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "TX Node Get"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_getobject_Transaction_node_share_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "TX Node Share"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_getobject_Account_State_node_get_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Account State Get"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_getobject_Account_State_node_share_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Account State Share"
}
],
"fieldConfig": {
"defaults": {
"unit": "decbytes",
"custom": {
"axisLabel": "Bytes In",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "GetObject Aggregate & Special Types (Bytes In)",
"description": "Aggregate getobject traffic plus special categories: CAS (Content Addressable Storage) for SHAMap node fetch, Fetch_Pack for bulk batch downloads during catch-up, Transactions for bulk tx fetch, and the aggregate getobject_get/getobject_share totals. Sourced from TrafficCount.h getobject_* categories.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_getobject_CAS_get_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "CAS Get"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_getobject_CAS_share_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "CAS Share"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_getobject_Fetch_Pack_share_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Fetch Pack Share"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_getobject_Fetch_Pack_get_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Fetch Pack Get"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_getobject_Transactions_get_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Transactions Get"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_getobject_get_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Aggregate Get"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_getobject_share_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Aggregate Share"
}
],
"fieldConfig": {
"defaults": {
"unit": "decbytes",
"custom": {
"axisLabel": "Bytes In",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "GetObject Messages by Type",
"description": "Message counts for object fetch operations. Shows how many individual fetch requests and responses are exchanged per type. High message counts with low byte counts indicate small object fetches; the inverse indicates large batch transfers. Sourced from TrafficCount.h getobject_* categories.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_getobject_Ledger_get_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "Ledger Get"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_getobject_Transaction_get_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "Transaction Get"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_getobject_Transaction_node_get_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "TX Node Get"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_getobject_Account_State_node_get_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "Account State Get"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_getobject_CAS_get_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "CAS Get"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_getobject_Fetch_Pack_get_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "Fetch Pack Get"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_getobject_Transactions_get_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "Transactions Get"
}
],
"fieldConfig": {
"defaults": {
"unit": "short",
"custom": {
"axisLabel": "Messages In",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Overlay Traffic Heatmap (All Categories, Bytes In)",
"description": "Bar gauge showing all overlay traffic categories ranked by inbound bytes. Provides a complete at-a-glance view of which protocol message types consume the most bandwidth across all 57+ traffic categories. Sourced from all TrafficCount.h categories via wildcard match.",
"type": "bargauge",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
},
"displayMode": "gradient",
"orientation": "horizontal",
"reduceOptions": {
"calcs": ["lastNotNull"],
"fields": "",
"values": false
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "topk(20, {exported_instance=~\"$node\", __name__=~\"rippled_.*_Bytes_In\", __name__!~\"rippled_total_.*\"})",
"legendFormat": "{{__name__}}"
}
],
"fieldConfig": {
"defaults": {
"unit": "decbytes",
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "yellow",
"value": 1048576
},
{
"color": "red",
"value": 104857600
}
]
}
},
"overrides": []
}
}
],
"schemaVersion": 39,
"tags": ["rippled", "statsd", "ledger", "sync", "telemetry"],
"templating": {
"list": [
{
"name": "node",
"label": "Node",
"description": "Filter by rippled node (service.instance.id)",
"type": "query",
"query": "label_values(rippled_ledger_data_get_Bytes_In, exported_instance)",
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"includeAll": true,
"allValue": ".*",
"current": {
"text": "All",
"value": "$__all"
},
"multi": true,
"refresh": 2,
"sort": 1
}
]
},
"time": {
"from": "now-1h",
"to": "now"
},
"title": "Ledger Data & Sync (System Metrics)",
"uid": "rippled-system-ledger-sync"
}


@@ -0,0 +1,692 @@
{
"annotations": {
"list": []
},
"description": "Network traffic and peer metrics from beast::insight System Metrics. Requires [insight] server=otel in rippled config.",
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 1,
"id": null,
"links": [],
"panels": [
{
"title": "Active Peers",
"description": "Number of active inbound and outbound peer connections. Sourced from Peer_Finder.Active_Inbound_Peers and Peer_Finder.Active_Outbound_Peers gauges (PeerfinderManager.cpp:214-215). A healthy mainnet node typically has 10-21 outbound and 0-85 inbound peers depending on configuration.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_Peer_Finder_Active_Inbound_Peers{exported_instance=~\"$node\"}",
"legendFormat": "Inbound Peers"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_Peer_Finder_Active_Outbound_Peers{exported_instance=~\"$node\"}",
"legendFormat": "Outbound Peers"
}
],
"fieldConfig": {
"defaults": {
"unit": "short",
"custom": {
"axisLabel": "Peers",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Peer Disconnects",
"description": "Cumulative count of peer disconnections. Sourced from the Overlay.Peer_Disconnects gauge (OverlayImpl.h:557). A rising trend indicates network instability, aggressive peer management, or resource exhaustion causing connection drops.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_Overlay_Peer_Disconnects{exported_instance=~\"$node\"}",
"legendFormat": "Disconnects"
}
],
"fieldConfig": {
"defaults": {
"unit": "short",
"custom": {
"axisLabel": "Disconnects",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Total Network Bytes",
"description": "Total bytes sent and received across all peer connections. Sourced from the total.Bytes_In and total.Bytes_Out traffic category gauges (OverlayImpl.h:535-548). Provides a high-level view of network bandwidth consumption.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_total_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Bytes In"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_total_Bytes_Out{exported_instance=~\"$node\"}",
"legendFormat": "Bytes Out"
}
],
"fieldConfig": {
"defaults": {
"unit": "decbytes",
"custom": {
"axisLabel": "Bytes",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Total Network Messages",
"description": "Total messages sent and received across all peer connections. Sourced from the total.Messages_In and total.Messages_Out traffic category gauges (OverlayImpl.h:535-548). Shows the overall message throughput of the overlay network.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_total_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "Messages In"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_total_Messages_Out{exported_instance=~\"$node\"}",
"legendFormat": "Messages Out"
}
],
"fieldConfig": {
"defaults": {
"unit": "short",
"custom": {
"axisLabel": "Messages",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Transaction Traffic",
"description": "Bytes and messages for transaction-related overlay traffic. Includes the transactions traffic category (OverlayImpl/TrafficCount.h). Spikes indicate high transaction volume on the network or transaction flooding.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_transactions_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "TX Messages In"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_transactions_Messages_Out{exported_instance=~\"$node\"}",
"legendFormat": "TX Messages Out"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_transactions_duplicate_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "TX Duplicate In"
}
],
"fieldConfig": {
"defaults": {
"unit": "short",
"custom": {
"axisLabel": "Messages",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Proposal Traffic",
"description": "Messages for consensus proposal overlay traffic. Includes proposals, proposals_untrusted, and proposals_duplicate categories (TrafficCount.h). High untrusted or duplicate counts may indicate UNL misconfiguration or network spam.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_proposals_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "Proposals In"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_proposals_Messages_Out{exported_instance=~\"$node\"}",
"legendFormat": "Proposals Out"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_proposals_untrusted_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "Untrusted In"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_proposals_duplicate_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "Duplicate In"
}
],
"fieldConfig": {
"defaults": {
"unit": "short",
"custom": {
"axisLabel": "Messages",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Validation Traffic",
"description": "Messages for validation overlay traffic. Includes validations, validations_untrusted, and validations_duplicate categories (TrafficCount.h). Monitoring trusted vs untrusted validation traffic helps detect UNL health issues.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 24
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_validations_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "Validations In"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_validations_Messages_Out{exported_instance=~\"$node\"}",
"legendFormat": "Validations Out"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_validations_untrusted_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "Untrusted In"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_validations_duplicate_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "Duplicate In"
}
],
"fieldConfig": {
"defaults": {
"unit": "short",
"custom": {
"axisLabel": "Messages",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Overlay Traffic by Category (Bytes In)",
"description": "Top traffic categories by inbound bytes. Includes all 57 overlay traffic categories from TrafficCount.h. Shows which protocol message types consume the most bandwidth. Categories include transactions, proposals, validations, ledger data, getobject, and overlay overhead.",
"type": "bargauge",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 24
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "topk(10, {exported_instance=~\"$node\", __name__=~\"rippled_.*_Bytes_In\", __name__!~\"rippled_total_.*\"})",
"legendFormat": "{{__name__}}"
}
],
"fieldConfig": {
"defaults": {
"unit": "decbytes"
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "rippled_transactions_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Transactions"
}
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_proposals_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Proposals"
}
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_validations_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Validations"
}
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_overhead_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Overhead"
}
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_overhead_overlay_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Overhead Overlay"
}
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_ping_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Ping"
}
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_status_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Status"
}
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_getObject_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Get Object"
}
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_haveTxSet_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Have Tx Set"
}
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_ledgerData_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Ledger Data"
}
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_ledger_share_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Ledger Share"
}
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_ledger_data_get_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Ledger Data Get"
}
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_ledger_data_share_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Ledger Data Share"
}
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_ledger_data_Account_State_Node_get_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Account State Node Get"
}
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_ledger_data_Account_State_Node_share_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Account State Node Share"
}
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_ledger_data_Transaction_Node_get_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Transaction Node Get"
}
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_ledger_data_Transaction_Node_share_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Transaction Node Share"
}
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_ledger_data_Transaction_Set_candidate_get_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Tx Set Candidate Get"
}
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_ledger_Account_State_node_share_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Account State Node Share (Legacy)"
}
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_ledger_Transaction_Set_candidate_share_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Tx Set Candidate Share"
}
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_ledger_Transaction_node_share_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Transaction Node Share (Legacy)"
}
]
},
{
"matcher": {
"id": "byName",
"options": "rippled_set_get_Bytes_In"
},
"properties": [
{
"id": "displayName",
"value": "Set Get"
}
]
}
]
}
}
],
"schemaVersion": 39,
"tags": ["rippled", "statsd", "network", "telemetry"],
"templating": {
"list": [
{
"name": "node",
"label": "Node",
"description": "Filter by rippled node (service.instance.id)",
"type": "query",
"query": "label_values(rippled_Peer_Finder_Active_Inbound_Peers, exported_instance)",
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"includeAll": true,
"allValue": ".*",
"current": {
"text": "All",
"value": "$__all"
},
"multi": true,
"refresh": 2,
"sort": 1
}
]
},
"time": {
"from": "now-1h",
"to": "now"
},
"title": "Network Traffic (System Metrics)",
"uid": "rippled-system-network"
}
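To see every overlay traffic byte counter the node is currently exporting (beyond the top 10 shown in the bar gauge), the Prometheus series API can enumerate them. A sketch, again assuming Prometheus at http://localhost:9090:

import requests

PROM_URL = "http://localhost:9090"  # assumption

resp = requests.get(
    f"{PROM_URL}/api/v1/series",
    params={"match[]": '{__name__=~"rippled_.+_Bytes_In"}'},
    timeout=10,
)
resp.raise_for_status()
names = sorted({s["__name__"] for s in resp.json()["data"]})
print(f"{len(names)} overlay byte series exported:")
for name in names:
    print(" ", name)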


@@ -0,0 +1,744 @@
{
"annotations": {
"list": []
},
"description": "Node health metrics from beast::insight System Metrics. Requires [insight] server=otel in rippled config.",
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 1,
"id": null,
"links": [],
"panels": [
{
"title": "Validated Ledger Age",
"description": "Age of the most recently validated ledger in seconds. Sourced from the LedgerMaster.Validated_Ledger_Age gauge (LedgerMaster.h:373) which is updated every collection interval via the insight hook. Values above 20s indicate the node is falling behind the network.",
"type": "stat",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_LedgerMaster_Validated_Ledger_Age{exported_instance=~\"$node\"}",
"legendFormat": "Validated Age"
}
],
"fieldConfig": {
"defaults": {
"unit": "s",
"thresholds": {
"steps": [
{
"color": "green",
"value": null
},
{
"color": "yellow",
"value": 10
},
{
"color": "red",
"value": 20
}
]
}
},
"overrides": []
}
},
{
"title": "Published Ledger Age",
"description": "Age of the most recently published ledger in seconds. Sourced from the LedgerMaster.Published_Ledger_Age gauge (LedgerMaster.h:374). Published ledger age should track close to validated ledger age. A growing gap indicates publish pipeline backlog.",
"type": "stat",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_LedgerMaster_Published_Ledger_Age{exported_instance=~\"$node\"}",
"legendFormat": "Published Age"
}
],
"fieldConfig": {
"defaults": {
"unit": "s",
"thresholds": {
"steps": [
{
"color": "green",
"value": null
},
{
"color": "yellow",
"value": 10
},
{
"color": "red",
"value": 20
}
]
}
},
"overrides": []
}
},
{
"title": "Operating Mode Duration",
"description": "Cumulative time spent in each operating mode (Disconnected, Connected, Syncing, Tracking, Full). Sourced from State_Accounting.*_duration gauges (NetworkOPs.cpp:774-778). A healthy node should spend the vast majority of time in Full mode.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_State_Accounting_Full_duration{exported_instance=~\"$node\"}",
"legendFormat": "Full"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_State_Accounting_Tracking_duration{exported_instance=~\"$node\"}",
"legendFormat": "Tracking"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_State_Accounting_Syncing_duration{exported_instance=~\"$node\"}",
"legendFormat": "Syncing"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_State_Accounting_Connected_duration{exported_instance=~\"$node\"}",
"legendFormat": "Connected"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_State_Accounting_Disconnected_duration{exported_instance=~\"$node\"}",
"legendFormat": "Disconnected"
}
],
"fieldConfig": {
"defaults": {
"unit": "s",
"custom": {
"axisLabel": "Duration (Sec)",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Operating Mode Transitions",
"description": "Count of transitions into each operating mode. Sourced from State_Accounting.*_transitions gauges (NetworkOPs.cpp:780-786). Frequent transitions out of Full mode indicate instability. Transitions to Disconnected or Syncing warrant investigation.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_State_Accounting_Full_transitions{exported_instance=~\"$node\"}",
"legendFormat": "Full"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_State_Accounting_Tracking_transitions{exported_instance=~\"$node\"}",
"legendFormat": "Tracking"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_State_Accounting_Syncing_transitions{exported_instance=~\"$node\"}",
"legendFormat": "Syncing"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_State_Accounting_Connected_transitions{exported_instance=~\"$node\"}",
"legendFormat": "Connected"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_State_Accounting_Disconnected_transitions{exported_instance=~\"$node\"}",
"legendFormat": "Disconnected"
}
],
"fieldConfig": {
"defaults": {
"unit": "short",
"custom": {
"axisLabel": "Transitions",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "I/O Latency",
"description": "P95 and P50 of the I/O service loop latency in milliseconds. Sourced from the ios_latency event (Application.cpp:438) which measures how long it takes for the io_context to process a timer callback. Values above 10ms are logged; above 500ms trigger warnings. High values indicate thread pool saturation or blocking operations.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_ios_latency{exported_instance=~\"$node\", quantile=\"0.95\"}",
"legendFormat": "P95 I/O Latency"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_ios_latency{exported_instance=~\"$node\", quantile=\"0.5\"}",
"legendFormat": "P50 I/O Latency"
}
],
"fieldConfig": {
"defaults": {
"unit": "ms",
"custom": {
"axisLabel": "Latency (ms)",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Job Queue Depth",
"description": "Current number of jobs waiting in the job queue. Sourced from the job_count gauge (JobQueue.cpp:26). A sustained high value indicates the node cannot process work fast enough \u2014 common during ledger replay or heavy RPC load.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_job_count{exported_instance=~\"$node\"}",
"legendFormat": "Job Queue Depth"
}
],
"fieldConfig": {
"defaults": {
"unit": "short",
"custom": {
"axisLabel": "Jobs",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Ledger Fetch Rate",
"description": "Rate of ledger fetch requests initiated by the node. Sourced from the ledger_fetches counter (InboundLedgers.cpp:44) which increments each time the node requests a ledger from a peer. High rates indicate the node is catching up or missing ledgers.",
"type": "stat",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 24
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rate(rippled_ledger_fetches_total{exported_instance=~\"$node\"}[5m])",
"legendFormat": "Fetches / Sec"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops"
},
"overrides": []
}
},
{
"title": "Ledger History Mismatches",
"description": "Rate of ledger history hash mismatches. Sourced from the ledger.history.mismatch counter (LedgerHistory.cpp:16) which increments when a built ledger hash does not match the expected validated hash. Non-zero values indicate consensus divergence or database corruption.",
"type": "stat",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 24
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rate(rippled_ledger_history_mismatch_total{exported_instance=~\"$node\"}[5m])",
"legendFormat": "Mismatches / Sec"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"thresholds": {
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 0.01
}
]
}
},
"overrides": []
}
},
{
"title": "--- OTel: NodeStore I/O ---",
"type": "row",
"gridPos": {
"h": 1,
"w": 24,
"x": 0,
"y": 32
},
"collapsed": false,
"panels": []
},
{
"title": "NodeStore Read/Write Totals",
"description": "Cumulative NodeStore read and write operation counts. Sourced from MetricsRegistry nodestore_state observable gauge with metric=node_reads_total, node_writes, node_reads_hit.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 33
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_nodestore_state{exported_instance=~\"$node\", metric=\"node_reads_total\"}",
"legendFormat": "Reads Total"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_nodestore_state{exported_instance=~\"$node\", metric=\"node_reads_hit\"}",
"legendFormat": "Reads Hit (cache)"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_nodestore_state{exported_instance=~\"$node\", metric=\"node_writes\"}",
"legendFormat": "Writes Total"
}
],
"fieldConfig": {
"defaults": {
"unit": "short",
"custom": {
"drawStyle": "line",
"lineWidth": 2,
"fillOpacity": 10
},
"color": {
"mode": "palette-classic"
}
},
"overrides": []
}
},
{
"title": "NodeStore Write Load & Read Queue",
"description": "Instantaneous write load score and read queue depth. High write load indicates backend pressure. High read queue indicates prefetch thread saturation.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 33
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_nodestore_state{exported_instance=~\"$node\", metric=\"write_load\"}",
"legendFormat": "Write Load"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_nodestore_state{exported_instance=~\"$node\", metric=\"read_queue\"}",
"legendFormat": "Read Queue"
}
],
"fieldConfig": {
"defaults": {
"unit": "short",
"custom": {
"drawStyle": "line",
"lineWidth": 2,
"fillOpacity": 10
},
"color": {
"mode": "palette-classic"
},
"thresholds": {
"steps": [
{
"color": "green",
"value": null
},
{
"color": "yellow",
"value": 100
},
{
"color": "red",
"value": 1000
}
]
}
},
"overrides": []
}
},
{
"title": "--- OTel: Cache Hit Rates ---",
"type": "row",
"gridPos": {
"h": 1,
"w": 24,
"x": 0,
"y": 41
},
"collapsed": false,
"panels": []
},
{
"title": "Cache Hit Rates",
"description": "Hit rates for SLE cache, Ledger cache, and AcceptedLedger cache. Values from 0.0 to 1.0. Low values indicate cache thrashing. Sourced from MetricsRegistry cache_metrics observable gauge.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 42
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_cache_metrics{exported_instance=~\"$node\", metric=\"SLE_hit_rate\"}",
"legendFormat": "SLE Hit Rate"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_cache_metrics{exported_instance=~\"$node\", metric=\"ledger_hit_rate\"}",
"legendFormat": "Ledger Hit Rate"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_cache_metrics{exported_instance=~\"$node\", metric=\"AL_hit_rate\"}",
"legendFormat": "AcceptedLedger Hit Rate"
}
],
"fieldConfig": {
"defaults": {
"unit": "percentunit",
"min": 0,
"max": 1,
"custom": {
"drawStyle": "line",
"lineWidth": 2,
"fillOpacity": 10
},
"color": {
"mode": "palette-classic"
}
},
"overrides": []
}
},
{
"title": "Cache Sizes",
"description": "TreeNode cache size, TreeNode track size, and FullBelow cache size. Sourced from MetricsRegistry cache_metrics observable gauge.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 42
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_cache_metrics{exported_instance=~\"$node\", metric=\"treenode_cache_size\"}",
"legendFormat": "TreeNode Cache"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_cache_metrics{exported_instance=~\"$node\", metric=\"treenode_track_size\"}",
"legendFormat": "TreeNode Track"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_cache_metrics{exported_instance=~\"$node\", metric=\"fullbelow_size\"}",
"legendFormat": "FullBelow"
}
],
"fieldConfig": {
"defaults": {
"unit": "short",
"custom": {
"drawStyle": "line",
"lineWidth": 2,
"fillOpacity": 10
},
"color": {
"mode": "palette-classic"
}
},
"overrides": []
}
},
{
"title": "--- OTel: Object Instance Counts ---",
"type": "row",
"gridPos": {
"h": 1,
"w": 24,
"x": 0,
"y": 50
},
"collapsed": false,
"panels": []
},
{
"title": "Object Instance Counts",
"description": "Live instance counts for key internal object types tracked by CountedObject<T>. Sourced from MetricsRegistry object_count observable gauge. High counts may indicate memory pressure or object leaks.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 51
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
},
"legend": {
"displayMode": "table",
"placement": "right",
"calcs": ["last", "max"]
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "topk(15, rippled_object_count{exported_instance=~\"$node\"})",
"legendFormat": "{{type}}"
}
],
"fieldConfig": {
"defaults": {
"unit": "short",
"custom": {
"drawStyle": "line",
"lineWidth": 1,
"fillOpacity": 5
},
"color": {
"mode": "palette-classic"
}
},
"overrides": []
}
}
],
"schemaVersion": 39,
"tags": ["rippled", "statsd", "otel", "node-health", "telemetry"],
"templating": {
"list": [
{
"name": "node",
"label": "Node",
"description": "Filter by rippled node (service.instance.id)",
"type": "query",
"query": "label_values(rippled_LedgerMaster_Validated_Ledger_Age, exported_instance)",
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"includeAll": true,
"allValue": ".*",
"current": {
"text": "All",
"value": "$__all"
},
"multi": true,
"refresh": 2,
"sort": 1
}
]
},
"time": {
"from": "now-1h",
"to": "now"
},
"title": "Node Health (System Metrics)",
"uid": "rippled-system-node-health"
}

View File

@@ -0,0 +1,587 @@
{
"annotations": {
"list": []
},
"description": "Detailed overlay traffic breakdown for categories not covered by the main Network Traffic dashboard. Includes squelch, overhead, validator lists, object fetch, ledger sync, and protocol negotiation traffic. Requires [insight] server=otel in rippled config.",
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 1,
"id": null,
"links": [],
"panels": [
{
"title": "Squelch Traffic (Messages)",
"description": "Squelch-related overlay messages. Squelch is the peer traffic management protocol that suppresses redundant message forwarding. 'squelch' = squelch control messages, 'squelch_suppressed' = messages suppressed by squelch, 'squelch_ignored' = squelch directives that were ignored. High suppressed counts indicate effective bandwidth savings; high ignored counts may indicate misconfigured peers. Sourced from TrafficCount.h squelch categories.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_squelch_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "Squelch In"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_squelch_Messages_Out{exported_instance=~\"$node\"}",
"legendFormat": "Squelch Out"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_squelch_suppressed_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "Suppressed In"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_squelch_suppressed_Messages_Out{exported_instance=~\"$node\"}",
"legendFormat": "Suppressed Out"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_squelch_ignored_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "Ignored In"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_squelch_ignored_Messages_Out{exported_instance=~\"$node\"}",
"legendFormat": "Ignored Out"
}
],
"fieldConfig": {
"defaults": {
"unit": "short",
"custom": {
"axisLabel": "Messages",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Overhead Traffic Breakdown (Bytes)",
"description": "Overlay protocol overhead by sub-category. 'overhead' = base protocol overhead (ping, status, etc.), 'overhead_cluster' = intra-cluster communication overhead, 'overhead_manifest' = validator manifest distribution overhead. High cluster overhead may indicate frequent cluster state syncs; high manifest overhead occurs during UNL changes. Sourced from TrafficCount.h overhead categories.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_overhead_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Base Overhead In"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_overhead_Bytes_Out{exported_instance=~\"$node\"}",
"legendFormat": "Base Overhead Out"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_overhead_cluster_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Cluster In"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_overhead_cluster_Bytes_Out{exported_instance=~\"$node\"}",
"legendFormat": "Cluster Out"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_overhead_manifest_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Manifest In"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_overhead_manifest_Bytes_Out{exported_instance=~\"$node\"}",
"legendFormat": "Manifest Out"
}
],
"fieldConfig": {
"defaults": {
"unit": "decbytes",
"custom": {
"axisLabel": "Bytes",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Validator List Traffic",
"description": "Validator list (UNL) distribution traffic. Validator lists are exchanged when peers share their trusted validator configurations. Spikes occur during UNL updates or when new peers connect. Sourced from TrafficCount.h validator_lists category.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_validator_lists_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Bytes In"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_validator_lists_Bytes_Out{exported_instance=~\"$node\"}",
"legendFormat": "Bytes Out"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_validator_lists_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "Messages In"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_validator_lists_Messages_Out{exported_instance=~\"$node\"}",
"legendFormat": "Messages Out"
}
],
"fieldConfig": {
"defaults": {
"unit": "short",
"custom": {
"axisLabel": "Count",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": [
{
"matcher": {
"id": "byRegexp",
"options": "/Bytes/"
},
"properties": [
{
"id": "custom.axisPlacement",
"value": "right"
},
{
"id": "unit",
"value": "decbytes"
}
]
}
]
}
},
{
"title": "Set Get/Share Traffic (Bytes)",
"description": "Transaction set get and share traffic. 'set_get' = requests to fetch transaction sets (sent during ledger close), 'set_share' = responses sharing transaction sets. High set_get traffic indicates peers frequently requesting missing transaction sets, which may signal sync delays. Sourced from TrafficCount.h set_get/set_share categories.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_set_get_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Set Get In"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_set_get_Bytes_Out{exported_instance=~\"$node\"}",
"legendFormat": "Set Get Out"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_set_share_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Set Share In"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_set_share_Bytes_Out{exported_instance=~\"$node\"}",
"legendFormat": "Set Share Out"
}
],
"fieldConfig": {
"defaults": {
"unit": "decbytes",
"custom": {
"axisLabel": "Bytes",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Have/Requested Transactions (Messages)",
"description": "Transaction availability protocol messages. 'have_transactions' = advertisements that a peer has specific transactions available, 'requested_transactions' = explicit requests for transaction data. A high ratio of requested to have may indicate peers are behind on transaction propagation. Sourced from TrafficCount.h have_transactions/requested_transactions categories.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_have_transactions_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "Have TX In"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_have_transactions_Messages_Out{exported_instance=~\"$node\"}",
"legendFormat": "Have TX Out"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_requested_transactions_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "Requested TX In"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_requested_transactions_Messages_Out{exported_instance=~\"$node\"}",
"legendFormat": "Requested TX Out"
}
],
"fieldConfig": {
"defaults": {
"unit": "short",
"custom": {
"axisLabel": "Messages",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Unknown / Unclassified Traffic",
"description": "Traffic that does not match any known overlay message category. Non-zero values may indicate protocol version mismatches, corrupted messages, or new message types not yet classified. Sourced from TrafficCount.h unknown category.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_unknown_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Unknown Bytes In"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_unknown_Bytes_Out{exported_instance=~\"$node\"}",
"legendFormat": "Unknown Bytes Out"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_unknown_Messages_In{exported_instance=~\"$node\"}",
"legendFormat": "Unknown Messages In"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_unknown_Messages_Out{exported_instance=~\"$node\"}",
"legendFormat": "Unknown Messages Out"
}
],
"fieldConfig": {
"defaults": {
"unit": "short",
"custom": {
"axisLabel": "Count",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": [
{
"matcher": {
"id": "byRegexp",
"options": "/Bytes/"
},
"properties": [
{
"id": "custom.axisPlacement",
"value": "right"
},
{
"id": "unit",
"value": "decbytes"
}
]
}
]
}
},
{
"title": "Proof Path Traffic",
"description": "Proof path request/response traffic for ledger state proof exchange. Used by peers to verify specific ledger entries without downloading the full ledger. High request volume may indicate peers validating state during catch-up. Sourced from TrafficCount.h proof_path_request/proof_path_response categories.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 24
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_proof_path_request_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Request Bytes In"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_proof_path_request_Bytes_Out{exported_instance=~\"$node\"}",
"legendFormat": "Request Bytes Out"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_proof_path_response_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Response Bytes In"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_proof_path_response_Bytes_Out{exported_instance=~\"$node\"}",
"legendFormat": "Response Bytes Out"
}
],
"fieldConfig": {
"defaults": {
"unit": "decbytes",
"custom": {
"axisLabel": "Bytes",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Replay Delta Traffic",
"description": "Replay delta request/response traffic for ledger replay protocol. Used during catch-up to efficiently replay ledger state changes. Sourced from TrafficCount.h replay_delta_request/replay_delta_response categories.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 24
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_replay_delta_request_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Request Bytes In"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_replay_delta_request_Bytes_Out{exported_instance=~\"$node\"}",
"legendFormat": "Request Bytes Out"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_replay_delta_response_Bytes_In{exported_instance=~\"$node\"}",
"legendFormat": "Response Bytes In"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_replay_delta_response_Bytes_Out{exported_instance=~\"$node\"}",
"legendFormat": "Response Bytes Out"
}
],
"fieldConfig": {
"defaults": {
"unit": "decbytes",
"custom": {
"axisLabel": "Bytes",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
}
],
"schemaVersion": 39,
"tags": ["rippled", "statsd", "overlay", "network", "telemetry"],
"templating": {
"list": [
{
"name": "node",
"label": "Node",
"description": "Filter by rippled node (service.instance.id)",
"type": "query",
"query": "label_values(rippled_squelch_Messages_In, exported_instance)",
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"includeAll": true,
"allValue": ".*",
"current": {
"text": "All",
"value": "$__all"
},
"multi": true,
"refresh": 2,
"sort": 1
}
]
},
"time": {
"from": "now-1h",
"to": "now"
},
"title": "Overlay Traffic Detail (System Metrics)",
"uid": "rippled-system-overlay-detail"
}

View File

@@ -0,0 +1,417 @@
{
"annotations": {
"list": []
},
"description": "RPC and pathfinding metrics from beast::insight System Metrics. Requires [insight] server=otel in rippled config.",
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 1,
"id": null,
"links": [],
"panels": [
{
"title": "RPC Request Rate (System Metrics)",
"description": "Rate of RPC requests as counted by the beast::insight counter. Sourced from rpc.requests (ServerHandler.cpp:108) which increments on every HTTP and WebSocket RPC request. Compare with the span-based rpc.request rate in the RPC Performance dashboard for cross-validation.",
"type": "stat",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rate(rippled_rpc_requests_total{exported_instance=~\"$node\"}[5m])",
"legendFormat": "Requests / Sec"
}
],
"fieldConfig": {
"defaults": {
"unit": "reqps"
},
"overrides": []
}
},
{
"title": "RPC Response Time (System Metrics)",
"description": "P95 and P50 of RPC response time from the beast::insight timer. Sourced from the rpc.time event (ServerHandler.cpp:110) which records elapsed milliseconds for each RPC response. This measures the full HTTP handler time, not just command execution. Compare with span-based rpc.request duration.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_rpc_time{exported_instance=~\"$node\", quantile=\"0.95\"}",
"legendFormat": "P95 Response Time"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_rpc_time{exported_instance=~\"$node\", quantile=\"0.5\"}",
"legendFormat": "P50 Response Time"
}
],
"fieldConfig": {
"defaults": {
"unit": "ms",
"custom": {
"axisLabel": "Latency (ms)",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "RPC Response Size",
"description": "P95 and P50 of RPC response payload size in bytes. Sourced from the rpc.size event (ServerHandler.cpp:109) which records the byte length of each RPC JSON response. Large responses may indicate expensive queries (e.g. account_tx with many results) or API misuse.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_rpc_size{exported_instance=~\"$node\", quantile=\"0.95\"}",
"legendFormat": "P95 Response Size"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_rpc_size{exported_instance=~\"$node\", quantile=\"0.5\"}",
"legendFormat": "P50 Response Size"
}
],
"fieldConfig": {
"defaults": {
"unit": "decbytes",
"custom": {
"axisLabel": "Size (Bytes)",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "RPC Response Time Distribution",
"description": "Distribution of RPC response times from the beast::insight timer showing P50, P90, P95, and P99 quantiles. Sourced from the rpc.time event (ServerHandler.cpp:110). Useful for detecting bimodal latency or long-tail requests.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_rpc_time{exported_instance=~\"$node\", quantile=\"0.5\"}",
"legendFormat": "P50"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_rpc_time{exported_instance=~\"$node\", quantile=\"0.9\"}",
"legendFormat": "P90"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_rpc_time{exported_instance=~\"$node\", quantile=\"0.95\"}",
"legendFormat": "P95"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_rpc_time{exported_instance=~\"$node\", quantile=\"0.99\"}",
"legendFormat": "P99"
}
],
"fieldConfig": {
"defaults": {
"unit": "ms",
"custom": {
"axisLabel": "Latency (ms)",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Pathfinding Fast Duration",
"description": "P95 and P50 of fast pathfinding execution time. Sourced from the pathfind_fast event (PathRequests.h:23) which records the duration of the fast pathfinding algorithm. Fast pathfinding uses a simplified search that trades accuracy for speed.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_pathfind_fast{exported_instance=~\"$node\", quantile=\"0.95\"}",
"legendFormat": "P95 Fast Pathfind"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_pathfind_fast{exported_instance=~\"$node\", quantile=\"0.5\"}",
"legendFormat": "P50 Fast Pathfind"
}
],
"fieldConfig": {
"defaults": {
"unit": "ms",
"custom": {
"axisLabel": "Duration (ms)",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Pathfinding Full Duration",
"description": "P95 and P50 of full pathfinding execution time. Sourced from the pathfind_full event (PathRequests.h:24) which records the duration of the exhaustive pathfinding search. Full pathfinding is more expensive and can take significantly longer than fast mode.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_pathfind_full{exported_instance=~\"$node\", quantile=\"0.95\"}",
"legendFormat": "P95 Full Pathfind"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "rippled_pathfind_full{exported_instance=~\"$node\", quantile=\"0.5\"}",
"legendFormat": "P50 Full Pathfind"
}
],
"fieldConfig": {
"defaults": {
"unit": "ms",
"custom": {
"axisLabel": "Duration (ms)",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Resource Warnings Rate",
"description": "Rate of resource warning events from the Resource Manager. Sourced from the warn meter (Logic.h:33) which increments when a consumer (peer or RPC client) exceeds the warning threshold for resource usage. A rising rate indicates aggressive clients that may need throttling. NOTE: This panel will show no data until the |m -> |c fix is applied in System MetricsCollector.cpp:706 (Phase 6 Task 6.1).",
"type": "stat",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 24
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rate(rippled_warn_total{exported_instance=~\"$node\"}[5m])",
"legendFormat": "Warnings / Sec"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"thresholds": {
"steps": [
{
"color": "green",
"value": null
},
{
"color": "yellow",
"value": 0.1
},
{
"color": "red",
"value": 1
}
]
}
},
"overrides": []
}
},
{
"title": "Resource Drops Rate",
"description": "Rate of resource drop events from the Resource Manager. Sourced from the drop meter (Logic.h:34) which increments when a consumer is disconnected or blocked due to excessive resource usage. Non-zero values mean the node is actively rejecting abusive connections. NOTE: This panel will show no data until the |m -> |c fix is applied in System MetricsCollector.cpp:706 (Phase 6 Task 6.1).",
"type": "stat",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 24
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "rate(rippled_drop_total{exported_instance=~\"$node\"}[5m])",
"legendFormat": "Drops / Sec"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"thresholds": {
"steps": [
{
"color": "green",
"value": null
},
{
"color": "yellow",
"value": 0.01
},
{
"color": "red",
"value": 0.1
}
]
}
},
"overrides": []
}
}
],
"schemaVersion": 39,
"tags": ["rippled", "statsd", "rpc", "pathfinding", "telemetry"],
"templating": {
"list": [
{
"name": "node",
"label": "Node",
"description": "Filter by rippled node (service.instance.id)",
"type": "query",
"query": "label_values(rippled_rpc_requests_total, exported_instance)",
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"includeAll": true,
"allValue": ".*",
"current": {
"text": "All",
"value": "$__all"
},
"multi": true,
"refresh": 2,
"sort": 1
}
]
},
"time": {
"from": "now-1h",
"to": "now"
},
"title": "RPC & Pathfinding (System Metrics)",
"uid": "rippled-system-rpc"
}

View File

@@ -0,0 +1,384 @@
{
"annotations": {
"list": []
},
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 1,
"id": null,
"links": [],
"panels": [
{
"title": "Transaction Processing Rate",
"description": "Rate of transactions entering the processing pipeline. tx.process (NetworkOPs.cpp:1227) fires when a transaction is submitted locally or received from a peer and enters processTransaction(). tx.receive (PeerImp.cpp:1273) fires when a raw transaction message arrives from a peer before deduplication.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum(rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", span_name=\"tx.process\"}[5m]))",
"legendFormat": "tx.process / Sec"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "sum(rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", span_name=\"tx.receive\"}[5m]))",
"legendFormat": "tx.receive / Sec"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"custom": {
"axisLabel": "Transactions / Sec",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Transaction Processing Latency",
"description": "p95 and p50 latency of transaction processing (tx.process span). Measures the time from when a transaction enters processTransaction() to completion. Computed via histogram_quantile() over the spanmetrics duration histogram with a 5m rate window.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 0
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.95, sum by (le) (rate(traces_span_metrics_duration_milliseconds_bucket{exported_instance=~\"$node\", span_name=\"tx.process\"}[5m])))",
"legendFormat": "P95"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.50, sum by (le) (rate(traces_span_metrics_duration_milliseconds_bucket{exported_instance=~\"$node\", span_name=\"tx.process\"}[5m])))",
"legendFormat": "P50"
}
],
"fieldConfig": {
"defaults": {
"unit": "ms",
"custom": {
"axisLabel": "Latency (ms)",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Transaction Path Distribution",
"description": "Breakdown of transactions by origin path. The xrpl.tx.local attribute indicates whether the transaction was submitted locally (true) or received from a peer (false). Helps understand the ratio of locally-originated vs relayed transactions.",
"type": "piechart",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum by (xrpl_tx_local) (rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", span_name=\"tx.process\"}[5m]))",
"legendFormat": "Local = {{xrpl_tx_local}}"
}
]
},
{
"title": "Transaction Receive vs Suppressed",
"description": "Total rate of raw transaction messages received from peers (tx.receive span from PeerImp.cpp:1273). This fires before deduplication via the HashRouter, so the difference between tx.receive and tx.process reflects suppressed duplicate transactions.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 8
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum(rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", span_name=\"tx.receive\"}[5m]))",
"legendFormat": "Total Received"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"custom": {
"axisLabel": "Transactions / Sec",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Transaction Processing Duration Heatmap",
"description": "Heatmap showing the distribution of tx.process span durations across histogram buckets over time. Each cell represents the count of transactions that completed within that latency bucket in a 5m window. Reveals whether processing times are consistent or exhibit multi-modal patterns.",
"type": "heatmap",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
},
"yAxis": {
"axisLabel": "Duration (ms)"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum(increase(traces_span_metrics_duration_milliseconds_bucket{exported_instance=~\"$node\", span_name=\"tx.process\"}[5m])) by (le)",
"legendFormat": "{{le}}",
"format": "heatmap"
}
]
},
{
"title": "Transaction Apply Duration per Ledger",
"description": "p95 and p50 latency of applying the consensus transaction set to a new ledger. The tx.apply span (BuildLedger.cpp:88) wraps the applyTransactions() function that iterates through the CanonicalTXSet and applies each transaction to the OpenView. Long durations indicate heavy transaction sets or expensive transaction processing.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 16
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.95, sum by (le) (rate(traces_span_metrics_duration_milliseconds_bucket{exported_instance=~\"$node\", span_name=\"tx.apply\"}[5m])))",
"legendFormat": "P95 tx.apply"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "histogram_quantile(0.50, sum by (le) (rate(traces_span_metrics_duration_milliseconds_bucket{exported_instance=~\"$node\", span_name=\"tx.apply\"}[5m])))",
"legendFormat": "P50 tx.apply"
}
],
"fieldConfig": {
"defaults": {
"unit": "ms",
"custom": {
"axisLabel": "Latency (ms)",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Peer Transaction Receive Rate",
"description": "Rate of transaction messages received from network peers. Sourced from the tx.receive span (PeerImp.cpp:1273) which fires in the onMessage(TMTransaction) handler. High rates may indicate network-wide transaction volume spikes or peer flooding.",
"type": "timeseries",
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 24
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum(rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", span_name=\"tx.receive\"}[5m]))",
"legendFormat": "tx.receive / Sec"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"custom": {
"axisLabel": "Transactions / Sec",
"spanNulls": true,
"insertNulls": false,
"showPoints": "auto",
"pointSize": 3
}
},
"overrides": []
}
},
{
"title": "Transaction Apply Failed Rate",
"description": "Rate of tx.apply spans completing with error status, indicating transaction application failures during ledger building. The span records xrpl.ledger.tx_failed as an attribute. Thresholds: green < 0.1/sec, yellow 0.1-1/sec, red > 1/sec. Some failures are normal (e.g. conflicting offers) but sustained high rates may indicate issues.",
"type": "stat",
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 24
},
"options": {
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus"
},
"expr": "sum(rate(traces_span_metrics_calls_total{exported_instance=~\"$node\", span_name=\"tx.apply\", status_code=\"STATUS_CODE_ERROR\"}[5m]))",
"legendFormat": "Failed / Sec"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops",
"thresholds": {
"steps": [
{
"color": "green",
"value": null
},
{
"color": "yellow",
"value": 0.1
},
{
"color": "red",
"value": 1
}
]
}
},
"overrides": []
}
}
],
"schemaVersion": 39,
"tags": ["rippled", "transactions", "telemetry"],
"templating": {
"list": [
{
"name": "node",
"label": "Node",
"description": "Filter by rippled node (service.instance.id \u2014 e.g. Node-1)",
"type": "query",
"query": "label_values(traces_span_metrics_calls_total, exported_instance)",
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"includeAll": true,
"allValue": ".*",
"current": {
"text": "All",
"value": "$__all"
},
"multi": true,
"refresh": 2,
"sort": 1
},
{
"name": "tx_origin",
"label": "TX Origin",
"description": "Filter by transaction origin (true = local submit, false = peer relay)",
"type": "query",
"query": "label_values(traces_span_metrics_calls_total{span_name=\"tx.process\"}, xrpl_tx_local)",
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"includeAll": true,
"allValue": ".*",
"current": {
"text": "All",
"value": "$__all"
},
"multi": true,
"refresh": 2,
"sort": 1
}
]
},
"time": {
"from": "now-1h",
"to": "now"
},
"title": "Transaction Overview",
"uid": "rippled-transactions"
}

View File

@@ -0,0 +1,12 @@
apiVersion: 1
providers:
- name: rippled-telemetry
orgId: 1
folder: rippled
type: file
disableDeletion: false
editable: true
options:
path: /var/lib/grafana/dashboards
foldersFromFilesStructure: false

View File

@@ -0,0 +1,12 @@
# Grafana datasource provisioning for the rippled telemetry stack.
# Auto-configures Jaeger as a trace data source on Grafana startup.
# Access Grafana at http://localhost:3000, then use Explore -> Jaeger
# to browse rippled traces.
apiVersion: 1
datasources:
- name: Jaeger
type: jaeger
access: proxy
url: http://jaeger:16686

View File

@@ -0,0 +1,24 @@
# Grafana Loki data source provisioning for rippled log-trace correlation.
#
# Phase 8: Log-Trace Correlation and Centralized Log Ingestion
#
# Loki ingests rippled logs via OTel Collector's filelog receiver.
# The derivedFields config links trace_id values in log lines back to
# Tempo traces, enabling one-click log-to-trace navigation in Grafana.
#
# See: OpenTelemetryPlan/Phase8_taskList.md (Tasks 8.2, 8.4)
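#
# Illustrative example (hypothetical values): a rippled log line containing
#   ... trace_id=4bf92f3577b34da6a3ce929d0e0e4736 span_id=00f067aa0ba902b7 ...
# has its trace_id captured by the matcherRegex below and rendered as a
# clickable TraceID link that jumps to the matching trace in Tempo.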
apiVersion: 1
datasources:
- name: Loki
type: loki
access: proxy
url: http://loki:3100
uid: loki
jsonData:
derivedFields:
- datasourceUid: tempo
matcherRegex: "trace_id=(\\w+)"
name: TraceID
url: "$${__value.raw}"

View File

@@ -0,0 +1,10 @@
apiVersion: 1
datasources:
- name: Prometheus
type: prometheus
uid: prometheus
access: proxy
url: http://prometheus:9090
isDefault: false
editable: true

View File

@@ -0,0 +1,155 @@
# Grafana datasource provisioning for Grafana Tempo.
# Auto-configures Tempo as a trace data source on Grafana startup.
# Access Grafana at http://localhost:3000, then use Explore -> Tempo
# to browse rippled traces using TraceQL.
#
# Search filters provide pre-configured dropdowns in the Explore UI.
# Each phase adds filters for the span attributes it introduces.
# Phase 1b (infra): Base filters — node identity, service, span name, status.
# Phase 2 (RPC): RPC command, status, role filters.
# Phase 3 (TX): Transaction hash, local/peer origin, status.
# Phase 4 (Cons): Consensus mode, round, ledger sequence, close time.
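#
# Illustrative TraceQL queries (attribute and span names follow the filters
# configured below; the literal values are hypothetical examples):
#   { resource.service.name = "rippled" && name = "rpc.command.server_info" }
#   { span.xrpl.rpc.command = "ledger" && duration > 50ms }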
apiVersion: 1
datasources:
- name: Tempo
type: tempo
access: proxy
url: http://tempo:3200
uid: tempo
jsonData:
nodeGraph:
enabled: true
serviceMap:
datasourceUid: prometheus
# Phase 8: Trace-to-log correlation — enables one-click navigation
# from a Tempo trace to the corresponding Loki log lines. Filters
# by trace_id so only logs from the same trace are shown.
tracesToLogs:
datasourceUid: loki
filterByTraceID: true
filterBySpanID: false
tags: ["partition", "severity"]
tracesToMetrics:
datasourceUid: prometheus
spanStartTimeShift: "-1h"
spanEndTimeShift: "1h"
search:
filters:
# --- Node identification filters ---
# service.name: logical service name (default: "rippled").
# Useful when running multiple service types in the same collector.
- id: service-name
tag: service.name
operator: "="
scope: resource
type: static
# service.instance.id: unique node identifier — configurable via
# the service_instance_id setting in [telemetry], defaults to the
# node's public key. E.g. "Node-1" or "nHB1X37...".
- id: node-id
tag: service.instance.id
operator: "="
scope: resource
type: static
# service.version: rippled build version (e.g., "2.4.0-b1").
# Filter traces from specific software releases.
- id: node-version
tag: service.version
operator: "="
scope: resource
type: dynamic
# xrpl.network.id: numeric network identifier
# (0 = mainnet, 1 = testnet, 2 = devnet, etc.).
- id: network-id
tag: xrpl.network.id
operator: "="
scope: resource
type: dynamic
# xrpl.network.type: human-readable network name
# ("mainnet", "testnet", "devnet", "standalone").
- id: network-type
tag: xrpl.network.type
operator: "="
scope: resource
type: static
# --- Span intrinsic filters ---
- id: span-name
tag: name
operator: "="
scope: intrinsic
type: static
- id: span-status
tag: status
operator: "="
scope: intrinsic
type: static
- id: span-duration
tag: duration
operator: ">"
scope: intrinsic
type: static
# Phase 2: RPC tracing filters
- id: rpc-command
tag: xrpl.rpc.command
operator: "="
scope: span
type: static
- id: rpc-status
tag: xrpl.rpc.status
operator: "="
scope: span
type: dynamic
- id: rpc-role
tag: xrpl.rpc.role
operator: "="
scope: span
type: dynamic
# Phase 3: Transaction tracing filters
- id: tx-hash
tag: xrpl.tx.hash
operator: "="
scope: span
type: static
- id: tx-origin
tag: xrpl.tx.local
operator: "="
scope: span
type: dynamic
- id: tx-status
tag: xrpl.tx.status
operator: "="
scope: span
type: dynamic
# Phase 4: Consensus tracing filters
- id: consensus-mode
tag: xrpl.consensus.mode
operator: "="
scope: span
type: static
- id: consensus-round
tag: xrpl.consensus.round
operator: "="
scope: span
type: dynamic
- id: consensus-ledger-seq
tag: xrpl.consensus.ledger.seq
operator: "="
scope: span
type: static
- id: consensus-close-time-correct
tag: xrpl.consensus.close_time_correct
operator: "="
scope: span
type: dynamic
- id: consensus-state
tag: xrpl.consensus.state
operator: "="
scope: span
type: dynamic
- id: consensus-close-resolution
tag: xrpl.consensus.close_resolution_ms
operator: "="
scope: span
type: dynamic

View File

@@ -0,0 +1,719 @@
#!/usr/bin/env bash
# Integration test for rippled OpenTelemetry instrumentation.
#
# Launches a 6-node xrpld consensus network with telemetry enabled,
# exercises RPC / transaction / consensus code paths, then verifies
# that the expected spans and metrics appear in Jaeger and Prometheus.
#
# Usage:
# bash docker/telemetry/integration-test.sh
#
# Prerequisites:
# - .build/xrpld built with telemetry=ON
# - docker compose (v2)
# - curl, jq
#
# The script leaves the observability stack and xrpld nodes running
# so you can manually inspect Jaeger (localhost:16686) and Grafana
# (localhost:3000). Run with --cleanup to tear down instead.
set -euo pipefail
# ---------------------------------------------------------------------------
# Configuration
# ---------------------------------------------------------------------------
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
XRPLD="$REPO_ROOT/.build/xrpld"
COMPOSE_FILE="$SCRIPT_DIR/docker-compose.yml"
STANDALONE_CFG="$SCRIPT_DIR/xrpld-telemetry.cfg"
WORKDIR="/tmp/xrpld-integration"
NUM_NODES=6
PEER_PORT_BASE=51235
RPC_PORT_BASE=5005
CONSENSUS_TIMEOUT=120
GENESIS_ACCOUNT="rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh"
GENESIS_SEED="snoPBrXtMeMyMHUVTgbuqAfg1SUTb"
DEST_ACCOUNT="" # Generated dynamically via wallet_propose
JAEGER="http://localhost:16686"
PROM="http://localhost:9090"
# Counters for pass/fail
PASS=0
FAIL=0
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
log() { printf "\033[1;34m[INFO]\033[0m %s\n" "$*"; }
ok() { printf "\033[1;32m[PASS]\033[0m %s\n" "$*"; PASS=$((PASS + 1)); }
fail() { printf "\033[1;31m[FAIL]\033[0m %s\n" "$*"; FAIL=$((FAIL + 1)); }
die() { printf "\033[1;31m[ERROR]\033[0m %s\n" "$*" >&2; exit 1; }
check_span() {
local op="$1"
local count
count=$(curl -sf "$JAEGER/api/traces?service=rippled&operation=$op&limit=5&lookback=1h" \
| jq '.data | length' 2>/dev/null || echo 0)
if [ "$count" -gt 0 ]; then
ok "$op ($count traces)"
else
fail "$op (0 traces)"
fi
}
# Phase 8: Verify trace_id injection in rippled log output.
# Greps all node debug.log files for the "trace_id=<hex> span_id=<hex>"
# pattern that Logs::format() injects when an active OTel span exists.
# Also cross-checks that a trace_id found in logs matches a trace in Jaeger.
check_log_correlation() {
log "Checking log-trace correlation..."
local total_matches=0
local sample_trace_id=""
for i in $(seq 1 "$NUM_NODES"); do
local logfile="$WORKDIR/node$i/debug.log"
if [ ! -f "$logfile" ]; then
continue
fi
local matches
matches=$(grep -c 'trace_id=[a-f0-9]\{32\} span_id=[a-f0-9]\{16\}' "$logfile" 2>/dev/null || echo 0)
total_matches=$((total_matches + matches))
# Capture the first trace_id we find for cross-referencing with Jaeger
if [ -z "$sample_trace_id" ] && [ "$matches" -gt 0 ]; then
sample_trace_id=$(grep -o 'trace_id=[a-f0-9]\{32\}' "$logfile" | head -1 | cut -d= -f2)
fi
done
if [ "$total_matches" -gt 0 ]; then
ok "Log correlation: found $total_matches log lines with trace_id"
else
fail "Log correlation: no trace_id found in any node debug.log"
fi
# Cross-check: verify the sample trace_id exists in Jaeger
if [ -n "$sample_trace_id" ]; then
local trace_found
trace_found=$(curl -sf "$JAEGER/api/traces/$sample_trace_id" \
| jq '.data | length' 2>/dev/null || echo 0)
if [ "$trace_found" -gt 0 ]; then
ok "Log-Jaeger cross-check: trace_id=$sample_trace_id found in Jaeger"
else
fail "Log-Jaeger cross-check: trace_id=$sample_trace_id NOT found in Jaeger"
fi
fi
}
cleanup() {
log "Cleaning up..."
# Kill xrpld nodes
for i in $(seq 1 "$NUM_NODES"); do
local pidfile="$WORKDIR/node$i/xrpld.pid"
if [ -f "$pidfile" ]; then
kill "$(cat "$pidfile")" 2>/dev/null || true
rm -f "$pidfile"
fi
done
# Also kill any straggling xrpld processes from our workdir
pkill -f "$WORKDIR" 2>/dev/null || true
# Stop docker stack
docker compose -f "$COMPOSE_FILE" down 2>/dev/null || true
# Remove workdir
rm -rf "$WORKDIR"
log "Cleanup complete."
}
# Handle --cleanup flag
if [ "${1:-}" = "--cleanup" ]; then
cleanup
exit 0
fi
# ---------------------------------------------------------------------------
# Step 0: Prerequisites
# ---------------------------------------------------------------------------
log "Checking prerequisites..."
command -v docker >/dev/null 2>&1 || die "docker not found"
docker compose version >/dev/null 2>&1 || die "docker compose (v2) not found"
command -v curl >/dev/null 2>&1 || die "curl not found"
command -v jq >/dev/null 2>&1 || die "jq not found"
[ -x "$XRPLD" ] || die "xrpld binary not found at $XRPLD (build with telemetry=ON)"
[ -f "$COMPOSE_FILE" ] || die "docker-compose.yml not found at $COMPOSE_FILE"
[ -f "$STANDALONE_CFG" ] || die "xrpld-telemetry.cfg not found at $STANDALONE_CFG"
log "All prerequisites met."
# ---------------------------------------------------------------------------
# Step 1: Clean previous run
# ---------------------------------------------------------------------------
log "Cleaning previous run data..."
for i in $(seq 1 "$NUM_NODES"); do
pidfile="$WORKDIR/node$i/xrpld.pid"
if [ -f "$pidfile" ]; then
kill "$(cat "$pidfile")" 2>/dev/null || true
fi
done
pkill -f "$WORKDIR" 2>/dev/null || true
# Kill any xrpld using the standalone config (from key generation)
pkill -f "xrpld-telemetry.cfg" 2>/dev/null || true
sleep 2
rm -rf "$WORKDIR"
mkdir -p "$WORKDIR"
# ---------------------------------------------------------------------------
# Step 2: Start observability stack
# ---------------------------------------------------------------------------
log "Starting observability stack..."
docker compose -f "$COMPOSE_FILE" up -d
log "Waiting for otel-collector to be ready..."
for attempt in $(seq 1 30); do
# The OTLP HTTP endpoint returns 405 for GET (expects POST), which
# means it is listening. curl -sf would fail on 405, so we check
# the HTTP status code explicitly.
status=$(curl -so /dev/null -w '%{http_code}' http://localhost:4318/ 2>/dev/null || echo 000)
if [ "$status" != "000" ]; then
log "otel-collector ready (attempt $attempt, HTTP $status)."
break
fi
if [ "$attempt" -eq 30 ]; then
die "otel-collector not ready after 30s"
fi
sleep 1
done
log "Waiting for Jaeger to be ready..."
for attempt in $(seq 1 30); do
if curl -sf "$JAEGER/" >/dev/null 2>&1; then
log "Jaeger ready (attempt $attempt)."
break
fi
if [ "$attempt" -eq 30 ]; then
die "Jaeger not ready after 30s"
fi
sleep 1
done
# ---------------------------------------------------------------------------
# Step 3: Generate validator keys
# ---------------------------------------------------------------------------
log "Generating $NUM_NODES validator key pairs..."
# Start a temporary standalone xrpld for key generation
TEMP_DATA="$WORKDIR/temp-keygen"
mkdir -p "$TEMP_DATA"
# Create a minimal temp config for key generation
TEMP_CFG="$TEMP_DATA/xrpld.cfg"
cat > "$TEMP_CFG" <<EOCFG
[server]
port_rpc_temp
[port_rpc_temp]
port = 5099
ip = 127.0.0.1
admin = 127.0.0.1
protocol = http
[node_db]
type=NuDB
path=$TEMP_DATA/nudb
online_delete=256
[database_path]
$TEMP_DATA/db
[debug_logfile]
$TEMP_DATA/debug.log
[ssl_verify]
0
EOCFG
"$XRPLD" --conf "$TEMP_CFG" -a --start > "$TEMP_DATA/stdout.log" 2>&1 &
TEMP_PID=$!
log "Temporary xrpld started (PID $TEMP_PID), waiting for RPC..."
for attempt in $(seq 1 30); do
if curl -sf http://localhost:5099 -d '{"method":"server_info"}' >/dev/null 2>&1; then
log "Temporary xrpld RPC ready (attempt $attempt)."
break
fi
if [ "$attempt" -eq 30 ]; then
kill "$TEMP_PID" 2>/dev/null || true
die "Temporary xrpld RPC not ready after 30s"
fi
sleep 1
done
declare -a SEEDS
declare -a PUBKEYS
for i in $(seq 1 "$NUM_NODES"); do
result=$(curl -sf http://localhost:5099 -d '{"method":"validation_create"}')
seed=$(echo "$result" | jq -r '.result.validation_seed')
pubkey=$(echo "$result" | jq -r '.result.validation_public_key')
if [ -z "$seed" ] || [ "$seed" = "null" ]; then
kill "$TEMP_PID" 2>/dev/null || true
die "Failed to generate key pair $i"
fi
SEEDS+=("$seed")
PUBKEYS+=("$pubkey")
log " Node $i: $pubkey"
done
kill "$TEMP_PID" 2>/dev/null || true
wait "$TEMP_PID" 2>/dev/null || true
rm -rf "$TEMP_DATA"
log "Key generation complete."
# ---------------------------------------------------------------------------
# Step 4: Generate node configs and validators.txt
# ---------------------------------------------------------------------------
log "Generating node configs..."
# Create shared validators.txt
VALIDATORS_FILE="$WORKDIR/validators.txt"
{
echo "[validators]"
for i in $(seq 0 $((NUM_NODES - 1))); do
echo "${PUBKEYS[$i]}"
done
} > "$VALIDATORS_FILE"
# Create per-node configs
for i in $(seq 1 "$NUM_NODES"); do
NODE_DIR="$WORKDIR/node$i"
mkdir -p "$NODE_DIR/nudb" "$NODE_DIR/db"
RPC_PORT=$((RPC_PORT_BASE + i - 1))
PEER_PORT=$((PEER_PORT_BASE + i - 1))
SEED="${SEEDS[$((i - 1))]}"
# Build ips_fixed list (all peers except self)
IPS_FIXED=""
for j in $(seq 1 "$NUM_NODES"); do
if [ "$j" -ne "$i" ]; then
IPS_FIXED="${IPS_FIXED}127.0.0.1 $((PEER_PORT_BASE + j - 1))
"
fi
done
cat > "$NODE_DIR/xrpld.cfg" <<EOCFG
[server]
port_rpc
port_peer
[port_rpc]
port = $RPC_PORT
ip = 127.0.0.1
admin = 127.0.0.1
protocol = http
[port_peer]
port = $PEER_PORT
ip = 0.0.0.0
protocol = peer
[node_db]
type=NuDB
path=$NODE_DIR/nudb
online_delete=256
[database_path]
$NODE_DIR/db
[debug_logfile]
$NODE_DIR/debug.log
[validation_seed]
$SEED
[validators_file]
$VALIDATORS_FILE
[ips_fixed]
${IPS_FIXED}
[peer_private]
1
[telemetry]
enabled=1
service_instance_id=Node-${i}
endpoint=http://localhost:4318/v1/traces
exporter=otlp_http
sampling_ratio=1.0
batch_size=512
batch_delay_ms=2000
max_queue_size=2048
trace_rpc=1
trace_transactions=1
trace_consensus=1
trace_peer=1
trace_ledger=1
metrics_endpoint=http://localhost:4318/v1/metrics
[insight]
server=otel
endpoint=http://localhost:4318/v1/metrics
prefix=rippled
[rpc_startup]
{ "command": "log_level", "severity": "warning" }
[ssl_verify]
0
EOCFG
log " Node $i config: RPC=$RPC_PORT, Peer=$PEER_PORT"
done
# ---------------------------------------------------------------------------
# Step 5: Start all 6 nodes
# ---------------------------------------------------------------------------
log "Starting $NUM_NODES xrpld nodes..."
for i in $(seq 1 "$NUM_NODES"); do
NODE_DIR="$WORKDIR/node$i"
"$XRPLD" --conf "$NODE_DIR/xrpld.cfg" --start > "$NODE_DIR/stdout.log" 2>&1 &
echo $! > "$NODE_DIR/xrpld.pid"
log " Node $i started (PID $(cat "$NODE_DIR/xrpld.pid"))"
done
# Give nodes a moment to initialize
sleep 5
# ---------------------------------------------------------------------------
# Step 6: Wait for consensus
# ---------------------------------------------------------------------------
log "Waiting for nodes to reach 'proposing' state (timeout: ${CONSENSUS_TIMEOUT}s)..."
start_time=$(date +%s)
nodes_ready=0
while [ "$nodes_ready" -lt "$NUM_NODES" ]; do
elapsed=$(( $(date +%s) - start_time ))
if [ "$elapsed" -ge "$CONSENSUS_TIMEOUT" ]; then
fail "Consensus timeout after ${CONSENSUS_TIMEOUT}s ($nodes_ready/$NUM_NODES nodes ready)"
log "Continuing with partial consensus..."
break
fi
nodes_ready=0
for i in $(seq 1 "$NUM_NODES"); do
RPC_PORT=$((RPC_PORT_BASE + i - 1))
state=$(curl -sf "http://localhost:$RPC_PORT" \
-d '{"method":"server_info"}' 2>/dev/null \
| jq -r '.result.info.server_state' 2>/dev/null || echo "unreachable")
if [ "$state" = "proposing" ]; then
nodes_ready=$((nodes_ready + 1))
fi
done
printf "\r %d/%d nodes proposing (%ds elapsed)..." "$nodes_ready" "$NUM_NODES" "$elapsed"
if [ "$nodes_ready" -lt "$NUM_NODES" ]; then
sleep 3
fi
done
echo ""
if [ "$nodes_ready" -eq "$NUM_NODES" ]; then
ok "All $NUM_NODES nodes reached 'proposing' state"
else
fail "Only $nodes_ready/$NUM_NODES nodes reached 'proposing' state"
fi
# ---------------------------------------------------------------------------
# Step 6b: Wait for validated ledger
# ---------------------------------------------------------------------------
log "Waiting for first validated ledger..."
for attempt in $(seq 1 60); do
val_seq=$(curl -sf "http://localhost:$RPC_PORT_BASE" \
-d '{"method":"server_info"}' 2>/dev/null \
| jq -r '.result.info.validated_ledger.seq // 0' 2>/dev/null || echo 0)
if [ "$val_seq" -gt 2 ] 2>/dev/null; then
ok "First validated ledger: seq $val_seq"
break
fi
if [ "$attempt" -eq 60 ]; then
fail "No validated ledger after 60s"
fi
sleep 1
done
# ---------------------------------------------------------------------------
# Step 7: Exercise RPC spans (Phase 2)
# ---------------------------------------------------------------------------
log "Exercising RPC spans..."
curl -sf "http://localhost:$RPC_PORT_BASE" \
-d '{"method":"server_info"}' > /dev/null
curl -sf "http://localhost:$RPC_PORT_BASE" \
-d '{"method":"server_state"}' > /dev/null
curl -sf "http://localhost:$RPC_PORT_BASE" \
-d '{"method":"ledger","params":[{"ledger_index":"current"}]}' > /dev/null
log "RPC commands sent. Waiting 5s for batch export..."
sleep 5
# ---------------------------------------------------------------------------
# Step 8: Submit transaction (Phase 3)
# ---------------------------------------------------------------------------
log "Submitting Payment transaction..."
# Generate a destination wallet
log " Generating destination wallet..."
wallet_result=$(curl -sf "http://localhost:$RPC_PORT_BASE" \
-d '{"method":"wallet_propose"}')
DEST_ACCOUNT=$(echo "$wallet_result" | jq -r '.result.account_id' 2>/dev/null)
if [ -z "$DEST_ACCOUNT" ] || [ "$DEST_ACCOUNT" = "null" ]; then
fail "Could not generate destination wallet"
DEST_ACCOUNT="rrrrrrrrrrrrrrrrrrrrrhoLvTp" # ACCOUNT_ZERO fallback
fi
log " Destination: $DEST_ACCOUNT"
# Get genesis account info
acct_result=$(curl -sf "http://localhost:$RPC_PORT_BASE" \
-d "{\"method\":\"account_info\",\"params\":[{\"account\":\"$GENESIS_ACCOUNT\"}]}")
seq_num=$(echo "$acct_result" | jq -r '.result.account_data.Sequence' 2>/dev/null || echo "unknown")
log " Genesis account sequence: $seq_num"
# Submit payment
submit_result=$(curl -sf "http://localhost:$RPC_PORT_BASE" -d "{
\"method\": \"submit\",
\"params\": [{
\"secret\": \"$GENESIS_SEED\",
\"tx_json\": {
\"TransactionType\": \"Payment\",
\"Account\": \"$GENESIS_ACCOUNT\",
\"Destination\": \"$DEST_ACCOUNT\",
\"Amount\": \"10000000\"
}
}]
}")
engine_result=$(echo "$submit_result" | jq -r '.result.engine_result' 2>/dev/null || echo "unknown")
tx_hash=$(echo "$submit_result" | jq -r '.result.tx_json.hash' 2>/dev/null || echo "unknown")
if [ "$engine_result" = "tesSUCCESS" ] || [ "$engine_result" = "terQUEUED" ]; then
ok "Transaction submitted: $engine_result (hash: ${tx_hash:0:16}...)"
else
fail "Transaction submission: $engine_result"
log " Full response: $(echo "$submit_result" | jq -c .result 2>/dev/null)"
fi
log "Waiting 15s for consensus round + batch export..."
sleep 15
# ---------------------------------------------------------------------------
# Step 9: Verify Jaeger traces
# ---------------------------------------------------------------------------
log "Verifying spans in Jaeger..."
# Check service registration
services=$(curl -sf "$JAEGER/api/services" | jq -r '.data[]' 2>/dev/null || echo "")
if echo "$services" | grep -q "rippled"; then
ok "Service 'rippled' registered in Jaeger"
else
fail "Service 'rippled' NOT found in Jaeger (found: $services)"
fi
log ""
log "--- Phase 2: RPC Spans ---"
check_span "rpc.request"
check_span "rpc.process"
check_span "rpc.command.server_info"
check_span "rpc.command.server_state"
check_span "rpc.command.ledger"
log ""
log "--- Phase 3: Transaction Spans ---"
check_span "tx.process"
check_span "tx.receive"
check_span "tx.apply"
log ""
log "--- Phase 4: Consensus Spans ---"
check_span "consensus.proposal.send"
check_span "consensus.ledger_close"
check_span "consensus.accept"
check_span "consensus.validation.send"
log ""
log "--- Phase 5: Ledger Spans ---"
check_span "ledger.build"
check_span "ledger.validate"
check_span "ledger.store"
log ""
log "--- Phase 5: Peer Spans (trace_peer=1) ---"
check_span "peer.proposal.receive"
check_span "peer.validation.receive"
# ---------------------------------------------------------------------------
# Step 9b: Verify log-trace correlation (Phase 8)
# ---------------------------------------------------------------------------
log ""
log "--- Phase 8: Log-Trace Correlation ---"
check_log_correlation
# ---------------------------------------------------------------------------
# Step 10: Verify Prometheus spanmetrics
# ---------------------------------------------------------------------------
log ""
log "--- Phase 5: Spanmetrics ---"
log "Waiting 20s for Prometheus scrape cycle..."
sleep 20
calls_count=$(curl -sf "$PROM/api/v1/query?query=traces_span_metrics_calls_total" \
| jq '.data.result | length' 2>/dev/null || echo 0)
if [ "$calls_count" -gt 0 ]; then
ok "Prometheus: traces_span_metrics_calls_total ($calls_count series)"
else
fail "Prometheus: traces_span_metrics_calls_total (0 series)"
fi
duration_count=$(curl -sf "$PROM/api/v1/query?query=traces_span_metrics_duration_milliseconds_count" \
| jq '.data.result | length' 2>/dev/null || echo 0)
if [ "$duration_count" -gt 0 ]; then
ok "Prometheus: duration histogram ($duration_count series)"
else
fail "Prometheus: duration histogram (0 series)"
fi
# Check Grafana
if curl -sf http://localhost:3000/api/health > /dev/null 2>&1; then
ok "Grafana: healthy at localhost:3000"
else
fail "Grafana: not reachable at localhost:3000"
fi
# ---------------------------------------------------------------------------
# Step 10b: Verify native OTel metrics in Prometheus (beast::insight)
# ---------------------------------------------------------------------------
log ""
log "--- Phase 7: Native OTel Metrics (beast::insight via OTLP) ---"
log "Waiting 20s for OTLP metric export + Prometheus scrape..."
sleep 20
check_otel_metric() {
local metric_name="$1"
local result
# Use -G with --data-urlencode so PromQL label selectors (braces and quotes,
# e.g. rippled_cache_metrics{metric="SLE_hit_rate"}) reach Prometheus intact
# instead of being mangled by curl's URL globbing.
result=$(curl -sf -G "$PROM/api/v1/query" --data-urlencode "query=$metric_name" \
| jq '.data.result | length' 2>/dev/null || echo 0)
if [ "$result" -gt 0 ]; then
ok "OTel: $metric_name ($result series)"
else
fail "OTel: $metric_name (0 series)"
fi
}
# Node health gauges (ObservableGauge — no _total suffix)
check_otel_metric "rippled_LedgerMaster_Validated_Ledger_Age"
check_otel_metric "rippled_LedgerMaster_Published_Ledger_Age"
check_otel_metric "rippled_job_count"
# State accounting
check_otel_metric "rippled_State_Accounting_Full_duration"
# Peer finder
check_otel_metric "rippled_Peer_Finder_Active_Inbound_Peers"
check_otel_metric "rippled_Peer_Finder_Active_Outbound_Peers"
# RPC counters (Counter — Prometheus adds _total suffix automatically)
check_otel_metric "rippled_rpc_requests_total"
# Overlay traffic
check_otel_metric "rippled_total_Bytes_In"
# Verify StatsD receiver is NOT required (no statsd receiver in pipeline)
log ""
log "--- Verify StatsD receiver is not required ---"
statsd_port_check=$(curl -sf "http://localhost:8125" 2>&1 || echo "refused")
if echo "$statsd_port_check" | grep -qi "refused\|error\|connection"; then
ok "StatsD port 8125 is not listening (not required)"
else
fail "StatsD port 8125 appears to be listening (should not be needed)"
fi
# ---------------------------------------------------------------------------
# Step 10c: Verify Phase 9 OTel SDK Metrics
# ---------------------------------------------------------------------------
log ""
log "--- Phase 9: OTel SDK Metrics (MetricsRegistry) ---"
log "Waiting 15s for OTel metric export + Prometheus scrape..."
sleep 15
# check_otel_metric() was defined in Step 10b above; reuse it here.
# Task 9.1: NodeStore I/O
check_otel_metric 'rippled_nodestore_state{metric="node_reads_total"}'
check_otel_metric 'rippled_nodestore_state{metric="write_load"}'
# Task 9.2: Cache hit rates
check_otel_metric 'rippled_cache_metrics{metric="SLE_hit_rate"}'
check_otel_metric 'rippled_cache_metrics{metric="treenode_cache_size"}'
# Task 9.3: TxQ metrics
check_otel_metric 'rippled_txq_metrics{metric="txq_count"}'
check_otel_metric 'rippled_txq_metrics{metric="txq_reference_fee_level"}'
# Task 9.4: Per-RPC metrics
check_otel_metric "rippled_rpc_method_started_total"
check_otel_metric "rippled_rpc_method_finished_total"
# Task 9.5: Per-job metrics
check_otel_metric "rippled_job_queued_total"
check_otel_metric "rippled_job_finished_total"
# Task 9.6: Counted object instances
check_otel_metric "rippled_object_count"
# Task 9.7: Load factor breakdown
check_otel_metric 'rippled_load_factor_metrics{metric="load_factor"}'
check_otel_metric 'rippled_load_factor_metrics{metric="load_factor_server"}'
# ---------------------------------------------------------------------------
# Step 11: Summary
# ---------------------------------------------------------------------------
echo ""
echo "==========================================================="
echo " INTEGRATION TEST RESULTS"
echo "==========================================================="
printf " \033[1;32mPASSED: %d\033[0m\n" "$PASS"
printf " \033[1;31mFAILED: %d\033[0m\n" "$FAIL"
echo "==========================================================="
echo ""
echo " Observability stack is running:"
echo ""
echo " Jaeger UI: http://localhost:16686"
echo " Grafana: http://localhost:3000"
echo " Prometheus: http://localhost:9090"
echo " Loki: http://localhost:3100"
echo ""
echo " xrpld nodes (6) are running:"
for i in $(seq 1 "$NUM_NODES"); do
RPC_PORT=$((RPC_PORT_BASE + i - 1))
PEER_PORT=$((PEER_PORT_BASE + i - 1))
echo " Node $i: RPC=localhost:$RPC_PORT Peer=:$PEER_PORT PID=$(cat "$WORKDIR/node$i/xrpld.pid" 2>/dev/null || echo 'unknown')"
done
echo ""
echo " To tear down:"
echo " bash docker/telemetry/integration-test.sh --cleanup"
echo ""
echo "==========================================================="
if [ "$FAIL" -gt 0 ]; then
exit 1
fi

View File

@@ -0,0 +1,120 @@
# OpenTelemetry Collector configuration for rippled development.
#
# Pipelines:
# traces: OTLP receiver -> batch processor -> debug + Jaeger + Tempo + spanmetrics
# metrics: OTLP receiver + spanmetrics connector -> Prometheus exporter
# logs: filelog receiver -> batch processor -> otlphttp/Loki (Phase 8)
#
# rippled sends traces via OTLP/HTTP to port 4318. The collector batches
# them, forwards to both Jaeger and Tempo, and derives RED metrics via the
# spanmetrics connector, which Prometheus scrapes on port 8889.
#
# rippled sends beast::insight metrics natively via OTLP/HTTP to port 4318
# (same endpoint as traces). The OTLP receiver feeds both the traces and
# metrics pipelines. Metrics are exported to Prometheus alongside
# span-derived metrics.
#
# For backward compatibility, the StatsD receiver config is preserved below
# but commented out. If you need StatsD fallback (server=statsd in
# [insight]), uncomment the statsd receiver and add it to the metrics
# pipeline receivers list.
#
# Phase 8: The filelog receiver tails rippled's debug.log files under
# /var/log/rippled/ (mounted from the host). A regex_parser operator
# extracts timestamp, partition, severity, and optional trace_id/span_id
# fields injected by Logs::format(). Parsed logs are exported to Grafana
# Loki for log-trace correlation.
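#
# Example rippled configuration that targets this collector (mirrors the
# [telemetry] section used by the workload and benchmark scripts; values
# are illustrative):
#   [telemetry]
#   enabled=1
#   exporter=otlp_http
#   endpoint=http://localhost:4318/v1/traces
#   service_instance_id=validator-1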
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
# StatsD receiver — kept for backward compatibility with server=statsd.
# Uncomment and add "statsd" to metrics pipeline receivers if needed.
# statsd:
# endpoint: "0.0.0.0:8125"
# aggregation_interval: 15s
# enable_metric_type: true
# is_monotonic_counter: true
# timer_histogram_mapping:
# - statsd_type: "timing"
# observer_type: "summary"
# summary:
# percentiles: [0, 50, 90, 95, 99, 100]
# - statsd_type: "histogram"
# observer_type: "summary"
# summary:
# percentiles: [0, 50, 90, 95, 99, 100]
# Phase 8: Filelog receiver tails rippled debug.log files for log-trace
# correlation. Extracts structured fields (timestamp, partition, severity,
# trace_id, span_id, message) via regex. The trace_id and span_id are
# optional — only present when the log was emitted within an active span.
filelog:
include: [/var/log/rippled/*/debug.log]
operators:
- type: regex_parser
regex: '^(?P<timestamp>\S+)\s+(?P<partition>\S+):(?P<severity>\S+)\s+(?:trace_id=(?P<trace_id>[a-f0-9]+)\s+span_id=(?P<span_id>[a-f0-9]+)\s+)?(?P<message>.*)$'
timestamp:
parse_from: attributes.timestamp
layout: "%Y-%b-%d %H:%M:%S.%f"
processors:
batch:
timeout: 1s
send_batch_size: 100
connectors:
spanmetrics:
# Expose service.instance.id (node public key) as a Prometheus label so
# Grafana dashboards can filter metrics by individual node.
resource_metrics_key_attributes:
- service.instance.id
histogram:
explicit:
buckets: [1ms, 5ms, 10ms, 25ms, 50ms, 100ms, 250ms, 500ms, 1s, 5s]
dimensions:
- name: xrpl.rpc.command
- name: xrpl.rpc.status
- name: xrpl.consensus.mode
- name: xrpl.tx.local
- name: xrpl.peer.proposal.trusted
- name: xrpl.peer.validation.trusted
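# The derived series are exposed to Prometheus (port 8889) as
# traces_span_metrics_calls_total and traces_span_metrics_duration_milliseconds_*
# with the dimensions above flattened to labels such as xrpl_rpc_command
# (see expected_metrics.json in the workload tools).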
exporters:
debug:
verbosity: detailed
otlp/jaeger:
endpoint: jaeger:4317
tls:
insecure: true
otlp/tempo:
endpoint: tempo:4317
tls:
insecure: true
# Phase 8: Export logs to Grafana Loki via OTLP/HTTP. Loki 3.x supports
# native OTLP ingestion on its /otlp endpoint, replacing the removed
# loki exporter (dropped in otel-collector-contrib v0.147.0).
otlphttp/loki:
endpoint: http://loki:3100/otlp
prometheus:
endpoint: 0.0.0.0:8889
service:
pipelines:
traces:
receivers: [otlp]
processors: [batch]
exporters: [debug, otlp/jaeger, otlp/tempo, spanmetrics]
metrics:
receivers: [otlp, spanmetrics]
processors: [batch]
exporters: [prometheus]
# Phase 8: Log pipeline ingests rippled debug.log via filelog receiver,
# batches entries, and exports to Loki for log-trace correlation.
logs:
receivers: [filelog]
processors: [batch]
exporters: [otlphttp/loki]

View File

@@ -0,0 +1,9 @@
# Prometheus configuration for scraping spanmetrics from OTel Collector.
global:
scrape_interval: 15s
evaluation_interval: 15s
scrape_configs:
- job_name: otel-collector
static_configs:
- targets: ["otel-collector:8889"]

View File

@@ -0,0 +1,59 @@
# Grafana Tempo configuration for rippled telemetry stack.
#
# Runs in single-binary mode for local development.
# Receives traces via OTLP/gRPC from the OTel Collector and stores
# them locally. Queryable via Grafana Explore using the Tempo datasource.
#
# Search filters are configured on the Grafana datasource side
# (grafana/provisioning/datasources/tempo.yaml). Tempo auto-indexes
# all span attributes for search in single-binary mode.
#
# For production, replace local storage with S3/GCS backend and adjust
# retention via the compactor settings. See:
# https://grafana.com/docs/tempo/latest/configuration/
stream_over_http_enabled: true
server:
http_listen_port: 3200
distributor:
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
ingester:
max_block_duration: 5m
compactor:
compaction:
block_retention: 1h
# Enable metrics generator for service graph and span metrics.
# Produces RED metrics (rate, errors, duration) per service/span,
# feeding Grafana's service map visualization.
metrics_generator:
registry:
external_labels:
source: tempo
storage:
path: /var/tempo/generator/wal
remote_write:
- url: http://prometheus:9090/api/v1/write
overrides:
defaults:
metrics_generator:
processors:
- service-graphs
- span-metrics
storage:
trace:
backend: local
wal:
path: /var/tempo/wal
local:
path: /var/tempo/blocks

View File

@@ -0,0 +1,197 @@
# Telemetry Workload Tools
Synthetic workload generation and validation tools for rippled's OpenTelemetry telemetry stack. Under controlled load, these tools verify that all expected spans, metrics, and dashboards are produced and that log-trace correlation works end-to-end.
## Quick Start
```bash
# Build rippled with telemetry enabled
conan install . --build=missing -o telemetry=True
cmake --preset default -Dtelemetry=ON
cmake --build --preset default
# Run full validation (starts everything, runs load, validates)
docker/telemetry/workload/run-full-validation.sh --xrpld .build/xrpld
# Cleanup when done
docker/telemetry/workload/run-full-validation.sh --cleanup
```
## Architecture
```
run-full-validation.sh (orchestrator)
|
|-- docker-compose.workload.yaml
| |-- otel-collector (traces + StatsD)
| |-- jaeger (trace search)
| |-- tempo (trace storage)
| |-- prometheus (metrics)
| |-- loki (log aggregation)
| |-- grafana (dashboards)
|
|-- generate-validator-keys.sh
| -> validator-keys.json, validators.txt
|
|-- 5x xrpld nodes (local processes, full telemetry)
|
|-- rpc_load_generator.py (WebSocket RPC traffic)
|-- tx_submitter.py (transaction diversity)
|
|-- validate_telemetry.py (pass/fail checks)
| -> validation-report.json
|
|-- benchmark.sh (baseline vs telemetry comparison)
-> benchmark-report-*.md
```
## Tools Reference
### run-full-validation.sh
Orchestrates the complete validation pipeline. Starts the telemetry stack, starts a multi-node rippled cluster, generates load, and validates the results.
```bash
# Full validation with defaults
./run-full-validation.sh --xrpld /path/to/xrpld
# Custom load parameters
./run-full-validation.sh --xrpld /path/to/xrpld \
--rpc-rate 100 --rpc-duration 300 \
--tx-tps 10 --tx-duration 300
# Include performance benchmarks
./run-full-validation.sh --xrpld /path/to/xrpld --with-benchmark
# Skip Loki checks (if Phase 8 not deployed)
./run-full-validation.sh --xrpld /path/to/xrpld --skip-loki
```
### rpc_load_generator.py
Generates RPC traffic matching realistic production distribution:
- 40% health checks (server_info, fee)
- 30% wallet queries (account_info, account_lines, account_objects)
- 15% explorer queries (ledger, ledger_data)
- 10% transaction lookups (tx, account_tx)
- 5% DEX queries (book_offers, amm_info)
```bash
# Basic usage
python3 rpc_load_generator.py --endpoints ws://localhost:6006 --rate 50 --duration 120
# Multiple endpoints (round-robin)
python3 rpc_load_generator.py \
--endpoints ws://localhost:6006 ws://localhost:6007 \
--rate 100 --duration 300
# Custom weights
python3 rpc_load_generator.py --endpoints ws://localhost:6006 \
--weights '{"server_info": 80, "account_info": 20}'
```
### tx_submitter.py
Submits diverse transaction types to exercise the full span and metric surface:
- Payment (XRP transfers)
- OfferCreate / OfferCancel (DEX activity)
- TrustSet (trust line creation)
- NFTokenMint / NFTokenCreateOffer (NFT activity)
- EscrowCreate / EscrowFinish (escrow lifecycle)
- AMMCreate / AMMDeposit (AMM pool operations)
```bash
# Basic usage
python3 tx_submitter.py --endpoint ws://localhost:6006 --tps 5 --duration 120
# Custom mix
python3 tx_submitter.py --endpoint ws://localhost:6006 \
--weights '{"Payment": 60, "OfferCreate": 20, "TrustSet": 20}'
```
### validate_telemetry.py
Automated validation that all expected telemetry data exists:
- **Span validation**: All 17 span types with their required attributes (see `expected_spans.json`)
- **Metric validation**: SpanMetrics, StatsD, Phase 9 metrics
- **Log-trace correlation**: trace_id/span_id in Loki logs
- **Dashboard validation**: All 10 Grafana dashboards accessible
```bash
# Run all validations
python3 validate_telemetry.py --report /tmp/report.json
# Skip Loki checks
python3 validate_telemetry.py --skip-loki --report /tmp/report.json
```
### benchmark.sh
Compares baseline (no telemetry) vs telemetry-enabled performance:
```bash
./benchmark.sh --xrpld /path/to/xrpld --duration 300
```
Thresholds (configurable via environment variables; see the example after the table):
| Metric | Threshold | Env Variable |
| ----------------- | --------- | --------------------------- |
| CPU overhead | < 3% | BENCH_CPU_OVERHEAD_PCT |
| Memory overhead | < 5MB | BENCH_MEM_OVERHEAD_MB |
| RPC p99 latency | < 2ms | BENCH_RPC_LATENCY_IMPACT_MS |
| Throughput impact | < 5% | BENCH_TPS_IMPACT_PCT |
| Consensus impact | < 1% | BENCH_CONSENSUS_IMPACT_PCT |
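For a one-off stricter run, the thresholds can be overridden inline; the values below are illustrative, not recommendations:
```bash
# Tighten the CPU and memory budgets for a single benchmark run
BENCH_CPU_OVERHEAD_PCT=2 BENCH_MEM_OVERHEAD_MB=3 \
  ./benchmark.sh --xrpld /path/to/xrpld --duration 300
```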
## Reading Validation Reports
The validation report (`validation-report.json`) is structured as:
```json
{
"summary": {
"total": 45,
"passed": 42,
"failed": 3,
"all_passed": false
},
"checks": [
{
"name": "span.rpc.request",
"category": "span",
"passed": true,
"message": "rpc.request: 15 traces found",
"details": { "trace_count": 15 }
}
]
}
```
Categories:
- **span**: Span type existence and attribute validation
- **metric**: Prometheus metric existence
- **log**: Log-trace correlation checks
- **dashboard**: Grafana dashboard accessibility
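To triage a failed run quickly, the failed checks can be filtered out of the report with `jq` (the path below is whatever was passed to `--report`):
```bash
# List every failed check with its category, name, and message
jq -r '.checks[] | select(.passed == false) | "\(.category)\t\(.name)\t\(.message)"' /tmp/report.json
```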
## CI Integration
The validation runs as a GitHub Actions workflow (`.github/workflows/telemetry-validation.yml`):
- Triggered manually or on pushes to telemetry branches
- Builds rippled, starts the full stack, runs load, validates
- Uploads reports as artifacts
- Posts summary to PR
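Assuming the workflow also exposes a `workflow_dispatch` trigger, a run can be started manually with the GitHub CLI:
```bash
# Kick off the telemetry validation workflow from the command line (requires gh auth)
gh workflow run telemetry-validation.yml
```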
## Configuration Files
| File | Purpose |
| ------------------------------ | ----------------------------------------------- |
| `expected_spans.json` | Span inventory (names, attributes, hierarchies) |
| `expected_metrics.json` | Metric inventory (SpanMetrics, StatsD, Phase 9) |
| `test_accounts.json` | Test account roles (keys generated at runtime) |
| `xrpld-validator.cfg.template` | Node config template with placeholders |
| `requirements.txt` | Python dependencies |

View File

@@ -0,0 +1,379 @@
#!/usr/bin/env bash
# benchmark.sh — Performance benchmark for rippled telemetry overhead.
#
# Runs two identical workloads against a rippled cluster:
# 1. Baseline: telemetry disabled ([telemetry] enabled=0)
# 2. Telemetry: full telemetry enabled (traces + StatsD + all categories)
#
# Compares CPU, memory, RPC latency, TPS, and consensus round time.
# Outputs a Markdown table with pass/fail against configured thresholds.
#
# Usage:
# ./benchmark.sh --xrpld /path/to/xrpld --duration 300
#
# Thresholds (configurable via environment variables):
# BENCH_CPU_OVERHEAD_PCT=3 CPU overhead < 3%
# BENCH_MEM_OVERHEAD_MB=5 Memory overhead < 5MB
# BENCH_RPC_LATENCY_IMPACT_MS=2 RPC p99 latency impact < 2ms
# BENCH_TPS_IMPACT_PCT=5 Throughput impact < 5%
# BENCH_CONSENSUS_IMPACT_PCT=1 Consensus round time impact < 1%
set -euo pipefail
# ---------------------------------------------------------------------------
# Colored output helpers
# ---------------------------------------------------------------------------
log() { printf "\033[1;34m[BENCH]\033[0m %s\n" "$*"; }
ok() { printf "\033[1;32m[BENCH]\033[0m %s\n" "$*"; }
warn() { printf "\033[1;33m[BENCH]\033[0m %s\n" "$*"; }
fail() { printf "\033[1;31m[BENCH]\033[0m %s\n" "$*"; }
die() { printf "\033[1;31m[BENCH]\033[0m %s\n" "$*" >&2; exit 1; }
# ---------------------------------------------------------------------------
# Defaults and thresholds
# ---------------------------------------------------------------------------
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd "$SCRIPT_DIR/../../.." && pwd)"
# Configurable thresholds via environment variables.
CPU_THRESHOLD="${BENCH_CPU_OVERHEAD_PCT:-3}"
MEM_THRESHOLD="${BENCH_MEM_OVERHEAD_MB:-5}"
RPC_THRESHOLD="${BENCH_RPC_LATENCY_IMPACT_MS:-2}"
TPS_THRESHOLD="${BENCH_TPS_IMPACT_PCT:-5}"
CONSENSUS_THRESHOLD="${BENCH_CONSENSUS_IMPACT_PCT:-1}"
XRPLD="${BENCH_XRPLD:-$REPO_ROOT/.build/xrpld}"
DURATION=300
NUM_NODES=3
WORKDIR="/tmp/xrpld-benchmark"
RESULTS_DIR="$SCRIPT_DIR/benchmark-results"
RPC_PORT_BASE=5020
PEER_PORT_BASE=51250
# ---------------------------------------------------------------------------
# Argument parsing
# ---------------------------------------------------------------------------
usage() {
echo "Usage: $0 [OPTIONS]"
echo ""
echo "Options:"
echo " --xrpld PATH Path to xrpld binary (default: \$REPO_ROOT/.build/xrpld)"
echo " --duration SECS Benchmark duration per run (default: 300)"
echo " --nodes NUM Number of validator nodes (default: 3)"
echo " --output DIR Results output directory"
echo " -h, --help Show this help"
exit 0
}
while [ $# -gt 0 ]; do
case "$1" in
--xrpld) XRPLD="$2"; shift 2 ;;
--duration) DURATION="$2"; shift 2 ;;
--nodes) NUM_NODES="$2"; shift 2 ;;
--output) RESULTS_DIR="$2"; shift 2 ;;
-h|--help) usage ;;
*) die "Unknown option: $1" ;;
esac
done
# Validate prerequisites.
[ -x "$XRPLD" ] || die "xrpld not found at $XRPLD"
command -v jq >/dev/null 2>&1 || die "jq not found"
command -v bc >/dev/null 2>&1 || die "bc not found"
command -v curl >/dev/null 2>&1 || die "curl not found"
mkdir -p "$RESULTS_DIR"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
# ---------------------------------------------------------------------------
# Node cluster management
# ---------------------------------------------------------------------------
start_cluster() {
local telemetry_enabled="$1"
local label="$2"
log "Starting $NUM_NODES-node cluster ($label, telemetry=$telemetry_enabled)..."
rm -rf "$WORKDIR"
mkdir -p "$WORKDIR"
# Generate validator keys via a temporary standalone xrpld (see generate-validator-keys.sh).
bash "$SCRIPT_DIR/generate-validator-keys.sh" "$XRPLD" "$NUM_NODES" "$WORKDIR"
# Build per-node configs.
for i in $(seq 1 "$NUM_NODES"); do
local node_dir="$WORKDIR/node$i"
mkdir -p "$node_dir/nudb" "$node_dir/db"
local rpc_port=$((RPC_PORT_BASE + i - 1))
local peer_port=$((PEER_PORT_BASE + i - 1))
local seed
seed=$(jq -r ".[$((i-1))].seed" "$WORKDIR/validator-keys.json")
# Build ips_fixed list.
local ips_fixed=""
for j in $(seq 1 "$NUM_NODES"); do
if [ "$j" -ne "$i" ]; then
ips_fixed="${ips_fixed}127.0.0.1 $((PEER_PORT_BASE + j - 1))
"
fi
done
# Build telemetry section.
local telemetry_section=""
if [ "$telemetry_enabled" = "1" ]; then
telemetry_section="
[telemetry]
enabled=1
service_instance_id=bench-node-${i}
endpoint=http://localhost:4318/v1/traces
exporter=otlp_http
sampling_ratio=1.0
batch_size=512
batch_delay_ms=2000
max_queue_size=2048
trace_rpc=1
trace_transactions=1
trace_consensus=1
trace_peer=1
trace_ledger=1
[insight]
server=statsd
address=127.0.0.1:8125
prefix=rippled"
else
telemetry_section="
[telemetry]
enabled=0"
fi
cat > "$node_dir/xrpld.cfg" <<EOCFG
[server]
port_rpc
port_peer
[port_rpc]
port = $rpc_port
ip = 127.0.0.1
admin = 127.0.0.1
protocol = http
[port_peer]
port = $peer_port
ip = 0.0.0.0
protocol = peer
[node_db]
type=NuDB
path=$node_dir/nudb
online_delete=256
[database_path]
$node_dir/db
[debug_logfile]
$node_dir/debug.log
[validation_seed]
$seed
[validators_file]
$WORKDIR/validators.txt
[ips_fixed]
${ips_fixed}
[peer_private]
1
${telemetry_section}
[rpc_startup]
{ "command": "log_level", "severity": "warning" }
[ssl_verify]
0
EOCFG
"$XRPLD" --conf "$node_dir/xrpld.cfg" --start > "$node_dir/stdout.log" 2>&1 &
echo $! > "$node_dir/xrpld.pid"
done
# Wait for consensus.
log "Waiting for consensus..."
for attempt in $(seq 1 120); do
local ready=0
for i in $(seq 1 "$NUM_NODES"); do
local port=$((RPC_PORT_BASE + i - 1))
local state
state=$(curl -sf "http://localhost:$port" \
-d '{"method":"server_info"}' 2>/dev/null \
| jq -r '.result.info.server_state' 2>/dev/null || echo "")
if [ "$state" = "proposing" ]; then
ready=$((ready + 1))
fi
done
if [ "$ready" -ge "$NUM_NODES" ]; then
ok "All $NUM_NODES nodes proposing (attempt $attempt)"
break
fi
if [ "$attempt" -eq 120 ]; then
warn "Consensus timeout — $ready/$NUM_NODES nodes ready"
fi
sleep 1
done
# Let the cluster stabilize.
sleep 5
}
stop_cluster() {
log "Stopping cluster..."
for i in $(seq 1 "$NUM_NODES"); do
local pidfile="$WORKDIR/node$i/xrpld.pid"
if [ -f "$pidfile" ]; then
kill "$(cat "$pidfile")" 2>/dev/null || true
fi
done
pkill -f "$WORKDIR" 2>/dev/null || true
sleep 3
}
# Build RPC ports CSV string.
rpc_ports_csv() {
local ports=""
for i in $(seq 1 "$NUM_NODES"); do
[ -n "$ports" ] && ports="$ports,"
ports="$ports$((RPC_PORT_BASE + i - 1))"
done
echo "$ports"
}
# ---------------------------------------------------------------------------
# Run benchmark
# ---------------------------------------------------------------------------
log "="
log " rippled Telemetry Performance Benchmark"
log " Nodes: $NUM_NODES | Duration: ${DURATION}s | Binary: $XRPLD"
log "="
# --- Baseline run ---
BASELINE_FILE="$RESULTS_DIR/baseline-${TIMESTAMP}.json"
start_cluster "0" "baseline"
bash "$SCRIPT_DIR/collect_system_metrics.sh" "$(rpc_ports_csv)" "$DURATION" "$BASELINE_FILE"
stop_cluster
# --- Telemetry run ---
TELEMETRY_FILE="$RESULTS_DIR/telemetry-${TIMESTAMP}.json"
start_cluster "1" "telemetry"
bash "$SCRIPT_DIR/collect_system_metrics.sh" "$(rpc_ports_csv)" "$DURATION" "$TELEMETRY_FILE"
stop_cluster
# ---------------------------------------------------------------------------
# Compare results
# ---------------------------------------------------------------------------
log "Comparing results..."
read_metric() {
local file="$1"
local key="$2"
jq -r ".$key // 0" "$file"
}
BASE_CPU=$(read_metric "$BASELINE_FILE" "cpu_pct_avg")
TELE_CPU=$(read_metric "$TELEMETRY_FILE" "cpu_pct_avg")
CPU_DELTA=$(echo "scale=2; $TELE_CPU - $BASE_CPU" | bc 2>/dev/null || echo "0")
BASE_MEM=$(read_metric "$BASELINE_FILE" "memory_rss_mb_peak")
TELE_MEM=$(read_metric "$TELEMETRY_FILE" "memory_rss_mb_peak")
MEM_DELTA=$(echo "scale=2; $TELE_MEM - $BASE_MEM" | bc 2>/dev/null || echo "0")
BASE_RPC=$(read_metric "$BASELINE_FILE" "rpc_p99_ms")
TELE_RPC=$(read_metric "$TELEMETRY_FILE" "rpc_p99_ms")
RPC_DELTA=$(echo "scale=2; $TELE_RPC - $BASE_RPC" | bc 2>/dev/null || echo "0")
BASE_TPS=$(read_metric "$BASELINE_FILE" "tps")
TELE_TPS=$(read_metric "$TELEMETRY_FILE" "tps")
if [ "$(echo "$BASE_TPS > 0" | bc 2>/dev/null)" = "1" ]; then
TPS_IMPACT=$(echo "scale=2; ($BASE_TPS - $TELE_TPS) / $BASE_TPS * 100" | bc 2>/dev/null || echo "0")
else
TPS_IMPACT="0"
fi
BASE_CONS=$(read_metric "$BASELINE_FILE" "consensus_round_p95_ms")
TELE_CONS=$(read_metric "$TELEMETRY_FILE" "consensus_round_p95_ms")
if [ "$(echo "$BASE_CONS > 0" | bc 2>/dev/null)" = "1" ]; then
CONS_IMPACT=$(echo "scale=2; ($TELE_CONS - $BASE_CONS) / $BASE_CONS * 100" | bc 2>/dev/null || echo "0")
else
CONS_IMPACT="0"
fi
# ---------------------------------------------------------------------------
# Pass/fail checks
# ---------------------------------------------------------------------------
PASS_COUNT=0
FAIL_COUNT=0
check_threshold() {
local name="$1"
local actual="$2"
local threshold="$3"
local unit="$4"
# Compare: actual <= threshold. Status messages go to stderr so that the
# command substitution below captures only the PASS/FAIL token; counters
# are updated by the caller because $(...) runs in a subshell and any
# increments made here would be lost.
if [ "$(echo "$actual <= $threshold" | bc 2>/dev/null)" = "1" ]; then
ok "$name: ${actual}${unit} <= ${threshold}${unit} PASS" >&2
echo "PASS"
else
fail "$name: ${actual}${unit} > ${threshold}${unit} FAIL" >&2
echo "FAIL"
fi
}
record_result() {
if [ "$1" = "PASS" ]; then
PASS_COUNT=$((PASS_COUNT + 1))
else
FAIL_COUNT=$((FAIL_COUNT + 1))
fi
}
CPU_RESULT=$(check_threshold "CPU overhead" "$CPU_DELTA" "$CPU_THRESHOLD" "%")
record_result "$CPU_RESULT"
MEM_RESULT=$(check_threshold "Memory overhead" "$MEM_DELTA" "$MEM_THRESHOLD" "MB")
record_result "$MEM_RESULT"
RPC_RESULT=$(check_threshold "RPC p99 impact" "$RPC_DELTA" "$RPC_THRESHOLD" "ms")
record_result "$RPC_RESULT"
TPS_RESULT=$(check_threshold "TPS impact" "$TPS_IMPACT" "$TPS_THRESHOLD" "%")
record_result "$TPS_RESULT"
CONS_RESULT=$(check_threshold "Consensus impact" "$CONS_IMPACT" "$CONSENSUS_THRESHOLD" "%")
record_result "$CONS_RESULT"
# ---------------------------------------------------------------------------
# Output Markdown table
# ---------------------------------------------------------------------------
REPORT_FILE="$RESULTS_DIR/benchmark-report-${TIMESTAMP}.md"
cat > "$REPORT_FILE" <<EOMD
# Telemetry Performance Benchmark Report
**Date**: $(date -u +"%Y-%m-%d %H:%M:%S UTC")
**Nodes**: $NUM_NODES | **Duration**: ${DURATION}s per run
**Binary**: $XRPLD
## Results
| Metric | Baseline | Telemetry | Delta | Threshold | Result |
|--------|----------|-----------|-------|-----------|--------|
| CPU (avg %) | ${BASE_CPU}% | ${TELE_CPU}% | ${CPU_DELTA}% | < ${CPU_THRESHOLD}% | ${CPU_RESULT} |
| Memory RSS (peak MB) | ${BASE_MEM} MB | ${TELE_MEM} MB | ${MEM_DELTA} MB | < ${MEM_THRESHOLD} MB | ${MEM_RESULT} |
| RPC p99 Latency (ms) | ${BASE_RPC} ms | ${TELE_RPC} ms | ${RPC_DELTA} ms | < ${RPC_THRESHOLD} ms | ${RPC_RESULT} |
| Throughput (TPS) | ${BASE_TPS} | ${TELE_TPS} | ${TPS_IMPACT}% | < ${TPS_THRESHOLD}% | ${TPS_RESULT} |
| Consensus Round p95 (ms) | ${BASE_CONS} ms | ${TELE_CONS} ms | ${CONS_IMPACT}% | < ${CONSENSUS_THRESHOLD}% | ${CONS_RESULT} |
## Summary
- **Passed**: $PASS_COUNT / $((PASS_COUNT + FAIL_COUNT))
- **Failed**: $FAIL_COUNT / $((PASS_COUNT + FAIL_COUNT))
## Raw Data
- Baseline: \`$(basename "$BASELINE_FILE")\`
- Telemetry: \`$(basename "$TELEMETRY_FILE")\`
EOMD
ok "Benchmark report written to $REPORT_FILE"
cat "$REPORT_FILE"
# Exit with failure if any check failed.
if [ "$FAIL_COUNT" -gt 0 ]; then
exit 1
fi

View File

@@ -0,0 +1,233 @@
#!/usr/bin/env bash
# collect_system_metrics.sh — Collect CPU, memory, and RPC latency metrics
# from running xrpld nodes for benchmark comparison.
#
# Samples system metrics at regular intervals and writes a JSON summary.
# Used by benchmark.sh for baseline vs telemetry comparison.
#
# Usage:
# ./collect_system_metrics.sh <rpc_ports_csv> <duration_seconds> <output_file>
#
# Example:
# ./collect_system_metrics.sh "5005,5006,5007" 300 /tmp/metrics-baseline.json
#
# Output JSON format:
# {
# "cpu_pct_avg": 12.5,
# "memory_rss_mb_peak": 450.2,
# "rpc_p99_ms": 15.3,
# "tps": 4.8,
# "consensus_round_p95_ms": 3200,
# "samples": 60
# }
set -euo pipefail
# ---------------------------------------------------------------------------
# Colored output helpers
# ---------------------------------------------------------------------------
log() { printf "\033[1;34m[METRICS]\033[0m %s\n" "$*"; }
ok() { printf "\033[1;32m[METRICS]\033[0m %s\n" "$*"; }
die() { printf "\033[1;31m[METRICS]\033[0m %s\n" "$*" >&2; exit 1; }
# ---------------------------------------------------------------------------
# Argument parsing
# ---------------------------------------------------------------------------
usage() {
echo "Usage: $0 <rpc_ports_csv> <duration_seconds> <output_file>"
echo ""
echo "Arguments:"
echo " rpc_ports_csv Comma-separated RPC ports (e.g., 5005,5006,5007)"
echo " duration_seconds How long to collect metrics"
echo " output_file Path to write JSON results"
exit 1
}
if [ $# -lt 3 ]; then
usage
fi
RPC_PORTS_CSV="$1"
DURATION="$2"
OUTPUT_FILE="$3"
IFS=',' read -ra RPC_PORTS <<< "$RPC_PORTS_CSV"
SAMPLE_INTERVAL=5
SAMPLES=$((DURATION / SAMPLE_INTERVAL))
log "Collecting metrics for ${DURATION}s (${SAMPLES} samples, ${#RPC_PORTS[@]} nodes)..."
# ---------------------------------------------------------------------------
# Temporary files for aggregation
# ---------------------------------------------------------------------------
TMPDIR_METRICS="$(mktemp -d)"
CPU_FILE="$TMPDIR_METRICS/cpu.txt"
MEM_FILE="$TMPDIR_METRICS/mem.txt"
RPC_FILE="$TMPDIR_METRICS/rpc.txt"
LEDGER_FILE="$TMPDIR_METRICS/ledger.txt"
touch "$CPU_FILE" "$MEM_FILE" "$RPC_FILE" "$LEDGER_FILE"
cleanup() {
rm -rf "$TMPDIR_METRICS"
}
trap cleanup EXIT
# ---------------------------------------------------------------------------
# Get initial ledger sequence for TPS calculation
# ---------------------------------------------------------------------------
INITIAL_SEQ=0
INITIAL_TIME=$(date +%s)
for port in "${RPC_PORTS[@]}"; do
seq=$(curl -sf "http://localhost:$port" \
-d '{"method":"server_info"}' 2>/dev/null \
| jq -r '.result.info.validated_ledger.seq // 0' 2>/dev/null || echo 0)
if [ "$seq" -gt "$INITIAL_SEQ" ]; then
INITIAL_SEQ=$seq
fi
done
log "Initial validated ledger seq: $INITIAL_SEQ"
# ---------------------------------------------------------------------------
# Sampling loop
# ---------------------------------------------------------------------------
for sample in $(seq 1 "$SAMPLES"); do
# Collect CPU usage for xrpld processes.
# Uses ps to find all xrpld processes and average their CPU%.
cpu_sum=0
cpu_count=0
while IFS= read -r line; do
cpu_val=$(echo "$line" | awk '{print $1}')
if [ -n "$cpu_val" ] && [ "$cpu_val" != "0.0" ]; then
cpu_sum=$(echo "$cpu_sum + $cpu_val" | bc 2>/dev/null || echo "$cpu_sum")
cpu_count=$((cpu_count + 1))
fi
done < <(ps aux 2>/dev/null | grep '[x]rpld' | awk '{print $3}')
if [ "$cpu_count" -gt 0 ]; then
cpu_avg=$(echo "scale=2; $cpu_sum / $cpu_count" | bc 2>/dev/null || echo "0")
echo "$cpu_avg" >> "$CPU_FILE"
fi
# Collect memory RSS for xrpld processes.
while IFS= read -r line; do
rss_kb=$(echo "$line" | awk '{print $1}')
if [ -n "$rss_kb" ] && [ "$rss_kb" != "0" ]; then
rss_mb=$(echo "scale=2; $rss_kb / 1024" | bc 2>/dev/null || echo "0")
echo "$rss_mb" >> "$MEM_FILE"
fi
done < <(ps aux 2>/dev/null | grep '[x]rpld' | awk '{print $6}')
# Collect RPC latency from each node.
for port in "${RPC_PORTS[@]}"; do
start_ms=$(date +%s%N)
curl -sf "http://localhost:$port" \
-d '{"method":"server_info"}' > /dev/null 2>&1 || true
end_ms=$(date +%s%N)
latency_ms=$(( (end_ms - start_ms) / 1000000 ))
echo "$latency_ms" >> "$RPC_FILE"
done
# Record current validated ledger seq.
for port in "${RPC_PORTS[@]}"; do
seq=$(curl -sf "http://localhost:$port" \
-d '{"method":"server_info"}' 2>/dev/null \
| jq -r '.result.info.validated_ledger.seq // 0' 2>/dev/null || echo 0)
echo "$seq" >> "$LEDGER_FILE"
break # Only need one node's seq per sample.
done
# Progress indicator.
if [ $((sample % 10)) -eq 0 ]; then
log " Sample $sample/$SAMPLES..."
fi
sleep "$SAMPLE_INTERVAL"
done
# ---------------------------------------------------------------------------
# Compute aggregated metrics
# ---------------------------------------------------------------------------
log "Computing aggregated metrics..."
# CPU average.
if [ -s "$CPU_FILE" ]; then
CPU_AVG=$(awk '{ sum += $1; n++ } END { if (n>0) printf "%.2f", sum/n; else print "0" }' "$CPU_FILE")
else
CPU_AVG="0"
fi
# Memory peak RSS (MB).
if [ -s "$MEM_FILE" ]; then
MEM_PEAK=$(sort -n "$MEM_FILE" | tail -1)
else
MEM_PEAK="0"
fi
# RPC latency p99 (ms).
if [ -s "$RPC_FILE" ]; then
RPC_COUNT=$(wc -l < "$RPC_FILE")
P99_INDEX=$(echo "scale=0; $RPC_COUNT * 99 / 100" | bc)
RPC_P99=$(sort -n "$RPC_FILE" | sed -n "${P99_INDEX}p")
[ -z "$RPC_P99" ] && RPC_P99="0"
else
RPC_P99="0"
fi
# TPS calculation from ledger sequence advancement.
FINAL_SEQ=0
for port in "${RPC_PORTS[@]}"; do
seq=$(curl -sf "http://localhost:$port" \
-d '{"method":"server_info"}' 2>/dev/null \
| jq -r '.result.info.validated_ledger.seq // 0' 2>/dev/null || echo 0)
if [ "$seq" -gt "$FINAL_SEQ" ]; then
FINAL_SEQ=$seq
fi
done
FINAL_TIME=$(date +%s)
ELAPSED=$((FINAL_TIME - INITIAL_TIME))
LEDGER_ADVANCE=$((FINAL_SEQ - INITIAL_SEQ))
if [ "$ELAPSED" -gt 0 ] && [ "$LEDGER_ADVANCE" -gt 0 ]; then
# Rough TPS: assume ~avg_txs_per_ledger * ledgers / elapsed.
# Without tx count, use ledger close rate as proxy.
TPS=$(echo "scale=2; $LEDGER_ADVANCE / $ELAPSED" | bc 2>/dev/null || echo "0")
else
TPS="0"
fi
# Consensus round time p95 (from ledger close interval).
# Approximate by looking at ledger sequence progression intervals.
if [ -s "$LEDGER_FILE" ]; then
# Calculate intervals between consecutive ledger sequences.
LEDGER_COUNT=$(wc -l < "$LEDGER_FILE")
# Rough estimate: DURATION / number_of_distinct_ledgers * 1000 ms
UNIQUE_LEDGERS=$(sort -u "$LEDGER_FILE" | wc -l)
if [ "$UNIQUE_LEDGERS" -gt 1 ]; then
CONSENSUS_P95=$(echo "scale=0; $DURATION * 1000 / ($UNIQUE_LEDGERS - 1)" | bc 2>/dev/null || echo "0")
else
CONSENSUS_P95="0"
fi
else
CONSENSUS_P95="0"
fi
# ---------------------------------------------------------------------------
# Write output JSON
# ---------------------------------------------------------------------------
cat > "$OUTPUT_FILE" <<EOF_JSON
{
"cpu_pct_avg": $CPU_AVG,
"memory_rss_mb_peak": $MEM_PEAK,
"rpc_p99_ms": $RPC_P99,
"tps": $TPS,
"consensus_round_p95_ms": $CONSENSUS_P95,
"samples": $SAMPLES,
"duration_seconds": $DURATION,
"node_count": ${#RPC_PORTS[@]},
"initial_ledger_seq": $INITIAL_SEQ,
"final_ledger_seq": $FINAL_SEQ
}
EOF_JSON
ok "Metrics written to $OUTPUT_FILE"
cat "$OUTPUT_FILE"

View File

@@ -0,0 +1,101 @@
{
"description": "Expected metric inventory for rippled telemetry validation. Sourced from 09-data-collection-reference.md.",
"spanmetrics": {
"description": "SpanMetrics-derived RED metrics from the OTel Collector spanmetrics connector.",
"metrics": [
"traces_span_metrics_calls_total",
"traces_span_metrics_duration_milliseconds_bucket",
"traces_span_metrics_duration_milliseconds_count",
"traces_span_metrics_duration_milliseconds_sum"
],
"required_labels": [
"span_name",
"status_code",
"service_name",
"span_kind"
],
"dimension_labels": [
"xrpl_rpc_command",
"xrpl_rpc_status",
"xrpl_consensus_mode",
"xrpl_tx_local",
"xrpl_peer_proposal_trusted",
"xrpl_peer_validation_trusted"
]
},
"statsd_gauges": {
"description": "beast::insight gauges emitted via StatsD UDP.",
"metrics": [
"rippled_LedgerMaster_Validated_Ledger_Age",
"rippled_LedgerMaster_Published_Ledger_Age",
"rippled_State_Accounting_Full_duration",
"rippled_Peer_Finder_Active_Inbound_Peers",
"rippled_Peer_Finder_Active_Outbound_Peers",
"rippled_job_count"
]
},
"statsd_counters": {
"description": "beast::insight counters emitted via StatsD UDP.",
"metrics": ["rippled_rpc_requests", "rippled_ledger_fetches"]
},
"statsd_histograms": {
"description": "beast::insight timers/histograms emitted via StatsD UDP.",
"metrics": ["rippled_rpc_time", "rippled_rpc_size", "rippled_ios_latency"]
},
"overlay_traffic": {
"description": "Overlay traffic metrics (subset — full list has 45+ categories).",
"metrics": [
"rippled_total_Bytes_In",
"rippled_total_Bytes_Out",
"rippled_total_Messages_In",
"rippled_total_Messages_Out"
]
},
"phase9_nodestore": {
"description": "Phase 9 NodeStore I/O metrics (via beast::insight extensions).",
"metrics": [
"rippled_nodestore_reads_total",
"rippled_nodestore_writes",
"rippled_nodestore_read_bytes",
"rippled_nodestore_written_bytes"
]
},
"phase9_cache": {
"description": "Phase 9 cache hit rate metrics (via OTel MetricsRegistry).",
"metrics": ["rippled_cache_SLE_hit_rate", "rippled_cache_treenode_size"]
},
"phase9_txq": {
"description": "Phase 9 transaction queue metrics (via OTel MetricsRegistry).",
"metrics": ["rippled_txq_count", "rippled_txq_max_size"]
},
"phase9_rpc_method": {
"description": "Phase 9 per-RPC-method metrics (via OTel Metrics SDK).",
"metrics": [
"rippled_rpc_method_started_total",
"rippled_rpc_method_finished_total"
]
},
"phase9_objects": {
"description": "Phase 9 counted object instances.",
"metrics": ["rippled_object_count"]
},
"phase9_load": {
"description": "Phase 9 fee escalation and load factor metrics.",
"metrics": ["rippled_load_factor"]
},
"grafana_dashboards": {
"description": "All 10 Grafana dashboards that must render data.",
"uids": [
"rippled-rpc-perf",
"rippled-transactions",
"rippled-consensus",
"rippled-ledger-ops",
"rippled-peer-net",
"rippled-statsd-node-health",
"rippled-statsd-network",
"rippled-statsd-rpc",
"rippled-statsd-overlay-detail",
"rippled-statsd-ledger-sync"
]
}
}

View File

@@ -0,0 +1,172 @@
{
"description": "Expected span inventory for rippled telemetry validation. Sourced from 09-data-collection-reference.md.",
"spans": [
{
"name": "rpc.request",
"category": "rpc",
"parent": null,
"required_attributes": [],
"config_flag": "trace_rpc"
},
{
"name": "rpc.process",
"category": "rpc",
"parent": "rpc.request",
"required_attributes": [],
"config_flag": "trace_rpc"
},
{
"name": "rpc.ws_message",
"category": "rpc",
"parent": null,
"required_attributes": [],
"config_flag": "trace_rpc"
},
{
"name": "rpc.command.*",
"category": "rpc",
"parent": "rpc.process",
"required_attributes": [
"xrpl.rpc.command",
"xrpl.rpc.version",
"xrpl.rpc.role",
"xrpl.rpc.status",
"xrpl.rpc.duration_ms"
],
"config_flag": "trace_rpc",
"note": "Wildcard — matches rpc.command.server_info, rpc.command.ledger, etc."
},
{
"name": "tx.process",
"category": "transaction",
"parent": null,
"required_attributes": ["xrpl.tx.hash", "xrpl.tx.local", "xrpl.tx.path"],
"config_flag": "trace_transactions"
},
{
"name": "tx.receive",
"category": "transaction",
"parent": null,
"required_attributes": [
"xrpl.peer.id",
"xrpl.tx.hash",
"xrpl.tx.suppressed",
"xrpl.tx.status"
],
"config_flag": "trace_transactions"
},
{
"name": "tx.apply",
"category": "transaction",
"parent": "ledger.build",
"required_attributes": [
"xrpl.ledger.seq",
"xrpl.ledger.tx_count",
"xrpl.ledger.tx_failed"
],
"config_flag": "trace_transactions"
},
{
"name": "consensus.proposal.send",
"category": "consensus",
"parent": null,
"required_attributes": ["xrpl.consensus.round"],
"config_flag": "trace_consensus"
},
{
"name": "consensus.ledger_close",
"category": "consensus",
"parent": null,
"required_attributes": [
"xrpl.consensus.ledger.seq",
"xrpl.consensus.mode"
],
"config_flag": "trace_consensus"
},
{
"name": "consensus.accept",
"category": "consensus",
"parent": null,
"required_attributes": ["xrpl.consensus.proposers"],
"config_flag": "trace_consensus"
},
{
"name": "consensus.validation.send",
"category": "consensus",
"parent": null,
"required_attributes": [
"xrpl.consensus.ledger.seq",
"xrpl.consensus.proposing"
],
"config_flag": "trace_consensus"
},
{
"name": "consensus.accept.apply",
"category": "consensus",
"parent": null,
"required_attributes": [
"xrpl.consensus.close_time",
"xrpl.consensus.ledger.seq"
],
"config_flag": "trace_consensus"
},
{
"name": "ledger.build",
"category": "ledger",
"parent": null,
"required_attributes": [
"xrpl.ledger.seq",
"xrpl.ledger.tx_count",
"xrpl.ledger.tx_failed"
],
"config_flag": "trace_ledger"
},
{
"name": "ledger.validate",
"category": "ledger",
"parent": null,
"required_attributes": ["xrpl.ledger.seq", "xrpl.ledger.validations"],
"config_flag": "trace_ledger"
},
{
"name": "ledger.store",
"category": "ledger",
"parent": null,
"required_attributes": ["xrpl.ledger.seq"],
"config_flag": "trace_ledger"
},
{
"name": "peer.proposal.receive",
"category": "peer",
"parent": null,
"required_attributes": ["xrpl.peer.id", "xrpl.peer.proposal.trusted"],
"config_flag": "trace_peer"
},
{
"name": "peer.validation.receive",
"category": "peer",
"parent": null,
"required_attributes": ["xrpl.peer.id", "xrpl.peer.validation.trusted"],
"config_flag": "trace_peer"
}
],
"parent_child_relationships": [
{
"parent": "rpc.request",
"child": "rpc.process",
"description": "RPC request contains processing span"
},
{
"parent": "rpc.process",
"child": "rpc.command.*",
"description": "Processing span contains per-command span"
},
{
"parent": "ledger.build",
"child": "tx.apply",
"description": "Ledger build contains transaction application"
}
],
"total_span_types": 17,
"total_unique_attributes": 22
}

View File

@@ -0,0 +1,150 @@
#!/usr/bin/env bash
# generate-validator-keys.sh — Generate validator key pairs for the workload harness.
#
# Uses a temporary standalone xrpld instance to call `validation_create` RPC
# for each node. Outputs a JSON file mapping node index to seed + public key.
#
# Usage:
# ./generate-validator-keys.sh <xrpld_binary> <num_nodes> <output_dir>
#
# Output:
# <output_dir>/validator-keys.json — JSON array of {index, seed, public_key}
# <output_dir>/validators.txt — [validators] section for xrpld.cfg
set -euo pipefail
# ---------------------------------------------------------------------------
# Colored output helpers
# ---------------------------------------------------------------------------
log() { printf "\033[1;34m[KEYGEN]\033[0m %s\n" "$*"; }
ok() { printf "\033[1;32m[KEYGEN]\033[0m %s\n" "$*"; }
die() { printf "\033[1;31m[KEYGEN]\033[0m %s\n" "$*" >&2; exit 1; }
# ---------------------------------------------------------------------------
# Argument parsing
# ---------------------------------------------------------------------------
usage() {
echo "Usage: $0 <xrpld_binary> <num_nodes> <output_dir>"
echo ""
echo "Arguments:"
echo " xrpld_binary Path to xrpld binary (built with telemetry=ON)"
echo " num_nodes Number of validator key pairs to generate (1-20)"
echo " output_dir Directory to write validator-keys.json and validators.txt"
exit 1
}
if [ $# -lt 3 ]; then
usage
fi
XRPLD="$1"
NUM_NODES="$2"
OUTPUT_DIR="$3"
# Validate arguments
[ -x "$XRPLD" ] || die "xrpld binary not found or not executable: $XRPLD"
[[ "$NUM_NODES" =~ ^[0-9]+$ ]] || die "num_nodes must be a positive integer"
[ "$NUM_NODES" -ge 1 ] && [ "$NUM_NODES" -le 20 ] || die "num_nodes must be between 1 and 20"
mkdir -p "$OUTPUT_DIR"
# ---------------------------------------------------------------------------
# Start a temporary standalone xrpld for key generation
# ---------------------------------------------------------------------------
TEMP_DIR="$(mktemp -d)"
TEMP_PORT=5099
TEMP_CFG="$TEMP_DIR/xrpld.cfg"
log "Starting temporary xrpld for key generation (port $TEMP_PORT)..."
cat > "$TEMP_CFG" <<EOCFG
[server]
port_rpc_keygen
[port_rpc_keygen]
port = $TEMP_PORT
ip = 127.0.0.1
admin = 127.0.0.1
protocol = http
[node_db]
type=NuDB
path=$TEMP_DIR/nudb
online_delete=256
[database_path]
$TEMP_DIR/db
[debug_logfile]
$TEMP_DIR/debug.log
[ssl_verify]
0
EOCFG
"$XRPLD" --conf "$TEMP_CFG" -a --start > "$TEMP_DIR/stdout.log" 2>&1 &
TEMP_PID=$!
# Ensure cleanup on exit
cleanup_temp() {
kill "$TEMP_PID" 2>/dev/null || true
wait "$TEMP_PID" 2>/dev/null || true
rm -rf "$TEMP_DIR"
}
trap cleanup_temp EXIT
# Wait for RPC to become available
for attempt in $(seq 1 30); do
if curl -sf "http://localhost:$TEMP_PORT" \
-d '{"method":"server_info"}' >/dev/null 2>&1; then
log "Temporary xrpld RPC ready (attempt $attempt)."
break
fi
if [ "$attempt" -eq 30 ]; then
die "Temporary xrpld RPC not ready after 30s"
fi
sleep 1
done
# ---------------------------------------------------------------------------
# Generate key pairs
# ---------------------------------------------------------------------------
log "Generating $NUM_NODES validator key pairs..."
KEYS_JSON="["
VALIDATORS_TXT="[validators]"
for i in $(seq 1 "$NUM_NODES"); do
result=$(curl -sf "http://localhost:$TEMP_PORT" \
-d '{"method":"validation_create"}')
seed=$(echo "$result" | jq -r '.result.validation_seed')
pubkey=$(echo "$result" | jq -r '.result.validation_public_key')
if [ -z "$seed" ] || [ "$seed" = "null" ]; then
die "Failed to generate key pair for node $i"
fi
log " Node $i: ${pubkey:0:20}..."
# Build JSON entry
entry="{\"index\": $i, \"seed\": \"$seed\", \"public_key\": \"$pubkey\"}"
if [ "$i" -gt 1 ]; then
KEYS_JSON="$KEYS_JSON,"
fi
KEYS_JSON="$KEYS_JSON$entry"
VALIDATORS_TXT="$VALIDATORS_TXT
$pubkey"
done
KEYS_JSON="$KEYS_JSON]"
# ---------------------------------------------------------------------------
# Write output files
# ---------------------------------------------------------------------------
echo "$KEYS_JSON" | jq '.' > "$OUTPUT_DIR/validator-keys.json"
echo "$VALIDATORS_TXT" > "$OUTPUT_DIR/validators.txt"
ok "Generated $NUM_NODES key pairs:"
ok " Keys: $OUTPUT_DIR/validator-keys.json"
ok " Validators: $OUTPUT_DIR/validators.txt"

View File

@@ -0,0 +1,6 @@
# Python dependencies for Phase 10 workload tools.
#
# Install: pip install -r requirements.txt
websockets>=12.0
aiohttp>=3.9.0

View File

@@ -0,0 +1,459 @@
#!/usr/bin/env python3
"""RPC Load Generator for rippled telemetry validation.
Connects to one or more rippled WebSocket endpoints and fires all traced
RPC commands at configurable rates with realistic production-like
distribution.
Command distribution (default weights):
40% Health checks: server_info, fee
30% Wallet queries: account_info, account_lines, account_objects
15% Explorer: ledger, ledger_data
10% TX lookups: tx, account_tx
5% DEX queries: book_offers, amm_info
Usage:
python3 rpc_load_generator.py --endpoints ws://localhost:6006 --rate 50 --duration 120
# Multiple endpoints (round-robin):
python3 rpc_load_generator.py \\
--endpoints ws://localhost:6006 ws://localhost:6007 \\
--rate 100 --duration 300
# Custom weights:
python3 rpc_load_generator.py --endpoints ws://localhost:6006 \\
--weights '{"server_info":60,"account_info":30,"ledger":10}'
"""
import argparse
import asyncio
import json
import logging
import random
import sys
import time
import uuid
from dataclasses import dataclass, field
from typing import Any
import websockets
# ---------------------------------------------------------------------------
# Configuration
# ---------------------------------------------------------------------------
# Default command distribution matching realistic production ratios.
# Keys are RPC command names; values are relative weights.
DEFAULT_WEIGHTS: dict[str, int] = {
# 40% health checks
"server_info": 25,
"fee": 15,
# 30% wallet queries
"account_info": 15,
"account_lines": 8,
"account_objects": 7,
# 15% explorer
"ledger": 10,
"ledger_data": 5,
# 10% tx lookups
"tx": 5,
"account_tx": 5,
# 5% DEX queries
"book_offers": 3,
"amm_info": 2,
}
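# Note: the default weights sum to 100, so each value doubles as the
# percentage of traffic for that command.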
# Well-known genesis account for queries that require an account parameter.
GENESIS_ACCOUNT = "rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh"
logger = logging.getLogger("rpc_load_generator")
# ---------------------------------------------------------------------------
# Data classes
# ---------------------------------------------------------------------------
@dataclass
class LoadStats:
"""Tracks request counts and latencies during a load run.
Attributes:
total_sent: Total RPC requests dispatched.
total_success: Requests that returned a valid result.
total_errors: Requests that returned an error or timed out.
latencies: Per-command list of round-trip times in seconds.
command_counts: Per-command request count.
"""
total_sent: int = 0
total_success: int = 0
total_errors: int = 0
latencies: dict[str, list[float]] = field(default_factory=dict)
command_counts: dict[str, int] = field(default_factory=dict)
def record(self, command: str, latency: float, success: bool) -> None:
"""Record the outcome of a single RPC call."""
self.total_sent += 1
if success:
self.total_success += 1
else:
self.total_errors += 1
self.latencies.setdefault(command, []).append(latency)
self.command_counts[command] = self.command_counts.get(command, 0) + 1
def summary(self) -> dict[str, Any]:
"""Return a summary dict suitable for JSON serialization."""
per_command: dict[str, Any] = {}
for cmd, lats in self.latencies.items():
sorted_lats = sorted(lats)
n = len(sorted_lats)
per_command[cmd] = {
"count": self.command_counts.get(cmd, 0),
"p50_ms": round(sorted_lats[n // 2] * 1000, 2) if n else 0,
"p95_ms": (round(sorted_lats[int(n * 0.95)] * 1000, 2) if n else 0),
"p99_ms": (round(sorted_lats[int(n * 0.99)] * 1000, 2) if n else 0),
}
return {
"total_sent": self.total_sent,
"total_success": self.total_success,
"total_errors": self.total_errors,
"error_rate_pct": (
round(self.total_errors / self.total_sent * 100, 2)
if self.total_sent
else 0
),
"per_command": per_command,
}
# ---------------------------------------------------------------------------
# RPC command builders
# ---------------------------------------------------------------------------
def build_rpc_request(command: str) -> dict[str, Any]:
"""Build a JSON-RPC request object for the given command.
Args:
command: The rippled RPC command name.
Returns:
A dict representing the JSON-RPC request body.
"""
base: dict[str, Any] = {"method": command, "params": [{}]}
if command == "server_info":
pass # No params needed.
elif command == "fee":
pass # No params needed.
elif command == "account_info":
base["params"] = [{"account": GENESIS_ACCOUNT}]
elif command == "account_lines":
base["params"] = [{"account": GENESIS_ACCOUNT}]
elif command == "account_objects":
base["params"] = [{"account": GENESIS_ACCOUNT, "limit": 10}]
elif command == "ledger":
base["params"] = [{"ledger_index": "validated"}]
elif command == "ledger_data":
base["params"] = [{"ledger_index": "validated", "limit": 5}]
elif command == "tx":
# Use a dummy hash — will return "txnNotFound" but still exercises
# the full RPC span pipeline (rpc.request -> rpc.process -> rpc.command.tx).
base["params"] = [{"transaction": "0" * 64, "binary": False}]
elif command == "account_tx":
base["params"] = [
{
"account": GENESIS_ACCOUNT,
"ledger_index_min": -1,
"ledger_index_max": -1,
"limit": 5,
}
]
elif command == "book_offers":
base["params"] = [
{
"taker_pays": {"currency": "XRP"},
"taker_gets": {
"currency": "USD",
"issuer": GENESIS_ACCOUNT,
},
"limit": 5,
}
]
elif command == "amm_info":
# AMM may not exist — the span is still created on the server side.
base["params"] = [
{
"asset": {"currency": "XRP"},
"asset2": {
"currency": "USD",
"issuer": GENESIS_ACCOUNT,
},
}
]
return base
def choose_command(weights: dict[str, int]) -> str:
"""Select a random RPC command based on configured weights.
Args:
weights: Mapping of command name to relative weight.
Returns:
A command name string.
"""
commands = list(weights.keys())
w = [weights[c] for c in commands]
return random.choices(commands, weights=w, k=1)[0]
# ---------------------------------------------------------------------------
# WebSocket RPC client
# ---------------------------------------------------------------------------
async def send_rpc(
ws: websockets.WebSocketClientProtocol,
command: str,
stats: LoadStats,
inject_traceparent: bool = True,
) -> None:
"""Send a single RPC request over WebSocket and record the result.
Args:
ws: Open WebSocket connection.
command: RPC command name.
stats: LoadStats instance to record results.
inject_traceparent: If True, add a W3C traceparent header field
to the request for context propagation testing.
"""
request = build_rpc_request(command)
# Inject W3C traceparent for context propagation testing.
# The rippled WebSocket handler extracts this from the JSON body
# when present (Phase 2 context propagation).
if inject_traceparent:
trace_id = uuid.uuid4().hex
span_id = uuid.uuid4().hex[:16]
request["traceparent"] = f"00-{trace_id}-{span_id}-01"
t0 = time.monotonic()
try:
await ws.send(json.dumps(request))
raw = await asyncio.wait_for(ws.recv(), timeout=10.0)
latency = time.monotonic() - t0
response = json.loads(raw)
success = "result" in response
stats.record(command, latency, success)
except (asyncio.TimeoutError, websockets.exceptions.WebSocketException) as exc:
latency = time.monotonic() - t0
stats.record(command, latency, False)
logger.debug("RPC %s failed: %s", command, exc)
async def run_load(
endpoints: list[str],
rate: float,
duration: float,
weights: dict[str, int],
inject_traceparent: bool,
) -> LoadStats:
"""Run the RPC load generator against the given endpoints.
Distributes requests round-robin across endpoints at the specified
rate (requests per second) for the given duration.
Args:
endpoints: List of WebSocket URLs (ws://host:port).
rate: Target requests per second.
duration: Total run time in seconds.
weights: Command distribution weights.
inject_traceparent: Whether to inject W3C traceparent headers.
Returns:
LoadStats with aggregated results.
"""
stats = LoadStats()
interval = 1.0 / rate if rate > 0 else 0.1
# Open persistent connections to all endpoints.
connections: list[websockets.WebSocketClientProtocol] = []
for ep in endpoints:
try:
ws = await websockets.connect(ep, ping_interval=20, ping_timeout=10)
connections.append(ws)
logger.info("Connected to %s", ep)
except Exception as exc:
logger.error("Failed to connect to %s: %s", ep, exc)
if not connections:
logger.error("No connections established. Aborting.")
return stats
logger.info(
"Starting load: rate=%s RPS, duration=%ss, endpoints=%d",
rate,
duration,
len(connections),
)
start = time.monotonic()
conn_idx = 0
try:
while (time.monotonic() - start) < duration:
command = choose_command(weights)
ws = connections[conn_idx % len(connections)]
conn_idx += 1
# Fire-and-forget style with bounded concurrency via sleep.
asyncio.create_task(send_rpc(ws, command, stats, inject_traceparent))
await asyncio.sleep(interval)
# Periodic progress log.
elapsed = time.monotonic() - start
if stats.total_sent % 100 == 0 and stats.total_sent > 0:
actual_rps = stats.total_sent / elapsed if elapsed > 0 else 0
logger.info(
"Progress: %d sent, %d errors, %.1f RPS (%.0fs elapsed)",
stats.total_sent,
stats.total_errors,
actual_rps,
elapsed,
)
except asyncio.CancelledError:
logger.info("Load generation cancelled.")
finally:
# Allow in-flight requests to complete.
await asyncio.sleep(2)
for ws in connections:
await ws.close()
elapsed = time.monotonic() - start
logger.info(
"Load complete: %d sent, %d success, %d errors in %.1fs (%.1f RPS)",
stats.total_sent,
stats.total_success,
stats.total_errors,
elapsed,
stats.total_sent / elapsed if elapsed > 0 else 0,
)
return stats
# ---------------------------------------------------------------------------
# CLI entry point
# ---------------------------------------------------------------------------
def parse_args() -> argparse.Namespace:
"""Parse command-line arguments."""
parser = argparse.ArgumentParser(
description="RPC Load Generator for rippled telemetry validation",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog="""
Examples:
# Basic usage (50 RPS for 2 minutes):
python3 rpc_load_generator.py --endpoints ws://localhost:6006 --rate 50 --duration 120
# Multiple endpoints with custom weights:
python3 rpc_load_generator.py \\
--endpoints ws://localhost:6006 ws://localhost:6007 \\
--rate 100 --duration 300 \\
--weights '{"server_info": 80, "account_info": 20}'
""",
)
parser.add_argument(
"--endpoints",
nargs="+",
default=["ws://localhost:6006"],
help="WebSocket endpoints (default: ws://localhost:6006)",
)
parser.add_argument(
"--rate",
type=float,
default=50.0,
help="Target requests per second (default: 50)",
)
parser.add_argument(
"--duration",
type=float,
default=120.0,
help="Run duration in seconds (default: 120)",
)
parser.add_argument(
"--weights",
type=str,
default=None,
help="JSON string of command weights (overrides defaults)",
)
parser.add_argument(
"--no-traceparent",
action="store_true",
help="Disable W3C traceparent injection",
)
parser.add_argument(
"--output",
type=str,
default=None,
help="Write JSON summary to this file path",
)
parser.add_argument(
"--verbose",
action="store_true",
help="Enable debug logging",
)
return parser.parse_args()
def main() -> None:
"""Main entry point for the RPC load generator."""
args = parse_args()
logging.basicConfig(
level=logging.DEBUG if args.verbose else logging.INFO,
format="%(asctime)s [%(name)s] %(levelname)s %(message)s",
)
# Parse custom weights if provided.
weights = DEFAULT_WEIGHTS.copy()
if args.weights:
try:
custom = json.loads(args.weights)
weights = {k: int(v) for k, v in custom.items()}
logger.info("Using custom weights: %s", weights)
except (json.JSONDecodeError, ValueError) as exc:
logger.error("Invalid --weights JSON: %s", exc)
sys.exit(1)
# Run the load generator.
stats = asyncio.run(
run_load(
endpoints=args.endpoints,
rate=args.rate,
duration=args.duration,
weights=weights,
inject_traceparent=not args.no_traceparent,
)
)
summary = stats.summary()
print(json.dumps(summary, indent=2))
if args.output:
with open(args.output, "w") as f:
json.dump(summary, f, indent=2)
logger.info("Summary written to %s", args.output)
# Exit with error if error rate exceeds 50%.
if summary["error_rate_pct"] > 50:
logger.error("High error rate: %.1f%%", summary["error_rate_pct"])
sys.exit(1)
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,413 @@
#!/usr/bin/env bash
# run-full-validation.sh — Orchestrates the full telemetry validation pipeline.
#
# Sequence:
# 1. Start the observability stack (OTel Collector, Jaeger, Tempo, Prometheus, Loki, Grafana)
# 2. Start a multi-node rippled cluster with full telemetry enabled
# 3. Wait for consensus
# 4. Run the RPC load generator
# 5. Run the transaction submitter
# 6. Wait for telemetry data to propagate
# 7. Run the telemetry validation suite
# 8. (Optional) Run the performance benchmark
#
# Usage:
# ./run-full-validation.sh --xrpld /path/to/xrpld
# ./run-full-validation.sh --xrpld /path/to/xrpld --with-benchmark
# ./run-full-validation.sh --cleanup
#
# Exit codes:
# 0 — All validation checks passed
# 1 — One or more validation checks failed
# 2 — Infrastructure error (cluster/stack failed to start)
set -euo pipefail
# ---------------------------------------------------------------------------
# Colored output helpers
# ---------------------------------------------------------------------------
log() { printf "\033[1;34m[VALIDATE]\033[0m %s\n" "$*"; }
ok() { printf "\033[1;32m[VALIDATE]\033[0m %s\n" "$*"; }
warn() { printf "\033[1;33m[VALIDATE]\033[0m %s\n" "$*"; }
fail() { printf "\033[1;31m[VALIDATE]\033[0m %s\n" "$*"; }
die() { printf "\033[1;31m[VALIDATE]\033[0m %s\n" "$*" >&2; exit 2; }
# ---------------------------------------------------------------------------
# Configuration
# ---------------------------------------------------------------------------
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
TELEMETRY_DIR="$(cd "$SCRIPT_DIR/.." && pwd)"
REPO_ROOT="$(cd "$TELEMETRY_DIR/../.." && pwd)"
COMPOSE_FILE="$TELEMETRY_DIR/docker-compose.workload.yaml"
WORKDIR="/tmp/xrpld-validation"
XRPLD="${XRPLD:-$REPO_ROOT/.build/xrpld}"
NUM_NODES=5
RPC_PORT_BASE=5005
WS_PORT_BASE=6006
PEER_PORT_BASE=51235
RPC_RATE=50
RPC_DURATION=120
TX_TPS=5
TX_DURATION=120
WITH_BENCHMARK=false
SKIP_LOKI=false
REPORT_DIR="$WORKDIR/reports"
GENESIS_ACCOUNT="rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh"
GENESIS_SEED="snoPBrXtMeMyMHUVTgbuqAfg1SUTb"
# ---------------------------------------------------------------------------
# Argument parsing
# ---------------------------------------------------------------------------
usage() {
echo "Usage: $0 [OPTIONS]"
echo ""
echo "Options:"
echo " --xrpld PATH Path to xrpld binary"
echo " --nodes NUM Number of validator nodes (default: 5)"
echo " --rpc-rate RPS RPC load rate (default: 50)"
echo " --rpc-duration SECS RPC load duration (default: 120)"
echo " --tx-tps TPS Transaction submit rate (default: 5)"
echo " --tx-duration SECS Transaction submit duration (default: 120)"
echo " --with-benchmark Also run performance benchmarks"
echo " --skip-loki Skip Loki log-trace correlation checks"
echo " --cleanup Tear down everything and exit"
echo " -h, --help Show this help"
exit 0
}
while [ $# -gt 0 ]; do
case "$1" in
--xrpld) XRPLD="$2"; shift 2 ;;
--nodes) NUM_NODES="$2"; shift 2 ;;
--rpc-rate) RPC_RATE="$2"; shift 2 ;;
--rpc-duration) RPC_DURATION="$2"; shift 2 ;;
--tx-tps) TX_TPS="$2"; shift 2 ;;
--tx-duration) TX_DURATION="$2"; shift 2 ;;
--with-benchmark) WITH_BENCHMARK=true; shift ;;
--skip-loki) SKIP_LOKI=true; shift ;;
--cleanup) # Cleanup mode
log "Cleaning up..."
pkill -f "$WORKDIR" 2>/dev/null || true
docker compose -f "$COMPOSE_FILE" down 2>/dev/null || true
rm -rf "$WORKDIR"
ok "Cleanup complete."
exit 0
;;
-h|--help) usage ;;
*) die "Unknown option: $1" ;;
esac
done
# ---------------------------------------------------------------------------
# Prerequisites
# ---------------------------------------------------------------------------
log "Checking prerequisites..."
[ -x "$XRPLD" ] || die "xrpld binary not found: $XRPLD"
command -v docker >/dev/null 2>&1 || die "docker not found"
docker compose version >/dev/null 2>&1 || die "docker compose (v2) not found"
command -v python3 >/dev/null 2>&1 || die "python3 not found"
command -v curl >/dev/null 2>&1 || die "curl not found"
command -v jq >/dev/null 2>&1 || die "jq not found"
[ -f "$COMPOSE_FILE" ] || die "docker-compose.workload.yaml not found"
# Install Python dependencies.
log "Installing Python dependencies..."
pip3 install -q -r "$SCRIPT_DIR/requirements.txt" 2>/dev/null || \
pip install -q -r "$SCRIPT_DIR/requirements.txt" 2>/dev/null || \
warn "Could not install Python dependencies — they may already be present"
ok "Prerequisites verified."
# ---------------------------------------------------------------------------
# Cleanup previous run
# ---------------------------------------------------------------------------
log "Cleaning up previous run..."
pkill -f "$WORKDIR" 2>/dev/null || true
sleep 2
rm -rf "$WORKDIR"
mkdir -p "$WORKDIR" "$REPORT_DIR"
# ---------------------------------------------------------------------------
# Step 1: Start observability stack
# ---------------------------------------------------------------------------
log "Step 1: Starting observability stack..."
docker compose -f "$COMPOSE_FILE" up -d
log "Waiting for OTel Collector..."
for attempt in $(seq 1 30); do
status=$(curl -so /dev/null -w '%{http_code}' http://localhost:4318/ 2>/dev/null || echo 000)
if [ "$status" != "000" ]; then
ok "OTel Collector ready (attempt $attempt)"
break
fi
[ "$attempt" -eq 30 ] && die "OTel Collector not ready after 30s"
sleep 1
done
log "Waiting for Jaeger..."
for attempt in $(seq 1 30); do
if curl -sf "http://localhost:16686/" >/dev/null 2>&1; then
ok "Jaeger ready (attempt $attempt)"
break
fi
[ "$attempt" -eq 30 ] && die "Jaeger not ready after 30s"
sleep 1
done
log "Waiting for Prometheus..."
for attempt in $(seq 1 30); do
if curl -sf "http://localhost:9090/-/healthy" >/dev/null 2>&1; then
ok "Prometheus ready (attempt $attempt)"
break
fi
[ "$attempt" -eq 30 ] && die "Prometheus not ready after 30s"
sleep 1
done
# ---------------------------------------------------------------------------
# Step 2: Generate validator keys and start cluster
# ---------------------------------------------------------------------------
log "Step 2: Starting $NUM_NODES-node validator cluster..."
bash "$SCRIPT_DIR/generate-validator-keys.sh" "$XRPLD" "$NUM_NODES" "$WORKDIR"
for i in $(seq 1 "$NUM_NODES"); do
NODE_DIR="$WORKDIR/node$i"
mkdir -p "$NODE_DIR/nudb" "$NODE_DIR/db"
RPC_PORT=$((RPC_PORT_BASE + i - 1))
WS_PORT=$((WS_PORT_BASE + i - 1))
PEER_PORT=$((PEER_PORT_BASE + i - 1))
SEED=$(jq -r ".[$((i-1))].seed" "$WORKDIR/validator-keys.json")
# Build ips_fixed.
IPS_FIXED=""
for j in $(seq 1 "$NUM_NODES"); do
if [ "$j" -ne "$i" ]; then
IPS_FIXED="${IPS_FIXED}127.0.0.1 $((PEER_PORT_BASE + j - 1))
"
fi
done
cat > "$NODE_DIR/xrpld.cfg" <<EOCFG
[server]
port_rpc
port_ws
port_peer
[port_rpc]
port = $RPC_PORT
ip = 127.0.0.1
admin = 127.0.0.1
protocol = http
[port_ws]
port = $WS_PORT
ip = 127.0.0.1
admin = 127.0.0.1
protocol = ws
[port_peer]
port = $PEER_PORT
ip = 0.0.0.0
protocol = peer
[node_db]
type=NuDB
path=$NODE_DIR/nudb
online_delete=256
[database_path]
$NODE_DIR/db
[debug_logfile]
$NODE_DIR/debug.log
[validation_seed]
$SEED
[validators_file]
$WORKDIR/validators.txt
[ips_fixed]
${IPS_FIXED}
[peer_private]
1
[telemetry]
enabled=1
service_instance_id=validator-${i}
endpoint=http://localhost:4318/v1/traces
exporter=otlp_http
sampling_ratio=1.0
batch_size=512
batch_delay_ms=2000
max_queue_size=2048
trace_rpc=1
trace_transactions=1
trace_consensus=1
trace_peer=1
trace_ledger=1
[insight]
server=statsd
address=127.0.0.1:8125
prefix=rippled
[rpc_startup]
{ "command": "log_level", "severity": "warning" }
[ssl_verify]
0
EOCFG
"$XRPLD" --conf "$NODE_DIR/xrpld.cfg" --start > "$NODE_DIR/stdout.log" 2>&1 &
echo $! > "$NODE_DIR/xrpld.pid"
log " Node $i: RPC=$RPC_PORT WS=$WS_PORT Peer=$PEER_PORT PID=$!"
done
# ---------------------------------------------------------------------------
# Step 3: Wait for consensus
# ---------------------------------------------------------------------------
log "Step 3: Waiting for consensus..."
for attempt in $(seq 1 120); do
ready=0
for i in $(seq 1 "$NUM_NODES"); do
port=$((RPC_PORT_BASE + i - 1))
state=$(curl -sf "http://localhost:$port" \
-d '{"method":"server_info"}' 2>/dev/null \
| jq -r '.result.info.server_state' 2>/dev/null || echo "")
if [ "$state" = "proposing" ]; then
ready=$((ready + 1))
fi
done
if [ "$ready" -ge "$NUM_NODES" ]; then
ok "All $NUM_NODES nodes proposing (attempt $attempt)"
break
fi
if [ "$attempt" -eq 120 ]; then
warn "Consensus timeout — $ready/$NUM_NODES nodes ready"
fi
printf "\r %d/%d nodes proposing..." "$ready" "$NUM_NODES"
sleep 1
done
echo ""
# Wait for first validated ledger.
log "Waiting for validated ledger..."
for attempt in $(seq 1 60); do
val_seq=$(curl -sf "http://localhost:$RPC_PORT_BASE" \
-d '{"method":"server_info"}' 2>/dev/null \
| jq -r '.result.info.validated_ledger.seq // 0' 2>/dev/null || echo 0)
if [ "$val_seq" -gt 2 ] 2>/dev/null; then
ok "Validated ledger: seq $val_seq"
break
fi
[ "$attempt" -eq 60 ] && warn "No validated ledger after 60s"
sleep 1
done
# ---------------------------------------------------------------------------
# Step 4: Run RPC load generator
# ---------------------------------------------------------------------------
log "Step 4: Running RPC load generator (${RPC_RATE} RPS for ${RPC_DURATION}s)..."
WS_ENDPOINTS=""
for i in $(seq 1 "$NUM_NODES"); do
WS_ENDPOINTS="$WS_ENDPOINTS ws://localhost:$((WS_PORT_BASE + i - 1))"
done
python3 "$SCRIPT_DIR/rpc_load_generator.py" \
--endpoints $WS_ENDPOINTS \
--rate "$RPC_RATE" \
--duration "$RPC_DURATION" \
--output "$REPORT_DIR/rpc-load-results.json" || \
warn "RPC load generator returned non-zero exit"
ok "RPC load generation complete."
# ---------------------------------------------------------------------------
# Step 5: Run transaction submitter
# ---------------------------------------------------------------------------
log "Step 5: Running transaction submitter (${TX_TPS} TPS for ${TX_DURATION}s)..."
python3 "$SCRIPT_DIR/tx_submitter.py" \
--endpoint "ws://localhost:$WS_PORT_BASE" \
--tps "$TX_TPS" \
--duration "$TX_DURATION" \
--output "$REPORT_DIR/tx-submit-results.json" || \
warn "Transaction submitter returned non-zero exit"
ok "Transaction submission complete."
# ---------------------------------------------------------------------------
# Step 6: Wait for telemetry propagation
# ---------------------------------------------------------------------------
log "Step 6: Waiting 30s for telemetry data to propagate..."
sleep 30
# ---------------------------------------------------------------------------
# Step 7: Run telemetry validation suite
# ---------------------------------------------------------------------------
log "Step 7: Running telemetry validation suite..."
VALIDATION_ARGS="--report $REPORT_DIR/validation-report.json"
if [ "$SKIP_LOKI" = true ]; then
VALIDATION_ARGS="$VALIDATION_ARGS --skip-loki"
fi
VALIDATION_EXIT=0
python3 "$SCRIPT_DIR/validate_telemetry.py" $VALIDATION_ARGS || VALIDATION_EXIT=$?
if [ "$VALIDATION_EXIT" -eq 0 ]; then
ok "All telemetry validation checks passed!"
else
fail "Some telemetry validation checks failed (exit $VALIDATION_EXIT)"
fi
# ---------------------------------------------------------------------------
# Step 8: (Optional) Run benchmark
# ---------------------------------------------------------------------------
if [ "$WITH_BENCHMARK" = true ]; then
log "Step 8: Running performance benchmark..."
bash "$SCRIPT_DIR/benchmark.sh" \
--xrpld "$XRPLD" \
--duration 120 \
--nodes 3 \
--output "$REPORT_DIR" || \
warn "Benchmark returned non-zero exit"
fi
# ---------------------------------------------------------------------------
# Summary
# ---------------------------------------------------------------------------
echo ""
echo "==========================================================="
echo " FULL VALIDATION RESULTS"
echo "==========================================================="
echo ""
echo " Reports directory: $REPORT_DIR"
echo ""
ls -la "$REPORT_DIR/" 2>/dev/null || true
echo ""
echo " Observability stack is running:"
echo " Jaeger UI: http://localhost:16686"
echo " Grafana: http://localhost:3000"
echo " Prometheus: http://localhost:9090"
echo ""
echo " xrpld nodes ($NUM_NODES) are running:"
for i in $(seq 1 "$NUM_NODES"); do
rpc=$((RPC_PORT_BASE + i - 1))
ws=$((WS_PORT_BASE + i - 1))
pid=$(cat "$WORKDIR/node$i/xrpld.pid" 2>/dev/null || echo 'unknown')
echo " Node $i: RPC=$rpc WS=$ws PID=$pid"
done
echo ""
echo " To tear down:"
echo " $0 --cleanup"
echo ""
echo "==========================================================="
exit "$VALIDATION_EXIT"


@@ -0,0 +1,42 @@
{
"genesis": {
"account": "rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh",
"seed": "snoPBrXtMeMyMHUVTgbuqAfg1SUTb",
"description": "Genesis account with all XRP. Used to fund test accounts."
},
"test_accounts": [
{
"name": "alice",
"description": "Primary sender for Payment and OfferCreate transactions."
},
{
"name": "bob",
"description": "Primary receiver for Payment transactions."
},
{
"name": "carol",
"description": "TrustSet and issued currency counterparty."
},
{
"name": "dave",
"description": "NFToken operations (mint, offer, accept)."
},
{
"name": "eve",
"description": "Escrow operations (create, finish)."
},
{
"name": "frank",
"description": "AMM pool operations (create, deposit, withdraw)."
},
{
"name": "grace",
"description": "Additional sender for parallel transaction submission."
},
{
"name": "heidi",
"description": "Additional receiver for payment diversity."
}
],
"note": "Test account keypairs are generated dynamically at runtime via wallet_propose RPC. This file defines the logical roles. Actual keys are stored in the workdir during execution."
}
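For illustration, the keypair generation described in the note above boils down to one admin RPC per account. A minimal sketch against the harness defaults (admin HTTP port 5005; the Python submitter issues the same call over WebSocket):

```bash
# Propose a new keypair; the harness keeps the account_id and master_seed.
curl -s http://localhost:5005 -d '{"method": "wallet_propose", "params": [{}]}' \
  | jq '.result | {account_id, master_seed}'
```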


@@ -0,0 +1,790 @@
#!/usr/bin/env python3
"""Transaction Submitter for rippled telemetry validation.
Generates diverse transaction types against a rippled cluster to exercise
the full span and metric surface: tx.process, tx.apply, ledger.build,
consensus.*, and all associated attributes.
Pre-funds test accounts from the genesis account, then submits a
configurable mix of transaction types at a target TPS.
Supported transaction types:
- Payment (XRP and issued currencies)
- OfferCreate / OfferCancel (DEX activity)
- TrustSet (trust line creation)
- NFTokenMint / NFTokenCreateOffer / NFTokenAcceptOffer
- EscrowCreate / EscrowFinish
- AMMCreate / AMMDeposit / AMMWithdraw (if amendment enabled)
Usage:
python3 tx_submitter.py --endpoint ws://localhost:6006 --tps 5 --duration 120
# Custom transaction mix:
python3 tx_submitter.py --endpoint ws://localhost:6006 \\
--weights '{"Payment":50,"OfferCreate":20,"TrustSet":10,"NFTokenMint":10,"EscrowCreate":10}'
"""
import argparse
import asyncio
import json
import logging
import random
import sys
import time
from dataclasses import dataclass, field
from typing import Any
import websockets
logger = logging.getLogger("tx_submitter")
# ---------------------------------------------------------------------------
# Constants
# ---------------------------------------------------------------------------
GENESIS_ACCOUNT = "rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh"
GENESIS_SEED = "snoPBrXtMeMyMHUVTgbuqAfg1SUTb"
# Amount to fund each test account (100,000 XRP in drops).
FUND_AMOUNT = "100000000000"
# Default transaction mix weights (relative).
DEFAULT_TX_WEIGHTS: dict[str, int] = {
"Payment": 40,
"OfferCreate": 15,
"OfferCancel": 5,
"TrustSet": 10,
"NFTokenMint": 10,
"NFTokenCreateOffer": 5,
"EscrowCreate": 5,
"EscrowFinish": 5,
"AMMCreate": 3,
"AMMDeposit": 2,
}
# Number of test accounts to create.
NUM_TEST_ACCOUNTS = 8
# ---------------------------------------------------------------------------
# Data classes
# ---------------------------------------------------------------------------
@dataclass
class Account:
"""Represents a funded XRPL test account.
Attributes:
name: Human-readable name (e.g., "alice").
account: Classic address (rXXX...).
seed: Secret seed for signing.
sequence: Next available sequence number.
"""
name: str
account: str
seed: str
sequence: int = 0
@dataclass
class TxStats:
"""Tracks transaction submission results.
Attributes:
total_submitted: Total transactions sent to the network.
total_success: Transactions whose engine_result was counted as accepted (tesSUCCESS, terQUEUED, or an expected tec code).
total_errors: Transactions that returned an error engine_result.
by_type: Per-transaction-type count of submissions.
errors_by_type: Per-transaction-type count of errors.
"""
total_submitted: int = 0
total_success: int = 0
total_errors: int = 0
by_type: dict[str, int] = field(default_factory=dict)
errors_by_type: dict[str, int] = field(default_factory=dict)
def record(self, tx_type: str, success: bool) -> None:
"""Record the result of a transaction submission."""
self.total_submitted += 1
self.by_type[tx_type] = self.by_type.get(tx_type, 0) + 1
if success:
self.total_success += 1
else:
self.total_errors += 1
self.errors_by_type[tx_type] = self.errors_by_type.get(tx_type, 0) + 1
def summary(self) -> dict[str, Any]:
"""Return a summary dict suitable for JSON serialization."""
return {
"total_submitted": self.total_submitted,
"total_success": self.total_success,
"total_errors": self.total_errors,
"success_rate_pct": (
round(self.total_success / self.total_submitted * 100, 2)
if self.total_submitted
else 0
),
"by_type": self.by_type,
"errors_by_type": self.errors_by_type,
}
# ---------------------------------------------------------------------------
# WebSocket RPC helpers
# ---------------------------------------------------------------------------
async def ws_request(
ws: websockets.WebSocketClientProtocol,
method: str,
params: list[dict[str, Any]] | None = None,
) -> dict[str, Any]:
"""Send a JSON-RPC request over WebSocket and return the result.
Args:
ws: Open WebSocket connection.
method: RPC method name.
params: Optional list of parameter dicts.
Returns:
The parsed JSON response dict.
Raises:
RuntimeError: If the request fails or times out.
"""
request: dict[str, Any] = {"method": method}
if params:
request["params"] = params
await ws.send(json.dumps(request))
raw = await asyncio.wait_for(ws.recv(), timeout=30.0)
return json.loads(raw)
async def create_account(ws: websockets.WebSocketClientProtocol, name: str) -> Account:
"""Create a new account via wallet_propose RPC.
Args:
ws: Open WebSocket connection.
name: Human-readable name for the account.
Returns:
An Account instance with the generated keypair.
"""
resp = await ws_request(ws, "wallet_propose")
result = resp.get("result", {})
return Account(
name=name,
account=result["account_id"],
seed=result["master_seed"],
)
async def fund_account(
ws: websockets.WebSocketClientProtocol,
dest: Account,
genesis_seq: int,
) -> tuple[bool, int]:
"""Fund a test account from genesis.
Args:
ws: Open WebSocket connection.
dest: Destination account to fund.
genesis_seq: Current genesis account sequence number.
Returns:
Tuple of (success: bool, next_sequence: int).
"""
resp = await ws_request(
ws,
"submit",
[
{
"secret": GENESIS_SEED,
"tx_json": {
"TransactionType": "Payment",
"Account": GENESIS_ACCOUNT,
"Destination": dest.account,
"Amount": FUND_AMOUNT,
"Sequence": genesis_seq,
},
}
],
)
engine_result = resp.get("result", {}).get("engine_result", "unknown")
success = engine_result in ("tesSUCCESS", "terQUEUED")
if not success:
logger.warning("Fund %s failed: %s", dest.name, engine_result)
return success, genesis_seq + 1
async def get_account_sequence(
ws: websockets.WebSocketClientProtocol, account: str
) -> int:
"""Get the current sequence number for an account.
Args:
ws: Open WebSocket connection.
account: Classic address.
Returns:
Current sequence number.
"""
resp = await ws_request(ws, "account_info", [{"account": account}])
return resp.get("result", {}).get("account_data", {}).get("Sequence", 0)
# ---------------------------------------------------------------------------
# Transaction builders
# ---------------------------------------------------------------------------
def build_payment(sender: Account, receiver: Account) -> dict[str, Any]:
"""Build an XRP Payment transaction.
Args:
sender: Source account.
receiver: Destination account.
Returns:
Transaction JSON and signing secret.
"""
amount = str(random.randint(1000, 1000000)) # 0.001 - 1 XRP
return {
"secret": sender.seed,
"tx_json": {
"TransactionType": "Payment",
"Account": sender.account,
"Destination": receiver.account,
"Amount": amount,
"Sequence": sender.sequence,
},
}
def build_offer_create(sender: Account) -> dict[str, Any]:
"""Build an OfferCreate transaction (XRP/USD pair).
Args:
sender: Account placing the offer.
Returns:
Transaction JSON and signing secret.
"""
return {
"secret": sender.seed,
"tx_json": {
"TransactionType": "OfferCreate",
"Account": sender.account,
"TakerPays": str(random.randint(100000, 10000000)),
"TakerGets": {
"currency": "USD",
"issuer": GENESIS_ACCOUNT,
"value": str(round(random.uniform(0.1, 100.0), 2)),
},
"Sequence": sender.sequence,
},
}
def build_offer_cancel(sender: Account) -> dict[str, Any]:
"""Build an OfferCancel transaction.
Uses a non-existent offer sequence — will fail gracefully but still
exercises the tx.process span pipeline.
Args:
sender: Account cancelling the offer.
Returns:
Transaction JSON and signing secret.
"""
return {
"secret": sender.seed,
"tx_json": {
"TransactionType": "OfferCancel",
"Account": sender.account,
"OfferSequence": max(1, sender.sequence - 1),
"Sequence": sender.sequence,
},
}
def build_trust_set(sender: Account) -> dict[str, Any]:
"""Build a TrustSet transaction for a USD trust line.
Args:
sender: Account setting the trust line.
Returns:
Transaction JSON and signing secret.
"""
return {
"secret": sender.seed,
"tx_json": {
"TransactionType": "TrustSet",
"Account": sender.account,
"LimitAmount": {
"currency": "USD",
"issuer": GENESIS_ACCOUNT,
"value": "1000000",
},
"Sequence": sender.sequence,
},
}
def build_nftoken_mint(sender: Account) -> dict[str, Any]:
"""Build an NFTokenMint transaction.
Args:
sender: Account minting the NFT.
Returns:
Transaction JSON and signing secret.
"""
return {
"secret": sender.seed,
"tx_json": {
"TransactionType": "NFTokenMint",
"Account": sender.account,
"NFTokenTaxon": random.randint(0, 100),
"Flags": 8, # tfTransferable
"Sequence": sender.sequence,
},
}
def build_nftoken_create_offer(sender: Account) -> dict[str, Any]:
"""Build an NFTokenCreateOffer transaction.
Uses a dummy NFTokenID — will fail but exercises the span pipeline.
Args:
sender: Account creating the NFT offer.
Returns:
Transaction JSON and signing secret.
"""
return {
"secret": sender.seed,
"tx_json": {
"TransactionType": "NFTokenCreateOffer",
"Account": sender.account,
"NFTokenID": "0" * 64,
"Amount": str(random.randint(100000, 1000000)),
"Flags": 1, # tfSellNFToken
"Sequence": sender.sequence,
},
}
def build_escrow_create(sender: Account, receiver: Account) -> dict[str, Any]:
"""Build an EscrowCreate transaction.
Creates a time-based escrow that finishes 10 seconds from now.
Args:
sender: Account creating the escrow.
receiver: Destination account for escrow funds.
Returns:
Transaction JSON and signing secret.
"""
# Ripple epoch offset: 946684800 seconds from Unix epoch
ripple_time = int(time.time()) - 946684800
return {
"secret": sender.seed,
"tx_json": {
"TransactionType": "EscrowCreate",
"Account": sender.account,
"Destination": receiver.account,
"Amount": str(random.randint(100000, 1000000)),
"FinishAfter": ripple_time + 10,
"Sequence": sender.sequence,
},
}
def build_escrow_finish(sender: Account, owner: Account) -> dict[str, Any]:
"""Build an EscrowFinish transaction.
Uses a dummy offer sequence — will likely fail but exercises spans.
Args:
sender: Account finishing the escrow.
owner: Account that created the escrow.
Returns:
Transaction JSON and signing secret.
"""
return {
"secret": sender.seed,
"tx_json": {
"TransactionType": "EscrowFinish",
"Account": sender.account,
"Owner": owner.account,
"OfferSequence": max(1, owner.sequence - 2),
"Sequence": sender.sequence,
},
}
def build_amm_create(sender: Account) -> dict[str, Any]:
"""Build an AMMCreate transaction (XRP/USD pool).
Requires the AMM amendment to be enabled on the network.
Args:
sender: Account creating the AMM pool.
Returns:
Transaction JSON and signing secret.
"""
return {
"secret": sender.seed,
"tx_json": {
"TransactionType": "AMMCreate",
"Account": sender.account,
"Amount": str(random.randint(10000000, 100000000)),
"Amount2": {
"currency": "USD",
"issuer": GENESIS_ACCOUNT,
"value": str(round(random.uniform(10.0, 1000.0), 2)),
},
"TradingFee": 500, # 0.5%
"Sequence": sender.sequence,
},
}
def build_amm_deposit(sender: Account) -> dict[str, Any]:
"""Build an AMMDeposit transaction.
Args:
sender: Account depositing into the AMM pool.
Returns:
Transaction JSON and signing secret.
"""
return {
"secret": sender.seed,
"tx_json": {
"TransactionType": "AMMDeposit",
"Account": sender.account,
"Asset": {"currency": "XRP"},
"Asset2": {
"currency": "USD",
"issuer": GENESIS_ACCOUNT,
},
"Amount": str(random.randint(1000000, 10000000)),
"Flags": 0x00080000, # tfSingleAsset
"Sequence": sender.sequence,
},
}
# Transaction type -> builder function mapping.
# Each builder takes (accounts: list[Account]) and returns submit params.
TX_BUILDERS: dict[str, Any] = {
"Payment": lambda accts: build_payment(accts[0], accts[1]),
"OfferCreate": lambda accts: build_offer_create(accts[0]),
"OfferCancel": lambda accts: build_offer_cancel(accts[0]),
"TrustSet": lambda accts: build_trust_set(accts[2]),
"NFTokenMint": lambda accts: build_nftoken_mint(accts[3]),
"NFTokenCreateOffer": lambda accts: build_nftoken_create_offer(accts[3]),
"EscrowCreate": lambda accts: build_escrow_create(accts[4], accts[1]),
"EscrowFinish": lambda accts: build_escrow_finish(accts[4], accts[4]),
"AMMCreate": lambda accts: build_amm_create(accts[5]),
"AMMDeposit": lambda accts: build_amm_deposit(accts[5]),
}
# ---------------------------------------------------------------------------
# Main submission loop
# ---------------------------------------------------------------------------
async def setup_accounts(
ws: websockets.WebSocketClientProtocol,
) -> list[Account]:
"""Create and fund test accounts from genesis.
Generates NUM_TEST_ACCOUNTS accounts via wallet_propose, then funds
each with FUND_AMOUNT XRP from genesis.
Args:
ws: Open WebSocket connection to a rippled node.
Returns:
List of funded Account instances.
"""
account_names = ["alice", "bob", "carol", "dave", "eve", "frank", "grace", "heidi"]
logger.info("Creating %d test accounts...", NUM_TEST_ACCOUNTS)
accounts: list[Account] = []
for name in account_names[:NUM_TEST_ACCOUNTS]:
acct = await create_account(ws, name)
accounts.append(acct)
logger.info(" Created %s: %s", name, acct.account)
# Get genesis sequence.
genesis_seq = await get_account_sequence(ws, GENESIS_ACCOUNT)
logger.info("Genesis sequence: %d", genesis_seq)
# Fund all accounts.
logger.info("Funding test accounts...")
for acct in accounts:
success, genesis_seq = await fund_account(ws, acct, genesis_seq)
if success:
logger.info(" Funded %s", acct.name)
else:
logger.warning(" Failed to fund %s", acct.name)
# Wait for funding transactions to be validated.
logger.info("Waiting 10s for funding transactions to validate...")
await asyncio.sleep(10)
# Refresh sequence numbers for all accounts.
for acct in accounts:
try:
acct.sequence = await get_account_sequence(ws, acct.account)
logger.info(" %s sequence: %d", acct.name, acct.sequence)
except Exception as exc:
logger.warning(" Failed to get sequence for %s: %s", acct.name, exc)
return accounts
async def submit_transaction(
ws: websockets.WebSocketClientProtocol,
tx_type: str,
accounts: list[Account],
stats: TxStats,
) -> None:
"""Submit a single transaction of the given type.
Selects the appropriate builder, constructs the transaction, submits
it via the submit RPC, and records the result.
Args:
ws: Open WebSocket connection.
tx_type: Transaction type name (e.g., "Payment").
accounts: List of funded test accounts.
stats: TxStats instance to record results.
"""
builder = TX_BUILDERS.get(tx_type)
if not builder:
logger.warning("Unknown transaction type: %s", tx_type)
return
try:
params = builder(accounts)
# Identify which account is the sender to bump its sequence.
sender_addr = params["tx_json"]["Account"]
sender = next((a for a in accounts if a.account == sender_addr), None)
resp = await ws_request(ws, "submit", [params])
engine_result = resp.get("result", {}).get("engine_result", "unknown")
success = engine_result in (
"tesSUCCESS",
"terQUEUED",
"tecUNFUNDED_OFFER",
"tecNO_DST_INSUF_XRP",
)
stats.record(tx_type, success)
if sender:
sender.sequence += 1
if not success:
logger.debug(
"%s result: %s (%s)",
tx_type,
engine_result,
resp.get("result", {}).get("engine_result_message", ""),
)
except Exception as exc:
stats.record(tx_type, False)
logger.debug("%s error: %s", tx_type, exc)
async def run_submitter(
endpoint: str,
tps: float,
duration: float,
weights: dict[str, int],
) -> TxStats:
"""Run the transaction submitter against a single endpoint.
Args:
endpoint: WebSocket URL (ws://host:port).
tps: Target transactions per second.
duration: Total run time in seconds.
weights: Transaction type distribution weights.
Returns:
TxStats with aggregated results.
"""
stats = TxStats()
interval = 1.0 / tps if tps > 0 else 0.5
ws = await websockets.connect(endpoint, ping_interval=20, ping_timeout=10)
logger.info("Connected to %s", endpoint)
try:
# Setup test accounts.
accounts = await setup_accounts(ws)
if len(accounts) < 6:
logger.error("Need at least 6 funded accounts, got %d", len(accounts))
return stats
# Build weighted command list.
tx_types = list(weights.keys())
tx_weights = [weights[t] for t in tx_types]
logger.info(
"Starting TX submission: tps=%s, duration=%ss, types=%d",
tps,
duration,
len(tx_types),
)
start = time.monotonic()
while (time.monotonic() - start) < duration:
tx_type = random.choices(tx_types, weights=tx_weights, k=1)[0]
await submit_transaction(ws, tx_type, accounts, stats)
await asyncio.sleep(interval)
# Progress logging every 50 transactions.
if stats.total_submitted % 50 == 0 and stats.total_submitted > 0:
elapsed = time.monotonic() - start
actual_tps = stats.total_submitted / elapsed if elapsed > 0 else 0
logger.info(
"Progress: %d submitted, %d success, %d errors, "
"%.1f TPS (%.0fs elapsed)",
stats.total_submitted,
stats.total_success,
stats.total_errors,
actual_tps,
elapsed,
)
finally:
await ws.close()
elapsed = time.monotonic() - start
logger.info(
"Submission complete: %d submitted, %d success, %d errors "
"in %.1fs (%.1f TPS)",
stats.total_submitted,
stats.total_success,
stats.total_errors,
elapsed,
stats.total_submitted / elapsed if elapsed > 0 else 0,
)
return stats
# ---------------------------------------------------------------------------
# CLI entry point
# ---------------------------------------------------------------------------
def parse_args() -> argparse.Namespace:
"""Parse command-line arguments."""
parser = argparse.ArgumentParser(
description="Transaction Submitter for rippled telemetry validation",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog="""
Examples:
# Basic usage (5 TPS for 2 minutes):
python3 tx_submitter.py --endpoint ws://localhost:6006 --tps 5 --duration 120
# Custom transaction mix:
python3 tx_submitter.py --endpoint ws://localhost:6006 \\
--weights '{"Payment": 60, "OfferCreate": 20, "TrustSet": 20}'
""",
)
parser.add_argument(
"--endpoint",
type=str,
default="ws://localhost:6006",
help="WebSocket endpoint (default: ws://localhost:6006)",
)
parser.add_argument(
"--tps",
type=float,
default=5.0,
help="Target transactions per second (default: 5)",
)
parser.add_argument(
"--duration",
type=float,
default=120.0,
help="Run duration in seconds (default: 120)",
)
parser.add_argument(
"--weights",
type=str,
default=None,
help="JSON string of transaction type weights (overrides defaults)",
)
parser.add_argument(
"--output",
type=str,
default=None,
help="Write JSON summary to this file path",
)
parser.add_argument(
"--verbose",
action="store_true",
help="Enable debug logging",
)
return parser.parse_args()
def main() -> None:
"""Main entry point for the transaction submitter."""
args = parse_args()
logging.basicConfig(
level=logging.DEBUG if args.verbose else logging.INFO,
format="%(asctime)s [%(name)s] %(levelname)s %(message)s",
)
# Parse custom weights if provided.
weights = DEFAULT_TX_WEIGHTS.copy()
if args.weights:
try:
custom = json.loads(args.weights)
weights = {k: int(v) for k, v in custom.items()}
logger.info("Using custom weights: %s", weights)
except (json.JSONDecodeError, ValueError) as exc:
logger.error("Invalid --weights JSON: %s", exc)
sys.exit(1)
# Run the submitter.
stats = asyncio.run(
run_submitter(
endpoint=args.endpoint,
tps=args.tps,
duration=args.duration,
weights=weights,
)
)
summary = stats.summary()
print(json.dumps(summary, indent=2))
if args.output:
with open(args.output, "w") as f:
json.dump(summary, f, indent=2)
logger.info("Summary written to %s", args.output)
if __name__ == "__main__":
main()


@@ -0,0 +1,886 @@
#!/usr/bin/env python3
"""Telemetry Validation Suite for rippled.
Validates that the full telemetry stack is emitting expected data after
a workload run. Queries Jaeger (spans), Prometheus (metrics), Loki (logs),
and Grafana (dashboards) APIs to produce a pass/fail report.
Validation categories:
1. Span validation — All 16+ span types present with required attributes
2. Metric validation — SpanMetrics, StatsD, and Phase 9 metrics are non-zero
3. Log-trace correlation — Loki logs contain trace_id/span_id fields
4. Dashboard validation — All 10 Grafana dashboards render data
Usage:
python3 validate_telemetry.py --report /tmp/validation-report.json
# Custom API endpoints:
python3 validate_telemetry.py \\
--jaeger http://localhost:16686 \\
--prometheus http://localhost:9090 \\
--loki http://localhost:3100 \\
--grafana http://localhost:3000
"""
import argparse
import asyncio
import json
import logging
import sys
import time
from dataclasses import dataclass, field
from pathlib import Path
from typing import Any
import aiohttp
logger = logging.getLogger("validate_telemetry")
# ---------------------------------------------------------------------------
# Configuration defaults
# ---------------------------------------------------------------------------
DEFAULT_JAEGER = "http://localhost:16686"
DEFAULT_PROMETHEUS = "http://localhost:9090"
DEFAULT_LOKI = "http://localhost:3100"
DEFAULT_GRAFANA = "http://localhost:3000"
SCRIPT_DIR = Path(__file__).parent
EXPECTED_SPANS_FILE = SCRIPT_DIR / "expected_spans.json"
EXPECTED_METRICS_FILE = SCRIPT_DIR / "expected_metrics.json"
# ---------------------------------------------------------------------------
# Data classes
# ---------------------------------------------------------------------------
@dataclass
class CheckResult:
"""Result of a single validation check.
Attributes:
name: Check identifier (e.g., "span.rpc.request").
category: Validation category (span, metric, log, dashboard).
passed: Whether the check passed.
message: Human-readable description of the result.
details: Optional additional data (counts, values, etc.).
"""
name: str
category: str
passed: bool
message: str
details: dict[str, Any] = field(default_factory=dict)
def to_dict(self) -> dict[str, Any]:
"""Serialize to a JSON-compatible dict."""
return {
"name": self.name,
"category": self.category,
"passed": self.passed,
"message": self.message,
"details": self.details,
}
@dataclass
class ValidationReport:
"""Aggregated validation report.
Attributes:
checks: List of all individual check results.
start_time: ISO timestamp when validation started.
end_time: ISO timestamp when validation completed.
"""
checks: list[CheckResult] = field(default_factory=list)
start_time: str = ""
end_time: str = ""
@property
def total_checks(self) -> int:
"""Total number of checks executed."""
return len(self.checks)
@property
def passed(self) -> int:
"""Number of checks that passed."""
return sum(1 for c in self.checks if c.passed)
@property
def failed(self) -> int:
"""Number of checks that failed."""
return sum(1 for c in self.checks if not c.passed)
@property
def all_passed(self) -> bool:
"""Whether all checks passed."""
return self.failed == 0
def add(self, check: CheckResult) -> None:
"""Add a check result to the report."""
self.checks.append(check)
status = "PASS" if check.passed else "FAIL"
logger.info("[%s] %s: %s", status, check.name, check.message)
def to_dict(self) -> dict[str, Any]:
"""Serialize to a JSON-compatible dict."""
return {
"summary": {
"total": self.total_checks,
"passed": self.passed,
"failed": self.failed,
"all_passed": self.all_passed,
},
"start_time": self.start_time,
"end_time": self.end_time,
"checks": [c.to_dict() for c in self.checks],
}
# ---------------------------------------------------------------------------
# Span Validation (Jaeger API)
# ---------------------------------------------------------------------------
async def validate_spans(
session: aiohttp.ClientSession,
jaeger_url: str,
report: ValidationReport,
) -> None:
"""Validate that all expected spans appear in Jaeger.
Queries the Jaeger HTTP API for each expected span name and checks
that traces exist. Also validates required attributes on spans and
parent-child relationships.
Args:
session: aiohttp client session.
jaeger_url: Base URL for Jaeger API (e.g., http://localhost:16686).
report: ValidationReport to accumulate results.
"""
logger.info("--- Span Validation (Jaeger) ---")
# Load expected spans.
with open(EXPECTED_SPANS_FILE) as f:
expected = json.load(f)
# Check service registration.
try:
async with session.get(f"{jaeger_url}/api/services") as resp:
data = await resp.json()
services = data.get("data", [])
has_rippled = "rippled" in services
report.add(
CheckResult(
name="span.service_registration",
category="span",
passed=has_rippled,
message=(
f"Service 'rippled' registered (found: {services})"
if has_rippled
else f"Service 'rippled' NOT found (found: {services})"
),
)
)
except Exception as exc:
report.add(
CheckResult(
name="span.service_registration",
category="span",
passed=False,
message=f"Jaeger API unreachable: {exc}",
)
)
return
# Check each expected span.
for span_def in expected["spans"]:
span_name = span_def["name"]
# Wildcard spans (e.g. rpc.command.*): the Jaeger operation filter does not
# support patterns, so query a concrete representative instead.
if "*" in span_name:
operation = "rpc.command.server_info"
check_name = f"span.{span_name}"
else:
operation = span_name
check_name = f"span.{span_name}"
try:
params = {
"service": "rippled",
"operation": operation,
"limit": 5,
"lookback": "1h",
}
async with session.get(f"{jaeger_url}/api/traces", params=params) as resp:
data = await resp.json()
traces = data.get("data", [])
count = len(traces)
report.add(
CheckResult(
name=check_name,
category="span",
passed=count > 0,
message=(
f"{span_name}: {count} traces found"
if count > 0
else f"{span_name}: 0 traces (expected > 0)"
),
details={"trace_count": count},
)
)
# Validate required attributes on first trace.
if count > 0 and span_def.get("required_attributes"):
await _validate_span_attributes(traces[0], span_def, report)
except Exception as exc:
report.add(
CheckResult(
name=check_name,
category="span",
passed=False,
message=f"{span_name}: query failed ({exc})",
)
)
# Validate parent-child relationships.
for rel in expected.get("parent_child_relationships", []):
await _validate_parent_child(session, jaeger_url, rel, report)
async def _validate_span_attributes(
trace: dict[str, Any],
span_def: dict[str, Any],
report: ValidationReport,
) -> None:
"""Check that a trace's spans contain expected attributes.
Args:
trace: A Jaeger trace object (from /api/traces).
span_def: Span definition from expected_spans.json.
report: ValidationReport to accumulate results.
"""
required_attrs = span_def.get("required_attributes", [])
if not required_attrs:
return
span_name = span_def["name"]
# Collect all tag keys from all spans in the trace.
found_attrs: set[str] = set()
for span in trace.get("spans", []):
for tag in span.get("tags", []):
found_attrs.add(tag.get("key", ""))
missing = [a for a in required_attrs if a not in found_attrs]
report.add(
CheckResult(
name=f"span.attrs.{span_name}",
category="span",
passed=len(missing) == 0,
message=(
f"{span_name}: all {len(required_attrs)} attributes present"
if not missing
else f"{span_name}: missing attributes: {missing}"
),
details={
"required": required_attrs,
"found": list(found_attrs),
"missing": missing,
},
)
)
async def _validate_parent_child(
session: aiohttp.ClientSession,
jaeger_url: str,
relationship: dict[str, Any],
report: ValidationReport,
) -> None:
"""Validate a parent-child span relationship in Jaeger traces.
Args:
session: aiohttp client session.
jaeger_url: Base URL for Jaeger API.
relationship: Dict with 'parent' and 'child' span names.
report: ValidationReport to accumulate results.
"""
parent_name = relationship["parent"]
child_name = relationship["child"]
try:
# Query traces for the parent span.
params = {
"service": "rippled",
"operation": parent_name,
"limit": 3,
"lookback": "1h",
}
async with session.get(f"{jaeger_url}/api/traces", params=params) as resp:
data = await resp.json()
traces = data.get("data", [])
if not traces:
report.add(
CheckResult(
name=f"span.hierarchy.{parent_name}->{child_name}",
category="span",
passed=False,
message=f"No {parent_name} traces to check hierarchy",
)
)
return
# Check if child spans exist within parent traces.
# Use the concrete child name for wildcard patterns.
concrete_child = child_name.replace("*", "server_info")
found_child = False
for trace in traces:
for span in trace.get("spans", []):
op = span.get("operationName", "")
if concrete_child in op or ("*" not in child_name and op == child_name):
found_child = True
break
if found_child:
break
report.add(
CheckResult(
name=f"span.hierarchy.{parent_name}->{child_name}",
category="span",
passed=found_child,
message=(
f"Found {child_name} as child of {parent_name}"
if found_child
else f"{child_name} not found in {parent_name} traces"
),
)
)
except Exception as exc:
report.add(
CheckResult(
name=f"span.hierarchy.{parent_name}->{child_name}",
category="span",
passed=False,
message=f"Hierarchy check failed: {exc}",
)
)
# ---------------------------------------------------------------------------
# Metric Validation (Prometheus API)
# ---------------------------------------------------------------------------
async def validate_metrics(
session: aiohttp.ClientSession,
prometheus_url: str,
report: ValidationReport,
) -> None:
"""Validate that expected metrics appear in Prometheus with non-zero values.
Args:
session: aiohttp client session.
prometheus_url: Base URL for Prometheus API (e.g., http://localhost:9090).
report: ValidationReport to accumulate results.
"""
logger.info("--- Metric Validation (Prometheus) ---")
with open(EXPECTED_METRICS_FILE) as f:
expected = json.load(f)
# Check each metric category.
for category_key, category_data in expected.items():
if category_key in ("description", "grafana_dashboards"):
continue
metrics = category_data.get("metrics", [])
for metric_name in metrics:
await _check_prometheus_metric(
session, prometheus_url, metric_name, category_key, report
)
async def _check_prometheus_metric(
session: aiohttp.ClientSession,
prometheus_url: str,
metric_name: str,
category: str,
report: ValidationReport,
) -> None:
"""Query Prometheus for a specific metric and check it exists.
Args:
session: aiohttp client session.
prometheus_url: Prometheus base URL.
metric_name: Prometheus metric name.
category: Metric category for the report.
report: ValidationReport to accumulate results.
"""
try:
params = {"query": metric_name}
async with session.get(f"{prometheus_url}/api/v1/query", params=params) as resp:
data = await resp.json()
results = data.get("data", {}).get("result", [])
series_count = len(results)
report.add(
CheckResult(
name=f"metric.{category}.{metric_name}",
category="metric",
passed=series_count > 0,
message=(
f"{metric_name}: {series_count} series"
if series_count > 0
else f"{metric_name}: 0 series (expected > 0)"
),
details={"series_count": series_count},
)
)
except Exception as exc:
report.add(
CheckResult(
name=f"metric.{category}.{metric_name}",
category="metric",
passed=False,
message=f"{metric_name}: query failed ({exc})",
)
)
# ---------------------------------------------------------------------------
# Log-Trace Correlation Validation (Loki API)
# ---------------------------------------------------------------------------
async def validate_log_trace_correlation(
session: aiohttp.ClientSession,
loki_url: str,
jaeger_url: str,
report: ValidationReport,
) -> None:
"""Validate that Loki logs contain trace_id/span_id for correlation.
Checks:
1. Logs with trace_id= field exist in Loki.
2. A random trace_id from Jaeger can be found in Loki logs.
Args:
session: aiohttp client session.
loki_url: Base URL for Loki API (e.g., http://localhost:3100).
jaeger_url: Base URL for Jaeger API.
report: ValidationReport to accumulate results.
"""
logger.info("--- Log-Trace Correlation Validation (Loki) ---")
# Check 1: Any logs with trace_id exist.
try:
params = {
"query": '{job="rippled"} |= "trace_id="',
"limit": 5,
"direction": "backward",
}
async with session.get(
f"{loki_url}/loki/api/v1/query_range", params=params
) as resp:
data = await resp.json()
streams = data.get("data", {}).get("result", [])
total_entries = sum(len(s.get("values", [])) for s in streams)
report.add(
CheckResult(
name="log.trace_id_present",
category="log",
passed=total_entries > 0,
message=(
f"Found {total_entries} log entries with trace_id"
if total_entries > 0
else "No log entries with trace_id found"
),
details={"log_count": total_entries},
)
)
except Exception as exc:
report.add(
CheckResult(
name="log.trace_id_present",
category="log",
passed=False,
message=f"Loki query failed: {exc}",
)
)
# Check 2: Cross-reference a trace_id from Jaeger to Loki.
try:
# Get a recent trace from Jaeger.
params = {
"service": "rippled",
"limit": 1,
"lookback": "1h",
}
async with session.get(f"{jaeger_url}/api/traces", params=params) as resp:
data = await resp.json()
traces = data.get("data", [])
if traces:
trace_id = traces[0].get("traceID", "")
if trace_id:
# Search Loki for this trace_id.
loki_params = {
"query": f'{{job="rippled"}} |= "{trace_id}"',
"limit": 5,
"direction": "backward",
}
async with session.get(
f"{loki_url}/loki/api/v1/query_range",
params=loki_params,
) as loki_resp:
loki_data = await loki_resp.json()
loki_streams = loki_data.get("data", {}).get("result", [])
loki_count = sum(len(s.get("values", [])) for s in loki_streams)
report.add(
CheckResult(
name="log.trace_id_cross_reference",
category="log",
passed=loki_count > 0,
message=(
f"trace_id {trace_id[:16]}... found in "
f"{loki_count} Loki entries"
if loki_count > 0
else f"trace_id {trace_id[:16]}... not found " "in Loki"
),
details={
"trace_id": trace_id,
"loki_count": loki_count,
},
)
)
else:
report.add(
CheckResult(
name="log.trace_id_cross_reference",
category="log",
passed=False,
message="No traces in Jaeger to cross-reference",
)
)
except Exception as exc:
report.add(
CheckResult(
name="log.trace_id_cross_reference",
category="log",
passed=False,
message=f"Cross-reference check failed: {exc}",
)
)
# ---------------------------------------------------------------------------
# Dashboard Validation (Grafana API)
# ---------------------------------------------------------------------------
async def validate_dashboards(
session: aiohttp.ClientSession,
grafana_url: str,
report: ValidationReport,
) -> None:
"""Validate that all Grafana dashboards are accessible and return data.
For each expected dashboard UID, queries the Grafana API to verify
the dashboard exists and is loadable.
Args:
session: aiohttp client session.
grafana_url: Base URL for Grafana API (e.g., http://localhost:3000).
report: ValidationReport to accumulate results.
"""
logger.info("--- Dashboard Validation (Grafana) ---")
with open(EXPECTED_METRICS_FILE) as f:
expected = json.load(f)
dashboard_uids = expected.get("grafana_dashboards", {}).get("uids", [])
for uid in dashboard_uids:
try:
async with session.get(f"{grafana_url}/api/dashboards/uid/{uid}") as resp:
if resp.status == 200:
data = await resp.json()
dashboard = data.get("dashboard", {})
panel_count = len(dashboard.get("panels", []))
report.add(
CheckResult(
name=f"dashboard.{uid}",
category="dashboard",
passed=True,
message=(f"{uid}: loaded ({panel_count} panels)"),
details={"panel_count": panel_count},
)
)
else:
report.add(
CheckResult(
name=f"dashboard.{uid}",
category="dashboard",
passed=False,
message=f"{uid}: HTTP {resp.status}",
)
)
except Exception as exc:
report.add(
CheckResult(
name=f"dashboard.{uid}",
category="dashboard",
passed=False,
message=f"{uid}: query failed ({exc})",
)
)
# ---------------------------------------------------------------------------
# Span duration validation
# ---------------------------------------------------------------------------
async def validate_span_durations(
session: aiohttp.ClientSession,
jaeger_url: str,
report: ValidationReport,
) -> None:
"""Validate that span durations are within reasonable bounds.
Checks that spans have duration > 0 and < 60s, flagging any anomalies.
Args:
session: aiohttp client session.
jaeger_url: Base URL for Jaeger API.
report: ValidationReport to accumulate results.
"""
logger.info("--- Span Duration Validation ---")
try:
params = {
"service": "rippled",
"limit": 20,
"lookback": "1h",
}
async with session.get(f"{jaeger_url}/api/traces", params=params) as resp:
data = await resp.json()
traces = data.get("data", [])
if not traces:
report.add(
CheckResult(
name="span.duration_bounds",
category="span",
passed=False,
message="No traces available for duration check",
)
)
return
total_spans = 0
invalid_spans = 0
max_duration_us = 0
for trace in traces:
for span in trace.get("spans", []):
duration = span.get("duration", 0) # microseconds
total_spans += 1
max_duration_us = max(max_duration_us, duration)
if duration <= 0 or duration > 60_000_000:
invalid_spans += 1
report.add(
CheckResult(
name="span.duration_bounds",
category="span",
passed=invalid_spans == 0,
message=(
f"All {total_spans} spans have valid durations "
f"(max: {max_duration_us / 1000:.1f}ms)"
if invalid_spans == 0
else f"{invalid_spans}/{total_spans} spans have invalid "
"durations (<=0 or >60s)"
),
details={
"total_spans": total_spans,
"invalid_spans": invalid_spans,
"max_duration_ms": round(max_duration_us / 1000, 2),
},
)
)
except Exception as exc:
report.add(
CheckResult(
name="span.duration_bounds",
category="span",
passed=False,
message=f"Duration check failed: {exc}",
)
)
# ---------------------------------------------------------------------------
# Main validation orchestrator
# ---------------------------------------------------------------------------
async def run_validation(
jaeger_url: str,
prometheus_url: str,
loki_url: str,
grafana_url: str,
skip_loki: bool = False,
) -> ValidationReport:
"""Run all validation checks and return a report.
Args:
jaeger_url: Jaeger API base URL.
prometheus_url: Prometheus API base URL.
loki_url: Loki API base URL.
grafana_url: Grafana API base URL.
skip_loki: If True, skip log-trace correlation checks.
Returns:
ValidationReport with all check results.
"""
report = ValidationReport()
report.start_time = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
async with aiohttp.ClientSession() as session:
await validate_spans(session, jaeger_url, report)
await validate_span_durations(session, jaeger_url, report)
await validate_metrics(session, prometheus_url, report)
if not skip_loki:
await validate_log_trace_correlation(session, loki_url, jaeger_url, report)
await validate_dashboards(session, grafana_url, report)
report.end_time = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
return report
# ---------------------------------------------------------------------------
# CLI entry point
# ---------------------------------------------------------------------------
def parse_args() -> argparse.Namespace:
"""Parse command-line arguments."""
parser = argparse.ArgumentParser(
description="Telemetry Validation Suite for rippled",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog="""
Examples:
# Run all validations with defaults:
python3 validate_telemetry.py
# Write report to file:
python3 validate_telemetry.py --report /tmp/validation-report.json
# Custom endpoints:
python3 validate_telemetry.py \\
--jaeger http://jaeger:16686 --prometheus http://prom:9090
# Skip Loki checks (if log-trace correlation is not set up):
python3 validate_telemetry.py --skip-loki
""",
)
parser.add_argument(
"--jaeger",
type=str,
default=DEFAULT_JAEGER,
help=f"Jaeger API URL (default: {DEFAULT_JAEGER})",
)
parser.add_argument(
"--prometheus",
type=str,
default=DEFAULT_PROMETHEUS,
help=f"Prometheus API URL (default: {DEFAULT_PROMETHEUS})",
)
parser.add_argument(
"--loki",
type=str,
default=DEFAULT_LOKI,
help=f"Loki API URL (default: {DEFAULT_LOKI})",
)
parser.add_argument(
"--grafana",
type=str,
default=DEFAULT_GRAFANA,
help=f"Grafana API URL (default: {DEFAULT_GRAFANA})",
)
parser.add_argument(
"--skip-loki",
action="store_true",
help="Skip log-trace correlation validation",
)
parser.add_argument(
"--report",
type=str,
default=None,
help="Write JSON report to this file path",
)
parser.add_argument(
"--verbose",
action="store_true",
help="Enable debug logging",
)
return parser.parse_args()
def main() -> None:
"""Main entry point for the telemetry validation suite."""
args = parse_args()
logging.basicConfig(
level=logging.DEBUG if args.verbose else logging.INFO,
format="%(asctime)s [%(name)s] %(levelname)s %(message)s",
)
report = asyncio.run(
run_validation(
jaeger_url=args.jaeger,
prometheus_url=args.prometheus,
loki_url=args.loki,
grafana_url=args.grafana,
skip_loki=args.skip_loki,
)
)
# Print summary.
print("")
print("=" * 60)
print(" TELEMETRY VALIDATION REPORT")
print("=" * 60)
print(f" Total checks: {report.total_checks}")
print(f" Passed: {report.passed}")
print(f" Failed: {report.failed}")
print("=" * 60)
print("")
# Print failures.
if report.failed > 0:
print("FAILED CHECKS:")
for check in report.checks:
if not check.passed:
print(f" [{check.category}] {check.name}: {check.message}")
print("")
# Write report file.
report_dict = report.to_dict()
if args.report:
with open(args.report, "w") as f:
json.dump(report_dict, f, indent=2)
logger.info("Report written to %s", args.report)
else:
print(json.dumps(report_dict, indent=2))
# Exit with appropriate code for CI.
sys.exit(0 if report.all_passed else 1)
if __name__ == "__main__":
main()


@@ -0,0 +1,94 @@
# xrpld validator node configuration template for workload harness.
#
# Placeholders (replaced by docker-compose entrypoint):
# {{NODE_INDEX}} — Node number (1-based)
# {{RPC_PORT}} — HTTP RPC port
# {{WS_PORT}} — WebSocket port
# {{PEER_PORT}} — Peer protocol port
# {{DATA_DIR}} — Node data directory
# {{VALIDATION_SEED}} — Validator seed from key generation
# {{VALIDATORS_FILE}} — Path to shared validators.txt
# {{IPS_FIXED}} — Peer addresses (one per line)
# {{OTEL_ENDPOINT}} — OTel Collector OTLP/HTTP endpoint
# {{STATSD_ADDRESS}} — StatsD UDP address (host:port)
# {{LOG_LEVEL}} — Log level (debug, info, warning, error)
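#
# Example substitution (hypothetical sketch; the actual replacement is performed
# by the docker-compose entrypoint, and the exact command may differ):
#   sed -e 's|{{RPC_PORT}}|5005|g' \
#       -e 's|{{OTEL_ENDPOINT}}|http://localhost:4318/v1/traces|g' \
#       template.cfg > node1.cfg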
[server]
port_rpc
port_ws
port_peer
[port_rpc]
port = {{RPC_PORT}}
ip = 0.0.0.0
admin = 0.0.0.0
protocol = http
[port_ws]
port = {{WS_PORT}}
ip = 0.0.0.0
admin = 0.0.0.0
protocol = ws
[port_peer]
port = {{PEER_PORT}}
ip = 0.0.0.0
protocol = peer
[node_db]
type=NuDB
path={{DATA_DIR}}/nudb
online_delete=256
[database_path]
{{DATA_DIR}}/db
[debug_logfile]
{{DATA_DIR}}/debug.log
[validation_seed]
{{VALIDATION_SEED}}
[validators_file]
{{VALIDATORS_FILE}}
[ips_fixed]
{{IPS_FIXED}}
[peer_private]
1
# --- OpenTelemetry tracing (all categories enabled) ---
[telemetry]
enabled=1
service_instance_id=validator-{{NODE_INDEX}}
endpoint={{OTEL_ENDPOINT}}
exporter=otlp_http
sampling_ratio=1.0
batch_size=512
batch_delay_ms=2000
max_queue_size=2048
trace_rpc=1
trace_transactions=1
trace_consensus=1
trace_peer=1
trace_ledger=1
# --- StatsD metrics (beast::insight) ---
[insight]
server=statsd
address={{STATSD_ADDRESS}}
prefix=rippled
[rpc_startup]
{ "command": "log_level", "severity": "{{LOG_LEVEL}}" }
[ssl_verify]
0
# --- Network tuning for local cluster ---
[network_id]
0
[sntp_servers]
time.google.com


@@ -0,0 +1,59 @@
# Standalone xrpld configuration with OpenTelemetry enabled.
#
# Usage:
# 1. Start the observability stack:
# docker compose -f docker/telemetry/docker-compose.yml up -d
# 2. Run xrpld in standalone mode:
# ./xrpld --conf docker/telemetry/xrpld-telemetry.cfg -a --start
# 3. Send RPC commands to exercise tracing:
# curl -s http://localhost:5005 -d '{"method":"server_info"}'
# 4. View traces in Jaeger UI: http://localhost:16686
[server]
port_rpc_admin_local
port_ws_admin_local
[port_rpc_admin_local]
port = 5005
ip = 127.0.0.1
admin = 127.0.0.1
protocol = http
[port_ws_admin_local]
port = 6006
ip = 127.0.0.1
admin = 127.0.0.1
protocol = ws
[node_db]
type=NuDB
path=./data/nudb
online_delete=256
advisory_delete=0
[database_path]
./data
[debug_logfile]
./data/debug.log
[rpc_startup]
{ "command": "log_level", "severity": "debug" }
[ssl_verify]
0
# --- OpenTelemetry tracing ---
[telemetry]
enabled=1
endpoint=http://localhost:4318/v1/traces
exporter=otlp_http
sampling_ratio=1.0
batch_size=512
batch_delay_ms=5000
max_queue_size=2048
trace_rpc=1
trace_transactions=1
trace_consensus=1
trace_peer=0
trace_ledger=1

docs/build/telemetry.md (vendored, new file, 278 lines)

@@ -0,0 +1,278 @@
# OpenTelemetry Tracing for Rippled
This document explains how to build rippled with OpenTelemetry distributed tracing support, configure the runtime telemetry options, and set up the observability backend to view traces.
- [OpenTelemetry Tracing for Rippled](#opentelemetry-tracing-for-rippled)
- [Overview](#overview)
- [Building with Telemetry](#building-with-telemetry)
- [Summary](#summary)
- [Build steps](#build-steps)
- [Install dependencies](#install-dependencies)
- [Call CMake](#call-cmake)
- [Build](#build)
- [Building without telemetry](#building-without-telemetry)
- [Runtime Configuration](#runtime-configuration)
- [Configuration options](#configuration-options)
- [Observability Stack](#observability-stack)
- [Start the stack](#start-the-stack)
- [Verify the stack](#verify-the-stack)
- [View traces in Jaeger](#view-traces-in-jaeger)
- [Running Tests](#running-tests)
- [Troubleshooting](#troubleshooting)
- [No traces appear in Jaeger](#no-traces-appear-in-jaeger)
- [Conan lockfile error](#conan-lockfile-error)
- [CMake target not found](#cmake-target-not-found)
- [Architecture](#architecture)
- [Key files](#key-files)
- [Conditional compilation](#conditional-compilation)
## Overview
Rippled supports optional [OpenTelemetry](https://opentelemetry.io/) distributed tracing.
When enabled, it instruments RPC requests with trace spans that are exported via
OTLP/HTTP to an OpenTelemetry Collector, which forwards them to a tracing backend
such as Jaeger.
Telemetry is **off by default** at both compile time and runtime:
- **Compile time**: The Conan option `telemetry` and CMake option `telemetry` must be set to `True`/`ON`.
When disabled, all tracing macros compile to `((void)0)` with zero overhead.
- **Runtime**: The `[telemetry]` config section must set `enabled=1`.
When disabled at runtime, a no-op implementation is used.
## Building with Telemetry
### Summary
Follow the instructions in [BUILD.md](../../BUILD.md), with the following changes:
1. Pass `-o telemetry=True` to `conan install` to pull the `opentelemetry-cpp` dependency.
2. CMake will automatically pick up `telemetry=ON` from the Conan-generated toolchain.
3. Build as usual.
---
### Build steps
```bash
cd /path/to/rippled
rm -rf .build
mkdir .build
cd .build
```
#### Install dependencies
The `telemetry` option adds `opentelemetry-cpp/1.18.0` as a dependency.
If the Conan lockfile does not yet include this package, bypass it with `--lockfile=""`.
```bash
conan install .. \
--output-folder . \
--build missing \
--settings build_type=Debug \
-o telemetry=True \
-o tests=True \
-o xrpld=True \
--lockfile=""
```
> **Note**: The first build with telemetry may take longer as `opentelemetry-cpp`
> and its transitive dependencies are compiled from source.
#### Call CMake
The Conan-generated toolchain file sets `telemetry=ON` automatically.
No additional CMake flags are needed beyond the standard ones.
```bash
cmake .. -G Ninja \
-DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
-DCMAKE_BUILD_TYPE=Debug \
-Dtests=ON -Dxrpld=ON
```
You should see in the CMake output:
```
-- OpenTelemetry tracing enabled
```
#### Build
```bash
cmake --build . --parallel $(nproc)
```
### Building without telemetry
Omit the `-o telemetry=True` option (or pass `-o telemetry=False`).
The `opentelemetry-cpp` dependency will not be downloaded,
the `XRPL_ENABLE_TELEMETRY` preprocessor define will not be set,
and all tracing macros will compile to no-ops.
The resulting binary is identical to one built before telemetry support was added.
## Runtime Configuration
Add a `[telemetry]` section to your `xrpld.cfg` file:
```ini
[telemetry]
enabled=1
service_name=rippled
endpoint=http://localhost:4318/v1/traces
sampling_ratio=1.0
trace_rpc=1
trace_transactions=1
trace_consensus=1
trace_peer=0
```
### Configuration options
| Option | Type | Default | Description |
| --------------------- | ------ | --------------------------------- | -------------------------------------------------- |
| `enabled` | int | `0` | Enable (`1`) or disable (`0`) telemetry at runtime |
| `service_name` | string | `rippled` | Service name reported in traces |
| `service_instance_id` | string | node public key | Unique instance identifier |
| `exporter` | string | `otlp_http` | Exporter type |
| `endpoint` | string | `http://localhost:4318/v1/traces` | OTLP/HTTP collector endpoint |
| `use_tls` | int | `0` | Enable TLS for the exporter connection |
| `tls_ca_cert` | string | (empty) | Path to CA certificate for TLS |
| `sampling_ratio` | double | `1.0` | Fraction of traces to sample (`0.0` to `1.0`) |
| `batch_size` | uint32 | `512` | Maximum spans per export batch |
| `batch_delay_ms` | uint32 | `5000` | Maximum delay (ms) before flushing a batch |
| `max_queue_size` | uint32 | `2048` | Maximum spans queued in memory |
| `trace_rpc` | int | `1` | Enable RPC request tracing |
| `trace_transactions` | int | `1` | Enable transaction lifecycle tracing |
| `trace_consensus` | int | `1` | Enable consensus round tracing |
| `trace_peer` | int | `0` | Enable peer message tracing (high volume) |
| `trace_ledger` | int | `1` | Enable ledger close tracing |
## Observability Stack
A Docker Compose stack is provided in `docker/telemetry/` with three services:
| Service | Port | Purpose |
| ------------------ | ---------------------------------------------- | ---------------------------------------------------- |
| **OTel Collector** | `4317` (gRPC), `4318` (HTTP), `13133` (health) | Receives OTLP spans, batches, and forwards to Jaeger |
| **Jaeger** | `16686` (UI) | Trace storage and visualization |
| **Grafana** | `3000` | Dashboards (Jaeger pre-configured as datasource) |
### Start the stack
```bash
docker compose -f docker/telemetry/docker-compose.yml up -d
```
### Verify the stack
```bash
# Collector health
curl http://localhost:13133
# Jaeger UI
open http://localhost:16686
# Grafana
open http://localhost:3000
```
### View traces in Jaeger
1. Open `http://localhost:16686` in a browser.
2. Select the service name (e.g. `rippled`) from the **Service** dropdown.
3. Click **Find Traces**.
4. Click into any trace to see the span tree and attributes.
Traced RPC operations produce a span hierarchy like:
```
rpc.request
└── rpc.command.server_info (xrpl.rpc.command=server_info, xrpl.rpc.status=success)
```
Each span includes attributes:
- `xrpl.rpc.command` — the RPC method name
- `xrpl.rpc.version` — API version
- `xrpl.rpc.role` — `admin` or `user`
- `xrpl.rpc.status` — `success` or `error`
## Running Tests
Unit tests run with the telemetry-enabled build regardless of whether the
observability stack is running. When no collector is available, the exporter
silently drops spans with no impact on test results.
```bash
# Run all RPC tests
./xrpld --unittest=RPCCall,ServerInfo,AccountTx,LedgerRPC,Transaction --unittest-jobs $(nproc)
# Run the full test suite
./xrpld --unittest --unittest-jobs $(nproc)
```
To generate traces during manual testing, start rippled in standalone mode:
```bash
./xrpld --conf /path/to/xrpld.cfg --standalone --start
```
Then send RPC requests:
```bash
curl -s -X POST http://127.0.0.1:5005/ \
-H "Content-Type: application/json" \
-d '{"method":"server_info","params":[{}]}'
```
## Troubleshooting
### No traces appear in Jaeger
1. Confirm the OTel Collector is running: `docker compose -f docker/telemetry/docker-compose.yml ps`
2. Check collector logs for errors: `docker compose -f docker/telemetry/docker-compose.yml logs otel-collector`
3. Confirm `[telemetry] enabled=1` is set in the rippled config.
4. Confirm `endpoint` points to the correct collector address (`http://localhost:4318/v1/traces`).
5. Wait for the batch delay to elapse (default `5000` ms) before checking Jaeger.
### Conan lockfile error
If you see `ERROR: Requirement 'opentelemetry-cpp/1.18.0' not in lockfile 'requires'`,
the lockfile was generated without the telemetry dependency.
Pass `--lockfile=""` to bypass the lockfile, or regenerate it with telemetry enabled.
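If you prefer to regenerate the lockfile, something like the following should work with Conan 2 (a sketch; adjust options and profile to match your setup):
```bash
# Regenerate the lockfile with the telemetry dependency included
cd /path/to/rippled
conan lock create . \
  -o telemetry=True \
  -o tests=True \
  -o xrpld=True \
  --lockfile-out=conan.lock
```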
### CMake target not found
If CMake reports that `opentelemetry-cpp` targets are not found,
ensure you ran `conan install` with `-o telemetry=True` and that the
Conan-generated toolchain file is being used.
The Conan package provides a single umbrella target
`opentelemetry-cpp::opentelemetry-cpp` (not individual component targets).
## Architecture
### Key files
| File | Purpose |
| ---------------------------------------------- | ----------------------------------------------------------- |
| `include/xrpl/telemetry/Telemetry.h` | Abstract telemetry interface and `Setup` struct |
| `include/xrpl/telemetry/SpanGuard.h` | RAII span guard (activates scope, ends span on destruction) |
| `src/libxrpl/telemetry/Telemetry.cpp` | OTel-backed implementation (`TelemetryImpl`) |
| `src/libxrpl/telemetry/TelemetryConfig.cpp` | Config parser (`setup_Telemetry()`) |
| `src/libxrpl/telemetry/NullTelemetry.cpp` | No-op implementation (used when disabled) |
| `src/xrpld/telemetry/TracingInstrumentation.h` | Convenience macros (`XRPL_TRACE_RPC`, etc.) |
| `src/xrpld/rpc/detail/ServerHandler.cpp` | RPC entry point instrumentation |
| `src/xrpld/rpc/detail/RPCHandler.cpp` | Per-command instrumentation |
| `docker/telemetry/docker-compose.yml` | Observability stack (Collector + Jaeger + Grafana) |
| `docker/telemetry/otel-collector-config.yaml` | OTel Collector pipeline configuration |
### Conditional compilation
All OpenTelemetry SDK headers are guarded behind `#ifdef XRPL_ENABLE_TELEMETRY`.
The instrumentation macros in `TracingInstrumentation.h` compile to `((void)0)` when
the define is absent.
At runtime, if `enabled=0` is set in config (or the section is omitted), a
`NullTelemetry` implementation is used that returns no-op spans.
This two-layer approach ensures zero overhead when telemetry is not wanted.
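As an illustrative sketch of this pattern (not the exact contents of `TracingInstrumentation.h` — the real macro names and signatures may differ), a guard-style macro creates a `SpanGuard` only when both layers are enabled:
```cpp
// Sketch only: shows the two-layer enable/disable pattern, not the actual macros.
#include <optional>

#ifdef XRPL_ENABLE_TELEMETRY

// Compile-time layer present: create a span only if the runtime layer allows it.
// `telemetry` is a xrpl::telemetry::Telemetry&, `guard` names a local variable.
#define XRPL_TRACE_RPC(telemetry, guard, name)                        \
    std::optional<xrpl::telemetry::SpanGuard> guard;                  \
    if ((telemetry).isEnabled() && (telemetry).shouldTraceRpc())      \
        guard.emplace((telemetry).startSpan(name));

#else

// Compile-time layer absent: no OTel headers are included and the macro
// disappears entirely, as described above.
#define XRPL_TRACE_RPC(telemetry, guard, name) ((void)0)

#endif
```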

606
docs/telemetry-runbook.md Normal file
View File

@@ -0,0 +1,606 @@
# rippled Telemetry Operator Runbook
## Overview
rippled supports OpenTelemetry distributed tracing to provide visibility into RPC requests, transaction processing, and consensus rounds.
## Quick Start
### 1. Start the observability stack
```bash
docker compose -f docker/telemetry/docker-compose.yml up -d
```
This starts:
- **OTel Collector** on ports 4317 (gRPC) and 4318 (HTTP)
- **Jaeger** UI on http://localhost:16686
- **Prometheus** on http://localhost:9090
- **Loki** on http://localhost:3100 (log aggregation)
- **Grafana** on http://localhost:3000
### 2. Enable telemetry in rippled
Add to your `xrpld.cfg`:
```ini
[telemetry]
enabled=1
endpoint=http://localhost:4318/v1/traces
```
### 3. Build with telemetry support
```bash
conan install . --build=missing -o telemetry=True
cmake --preset default -Dtelemetry=ON
cmake --build --preset default
```
## Configuration Reference
| Option | Default | Description |
| -------------------- | --------------------------------- | ----------------------------------------- |
| `enabled` | `0` | Master switch for telemetry |
| `endpoint` | `http://localhost:4318/v1/traces` | OTLP/HTTP endpoint |
| `exporter` | `otlp_http` | Exporter type |
| `sampling_ratio` | `1.0` | Head-based sampling ratio (0.0 to 1.0) |
| `trace_rpc` | `1` | Enable RPC request tracing |
| `trace_transactions` | `1` | Enable transaction tracing |
| `trace_consensus` | `1` | Enable consensus tracing |
| `trace_peer` | `0` | Enable peer message tracing (high volume) |
| `trace_ledger` | `1` | Enable ledger tracing |
| `batch_size` | `512` | Max spans per batch export |
| `batch_delay_ms` | `5000` | Delay between batch exports |
| `max_queue_size` | `2048` | Max spans queued before dropping |
| `use_tls` | `0` | Use TLS for exporter connection |
| `tls_ca_cert` | (empty) | Path to CA certificate bundle |
## Span Reference
All spans instrumented in rippled, grouped by subsystem:
### RPC Spans (Phase 2)
| Span Name | Source File | Attributes | Description |
| -------------------- | --------------------- | ---------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------- |
| `rpc.request` | ServerHandler.cpp:271 | — | Top-level HTTP RPC request |
| `rpc.process` | ServerHandler.cpp:573 | — | RPC processing (child of rpc.request) |
| `rpc.ws_message` | ServerHandler.cpp:384 | — | WebSocket RPC message |
| `rpc.command.<name>` | RPCHandler.cpp:161 | `xrpl.rpc.command`, `xrpl.rpc.version`, `xrpl.rpc.role`, `xrpl.rpc.status`, `xrpl.rpc.duration_ms`, `xrpl.rpc.error_message` | Per-command span (e.g., `rpc.command.server_info`) |
### Transaction Spans (Phase 3)
| Span Name | Source File | Attributes | Description |
| ------------ | ------------------- | ---------------------------------------------------------------------- | ------------------------------------- |
| `tx.process` | NetworkOPs.cpp:1227 | `xrpl.tx.hash`, `xrpl.tx.local`, `xrpl.tx.path` | Transaction submission and processing |
| `tx.receive` | PeerImp.cpp:1273 | `xrpl.peer.id`, `xrpl.tx.hash`, `xrpl.tx.suppressed`, `xrpl.tx.status` | Transaction received from peer relay |
| `tx.apply` | BuildLedger.cpp:88 | `xrpl.ledger.seq`, `xrpl.ledger.tx_count`, `xrpl.ledger.tx_failed` | Transaction set applied per ledger |
### Consensus Spans (Phase 4)
| Span Name | Source File | Attributes | Description |
| --------------------------- | -------------------- | ----------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------ |
| `consensus.proposal.send` | RCLConsensus.cpp:177 | `xrpl.consensus.round` | Consensus proposal broadcast |
| `consensus.ledger_close` | RCLConsensus.cpp:282 | `xrpl.consensus.ledger.seq`, `xrpl.consensus.mode` | Ledger close event |
| `consensus.accept` | RCLConsensus.cpp:395 | `xrpl.consensus.proposers`, `xrpl.consensus.round_time_ms` | Ledger accepted by consensus |
| `consensus.validation.send` | RCLConsensus.cpp:753 | `xrpl.consensus.ledger.seq`, `xrpl.consensus.proposing` | Validation sent after accept |
| `consensus.accept.apply` | RCLConsensus.cpp:453 | `xrpl.consensus.close_time`, `close_time_correct`, `close_resolution_ms`, `state`, `proposing`, `round_time_ms`, `ledger.seq` | Ledger application with close time details |
#### Close Time Queries (Tempo TraceQL)
```
# Find rounds where validators disagreed on close time
{name="consensus.accept.apply"} | xrpl.consensus.close_time_correct = false
# Find consensus failures (moved_on)
{name="consensus.accept.apply"} | xrpl.consensus.state = "moved_on"
# Find slow ledger applications (>5s)
{name="consensus.accept.apply"} | duration > 5s
# Find specific ledger's consensus details
{name="consensus.accept.apply"} | xrpl.consensus.ledger.seq = 92345678
```
### Ledger Spans (Phase 5)
| Span Name | Source File | Attributes | Description |
| ----------------- | -------------------- | ------------------------------------------------------------------ | ----------------------------- |
| `ledger.build` | BuildLedger.cpp:31 | `xrpl.ledger.seq`, `xrpl.ledger.tx_count`, `xrpl.ledger.tx_failed` | Ledger build during consensus |
| `ledger.validate` | LedgerMaster.cpp:915 | `xrpl.ledger.seq`, `xrpl.ledger.validations` | Ledger promoted to validated |
| `ledger.store` | LedgerMaster.cpp:409 | `xrpl.ledger.seq` | Ledger stored in history |
### Peer Spans (Phase 5)
| Span Name | Source File | Attributes | Description |
| ------------------------- | ---------------- | ---------------------------------------------- | ----------------------------- |
| `peer.proposal.receive` | PeerImp.cpp:1667 | `xrpl.peer.id`, `xrpl.peer.proposal.trusted` | Proposal received from peer |
| `peer.validation.receive` | PeerImp.cpp:2264 | `xrpl.peer.id`, `xrpl.peer.validation.trusted` | Validation received from peer |
## Prometheus Metrics (Spanmetrics)
The OTel Collector's spanmetrics connector automatically derives RED (Rate, Errors, Duration) metrics from every span. No custom metrics code is needed in rippled.
### Generated Metric Names
| Prometheus Metric | Type | Description |
| -------------------------------------------------- | --------- | ---------------------------- |
| `traces_span_metrics_calls_total` | Counter | Total span invocations |
| `traces_span_metrics_duration_milliseconds_bucket` | Histogram | Latency distribution buckets |
| `traces_span_metrics_duration_milliseconds_count` | Histogram | Latency observation count |
| `traces_span_metrics_duration_milliseconds_sum` | Histogram | Cumulative latency |
### Metric Labels
Every metric carries these standard labels:
| Label | Source | Example |
| -------------- | ------------------ | ---------------------------------------- |
| `span_name` | Span name | `rpc.command.server_info` |
| `status_code` | Span status | `STATUS_CODE_UNSET`, `STATUS_CODE_ERROR` |
| `service_name` | Resource attribute | `rippled` |
| `span_kind` | Span kind | `SPAN_KIND_INTERNAL` |
Additionally, span attributes configured as dimensions in the collector become metric labels (dots → underscores):
| Span Attribute | Metric Label | Applies To |
| ------------------------------ | ------------------------------ | ------------------------------- |
| `xrpl.rpc.command` | `xrpl_rpc_command` | `rpc.command.*` spans |
| `xrpl.rpc.status` | `xrpl_rpc_status` | `rpc.command.*` spans |
| `xrpl.consensus.mode` | `xrpl_consensus_mode` | `consensus.ledger_close` spans |
| `xrpl.tx.local` | `xrpl_tx_local` | `tx.process` spans |
| `xrpl.peer.proposal.trusted` | `xrpl_peer_proposal_trusted` | `peer.proposal.receive` spans |
| `xrpl.peer.validation.trusted` | `xrpl_peer_validation_trusted` | `peer.validation.receive` spans |
### Histogram Buckets
Configured in `otel-collector-config.yaml`:
```
1ms, 5ms, 10ms, 25ms, 50ms, 100ms, 250ms, 500ms, 1s, 5s
```
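For reference, the relevant collector configuration has roughly this shape (a sketch of the spanmetrics connector settings, not a verbatim copy of `otel-collector-config.yaml`):
```yaml
connectors:
  spanmetrics:
    histogram:
      explicit:
        buckets: [1ms, 5ms, 10ms, 25ms, 50ms, 100ms, 250ms, 500ms, 1s, 5s]
    dimensions:
      - name: xrpl.rpc.command
      - name: xrpl.rpc.status
      - name: xrpl.consensus.mode
      - name: xrpl.tx.local
      - name: xrpl.peer.proposal.trusted
      - name: xrpl.peer.validation.trusted
```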
## System Metrics (beast::insight via OTel native)
rippled has a built-in metrics framework (`beast::insight`) that exports metrics natively via OTLP/HTTP. These complement the span-derived RED metrics by providing system-level gauges, counters, and timers that don't map to individual trace spans.
### Configuration
Add to `xrpld.cfg`:
```ini
[insight]
server=otel
endpoint=http://localhost:4318/v1/metrics
prefix=rippled
```
The OTel Collector receives these via the OTLP receiver (same endpoint as traces, port 4318) and exports them to Prometheus alongside spanmetrics.
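The metrics pipeline in the collector looks roughly like this (a sketch; the actual `otel-collector-config.yaml` includes additional receivers, processors, and exporters):
```yaml
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318
exporters:
  prometheus:
    endpoint: 0.0.0.0:8889   # scraped by Prometheus
service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [prometheus]
```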
#### StatsD fallback (backward compatibility)
The legacy StatsD backend is still available:
```ini
[insight]
server=statsd
address=127.0.0.1:8125
prefix=rippled
```
When using StatsD, uncomment the `statsd` receiver in `otel-collector-config.yaml` and add port `8125:8125/udp` to the docker-compose otel-collector service.
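The corresponding changes look roughly like this (a sketch; adapt to your file layout):
```yaml
# otel-collector-config.yaml — enable the StatsD receiver
receivers:
  statsd:
    endpoint: "0.0.0.0:8125"

service:
  pipelines:
    metrics:
      receivers: [otlp, statsd]   # add statsd alongside the existing receivers

# docker-compose.yml — publish the UDP port on the otel-collector service:
#   ports:
#     - "8125:8125/udp"
```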
### Metric Reference
#### Gauges
| Prometheus Metric | Source | Description |
| --------------------------------------------- | ------------------------- | -------------------------------------------------------------------------- |
| `rippled_LedgerMaster_Validated_Ledger_Age` | LedgerMaster.h:373 | Age of validated ledger (seconds) |
| `rippled_LedgerMaster_Published_Ledger_Age` | LedgerMaster.h:374 | Age of published ledger (seconds) |
| `rippled_State_Accounting_{Mode}_duration` | NetworkOPs.cpp:774 | Time in each operating mode (Disconnected/Connected/Syncing/Tracking/Full) |
| `rippled_State_Accounting_{Mode}_transitions` | NetworkOPs.cpp:780 | Transition count per mode |
| `rippled_Peer_Finder_Active_Inbound_Peers` | PeerfinderManager.cpp:214 | Active inbound peer connections |
| `rippled_Peer_Finder_Active_Outbound_Peers` | PeerfinderManager.cpp:215 | Active outbound peer connections |
| `rippled_Overlay_Peer_Disconnects` | OverlayImpl.h:557 | Peer disconnect count |
| `rippled_job_count` | JobQueue.cpp:26 | Current job queue depth |
| `rippled_{category}_Bytes_In/Out` | OverlayImpl.h:535 | Overlay traffic bytes per category (57 categories) |
| `rippled_{category}_Messages_In/Out` | OverlayImpl.h:535 | Overlay traffic messages per category |
#### Counters
| Prometheus Metric | Source | Description |
| --------------------------------- | --------------------- | ------------------------------ |
| `rippled_rpc_requests` | ServerHandler.cpp:108 | Total RPC request count |
| `rippled_ledger_fetches` | InboundLedgers.cpp:44 | Ledger fetch request count |
| `rippled_ledger_history_mismatch` | LedgerHistory.cpp:16 | Ledger hash mismatch count |
| `rippled_warn` | Logic.h:33 | Resource manager warning count |
| `rippled_drop` | Logic.h:34 | Resource manager drop count |
#### Histograms (from StatsD timers)
| Prometheus Metric | Source | Description |
| ----------------------- | --------------------- | ------------------------------ |
| `rippled_rpc_time` | ServerHandler.cpp:110 | RPC response time (ms) |
| `rippled_rpc_size` | ServerHandler.cpp:109 | RPC response size (bytes) |
| `rippled_ios_latency` | Application.cpp:438 | I/O service loop latency (ms) |
| `rippled_pathfind_fast` | PathRequests.h:23 | Fast pathfinding duration (ms) |
| `rippled_pathfind_full` | PathRequests.h:24 | Full pathfinding duration (ms) |
## Grafana Dashboards
Thirteen dashboards are pre-provisioned in `docker/telemetry/grafana/dashboards/`:
### RPC Performance (`rippled-rpc-perf`)
| Panel | Type | PromQL | Labels Used |
| --------------------------- | ---------- | -------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------- |
| RPC Request Rate by Command | timeseries | `sum by (xrpl_rpc_command) (rate(traces_span_metrics_calls_total{span_name=~"rpc.command.*"}[5m]))` | `xrpl_rpc_command` |
| RPC Latency p95 by Command | timeseries | `histogram_quantile(0.95, sum by (le, xrpl_rpc_command) (rate(traces_span_metrics_duration_milliseconds_bucket{span_name=~"rpc.command.*"}[5m])))` | `xrpl_rpc_command` |
| RPC Error Rate | bargauge | Error spans / total spans × 100, grouped by `xrpl_rpc_command` | `xrpl_rpc_command`, `status_code` |
| RPC Latency Heatmap | heatmap | `sum(increase(traces_span_metrics_duration_milliseconds_bucket{span_name=~"rpc.command.*"}[5m])) by (le)` | `le` (bucket boundaries) |
| Overall RPC Throughput | timeseries | `rpc.request` + `rpc.process` rate | — |
| RPC Success vs Error | timeseries | by `status_code` (UNSET vs ERROR) | `status_code` |
| Top Commands by Volume | bargauge | `topk(10, ...)` by `xrpl_rpc_command` | `xrpl_rpc_command` |
| WebSocket Message Rate | stat | `rpc.ws_message` rate | — |
### Transaction Overview (`rippled-transactions`)
| Panel | Type | PromQL | Labels Used |
| --------------------------------- | ---------- | -------------------------------------------------------------------------------------------- | --------------- |
| Transaction Processing Rate | timeseries | `rate(traces_span_metrics_calls_total{span_name="tx.process"}[5m])` and `tx.receive` | `span_name` |
| Transaction Processing Latency | timeseries | `histogram_quantile(0.95 / 0.50, ... {span_name="tx.process"})` | — |
| Transaction Path Distribution | piechart | `sum by (xrpl_tx_local) (rate(traces_span_metrics_calls_total{span_name="tx.process"}[5m]))` | `xrpl_tx_local` |
| Transaction Receive vs Suppressed | timeseries | `rate(traces_span_metrics_calls_total{span_name="tx.receive"}[5m])` | — |
| TX Processing Duration Heatmap | heatmap | `tx.process` histogram buckets | `le` |
| TX Apply Duration per Ledger | timeseries | p95/p50 of `tx.apply` | — |
| Peer TX Receive Rate | timeseries | `tx.receive` rate | — |
| TX Apply Failed Rate | stat | `tx.apply` with `STATUS_CODE_ERROR` | `status_code` |
### Consensus Health (`rippled-consensus`)
| Panel | Type | PromQL | Labels Used |
| ----------------------------- | ---------- | ---------------------------------------------------------------------------------- | --------------------- |
| Consensus Round Duration | timeseries | `histogram_quantile(0.95 / 0.50, ... {span_name="consensus.accept"})` | — |
| Consensus Proposals Sent Rate | timeseries | `rate(traces_span_metrics_calls_total{span_name="consensus.proposal.send"}[5m])` | — |
| Ledger Close Duration | timeseries | `histogram_quantile(0.95, ... {span_name="consensus.ledger_close"})` | — |
| Validation Send Rate | stat | `rate(traces_span_metrics_calls_total{span_name="consensus.validation.send"}[5m])` | — |
| Ledger Apply Duration | timeseries | `histogram_quantile(0.95 / 0.50, ... {span_name="consensus.accept.apply"})` | — |
| Close Time Agreement | timeseries | `rate(traces_span_metrics_calls_total{span_name="consensus.accept.apply"}[5m])` | — |
| Consensus Mode Over Time | timeseries | `consensus.ledger_close` by `xrpl_consensus_mode` | `xrpl_consensus_mode` |
| Accept vs Close Rate | timeseries | `consensus.accept` vs `consensus.ledger_close` rate | — |
| Validation vs Close Rate | timeseries | `consensus.validation.send` vs `consensus.ledger_close` | — |
| Accept Duration Heatmap | heatmap | `consensus.accept` histogram buckets | `le` |
### Ledger Operations (`rippled-ledger-ops`)
| Panel | Type | PromQL | Labels Used |
| ----------------------- | ---------- | ---------------------------------------------- | ----------- |
| Ledger Build Rate | stat | `ledger.build` call rate | — |
| Ledger Build Duration | timeseries | p95/p50 of `ledger.build` | — |
| Ledger Validation Rate | stat | `ledger.validate` call rate | — |
| Build Duration Heatmap | heatmap | `ledger.build` histogram buckets | `le` |
| TX Apply Duration | timeseries | p95/p50 of `tx.apply` | — |
| TX Apply Rate | timeseries | `tx.apply` call rate | — |
| Ledger Store Rate | stat | `ledger.store` call rate | — |
| Build vs Close Duration | timeseries | p95 `ledger.build` vs `consensus.ledger_close` | — |
### Peer Network (`rippled-peer-net`)
Requires `trace_peer=1` in the `[telemetry]` config section.
| Panel | Type | PromQL | Labels Used |
| -------------------------------- | ---------- | --------------------------------- | ------------------------------ |
| Proposal Receive Rate | timeseries | `peer.proposal.receive` rate | — |
| Validation Receive Rate | timeseries | `peer.validation.receive` rate | — |
| Proposals Trusted vs Untrusted | piechart | by `xrpl_peer_proposal_trusted` | `xrpl_peer_proposal_trusted` |
| Validations Trusted vs Untrusted | piechart | by `xrpl_peer_validation_trusted` | `xrpl_peer_validation_trusted` |
### Node Health — System Metrics (`rippled-system-node-health`)
| Panel | Type | PromQL | Labels Used |
| -------------------------- | ---------- | ------------------------------------------------------ | ----------- |
| Validated Ledger Age | stat | `rippled_LedgerMaster_Validated_Ledger_Age` | — |
| Published Ledger Age | stat | `rippled_LedgerMaster_Published_Ledger_Age` | — |
| Operating Mode Duration | timeseries | `rippled_State_Accounting_*_duration` | — |
| Operating Mode Transitions | timeseries | `rippled_State_Accounting_*_transitions` | — |
| I/O Latency | timeseries | `histogram_quantile(0.95, rippled_ios_latency_bucket)` | — |
| Job Queue Depth | timeseries | `rippled_job_count` | — |
| Ledger Fetch Rate | stat | `rate(rippled_ledger_fetches[5m])` | — |
| Ledger History Mismatches | stat | `rate(rippled_ledger_history_mismatch[5m])` | — |
### Network Traffic — System Metrics (`rippled-system-network`)
| Panel | Type | PromQL | Labels Used |
| ---------------------- | ---------- | -------------------------------------- | ----------- |
| Active Peers | timeseries | `rippled_Peer_Finder_Active_*_Peers` | — |
| Peer Disconnects | timeseries | `rippled_Overlay_Peer_Disconnects` | — |
| Total Network Bytes | timeseries | `rippled_total_Bytes_In/Out` | — |
| Total Network Messages | timeseries | `rippled_total_Messages_In/Out` | — |
| Transaction Traffic | timeseries | `rippled_transactions_Messages_In/Out` | — |
| Proposal Traffic | timeseries | `rippled_proposals_Messages_In/Out` | — |
| Validation Traffic | timeseries | `rippled_validations_Messages_In/Out` | — |
| Traffic by Category | bargauge | `topk(10, rippled_*_Bytes_In)` | — |
### RPC & Pathfinding — System Metrics (`rippled-system-rpc`)
| Panel | Type | PromQL | Labels Used |
| ------------------------- | ---------- | -------------------------------------------------------- | ----------- |
| RPC Request Rate | stat | `rate(rippled_rpc_requests[5m])` | — |
| RPC Response Time | timeseries | `histogram_quantile(0.95, rippled_rpc_time_bucket)` | — |
| RPC Response Size | timeseries | `histogram_quantile(0.95, rippled_rpc_size_bucket)` | — |
| RPC Response Time Heatmap | heatmap | `rippled_rpc_time_bucket` | — |
| Pathfinding Fast Duration | timeseries | `histogram_quantile(0.95, rippled_pathfind_fast_bucket)` | — |
| Pathfinding Full Duration | timeseries | `histogram_quantile(0.95, rippled_pathfind_full_bucket)` | — |
| Resource Warnings Rate | stat | `rate(rippled_warn[5m])` | — |
| Resource Drops Rate | stat | `rate(rippled_drop[5m])` | — |
### Span → Metric → Dashboard Summary
| Span Name | Prometheus Metric Filter | Grafana Dashboard |
| --------------------------- | ----------------------------------------- | --------------------------------------------- |
| `rpc.request` | `{span_name="rpc.request"}` | RPC Performance (Overall Throughput) |
| `rpc.process` | `{span_name="rpc.process"}` | RPC Performance (Overall Throughput) |
| `rpc.ws_message` | `{span_name="rpc.ws_message"}` | RPC Performance (WebSocket Rate) |
| `rpc.command.*` | `{span_name=~"rpc.command.*"}` | RPC Performance (Rate, Latency, Error, Top) |
| `tx.process` | `{span_name="tx.process"}` | Transaction Overview (Rate, Latency, Heatmap) |
| `tx.receive` | `{span_name="tx.receive"}` | Transaction Overview (Rate, Receive) |
| `tx.apply` | `{span_name="tx.apply"}` | Transaction Overview + Ledger Ops (Apply) |
| `consensus.accept` | `{span_name="consensus.accept"}` | Consensus Health (Duration, Rate, Heatmap) |
| `consensus.proposal.send` | `{span_name="consensus.proposal.send"}` | Consensus Health (Proposals Rate) |
| `consensus.ledger_close` | `{span_name="consensus.ledger_close"}` | Consensus Health (Close, Mode) |
| `consensus.validation.send` | `{span_name="consensus.validation.send"}` | Consensus Health (Validation Rate) |
| `consensus.accept.apply` | `{span_name="consensus.accept.apply"}` | Consensus Health (Apply Duration, Close Time) |
| `ledger.build` | `{span_name="ledger.build"}` | Ledger Ops (Build Rate, Duration, Heatmap) |
| `ledger.validate` | `{span_name="ledger.validate"}` | Ledger Ops (Validation Rate) |
| `ledger.store` | `{span_name="ledger.store"}` | Ledger Ops (Store Rate) |
| `peer.proposal.receive` | `{span_name="peer.proposal.receive"}` | Peer Network (Rate, Trusted/Untrusted) |
| `peer.validation.receive` | `{span_name="peer.validation.receive"}` | Peer Network (Rate, Trusted/Untrusted) |
## Log-Trace Correlation (Phase 8)
When rippled is built with `telemetry=ON`, log lines emitted within an active OpenTelemetry span automatically include `trace_id` and `span_id` fields:
```
2024-01-15T10:30:45.123Z LedgerMaster:NFO trace_id=abc123def456789012345678abcdef01 span_id=0123456789abcdef Validated ledger 42
```
This enables bidirectional navigation between logs and traces in Grafana:
- **Tempo -> Loki**: Click "Logs for this trace" on any trace in Grafana Tempo to see all log lines from that trace.
- **Loki -> Tempo**: Click the `TraceID` derived field link on any log line containing `trace_id=` to jump to the full trace in Tempo.
### Log Ingestion Pipeline
Log files are ingested by the OTel Collector's `filelog` receiver, which tails `debug.log` files and parses them with a regex that extracts `timestamp`, `partition`, `severity`, `trace_id`, `span_id`, and `message` fields. Parsed entries are exported to Grafana Loki.
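A minimal sketch of that receiver configuration (the path and regex here are illustrative; the actual expression in `otel-collector-config.yaml` is more complete):
```yaml
receivers:
  filelog:
    include:
      - /var/log/rippled/debug.log   # path as mounted into the collector container
    operators:
      - type: regex_parser
        regex: '^(?P<timestamp>\S+) (?P<partition>\S+):(?P<severity>\S+)(?: trace_id=(?P<trace_id>[a-f0-9]+) span_id=(?P<span_id>[a-f0-9]+))? (?P<message>.*)$'
```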
### LogQL Query Examples
```logql
# Find all logs for a specific trace
{job="rippled"} |= "trace_id=abc123def456789012345678abcdef01"
# Error logs with trace context (log lines with ERR severity that have a trace_id)
{job="rippled"} |= "ERR" |= "trace_id="
# All logs from a specific partition that were emitted during a span
{job="rippled"} |= "LedgerMaster" | regexp `trace_id=(?P<trace_id>[a-f0-9]+)` | trace_id != ""
# Logs from the last hour containing trace context
{job="rippled"} |= "trace_id=" | regexp `(?P<partition>\S+):(?P<sev>\S+)\s+trace_id=(?P<tid>[a-f0-9]+)`
# Count of traced vs untraced log lines
count_over_time({job="rippled"} |= "trace_id=" [5m])
```
### Verifying Log Correlation
1. Start the observability stack and rippled with telemetry enabled.
2. Send an RPC request: `curl http://localhost:5005 -d '{"method":"server_info"}'`
3. Check the debug.log for `trace_id=` entries: `grep trace_id= /path/to/debug.log`
4. Open Grafana at http://localhost:3000 -> Explore -> Loki and search for `{job="rippled"} |= "trace_id="`.
5. Click the TraceID link to navigate to the corresponding trace in Tempo.
## Phase 9: OTel Metrics Alerting Rules
The following alerting rules are recommended for the Phase 9 OTel SDK metrics.
Add them to your Prometheus alerting rules configuration; a sample rules-file snippet follows the tables below.
### NodeStore
| Alert Name | Severity | Condition | For | Description |
| --------------------------- | -------- | ---------------------------------------------------- | --- | ------------------------------------------------------- |
| `NodeStoreHighWriteLoad` | Warning | `rippled_nodestore_state{metric="write_load"} > 100` | 5m | NodeStore backend is under sustained write pressure |
| `NodeStoreReadQueueBacklog` | Warning | `rippled_nodestore_state{metric="read_queue"} > 500` | 5m | Prefetch thread pool is saturated; reads are backing up |
### Cache
| Alert Name | Severity | Condition | For | Description |
| ----------------------- | -------- | ------------------------------------------------------- | --- | ------------------------------------------------------ |
| `SLECacheHitRateLow` | Warning | `rippled_cache_metrics{metric="SLE_hit_rate"} < 0.5` | 10m | SLE cache is thrashing; consider increasing cache size |
| `LedgerCacheHitRateLow` | Warning | `rippled_cache_metrics{metric="ledger_hit_rate"} < 0.5` | 10m | Ledger cache hit rate is degraded |
### Transaction Queue
| Alert Name | Severity | Condition | For | Description |
| ---------------------- | -------- | ---------------------------------------------------------------------------------------------------------------------- | --- | -------------------------------------------------- |
| `TxQNearCapacity` | Warning | `rippled_txq_metrics{metric="txq_count"} / rippled_txq_metrics{metric="txq_max_size"} > 0.8` | 5m | TxQ is >80% full; transactions may be rejected |
| `TxQHighFeeEscalation` | Warning | `rippled_txq_metrics{metric="txq_open_ledger_fee_level"} / rippled_txq_metrics{metric="txq_reference_fee_level"} > 10` | 5m | Fee escalation is 10x above reference; high demand |
### Load Factor
| Alert Name | Severity | Condition | For | Description |
| --------------------- | -------- | -------------------------------------------------------------- | --- | -------------------------------------------------------------- |
| `HighLoadFactor` | Warning | `rippled_load_factor_metrics{metric="load_factor"} > 5` | 10m | Combined load factor is elevated; transactions cost 5x+ normal |
| `HighLocalLoadFactor` | Critical | `rippled_load_factor_metrics{metric="load_factor_local"} > 10` | 5m | Local server load is critically elevated |
### RPC Performance
| Alert Name | Severity | Condition | For | Description |
| ------------------ | -------- | ---------------------------------------------------------------------------------------------------------- | --- | --------------------------------- |
| `HighRPCErrorRate` | Warning | `sum(rate(rippled_rpc_method_errored_total[5m])) / sum(rate(rippled_rpc_method_started_total[5m])) > 0.05` | 5m | >5% of RPC calls are erroring |
| `SlowRPCLatency` | Warning | `histogram_quantile(0.95, sum by (le) (rate(rippled_rpc_method_duration_us_bucket[5m]))) > 5000000` | 5m | RPC p95 latency exceeds 5 seconds |
### Job Queue
| Alert Name | Severity | Condition | For | Description |
| ------------------ | -------- | ----------------------------------------------------------------------------------------------------- | --- | ---------------------------------------------------- |
| `JobQueueBacklog` | Warning | `sum(rate(rippled_job_queued_total[5m])) - sum(rate(rippled_job_finished_total[5m])) > 100` | 5m | Jobs are being queued faster than they're completing |
| `SlowJobExecution` | Warning | `histogram_quantile(0.95, sum by (le) (rate(rippled_job_running_duration_us_bucket[5m]))) > 10000000` | 5m | Job execution p95 exceeds 10 seconds |
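As a concrete example, the `HighRPCErrorRate` row above can be expressed in a Prometheus rules file like this (group name and annotation text are illustrative):
```yaml
groups:
  - name: rippled-telemetry
    rules:
      - alert: HighRPCErrorRate
        expr: |
          sum(rate(rippled_rpc_method_errored_total[5m]))
            / sum(rate(rippled_rpc_method_started_total[5m])) > 0.05
        for: 5m
        labels:
          severity: warning
        annotations:
          description: "More than 5% of RPC calls are returning errors."
```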
## Troubleshooting
### No OTel SDK metrics in Prometheus
1. Verify `enabled=1` in the `[telemetry]` config section
2. Check that `metrics_endpoint` points to the OTel Collector's HTTP receiver
(default: `http://localhost:4318/v1/metrics`)
3. Check rippled logs for `MetricsRegistry: started successfully` message
4. Verify the OTel Collector is configured with an OTLP receiver and Prometheus exporter
5. Check Prometheus targets page for the collector scrape target
### Cache hit rates are zero
Cache hit rates may be zero during startup before caches are warmed. Wait for the
node to reach `Full` operating mode and process several ledgers before investigating.
### NodeStore I/O counters not incrementing
NodeStore counters are cumulative and may appear flat if the node is idle. Submit
some transactions or RPC requests to generate I/O activity.
### No traces appearing in Jaeger
1. Check rippled logs for `Telemetry starting` message
2. Verify `enabled=1` in the `[telemetry]` config section
3. Test collector connectivity: `curl -v http://localhost:4318/v1/traces`
4. Check collector logs: `docker compose logs otel-collector`
### No system metrics in Prometheus
1. Check rippled logs for `OTelCollector starting` message
2. Verify `server=otel` in the `[insight]` config section
3. Verify the endpoint in `[insight]` points to the OTLP/HTTP port (default: `http://localhost:4318/v1/metrics`)
4. Check that the `otlp` receiver is in the metrics pipeline receivers in `otel-collector-config.yaml`
5. Query Prometheus directly: `curl 'http://localhost:9090/api/v1/query?query=rippled_job_count'`
### High memory usage
- Reduce `sampling_ratio` (e.g., `0.1` for 10% sampling)
- Reduce `max_queue_size` and `batch_size`
- Disable high-volume trace categories: `trace_peer=0`
### Collector connection failures
- Verify endpoint URL matches collector address
- Check firewall rules for ports 4317/4318
- If using TLS, verify certificate path with `tls_ca_cert`
### No trace_id in log output
- Verify rippled was built with `telemetry=ON` (the `XRPL_ENABLE_TELEMETRY` preprocessor flag)
- Verify `enabled=1` in the `[telemetry]` config section
- Log lines only contain `trace_id`/`span_id` when emitted inside an active span — background logs outside of RPC/consensus/transaction processing will not have trace context
- Check that the specific trace category is enabled (e.g., `trace_rpc=1`)
### No logs in Loki
- Verify the log file mount in docker-compose.yml points to the correct rippled log directory
- Check OTel Collector logs for filelog receiver errors: `docker compose logs otel-collector`
- Verify Loki is running: `curl http://localhost:3100/ready`
- Check the filelog receiver glob pattern matches your log file paths
## Performance Tuning
| Scenario | Recommendation |
| ------------------------ | ------------------------------------------------- |
| Production mainnet | `sampling_ratio=0.01`, `trace_peer=0` |
| Testnet/devnet | `sampling_ratio=1.0` (full tracing) |
| Debugging specific issue | `sampling_ratio=1.0` temporarily |
| High-throughput node | Increase `batch_size=1024`, `max_queue_size=4096` |
## Disabling Telemetry
Set `enabled=0` in config (runtime disable) or build without the flag:
```bash
cmake --preset default -Dtelemetry=OFF
```
When telemetry is compiled out, all trace macros expand to no-ops with zero overhead.
## Validating Telemetry Stack
After deploying telemetry, use the Phase 10 workload tools to validate the full stack end-to-end.
### Quick Validation
```bash
# Run the full validation suite (starts cluster, generates load, validates):
docker/telemetry/workload/run-full-validation.sh --xrpld .build/xrpld
# Check the report:
cat /tmp/xrpld-validation/reports/validation-report.json | jq '.summary'
```
### What Gets Validated
| Category | Checks | Description |
| ---------- | -------------- | -------------------------------------------------------- |
| Spans | 16+ span types | All span names appear in Jaeger with required attributes |
| Metrics | 30+ metrics | SpanMetrics, StatsD gauges/counters, Phase 9 metrics |
| Logs | 2 checks | trace_id/span_id present in Loki, cross-reference works |
| Dashboards | 10 dashboards | All Grafana dashboards load without errors |
### Running Individual Tools
```bash
# RPC load only:
python3 docker/telemetry/workload/rpc_load_generator.py \
--endpoints ws://localhost:6006 --rate 50 --duration 120
# Transaction mix only:
python3 docker/telemetry/workload/tx_submitter.py \
--endpoint ws://localhost:6006 --tps 5 --duration 120
# Validation only (assumes load already ran):
python3 docker/telemetry/workload/validate_telemetry.py \
--report /tmp/report.json
```
### Interpreting Failures
- **Span failures**: Check that the relevant trace category is enabled in `[telemetry]` config (e.g., `trace_rpc=1`).
- **Metric failures**: Verify the OTel Collector is running and Prometheus is scraping port 8889. Check `docker compose logs otel-collector`.
- **Dashboard failures**: Ensure Grafana provisioning is mounted correctly. Check `docker compose logs grafana`.
## Performance Benchmarking
Measure the overhead of the telemetry stack against a baseline:
```bash
docker/telemetry/workload/benchmark.sh --xrpld .build/xrpld --duration 300
```
### Benchmark Thresholds
| Metric | Target | Description |
| ----------------- | ------ | -------------------------------------- |
| CPU overhead | < 3% | Average CPU increase across nodes |
| Memory overhead | < 5MB | Peak RSS increase per node |
| RPC p99 latency | < 2ms | Additional p99 latency for server_info |
| Throughput impact | < 5% | Reduction in ledger close rate |
| Consensus impact | < 1% | Increase in consensus round time |
### Tuning for Production
If benchmarks exceed thresholds:
1. **Reduce sampling**: `sampling_ratio=0.01` (1% of traces)
2. **Disable peer tracing**: `trace_peer=0` (highest volume category)
3. **Increase batch delay**: `batch_delay_ms=10000` (less frequent exports)
4. **Reduce queue size**: `max_queue_size=1024` (back-pressure earlier)
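Put together, a production-leaning `[telemetry]` section might look like the following (values are the suggestions above, not defaults):
```ini
[telemetry]
enabled=1
endpoint=http://localhost:4318/v1/traces
sampling_ratio=0.01
trace_peer=0
batch_delay_ms=10000
max_queue_size=1024
```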
See `docker/telemetry/workload/README.md` for full documentation.

View File

@@ -1,73 +0,0 @@
#pragma once
#include <xrpl/beast/utility/Journal.h>
#include <chrono>
#include <cstdint>
#include <string_view>
namespace xrpl {
// cSpell:ignore ptmalloc
// -----------------------------------------------------------------------------
// Allocator interaction note:
// - This facility invokes glibc's malloc_trim(0) on Linux/glibc to request that
// ptmalloc return free heap pages to the OS.
// - If an alternative allocator (e.g. jemalloc or tcmalloc) is linked or
// preloaded (LD_PRELOAD), calling glibc's malloc_trim typically has no effect
// on the *active* heap. The call is harmless but may not reclaim memory
// because those allocators manage their own arenas.
// - Only glibc sbrk/arena space is eligible for trimming; large mmap-backed
// allocations are usually returned to the OS on free regardless of trimming.
// - Call at known reclamation points (e.g., after cache sweeps / online delete)
// and consider rate limiting to avoid churn.
// -----------------------------------------------------------------------------
struct MallocTrimReport
{
bool supported{false};
int trimResult{-1};
std::int64_t rssBeforeKB{-1};
std::int64_t rssAfterKB{-1};
std::chrono::microseconds durationUs{-1};
std::int64_t minfltDelta{-1};
std::int64_t majfltDelta{-1};
[[nodiscard]] std::int64_t
deltaKB() const noexcept
{
if (rssBeforeKB < 0 || rssAfterKB < 0)
return 0;
return rssAfterKB - rssBeforeKB;
}
};
/**
* @brief Attempt to return freed memory to the operating system.
*
* On Linux with glibc malloc, this issues ::malloc_trim(0), which may release
* free space from ptmalloc arenas back to the kernel. On other platforms, or if
* a different allocator is in use, this function is a no-op and the report will
* indicate that trimming is unsupported or had no effect.
*
* @param tag Identifier for logging/debugging purposes.
* @param journal Journal for diagnostic logging.
* @return Report containing before/after metrics and the trim result.
*
* @note If an alternative allocator (jemalloc/tcmalloc) is linked or preloaded,
* calling glibc's malloc_trim may have no effect on the active heap. The
* call is harmless but typically does not reclaim memory under those
* allocators.
*
* @note Only memory served from glibc's sbrk/arena heaps is eligible for trim.
* Large allocations satisfied via mmap are usually returned on free
* independently of trimming.
*
* @note Intended for use after operations that free significant memory (e.g.,
* cache sweeps, ledger cleanup, online delete). Consider rate limiting.
*/
MallocTrimReport
mallocTrim(std::string_view tag, beast::Journal journal);
} // namespace xrpl

View File

@@ -12,4 +12,5 @@
#include <xrpl/beast/insight/Hook.h>
#include <xrpl/beast/insight/HookImpl.h>
#include <xrpl/beast/insight/NullCollector.h>
#include <xrpl/beast/insight/OTelCollector.h>
#include <xrpl/beast/insight/StatsDCollector.h>

View File

@@ -0,0 +1,92 @@
#pragma once
/**
* @file OTelCollector.h
* @brief OpenTelemetry-based implementation of the beast::insight::Collector
* interface for native OTLP metric export.
*
* When XRPL_ENABLE_TELEMETRY is defined, OTelCollector maps each
* beast::insight instrument type (Counter, Gauge, Event, Meter, Hook) to
* the corresponding OpenTelemetry Metrics SDK instrument and exports
* them via OTLP/HTTP to an OpenTelemetry Collector.
*
* When XRPL_ENABLE_TELEMETRY is NOT defined, OTelCollector::New() returns
* a NullCollector so the binary compiles without OTel dependencies.
*
* Dependency diagram:
*
* +-----------------+ +-------------------+
* | Collector (ABC) |<----| OTelCollector |
* +-----------------+ | (public header) |
* ^ +-------------------+
* | |
* +-----------------+ +-------------------+
* | NullCollector | | OTelCollectorImp |
* | (fallback when | | (impl in .cpp, |
* | no telemetry) | | uses OTel SDK) |
* +-----------------+ +-------------------+
* |
* +-------------------+
* | OTel Metrics SDK |
* | MeterProvider |
* | OTLP HTTP Metric |
* | Exporter |
* +-------------------+
*/
#include <xrpl/beast/insight/Collector.h>
#include <xrpl/beast/utility/Journal.h>
#include <memory>
#include <string>
namespace beast {
namespace insight {
/**
* @brief A Collector that exports metrics via OpenTelemetry OTLP/HTTP.
*
* Replaces StatsD-based metric collection with native OTel Metrics SDK
* instruments. Each beast::insight instrument maps to an OTel equivalent:
*
* - Counter -> OTel Counter<int64_t>
* - Gauge -> OTel ObservableGauge<int64_t> (async callback)
* - Event -> OTel Histogram<double> (duration in milliseconds)
* - Meter -> OTel Counter<uint64_t> (monotonic, unsigned)
* - Hook -> Called by PeriodicMetricReader at collection time
*
* @see StatsDCollector for the StatsD-based alternative.
* @see NullCollector for the no-op fallback.
*/
class OTelCollector : public Collector
{
public:
explicit OTelCollector() = default;
/**
* @brief Factory method to create an OTelCollector instance.
*
* When XRPL_ENABLE_TELEMETRY is defined, creates a real OTel-backed
* collector that exports metrics via OTLP/HTTP. When telemetry is
* disabled at compile time, returns a NullCollector.
*
* @param endpoint OTLP/HTTP metrics endpoint URL
* (e.g. "http://localhost:4318/v1/metrics").
* @param prefix Prefix prepended to all metric names
* (e.g. "rippled").
* @param instanceId Unique identifier for this node instance,
* emitted as the `service.instance.id` OTel
* resource attribute. Defaults to empty string
* (attribute omitted when empty).
* @param journal Journal for logging.
* @return Shared pointer to the created Collector.
*/
static std::shared_ptr<Collector>
New(std::string const& endpoint,
std::string const& prefix,
std::string const& instanceId,
Journal journal);
};
} // namespace insight
} // namespace beast

View File

@@ -19,6 +19,10 @@ class Manager;
namespace perf {
class PerfLog;
}
namespace telemetry {
class Telemetry;
class MetricsRegistry;
} // namespace telemetry
// This is temporary until we migrate all code to use ServiceRegistry.
class Application;
@@ -205,6 +209,15 @@ public:
virtual perf::PerfLog&
getPerfLog() = 0;
virtual telemetry::Telemetry&
getTelemetry() = 0;
/** Return the MetricsRegistry, or nullptr if telemetry is disabled.
Used by PerfLog and other hot paths to record OTel metrics.
*/
virtual telemetry::MetricsRegistry*
getMetricsRegistry() = 0;
// Configuration and state
virtual bool
isStopping() const = 0;

View File

@@ -77,16 +77,16 @@ public:
If the object is not found or an error is encountered, the
result will indicate the condition.
@note This will be called concurrently.
@param hash The hash of the object.
@param key A pointer to the key data.
@param pObject [out] The created object if successful.
@return The result of the operation.
*/
virtual Status
fetch(uint256 const& hash, std::shared_ptr<NodeObject>* pObject) = 0;
fetch(void const* key, std::shared_ptr<NodeObject>* pObject) = 0;
/** Fetch a batch synchronously. */
virtual std::pair<std::vector<std::shared_ptr<NodeObject>>, Status>
fetchBatch(std::vector<uint256> const& hashes) = 0;
fetchBatch(std::vector<uint256 const*> const& hashes) = 0;
/** Store a single object.
Depending on the implementation this may happen immediately

View File

@@ -85,6 +85,19 @@ message TMPublicKey {
// If you want to send an amount that is greater than any single address of yours
// you must first combine coins from one address to another.
// Trace context for OpenTelemetry distributed tracing across nodes.
// Uses W3C Trace Context format internally.
message TraceContext {
optional bytes trace_id = 1; // 16-byte trace identifier
optional bytes span_id = 2; // 8-byte parent span identifier
optional uint32 trace_flags = 3; // bit 0 = sampled
// TODO: trace_state is reserved for W3C tracestate vendor-specific
// key-value pairs but is not yet read or written by
// TraceContextPropagator. Wire it when cross-vendor trace
// propagation is needed.
optional string trace_state = 4; // W3C tracestate header value
}
enum TransactionStatus {
tsNEW = 1; // origin node did/could not validate
tsCURRENT = 2; // scheduled to go in this ledger
@@ -101,6 +114,9 @@ message TMTransaction {
required TransactionStatus status = 2;
optional uint64 receiveTimestamp = 3;
optional bool deferred = 4; // not applied to open ledger
// Optional trace context for OpenTelemetry distributed tracing
optional TraceContext trace_context = 1001;
}
message TMTransactions {
@@ -149,6 +165,9 @@ message TMProposeSet {
// Number of hops traveled
optional uint32 hops = 12 [deprecated = true];
// Optional trace context for OpenTelemetry distributed tracing
optional TraceContext trace_context = 1001;
}
enum TxSetStatus {
@@ -194,6 +213,9 @@ message TMValidation {
// Number of hops traveled
optional uint32 hops = 3 [deprecated = true];
// Optional trace context for OpenTelemetry distributed tracing
optional TraceContext trace_context = 1001;
}
// An array of Endpoint messages

View File

@@ -15,10 +15,9 @@
// Add new amendments to the top of this list.
// Keep it sorted in reverse chronological order.
XRPL_FIX (PermissionedDomainInvariant, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FIX (ExpiredNFTokenOfferRemoval, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FIX (BatchInnerSigs, Supported::no, VoteBehavior::DefaultNo)
XRPL_FIX (BatchInnerSigs, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FEATURE(LendingProtocol, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FEATURE(PermissionDelegationV1_1, Supported::no, VoteBehavior::DefaultNo)
XRPL_FIX (DirectoryLimit, Supported::yes, VoteBehavior::DefaultNo)
@@ -32,7 +31,7 @@ XRPL_FEATURE(TokenEscrow, Supported::yes, VoteBehavior::DefaultNo
XRPL_FIX (EnforceNFTokenTrustlineV2, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FIX (AMMv1_3, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FEATURE(PermissionedDEX, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FEATURE(Batch, Supported::no, VoteBehavior::DefaultNo)
XRPL_FEATURE(Batch, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FEATURE(SingleAssetVault, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FIX (PayChanCancelAfter, Supported::yes, VoteBehavior::DefaultNo)
// Check flags in Credential transactions

View File

@@ -0,0 +1,155 @@
#pragma once
/** RAII guard for OpenTelemetry trace spans.
Wraps an OTel Span and Scope together. On construction, the span is
activated on the current thread's context (via Scope). On destruction,
the span is ended and the previous context is restored.
Used by the XRPL_TRACE_* macros in TracingInstrumentation.h. Can also
be stored in std::optional for conditional tracing (move-constructible).
Only compiled when XRPL_ENABLE_TELEMETRY is defined.
*/
#ifdef XRPL_ENABLE_TELEMETRY
#include <opentelemetry/context/runtime_context.h>
#include <opentelemetry/nostd/shared_ptr.h>
#include <opentelemetry/trace/scope.h>
#include <opentelemetry/trace/span.h>
#include <exception>
#include <string_view>
namespace xrpl {
namespace telemetry {
/** RAII wrapper that activates a span on construction and ends it on
destruction. Non-copyable but move-constructible so it can be held
in std::optional for conditional tracing.
*/
class SpanGuard
{
/** The OTel span being guarded. Set to nullptr after move. */
opentelemetry::nostd::shared_ptr<opentelemetry::trace::Span> span_;
/** Scope that activates span_ on the current thread's context stack. */
opentelemetry::trace::Scope scope_;
public:
/** Construct a guard that activates @p span on the current context.
@param span The span to guard. Ended in the destructor.
*/
explicit SpanGuard(opentelemetry::nostd::shared_ptr<opentelemetry::trace::Span> span)
: span_(std::move(span)), scope_(span_)
{
}
/** Non-copyable. Move-constructible to support std::optional.
The move constructor creates a new Scope from the transferred span,
because Scope is not movable.
*/
SpanGuard(SpanGuard const&) = delete;
SpanGuard&
operator=(SpanGuard const&) = delete;
SpanGuard(SpanGuard&& other) noexcept : span_(std::move(other.span_)), scope_(span_)
{
other.span_ = nullptr;
}
SpanGuard&
operator=(SpanGuard&&) = delete;
~SpanGuard()
{
if (span_)
span_->End();
}
/** @return A mutable reference to the underlying span. */
opentelemetry::trace::Span&
span()
{
return *span_;
}
/** @return A const reference to the underlying span. */
opentelemetry::trace::Span const&
span() const
{
return *span_;
}
/** Mark the span status as OK. */
void
setOk()
{
span_->SetStatus(opentelemetry::trace::StatusCode::kOk);
}
/** Set an explicit status code on the span.
@param code The OTel status code.
@param description Optional human-readable status description.
*/
void
setStatus(opentelemetry::trace::StatusCode code, std::string_view description = "")
{
span_->SetStatus(code, std::string(description));
}
/** Set a key-value attribute on the span.
@param key Attribute name (e.g. "xrpl.rpc.command").
@param value Attribute value (string, int, bool, etc.).
*/
template <typename T>
void
setAttribute(std::string_view key, T&& value)
{
span_->SetAttribute(
opentelemetry::nostd::string_view(key.data(), key.size()), std::forward<T>(value));
}
/** Add a named event to the span's timeline.
@param name Event name.
*/
void
addEvent(std::string_view name)
{
span_->AddEvent(std::string(name));
}
/** Record an exception as a span event following OTel semantic
conventions, and mark the span status as error.
@param e The exception to record.
*/
void
recordException(std::exception const& e)
{
span_->AddEvent(
"exception",
{{"exception.type", "std::exception"}, {"exception.message", std::string(e.what())}});
span_->SetStatus(opentelemetry::trace::StatusCode::kError, e.what());
}
/** Return the current OTel context.
Useful for creating child spans on a different thread by passing
this context to Telemetry::startSpan(name, parentContext).
*/
opentelemetry::context::Context
context() const
{
return opentelemetry::context::RuntimeContext::GetCurrent();
}
};
} // namespace telemetry
} // namespace xrpl
#endif // XRPL_ENABLE_TELEMETRY

View File

@@ -0,0 +1,230 @@
#pragma once
/** Abstract interface for OpenTelemetry distributed tracing.
Provides the Telemetry base class that all components use to create trace
spans. Two implementations exist:
- TelemetryImpl (Telemetry.cpp): real OTel SDK integration, compiled
only when XRPL_ENABLE_TELEMETRY is defined and enabled at runtime.
- NullTelemetry (NullTelemetry.cpp): no-op stub used when telemetry is
disabled at compile time or runtime.
The Setup struct holds all configuration parsed from the [telemetry]
section of xrpld.cfg. See TelemetryConfig.cpp for the parser and
cfg/xrpld-example.cfg for the available options.
OTel SDK headers are conditionally included behind XRPL_ENABLE_TELEMETRY
so that builds without telemetry have zero dependency on opentelemetry-cpp.
*/
#include <xrpl/basics/BasicConfig.h>
#include <xrpl/beast/utility/Journal.h>
#include <chrono>
#include <memory>
#include <string>
#include <string_view>
#ifdef XRPL_ENABLE_TELEMETRY
#include <opentelemetry/context/context.h>
#include <opentelemetry/nostd/shared_ptr.h>
#include <opentelemetry/trace/span.h>
#include <opentelemetry/trace/tracer.h>
#endif
namespace xrpl {
namespace telemetry {
class Telemetry
{
public:
/** Configuration parsed from the [telemetry] section of xrpld.cfg.
All fields have sensible defaults so the section can be minimal
or omitted entirely. See TelemetryConfig.cpp for the parser.
*/
struct Setup
{
/** Master switch: true to enable tracing at runtime. */
bool enabled = false;
/** OTel resource attribute `service.name`. */
std::string serviceName = "rippled";
/** OTel resource attribute `service.version` (set from BuildInfo). */
std::string serviceVersion;
/** OTel resource attribute `service.instance.id` (defaults to node
public key). */
std::string serviceInstanceId;
/** Exporter type: currently only "otlp_http" is supported. */
std::string exporterType = "otlp_http";
/** OTLP/HTTP endpoint URL where spans are sent. */
std::string exporterEndpoint = "http://localhost:4318/v1/traces";
/** Whether to use TLS for the exporter connection. */
bool useTls = false;
/** Path to a CA certificate bundle for TLS verification. */
std::string tlsCertPath;
/** Head-based sampling ratio in [0.0, 1.0]. 1.0 = trace everything. */
double samplingRatio = 1.0;
/** Maximum number of spans per batch export. */
std::uint32_t batchSize = 512;
/** Delay between batch exports. */
std::chrono::milliseconds batchDelay{5000};
/** Maximum number of spans queued before dropping. */
std::uint32_t maxQueueSize = 2048;
/** Network identifier, added as an OTel resource attribute. */
std::uint32_t networkId = 0;
/** Network type label (e.g. "mainnet", "testnet", "devnet"). */
std::string networkType = "mainnet";
/** Enable tracing for transaction processing. */
bool traceTransactions = true;
/** Enable tracing for consensus rounds. */
bool traceConsensus = true;
/** Enable tracing for RPC request handling. */
bool traceRpc = true;
/** Enable tracing for peer-to-peer messages (disabled by default
due to high volume). */
bool tracePeer = false;
/** Enable tracing for ledger close/accept. */
bool traceLedger = true;
};
virtual ~Telemetry() = default;
/** Update the service instance ID (OTel resource attribute
`service.instance.id`).
Must be called before start(). The node public key is not available
when Telemetry is constructed (during the ApplicationImp member
initializer list), so this setter allows Application::setup() to
inject the identity once nodeIdentity_ is known.
@param id The node's base58-encoded public key or custom identifier.
*/
virtual void
setServiceInstanceId(std::string const& id)
{
// Default no-op for NullTelemetry implementations.
(void)id;
}
/** Initialize the tracing pipeline (exporter, processor, provider).
Call after construction.
*/
virtual void
start() = 0;
/** Flush pending spans and shut down the tracing pipeline.
Call before destruction.
*/
virtual void
stop() = 0;
/** @return true if this instance is actively exporting spans. */
virtual bool
isEnabled() const = 0;
/** @return true if transaction processing should be traced. */
virtual bool
shouldTraceTransactions() const = 0;
/** @return true if consensus rounds should be traced. */
virtual bool
shouldTraceConsensus() const = 0;
/** @return true if RPC request handling should be traced. */
virtual bool
shouldTraceRpc() const = 0;
/** @return true if peer-to-peer messages should be traced. */
virtual bool
shouldTracePeer() const = 0;
/** @return true if ledger close/accept should be traced. */
virtual bool
shouldTraceLedger() const = 0;
#ifdef XRPL_ENABLE_TELEMETRY
/** Get or create a named tracer instance.
@param name Tracer name used to identify the instrumentation library.
@return A shared pointer to the Tracer.
*/
virtual opentelemetry::nostd::shared_ptr<opentelemetry::trace::Tracer>
getTracer(std::string_view name = "rippled") = 0;
/** Start a new span on the current thread's context.
The span becomes a child of the current active span (if any) via
OpenTelemetry's context propagation.
@param name Span name (typically "rpc.command.<cmd>").
@param kind The span kind (defaults to kInternal).
@return A shared pointer to the new Span.
*/
virtual opentelemetry::nostd::shared_ptr<opentelemetry::trace::Span>
startSpan(
std::string_view name,
opentelemetry::trace::SpanKind kind = opentelemetry::trace::SpanKind::kInternal) = 0;
/** Start a new span with an explicit parent context.
Use this overload when the parent span is not on the current
thread's context stack (e.g. cross-thread trace propagation).
@param name Span name.
@param parentContext The parent span's context.
@param kind The span kind (defaults to kInternal).
@return A shared pointer to the new Span.
*/
virtual opentelemetry::nostd::shared_ptr<opentelemetry::trace::Span>
startSpan(
std::string_view name,
opentelemetry::context::Context const& parentContext,
opentelemetry::trace::SpanKind kind = opentelemetry::trace::SpanKind::kInternal) = 0;
#endif
};
/** Create a Telemetry instance.
Returns a TelemetryImpl when setup.enabled is true, or a
NullTelemetry no-op stub otherwise.
@param setup Configuration from the [telemetry] config section.
@param journal Journal for log output during initialization.
*/
std::unique_ptr<Telemetry>
make_Telemetry(Telemetry::Setup const& setup, beast::Journal journal);
/** Parse the [telemetry] config section into a Setup struct.
@param section The [telemetry] config section.
@param nodePublicKey Node public key, used as default instance ID.
@param version Build version string.
@return A populated Setup struct with defaults for missing values.
*/
Telemetry::Setup
setup_Telemetry(
Section const& section,
std::string const& nodePublicKey,
std::string const& version);
} // namespace telemetry
} // namespace xrpl
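A minimal usage sketch, not taken from the integration code: it assumes a BasicConfig-style `config` object, a `journal`, and placeholder strings for the node public key and build version; the `"telemetry"` section name and the `section()` accessor are illustrative, and the real key names live in TelemetryConfig.cpp.

```cpp
using namespace xrpl::telemetry;

auto setup = setup_Telemetry(
    config.section("telemetry"),   // [telemetry] section (placeholder accessor)
    nodePublicKey,                 // becomes service.instance.id by default
    buildVersionString);           // becomes service.version

auto telemetry = make_Telemetry(setup, journal);  // TelemetryImpl or NullTelemetry
telemetry->start();

#ifdef XRPL_ENABLE_TELEMETRY
if (telemetry->isEnabled() && telemetry->shouldTraceRpc())
{
    auto span = telemetry->startSpan("rpc.command.server_info");
    // ... handle the request ...
    span->End();
}
#endif

telemetry->stop();  // flush pending spans before shutdown
```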


@@ -0,0 +1,101 @@
#pragma once
/** Utilities for trace context propagation across nodes.
Provides serialization/deserialization of OTel trace context to/from
Protocol Buffer TraceContext messages (P2P cross-node propagation).
Only compiled when XRPL_ENABLE_TELEMETRY is defined.
TODO: These utilities are not yet wired into the P2P message flow.
To enable cross-node distributed traces, call injectToProtobuf() in
PeerImp when sending TMTransaction/TMProposeSet messages, and call
extractFromProtobuf() in the corresponding message handlers to
reconstruct the parent span context before starting a child span.
This was deferred to validate single-node tracing performance first.
*/
#ifdef XRPL_ENABLE_TELEMETRY
#include <xrpl/proto/xrpl.pb.h>
#include <opentelemetry/context/context.h>
#include <opentelemetry/trace/context.h>
#include <opentelemetry/trace/default_span.h>
#include <opentelemetry/trace/span_context.h>
#include <opentelemetry/trace/trace_flags.h>
#include <opentelemetry/trace/trace_id.h>
#include <cstdint>
namespace xrpl {
namespace telemetry {
/** Extract OTel context from a protobuf TraceContext message.
@param proto The protobuf TraceContext received from a peer.
@return An OTel Context with the extracted parent span, or an empty
context if the protobuf fields are missing or invalid.
*/
inline opentelemetry::context::Context
extractFromProtobuf(protocol::TraceContext const& proto)
{
namespace trace = opentelemetry::trace;
if (!proto.has_trace_id() || proto.trace_id().size() != 16 || !proto.has_span_id() ||
proto.span_id().size() != 8)
{
return opentelemetry::context::Context{};
}
auto const* rawTraceId = reinterpret_cast<std::uint8_t const*>(proto.trace_id().data());
auto const* rawSpanId = reinterpret_cast<std::uint8_t const*>(proto.span_id().data());
trace::TraceId traceId(opentelemetry::nostd::span<std::uint8_t const, 16>(rawTraceId, 16));
trace::SpanId spanId(opentelemetry::nostd::span<std::uint8_t const, 8>(rawSpanId, 8));
// Default to not-sampled (0x00) per W3C Trace Context spec when
// the trace_flags field is absent.
trace::TraceFlags flags(
proto.has_trace_flags() ? static_cast<std::uint8_t>(proto.trace_flags())
: static_cast<std::uint8_t>(0));
trace::SpanContext spanCtx(traceId, spanId, flags, /* remote = */ true);
return opentelemetry::context::Context{}.SetValue(
trace::kSpanKey,
opentelemetry::nostd::shared_ptr<trace::Span>(new trace::DefaultSpan(spanCtx)));
}
/** Inject the current span's trace context into a protobuf TraceContext.
@param ctx The OTel context containing the span to propagate.
@param proto The protobuf TraceContext to populate.
*/
inline void
injectToProtobuf(opentelemetry::context::Context const& ctx, protocol::TraceContext& proto)
{
namespace trace = opentelemetry::trace;
auto span = trace::GetSpan(ctx);
if (!span)
return;
auto const& spanCtx = span->GetContext();
if (!spanCtx.IsValid())
return;
// Serialize trace_id (16 bytes)
auto const& traceId = spanCtx.trace_id();
proto.set_trace_id(traceId.Id().data(), trace::TraceId::kSize);
// Serialize span_id (8 bytes)
auto const& spanId = spanCtx.span_id();
proto.set_span_id(spanId.Id().data(), trace::SpanId::kSize);
// Serialize flags
proto.set_trace_flags(spanCtx.trace_flags().flags());
}
} // namespace telemetry
} // namespace xrpl
#endif // XRPL_ENABLE_TELEMETRY
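A sketch of the deferred P2P wiring the TODO above describes, assuming `TMTransaction` gains a `TraceContext` field (here called `trace_context`); the field accessors and the hook function names are hypothetical.

```cpp
#ifdef XRPL_ENABLE_TELEMETRY
// Sender side (e.g. PeerImp when relaying a transaction):
void
attachTraceContext(protocol::TMTransaction& msg)
{
    xrpl::telemetry::injectToProtobuf(
        opentelemetry::context::RuntimeContext::GetCurrent(),
        *msg.mutable_trace_context());  // hypothetical field accessor
}

// Receiver side (message handler), before starting the child span:
void
onTransaction(
    protocol::TMTransaction const& msg,
    xrpl::telemetry::Telemetry& telemetry)
{
    auto parent = msg.has_trace_context()
        ? xrpl::telemetry::extractFromProtobuf(msg.trace_context())
        : opentelemetry::context::Context{};
    auto span = telemetry.startSpan("tx.relay.receive", parent);
    // ... process the transaction ...
    span->End();
}
#endif
```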

presentation.md (new file, 280 lines)

@@ -0,0 +1,280 @@
# OpenTelemetry Distributed Tracing for rippled
---
## Slide 1: Introduction
### What is OpenTelemetry?
OpenTelemetry is an open-source, CNCF-backed observability framework for distributed tracing, metrics, and logs.
### Why OpenTelemetry for rippled?
- **End-to-End Transaction Visibility**: Track transactions from submission → consensus → ledger inclusion
- **Cross-Node Correlation**: Follow requests across multiple independent nodes using a unique `trace_id`
- **Consensus Round Analysis**: Understand timing and behavior across validators
- **Incident Debugging**: Correlate events across distributed nodes during issues
```mermaid
flowchart LR
A["Node A<br/>tx.receive<br/>trace_id: abc123"] --> B["Node B<br/>tx.relay<br/>trace_id: abc123"] --> C["Node C<br/>tx.validate<br/>trace_id: abc123"] --> D["Node D<br/>ledger.apply<br/>trace_id: abc123"]
style A fill:#1565c0,stroke:#0d47a1,color:#fff
style B fill:#2e7d32,stroke:#1b5e20,color:#fff
style C fill:#2e7d32,stroke:#1b5e20,color:#fff
style D fill:#e65100,stroke:#bf360c,color:#fff
```
> **Trace ID: abc123** — All nodes share the same trace, enabling cross-node correlation.
---
## Slide 2: OpenTelemetry vs Open Source Alternatives
| Feature | OpenTelemetry | Jaeger | Zipkin | SkyWalking | Pinpoint | Prometheus |
| ------------------- | ---------------- | ---------------- | ------------------ | ---------- | ---------- | ---------- |
| **Tracing** | YES | YES | YES | YES | YES | NO |
| **Metrics** | YES | NO | NO | YES | YES | YES |
| **Logs** | YES | NO | NO | YES | NO | NO |
| **C++ SDK** | YES Official | YES (Deprecated) | YES (Unmaintained) | NO | NO | YES |
| **Vendor Neutral** | YES Primary goal | NO | NO | NO | NO | NO |
| **Instrumentation** | Manual + Auto | Manual | Manual | Auto-first | Auto-first | Manual |
| **Backend** | Any (exporters) | Self | Self | Self | Self | Self |
| **CNCF Status** | Incubating | Graduated | NO | Incubating | NO | Graduated |
> **Why OpenTelemetry?** It's the only actively maintained, full-featured C++ option with vendor neutrality — allowing export to Jaeger, Prometheus, Grafana, or any commercial backend without changing instrumentation.
---
## Slide 3: Comparison with rippled's Existing Solutions
### Current Observability Stack
| Aspect | PerfLog (JSON) | StatsD (Metrics) | OpenTelemetry (NEW) |
| --------------------- | --------------------- | --------------------- | --------------------------- |
| **Type** | Logging | Metrics | Distributed Tracing |
| **Scope** | Single node | Single node | **Cross-node** |
| **Data** | JSON log entries | Counters, gauges | Spans with context |
| **Correlation** | By timestamp | By metric name | By `trace_id` |
| **Overhead** | Low (file I/O) | Low (UDP) | Low-Medium (configurable) |
| **Question Answered** | "What happened here?" | "How many? How fast?" | **"What was the journey?"** |
### Use Case Matrix
| Scenario | PerfLog | StatsD | OpenTelemetry |
| -------------------------------- | ------- | ------ | ------------- |
| "How many TXs per second?" | ❌ | ✅ | ❌ |
| "Why was this specific TX slow?" | ⚠️ | ❌ | ✅ |
| "Which node delayed consensus?" | ❌ | ❌ | ✅ |
| "Show TX journey across 5 nodes" | ❌ | ❌ | ✅ |
> **Key Insight**: OpenTelemetry **complements** (not replaces) existing systems.
---
## Slide 4: Architecture
### High-Level Integration Architecture
```mermaid
flowchart TB
subgraph rippled["rippled Node"]
subgraph services["Core Services"]
direction LR
RPC["RPC Server<br/>(HTTP/WS)"] ~~~ Overlay["Overlay<br/>(P2P Network)"] ~~~ Consensus["Consensus<br/>(RCLConsensus)"]
end
Telemetry["Telemetry Module<br/>(OpenTelemetry SDK)"]
services --> Telemetry
end
Telemetry -->|OTLP/HTTP| Collector["OTel Collector"]
Collector --> Tempo["Grafana Tempo"]
Collector --> Jaeger["Jaeger"]
Collector --> Elastic["Elastic APM"]
style rippled fill:#424242,stroke:#212121,color:#fff
style services fill:#1565c0,stroke:#0d47a1,color:#fff
style Telemetry fill:#2e7d32,stroke:#1b5e20,color:#fff
style Collector fill:#e65100,stroke:#bf360c,color:#fff
```
### Context Propagation
```mermaid
sequenceDiagram
participant Client
participant NodeA as Node A
participant NodeB as Node B
Client->>NodeA: Submit TX (no context)
Note over NodeA: Creates trace_id: abc123<br/>span: tx.receive
NodeA->>NodeB: Relay TX<br/>(traceparent: abc123)
Note over NodeB: Links to trace_id: abc123<br/>span: tx.relay
```
- **HTTP/RPC**: W3C Trace Context headers (`traceparent`)
- **P2P Messages**: Protocol Buffer extension fields
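For reference, a W3C `traceparent` header value looks like `00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01`. Below is a hypothetical parsing sketch that only illustrates the header layout (in practice extraction would go through the OTel SDK's propagator rather than hand-rolled code):

```cpp
#include <optional>
#include <string>
#include <string_view>

struct TraceParent
{
    std::string traceId;  // 32 lowercase hex chars
    std::string spanId;   // 16 lowercase hex chars
    std::string flags;    // 2 hex chars, 01 = sampled
};

std::optional<TraceParent>
parseTraceParent(std::string_view header)
{
    // version(2) '-' trace-id(32) '-' span-id(16) '-' flags(2) = 55 chars
    if (header.size() != 55 || header[2] != '-' || header[35] != '-' ||
        header[52] != '-')
        return std::nullopt;
    return TraceParent{
        std::string(header.substr(3, 32)),
        std::string(header.substr(36, 16)),
        std::string(header.substr(53, 2))};
}
```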
---
## Slide 5: Implementation Plan
### 5-Phase Rollout (9 Weeks)
```mermaid
gantt
title Implementation Timeline
dateFormat YYYY-MM-DD
axisFormat Week %W
section Phase 1
Core Infrastructure :p1, 2024-01-01, 2w
section Phase 2
RPC Tracing :p2, after p1, 2w
section Phase 3
Transaction Tracing :p3, after p2, 2w
section Phase 4
Consensus Tracing :p4, after p3, 2w
section Phase 5
Documentation :p5, after p4, 1w
```
### Phase Details
| Phase | Focus | Key Deliverables | Effort |
| ----- | ------------------- | -------------------------------------------- | ------- |
| 1 | Core Infrastructure | SDK integration, Telemetry interface, Config | 10 days |
| 2 | RPC Tracing | HTTP context extraction, Handler spans | 10 days |
| 3 | Transaction Tracing | Protobuf context, P2P relay propagation | 10 days |
| 4 | Consensus Tracing | Round spans, Proposal/validation tracing | 10 days |
| 5 | Documentation | Runbook, Dashboards, Training | 7 days |
**Total Effort**: ~47 developer-days (2 developers)
---
## Slide 6: Performance Overhead
### Estimated System Impact
| Metric | Overhead | Notes |
| ----------------- | ---------- | ----------------------------------- |
| **CPU** | 1-3% | Span creation and attribute setting |
| **Memory** | 2-5 MB | Batch buffer for pending spans |
| **Network** | 10-50 KB/s | Compressed OTLP export to collector |
| **Latency (p99)** | <2% | With proper sampling configuration |
### Per-Message Overhead (Context Propagation)
Each P2P message carries trace context with the following overhead:
| Field | Size | Description |
| ------------- | ------------- | ----------------------------------------- |
| `trace_id` | 16 bytes | Unique identifier for the entire trace |
| `span_id` | 8 bytes | Current span (becomes parent on receiver) |
| `trace_flags` | 4 bytes | Sampling decision flags |
| `trace_state` | 0-4 bytes | Optional vendor-specific data |
| **Total** | **~32 bytes** | **Added per traced P2P message** |
```mermaid
flowchart LR
subgraph msg["P2P Message with Trace Context"]
A["Original Message<br/>(variable size)"] --> B["+ TraceContext<br/>(~32 bytes)"]
end
subgraph breakdown["Context Breakdown"]
C["trace_id<br/>16 bytes"]
D["span_id<br/>8 bytes"]
E["flags<br/>4 bytes"]
F["state<br/>0-4 bytes"]
end
B --> breakdown
style A fill:#424242,stroke:#212121,color:#fff
style B fill:#2e7d32,stroke:#1b5e20,color:#fff
style C fill:#1565c0,stroke:#0d47a1,color:#fff
style D fill:#1565c0,stroke:#0d47a1,color:#fff
style E fill:#e65100,stroke:#bf360c,color:#fff
style F fill:#4a148c,stroke:#2e0d57,color:#fff
```
> **Note**: ~32 bytes is negligible compared to typical transaction messages (hundreds to thousands of bytes).
### Mitigation Strategies
```mermaid
flowchart LR
A["Head Sampling<br/>10% default"] --> B["Tail Sampling<br/>Keep errors/slow"] --> C["Batch Export<br/>Reduce I/O"] --> D["Conditional Compile<br/>XRPL_ENABLE_TELEMETRY"]
style A fill:#1565c0,stroke:#0d47a1,color:#fff
style B fill:#2e7d32,stroke:#1b5e20,color:#fff
style C fill:#e65100,stroke:#bf360c,color:#fff
style D fill:#4a148c,stroke:#2e0d57,color:#fff
```
### Kill Switches (Rollback Options)
1. **Config Disable**: Set `enabled=0` in the config for an instant disable; sampling changes also take effect without a restart
2. **Rebuild**: Compile with `XRPL_ENABLE_TELEMETRY=OFF` for zero overhead (no-op)
3. **Full Revert**: Clean separation allows the telemetry commits to be reverted easily
---
## Slide 7: Data Collection & Privacy
### What Data is Collected
| Category | Attributes Collected | Purpose |
| --------------- | ---------------------------------------------------------------------------------- | --------------------------- |
| **Transaction** | `tx.hash`, `tx.type`, `tx.result`, `tx.fee`, `ledger_index` | Trace transaction lifecycle |
| **Consensus** | `round`, `phase`, `mode`, `proposers` (public key or public node ID), `duration_ms` | Analyze consensus timing |
| **RPC** | `command`, `version`, `status`, `duration_ms` | Monitor RPC performance |
| **Peer** | `peer.id` (public key), `latency_ms`, `message.type`, `message.size` | Network topology analysis |
| **Ledger** | `ledger.hash`, `ledger.index`, `close_time`, `tx_count` | Ledger progression tracking |
| **Job** | `job.type`, `queue_ms`, `worker` | JobQueue performance |
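As a hypothetical illustration of the transaction row above, attributes could be attached through the standard OTel Span API; the attribute keys mirror the table, while the span name and the `txHashHex`/`feeDrops`/`ledgerIndex` variables are placeholders:

```cpp
auto span = telemetry.startSpan("tx.apply");          // illustrative span name
span->SetAttribute("tx.hash", txHashHex);             // hex digest, not the payload
span->SetAttribute("tx.type", "Payment");
span->SetAttribute("tx.result", "tesSUCCESS");
span->SetAttribute("tx.fee", feeDrops);               // fee in drops
span->SetAttribute("ledger_index", ledgerIndex);      // ledger the tx landed in
span->End();
```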
### What is NOT Collected (Privacy Guarantees)
```mermaid
flowchart LR
subgraph notCollected["❌ NOT Collected"]
direction LR
A["Private Keys"] ~~~ B["Account Balances"] ~~~ C["Transaction Amounts"]
end
subgraph alsoNot["❌ Also Excluded"]
direction LR
D["IP Addresses<br/>(configurable)"] ~~~ E["Personal Data"] ~~~ F["Raw TX Payloads"]
end
style A fill:#c62828,stroke:#8c2809,color:#fff
style B fill:#c62828,stroke:#8c2809,color:#fff
style C fill:#c62828,stroke:#8c2809,color:#fff
style D fill:#c62828,stroke:#8c2809,color:#fff
style E fill:#c62828,stroke:#8c2809,color:#fff
style F fill:#c62828,stroke:#8c2809,color:#fff
```
### Privacy Protection Mechanisms
| Mechanism | Description |
| -------------------------- | ------------------------------------------------------------- |
| **Account Hashing** | `xrpl.tx.account` is hashed at collector level before storage |
| **Configurable Redaction** | Sensitive fields can be excluded via config |
| **Sampling** | Only 10% of traces recorded by default (reduces exposure) |
| **Local Control** | Node operators control what gets exported |
| **No Raw Payloads** | Transaction content is never recorded, only metadata |
> **Key Principle**: Telemetry collects **operational metadata** (timing, counts, hashes) — never **sensitive content** (keys, balances, amounts).
---
_End of Presentation_


@@ -6,6 +6,15 @@
#include <boost/algorithm/string/predicate.hpp>
#include <boost/filesystem/path.hpp>
// Phase 8: OTel trace context headers for log-trace correlation.
// GetSpan() and RuntimeContext::GetCurrent() are thread-local reads
// with no locking — measured at <10ns per call.
#ifdef XRPL_ENABLE_TELEMETRY
#include <opentelemetry/context/runtime_context.h>
#include <opentelemetry/trace/context.h>
#include <opentelemetry/trace/provider.h>
#endif // XRPL_ENABLE_TELEMETRY
#include <chrono>
#include <cstring>
#include <fstream>
@@ -345,6 +354,30 @@ Logs::format(
break;
}
// Phase 8: Inject OTel trace context (trace_id, span_id) into log lines
// for log-trace correlation. Only appended when an active span exists.
// GetSpan() reads thread-local storage — no locks, <10ns overhead.
#ifdef XRPL_ENABLE_TELEMETRY
{
auto span =
opentelemetry::trace::GetSpan(opentelemetry::context::RuntimeContext::GetCurrent());
auto ctx = span->GetContext();
if (ctx.IsValid())
{
// Append trace context as structured key=value fields that the
// OTel Collector filelog receiver regex_parser can extract.
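// e.g. "... trace_id=0af7651916cd43dd8448eb211c80319c span_id=b7ad6b7169203331 <message>"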
char traceId[32], spanId[16];
ctx.trace_id().ToLowerBase16(opentelemetry::nostd::span<char, 32>{traceId});
ctx.span_id().ToLowerBase16(opentelemetry::nostd::span<char, 16>{spanId});
output += "trace_id=";
output.append(traceId, 32);
output += " span_id=";
output.append(spanId, 16);
output += ' ';
}
}
#endif // XRPL_ENABLE_TELEMETRY
output += message;
// Limit the maximum length of the output
