Compare commits

..

3 Commits

Author SHA1 Message Date
Pratik Mankawde
6bc3070856 activate tsan and asan for gcc and clang
Signed-off-by: Pratik Mankawde <3397372+pratikmankawde@users.noreply.github.com>
2026-02-19 15:25:38 +00:00
Pratik Mankawde
21a52226bf Merge branch 'develop' into pratik/use_boost_coroutine2
Signed-off-by: Pratik Mankawde <3397372+pratikmankawde@users.noreply.github.com>
2026-02-19 15:23:11 +00:00
Pratik Mankawde
7def28c8ef switch to ucontext and coroutine2
Signed-off-by: Pratik Mankawde <pmankawde@ripple.com>

updated conan profiles

Signed-off-by: Pratik Mankawde <pmankawde@ripple.com>

removed no-pie

Signed-off-by: Pratik Mankawde <pmankawde@ripple.com>

minor change

Signed-off-by: Pratik Mankawde <pmankawde@ripple.com>

fixes for boost flags

Signed-off-by: Pratik Mankawde <pmankawde@ripple.com>

minor fix

Signed-off-by: Pratik Mankawde <pmankawde@ripple.com>

added boost flags to cxx_flags

Signed-off-by: Pratik Mankawde <pmankawde@ripple.com>

added boost context to conanfile

Signed-off-by: Pratik Mankawde <pmankawde@ripple.com>

added dependency to coroutine2

Signed-off-by: Pratik Mankawde <pmankawde@ripple.com>

now building coroutine2

Signed-off-by: Pratik Mankawde <pmankawde@ripple.com>

using without_coroutine2=false as well

Signed-off-by: Pratik Mankawde <pmankawde@ripple.com>

fix build

Signed-off-by: Pratik Mankawde <pmankawde@ripple.com>
2025-11-21 12:11:13 +00:00
1804 changed files with 47950 additions and 181824 deletions


@@ -37,7 +37,7 @@ BinPackParameters: false
BreakBeforeBinaryOperators: false
BreakBeforeTernaryOperators: true
BreakConstructorInitializersBeforeComma: true
ColumnLimit: 100
ColumnLimit: 120
CommentPragmas: "^ IWYU pragma:"
ConstructorInitializerAllOnOneLineOrOnePerLine: true
ConstructorInitializerIndentWidth: 4
@@ -50,21 +50,20 @@ ForEachMacros: [Q_FOREACH, BOOST_FOREACH]
IncludeBlocks: Regroup
IncludeCategories:
- Regex: "^<(test)/"
Priority: 1
Priority: 0
- Regex: "^<(xrpld)/"
Priority: 2
Priority: 1
- Regex: "^<(xrpl)/"
Priority: 3
Priority: 2
- Regex: "^<(boost)/"
Priority: 4
Priority: 3
- Regex: "^.*/"
Priority: 5
Priority: 4
- Regex: '^.*\.h'
Priority: 6
Priority: 5
- Regex: ".*"
Priority: 7
Priority: 6
IncludeIsMainRegex: "$"
MainIncludeChar: AngleBracket
IndentCaseLabels: true
IndentFunctionDeclarationAfterType: false
IndentRequiresClause: true
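Each Priority in the hunk above shifts down by one (1-7 becomes 0-6), which leaves the relative grouping order unchanged. As a rough illustration of clang-format's first-matching-category rule under `IncludeBlocks: Regroup`, here is a minimal Python sketch; the regexes are copied from the config, and the include list is hypothetical:

```python
import re

# Include categories and priorities as renumbered in the hunk above (0-6).
CATEGORIES = [
    (re.compile(r"^<(test)/"), 0),
    (re.compile(r"^<(xrpld)/"), 1),
    (re.compile(r"^<(xrpl)/"), 2),
    (re.compile(r"^<(boost)/"), 3),
    (re.compile(r"^.*/"), 4),
    (re.compile(r"^.*\.h"), 5),
    (re.compile(r".*"), 6),
]

def priority(include: str) -> int:
    # clang-format assigns the first matching category, top to bottom.
    for pattern, prio in CATEGORIES:
        if pattern.search(include):
            return prio
    return CATEGORIES[-1][1]

# Hypothetical include list; sorting by priority mimics the regrouping.
includes = ["<boost/asio.hpp>", "<xrpl/basics/base_uint.h>", "<test/jtx.h>", "<vector>"]
for inc in sorted(includes, key=priority):
    print(priority(inc), inc)
```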


@@ -1,200 +0,0 @@
---
# This entire group of checks was applied to all cpp files but not all header files.
# ---
Checks: "-*,
  bugprone-argument-comment,
  bugprone-assert-side-effect,
  bugprone-bad-signal-to-kill-thread,
  bugprone-bool-pointer-implicit-conversion,
  bugprone-casting-through-void,
  bugprone-chained-comparison,
  bugprone-compare-pointer-to-member-virtual-function,
  bugprone-copy-constructor-init,
  bugprone-crtp-constructor-accessibility,
  bugprone-dangling-handle,
  bugprone-dynamic-static-initializers,
  bugprone-empty-catch,
  bugprone-fold-init-type,
  bugprone-forward-declaration-namespace,
  bugprone-inaccurate-erase,
  bugprone-inc-dec-in-conditions,
  bugprone-incorrect-enable-if,
  bugprone-incorrect-roundings,
  bugprone-infinite-loop,
  bugprone-integer-division,
  bugprone-lambda-function-name,
  bugprone-macro-parentheses,
  bugprone-macro-repeated-side-effects,
  bugprone-misplaced-operator-in-strlen-in-alloc,
  bugprone-misplaced-pointer-arithmetic-in-alloc,
  bugprone-misplaced-widening-cast,
  bugprone-move-forwarding-reference,
  bugprone-multi-level-implicit-pointer-conversion,
  bugprone-multiple-new-in-one-expression,
  bugprone-multiple-statement-macro,
  bugprone-no-escape,
  bugprone-non-zero-enum-to-bool-conversion,
  bugprone-optional-value-conversion,
  bugprone-parent-virtual-call,
  bugprone-pointer-arithmetic-on-polymorphic-object,
  bugprone-posix-return,
  bugprone-redundant-branch-condition,
  bugprone-reserved-identifier,
  bugprone-return-const-ref-from-parameter,
  bugprone-shared-ptr-array-mismatch,
  bugprone-signal-handler,
  bugprone-signed-char-misuse,
  bugprone-sizeof-container,
  bugprone-sizeof-expression,
  bugprone-spuriously-wake-up-functions,
  bugprone-standalone-empty,
  bugprone-string-constructor,
  bugprone-string-integer-assignment,
  bugprone-string-literal-with-embedded-nul,
  bugprone-stringview-nullptr,
  bugprone-suspicious-enum-usage,
  bugprone-suspicious-include,
  bugprone-suspicious-memory-comparison,
  bugprone-suspicious-memset-usage,
  bugprone-suspicious-missing-comma,
  bugprone-suspicious-realloc-usage,
  bugprone-suspicious-semicolon,
  bugprone-suspicious-string-compare,
  bugprone-suspicious-stringview-data-usage,
  bugprone-swapped-arguments,
  bugprone-switch-missing-default-case,
  bugprone-terminating-continue,
  bugprone-throw-keyword-missing,
  bugprone-too-small-loop-variable,
  bugprone-unchecked-optional-access,
  bugprone-undefined-memory-manipulation,
  bugprone-undelegated-constructor,
  bugprone-unhandled-exception-at-new,
  bugprone-unhandled-self-assignment,
  bugprone-unique-ptr-array-mismatch,
  bugprone-unsafe-functions,
  bugprone-use-after-move,
  bugprone-unused-raii,
  bugprone-unused-return-value,
  bugprone-unused-local-non-trivial-variable,
  bugprone-virtual-near-miss,
  cppcoreguidelines-init-variables,
  cppcoreguidelines-misleading-capture-default-by-value,
  cppcoreguidelines-no-suspend-with-lock,
  cppcoreguidelines-pro-type-member-init,
  cppcoreguidelines-pro-type-static-cast-downcast,
  cppcoreguidelines-rvalue-reference-param-not-moved,
  cppcoreguidelines-use-default-member-init,
  cppcoreguidelines-virtual-class-destructor,
  hicpp-ignored-remove-result,
  misc-const-correctness,
  misc-definitions-in-headers,
  misc-header-include-cycle,
  misc-include-cleaner,
  misc-misplaced-const,
  misc-redundant-expression,
  misc-static-assert,
  misc-throw-by-value-catch-by-reference,
  misc-unused-alias-decls,
  misc-unused-using-decls,
  modernize-concat-nested-namespaces,
  modernize-make-shared,
  modernize-make-unique,
  modernize-pass-by-value,
  modernize-type-traits,
  modernize-use-designated-initializers,
  modernize-use-emplace,
  modernize-use-equals-default,
  modernize-use-equals-delete,
  modernize-use-nodiscard,
  modernize-use-override,
  modernize-use-ranges,
  modernize-use-starts-ends-with,
  modernize-use-std-numbers,
  modernize-use-using,
  modernize-deprecated-headers,
  llvm-namespace-comment,
  performance-faster-string-find,
  performance-for-range-copy,
  performance-implicit-conversion-in-loop,
  performance-inefficient-vector-operation,
  performance-move-const-arg,
  performance-move-constructor-init,
  performance-no-automatic-move,
  performance-trivially-destructible,
  readability-avoid-nested-conditional-operator,
  readability-avoid-return-with-void-value,
  readability-braces-around-statements,
  readability-const-return-type,
  readability-container-contains,
  readability-container-size-empty,
  readability-convert-member-functions-to-static,
  readability-duplicate-include,
  readability-else-after-return,
  readability-enum-initial-value,
  readability-implicit-bool-conversion,
  readability-make-member-function-const,
  readability-math-missing-parentheses,
  readability-misleading-indentation,
  readability-non-const-parameter,
  readability-redundant-casting,
  readability-redundant-declaration,
  readability-redundant-inline-specifier,
  readability-redundant-member-init,
  readability-redundant-string-init,
  readability-reference-to-constructed-temporary,
  readability-simplify-boolean-expr,
  readability-static-definition-in-anonymous-namespace,
  readability-suspicious-call-argument,
  readability-use-std-min-max
  "
# ---
# other checks that have issues that need to be resolved:
#
# readability-inconsistent-declaration-parameter-name, # in this codebase this check will break a lot of arg names
# readability-static-accessed-through-instance, # this check is probably unnecessary. it makes the code less readable
# readability-identifier-naming, # https://github.com/XRPLF/rippled/pull/6571
# ---
#
CheckOptions:
  readability-braces-around-statements.ShortStatementLines: 2
  # readability-identifier-naming.MacroDefinitionCase: UPPER_CASE
  # readability-identifier-naming.ClassCase: CamelCase
  # readability-identifier-naming.StructCase: CamelCase
  # readability-identifier-naming.UnionCase: CamelCase
  # readability-identifier-naming.EnumCase: CamelCase
  # readability-identifier-naming.EnumConstantCase: CamelCase
  # readability-identifier-naming.ScopedEnumConstantCase: CamelCase
  # readability-identifier-naming.GlobalConstantCase: UPPER_CASE
  # readability-identifier-naming.GlobalConstantPrefix: "k"
  # readability-identifier-naming.GlobalVariableCase: CamelCase
  # readability-identifier-naming.GlobalVariablePrefix: "g"
  # readability-identifier-naming.ConstexprFunctionCase: camelBack
  # readability-identifier-naming.ConstexprMethodCase: camelBack
  # readability-identifier-naming.ClassMethodCase: camelBack
  # readability-identifier-naming.ClassMemberCase: camelBack
  # readability-identifier-naming.ClassConstantCase: UPPER_CASE
  # readability-identifier-naming.ClassConstantPrefix: "k"
  # readability-identifier-naming.StaticConstantCase: UPPER_CASE
  # readability-identifier-naming.StaticConstantPrefix: "k"
  # readability-identifier-naming.StaticVariableCase: UPPER_CASE
  # readability-identifier-naming.StaticVariablePrefix: "k"
  # readability-identifier-naming.ConstexprVariableCase: UPPER_CASE
  # readability-identifier-naming.ConstexprVariablePrefix: "k"
  # readability-identifier-naming.LocalConstantCase: camelBack
  # readability-identifier-naming.LocalVariableCase: camelBack
  # readability-identifier-naming.TemplateParameterCase: CamelCase
  # readability-identifier-naming.ParameterCase: camelBack
  # readability-identifier-naming.FunctionCase: camelBack
  # readability-identifier-naming.MemberCase: camelBack
  # readability-identifier-naming.PrivateMemberSuffix: _
  # readability-identifier-naming.ProtectedMemberSuffix: _
  # readability-identifier-naming.PublicMemberSuffix: ""
  # readability-identifier-naming.FunctionIgnoredRegexp: ".*tag_invoke.*"
  bugprone-unsafe-functions.ReportMoreUnsafeFunctions: true
  bugprone-unused-return-value.CheckedReturnTypes: ::std::error_code;::std::error_condition;::std::errc
  misc-include-cleaner.IgnoreHeaders: ".*/(detail|impl)/.*;.*fwd\\.h(pp)?;time.h;stdlib.h;sqlite3.h;netinet/in\\.h;sys/resource\\.h;sys/sysinfo\\.h;linux/sysinfo\\.h;__chrono/.*;bits/.*;_abort\\.h;boost/uuid/uuid_hash.hpp;boost/beast/core/flat_buffer\\.hpp;boost/beast/http/field\\.hpp;boost/beast/http/dynamic_body\\.hpp;boost/beast/http/message\\.hpp;boost/beast/http/read\\.hpp;boost/beast/http/write\\.hpp;openssl/obj_mac\\.h"
#
HeaderFilterRegex: '^.*/(test|xrpl|xrpld)/.*\.(h|hpp)$'
ExcludeHeaderFilterRegex: '^.*/protocol_autogen/.*\.(h|hpp)$'
WarningsAsErrors: "*"
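The two filter regexes at the end interact: `HeaderFilterRegex` selects which headers receive diagnostics, and `ExcludeHeaderFilterRegex` then carves out the generated protocol headers. A small sketch of that include/exclude logic; the example paths are made up:

```python
import re

# Filters from the config above: diagnose project headers, skip generated ones.
HEADER_FILTER = re.compile(r"^.*/(test|xrpl|xrpld)/.*\.(h|hpp)$")
EXCLUDE_FILTER = re.compile(r"^.*/protocol_autogen/.*\.(h|hpp)$")

def diagnosed(path: str) -> bool:
    """True only if the header matches the filter and not the exclusion."""
    return bool(HEADER_FILTER.match(path)) and not EXCLUDE_FILTER.match(path)

# Hypothetical paths to illustrate the interaction.
for p in [
    "src/xrpld/app/Ledger.h",                    # diagnosed
    "include/xrpl/protocol_autogen/types.pb.h",  # matched, then excluded
    "external/foo/bar.h",                        # outside the filter entirely
]:
    print(p, "->", diagnosed(p))
```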

.cmake-format.yaml Normal file

@@ -0,0 +1,247 @@
_help_parse: Options affecting listfile parsing
parse:
  _help_additional_commands:
    - Specify structure for custom cmake functions
  additional_commands:
    target_protobuf_sources:
      pargs:
        - target
        - prefix
      kwargs:
        PROTOS: "*"
        LANGUAGE: cpp
        IMPORT_DIRS: "*"
        GENERATE_EXTENSIONS: "*"
        PLUGIN: "*"
  _help_override_spec:
    - Override configurations per-command where available
  override_spec: {}
  _help_vartags:
    - Specify variable tags.
  vartags: []
  _help_proptags:
    - Specify property tags.
  proptags: []
_help_format: Options affecting formatting.
format:
  _help_disable:
    - Disable formatting entirely, making cmake-format a no-op
  disable: false
  _help_line_width:
    - How wide to allow formatted cmake files
  line_width: 120
  _help_tab_size:
    - How many spaces to tab for indent
  tab_size: 4
  _help_use_tabchars:
    - If true, lines are indented using tab characters (utf-8
    - 0x09) instead of <tab_size> space characters (utf-8 0x20).
    - In cases where the layout would require a fractional tab
    - character, the behavior of the fractional indentation is
    - governed by <fractional_tab_policy>
  use_tabchars: false
  _help_fractional_tab_policy:
    - If <use_tabchars> is True, then the value of this variable
    - indicates how fractional indentions are handled during
    - whitespace replacement. If set to 'use-space', fractional
    - indentation is left as spaces (utf-8 0x20). If set to
    - "`round-up` fractional indentation is replaced with a single"
    - tab character (utf-8 0x09) effectively shifting the column
    - to the next tabstop
  fractional_tab_policy: use-space
  _help_max_subgroups_hwrap:
    - If an argument group contains more than this many sub-groups
    - (parg or kwarg groups) then force it to a vertical layout.
  max_subgroups_hwrap: 4
  _help_max_pargs_hwrap:
    - If a positional argument group contains more than this many
    - arguments, then force it to a vertical layout.
  max_pargs_hwrap: 5
  _help_max_rows_cmdline:
    - If a cmdline positional group consumes more than this many
    - lines without nesting, then invalidate the layout (and nest)
  max_rows_cmdline: 2
  _help_separate_ctrl_name_with_space:
    - If true, separate flow control names from their parentheses
    - with a space
  separate_ctrl_name_with_space: true
  _help_separate_fn_name_with_space:
    - If true, separate function names from parentheses with a
    - space
  separate_fn_name_with_space: false
  _help_dangle_parens:
    - If a statement is wrapped to more than one line, than dangle
    - the closing parenthesis on its own line.
  dangle_parens: false
  _help_dangle_align:
    - If the trailing parenthesis must be 'dangled' on its on
    - "line, then align it to this reference: `prefix`: the start"
    - "of the statement, `prefix-indent`: the start of the"
    - "statement, plus one indentation level, `child`: align to"
    - the column of the arguments
  dangle_align: prefix
  _help_min_prefix_chars:
    - If the statement spelling length (including space and
    - parenthesis) is smaller than this amount, then force reject
    - nested layouts.
  min_prefix_chars: 18
  _help_max_prefix_chars:
    - If the statement spelling length (including space and
    - parenthesis) is larger than the tab width by more than this
    - amount, then force reject un-nested layouts.
  max_prefix_chars: 10
  _help_max_lines_hwrap:
    - If a candidate layout is wrapped horizontally but it exceeds
    - this many lines, then reject the layout.
  max_lines_hwrap: 2
  _help_line_ending:
    - What style line endings to use in the output.
  line_ending: unix
  _help_command_case:
    - Format command names consistently as 'lower' or 'upper' case
  command_case: canonical
  _help_keyword_case:
    - Format keywords consistently as 'lower' or 'upper' case
  keyword_case: unchanged
  _help_always_wrap:
    - A list of command names which should always be wrapped
  always_wrap: []
  _help_enable_sort:
    - If true, the argument lists which are known to be sortable
    - will be sorted lexicographicall
  enable_sort: true
  _help_autosort:
    - If true, the parsers may infer whether or not an argument
    - list is sortable (without annotation).
  autosort: true
  _help_require_valid_layout:
    - By default, if cmake-format cannot successfully fit
    - everything into the desired linewidth it will apply the
    - last, most aggressive attempt that it made. If this flag is
    - True, however, cmake-format will print error, exit with non-
    - zero status code, and write-out nothing
  require_valid_layout: false
  _help_layout_passes:
    - A dictionary mapping layout nodes to a list of wrap
    - decisions. See the documentation for more information.
  layout_passes: {}
_help_markup: Options affecting comment reflow and formatting.
markup:
  _help_bullet_char:
    - What character to use for bulleted lists
  bullet_char: "-"
  _help_enum_char:
    - What character to use as punctuation after numerals in an
    - enumerated list
  enum_char: .
  _help_first_comment_is_literal:
    - If comment markup is enabled, don't reflow the first comment
    - block in each listfile. Use this to preserve formatting of
    - your copyright/license statements.
  first_comment_is_literal: false
  _help_literal_comment_pattern:
    - If comment markup is enabled, don't reflow any comment block
    - which matches this (regex) pattern. Default is `None`
    - (disabled).
  literal_comment_pattern: null
  _help_fence_pattern:
    - Regular expression to match preformat fences in comments
    - default= ``r'^\s*([`~]{3}[`~]*)(.*)$'``
  fence_pattern: ^\s*([`~]{3}[`~]*)(.*)$
  _help_ruler_pattern:
    - Regular expression to match rulers in comments default=
    - '``r''^\s*[^\w\s]{3}.*[^\w\s]{3}$''``'
  ruler_pattern: ^\s*[^\w\s]{3}.*[^\w\s]{3}$
  _help_explicit_trailing_pattern:
    - If a comment line matches starts with this pattern then it
    - is explicitly a trailing comment for the preceding
    - argument. Default is '#<'
  explicit_trailing_pattern: "#<"
  _help_hashruler_min_length:
    - If a comment line starts with at least this many consecutive
    - hash characters, then don't lstrip() them off. This allows
    - for lazy hash rulers where the first hash char is not
    - separated by space
  hashruler_min_length: 10
  _help_canonicalize_hashrulers:
    - If true, then insert a space between the first hash char and
    - remaining hash chars in a hash ruler, and normalize its
    - length to fill the column
  canonicalize_hashrulers: true
  _help_enable_markup:
    - enable comment markup parsing and reflow
  enable_markup: false
_help_lint: Options affecting the linter
lint:
  _help_disabled_codes:
    - a list of lint codes to disable
  disabled_codes: []
  _help_function_pattern:
    - regular expression pattern describing valid function names
  function_pattern: "[0-9a-z_]+"
  _help_macro_pattern:
    - regular expression pattern describing valid macro names
  macro_pattern: "[0-9A-Z_]+"
  _help_global_var_pattern:
    - regular expression pattern describing valid names for
    - variables with global (cache) scope
  global_var_pattern: "[A-Z][0-9A-Z_]+"
  _help_internal_var_pattern:
    - regular expression pattern describing valid names for
    - variables with global scope (but internal semantic)
  internal_var_pattern: _[A-Z][0-9A-Z_]+
  _help_local_var_pattern:
    - regular expression pattern describing valid names for
    - variables with local scope
  local_var_pattern: "[a-z][a-z0-9_]+"
  _help_private_var_pattern:
    - regular expression pattern describing valid names for
    - privatedirectory variables
  private_var_pattern: _[0-9a-z_]+
  _help_public_var_pattern:
    - regular expression pattern describing valid names for public
    - directory variables
  public_var_pattern: "[A-Z][0-9A-Z_]+"
  _help_argument_var_pattern:
    - regular expression pattern describing valid names for
    - function/macro arguments and loop variables.
  argument_var_pattern: "[a-z][a-z0-9_]+"
  _help_keyword_pattern:
    - regular expression pattern describing valid names for
    - keywords used in functions or macros
  keyword_pattern: "[A-Z][0-9A-Z_]+"
  _help_max_conditionals_custom_parser:
    - In the heuristic for C0201, how many conditionals to match
    - within a loop in before considering the loop a parser.
  max_conditionals_custom_parser: 2
  _help_min_statement_spacing:
    - Require at least this many newlines between statements
  min_statement_spacing: 1
  _help_max_statement_spacing:
    - Require no more than this many newlines between statements
  max_statement_spacing: 2
  max_returns: 6
  max_branches: 12
  max_arguments: 5
  max_localvars: 15
  max_statements: 50
_help_encode: Options affecting file encoding
encode:
  _help_emit_byteorder_mark:
    - If true, emit the unicode byte-order mark (BOM) at the start
    - of the file
  emit_byteorder_mark: false
  _help_input_encoding:
    - Specify the encoding of the input file. Defaults to utf-8
  input_encoding: utf-8
  _help_output_encoding:
    - Specify the encoding of the output file. Defaults to utf-8.
    - Note that cmake only claims to support utf-8 so be careful
    - when using anything else
  output_encoding: utf-8
_help_misc: Miscellaneous configurations options.
misc:
  _help_per_command:
    - A dictionary containing any per-command configuration
    - overrides. Currently only `command_case` is supported.
  per_command: {}
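To show how a tool or hook might consume a few of these knobs, here is a minimal sketch, assuming PyYAML is installed and the file sits at the repository root as added by this diff:

```python
import yaml  # PyYAML, assumed available

with open(".cmake-format.yaml") as f:
    cfg = yaml.safe_load(f)

fmt = cfg["format"]
print("line width:", fmt["line_width"])        # 120
print("tab size:", fmt["tab_size"])            # 4
print("dangle parens:", fmt["dangle_parens"])  # False

# Custom command signatures taught to cmake-format:
for name, spec in cfg["parse"]["additional_commands"].items():
    print(name, "pargs:", spec["pargs"], "kwargs:", sorted(spec["kwargs"]))
```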


@@ -1,101 +0,0 @@
# Custom CMake command definitions for gersemi formatting.
# These stubs teach gersemi the signatures of project-specific commands
# so it can format their invocations correctly.
function(git_branch branch_val)
endfunction()

function(isolate_headers target A B scope)
endfunction()

function(create_symbolic_link target link)
endfunction()

function(xrpl_add_test name)
endfunction()

macro(exclude_from_default target_)
endmacro()

macro(exclude_if_included target_)
endmacro()

function(target_protobuf_sources target prefix)
    set(options APPEND_PATH DESCRIPTORS)
    set(oneValueArgs
        LANGUAGE
        OUT_VAR
        EXPORT_MACRO
        TARGET
        PROTOC_OUT_DIR
        PLUGIN
        PLUGIN_OPTIONS
        PROTOC_EXE
    )
    set(multiValueArgs
        PROTOS
        IMPORT_DIRS
        GENERATE_EXTENSIONS
        PROTOC_OPTIONS
        DEPENDENCIES
    )
    cmake_parse_arguments(
        THIS_FUNCTION_PREFIX
        "${options}"
        "${oneValueArgs}"
        "${multiValueArgs}"
        ${ARGN}
    )
endfunction()

function(add_module parent name)
endfunction()

function(setup_protocol_autogen)
endfunction()

function(target_link_modules parent scope)
endfunction()

function(setup_target_for_coverage_gcovr)
    set(options NONE)
    set(oneValueArgs BASE_DIRECTORY NAME FORMAT)
    set(multiValueArgs EXCLUDE EXECUTABLE EXECUTABLE_ARGS DEPENDENCIES)
    cmake_parse_arguments(
        THIS_FUNCTION_PREFIX
        "${options}"
        "${oneValueArgs}"
        "${multiValueArgs}"
        ${ARGN}
    )
endfunction()

function(add_code_coverage_to_target name scope)
endfunction()

function(verbose_find_path variable name)
    set(options
        NO_CACHE
        REQUIRED
        OPTIONAL
        NO_DEFAULT_PATH
        NO_PACKAGE_ROOT_PATH
        NO_CMAKE_PATH
        NO_CMAKE_ENVIRONMENT_PATH
        NO_SYSTEM_ENVIRONMENT_PATH
        NO_CMAKE_SYSTEM_PATH
        NO_CMAKE_INSTALL_PREFIX
        CMAKE_FIND_ROOT_PATH_BOTH
        ONLY_CMAKE_FIND_ROOT_PATH
        NO_CMAKE_FIND_ROOT_PATH
    )
    set(oneValueArgs REGISTRY_VIEW VALIDATOR DOC)
    set(multiValueArgs NAMES HINTS PATHS PATH_SUFFIXES)
    cmake_parse_arguments(
        THIS_FUNCTION_PREFIX
        "${options}"
        "${oneValueArgs}"
        "${multiValueArgs}"
        ${ARGN}
    )
endfunction()


@@ -1 +0,0 @@
definitions: [.gersemi]


@@ -1,79 +1,16 @@
# This feature requires Git >= 2.24
# To use it by default in git blame:
# git config blame.ignoreRevsFile .git-blame-ignore-revs
# This file is sorted in reverse chronological order, with the most recent commits at the top.
# The commits listed here are ignored by git blame, which is useful for formatting-only commits that would otherwise obscure the history of changes to a file.
# refactor: Enable remaining clang-tidy `cppcoreguidelines` checks (#6538)
72f4cb097f626b08b02fc3efcb4aa11cb2e7adb8
# refactor: Rename system name from 'ripple' to 'xrpld' (#6347)
ffea3977f0b771fe8e43a8f74e4d393d63a7afd8
# refactor: Update transaction folder structure (#6483)
5865bd017f777491b4a956f9210be0c4161f5442
# chore: Use gersemi instead of ancient cmake-format (#6486)
0c74270b055133a57a497b5c9fc5a75f7647b1f4
# chore: Apply clang-format width 100 (#6387)
2c1fad102353e11293e3edde1c043224e7d3e983
# chore: Set clang-format width to 100 in config file (#6387)
25cca465538a56cce501477f9e5e2c1c7ea2d84c
# chore: Set cmake-format width to 100 (#6386)
469ce9f291a4480c38d4ee3baca5136b2f053cd0
# refactor: Modularize app/tx (#6228)
0976b2b68b64972af8e6e7c497900b5bce9fe22f
# chore: Update clang-format to 21.1.8 (#6352)
958d8f375453d80bb1aa4c293b5102c045a3e4b4
# refactor: Replace include guards by '#pragma once' (#6322)
34ef577604782ca8d6e1c17df8bd7470990a52ff
# chore: Format all cmake files without comments (#6294)
fe9c8d568fcf6ac21483024e01f58962dd5c8260
# chore: Add cmake-format pre-commit hook (#6279)
a0e09187b9370805d027c611a7e9ff5a0125282a
# chore: Set ColumnLimit to 120 in clang-format (#6288)
5f638f55536def0d88b970d1018a465a238e55f4
# refactor: Fix typos in comments, configure cspell (#6164)
3c9f5b62525cb1d6ca1153eeb10433db7d7379fd
# refactor: Rename `rippled.cfg` to `xrpld.cfg` (#6098)
3d1b3a49b3601a0a7037fa0b19d5df7b5e0e2fc1
# refactor: Rename `ripple` namespace to `xrpl` (#5982)
1eb0fdac6543706b4b9ddca57fd4102928a1f871
# refactor: Rename `rippled` binary to `xrpld` (#5983)
9eb84a561ef8bb066d89f098bd9b4ac71baed67c
# refactor: Replaces secp256k1 source by Conan package (#6089)
813bc4d9491b078bb950f8255f93b02f71320478
# refactor: Remove unnecessary copyright notices already covered by LICENSE.md (#5929)
1d42c4f6de6bf01d1286fc7459b17a37a5189e88
# refactor: Rename `RIPPLE_` and `RIPPLED_` definitions to `XRPL_` (#5821)
ada83564d894829424b0f4d922b0e737e07abbf7
# refactor: Modularize shamap and nodestore (#5668)
8eb233c2ea8ad5a159be73b77f0f5e1496d547ac
# refactor: Modularise ledger (#5493)
dc8b37a52448b005153c13a7f046ad494128cf94
# chore: Update clang-format and prettier with pre-commit (#5709)
c14ce956adeabe476ad73c18d73103f347c9c613
# chore: Fix file formatting (#5718)
896b8c3b54a22b0497cb0d1ce95e1095f9a227ce
# chore: Reverts formatting changes to external files, adds formatting changes to proto files (#5711)
b13370ac0d207217354f1fc1c29aef87769fb8a1
# chore: Run prettier on all files (#5657)
97f0747e103f13e26e45b731731059b32f7679ac
# Reformat code with clang-format-18
552377c76f55b403a1c876df873a23d780fcc81c
# Recompute loops (#4997)
d028005aa6319338b0adae1aebf8abe113162960
# Rewrite includes (#4997)
1d23148e6dd53957fcb6205c07a5c6cd7b64d50c
# Rearrange sources (#4997)
e416ee72ca26fa0c09d2aee1b68bdfb2b7046eed
# Move CMake directory (#4997)
2e902dee53aab2a8f27f32971047bb81e022f94f
# Rewrite includes
0eebe6a5f4246fced516d52b83ec4e7f47373edd
# Format formerly .hpp files
760f16f56835663d9286bd29294d074de26a7ba6
# Rename .hpp to .h
241b9ddde9e11beb7480600fd5ed90e1ef109b21
# Consolidate external libraries
e2384885f5f630c8f0ffe4bf21a169b433a16858
# Format first-party source according to .clang-format
50760c693510894ca368e90369b0cc2dabfd07f3
e2384885f5f630c8f0ffe4bf21a169b433a16858
241b9ddde9e11beb7480600fd5ed90e1ef109b21
760f16f56835663d9286bd29294d074de26a7ba6
0eebe6a5f4246fced516d52b83ec4e7f47373edd
2189cc950c0cebb89e4e2fa3b2d8817205bf7cef
b9d007813378ad0ff45660dc07285b823c7e9855
fe9a5365b8a52d4acc42eb27369247e6f238a4f9
9a93577314e6a8d4b4a8368cc9d2b15a5d8303e8
552377c76f55b403a1c876df873a23d780fcc81c
97f0747e103f13e26e45b731731059b32f7679ac
b13370ac0d207217354f1fc1c29aef87769fb8a1
896b8c3b54a22b0497cb0d1ce95e1095f9a227ce
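As the header comments note, this file only takes effect when git blame is told about it (globally via `git config blame.ignoreRevsFile`, or per invocation). A small sketch of the per-invocation form, using Python's subprocess; the flag is standard in the Git versions the file requires (>= 2.24), and the blamed path is hypothetical:

```python
import subprocess

# Equivalent to: git blame --ignore-revs-file .git-blame-ignore-revs <file>
# Run from the repository root; the target file here is illustrative.
result = subprocess.run(
    ["git", "blame", "--ignore-revs-file", ".git-blame-ignore-revs",
     "src/xrpld/app/main/Application.cpp"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```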


@@ -1,7 +1,7 @@
---
name: Feature Request
about: Suggest a new feature for the xrpld project
title: "[Title with short description] (Version: [xrpld version])"
about: Suggest a new feature for the rippled project
title: "[Title with short description] (Version: [rippled version])"
labels: Feature Request
assignees: ""
---


@@ -11,7 +11,7 @@ runs:
steps:
# When a tag is pushed, the version is used as-is.
- name: Generate version for tag event
if: ${{ startsWith(github.ref, 'refs/tags/') }}
if: ${{ github.event_name == 'tag' }}
shell: bash
env:
VERSION: ${{ github.ref_name }}
@@ -22,7 +22,7 @@ runs:
# We use a plus sign instead of a hyphen because Conan recipe versions do
# not support two hyphens.
- name: Generate version for non-tag event
if: ${{ !startsWith(github.ref, 'refs/tags/') }}
if: ${{ github.event_name != 'tag' }}
shell: bash
run: |
echo 'Extracting version from BuildInfo.cpp.'
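One side of the hunks above detects tag builds via the `refs/tags/` prefix of the ref. A hedged restatement of the version logic the comments describe (the function name and version strings are illustrative; only the prefix test and the '+' separator, which exists because Conan recipe versions do not support two hyphens, come from the action):

```python
def build_version(ref: str, ref_name: str, base_version: str, sha: str) -> str:
    """Tag pushes use the tag as-is; other events append a '+' suffix to the
    version extracted from BuildInfo.cpp."""
    if ref.startswith("refs/tags/"):
        return ref_name
    return f"{base_version}+{sha[:7]}"

print(build_version("refs/tags/2.6.0", "2.6.0", "2.6.0", "deadbeefcafe"))       # 2.6.0
print(build_version("refs/heads/develop", "develop", "2.6.0", "deadbeefcafe"))  # 2.6.0+deadbee
```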


@@ -1,56 +0,0 @@
version: 2
updates:
  - package-ecosystem: github-actions
    directory: /
    schedule:
      interval: weekly
      day: monday
      time: "04:00"
      timezone: Etc/GMT
    commit-message:
      prefix: "ci: [DEPENDABOT] "
    target-branch: develop
  - package-ecosystem: github-actions
    directory: .github/actions/build-deps/
    schedule:
      interval: weekly
      day: monday
      time: "04:00"
      timezone: Etc/GMT
    commit-message:
      prefix: "ci: [DEPENDABOT] "
    target-branch: develop
  - package-ecosystem: github-actions
    directory: .github/actions/generate-version/
    schedule:
      interval: weekly
      day: monday
      time: "04:00"
      timezone: Etc/GMT
    commit-message:
      prefix: "ci: [DEPENDABOT] "
    target-branch: develop
  - package-ecosystem: github-actions
    directory: .github/actions/print-env/
    schedule:
      interval: weekly
      day: monday
      time: "04:00"
      timezone: Etc/GMT
    commit-message:
      prefix: "ci: [DEPENDABOT] "
    target-branch: develop
  - package-ecosystem: github-actions
    directory: .github/actions/setup-conan/
    schedule:
      interval: weekly
      day: monday
      time: "04:00"
      timezone: Etc/GMT
    commit-message:
      prefix: "ci: [DEPENDABOT] "
    target-branch: develop
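Since the five stanzas differ only in their directory, here is a hedged sketch of how such a block could be generated rather than hand-maintained (PyYAML assumed; fields and directories copied from the config above):

```python
import yaml  # PyYAML, assumed available

DIRECTORIES = [
    "/",
    ".github/actions/build-deps/",
    ".github/actions/generate-version/",
    ".github/actions/print-env/",
    ".github/actions/setup-conan/",
]

config = {
    "version": 2,
    "updates": [
        {
            "package-ecosystem": "github-actions",
            "directory": d,
            "schedule": {
                "interval": "weekly",
                "day": "monday",
                "time": "04:00",
                "timezone": "Etc/GMT",
            },
            "commit-message": {"prefix": "ci: [DEPENDABOT] "},
            "target-branch": "develop",
        }
        for d in DIRECTORIES
    ],
}

print(yaml.safe_dump(config, sort_keys=False))
```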


@@ -29,6 +29,22 @@ If a refactor, how is this better than the previous implementation?
If there is a spec or design document for this feature, please link it here.
-->
### Type of Change
<!--
Please check [x] relevant options, delete irrelevant ones.
-->
- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] Refactor (non-breaking change that only restructures code)
- [ ] Performance (increase or change in throughput and/or latency)
- [ ] Tests (you added tests for code that already exists, or your new feature included in this PR)
- [ ] Documentation update
- [ ] Chore (no impact to binary, e.g. `.gitignore`, formatting, dropping support for older tooling)
- [ ] Release
### API Impact
<!--


@@ -1,85 +0,0 @@
#!/usr/bin/env python3
"""
Checks that a pull request description has been customized from the
pull_request_template.md. Exits with code 1 if the description is empty
or identical to the template (ignoring HTML comments and whitespace).
Usage:
python check-pr-description.py --template-file TEMPLATE --pr-body-file BODY
"""
import argparse
import re
import sys
from pathlib import Path
def normalize(text: str) -> str:
"""Strip HTML comments, trim lines, and remove blank lines."""
# Remove HTML comments (possibly multi-line)
text = re.sub(r"<!--.*?-->", "", text, flags=re.DOTALL)
# Strip each line and drop empties
lines = [line.strip() for line in text.splitlines()]
lines = [line for line in lines if line]
return "\n".join(lines)
def main() -> int:
parser = argparse.ArgumentParser(
description="Check that a PR description differs from the template."
)
parser.add_argument(
"--template-file",
type=Path,
required=True,
help="Path to the pull request template file.",
)
parser.add_argument(
"--pr-body-file",
type=Path,
required=True,
help="Path to a file containing the PR body text.",
)
args = parser.parse_args()
template_path: Path = args.template_file
pr_body_path: Path = args.pr_body_file
if not template_path.is_file():
print(f"::error::Template file {template_path} not found")
return 1
if not pr_body_path.is_file():
print(f"::error::PR body file {pr_body_path} not found")
return 1
template = template_path.read_text(encoding="utf-8")
pr_body = pr_body_path.read_text(encoding="utf-8")
# Check if the PR body is empty or whitespace-only
if not pr_body.strip():
print(
"::error::PR description is empty. "
"Please fill in the pull request template."
)
return 1
norm_template = normalize(template)
norm_pr_body = normalize(pr_body)
if norm_pr_body == norm_template:
print(
"::error::PR description (ignoring HTML comments) is identical"
" to the template. Please fill in the details of your change."
f"\n\nVisible template content:\n---\n{norm_template}\n---"
f"\n\nVisible PR description content:\n---\n{norm_pr_body}\n---"
)
return 1
print("PR description has been customized from the template.")
return 0
if __name__ == "__main__":
sys.exit(main())
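A quick illustration of the normalization rule above: comments and blank lines are invisible to the comparison, so this hypothetical body would be rejected as identical to its template. The snippet is self-contained, with the same rule inlined:

```python
import re

def normalize(text: str) -> str:
    # Same rule as the script above: drop HTML comments, trim, drop blanks.
    text = re.sub(r"<!--.*?-->", "", text, flags=re.DOTALL)
    return "\n".join(l for l in (ln.strip() for ln in text.splitlines()) if l)

template = "<!-- Explain your change -->\n\n## Summary\n"
body = "## Summary\n\n<!-- leftover comment -->\n"

# Both normalize to the same visible text, so the check would fail this PR.
assert normalize(template) == normalize(body) == "## Summary"
```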


@@ -1,14 +1,14 @@
# Levelization
Levelization is the term used to describe efforts to prevent xrpld from
Levelization is the term used to describe efforts to prevent rippled from
having or creating cyclic dependencies.
xrpld code is organized into directories under `src/xrpld`, `src/libxrpl` (and
rippled code is organized into directories under `src/xrpld`, `src/libxrpl` (and
`src/test`) representing modules. The modules are intended to be
organized into "tiers" or "levels" such that a module from one level can
only include code from lower levels. Additionally, a module
in one level should never include code in an `impl` or `detail` folder of any level
other than its own.
other than it's own.
The codebase is split into two main areas:
@@ -22,7 +22,7 @@ levelization violations they find (by moving files or individual
classes). At the very least, don't make things worse.
The table below summarizes the _desired_ division of modules, based on the current
state of the xrpld code. The levels are numbered from
state of the rippled code. The levels are numbered from
the bottom up with the lower level, lower numbered, more independent
modules listed first, and the higher level, higher numbered modules with
more dependencies listed later.
@@ -70,12 +70,12 @@ that `test` code should _never_ be included in `xrpl` or `xrpld` code.)
## Validation
The [levelization](generate.py) script takes no parameters,
The [levelization](generate.sh) script takes no parameters,
reads no environment variables, and can be run from any directory,
as long as it is in the expected location in the xrpld repo.
as long as it is in the expected location in the rippled repo.
It can be run at any time from within a checked out repo, and will
do an analysis of all the `#include`s in
the xrpld source. The only caveat is that it runs much slower
the rippled source. The only caveat is that it runs much slower
under Windows than in Linux. It hasn't yet been tested under MacOS.
It generates many files of [results](results):
@@ -104,7 +104,7 @@ It generates many files of [results](results):
Github Actions workflow to test that levelization loops haven't
changed. Unfortunately, if changes are detected, it can't tell if
they are improvements or not, so if you have resolved any issues or
done anything else to improve levelization, run `generate.py`,
done anything else to improve levelization, run `levelization.sh`,
and commit the updated results.
The `loops.txt` and `ordering.txt` files relate the modules
@@ -128,7 +128,7 @@ The committed files hide the detailed values intentionally, to
prevent false alarms and merging issues, and because it's easy to
get those details locally.
1. Run `generate.py`
1. Run `levelization.sh`
2. Grep the modules in `paths.txt`.
- For example, if a cycle is found `A ~= B`, simply `grep -w
A .github/scripts/levelization/results/paths.txt | grep -w B`


@@ -1,335 +0,0 @@
#!/usr/bin/env python3
"""
Usage: generate.py
This script takes no parameters, and can be called from any directory in the file system.
"""
import os
import re
import subprocess
import sys
from collections import defaultdict
from pathlib import Path
from typing import Dict, List, Tuple, Set, Optional
# Compile regex patterns once at module level
INCLUDE_PATTERN = re.compile(r"^\s*#include.*/.*\.h")
INCLUDE_PATH_PATTERN = re.compile(r'[<"]([^>"]+)[>"]')
def dictionary_sort_key(s: str) -> str:
"""
Create a sort key that mimics 'sort -d' (dictionary order).
Dictionary order only considers blanks and alphanumeric characters.
This means punctuation like '.' is ignored during sorting.
"""
# Keep only alphanumeric characters and spaces
return "".join(c for c in s if c.isalnum() or c.isspace())
def get_level(file_path: str) -> str:
"""
Extract the level from a file path (second and third directory components).
Equivalent to bash: cut -d/ -f 2,3
Examples:
src/xrpld/app/main.cpp -> xrpld.app
src/libxrpl/protocol/STObject.cpp -> libxrpl.protocol
include/xrpl/basics/base_uint.h -> xrpl.basics
"""
parts = file_path.split("/")
# Get fields 2 and 3 (indices 1 and 2 in 0-based indexing)
if len(parts) >= 3:
level = f"{parts[1]}/{parts[2]}"
elif len(parts) >= 2:
level = f"{parts[1]}/toplevel"
else:
level = file_path
# If the "level" indicates a file, cut off the filename
if "." in level.split("/")[-1]: # Avoid Path object creation
# Use the "toplevel" label as a workaround for `sort`
# inconsistencies between different utility versions
level = level.rsplit("/", 1)[0] + "/toplevel"
return level.replace("/", ".")
def extract_include_level(include_line: str) -> Optional[str]:
"""
Extract the include path from an #include directive.
Gets the first two directory components from the include path.
Equivalent to bash: cut -d/ -f 1,2
Examples:
#include <xrpl/basics/base_uint.h> -> xrpl.basics
#include "xrpld/app/main/Application.h" -> xrpld.app
"""
# Remove everything before the quote or angle bracket
match = INCLUDE_PATH_PATTERN.search(include_line)
if not match:
return None
include_path = match.group(1)
parts = include_path.split("/")
# Get first two fields (indices 0 and 1)
if len(parts) >= 2:
include_level = f"{parts[0]}/{parts[1]}"
else:
include_level = include_path
# If the "includelevel" indicates a file, cut off the filename
if "." in include_level.split("/")[-1]: # Avoid Path object creation
include_level = include_level.rsplit("/", 1)[0] + "/toplevel"
return include_level.replace("/", ".")
def find_repository_directories(
start_path: Path, depth_limit: int = 10
) -> Tuple[Path, List[Path]]:
"""
Find the repository root by looking for src or include folders.
Walks up the directory tree from the start path.
"""
current = start_path.resolve()
# Walk up the directory tree
for _ in range(depth_limit): # Limit search depth to prevent infinite loops
src_path = current / "src"
include_path = current / "include"
# Check if this directory has src or include folders
has_src = src_path.exists()
has_include = include_path.exists()
if has_src or has_include:
return current, [src_path, include_path]
# Move up one level
parent = current.parent
if parent == current: # Reached filesystem root
break
current = parent
# If we couldn't find it, raise an error
raise RuntimeError(
"Could not find repository root. "
"Expected to find a directory containing 'src' and/or 'include' folders."
)
def main():
# Change to the script's directory
script_dir = Path(__file__).parent.resolve()
os.chdir(script_dir)
# Clean up and create results directory.
results_dir = script_dir / "results"
if results_dir.exists():
import shutil
shutil.rmtree(results_dir)
results_dir.mkdir()
# Find the repository root by searching for src and include directories.
try:
repo_root, scan_dirs = find_repository_directories(script_dir)
print(f"Found repository root: {repo_root}")
print(f"Scanning directories:")
for scan_dir in scan_dirs:
print(f" - {scan_dir.relative_to(repo_root)}")
except RuntimeError as e:
print(f"Error: {e}", file=sys.stderr)
sys.exit(1)
print("\nScanning for raw includes...")
# Find all #include directives
raw_includes: List[Tuple[str, str]] = []
rawincludes_file = results_dir / "rawincludes.txt"
# Write to file as we go to avoid storing everything in memory.
with open(rawincludes_file, "w", buffering=8192) as raw_f:
for dir_path in scan_dirs:
print(f" Scanning {dir_path.relative_to(repo_root)}...")
for file_path in dir_path.rglob("*"):
if not file_path.is_file():
continue
try:
rel_path_str = str(file_path.relative_to(repo_root))
# Read file with a large buffer for performance.
with open(
file_path,
"r",
encoding="utf-8",
errors="ignore",
buffering=8192,
) as f:
for line in f:
# Quick check before regex
if "#include" not in line or "boost" in line:
continue
if INCLUDE_PATTERN.match(line):
line_stripped = line.strip()
entry = f"{rel_path_str}:{line_stripped}\n"
print(entry, end="")
raw_f.write(entry)
raw_includes.append((rel_path_str, line_stripped))
except Exception as e:
print(f"Error reading {file_path}: {e}", file=sys.stderr)
# Build levelization paths and count directly (no need to sort first).
print("Build levelization paths")
path_counts: Dict[Tuple[str, str], int] = defaultdict(int)
for file_path, include_line in raw_includes:
include_level = extract_include_level(include_line)
if not include_level:
continue
level = get_level(file_path)
if level != include_level:
path_counts[(level, include_level)] += 1
# Sort and deduplicate paths (using dictionary order like bash 'sort -d').
print("Sort and deduplicate paths")
paths_file = results_dir / "paths.txt"
with open(paths_file, "w") as f:
# Sort using dictionary order: only alphanumeric and spaces matter
sorted_items = sorted(
path_counts.items(),
key=lambda x: (dictionary_sort_key(x[0][0]), dictionary_sort_key(x[0][1])),
)
for (level, include_level), count in sorted_items:
line = f"{count:7} {level} {include_level}\n"
print(line.rstrip())
f.write(line)
# Split into flat-file database
print("Split into flat-file database")
includes_dir = results_dir / "includes"
included_by_dir = results_dir / "included_by"
includes_dir.mkdir()
included_by_dir.mkdir()
# Batch writes by grouping data first to avoid repeated file opens.
includes_data: Dict[str, List[Tuple[str, int]]] = defaultdict(list)
included_by_data: Dict[str, List[Tuple[str, int]]] = defaultdict(list)
# Process in sorted order to match bash script behaviour (dictionary order).
sorted_items = sorted(
path_counts.items(),
key=lambda x: (dictionary_sort_key(x[0][0]), dictionary_sort_key(x[0][1])),
)
for (level, include_level), count in sorted_items:
includes_data[level].append((include_level, count))
included_by_data[include_level].append((level, count))
# Write all includes files in sorted order (dictionary order).
for level in sorted(includes_data.keys(), key=dictionary_sort_key):
entries = includes_data[level]
with open(includes_dir / level, "w") as f:
for include_level, count in entries:
line = f"{include_level} {count}\n"
print(line.rstrip())
f.write(line)
# Write all included_by files in sorted order (dictionary order).
for include_level in sorted(included_by_data.keys(), key=dictionary_sort_key):
entries = included_by_data[include_level]
with open(included_by_dir / include_level, "w") as f:
for level, count in entries:
line = f"{level} {count}\n"
print(line.rstrip())
f.write(line)
# Search for loops
print("Search for loops")
loops_file = results_dir / "loops.txt"
ordering_file = results_dir / "ordering.txt"
loops_found: Set[Tuple[str, str]] = set()
# Pre-load all include files into memory to avoid repeated I/O.
# This is the biggest optimisation - we were reading files repeatedly in nested loops.
# Use list of tuples to preserve file order.
includes_cache: Dict[str, List[Tuple[str, int]]] = {}
includes_lookup: Dict[str, Dict[str, int]] = {} # For fast lookup
# Note: bash script uses 'for source in *' which uses standard glob sorting,
# NOT dictionary order. So we use standard sorted() here, not dictionary_sort_key.
for include_file in sorted(includes_dir.iterdir(), key=lambda p: p.name):
if not include_file.is_file():
continue
includes_cache[include_file.name] = []
includes_lookup[include_file.name] = {}
with open(include_file, "r") as f:
for line in f:
parts = line.strip().split()
if len(parts) >= 2:
include_name = parts[0]
include_count = int(parts[1])
includes_cache[include_file.name].append(
(include_name, include_count)
)
includes_lookup[include_file.name][include_name] = include_count
with open(loops_file, "w", buffering=8192) as loops_f, open(
ordering_file, "w", buffering=8192
) as ordering_f:
# Use standard sorting to match bash glob expansion 'for source in *'.
for source in sorted(includes_cache.keys()):
source_includes = includes_cache[source]
for include, include_freq in source_includes:
# Check if include file exists and references source
if include not in includes_lookup:
continue
source_freq = includes_lookup[include].get(source)
if source_freq is not None:
# Found a loop
loop_key = tuple(sorted([source, include]))
if loop_key in loops_found:
continue
loops_found.add(loop_key)
loops_f.write(f"Loop: {source} {include}\n")
# If the counts are close, indicate that the two modules are
# on the same level, though they shouldn't be.
diff = include_freq - source_freq
if diff > 3:
loops_f.write(f" {source} > {include}\n\n")
elif diff < -3:
loops_f.write(f" {include} > {source}\n\n")
elif source_freq == include_freq:
loops_f.write(f" {include} == {source}\n\n")
else:
loops_f.write(f" {include} ~= {source}\n\n")
else:
ordering_f.write(f"{source} > {include}\n")
# Print results
print("\nOrdering:")
with open(ordering_file, "r") as f:
print(f.read(), end="")
print("\nLoops:")
with open(loops_file, "r") as f:
print(f.read(), end="")
if __name__ == "__main__":
main()
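To make the level mapping concrete, here are a few calls matching the docstring examples above (the paths are the documented examples, not necessarily real files):

```python
# Runnable from the script's directory; importing is safe because main() is
# guarded by __name__ == "__main__".
import generate

print(generate.get_level("src/xrpld/app/main.cpp"))           # xrpld.app
print(generate.get_level("include/xrpl/basics/base_uint.h"))  # xrpl.basics
print(generate.extract_include_level('#include <xrpl/basics/base_uint.h>'))       # xrpl.basics
print(generate.extract_include_level('#include "xrpld/app/main/Application.h"'))  # xrpld.app
```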

.github/scripts/levelization/generate.sh vendored Executable file

@@ -0,0 +1,130 @@
#!/bin/bash
# Usage: generate.sh
# This script takes no parameters, reads no environment variables,
# and can be run from any directory, as long as it is in the expected
# location in the repo.

pushd $( dirname $0 )

if [ -v PS1 ]
then
    # if the shell is interactive, clean up any flotsam before analyzing
    git clean -ix
fi

# Ensure all sorting is ASCII-order consistently across platforms.
export LANG=C

rm -rfv results
mkdir results
includes="$( pwd )/results/rawincludes.txt"
pushd ../../..
echo Raw includes:
grep -r '^[ ]*#include.*/.*\.h' include src | \
    grep -v boost | tee ${includes}
popd
pushd results

oldifs=${IFS}
IFS=:
mkdir includes
mkdir included_by
echo Build levelization paths
exec 3< ${includes} # open rawincludes.txt for input
while read -r -u 3 file include
do
    level=$( echo ${file} | cut -d/ -f 2,3 )
    # If the "level" indicates a file, cut off the filename
    if [[ "${level##*.}" != "${level}" ]]
    then
        # Use the "toplevel" label as a workaround for `sort`
        # inconsistencies between different utility versions
        level="$( dirname ${level} )/toplevel"
    fi
    level=$( echo ${level} | tr '/' '.' )
    includelevel=$( echo ${include} | sed 's/.*["<]//; s/[">].*//' | \
        cut -d/ -f 1,2 )
    if [[ "${includelevel##*.}" != "${includelevel}" ]]
    then
        # Use the "toplevel" label as a workaround for `sort`
        # inconsistencies between different utility versions
        includelevel="$( dirname ${includelevel} )/toplevel"
    fi
    includelevel=$( echo ${includelevel} | tr '/' '.' )
    if [[ "$level" != "$includelevel" ]]
    then
        echo $level $includelevel | tee -a paths.txt
    fi
done
echo Sort and deduplicate paths
sort -ds paths.txt | uniq -c | tee sortedpaths.txt
mv sortedpaths.txt paths.txt
exec 3>&- #close fd 3
IFS=${oldifs}
unset oldifs

echo Split into flat-file database
exec 4<paths.txt # open paths.txt for input
while read -r -u 4 count level include
do
    echo ${include} ${count} | tee -a includes/${level}
    echo ${level} ${count} | tee -a included_by/${include}
done
exec 4>&- #close fd 4

loops="$( pwd )/loops.txt"
ordering="$( pwd )/ordering.txt"

pushd includes
echo Search for loops
# Redirect stdout to a file
exec 4>&1
exec 1>"${loops}"
for source in *
do
    if [[ -f "$source" ]]
    then
        exec 5<"${source}" # open for input
        while read -r -u 5 include includefreq
        do
            if [[ -f $include ]]
            then
                if grep -q -w $source $include
                then
                    if grep -q -w "Loop: $include $source" "${loops}"
                    then
                        continue
                    fi
                    sourcefreq=$( grep -w $source $include | cut -d\ -f2 )
                    echo "Loop: $source $include"
                    # If the counts are close, indicate that the two modules are
                    # on the same level, though they shouldn't be
                    if [[ $(( $includefreq - $sourcefreq )) -gt 3 ]]
                    then
                        echo -e " $source > $include\n"
                    elif [[ $(( $sourcefreq - $includefreq )) -gt 3 ]]
                    then
                        echo -e " $include > $source\n"
                    elif [[ $sourcefreq -eq $includefreq ]]
                    then
                        echo -e " $include == $source\n"
                    else
                        echo -e " $include ~= $source\n"
                    fi
                else
                    echo "$source > $include" >> "${ordering}"
                fi
            fi
        done
        exec 5>&- #close fd 5
    fi
done
exec 1>&4 #close fd 1
exec 4>&- #close fd 4

cat "${ordering}"
cat "${loops}"
popd
popd
popd
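Both the Python and shell versions classify a detected loop with the same threshold rule. A compact restatement (the function name is mine; the symbols and the ±3 threshold come from the scripts above):

```python
def classify(source: str, include: str, include_freq: int, source_freq: int) -> str:
    """Mirror the loop-reporting rule: a gap of more than 3 includes in either
    direction picks a winner; equal counts are '=='; anything else is '~='."""
    diff = include_freq - source_freq
    if diff > 3:
        return f"{source} > {include}"
    if diff < -3:
        return f"{include} > {source}"
    if diff == 0:
        return f"{include} == {source}"
    return f"{include} ~= {source}"

print(classify("xrpld.app", "xrpld.overlay", 12, 4))  # xrpld.app > xrpld.overlay
```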


@@ -2,19 +2,19 @@ Loop: test.jtx test.toplevel
test.toplevel > test.jtx
Loop: test.jtx test.unit_test
test.unit_test ~= test.jtx
test.unit_test == test.jtx
Loop: xrpld.app xrpld.overlay
xrpld.app > xrpld.overlay
xrpld.overlay ~= xrpld.app
Loop: xrpld.app xrpld.peerfinder
xrpld.peerfinder ~= xrpld.app
xrpld.peerfinder == xrpld.app
Loop: xrpld.app xrpld.rpc
xrpld.rpc > xrpld.app
Loop: xrpld.app xrpld.shamap
xrpld.shamap > xrpld.app
xrpld.shamap ~= xrpld.app
Loop: xrpld.overlay xrpld.rpc
xrpld.rpc ~= xrpld.overlay


@@ -3,17 +3,13 @@ libxrpl.conditions > xrpl.basics
libxrpl.conditions > xrpl.conditions
libxrpl.core > xrpl.basics
libxrpl.core > xrpl.core
libxrpl.core > xrpl.json
libxrpl.crypto > xrpl.basics
libxrpl.json > xrpl.basics
libxrpl.json > xrpl.json
libxrpl.ledger > xrpl.basics
libxrpl.ledger > xrpl.json
libxrpl.ledger > xrpl.ledger
libxrpl.ledger > xrpl.nodestore
libxrpl.ledger > xrpl.protocol
libxrpl.ledger > xrpl.server
libxrpl.ledger > xrpl.shamap
libxrpl.net > xrpl.basics
libxrpl.net > xrpl.net
libxrpl.nodestore > xrpl.basics
@@ -24,21 +20,16 @@ libxrpl.protocol > xrpl.basics
libxrpl.protocol > xrpl.json
libxrpl.protocol > xrpl.protocol
libxrpl.rdb > xrpl.basics
libxrpl.rdb > xrpl.core
libxrpl.rdb > xrpl.rdb
libxrpl.resource > xrpl.basics
libxrpl.resource > xrpl.json
libxrpl.resource > xrpl.protocol
libxrpl.resource > xrpl.resource
libxrpl.server > xrpl.basics
libxrpl.server > xrpl.core
libxrpl.server > xrpl.json
libxrpl.server > xrpl.protocol
libxrpl.server > xrpl.rdb
libxrpl.server > xrpl.resource
libxrpl.server > xrpl.server
libxrpl.shamap > xrpl.basics
libxrpl.shamap > xrpl.nodestore
libxrpl.shamap > xrpl.protocol
libxrpl.shamap > xrpl.shamap
libxrpl.tx > xrpl.basics
@@ -50,11 +41,12 @@ libxrpl.tx > xrpl.protocol
libxrpl.tx > xrpl.server
libxrpl.tx > xrpl.tx
test.app > test.jtx
test.app > test.rpc
test.app > test.toplevel
test.app > test.unit_test
test.app > xrpl.basics
test.app > xrpl.core
test.app > xrpld.app
test.app > xrpld.consensus
test.app > xrpld.core
test.app > xrpld.overlay
test.app > xrpld.rpc
@@ -62,9 +54,9 @@ test.app > xrpl.json
test.app > xrpl.ledger
test.app > xrpl.nodestore
test.app > xrpl.protocol
test.app > xrpl.rdb
test.app > xrpl.resource
test.app > xrpl.server
test.app > xrpl.shamap
test.app > xrpl.tx
test.basics > test.jtx
test.basics > test.unit_test
@@ -77,29 +69,26 @@ test.beast > xrpl.basics
test.conditions > xrpl.basics
test.conditions > xrpl.conditions
test.consensus > test.csf
test.consensus > test.jtx
test.consensus > test.toplevel
test.consensus > test.unit_test
test.consensus > xrpl.basics
test.consensus > xrpld.app
test.consensus > xrpld.consensus
test.consensus > xrpl.json
test.consensus > xrpl.ledger
test.consensus > xrpl.protocol
test.consensus > xrpl.shamap
test.consensus > xrpl.tx
test.core > test.jtx
test.core > test.toplevel
test.core > test.unit_test
test.core > xrpl.basics
test.core > xrpl.core
test.core > xrpld.core
test.core > xrpl.json
test.core > xrpl.protocol
test.core > xrpl.rdb
test.core > xrpl.server
test.csf > xrpl.basics
test.csf > xrpld.consensus
test.csf > xrpl.json
test.csf > xrpl.ledger
test.csf > xrpl.protocol
test.json > test.jtx
test.json > xrpl.json
@@ -116,32 +105,27 @@ test.jtx > xrpl.resource
test.jtx > xrpl.server
test.jtx > xrpl.tx
test.ledger > test.jtx
test.ledger > test.toplevel
test.ledger > xrpl.basics
test.ledger > xrpl.core
test.ledger > xrpld.app
test.ledger > xrpld.core
test.ledger > xrpl.json
test.ledger > xrpl.ledger
test.ledger > xrpl.protocol
test.nodestore > test.jtx
test.nodestore > test.toplevel
test.nodestore > test.unit_test
test.nodestore > xrpl.basics
test.nodestore > xrpld.core
test.nodestore > xrpl.nodestore
test.nodestore > xrpl.protocol
test.nodestore > xrpl.rdb
test.overlay > test.jtx
test.overlay > test.toplevel
test.overlay > test.unit_test
test.overlay > xrpl.basics
test.overlay > xrpld.app
test.overlay > xrpld.core
test.overlay > xrpld.overlay
test.overlay > xrpld.peerfinder
test.overlay > xrpl.json
test.overlay > xrpl.nodestore
test.overlay > xrpl.protocol
test.overlay > xrpl.resource
test.overlay > xrpl.server
test.overlay > xrpl.shamap
test.peerfinder > test.beast
test.peerfinder > test.unit_test
@@ -149,8 +133,7 @@ test.peerfinder > xrpl.basics
test.peerfinder > xrpld.core
test.peerfinder > xrpld.peerfinder
test.peerfinder > xrpl.protocol
test.protocol > test.jtx
test.protocol > test.unit_test
test.protocol > test.toplevel
test.protocol > xrpl.basics
test.protocol > xrpl.json
test.protocol > xrpl.protocol
@@ -158,6 +141,7 @@ test.resource > test.unit_test
test.resource > xrpl.basics
test.resource > xrpl.resource
test.rpc > test.jtx
test.rpc > test.toplevel
test.rpc > xrpl.basics
test.rpc > xrpl.core
test.rpc > xrpld.app
@@ -171,12 +155,13 @@ test.rpc > xrpl.resource
test.rpc > xrpl.server
test.rpc > xrpl.tx
test.server > test.jtx
test.server > test.toplevel
test.server > test.unit_test
test.server > xrpl.basics
test.server > xrpld.app
test.server > xrpld.core
test.server > xrpld.rpc
test.server > xrpl.json
test.server > xrpl.protocol
test.server > xrpl.server
test.shamap > test.unit_test
test.shamap > xrpl.basics
@@ -186,16 +171,14 @@ test.shamap > xrpl.shamap
test.toplevel > test.csf
test.toplevel > xrpl.json
test.unit_test > xrpl.basics
test.unit_test > xrpl.protocol
tests.libxrpl > xrpl.basics
tests.libxrpl > xrpl.json
tests.libxrpl > xrpl.net
tests.libxrpl > xrpl.protocol
tests.libxrpl > xrpl.protocol_autogen
xrpl.conditions > xrpl.basics
xrpl.conditions > xrpl.protocol
xrpl.core > xrpl.basics
xrpl.core > xrpl.json
xrpl.core > xrpl.ledger
xrpl.core > xrpl.protocol
xrpl.json > xrpl.basics
xrpl.ledger > xrpl.basics
@@ -207,8 +190,6 @@ xrpl.nodestore > xrpl.basics
xrpl.nodestore > xrpl.protocol
xrpl.protocol > xrpl.basics
xrpl.protocol > xrpl.json
xrpl.protocol_autogen > xrpl.json
xrpl.protocol_autogen > xrpl.protocol
xrpl.rdb > xrpl.basics
xrpl.rdb > xrpl.core
xrpl.rdb > xrpl.protocol
@@ -246,24 +227,22 @@ xrpld.app > xrpl.shamap
xrpld.app > xrpl.tx
xrpld.consensus > xrpl.basics
xrpld.consensus > xrpl.json
xrpld.consensus > xrpl.ledger
xrpld.consensus > xrpl.protocol
xrpld.core > xrpl.basics
xrpld.core > xrpl.core
xrpld.core > xrpl.json
xrpld.core > xrpl.net
xrpld.core > xrpl.protocol
xrpld.core > xrpl.rdb
xrpld.overlay > xrpl.basics
xrpld.overlay > xrpl.core
xrpld.overlay > xrpld.consensus
xrpld.overlay > xrpld.core
xrpld.overlay > xrpld.peerfinder
xrpld.overlay > xrpl.json
xrpld.overlay > xrpl.ledger
xrpld.overlay > xrpl.protocol
xrpld.overlay > xrpl.rdb
xrpld.overlay > xrpl.resource
xrpld.overlay > xrpl.server
xrpld.overlay > xrpl.shamap
xrpld.overlay > xrpl.tx
xrpld.peerfinder > xrpl.basics
xrpld.peerfinder > xrpld.core
@@ -273,7 +252,6 @@ xrpld.perflog > xrpl.basics
xrpld.perflog > xrpl.core
xrpld.perflog > xrpld.rpc
xrpld.perflog > xrpl.json
xrpld.perflog > xrpl.protocol
xrpld.rpc > xrpl.basics
xrpld.rpc > xrpl.core
xrpld.rpc > xrpld.core
@@ -285,9 +263,5 @@ xrpld.rpc > xrpl.protocol
xrpld.rpc > xrpl.rdb
xrpld.rpc > xrpl.resource
xrpld.rpc > xrpl.server
xrpld.rpc > xrpl.shamap
xrpld.rpc > xrpl.tx
xrpld.shamap > xrpl.basics
xrpld.shamap > xrpld.core
xrpld.shamap > xrpl.protocol
xrpld.shamap > xrpl.shamap
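The `A > B` lines in ordering.txt are dependency assertions between modules. A small sketch (names mine) that parses such lines into a graph and flags any cycle that should instead have appeared in loops.txt:

```python
from collections import defaultdict

def find_cycle(ordering_lines):
    """Parse 'A > B' lines into a graph and DFS for a cycle; returns one
    module on a cycle, or None if the ordering is acyclic."""
    graph = defaultdict(set)
    for line in ordering_lines:
        left, _, right = line.partition(" > ")
        graph[left.strip()].add(right.strip())

    WHITE, GRAY, BLACK = 0, 1, 2
    color = defaultdict(int)  # defaults to WHITE

    def dfs(node):
        color[node] = GRAY
        for nxt in graph[node]:
            if color[nxt] == GRAY:
                return nxt  # back edge: cycle found
            if color[nxt] == WHITE and (hit := dfs(nxt)):
                return hit
        color[node] = BLACK
        return None

    for node in list(graph):
        if color[node] == WHITE and (hit := dfs(node)):
            return hit
    return None

print(find_cycle(["xrpl.json > xrpl.basics", "xrpl.protocol > xrpl.json"]))  # None
```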


@@ -34,8 +34,6 @@ run from the repository root.
6. `.github/scripts/rename/config.sh`: This script will rename the config from
`rippled.cfg` to `xrpld.cfg`, and updating the code accordingly. The old
filename will still be accepted.
7. `.github/scripts/rename/docs.sh`: This script will rename any lingering
references of `ripple(d)` to `xrpl(d)` in code, comments, and documentation.
You can run all these scripts from the repository root as follows:
@@ -46,5 +44,4 @@ You can run all these scripts from the repository root as follows:
./.github/scripts/rename/binary.sh .
./.github/scripts/rename/namespace.sh .
./.github/scripts/rename/config.sh .
./.github/scripts/rename/docs.sh .
```


@@ -6,11 +6,11 @@ set -e
# On MacOS, ensure that GNU sed is installed and available as `gsed`.
SED_COMMAND=sed
if [[ "${OSTYPE}" == 'darwin'* ]]; then
if ! command -v gsed &> /dev/null; then
echo "Error: gsed is not installed. Please install it using 'brew install gnu-sed'."
exit 1
fi
SED_COMMAND=gsed
if ! command -v gsed &> /dev/null; then
echo "Error: gsed is not installed. Please install it using 'brew install gnu-sed'."
exit 1
fi
SED_COMMAND=gsed
fi
# This script changes the binary name from `rippled` to `xrpld`, and reverses
@@ -29,7 +29,7 @@ if [ ! -d "${DIRECTORY}" ]; then
echo "Error: Directory '${DIRECTORY}' does not exist."
exit 1
fi
pushd "${DIRECTORY}"
pushd ${DIRECTORY}
# Remove the binary name override added by the cmake.sh script.
${SED_COMMAND} -z -i -E 's@\s+# For the time being.+"rippled"\)@@' cmake/XrplCore.cmake
@@ -49,7 +49,6 @@ ${SED_COMMAND} -i -E 's@ripple/xrpld@XRPLF/rippled@g' BUILD.md
${SED_COMMAND} -i -E 's@XRPLF/xrpld@XRPLF/rippled@g' BUILD.md
${SED_COMMAND} -i -E 's@xrpld \(`xrpld`\)@xrpld@g' BUILD.md
${SED_COMMAND} -i -E 's@XRPLF/xrpld@XRPLF/rippled@g' CONTRIBUTING.md
${SED_COMMAND} -i -E 's@XRPLF/xrpld@XRPLF/rippled@g' docs/build/install.md
popd
echo "Processing complete."


@@ -8,16 +8,16 @@ set -e
SED_COMMAND=sed
HEAD_COMMAND=head
if [[ "${OSTYPE}" == 'darwin'* ]]; then
if ! command -v gsed &> /dev/null; then
echo "Error: gsed is not installed. Please install it using 'brew install gnu-sed'."
exit 1
fi
SED_COMMAND=gsed
if ! command -v ghead &> /dev/null; then
echo "Error: ghead is not installed. Please install it using 'brew install coreutils'."
exit 1
fi
HEAD_COMMAND=ghead
if ! command -v gsed &> /dev/null; then
echo "Error: gsed is not installed. Please install it using 'brew install gnu-sed'."
exit 1
fi
SED_COMMAND=gsed
if ! command -v ghead &> /dev/null; then
echo "Error: ghead is not installed. Please install it using 'brew install coreutils'."
exit 1
fi
HEAD_COMMAND=ghead
fi
# This script renames CMake files from `RippleXXX.cmake` or `RippledXXX.cmake`
@@ -38,16 +38,16 @@ if [ ! -d "${DIRECTORY}" ]; then
echo "Error: Directory '${DIRECTORY}' does not exist."
exit 1
fi
pushd "${DIRECTORY}"
pushd ${DIRECTORY}
# Rename the files.
find cmake -type f -name 'Rippled*.cmake' -exec bash -c 'mv "${1}" "${1/Rippled/Xrpl}"' - {} \;
find cmake -type f -name 'Ripple*.cmake' -exec bash -c 'mv "${1}" "${1/Ripple/Xrpl}"' - {} \;
if [ -e cmake/xrpl_add_test.cmake ]; then
mv cmake/xrpl_add_test.cmake cmake/XrplAddTest.cmake
mv cmake/xrpl_add_test.cmake cmake/XrplAddTest.cmake
fi
if [ -e include/xrpl/proto/ripple.proto ]; then
mv include/xrpl/proto/ripple.proto include/xrpl/proto/xrpl.proto
mv include/xrpl/proto/ripple.proto include/xrpl/proto/xrpl.proto
fi
# Rename inside the files.
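The rename one-liners above combine `find -exec` with bash's `${var/pattern/replacement}` substitution; the same idiom in isolation, on a throwaway file:

```bash
touch RippledDemo.cmake
# bash -c receives the path as $1 ("-" fills $0); the expansion
# ${1/Rippled/Xrpl} replaces the first occurrence of "Rippled".
find . -maxdepth 1 -name 'Rippled*.cmake' \
    -exec bash -c 'mv "${1}" "${1/Rippled/Xrpl}"' - {} \;
ls XrplDemo.cmake
```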
@@ -71,14 +71,14 @@ ${SED_COMMAND} -i 's@xrpl/validator-keys-tool@ripple/validator-keys-tool@' cmake
# Ensure the name of the binary and config remain 'rippled' for now.
${SED_COMMAND} -i -E 's/xrpld(-example)?\.cfg/rippled\1.cfg/g' cmake/XrplInstall.cmake
if grep -q '"xrpld"' cmake/XrplCore.cmake; then
# The script has been rerun, so just restore the name of the binary.
${SED_COMMAND} -i 's/"xrpld"/"rippled"/' cmake/XrplCore.cmake
# The script has been rerun, so just restore the name of the binary.
${SED_COMMAND} -i 's/"xrpld"/"rippled"/' cmake/XrplCore.cmake
elif ! grep -q '"rippled"' cmake/XrplCore.cmake; then
${HEAD_COMMAND} -n -1 cmake/XrplCore.cmake > cmake.tmp
echo ' # For the time being, we will keep the name of the binary as it was.' >> cmake.tmp
echo ' set_target_properties(xrpld PROPERTIES OUTPUT_NAME "rippled")' >> cmake.tmp
tail -1 cmake/XrplCore.cmake >> cmake.tmp
mv cmake.tmp cmake/XrplCore.cmake
${HEAD_COMMAND} -n -1 cmake/XrplCore.cmake > cmake.tmp
echo ' # For the time being, we will keep the name of the binary as it was.' >> cmake.tmp
echo ' set_target_properties(xrpld PROPERTIES OUTPUT_NAME "rippled")' >> cmake.tmp
tail -1 cmake/XrplCore.cmake >> cmake.tmp
mv cmake.tmp cmake/XrplCore.cmake
fi
# Restore the symlink from 'xrpld' to 'rippled'.
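The `head -n -1` trick above (hence the `ghead` requirement on macOS, since BSD head rejects negative counts) splices new lines in just before a file's final line; a minimal sketch:

```bash
printf 'first\nlast\n' > demo.txt
head -n -1 demo.txt > demo.tmp        # everything but the last line (ghead on macOS)
echo 'inserted before the end' >> demo.tmp
tail -1 demo.txt >> demo.tmp          # re-append the original last line
mv demo.tmp demo.txt
```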

View File

@@ -6,11 +6,11 @@ set -e
# On MacOS, ensure that GNU sed is installed and available as `gsed`.
SED_COMMAND=sed
if [[ "${OSTYPE}" == 'darwin'* ]]; then
if ! command -v gsed &> /dev/null; then
echo "Error: gsed is not installed. Please install it using 'brew install gnu-sed'."
exit 1
fi
SED_COMMAND=gsed
if ! command -v gsed &> /dev/null; then
echo "Error: gsed is not installed. Please install it using 'brew install gnu-sed'."
exit 1
fi
SED_COMMAND=gsed
fi
# This script renames the config from `rippled.cfg` to `xrpld.cfg`, and updates
@@ -28,39 +28,40 @@ if [ ! -d "${DIRECTORY}" ]; then
echo "Error: Directory '${DIRECTORY}' does not exist."
exit 1
fi
pushd "${DIRECTORY}"
pushd ${DIRECTORY}
# Add the xrpld.cfg to the .gitignore.
if ! grep -q 'xrpld.cfg' .gitignore; then
${SED_COMMAND} -i '/rippled.cfg/a\
${SED_COMMAND} -i '/rippled.cfg/a\
/xrpld.cfg' .gitignore
fi
# Rename the files.
if [ -e rippled.cfg ]; then
mv rippled.cfg xrpld.cfg
mv rippled.cfg xrpld.cfg
fi
if [ -e cfg/rippled-example.cfg ]; then
mv cfg/rippled-example.cfg cfg/xrpld-example.cfg
mv cfg/rippled-example.cfg cfg/xrpld-example.cfg
fi
# Rename inside the files.
DIRECTORIES=("cfg" "cmake" "include" "src")
for DIRECTORY in "${DIRECTORIES[@]}"; do
echo "Processing directory: ${DIRECTORY}"
echo "Processing directory: ${DIRECTORY}"
find "${DIRECTORY}" -type f \( -name "*.h" -o -name "*.hpp" -o -name "*.ipp" -o -name "*.cpp" -o -name "*.cmake" -o -name "*.txt" -o -name "*.cfg" -o -name "*.md" \) | while read -r FILE; do
echo "Processing file: ${FILE}"
${SED_COMMAND} -i -E 's/rippled(-example)?[ .]cfg/xrpld\1.cfg/g' "${FILE}"
${SED_COMMAND} -i 's/rippleConfig/xrpldConfig/g' "${FILE}"
done
find "${DIRECTORY}" -type f \( -name "*.h" -o -name "*.hpp" -o -name "*.ipp" -o -name "*.cpp" -o -name "*.cmake" -o -name "*.txt" -o -name "*.cfg" -o -name "*.md" \) | while read -r FILE; do
echo "Processing file: ${FILE}"
${SED_COMMAND} -i -E 's/rippled(-example)?[ .]cfg/xrpld\1.cfg/g' "${FILE}"
done
done
${SED_COMMAND} -i 's/rippled/xrpld/g' cfg/xrpld-example.cfg
${SED_COMMAND} -i 's/rippled/xrpld/g' src/test/core/Config_test.cpp
${SED_COMMAND} -i 's/ripplevalidators/xrplvalidators/g' src/test/core/Config_test.cpp # cspell: disable-line
${SED_COMMAND} -i 's/rippleConfig/xrpldConfig/g' src/test/core/Config_test.cpp
${SED_COMMAND} -i 's@ripple/@xrpld/@g' src/test/core/Config_test.cpp
${SED_COMMAND} -i 's/Rippled/File/g' src/test/core/Config_test.cpp
# Restore the old config file name in the code that maintains support for now.
${SED_COMMAND} -i 's/configLegacyName = "xrpld.cfg"/configLegacyName = "rippled.cfg"/g' src/xrpld/core/detail/Config.cpp
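The `.gitignore` edit near the top of this script pairs a `grep -q` guard with GNU sed's `a\` (append-after-match) command, which keeps the edit idempotent across reruns; the same pattern in isolation:

```bash
printf '/rippled.cfg\n' > demo.gitignore
if ! grep -q 'xrpld.cfg' demo.gitignore; then
    # a\ appends the following line after every line matching the address.
    gsed -i '/rippled.cfg/a\
/xrpld.cfg' demo.gitignore
fi
cat demo.gitignore   # /rippled.cfg followed by /xrpld.cfg
```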

View File

@@ -6,11 +6,11 @@ set -e
# On MacOS, ensure that GNU sed is installed and available as `gsed`.
SED_COMMAND=sed
if [[ "${OSTYPE}" == 'darwin'* ]]; then
if ! command -v gsed &> /dev/null; then
echo "Error: gsed is not installed. Please install it using 'brew install gnu-sed'."
exit 1
fi
SED_COMMAND=gsed
if ! command -v gsed &> /dev/null; then
echo "Error: gsed is not installed. Please install it using 'brew install gnu-sed'."
exit 1
fi
SED_COMMAND=gsed
fi
# This script removes superfluous copyright notices in source and header files
@@ -31,7 +31,7 @@ if [ ! -d "${DIRECTORY}" ]; then
echo "Error: Directory '${DIRECTORY}' does not exist."
exit 1
fi
pushd "${DIRECTORY}"
pushd ${DIRECTORY}
# Prevent sed and echo from removing newlines and tabs in string literals by
# temporarily replacing them with placeholders. This only affects one file.
@@ -43,56 +43,56 @@ ${SED_COMMAND} -i -E "s@\\\t@${PLACEHOLDER_TAB}@g" src/test/rpc/ValidatorInfo_te
# Process the include/ and src/ directories.
DIRECTORIES=("include" "src")
for DIRECTORY in "${DIRECTORIES[@]}"; do
echo "Processing directory: ${DIRECTORY}"
echo "Processing directory: ${DIRECTORY}"
find "${DIRECTORY}" -type f \( -name "*.h" -o -name "*.hpp" -o -name "*.ipp" -o -name "*.cpp" -o -name "*.macro" \) | while read -r FILE; do
echo "Processing file: ${FILE}"
# Handle the cases where the copyright notice is enclosed in /* ... */
# and usually surrounded by //---- and //======.
${SED_COMMAND} -z -i -E 's@^//-------+\n+@@' "${FILE}"
${SED_COMMAND} -z -i -E 's@^.*Copyright.+(Ripple|Bougalis|Falco|Hinnant|Null|Ritchford|XRPLF).+PERFORMANCE OF THIS SOFTWARE\.\n\*/\n+@@' "${FILE}" # cspell: ignore Bougalis Falco Hinnant Ritchford
${SED_COMMAND} -z -i -E 's@^//=======+\n+@@' "${FILE}"
find "${DIRECTORY}" -type f \( -name "*.h" -o -name "*.hpp" -o -name "*.ipp" -o -name "*.cpp" -o -name "*.macro" \) | while read -r FILE; do
echo "Processing file: ${FILE}"
# Handle the cases where the copyright notice is enclosed in /* ... */
# and usually surrounded by //---- and //======.
${SED_COMMAND} -z -i -E 's@^//-------+\n+@@' "${FILE}"
${SED_COMMAND} -z -i -E 's@^.*Copyright.+(Ripple|Bougalis|Falco|Hinnant|Null|Ritchford|XRPLF).+PERFORMANCE OF THIS SOFTWARE\.\n\*/\n+@@' "${FILE}" # cspell: ignore Bougalis Falco Hinnant Ritchford
${SED_COMMAND} -z -i -E 's@^//=======+\n+@@' "${FILE}"
# Handle the cases where the copyright notice is commented out with //.
${SED_COMMAND} -z -i -E 's@^//\n// Copyright.+Falco \(vinnie dot falco at gmail dot com\)\n//\n+@@' "${FILE}" # cspell: ignore Vinnie Falco
done
# Handle the cases where the copyright notice is commented out with //.
${SED_COMMAND} -z -i -E 's@^//\n// Copyright.+Falco \(vinnie dot falco at gmail dot com\)\n//\n+@@' "${FILE}" # cspell: ignore Vinnie Falco
done
done
# Restore copyright notices that were removed from specific files, without
# restoring the verbiage that is already present in LICENSE.md. Ensure that if
# the script is run multiple times, duplicate notices are not added.
if ! grep -q 'Raw Material Software' include/xrpl/beast/core/CurrentThreadName.h; then
echo -e "// Portions of this file are from JUCE (http://www.juce.com).\n// Copyright (c) 2013 - Raw Material Software Ltd.\n// Please visit http://www.juce.com\n\n$(cat include/xrpl/beast/core/CurrentThreadName.h)" > include/xrpl/beast/core/CurrentThreadName.h
echo -e "// Portions of this file are from JUCE (http://www.juce.com).\n// Copyright (c) 2013 - Raw Material Software Ltd.\n// Please visit http://www.juce.com\n\n$(cat include/xrpl/beast/core/CurrentThreadName.h)" > include/xrpl/beast/core/CurrentThreadName.h
fi
if ! grep -q 'Dev Null' src/test/app/NetworkID_test.cpp; then
echo -e "// Copyright (c) 2020 Dev Null Productions\n\n$(cat src/test/app/NetworkID_test.cpp)" > src/test/app/NetworkID_test.cpp
echo -e "// Copyright (c) 2020 Dev Null Productions\n\n$(cat src/test/app/NetworkID_test.cpp)" > src/test/app/NetworkID_test.cpp
fi
if ! grep -q 'Dev Null' src/test/app/tx/apply_test.cpp; then
echo -e "// Copyright (c) 2020 Dev Null Productions\n\n$(cat src/test/app/tx/apply_test.cpp)" > src/test/app/tx/apply_test.cpp
echo -e "// Copyright (c) 2020 Dev Null Productions\n\n$(cat src/test/app/tx/apply_test.cpp)" > src/test/app/tx/apply_test.cpp
fi
if ! grep -q 'Dev Null' src/test/rpc/ManifestRPC_test.cpp; then
echo -e "// Copyright (c) 2020 Dev Null Productions\n\n$(cat src/test/rpc/ManifestRPC_test.cpp)" > src/test/rpc/ManifestRPC_test.cpp
echo -e "// Copyright (c) 2020 Dev Null Productions\n\n$(cat src/test/rpc/ManifestRPC_test.cpp)" > src/test/rpc/ManifestRPC_test.cpp
fi
if ! grep -q 'Dev Null' src/test/rpc/ValidatorInfo_test.cpp; then
echo -e "// Copyright (c) 2020 Dev Null Productions\n\n$(cat src/test/rpc/ValidatorInfo_test.cpp)" > src/test/rpc/ValidatorInfo_test.cpp
echo -e "// Copyright (c) 2020 Dev Null Productions\n\n$(cat src/test/rpc/ValidatorInfo_test.cpp)" > src/test/rpc/ValidatorInfo_test.cpp
fi
if ! grep -q 'Dev Null' src/xrpld/rpc/handlers/server_info/Manifest.cpp; then
echo -e "// Copyright (c) 2019 Dev Null Productions\n\n$(cat src/xrpld/rpc/handlers/server_info/Manifest.cpp)" > src/xrpld/rpc/handlers/server_info/Manifest.cpp
if ! grep -q 'Dev Null' src/xrpld/rpc/handlers/DoManifest.cpp; then
echo -e "// Copyright (c) 2019 Dev Null Productions\n\n$(cat src/xrpld/rpc/handlers/DoManifest.cpp)" > src/xrpld/rpc/handlers/DoManifest.cpp
fi
if ! grep -q 'Dev Null' src/xrpld/rpc/handlers/admin/status/ValidatorInfo.cpp; then
echo -e "// Copyright (c) 2019 Dev Null Productions\n\n$(cat src/xrpld/rpc/handlers/admin/status/ValidatorInfo.cpp)" > src/xrpld/rpc/handlers/admin/status/ValidatorInfo.cpp
if ! grep -q 'Dev Null' src/xrpld/rpc/handlers/ValidatorInfo.cpp; then
echo -e "// Copyright (c) 2019 Dev Null Productions\n\n$(cat src/xrpld/rpc/handlers/ValidatorInfo.cpp)" > src/xrpld/rpc/handlers/ValidatorInfo.cpp
fi
if ! grep -q 'Bougalis' include/xrpl/basics/SlabAllocator.h; then
echo -e "// Copyright (c) 2022, Nikolaos D. Bougalis <nikb@bougalis.net>\n\n$(cat include/xrpl/basics/SlabAllocator.h)" > include/xrpl/basics/SlabAllocator.h # cspell: ignore Nikolaos Bougalis nikb
echo -e "// Copyright (c) 2022, Nikolaos D. Bougalis <nikb@bougalis.net>\n\n$(cat include/xrpl/basics/SlabAllocator.h)" > include/xrpl/basics/SlabAllocator.h # cspell: ignore Nikolaos Bougalis nikb
fi
if ! grep -q 'Bougalis' include/xrpl/basics/spinlock.h; then
echo -e "// Copyright (c) 2022, Nikolaos D. Bougalis <nikb@bougalis.net>\n\n$(cat include/xrpl/basics/spinlock.h)" > include/xrpl/basics/spinlock.h # cspell: ignore Nikolaos Bougalis nikb
echo -e "// Copyright (c) 2022, Nikolaos D. Bougalis <nikb@bougalis.net>\n\n$(cat include/xrpl/basics/spinlock.h)" > include/xrpl/basics/spinlock.h # cspell: ignore Nikolaos Bougalis nikb
fi
if ! grep -q 'Bougalis' include/xrpl/basics/tagged_integer.h; then
echo -e "// Copyright (c) 2014, Nikolaos D. Bougalis <nikb@bougalis.net>\n\n$(cat include/xrpl/basics/tagged_integer.h)" > include/xrpl/basics/tagged_integer.h # cspell: ignore Nikolaos Bougalis nikb
echo -e "// Copyright (c) 2014, Nikolaos D. Bougalis <nikb@bougalis.net>\n\n$(cat include/xrpl/basics/tagged_integer.h)" > include/xrpl/basics/tagged_integer.h # cspell: ignore Nikolaos Bougalis nikb
fi
if ! grep -q 'Ritchford' include/xrpl/beast/utility/Zero.h; then
echo -e "// Copyright (c) 2014, Tom Ritchford <tom@swirly.com>\n\n$(cat include/xrpl/beast/utility/Zero.h)" > include/xrpl/beast/utility/Zero.h # cspell: ignore Ritchford
echo -e "// Copyright (c) 2014, Tom Ritchford <tom@swirly.com>\n\n$(cat include/xrpl/beast/utility/Zero.h)" > include/xrpl/beast/utility/Zero.h # cspell: ignore Ritchford
fi
# Restore newlines and tabs in string literals in the affected file.
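The restore steps above all use the same idempotent-prepend shape: a `grep -q` guard, then `$(cat FILE)` captured before the redirection truncates the file. Reduced to its essentials (file name and notice are placeholders):

```bash
FILE=demo.h
printf 'int x;\n' > "${FILE}"
if ! grep -q 'Copyright' "${FILE}"; then
    # $(cat ...) is expanded before > truncates, so the old body survives.
    echo -e "// Copyright (c) 2020 Example\n\n$(cat "${FILE}")" > "${FILE}"
fi
```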

View File

@@ -6,11 +6,11 @@ set -e
# On MacOS, ensure that GNU sed is installed and available as `gsed`.
SED_COMMAND=sed
if [[ "${OSTYPE}" == 'darwin'* ]]; then
if ! command -v gsed &> /dev/null; then
echo "Error: gsed is not installed. Please install it using 'brew install gnu-sed'."
exit 1
fi
SED_COMMAND=gsed
if ! command -v gsed &> /dev/null; then
echo "Error: gsed is not installed. Please install it using 'brew install gnu-sed'."
exit 1
fi
SED_COMMAND=gsed
fi
# This script renames definitions, such as include guards, in this project.

View File

@@ -1,96 +0,0 @@
#!/bin/bash
# Exit the script as soon as an error occurs.
set -e
# On MacOS, ensure that GNU sed is installed and available as `gsed`.
SED_COMMAND=sed
if [[ "${OSTYPE}" == 'darwin'* ]]; then
if ! command -v gsed &> /dev/null; then
echo "Error: gsed is not installed. Please install it using 'brew install gnu-sed'."
exit 1
fi
SED_COMMAND=gsed
fi
# This script renames all remaining references to `ripple` and `rippled` to
# `xrpl` and `xrpld`, respectively, in code, comments, and documentation.
# Usage: .github/scripts/rename/docs.sh <repository directory>
if [ "$#" -ne 1 ]; then
echo "Usage: $0 <repository directory>"
exit 1
fi
DIRECTORY=$1
echo "Processing directory: ${DIRECTORY}"
if [ ! -d "${DIRECTORY}" ]; then
echo "Error: Directory '${DIRECTORY}' does not exist."
exit 1
fi
pushd "${DIRECTORY}"
find . -type f \( -name "*.h" -o -name "*.hpp" -o -name "*.ipp" -o -name "*.cpp" -o -name "*.txt" -o -name "*.cfg" -o -name "*.md" -o -name "*.proto" \) -not -path "./.github/scripts/*" | while read -r FILE; do
echo "Processing file: ${FILE}"
${SED_COMMAND} -i 's/rippleLockEscrowMPT/lockEscrowMPT/g' "${FILE}"
${SED_COMMAND} -i 's/rippleUnlockEscrowMPT/unlockEscrowMPT/g' "${FILE}"
${SED_COMMAND} -i 's/rippleCredit/directSendNoFee/g' "${FILE}"
${SED_COMMAND} -i 's/rippleSend/directSendNoLimit/g' "${FILE}"
${SED_COMMAND} -i -E 's@([^/+-])rippled@\1xrpld@g' "${FILE}"
${SED_COMMAND} -i -E 's@([^/+-])Rippled@\1Xrpld@g' "${FILE}"
${SED_COMMAND} -i -E 's/^rippled/xrpld/g' "${FILE}"
${SED_COMMAND} -i -E 's/^Rippled/Xrpld/g' "${FILE}"
# cspell: disable
${SED_COMMAND} -i -E 's/(r|R)ipple (a|A)ddress/XRPL address/g' "${FILE}"
${SED_COMMAND} -i -E 's/(r|R)ipple (a|A)ccount/XRPL account/g' "${FILE}"
${SED_COMMAND} -i -E 's/(r|R)ipple (a|A)lgorithm/XRPL algorithm/g' "${FILE}"
${SED_COMMAND} -i -E 's/(r|R)ipple (c|C)lient/XRPL client/g' "${FILE}"
${SED_COMMAND} -i -E 's/(r|R)ipple (c|C)luster/XRPL cluster/g' "${FILE}"
${SED_COMMAND} -i -E 's/(r|R)ipple (c|C)onsensus/XRPL consensus/g' "${FILE}"
${SED_COMMAND} -i -E 's/(r|R)ipple (d|D)efault/XRPL default/g' "${FILE}"
${SED_COMMAND} -i -E 's/(r|R)ipple (e|E)poch/XRPL epoch/g' "${FILE}"
${SED_COMMAND} -i -E 's/(r|R)ipple (f|F)eature/XRPL feature/g' "${FILE}"
${SED_COMMAND} -i -E 's/(r|R)ipple (n|N)etwork/XRPL network/g' "${FILE}"
${SED_COMMAND} -i -E 's/(r|R)ipple (p|P)ayment/XRPL payment/g' "${FILE}"
${SED_COMMAND} -i -E 's/(r|R)ipple (p|P)rotocol/XRPL protocol/g' "${FILE}"
${SED_COMMAND} -i -E 's/(r|R)ipple (r|R)epository/XRPL repository/g' "${FILE}"
${SED_COMMAND} -i -E 's/(r|R)ipple RPC/XRPL RPC/g' "${FILE}"
${SED_COMMAND} -i -E 's/(r|R)ipple (s|S)erialization/XRPL serialization/g' "${FILE}"
${SED_COMMAND} -i -E 's/(r|R)ipple (s|S)erver/XRPL server/g' "${FILE}"
${SED_COMMAND} -i -E 's/(r|R)ipple (s|S)pecific/XRPL specific/g' "${FILE}"
${SED_COMMAND} -i -E 's/(r|R)ipple Source/XRPL Source/g' "${FILE}"
${SED_COMMAND} -i -E 's/(r|R)ipple (t|T)imestamp/XRPL timestamp/g' "${FILE}"
${SED_COMMAND} -i -E 's/(r|R)ipple uses the consensus/XRPL uses the consensus/g' "${FILE}"
${SED_COMMAND} -i -E 's/(r|R)ipple (v|V)alidator/XRPL validator/g' "${FILE}"
# cspell: enable
${SED_COMMAND} -i 's/RippleLib/XrplLib/g' "${FILE}"
${SED_COMMAND} -i 's/ripple-lib/XrplLib/g' "${FILE}"
${SED_COMMAND} -i 's@opt/ripple/@opt/xrpld/@g' "${FILE}"
${SED_COMMAND} -i 's@src/ripple/@src/xrpld/@g' "${FILE}"
${SED_COMMAND} -i 's@ripple/app/@xrpld/app/@g' "${FILE}"
${SED_COMMAND} -i 's@github.com/ripple/rippled@github.com/XRPLF/rippled@g' "${FILE}"
${SED_COMMAND} -i 's/\ba xrpl/an xrpl/g' "${FILE}"
${SED_COMMAND} -i 's/\ba XRPL/an XRPL/g' "${FILE}"
done
${SED_COMMAND} -i 's/ripple_libs/xrpl_libs/' BUILD.md
${SED_COMMAND} -i 's/Ripple integrators/XRPL developers/' README.md
${SED_COMMAND} -i 's/sanitizer-configuration-for-rippled/sanitizer-configuration-for-xrpld/' docs/build/sanitizers.md
${SED_COMMAND} -i 's/rippled/xrpld/g' .github/scripts/levelization/README.md
${SED_COMMAND} -i 's/rippled/xrpld/g' .github/scripts/strategy-matrix/generate.py
${SED_COMMAND} -i 's@/rippled@/xrpld@g' docs/build/install.md
${SED_COMMAND} -i 's@github.com/XRPLF/xrpld@github.com/XRPLF/rippled@g' docs/build/install.md
${SED_COMMAND} -i 's/rippled/xrpld/g' docs/Doxyfile
${SED_COMMAND} -i 's/ripple_basics/basics/' include/xrpl/basics/CountedObject.h
${SED_COMMAND} -i 's/<ripple/<xrpl/' include/xrpl/protocol/AccountID.h
${SED_COMMAND} -i 's/Ripple:/the XRPL:/g' include/xrpl/protocol/SecretKey.h
${SED_COMMAND} -i 's/Ripple:/the XRPL:/g' include/xrpl/protocol/Seed.h
${SED_COMMAND} -i 's/ripple/xrpl/g' src/test/README.md
${SED_COMMAND} -i 's/www.ripple.com/www.xrpl.org/g' src/test/protocol/Seed_test.cpp
# Restore specific changes.
${SED_COMMAND} -i 's@b5efcc/src/xrpld@b5efcc/src/ripple@' include/xrpl/protocol/README.md
${SED_COMMAND} -i 's/dbPrefix_ = "xrpldb"/dbPrefix_ = "rippledb"/' src/xrpld/app/misc/SHAMapStoreImp.h # cspell: disable-line
${SED_COMMAND} -i 's/configLegacyName = "xrpld.cfg"/configLegacyName = "rippled.cfg"/' src/xrpld/core/detail/Config.cpp
popd
echo "Renaming complete."

View File

@@ -23,8 +23,8 @@ fi
find "${DIRECTORY}" -type f \( -name "*.h" -o -name "*.hpp" -o -name "*.ipp" \) | while read -r FILE; do
echo "Processing file: ${FILE}"
if grep -q "#ifndef XRPL_" "${FILE}"; then
echo "Please replace all include guards by #pragma once."
exit 1
echo "Please replace all include guards by #pragma once."
exit 1
fi
done
echo "Checking complete."

View File

@@ -6,11 +6,11 @@ set -e
# On MacOS, ensure that GNU sed is installed and available as `gsed`.
SED_COMMAND=sed
if [[ "${OSTYPE}" == 'darwin'* ]]; then
if ! command -v gsed &> /dev/null; then
echo "Error: gsed is not installed. Please install it using 'brew install gnu-sed'."
exit 1
fi
SED_COMMAND=gsed
if ! command -v gsed &> /dev/null; then
echo "Error: gsed is not installed. Please install it using 'brew install gnu-sed'."
exit 1
fi
SED_COMMAND=gsed
fi
# This script renames the `ripple` namespace to `xrpl` in this project.
@@ -31,19 +31,18 @@ if [ ! -d "${DIRECTORY}" ]; then
echo "Error: Directory '${DIRECTORY}' does not exist."
exit 1
fi
pushd "${DIRECTORY}"
pushd ${DIRECTORY}
DIRECTORIES=("include" "src" "tests")
for DIRECTORY in "${DIRECTORIES[@]}"; do
echo "Processing directory: ${DIRECTORY}"
echo "Processing directory: ${DIRECTORY}"
find "${DIRECTORY}" -type f \( -name "*.h" -o -name "*.hpp" -o -name "*.ipp" -o -name "*.cpp" -o -name "*.macro" \) | while read -r FILE; do
echo "Processing file: ${FILE}"
${SED_COMMAND} -i 's/namespace ripple/namespace xrpl/g' "${FILE}"
${SED_COMMAND} -i 's/ripple::/xrpl::/g' "${FILE}"
${SED_COMMAND} -i 's/"ripple:/"xrpl::/g' "${FILE}"
${SED_COMMAND} -i -E 's/(BEAST_DEFINE_TESTSUITE.+)ripple(.+)/\1xrpl\2/g' "${FILE}"
done
find "${DIRECTORY}" -type f \( -name "*.h" -o -name "*.hpp" -o -name "*.ipp" -o -name "*.cpp" \) | while read -r FILE; do
echo "Processing file: ${FILE}"
${SED_COMMAND} -i 's/namespace ripple/namespace xrpl/g' "${FILE}"
${SED_COMMAND} -i 's/ripple::/xrpl::/g' "${FILE}"
${SED_COMMAND} -i -E 's/(BEAST_DEFINE_TESTSUITE.+)ripple(.+)/\1xrpl\2/g' "${FILE}"
done
done
# Special case for NuDBFactory that has ripple twice in the test suite name.

View File

@@ -51,21 +51,20 @@ def generate_strategy_matrix(all: bool, config: Config) -> list:
# Only generate a subset of configurations in PRs.
if not all:
# Debian:
# - Bookworm using GCC 13: Debug on linux/amd64, set the reference
# fee to 500 and enable code coverage (which will be done below).
# - Bookworm using GCC 15: Debug on linux/amd64, enable Address and
# UB sanitizers (which will be done below).
# - Bookworm using Clang 16: Debug on linux/amd64, enable voidstar.
# - Bookworm using GCC 13: Release on linux/amd64, set the reference
# fee to 500.
# - Bookworm using GCC 15: Debug on linux/amd64, enable code
# coverage (which will be done below).
# - Bookworm using Clang 16: Debug on linux/arm64, enable voidstar.
# - Bookworm using Clang 17: Release on linux/amd64, set the
# reference fee to 1000.
# - Bookworm using Clang 20: Debug on linux/amd64, enable Address
# and UB sanitizers (which will be done below).
# - Bookworm using Clang 20: Debug on linux/amd64.
if os["distro_name"] == "debian":
skip = True
if os["distro_version"] == "bookworm":
if (
f"{os['compiler_name']}-{os['compiler_version']}" == "gcc-13"
and build_type == "Debug"
and build_type == "Release"
and architecture["platform"] == "linux/amd64"
):
cmake_args = f"-DUNIT_TEST_REFERENCE_FEE=500 {cmake_args}"
@@ -79,7 +78,7 @@ def generate_strategy_matrix(all: bool, config: Config) -> list:
if (
f"{os['compiler_name']}-{os['compiler_version']}" == "clang-16"
and build_type == "Debug"
and architecture["platform"] == "linux/amd64"
and architecture["platform"] == "linux/arm64"
):
cmake_args = f"-Dvoidstar=ON {cmake_args}"
skip = False
@@ -194,11 +193,11 @@ def generate_strategy_matrix(all: bool, config: Config) -> list:
):
continue
# Enable code coverage for Debian Bookworm using GCC 13 in Debug on
# linux/amd64.
# Enable code coverage for Debian Bookworm using GCC 15 in Debug on
# linux/amd64
if (
f"{os['distro_name']}-{os['distro_version']}" == "debian-bookworm"
and f"{os['compiler_name']}-{os['compiler_version']}" == "gcc-13"
and f"{os['compiler_name']}-{os['compiler_version']}" == "gcc-15"
and build_type == "Debug"
and architecture["platform"] == "linux/amd64"
):
@@ -235,43 +234,29 @@ def generate_strategy_matrix(all: bool, config: Config) -> list:
# Add the configuration to the list, with the most unique fields first,
# so that they are easier to identify in the GitHub Actions UI, as long
# names get truncated.
# Add Address and UB sanitizers as separate configurations for specific
# bookworm distros. Thread sanitizer is currently disabled (see below).
# GCC-Asan xrpld-embedded tests are failing because of https://github.com/google/sanitizers/issues/856
# Add Address and Thread (both coupled with UB) sanitizers for specific bookworm distros.
# GCC-Asan rippled-embedded tests are failing because of https://github.com/google/sanitizers/issues/856
if os[
"distro_version"
] == "bookworm" and f"{os['compiler_name']}-{os['compiler_version']}" in [
"gcc-15",
"clang-20",
"gcc-15",
]:
# Add ASAN configuration.
# Add ASAN + UBSAN configuration.
configurations.append(
{
"config_name": config_name + "-asan",
"config_name": config_name + "-asan-ubsan",
"cmake_args": cmake_args,
"cmake_target": cmake_target,
"build_only": build_only,
"build_type": build_type,
"os": os,
"architecture": architecture,
"sanitizers": "address",
}
)
# Add UBSAN configuration.
configurations.append(
{
"config_name": config_name + "-ubsan",
"cmake_args": cmake_args,
"cmake_target": cmake_target,
"build_only": build_only,
"build_type": build_type,
"os": os,
"architecture": architecture,
"sanitizers": "undefinedbehavior",
"sanitizers": "address,undefinedbehavior",
}
)
# TSAN is deactivated due to seg faults with latest compilers.
activate_tsan = False
activate_tsan = True
if activate_tsan:
configurations.append(
{

View File

@@ -1,13 +0,0 @@
name: Check PR commits
on:
pull_request_target:
# The action needs to have write permissions to post comments on the PR.
permissions:
contents: read
pull-requests: write
jobs:
check_commits:
uses: XRPLF/actions/.github/workflows/check-pr-commits.yml@e2c7f400d1e85ae65dad552fd425169fbacca4a3

View File

@@ -1,30 +0,0 @@
name: Check PR description
on:
merge_group:
types:
- checks_requested
pull_request:
types: [opened, edited, reopened, synchronize, ready_for_review]
branches: [develop]
jobs:
check_description:
if: ${{ github.event.pull_request.draft != true }}
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Write PR body to file
env:
PR_BODY: ${{ github.event.pull_request.body }}
if: ${{ github.event_name == 'pull_request' }}
run: printenv PR_BODY > pr_body.md
- name: Check PR description differs from template
if: ${{ github.event_name == 'pull_request' }}
run: >
python .github/scripts/check-pr-description.py
--template-file .github/pull_request_template.md
--pr-body-file pr_body.md

View File

@@ -1,14 +0,0 @@
name: Check PR title
on:
merge_group:
types:
- checks_requested
pull_request:
types: [opened, edited, reopened, synchronize, ready_for_review]
branches: [develop]
jobs:
check_title:
if: ${{ github.event.pull_request.draft != true }}
uses: XRPLF/actions/.github/workflows/check-pr-title.yml@a5d8dd35be543365e90a11358447130c8763871d

View File

@@ -1,25 +0,0 @@
name: Label PRs with merge conflicts
on:
# So that PRs touching the same files as the push are updated.
push:
# So that the `dirtyLabel` is removed if conflicts are resolved.
# We recommend `pull_request_target` so that github secrets are available.
# In `pull_request` we wouldn't be able to change labels of fork PRs.
pull_request_target:
types: [synchronize]
permissions:
pull-requests: write
jobs:
main:
runs-on: ubuntu-latest
steps:
- name: Check if PRs are dirty
uses: eps1lon/actions-label-merge-conflict@1df065ebe6e3310545d4f4c4e862e43bdca146f0 # v3.0.3
with:
dirtyLabel: "PR: has conflicts"
repoToken: "${{ secrets.GITHUB_TOKEN }}"
commentOnDirty: "This PR has conflicts, please resolve them in order for the PR to be reviewed."
commentOnClean: "All conflicts have been resolved. Assigned reviewers can now start or resume their review."

View File

@@ -33,7 +33,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- name: Determine changed files
# This step checks whether any files have changed that should
# cause the next jobs to run. We do it this way rather than
@@ -46,7 +46,7 @@ jobs:
# that Github considers any skipped jobs to have passed, and in
# turn the required checks as well.
id: changes
uses: tj-actions/changed-files@9426d40962ed5378910ee2e21d5f8c6fcbf2dd96 # v47.0.6
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
with:
files: |
# These paths are unique to `on-pr.yml`.
@@ -65,12 +65,9 @@ jobs:
.github/workflows/reusable-build.yml
.github/workflows/reusable-build-test-config.yml
.github/workflows/reusable-build-test.yml
.github/workflows/reusable-clang-tidy.yml
.github/workflows/reusable-clang-tidy-files.yml
.github/workflows/reusable-strategy-matrix.yml
.github/workflows/reusable-test.yml
.github/workflows/reusable-upload-recipe.yml
.clang-tidy
.codecov.yml
cmake/**
conan/**
@@ -110,17 +107,6 @@ jobs:
if: ${{ needs.should-run.outputs.go == 'true' }}
uses: ./.github/workflows/reusable-check-rename.yml
clang-tidy:
needs: should-run
if: ${{ needs.should-run.outputs.go == 'true' }}
uses: ./.github/workflows/reusable-clang-tidy.yml
permissions:
issues: write
contents: read
with:
check_only_changed: true
create_issue_on_failure: false
build-test:
needs: should-run
if: ${{ needs.should-run.outputs.go == 'true' }}
@@ -141,8 +127,9 @@ jobs:
needs:
- should-run
- build-test
# Only run when committing to a PR that targets a release branch.
if: ${{ github.repository == 'XRPLF/rippled' && needs.should-run.outputs.go == 'true' && github.event_name == 'pull_request' && startsWith(github.event.pull_request.base.ref, 'release') }}
# Only run when committing to a PR that targets a release branch in the
# XRPLF repository.
if: ${{ github.repository_owner == 'XRPLF' && needs.should-run.outputs.go == 'true' && startsWith(github.ref, 'refs/heads/release') }}
uses: ./.github/workflows/reusable-upload-recipe.yml
secrets:
remote_username: ${{ secrets.CONAN_REMOTE_USERNAME }}
@@ -169,7 +156,6 @@ jobs:
needs:
- check-levelization
- check-rename
- clang-tidy
- build-test
- upload-recipe
- notify-clio

View File

@@ -5,7 +5,7 @@ name: Tag
on:
push:
tags:
- "[0-9]+.[0-9]+.[0-9]*"
- "v*"
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
@@ -17,7 +17,8 @@ defaults:
jobs:
upload-recipe:
if: ${{ github.repository == 'XRPLF/rippled' }}
# Only run when a tag is pushed to the XRPLF repository.
if: ${{ github.repository_owner == 'XRPLF' }}
uses: ./.github/workflows/reusable-upload-recipe.yml
secrets:
remote_username: ${{ secrets.CONAN_REMOTE_USERNAME }}

View File

@@ -22,12 +22,9 @@ on:
- ".github/workflows/reusable-build.yml"
- ".github/workflows/reusable-build-test-config.yml"
- ".github/workflows/reusable-build-test.yml"
- ".github/workflows/reusable-clang-tidy.yml"
- ".github/workflows/reusable-clang-tidy-files.yml"
- ".github/workflows/reusable-strategy-matrix.yml"
- ".github/workflows/reusable-test.yml"
- ".github/workflows/reusable-upload-recipe.yml"
- ".clang-tidy"
- ".codecov.yml"
- "cmake/**"
- "conan/**"
@@ -63,15 +60,6 @@ defaults:
shell: bash
jobs:
clang-tidy:
uses: ./.github/workflows/reusable-clang-tidy.yml
permissions:
issues: write
contents: read
with:
check_only_changed: false
create_issue_on_failure: ${{ github.event_name == 'schedule' }}
build-test:
uses: ./.github/workflows/reusable-build-test.yml
strategy:
@@ -92,8 +80,8 @@ jobs:
upload-recipe:
needs: build-test
# Only run when pushing to the develop branch.
if: ${{ github.repository == 'XRPLF/rippled' && github.event_name == 'push' && github.ref == 'refs/heads/develop' }}
# Only run when pushing to the develop branch in the XRPLF repository.
if: ${{ github.repository_owner == 'XRPLF' && github.event_name == 'push' && github.ref == 'refs/heads/develop' }}
uses: ./.github/workflows/reusable-upload-recipe.yml
secrets:
remote_username: ${{ secrets.CONAN_REMOTE_USERNAME }}

View File

@@ -1,9 +1,6 @@
name: Run pre-commit hooks
on:
merge_group:
types:
- checks_requested
pull_request:
push:
branches:
@@ -14,7 +11,7 @@ on:
jobs:
# Call the workflow in the XRPLF/actions repo that runs the pre-commit hooks.
run-hooks:
uses: XRPLF/actions/.github/workflows/pre-commit.yml@9307df762265e15c745ddcdb38a581c989f7f349
uses: XRPLF/actions/.github/workflows/pre-commit.yml@320be44621ca2a080f05aeb15817c44b84518108
with:
runs_on: ubuntu-latest
container: '{ "image": "ghcr.io/xrplf/ci/tools-rippled-pre-commit:sha-41ec7c1" }'
container: '{ "image": "ghcr.io/xrplf/ci/tools-rippled-pre-commit:sha-ab4d1f0" }'

View File

@@ -4,17 +4,6 @@ name: Build and publish documentation
on:
push:
branches:
- "develop"
paths:
- ".github/workflows/publish-docs.yml"
- "*.md"
- "**/*.md"
- "docs/**"
- "include/**"
- "src/libxrpl/**"
- "src/xrpld/**"
pull_request:
paths:
- ".github/workflows/publish-docs.yml"
- "*.md"
@@ -34,22 +23,17 @@ defaults:
env:
BUILD_DIR: build
# ubuntu-latest has only 2 CPUs for private repositories
# https://docs.github.com/en/actions/reference/runners/github-hosted-runners#standard-github-hosted-runners-for--private-repositories
NPROC_SUBTRACT: ${{ github.event.repository.visibility == 'public' && '2' || '1' }}
NPROC_SUBTRACT: 2
jobs:
build:
publish:
runs-on: ubuntu-latest
container: ghcr.io/xrplf/ci/tools-rippled-documentation:sha-a8c7be1
permissions:
contents: write
steps:
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Prepare runner
uses: XRPLF/actions/prepare-runner@90f11ee655d1687824fb8793db770477d52afbab
with:
enable_ccache: false
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- name: Get number of processors
uses: XRPLF/actions/get-nproc@cf0433aa74563aead044a1e395610c96d65a37cf
@@ -80,23 +64,9 @@ jobs:
cmake -Donly_docs=ON ..
cmake --build . --target docs --parallel ${BUILD_NPROC}
- name: Create documentation artifact
if: ${{ github.event.repository.visibility == 'public' && github.event_name == 'push' }}
uses: actions/upload-pages-artifact@fc324d3547104276b827a68afc52ff2a11cc49c9 # v5.0.0
- name: Publish documentation
if: ${{ github.ref_type == 'branch' && github.ref_name == github.event.repository.default_branch }}
uses: peaceiris/actions-gh-pages@4f9cc6602d3f66b9c108549d475ec49e8ef4d45e # v4.0.0
with:
path: ${{ env.BUILD_DIR }}/docs/html
deploy:
if: ${{ github.repository == 'XRPLF/rippled' && github.event_name == 'push' }}
needs: build
runs-on: ubuntu-latest
permissions:
pages: write
id-token: write
environment:
name: github-pages
url: ${{ steps.deploy.outputs.page_url }}
steps:
- name: Deploy to GitHub Pages
id: deploy
uses: actions/deploy-pages@cd2ce8fcbc39b97be8ca5fce6e763baed58fa128 # v5.0.0
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_dir: ${{ env.BUILD_DIR }}/docs/html

View File

@@ -76,7 +76,7 @@ jobs:
name: ${{ inputs.config_name }}
runs-on: ${{ fromJSON(inputs.runs_on) }}
container: ${{ inputs.image != '' && inputs.image || null }}
timeout-minutes: ${{ inputs.sanitizers != '' && 360 || 60 }}
timeout-minutes: 60
env:
# Use a namespace to keep the objects separate for each configuration.
CCACHE_NAMESPACE: ${{ inputs.config_name }}
@@ -101,13 +101,13 @@ jobs:
steps:
- name: Cleanup workspace (macOS and Windows)
if: ${{ runner.os == 'macOS' || runner.os == 'Windows' }}
uses: XRPLF/actions/cleanup-workspace@c7d9ce5ebb03c752a354889ecd870cadfc2b1cd4
uses: XRPLF/actions/cleanup-workspace@cf0433aa74563aead044a1e395610c96d65a37cf
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- name: Prepare runner
uses: XRPLF/actions/prepare-runner@90f11ee655d1687824fb8793db770477d52afbab
uses: XRPLF/actions/prepare-runner@2cbf481018d930656e9276fcc20dc0e3a0be5b6d
with:
enable_ccache: ${{ inputs.ccache_enabled }}
@@ -153,32 +153,6 @@ jobs:
${CMAKE_ARGS} \
..
- name: Check protocol autogen files are up-to-date
working-directory: ${{ env.BUILD_DIR }}
env:
MESSAGE: |
The generated protocol wrapper classes are out of date.
This typically happens when the macro files or generator scripts
have changed but the generated files were not regenerated.
To fix this:
1. Run: cmake --build . --target setup_code_gen
2. Run: cmake --build . --target code_gen
3. Commit and push the regenerated files
run: |
set -e
cmake --build . --target setup_code_gen
cmake --build . --target code_gen
DIFF=$(git -C .. status --porcelain -- include/xrpl/protocol_autogen src/tests/libxrpl/protocol_autogen)
if [ -n "${DIFF}" ]; then
echo "::error::Generated protocol files are out of date"
git -C .. diff -- include/xrpl/protocol_autogen src/tests/libxrpl/protocol_autogen
echo "${MESSAGE}"
exit 1
fi
- name: Build the binary
working-directory: ${{ env.BUILD_DIR }}
env:
@@ -202,30 +176,14 @@ jobs:
fi
- name: Upload the binary (Linux)
if: ${{ github.event.repository.visibility == 'public' && runner.os == 'Linux' }}
uses: actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1
if: ${{ github.repository_owner == 'XRPLF' && runner.os == 'Linux' }}
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: xrpld-${{ inputs.config_name }}
path: ${{ env.BUILD_DIR }}/xrpld
retention-days: 3
if-no-files-found: error
- name: Export server definitions
if: ${{ runner.os != 'Windows' && !inputs.build_only && env.VOIDSTAR_ENABLED != 'true' }}
working-directory: ${{ env.BUILD_DIR }}
run: |
set -o pipefail
./xrpld --definitions | python3 -m json.tool > server_definitions.json
- name: Upload server definitions
if: ${{ github.event.repository.visibility == 'public' && inputs.config_name == 'debian-bookworm-gcc-13-amd64-release' }}
uses: actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1
with:
name: server-definitions
path: ${{ env.BUILD_DIR }}/server_definitions.json
retention-days: 3
if-no-files-found: error
- name: Check linking (Linux)
if: ${{ runner.os == 'Linux' && env.SANITIZERS_ENABLED == 'false' }}
working-directory: ${{ env.BUILD_DIR }}
@@ -246,17 +204,11 @@ jobs:
- name: Set sanitizer options
if: ${{ !inputs.build_only && env.SANITIZERS_ENABLED == 'true' }}
env:
CONFIG_NAME: ${{ inputs.config_name }}
run: |
ASAN_OPTS="include=${GITHUB_WORKSPACE}/sanitizers/suppressions/runtime-asan-options.txt:suppressions=${GITHUB_WORKSPACE}/sanitizers/suppressions/asan.supp"
if [[ "${CONFIG_NAME}" == *gcc* ]]; then
ASAN_OPTS="${ASAN_OPTS}:alloc_dealloc_mismatch=0"
fi
echo "ASAN_OPTIONS=${ASAN_OPTS}" >> ${GITHUB_ENV}
echo "TSAN_OPTIONS=include=${GITHUB_WORKSPACE}/sanitizers/suppressions/runtime-tsan-options.txt:suppressions=${GITHUB_WORKSPACE}/sanitizers/suppressions/tsan.supp" >> ${GITHUB_ENV}
echo "UBSAN_OPTIONS=include=${GITHUB_WORKSPACE}/sanitizers/suppressions/runtime-ubsan-options.txt:suppressions=${GITHUB_WORKSPACE}/sanitizers/suppressions/ubsan.supp" >> ${GITHUB_ENV}
echo "LSAN_OPTIONS=include=${GITHUB_WORKSPACE}/sanitizers/suppressions/runtime-lsan-options.txt:suppressions=${GITHUB_WORKSPACE}/sanitizers/suppressions/lsan.supp" >> ${GITHUB_ENV}
echo "ASAN_OPTIONS=print_stacktrace=1:detect_container_overflow=0:suppressions=${GITHUB_WORKSPACE}/sanitizers/suppressions/asan.supp" >> ${GITHUB_ENV}
echo "TSAN_OPTIONS=second_deadlock_stack=1:halt_on_error=0:suppressions=${GITHUB_WORKSPACE}/sanitizers/suppressions/tsan.supp" >> ${GITHUB_ENV}
echo "UBSAN_OPTIONS=suppressions=${GITHUB_WORKSPACE}/sanitizers/suppressions/ubsan.supp" >> ${GITHUB_ENV}
echo "LSAN_OPTIONS=suppressions=${GITHUB_WORKSPACE}/sanitizers/suppressions/lsan.supp" >> ${GITHUB_ENV}
- name: Run the separate tests
if: ${{ !inputs.build_only }}
@@ -277,23 +229,8 @@ jobs:
env:
BUILD_NPROC: ${{ steps.nproc.outputs.nproc }}
run: |
set -o pipefail
# Coverage builds are slower due to instrumentation; use fewer parallel jobs to avoid flakiness
[ "$COVERAGE_ENABLED" = "true" ] && BUILD_NPROC=$(( BUILD_NPROC - 2 ))
./xrpld --unittest --unittest-jobs "${BUILD_NPROC}" 2>&1 | tee unittest.log
./xrpld --unittest --unittest-jobs "${BUILD_NPROC}"
- name: Show test failure summary
if: ${{ failure() && !inputs.build_only }}
working-directory: ${{ runner.os == 'Windows' && format('{0}/{1}', env.BUILD_DIR, inputs.build_type) || env.BUILD_DIR }}
run: |
if [ ! -f unittest.log ]; then
echo "unittest.log not found; embedded tests may not have run."
exit 0
fi
if ! grep -E "failed" unittest.log; then
echo "Log present but no failure lines found in unittest.log."
fi
- name: Debug failure (Linux)
if: ${{ failure() && runner.os == 'Linux' && !inputs.build_only }}
run: |
@@ -316,8 +253,8 @@ jobs:
--target coverage
- name: Upload coverage report
if: ${{ github.repository == 'XRPLF/rippled' && !inputs.build_only && env.COVERAGE_ENABLED == 'true' }}
uses: codecov/codecov-action@57e3a136b779b570ffcdbf80b3bdc90e7fab3de2 # v6.0.0
if: ${{ github.repository_owner == 'XRPLF' && !inputs.build_only && env.COVERAGE_ENABLED == 'true' }}
uses: codecov/codecov-action@18283e04ce6e62d37312384ff67231eb8fd56d24 # v5.4.3
with:
disable_search: true
disable_telem: true

View File

@@ -18,9 +18,9 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- name: Check levelization
run: python .github/scripts/levelization/generate.py
run: .github/scripts/levelization/generate.sh
- name: Check for differences
env:
MESSAGE: |
@@ -32,7 +32,7 @@ jobs:
removed from loops.txt, it's probably an improvement, while if
something was added, it's probably a regression.
Run '.github/scripts/levelization/generate.py' in your repo, commit
Run '.github/scripts/levelization/generate.sh' in your repo, commit
and push the changes. See .github/scripts/levelization/README.md for
more info.
run: |

View File

@@ -18,7 +18,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- name: Check definitions
run: .github/scripts/rename/definitions.sh .
- name: Check copyright notices
@@ -33,8 +33,6 @@ jobs:
run: .github/scripts/rename/config.sh .
- name: Check include guards
run: .github/scripts/rename/include.sh .
- name: Check documentation
run: .github/scripts/rename/docs.sh .
- name: Check for differences
env:
MESSAGE: |

View File

@@ -1,175 +0,0 @@
name: Run clang-tidy on files
on:
workflow_call:
inputs:
files:
description: "List of files to check (empty means check all files)"
type: string
default: ""
create_issue_on_failure:
description: "Whether to create an issue if the check failed"
type: boolean
default: false
defaults:
run:
shell: bash
env:
# Conan installs the generators in the build/generators directory, see the
# layout() method in conanfile.py. We then run CMake from the build directory.
BUILD_DIR: build
BUILD_TYPE: Release
jobs:
run-clang-tidy:
name: Run clang tidy
runs-on: ["self-hosted", "Linux", "X64", "heavy"]
container: "ghcr.io/xrplf/ci/debian-trixie:clang-21-sha-53033a2"
permissions:
issues: write
contents: read
steps:
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Prepare runner
uses: XRPLF/actions/prepare-runner@90f11ee655d1687824fb8793db770477d52afbab
with:
enable_ccache: false
- name: Print build environment
uses: ./.github/actions/print-env
- name: Get number of processors
uses: XRPLF/actions/get-nproc@cf0433aa74563aead044a1e395610c96d65a37cf
id: nproc
- name: Setup Conan
uses: ./.github/actions/setup-conan
- name: Build dependencies
uses: ./.github/actions/build-deps
with:
build_nproc: ${{ steps.nproc.outputs.nproc }}
build_type: ${{ env.BUILD_TYPE }}
log_verbosity: verbose
- name: Configure CMake
working-directory: ${{ env.BUILD_DIR }}
run: |
cmake \
-G 'Ninja' \
-DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
-DCMAKE_BUILD_TYPE="${BUILD_TYPE}" \
-Dtests=ON \
-Dwerr=ON \
-Dxrpld=ON \
..
# clang-tidy needs headers generated from proto files
- name: Build libxrpl.libpb
working-directory: ${{ env.BUILD_DIR }}
run: |
ninja -j ${{ steps.nproc.outputs.nproc }} xrpl.libpb
- name: Run clang tidy
id: run_clang_tidy
continue-on-error: true
env:
TARGETS: ${{ inputs.files != '' && inputs.files || 'src tests' }}
run: |
run-clang-tidy -j ${{ steps.nproc.outputs.nproc }} -p "${BUILD_DIR}" -quiet -fix -allow-no-checks ${TARGETS} 2>&1 | tee clang-tidy-output.txt
- name: Upload clang-tidy output
if: ${{ github.event.repository.visibility == 'public' && steps.run_clang_tidy.outcome != 'success' }}
uses: actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1
with:
name: clang-tidy-results
path: clang-tidy-output.txt
retention-days: 30
- name: Generate git diff
if: ${{ steps.run_clang_tidy.outcome != 'success' }}
run: |
git diff | tee clang-tidy-git-diff.txt
- name: Upload clang-tidy diff output
if: ${{ github.event.repository.visibility == 'public' && steps.run_clang_tidy.outcome != 'success' }}
uses: actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1
with:
name: clang-tidy-git-diff
path: clang-tidy-git-diff.txt
retention-days: 30
- name: Create an issue
if: ${{ steps.run_clang_tidy.outcome != 'success' && inputs.create_issue_on_failure }}
id: create_issue
shell: bash
env:
GH_TOKEN: ${{ github.token }}
run: |
# Prepare issue body with clang-tidy output
cat > issue.md <<EOF
## Clang-tidy Check Failed
**Workflow:** ${{ github.workflow }}
**Run ID:** ${{ github.run_id }}
**Commit:** ${{ github.sha }}
**Branch/Ref:** ${{ github.ref }}
**Triggered by:** ${{ github.actor }}
### Clang-tidy Output:
\`\`\`
EOF
# Append clang-tidy output (filter for errors and warnings)
if [ -f clang-tidy-output.txt ]; then
# Extract lines containing 'error:', 'warning:', or 'note:'
grep -E '(error:|warning:|note:)' clang-tidy-output.txt > filtered-output.txt || true
# If filtered output is empty, use original (might be a different error format)
if [ ! -s filtered-output.txt ]; then
cp clang-tidy-output.txt filtered-output.txt
fi
# Truncate if too large
head -c 60000 filtered-output.txt >> issue.md
if [ "$(wc -c < filtered-output.txt)" -gt 60000 ]; then
echo "" >> issue.md
echo "... (output truncated, see artifacts for full output)" >> issue.md
fi
rm filtered-output.txt
else
echo "No output file found" >> issue.md
fi
cat >> issue.md <<EOF
\`\`\`
**Workflow run:** ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
---
*This issue was automatically created by the clang-tidy workflow.*
EOF
# Create the issue
gh issue create \
--label "Bug,Clang-tidy" \
--title "Clang-tidy check failed" \
--body-file ./issue.md \
> create_issue.log
created_issue="$(sed 's|.*/||' create_issue.log)"
echo "created_issue=$created_issue" >> $GITHUB_OUTPUT
echo "Created issue #$created_issue"
rm -f create_issue.log issue.md clang-tidy-output.txt
- name: Fail the workflow if clang-tidy failed
if: ${{ steps.run_clang_tidy.outcome != 'success' }}
run: |
echo "Clang-tidy check failed!"
exit 1
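The removed workflow's core step can be approximated locally; a rough sketch, assuming a configured build directory containing compile_commands.json and LLVM's run-clang-tidy on PATH:

```bash
# Assumes ./build contains compile_commands.json from a prior configure.
run-clang-tidy -j "$(nproc)" -p build -quiet -fix src tests 2>&1 \
    | tee clang-tidy-output.txt
grep -E '(error:|warning:)' clang-tidy-output.txt || true
```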

View File

@@ -1,55 +0,0 @@
name: Clang-tidy check
on:
workflow_call:
inputs:
check_only_changed:
description: "Check only changed files in PR. If false, checks all files in the repository."
type: boolean
default: false
create_issue_on_failure:
description: "Whether to create an issue if the check failed"
type: boolean
default: false
defaults:
run:
shell: bash
jobs:
determine-files:
name: Determine files to check
if: ${{ inputs.check_only_changed }}
runs-on: ubuntu-latest
outputs:
clang_tidy_config_changed: ${{ steps.changed_clang_tidy.outputs.any_changed }}
any_cpp_changed: ${{ steps.changed_files.outputs.any_changed }}
all_changed_files: ${{ steps.changed_files.outputs.all_changed_files }}
steps:
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Get changed C++ files
id: changed_files
uses: tj-actions/changed-files@9426d40962ed5378910ee2e21d5f8c6fcbf2dd96 # v47.0.6
with:
files: |
**/*.cpp
**/*.h
**/*.ipp
separator: " "
- name: Get changed clang-tidy configuration
id: changed_clang_tidy
uses: tj-actions/changed-files@9426d40962ed5378910ee2e21d5f8c6fcbf2dd96 # v47.0.6
with:
files: |
.clang-tidy
run-clang-tidy:
needs: [determine-files]
if: ${{ always() && !cancelled() && (!inputs.check_only_changed || needs.determine-files.outputs.any_cpp_changed == 'true' || needs.determine-files.outputs.clang_tidy_config_changed == 'true') }}
uses: ./.github/workflows/reusable-clang-tidy-files.yml
with:
files: ${{ (needs.determine-files.outputs.clang_tidy_config_changed != 'true' && inputs.check_only_changed) && needs.determine-files.outputs.all_changed_files || '' }}
create_issue_on_failure: ${{ inputs.create_issue_on_failure }}

View File

@@ -29,10 +29,10 @@ jobs:
matrix: ${{ steps.generate.outputs.matrix }}
steps:
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- name: Set up Python
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: 3.13

View File

@@ -43,7 +43,7 @@ jobs:
container: ghcr.io/xrplf/ci/ubuntu-noble:gcc-13-sha-5dd7158
steps:
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- name: Generate build version number
id: version
@@ -69,30 +69,24 @@ jobs:
conan export . --version=${{ steps.version.outputs.version }}
conan upload --confirm --check --remote="${REMOTE_NAME}" xrpl/${{ steps.version.outputs.version }}
# When this workflow is triggered by a push event, it will always be when merging into the
# 'develop' branch, see on-trigger.yml.
- name: Upload Conan recipe (develop)
if: ${{ github.event_name == 'push' }}
if: ${{ github.ref == 'refs/heads/develop' }}
env:
REMOTE_NAME: ${{ inputs.remote_name }}
run: |
conan export . --version=develop
conan upload --confirm --check --remote="${REMOTE_NAME}" xrpl/develop
# When this workflow is triggered by a pull request event, it will always be when merging into
# one of the 'release' branches, see on-pr.yml.
- name: Upload Conan recipe (rc)
if: ${{ github.event_name == 'pull_request' }}
if: ${{ startsWith(github.ref, 'refs/heads/release') }}
env:
REMOTE_NAME: ${{ inputs.remote_name }}
run: |
conan export . --version=rc
conan upload --confirm --check --remote="${REMOTE_NAME}" xrpl/rc
# When this workflow is triggered by a push event, it will always be when tagging a final
# release, see on-tag.yml.
- name: Upload Conan recipe (release)
if: ${{ startsWith(github.ref, 'refs/tags/') }}
if: ${{ github.event_name == 'tag' }}
env:
REMOTE_NAME: ${{ inputs.remote_name }}
run: |

View File

@@ -64,13 +64,13 @@ jobs:
steps:
- name: Cleanup workspace (macOS and Windows)
if: ${{ runner.os == 'macOS' || runner.os == 'Windows' }}
uses: XRPLF/actions/cleanup-workspace@c7d9ce5ebb03c752a354889ecd870cadfc2b1cd4
uses: XRPLF/actions/cleanup-workspace@cf0433aa74563aead044a1e395610c96d65a37cf
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- name: Prepare runner
uses: XRPLF/actions/prepare-runner@90f11ee655d1687824fb8793db770477d52afbab
uses: XRPLF/actions/prepare-runner@2cbf481018d930656e9276fcc20dc0e3a0be5b6d
with:
enable_ccache: false
@@ -103,11 +103,11 @@ jobs:
sanitizers: ${{ matrix.sanitizers }}
- name: Log into Conan remote
if: ${{ github.repository == 'XRPLF/rippled' && (github.event_name == 'push' || github.event_name == 'workflow_dispatch') }}
if: ${{ github.repository_owner == 'XRPLF' && (github.event_name == 'push' || github.event_name == 'workflow_dispatch') }}
run: conan remote login "${CONAN_REMOTE_NAME}" "${{ secrets.CONAN_REMOTE_USERNAME }}" --password "${{ secrets.CONAN_REMOTE_PASSWORD }}"
- name: Upload Conan packages
if: ${{ github.repository == 'XRPLF/rippled' && (github.event_name == 'push' || github.event_name == 'workflow_dispatch') }}
if: ${{ github.repository_owner == 'XRPLF' && (github.event_name == 'push' || github.event_name == 'workflow_dispatch') }}
env:
FORCE_OPTION: ${{ github.event.inputs.force_upload == 'true' && '--force' || '' }}
run: conan upload "*" --remote="${CONAN_REMOTE_NAME}" --confirm ${FORCE_OPTION}

.gitignore
View File

@@ -13,7 +13,6 @@
Debug/
Release/
/.build/
/.venv/
/build/
/db/
/out.txt
@@ -43,9 +42,6 @@ gmon.out
# Locally patched Conan recipes
external/conan-center-index/
# Local conan directory
.conan
# XCode IDE.
*.pbxuser
!default.pbxuser
@@ -72,17 +68,9 @@ DerivedData
/.zed/
# AI tools.
/.agent
/.agents
/.augment
/.claude
/CLAUDE.md
# Python
__pycache__
# Direnv's directory
/.direnv
# clangd cache
/.cache

View File

@@ -13,64 +13,40 @@ repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: 3e8a8703264a2f4a69428a0aa4dcb512790b2c8c # frozen: v6.0.0
hooks:
- id: check-added-large-files
args: [--maxkb=400, --enforce-all]
- id: trailing-whitespace
- id: end-of-file-fixer
- id: mixed-line-ending
- id: check-merge-conflict
args: [--assume-in-merge]
- repo: local
hooks:
- id: clang-tidy
name: "clang-tidy (enable with: TIDY=1)"
entry: ./bin/pre-commit/clang_tidy_check.py
language: python
types_or: [c++, c]
exclude: ^include/xrpl/protocol_autogen
pass_filenames: false # script determines the staged files itself
- id: fix-include-style
name: fix include style
entry: ./bin/pre-commit/fix_include_style.py
language: python
types_or: [c++, c]
exclude: ^include/xrpl/protocol_autogen/(transactions|ledger_entries)/
- repo: https://github.com/pre-commit/mirrors-clang-format
rev: cd481d7b0bfb5c7b3090c21846317f9a8262e891 # frozen: v22.1.0
rev: 75ca4ad908dc4a99f57921f29b7e6c1521e10b26 # frozen: v21.1.8
hooks:
- id: clang-format
args: [--style=file]
"types_or": [c++, c, proto]
exclude: ^include/xrpl/protocol_autogen/(transactions|ledger_entries)/
- repo: https://github.com/BlankSpruce/gersemi
rev: 0.26.0
- repo: https://github.com/cheshirekow/cmake-format-precommit
rev: e2c2116d86a80e72e7146a06e68b7c228afc6319 # frozen: v0.6.13
hooks:
- id: gersemi
- id: cmake-format
additional_dependencies: [PyYAML]
- repo: https://github.com/rbubley/mirrors-prettier
rev: c2bc67fe8f8f549cc489e00ba8b45aa18ee713b1 # frozen: v3.8.1
rev: 5ba47274f9b181bce26a5150a725577f3c336011 # frozen: v3.6.2
hooks:
- id: prettier
args: [--end-of-line=auto]
- repo: https://github.com/psf/black-pre-commit-mirror
rev: ea488cebbfd88a5f50b8bd95d5c829d0bb76feb8 # frozen: 26.1.0
rev: 831207fd435b47aeffdf6af853097e64322b4d44 # frozen: v25.12.0
hooks:
- id: black
- repo: https://github.com/openstack/bashate
rev: 5798d24d571676fc407e81df574c1ef57b520f23 # frozen: 2.1.1
hooks:
- id: bashate
args: ["--ignore=E006"]
- repo: https://github.com/streetsidesoftware/cspell-cli
rev: a42085ade523f591dca134379a595e7859986445 # frozen: v9.7.0
rev: 1cfa010f078c354f3ffb8413616280cc28f5ba21 # frozen: v9.4.0
hooks:
- id: cspell # Spell check changed files
exclude: (.config/cspell.config.yaml|^include/xrpl/protocol_autogen/(transactions|ledger_entries)/)
exclude: .config/cspell.config.yaml
- id: cspell # Spell check the commit message
name: check commit message spelling
args:
@@ -81,27 +57,8 @@ repos:
- .git/COMMIT_EDITMSG
stages: [commit-msg]
- repo: local
hooks:
- id: nix-fmt
name: Format Nix files
entry: |
bash -c '
if command -v nix &> /dev/null || [ "$GITHUB_ACTIONS" = "true" ]; then
nix --extra-experimental-features "nix-command flakes" fmt "$@"
else
echo "Skipping nix-fmt: nix not installed and not in GitHub Actions"
exit 0
fi
' --
language: system
types:
- nix
pass_filenames: true
exclude: |
(?x)^(
external/.*|
.github/scripts/levelization/results/.*\.txt|
src/tests/libxrpl/protocol_autogen/(transactions|ledger_entries)/.*
.github/scripts/levelization/results/.*\.txt
)$
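For reference, the hooks configured above are typically exercised locally with the standard pre-commit CLI:

```bash
pip install pre-commit
pre-commit install        # run the hooks automatically on each git commit
pre-commit run --all-files
```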

View File

@@ -4,42 +4,23 @@ This changelog is intended to list all updates to the [public API methods](https
For info about how [API versioning](https://xrpl.org/request-formatting.html#api-versioning) works, including examples, please view the [XLS-22d spec](https://github.com/XRPLF/XRPL-Standards/discussions/54). For details about the implementation of API versioning, view the [implementation PR](https://github.com/XRPLF/rippled/pull/3155). API versioning ensures existing integrations and users continue to receive existing behavior, while those that request a higher API version will experience new behavior.
The API version controls the API behavior you see. This includes what properties you see in responses, what parameters you're permitted to send in requests, and so on. You specify the API version in each of your requests. When a breaking change is introduced to the `xrpld` API, a new version is released. To avoid breaking your code, you should set (or increase) your version when you're ready to upgrade.
The API version controls the API behavior you see. This includes what properties you see in responses, what parameters you're permitted to send in requests, and so on. You specify the API version in each of your requests. When a breaking change is introduced to the `rippled` API, a new version is released. To avoid breaking your code, you should set (or increase) your version when you're ready to upgrade.
The [commandline](https://xrpl.org/docs/references/http-websocket-apis/api-conventions/request-formatting/#commandline-format) always uses the latest API version. The command line is intended for ad-hoc usage by humans, not programs or automated scripts. The command line is not meant for use in production code.
For a log of breaking changes, see the **API Version [number]** headings. In general, breaking changes are associated with a particular API Version number. For non-breaking changes, scroll to the **XRP Ledger version [x.y.z]** headings. Non-breaking changes are associated with a particular XRP Ledger (`xrpld`) release.
For a log of breaking changes, see the **API Version [number]** headings. In general, breaking changes are associated with a particular API Version number. For non-breaking changes, scroll to the **XRP Ledger version [x.y.z]** headings. Non-breaking changes are associated with a particular XRP Ledger (`rippled`) release.
## API Version 3 (Beta)
API version 3 is currently a beta API. It requires enabling `[beta_rpc_api]` in the xrpld configuration to use. See [API-VERSION-3.md](API-VERSION-3.md) for the full list of changes in API version 3.
API version 3 is currently a beta API. It requires enabling `[beta_rpc_api]` in the rippled configuration to use. See [API-VERSION-3.md](API-VERSION-3.md) for the full list of changes in API version 3.
## API Version 2
API version 2 is available in `xrpld` version 2.0.0 and later. See [API-VERSION-2.md](API-VERSION-2.md) for the full list of changes in API version 2.
API version 2 is available in `rippled` version 2.0.0 and later. See [API-VERSION-2.md](API-VERSION-2.md) for the full list of changes in API version 2.
## API Version 1
This version is supported by all `xrpld` versions. For WebSocket and HTTP JSON-RPC requests, it is currently the default API version used when no `api_version` is specified.
## Unreleased
This section contains changes targeting a future version.
### Additions
- `server_definitions`: Added the following new sections to the response ([#6321](https://github.com/XRPLF/rippled/pull/6321)):
- `TRANSACTION_FORMATS`: Describes the fields and their optionality for each transaction type, including common fields shared across all transactions.
- `LEDGER_ENTRY_FORMATS`: Describes the fields and their optionality for each ledger entry type, including common fields shared across all ledger entries.
- `TRANSACTION_FLAGS`: Maps transaction type names to their supported flags and flag values.
- `LEDGER_ENTRY_FLAGS`: Maps ledger entry type names to their flags and flag values.
- `ACCOUNT_SET_FLAGS`: Maps AccountSet flag names (asf flags) to their numeric values.
### Bugfixes
- Peer Crawler: The `port` field in `overlay.active[]` now consistently returns an integer instead of a string for outbound peers. [#6318](https://github.com/XRPLF/rippled/pull/6318)
- `ping`: The `ip` field is no longer returned as an empty string for proxied connections without a forwarded-for header. It is now omitted, consistent with the behavior for identified connections. [#6730](https://github.com/XRPLF/rippled/pull/6730)
- gRPC `GetLedgerDiff`: Fixed error message that incorrectly said "base ledger not validated" when the desired ledger was not validated. [#6730](https://github.com/XRPLF/rippled/pull/6730)
This version is supported by all `rippled` versions. For WebSocket and HTTP JSON-RPC requests, it is currently the default API version used when no `api_version` is specified.
## XRP Ledger server version 3.1.0

View File

@@ -1,6 +1,6 @@
# API Version 2
API version 2 is available in `xrpld` version 2.0.0 and later. To use this API, clients specify `"api_version" : 2` in each request.
API version 2 is available in `rippled` version 2.0.0 and later. To use this API, clients specify `"api_version" : 2` in each request.
For info about how [API versioning](https://xrpl.org/request-formatting.html#api-versioning) works, including examples, please view the [XLS-22d spec](https://github.com/XRPLF/XRPL-Standards/discussions/54). For details about the implementation of API versioning, view the [implementation PR](https://github.com/XRPLF/rippled/pull/3155). API versioning ensures existing integrations and users continue to receive existing behavior, while those that request a higher API version will experience new behavior.

View File

@@ -1,6 +1,6 @@
# API Version 3
API version 3 is currently a **beta API**. It requires enabling `[beta_rpc_api]` in the xrpld configuration to use. To use this API, clients specify `"api_version" : 3` in each request.
API version 3 is currently a **beta API**. It requires enabling `[beta_rpc_api]` in the rippled configuration to use. To use this API, clients specify `"api_version" : 3` in each request.
For info about how [API versioning](https://xrpl.org/request-formatting.html#api-versioning) works, including examples, please view the [XLS-22d spec](https://github.com/XRPLF/XRPL-Standards/discussions/54). For details about the implementation of API versioning, view the [implementation PR](https://github.com/XRPLF/rippled/pull/3155). API versioning ensures existing integrations and users continue to receive existing behavior, while those that request a higher API version will experience new behavior.

View File

@@ -125,9 +125,9 @@ default profile.
### Patched recipes
Occasionally, we need patched recipes or recipes not present in Conan Center.
We maintain a fork of the Conan Center Index
[here](https://github.com/XRPLF/conan-center-index/) containing the modified and newly added recipes.
The recipes in Conan Center occasionally need to be patched for compatibility
with the latest version of `xrpld`. We maintain a fork of the Conan Center
[here](https://github.com/XRPLF/conan-center-index/) containing the patches.
To ensure our patched recipes are used, you must add our Conan remote at a
higher priority (i.e. a lower index) than the default Conan Center remote, so it is consulted first. You
@@ -137,11 +137,19 @@ can do this by running:
conan remote add --index 0 xrplf https://conan.ripplex.io
```
Alternatively, you can pull our recipes from the repository and export them locally:
Alternatively, you can pull the patched recipes into the repository and use them
locally:
```bash
# Extract the version number from the lockfile.
function extract_version {
version=$(cat conan.lock | sed -nE "s@.+${1}/(.+)#.+@\1@p" | head -n1)
echo ${version}
}
# Define which recipes to export.
recipes=('abseil' 'ed25519' 'grpc' 'm4' 'mpt-crypto' 'openssl' 'secp256k1' 'snappy' 'soci' 'wasm-xrplf' 'wasmi')
recipes=('ed25519' 'grpc' 'nudb' 'openssl' 'secp256k1' 'snappy' 'soci')
folders=('all' 'all' 'all' '3.x.x' 'all' 'all' 'all')
# Selectively check out the recipes from our CCI fork.
cd external
@@ -150,19 +158,29 @@ cd conan-center-index
git init
git remote add origin git@github.com:XRPLF/conan-center-index.git
git sparse-checkout init
for recipe in "${recipes[@]}"; do
echo "Checking out recipe '${recipe}'..."
git sparse-checkout add recipes/${recipe}
for ((index = 1; index <= ${#recipes[@]}; index++)); do
recipe=${recipes[index]}
folder=${folders[index]}
echo "Checking out recipe '${recipe}' from folder '${folder}'..."
git sparse-checkout add recipes/${recipe}/${folder}
done
git fetch origin master
git checkout master
cd ../..
./export_all.sh
cd ../../
# Export the recipes into the local cache.
for ((index = 1; index <= ${#recipes[@]}; index++)); do
recipe=${recipes[index]}
folder=${folders[index]}
version=$(extract_version ${recipe})
echo "Exporting '${recipe}/${version}' from '${recipe}/${folder}'..."
conan export --version $(extract_version ${recipe}) \
external/conan-center-index/recipes/${recipe}/${folder}
done
```
In the case we switch to a newer version of a dependency that still requires a
patch or add a new dependency, it will be necessary for you to pull in the changes and re-export the
patch, it will be necessary for you to pull in the changes and re-export the
updated dependencies with the newer version. However, if we switch to a newer
version that no longer requires a patch, no action is required on your part, as
the new recipe will be automatically pulled from the official Conan Center.
@@ -171,8 +189,6 @@ the new recipe will be automatically pulled from the official Conan Center.
> You might need to add `--lockfile=""` to your `conan install` command
> to avoid automatic use of the existing `conan.lock` file when you run
> `conan export` manually on your machine
>
> This is not recommended though, as you might end up using different revisions of recipes.
### Conan profile tweaks
@@ -188,14 +204,39 @@ Possible values are ['5.0', '5.1', '6.0', '6.1', '7.0', '7.3', '8.0', '8.1',
Read "http://docs.conan.io/2/knowledge/faq.html#error-invalid-setting"
```
you need to add your compiler to the list of compiler versions in
`$(conan config home)/settings_user.yml`, by adding the required version number(s)
you need to amend the list of compiler versions in
`$(conan config home)/settings.yml`, by appending the required version number(s)
to the `version` array specific for your compiler. For example:
```yaml
compiler:
apple-clang:
version: ["17.0"]
apple-clang:
version:
[
"5.0",
"5.1",
"6.0",
"6.1",
"7.0",
"7.3",
"8.0",
"8.1",
"9.0",
"9.1",
"10.0",
"11.0",
"12.0",
"13",
"13.0",
"13.1",
"14",
"14.0",
"15",
"15.0",
"16",
"16.0",
"17",
"17.0",
]
```
#### Multiple compilers
@@ -459,21 +500,6 @@ install ccache --version 4.11.3 --allow-downgrade`.
The location of `xrpld` binary in your build directory depends on your
CMake generator. Pass `--help` to see the rest of the command line options.
## Code generation
The protocol wrapper classes in `include/xrpl/protocol_autogen/` are generated
from macro definition files in `include/xrpl/protocol/detail/`. If you modify
the macro files (e.g. `transactions.macro`, `ledger_entries.macro`) or the
generation scripts/templates in `cmake/scripts/codegen/`, you need to regenerate the
files:
```
cmake --build . --target setup_code_gen # create venv and install dependencies (once)
cmake --build . --target code_gen # regenerate code
```
The regenerated files should be committed alongside your changes.
## Coverage report
The coverage report is intended for developers using compilers GCC
@@ -618,8 +644,8 @@ If you want to experiment with a new package, follow these steps:
`default_options` property (with syntax `'$package:$option': $value`).
3. Modify [`CMakeLists.txt`](./CMakeLists.txt):
- Add a call to `find_package($package REQUIRED)`.
- Link a library from the package to the target `xrpl_libs`
(search for the existing call to `target_link_libraries(xrpl_libs INTERFACE ...)`).
- Link a library from the package to the target `ripple_libs`
(search for the existing call to `target_link_libraries(ripple_libs INTERFACE ...)`).
4. Start coding! Don't forget to include whatever headers you need from the package.
[1]: https://github.com/conan-io/conan-center-index/issues/13168

View File

@@ -1,84 +1,94 @@
cmake_minimum_required(VERSION 3.16)
if(POLICY CMP0074)
if (POLICY CMP0074)
cmake_policy(SET CMP0074 NEW)
endif()
if(POLICY CMP0077)
endif ()
if (POLICY CMP0077)
cmake_policy(SET CMP0077 NEW)
endif()
endif ()
# Fix "unrecognized escape" issues when passing CMAKE_MODULE_PATH on Windows.
if(DEFINED CMAKE_MODULE_PATH)
if (DEFINED CMAKE_MODULE_PATH)
file(TO_CMAKE_PATH "${CMAKE_MODULE_PATH}" CMAKE_MODULE_PATH)
endif()
endif ()
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/cmake")
project(xrpl)
set(CMAKE_CXX_EXTENSIONS OFF)
set(CMAKE_CXX_STANDARD 20)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(CMAKE_EXPORT_COMPILE_COMMANDS ON)
include(CompilationEnv)
if(is_gcc)
if (is_gcc)
# GCC-specific fixes
add_compile_options(-Wno-unknown-pragmas -Wno-subobject-linkage)
# -Wno-subobject-linkage can be removed when we upgrade GCC version to at least 13.3
elseif(is_clang)
elseif (is_clang)
# Clang-specific fixes
add_compile_options(-Wno-unknown-warning-option) # Ignore unknown warning options
elseif(is_msvc)
elseif (is_msvc)
# MSVC-specific fixes
add_compile_options(/wd4068) # Ignore unknown pragmas
endif()
endif ()
# Enable ccache to speed up builds.
include(Ccache)
if(thread_safety_analysis)
add_compile_options(
-Wthread-safety
-D_LIBCPP_ENABLE_THREAD_SAFETY_ANNOTATIONS
-DXRPL_ENABLE_THREAD_SAFETY_ANNOTATIONS
)
# make GIT_COMMIT_HASH define available to all sources
find_package(Git)
if (Git_FOUND)
execute_process(COMMAND ${GIT_EXECUTABLE} --git-dir=${CMAKE_CURRENT_SOURCE_DIR}/.git rev-parse HEAD
OUTPUT_STRIP_TRAILING_WHITESPACE OUTPUT_VARIABLE gch)
if (gch)
set(GIT_COMMIT_HASH "${gch}")
message(STATUS gch: ${GIT_COMMIT_HASH})
add_definitions(-DGIT_COMMIT_HASH="${GIT_COMMIT_HASH}")
endif ()
execute_process(COMMAND ${GIT_EXECUTABLE} --git-dir=${CMAKE_CURRENT_SOURCE_DIR}/.git rev-parse --abbrev-ref HEAD
OUTPUT_STRIP_TRAILING_WHITESPACE OUTPUT_VARIABLE gb)
if (gb)
set(GIT_BRANCH "${gb}")
message(STATUS gb: ${GIT_BRANCH})
add_definitions(-DGIT_BRANCH="${GIT_BRANCH}")
endif ()
endif () # git
if (thread_safety_analysis)
add_compile_options(-Wthread-safety -D_LIBCPP_ENABLE_THREAD_SAFETY_ANNOTATIONS
-DXRPL_ENABLE_THREAD_SAFETY_ANNOTATIONS)
add_compile_options("-stdlib=libc++")
add_link_options("-stdlib=libc++")
endif()
endif ()
include(CheckCXXCompilerFlag)
include(FetchContent)
include(ExternalProject)
include(CMakeFuncs) # must come *after* ExternalProject b/c it overrides one function in EP
if(target)
message(
FATAL_ERROR
"The target option has been removed - use native cmake options to control build"
)
endif()
if (target)
message(FATAL_ERROR "The target option has been removed - use native cmake options to control build")
endif ()
include(XrplSanity)
include(XrplVersion)
include(XrplSettings)
# this check has to remain in the top-level cmake because of the early return statement
if(packages_only)
if(NOT TARGET rpm)
message(
FATAL_ERROR
"packages_only requested, but targets were not created - is docker installed?"
)
endif()
if (packages_only)
if (NOT TARGET rpm)
message(FATAL_ERROR "packages_only requested, but targets were not created - is docker installed?")
endif ()
return()
endif()
endif ()
include(XrplCompiler)
include(XrplSanitizers)
include(XrplInterface)
option(only_docs "Include only the docs target?" FALSE)
include(XrplDocs)
if(only_docs)
if (only_docs)
return()
endif()
endif ()
include(deps/Boost)
@@ -97,46 +107,41 @@ find_package(xxHash REQUIRED)
target_link_libraries(
xrpl_libs
INTERFACE
ed25519::ed25519
lz4::lz4
OpenSSL::Crypto
OpenSSL::SSL
secp256k1::secp256k1
soci::soci
SQLite::SQLite3
)
INTERFACE ed25519::ed25519
lz4::lz4
OpenSSL::Crypto
OpenSSL::SSL
secp256k1::secp256k1
soci::soci
SQLite::SQLite3)
option(rocksdb "Enable RocksDB" ON)
if(rocksdb)
if (rocksdb)
find_package(RocksDB REQUIRED)
set_target_properties(
RocksDB::rocksdb
PROPERTIES INTERFACE_COMPILE_DEFINITIONS XRPL_ROCKSDB_AVAILABLE=1
)
set_target_properties(RocksDB::rocksdb PROPERTIES INTERFACE_COMPILE_DEFINITIONS XRPL_ROCKSDB_AVAILABLE=1)
target_link_libraries(xrpl_libs INTERFACE RocksDB::rocksdb)
endif()
endif ()
# Work around changes to Conan recipe for now.
if(TARGET nudb::core)
if (TARGET nudb::core)
set(nudb nudb::core)
elseif(TARGET NuDB::nudb)
elseif (TARGET NuDB::nudb)
set(nudb NuDB::nudb)
else()
else ()
message(FATAL_ERROR "unknown nudb target")
endif()
endif ()
target_link_libraries(xrpl_libs INTERFACE ${nudb})
if(coverage)
if (coverage)
include(XrplCov)
endif()
endif ()
set(PROJECT_EXPORT_SET XrplExports)
include(XrplCore)
include(XrplProtocolAutogen)
include(XrplInstall)
include(XrplValidatorKeys)
if(tests)
if (tests)
include(CTest)
add_subdirectory(src/tests/libxrpl)
endif()
endif ()

View File

@@ -127,6 +127,26 @@ tl;dr
> 6. Wrap the body at 72 characters.
> 7. Use the body to explain what and why vs. how.
In addition to those guidelines, please add one of the following
prefixes to the subject line if appropriate.
- `fix:` - The primary purpose is to fix an existing bug.
- `perf:` - The primary purpose is performance improvements.
- `refactor:` - The changes refactor code without affecting
functionality.
- `test:` - The changes _only_ affect unit tests.
- `docs:` - The changes _only_ affect documentation. This can
include code comments in addition to `.md` files like this one.
- `build:` - The changes _only_ affect the build process,
including CMake and/or Conan settings.
- `chore:` - Other tasks that don't affect the binary, but don't fit
any of the other cases. e.g. formatting, git settings, updating
Github Actions jobs.
Whenever possible, when updating commits after the PR is open, please
add the PR number to the end of the subject line. e.g. `test: Add
unit tests for Feature X (#1234)`.
## Pull requests
In general, pull requests use `develop` as the base branch.
@@ -160,23 +180,6 @@ credibility of the existing approvals is insufficient.
Pull requests must be merged by [squash-and-merge][squash]
to preserve a linear history for the `develop` branch.
### Type of Change
In addition to those guidelines, please start your PR title with one of the following:
- `build:` - The changes _only_ affect the build process, including CMake and/or Conan settings.
- `feat:` - New feature (a change which adds functionality).
- `fix:` - The primary purpose is to fix an existing bug.
- `docs:` - The changes _only_ affect documentation.
- `test:` - The changes _only_ affect unit tests.
- `ci:` - Continuous integration (changes to our CI configuration files and scripts).
- `style:` - Code style (formatting).
- `refactor:` - The changes refactor code without affecting functionality.
- `perf:` - The primary purpose is performance improvements.
- `chore:` - Other tasks that don't affect the binary, but don't fit any of the other cases. e.g. `git` settings, `clang-tidy`, removing dead code, dropping support for older tooling.
The first letter after the type prefix should be capitalized, and the type prefix should be followed by a colon and a space. e.g. `feat: Add support for Borrowing Protocol`.
### "Ready to merge"
A pull request should only have the "Ready to merge" label added when it
@@ -248,58 +251,6 @@ pip3 install pre-commit
pre-commit install
```
## Clang-tidy
All code must pass `clang-tidy` checks according to the settings in [`.clang-tidy`](./.clang-tidy).
There is a Continuous Integration job that runs clang-tidy on pull requests. The CI will check:
- All changed C++ files (`.cpp`, `.h`, `.ipp`) when only code files are modified
- **All files in the repository** when the `.clang-tidy` configuration file is changed
This ensures that configuration changes don't introduce new warnings across the codebase.
### Installing clang-tidy
See the [environment setup guide](./docs/build/environment.md#clang-tidy) for platform-specific installation instructions.
### Running clang-tidy locally
Before running clang-tidy, you must build the project to generate required files (particularly protobuf headers). Refer to [`BUILD.md`](./BUILD.md) for build instructions.
#### Via pre-commit (recommended)
If you have already installed the pre-commit hooks (see above), you can run clang-tidy on your staged files using:
```
TIDY=1 pre-commit run clang-tidy
```
This runs clang-tidy locally with the same configuration/flags as CI, scoped to your staged C++ files. The `TIDY=1` environment variable is required to opt in — without it the hook is skipped.
You can also have clang-tidy run automatically on every `git commit` by setting `TIDY=1` in your shell environment:
```
export TIDY=1
```
With this set, the hook will run as part of `git commit` alongside the other pre-commit checks.
#### Manually
Once the project is built, run clang-tidy on your local changes:
```
run-clang-tidy -p build -allow-no-checks src tests
```
This will check all source files in the `src` and `tests` directories using the compile commands from your `build` directory.
If you wish to automatically fix whatever clang-tidy finds _and_ is capable of fixing, add `-fix` to the above command:
```
run-clang-tidy -p build -quiet -fix -allow-no-checks src tests
```
## Contracts and instrumentation
We are using [Antithesis](https://antithesis.com/) for continuous fuzzing,
@@ -553,7 +504,7 @@ All releases, including release candidates and betas, are handled
differently from typical PRs. Most importantly, never use
the Github UI to merge a release.
Xrpld uses a linear workflow model that can be summarized as:
Rippled uses a linear workflow model that can be summarized as:
1. In between releases, developers work against the `develop` branch.
2. Periodically, a maintainer will build and tag a beta version from

View File

@@ -1,7 +1,7 @@
ISC License
Copyright (c) 2011, Arthur Britto, David Schwartz, Jed McCaleb, Vinnie Falco, Bob Way, Eric Lombrozo, Nikolaos D. Bougalis, Howard Hinnant.
Copyright (c) 2012-present, the XRP Ledger developers.
Copyright (c) 2012-2025, the XRP Ledger developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above

View File

@@ -1,567 +0,0 @@
# Distributed Tracing Fundamentals
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Next**: [Architecture Analysis](./01-architecture-analysis.md)
---
## What is Distributed Tracing?
Distributed tracing is a method for tracking data objects as they flow through distributed systems. In a network like XRP Ledger, a single transaction touches multiple independent nodes—each with no shared memory or logging. Distributed tracing connects these dots.
**Without tracing:** You see isolated logs on each node with no way to correlate them.
**With tracing:** You see the complete journey of a transaction or an event across all nodes it touched.
---
## Actors and Actions at a Glance
### Actors
| Who (Plain English) | Technical Term |
| ---------------------------------------------- | --------------- |
| A single unit of work being tracked | Span |
| The complete journey of a request | Trace |
| Data that links spans across services | Trace Context |
| Code that creates spans and propagates context | Instrumentation |
| Service that receives and processes traces | Collector |
| Storage and visualization system | Backend (Tempo) |
| Decision logic for which traces to keep | Sampler |
### Actions
| What Happens (Plain English) | Technical Term |
| --------------------------------------- | ----------------------- |
| Start tracking a new operation | Create a Span |
| Connect a child operation to its parent | Set `parent_span_id` |
| Group all related operations together | Share a `trace_id` |
| Pass tracking data between services | Context Propagation |
| Decide whether to record a trace | Sampling (Head or Tail) |
| Send completed traces to storage | Export (OTLP) |
---
## Core Concepts
### 1. Trace
A **trace** represents the entire journey of a request through the system. It has a unique `trace_id` that stays constant across all nodes.
```
Trace ID: abc123
├── Node A: received transaction
├── Node B: relayed transaction
├── Node C: included in consensus
└── Node D: applied to ledger
```
### 2. Span
A **span** represents a single unit of work within a trace. Each span has:
| Attribute | Description | Example |
| ---------------- | -------------------------------- | -------------------------- |
| `trace_id` | Identifies the trace | `abc123` |
| `span_id` | Unique identifier | `span456` |
| `parent_span_id` | Parent span (if any) | `p_span123` |
| `name` | Operation name | `rpc.submit` |
| `start_time` | When work began (local time) | `2024-01-15T10:30:00Z` |
| `end_time` | When work completed (local time) | `2024-01-15T10:30:00.050Z` |
| `attributes` | Key-value metadata | `tx.hash=ABC...` |
| `status` | Status code (OK or ERROR) with optional message | `OK` |
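To make these attributes concrete, here is a minimal C++ sketch of creating and ending a span with the `opentelemetry-cpp` client API. It is illustrative only; the tracer name `"xrpld"` and the attribute key are placeholder assumptions, not existing instrumentation.
```cpp
#include <opentelemetry/trace/provider.h>

namespace trace_api = opentelemetry::trace;

void validateTransaction()
{
    // Acquire the globally registered tracer provider (a no-op provider
    // unless an SDK exporter has been installed).
    auto provider = trace_api::Provider::GetTracerProvider();
    auto tracer = provider->GetTracer("xrpld");

    // StartSpan assigns trace_id, span_id, and start_time automatically;
    // the currently active span (if any) becomes the parent.
    auto span = tracer->StartSpan("tx.validate");
    span->SetAttribute("tx.hash", "ABC...");

    // ... the work being measured ...

    span->SetStatus(trace_api::StatusCode::kOk);
    span->End();  // records end_time
}
```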
### 3. Trace Context
**Trace context** is the data that propagates between services to link spans together. It contains:
- `trace_id` - The trace this span belongs to
- `span_id` - The current span (becomes parent for child spans)
- `trace_flags` - Sampling decisions
---
## How Spans Form a Trace
Spans have parent-child relationships forming a tree structure:
```mermaid
flowchart TB
subgraph trace["Trace: abc123"]
A["tx.submit<br/>span_id: 001<br/>50ms"] --> B["tx.validate<br/>span_id: 002<br/>5ms"]
A --> C["tx.relay<br/>span_id: 003<br/>10ms"]
A --> D["tx.apply<br/>span_id: 004<br/>30ms"]
D --> E["ledger.update<br/>span_id: 005<br/>20ms"]
end
style A fill:#0d47a1,stroke:#082f6a,color:#ffffff
style B fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style C fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style D fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style E fill:#bf360c,stroke:#8c2809,color:#ffffff
```
**Reading the diagram:**
- **tx.submit (blue, root)**: The top-level span representing the entire transaction submission; all other spans are its descendants.
- **tx.validate, tx.relay, tx.apply (green)**: Direct children of tx.submit, representing the three main stages -- validation, relay to peers, and application to the ledger.
- **ledger.update (red)**: A grandchild span nested under tx.apply, representing the actual ledger state mutation triggered by applying the transaction.
- **Arrows (parent to child)**: Each arrow indicates a parent-child span relationship where the parent's completion depends on the child finishing.
The same trace visualized as a **timeline (Gantt chart)**:
```
Time → 0ms 10ms 20ms 30ms 40ms 50ms
├───────────────────────────────────────────┤
tx.submit│▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓│
├─────┤
tx.valid │▓▓▓▓▓│
│ ├──────────┤
tx.relay │ │▓▓▓▓▓▓▓▓▓▓│
│ ├────────────────────────────┤
tx.apply │ │▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓│
│ ├──────────────────┤
ledger │ │▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓│
```
---
## Span Relationships
Spans don't always form simple parent-child trees. Distributed tracing defines several relationship types to capture different causal patterns:
### 1. Parent-Child (ChildOf)
The default relationship. The parent span **depends on** or **contains** the child span. The child runs within the scope of the parent.
```
tx.submit (parent)
├── tx.validate (child) ← parent waits for this
├── tx.relay (child) ← parent waits for this
└── tx.apply (child) ← parent waits for this
```
**When to use:** Synchronous calls, nested operations, any case where the parent's completion depends on the child.
### 2. Follows-From
A causal relationship where the first span **triggers** the second, but does **not wait** for it. The originator fires and moves on.
```
Time →
tx.receive [=======]
↓ triggers (follows-from)
tx.relay [===========] ← runs independently
```
**When to use:** Asynchronous jobs, queued work, fire-and-forget patterns. For example, a node receives a transaction and queues it for relay — the relay span _follows from_ the receive span but the receiver doesn't wait for relaying to complete.
> **OpenTracing** defined `FollowsFrom` as a first-class reference type alongside `ChildOf`.
> **OpenTelemetry** represents this using **Span Links** with descriptive attributes instead (see below).
### 3. Span Links (Cross-Trace and Non-Hierarchical)
Links connect spans that are **causally related but not in a parent-child hierarchy**. Unlike parent-child, links can cross trace boundaries.
```
Trace A Trace B
────── ──────
batch.schedule batch.execute
├─ item.enqueue (span X) ┌──► process.item
├─ item.enqueue (span Y) ───┤ (links to X, Y, Z)
├─ item.enqueue (span Z) └──►
```
**Use cases:**
| Pattern | Description |
| -------------------- | --------------------------------------------------------------------------- |
| **Batch processing** | A batch span links back to all individual spans that contributed to it |
| **Fan-in** | An aggregation span links to the multiple producer spans it merges |
| **Fan-out** | Multiple downstream spans link back to the single span that triggered them |
| **Async handoff** | A deferred job links back to the request that queued it (follows-from) |
| **Cross-trace** | Correlating spans across independent traces (e.g., retries, related events) |
**Link structure:** Each link carries the target span's context plus optional attributes:
```
Link {
trace_id: <target trace>
span_id: <target span>
attributes: { "link.description": "triggered by batch scheduler" }
}
```
### Relationship Summary
```mermaid
flowchart LR
subgraph parent_child["Parent-Child"]
direction TB
P["Parent"] --> C["Child"]
end
subgraph follows_from["Follows-From"]
direction TB
A["Span A"] -.->|triggers| B["Span B"]
end
subgraph links["Span Links"]
direction TB
X["Span X\n(Trace 1)"] -.-|link| Y["Span Y\n(Trace 2)"]
end
parent_child ~~~ follows_from ~~~ links
style P fill:#0d47a1,stroke:#082f6a,color:#ffffff
style C fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style A fill:#0d47a1,stroke:#082f6a,color:#ffffff
style B fill:#bf360c,stroke:#8c2809,color:#ffffff
style X fill:#4a148c,stroke:#38006b,color:#ffffff
style Y fill:#4a148c,stroke:#38006b,color:#ffffff
```
| Relationship | Same Trace? | Dependency? | OTel Mechanism |
| ---------------- | ----------- | -------------------------- | ----------------- |
| **Parent-Child** | Yes | Parent depends on child | `parent_span_id` |
| **Follows-From** | Usually | Causal but no dependency | Link + attributes |
| **Span Link** | Either | Correlation, no dependency | Link + attributes |
---
## Trace ID Generation
A `trace_id` is a 128-bit (16-byte) identifier that groups all spans belonging to one logical operation. How it's generated determines how easily you can find and correlate traces later.
### General Approaches
#### 1. Random (W3C Default)
Generate a random 128-bit ID when a trace starts. Standard approach for most services.
```
trace_id = random_128_bits()
```
| Pros | Cons |
| --------------------------- | --------------------------------------------- |
| Simple, standard | No natural correlation to domain events |
| Guaranteed unique per trace | If propagation is lost, trace is broken |
| Works with all OTel tooling | "Find trace for TX abc" requires index lookup |
#### 2. Deterministic (Derived from Domain Data)
Compute the trace_id from a hash of a natural identifier. Every node independently derives the **same** trace_id for the same event.
```
trace_id = SHA-256(domain_identifier)[0:16] // truncate to 128 bits
```
| Pros | Cons |
| --------------------------------------------------- | ---------------------------------------------------------- |
| Propagation-resilient — same ID computed everywhere | Same event processed twice (retry) shares trace_id |
| Natural search — domain ID maps directly to trace | Non-standard (tooling assumes random) |
| No coordination needed between nodes | 256→128 bit truncation (collision risk negligible at ~2⁶⁴) |
#### 3. Hybrid (Deterministic Prefix + Random Suffix)
First 8 bytes derived from domain data, last 8 bytes random.
```
trace_id = SHA-256(domain_identifier)[0:8] || random_64_bits()
```
| Pros | Cons |
| ------------------------------------------- | ---------------------------------------- |
| Prefix search: "find all traces for TX abc" | Must propagate to maintain full trace_id |
| Unique per processing instance | More complex generation logic |
| Retries get distinct trace_ids | Partial correlation only (prefix match) |
### XRPL Workflow Analysis
XRPL has a unique advantage: its core workflows produce **globally unique 256-bit hashes** that are known on every node. This makes deterministic trace_id generation practical in ways most systems can't achieve.
#### Natural Identifiers by Workflow
| Workflow | Natural Identifier | Size | Known at Start? | Same on All Nodes? |
| ------------------- | --------------------------------- | ---------- | ----------------------------- | -------------------------------- |
| **Transaction** | Transaction hash (`tid_`) | 256-bit | Yes — computed before signing | Yes — hash of canonical tx data |
| **Consensus round** | Previous ledger hash + ledger seq | 256+32 bit | Yes — known when round opens | Yes — all validators agree |
| **Validation** | Ledger hash being validated | 256-bit | Yes — from consensus result | Yes — same closed ledger |
| **Ledger catch-up** | Target ledger hash | 256-bit | Yes — we know what to fetch | Yes — identifies ledger globally |
#### Where These Identifiers Live in Code
```
Transaction: STTx::getTransactionID() → uint256 tid_
TMTransaction::rawTransaction → recompute hash from bytes
Consensus: ConsensusProposal::prevLedger_ → uint256 (previous ledger hash)
ConsensusProposal::position_ → uint256 (TxSet hash)
LedgerHeader::seq → uint32_t (ledger sequence)
Validation: STValidation::getLedgerHash() → uint256
STValidation::getNodeID() → NodeID (160-bit)
Ledger fetch: InboundLedger constructor → uint256 hash, uint32_t seq
TMGetLedger::ledgerHash → bytes (uint256)
```
### Recommended Strategy: Workflow-Scoped Deterministic
Each workflow type derives its trace_id from its natural domain identifier:
```
Transaction trace: trace_id = SHA-256("tx" || tx_hash)[0:16]
Consensus trace: trace_id = SHA-256("cons" || prev_ledger_hash || ledger_seq)[0:16]
Ledger catch-up: trace_id = SHA-256("fetch" || target_ledger_hash)[0:16]
```
The string prefix (`"tx"`, `"cons"`, `"fetch"`) prevents collisions between workflows that might share underlying hashes.
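As a sketch, the derivation is only a few lines of C++. OpenSSL's one-shot `SHA256` stands in for whatever hash implementation would actually be used; nothing here is existing xrpld code.
```cpp
#include <openssl/sha.h>  // assumption: OpenSSL is already available

#include <algorithm>
#include <array>
#include <cstdint>
#include <string>
#include <vector>

// trace_id = SHA-256(prefix || domain_hash)[0:16]
std::array<std::uint8_t, 16>
deriveTraceId(std::string const& prefix, std::vector<std::uint8_t> const& domainHash)
{
    std::vector<std::uint8_t> input(prefix.begin(), prefix.end());
    input.insert(input.end(), domainHash.begin(), domainHash.end());

    std::uint8_t digest[SHA256_DIGEST_LENGTH];
    SHA256(input.data(), input.size(), digest);

    // Truncate the 256-bit digest to the 128 bits a trace_id holds.
    std::array<std::uint8_t, 16> traceId{};
    std::copy_n(digest, traceId.size(), traceId.begin());
    return traceId;
}
```
Every node that runs this over the same prefix and domain hash derives the same trace_id, which is exactly the propagation-resilience property described below.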
**Why this works for XRPL:**
1. **Propagation-resilient** — Even if a P2P message drops trace context, every node independently computes the same trace_id from the same tx_hash or ledger_hash. Spans still correlate.
2. **Zero-cost search** — "Show me the trace for transaction ABC" becomes a direct lookup: compute `SHA-256("tx" || ABC)[0:16]` and query. No secondary index needed.
3. **Cross-workflow linking via Span Links** — A consensus trace links to individual transaction traces. A validation span links to the consensus trace. This connects the full picture without forcing everything into one giant trace.
### Cross-Workflow Correlation
Each workflow gets its own trace. Span Links tie them together:
```mermaid
flowchart TB
subgraph tx_trace["Transaction Trace"]
direction LR
Tn["trace_id = f(tx_hash)"]:::note --> T1["tx.receive"] --> T2["tx.validate"] --> T3["tx.relay"]
end
subgraph cons_trace["Consensus Trace"]
direction LR
Cn["trace_id = f(prev_ledger, seq)"]:::note --> C1["cons.open"] --> C2["cons.propose"] --> C3["cons.accept"]
end
subgraph val_trace["Validation"]
direction LR
Vn["spans within consensus trace"]:::note --> V1["val.create"] --> V2["val.broadcast"]
end
subgraph fetch_trace["Catch-Up Trace"]
direction LR
Fn["trace_id = f(ledger_hash)"]:::note --> F1["fetch.request"] --> F2["fetch.receive"] --> F3["fetch.apply"]
end
C1 -.-|"span link\n(tx traces)"| T3
C3 --> V1
F1 -.-|"span link\n(target ledger)"| C3
classDef note fill:none,stroke:#888,stroke-dasharray:5 5,color:#333,font-style:italic
style T1 fill:#0d47a1,stroke:#082f6a,color:#ffffff
style T2 fill:#0d47a1,stroke:#082f6a,color:#ffffff
style T3 fill:#0d47a1,stroke:#082f6a,color:#ffffff
style C1 fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style C2 fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style C3 fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style V1 fill:#bf360c,stroke:#8c2809,color:#ffffff
style V2 fill:#bf360c,stroke:#8c2809,color:#ffffff
style F1 fill:#4a148c,stroke:#38006b,color:#ffffff
style F2 fill:#4a148c,stroke:#38006b,color:#ffffff
style F3 fill:#4a148c,stroke:#38006b,color:#ffffff
```
**Reading the diagram:**
- **Transaction Trace (blue)**: An independent trace whose `trace_id` is deterministically derived from the transaction hash. Contains receive, validate, and relay spans.
- **Consensus Trace (green)**: An independent trace whose `trace_id` is derived from the previous ledger hash and sequence number. Covers the open, propose, and accept phases.
- **Validation (red)**: Validation spans live within the consensus trace (not a separate trace). They are created after the accept phase completes.
- **Catch-Up Trace (purple)**: An independent trace for ledger acquisition, derived from the target ledger hash. Used when a node is behind and fetching missing ledgers.
- **Dotted arrows (span links)**: Cross-trace correlations. Consensus links to transaction traces it included; catch-up links to the consensus trace that produced the target ledger.
- **Solid arrow (C3 to V1)**: A parent-child relationship -- validation spans are direct children of the consensus accept span within the same trace.
**How a query flows:**
```
"Why was TX abc slow?"
1. Compute trace_id = SHA-256("tx" || abc)[0:16]
2. Find transaction trace → see it was included in consensus round N
3. Follow span link → consensus trace for round N
4. See which phase was slow (propose? accept?)
5. If a node was catching up, follow link → catch-up trace
```
### Trade-offs to Consider
| Concern | Mitigation |
| ----------------------------- | ----------------------------------------------------------------------------------------------------------------------------- |
| **Retries get same trace_id** | Add `attempt` attribute to root span; spans have unique span_ids and timestamps |
| **256→128 bit truncation** | Birthday-bound collision at ~2⁶⁴ operations — negligible for XRPL's throughput |
| **Non-standard generation** | OTel spec allows any 16-byte non-zero value; tooling works on the hex string |
| **Hash computation cost** | SHA-256 is ~0.3μs per call; XRPL already computes these hashes for other purposes |
| **Late-binding identifiers** | Ledger hash isn't known until after consensus — validation spans use ledger_seq as fallback, then link to the consensus trace |
---
## Distributed Traces Across Nodes
In distributed systems like xrpld, traces span **multiple independent nodes**. The trace context must be propagated in network messages:
```mermaid
sequenceDiagram
participant Client
participant NodeA as Node A
participant NodeB as Node B
participant NodeC as Node C
Client->>NodeA: Submit TX<br/>(no trace context)
Note over NodeA: Creates new trace<br/>trace_id: abc123<br/>span: tx.receive
NodeA->>NodeB: Relay TX<br/>(trace_id: abc123, parent: 001)
Note over NodeB: Creates child span<br/>span: tx.relay<br/>parent_span_id: 001
NodeA->>NodeC: Relay TX<br/>(trace_id: abc123, parent: 001)
Note over NodeC: Creates child span<br/>span: tx.relay<br/>parent_span_id: 001
Note over NodeA,NodeC: All spans share trace_id: abc123<br/>enabling correlation across nodes
```
**Reading the diagram:**
- **Client**: The external entity that submits a transaction. It does not carry trace context -- the trace originates at the first node.
- **Node A**: The entry point that creates a new trace (trace_id: abc123) and the root span `tx.receive`. It relays the transaction to peers with trace context attached.
- **Node B and Node C**: Peer nodes that receive the relayed transaction along with the propagated trace context. Each creates a child span under Node A's span, preserving the same `trace_id`.
- **Arrows with trace context**: The relay messages carry `trace_id` and `parent_span_id`, allowing each downstream node to link its spans back to the originating span on Node A.
---
## Context Propagation
For traces to work across nodes, **trace context must be propagated** in messages.
### What's in the Context (~26 bytes)
| Field | Size | Description |
| ------------- | -------- | ------------------------------------------------------- |
| `trace_id` | 16 bytes | Identifies the entire trace (constant across all nodes) |
| `span_id` | 8 bytes | The sender's current span (becomes parent on receiver) |
| `trace_flags` | 1 byte | Sampling decision (bit 0 = sampled; bits 1-7 reserved) |
| `trace_state` | variable | Optional vendor-specific data (typically omitted) |
### How span_id Changes at Each Hop
Only **one** `span_id` travels in the context - the sender's current span. Each node:
1. Extracts the received `span_id` and uses it as the `parent_span_id`
2. Creates a **new** `span_id` for its own span
3. Sends its own `span_id` as the parent when forwarding
```
Node A Node B Node C
────── ────── ──────
Span AAA Span BBB Span CCC
│ │ │
▼ ▼ ▼
Context out: Context out: Context out:
├─ trace_id: abc123 ├─ trace_id: abc123 ├─ trace_id: abc123
├─ span_id: AAA ──────────► ├─ span_id: BBB ──────────► ├─ span_id: CCC ──────►
└─ flags: 01 └─ flags: 01 └─ flags: 01
│ │
parent = AAA parent = BBB
```
The `trace_id` stays constant, but `span_id` **changes at every hop** to maintain the parent-child chain.
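The hop logic can be sketched in a few lines of C++. The `TraceContext` struct below is hypothetical, mirroring the ~26-byte payload described above; it is not an existing type.
```cpp
#include <array>
#include <cstdint>
#include <cstring>
#include <random>

// Hypothetical in-memory form of the propagated context.
struct TraceContext
{
    std::array<std::uint8_t, 16> traceId;  // constant for the whole trace
    std::array<std::uint8_t, 8> spanId;    // sender's current span
    std::uint8_t traceFlags;               // bit 0 = sampled
};

std::array<std::uint8_t, 8> newSpanId()
{
    static thread_local std::mt19937_64 rng{std::random_device{}()};
    std::uint64_t const bits = rng();
    std::array<std::uint8_t, 8> id;
    std::memcpy(id.data(), &bits, id.size());
    return id;
}

// On receipt, the incoming span_id becomes our local span's parent_span_id;
// we mint a fresh span_id and forward it, leaving trace_id and flags alone.
TraceContext forward(TraceContext const& incoming)
{
    auto const parentSpanId = incoming.spanId;  // parent of the local span
    (void)parentSpanId;  // ... attached to the span created locally ...

    TraceContext outgoing = incoming;
    outgoing.spanId = newSpanId();  // changes at every hop
    return outgoing;
}
```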
### Propagation Formats
There are two patterns:
### HTTP/RPC Headers (W3C Trace Context)
```
traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01
│ │ │ │
│ │ │ └── Flags (sampled)
│ │ └── Parent span ID (16 hex)
│ └── Trace ID (32 hex)
└── Version
```
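Parsing the header is mechanical. A minimal sketch (not xrpld code) that splits the header and validates the version-00 field widths:
```cpp
#include <cstddef>
#include <optional>
#include <string>
#include <vector>

// Split "version-trace_id-span_id-flags" and check the version-00 layout.
// Returns nullopt for anything that does not match.
std::optional<std::vector<std::string>>
parseTraceparent(std::string const& header)
{
    std::vector<std::string> parts;
    std::size_t start = 0;
    while (true)
    {
        auto const dash = header.find('-', start);
        parts.push_back(header.substr(start, dash - start));
        if (dash == std::string::npos)
            break;
        start = dash + 1;
    }

    // version(2) - trace_id(32) - span_id(16) - flags(2), lowercase hex
    if (parts.size() != 4 || parts[0] != "00" || parts[1].size() != 32 ||
        parts[2].size() != 16 || parts[3].size() != 2)
        return std::nullopt;
    return parts;
}
```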
### Protocol Buffers (xrpld P2P messages)
```protobuf
message TMTransaction {
bytes rawTransaction = 1;
// ... existing fields ...
// Trace context extension
bytes trace_parent = 100; // W3C traceparent
bytes trace_state = 101; // W3C tracestate
}
```
---
## Sampling
Not every trace needs to be recorded. **Sampling** reduces overhead:
### Head Sampling (at trace start)
```
Request arrives → Random 10% chance → Record or skip entire trace
```
- ✅ Low overhead
- ❌ May miss interesting traces
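A head sampler is little more than a coin flip at trace creation; a sketch (the class name and ratio are illustrative):
```cpp
#include <random>

// Decide once, when the trace starts; the result travels in trace_flags
// so every downstream node honors the same decision.
class HeadSampler
{
public:
    explicit HeadSampler(double ratio) : ratio_(ratio) {}

    bool shouldSample()
    {
        static thread_local std::mt19937_64 rng{std::random_device{}()};
        std::uniform_real_distribution<double> dist(0.0, 1.0);
        return dist(rng) < ratio_;  // ratio_ = 0.10 keeps ~10% of traces
    }

private:
    double ratio_;
};
```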
### Tail Sampling (after trace completes)
```
Trace completes → Collector evaluates:
- Error? → KEEP
- Slow? → KEEP
- Normal? → Sample 10%
```
- ✅ Never loses important traces
- ❌ Higher memory usage at collector
---
## Key Benefits for xrpld
| Challenge | How Tracing Helps |
| ---------------------------------- | ---------------------------------------- |
| "Where is my transaction?" | Follow trace across all nodes it touched |
| "Why was consensus slow?" | See timing breakdown of each phase |
| "Which node is the bottleneck?" | Compare span durations across nodes |
| "What happened during the outage?" | Correlate errors across the network |
---
## Glossary
| Term | Definition |
| -------------------- | ------------------------------------------------------------------- |
| **Trace** | Complete journey of a request, identified by `trace_id` |
| **Span** | Single operation within a trace |
| **Parent-Child** | Span relationship where the parent depends on the child |
| **Follows-From** | Causal relationship where originator doesn't wait for the result |
| **Span Link** | Non-hierarchical connection between spans, possibly across traces |
| **Deterministic ID** | Trace ID derived from domain data (e.g., tx_hash) instead of random |
| **Context** | Data propagated between services (`trace_id`, `span_id`, flags) |
| **Instrumentation** | Code that creates spans and propagates context |
| **Collector** | Service that receives, processes, and exports traces |
| **Backend** | Storage/visualization system (Tempo) |
| **Head Sampling** | Sampling decision at trace start |
| **Tail Sampling** | Sampling decision after trace completes |
---
_Next: [Architecture Analysis](./01-architecture-analysis.md)_ | _Back to: [Overview](./OpenTelemetryPlan.md)_

View File

@@ -1,467 +0,0 @@
# Architecture Analysis
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Related**: [Design Decisions](./02-design-decisions.md) | [Implementation Strategy](./03-implementation-strategy.md)
---
## 1.1 Current xrpld Architecture Overview
> **WS** = WebSocket | **UNL** = Unique Node List | **TxQ** = Transaction Queue | **StatsD** = Statistics Daemon
The xrpld node software consists of several interconnected components that need instrumentation for distributed tracing:
```mermaid
flowchart TB
subgraph xrpld["xrpld Node"]
subgraph services["Core Services"]
RPC["RPC Server<br/>(HTTP/WS/gRPC)"]
Overlay["Overlay<br/>(P2P Network)"]
Consensus["Consensus<br/>(RCLConsensus)"]
ValidatorList["ValidatorList<br/>(UNL Mgmt)"]
end
JobQueue["JobQueue<br/>(Thread Pool)"]
subgraph processing["Processing Layer"]
NetworkOPs["NetworkOPs<br/>(Tx Processing)"]
LedgerMaster["LedgerMaster<br/>(Ledger Mgmt)"]
NodeStore["NodeStore<br/>(Database)"]
InboundLedgers["InboundLedgers<br/>(Ledger Sync)"]
end
subgraph appservices["Application Services"]
PathFind["PathFinding<br/>(Payment Paths)"]
TxQ["TxQ<br/>(Fee Escalation)"]
LoadMgr["LoadManager<br/>(Fee/Load)"]
end
subgraph observability["Existing Observability"]
PerfLog["PerfLog<br/>(JSON)"]
Insight["Insight<br/>(StatsD)"]
Logging["Logging<br/>(Journal)"]
end
services --> JobQueue
JobQueue --> processing
JobQueue --> appservices
end
style xrpld fill:#424242,stroke:#212121,color:#ffffff
style services fill:#1565c0,stroke:#0d47a1,color:#ffffff
style processing fill:#2e7d32,stroke:#1b5e20,color:#ffffff
style appservices fill:#6a1b9a,stroke:#4a148c,color:#ffffff
style observability fill:#e65100,stroke:#bf360c,color:#ffffff
```
**Reading the diagram:**
- **Core Services (blue)**: The entry points into xrpld -- RPC Server handles client requests, Overlay manages peer-to-peer networking, Consensus drives agreement, and ValidatorList manages trusted validators.
- **JobQueue (center)**: The asynchronous thread pool that decouples Core Services from the Processing and Application layers. All work flows through it.
- **Processing Layer (green)**: Core business logic -- NetworkOPs processes transactions, LedgerMaster manages ledger state, NodeStore handles persistence, and InboundLedgers synchronizes missing data.
- **Application Services (purple)**: Higher-level features -- PathFinding computes payment routes, TxQ manages fee-based queuing, and LoadManager tracks server load.
- **Existing Observability (orange)**: The current monitoring stack (PerfLog, Insight, Journal logging) that OpenTelemetry will complement, not replace.
- **Arrows (Services to JobQueue to layers)**: Work originates at Core Services, is enqueued onto the JobQueue, and dispatched to Processing or Application layers for execution.
---
## 1.1.1 Actors and Actions
### Actors
| Who (Plain English) | Technical Term |
| ----------------------------------------- | -------------------------- |
| Network node running XRPL software | xrpld node |
| External client submitting requests | RPC Client |
| Network neighbor sharing data | Peer (PeerImp) |
| Request handler for client queries | RPC Server (ServerHandler) |
| Command executor for specific RPC methods | RPCHandler |
| Agreement process between nodes | Consensus (RCLConsensus) |
| Transaction processing coordinator | NetworkOPs |
| Background task scheduler | JobQueue |
| Ledger state manager | LedgerMaster |
| Payment route calculator | PathFinding (Pathfinder) |
| Transaction waiting room | TxQ (Transaction Queue) |
| Fee adjustment system | LoadManager |
| Trusted validator list manager | ValidatorList |
| Protocol upgrade tracker | AmendmentTable |
| Ledger state hash tree | SHAMap |
| Persistent key-value storage | NodeStore |
### Actions
| What Happens (Plain English) | Technical Term |
| ---------------------------------------------- | ---------------------- |
| Client sends a request to a node | `rpc.request` |
| Node executes a specific RPC command | `rpc.command.*` |
| Node receives a transaction from a peer | `tx.receive` |
| Node checks if a transaction is valid | `tx.validate` |
| Node forwards a transaction to neighbors | `tx.relay` |
| Nodes agree on which transactions to include | `consensus.round` |
| Consensus progresses through phases | `consensus.phase.*` |
| Node builds a new confirmed ledger | `ledger.build` |
| Node fetches missing ledger data from peers | `ledger.acquire` |
| Node computes payment routes | `pathfind.compute` |
| Node queues a transaction for later processing | `txq.enqueue` |
| Node increases fees due to high load | `fee.escalate` |
| Node fetches the latest trusted validator list | `validator.list.fetch` |
| Node votes on a protocol amendment | `amendment.vote` |
| Node synchronizes state tree data | `shamap.sync` |
---
## 1.2 Key Components for Instrumentation
> **TxQ** = Transaction Queue | **UNL** = Unique Node List
| Component | Location | Purpose | Trace Value |
| ------------------ | ------------------------------------------ | ------------------------ | -------------------------------- |
| **Overlay** | `src/xrpld/overlay/` | P2P communication | Message propagation timing |
| **PeerImp** | `src/xrpld/overlay/detail/PeerImp.cpp` | Individual peer handling | Per-peer latency |
| **RCLConsensus** | `src/xrpld/app/consensus/RCLConsensus.cpp` | Consensus algorithm | Round timing, phase analysis |
| **NetworkOPs** | `src/xrpld/app/misc/NetworkOPs.cpp` | Transaction processing | Tx lifecycle tracking |
| **ServerHandler** | `src/xrpld/rpc/detail/ServerHandler.cpp` | RPC entry point | Request latency |
| **RPCHandler** | `src/xrpld/rpc/detail/RPCHandler.cpp` | Command execution | Per-command timing |
| **JobQueue** | `src/xrpl/core/JobQueue.h` | Async task execution | Queue wait times |
| **PathFinding** | `src/xrpld/app/paths/` | Payment path computation | Path latency, cache hits |
| **TxQ** | `src/xrpld/app/misc/TxQ.cpp` | Transaction queue/fees | Queue depth, eviction rates |
| **LoadManager** | `src/xrpld/app/main/LoadManager.cpp` | Fee escalation/load | Fee levels, load factors |
| **InboundLedgers** | `src/xrpld/app/ledger/InboundLedgers.cpp` | Ledger acquisition | Sync time, peer reliability |
| **ValidatorList** | `src/xrpld/app/misc/ValidatorList.cpp` | UNL management | List freshness, fetch failures |
| **AmendmentTable** | `src/xrpld/app/misc/AmendmentTable.cpp` | Protocol amendments | Voting status, activation events |
| **SHAMap** | `src/xrpld/shamap/` | State hash tree | Sync speed, missing nodes |
---
## 1.3 Transaction Flow Diagram
Transaction flow spans multiple nodes in the network. Each node creates linked spans to form a distributed trace:
```mermaid
sequenceDiagram
participant Client
participant PeerA as Peer A (Receive)
participant PeerB as Peer B (Relay)
participant PeerC as Peer C (Validate)
Client->>PeerA: 1. Submit TX
rect rgb(230, 245, 255)
Note over PeerA: tx.receive SPAN START
PeerA->>PeerA: HashRouter Deduplication
PeerA->>PeerA: tx.validate (child span)
end
PeerA->>PeerB: 2. Relay TX (with trace ctx)
rect rgb(230, 245, 255)
Note over PeerB: tx.receive (linked span)
end
PeerB->>PeerC: 3. Relay TX
rect rgb(230, 245, 255)
Note over PeerC: tx.receive (linked span)
PeerC->>PeerC: tx.process
end
Note over Client,PeerC: DISTRIBUTED TRACE (same trace_id: abc123)
```
**Reading the diagram:**
- **Client**: The external entity that submits a transaction to Peer A. It has no trace context -- the trace starts at the first node.
- **Peer A (Receive)**: The entry node that creates the root span `tx.receive`, runs HashRouter deduplication to avoid processing duplicates, and creates a child `tx.validate` span.
- **Peer A to Peer B arrow**: The relay message carries trace context (trace_id + parent span_id), enabling Peer B to create a linked span under the same trace.
- **Peer B (Relay)**: Receives the transaction and trace context, creates a `tx.receive` span linked to Peer A's trace, then relays onward.
- **Peer C (Validate)**: Final hop in this example. Creates a linked `tx.receive` span and runs `tx.process` to fully process the transaction.
- **Blue rectangles**: Highlight the span boundaries on each node, showing where instrumentation creates and closes spans.
### Trace Structure
```
trace_id: abc123
├── span: tx.receive (Peer A)
│ ├── span: tx.validate
│ └── span: tx.relay
├── span: tx.receive (Peer B) [parent: Peer A]
│ └── span: tx.relay
└── span: tx.receive (Peer C) [parent: Peer B]
└── span: tx.process
```
---
## 1.4 Consensus Round Flow
Consensus rounds are multi-phase operations that benefit significantly from tracing:
```mermaid
flowchart TB
subgraph round["consensus.round (root span)"]
attrs["Attributes:<br/>xrpl.consensus.ledger.seq = 12345678<br/>xrpl.consensus.mode = proposing<br/>xrpl.consensus.proposers = 35"]
subgraph open["consensus.phase.open"]
open_desc["Duration: ~3s<br/>Waiting for transactions"]
end
subgraph establish["consensus.phase.establish"]
est_attrs["proposals_received = 28<br/>disputes_resolved = 3"]
est_children["├── consensus.proposal.receive (×28)<br/>├── consensus.proposal.send (×1)<br/>└── consensus.dispute.resolve (×3)"]
end
subgraph accept["consensus.phase.accept"]
acc_attrs["transactions_applied = 150<br/>ledger.hash = DEF456..."]
acc_children["├── ledger.build<br/>└── ledger.validate"]
end
attrs --> open
open --> establish
establish --> accept
end
style round fill:#f57f17,stroke:#e65100,color:#ffffff
style open fill:#1565c0,stroke:#0d47a1,color:#ffffff
style establish fill:#2e7d32,stroke:#1b5e20,color:#ffffff
style accept fill:#c2185b,stroke:#880e4f,color:#ffffff
```
**Reading the diagram:**
- **consensus.round (orange, root span)**: The top-level span encompassing the entire consensus round, with attributes like ledger sequence, mode, and proposer count.
- **consensus.phase.open (blue)**: The first phase where the node waits (~3s) to collect incoming transactions before proposing.
- **consensus.phase.establish (green)**: The negotiation phase where validators exchange proposals, resolve disputes, and converge on a transaction set. Child spans track each proposal received/sent and each dispute resolved.
- **consensus.phase.accept (pink)**: The final phase where the agreed transaction set is applied, a new ledger is built, and the ledger is validated. Child spans cover `ledger.build` and `ledger.validate`.
- **Arrows (open to establish to accept)**: The sequential flow through the three consensus phases. Each phase must complete before the next begins.
---
## 1.5 RPC Request Flow
> **WS** = WebSocket
RPC requests support W3C Trace Context headers for distributed tracing across services:
```mermaid
flowchart TB
subgraph request["rpc.request (root span)"]
http["HTTP Request — POST /<br/>traceparent:<br/>00-abc123...-def456...-01"]
attrs["Attributes:<br/>http.method = POST<br/>net.peer.ip = 192.168.1.100<br/>xrpl.rpc.command = submit"]
subgraph enqueue["jobqueue.enqueue"]
job_attr["xrpl.job.type = jtCLIENT_RPC"]
end
subgraph command["rpc.command.submit"]
cmd_attrs["xrpl.rpc.version = 2<br/>xrpl.rpc.role = user"]
cmd_children["├── tx.deserialize<br/>├── tx.validate_local<br/>└── tx.submit_to_network"]
end
response["Response: 200 OK<br/>Duration: 45ms"]
http --> attrs
attrs --> enqueue
enqueue --> command
command --> response
end
style request fill:#2e7d32,stroke:#1b5e20,color:#ffffff
style enqueue fill:#1565c0,stroke:#0d47a1,color:#ffffff
style command fill:#e65100,stroke:#bf360c,color:#ffffff
```
**Reading the diagram:**
- **rpc.request (green, root span)**: The outermost span representing the full RPC request lifecycle, from HTTP receipt to response. Carries the W3C `traceparent` header for distributed tracing.
- **HTTP Request node**: Shows the incoming POST request with its `traceparent` header and extracted attributes (method, peer IP, command name).
- **jobqueue.enqueue (blue)**: The span covering the asynchronous handoff from the RPC thread to the JobQueue worker thread. The trace context is preserved across this async boundary.
- **rpc.command.submit (orange)**: The span for the actual command execution, with child spans for deserialization, local validation, and network submission.
- **Response node**: The final output with HTTP status and total duration, marking the end of the root span.
- **Arrows (top to bottom)**: The sequential processing pipeline -- receive request, extract attributes, enqueue job, execute command, return response.
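Continuing the caller's trace starts with extracting that header. A minimal sketch; the plain header map here is a stand-in for whatever request type ServerHandler actually uses:
```cpp
#include <map>
#include <optional>
#include <string>

// If the client sent W3C trace context, rpc.request becomes a child of the
// caller's span; otherwise it starts a new root trace.
std::optional<std::string>
extractTraceparent(std::map<std::string, std::string> const& headers)
{
    if (auto const it = headers.find("traceparent"); it != headers.end())
        return it->second;
    return std::nullopt;  // no context: start a new root trace
}
```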
---
## 1.6 Key Trace Points
> **TxQ** = Transaction Queue
The following table identifies priority instrumentation points across the codebase:
| Category | Span Name | File | Method | Priority |
| --------------- | ---------------------- | ---------------------- | ----------------------- | -------- |
| **Transaction** | `tx.receive` | `PeerImp.cpp` | `handleTransaction()` | High |
| **Transaction** | `tx.validate` | `NetworkOPs.cpp` | `processTransaction()` | High |
| **Transaction** | `tx.process` | `NetworkOPs.cpp` | `doTransactionSync()` | High |
| **Transaction** | `tx.relay` | `OverlayImpl.cpp` | `relay()` | Medium |
| **Consensus** | `consensus.round` | `RCLConsensus.cpp` | `startRound()` | High |
| **Consensus** | `consensus.phase.*` | `Consensus.h` | `timerEntry()` | High |
| **Consensus** | `consensus.proposal.*` | `RCLConsensus.cpp` | `peerProposal()` | Medium |
| **RPC** | `rpc.request` | `ServerHandler.cpp` | `onRequest()` | High |
| **RPC** | `rpc.command.*` | `RPCHandler.cpp` | `doCommand()` | High |
| **Peer** | `peer.connect` | `OverlayImpl.cpp` | `onHandoff()` | Low |
| **Peer** | `peer.message.*` | `PeerImp.cpp` | `onMessage()` | Low |
| **Ledger** | `ledger.acquire` | `InboundLedgers.cpp` | `acquire()` | Medium |
| **Ledger** | `ledger.build` | `RCLConsensus.cpp` | `buildLCL()` | High |
| **PathFinding** | `pathfind.request` | `PathRequest.cpp` | `doUpdate()` | High |
| **PathFinding** | `pathfind.compute` | `Pathfinder.cpp` | `findPaths()` | High |
| **TxQ** | `txq.enqueue` | `TxQ.cpp` | `apply()` | High |
| **TxQ** | `txq.apply` | `TxQ.cpp` | `processClosedLedger()` | High |
| **Fee** | `fee.escalate` | `LoadManager.cpp` | `raiseLocalFee()` | Medium |
| **Ledger** | `ledger.replay` | `LedgerReplayer.h` | `replay()` | Medium |
| **Ledger** | `ledger.delta` | `LedgerDeltaAcquire.h` | `processData()` | Medium |
| **Validator** | `validator.list.fetch` | `ValidatorList.cpp` | `verify()` | Medium |
| **Validator** | `validator.manifest` | `Manifest.cpp` | `applyManifest()` | Low |
| **Amendment** | `amendment.vote` | `AmendmentTable.cpp` | `doVoting()` | Low |
| **SHAMap** | `shamap.sync` | `SHAMap.cpp` | `fetchRoot()` | Medium |
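To illustrate what instrumenting one high-priority point might look like, here is a hedged sketch built on the `opentelemetry-cpp` API. The `ScopedSpan` helper, the simplified signature, and the attribute key are assumptions for illustration, not existing code.
```cpp
#include <opentelemetry/trace/provider.h>

// Hypothetical RAII helper: starts a span on construction and ends it on
// scope exit, so early returns and exceptions still close the span.
class ScopedSpan
{
public:
    explicit ScopedSpan(char const* name)
        : span_(opentelemetry::trace::Provider::GetTracerProvider()
                    ->GetTracer("xrpld")
                    ->StartSpan(name))
    {
    }
    ~ScopedSpan() { span_->End(); }

    opentelemetry::trace::Span& get() { return *span_; }

private:
    opentelemetry::nostd::shared_ptr<opentelemetry::trace::Span> span_;
};

// Sketch of the tx.receive trace point (signature simplified):
void handleTransaction()
{
    ScopedSpan span("tx.receive");
    span.get().SetAttribute("xrpl.tx.hash", "ABC123...");  // hypothetical key
    // ... existing deduplication and validation logic ...
}
```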
---
## 1.7 Instrumentation Priority
> **TxQ** = Transaction Queue
```mermaid
quadrantChart
title Instrumentation Priority Matrix
x-axis Low Complexity --> High Complexity
y-axis Low Value --> High Value
quadrant-1 Implement First
quadrant-2 Plan Carefully
quadrant-3 Quick Wins
quadrant-4 Consider Later
RPC Tracing: [0.2, 0.92]
Transaction Tracing: [0.55, 0.88]
Consensus Tracing: [0.78, 0.82]
PathFinding: [0.38, 0.75]
TxQ and Fees: [0.25, 0.65]
Ledger Sync: [0.62, 0.58]
Peer Message Tracing: [0.35, 0.25]
JobQueue Tracing: [0.2, 0.48]
Validator Mgmt: [0.48, 0.42]
Amendment Tracking: [0.15, 0.32]
SHAMap Operations: [0.72, 0.45]
```
---
## 1.8 Observable Outcomes
> **TxQ** = Transaction Queue | **UNL** = Unique Node List
After implementing OpenTelemetry, operators and developers will gain visibility into the following:
### 1.8.1 What You Will See: Traces
| Trace Type | Description | Example Query in Grafana/Tempo |
| -------------------------- | ------------------------------------------------------------------------------------------- | ---------------------------------------------------- |
| **Transaction Lifecycle** | Full journey from RPC submission through validation, relay, consensus, and ledger inclusion | `{service.name="xrpld" && xrpl.tx.hash="ABC123..."}` |
| **Cross-Node Propagation** | Transaction path across multiple xrpld nodes with timing | `{xrpl.tx.relay_count > 0}` |
| **Consensus Rounds** | Complete round with all phases (open, establish, accept) | `{span.name=~"consensus.round.*"}` |
| **RPC Request Processing** | Individual command execution with timing breakdown | `{xrpl.rpc.command="account_info"}` |
| **Ledger Acquisition** | Peer-to-peer ledger data requests and responses | `{span.name="ledger.acquire"}` |
| **PathFinding Latency** | Path computation time and cache effectiveness for payment RPCs | `{span.name="pathfind.compute"}` |
| **TxQ Behavior** | Queue depth, eviction patterns, fee escalation during congestion | `{span.name=~"txq.*"}` |
| **Ledger Sync** | Full acquisition timeline including delta and transaction fetches | `{span.name=~"ledger.acquire.*"}` |
| **Validator Health** | UNL fetch success, manifest updates, stale list detection | `{span.name=~"validator.*"}` |
### 1.8.2 What You Will See: Metrics (Derived from Traces)
| Metric | Description | Dashboard Panel |
| ----------------------------- | --------------------------------------- | --------------------------- |
| **RPC Latency (p50/p95/p99)** | Response time distribution per command | Heatmap by command |
| **Transaction Throughput** | Transactions processed per second | Time series graph |
| **Consensus Round Duration** | Time to complete consensus phases | Histogram |
| **Cross-Node Latency** | Time for transaction to reach N nodes | Line chart with percentiles |
| **Error Rate** | Failed transactions/RPC calls by type | Stacked bar chart |
| **PathFinding Latency** | Path computation time per currency pair | Heatmap by currency |
| **TxQ Depth** | Queued transactions over time | Time series with thresholds |
| **Fee Escalation Level** | Current fee multiplier | Gauge with alert thresholds |
| **Ledger Sync Duration** | Time to acquire missing ledgers | Histogram |
### 1.8.3 Concrete Dashboard Examples
**Transaction Trace View (Tempo):**
```
┌────────────────────────────────────────────────────────────────────────────────┐
│ Trace: abc123... (Transaction Submission) Duration: 847ms │
├────────────────────────────────────────────────────────────────────────────────┤
│ ├── rpc.request [ServerHandler] ████░░░░░░ 45ms │
│ │ └── rpc.command.submit [RPCHandler] ████░░░░░░ 42ms │
│ │ └── tx.receive [NetworkOPs] ███░░░░░░░ 35ms │
│ │ ├── tx.validate [TxQ] █░░░░░░░░░ 8ms │
│ │ └── tx.relay [Overlay] ██░░░░░░░░ 15ms │
│ │ ├── tx.receive [Node-B] █████░░░░░ 52ms │
│ │ │ └── tx.relay [Node-B] ██░░░░░░░░ 18ms │
│ │ └── tx.receive [Node-C] ██████░░░░ 65ms │
│ └── consensus.round [RCLConsensus] ████████░░ 720ms │
│ ├── consensus.phase.open ██░░░░░░░░ 180ms │
│ ├── consensus.phase.establish █████░░░░░ 480ms │
│ └── consensus.phase.accept █░░░░░░░░░ 60ms │
└────────────────────────────────────────────────────────────────────────────────┘
```
**RPC Performance Dashboard Panel:**
```
┌─────────────────────────────────────────────────────────────┐
│ RPC Command Latency (Last 1 Hour) │
├─────────────────────────────────────────────────────────────┤
│ Command │ p50 │ p95 │ p99 │ Errors │ Rate │
│──────────────────┼────────┼────────┼────────┼────────┼──────│
│ account_info │ 12ms │ 45ms │ 89ms │ 0.1% │ 150/s│
│ submit │ 35ms │ 120ms │ 250ms │ 2.3% │ 45/s│
│ ledger │ 8ms │ 25ms │ 55ms │ 0.0% │ 80/s│
│ tx │ 15ms │ 50ms │ 100ms │ 0.5% │ 60/s│
│ server_info │ 5ms │ 12ms │ 20ms │ 0.0% │ 200/s│
└─────────────────────────────────────────────────────────────┘
```
**Consensus Health Dashboard Panel:**
```mermaid
---
config:
xyChart:
width: 1200
height: 400
plotReservedSpacePercent: 50
chartOrientation: vertical
themeVariables:
xyChart:
plotColorPalette: "#3498db"
---
xychart-beta
title "Consensus Round Duration (Last 24 Hours)"
x-axis "Time of Day (Hours)" [0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24]
y-axis "Duration (seconds)" 1 --> 5
line [2.1, 2.4, 2.8, 3.2, 3.8, 4.3, 4.5, 5.0, 4.7, 4.0, 3.2, 2.6, 2.0]
```
### 1.8.4 Operator Actionable Insights
| Scenario | What You'll See | Action |
| ------------------------- | ---------------------------------------------------------------------------- | ------------------------------------------------ |
| **Slow RPC** | Span showing which phase is slow (parsing, execution, serialization) | Optimize specific code path |
| **Transaction Stuck** | Trace stops at validation; error attribute shows reason | Fix transaction parameters |
| **Consensus Delay** | Phase.establish taking too long; proposer attribute shows missing validators | Investigate network connectivity |
| **Memory Spike** | Large batch of spans correlating with memory increase | Tune batch_size or sampling |
| **Network Partition** | Traces missing cross-node links for specific peer | Check peer connectivity |
| **Path Computation Slow** | pathfind.compute span shows high latency; cache miss rate in attributes | Warm the RippleLineCache, check order book depth |
| **TxQ Full** | txq.enqueue spans show evictions; fee.escalate spans increasing | Monitor fee levels, alert operators |
| **Ledger Sync Stalled** | ledger.acquire spans timing out; peer reliability attributes show issues | Check peer connectivity, add trusted peers |
| **UNL Stale** | validator.list.fetch spans failing; last_update attribute aging | Verify validator site URLs, check DNS |
### 1.8.5 Developer Debugging Workflow
1. **Find Transaction**: Query by `xrpl.tx.hash` to get full trace
2. **Identify Bottleneck**: Look at span durations to find slowest component
3. **Check Attributes**: Review `xrpl.tx.validity`, `xrpl.rpc.status` for errors
4. **Correlate Logs**: Use `trace_id` to find related PerfLog entries
5. **Compare Nodes**: Filter by `service.instance.id` to compare behavior across nodes
---
_Next: [Design Decisions](./02-design-decisions.md)_ | _Back to: [Overview](./OpenTelemetryPlan.md)_


@@ -1,611 +0,0 @@
# Design Decisions
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Related**: [Architecture Analysis](./01-architecture-analysis.md) | [Code Samples](./04-code-samples.md)
---
## 2.1 OpenTelemetry Components
> **OTLP** = OpenTelemetry Protocol
### 2.1.1 SDK Selection
**Primary Choice**: OpenTelemetry C++ SDK (`opentelemetry-cpp`)
| Component | Purpose | Required |
| --------------------------------------- | ---------------------- | ----------- |
| `opentelemetry-cpp::api` | Tracing API headers | Yes |
| `opentelemetry-cpp::sdk` | SDK implementation | Yes |
| `opentelemetry-cpp::ext` | Extensions (exporters) | Yes |
| `opentelemetry-cpp::otlp_grpc_exporter` | OTLP/gRPC export | Recommended |
| `opentelemetry-cpp::otlp_http_exporter` | OTLP/HTTP export | Alternative |
### 2.1.2 Instrumentation Strategy
**Manual Instrumentation** (recommended; a sketch follows the comparison table):
| Approach | Pros | Cons |
| ---------- | --------------------------------------------------------------- | ------------------------------------------------------- |
| **Manual** | Precise control, optimized placement, xrpld-specific attributes | More development effort |
| **Auto** | Less code, automatic coverage | Less control, potential overhead, limited customization |
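To make the comparison concrete, the following hedged sketch shows what manual instrumentation looks like against the raw `opentelemetry-cpp` API (exact spellings vary slightly between SDK versions); the function and attribute value are illustrative, not existing xrpld code:
```cpp
#include <opentelemetry/trace/provider.h>
#include <opentelemetry/trace/scope.h>

namespace trace_api = opentelemetry::trace;

void
validateTransactionTraced()
{
    // Manual instrumentation: the developer chooses the span boundary
    // and attaches xrpld-specific attributes explicitly.
    auto provider = trace_api::Provider::GetTracerProvider();
    auto tracer = provider->GetTracer("xrpld");

    auto span = tracer->StartSpan("tx.validate");
    trace_api::Scope scope{span};  // Make the span current on this thread.

    span->SetAttribute("xrpl.tx.type", "Payment");  // Illustrative value.
    // ... validation logic would run here ...
    span->End();
}
```
Auto-instrumentation would instead hook generic layers (HTTP, gRPC) and could not attach attributes like `xrpl.tx.type` without manual help.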
---
## 2.2 Exporter Configuration
> **OTLP** = OpenTelemetry Protocol
```mermaid
flowchart TB
subgraph nodes["xrpld Nodes"]
node1["xrpld<br/>Node 1"]
node2["xrpld<br/>Node 2"]
node3["xrpld<br/>Node 3"]
end
collector["OpenTelemetry<br/>Collector<br/>(sidecar or standalone)"]
subgraph backends["Observability Backends"]
tempo["Tempo"]
elastic["Elastic<br/>APM"]
end
node1 -->|"OTLP/gRPC<br/>:4317"| collector
node2 -->|"OTLP/gRPC<br/>:4317"| collector
node3 -->|"OTLP/gRPC<br/>:4317"| collector
collector --> tempo
collector --> elastic
style nodes fill:#0d47a1,stroke:#082f6a,color:#ffffff
style backends fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style collector fill:#bf360c,stroke:#8c2809,color:#ffffff
```
**Reading the diagram:**
- **xrpld Nodes (blue)**: The source of telemetry data. Each xrpld node exports spans via OTLP/gRPC on port 4317.
- **OpenTelemetry Collector (red)**: The central aggregation point that receives spans from all nodes. Can run as a sidecar (per-node) or standalone (shared). Handles batching, filtering, and routing.
- **Observability Backends (green)**: The storage and visualization destinations. Tempo is the recommended backend for both development and production, and Elastic APM is an alternative. The Collector routes to one or more backends.
- **Arrows (nodes to collector to backends)**: The data pipeline -- spans flow from nodes to the Collector over gRPC, then the Collector fans out to the configured backends.
### 2.2.1 OTLP/gRPC (Recommended)
```cpp
// Configuration for OTLP over gRPC
namespace otlp = opentelemetry::exporter::otlp;
otlp::OtlpGrpcExporterOptions opts;
opts.endpoint = "localhost:4317";
opts.use_ssl_credentials = true;  // option names per opentelemetry-cpp
opts.ssl_credentials_cacert_path = "/path/to/ca.crt";
```
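For completeness, a minimal sketch (assuming the `opentelemetry-cpp` factory APIs; names vary slightly between SDK versions) of wiring this exporter into a global `TracerProvider`:
```cpp
#include <opentelemetry/exporters/otlp/otlp_grpc_exporter_factory.h>
#include <opentelemetry/sdk/trace/batch_span_processor_factory.h>
#include <opentelemetry/sdk/trace/batch_span_processor_options.h>
#include <opentelemetry/sdk/trace/tracer_provider_factory.h>
#include <opentelemetry/trace/provider.h>

namespace otlp = opentelemetry::exporter::otlp;
namespace trace_sdk = opentelemetry::sdk::trace;

void
initTracing()
{
    otlp::OtlpGrpcExporterOptions opts;
    opts.endpoint = "localhost:4317";
    auto exporter = otlp::OtlpGrpcExporterFactory::Create(opts);

    // Batch spans before export (defaults: queue 2048, batch 512).
    trace_sdk::BatchSpanProcessorOptions bsp;
    auto processor =
        trace_sdk::BatchSpanProcessorFactory::Create(std::move(exporter), bsp);

    // Install the provider globally so tracer lookups work anywhere.
    std::shared_ptr<opentelemetry::trace::TracerProvider> provider =
        trace_sdk::TracerProviderFactory::Create(std::move(processor));
    opentelemetry::trace::Provider::SetTracerProvider(provider);
}
```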
### 2.2.2 OTLP/HTTP (Alternative)
```cpp
// Configuration for OTLP over HTTP
namespace otlp = opentelemetry::exporter::otlp;
otlp::OtlpHttpExporterOptions opts;
opts.url = "http://localhost:4318/v1/traces";
opts.content_type = otlp::HttpRequestContentType::kJson; // or kBinary
```
---
## 2.3 Span Naming Conventions
> **TxQ** = Transaction Queue | **UNL** = Unique Node List | **WS** = WebSocket
### 2.3.1 Naming Schema
```
<component>.<operation>[.<sub-operation>]
```
**Examples**:
- `tx.receive` - Transaction received from peer
- `consensus.phase.establish` - Consensus establish phase
- `rpc.command.server_info` - server_info RPC command
### 2.3.2 Complete Span Catalog
```yaml
# Transaction Spans
tx:
receive: "Transaction received from network"
validate: "Transaction signature/format validation"
process: "Full transaction processing"
relay: "Transaction relay to peers"
apply: "Apply transaction to ledger"
# Consensus Spans
consensus:
round: "Complete consensus round"
phase:
open: "Open phase - collecting transactions"
establish: "Establish phase - reaching agreement"
accept: "Accept phase - applying consensus"
proposal:
receive: "Receive peer proposal"
send: "Send our proposal"
validation:
receive: "Receive peer validation"
send: "Send our validation"
# RPC Spans
rpc:
request: "HTTP/WebSocket request handling"
command:
"*": "Specific RPC command (dynamic)"
# Peer Spans
peer:
connect: "Peer connection establishment"
disconnect: "Peer disconnection"
message:
send: "Send protocol message"
receive: "Receive protocol message"
# Ledger Spans
ledger:
acquire: "Ledger acquisition from network"
build: "Build new ledger"
validate: "Ledger validation"
close: "Close ledger"
replay: "Ledger replay executed"
delta: "Delta-based ledger acquired"
# PathFinding Spans
pathfind:
request: "Path request initiated"
compute: "Path computation executed"
# TxQ Spans
txq:
enqueue: "Transaction queued"
apply: "Queued transaction applied"
# Fee/Load Spans
fee:
escalate: "Fee escalation triggered"
# Validator Spans
validator:
list:
fetch: "UNL list fetched"
manifest: "Manifest update processed"
# Amendment Spans
amendment:
vote: "Amendment voting executed"
# SHAMap Spans
shamap:
sync: "State tree synchronization"
# Job Spans
job:
enqueue: "Job added to queue"
execute: "Job execution"
```
---
## 2.4 Attribute Schema
> **TxQ** = Transaction Queue | **UNL** = Unique Node List | **OTLP** = OpenTelemetry Protocol
### 2.4.1 Resource Attributes (Set Once at Startup)
```cpp
// Standard OpenTelemetry semantic conventions
resource::SemanticConventions::SERVICE_NAME = "xrpld"
resource::SemanticConventions::SERVICE_VERSION = BuildInfo::getVersionString()
resource::SemanticConventions::SERVICE_INSTANCE_ID = <node_public_key_base58>
// Custom xrpld attributes
"xrpl.network.id" = <network_id> // e.g., 0 for mainnet
"xrpl.network.type" = "mainnet" | "testnet" | "devnet" | "standalone"
"xrpl.node.type" = "validator" | "stock" | "reporting"
"xrpl.node.cluster" = <cluster_name> // If clustered
```
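A hedged sketch of constructing these at startup with the SDK's `Resource` API (constant names such as `kServiceName` vary across `opentelemetry-cpp` versions; the attribute values shown are placeholders, not real node data):
```cpp
#include <opentelemetry/sdk/resource/resource.h>
#include <opentelemetry/sdk/resource/semantic_conventions.h>

namespace resource = opentelemetry::sdk::resource;

resource::Resource
makeXrpldResource()
{
    return resource::Resource::Create({
        {resource::SemanticConventions::kServiceName, "xrpld"},
        {resource::SemanticConventions::kServiceVersion, "2.0.0"},      // BuildInfo in practice
        {resource::SemanticConventions::kServiceInstanceId, "n9K..."},  // node public key
        {"xrpl.network.id", 0},
        {"xrpl.network.type", "mainnet"},
        {"xrpl.node.type", "validator"},
    });
}
```
The resulting resource would be passed to `TracerProviderFactory::Create(processor, resource)` so every exported span carries these attributes.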
### 2.4.2 Span Attributes by Category
#### Transaction Attributes
```cpp
"xrpl.tx.hash" = string // Transaction hash (hex)
"xrpl.tx.type" = string // "Payment", "OfferCreate", etc.
"xrpl.tx.account" = string // Source account (redacted in prod)
"xrpl.tx.sequence" = int64 // Account sequence number
"xrpl.tx.fee" = int64 // Fee in drops
"xrpl.tx.result" = string // "tesSUCCESS", "tecPATH_DRY", etc.
"xrpl.tx.ledger_index" = int64 // Ledger containing transaction
```
#### Consensus Attributes
```cpp
"xrpl.consensus.round" = int64 // Round number
"xrpl.consensus.phase" = string // "open", "establish", "accept"
"xrpl.consensus.mode" = string // "proposing", "observing", etc.
"xrpl.consensus.proposers" = int64 // Number of proposers
"xrpl.consensus.ledger.prev" = string // Previous ledger hash
"xrpl.consensus.ledger.seq" = int64 // Ledger sequence
"xrpl.consensus.tx_count" = int64 // Transactions in consensus set
"xrpl.consensus.duration_ms" = float64 // Round duration
```
#### RPC Attributes
```cpp
"xrpl.rpc.command" = string // Command name
"xrpl.rpc.version" = int64 // API version
"xrpl.rpc.role" = string // "admin" or "user"
"xrpl.rpc.params" = string // Sanitized parameters (optional)
```
#### Peer & Message Attributes
```cpp
"xrpl.peer.id" = string // Peer public key (base58)
"xrpl.peer.address" = string // IP:port
"xrpl.peer.latency_ms" = float64 // Measured latency
"xrpl.peer.cluster" = string // Cluster name if clustered
"xrpl.message.type" = string // Protocol message type name
"xrpl.message.size_bytes" = int64 // Message size
"xrpl.message.compressed" = bool // Whether compressed
```
#### Ledger & Job Attributes
```cpp
"xrpl.ledger.hash" = string // Ledger hash
"xrpl.ledger.index" = int64 // Ledger sequence/index
"xrpl.ledger.close_time" = int64 // Close time (epoch)
"xrpl.ledger.tx_count" = int64 // Transaction count
"xrpl.job.type" = string // Job type name
"xrpl.job.queue_ms" = float64 // Time spent in queue
"xrpl.job.worker" = int64 // Worker thread ID
```
#### PathFinding Attributes
```cpp
"xrpl.pathfind.source_currency" = string // Source currency code
"xrpl.pathfind.dest_currency" = string // Destination currency code
"xrpl.pathfind.path_count" = int64 // Number of paths found
"xrpl.pathfind.cache_hit" = bool // RippleLineCache hit
```
#### TxQ Attributes
```cpp
"xrpl.txq.queue_depth" = int64 // Current queue depth
"xrpl.txq.fee_level" = int64 // Fee level of transaction
"xrpl.txq.eviction_reason" = string // Why transaction was evicted
```
#### Fee Attributes
```cpp
"xrpl.fee.load_factor" = int64 // Current load factor
"xrpl.fee.escalation_level" = int64 // Fee escalation multiplier
```
#### Validator Attributes
```cpp
"xrpl.validator.list_size" = int64 // UNL size
"xrpl.validator.list_age_sec" = int64 // Seconds since last update
```
#### Amendment Attributes
```cpp
"xrpl.amendment.name" = string // Amendment name
"xrpl.amendment.status" = string // "enabled", "vetoed", "supported"
```
#### SHAMap Attributes
```cpp
"xrpl.shamap.type" = string // "transaction", "state", "account_state"
"xrpl.shamap.missing_nodes" = int64 // Number of missing nodes during sync
"xrpl.shamap.duration_ms" = float64 // Sync duration
```
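Since these keys appear at many call sites, a small helper can centralize them and prevent key typos. A hedged sketch (the helper and its parameters are hypothetical; real code would pull values from `STTx` accessors):
```cpp
#include <cstdint>
#include <string>

// Hypothetical helper: works with any span type exposing SetAttribute,
// e.g. an OTel span or the plan's SpanGuard wrapper.
template <class Span>
void
setTxAttributes(
    Span& span,
    std::string const& txHash,
    std::string const& txType,
    std::uint32_t sequence,
    std::uint64_t feeDrops)
{
    span.SetAttribute("xrpl.tx.hash", txHash);
    span.SetAttribute("xrpl.tx.type", txType);
    span.SetAttribute("xrpl.tx.sequence", static_cast<std::int64_t>(sequence));
    span.SetAttribute("xrpl.tx.fee", static_cast<std::int64_t>(feeDrops));
}
```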
### 2.4.3 Data Collection Summary
The following table summarizes what data is collected by category:
| Category | Attributes Collected | Purpose |
| --------------- | ---------------------------------------------------------------------- | ---------------------------- |
| **Transaction** | `tx.hash`, `tx.type`, `tx.result`, `tx.fee`, `ledger_index` | Trace transaction lifecycle |
| **Consensus** | `round`, `phase`, `mode`, `proposers` (public keys), `duration_ms` | Analyze consensus timing |
| **RPC** | `command`, `version`, `status`, `duration_ms` | Monitor RPC performance |
| **Peer** | `peer.id` (public key), `latency_ms`, `message.type`, `message.size` | Network topology analysis |
| **Ledger** | `ledger.hash`, `ledger.index`, `close_time`, `tx_count` | Ledger progression tracking |
| **Job** | `job.type`, `queue_ms`, `worker` | JobQueue performance |
| **PathFinding** | `pathfind.source_currency`, `dest_currency`, `path_count`, `cache_hit` | Payment path analysis |
| **TxQ** | `txq.queue_depth`, `fee_level`, `eviction_reason` | Queue depth and fee tracking |
| **Fee** | `fee.load_factor`, `escalation_level` | Fee escalation monitoring |
| **Validator** | `validator.list_size`, `list_age_sec` | UNL health monitoring |
| **Amendment** | `amendment.name`, `status` | Protocol upgrade tracking |
| **SHAMap** | `shamap.type`, `missing_nodes`, `duration_ms` | State tree sync performance |
### 2.4.4 Privacy & Sensitive Data Policy
> **PII** = Personally Identifiable Information
OpenTelemetry instrumentation is designed to collect **operational metadata only**, never sensitive content.
#### Data NOT Collected
The following data is explicitly **excluded** from telemetry collection:
| Excluded Data | Reason |
| ----------------------- | ----------------------------------------- |
| **Private Keys** | Never exposed; not relevant to tracing |
| **Account Balances** | Financial data; privacy sensitive |
| **Transaction Amounts** | Financial data; privacy sensitive |
| **Raw TX Payloads** | May contain sensitive memo/data fields |
| **Personal Data** | No PII collected |
| **IP Addresses** | Configurable; excluded by default in prod |
#### Privacy Protection Mechanisms
| Mechanism | Description |
| ----------------------------- | ------------------------------------------------------------------------- |
| **Account Hashing** | `xrpl.tx.account` is hashed at collector level before storage |
| **Configurable Redaction** | Sensitive fields can be excluded via `[telemetry]` config section |
| **Sampling** | Only 10% of traces recorded by default, reducing data exposure |
| **Local Control** | Node operators have full control over what gets exported |
| **No Raw Payloads** | Transaction content is never recorded, only metadata (hash, type, result) |
| **Collector-Level Filtering** | Additional redaction/hashing can be configured at OTel Collector |
#### Collector-Level Data Protection
The OpenTelemetry Collector can be configured to hash or redact sensitive attributes before export:
```yaml
processors:
attributes:
actions:
# Hash account addresses before storage
- key: xrpl.tx.account
action: hash
# Remove IP addresses entirely
- key: xrpl.peer.address
action: delete
# Redact specific fields
- key: xrpl.rpc.params
action: delete
```
#### Configuration Options for Privacy
In `xrpld.cfg`, operators can control data collection granularity:
```ini
[telemetry]
enabled=1
# Disable collection of specific components
trace_transactions=1
trace_consensus=1
trace_rpc=1
trace_peer=0 # Disable peer tracing (high volume, includes addresses)
# Redact specific attributes
redact_account=1 # Hash account addresses before export
redact_peer_address=1 # Remove peer IP addresses
```
> **Note**: The `redact_account` configuration in `xrpld.cfg` controls SDK-level redaction before export, while collector-level filtering (see [Collector-Level Data Protection](#collector-level-data-protection) above) provides an additional defense-in-depth layer. Both can operate independently.
> **Key Principle**: Telemetry collects **operational metadata** (timing, counts, hashes) — never **sensitive content** (keys, balances, amounts, raw payloads).
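A hedged sketch of what SDK-level account redaction could look like (the function is hypothetical and uses `std::hash` purely for illustration; a real implementation would use a keyed or cryptographic hash such as SHA-256):
```cpp
#include <cstdint>
#include <functional>
#include <string>

// Hypothetical: replaces the account with an opaque token when
// redact_account=1, so traces stay correlatable without exposing
// the original address in the export pipeline.
inline std::string
redactAccount(std::string const& accountBase58, bool redact)
{
    if (!redact)
        return accountBase58;
    auto const digest = std::hash<std::string>{}(accountBase58);
    return "redacted:" + std::to_string(static_cast<std::uint64_t>(digest));
}

// Usage at an instrumentation site (assuming a redactAccount flag
// parsed from [telemetry], analogous to the parser in Section 5.2):
// span->SetAttribute(
//     "xrpl.tx.account", redactAccount(accountBase58, setup.redactAccount));
```
Note that `std::hash` is neither stable across builds nor collision-resistant; it stands in here only to show where redaction slots into the pipeline.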
---
## 2.5 Context Propagation Design
> **WS** = WebSocket
### 2.5.1 Propagation Boundaries
```mermaid
flowchart TB
subgraph http["HTTP/WebSocket (RPC)"]
w3c["W3C Trace Context Headers:<br/>traceparent:<br/>00-trace_id-span_id-flags<br/>tracestate: xrpld=..."]
end
subgraph protobuf["Protocol Buffers (P2P)"]
proto["message TraceContext {<br/> bytes trace_id = 1; // 16 bytes<br/> bytes span_id = 2; // 8 bytes<br/> uint32 trace_flags = 3;<br/> string trace_state = 4;<br/>}"]
end
subgraph jobqueue["JobQueue (Internal Async)"]
job["Context captured at job creation,<br/>restored at execution<br/><br/>class Job {<br/> otel::context::Context<br/> traceContext_;<br/>};"]
end
style http fill:#0d47a1,stroke:#082f6a,color:#ffffff
style protobuf fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style jobqueue fill:#bf360c,stroke:#8c2809,color:#ffffff
```
**Reading the diagram:**
- **HTTP/WebSocket - RPC (blue)**: For client-facing RPC requests, trace context is propagated using the W3C `traceparent` header. This is the standard approach and works with any OTel-compatible client.
- **Protocol Buffers - P2P (green)**: For peer-to-peer messages between xrpld nodes, trace context is embedded as a protobuf `TraceContext` message carrying trace_id, span_id, flags, and optional trace_state.
- **JobQueue - Internal Async (red)**: For asynchronous work within a single node, the OTel context is captured when a job is created and restored when the job executes on a worker thread. This bridges the async gap so spans remain linked.
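For the JobQueue case, a hedged sketch of the capture/restore bridge using `opentelemetry-cpp`'s `RuntimeContext` (the wrapper function is hypothetical; the real plan threads the context through the `Job` object itself):
```cpp
#include <opentelemetry/context/runtime_context.h>

#include <functional>
#include <utility>

namespace context = opentelemetry::context;

// Capture the caller's OTel context at job creation and attach it on
// the worker thread for the duration of the job.
std::function<void()>
wrapWithTraceContext(std::function<void()> job)
{
    auto captured = context::RuntimeContext::GetCurrent();  // e.g. RPC thread
    return [captured, job = std::move(job)]() {
        // The token detaches the context again when it leaves scope.
        auto token = context::RuntimeContext::Attach(captured);
        job();  // Spans started here parent onto the captured span.
    };
}
```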
---
## 2.6 Integration with Existing Observability
> **OTLP** = OpenTelemetry Protocol | **WS** = WebSocket
### 2.6.1 Existing Frameworks Comparison
xrpld already has two observability mechanisms. OpenTelemetry complements (not replaces) them:
| Aspect | PerfLog | Beast Insight (StatsD) | OpenTelemetry |
| --------------------- | ----------------------------- | ---------------------------- | ------------------------- |
| **Type** | Logging | Metrics | Distributed Tracing |
| **Data** | JSON log entries | Counters, gauges, histograms | Spans with context |
| **Scope** | Single node | Single node | **Cross-node** |
| **Output** | `perf.log` file | StatsD server | OTLP Collector |
| **Question answered** | "What happened on this node?" | "How many? How fast?" | "What was the journey?" |
| **Correlation** | By timestamp | By metric name | By `trace_id` |
| **Overhead** | Low (file I/O) | Low (UDP packets) | Low-Medium (configurable) |
### 2.6.2 What Each Framework Does Best
#### PerfLog
- **Purpose**: Detailed local event logging for RPC and job execution
- **Strengths**:
- Rich JSON output with timing data
- Already integrated in RPC handlers
- File-based, no external dependencies
- **Limitations**:
- Single-node only (no cross-node correlation)
- No parent-child relationships between events
- Manual log parsing required
```json
// Example PerfLog entry
{
"time": "2024-01-15T10:30:00.123Z",
"method": "submit",
"duration_us": 1523,
"result": "tesSUCCESS"
}
```
#### Beast Insight (StatsD)
- **Purpose**: Real-time metrics for monitoring dashboards
- **Strengths**:
- Aggregated metrics (counters, gauges, histograms)
- Low overhead (UDP, fire-and-forget)
- Good for alerting thresholds
- **Limitations**:
- No request-level detail
- No causal relationships
- Single-node perspective
```cpp
// Example StatsD usage in xrpld
insight.increment("rpc.submit.count");
insight.gauge("ledger.age", age);
insight.timing("consensus.round", duration);
```
#### OpenTelemetry (NEW)
- **Purpose**: Distributed request tracing across nodes
- **Strengths**:
- **Cross-node correlation** via `trace_id`
- Parent-child span relationships
- Rich attributes per span
- Industry standard (CNCF)
- **Limitations**:
- Requires collector infrastructure
- Higher complexity than logging
```cpp
// Example OpenTelemetry span
auto span = telemetry.startSpan("tx.relay");
span->SetAttribute("tx.hash", hash);
span->SetAttribute("peer.id", peerId);
// Span automatically linked to parent via context
```
### 2.6.3 When to Use Each
| Scenario | PerfLog | StatsD | OpenTelemetry |
| --------------------------------------- | ---------- | ------ | ------------- |
| "How many TXs per second?" | ❌ | ✅ | ✅ |
| "What's the p99 RPC latency?" | ❌ | ✅ | ✅ |
| "Why was this specific TX slow?" | ⚠️ partial | ❌ | ✅ |
| "Which node delayed consensus?" | ❌ | ❌ | ✅ |
| "What happened on node X at time T?" | ✅ | ❌ | ✅ |
| "Show me the TX journey across 5 nodes" | ❌ | ❌ | ✅ |
### 2.6.4 Coexistence Strategy
```mermaid
flowchart TB
subgraph xrpld["xrpld Process"]
perflog["PerfLog<br/>(JSON to file)"]
insight["Beast Insight<br/>(StatsD)"]
otel["OpenTelemetry<br/>(Tracing)"]
end
perflog --> perffile["perf.log"]
insight --> statsd["StatsD Server"]
otel --> collector["OTLP Collector"]
perffile --> grafana["Grafana<br/>(Unified UI)"]
statsd --> grafana
collector --> grafana
style xrpld fill:#212121,stroke:#0a0a0a,color:#ffffff
style grafana fill:#bf360c,stroke:#8c2809,color:#ffffff
```
**Reading the diagram:**
- **xrpld Process (dark gray)**: The single xrpld node running all three observability frameworks side by side. Each framework operates independently with no interference.
- **PerfLog to perf.log**: PerfLog writes JSON-formatted event logs to a local file. Grafana can ingest these via Loki or a file-based datasource.
- **Beast Insight to StatsD Server**: Insight sends aggregated metrics (counters, gauges) over UDP to a StatsD server. Grafana reads from StatsD-compatible backends like Graphite or Prometheus (via StatsD exporter).
- **OpenTelemetry to OTLP Collector**: OTel exports spans over OTLP/gRPC to a Collector, which then forwards to a trace backend (Tempo).
- **Grafana (red, unified UI)**: All three data streams converge in Grafana, enabling operators to correlate logs, metrics, and traces in a single dashboard.
### 2.6.5 Correlation with PerfLog
Trace IDs can be correlated with existing PerfLog entries for comprehensive debugging:
```cpp
// In RPCHandler.cpp - correlate trace with PerfLog
Status doCommand(RPC::JsonContext& context, Json::Value& result)
{
// Start OpenTelemetry span
auto span = context.app.getTelemetry().startSpan(
"rpc.command." + context.method);
// Get trace ID for correlation
auto traceId = span->GetContext().trace_id().IsValid()
? toHex(span->GetContext().trace_id())
: "";
// Use existing PerfLog with trace correlation
auto const curId = context.app.getPerfLog().currentId();
context.app.getPerfLog().rpcStart(context.method, curId);
// Future: Add trace ID to PerfLog entry
// context.app.getPerfLog().setTraceId(curId, traceId);
try {
auto ret = handler(context, result);
context.app.getPerfLog().rpcFinish(context.method, curId);
span->SetStatus(opentelemetry::trace::StatusCode::kOk);
return ret;
} catch (std::exception const& e) {
context.app.getPerfLog().rpcError(context.method, curId);
span->RecordException(e);
span->SetStatus(opentelemetry::trace::StatusCode::kError, e.what());
throw;
}
}
```
---
_Previous: [Architecture Analysis](./01-architecture-analysis.md)_ | _Next: [Implementation Strategy](./03-implementation-strategy.md)_ | _Back to: [Overview](./OpenTelemetryPlan.md)_


@@ -1,528 +0,0 @@
# Implementation Strategy
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Related**: [Code Samples](./04-code-samples.md) | [Configuration Reference](./05-configuration-reference.md)
---
## 3.1 Directory Structure
The telemetry implementation follows xrpld's existing code organization pattern:
```
include/xrpl/
├── telemetry/
│ ├── Telemetry.h # Main telemetry interface
│ ├── TelemetryConfig.h # Configuration structures
│ ├── TraceContext.h # Context propagation utilities
│ ├── SpanGuard.h # RAII span management
│ └── SpanAttributes.h # Attribute helper functions
src/libxrpl/
├── telemetry/
│ ├── Telemetry.cpp # Implementation
│ ├── TelemetryConfig.cpp # Config parsing
│ ├── TraceContext.cpp # Context serialization
│ └── NullTelemetry.cpp # No-op implementation
src/xrpld/
├── telemetry/
│ ├── TracingInstrumentation.h # Instrumentation macros
│ └── TracingInstrumentation.cpp
```
---
## 3.2 Implementation Approach
<div align="center">
```mermaid
%%{init: {'flowchart': {'nodeSpacing': 20, 'rankSpacing': 30}}}%%
flowchart TB
subgraph phase1["Phase 1: Core"]
direction LR
sdk["SDK Integration"] ~~~ interface["Telemetry Interface"] ~~~ config["Configuration"]
end
subgraph phase2["Phase 2: RPC"]
direction LR
http["HTTP Context"] ~~~ rpc["RPC Handlers"]
end
subgraph phase3["Phase 3: P2P"]
direction LR
proto["Protobuf Context"] ~~~ tx["Transaction Relay"]
end
subgraph phase4["Phase 4: Consensus"]
direction LR
consensus["Consensus Rounds"] ~~~ proposals["Proposals"]
end
phase1 --> phase2 --> phase3 --> phase4
style phase1 fill:#1565c0,stroke:#0d47a1,color:#ffffff
style phase2 fill:#2e7d32,stroke:#1b5e20,color:#ffffff
style phase3 fill:#e65100,stroke:#bf360c,color:#ffffff
style phase4 fill:#c2185b,stroke:#880e4f,color:#ffffff
```
</div>
### Key Principles
1. **Minimal Intrusion**: Instrumentation should not alter existing control flow
2. **Zero-Cost When Disabled**: Use compile-time flags and no-op implementations (sketched after this list)
3. **Backward Compatibility**: Protocol Buffer extensions use high field numbers
4. **Graceful Degradation**: Tracing failures must not affect node operation
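A minimal sketch of the no-op path behind principles 2 and 4 (shape assumed from Section 3.1's `NullTelemetry.cpp`; the authoritative interface lives in [Code Samples](./04-code-samples.md)):
```cpp
#include <string>

namespace xrpl::telemetry {

// Inert span: attribute writes and End() compile down to nothing.
class NullSpan
{
public:
    template <class T>
    void
    SetAttribute(std::string const&, T const&) noexcept
    {
    }

    void
    End() noexcept
    {
    }
};

// Inert telemetry: hands out value-type spans, never allocates,
// never throws -- a tracing failure cannot reach node logic.
class NullTelemetry
{
public:
    NullSpan
    startSpan(std::string const&) noexcept
    {
        return {};
    }

    void start() noexcept {}
    void stop() noexcept {}
};

}  // namespace xrpl::telemetry
```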
---
## 3.3 Performance Overhead Summary
> **OTLP** = OpenTelemetry Protocol
| Metric | Overhead | Notes |
| ------------- | ---------- | ------------------------------------------------ |
| CPU | 1-3% | Of per-transaction CPU cost (~200μs baseline) |
| Memory | ~10 MB | SDK statics + batch buffer + worker thread stack |
| Network | 10-50 KB/s | Compressed OTLP export to collector |
| Latency (p99) | <2% | With proper sampling configuration |
---
## 3.4 Detailed CPU Overhead Analysis
### 3.4.1 Per-Operation Costs
> **Note on hardware assumptions**: The costs below are based on the official OTel C++ SDK CI benchmarks
> (969 runs on GitHub Actions 2-core shared runners). On production server hardware (3+ GHz Xeon),
> expect costs at the **lower end** of each range (~30-50% improvement over CI hardware).
| Operation | Time (ns) | Frequency | Impact |
| --------------------- | --------- | ---------------------- | ---------- |
| Span creation | 500-1000 | Every traced operation | Low |
| Span end | 100-200 | Every traced operation | Low |
| SetAttribute (string) | 80-120 | 3-5 per span | Low |
| SetAttribute (int) | 40-60 | 2-3 per span | Negligible |
| AddEvent | 100-200 | 0-2 per span | Low |
| Context injection | 150-250 | Per outgoing message | Low |
| Context extraction | 100-180 | Per incoming message | Low |
| GetCurrent context | 10-20 | Thread-local access | Negligible |
**Source**: Span creation based on OTel C++ SDK `BM_SpanCreation` benchmark (AlwaysOnSampler +
SimpleSpanProcessor + InMemoryExporter), median ~1,000 ns on CI hardware. AddEvent includes
timestamp read + string copy + vector push + mutex acquisition. Context injection/extraction
confirmed by `BM_SpanCreationWithScope` benchmark delta (~160 ns).
### 3.4.2 Transaction Processing Overhead
<div align="center">
```mermaid
%%{init: {'pie': {'textPosition': 0.75}}}%%
pie showData
"tx.receive (1400ns)" : 1400
"tx.validate (1200ns)" : 1200
"tx.relay (1200ns)" : 1200
"Context inject (200ns)" : 200
```
**Transaction Tracing Overhead (~4.0μs total)**
</div>
**Overhead percentage**: 4.0 μs / 200 μs (avg tx processing) = **~2.0%**
> **Breakdown**: Each span (tx.receive, tx.validate, tx.relay) costs ~1,000 ns for creation plus
> ~200-400 ns for 3-5 attribute sets. Context injection is ~200 ns (confirmed by benchmarks).
> On production hardware, expect ~2.6 μs total (~1.3% overhead) due to faster span creation (~500-600 ns).
### 3.4.3 Consensus Round Overhead
| Operation | Count | Cost (ns) | Total |
| ---------------------- | ----- | --------- | ---------- |
| consensus.round span | 1 | ~1200 | ~1.2 μs |
| consensus.phase spans | 3 | ~1100 | ~3.3 μs |
| proposal.receive spans | ~20 | ~1100 | ~22 μs |
| proposal.send spans | ~3 | ~1100 | ~3.3 μs |
| Context operations | ~30 | ~200 | ~6 μs |
| **TOTAL** | | | **~36 μs** |
> **Why higher**: Each span costs ~1,000 ns creation + ~100-200 ns for 1-2 attributes, totaling ~1,100-1,200 ns.
> Context operations remain ~200 ns (confirmed by benchmarks). On production hardware, expect ~24 μs total.
**Overhead percentage**: 36 μs / 3s (typical round) = **~0.001%** (negligible)
### 3.4.4 RPC Request Overhead
| Operation | Cost (ns) |
| ---------------- | ------------ |
| rpc.request span | ~1200 |
| rpc.command span | ~1100 |
| Context extract | ~250 |
| Context inject | ~200 |
| **TOTAL** | **~2.75 μs** |
> **Why higher**: Each span costs ~1,000 ns creation + ~100-200 ns for attributes (command name,
> version, role). Context extract/inject costs are confirmed by OTel C++ benchmarks.
- Fast RPC (1ms): 2.75 μs / 1ms = **~0.275%**
- Slow RPC (100ms): 2.75 μs / 100ms = **~0.003%**
---
## 3.5 Memory Overhead Analysis
> **OTLP** = OpenTelemetry Protocol
### 3.5.1 Static Memory
| Component | Size | Allocated |
| ------------------------------------ | ----------- | ---------- |
| TracerProvider singleton | ~64 KB | At startup |
| BatchSpanProcessor (circular buffer) | ~16 KB | At startup |
| BatchSpanProcessor (worker thread) | ~8 MB | At startup |
| OTLP exporter (gRPC channel init) | ~256 KB | At startup |
| Propagator registry | ~8 KB | At startup |
| **Total static** | **~8.3 MB** | |
> **Why higher than earlier estimate**: The BatchSpanProcessor's circular buffer itself is only ~16 KB
> (2049 x 8-byte `AtomicUniquePtr` entries), but it spawns a dedicated worker thread whose default
> stack size on Linux is ~8 MB. The OTLP gRPC exporter allocates memory for channel stubs and TLS
> initialization. The worker thread stack dominates the static footprint.
### 3.5.2 Dynamic Memory
| Component | Size per unit | Max units | Peak |
| -------------------- | -------------- | ---------- | --------------- |
| Active span | ~500-800 bytes | 1000 | ~500-800 KB |
| Queued span (export) | ~500 bytes | 2048 | ~1 MB |
| Attribute storage | ~80 bytes | 5 per span | Included |
| Context storage | ~64 bytes | Per thread | ~6.4 KB |
| **Total dynamic** | | | **~1.5-1.8 MB** |
> **Why active spans are larger**: An active `Span` object includes the wrapper (~88 bytes: shared_ptr,
> mutex, unique_ptr to Recordable) plus `SpanData` (~250 bytes: SpanContext, timestamps, name, status,
> empty containers) plus attribute storage (~200-500 bytes for 3-5 string attributes in a `std::map`).
> Source: `sdk/src/trace/span.h` and `sdk/include/opentelemetry/sdk/trace/span_data.h`.
> Queued spans release the wrapper, keeping only `SpanData` + attributes (~500 bytes).
### 3.5.3 Memory Growth Characteristics
```mermaid
---
config:
xyChart:
width: 700
height: 400
---
xychart-beta
title "Memory Usage vs Span Rate (bounded by queue limit)"
x-axis "Spans/second" [0, 200, 400, 600, 800, 1000]
y-axis "Memory (MB)" 0 --> 12
line [8.5, 9.2, 9.6, 9.9, 10.0, 10.0]
```
**Notes**:
- Memory increases with span rate but **plateaus at queue capacity** (default 2048 spans)
- Batch export prevents unbounded growth
- At queue limit, oldest spans are dropped (not blocked)
- Maximum memory is bounded: ~8.3 MB static (dominated by worker thread stack) + 2048 queued spans x ~500 bytes (~1 MB) + active spans (~0.8 MB) ≈ **~10 MB ceiling**
- The worker thread stack (~8 MB) is virtual memory; actual RSS depends on stack usage (typically much less)
### 3.5.4 Performance Data Sources
The overhead estimates in Sections 3.3-3.5 are derived from the following sources:
| Source | What it covers | URL |
| ------------------------------------------------ | ----------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------ |
| OTel C++ SDK CI benchmarks (969 runs) | Span creation, context activation, sampler overhead | [Benchmark Dashboard](https://open-telemetry.github.io/opentelemetry-cpp/benchmarks/) |
| `api/test/trace/span_benchmark.cc` | API-level span creation (~22 ns no-op) | [Source](https://github.com/open-telemetry/opentelemetry-cpp/blob/main/api/test/trace/span_benchmark.cc) |
| `sdk/test/trace/sampler_benchmark.cc` | SDK span creation with samplers (~1,000 ns AlwaysOn) | [Source](https://github.com/open-telemetry/opentelemetry-cpp/blob/main/sdk/test/trace/sampler_benchmark.cc) |
| `sdk/include/.../span_data.h` | SpanData memory layout (~250 bytes base) | [Source](https://github.com/open-telemetry/opentelemetry-cpp/blob/main/sdk/include/opentelemetry/sdk/trace/span_data.h) |
| `sdk/src/trace/span.h` | Span wrapper memory layout (~88 bytes) | [Source](https://github.com/open-telemetry/opentelemetry-cpp/blob/main/sdk/src/trace/span.h) |
| `sdk/include/.../batch_span_processor_options.h` | Default queue size (2048), batch size (512) | [Source](https://github.com/open-telemetry/opentelemetry-cpp/blob/main/sdk/include/opentelemetry/sdk/trace/batch_span_processor_options.h) |
| `sdk/include/.../circular_buffer.h` | CircularBuffer implementation (AtomicUniquePtr array) | [Source](https://github.com/open-telemetry/opentelemetry-cpp/blob/main/sdk/include/opentelemetry/sdk/common/circular_buffer.h) |
| OTLP proto definition | Serialized span size estimation | [Proto](https://github.com/open-telemetry/opentelemetry-proto/blob/main/opentelemetry/proto/trace/v1/trace.proto) |
---
## 3.6 Network Overhead Analysis
### 3.6.1 Export Bandwidth
> **Bytes per span**: Estimates use ~500 bytes/span (conservative upper bound). OTLP protobuf analysis
> shows a typical span with 3-5 string attributes serializes to ~200-300 bytes raw; with gzip
> compression (~60-70% of raw) and batching (amortized headers), ~350 bytes/span is more realistic.
> The table uses the conservative estimate for capacity planning.
| Sampling Rate | Spans/sec | Bandwidth | Notes |
| ------------- | --------- | --------- | ---------------- |
| 100% | ~500 | ~250 KB/s | Development only |
| 10% | ~50 | ~25 KB/s | Staging |
| 1% | ~5 | ~2.5 KB/s | Production |
| Error-only | ~1 | ~0.5 KB/s | Minimal overhead |
### 3.6.2 Trace Context Propagation
| Message Type | Context Size | Messages/sec | Overhead |
| ---------------------- | ------------ | ------------ | ----------- |
| TMTransaction | 25 bytes | ~100 | ~2.5 KB/s |
| TMProposeSet | 25 bytes | ~10 | ~250 B/s |
| TMValidation | 25 bytes | ~50 | ~1.25 KB/s |
| **Total P2P overhead** | | | **~4 KB/s** |
---
## 3.7 Optimization Strategies
### 3.7.1 Sampling Strategies
#### Tail Sampling
```mermaid
flowchart TD
trace["New Trace"]
trace --> errors{"Is Error?"}
errors -->|Yes| sample["SAMPLE"]
errors -->|No| consensus{"Is Consensus?"}
consensus -->|Yes| sample
consensus -->|No| slow{"Is Slow?"}
slow -->|Yes| sample
slow -->|No| prob{"Random < 10%?"}
prob -->|Yes| sample
prob -->|No| drop["DROP"]
style sample fill:#4caf50,stroke:#388e3c,color:#fff
style drop fill:#f44336,stroke:#c62828,color:#fff
```
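The error and latency branches above can only be evaluated after a trace completes, so tail sampling belongs in the Collector (see the `tail_sampling` processor in Section 5.5.2). At the SDK, only head-based decisions are possible. A hedged sketch of a custom head sampler against the `opentelemetry-cpp` `Sampler` interface (signatures vary slightly between SDK versions) that always keeps consensus spans and applies a 10% ratio otherwise:
```cpp
#include <opentelemetry/sdk/trace/sampler.h>
#include <opentelemetry/sdk/trace/samplers/trace_id_ratio.h>

namespace trace_sdk = opentelemetry::sdk::trace;

class XrplHeadSampler final : public trace_sdk::Sampler
{
public:
    trace_sdk::SamplingResult
    ShouldSample(
        opentelemetry::trace::SpanContext const& parentContext,
        opentelemetry::trace::TraceId traceId,
        opentelemetry::nostd::string_view name,
        opentelemetry::trace::SpanKind spanKind,
        opentelemetry::common::KeyValueIterable const& attributes,
        opentelemetry::trace::SpanContextKeyValueIterable const& links)
        noexcept override
    {
        // Consensus spans are rare and valuable: always keep them.
        if (name.substr(0, 10) == "consensus.")
        {
            trace_sdk::SamplingResult result;
            result.decision = trace_sdk::Decision::RECORD_AND_SAMPLE;
            return result;
        }
        // Everything else: deterministic 10% by trace ID.
        return ratio_.ShouldSample(
            parentContext, traceId, name, spanKind, attributes, links);
    }

    opentelemetry::nostd::string_view
    GetDescription() const noexcept override
    {
        return "XrplHeadSampler{consensus=always, default=0.10}";
    }

private:
    trace_sdk::TraceIdRatioBasedSampler ratio_{0.10};
};
```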
### 3.7.2 Batch Tuning Recommendations
| Environment | Batch Size | Batch Delay | Max Queue |
| ------------------ | ---------- | ----------- | --------- |
| Low-latency | 128 | 1000ms | 512 |
| High-throughput | 1024 | 10000ms | 8192 |
| Memory-constrained | 256 | 2000ms | 512 |
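These presets map directly onto the SDK's `BatchSpanProcessorOptions`. A hedged sketch for the memory-constrained profile (field names per current `opentelemetry-cpp`; check your SDK version):
```cpp
#include <opentelemetry/sdk/trace/batch_span_processor_options.h>

#include <chrono>

namespace trace_sdk = opentelemetry::sdk::trace;

trace_sdk::BatchSpanProcessorOptions
memoryConstrainedOptions()
{
    trace_sdk::BatchSpanProcessorOptions options;
    options.max_export_batch_size = 256;  // "Batch Size" column
    options.schedule_delay_millis = std::chrono::milliseconds{2000};  // "Batch Delay"
    options.max_queue_size = 512;  // "Max Queue"
    return options;
}
```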
### 3.7.3 Conditional Instrumentation
```cpp
// Compile-time feature flag
#ifndef XRPL_ENABLE_TELEMETRY
// Zero-cost when disabled
#define XRPL_TRACE_SPAN(t, n) ((void)0)
#endif
// Runtime component filtering
if (telemetry.shouldTracePeer())
{
XRPL_TRACE_SPAN(telemetry, "peer.message.receive");
// ... instrumentation
}
// No overhead when component tracing disabled
```
---
## 3.8 Links to Detailed Documentation
- **[Code Samples](./04-code-samples.md)**: Complete implementation code for all components
- **[Configuration Reference](./05-configuration-reference.md)**: Configuration options and collector setup
- **[Implementation Phases](./06-implementation-phases.md)**: Detailed timeline and milestones
---
## 3.9 Code Intrusiveness Assessment
> **TxQ** = Transaction Queue
This section provides a detailed assessment of how intrusive the OpenTelemetry integration is to the existing xrpld codebase.
### 3.9.1 Files Modified Summary
| Component | Files Modified | Lines Added | Lines Changed | Architectural Impact |
| --------------------- | -------------- | ----------- | ------------- | -------------------- |
| **Core Telemetry** | 5 new files | ~800 | 0 | None (new module) |
| **Application Init** | 2 files | ~30 | ~5 | Minimal |
| **RPC Layer** | 3 files | ~80 | ~20 | Minimal |
| **Transaction Relay** | 4 files | ~120 | ~40 | Low |
| **Consensus** | 3 files | ~100 | ~30 | Low-Medium |
| **Protocol Buffers** | 1 file | ~25 | 0 | Low |
| **CMake/Build** | 3 files | ~50 | ~10 | Minimal |
| **PathFinding**       | 2 files        | ~80         | ~5            | Minimal              |
| **TxQ/Fee**           | 2 files        | ~60         | ~5            | Minimal              |
| **Validator/Amend**   | 3 files        | ~40         | ~5            | Minimal              |
| **Total**             | **~28 files**  | **~1,385**  | **~120**      | **Low**              |
### 3.9.2 Detailed File Impact
```mermaid
pie title Code Changes by Component
"New Telemetry Module" : 800
"Transaction Relay" : 160
"Consensus" : 130
"RPC Layer" : 100
"PathFinding" : 80
"TxQ/Fee" : 60
"Validator/Amendment" : 40
"Application Init" : 35
"Protocol Buffers" : 25
"Build System" : 60
```
#### New Files (No Impact on Existing Code)
| File | Lines | Purpose |
| ---------------------------------------------- | ----- | -------------------- |
| `include/xrpl/telemetry/Telemetry.h` | ~160 | Main interface |
| `include/xrpl/telemetry/SpanGuard.h` | ~120 | RAII wrapper |
| `include/xrpl/telemetry/TraceContext.h` | ~80 | Context propagation |
| `src/xrpld/telemetry/TracingInstrumentation.h` | ~60 | Macros |
| `src/libxrpl/telemetry/Telemetry.cpp` | ~200 | Implementation |
| `src/libxrpl/telemetry/TelemetryConfig.cpp` | ~60 | Config parsing |
| `src/libxrpl/telemetry/NullTelemetry.cpp` | ~40 | No-op implementation |
#### Modified Files (Existing Xrpld Code)
| File | Lines Added | Lines Changed | Risk Level |
| ------------------------------------------------- | ----------- | ------------- | ---------- |
| `src/xrpld/app/main/Application.cpp` | ~15 | ~3 | Low |
| `include/xrpl/core/ServiceRegistry.h` | ~5 | ~2 | Low |
| `src/xrpld/rpc/detail/ServerHandler.cpp` | ~40 | ~10 | Low |
| `src/xrpld/rpc/handlers/*.cpp` | ~30 | ~8 | Low |
| `src/xrpld/overlay/detail/PeerImp.cpp` | ~60 | ~15 | Medium |
| `src/xrpld/overlay/detail/OverlayImpl.cpp` | ~30 | ~10 | Medium |
| `src/xrpld/app/consensus/RCLConsensus.cpp` | ~50 | ~15 | Medium |
| `src/xrpld/app/consensus/RCLConsensusAdaptor.cpp` | ~40 | ~12 | Medium |
| `src/xrpld/core/JobQueue.cpp` | ~20 | ~5 | Low |
| `src/xrpld/app/paths/PathRequest.cpp` | ~40 | ~3 | Low |
| `src/xrpld/app/paths/Pathfinder.cpp` | ~40 | ~2 | Low |
| `src/xrpld/app/misc/TxQ.cpp` | ~40 | ~3 | Low |
| `src/xrpld/app/main/LoadManager.cpp` | ~20 | ~2 | Low |
| `src/xrpld/app/misc/ValidatorList.cpp` | ~20 | ~2 | Low |
| `src/xrpld/app/misc/AmendmentTable.cpp` | ~10 | ~2 | Low |
| `src/xrpld/app/misc/Manifest.cpp` | ~10 | ~1 | Low |
| `src/xrpld/shamap/SHAMap.cpp` | ~20 | ~3 | Low |
| `src/xrpld/overlay/detail/ripple.proto` | ~25 | 0 | Low |
| `CMakeLists.txt` | ~40 | ~8 | Low |
| `cmake/FindOpenTelemetry.cmake` | ~50 | 0 | None (new) |
### 3.9.3 Risk Assessment by Component
<div align="center">
**Do First** ↖ ↗ **Plan Carefully**
```mermaid
quadrantChart
title Code Intrusiveness Risk Matrix
x-axis Low Risk --> High Risk
y-axis Low Value --> High Value
RPC Tracing: [0.2, 0.55]
Transaction Relay: [0.55, 0.85]
Consensus Tracing: [0.75, 0.92]
Peer Message Tracing: [0.85, 0.35]
JobQueue Context: [0.3, 0.42]
Ledger Acquisition: [0.48, 0.65]
PathFinding: [0.38, 0.72]
TxQ and Fees: [0.25, 0.62]
Validator Mgmt: [0.15, 0.35]
```
**Optional** ↙ ↘ **Avoid**
</div>
#### Risk Level Definitions
| Risk Level | Definition | Mitigation |
| ---------- | ---------------------------------------------------------------- | ---------------------------------- |
| **Low** | Additive changes only; no modification to existing logic | Standard code review |
| **Medium** | Minor modifications to existing functions; clear boundaries | Comprehensive unit tests |
| **High** | Changes to core logic or data structures; potential side effects | Integration tests + staged rollout |
### 3.9.4 Architectural Impact Assessment
| Aspect | Impact | Justification |
| -------------------- | ------- | -------------------------------------------------------------------------------- |
| **Data Flow** | Minimal | Read-only instrumentation; no modification to consensus or transaction data flow |
| **Threading Model** | Minimal | Context propagation uses thread-local storage (standard OTel pattern) |
| **Memory Model** | Low | Bounded queues prevent unbounded growth; RAII ensures cleanup |
| **Network Protocol** | Low | Optional fields in protobuf (high field numbers); backward compatible |
| **Configuration** | None | New config section; existing configs unaffected |
| **Build System** | Low | Optional CMake flag; builds work without OpenTelemetry |
| **Dependencies** | Low | OpenTelemetry SDK is optional; null implementation when disabled |
### 3.9.5 Backward Compatibility
| Compatibility | Status | Notes |
| --------------- | ------- | ----------------------------------------------------- |
| **Config File** | ✅ Full | New `[telemetry]` section is optional |
| **Protocol** | ✅ Full | Optional protobuf fields with high field numbers |
| **Build** | ✅ Full | `XRPL_ENABLE_TELEMETRY=OFF` produces identical binary |
| **Runtime** | ✅ Full | `enabled=0` produces zero overhead |
| **API** | ✅ Full | No changes to public RPC or P2P APIs |
### 3.9.6 Rollback Strategy
If issues are discovered after deployment:
1. **Immediate**: Set `enabled=0` in config and restart (zero code change)
2. **Quick**: Rebuild with `XRPL_ENABLE_TELEMETRY=OFF`
3. **Complete**: Revert telemetry commits (clean separation makes this easy)
### 3.9.7 Code Change Examples
**Minimal RPC Instrumentation (Low Intrusiveness):**
```cpp
// Before
void ServerHandler::onRequest(...) {
auto result = processRequest(req);
send(result);
}
// After (only a few lines added)
void ServerHandler::onRequest(...) {
XRPL_TRACE_RPC(app_.getTelemetry(), "rpc.request"); // +1 line
XRPL_TRACE_SET_ATTR("xrpl.rpc.command", command); // +1 line
auto result = processRequest(req);
XRPL_TRACE_SET_ATTR("xrpl.rpc.status", status); // +1 line
send(result);
}
```
**Consensus Instrumentation (Medium Intrusiveness):**
```cpp
// Before
void RCLConsensusAdaptor::startRound(...) {
// ... existing logic
}
// After (context storage required)
void RCLConsensusAdaptor::startRound(...) {
XRPL_TRACE_CONSENSUS(app_.getTelemetry(), "consensus.round");
XRPL_TRACE_SET_ATTR("xrpl.consensus.ledger.seq", seq);
// Store context for child spans in phase transitions
currentRoundContext_ = _xrpl_guard_->context(); // New member variable
// ... existing logic unchanged
}
```
---
_Previous: [Design Decisions](./02-design-decisions.md)_ | _Next: [Code Samples](./04-code-samples.md)_ | _Back to: [Overview](./OpenTelemetryPlan.md)_

File diff suppressed because it is too large


@@ -1,964 +0,0 @@
# Configuration Reference
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Related**: [Code Samples](./04-code-samples.md) | [Implementation Phases](./06-implementation-phases.md)
---
## 5.1 xrpld Configuration
> **OTLP** = OpenTelemetry Protocol | **TxQ** = Transaction Queue
### 5.1.1 Configuration File Section
Add to `cfg/xrpld-example.cfg`:
```ini
# ═══════════════════════════════════════════════════════════════════════════════
# TELEMETRY (OpenTelemetry Distributed Tracing)
# ═══════════════════════════════════════════════════════════════════════════════
#
# Enables distributed tracing for transaction flow, consensus, and RPC calls.
# Traces are exported to an OpenTelemetry Collector using OTLP protocol.
#
# [telemetry]
#
# # Enable/disable telemetry (default: 0 = disabled)
# enabled=1
#
# # Exporter type: "otlp_grpc" (default), "otlp_http", or "none"
# exporter=otlp_grpc
#
# # OTLP endpoint (default: localhost:4317 for gRPC, localhost:4318 for HTTP)
# endpoint=localhost:4317
#
# # Use TLS for exporter connection (default: 0)
# use_tls=0
#
# # Path to CA certificate for TLS (optional)
# # tls_ca_cert=/path/to/ca.crt
#
# # Sampling ratio: 0.0-1.0 (default: 1.0 = all traces sampled)
# # For production deployments with high throughput, 0.1 (10%) is
# # recommended to reduce overhead. See Section 7.4.2 for sampling
# # strategy details.
# sampling_ratio=0.1
#
# # Batch processor settings
# batch_size=512 # Spans per batch (default: 512)
# batch_delay_ms=5000 # Max delay before sending batch (default: 5000)
# max_queue_size=2048 # Max queued spans (default: 2048)
#
# # Component-specific tracing (default: all enabled except peer)
# trace_transactions=1 # Transaction relay and processing
# trace_consensus=1 # Consensus rounds and proposals
# trace_rpc=1 # RPC request handling
# trace_peer=0 # Peer messages (high volume, disabled by default)
# trace_ledger=1 # Ledger acquisition and building
# trace_pathfind=1 # Path computation (can be expensive)
# trace_txq=1 # Transaction queue and fee escalation
# trace_validator=0 # Validator list and manifest updates (low volume)
# trace_amendment=0 # Amendment voting (very low volume)
#
# # Service identification (automatically detected if not specified)
# # service_name=xrpld
# # service_instance_id=<node_public_key>
[telemetry]
enabled=0
```
### 5.1.2 Configuration Options Summary
| Option | Type | Default | Description |
| --------------------- | ------ | ---------------- | ----------------------------------------- |
| `enabled` | bool | `false` | Enable/disable telemetry |
| `exporter` | string | `"otlp_grpc"` | Exporter type: otlp_grpc, otlp_http, none |
| `endpoint` | string | `localhost:4317` | OTLP collector endpoint |
| `use_tls` | bool | `false` | Enable TLS for exporter connection |
| `tls_ca_cert` | string | `""` | Path to CA certificate file |
| `sampling_ratio` | float | `1.0` | Sampling ratio (0.0-1.0) |
| `batch_size` | uint | `512` | Spans per export batch |
| `batch_delay_ms` | uint | `5000` | Max delay before sending batch (ms) |
| `max_queue_size` | uint | `2048` | Maximum queued spans |
| `trace_transactions` | bool | `true` | Enable transaction tracing |
| `trace_consensus` | bool | `true` | Enable consensus tracing |
| `trace_rpc` | bool | `true` | Enable RPC tracing |
| `trace_peer` | bool | `false` | Enable peer message tracing (high volume) |
| `trace_ledger` | bool | `true` | Enable ledger tracing |
| `trace_pathfind` | bool | `true` | Enable path computation tracing |
| `trace_txq` | bool | `true` | Enable transaction queue tracing |
| `trace_validator` | bool | `false` | Enable validator list/manifest tracing |
| `trace_amendment` | bool | `false` | Enable amendment voting tracing |
| `service_name` | string | `"xrpld"` | Service name for traces |
| `service_instance_id` | string | `<node_pubkey>` | Instance identifier |
---
## 5.2 Configuration Parser
> **TxQ** = Transaction Queue
```cpp
// src/libxrpl/telemetry/TelemetryConfig.cpp
#include <xrpl/telemetry/Telemetry.h>
#include <xrpl/basics/Log.h>
namespace xrpl {
namespace telemetry {
Telemetry::Setup
setup_Telemetry(
Section const& section,
std::string const& nodePublicKey,
std::string const& version)
{
Telemetry::Setup setup;
// Basic settings
setup.enabled = section.value_or("enabled", false);
setup.serviceName = section.value_or("service_name", "xrpld");
setup.serviceVersion = version;
setup.serviceInstanceId = section.value_or(
"service_instance_id", nodePublicKey);
// Exporter settings
setup.exporterType = section.value_or("exporter", "otlp_grpc");
if (setup.exporterType == "otlp_grpc")
setup.exporterEndpoint = section.value_or("endpoint", "localhost:4317");
else if (setup.exporterType == "otlp_http")
setup.exporterEndpoint = section.value_or("endpoint", "localhost:4318");
setup.useTls = section.value_or("use_tls", false);
setup.tlsCertPath = section.value_or("tls_ca_cert", "");
// Sampling
setup.samplingRatio = section.value_or("sampling_ratio", 1.0);
if (setup.samplingRatio < 0.0 || setup.samplingRatio > 1.0)
{
Throw<std::runtime_error>(
"telemetry.sampling_ratio must be between 0.0 and 1.0");
}
// Batch processor
setup.batchSize = section.value_or("batch_size", 512u);
setup.batchDelay = std::chrono::milliseconds{
section.value_or("batch_delay_ms", 5000u)};
setup.maxQueueSize = section.value_or("max_queue_size", 2048u);
// Component filtering
setup.traceTransactions = section.value_or("trace_transactions", true);
setup.traceConsensus = section.value_or("trace_consensus", true);
setup.traceRpc = section.value_or("trace_rpc", true);
setup.tracePeer = section.value_or("trace_peer", false);
setup.traceLedger = section.value_or("trace_ledger", true);
setup.tracePathfind = section.value_or("trace_pathfind", true);
setup.traceTxQ = section.value_or("trace_txq", true);
setup.traceValidator = section.value_or("trace_validator", false);
setup.traceAmendment = section.value_or("trace_amendment", false);
return setup;
}
} // namespace telemetry
} // namespace xrpl
```
---
## 5.3 Application Integration
### 5.3.1 ApplicationImp Changes
> **Deferred identity**: The node public key (`nodeIdentity_`) is not
> available during `ApplicationImp`'s member initializer list — it is
> resolved later in `setup()`. The `Telemetry` object is therefore
> constructed with an empty `serviceInstanceId` and patched via
> `setServiceInstanceId()` once `setup()` has called `getNodeIdentity()`.
```cpp
// src/xrpld/app/main/Application.cpp (modified)
#include <xrpl/telemetry/Telemetry.h>
class ApplicationImp : public Application, public BasicApp
{
// ... existing members (perfLog_, etc.) ...
// Telemetry — constructed in the member initializer list with
// an empty serviceInstanceId, patched in setup().
std::unique_ptr<telemetry::Telemetry> telemetry_;
// Member initializer list (excerpt):
// ...
// , telemetry_(
// telemetry::make_Telemetry(
// telemetry::setup_Telemetry(
// config_->section("telemetry"),
// "", // Updated later via setServiceInstanceId()
// BuildInfo::getVersionString()),
// logs_->journal("Telemetry")))
// ...
bool setup(...) override
{
// ... existing setup code ...
nodeIdentity_ = getNodeIdentity(*this, cmdline);
// Inject node identity into telemetry resource attributes,
// unless the user already set a custom service_instance_id.
if (!config_->section("telemetry").exists("service_instance_id"))
telemetry_->setServiceInstanceId(
toBase58(TokenType::NodePublic, nodeIdentity_->first));
// ... rest of setup ...
}
void start(bool withTimers) override
{
// ... existing start code ...
telemetry_->start();
}
void run() override
{
// ... existing run/shutdown code ...
telemetry_->stop();
}
telemetry::Telemetry&
getTelemetry() override
{
return *telemetry_;
}
};
```
### 5.3.2 ServiceRegistry Interface Addition
```cpp
// include/xrpl/core/ServiceRegistry.h (modified)
namespace telemetry {
class Telemetry;
} // namespace telemetry
class ServiceRegistry
{
public:
// ... existing virtual methods ...
/** Get the telemetry system for distributed tracing. */
virtual telemetry::Telemetry&
getTelemetry() = 0;
};
```
> **Note:** `Application` extends `ServiceRegistry`, so `getTelemetry()` is
> available on both. Components that hold a `ServiceRegistry&` (e.g.
> `NetworkOPsImp`) call `registry_.get().getTelemetry()`. Components that
> still hold an `Application&` (e.g. `ServerHandler`, `PeerImp`,
> `RCLConsensusAdaptor`) call `app_.getTelemetry()` directly.
---
## 5.4 CMake Integration
> **OTLP** = OpenTelemetry Protocol
### 5.4.1 Find OpenTelemetry Module
```cmake
# cmake/FindOpenTelemetry.cmake
# Find OpenTelemetry C++ SDK
#
# This module defines:
# OpenTelemetry_FOUND - System has OpenTelemetry
# OpenTelemetry::api - API library target
# OpenTelemetry::sdk - SDK library target
# OpenTelemetry::otlp_grpc_exporter - OTLP gRPC exporter target
# OpenTelemetry::otlp_http_exporter - OTLP HTTP exporter target
find_package(opentelemetry-cpp CONFIG QUIET)
if(opentelemetry-cpp_FOUND)
set(OpenTelemetry_FOUND TRUE)
# Create imported targets if not already created by config
if(NOT TARGET OpenTelemetry::api)
add_library(OpenTelemetry::api ALIAS opentelemetry-cpp::api)
endif()
if(NOT TARGET OpenTelemetry::sdk)
add_library(OpenTelemetry::sdk ALIAS opentelemetry-cpp::sdk)
endif()
if(NOT TARGET OpenTelemetry::otlp_grpc_exporter)
add_library(OpenTelemetry::otlp_grpc_exporter ALIAS
opentelemetry-cpp::otlp_grpc_exporter)
endif()
else()
# Try pkg-config fallback
find_package(PkgConfig QUIET)
if(PKG_CONFIG_FOUND)
pkg_check_modules(OTEL opentelemetry-cpp QUIET)
if(OTEL_FOUND)
set(OpenTelemetry_FOUND TRUE)
# Create imported targets from pkg-config
add_library(OpenTelemetry::api INTERFACE IMPORTED)
target_include_directories(OpenTelemetry::api INTERFACE
${OTEL_INCLUDE_DIRS})
endif()
endif()
endif()
include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(OpenTelemetry
REQUIRED_VARS OpenTelemetry_FOUND)
```
### 5.4.2 CMakeLists.txt Changes
```cmake
# CMakeLists.txt (additions)
# ═══════════════════════════════════════════════════════════════════════════════
# TELEMETRY OPTIONS
# ═══════════════════════════════════════════════════════════════════════════════
option(XRPL_ENABLE_TELEMETRY
"Enable OpenTelemetry distributed tracing support" OFF)
if(XRPL_ENABLE_TELEMETRY)
find_package(OpenTelemetry REQUIRED)
# Define compile-time flag
add_compile_definitions(XRPL_ENABLE_TELEMETRY)
message(STATUS "OpenTelemetry tracing: ENABLED")
else()
message(STATUS "OpenTelemetry tracing: DISABLED")
endif()
# ═══════════════════════════════════════════════════════════════════════════════
# TELEMETRY LIBRARY
# ═══════════════════════════════════════════════════════════════════════════════
if(XRPL_ENABLE_TELEMETRY)
add_library(xrpl_telemetry
src/libxrpl/telemetry/Telemetry.cpp
src/libxrpl/telemetry/TelemetryConfig.cpp
src/libxrpl/telemetry/TraceContext.cpp
)
target_include_directories(xrpl_telemetry
PUBLIC
${CMAKE_CURRENT_SOURCE_DIR}/include
)
target_link_libraries(xrpl_telemetry
PUBLIC
OpenTelemetry::api
OpenTelemetry::sdk
OpenTelemetry::otlp_grpc_exporter
PRIVATE
xrpl_basics
)
# Add to main library dependencies
target_link_libraries(xrpld PRIVATE xrpl_telemetry)
else()
# Create null implementation library
add_library(xrpl_telemetry
src/libxrpl/telemetry/NullTelemetry.cpp
)
target_include_directories(xrpl_telemetry
PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include
)
endif()
```
---
## 5.5 OpenTelemetry Collector Configuration
> **OTLP** = OpenTelemetry Protocol | **APM** = Application Performance Monitoring
### 5.5.1 Development Configuration
```yaml
# otel-collector-dev.yaml
# Minimal configuration for local development
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
processors:
batch:
timeout: 1s
send_batch_size: 100
exporters:
# Console output for debugging
logging:
verbosity: detailed
sampling_initial: 5
sampling_thereafter: 200
# Tempo for trace visualization
otlp/tempo:
endpoint: tempo:4317
tls:
insecure: true
service:
pipelines:
traces:
receivers: [otlp]
processors: [batch]
exporters: [logging, otlp/tempo]
```
### 5.5.2 Production Configuration
```yaml
# otel-collector-prod.yaml
# Production configuration with filtering, sampling, and multiple backends
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
tls:
cert_file: /etc/otel/server.crt
key_file: /etc/otel/server.key
ca_file: /etc/otel/ca.crt
processors:
# Memory limiter to prevent OOM
memory_limiter:
check_interval: 1s
limit_mib: 1000
spike_limit_mib: 200
# Batch processing for efficiency
batch:
timeout: 5s
send_batch_size: 512
send_batch_max_size: 1024
# Tail-based sampling (keep errors and slow traces)
tail_sampling:
decision_wait: 10s
num_traces: 100000
expected_new_traces_per_sec: 1000
policies:
# Always keep error traces
- name: errors
type: status_code
status_code:
status_codes: [ERROR]
# Keep slow consensus rounds (>5s)
- name: slow-consensus
type: latency
latency:
threshold_ms: 5000
# Keep slow RPC requests (>1s)
- name: slow-rpc
type: and
and:
and_sub_policy:
- name: rpc-spans
type: string_attribute
string_attribute:
key: xrpl.rpc.command
values: [".*"]
enabled_regex_matching: true
- name: latency
type: latency
latency:
threshold_ms: 1000
# Probabilistic sampling for the rest
- name: probabilistic
type: probabilistic
probabilistic:
sampling_percentage: 10
# Attribute processing
attributes:
actions:
# Hash sensitive data
- key: xrpl.tx.account
action: hash
# Add deployment info
- key: deployment.environment
value: production
action: upsert
exporters:
# Grafana Tempo for long-term storage
otlp/tempo:
endpoint: tempo.monitoring:4317
tls:
insecure: false
ca_file: /etc/otel/tempo-ca.crt
# Elastic APM for correlation with logs
otlp/elastic:
endpoint: apm.elastic:8200
headers:
Authorization: "Bearer ${ELASTIC_APM_TOKEN}"
extensions:
health_check:
endpoint: 0.0.0.0:13133
zpages:
endpoint: 0.0.0.0:55679
service:
extensions: [health_check, zpages]
pipelines:
traces:
receivers: [otlp]
processors: [memory_limiter, tail_sampling, attributes, batch]
exporters: [otlp/tempo, otlp/elastic]
```
---
## 5.6 Docker Compose Development Environment
> **OTLP** = OpenTelemetry Protocol
```yaml
# docker-compose-telemetry.yaml
version: "3.8"
services:
# OpenTelemetry Collector
otel-collector:
image: otel/opentelemetry-collector-contrib:0.92.0
container_name: otel-collector
command: ["--config=/etc/otel-collector-config.yaml"]
volumes:
- ./otel-collector-dev.yaml:/etc/otel-collector-config.yaml:ro
ports:
- "4317:4317" # OTLP gRPC
- "4318:4318" # OTLP HTTP
- "13133:13133" # Health check
depends_on:
- tempo
# Tempo for trace visualization
tempo:
image: grafana/tempo:2.6.1
container_name: tempo
ports:
- "3200:3200" # Tempo HTTP API
- "4317" # OTLP gRPC (internal)
# Grafana for dashboards
grafana:
image: grafana/grafana:10.2.3
container_name: grafana
environment:
- GF_AUTH_ANONYMOUS_ENABLED=true
- GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
volumes:
- ./grafana/provisioning:/etc/grafana/provisioning:ro
- ./grafana/dashboards:/var/lib/grafana/dashboards:ro
ports:
- "3000:3000"
depends_on:
- tempo
# Prometheus for metrics (optional, for correlation)
prometheus:
image: prom/prometheus:v2.48.1
container_name: prometheus
volumes:
- ./prometheus.yaml:/etc/prometheus/prometheus.yml:ro
ports:
- "9090:9090"
networks:
default:
name: xrpld-telemetry
```
---
## 5.7 Configuration Architecture
> **OTLP** = OpenTelemetry Protocol
```mermaid
flowchart TB
subgraph config["Configuration Sources"]
cfgFile["xrpld.cfg<br/>[telemetry] section"]
cmake["CMake<br/>XRPL_ENABLE_TELEMETRY"]
end
subgraph init["Initialization"]
parse["setup_Telemetry()"]
factory["make_Telemetry()"]
end
subgraph runtime["Runtime Components"]
tracer["TracerProvider"]
exporter["OTLP Exporter"]
processor["BatchProcessor"]
end
subgraph collector["Collector Pipeline"]
recv["Receivers"]
proc["Processors"]
exp["Exporters"]
end
cfgFile --> parse
cmake -->|"compile flag"| parse
parse --> factory
factory --> tracer
tracer --> processor
processor --> exporter
exporter -->|"OTLP"| recv
recv --> proc
proc --> exp
style config fill:#e3f2fd,stroke:#1976d2
style runtime fill:#e8f5e9,stroke:#388e3c
style collector fill:#fff3e0,stroke:#ff9800
```
**Reading the diagram:**
- **Configuration Sources**: `xrpld.cfg` provides runtime settings (endpoint, sampling) while the CMake flag controls whether telemetry is compiled in at all.
- **Initialization**: `setup_Telemetry()` parses config values, then `make_Telemetry()` constructs the provider, processor, and exporter objects.
- **Runtime Components**: The `TracerProvider` creates spans, the `BatchProcessor` buffers them, and the `OTLP Exporter` serializes and sends them over the wire.
- **OTLP arrow to Collector**: Trace data leaves the xrpld process via OTLP (gRPC or HTTP) and enters the external Collector pipeline.
- **Collector Pipeline**: `Receivers` ingest OTLP data, `Processors` apply sampling/filtering/enrichment, and `Exporters` forward traces to storage backends (Tempo, etc.).
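For orientation, the three Runtime Components boxes map onto the upstream opentelemetry-cpp factories roughly as follows. This is a sketch against the SDK 1.x factory API (the pattern used in the SDK's own examples), not the body of the `make_Telemetry()` this plan proposes:
```cpp
#include <memory>
#include <opentelemetry/exporters/otlp/otlp_grpc_exporter_factory.h>
#include <opentelemetry/sdk/trace/batch_span_processor_factory.h>
#include <opentelemetry/sdk/trace/tracer_provider_factory.h>
#include <opentelemetry/trace/provider.h>
void
initTracingPipeline()
{
    namespace otlp = opentelemetry::exporter::otlp;
    namespace sdktrace = opentelemetry::sdk::trace;
    // OTLP Exporter: serializes spans and ships them to the collector.
    otlp::OtlpGrpcExporterOptions opts;
    opts.endpoint = "localhost:4317";  // would come from the [telemetry] section
    auto exporter = otlp::OtlpGrpcExporterFactory::Create(opts);
    // BatchProcessor: buffers spans off the hot path before export.
    sdktrace::BatchSpanProcessorOptions batchOpts;
    auto processor =
        sdktrace::BatchSpanProcessorFactory::Create(std::move(exporter), batchOpts);
    // TracerProvider: the object components ask for tracers and spans.
    std::shared_ptr<opentelemetry::trace::TracerProvider> provider =
        sdktrace::TracerProviderFactory::Create(std::move(processor));
    opentelemetry::trace::Provider::SetTracerProvider(provider);
}
```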
---
## 5.8 Grafana Integration
> **APM** = Application Performance Monitoring
Step-by-step instructions for integrating xrpld traces with Grafana.
### 5.8.1 Data Source Configuration
#### Tempo (Recommended)
```yaml
# grafana/provisioning/datasources/tempo.yaml
apiVersion: 1
datasources:
- name: Tempo
type: tempo
access: proxy
url: http://tempo:3200
jsonData:
httpMethod: GET
tracesToLogs:
datasourceUid: loki
tags: ["service.name", "xrpl.tx.hash"]
mappedTags: [{ key: "trace_id", value: "traceID" }]
mapTagNamesEnabled: true
filterByTraceID: true
serviceMap:
datasourceUid: prometheus
nodeGraph:
enabled: true
search:
hide: false
lokiSearch:
datasourceUid: loki
```
#### Elastic APM
```yaml
# grafana/provisioning/datasources/elastic-apm.yaml
apiVersion: 1
datasources:
- name: Elasticsearch-APM
type: elasticsearch
access: proxy
url: http://elasticsearch:9200
database: "apm-*"
jsonData:
esVersion: "8.0.0"
timeField: "@timestamp"
logMessageField: message
logLevelField: log.level
```
### 5.8.2 Dashboard Provisioning
```yaml
# grafana/provisioning/dashboards/dashboards.yaml
apiVersion: 1
providers:
- name: "xrpld-dashboards"
orgId: 1
folder: "xrpld"
folderUid: "xrpld"
type: file
disableDeletion: false
updateIntervalSeconds: 30
options:
path: /var/lib/grafana/dashboards/rippled
```
### 5.8.3 Example Dashboard: RPC Performance
```json
{
"title": "xrpld RPC Performance",
"uid": "xrpld-rpc-performance",
"panels": [
{
"title": "RPC Latency by Command",
"type": "heatmap",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"xrpld\" && span.xrpl.rpc.command != \"\"} | histogram_over_time(duration) by (span.xrpl.rpc.command)"
}
],
"gridPos": { "h": 8, "w": 12, "x": 0, "y": 0 }
},
{
"title": "RPC Error Rate",
"type": "timeseries",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"xrpld\" && status.code=error} | rate() by (span.xrpl.rpc.command)"
}
],
"gridPos": { "h": 8, "w": 12, "x": 12, "y": 0 }
},
{
"title": "Top 10 Slowest RPC Commands",
"type": "table",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"xrpld\" && span.xrpl.rpc.command != \"\"} | avg(duration) by (span.xrpl.rpc.command) | topk(10)"
}
],
"gridPos": { "h": 8, "w": 24, "x": 0, "y": 8 }
},
{
"title": "Recent Traces",
"type": "table",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"xrpld\"}"
}
],
"gridPos": { "h": 8, "w": 24, "x": 0, "y": 16 }
}
]
}
```
### 5.8.4 Example Dashboard: Transaction Tracing
```json
{
"title": "xrpld Transaction Tracing",
"uid": "xrpld-tx-tracing",
"panels": [
{
"title": "Transaction Throughput",
"type": "stat",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"xrpld\" && name=\"tx.receive\"} | rate()"
}
],
"gridPos": { "h": 4, "w": 6, "x": 0, "y": 0 }
},
{
"title": "Cross-Node Relay Count",
"type": "timeseries",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"xrpld\" && name=\"tx.relay\"} | avg(span.xrpl.tx.relay_count)"
}
],
"gridPos": { "h": 8, "w": 12, "x": 0, "y": 4 }
},
{
"title": "Transaction Validation Errors",
"type": "table",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"xrpld\" && name=\"tx.validate\" && status.code=error}"
}
],
"gridPos": { "h": 8, "w": 12, "x": 12, "y": 4 }
}
]
}
```
### 5.8.5 TraceQL Query Examples
Common queries for xrpld traces:
```
# Find all traces for a specific transaction hash
{resource.service.name="xrpld" && span.xrpl.tx.hash="ABC123..."}
# Find slow RPC commands (>100ms)
{resource.service.name="xrpld" && name=~"rpc.command.*"} | duration > 100ms
# Find consensus rounds taking >5 seconds
{resource.service.name="xrpld" && name="consensus.round"} | duration > 5s
# Find failed transactions with error details
{resource.service.name="xrpld" && name="tx.validate" && status.code=error}
# Find transactions relayed to many peers
{resource.service.name="xrpld" && name="tx.relay"} | span.xrpl.tx.relay_count > 10
# Compare latency across nodes
{resource.service.name="xrpld" && name="rpc.command.account_info"} | avg(duration) by (resource.service.instance.id)
```
### 5.8.6 Correlation with PerfLog
To correlate OpenTelemetry traces with existing PerfLog data:
**Step 1: Configure Loki to ingest PerfLog**
```yaml
# promtail-config.yaml
scrape_configs:
- job_name: xrpld-perflog
static_configs:
- targets:
- localhost
labels:
job: xrpld
__path__: /var/log/rippled/perf*.log
pipeline_stages:
- json:
expressions:
trace_id: trace_id
ledger_seq: ledger_seq
tx_hash: tx_hash
- labels:
trace_id:
ledger_seq:
tx_hash:
```
**Step 2: Add trace_id to PerfLog entries**
Modify PerfLog to include trace_id when available:
```cpp
// In PerfLog output, add trace_id from current span context
void logPerf(Json::Value& entry) {
auto span = opentelemetry::trace::GetSpan(
opentelemetry::context::RuntimeContext::GetCurrent());
    if (span && span->GetContext().IsValid()) {
        // ToLowerBase16 fills a buffer of exactly 32 hex chars
        // (no NUL terminator), matching TraceId's 16 bytes.
        char traceIdHex[32];
        span->GetContext().trace_id().ToLowerBase16(traceIdHex);
        entry["trace_id"] = std::string(traceIdHex, 32);
    }
// ... existing logging
}
```
**Step 3: Configure Grafana trace-to-logs link**
In Tempo data source configuration, set up the derived field:
```yaml
jsonData:
tracesToLogs:
datasourceUid: loki
tags: ["trace_id", "xrpl.tx.hash"]
filterByTraceID: true
filterBySpanID: false
```
### 5.8.7 Correlation with Insight/StatsD Metrics
To correlate traces with existing Beast Insight metrics:
**Step 1: Export Insight metrics to Prometheus**
```yaml
# prometheus.yaml
scrape_configs:
- job_name: "xrpld-statsd"
static_configs:
- targets: ["statsd-exporter:9102"]
```
**Step 2: Add exemplars to metrics**
When metrics are recorded inside a sampled span context, the OpenTelemetry SDK can attach exemplars (trace IDs) to them via the Prometheus exporter, linking metric spikes to specific traces. Exemplar support in the C++ SDK is still maturing, so verify the behavior against your SDK version.
**Step 3: Configure Grafana metric-to-trace link**
```yaml
# In Prometheus data source
jsonData:
exemplarTraceIdDestinations:
- name: trace_id
datasourceUid: tempo
```
**Step 4: Dashboard panel with exemplars**
```json
{
"title": "RPC Latency with Trace Links",
"type": "timeseries",
"datasource": "Prometheus",
"targets": [
{
"expr": "histogram_quantile(0.99, rate(xrpld_rpc_duration_seconds_bucket[5m]))",
"exemplar": true
}
]
}
```
This allows clicking on metric data points to jump directly to the related trace.
---
_Previous: [Code Samples](./04-code-samples.md)_ | _Next: [Implementation Phases](./06-implementation-phases.md)_ | _Back to: [Overview](./OpenTelemetryPlan.md)_


@@ -1,575 +0,0 @@
# Implementation Phases
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Related**: [Configuration Reference](./05-configuration-reference.md) | [Observability Backends](./07-observability-backends.md)
---
## 6.1 Phase Overview
> **TxQ** = Transaction Queue
```mermaid
gantt
title OpenTelemetry Implementation Timeline
dateFormat YYYY-MM-DD
axisFormat Week %W
section Phase 1
Core Infrastructure :p1, 2024-01-01, 2w
SDK Integration :p1a, 2024-01-01, 4d
Telemetry Interface :p1b, after p1a, 3d
Configuration & CMake :p1c, after p1b, 3d
Unit Tests :p1d, after p1c, 2d
Buffer & Integration :p1e, after p1d, 2d
section Phase 2
RPC Tracing :p2, after p1, 2w
HTTP Context Extraction :p2a, after p1, 2d
RPC Handler Instrumentation :p2b, after p2a, 4d
PathFinding Instrumentation :p2f, after p2b, 2d
TxQ Instrumentation :p2g, after p2f, 2d
WebSocket Support :p2c, after p2g, 2d
Integration Tests :p2d, after p2c, 2d
Buffer & Review :p2e, after p2d, 4d
section Phase 3
Transaction Tracing :p3, after p2, 2w
Protocol Buffer Extension :p3a, after p2, 2d
PeerImp Instrumentation :p3b, after p3a, 3d
Fee Escalation Instrumentation :p3f, after p3b, 2d
Relay Context Propagation :p3c, after p3f, 3d
Multi-node Tests :p3d, after p3c, 2d
Buffer & Review :p3e, after p3d, 4d
section Phase 4
Consensus Tracing :p4, after p3, 2w
Consensus Round Spans :p4a, after p3, 3d
Proposal Handling :p4b, after p4a, 3d
Validator List & Manifest Tracing :p4f, after p4b, 2d
Amendment Voting Tracing :p4g, after p4f, 2d
SHAMap Sync Tracing :p4h, after p4g, 2d
Validation Tests :p4c, after p4h, 4d
Buffer & Review :p4e, after p4c, 4d
section Phase 5
Documentation & Deploy :p5, after p4, 1w
```
---
## 6.2 Phase 1: Core Infrastructure (Weeks 1-2)
**Objective**: Establish foundational telemetry infrastructure
### Tasks
| Task | Description |
| ---- | ----------------------------------------------------- |
| 1.1 | Add OpenTelemetry C++ SDK to Conan/CMake |
| 1.2 | Implement `Telemetry` interface and factory |
| 1.3 | Implement `SpanGuard` RAII wrapper |
| 1.4 | Implement configuration parser |
| 1.5 | Integrate into `ApplicationImp` |
| 1.6 | Add conditional compilation (`XRPL_ENABLE_TELEMETRY`) |
| 1.7  | Create `NullTelemetry` no-op implementation (see the sketch after this table) |
| 1.8 | Unit tests for core infrastructure |
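Task 1.7 might look like the following sketch. The base-class methods are assumptions taken from this plan's `Telemetry` interface; the load-bearing detail is real, since opentelemetry-cpp ships a `NoopTracer` whose spans are valid objects that record nothing:
```cpp
#include <opentelemetry/nostd/shared_ptr.h>
#include <opentelemetry/trace/noop.h>
// Hypothetical no-op implementation (task 1.7); base-class methods are
// assumed from this plan's Telemetry interface sketch.
class NullTelemetry : public telemetry::Telemetry
{
public:
    void start() override {}
    void stop() override {}
    void setServiceInstanceId(std::string const&) override {}
    opentelemetry::nostd::shared_ptr<opentelemetry::trace::Tracer>
    tracer() override
    {
        // NoopTracer spans accept every call and record nothing, so
        // call sites work unchanged when telemetry is disabled.
        static opentelemetry::nostd::shared_ptr<opentelemetry::trace::Tracer>
            noop{new opentelemetry::trace::NoopTracer()};
        return noop;
    }
};
```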
### Exit Criteria
- [ ] OpenTelemetry SDK compiles and links
- [ ] Telemetry can be enabled/disabled via config
- [ ] Basic span creation works
- [ ] No performance regression when disabled
- [ ] Unit tests passing
---
## 6.3 Phase 2: RPC Tracing (Weeks 3-4)
> **TxQ** = Transaction Queue
**Objective**: Complete tracing for all RPC operations
### Tasks
| Task | Description |
| ---- | -------------------------------------------------------------------------- |
| 2.1  | Implement W3C Trace Context HTTP header extraction (see the sketch after this table) |
| 2.2 | Instrument `ServerHandler::onRequest()` |
| 2.3 | Instrument `RPCHandler::doCommand()` |
| 2.4 | Add RPC-specific attributes |
| 2.5 | Instrument WebSocket handler |
| 2.6 | PathFinding instrumentation (`pathfind.request`, `pathfind.compute` spans) |
| 2.7 | TxQ instrumentation (`txq.enqueue`, `txq.apply` spans) |
| 2.8 | Integration tests for RPC tracing |
| 2.9 | Performance benchmarks |
| 2.10 | Documentation |
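Task 2.1 reduces to wrapping the request headers in a `TextMapCarrier` and handing it to the upstream W3C propagator. A sketch, with `HeadersMap` standing in for whatever header container `ServerHandler` actually exposes:
```cpp
#include <map>
#include <string>
#include <opentelemetry/context/propagation/text_map_propagator.h>
#include <opentelemetry/context/runtime_context.h>
#include <opentelemetry/trace/propagation/http_trace_context.h>
// Stand-in for the real request type in ServerHandler.
using HeadersMap = std::map<std::string, std::string>;
class HttpHeaderCarrier final
    : public opentelemetry::context::propagation::TextMapCarrier
{
    HeadersMap const& headers_;
public:
    explicit HttpHeaderCarrier(HeadersMap const& h) : headers_(h) {}
    opentelemetry::nostd::string_view
    Get(opentelemetry::nostd::string_view key) const noexcept override
    {
        auto it = headers_.find(std::string(key.data(), key.size()));
        if (it == headers_.end())
            return "";
        return opentelemetry::nostd::string_view{it->second};
    }
    void
    Set(opentelemetry::nostd::string_view,
        opentelemetry::nostd::string_view) noexcept override
    {
        // The server side only extracts; nothing to inject here.
    }
};
// Returns a context carrying any traceparent/tracestate found in the
// request headers; the rpc.request span is then started as its child.
opentelemetry::context::Context
extractTraceContext(HeadersMap const& headers)
{
    static opentelemetry::trace::propagation::HttpTraceContext propagator;
    HttpHeaderCarrier carrier{headers};
    auto current = opentelemetry::context::RuntimeContext::GetCurrent();
    return propagator.Extract(carrier, current);
}
```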
### Exit Criteria
- [ ] All RPC commands traced
- [ ] Trace context propagates from HTTP headers
- [ ] WebSocket and HTTP both instrumented
- [ ] <1ms overhead per RPC call
- [ ] Integration tests passing
---
## 6.4 Phase 3: Transaction Tracing (Weeks 5-6)
**Objective**: Trace transaction lifecycle across network
### Tasks
| Task | Description |
| ---- | ---------------------------------------------------- |
| 3.1 | Define `TraceContext` Protocol Buffer message |
| 3.2  | Implement protobuf context serialization (see the sketch after this table) |
| 3.3 | Instrument `PeerImp::handleTransaction()` |
| 3.4 | Instrument `NetworkOPs::submitTransaction()` |
| 3.5 | Instrument HashRouter integration |
| 3.6 | Fee escalation instrumentation (`fee.escalate` span) |
| 3.7 | Implement relay context propagation |
| 3.8 | Integration tests (multi-node) |
| 3.9 | Performance benchmarks |
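A sketch of tasks 3.1-3.2 follows. The `TMTraceContext` message and its `trace_parent`/`trace_state` fields are hypothetical (the actual field layout is defined in this phase); the carrier pattern and the propagator calls are the same upstream API used on the HTTP path:
```cpp
#include <opentelemetry/context/propagation/text_map_propagator.h>
// Hypothetical protobuf wrapper: protocol::TMTraceContext and its
// accessors do not exist yet; they illustrate tasks 3.1-3.2.
class ProtobufContextCarrier final
    : public opentelemetry::context::propagation::TextMapCarrier
{
    protocol::TMTraceContext& msg_;
public:
    explicit ProtobufContextCarrier(protocol::TMTraceContext& m) : msg_(m) {}
    opentelemetry::nostd::string_view
    Get(opentelemetry::nostd::string_view key) const noexcept override
    {
        if (key == "traceparent")
            return msg_.trace_parent();
        if (key == "tracestate")
            return msg_.trace_state();
        return "";
    }
    void
    Set(opentelemetry::nostd::string_view key,
        opentelemetry::nostd::string_view value) noexcept override
    {
        if (key == "traceparent")
            msg_.set_trace_parent(std::string(value.data(), value.size()));
        else if (key == "tracestate")
            msg_.set_trace_state(std::string(value.data(), value.size()));
    }
};
// Sender injects before relay, receiver extracts before tx.receive:
//   propagator.Inject(carrier, opentelemetry::context::RuntimeContext::GetCurrent());
//   auto ctx = opentelemetry::context::RuntimeContext::GetCurrent();
//   propagator.Extract(carrier, ctx);
```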
### Exit Criteria
- [ ] Transaction traces span across nodes
- [ ] Trace context in Protocol Buffer messages
- [ ] HashRouter deduplication visible in traces
- [ ] Multi-node integration tests passing
- [ ] <5% overhead on transaction throughput
---
## 6.5 Phase 4: Consensus Tracing (Weeks 7-8)
**Objective**: Full observability into consensus rounds
### Tasks
| Task | Description |
| ---- | ---------------------------------------------- |
| 4.1 | Instrument `RCLConsensusAdaptor::startRound()` |
| 4.2 | Instrument phase transitions |
| 4.3 | Instrument proposal handling |
| 4.4 | Instrument validation handling |
| 4.5 | Add consensus-specific attributes |
| 4.6 | Correlate with transaction traces |
| 4.7 | Validator list and manifest tracing |
| 4.8 | Amendment voting tracing |
| 4.9 | SHAMap sync tracing |
| 4.10 | Multi-validator integration tests |
| 4.11 | Performance validation |
### Exit Criteria
- [x] Complete consensus round traces
- [x] Phase transitions visible
- [x] Proposals and validations traced
- [x] No impact on consensus timing
- [ ] Multi-validator test network validated
### Implementation Status — Phase 4a Complete
Phase 4a (establish-phase gap fill & cross-node correlation) adds:
- **Deterministic trace ID** derived from `previousLedger.id()` so all validators
in the same round share the same `trace_id` (switchable via
`consensus_trace_strategy` config: `"deterministic"` or `"attribute"`).
  The `consensus_trace_strategy` option will be documented in the
  [Configuration Reference](./05-configuration-reference.md) as part of the
  Phase 4a implementation; a sketch of the deterministic ID derivation
  appears below.
- **Round lifecycle spans**: `consensus.round` with round-to-round span links.
- **Establish phase**: `consensus.establish`, `consensus.update_positions` (with
`dispute.resolve` events), `consensus.check` (with threshold tracking).
- **Mode changes**: `consensus.mode_change` spans.
- **Validation**: `consensus.validation.send` with span link to round span
(thread-safe cross-thread access via `roundSpanContext_` snapshot).
- **Separation of concerns**: telemetry extracted to private helpers
(`startRoundTracing`, `createValidationSpan`, `startEstablishTracing`,
`updateEstablishTracing`, `endEstablishTracing`).
See [Phase4_taskList.md](./Phase4_taskList.md) for the full spec and implementation notes.
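A sketch of the deterministic derivation, under stated assumptions: `uint256` is rippled's 32-byte hash type, and wiring the derived ID into the round's root span additionally requires a custom `sdk::trace::IdGenerator` (not shown here).
```cpp
#include <opentelemetry/nostd/span.h>
#include <opentelemetry/trace/trace_id.h>
// All validators in a round share previousLedger.id(), so truncating it
// to 16 bytes yields the same trace_id on every node with no network
// propagation required.
opentelemetry::trace::TraceId
consensusTraceId(uint256 const& previousLedgerId)
{
    // A TraceId is 16 bytes; take the first half of the 32-byte hash.
    return opentelemetry::trace::TraceId(
        opentelemetry::nostd::span<uint8_t const, 16>(
            previousLedgerId.data(), 16));
}
```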
---
## 6.6 Phase 5: Documentation & Deployment (Week 9)
**Objective**: Production readiness
### Tasks
| Task | Description |
| ---- | ----------------------------- |
| 5.1 | Operator runbook |
| 5.2 | Grafana dashboards |
| 5.3 | Alert definitions |
| 5.4 | Collector deployment examples |
| 5.5 | Developer documentation |
| 5.6 | Training materials |
| 5.7 | Final integration testing |
---
## 6.7 Risk Assessment
```mermaid
quadrantChart
title Risk Assessment Matrix
x-axis Low Impact --> High Impact
y-axis Low Likelihood --> High Likelihood
quadrant-1 Mitigate Immediately
quadrant-2 Plan Mitigation
quadrant-3 Accept Risk
quadrant-4 Monitor Closely
SDK Compat: [0.2, 0.18]
Protocol Chg: [0.75, 0.72]
Perf Overhead: [0.58, 0.42]
Context Prop: [0.4, 0.55]
Memory Leaks: [0.85, 0.25]
```
### Risk Details
| Risk | Likelihood | Impact | Mitigation |
| ------------------------------------ | ---------- | ------ | --------------------------------------- |
| Protocol changes break compatibility | Medium | High | Use high field numbers, optional fields |
| Performance overhead unacceptable | Medium | Medium | Sampling, conditional compilation |
| Context propagation complexity | Medium | Medium | Phased rollout, extensive testing |
| SDK compatibility issues | Low | Medium | Pin SDK version, fallback to no-op |
| Memory leaks in long-running nodes | Low | High | Memory profiling, bounded queues |
---
## 6.8 Success Metrics
| Metric | Target | Measurement |
| ------------------------ | -------------------------------------------------------------- | --------------------- |
| Trace coverage | >95% of transaction code paths (independent of sampling ratio) | Sampling verification |
| CPU overhead | <3% | Benchmark tests |
| Memory overhead | <10 MB | Memory profiling |
| Latency impact (p99) | <2% | Performance tests |
| Trace completeness | >99% spans with required attrs | Validation script |
| Cross-node trace linkage | >90% of multi-hop transactions | Integration tests |
---
## 6.9 Quick Wins and Crawl-Walk-Run Strategy
> **TxQ** = Transaction Queue
This section outlines a prioritized approach to maximize ROI with minimal initial investment.
### 6.9.1 Crawl-Walk-Run Overview
<div align="center">
```mermaid
flowchart TB
subgraph crawl["🐢 CRAWL (Week 1-2)"]
direction LR
c1[Core SDK Setup] ~~~ c2[RPC Tracing Only] ~~~ c3[PathFinding + TxQ Tracing] ~~~ c4[Single Node]
end
subgraph walk["🚶 WALK (Week 3-5)"]
direction LR
w1[Transaction Tracing] ~~~ w2[Fee Escalation Tracing] ~~~ w3[Cross-Node Context] ~~~ w4[Basic Dashboards]
end
subgraph run["🏃 RUN (Week 6-9)"]
direction LR
r1[Consensus Tracing] ~~~ r2[Validator, Amendment,<br/>SHAMap Tracing] ~~~ r3[Full Correlation] ~~~ r4[Production Deploy]
end
crawl --> walk --> run
style crawl fill:#1b5e20,stroke:#0d3d14,color:#fff
style walk fill:#bf360c,stroke:#8c2809,color:#fff
style run fill:#0d47a1,stroke:#082f6a,color:#fff
style c1 fill:#1b5e20,stroke:#0d3d14,color:#fff
style c2 fill:#1b5e20,stroke:#0d3d14,color:#fff
style c3 fill:#1b5e20,stroke:#0d3d14,color:#fff
style c4 fill:#1b5e20,stroke:#0d3d14,color:#fff
style w1 fill:#ffe0b2,stroke:#ffcc80,color:#1e293b
style w2 fill:#ffe0b2,stroke:#ffcc80,color:#1e293b
style w3 fill:#ffe0b2,stroke:#ffcc80,color:#1e293b
style w4 fill:#ffe0b2,stroke:#ffcc80,color:#1e293b
style r1 fill:#0d47a1,stroke:#082f6a,color:#fff
style r2 fill:#0d47a1,stroke:#082f6a,color:#fff
style r3 fill:#0d47a1,stroke:#082f6a,color:#fff
style r4 fill:#0d47a1,stroke:#082f6a,color:#fff
```
</div>
**Reading the diagram:**
- **CRAWL (Weeks 1-2)**: Minimal investment -- set up the SDK, instrument RPC and PathFinding/TxQ handlers, and verify on a single node. Delivers immediate latency visibility.
- **WALK (Weeks 3-5)**: Expand to transaction lifecycle tracing, fee escalation, cross-node context propagation, and basic Grafana dashboards. This is where distributed tracing starts working.
- **RUN (Weeks 6-9)**: Full consensus instrumentation, validator/amendment/SHAMap tracing, end-to-end correlation, and production deployment with sampling and alerting.
- **Arrows (crawl → walk → run)**: Each phase builds on the prior one; you cannot skip ahead because later phases depend on infrastructure established earlier.
### 6.9.2 Quick Wins (Immediate Value)
| Quick Win | Value | When to Deploy |
| ------------------------------ | ------ | -------------- |
| **RPC Command Tracing** | High | Week 2 |
| **RPC Latency Histograms** | High | Week 2 |
| **Error Rate Dashboard** | Medium | Week 2 |
| **Transaction Submit Tracing** | High | Week 3 |
| **Consensus Round Duration** | Medium | Week 6 |
### 6.9.3 CRAWL Phase (Weeks 1-2)
**Goal**: Get basic tracing working with minimal code changes.
**What You Get**:
- RPC request/response traces for all commands
- Latency breakdown per RPC command
- PathFinding and TxQ tracing (directly impacts RPC latency)
- Error visibility with stack traces
- Basic Grafana dashboard
**Code Changes**: ~15 lines in `ServerHandler.cpp`, ~40 lines in new telemetry module
**Why Start Here**:
- RPC is the lowest-risk, highest-visibility component
- PathFinding and TxQ are RPC-adjacent and directly affect latency
- Immediate value for debugging client issues
- No cross-node complexity
- Single file modification to existing code
### 6.9.4 WALK Phase (Weeks 3-5)
**Goal**: Add transaction lifecycle tracing across nodes.
**What You Get**:
- End-to-end transaction traces from submit to relay
- Fee escalation tracing within the transaction pipeline
- Cross-node correlation (see transaction path)
- HashRouter deduplication visibility
- Relay latency metrics
**Code Changes**: ~120 lines across 4 files, plus protobuf extension
**Why Do This Second**:
- Builds on RPC tracing (transactions submitted via RPC)
- Fee escalation is integral to the transaction processing pipeline
- Moderate complexity (requires context propagation)
- High value for debugging transaction issues
### 6.9.5 RUN Phase (Weeks 6-9)
**Goal**: Full observability including consensus.
**What You Get**:
- Complete consensus round visibility
- Phase transition timing
- Validator proposal tracking
- Validator list and manifest tracing
- Amendment voting tracing
- SHAMap sync tracing
- Full end-to-end traces (client → RPC → TX → consensus → ledger)
**Code Changes**: ~100 lines across 3 consensus files, plus validator/amendment/SHAMap modules
**Why Do This Last**:
- Highest complexity (consensus is critical path)
- Validator, amendment, and SHAMap components are lower priority
- Requires thorough testing
- Lower relative value (consensus issues are rarer)
### 6.9.6 ROI Prioritization Matrix
```mermaid
quadrantChart
title Implementation ROI Matrix
x-axis Low Effort --> High Effort
y-axis Low Value --> High Value
quadrant-1 Quick Wins - Do First
quadrant-2 Major Projects - Plan Carefully
quadrant-3 Nice to Have - Optional
quadrant-4 Time Sinks - Avoid
RPC Tracing: [0.15, 0.92]
TX Submit Trace: [0.3, 0.78]
TX Relay Trace: [0.5, 0.88]
Consensus Trace: [0.72, 0.72]
Peer Msg Trace: [0.85, 0.3]
Ledger Acquire: [0.55, 0.52]
```
---
## 6.10 Definition of Done
> **TxQ** = Transaction Queue | **HA** = High Availability
Clear, measurable criteria for each phase.
### 6.10.1 Phase 1: Core Infrastructure
| Criterion | Measurement | Target |
| --------------- | ---------------------------------------------------------- | ---------------------------- |
| SDK Integration | `cmake --build` succeeds with `-DXRPL_ENABLE_TELEMETRY=ON` | ✅ Compiles |
| Runtime Toggle | `enabled=0` produces zero overhead | <0.1% CPU difference |
| Span Creation | Unit test creates and exports span | Span appears in Tempo |
| Configuration | All config options parsed correctly | Config validation tests pass |
| Documentation | Developer guide exists | PR approved |
**Definition of Done**: All criteria met, PR merged, no regressions in CI.
### 6.10.2 Phase 2: RPC Tracing
| Criterion | Measurement | Target |
| ------------------ | ---------------------------------- | -------------------------- |
| Coverage | All RPC commands instrumented | 100% of commands |
| Context Extraction | traceparent header propagates | Integration test passes |
| Attributes | Command, status, duration recorded | Validation script confirms |
| Performance | RPC latency overhead | <1ms p99 |
| Dashboard | Grafana dashboard deployed | Screenshot in docs |
**Definition of Done**: RPC traces visible in Tempo for all commands, dashboard shows latency distribution.
### 6.10.3 Phase 3: Transaction Tracing
| Criterion | Measurement | Target |
| ---------------- | ------------------------------- | ---------------------------------- |
| Local Trace      | Submit → validate → TxQ traced  | Single-node test passes            |
| Cross-Node | Context propagates via protobuf | Multi-node test passes |
| Relay Visibility | relay_count attribute correct | Spot check 100 txs |
| HashRouter | Deduplication visible in trace | Duplicate txs show suppressed=true |
| Performance | TX throughput overhead | <5% degradation |
**Definition of Done**: Transaction traces span 3+ nodes in test network, performance within bounds.
### 6.10.4 Phase 4: Consensus Tracing
| Criterion | Measurement | Target |
| -------------------- | ----------------------------- | ------------------------- |
| Round Tracing | startRound creates root span | Unit test passes |
| Phase Visibility | All phases have child spans | Integration test confirms |
| Proposer Attribution | Proposer ID in attributes | Spot check 50 rounds |
| Timing Accuracy | Phase durations match PerfLog | <5% variance |
| No Consensus Impact | Round timing unchanged | Performance test passes |
**Definition of Done**: Consensus rounds fully traceable, no impact on consensus timing.
### 6.10.5 Phase 5: Production Deployment
| Criterion | Measurement | Target |
| ------------ | ---------------------------- | -------------------------- |
| Collector HA | Multiple collectors deployed | No single point of failure |
| Sampling | Tail sampling configured | 10% base + errors + slow |
| Retention | Data retained per policy | 7 days hot, 30 days warm |
| Alerting | Alerts configured | Error spike, high latency |
| Runbook | Operator documentation | Approved by ops team |
| Training | Team trained | Session completed |
**Definition of Done**: Telemetry running in production, operators trained, alerts active.
### 6.10.6 Success Metrics Summary
| Phase | Primary Metric | Secondary Metric | Deadline |
| ------- | ---------------------- | --------------------------- | ------------- |
| Phase 1 | SDK compiles and runs | Zero overhead when disabled | End of Week 2 |
| Phase 2 | 100% RPC coverage | <1ms latency overhead | End of Week 4 |
| Phase 3 | Cross-node traces work | <5% throughput impact | End of Week 6 |
| Phase 4 | Consensus fully traced | No consensus timing impact | End of Week 8 |
| Phase 5 | Production deployment | Operators trained | End of Week 9 |
---
## 6.11 Recommended Implementation Order
Based on ROI analysis, implement in this exact order:
```mermaid
flowchart TB
subgraph week1["Week 1"]
t1[1. OpenTelemetry SDK<br/>Conan/CMake integration]
t2[2. Telemetry interface<br/>SpanGuard, config]
end
subgraph week2["Week 2"]
t3[3. RPC ServerHandler<br/>instrumentation]
t4[4. Basic Tempo setup<br/>for testing]
end
subgraph week3["Week 3"]
t5[5. Transaction submit<br/>tracing]
t6[6. Grafana dashboard<br/>v1]
end
subgraph week4["Week 4"]
t7[7. Protobuf context<br/>extension]
t8[8. PeerImp tx.relay<br/>instrumentation]
end
subgraph week5["Week 5"]
t9[9. Multi-node<br/>integration tests]
t10[10. Performance<br/>benchmarks]
end
subgraph week6_8["Weeks 6-8"]
t11[11. Consensus<br/>instrumentation]
t12[12. Full integration<br/>testing]
end
subgraph week9["Week 9"]
t13[13. Production<br/>deployment]
t14[14. Documentation<br/>& training]
end
t1 --> t2 --> t3 --> t4
t4 --> t5 --> t6
t6 --> t7 --> t8
t8 --> t9 --> t10
t10 --> t11 --> t12
t12 --> t13 --> t14
style week1 fill:#1b5e20,stroke:#0d3d14,color:#fff
style week2 fill:#1b5e20,stroke:#0d3d14,color:#fff
style week3 fill:#bf360c,stroke:#8c2809,color:#fff
style week4 fill:#bf360c,stroke:#8c2809,color:#fff
style week5 fill:#bf360c,stroke:#8c2809,color:#fff
style week6_8 fill:#0d47a1,stroke:#082f6a,color:#fff
style week9 fill:#4a148c,stroke:#2e0d57,color:#fff
style t1 fill:#1b5e20,stroke:#0d3d14,color:#fff
style t2 fill:#1b5e20,stroke:#0d3d14,color:#fff
style t3 fill:#1b5e20,stroke:#0d3d14,color:#fff
style t4 fill:#1b5e20,stroke:#0d3d14,color:#fff
style t5 fill:#ffe0b2,stroke:#ffcc80,color:#1e293b
style t6 fill:#ffe0b2,stroke:#ffcc80,color:#1e293b
style t7 fill:#ffe0b2,stroke:#ffcc80,color:#1e293b
style t8 fill:#ffe0b2,stroke:#ffcc80,color:#1e293b
style t9 fill:#ffe0b2,stroke:#ffcc80,color:#1e293b
style t10 fill:#ffe0b2,stroke:#ffcc80,color:#1e293b
style t11 fill:#0d47a1,stroke:#082f6a,color:#fff
style t12 fill:#0d47a1,stroke:#082f6a,color:#fff
style t13 fill:#4a148c,stroke:#2e0d57,color:#fff
style t14 fill:#4a148c,stroke:#2e0d57,color:#fff
```
**Reading the diagram:**
- **Week 1 (tasks 1-2)**: Foundation work -- integrate the OpenTelemetry SDK via Conan/CMake and build the `Telemetry` interface with `SpanGuard` and config parsing.
- **Week 2 (tasks 3-4)**: First observable output -- instrument `ServerHandler` for RPC tracing and stand up Tempo so developers can see traces immediately.
- **Weeks 3-5 (tasks 5-10)**: Transaction lifecycle -- add submit tracing, build the first Grafana dashboard, extend protobuf for cross-node context, instrument `PeerImp` relay, then validate with multi-node integration tests and performance benchmarks.
- **Weeks 6-8 (tasks 11-12)**: Consensus deep-dive -- instrument consensus rounds and phases, then run full integration testing across all instrumented paths.
- **Week 9 (tasks 13-14)**: Go-live -- deploy to production with sampling/alerting configured, and deliver documentation and operator training.
- **Arrow chain (t1 ... t14)**: Strict sequential dependency; each task's output is a prerequisite for the next.
---
_Previous: [Configuration Reference](./05-configuration-reference.md)_ | _Next: [Observability Backends](./07-observability-backends.md)_ | _Back to: [Overview](./OpenTelemetryPlan.md)_


@@ -1,641 +0,0 @@
# Observability Backend Recommendations
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Related**: [Implementation Phases](./06-implementation-phases.md) | [Appendix](./08-appendix.md)
---
## 7.1 Development/Testing Backends
> **OTLP** = OpenTelemetry Protocol
| Backend | Pros | Cons | Use Case |
| ---------- | ----------------------------------- | ---------------------- | ------------------- |
| **Tempo** | Cost-effective, Grafana integration | Requires Grafana stack | Local dev, CI, Prod |
| **Zipkin** | Simple, lightweight | Basic features | Quick prototyping |
### Quick Start with Tempo
```bash
# Start Tempo with OTLP support
docker run -d --name tempo \
-p 3200:3200 \
-p 4317:4317 \
-p 4318:4318 \
grafana/tempo:2.6.1
```
---
## 7.2 Production Backends
> **APM** = Application Performance Monitoring
| Backend | Pros | Cons | Use Case |
| ----------------- | ----------------------------------------- | ---------------------- | --------------------------- |
| **Grafana Tempo** | Cost-effective, Grafana integration | Requires Grafana stack | Most production deployments |
| **Elastic APM** | Full observability stack, log correlation | Resource intensive | Existing Elastic users |
| **Honeycomb** | Excellent query, high cardinality | SaaS cost | Deep debugging needs |
| **Datadog APM** | Full platform, easy setup | SaaS cost | Enterprise with budget |
### Backend Selection Flowchart
```mermaid
flowchart TD
start[Select Backend] --> budget{Budget<br/>Constraints?}
budget -->|Yes| oss[Open Source]
budget -->|No| saas{Prefer<br/>SaaS?}
oss --> existing{Existing<br/>Stack?}
existing -->|Grafana| tempo[Grafana Tempo]
existing -->|Elastic| elastic[Elastic APM]
existing -->|None| tempo
saas -->|Yes| enterprise{Enterprise<br/>Support?}
saas -->|No| oss
enterprise -->|Yes| datadog[Datadog APM]
enterprise -->|No| honeycomb[Honeycomb]
tempo --> final[Configure Collector]
elastic --> final
honeycomb --> final
datadog --> final
style start fill:#0f172a,stroke:#020617,color:#fff
style budget fill:#334155,stroke:#1e293b,color:#fff
style oss fill:#1e293b,stroke:#0f172a,color:#fff
style existing fill:#334155,stroke:#1e293b,color:#fff
style saas fill:#334155,stroke:#1e293b,color:#fff
style enterprise fill:#334155,stroke:#1e293b,color:#fff
style final fill:#0f172a,stroke:#020617,color:#fff
style tempo fill:#1b5e20,stroke:#0d3d14,color:#fff
style elastic fill:#bf360c,stroke:#8c2809,color:#fff
style honeycomb fill:#0d47a1,stroke:#082f6a,color:#fff
style datadog fill:#4a148c,stroke:#2e0d57,color:#fff
```
**Reading the diagram:**
- **Budget Constraints? (Yes)**: Leads to open-source options. If you already run Grafana or Elastic, pick the matching backend; otherwise default to Grafana Tempo.
- **Budget Constraints? (No) → Prefer SaaS?**: If you want a managed service, choose between Datadog (enterprise support) and Honeycomb (developer-focused). If not, fall back to open-source.
- **Terminal nodes (Tempo / Elastic / Honeycomb / Datadog)**: Each represents a concrete backend choice, all of which feed into the same final step.
- **Configure Collector**: Regardless of backend, you always finish by configuring the OTel Collector to export to your chosen destination.
---
## 7.3 Recommended Production Architecture
> **OTLP** = OpenTelemetry Protocol | **APM** = Application Performance Monitoring | **HA** = High Availability
```mermaid
flowchart TB
subgraph validators["Validator Nodes"]
v1[xrpld<br/>Validator 1]
v2[xrpld<br/>Validator 2]
end
subgraph stock["Stock Nodes"]
s1[xrpld<br/>Stock 1]
s2[xrpld<br/>Stock 2]
end
subgraph collector["OTel Collector Cluster"]
c1[Collector<br/>DC1]
c2[Collector<br/>DC2]
end
subgraph backends["Storage Backends"]
tempo[(Grafana<br/>Tempo)]
elastic[(Elastic<br/>APM)]
archive[(S3/GCS<br/>Archive)]
end
subgraph ui["Visualization"]
grafana[Grafana<br/>Dashboards]
end
v1 -->|OTLP| c1
v2 -->|OTLP| c1
s1 -->|OTLP| c2
s2 -->|OTLP| c2
c1 --> tempo
c1 --> elastic
c2 --> tempo
c2 --> archive
tempo --> grafana
elastic --> grafana
%% Note: simplified single-collector-per-DC topology shown for clarity
style validators fill:#b71c1c,stroke:#7f1d1d,color:#ffffff
style stock fill:#0d47a1,stroke:#082f6a,color:#ffffff
style collector fill:#bf360c,stroke:#8c2809,color:#ffffff
style backends fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style ui fill:#4a148c,stroke:#2e0d57,color:#ffffff
```
**Reading the diagram:**
- **Validator / Stock Nodes**: All xrpld nodes emit trace data via OTLP. Validators and stock nodes are grouped separately because they may reside in different network zones.
- **Collector Cluster (DC1, DC2)**: Regional collectors receive OTLP from nodes in their datacenter, apply processing (sampling, enrichment), and fan out to multiple backends.
- **Storage Backends**: Tempo and Elastic provide queryable trace storage; S3/GCS Archive provides long-term cold storage for compliance or post-incident analysis.
- **Grafana Dashboards**: The single visualization layer that queries both Tempo and Elastic, giving operators a unified view of all traces.
- **Data flow direction**: Nodes → Collectors → Storage → Grafana. Each arrow represents a network hop; minimizing collector-to-backend hops reduces latency.
> **Note**: Production deployments should use multiple collector instances behind a load balancer for high availability. The diagram shows a simplified single-collector topology for clarity.
---
## 7.4 Architecture Considerations
### 7.4.1 Collector Placement
| Strategy | Description | Pros | Cons |
| ------------- | -------------------- | ------------------------ | ----------------------- |
| **Sidecar** | Collector per node | Isolation, simple config | Resource overhead |
| **DaemonSet** | Collector per host | Shared resources | Complexity |
| **Gateway** | Central collector(s) | Centralized processing | Single point of failure |
**Recommendation**: Use **Gateway** pattern with regional collectors for xrpld networks:
- One collector cluster per datacenter/region
- Tail-based sampling at collector level
- Multiple export destinations for redundancy
### 7.4.2 Sampling Strategy
```mermaid
flowchart LR
subgraph head["Head Sampling (Node)"]
hs[Node-level head sampling<br/>configurable, default: 100%<br/>recommended production: 10%]
end
subgraph tail["Tail Sampling (Collector)"]
ts1[Keep all errors]
ts2[Keep slow >5s]
ts3[Keep 10% rest]
end
head --> tail
ts1 --> final[Final Traces]
ts2 --> final
ts3 --> final
style head fill:#0d47a1,stroke:#082f6a,color:#fff
style tail fill:#1b5e20,stroke:#0d3d14,color:#fff
style hs fill:#0d47a1,stroke:#082f6a,color:#fff
style ts1 fill:#1b5e20,stroke:#0d3d14,color:#fff
style ts2 fill:#1b5e20,stroke:#0d3d14,color:#fff
style ts3 fill:#1b5e20,stroke:#0d3d14,color:#fff
style final fill:#bf360c,stroke:#8c2809,color:#fff
```
**Reading the diagram:**
- **Head Sampling (Node)**: The first filter -- each xrpld node decides whether to sample a trace at creation time (default 100%, recommended 10% in production). This controls the volume leaving the node.
- **Tail Sampling (Collector)**: The second filter -- the collector inspects completed traces and applies rules: keep all errors, keep anything slower than 5 seconds, and keep 10% of the remainder.
- **Arrow head → tail**: All head-sampled traces flow to the collector, where tail sampling further reduces volume while preserving the most valuable data.
- **Final Traces**: The output after both sampling stages; this is what gets stored and queried. The two-stage approach balances cost with debuggability.
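On the node side, the head-sampling knob described above maps onto the upstream samplers as in this sketch; the ratio would come from this plan's `[telemetry]` configuration:
```cpp
#include <memory>
#include <opentelemetry/sdk/trace/samplers/parent_factory.h>
#include <opentelemetry/sdk/trace/samplers/trace_id_ratio_factory.h>
// ParentBased(TraceIdRatioBased(ratio)): sample N% of locally started
// traces, but always honor an incoming sampled flag so cross-node
// traces are never cut mid-flight. The result is handed to
// TracerProviderFactory::Create(...) during initialization.
std::unique_ptr<opentelemetry::sdk::trace::Sampler>
makeHeadSampler(double ratio)  // e.g. 1.0 in dev, 0.10 in production
{
    return opentelemetry::sdk::trace::ParentBasedSamplerFactory::Create(
        opentelemetry::sdk::trace::TraceIdRatioBasedSamplerFactory::Create(
            ratio));
}
```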
### 7.4.3 Data Retention
| Environment | Hot Storage | Warm Storage | Cold Archive |
| ----------- | ----------- | ------------ | ------------ |
| Development | 24 hours | N/A | N/A |
| Staging | 7 days | N/A | N/A |
| Production  | 7 days      | 30 days      | multi-year (policy-driven) |
---
## 7.5 Integration Checklist
- [ ] Choose primary backend (Tempo recommended for cost/features)
- [ ] Deploy collector cluster with high availability
- [ ] Configure tail-based sampling for error/latency traces
- [ ] Set up Grafana dashboards for trace visualization
- [ ] Configure alerts for trace anomalies
- [ ] Establish data retention policies
- [ ] Test trace correlation with logs and metrics
---
## 7.6 Grafana Dashboard Examples
Pre-built dashboards for xrpld observability.
### 7.6.1 Consensus Health Dashboard
```json
{
"title": "xrpld Consensus Health",
"uid": "xrpld-consensus-health",
"tags": ["xrpld", "consensus", "tracing"],
"panels": [
{
"title": "Consensus Round Duration",
"type": "timeseries",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"xrpld\" && name=\"consensus.round\"} | avg(duration) by (resource.service.instance.id)"
}
],
"fieldConfig": {
"defaults": {
"unit": "ms",
"thresholds": {
"steps": [
{ "color": "green", "value": null },
{ "color": "yellow", "value": 4000 },
{ "color": "red", "value": 5000 }
]
}
}
},
"gridPos": { "h": 8, "w": 12, "x": 0, "y": 0 }
},
{
"title": "Phase Duration Breakdown",
"type": "barchart",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"xrpld\" && name=~\"consensus.phase.*\"} | avg(duration) by (name)"
}
],
"gridPos": { "h": 8, "w": 12, "x": 12, "y": 0 }
},
{
"title": "Proposers per Round",
"type": "stat",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"xrpld\" && name=\"consensus.round\"} | avg(span.xrpl.consensus.proposers)"
}
],
"gridPos": { "h": 4, "w": 6, "x": 0, "y": 8 }
},
{
"title": "Recent Slow Rounds (>5s)",
"type": "table",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"xrpld\" && name=\"consensus.round\"} | duration > 5s"
}
],
"gridPos": { "h": 8, "w": 24, "x": 0, "y": 12 }
}
]
}
```
### 7.6.2 Node Overview Dashboard
```json
{
"title": "xrpld Node Overview",
"uid": "xrpld-node-overview",
"panels": [
{
"title": "Active Nodes",
"type": "stat",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"xrpld\"} | count_over_time() by (resource.service.instance.id) | count()"
}
],
"gridPos": { "h": 4, "w": 4, "x": 0, "y": 0 }
},
{
"title": "Total Transactions (1h)",
"type": "stat",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"xrpld\" && name=\"tx.receive\"} | count()"
}
],
"gridPos": { "h": 4, "w": 4, "x": 4, "y": 0 }
},
{
"title": "Error Rate",
"type": "gauge",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"xrpld\" && status.code=error} | rate() / {resource.service.name=\"xrpld\"} | rate() * 100"
}
],
"fieldConfig": {
"defaults": {
"unit": "percent",
"max": 10,
"thresholds": {
"steps": [
{ "color": "green", "value": null },
{ "color": "yellow", "value": 1 },
{ "color": "red", "value": 5 }
]
}
}
},
"gridPos": { "h": 4, "w": 4, "x": 8, "y": 0 }
},
{
"title": "Service Map",
"type": "nodeGraph",
"datasource": "Tempo",
"gridPos": { "h": 12, "w": 12, "x": 12, "y": 0 }
}
]
}
```
### 7.6.3 Alert Rules
```yaml
# grafana/provisioning/alerting/rippled-alerts.yaml
apiVersion: 1
groups:
- name: xrpld-tracing-alerts
folder: xrpld
interval: 1m
rules:
- uid: consensus-slow
title: Consensus Round Slow
condition: A
data:
- refId: A
datasourceUid: tempo
model:
queryType: traceql
query: '{resource.service.name="xrpld" && name="consensus.round"} | avg(duration) > 5s'
# Note: Verify TraceQL aggregate queries are supported by your
# Tempo version. Aggregate alerting (e.g., avg(duration)) requires
# Tempo 2.3+ with TraceQL metrics enabled.
for: 5m
annotations:
summary: Consensus rounds taking >5 seconds
description: "Consensus duration: {{ $value }}ms"
labels:
severity: warning
- uid: rpc-error-spike
title: RPC Error Rate Spike
condition: B
data:
- refId: B
datasourceUid: tempo
model:
queryType: traceql
query: '{resource.service.name="xrpld" && name=~"rpc.command.*" && status.code=error} | rate() > 0.05'
# Note: Verify TraceQL aggregate queries are supported by your
# Tempo version. Aggregate alerting (e.g., rate()) requires
# Tempo 2.3+ with TraceQL metrics enabled.
for: 2m
annotations:
summary: RPC error rate >5%
labels:
severity: critical
- uid: tx-throughput-drop
title: Transaction Throughput Drop
condition: C
data:
- refId: C
datasourceUid: tempo
model:
queryType: traceql
query: '{resource.service.name="xrpld" && name="tx.receive"} | rate() < 10'
for: 10m
annotations:
summary: Transaction throughput below threshold
labels:
severity: warning
```
---
## 7.7 PerfLog and Insight Correlation
> **OTLP** = OpenTelemetry Protocol
How to correlate OpenTelemetry traces with existing xrpld observability.
### 7.7.1 Correlation Architecture
```mermaid
flowchart TB
subgraph xrpld["xrpld Node"]
otel[OpenTelemetry<br/>Spans]
perflog[PerfLog<br/>JSON Logs]
insight[Beast Insight<br/>StatsD Metrics]
end
subgraph collectors["Data Collection"]
otelc[OTel Collector]
promtail[Promtail/Fluentd]
statsd[StatsD Exporter]
end
subgraph storage["Storage"]
tempo[(Tempo)]
loki[(Loki)]
prom[(Prometheus)]
end
subgraph grafana["Grafana"]
traces[Trace View]
logs[Log View]
metrics[Metrics View]
corr[Correlation<br/>Panel]
end
otel -->|OTLP| otelc --> tempo
perflog -->|JSON| promtail --> loki
insight -->|StatsD| statsd --> prom
tempo --> traces
loki --> logs
prom --> metrics
traces --> corr
logs --> corr
metrics --> corr
style xrpld fill:#0d47a1,stroke:#082f6a,color:#fff
style collectors fill:#bf360c,stroke:#8c2809,color:#fff
style storage fill:#1b5e20,stroke:#0d3d14,color:#fff
style grafana fill:#4a148c,stroke:#2e0d57,color:#fff
style otel fill:#0d47a1,stroke:#082f6a,color:#fff
style perflog fill:#0d47a1,stroke:#082f6a,color:#fff
style insight fill:#0d47a1,stroke:#082f6a,color:#fff
style otelc fill:#bf360c,stroke:#8c2809,color:#fff
style promtail fill:#bf360c,stroke:#8c2809,color:#fff
style statsd fill:#bf360c,stroke:#8c2809,color:#fff
style tempo fill:#1b5e20,stroke:#0d3d14,color:#fff
style loki fill:#1b5e20,stroke:#0d3d14,color:#fff
style prom fill:#1b5e20,stroke:#0d3d14,color:#fff
style traces fill:#4a148c,stroke:#2e0d57,color:#fff
style logs fill:#4a148c,stroke:#2e0d57,color:#fff
style metrics fill:#4a148c,stroke:#2e0d57,color:#fff
style corr fill:#4a148c,stroke:#2e0d57,color:#fff
```
**Reading the diagram:**
- **xrpld Node (three sources)**: A single node emits three independent data streams -- OpenTelemetry spans, PerfLog JSON logs, and Beast Insight StatsD metrics.
- **Data Collection layer**: Each stream has its own collector -- OTel Collector for spans, Promtail/Fluentd for logs, and a StatsD exporter for metrics. They operate independently.
- **Storage layer (Tempo, Loki, Prometheus)**: Each data type lands in a purpose-built store optimized for its query patterns (trace search, log grep, metric aggregation).
- **Grafana Correlation Panel**: The key integration point -- Grafana queries all three stores and links them via shared fields (`trace_id`, `xrpl.tx.hash`, `ledger_seq`), enabling a single-pane debugging experience.
### 7.7.2 Correlation Fields
| Source | Field | Link To | Purpose |
| ----------- | --------------------------- | ------------- | -------------------------- |
| **Trace** | `trace_id` | Logs | Find log entries for trace |
| **Trace** | `xrpl.tx.hash` | Logs, Metrics | Find TX-related data |
| **Trace** | `xrpl.consensus.ledger.seq` | Logs | Find ledger-related logs |
| **PerfLog** | `trace_id` (new) | Traces | Jump to trace from log |
| **PerfLog** | `ledger_seq` | Traces | Find consensus trace |
| **Insight** | `exemplar.trace_id` | Traces | Jump from metric spike |
### 7.7.3 Example: Debugging a Slow Transaction
**Step 1: Find the trace**
```
# In Grafana Explore with Tempo
{resource.service.name="xrpld" && span.xrpl.tx.hash="ABC123..."}
```
**Step 2: Get the trace_id from the trace view**
```
Trace ID: 4bf92f3577b34da6a3ce929d0e0e4736
```
**Step 3: Find related PerfLog entries**
```
# In Grafana Explore with Loki
{job="xrpld"} |= "4bf92f3577b34da6a3ce929d0e0e4736"
```
**Step 4: Check Insight metrics for the time window**
```
# In Grafana with Prometheus
rate(xrpld_tx_applied_total[1m])
@ timestamp_from_trace
```
### 7.7.4 Unified Dashboard Example
```json
{
"title": "xrpld Unified Observability",
"uid": "xrpld-unified",
"panels": [
{
"title": "Transaction Latency (Traces)",
"type": "timeseries",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"xrpld\" && name=\"tx.receive\"} | histogram_over_time(duration)"
}
],
"gridPos": { "h": 6, "w": 8, "x": 0, "y": 0 }
},
{
"title": "Transaction Rate (Metrics)",
"type": "timeseries",
"datasource": "Prometheus",
"targets": [
{
"expr": "rate(xrpld_tx_received_total[5m])",
"legendFormat": "{{ instance }}"
}
],
"fieldConfig": {
"defaults": {
"links": [
{
"title": "View traces",
"url": "/explore?left={\"datasource\":\"Tempo\",\"query\":\"{resource.service.name=\\\"xrpld\\\" && name=\\\"tx.receive\\\"}\"}"
}
]
}
},
"gridPos": { "h": 6, "w": 8, "x": 8, "y": 0 }
},
{
"title": "Recent Logs",
"type": "logs",
"datasource": "Loki",
"targets": [
{
"expr": "{job=\"xrpld\"} | json"
}
],
"gridPos": { "h": 6, "w": 8, "x": 16, "y": 0 }
},
{
"title": "Trace Search",
"type": "table",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"xrpld\"}"
}
],
"fieldConfig": {
"overrides": [
{
"matcher": { "id": "byName", "options": "traceID" },
"properties": [
{
"id": "links",
"value": [
{
"title": "View trace",
"url": "/explore?left={\"datasource\":\"Tempo\",\"query\":\"${__value.raw}\"}"
},
{
"title": "View logs",
"url": "/explore?left={\"datasource\":\"Loki\",\"query\":\"{job=\\\"xrpld\\\"} |= \\\"${__value.raw}\\\"\"}"
}
]
}
]
}
]
},
"gridPos": { "h": 12, "w": 24, "x": 0, "y": 6 }
}
]
}
```
---
_Previous: [Implementation Phases](./06-implementation-phases.md)_ | _Next: [Appendix](./08-appendix.md)_ | _Back to: [Overview](./OpenTelemetryPlan.md)_


@@ -1,195 +0,0 @@
# Appendix
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Related**: [Observability Backends](./07-observability-backends.md)
---
## 8.1 Glossary
> **OTLP** = OpenTelemetry Protocol | **TxQ** = Transaction Queue
| Term | Definition |
| --------------------- | ---------------------------------------------------------- |
| **Span** | A unit of work with start/end time, name, and attributes |
| **Trace** | A collection of spans representing a complete request flow |
| **Trace ID** | 128-bit unique identifier for a trace |
| **Span ID** | 64-bit unique identifier for a span within a trace |
| **Context** | Carrier for trace/span IDs across boundaries |
| **Propagator** | Component that injects/extracts context |
| **Sampler** | Decides which traces to record |
| **Exporter** | Sends spans to backend |
| **Collector** | Receives, processes, and forwards telemetry |
| **OTLP** | OpenTelemetry Protocol (wire format) |
| **W3C Trace Context** | Standard HTTP headers for trace propagation |
| **Baggage** | Key-value pairs propagated across service boundaries |
| **Resource** | Entity producing telemetry (service, host, etc.) |
| **Instrumentation** | Code that creates telemetry data |
### xrpld-Specific Terms
| Term | Definition |
| ----------------- | ------------------------------------------------------------- |
| **Overlay** | P2P network layer managing peer connections |
| **Consensus** | XRP Ledger consensus algorithm (RCL) |
| **Proposal** | Validator's suggested transaction set for a ledger |
| **Validation** | Validator's signature on a closed ledger |
| **HashRouter** | Component for transaction deduplication |
| **JobQueue** | Thread pool for asynchronous task execution |
| **PerfLog** | Existing performance logging system in xrpld |
| **Beast Insight** | Existing metrics framework in xrpld |
| **PathFinding** | Payment path computation engine for cross-currency payments |
| **TxQ** | Transaction queue managing fee-based prioritization |
| **LoadManager** | Dynamic fee escalation based on network load |
| **SHAMap** | SHA-256 hash-based map (Merkle trie variant) for ledger state |
---
## 8.2 Span Hierarchy Visualization
> **TxQ** = Transaction Queue
```mermaid
flowchart TB
subgraph trace["Trace: Transaction Lifecycle"]
rpc["rpc.request<br/>(entry point)"]
validate["tx.validate"]
relay["tx.relay<br/>(parent span)"]
subgraph peers["Peer Spans"]
p1["peer.send<br/>Peer A"]
p2["peer.send<br/>Peer B"]
p3["peer.send<br/>Peer C"]
end
subgraph pathfinding["PathFinding Spans"]
pathfind["pathfind.request"]
pathcomp["pathfind.compute"]
end
consensus["consensus.round"]
apply["tx.apply"]
subgraph txqueue["TxQ Spans"]
txq["txq.enqueue"]
txqApply["txq.apply"]
end
feeCalc["fee.escalate"]
end
subgraph validators["Validator Spans"]
valFetch["validator.list.fetch"]
valManifest["validator.manifest"]
end
rpc --> validate
rpc --> pathfind
pathfind --> pathcomp
validate --> relay
relay --> p1
relay --> p2
relay --> p3
p1 -.->|"context propagation"| consensus
consensus --> apply
apply --> txq
txq --> txqApply
txq --> feeCalc
style trace fill:#0f172a,stroke:#020617,color:#fff
style peers fill:#1e3a8a,stroke:#172554,color:#fff
style pathfinding fill:#134e4a,stroke:#0f766e,color:#fff
style txqueue fill:#064e3b,stroke:#047857,color:#fff
style validators fill:#4c1d95,stroke:#6d28d9,color:#fff
style rpc fill:#1d4ed8,stroke:#1e40af,color:#fff
style validate fill:#047857,stroke:#064e3b,color:#fff
style relay fill:#047857,stroke:#064e3b,color:#fff
style p1 fill:#0e7490,stroke:#155e75,color:#fff
style p2 fill:#0e7490,stroke:#155e75,color:#fff
style p3 fill:#0e7490,stroke:#155e75,color:#fff
style consensus fill:#fef3c7,stroke:#fde68a,color:#1e293b
style apply fill:#047857,stroke:#064e3b,color:#fff
style pathfind fill:#0e7490,stroke:#155e75,color:#fff
style pathcomp fill:#0e7490,stroke:#155e75,color:#fff
style txq fill:#047857,stroke:#064e3b,color:#fff
style txqApply fill:#047857,stroke:#064e3b,color:#fff
style feeCalc fill:#047857,stroke:#064e3b,color:#fff
style valFetch fill:#6d28d9,stroke:#4c1d95,color:#fff
style valManifest fill:#6d28d9,stroke:#4c1d95,color:#fff
```
**Reading the diagram:**
- **rpc.request (blue, top)**: The entry point — every traced transaction starts as an RPC call; this root span is the parent of all downstream work.
- **tx.validate and pathfind.request (green/teal, first fork)**: The RPC request fans out into transaction validation and, for cross-currency payments, a PathFinding branch (`pathfind.request` -> `pathfind.compute`).
- **tx.relay -> Peer Spans (teal, middle)**: After validation, the transaction is relayed to peers A, B, and C in parallel; each `peer.send` is a sibling child span showing fan-out across the network.
- **context propagation (dashed arrow)**: The dotted line from `peer.send Peer A` to `consensus.round` represents the trace context crossing a node boundary — the receiving validator picks up the same `trace_id` and continues the trace.
- **consensus.round -> tx.apply -> TxQ Spans (green, lower)**: Once consensus accepts the transaction, it is applied to the ledger; the TxQ spans (`txq.enqueue`, `txq.apply`, `fee.escalate`) capture queue depth and fee escalation behavior.
- **Validator Spans (purple, detached)**: `validator.list.fetch` and `validator.manifest` are independent workflows for UNL management — they run on their own traces and are linked to consensus via Span Links, not parent-child relationships.
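Where a parent-child edge would be wrong (the validator workflows neither own nor are owned by consensus work), a Span Link attaches the other trace's `SpanContext` at span creation. Below is a minimal sketch using the OpenTelemetry C++ API's initializer-list overload of `StartSpan`; the function, variable, and attribute names are illustrative, not part of the plan:
```cpp
#include <opentelemetry/trace/tracer.h>

// `tracer` is any tracer; `validatorListCtx` is the SpanContext captured
// from a validator.list.fetch span that belongs to a different trace.
void
linkValidatorWork(
    opentelemetry::nostd::shared_ptr<opentelemetry::trace::Tracer> tracer,
    opentelemetry::trace::SpanContext validatorListCtx)
{
    // This overload supplies attributes and links at span creation time:
    // the second argument is the attribute list, the third the link list.
    auto span = tracer->StartSpan(
        "consensus.round",
        {{"xrpl.consensus.proposers", static_cast<int64_t>(34)}},
        {{validatorListCtx, {{"xrpl.link.kind", "follows_from"}}}});
    span->End();
}
```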
---
## 8.3 References
> **OTLP** = OpenTelemetry Protocol
### OpenTelemetry Resources
1. [OpenTelemetry C++ SDK](https://github.com/open-telemetry/opentelemetry-cpp)
2. [OpenTelemetry Specification](https://opentelemetry.io/docs/specs/otel/)
3. [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/)
4. [OTLP Protocol Specification](https://opentelemetry.io/docs/specs/otlp/)
### Standards
5. [W3C Trace Context](https://www.w3.org/TR/trace-context/)
6. [W3C Baggage](https://www.w3.org/TR/baggage/)
7. [Protocol Buffers](https://protobuf.dev/)
### xrpld Resources
8. [xrpld Source Code](https://github.com/XRPLF/rippled)
9. [XRP Ledger Documentation](https://xrpl.org/docs/)
10. [xrpld Overlay README](https://github.com/XRPLF/rippled/blob/develop/src/xrpld/overlay/README.md)
11. [xrpld RPC README](https://github.com/XRPLF/rippled/blob/develop/src/xrpld/rpc/README.md)
12. [xrpld Consensus README](https://github.com/XRPLF/rippled/blob/develop/src/xrpld/app/consensus/README.md)
---
## 8.4 Version History
| Version | Date | Author | Changes |
| ------- | ---------- | ------ | -------------------------------------------------------------- |
| 1.0 | 2026-02-12 | - | Initial implementation plan |
| 1.1 | 2026-02-13 | - | Refactored into modular documents |
| 1.2 | 2026-03-24 | - | Review fixes: accuracy corrections, cross-document consistency |
---
## 8.5 Document Index
### Plan Documents
| Document | Description |
| ---------------------------------------------------------------- | -------------------------------------------- |
| [OpenTelemetryPlan.md](./OpenTelemetryPlan.md) | Master overview and executive summary |
| [00-tracing-fundamentals.md](./00-tracing-fundamentals.md) | Distributed tracing concepts and OTel primer |
| [01-architecture-analysis.md](./01-architecture-analysis.md) | xrpld architecture and trace points |
| [02-design-decisions.md](./02-design-decisions.md) | SDK selection, exporters, span conventions |
| [03-implementation-strategy.md](./03-implementation-strategy.md) | Directory structure, performance analysis |
| [04-code-samples.md](./04-code-samples.md) | C++ code examples for all components |
| [05-configuration-reference.md](./05-configuration-reference.md) | xrpld config, CMake, Collector configs |
| [06-implementation-phases.md](./06-implementation-phases.md) | Timeline, tasks, risks, success metrics |
| [07-observability-backends.md](./07-observability-backends.md) | Backend selection and architecture |
| [08-appendix.md](./08-appendix.md) | Glossary, references, version history |
### Task Lists
| Document | Description |
| ------------------------------------ | --------------------------------------------------- |
| [POC_taskList.md](./POC_taskList.md) | Proof-of-concept telemetry integration |
| [presentation.md](./presentation.md) | Presentation slides for OpenTelemetry plan overview |
---
_Previous: [Observability Backends](./07-observability-backends.md)_ | _Back to: [Overview](./OpenTelemetryPlan.md)_

View File

@@ -1,230 +0,0 @@
# [OpenTelemetry](00-tracing-fundamentals.md) Distributed Tracing Implementation Plan for xrpld
## Executive Summary
> **OTLP** = OpenTelemetry Protocol
This document provides a comprehensive implementation plan for integrating OpenTelemetry distributed tracing into the xrpld XRP Ledger node software. The plan addresses the unique challenges of a decentralized peer-to-peer system where trace context must propagate across network boundaries between independent nodes.
### Key Benefits
- **End-to-end transaction visibility**: Track transactions from submission through consensus to ledger inclusion
- **Consensus round analysis**: Understand timing and behavior of consensus phases across validators
- **RPC performance insights**: Identify slow handlers and optimize response times
- **Network topology understanding**: Visualize message propagation patterns between peers
- **Incident debugging**: Correlate events across distributed nodes during issues
### Estimated Performance Overhead
| Metric | Overhead | Notes |
| ------------- | ---------- | ----------------------------------- |
| CPU | 1-3% | Span creation and attribute setting |
| Memory | 2-5 MB | Batch buffer for pending spans |
| Network | 10-50 KB/s | Compressed OTLP export to collector |
| Latency (p99) | <2% | With proper sampling configuration |
---
## Document Structure
This implementation plan is organized into modular documents for easier navigation:
<div align="center">
```mermaid
flowchart TB
overview["📋 OpenTelemetryPlan.md<br/>(This Document)"]
subgraph fundamentals["Fundamentals"]
fund["00-tracing-fundamentals.md"]
end
subgraph analysis["Analysis & Design"]
arch["01-architecture-analysis.md"]
design["02-design-decisions.md"]
end
subgraph impl["Implementation"]
strategy["03-implementation-strategy.md"]
code["04-code-samples.md"]
config["05-configuration-reference.md"]
end
subgraph deploy["Deployment & Planning"]
phases["06-implementation-phases.md"]
backends["07-observability-backends.md"]
appendix["08-appendix.md"]
poc["POC_taskList.md"]
end
overview --> fundamentals
overview --> analysis
overview --> impl
overview --> deploy
fund --> arch
arch --> design
design --> strategy
strategy --> code
code --> config
config --> phases
phases --> backends
backends --> appendix
phases --> poc
style overview fill:#1b5e20,stroke:#0d3d14,color:#fff,stroke-width:2px
style fundamentals fill:#00695c,stroke:#004d40,color:#fff
style fund fill:#00695c,stroke:#004d40,color:#fff
style analysis fill:#0d47a1,stroke:#082f6a,color:#fff
style impl fill:#bf360c,stroke:#8c2809,color:#fff
style deploy fill:#4a148c,stroke:#2e0d57,color:#fff
style arch fill:#0d47a1,stroke:#082f6a,color:#fff
style design fill:#0d47a1,stroke:#082f6a,color:#fff
style strategy fill:#bf360c,stroke:#8c2809,color:#fff
style code fill:#bf360c,stroke:#8c2809,color:#fff
style config fill:#bf360c,stroke:#8c2809,color:#fff
style phases fill:#4a148c,stroke:#2e0d57,color:#fff
style backends fill:#4a148c,stroke:#2e0d57,color:#fff
style appendix fill:#4a148c,stroke:#2e0d57,color:#fff
style poc fill:#4a148c,stroke:#2e0d57,color:#fff
```
</div>
---
## Table of Contents
| Section | Document | Description |
| ------- | ---------------------------------------------------------- | ---------------------------------------------------------------------- |
| **0** | [Tracing Fundamentals](./00-tracing-fundamentals.md) | Distributed tracing concepts, span relationships, context propagation |
| **1** | [Architecture Analysis](./01-architecture-analysis.md) | xrpld component analysis, trace points, instrumentation priorities |
| **2** | [Design Decisions](./02-design-decisions.md) | SDK selection, exporters, span naming, attributes, context propagation |
| **3** | [Implementation Strategy](./03-implementation-strategy.md) | Directory structure, key principles, performance optimization |
| **4** | [Code Samples](./04-code-samples.md) | C++ implementation examples for core infrastructure and key modules |
| **5** | [Configuration Reference](./05-configuration-reference.md) | xrpld config, CMake integration, Collector configurations |
| **6** | [Implementation Phases](./06-implementation-phases.md) | 5-phase timeline, tasks, risks, success metrics |
| **7** | [Observability Backends](./07-observability-backends.md) | Backend selection guide and production architecture |
| **8** | [Appendix](./08-appendix.md) | Glossary, references, version history |
| **POC** | [POC Task List](./POC_taskList.md) | Proof of concept tasks for RPC tracing end-to-end demo |
---
## 0. Tracing Fundamentals
This document introduces distributed tracing concepts for readers unfamiliar with the domain. It covers what traces and spans are, how parent-child and follows-from relationships model causality, how context propagates across service boundaries, and how sampling controls data volume. It also maps these concepts to xrpld-specific scenarios like transaction relay and consensus.
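As a taste of how these concepts look in code, here is a toy parent/child pair using the OpenTelemetry C++ API (a sketch only: the function name is hypothetical, and the tracer comes from the global provider, which is a no-op until a real one is installed):
```cpp
#include <opentelemetry/trace/provider.h>
#include <opentelemetry/trace/scope.h>

void
toyTrace()
{
    auto tracer = opentelemetry::trace::Provider::GetTracerProvider()
                      ->GetTracer("xrpld");

    auto parent = tracer->StartSpan("rpc.request");
    {
        // Scope makes `parent` the current span on this thread, so the
        // next StartSpan adopts it as parent automatically.
        opentelemetry::trace::Scope scope{parent};
        auto child = tracer->StartSpan("tx.validate");
        child->End();
    }
    parent->End();  // in a well-formed trace the parent outlives its children
}
```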
➡️ **[Read Tracing Fundamentals](./00-tracing-fundamentals.md)**
---
## 1. Architecture Analysis
> **WS** = WebSocket | **TxQ** = Transaction Queue
The xrpld node consists of several key components that require instrumentation for comprehensive distributed tracing. The main areas include the RPC server (HTTP/WebSocket), Overlay P2P network, Consensus mechanism (RCLConsensus), JobQueue for async task execution, PathFinding, Transaction Queue (TxQ), fee escalation (LoadManager), ledger acquisition, validator management, and existing observability infrastructure (PerfLog, Insight/StatsD, Journal logging).
Key trace points span across transaction submission via RPC, peer-to-peer message propagation, consensus round execution, ledger building, path computation, transaction queue behavior, fee escalation, and validator health. The implementation prioritizes high-value, low-risk components first: RPC handlers provide immediate value with minimal risk, while consensus tracing requires careful implementation to avoid timing impacts.
➡️ **[Read full Architecture Analysis](./01-architecture-analysis.md)**
---
## 2. Design Decisions
> **OTLP** = OpenTelemetry Protocol | **CNCF** = Cloud Native Computing Foundation
The OpenTelemetry C++ SDK is selected for its CNCF backing, active development, and native performance characteristics. Traces are exported via OTLP/gRPC (primary) or OTLP/HTTP (fallback) to an OpenTelemetry Collector, which provides flexible routing and sampling.
Span naming follows a hierarchical `<component>.<operation>` convention (e.g., `rpc.submit`, `tx.relay`, `consensus.round`). Context propagation uses W3C Trace Context headers for HTTP and embedded Protocol Buffer fields for P2P messages. The implementation coexists with existing PerfLog and Insight observability systems through correlation IDs.
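For reference, a W3C `traceparent` header is a single dash-separated string: version, 16-byte trace ID, 8-byte span ID, and flags, all lowercase hex. A minimal sketch of assembling one (`makeTraceparent` is a hypothetical helper, not part of the plan's interface):
```cpp
#include <string>

// W3C Trace Context, version 00:
//   traceparent: 00-<32 hex trace-id>-<16 hex span-id>-<2 hex flags>
// e.g. "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01"
std::string
makeTraceparent(
    std::string const& traceIdHex,  // 16 bytes as 32 lowercase hex chars
    std::string const& spanIdHex,   // 8 bytes as 16 lowercase hex chars
    bool sampled)
{
    return "00-" + traceIdHex + "-" + spanIdHex + (sampled ? "-01" : "-00");
}
```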
**Data Collection & Privacy**: Telemetry collects only operational metadata (timing, counts, hashes) — never sensitive content (private keys, balances, amounts, raw payloads). Privacy protection includes account hashing, configurable redaction, sampling, and collector-level filtering. Node operators retain full control over telemetry configuration.
➡️ **[Read full Design Decisions](./02-design-decisions.md)**
---
## 3. Implementation Strategy
The telemetry code is organized under `include/xrpl/telemetry/` for headers and `src/libxrpl/telemetry/` for implementation. Key principles include RAII-based span management via `SpanGuard`, conditional compilation with `XRPL_ENABLE_TELEMETRY`, and minimal runtime overhead through batch processing and efficient sampling.
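The `SpanGuard` idea fits in one screenful; a simplified sketch follows (the full version, with attribute and status helpers, is in [04-code-samples.md](./04-code-samples.md) §4.2):
```cpp
#include <opentelemetry/trace/scope.h>
#include <opentelemetry/trace/span.h>

#include <utility>

namespace xrpl::telemetry {

class SpanGuard
{
    opentelemetry::nostd::shared_ptr<opentelemetry::trace::Span> span_;
    opentelemetry::trace::Scope scope_;  // marks the span current on this thread

public:
    explicit SpanGuard(
        opentelemetry::nostd::shared_ptr<opentelemetry::trace::Span> span)
        : span_(std::move(span)), scope_(span_)
    {
    }

    // Non-copyable: exactly one guard ends the span.
    SpanGuard(SpanGuard const&) = delete;
    SpanGuard&
    operator=(SpanGuard const&) = delete;

    ~SpanGuard()
    {
        span_->End();  // runs on every exit path, including exceptions
    }
};

}  // namespace xrpl::telemetry
```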
Performance optimization strategies include probabilistic head sampling (10% default), tail-based sampling at the collector for errors and slow traces, batch export to reduce network overhead, and conditional instrumentation that compiles to no-ops when disabled.
➡️ **[Read full Implementation Strategy](./03-implementation-strategy.md)**
---
## 4. Code Samples
C++ implementation examples are provided for the core telemetry infrastructure and key modules:
- `Telemetry.h` - Core interface for tracer access and span creation
- `SpanGuard.h` - RAII wrapper for automatic span lifecycle management
- `TracingInstrumentation.h` - Macros for conditional instrumentation
- Protocol Buffer extensions for trace context propagation
- Module-specific instrumentation (RPC, Consensus, P2P, JobQueue)
- Remaining modules (PathFinding, TxQ, Validator, etc.) follow the same patterns
➡️ **[View all Code Samples](./04-code-samples.md)**
---
## 5. Configuration Reference
> **OTLP** = OpenTelemetry Protocol | **APM** = Application Performance Monitoring
Configuration is handled through the `[telemetry]` section in `xrpld.cfg` with options for enabling/disabling, exporter selection, endpoint configuration, sampling ratios, and component-level filtering. CMake integration includes a `XRPL_ENABLE_TELEMETRY` option for compile-time control.
OpenTelemetry Collector configurations are provided for development and production (with tail-based sampling, Tempo, and Elastic APM). Docker Compose examples enable quick local development environment setup.
➡️ **[View full Configuration Reference](./05-configuration-reference.md)**
---
## 6. Implementation Phases
The implementation spans 9 weeks across 5 phases:
| Phase | Duration | Focus | Key Deliverables |
| ----- | --------- | ------------------- | --------------------------------------------------- |
| 1 | Weeks 1-2 | Core Infrastructure | SDK integration, Telemetry interface, Configuration |
| 2 | Weeks 3-4 | RPC Tracing | HTTP context extraction, Handler instrumentation |
| 3 | Weeks 5-6 | Transaction Tracing | Protocol Buffer context, Relay propagation |
| 4 | Weeks 7-8 | Consensus Tracing | Round spans, Proposal/validation tracing |
| 5 | Week 9 | Documentation | Runbook, Dashboards, Training |
**Total Effort**: 47 person-days (2 developers working in parallel)
➡️ **[View full Implementation Phases](./06-implementation-phases.md)**
---
## 7. Observability Backends
> **APM** = Application Performance Monitoring | **GCS** = Google Cloud Storage
Grafana Tempo is recommended for all environments due to its cost-effectiveness and Grafana integration, while Elastic APM is ideal for organizations with existing Elastic infrastructure.
The recommended production architecture uses a gateway collector pattern with regional collectors performing tail-based sampling, routing traces to multiple backends (Tempo for primary storage, Elastic for log correlation, S3/GCS for long-term archive).
➡️ **[View Observability Backend Recommendations](./07-observability-backends.md)**
---
## 8. Appendix
The appendix contains a glossary of OpenTelemetry and xrpld-specific terms, references to external documentation and specifications, version history for this implementation plan, and a complete document index.
➡️ **[View Appendix](./08-appendix.md)**
---
## POC Task List
A step-by-step task list for building a minimal end-to-end proof of concept that demonstrates distributed tracing in xrpld. The POC scope is limited to RPC tracing — showing request traces flowing from xrpld through an OpenTelemetry Collector into Tempo, viewable in Grafana.
➡️ **[View POC Task List](./POC_taskList.md)**
---
_This document provides a comprehensive implementation plan for integrating OpenTelemetry distributed tracing into the xrpld XRP Ledger node software. For detailed information on any section, follow the links to the corresponding sub-documents._

View File

@@ -1,636 +0,0 @@
# OpenTelemetry POC Task List
> **Goal**: Build a minimal end-to-end proof of concept that demonstrates distributed tracing in xrpld. A successful POC will show RPC request traces flowing from xrpld through an OTel Collector into Tempo, viewable in Grafana.
>
> **Scope**: RPC tracing only (highest value, lowest risk per the [CRAWL phase](./06-implementation-phases.md#6102-quick-wins-immediate-value) in the implementation phases). No cross-node P2P context propagation or consensus tracing in the POC.
### Related Plan Documents
| Document | Relevance to POC |
| ---------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [00-tracing-fundamentals.md](./00-tracing-fundamentals.md) | Core concepts: traces, spans, context propagation, sampling |
| [01-architecture-analysis.md](./01-architecture-analysis.md) | RPC request flow (§1.5), key trace points (§1.6), instrumentation priority (§1.7) |
| [02-design-decisions.md](./02-design-decisions.md) | SDK selection (§2.1), exporter config (§2.2), span naming (§2.3), attribute schema (§2.4), coexistence with PerfLog/Insight (§2.6) |
| [03-implementation-strategy.md](./03-implementation-strategy.md) | Directory structure (§3.1), key principles (§3.2), performance overhead (§3.3-3.6), conditional compilation (§3.7.3), code intrusiveness (§3.9) |
| [04-code-samples.md](./04-code-samples.md) | Telemetry interface (§4.1), SpanGuard (§4.2), macros (§4.3), RPC instrumentation (§4.5.3) |
| [05-configuration-reference.md](./05-configuration-reference.md) | xrpld config (§5.1), config parser (§5.2), Application integration (§5.3), CMake (§5.4), Collector config (§5.5), Docker Compose (§5.6), Grafana (§5.8) |
| [06-implementation-phases.md](./06-implementation-phases.md) | Phase 1 core tasks (§6.2), Phase 2 RPC tasks (§6.3), quick wins (§6.10), definition of done (§6.11) |
| [07-observability-backends.md](./07-observability-backends.md) | Tempo dev setup (§7.1), Grafana dashboards (§7.6), alert rules (§7.6.3) |
---
## Task 0: Docker Observability Stack Setup
> **OTLP** = OpenTelemetry Protocol
**Objective**: Stand up the backend infrastructure to receive, store, and display traces.
**What to do**:
- Create `docker/telemetry/docker-compose.yml` in the repo with three services:
1. **OpenTelemetry Collector** (`otel/opentelemetry-collector-contrib:0.92.0`)
- Expose ports `4317` (OTLP gRPC) and `4318` (OTLP HTTP)
- Expose port `13133` (health check)
- Mount a config file `docker/telemetry/otel-collector-config.yaml`
2. **Tempo** (`grafana/tempo:2.6.1`)
- Expose port `3200` (HTTP API) and `4317` (OTLP gRPC, internal)
3. **Grafana** (`grafana/grafana:latest`) — optional but useful
- Expose port `3000`
- Enable anonymous admin access for local dev (`GF_AUTH_ANONYMOUS_ENABLED=true`, `GF_AUTH_ANONYMOUS_ORG_ROLE=Admin`)
- Provision Tempo as a data source via `docker/telemetry/grafana/provisioning/datasources/tempo.yaml`
- Create `docker/telemetry/otel-collector-config.yaml`:
```yaml
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
processors:
batch:
timeout: 1s
send_batch_size: 100
exporters:
  debug:
verbosity: detailed
otlp/tempo:
endpoint: tempo:4317
tls:
insecure: true
service:
pipelines:
traces:
receivers: [otlp]
processors: [batch]
      exporters: [debug, otlp/tempo]
```
- Create Grafana Tempo datasource provisioning file at `docker/telemetry/grafana/provisioning/datasources/tempo.yaml`:
```yaml
apiVersion: 1
datasources:
- name: Tempo
type: tempo
access: proxy
url: http://tempo:3200
```
**Verification**: Run `docker compose -f docker/telemetry/docker-compose.yml up -d`, then:
- `curl http://localhost:13133` returns healthy (Collector)
- `http://localhost:3000` opens Grafana (Tempo datasource available, no traces yet)
**Reference**:
- [05-configuration-reference.md §5.5](./05-configuration-reference.md) — Collector config (dev YAML with Tempo exporter)
- [05-configuration-reference.md §5.6](./05-configuration-reference.md) — Docker Compose development environment
- [07-observability-backends.md §7.1](./07-observability-backends.md) — Tempo quick start and backend selection
- [05-configuration-reference.md §5.8](./05-configuration-reference.md) — Grafana datasource provisioning and dashboards
---
## Task 1: Add OpenTelemetry C++ SDK Dependency
**Objective**: Make `opentelemetry-cpp` available to the build system.
**What to do**:
- Edit `conanfile.py` to add `opentelemetry-cpp` as an **optional** dependency. The gRPC otel plugin flag (`"grpc/*:otel_plugin": False`) in the existing conanfile may need to remain false — we pull the OTel SDK separately.
- Add a Conan option: `with_telemetry = [True, False]` defaulting to `False`
- When `with_telemetry` is `True`, add `opentelemetry-cpp` to `self.requires()`
- Required OTel Conan components: `opentelemetry-cpp` (which bundles api, sdk, and exporters). If the package isn't in Conan Center, consider using `FetchContent` in CMake or building from source as a fallback.
- Edit `CMakeLists.txt`:
- Add option: `option(XRPL_ENABLE_TELEMETRY "Enable OpenTelemetry tracing" OFF)`
- When ON, `find_package(opentelemetry-cpp CONFIG REQUIRED)` and add compile definition `XRPL_ENABLE_TELEMETRY`
- When OFF, do nothing (zero build impact)
- Verify the build succeeds with `-DXRPL_ENABLE_TELEMETRY=OFF` (no regressions) and with `-DXRPL_ENABLE_TELEMETRY=ON` (SDK links successfully).
**Key files**:
- `conanfile.py`
- `CMakeLists.txt`
**Reference**:
- [05-configuration-reference.md §5.4](./05-configuration-reference.md) — CMake integration, `FindOpenTelemetry.cmake`, `XRPL_ENABLE_TELEMETRY` option
- [03-implementation-strategy.md §3.2](./03-implementation-strategy.md) — Key principle: zero-cost when disabled via compile-time flags
- [02-design-decisions.md §2.1](./02-design-decisions.md) — SDK selection rationale and required OTel components
---
## Task 2: Create Core Telemetry Interface and NullTelemetry
**Objective**: Define the `Telemetry` abstract interface and a no-op implementation so the rest of the codebase can reference telemetry without hard-depending on the OTel SDK.
**What to do**:
- Create `include/xrpl/telemetry/Telemetry.h`:
- Define `namespace xrpl::telemetry`
- Define `struct Telemetry::Setup` holding: `enabled`, `exporterEndpoint`, `samplingRatio`, `serviceName`, `serviceVersion`, `serviceInstanceId`, `traceRpc`, `traceTransactions`, `traceConsensus`, `tracePeer`
- Define abstract `class Telemetry` with:
- `virtual void start() = 0;`
- `virtual void stop() = 0;`
- `virtual bool isEnabled() const = 0;`
- `virtual nostd::shared_ptr<Tracer> getTracer(string_view name = "xrpld") = 0;`
- `virtual nostd::shared_ptr<Span> startSpan(string_view name, SpanKind kind = kInternal) = 0;`
- `virtual nostd::shared_ptr<Span> startSpan(string_view name, Context const& parentContext, SpanKind kind = kInternal) = 0;`
- `virtual bool shouldTraceRpc() const = 0;`
- `virtual bool shouldTraceTransactions() const = 0;`
- `virtual bool shouldTraceConsensus() const = 0;`
- Factory: `std::unique_ptr<Telemetry> make_Telemetry(Setup const&, beast::Journal);`
- Config parser: `Telemetry::Setup setup_Telemetry(Section const&, std::string const& nodePublicKey, std::string const& version);`
- Create `include/xrpl/telemetry/SpanGuard.h`:
- RAII guard that takes a `nostd::shared_ptr<Span>`, creates a `Scope`, and calls `span->End()` in its destructor.
- Convenience: `setAttribute()`, `setOk()`, `setStatus()`, `addEvent()`, `recordException()`, `context()`
- See [04-code-samples.md](./04-code-samples.md) §4.2 for the full implementation.
- Create `src/libxrpl/telemetry/NullTelemetry.cpp`:
- Implements `Telemetry` with all no-ops.
- `isEnabled()` returns `false`, `startSpan()` returns a noop span.
- This is used when `XRPL_ENABLE_TELEMETRY` is OFF or `enabled=0` in config.
- Guard all OTel SDK headers behind `#ifdef XRPL_ENABLE_TELEMETRY`. The `NullTelemetry` implementation should compile without the OTel SDK present.
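A condensed sketch of what `NullTelemetry` can look like, assuming the header-only OTel API stays visible (the interface above already names `nostd` and `Tracer` types) while the SDK is linked only in telemetry builds; the no-op provider used below ships with the API itself:
```cpp
#include <opentelemetry/context/context.h>
#include <opentelemetry/trace/noop.h>

namespace xrpl::telemetry {

class NullTelemetry final : public Telemetry
{
    // The API ships a no-op provider whose tracers produce no-op spans.
    opentelemetry::nostd::shared_ptr<opentelemetry::trace::TracerProvider>
        provider_{new opentelemetry::trace::NoopTracerProvider};

public:
    void start() override {}
    void stop() override {}

    bool isEnabled() const override { return false; }
    bool shouldTraceRpc() const override { return false; }
    bool shouldTraceTransactions() const override { return false; }
    bool shouldTraceConsensus() const override { return false; }

    opentelemetry::nostd::shared_ptr<opentelemetry::trace::Tracer>
    getTracer(opentelemetry::nostd::string_view name = "xrpld") override
    {
        return provider_->GetTracer(name);
    }

    opentelemetry::nostd::shared_ptr<opentelemetry::trace::Span>
    startSpan(
        opentelemetry::nostd::string_view name,
        opentelemetry::trace::SpanKind =
            opentelemetry::trace::SpanKind::kInternal) override
    {
        return getTracer()->StartSpan(name);  // a no-op span
    }

    opentelemetry::nostd::shared_ptr<opentelemetry::trace::Span>
    startSpan(
        opentelemetry::nostd::string_view name,
        opentelemetry::context::Context const&,  // parent irrelevant for no-op
        opentelemetry::trace::SpanKind =
            opentelemetry::trace::SpanKind::kInternal) override
    {
        return getTracer()->StartSpan(name);
    }
};

}  // namespace xrpl::telemetry
```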
**Key new files**:
- `include/xrpl/telemetry/Telemetry.h`
- `include/xrpl/telemetry/SpanGuard.h`
- `src/libxrpl/telemetry/NullTelemetry.cpp`
**Reference**:
- [04-code-samples.md §4.1](./04-code-samples.md) — Full `Telemetry` interface with `Setup` struct, lifecycle, tracer access, span creation, and component filtering methods
- [04-code-samples.md §4.2](./04-code-samples.md) — Full `SpanGuard` RAII implementation and `NullSpanGuard` no-op class
- [03-implementation-strategy.md §3.1](./03-implementation-strategy.md) — Directory structure: `include/xrpl/telemetry/` for headers, `src/libxrpl/telemetry/` for implementation
- [03-implementation-strategy.md §3.7.3](./03-implementation-strategy.md) — Conditional instrumentation and zero-cost compile-time disabled pattern
---
## Task 3: Implement OTel-Backed Telemetry
> **OTLP** = OpenTelemetry Protocol
**Objective**: Implement the real `Telemetry` class that initializes the OTel SDK, configures the OTLP exporter and batch processor, and creates tracers/spans.
**What to do**:
- Create `src/libxrpl/telemetry/Telemetry.cpp` (compiled only when `XRPL_ENABLE_TELEMETRY=ON`):
- `class TelemetryImpl : public Telemetry` that:
- In `start()`: creates a `TracerProvider` with:
- Resource attributes: `service.name`, `service.version`, `service.instance.id`
- An `OtlpHttpExporter` pointed at `setup.exporterEndpoint` (default `localhost:4318`)
- A `BatchSpanProcessor` with configurable batch size and delay
- A `TraceIdRatioBasedSampler` using `setup.samplingRatio`
- Sets the global `TracerProvider`
- In `stop()`: calls `ForceFlush()` then shuts down the provider
- In `startSpan()`: delegates to `getTracer()->StartSpan(name, ...)`
- `shouldTraceRpc()` etc. read from `Setup` fields
- Create `src/libxrpl/telemetry/TelemetryConfig.cpp`:
- `setup_Telemetry()` parses the `[telemetry]` config section from `xrpld.cfg`
- Maps config keys: `enabled`, `exporter`, `endpoint`, `sampling_ratio`, `trace_rpc`, `trace_transactions`, `trace_consensus`, `trace_peer`
- Wire `make_Telemetry()` factory:
- If `setup.enabled` is true AND `XRPL_ENABLE_TELEMETRY` is defined: return `TelemetryImpl`
- Otherwise: return `NullTelemetry`
- Add telemetry source files to CMake. When `XRPL_ENABLE_TELEMETRY=ON`, compile `Telemetry.cpp` and `TelemetryConfig.cpp` and link against the Conan umbrella target `opentelemetry-cpp::opentelemetry-cpp` (per the POC lessons learned below, the Conan package does not expose per-component targets such as `opentelemetry-cpp::api` or `opentelemetry-cpp::otlp_grpc_exporter`). When OFF, compile only `NullTelemetry.cpp`.
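A condensed sketch of the `start()` wiring, assuming opentelemetry-cpp's factory APIs and the `Setup` fields named in Task 2 (`startPipeline` is a hypothetical free function; error handling and TLS are omitted):
```cpp
#include <opentelemetry/exporters/otlp/otlp_http_exporter_factory.h>
#include <opentelemetry/sdk/resource/resource.h>
#include <opentelemetry/sdk/trace/batch_span_processor_factory.h>
#include <opentelemetry/sdk/trace/samplers/trace_id_ratio_factory.h>
#include <opentelemetry/sdk/trace/tracer_provider_factory.h>
#include <opentelemetry/trace/provider.h>

namespace sdktrace = opentelemetry::sdk::trace;
namespace otlp = opentelemetry::exporter::otlp;

void
startPipeline(Telemetry::Setup const& setup)
{
    otlp::OtlpHttpExporterOptions opts;
    opts.url = setup.exporterEndpoint;  // e.g. http://localhost:4318/v1/traces
    auto exporter = otlp::OtlpHttpExporterFactory::Create(opts);

    sdktrace::BatchSpanProcessorOptions batch;
    batch.max_export_batch_size = 100;
    auto processor =
        sdktrace::BatchSpanProcessorFactory::Create(std::move(exporter), batch);

    auto resource = opentelemetry::sdk::resource::Resource::Create(
        {{"service.name", setup.serviceName},
         {"service.version", setup.serviceVersion},
         {"service.instance.id", setup.serviceInstanceId}});

    auto sampler =
        sdktrace::TraceIdRatioBasedSamplerFactory::Create(setup.samplingRatio);

    // Note: Create() returns a unique_ptr to the SDK type; hold it as a
    // shared_ptr to the API type before installing it globally.
    std::shared_ptr<opentelemetry::trace::TracerProvider> provider =
        sdktrace::TracerProviderFactory::Create(
            std::move(processor), resource, std::move(sampler));

    opentelemetry::trace::Provider::SetTracerProvider(provider);
}
```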
**Key new files**:
- `src/libxrpl/telemetry/Telemetry.cpp`
- `src/libxrpl/telemetry/TelemetryConfig.cpp`
**Key modified files**:
- `CMakeLists.txt` (add telemetry library target)
**Reference**:
- [04-code-samples.md §4.1](./04-code-samples.md) — `Telemetry` interface that `TelemetryImpl` must implement
- [05-configuration-reference.md §5.2](./05-configuration-reference.md) — `setup_Telemetry()` config parser implementation
- [02-design-decisions.md §2.2](./02-design-decisions.md) — OTLP/gRPC exporter config (endpoint, TLS options)
- [02-design-decisions.md §2.4.1](./02-design-decisions.md) — Resource attributes: `service.name`, `service.version`, `service.instance.id`, `xrpl.network.id`
- [03-implementation-strategy.md §3.4](./03-implementation-strategy.md) — Per-operation CPU costs and overhead budget for span creation
- [03-implementation-strategy.md §3.5](./03-implementation-strategy.md) — Memory overhead: static (~456 KB) and dynamic (~1.2 MB) budgets
---
## Task 4: Integrate Telemetry into Application Lifecycle
**Objective**: Wire the `Telemetry` object into the `ServiceRegistry` / `Application` so all components can access it.
**What to do**:
- Edit `include/xrpl/core/ServiceRegistry.h`:
- Forward-declare `namespace telemetry { class Telemetry; }` inside `namespace xrpl`
- Add pure virtual method: `virtual telemetry::Telemetry& getTelemetry() = 0;`
- (`Application` extends `ServiceRegistry`, so this is automatically available on `Application` too)
- Edit `src/xrpld/app/main/Application.cpp` (the `ApplicationImp` class):
- Add member: `std::unique_ptr<telemetry::Telemetry> telemetry_;`
- In the member initializer list, construct telemetry with an empty
`serviceInstanceId` (node identity is not yet known):
```cpp
, telemetry_(
telemetry::make_Telemetry(
telemetry::setup_Telemetry(
config_->section("telemetry"),
"", // Updated later via setServiceInstanceId()
BuildInfo::getVersionString()),
logs_->journal("Telemetry")))
```
- In `setup()`, after `nodeIdentity_` is resolved, inject the node
public key as the service instance ID:
```cpp
if (!config_->section("telemetry").exists("service_instance_id"))
telemetry_->setServiceInstanceId(
toBase58(TokenType::NodePublic, nodeIdentity_->first));
```
- In `start()`: call `telemetry_->start()`
- In `run()` (shutdown path): call `telemetry_->stop()` (to flush pending spans)
- Implement `getTelemetry()` override: return `*telemetry_`
- Add `[telemetry]` section to the example config `cfg/xrpld-example.cfg`:
```ini
# [telemetry]
# enabled=1
# endpoint=http://localhost:4318/v1/traces
# sampling_ratio=1.0
# trace_rpc=1
```
> **Access patterns**: Components holding `ServiceRegistry&` (e.g.
> `NetworkOPsImp`) call `registry_.get().getTelemetry()`. Components
> holding `Application&` (e.g. `ServerHandler`, `PeerImp`,
> `RCLConsensusAdaptor`) call `app_.getTelemetry()` directly. Both
> resolve to the same `Telemetry` instance.
**Key modified files**:
- `include/xrpl/core/ServiceRegistry.h`
- `src/xrpld/app/main/Application.cpp`
- `cfg/xrpld-example.cfg` (example config)
**Reference**:
- [05-configuration-reference.md §5.3](./05-configuration-reference.md) — `ApplicationImp` changes: member declaration, constructor init, `start()`/`stop()` wiring, `getTelemetry()` override
- [05-configuration-reference.md §5.1](./05-configuration-reference.md) — `[telemetry]` config section format and all option defaults
- [03-implementation-strategy.md §3.9.2](./03-implementation-strategy.md) — File impact assessment: `Application.cpp` ~15 lines added, ~3 changed (Low risk)
---
## Task 5: Create Instrumentation Macros
**Objective**: Define convenience macros that make instrumenting code one-liners, and that compile to zero-cost no-ops when telemetry is disabled.
**What to do**:
- Create `src/xrpld/telemetry/TracingInstrumentation.h`:
- When `XRPL_ENABLE_TELEMETRY` is defined:
```cpp
#define XRPL_TRACE_SPAN(telemetry, name) \
auto _xrpl_span_ = (telemetry).startSpan(name); \
::xrpl::telemetry::SpanGuard _xrpl_guard_(_xrpl_span_)
#define XRPL_TRACE_RPC(telemetry, name) \
std::optional<::xrpl::telemetry::SpanGuard> _xrpl_guard_; \
if ((telemetry).shouldTraceRpc()) { \
_xrpl_guard_.emplace((telemetry).startSpan(name)); \
}
#define XRPL_TRACE_SET_ATTR(key, value) \
if (_xrpl_guard_.has_value()) { \
_xrpl_guard_->setAttribute(key, value); \
}
#define XRPL_TRACE_EXCEPTION(e) \
if (_xrpl_guard_.has_value()) { \
_xrpl_guard_->recordException(e); \
}
```
- When `XRPL_ENABLE_TELEMETRY` is NOT defined, all macros expand to `((void)0)`
**Key new file**:
- `src/xrpld/telemetry/TracingInstrumentation.h`
**Reference**:
- [04-code-samples.md §4.3](./04-code-samples.md) — Full macro definitions for `XRPL_TRACE_SPAN`, `XRPL_TRACE_RPC`, `XRPL_TRACE_CONSENSUS`, `XRPL_TRACE_SET_ATTR`, `XRPL_TRACE_EXCEPTION` with both enabled and disabled branches
- [03-implementation-strategy.md §3.7.3](./03-implementation-strategy.md) — Conditional instrumentation pattern: compile-time `#ifndef` and runtime `shouldTrace*()` checks
- [03-implementation-strategy.md §3.9.7](./03-implementation-strategy.md) — Before/after code examples showing minimal intrusiveness (~1-3 lines per instrumentation point)
---
## Task 6: Instrument RPC ServerHandler
> **WS** = WebSocket
**Objective**: Add tracing to the HTTP RPC entry point so every incoming RPC request creates a span.
**What to do**:
- Edit `src/xrpld/rpc/detail/ServerHandler.cpp`:
- `#include` the `TracingInstrumentation.h` header
- In `ServerHandler::onRequest(Session& session)`:
- At the top of the method, add: `XRPL_TRACE_RPC(app_.getTelemetry(), "rpc.request");`
- After the RPC command name is extracted, set attribute: `XRPL_TRACE_SET_ATTR("xrpl.rpc.command", command);`
- After the response status is known, set: `XRPL_TRACE_SET_ATTR("http.status_code", static_cast<int64_t>(statusCode));`
- Wrap error paths with: `XRPL_TRACE_EXCEPTION(e);`
- In `ServerHandler::processRequest(...)`:
- Add a child span: `XRPL_TRACE_RPC(app_.getTelemetry(), "rpc.process");`
- Set method attribute: `XRPL_TRACE_SET_ATTR("xrpl.rpc.method", request_method);`
- In `ServerHandler::onWSMessage(...)` (WebSocket path):
- Add: `XRPL_TRACE_RPC(app_.getTelemetry(), "rpc.ws.message");`
- The goal is to see spans like:
```
rpc.request
└── rpc.process
```
in Tempo/Grafana for every HTTP RPC call.
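Put together, the instrumented entry point looks roughly like the sketch below, assuming the Task 5 macros; `extractCommand` is a hypothetical stand-in for the existing command-parsing code, and the body is abridged:
```cpp
void
ServerHandler::onRequest(Session& session)
{
    // One guard per request; compiles away when telemetry is disabled.
    XRPL_TRACE_RPC(app_.getTelemetry(), "rpc.request");

    auto const command = extractCommand(session);  // hypothetical stand-in
    XRPL_TRACE_SET_ATTR("xrpl.rpc.command", command);

    try
    {
        // ... existing request processing ...
        XRPL_TRACE_SET_ATTR("http.status_code", static_cast<int64_t>(200));
    }
    catch (std::exception const& e)
    {
        XRPL_TRACE_EXCEPTION(e);
        throw;  // tracing must never change error behavior
    }
}
```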
**Key modified file**:
- `src/xrpld/rpc/detail/ServerHandler.cpp` (~15-25 lines added)
**Reference**:
- [04-code-samples.md §4.5.3](./04-code-samples.md) — Complete `ServerHandler::onRequest()` instrumented code sample with W3C header extraction, span creation, attribute setting, and error handling
- [01-architecture-analysis.md §1.5](./01-architecture-analysis.md) — RPC request flow diagram: HTTP request -> attributes -> jobqueue.enqueue -> rpc.command -> response
- [01-architecture-analysis.md §1.6](./01-architecture-analysis.md) — Key trace points table: `rpc.request` in `ServerHandler.cpp::onRequest()` (Priority: High)
- [02-design-decisions.md §2.3](./02-design-decisions.md) — Span naming convention: `rpc.request`, `rpc.command.*`
- [02-design-decisions.md §2.4.2](./02-design-decisions.md) — RPC span attributes: `xrpl.rpc.command`, `xrpl.rpc.version`, `xrpl.rpc.role`, `xrpl.rpc.params`
- [03-implementation-strategy.md §3.9.2](./03-implementation-strategy.md) — File impact: `ServerHandler.cpp` ~40 lines added, ~10 changed (Low risk)
---
## Task 7: Instrument RPC Command Execution
**Objective**: Add per-command tracing inside the RPC handler so each command (e.g., `submit`, `account_info`, `server_info`) gets its own child span.
**What to do**:
- Edit `src/xrpld/rpc/detail/RPCHandler.cpp`:
- `#include` the `TracingInstrumentation.h` header
- In `doCommand(RPC::JsonContext& context, Json::Value& result)`:
- At the top: `XRPL_TRACE_RPC(context.app.getTelemetry(), "rpc.command." + context.method);`
- Set attributes:
- `XRPL_TRACE_SET_ATTR("xrpl.rpc.command", context.method);`
- `XRPL_TRACE_SET_ATTR("xrpl.rpc.version", static_cast<int64_t>(context.apiVersion));`
- `XRPL_TRACE_SET_ATTR("xrpl.rpc.role", (context.role == Role::ADMIN) ? "admin" : "user");`
- On success: `XRPL_TRACE_SET_ATTR("xrpl.rpc.status", "success");`
- On error: `XRPL_TRACE_SET_ATTR("xrpl.rpc.status", "error");` and set the error message
- After this, traces in Tempo/Grafana should look like:
```
rpc.request (xrpl.rpc.command=account_info)
└── rpc.process
└── rpc.command.account_info (xrpl.rpc.version=2, xrpl.rpc.role=user, xrpl.rpc.status=success)
```
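A sketch of the dispatch with a dynamic span name, again assuming the Task 5 macros; `invokeHandler` is a hypothetical stand-in for the existing dispatch, and the `Status` return (true on error) follows the existing handler signature:
```cpp
Status
doCommand(RPC::JsonContext& context, Json::Value& result)
{
    // Span name carries the command, e.g. "rpc.command.account_info".
    XRPL_TRACE_RPC(
        context.app.getTelemetry(), "rpc.command." + context.method);
    XRPL_TRACE_SET_ATTR("xrpl.rpc.command", context.method);
    XRPL_TRACE_SET_ATTR(
        "xrpl.rpc.version", static_cast<int64_t>(context.apiVersion));

    auto const status = invokeHandler(context, result);  // hypothetical stand-in
    XRPL_TRACE_SET_ATTR("xrpl.rpc.status", status ? "error" : "success");
    return status;
}
```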
**Key modified file**:
- `src/xrpld/rpc/detail/RPCHandler.cpp` (~15-20 lines added)
**Reference**:
- [04-code-samples.md §4.5.3](./04-code-samples.md) — `ServerHandler::onRequest()` code sample (includes child span pattern for `rpc.command.*`)
- [02-design-decisions.md §2.3](./02-design-decisions.md) — Span naming: `rpc.command.*` pattern with dynamic command name (e.g., `rpc.command.server_info`)
- [02-design-decisions.md §2.4.2](./02-design-decisions.md) — RPC attribute schema: `xrpl.rpc.command`, `xrpl.rpc.version`, `xrpl.rpc.role`, `xrpl.rpc.status`
- [01-architecture-analysis.md §1.6](./01-architecture-analysis.md) — Key trace points table: `rpc.command.*` in `RPCHandler.cpp::doCommand()` (Priority: High)
- [02-design-decisions.md §2.6.5](./02-design-decisions.md) — Correlation with PerfLog: how `doCommand()` can link trace_id with existing PerfLog entries
- [03-implementation-strategy.md §3.4.4](./03-implementation-strategy.md) — RPC request overhead budget: ~1.75 μs total per request
---
## Task 8: Build, Run, and Verify End-to-End
> **OTLP** = OpenTelemetry Protocol
**Objective**: Prove the full pipeline works: xrpld emits traces -> OTel Collector receives them -> Tempo stores them for Grafana visualization.
**What to do**:
1. **Start the Docker stack**:
```bash
docker compose -f docker/telemetry/docker-compose.yml up -d
```
Verify Collector health: `curl http://localhost:13133`
2. **Build xrpld with telemetry**:
```bash
# Adjust for your actual build workflow
conan install . --build=missing -o with_telemetry=True
cmake --preset default -DXRPL_ENABLE_TELEMETRY=ON
cmake --build --preset default
```
3. **Configure xrpld**:
Add to `xrpld.cfg` (or your local test config):
```ini
[telemetry]
enabled=1
endpoint=http://localhost:4318/v1/traces
sampling_ratio=1.0
trace_rpc=1
```
4. **Start xrpld** in standalone mode:
```bash
./rippled --conf xrpld.cfg -a --start
```
5. **Generate RPC traffic**:
```bash
# server_info
curl -s -X POST http://localhost:5005 \
-H "Content-Type: application/json" \
-d '{"method":"server_info","params":[{}]}'
# ledger
curl -s -X POST http://localhost:5005 \
-H "Content-Type: application/json" \
-d '{"method":"ledger","params":[{"ledger_index":"current"}]}'
# account_info (will error in standalone, that's fine — we trace errors too)
curl -s -X POST http://localhost:5005 \
-H "Content-Type: application/json" \
-d '{"method":"account_info","params":[{"account":"rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh"}]}'
```
6. **Verify in Grafana (Tempo)**:
- Open `http://localhost:3000`
- Navigate to Explore → select Tempo datasource
- Search for service `xrpld`
- Confirm you see traces with spans: `rpc.request` -> `rpc.process` -> `rpc.command.server_info`
- Click into a trace and verify attributes: `xrpl.rpc.command`, `xrpl.rpc.status`, `xrpl.rpc.version`
7. **Verify zero-overhead when disabled**:
- Rebuild with `XRPL_ENABLE_TELEMETRY=OFF`, or set `enabled=0` in config
- Run the same RPC calls
- Confirm no new traces appear and no errors in xrpld logs
**Verification Checklist**:
- [ ] Docker stack starts without errors
- [ ] xrpld builds with `-DXRPL_ENABLE_TELEMETRY=ON`
- [ ] xrpld starts and connects to OTel Collector (check xrpld logs for telemetry messages)
- [ ] Traces appear in Grafana/Tempo under service "xrpld"
- [ ] Span hierarchy is correct (parent-child relationships)
- [ ] Span attributes are populated (`xrpl.rpc.command`, `xrpl.rpc.status`, etc.)
- [ ] Error spans show error status and message
- [ ] Building with `XRPL_ENABLE_TELEMETRY=OFF` produces no regressions
- [ ] Setting `enabled=0` at runtime produces no traces and no errors
**Reference**:
- [06-implementation-phases.md §6.11.1](./06-implementation-phases.md) — Phase 1 definition of done: SDK compiles, runtime toggle works, span creation verified in Tempo, config validation passes
- [06-implementation-phases.md §6.11.2](./06-implementation-phases.md#6112-phase-2-rpc-tracing) — Phase 2 definition of done: 100% RPC coverage, traceparent propagation, <1ms p99 overhead, dashboard deployed
- [06-implementation-phases.md §6.8](./06-implementation-phases.md) — Success metrics: trace coverage >95%, CPU overhead <3%, memory <5 MB, latency impact <2%
- [03-implementation-strategy.md §3.9.5](./03-implementation-strategy.md) — Backward compatibility: config optional, protocol unchanged, `XRPL_ENABLE_TELEMETRY=OFF` produces identical binary
- [01-architecture-analysis.md §1.8](./01-architecture-analysis.md) — Observable outcomes: what traces, metrics, and dashboards to expect
---
## Task 9: Document POC Results and Next Steps
> **OTLP** = OpenTelemetry Protocol | **WS** = WebSocket
**Objective**: Capture findings, screenshots, and remaining work for the team.
**What to do**:
- Take screenshots of Grafana/Tempo showing:
- The service list with "xrpld"
- A trace with the full span tree
- Span detail view showing attributes
- Document any issues encountered (build issues, SDK quirks, missing attributes)
- Note performance observations (build time impact, any noticeable runtime overhead)
- Write a short summary of what the POC proves and what it doesn't cover yet:
- **Proves**: OTel SDK integrates with xrpld, OTLP export works, RPC traces visible
- **Doesn't cover**: Cross-node P2P context propagation, consensus tracing, protobuf trace context, W3C traceparent header extraction, tail-based sampling, production deployment
- Outline next steps (mapping to the full plan phases):
- [Phase 2](./06-implementation-phases.md) completion: [W3C header extraction](./02-design-decisions.md) (§2.5), WebSocket tracing, all [RPC handlers](./01-architecture-analysis.md) (§1.6)
- [Phase 3](./06-implementation-phases.md): [Protobuf `TraceContext` message](./04-code-samples.md) (§4.4), [transaction relay tracing](./04-code-samples.md) (§4.5.1) across nodes
- [Phase 4](./06-implementation-phases.md): [Consensus round and phase tracing](./04-code-samples.md) (§4.5.2)
- [Phase 5](./06-implementation-phases.md): [Production collector config](./05-configuration-reference.md) (§5.5.2), [Grafana dashboards](./07-observability-backends.md) (§7.6), [alerting](./07-observability-backends.md) (§7.6.3)
**Reference**:
- [06-implementation-phases.md §6.1](./06-implementation-phases.md) — Full 5-phase timeline overview and Gantt chart
- [06-implementation-phases.md §6.10](./06-implementation-phases.md) — Crawl-Walk-Run strategy: POC is the CRAWL phase, next steps are WALK and RUN
- [06-implementation-phases.md §6.12](./06-implementation-phases.md) — Recommended implementation order (14 steps across 9 weeks)
- [03-implementation-strategy.md §3.9](./03-implementation-strategy.md) — Code intrusiveness assessment and risk matrix for each remaining component
- [07-observability-backends.md §7.2](./07-observability-backends.md) — Production backend selection (Tempo, Elastic APM, Honeycomb, Datadog)
- [02-design-decisions.md §2.5](./02-design-decisions.md) — Context propagation design: W3C HTTP headers, protobuf P2P, JobQueue internal
- [00-tracing-fundamentals.md](./00-tracing-fundamentals.md) — Reference for team onboarding on distributed tracing concepts
---
## Summary
| Task | Description | New Files | Modified Files | Depends On |
| ---- | ------------------------------------ | --------- | -------------- | ---------- |
| 0 | Docker observability stack | 4 | 0 | — |
| 1 | OTel C++ SDK dependency | 0 | 2 | — |
| 2 | Core Telemetry interface + NullImpl | 3 | 0 | 1 |
| 3 | OTel-backed Telemetry implementation | 2 | 1 | 1, 2 |
| 4 | Application lifecycle integration | 0 | 3 | 2, 3 |
| 5 | Instrumentation macros | 1 | 0 | 2 |
| 6 | Instrument RPC ServerHandler | 0 | 1 | 4, 5 |
| 7 | Instrument RPC command execution | 0 | 1 | 4, 5 |
| 8 | End-to-end verification | 0 | 0 | 0-7 |
| 9 | Document results and next steps | 1 | 0 | 8 |
**Parallel work**: Tasks 0 and 1 can run in parallel. Tasks 3 and 5 have no dependency on each other (both need only Task 2). Tasks 6 and 7 can be done in parallel once Tasks 4 and 5 are complete.
---
## Next Steps (Post-POC)
> **OTLP** = OpenTelemetry Protocol | **WS** = WebSocket
### Metrics Pipeline for Grafana Dashboards
The current POC exports **traces only**. Grafana's Explore view can query Tempo for individual traces, but time-series charts (latency histograms, request throughput, error rates) require a **metrics pipeline**. To enable this:
1. **Add a `spanmetrics` connector** to the OTel Collector config that derives RED metrics (Rate, Errors, Duration) from trace spans automatically:
```yaml
connectors:
spanmetrics:
histogram:
explicit:
buckets: [1ms, 5ms, 10ms, 25ms, 50ms, 100ms, 250ms, 500ms, 1s, 5s]
dimensions:
- name: xrpl.rpc.command
- name: xrpl.rpc.status
exporters:
prometheus:
endpoint: 0.0.0.0:8889
service:
pipelines:
traces:
receivers: [otlp]
processors: [batch]
exporters: [debug, otlp/tempo, spanmetrics]
metrics:
receivers: [spanmetrics]
exporters: [prometheus]
```
2. **Add Prometheus** to the Docker Compose stack to scrape the collector's metrics endpoint.
3. **Add Prometheus as a Grafana datasource** and build dashboards for:
- RPC request latency (p50/p95/p99) by command
- RPC throughput (requests/sec) by command
- Error rate by command
- Span duration distribution
### Additional Instrumentation
- **W3C `traceparent` header extraction** in `ServerHandler` to support cross-service context propagation from external callers
- **WebSocket RPC tracing** in `ServerHandler::onWSMessage()`
- **Transaction relay tracing** across nodes using protobuf `TraceContext` messages
- **Consensus round and phase tracing** for validator coordination visibility
- **Ledger close tracing** to measure close-to-validated latency
### Production Hardening
- **Tail-based sampling** in the OTel Collector to reduce volume while retaining error/slow traces
- **TLS configuration** for the OTLP exporter in production deployments
- **Resource limits** on the batch processor queue to prevent unbounded memory growth
- **Health monitoring** for the telemetry pipeline itself (collector lag, export failures)
### POC Lessons Learned
Issues encountered during POC implementation that inform future work:
| Issue | Resolution | Impact on Future Work |
| -------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------- | ---------------------------------------------------------------- |
| Conan lockfile rejected `opentelemetry-cpp/1.18.0` | Used `--lockfile=""` to bypass | Lockfile must be regenerated when adding new dependencies |
| Conan package only builds OTLP HTTP exporter, not gRPC | Switched from gRPC to HTTP exporter (`localhost:4318/v1/traces`) | HTTP exporter is the default; gRPC requires custom Conan profile |
| CMake target `opentelemetry-cpp::api` etc. don't exist in Conan package | Use umbrella target `opentelemetry-cpp::opentelemetry-cpp` | Conan targets differ from upstream CMake targets |
| OTel Collector `logging` exporter deprecated | Renamed to `debug` exporter | Use `debug` in all collector configs going forward |
| Macro parameter `telemetry` collided with `::xrpl::telemetry::` namespace | Renamed macro params to `_tel_obj_`, `_span_name_` | Avoid common words as macro parameter names |
| `opentelemetry::trace::Scope` creates new context on move | Store scope as member, create once in constructor | SpanGuard move semantics need care with Scope lifecycle |
| `TracerProviderFactory::Create` returns `unique_ptr<sdk::TracerProvider>`, not `nostd::shared_ptr` | Use `std::shared_ptr` member, wrap in `nostd::shared_ptr` for global provider | OTel SDK factory return types don't match API provider types |

View File

@@ -1,673 +0,0 @@
# OpenTelemetry Distributed Tracing for xrpld
---
## Slide 1: Introduction
> **CNCF** = Cloud Native Computing Foundation
### What is OpenTelemetry?
OpenTelemetry is an open-source, CNCF-backed observability framework for distributed tracing, metrics, and logs.
### Why OpenTelemetry for xrpld?
- **End-to-End Transaction Visibility**: Track transactions from submission → consensus → ledger inclusion
- **Cross-Node Correlation**: Follow requests across multiple independent nodes using a unique `trace_id`
- **Consensus Round Analysis**: Understand timing and behavior across validators
- **Incident Debugging**: Correlate events across distributed nodes during issues
```mermaid
flowchart LR
A["Node A<br/>tx.receive<br/>trace_id: abc123"] --> B["Node B<br/>tx.relay<br/>trace_id: abc123"] --> C["Node C<br/>tx.validate<br/>trace_id: abc123"] --> D["Node D<br/>ledger.apply<br/>trace_id: abc123"]
style A fill:#1565c0,stroke:#0d47a1,color:#fff
style B fill:#2e7d32,stroke:#1b5e20,color:#fff
style C fill:#2e7d32,stroke:#1b5e20,color:#fff
style D fill:#e65100,stroke:#bf360c,color:#fff
```
**Reading the diagram:**
- **Node A (blue, leftmost)**: The originating node that first receives the transaction and assigns a new `trace_id: abc123`; this ID becomes the correlation key for the entire distributed trace.
- **Node B and Node C (green, middle)**: Relay and validation nodes — each creates its own span but carries the same `trace_id`, so their work is linked to the original submission without any central coordinator.
- **Node D (orange, rightmost)**: The final node that applies the transaction to the ledger; the trace now spans the full lifecycle from submission to ledger inclusion.
- **Left-to-right flow**: The horizontal progression shows the real-world message path — a transaction hops from node to node, and the shared `trace_id` stitches all hops into a single queryable trace.
> **Trace ID: abc123** — All nodes share the same trace, enabling cross-node correlation.
---
## Slide 2: OpenTelemetry vs Open Source Alternatives
> **CNCF** = Cloud Native Computing Foundation
| Feature | OpenTelemetry | Jaeger | Zipkin | SkyWalking | Pinpoint | Prometheus |
| ------------------- | ---------------- | ---------------- | ------------------ | ---------- | ---------- | ---------- |
| **Tracing** | YES | YES | YES | YES | YES | NO |
| **Metrics** | YES | NO | NO | YES | YES | YES |
| **Logs** | YES | NO | NO | YES | NO | NO |
| **C++ SDK** | YES Official | YES (Deprecated) | YES (Unmaintained) | NO | NO | YES |
| **Vendor Neutral** | YES Primary goal | NO | NO | NO | NO | NO |
| **Instrumentation** | Manual + Auto | Manual | Manual | Auto-first | Auto-first | Manual |
| **Backend** | Any (exporters) | Self | Self | Self | Self | Self |
| **CNCF Status** | Incubating | Graduated | NO | NO (Apache) | NO | Graduated |
> **Why OpenTelemetry?** It's the only actively maintained, full-featured C++ option with vendor neutrality — allowing export to Tempo, Prometheus, Grafana, or any commercial backend without changing instrumentation.
---
## Slide 3: Adoption Scope — Traces Only (Current Plan)
OpenTelemetry supports three signal types: **Traces**, **Metrics**, and **Logs**. xrpld already captures metrics (StatsD via Beast Insight) and logs (Journal/PerfLog). The question is: how much of OTel do we adopt?
> **Scenario A**: Add distributed tracing. Keep StatsD for metrics and Journal for logs.
```mermaid
flowchart LR
subgraph xrpld["xrpld Process"]
direction TB
OTel["OTel SDK<br/>(Traces)"]
Insight["Beast Insight<br/>(StatsD Metrics)"]
Journal["Journal + PerfLog<br/>(Logging)"]
end
OTel -->|"OTLP"| Collector["OTel Collector"]
Insight -->|"UDP"| StatsD["StatsD Server"]
Journal -->|"File I/O"| LogFile["perf.log / debug.log"]
Collector --> Tempo["Tempo"]
StatsD --> Graphite["Graphite / Grafana"]
LogFile --> Loki["Loki (optional)"]
style xrpld fill:#424242,stroke:#212121,color:#fff
style OTel fill:#2e7d32,stroke:#1b5e20,color:#fff
style Insight fill:#1565c0,stroke:#0d47a1,color:#fff
style Journal fill:#e65100,stroke:#bf360c,color:#fff
style Collector fill:#2e7d32,stroke:#1b5e20,color:#fff
```
| Aspect | Details |
| ------------------------------ | --------------------------------------------------------------------------------------------------------------- |
| **What changes for operators** | Deploy OTel Collector + trace backend. Existing StatsD and log pipelines stay as-is. |
| **Codebase impact** | New `Telemetry` module (~1500 LOC). Beast Insight and Journal untouched. |
| **New capabilities** | Cross-node trace correlation, span-based debugging, request lifecycle visibility. |
| **What we still can't do** | Correlate metrics with specific traces natively. StatsD metrics remain fire-and-forget with no trace exemplars. |
| **Maintenance burden** | Three separate observability systems to maintain (OTel + StatsD + Journal). |
| **Risk** | Lowest — additive change, no existing systems disturbed. |
---
## Slide 4: Future Adoption — Metrics & Logs via OTel
### Scenario B: + OTel Metrics (Replace StatsD)
> Migrate StatsD to OTel Metrics API, exposing Prometheus-compatible metrics. Remove Beast Insight.
```mermaid
flowchart LR
subgraph xrpld["xrpld Process"]
direction TB
OTel["OTel SDK<br/>(Traces + Metrics)"]
Journal["Journal + PerfLog<br/>(Logging)"]
end
OTel -->|"OTLP"| Collector["OTel Collector"]
Journal -->|"File I/O"| LogFile["perf.log / debug.log"]
Collector --> Tempo["Tempo<br/>(Traces)"]
Collector --> Prom["Prometheus<br/>(Metrics)"]
LogFile --> Loki["Loki (optional)"]
style xrpld fill:#424242,stroke:#212121,color:#fff
style OTel fill:#2e7d32,stroke:#1b5e20,color:#fff
style Journal fill:#e65100,stroke:#bf360c,color:#fff
style Collector fill:#2e7d32,stroke:#1b5e20,color:#fff
```
- **Better metrics?** Yes — Prometheus gives native histograms (p50/p95/p99), multi-dimensional labels, and exemplars linking metric spikes to traces.
- **Codebase**: Remove `Beast::Insight` + `StatsDCollector` (~2000 LOC). Single SDK for traces and metrics.
- **Operator effort**: Rewrite dashboards from StatsD/Graphite queries to PromQL. Run both in parallel during transition.
- **Risk**: Medium — operators must migrate monitoring infrastructure.
### Scenario C: + OTel Logs (Full Stack)
> Also replace Journal logging with OTel Logs API. Single SDK for everything.
```mermaid
flowchart LR
subgraph xrpld["xrpld Process"]
OTel["OTel SDK<br/>(Traces + Metrics + Logs)"]
end
OTel -->|"OTLP"| Collector["OTel Collector"]
Collector --> Tempo["Tempo<br/>(Traces)"]
Collector --> Prom["Prometheus<br/>(Metrics)"]
Collector --> Loki["Loki / Elastic<br/>(Logs)"]
style xrpld fill:#424242,stroke:#212121,color:#fff
style OTel fill:#2e7d32,stroke:#1b5e20,color:#fff
style Collector fill:#2e7d32,stroke:#1b5e20,color:#fff
```
- **Structured logging**: OTel Logs API outputs structured records with `trace_id`, `span_id`, severity, and attributes by design.
- **Full correlation**: Every log line carries `trace_id`. Click trace → see logs. Click metric spike → see trace → see logs.
- **Codebase**: Remove Beast Insight (~2000 LOC) + simplify Journal/PerfLog (~3000 LOC). One dependency instead of three.
- **Risk**: Highest — `beast::Journal` is deeply embedded in every component. Large refactor. OTel C++ Logs API is newer (stable since v1.11, less battle-tested).
### Recommendation
```mermaid
flowchart LR
A["Phase 1<br/><b>Traces Only</b><br/>(Current Plan)"] --> B["Phase 2<br/><b>+ Metrics</b><br/>(Replace StatsD)"] --> C["Phase 3<br/><b>+ Logs</b><br/>(Full OTel)"]
style A fill:#2e7d32,stroke:#1b5e20,color:#fff
style B fill:#1565c0,stroke:#0d47a1,color:#fff
style C fill:#e65100,stroke:#bf360c,color:#fff
```
| Phase | Signal | Strategy | Risk |
| -------------------- | --------- | -------------------------------------------------------------- | ------ |
| **Phase 1** (now) | Traces | Add OTel traces. Keep StatsD and Journal. Prove value. | Low |
| **Phase 2** (future) | + Metrics | Migrate StatsD → Prometheus via OTel. Remove Beast Insight. | Medium |
| **Phase 3** (future) | + Logs | Adopt OTel Logs API. Align with structured logging initiative. | High |
> **Key Takeaway**: Start with traces (unique value, lowest risk), then incrementally adopt metrics and logs as the OTel infrastructure proves itself.
---
## Slide 5: Comparison with xrpld's Existing Solutions
### Current Observability Stack
| Aspect | PerfLog (JSON) | StatsD (Metrics) | OpenTelemetry (NEW) |
| --------------------- | --------------------- | --------------------- | --------------------------- |
| **Type** | Logging | Metrics | Distributed Tracing |
| **Scope** | Single node | Single node | **Cross-node** |
| **Data** | JSON log entries | Counters, gauges | Spans with context |
| **Correlation** | By timestamp | By metric name | By `trace_id` |
| **Overhead** | Low (file I/O) | Low (UDP) | Low-Medium (configurable) |
| **Question Answered** | "What happened here?" | "How many? How fast?" | **"What was the journey?"** |
### Use Case Matrix
| Scenario | PerfLog | StatsD | OpenTelemetry |
| -------------------------------- | ------- | ------ | ------------- |
| "How many TXs per second?" | ❌ | ✅ | ❌ |
| "Why was this specific TX slow?" | ⚠️ | ❌ | ✅ |
| "Which node delayed consensus?" | ❌ | ❌ | ✅ |
| "Show TX journey across 5 nodes" | ❌ | ❌ | ✅ |
> **Key Insight**: In the **traces-only** approach (Phase 1), OpenTelemetry **complements** existing systems. In future phases, OTel metrics and logs could **replace** StatsD and Journal respectively — see Slides 3-4 for the full adoption roadmap.
---
## Slide 6: Architecture
> **OTLP** = OpenTelemetry Protocol | **WS** = WebSocket
### High-Level Integration Architecture
```mermaid
flowchart TB
subgraph xrpld["xrpld Node"]
subgraph services["Core Services"]
direction LR
RPC["RPC Server<br/>(HTTP/WS)"] ~~~ Overlay["Overlay<br/>(P2P Network)"] ~~~ Consensus["Consensus<br/>(RCLConsensus)"]
end
Telemetry["Telemetry Module<br/>(OpenTelemetry SDK)"]
services --> Telemetry
end
Telemetry -->|OTLP/gRPC| Collector["OTel Collector"]
Collector --> Tempo["Grafana Tempo"]
Collector --> Elastic["Elastic APM"]
style xrpld fill:#424242,stroke:#212121,color:#fff
style services fill:#1565c0,stroke:#0d47a1,color:#fff
style Telemetry fill:#2e7d32,stroke:#1b5e20,color:#fff
style Collector fill:#e65100,stroke:#bf360c,color:#fff
```
**Reading the diagram:**
- **Core Services (blue, top)**: RPC Server, Overlay, and Consensus are the three primary components that generate trace data — they represent the entry points for client requests, peer messages, and consensus rounds respectively.
- **Telemetry Module (green, middle)**: The OpenTelemetry SDK sits below the core services and receives span data from all three; it acts as a single collection point within the xrpld process.
- **OTel Collector (orange, center)**: An external process that receives spans over OTLP/gRPC from the Telemetry Module; it decouples xrpld from backend choices and handles batching, sampling, and routing.
- **Backends (bottom row)**: Tempo and Elastic APM are interchangeable — the Collector fans out to any combination, so operators can switch backends without modifying xrpld code.
- **Top-to-bottom flow**: Data flows from instrumented code down through the SDK, out over the network to the Collector, and finally into storage/visualization backends.
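For concreteness, here is a minimal sketch of what the Telemetry Module's surface could look like, assuming the opentelemetry-cpp API; the `xrpl::telemetry::Telemetry` class and namespace names are illustrative, not taken from the xrpld codebase:

```cpp
// Hypothetical Telemetry module wrapping the OTel C++ SDK.
// Only the opentelemetry-cpp calls are real; all names are illustrative.
#include <opentelemetry/trace/provider.h>

#include <string>

namespace xrpl::telemetry {

class Telemetry
{
public:
    // One tracer per subsystem keeps span names cleanly namespaced.
    explicit Telemetry(std::string const& subsystem)
        : tracer_(opentelemetry::trace::Provider::GetTracerProvider()->GetTracer(subsystem))
    {
    }

    // Core services (RPC, Overlay, Consensus) call this to open a span;
    // the caller ends it when the operation completes.
    opentelemetry::nostd::shared_ptr<opentelemetry::trace::Span>
    startSpan(std::string const& name)
    {
        return tracer_->StartSpan(name);
    }

private:
    opentelemetry::nostd::shared_ptr<opentelemetry::trace::Tracer> tracer_;
};

}  // namespace xrpl::telemetry
```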
### Context Propagation
```mermaid
sequenceDiagram
participant Client
participant NodeA as Node A
participant NodeB as Node B
Client->>NodeA: Submit TX (no context)
Note over NodeA: Creates trace_id: abc123<br/>span: tx.receive
NodeA->>NodeB: Relay TX<br/>(traceparent: abc123)
Note over NodeB: Links to trace_id: abc123<br/>span: tx.relay
```
- **HTTP/RPC**: W3C Trace Context headers (`traceparent`)
- **P2P Messages**: Protocol Buffer extension fields
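For the HTTP path, the `traceparent` value has a fixed hex layout defined by the W3C Trace Context spec; a minimal sketch of emitting one (the helper is hypothetical, and the ID bytes would come from the active span):

```cpp
// Build a W3C "traceparent" header value:
//   "00" - 32 hex chars trace_id - 16 hex chars span_id - 2 hex chars flags
#include <cstdint>
#include <cstdio>
#include <string>

std::string
makeTraceparent(
    std::uint8_t const (&traceId)[16],
    std::uint8_t const (&spanId)[8],
    std::uint8_t flags)
{
    char buf[56];  // 3 + 32 + 1 + 16 + 3 = 55 chars + terminator
    char* p = buf;
    p += std::sprintf(p, "00-");
    for (auto b : traceId)
        p += std::sprintf(p, "%02x", b);
    *p++ = '-';
    for (auto b : spanId)
        p += std::sprintf(p, "%02x", b);
    std::sprintf(p, "-%02x", flags);
    return buf;
}
```

A sampled trace sets the flags byte to `0x01`, so a typical header reads `00-<trace_id>-<span_id>-01`; the receiving node parses these fields back out and continues the trace.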
---
## Slide 7: Implementation Plan
### 5-Phase Rollout (9 Weeks)
> **Note**: Dates shown are relative to project start, not calendar dates.
```mermaid
gantt
title Implementation Timeline
dateFormat YYYY-MM-DD
axisFormat Week %W
section Phase 1
Core Infrastructure :p1, 2024-01-01, 2w
section Phase 2
RPC Tracing :p2, after p1, 2w
section Phase 3
Transaction Tracing :p3, after p2, 2w
section Phase 4
Consensus Tracing :p4, after p3, 2w
section Phase 5
Documentation :p5, after p4, 1w
```
### Phase Details
| Phase | Focus | Key Deliverables | Effort |
| ----- | ------------------- | -------------------------------------------- | ------- |
| 1 | Core Infrastructure | SDK integration, Telemetry interface, Config | 10 days |
| 2 | RPC Tracing | HTTP context extraction, Handler spans | 10 days |
| 3 | Transaction Tracing | Protobuf context, P2P relay propagation | 10 days |
| 4 | Consensus Tracing | Round spans, Proposal/validation tracing | 10 days |
| 5 | Documentation | Runbook, Dashboards, Training | 7 days |
**Total Effort**: ~47 developer-days (2 developers)
> **Future Phases** (not in current scope): After traces are stable, OTel metrics can replace StatsD (~3 weeks), and OTel logs can replace Journal (~4 weeks, aligned with structured logging initiative). See Slides 3-4 for the full adoption roadmap.
---
## Slide 8: Performance Overhead
> **OTLP** = OpenTelemetry Protocol
### Estimated System Impact
| Metric | Overhead | Notes |
| ----------------- | ---------- | ------------------------------------------------ |
| **CPU** | 1-3% | Span creation and attribute setting |
| **Memory** | ~10 MB | SDK statics + batch buffer + worker thread stack |
| **Network** | 10-50 KB/s | Compressed OTLP export to collector |
| **Latency (p99)** | <2% | With proper sampling configuration |
#### How We Arrived at These Numbers
**Assumptions (XRPL mainnet baseline)**:
| Parameter | Value | Source |
| ------------------------- | ---------------------- | --------------------------------------------------------------------------------------------------- |
| Transaction throughput | ~25 TPS (peaks to ~50) | Mainnet average |
| Default peers per node | 21 | `peerfinder/detail/Tuning.h` (`defaultMaxPeers`) |
| Consensus round frequency | ~1 round / 3-4 seconds | `ConsensusParms.h` (`ledgerMIN_CONSENSUS=1950ms`) |
| Proposers per round | ~20-35 | Mainnet UNL size |
| P2P message rate | ~160 msgs/sec | See message breakdown below |
| Avg TX processing time | ~200 μs | Profiled baseline |
| Single span creation cost | 500-1000 ns | OTel C++ SDK benchmarks (see [3.5.4](./03-implementation-strategy.md#354-performance-data-sources)) |
**P2P message breakdown** (per node, mainnet):
| Message Type | Rate | Derivation |
| ------------- | ------------ | --------------------------------------------------------------------- |
| TMTransaction | ~100/sec | ~25 TPS × ~4 relay hops per TX, deduplicated by HashRouter |
| TMValidation | ~50/sec | ~35 validators × 1 validation per ~3s round ≈ 12/sec, plus relay fan-out |
| TMProposeSet | ~10/sec | ~35 proposals per ~3s round ≈ 12/sec before deduplication, clustered in the establish phase |
| **Total** | **~160/sec** | **Only traced message types counted** |
**CPU (1-3%) — Calculation**:
Per-transaction tracing cost breakdown:
| Operation | Cost | Notes |
| ----------------------------------------------- | ----------- | ------------------------------------------ |
| `tx.receive` span (create + end + 4 attributes) | ~1400 ns | ~1000ns create + ~200ns end + 4×50ns attrs |
| `tx.validate` span | ~1200 ns | ~1000ns create + ~200ns for 2 attributes |
| `tx.relay` span | ~1200 ns | ~1000ns create + ~200ns for 2 attributes |
| Context injection into P2P message | ~200 ns | Serialize trace_id + span_id into protobuf |
| **Total per TX** | **~4.0 μs** | |
> **CPU overhead**: 4.0 μs / 200 μs baseline = **~2.0% per transaction**. Under high load with consensus + RPC spans overlapping, reaches ~3%. Consensus itself adds only ~36 μs per 3-second round (~0.001%), so the TX path dominates. On production server hardware (3+ GHz Xeon), span creation drops to ~500-600 ns, bringing per-TX cost to ~2.6 μs (~1.3%). See [Section 3.5.4](./03-implementation-strategy.md#354-performance-data-sources) for benchmark sources.
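These per-span figures could be sanity-checked with a trivial timing loop; a sketch that assumes an SDK `TracerProvider` has already been installed (with the default no-op provider the numbers are meaningless):

```cpp
// Rough check of the per-span costs quoted above: time N create/attr/end
// cycles. Assumes a real SDK TracerProvider was installed at startup.
#include <opentelemetry/trace/provider.h>

#include <chrono>
#include <cstdio>

void
benchSpanCost(int n = 100000)
{
    auto tracer =
        opentelemetry::trace::Provider::GetTracerProvider()->GetTracer("bench");
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < n; ++i)
    {
        auto span = tracer->StartSpan("tx.receive");
        span->SetAttribute("tx.type", "Payment");
        span->End();
    }
    auto dt = std::chrono::steady_clock::now() - t0;
    std::printf(
        "%.0f ns per span\n",
        std::chrono::duration<double, std::nano>(dt).count() / n);
}
```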
**Memory (~10 MB) — Calculation**:
| Component | Size | Notes |
| --------------------------------------------- | ------------------ | ------------------------------------- |
| TracerProvider + Exporter (gRPC channel init) | ~320 KB | Allocated once at startup |
| BatchSpanProcessor (circular buffer) | ~16 KB | 2049 × 8-byte AtomicUniquePtr entries |
| BatchSpanProcessor (worker thread stack) | ~8 MB | Default Linux thread stack size |
| Active spans (in-flight, max ~1000) | ~500-800 KB | ~500-800 bytes/span × 1000 concurrent |
| Export queue (batch buffer, max 2048 spans) | ~1 MB | ~500 bytes/span × 2048 queue depth |
| Thread-local context storage (~100 threads) | ~6.4 KB | ~64 bytes/thread |
| **Total** | **~10 MB ceiling** | |
> Memory plateaus once the export queue fills — the `max_queue_size=2048` config bounds growth.
> The worker thread stack (~8 MB) dominates the static footprint but is virtual memory; actual RSS
> depends on stack usage (typically much less). Active spans are larger than originally estimated
> (~500-800 bytes) because the OTel SDK `Span` object includes a mutex (~40 bytes), `SpanData`
> recordable (~250 bytes base), and `std::map`-based attribute storage (~200-500 bytes for 3-5
> string attributes). See [Section 3.5.4](./03-implementation-strategy.md#354-performance-data-sources) for source references.
**Network (10-50 KB/s) — Calculation**:
Two sources of network overhead:
**(A) OTLP span export to Collector:**
| Sampling Rate | Effective Spans/sec | Avg Span Size (compressed) | Bandwidth |
| -------------------------- | ------------------- | -------------------------- | ------------ |
| 100% (dev only) | ~500 | ~500 bytes | ~250 KB/s |
| **10% (recommended prod)** | **~50** | **~500 bytes** | **~25 KB/s** |
| 1% (minimal) | ~5 | ~500 bytes | ~2.5 KB/s |
> The ~500 spans/sec at 100% comes from: ~300 TX spans (three spans per TX at ~100 TMTransaction msgs/sec, per the CPU table above) + ~160 P2P context spans + ~8 consensus spans/sec (~23 per round at ~1 round/3s) + ~50 RPC spans ≈ 500/sec. OTLP protobuf with gzip compression yields ~500 bytes/span average.
**(B) P2P trace context overhead** (added to existing messages, always-on regardless of sampling):
| Message Type | Rate | Context Size | Bandwidth |
| ------------- | -------- | ------------ | ------------- |
| TMTransaction | ~100/sec | 29 bytes | ~2.9 KB/s |
| TMValidation | ~50/sec | 29 bytes | ~1.5 KB/s |
| TMProposeSet | ~10/sec | 29 bytes | ~0.3 KB/s |
| **Total P2P** | | | **~4.7 KB/s** |
> **Combined**: 25 KB/s (OTLP export at 10%) + 5 KB/s (P2P context) ≈ **~30 KB/s typical**. The 10-50 KB/s range covers 10-20% sampling under normal to peak mainnet load.
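The arithmetic above is simple enough to pin down in a compile-time check; a tiny sketch with the rates and sizes copied from the tables (nothing here is measured):

```cpp
// Compile-time sanity check of the combined network estimate.
// 10% sampling: ~50 spans/s at ~500 B/span, plus per-message context bytes.
constexpr double otlpExportKBs = 50 * 500 / 1000.0;              // ~25 KB/s
constexpr double p2pContextKBs = (100 + 50 + 10) * 29 / 1000.0;  // ~4.6 KB/s
static_assert(
    otlpExportKBs + p2pContextKBs < 50.0,
    "combined overhead stays inside the stated 10-50 KB/s budget");
```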
**Latency (<2%) — Calculation**:
| Path | Tracing Cost | Baseline | Overhead |
| ------------------------------ | ------------ | -------- | -------- |
| Fast RPC (e.g., `server_info`) | 2.75 μs | ~1 ms | 0.275% |
| Slow RPC (e.g., `path_find`) | 2.75 μs | ~100 ms | 0.003% |
| Transaction processing | 4.0 μs | ~200 μs | 2.0% |
| Consensus round | 36 μs | ~3 sec | 0.001% |
> At p99, even the worst case (TX processing at 2.0%) is within the 1-3% range. RPC and consensus overhead are negligible. On production hardware, TX overhead drops to ~1.3%.
### Per-Message Overhead (Context Propagation)
Each P2P message carries trace context with the following overhead:
| Field | Size | Description |
| ------------- | ------------- | ----------------------------------------- |
| `trace_id` | 16 bytes | Unique identifier for the entire trace |
| `span_id` | 8 bytes | Current span (becomes parent on receiver) |
| `trace_flags` | 1 byte | Sampling decision flags |
| `trace_state` | 0-4 bytes | Optional vendor-specific data |
| **Total** | **~29 bytes** | **Added per traced P2P message** |
```mermaid
flowchart LR
subgraph msg["P2P Message with Trace Context"]
A["Original Message<br/>(variable size)"] --> B["+ TraceContext<br/>(~29 bytes)"]
end
subgraph breakdown["Context Breakdown"]
C["trace_id<br/>16 bytes"]
D["span_id<br/>8 bytes"]
E["flags<br/>1 byte"]
F["state<br/>0-4 bytes"]
end
B --> breakdown
style A fill:#424242,stroke:#212121,color:#fff
style B fill:#2e7d32,stroke:#1b5e20,color:#fff
style C fill:#1565c0,stroke:#0d47a1,color:#fff
style D fill:#1565c0,stroke:#0d47a1,color:#fff
style E fill:#e65100,stroke:#bf360c,color:#fff
style F fill:#4a148c,stroke:#2e0d57,color:#fff
```
**Reading the diagram:**
- **Original Message (gray, left)**: The existing P2P message payload, of variable size; it is unchanged, since the trace context is appended without modifying the original data.
- **+ TraceContext (green, right of message)**: The additional 29-byte context block attached to each traced message; the arrow from the original message shows it is a pure addition.
- **Context Breakdown (right subgraph)**: The four fields `trace_id` (16 bytes), `span_id` (8 bytes), `flags` (1 byte), and `state` (0-4 bytes) show exactly what is added and their individual sizes.
- **Color coding**: Blue fields (`trace_id`, `span_id`) are the core identifiers required for trace correlation; orange (`flags`) controls sampling decisions; purple (`state`) is optional vendor data that is typically omitted.
> **Note**: 29 bytes represents ~1-6% overhead depending on message size (500B simple TX to 5KB proposal), which is acceptable for the observability benefits provided.
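To make the byte accounting concrete, here is an illustrative packing of the context; in the actual design these fields would be Protocol Buffer extension fields, not a hand-rolled struct:

```cpp
// Illustrative packing of the trace context carried on P2P messages.
// Field sizes follow the table above; sketch only.
#include <array>
#include <cstddef>
#include <cstdint>
#include <cstring>

struct TraceContext
{
    std::array<std::uint8_t, 16> traceId;  // 16 bytes
    std::array<std::uint8_t, 8> spanId;    //  8 bytes
    std::uint8_t traceFlags;               //  1 byte
};  // 25 bytes fixed, plus 0-4 bytes of optional trace_state

// Serialize into a caller-provided buffer of at least 25 bytes.
std::size_t
serialize(TraceContext const& ctx, std::uint8_t* out)
{
    std::memcpy(out, ctx.traceId.data(), 16);
    std::memcpy(out + 16, ctx.spanId.data(), 8);
    out[24] = ctx.traceFlags;
    return 25;  // trace_state, if present, would follow here
}
```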
### Mitigation Strategies
```mermaid
flowchart LR
A["Head Sampling<br/>10% default"] --> B["Tail Sampling<br/>Keep errors/slow"] --> C["Batch Export<br/>Reduce I/O"] --> D["Conditional Compile<br/>XRPL_ENABLE_TELEMETRY"]
style A fill:#1565c0,stroke:#0d47a1,color:#fff
style B fill:#2e7d32,stroke:#1b5e20,color:#fff
style C fill:#e65100,stroke:#bf360c,color:#fff
style D fill:#4a148c,stroke:#2e0d57,color:#fff
```
> For a detailed explanation of head vs. tail sampling, see Slide 9.
### Kill Switches (Rollback Options)
1. **Config Disable**: Set `enabled=0` in the config for an instant disable; sampling changes need no restart
2. **Rebuild**: Compile with `XRPL_ENABLE_TELEMETRY=OFF` for zero overhead (instrumentation compiles to no-ops; see the sketch after this list)
3. **Full Revert**: Clean separation allows easy commit reversion
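A minimal sketch of how the compile-time kill switch could work; the `XRPL_ENABLE_TELEMETRY` name comes from the slides, while the macro body is an assumption:

```cpp
// When telemetry is compiled out, the instrumentation macro expands to
// nothing, so traced code paths carry literally zero overhead.
// Sketch only; the real macros would live in a shared header.
#if XRPL_ENABLE_TELEMETRY
#define XRPL_TRACE_SPAN(tracer, name) \
    auto xrplScopedSpan_ = (tracer)->StartSpan(name)
#else
#define XRPL_TRACE_SPAN(tracer, name) ((void)0)
#endif

// Usage at a call site (tracer accessor is hypothetical):
//     XRPL_TRACE_SPAN(telemetry.tracer(), "tx.receive");
```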
---
## Slide 9: Sampling Strategies — Head vs. Tail
> Sampling controls **which traces are recorded and exported**. Without sampling, every operation generates a trace — at 500+ spans/sec, this overwhelms storage and network. Sampling lets you keep the signal, discard the noise.
### Head Sampling (Decision at Start)
The sampling decision is made **when a trace begins**, before any work is done. A random number is generated; if it falls within the configured ratio, the entire trace is recorded. Otherwise, the trace is silently dropped.
```mermaid
flowchart LR
A["New Request<br/>Arrives"] --> B{"Random < 10%?"}
B -->|"Yes (1 in 10)"| C["Record Entire Trace<br/>(all spans)"]
B -->|"No (9 in 10)"| D["Drop Entire Trace<br/>(zero overhead)"]
style C fill:#2e7d32,stroke:#1b5e20,color:#fff
style D fill:#c62828,stroke:#8c2809,color:#fff
style B fill:#1565c0,stroke:#0d47a1,color:#fff
```
| Aspect | Details |
| ----------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Where it runs** | Inside xrpld (SDK-level). Configured via `sampling_ratio` in `xrpld.cfg`. |
| **When the decision happens** | At trace creation time, before the first span is even populated. |
| **How it works** | `sampling_ratio=0.1` means each trace has a 10% probability of being recorded. Dropped traces incur near-zero overhead (no spans created, no attributes set, no export). |
| **Propagation** | Once a trace is sampled, the `trace_flags` field (1 byte in the context header) tells downstream nodes to also sample it. Unsampled traces propagate `trace_flags=0`, so downstream nodes skip them too. |
| **Pros** | Lowest overhead. Simple to configure. Predictable resource usage. |
| **Cons** | **Blind**: it doesn't know whether the trace will be interesting. A rare error or slow consensus round has only a 10% chance of being captured. |
| **Best for** | High-volume, steady-state traffic where most traces look similar (e.g., routine RPC requests). |
**xrpld configuration**:
```ini
[telemetry]
# Record 10% of traces (recommended for production)
sampling_ratio=0.1
```
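Inside the process, `sampling_ratio` would plausibly map onto the SDK's trace-ID-ratio sampler; a startup wiring sketch (exporter and processor construction elided; the `initHeadSampling` helper is hypothetical, the factory calls are from opentelemetry-cpp):

```cpp
// Wire [telemetry] sampling_ratio into the OTel SDK's head sampler.
#include <opentelemetry/sdk/resource/resource.h>
#include <opentelemetry/sdk/trace/processor.h>
#include <opentelemetry/sdk/trace/samplers/trace_id_ratio_factory.h>
#include <opentelemetry/sdk/trace/tracer_provider_factory.h>
#include <opentelemetry/trace/provider.h>

#include <memory>
#include <utility>

void
initHeadSampling(
    double samplingRatio,  // e.g. 0.1 from xrpld.cfg
    std::unique_ptr<opentelemetry::sdk::trace::SpanProcessor> processor)
{
    namespace sdktrace = opentelemetry::sdk::trace;
    auto provider = sdktrace::TracerProviderFactory::Create(
        std::move(processor),
        opentelemetry::sdk::resource::Resource::Create({{"service.name", "xrpld"}}),
        sdktrace::TraceIdRatioBasedSamplerFactory::Create(samplingRatio));
    // Install globally so Provider::GetTracerProvider() returns the SDK provider.
    opentelemetry::trace::Provider::SetTracerProvider(
        opentelemetry::nostd::shared_ptr<opentelemetry::trace::TracerProvider>(
            provider.release()));
}
```

With this installed, a wrapper like the Slide 6 `Telemetry` sketch picks up the sampled provider automatically via `Provider::GetTracerProvider()`.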
### Tail Sampling (Decision at End)
The sampling decision is made **after the trace completes**, based on its actual content: was it slow? Did it error? Was it a consensus round? This requires buffering complete traces before deciding.
```mermaid
flowchart TB
A["All Traces<br/>Buffered (100%)"] --> B["OTel Collector<br/>Evaluates Rules"]
B --> C{"Error?"}
C -->|Yes| K["KEEP"]
C -->|No| D{"Slow?<br/>(>5s consensus,<br/>>1s RPC)"}
D -->|Yes| K
D -->|No| E{"Random < 10%?"}
E -->|Yes| K
E -->|No| F["DROP"]
style K fill:#2e7d32,stroke:#1b5e20,color:#fff
style F fill:#c62828,stroke:#8c2809,color:#fff
style B fill:#1565c0,stroke:#0d47a1,color:#fff
style C fill:#e65100,stroke:#bf360c,color:#fff
style D fill:#e65100,stroke:#bf360c,color:#fff
style E fill:#4a148c,stroke:#2e0d57,color:#fff
```
| Aspect | Details |
| ----------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Where it runs** | In the **OTel Collector** (external process), not inside xrpld. xrpld exports 100% of traces; the Collector decides what to keep. |
| **When the decision happens** | After the Collector has received all spans for a trace (waits `decision_wait=10s` for stragglers). |
| **How it works** | Policy rules evaluate the completed trace: keep all errors, keep slow operations above a threshold, keep all consensus rounds, then probabilistically sample the rest at 10%. |
| **Pros** | **Never misses important traces**. Errors, slow requests, and consensus anomalies are always captured regardless of probability. |
| **Cons** | Higher resource usage: xrpld must export 100% of spans to the Collector, which buffers them in memory before deciding. The Collector needs more RAM (configured via `num_traces` and `decision_wait`). |
| **Best for** | Production troubleshooting where you can't afford to miss errors or anomalies. |
**Collector configuration** (tail sampling rules for xrpld):
```yaml
processors:
tail_sampling:
decision_wait: 10s # Wait for all spans in a trace
num_traces: 100000 # Buffer up to 100K concurrent traces
policies:
- name: errors # Always keep error traces
type: status_code
status_code: { status_codes: [ERROR] }
- name: slow-consensus # Keep consensus rounds >5s
type: latency
latency: { threshold_ms: 5000 }
- name: slow-rpc # Keep slow RPC requests >1s
type: latency
latency: { threshold_ms: 1000 }
- name: probabilistic # Sample 10% of everything else
type: probabilistic
probabilistic: { sampling_percentage: 10 }
```
### Head vs. Tail — Side-by-Side
| | Head Sampling | Tail Sampling |
| ----------------------------- | ---------------------------------------- | ------------------------------------------------ |
| **Decision point** | Trace start (inside xrpld) | Trace end (in OTel Collector) |
| **Knows trace content?** | No (random coin flip) | Yes (evaluates completed trace) |
| **Overhead on xrpld** | Lowest (dropped traces = no-op) | Higher (must export 100% to Collector) |
| **Collector resource usage** | Low (receives only sampled traces) | Higher (buffers all traces before deciding) |
| **Captures all errors?** | No (only if trace was randomly selected) | **Yes** (error policy catches them) |
| **Captures slow operations?** | No (random) | **Yes** (latency policy catches them) |
| **Configuration** | `xrpld.cfg`: `sampling_ratio=0.1` | `otel-collector.yaml`: `tail_sampling` processor |
| **Best for** | High-throughput steady-state | Troubleshooting & anomaly detection |
### Recommended Strategy for xrpld
Use **both** in a layered approach:
```mermaid
flowchart LR
subgraph xrpld["xrpld (Head Sampling)"]
HS["sampling_ratio=1.0<br/>(export everything)"]
end
subgraph collector["OTel Collector (Tail Sampling)"]
TS["Keep: errors + slow + 10% random<br/>Drop: routine traces"]
end
subgraph storage["Backend Storage"]
ST["Only interesting traces<br/>stored long-term"]
end
xrpld -->|"100% of spans"| collector -->|"~15-20% kept"| storage
style xrpld fill:#424242,stroke:#212121,color:#fff
style collector fill:#1565c0,stroke:#0d47a1,color:#fff
style storage fill:#2e7d32,stroke:#1b5e20,color:#fff
```
> **Why this works**: xrpld exports everything (no blind drops), the Collector applies intelligent filtering (keep errors/slow/anomalies, sample the rest), and only ~15-20% of traces reach storage. If Collector resource usage becomes a concern, add head sampling at `sampling_ratio=0.5` to halve the export volume while still giving the Collector enough data for good tail-sampling decisions.
---
## Slide 10: Data Collection & Privacy
### What Data is Collected
| Category | Attributes Collected | Purpose |
| --------------- | ------------------------------------------------------------------------------------ | --------------------------- |
| **Transaction** | `tx.hash`, `tx.type`, `tx.result`, `tx.fee`, `ledger_index` | Trace transaction lifecycle |
| **Consensus** | `round`, `phase`, `mode`, `proposers` (count of proposing validators), `duration_ms` | Analyze consensus timing |
| **RPC** | `command`, `version`, `status`, `duration_ms` | Monitor RPC performance |
| **Peer** | `peer.id` (public key), `latency_ms`, `message.type`, `message.size` | Network topology analysis |
| **Ledger** | `ledger.hash`, `ledger.index`, `close_time`, `tx_count` | Ledger progression tracking |
| **Job** | `job.type`, `queue_ms`, `worker` | JobQueue performance |
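A sketch of how the Transaction-row attributes could be attached to a span (attribute keys come from the table; the function and its parameters are illustrative):

```cpp
// Attach the Transaction-row attributes from the table above to a span.
// Keys mirror the table; values are caller-supplied metadata, never the
// raw transaction payload.
#include <opentelemetry/trace/provider.h>

#include <cstdint>
#include <string>

void
traceTransaction(
    std::string const& txHash,
    std::string const& txType,
    std::string const& txResult,
    std::uint64_t fee,
    std::uint32_t ledgerIndex)
{
    auto tracer =
        opentelemetry::trace::Provider::GetTracerProvider()->GetTracer("xrpld");
    auto span = tracer->StartSpan("tx.receive");
    span->SetAttribute("tx.hash", txHash);
    span->SetAttribute("tx.type", txType);
    span->SetAttribute("tx.result", txResult);
    span->SetAttribute("tx.fee", fee);
    span->SetAttribute("ledger_index", ledgerIndex);
    span->End();
}
```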
### What is NOT Collected (Privacy Guarantees)
```mermaid
flowchart LR
subgraph notCollected["❌ NOT Collected"]
direction LR
A["Private Keys"] ~~~ B["Account Balances"] ~~~ C["Transaction Amounts"]
end
subgraph alsoNot["❌ Also Excluded"]
direction LR
D["IP Addresses<br/>(configurable)"] ~~~ E["Personal Data"] ~~~ F["Raw TX Payloads"]
end
style A fill:#c62828,stroke:#8c2809,color:#fff
style B fill:#c62828,stroke:#8c2809,color:#fff
style C fill:#c62828,stroke:#8c2809,color:#fff
style D fill:#c62828,stroke:#8c2809,color:#fff
style E fill:#c62828,stroke:#8c2809,color:#fff
style F fill:#c62828,stroke:#8c2809,color:#fff
```
**Reading the diagram:**
- **NOT Collected (top row, red)**: Private Keys, Account Balances, and Transaction Amounts are explicitly excluded: these are financial/security-sensitive fields that telemetry never touches.
- **Also Excluded (bottom row, red)**: IP Addresses (configurable per deployment), Personal Data, and Raw TX Payloads are also excluded: these protect operator and user privacy.
- **All-red styling**: Every box is styled in red to visually reinforce that these are hard exclusions, not options: the telemetry system has no code path to collect any of these fields.
- **Two-row layout**: The split between "NOT Collected" and "Also Excluded" distinguishes between financial data (top) and operational/personal data (bottom), making the privacy boundaries clear to auditors.
### Privacy Protection Mechanisms
| Mechanism | Description |
| -------------------------- | ------------------------------------------------------------- |
| **Account Hashing** | `xrpl.tx.account` is hashed at collector level before storage |
| **Configurable Redaction** | Sensitive fields can be excluded via config |
| **Sampling** | Only 10% of traces recorded by default (reduces exposure) |
| **Local Control** | Node operators control what gets exported |
| **No Raw Payloads** | Transaction content is never recorded, only metadata |
> **Key Principle**: Telemetry collects **operational metadata** (timing, counts, hashes) — never **sensitive content** (keys, balances, amounts).
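As an illustration of the account-hashing mechanism, the same guarantee could also be enforced node-side before export; a sketch using OpenSSL's one-shot SHA-256 (the table specifies collector-level hashing, so this is an alternative, not the plan):

```cpp
// Node-side variant of account hashing: hash the account ID before it is
// ever attached to a span, so the raw value never leaves the process.
#include <openssl/sha.h>

#include <cstdio>
#include <string>

std::string
hashedAccountAttribute(std::string const& account)
{
    unsigned char digest[SHA256_DIGEST_LENGTH];
    SHA256(
        reinterpret_cast<unsigned char const*>(account.data()),
        account.size(),
        digest);
    char hex[2 * SHA256_DIGEST_LENGTH + 1];
    for (int i = 0; i < SHA256_DIGEST_LENGTH; ++i)
        std::sprintf(hex + 2 * i, "%02x", digest[i]);
    return hex;
}
```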
---
_End of Presentation_

View File

@@ -8,11 +8,11 @@ The [XRP Ledger](https://xrpl.org/) is a decentralized cryptographic ledger powe
[XRP](https://xrpl.org/xrp.html) is a public, counterparty-free crypto-asset native to the XRP Ledger, and is designed as a gas token for network services and to bridge different currencies. XRP is traded on the open-market and is available for anyone to access. The XRP Ledger was created in 2012 with a finite supply of 100 billion units of XRP.
## xrpld
## rippled
The server software that powers the XRP Ledger is called `xrpld` and is available in this repository under the permissive [ISC open-source license](LICENSE.md). The `xrpld` server software is written primarily in C++ and runs on a variety of platforms. The `xrpld` server software can run in several modes depending on its [configuration](https://xrpl.org/rippled-server-modes.html).
The server software that powers the XRP Ledger is called `rippled` and is available in this repository under the permissive [ISC open-source license](LICENSE.md). The `rippled` server software is written primarily in C++ and runs on a variety of platforms. The `rippled` server software can run in several modes depending on its [configuration](https://xrpl.org/rippled-server-modes.html).
If you are interested in running an **API Server** (including a **Full History Server**), take a look at [Clio](https://github.com/XRPLF/clio). (xrpld Reporting Mode has been replaced by Clio.)
If you are interested in running an **API Server** (including a **Full History Server**), take a look at [Clio](https://github.com/XRPLF/clio). (rippled Reporting Mode has been replaced by Clio.)
### Build from Source
@@ -41,19 +41,19 @@ If you are interested in running an **API Server** (including a **Full History S
Here are some good places to start learning the source code:
- Read the markdown files in the source tree: `src/xrpld/**/*.md`.
- Read the markdown files in the source tree: `src/ripple/**/*.md`.
- Read [the levelization document](.github/scripts/levelization) to get an idea of the internal dependency graph.
- In the big picture, the `main` function constructs an `ApplicationImp` object, which implements the `Application` virtual interface. Almost every component in the application takes an `Application&` parameter in its constructor, typically named `app` and stored as a member variable `app_`. This allows most components to depend on any other component.
### Repository Contents
| Folder | Contents |
| :--------- | :--------------------------------------------- |
| `./bin` | Scripts and data files for XRPL developers. |
| `./Builds` | Platform-specific guides for building `xrpld`. |
| `./docs` | Source documentation files and doxygen config. |
| `./cfg` | Example configuration files. |
| `./src` | Source code. |
| Folder | Contents |
| :--------- | :----------------------------------------------- |
| `./bin` | Scripts and data files for Ripple integrators. |
| `./Builds` | Platform-specific guides for building `rippled`. |
| `./docs` | Source documentation files and doxygen config. |
| `./cfg` | Example configuration files. |
| `./src` | Source code. |
Some of the directories under `src` are external repositories included using
git-subtree. See those directories' README files for more details.

View File

@@ -6,7 +6,7 @@ For more details on operating an XRP Ledger server securely, please visit https:
## Supported Versions
Software constantly evolves. In order to focus resources, we generally only accept vulnerability reports that affect recent and current versions of the software. We always accept reports for issues present in the **master**, **release** or **develop** branches, and with proposed, [open pull requests](https://github.com/XRPLF/rippled/pulls).
Software constantly evolves. In order to focus resources, we only generally only accept vulnerability reports that affect recent and current versions of the software. We always accept reports for issues present in the **master**, **release** or **develop** branches, and with proposed, [open pull requests](https://github.com/ripple/rippled/pulls).
## Identifying and Reporting Vulnerabilities
@@ -22,10 +22,117 @@ Responsible investigation includes, but isn't limited to, the following:
- Not targeting physical security measures, or attempting to use social engineering, spam, distributed denial of service (DDOS) attacks, etc.
- Investigating bugs in a way that makes a reasonable, good faith effort not to be disruptive or harmful to the XRP Ledger and the broader ecosystem.
### Responsible Disclosure
If you discover a vulnerability or potential threat, or if you _think_
you have, please reach out by dropping an email using the contact
information below.
Your report should include the following:
- Your contact information (typically, an email address);
- The description of the vulnerability;
- The attack scenario (if any);
- The steps to reproduce the vulnerability;
- Any other relevant details or artifacts, including code, scripts or patches.
In your email, please describe the issue or potential threat. If possible, include a "repro" (code that can reproduce the issue) or describe the best way to reproduce and replicate the issue. Please make your report as detailed and comprehensive as possible.
For more information on responsible disclosure, please read this [Wikipedia article](https://en.wikipedia.org/wiki/Responsible_disclosure).
## Report Handling Process
Please report the bug directly to us and limit further disclosure. If you want to prove that you knew the bug as of a given time, consider using a cryptographic pre-commitment: hash the content of your report and publish the hash on a medium of your choice (e.g. on Twitter or as a memo in a transaction) as "proof" that you had written the text at a given point in time.
Once we receive a report, we:
1. Assign two people to independently evaluate the report;
2. Consider their recommendations;
3. If action is necessary, formulate a plan to address the issue;
4. Communicate privately with the reporter to explain our plan.
5. Prepare, test and release a version which fixes the issue; and
6. Announce the vulnerability publicly.
We will triage and respond to your disclosure within 24 hours. Beyond that, we will work to analyze the issue in more detail, formulate, develop and test a fix.
While we commit to responding with 24 hours of your initial report with our triage assessment, we cannot guarantee a response time for the remaining steps. We will communicate with you throughout this process, letting you know where we are and keeping you updated on the timeframe.
## Bug Bounty Program
[Ripple](https://ripple.com) is generously sponsoring a bug bounty program for vulnerabilities in [`xrpld`](https://github.com/XRPLF/rippled) (and other related projects, like [`Clio`](https://github.com/XRPLF/clio), [`xrpl.js`](https://github.com/XRPLF/xrpl.js), [`xrpl-py`](https://github.com/XRPLF/xrpl-py), [`xrpl4j`](https://github.com/XRPLF/xrpl4j)).
[Ripple](https://ripple.com) is generously sponsoring a bug bounty program for vulnerabilities in [`rippled`](https://github.com/XRPLF/rippled) (and other related projects, like [`xrpl.js`](https://github.com/XRPLF/xrpl.js), [`xrpl-py`](https://github.com/XRPLF/xrpl-py), [`xrpl4j`](https://github.com/XRPLF/xrpl4j)).
This program allows us to recognize and reward individuals or groups that identify and report bugs.
This program allows us to recognize and reward individuals or groups that identify and report bugs. In summary, in order to qualify for a bounty, the bug must be:
We have partnered with Bugcrowd to manage this program. It is a private program, and security researchers can participate based on invitation. If you need access to the program, please email bugs@ripple.com with your Bugcrowd handle or Bugcrowd registered email, and we will get you added to the program. Once you have been added, please submit vulnerability reports through Bugcrowd, not by email. The detailed bug bounty policy is available on the Bugcrowd website.
1. **In scope**. Only bugs in software under the scope of the program qualify. Currently, that means `rippled`, `xrpl.js`, `xrpl-py`, `xrpl4j`.
2. **Relevant**. A security issue, posing a danger to user funds, privacy, or the operation of the XRP Ledger.
3. **Original and previously unknown**. Bugs that are already known and discussed in public do not qualify. Previously reported bugs, even if publicly unknown, are not eligible.
4. **Specific**. We welcome general security advice or recommendations, but we cannot pay bounties for that.
5. **Fixable**. There has to be something we can do to permanently fix the problem. Note that bugs in other peoples software may still qualify in some cases. For example, if you find a bug in a library that we use which can compromise the security of software that is in scope and we can get it fixed, you may qualify for a bounty.
6. **Unused**. If you use the exploit to attack the XRP Ledger, you do not qualify for a bounty. If you report a vulnerability used in an ongoing or past attack and there is specific, concrete evidence that suggests you are the attacker we reserve the right not to pay a bounty.
The amount paid varies dramatically. Vulnerabilities that are harmless on their own, but could form part of a critical exploit will usually receive a bounty. Full-blown exploits can receive much higher bounties. Please dont hold back partial vulnerabilities while trying to construct a full-blown exploit. We will pay a bounty to anyone who reports a complete chain of vulnerabilities even if they have reported each component of the exploit separately and those vulnerabilities have been fixed in the meantime. However, to qualify for a the full bounty, you must to have been the first to report each of the partial exploits.
### Contacting Us
To report a qualifying bug, please send a detailed report to:
| Email Address | bugs@ripple.com |
| :-----------: | :-------------------------------------------------- |
| Short Key ID | `0xA9F514E0` |
| Long Key ID | `0xD900855AA9F514E0` |
| Fingerprint | `B72C 0654 2F2A E250 2763 A268 D900 855A A9F5 14E0` |
The full PGP key for this address, which is also available on several key servers (e.g. on [keyserver.ubuntu.com](https://keyserver.ubuntu.com)), is:
```
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBGkSZAQBEACprU199OhgdsOsygNjiQV4msuN3vDOUooehL+NwfsGfW79Tbqq
Q2u7uQ3NZjW+M2T4nsDwuhkr7pe7xSReR5W8ssaczvtUyxkvbMClilcgZ2OSCAuC
N9tzJsqOqkwBvXoNXkn//T2jnPz0ZU2wSF+NrEibq5FeuyGdoX3yXXBxq9pW9HzK
HkQll63QSl6BzVSGRQq+B6lGgaZGLwf3mzmIND9Z5VGLNK2jKynyz9z091whNG/M
kV+E7/r/bujHk7WIVId07G5/COTXmSr7kFnNEkd2Umw42dkgfiNKvlmJ9M7c1wLK
KbL9Eb4ADuW6rRc5k4s1e6GT8R4/VPliWbCl9SE32hXH8uTkqVIFZP2eyM5WRRHs
aKzitkQG9UK9gcb0kdgUkxOvvgPHAe5IuZlcHFzU4y0dBbU1VEFWVpiLU0q+IuNw
5BRemeHc59YNsngkmAZ+/9zouoShRusZmC8Wzotv75C2qVBcjijPvmjWAUz0Zunm
Lsr+O71vqHE73pERjD07wuD/ISjiYRYYE/bVrXtXLZijC7qAH4RE3nID+2ojcZyO
/2jMQvt7un56RsGH4UBHi3aBHi9bUoDGCXKiQY981cEuNaOxpou7Mh3x/ONzzSvk
sTV6nl1LOZHykN1JyKwaNbTSAiuyoN+7lOBqbV04DNYAHL88PrT21P83aQARAQAB
tB1SaXBwbGUgTGFicyA8YnVnc0ByaXBwbGUuY29tPokCTgQTAQgAOBYhBLcsBlQv
KuJQJ2OiaNkAhVqp9RTgBQJpEmQEAhsDBQsJCAcCBhUKCQgLAgQWAgMBAh4BAheA
AAoJENkAhVqp9RTgBzgP/i7y+aDWl1maig1XMdyb+o0UGusumFSW4Hmj278wlKVv
usgLPihYgHE0PKrv6WRyKOMC1tQEcYYN93M+OeQ1vFhS2YyURq6RCMmh4zq/awXG
uZbG36OURB5NH8lGBOHiN/7O+nY0CgenBT2JWm+GW3nEOAVOVm4+r5GlpPlv+Dp1
NPBThcKXFMnH73++NpSQoDzTfRYHPxhDAX3jkLi/moXfSanOLlR6l94XNNN0jBHW
Quao0rzf4WSXq9g6AS224xhAA5JyIcFl8TX7hzj5HaFn3VWo3COoDu4U7H+BM0fl
85yqiMQypp7EhN2gxpMMWaHY5TFM85U/bFXFYfEgihZ4/gt4uoIzsNI9jlX7mYvG
KFdDij+oTlRsuOxdIy60B3dKcwOH9nZZCz0SPsN/zlRWgKzK4gDKdGhFkU9OlvPu
94ZqscanoiWKDoZkF96+sjgfjkuHsDK7Lwc1Xi+T4drHG/3aVpkYabXox+lrKB/S
yxZjeqOIQzWPhnLgCaLyvsKo5hxKzL0w3eURu8F3IS7RgOOlljv4M+Me9sEVcdNV
aN3/tQwbaomSX1X5D5YXqhBwC3rU3wXwamsscRTGEpkV+JCX6KUqGP7nWmxCpAly
FL05XuOd5SVHJjXLeuje0JqLUpN514uL+bThWwDbDTdAdwW3oK/2WbXz7IfJRLBj
uQINBGkSZAQBEADdI3SL2F72qkrgFqXWE6HSRBu9bsAvTE5QrRPWk7ux6at537r4
S4sIw2dOwLvbyIrDgKNq3LQ5wCK88NO/NeCOFm4AiCJSl3pJHXYnTDoUxTrrxx+o
vSRI4I3fHEql/MqzgiAb0YUezjgFdh3vYheMPp/309PFbOLhiFqEcx80Mx5h06UH
gDzu1qNj3Ec+31NLic5zwkrAkvFvD54d6bqYR3SEgMau6aYEewpGHbWBi2pLqSi2
lQcAeOFixqGpTwDmAnYR8YtjBYepy0MojEAdTHcQQlOYSDk4q4elG+io2N8vECfU
rD6ORecN48GXdZINYWTAdslrUeanmBdgQrYkSpce8TSghgT9P01SNaXxmyaehVUO
lqI4pcg5G2oojAE8ncNS3TwDtt7daTaTC3bAdr4PXDVAzNAiewjMNZPB7xidkDGQ
Y4W1LxTMXyJVWxehYOH7tsbBRKninlfRnLgYzmtIbNRAAvNcsxU6ihv3AV0WFknN
YbSzotEv1Xq/5wk309x8zCDe+sP0cQicvbXafXmUzPAZzeqFg+VLFn7F9MP1WGlW
B1u7VIvBF1Mp9Nd3EAGBAoLRdRu+0dVWIjPTQuPIuD9cCatJA0wVaKUrjYbBMl88
a12LixNVGeSFS9N7ADHx0/o7GNT6l88YbaLP6zggUHpUD/bR+cDN7vllIQARAQAB
iQI2BBgBCAAgFiEEtywGVC8q4lAnY6Jo2QCFWqn1FOAFAmkSZAQCGwwACgkQ2QCF
Wqn1FOAfAA/8CYq4p0p4bobY20CKEMsZrkBTFJyPDqzFwMeTjgpzqbD7Y3Qq5QCK
OBbvY02GWdiIsNOzKdBxiuam2xYP9WHZj4y7/uWEvT0qlPVmDFu+HXjoJ43oxwFd
CUp2gMuQ4cSL3X94VRJ3BkVL+tgBm8CNY0vnTLLOO3kum/R69VsGJS1JSGUWjNM+
4qwS3mz+73xJu1HmERyN2RZF/DGIZI2PyONQQ6aH85G1Dd2ohu2/DBAkQAMBrPbj
FrbDaBLyFhODxU3kTWqnfLlaElSm2EGdIU2yx7n4BggEa//NZRMm5kyeo4vzhtlQ
YIVUMLAOLZvnEqDnsLKp+22FzNR/O+htBQC4lPywl53oYSALdhz1IQlcAC1ru5KR
XPzhIXV6IIzkcx9xNkEclZxmsuy5ERXyKEmLbIHAlzFmnrldlt2ZgXDtzaorLmxj
klKibxd5tF50qOpOivz+oPtFo7n+HmFa1nlVAMxlDCUdM0pEVeYDKI5zfVwalyhZ
NnjpakdZSXMwgc7NP/hH9buF35hKDp7EckT2y3JNYwHsDdy1icXN2q40XZw5tSIn
zkPWdu3OUY8PISohN6Pw4h0RH4ZmoX97E8sEfmdKaT58U4Hf2aAv5r9IWCSrAVqY
u5jvac29CzQR9Kal0A+8phHAXHNFD83SwzIC0syaT9ficAguwGH8X6Q=
=nGuD
-----END PGP PUBLIC KEY BLOCK-----
```

View File

@@ -1,13 +1,14 @@
#!/bin/bash
if [[ $# -ne 1 || "$1" == "--help" || "$1" == "-h" ]]; then
name=$( basename $0 )
cat <<- USAGE
Usage: $name <username>
if [[ $# -ne 1 || "$1" == "--help" || "$1" == "-h" ]]
then
name=$( basename $0 )
cat <<- USAGE
Usage: $name <username>
Where <username> is the Github username of the upstream repo. e.g. XRPLF
Where <username> is the Github username of the upstream repo. e.g. XRPLF
USAGE
exit 0
exit 0
fi
# Create upstream remotes based on origin
@@ -15,9 +16,10 @@ shift
user="$1"
# Get the origin URL. Expect it be an SSH-style URL
origin=$( git remote get-url origin )
if [[ "${origin}" == "" ]]; then
echo Invalid origin remote >&2
exit 1
if [[ "${origin}" == "" ]]
then
echo Invalid origin remote >&2
exit 1
fi
# echo "Origin: ${origin}"
# Parse the origin
@@ -28,9 +30,11 @@ IFS='@' read sshuser server <<< "${remote}"
# echo "SSHUser: ${sshuser}, Server: ${server}"
IFS='/' read originuser repo <<< "${originpath}"
# echo "Originuser: ${originuser}, Repo: ${repo}"
if [[ "${sshuser}" == "" || "${server}" == "" || "${originuser}" == "" || "${repo}" == "" ]]; then
echo "Can't parse origin URL: ${origin}" >&2
exit 1
if [[ "${sshuser}" == "" || "${server}" == "" || "${originuser}" == ""
|| "${repo}" == "" ]]
then
echo "Can't parse origin URL: ${origin}" >&2
exit 1
fi
upstream="https://${server}/${user}/${repo}"
upstreampush="${remote}:${user}/${repo}"
@@ -38,34 +42,42 @@ upstreamgroup="upstream upstream-push"
current=$( git remote get-url upstream 2>/dev/null )
currentpush=$( git remote get-url upstream-push 2>/dev/null )
currentgroup=$( git config remotes.upstreams )
if [[ "${current}" == "${upstream}" ]]; then
echo "Upstream already set up correctly. Skip"
elif [[ -n "${current}" && "${current}" != "${upstream}" && "${current}" != "${upstreampush}" ]]; then
echo "Upstream already set up as: ${current}. Skip"
if [[ "${current}" == "${upstream}" ]]
then
echo "Upstream already set up correctly. Skip"
elif [[ -n "${current}" && "${current}" != "${upstream}" &&
"${current}" != "${upstreampush}" ]]
then
echo "Upstream already set up as: ${current}. Skip"
else
if [[ "${current}" == "${upstreampush}" ]]; then
echo "Upstream set to dangerous push URL. Update."
_run git remote rename upstream upstream-push || \
_run git remote remove upstream
currentpush=$( git remote get-url upstream-push 2>/dev/null )
fi
_run git remote add upstream "${upstream}"
if [[ "${current}" == "${upstreampush}" ]]
then
echo "Upstream set to dangerous push URL. Update."
_run git remote rename upstream upstream-push || \
_run git remote remove upstream
currentpush=$( git remote get-url upstream-push 2>/dev/null )
fi
_run git remote add upstream "${upstream}"
fi
if [[ "${currentpush}" == "${upstreampush}" ]]; then
echo "upstream-push already set up correctly. Skip"
elif [[ -n "${currentpush}" && "${currentpush}" != "${upstreampush}" ]]; then
echo "upstream-push already set up as: ${currentpush}. Skip"
if [[ "${currentpush}" == "${upstreampush}" ]]
then
echo "upstream-push already set up correctly. Skip"
elif [[ -n "${currentpush}" && "${currentpush}" != "${upstreampush}" ]]
then
echo "upstream-push already set up as: ${currentpush}. Skip"
else
_run git remote add upstream-push "${upstreampush}"
_run git remote add upstream-push "${upstreampush}"
fi
if [[ "${currentgroup}" == "${upstreamgroup}" ]]; then
echo "Upstreams group already set up correctly. Skip"
elif [[ -n "${currentgroup}" && "${currentgroup}" != "${upstreamgroup}" ]]; then
echo "Upstreams group already set up as: ${currentgroup}. Skip"
if [[ "${currentgroup}" == "${upstreamgroup}" ]]
then
echo "Upstreams group already set up correctly. Skip"
elif [[ -n "${currentgroup}" && "${currentgroup}" != "${upstreamgroup}" ]]
then
echo "Upstreams group already set up as: ${currentgroup}. Skip"
else
_run git config --add remotes.upstreams "${upstreamgroup}"
_run git config --add remotes.upstreams "${upstreamgroup}"
fi
_run git fetch --jobs=$(nproc) upstreams

View File

@@ -1,16 +1,17 @@
#!/bin/bash
if [[ $# -lt 3 || "$1" == "--help" || "$1" = "-h" ]]; then
name=$( basename $0 )
cat <<- USAGE
Usage: $name workbranch base/branch user/branch [user/branch [...]]
if [[ $# -lt 3 || "$1" == "--help" || "$1" = "-h" ]]
then
name=$( basename $0 )
cat <<- USAGE
Usage: $name workbranch base/branch user/branch [user/branch [...]]
* workbranch will be created locally from base/branch
* base/branch and user/branch may be specified as user:branch to allow
easy copying from Github PRs
* Remotes for each user must already be set up
* workbranch will be created locally from base/branch
* base/branch and user/branch may be specified as user:branch to allow
easy copying from Github PRs
* Remotes for each user must already be set up
USAGE
exit 0
exit 0
fi
work="$1"
@@ -23,8 +24,9 @@ unset branches[0]
set -e
users=()
for b in "${branches[@]}"; do
users+=( $( echo $b | cut -d/ -f1 ) )
for b in "${branches[@]}"
do
users+=( $( echo $b | cut -d/ -f1 ) )
done
users=( $( printf '%s\n' "${users[@]}" | sort -u ) )
@@ -32,9 +34,10 @@ users=( $( printf '%s\n' "${users[@]}" | sort -u ) )
git fetch --multiple upstreams "${users[@]}"
git checkout -B "$work" --no-track "$base"
for b in "${branches[@]}"; do
git merge --squash "${b}"
git commit -S # Use the commit message provided on the PR
for b in "${branches[@]}"
do
git merge --squash "${b}"
git commit -S # Use the commit message provided on the PR
done
# Make sure the commits look right
@@ -44,11 +47,13 @@ parts=( $( echo $base | sed "s/\// /" ) )
repo="${parts[0]}"
b="${parts[1]}"
push=$repo
if [[ "$push" == "upstream" ]]; then
push="upstream-push"
if [[ "$push" == "upstream" ]]
then
push="upstream-push"
fi
if [[ "$repo" == "upstream" ]]; then
repo="upstreams"
if [[ "$repo" == "upstream" ]]
then
repo="upstreams"
fi
cat << PUSH

View File

@@ -1,16 +1,17 @@
#!/bin/bash
if [[ $# -ne 3 || "$1" == "--help" || "$1" = "-h" ]]; then
name=$( basename $0 )
cat <<- USAGE
Usage: $name workbranch base/branch version
if [[ $# -ne 3 || "$1" == "--help" || "$1" = "-h" ]]
then
name=$( basename $0 )
cat <<- USAGE
Usage: $name workbranch base/branch version
* workbranch will be created locally from base/branch. If it exists,
it will be reused, so make sure you don't overwrite any work.
* base/branch may be specified as user:branch to allow easy copying
from Github PRs.
* workbranch will be created locally from base/branch. If it exists,
it will be reused, so make sure you don't overwrite any work.
* base/branch may be specified as user:branch to allow easy copying
from Github PRs.
USAGE
exit 0
exit 0
fi
work="$1"
@@ -29,9 +30,10 @@ git fetch upstreams
git checkout -B "${work}" --no-track "${base}"
push=$( git rev-parse --abbrev-ref --symbolic-full-name '@{push}' \
2>/dev/null ) || true
if [[ "${push}" != "" ]]; then
echo "Warning: ${push} may already exist."
2>/dev/null ) || true
if [[ "${push}" != "" ]]
then
echo "Warning: ${push} may already exist."
fi
build=$( find -name BuildInfo.cpp )

View File

@@ -1,206 +0,0 @@
#!/usr/bin/env python3
"""Pre-commit hook that runs clang-tidy on changed files using run-clang-tidy."""
from __future__ import annotations
import json
import os
import re
import shutil
import subprocess
import sys
from collections import defaultdict
from pathlib import Path
HEADER_EXTENSIONS = {".h", ".hpp", ".ipp"}
SOURCE_EXTENSIONS = {".cpp"}
INCLUDE_RE = re.compile(r"^\s*#\s*include\s*[<\"]([^>\"]+)[>\"]")
def find_run_clang_tidy() -> str | None:
for candidate in ("run-clang-tidy-21", "run-clang-tidy"):
if path := shutil.which(candidate):
return path
return None
def find_build_dir(repo_root: Path) -> Path | None:
for name in (".build", "build"):
candidate = repo_root / name
if (candidate / "compile_commands.json").exists():
return candidate
return None
def build_include_graph(build_dir: Path, repo_root: Path) -> tuple[dict, set]:
"""
Scan all files reachable from compile_commands.json and build an inverted include graph.
Returns:
inverted: header_path -> set of files that include it
source_files: set of all TU paths from compile_commands.json
"""
with open(build_dir / "compile_commands.json") as f:
db = json.load(f)
source_files = {Path(e["file"]).resolve() for e in db}
include_roots = [repo_root / "include", repo_root / "src"]
inverted: dict[Path, set[Path]] = defaultdict(set)
to_scan: set[Path] = set(source_files)
scanned: set[Path] = set()
while to_scan:
file = to_scan.pop()
if file in scanned or not file.exists():
continue
scanned.add(file)
content = file.read_text()
for line in content.splitlines():
m = INCLUDE_RE.match(line)
if not m:
continue
for root in include_roots:
candidate = (root / m.group(1)).resolve()
if candidate.exists():
inverted[candidate].add(file)
if candidate not in scanned:
to_scan.add(candidate)
break
return inverted, source_files
def find_tus_for_headers(
headers: list[Path],
inverted: dict[Path, set[Path]],
source_files: set[Path],
) -> set[Path]:
"""
For each header, pick one TU that transitively includes it.
Prefers a TU whose stem matches the header's stem, otherwise picks the first found.
"""
result: set[Path] = set()
for header in headers:
preferred: Path | None = None
visited: set[Path] = {header}
stack: list[Path] = [header]
while stack:
h = stack.pop()
for inc in inverted.get(h, ()):
if inc in source_files:
if inc.stem == header.stem:
preferred = inc
break
if preferred is None:
preferred = inc
if inc not in visited:
visited.add(inc)
stack.append(inc)
if preferred is not None and preferred.stem == header.stem:
break
if preferred is not None:
result.add(preferred)
return result
def resolve_files(
input_files: list[str], build_dir: Path, repo_root: Path
) -> list[str]:
"""
Split input into source files and headers. Source files are passed through;
headers are resolved to the TUs that transitively include them.
"""
sources: list[Path] = []
headers: list[Path] = []
for f in input_files:
p = Path(f).resolve()
if p.suffix in SOURCE_EXTENSIONS:
sources.append(p)
elif p.suffix in HEADER_EXTENSIONS:
headers.append(p)
if not headers:
return [str(p) for p in sources]
print(
f"Resolving {len(headers)} header(s) to compilation units...", file=sys.stderr
)
inverted, source_files = build_include_graph(build_dir, repo_root)
tus = find_tus_for_headers(headers, inverted, source_files)
if not tus:
print(
"Warning: no compilation units found that include the modified headers; "
"skipping clang-tidy for headers.",
file=sys.stderr,
)
return sorted({str(p) for p in (*sources, *tus)})
def staged_files(repo_root: Path) -> list[str]:
result = subprocess.run(
["git", "diff", "--staged", "--name-only", "--diff-filter=d"],
capture_output=True,
text=True,
cwd=repo_root,
)
if result.returncode != 0:
print(
"clang-tidy check failed: 'git diff --staged' command failed.",
file=sys.stderr,
)
if result.stderr:
print(result.stderr, file=sys.stderr)
sys.exit(result.returncode or 1)
return [str(repo_root / p) for p in result.stdout.splitlines() if p]
def main():
if not os.environ.get("TIDY"):
return 0
repo_root = Path(__file__).parent.parent
files = staged_files(repo_root)
if not files:
return 0
run_clang_tidy = find_run_clang_tidy()
if not run_clang_tidy:
print(
"clang-tidy check failed: TIDY is enabled but neither "
"'run-clang-tidy-21' nor 'run-clang-tidy' was found in PATH.",
file=sys.stderr,
)
return 1
build_dir = find_build_dir(repo_root)
if not build_dir:
print(
"clang-tidy check failed: no build directory with compile_commands.json found "
"(looked for .build/ and build/)",
file=sys.stderr,
)
return 1
tidy_files = resolve_files(files, build_dir, repo_root)
if not tidy_files:
return 0
result = subprocess.run(
[run_clang_tidy, "-quiet", "-p", str(build_dir), "-fix", "-allow-no-checks"]
+ tidy_files
)
return result.returncode
if __name__ == "__main__":
sys.exit(main())

View File

@@ -1,37 +0,0 @@
#!/usr/bin/env python3
"""
Converts quoted includes (#include "...") to angle-bracket includes
(#include <...>), which is the required style in this project.
Usage: ./bin/pre-commit/fix_include_style.py <file1> <file2> ...
"""
import re
import sys
from pathlib import Path
PATTERN = re.compile(r'^(\s*#include\s*)"([^"]+)"', re.MULTILINE)
def fix_includes(path: Path) -> bool:
original = path.read_text(encoding="utf-8")
fixed = PATTERN.sub(r"\1<\2>", original)
if fixed != original:
path.write_text(fixed, encoding="utf-8")
return False
return True
def main() -> int:
files = [Path(f) for f in sys.argv[1:]]
success = True
for path in files:
success &= fix_includes(path)
return 0 if success else 1
if __name__ == "__main__":
sys.exit(main())

View File

@@ -28,7 +28,7 @@
# https://vl.ripple.com
# https://unl.xrplf.org
# http://127.0.0.1:8000
# file:///etc/opt/xrpld/vl.txt
# file:///etc/opt/ripple/vl.txt
#
# [validator_list_keys]
#
@@ -43,11 +43,11 @@
# ED307A760EE34F2D0CAA103377B1969117C38B8AA0AA1E2A24DAC1F32FC97087ED
#
# The default validator list publishers that the xrpld instance
# The default validator list publishers that the rippled instance
# trusts.
#
# WARNING: Changing these values can cause your xrpld instance to see a
# validated ledger that contradicts other xrpld instances'
# WARNING: Changing these values can cause your rippled instance to see a
# validated ledger that contradicts other rippled instances'
# validated ledgers (aka a ledger fork) if your validator list(s)
# do not sufficiently overlap with the list(s) used by others.
# See: https://arxiv.org/pdf/1802.07242.pdf

View File

@@ -9,7 +9,7 @@
#
# 2. Peer Protocol
#
# 3. XRPL protocol
# 3. Ripple Protocol
#
# 4. HTTPS Client
#
@@ -383,7 +383,7 @@
#
# These settings control security and access attributes of the Peer to Peer
# server section of the xrpld process. Peer Protocol implements the
# XRPL payment protocol. It is over peer connections that transactions
# Ripple Payment protocol. It is over peer connections that transactions
# and validations are passed from to machine to machine, to determine the
# contents of validated ledgers.
#
@@ -406,7 +406,7 @@
#
# [ips]
#
# List of hostnames or ips where the XRPL protocol is served. A default
# List of hostnames or ips where the Ripple protocol is served. A default
# starter list is included in the code and used if no other hostnames are
# available.
#
@@ -435,7 +435,7 @@
# List of IP addresses or hostnames to which xrpld should always attempt to
# maintain peer connections with. This is useful for manually forming private
# networks, for example to configure a validation server that connects to the
# XRPL network through a public-facing server, or for building a set
# Ripple network through a public-facing server, or for building a set
# of cluster peers.
#
# One address or domain names per line is allowed. A port must be specified
@@ -748,8 +748,8 @@
# the folder in which the xrpld.cfg file is located.
#
# Examples:
# /home/username/validators.txt
# C:/home/username/validators.txt
# /home/ripple/validators.txt
# C:/home/ripple/validators.txt
#
# Example content:
# [validators]
@@ -840,7 +840,7 @@
#
# 0: Disable the ledger replay feature [default]
# 1: Enable the ledger replay feature. With this feature enabled, when
# acquiring a ledger from the network, an xrpld node only downloads
# acquiring a ledger from the network, a xrpld node only downloads
# the ledger header and the transactions instead of the whole ledger.
# And the ledger is built by applying the transactions to the parent
# ledger.
@@ -853,7 +853,7 @@
#
# The xrpld server instance uses HTTPS GET requests in a variety of
# circumstances, including but not limited to contacting trusted domains to
# fetch information such as mapping an email address to an XRPL payment
# fetch information such as mapping an email address to a Ripple Payment
# Network address.
#
# [ssl_verify]
@@ -1227,7 +1227,7 @@
#
#----------
#
# The vote settings configure settings for the entire XRPL network.
# The vote settings configure settings for the entire Ripple network.
# While a single instance of xrpld cannot unilaterally enforce network-wide
# settings, these choices become part of the instance's vote during the
# consensus process for each voting ledger.
@@ -1416,12 +1416,6 @@
# in this section to a comma-separated list of the addresses
# of your Clio servers, in order to bypass xrpld's rate limiting.
#
# TLS/SSL can be enabled for gRPC by specifying ssl_cert and ssl_key.
# Both parameters must be provided together. The ssl_cert_chain parameter
# is optional and provides intermediate CA certificates for the certificate
# chain. The ssl_client_ca parameter is optional and enables mutual TLS
# (client certificate verification).
#
# This port is commented out but can be enabled by removing
# the '#' from each corresponding line including the entry under [server]
#
@@ -1471,74 +1465,11 @@ admin = 127.0.0.1
protocol = ws
send_queue_limit = 500
# gRPC TLS/SSL Configuration
#
# The gRPC port supports optional TLS/SSL encryption. When TLS is not
# configured, the gRPC server will accept unencrypted connections.
#
# ssl_cert = <filename>
# ssl_key = <filename>
#
# To enable TLS for gRPC, both ssl_cert and ssl_key must be specified.
# If only one is provided, xrpld will fail to start.
#
# ssl_cert: Path to the server's SSL certificate file in PEM format.
# ssl_key: Path to the server's SSL private key file in PEM format.
#
# When configured, the gRPC server will only accept TLS-encrypted
# connections. Clients must use TLS (secure) channel credentials rather
# than plaintext / insecure connections.
#
# ssl_cert_chain = <filename>
#
# Optional. Path to intermediate CA certificate(s) in PEM format that
# complete the server's certificate chain.
#
# This file should contain the intermediate CA certificate(s) needed
# to build a trust chain from the server certificate (ssl_cert) to a
# root CA that clients trust. Multiple certificates should be
# concatenated in PEM format.
#
# This is needed when your server certificate was signed by an
# intermediate CA rather than directly by a root CA. Without this,
# clients may fail to verify your server certificate.
#
# If not specified, only the server certificate from ssl_cert will be
# presented to clients.
#
# ssl_client_ca = <filename>
#
# Optional. Path to a CA certificate file in PEM format for verifying
# client certificates (mutual TLS / mTLS).
#
# When specified, the gRPC server will verify client certificates
# against this CA. This enables mutual authentication where both the
# server and client verify each other's identity.
#
# This is typically NOT needed for public-facing gRPC servers. Only
# use this if you want to restrict access to clients with valid
# certificates signed by the specified CA.
#
# If not specified, the server will use one-way TLS (server
# authentication only) and will accept connections from any client.
#
[port_grpc]
port = 50051
ip = 127.0.0.1
secure_gateway = 127.0.0.1
# Optional TLS/SSL configuration for gRPC
# To enable TLS, uncomment and configure both ssl_cert and ssl_key:
#ssl_cert = /etc/ssl/certs/grpc-server.crt
#ssl_key = /etc/ssl/private/grpc-server.key
# Optional: Include intermediate CA certificates for complete certificate chain
#ssl_cert_chain = /etc/ssl/certs/grpc-intermediate-ca.crt
# Optional: Enable mutual TLS (client certificate verification)
# Uncomment to require and verify client certificates:
#ssl_client_ca = /etc/ssl/certs/grpc-client-ca.crt
#[port_ws_public]
#port = 6005
#ip = 127.0.0.1

View File

@@ -1,32 +1,29 @@
macro(exclude_from_default target_)
macro (exclude_from_default target_)
set_target_properties(${target_} PROPERTIES EXCLUDE_FROM_ALL ON)
set_target_properties(${target_} PROPERTIES EXCLUDE_FROM_DEFAULT_BUILD ON)
endmacro()
endmacro ()
macro(exclude_if_included target_)
macro (exclude_if_included target_)
get_directory_property(has_parent PARENT_DIRECTORY)
if(has_parent)
if (has_parent)
exclude_from_default(${target_})
endif()
endmacro()
endif ()
endmacro ()
find_package(Git)
function(git_branch branch_val)
if(NOT GIT_FOUND)
function (git_branch branch_val)
if (NOT GIT_FOUND)
return()
endif()
endif ()
set(_branch "")
execute_process(
COMMAND ${GIT_EXECUTABLE} "rev-parse" "--abbrev-ref" "HEAD"
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
RESULT_VARIABLE _git_exit_code
OUTPUT_VARIABLE _temp_branch
OUTPUT_STRIP_TRAILING_WHITESPACE
ERROR_QUIET
)
if(_git_exit_code EQUAL 0)
execute_process(COMMAND ${GIT_EXECUTABLE} "rev-parse" "--abbrev-ref" "HEAD"
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
RESULT_VARIABLE _git_exit_code
OUTPUT_VARIABLE _temp_branch
OUTPUT_STRIP_TRAILING_WHITESPACE ERROR_QUIET)
if (_git_exit_code EQUAL 0)
set(_branch ${_temp_branch})
endif()
endif ()
set(${branch_val} "${_branch}" PARENT_SCOPE)
endfunction()
endfunction ()

View File

@@ -1,62 +1,49 @@
find_program(CCACHE_PATH "ccache")
if(NOT CCACHE_PATH)
if (NOT CCACHE_PATH)
return()
endif()
endif ()
# For Linux and macOS we can use the ccache binary directly.
if(NOT MSVC)
if (NOT MSVC)
set(CMAKE_C_COMPILER_LAUNCHER "${CCACHE_PATH}")
set(CMAKE_CXX_COMPILER_LAUNCHER "${CCACHE_PATH}")
message(STATUS "Found ccache: ${CCACHE_PATH}")
return()
endif()
endif ()
# For Windows more effort is required. The code below is a modified version of
# https://github.com/ccache/ccache/wiki/MS-Visual-Studio#usage-with-cmake.
if("${CCACHE_PATH}" MATCHES "chocolatey")
if ("${CCACHE_PATH}" MATCHES "chocolatey")
message(DEBUG "Ccache path: ${CCACHE_PATH}")
# Chocolatey uses a shim executable that we cannot use directly, in which case we have to find the executable it
# points to. If we cannot find the target executable then we cannot use ccache.
find_program(BASH_PATH "bash")
if(NOT BASH_PATH)
if (NOT BASH_PATH)
message(WARNING "Could not find bash.")
return()
endif()
endif ()
execute_process(
COMMAND
bash -c
"export LC_ALL='en_US.UTF-8'; ${CCACHE_PATH} --shimgen-noop | grep -oP 'path to executable: \\K.+' | head -c -1"
OUTPUT_VARIABLE CCACHE_PATH
)
execute_process(COMMAND bash -c
"export LC_ALL='en_US.UTF-8'; ${CCACHE_PATH} --shimgen-noop | grep -oP 'path to executable: \\K.+' | head -c -1"
OUTPUT_VARIABLE CCACHE_PATH)
if(NOT CCACHE_PATH)
if (NOT CCACHE_PATH)
message(WARNING "Could not find ccache target.")
return()
endif()
endif ()
file(TO_CMAKE_PATH "${CCACHE_PATH}" CCACHE_PATH)
endif()
endif ()
message(STATUS "Found ccache: ${CCACHE_PATH}")
# Tell cmake to use ccache for compiling with Visual Studio.
file(COPY_FILE ${CCACHE_PATH} ${CMAKE_BINARY_DIR}/cl.exe ONLY_IF_DIFFERENT)
set(CMAKE_VS_GLOBALS
"CLToolExe=cl.exe"
"CLToolPath=${CMAKE_BINARY_DIR}"
"TrackFileAccess=false"
"UseMultiToolTask=true"
)
set(CMAKE_VS_GLOBALS "CLToolExe=cl.exe" "CLToolPath=${CMAKE_BINARY_DIR}" "TrackFileAccess=false"
"UseMultiToolTask=true")
# By default Visual Studio generators will use /Zi to capture debug information, which is not compatible with ccache, so
# tell it to use /Z7 instead.
if(MSVC)
foreach(
var_
CMAKE_C_FLAGS_DEBUG
CMAKE_C_FLAGS_RELEASE
CMAKE_CXX_FLAGS_DEBUG
CMAKE_CXX_FLAGS_RELEASE
)
if (MSVC)
foreach (var_ CMAKE_C_FLAGS_DEBUG CMAKE_C_FLAGS_RELEASE CMAKE_CXX_FLAGS_DEBUG CMAKE_CXX_FLAGS_RELEASE)
string(REPLACE "/Zi" "/Z7" ${var_} "${${var_}}")
endforeach()
endif()
endforeach ()
endif ()
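
To make the launcher mechanism above concrete: CMake prefixes every compile command with the launcher, and the /Zi-to-/Z7 rewrite exists because /Zi writes debug information to a shared PDB that ccache cannot cache, while /Z7 embeds it in each object file. A hedged sketch (paths illustrative):

    # With CMAKE_CXX_COMPILER_LAUNCHER set, a compile rule such as
    #   /usr/bin/c++ -O2 -c foo.cpp
    # is invoked as
    #   /usr/bin/ccache /usr/bin/c++ -O2 -c foo.cpp
    # so unchanged translation units are served from the cache.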

View File

@@ -174,45 +174,36 @@ option(CODE_COVERAGE_VERBOSE "Verbose information" FALSE)
# Check prereqs
find_program(GCOVR_PATH gcovr PATHS ${CMAKE_SOURCE_DIR}/scripts/test)
if(DEFINED CODE_COVERAGE_GCOV_TOOL)
if (DEFINED CODE_COVERAGE_GCOV_TOOL)
set(GCOV_TOOL "${CODE_COVERAGE_GCOV_TOOL}")
elseif(DEFINED ENV{CODE_COVERAGE_GCOV_TOOL})
elseif (DEFINED ENV{CODE_COVERAGE_GCOV_TOOL})
set(GCOV_TOOL "$ENV{CODE_COVERAGE_GCOV_TOOL}")
elseif("${CMAKE_CXX_COMPILER_ID}" MATCHES "(Apple)?[Cc]lang")
if(APPLE)
execute_process(
COMMAND xcrun -f llvm-cov
OUTPUT_VARIABLE LLVMCOV_PATH
OUTPUT_STRIP_TRAILING_WHITESPACE
)
else()
elseif ("${CMAKE_CXX_COMPILER_ID}" MATCHES "(Apple)?[Cc]lang")
if (APPLE)
execute_process(COMMAND xcrun -f llvm-cov OUTPUT_VARIABLE LLVMCOV_PATH OUTPUT_STRIP_TRAILING_WHITESPACE)
else ()
find_program(LLVMCOV_PATH llvm-cov)
endif()
if(LLVMCOV_PATH)
endif ()
if (LLVMCOV_PATH)
set(GCOV_TOOL "${LLVMCOV_PATH} gcov")
endif()
elseif("${CMAKE_CXX_COMPILER_ID}" MATCHES "GNU")
endif ()
elseif ("${CMAKE_CXX_COMPILER_ID}" MATCHES "GNU")
find_program(GCOV_PATH gcov)
set(GCOV_TOOL "${GCOV_PATH}")
endif()
endif ()
# Check supported compiler (Clang, GNU and Flang)
get_property(LANGUAGES GLOBAL PROPERTY ENABLED_LANGUAGES)
foreach(LANG ${LANGUAGES})
if("${CMAKE_${LANG}_COMPILER_ID}" MATCHES "(Apple)?[Cc]lang")
if("${CMAKE_${LANG}_COMPILER_VERSION}" VERSION_LESS 3)
message(
FATAL_ERROR
"Clang version must be 3.0.0 or greater! Aborting..."
)
endif()
elseif(
NOT "${CMAKE_${LANG}_COMPILER_ID}" MATCHES "GNU"
AND NOT "${CMAKE_${LANG}_COMPILER_ID}" MATCHES "(LLVM)?[Ff]lang"
)
foreach (LANG ${LANGUAGES})
if ("${CMAKE_${LANG}_COMPILER_ID}" MATCHES "(Apple)?[Cc]lang")
if ("${CMAKE_${LANG}_COMPILER_VERSION}" VERSION_LESS 3)
message(FATAL_ERROR "Clang version must be 3.0.0 or greater! Aborting...")
endif ()
elseif (NOT "${CMAKE_${LANG}_COMPILER_ID}" MATCHES "GNU" AND NOT "${CMAKE_${LANG}_COMPILER_ID}" MATCHES
"(LLVM)?[Ff]lang")
message(FATAL_ERROR "Compiler is not GNU or Flang! Aborting...")
endif()
endforeach()
endif ()
endforeach ()
set(COVERAGE_COMPILER_FLAGS "-g --coverage" CACHE INTERNAL "")
@@ -221,7 +212,7 @@ set(COVERAGE_C_COMPILER_FLAGS "")
set(COVERAGE_CXX_LINKER_FLAGS "")
set(COVERAGE_C_LINKER_FLAGS "")
if(CMAKE_CXX_COMPILER_ID MATCHES "(GNU|Clang)")
if (CMAKE_CXX_COMPILER_ID MATCHES "(GNU|Clang)")
include(CheckCXXCompilerFlag)
include(CheckCCompilerFlag)
include(CheckLinkerFlag)
@@ -232,77 +223,51 @@ if(CMAKE_CXX_COMPILER_ID MATCHES "(GNU|Clang)")
set(COVERAGE_C_LINKER_FLAGS ${COVERAGE_COMPILER_FLAGS})
check_cxx_compiler_flag(-fprofile-abs-path HAVE_cxx_fprofile_abs_path)
if(HAVE_cxx_fprofile_abs_path)
set(COVERAGE_CXX_COMPILER_FLAGS
"${COVERAGE_CXX_COMPILER_FLAGS} -fprofile-abs-path"
)
endif()
if (HAVE_cxx_fprofile_abs_path)
set(COVERAGE_CXX_COMPILER_FLAGS "${COVERAGE_CXX_COMPILER_FLAGS} -fprofile-abs-path")
endif ()
check_c_compiler_flag(-fprofile-abs-path HAVE_c_fprofile_abs_path)
if(HAVE_c_fprofile_abs_path)
set(COVERAGE_C_COMPILER_FLAGS
"${COVERAGE_C_COMPILER_FLAGS} -fprofile-abs-path"
)
endif()
if (HAVE_c_fprofile_abs_path)
set(COVERAGE_C_COMPILER_FLAGS "${COVERAGE_C_COMPILER_FLAGS} -fprofile-abs-path")
endif ()
check_linker_flag(CXX -fprofile-abs-path HAVE_cxx_linker_fprofile_abs_path)
if(HAVE_cxx_linker_fprofile_abs_path)
set(COVERAGE_CXX_LINKER_FLAGS
"${COVERAGE_CXX_LINKER_FLAGS} -fprofile-abs-path"
)
endif()
if (HAVE_cxx_linker_fprofile_abs_path)
set(COVERAGE_CXX_LINKER_FLAGS "${COVERAGE_CXX_LINKER_FLAGS} -fprofile-abs-path")
endif ()
check_linker_flag(C -fprofile-abs-path HAVE_c_linker_fprofile_abs_path)
if(HAVE_c_linker_fprofile_abs_path)
set(COVERAGE_C_LINKER_FLAGS
"${COVERAGE_C_LINKER_FLAGS} -fprofile-abs-path"
)
endif()
if (HAVE_c_linker_fprofile_abs_path)
set(COVERAGE_C_LINKER_FLAGS "${COVERAGE_C_LINKER_FLAGS} -fprofile-abs-path")
endif ()
check_cxx_compiler_flag(-fprofile-update=atomic HAVE_cxx_fprofile_update)
if(HAVE_cxx_fprofile_update)
set(COVERAGE_CXX_COMPILER_FLAGS
"${COVERAGE_CXX_COMPILER_FLAGS} -fprofile-update=atomic"
)
endif()
if (HAVE_cxx_fprofile_update)
set(COVERAGE_CXX_COMPILER_FLAGS "${COVERAGE_CXX_COMPILER_FLAGS} -fprofile-update=atomic")
endif ()
check_c_compiler_flag(-fprofile-update=atomic HAVE_c_fprofile_update)
if(HAVE_c_fprofile_update)
set(COVERAGE_C_COMPILER_FLAGS
"${COVERAGE_C_COMPILER_FLAGS} -fprofile-update=atomic"
)
endif()
if (HAVE_c_fprofile_update)
set(COVERAGE_C_COMPILER_FLAGS "${COVERAGE_C_COMPILER_FLAGS} -fprofile-update=atomic")
endif ()
check_linker_flag(
CXX
-fprofile-update=atomic
HAVE_cxx_linker_fprofile_update
)
if(HAVE_cxx_linker_fprofile_update)
set(COVERAGE_CXX_LINKER_FLAGS
"${COVERAGE_CXX_LINKER_FLAGS} -fprofile-update=atomic"
)
endif()
check_linker_flag(CXX -fprofile-update=atomic HAVE_cxx_linker_fprofile_update)
if (HAVE_cxx_linker_fprofile_update)
set(COVERAGE_CXX_LINKER_FLAGS "${COVERAGE_CXX_LINKER_FLAGS} -fprofile-update=atomic")
endif ()
check_linker_flag(C -fprofile-update=atomic HAVE_c_linker_fprofile_update)
if(HAVE_c_linker_fprofile_update)
set(COVERAGE_C_LINKER_FLAGS
"${COVERAGE_C_LINKER_FLAGS} -fprofile-update=atomic"
)
endif()
endif()
if (HAVE_c_linker_fprofile_update)
set(COVERAGE_C_LINKER_FLAGS "${COVERAGE_C_LINKER_FLAGS} -fprofile-update=atomic")
endif ()
get_property(
GENERATOR_IS_MULTI_CONFIG
GLOBAL
PROPERTY GENERATOR_IS_MULTI_CONFIG
)
if(NOT (CMAKE_BUILD_TYPE STREQUAL "Debug" OR GENERATOR_IS_MULTI_CONFIG))
message(
WARNING
"Code coverage results with an optimised (non-Debug) build may be misleading"
)
endif() # NOT (CMAKE_BUILD_TYPE STREQUAL "Debug" OR GENERATOR_IS_MULTI_CONFIG)
endif ()
get_property(GENERATOR_IS_MULTI_CONFIG GLOBAL PROPERTY GENERATOR_IS_MULTI_CONFIG)
if (NOT (CMAKE_BUILD_TYPE STREQUAL "Debug" OR GENERATOR_IS_MULTI_CONFIG))
message(WARNING "Code coverage results with an optimised (non-Debug) build may be misleading")
endif () # NOT (CMAKE_BUILD_TYPE STREQUAL "Debug" OR GENERATOR_IS_MULTI_CONFIG)
# Defines a target for running and collecting code coverage information
# Builds dependencies, runs the given executable and outputs reports.
@@ -326,167 +291,123 @@ endif() # NOT (CMAKE_BUILD_TYPE STREQUAL "Debug" OR GENERATOR_IS_MULTI_CONFIG)
# )
# The user can set the variable GCOVR_ADDITIONAL_ARGS to supply additional flags to the
# GCOVR command.
function(setup_target_for_coverage_gcovr)
function (setup_target_for_coverage_gcovr)
set(options NONE)
set(oneValueArgs BASE_DIRECTORY NAME FORMAT)
set(multiValueArgs EXCLUDE EXECUTABLE EXECUTABLE_ARGS DEPENDENCIES)
cmake_parse_arguments(
Coverage
"${options}"
"${oneValueArgs}"
"${multiValueArgs}"
${ARGN}
)
cmake_parse_arguments(Coverage "${options}" "${oneValueArgs}" "${multiValueArgs}" ${ARGN})
if(NOT GCOV_TOOL)
if (NOT GCOV_TOOL)
message(FATAL_ERROR "Could not find gcov or llvm-cov tool! Aborting...")
endif()
endif ()
if(NOT GCOVR_PATH)
if (NOT GCOVR_PATH)
message(FATAL_ERROR "Could not find gcovr tool! Aborting...")
endif()
endif ()
# Set base directory (as absolute path), or default to PROJECT_SOURCE_DIR
if(DEFINED Coverage_BASE_DIRECTORY)
if (DEFINED Coverage_BASE_DIRECTORY)
get_filename_component(BASEDIR ${Coverage_BASE_DIRECTORY} ABSOLUTE)
else()
else ()
set(BASEDIR ${PROJECT_SOURCE_DIR})
endif()
endif ()
if(NOT DEFINED Coverage_FORMAT)
if (NOT DEFINED Coverage_FORMAT)
set(Coverage_FORMAT xml)
endif()
endif ()
if(NOT DEFINED Coverage_EXECUTABLE AND DEFINED Coverage_EXECUTABLE_ARGS)
message(
FATAL_ERROR
"EXECUTABLE_ARGS must not be set if EXECUTABLE is not set"
)
endif()
if (NOT DEFINED Coverage_EXECUTABLE AND DEFINED Coverage_EXECUTABLE_ARGS)
message(FATAL_ERROR "EXECUTABLE_ARGS must not be set if EXECUTABLE is not set")
endif ()
if("--output" IN_LIST GCOVR_ADDITIONAL_ARGS)
message(
FATAL_ERROR
"Unsupported --output option detected in GCOVR_ADDITIONAL_ARGS! Aborting..."
)
else()
if(
(Coverage_FORMAT STREQUAL "html-details")
OR (Coverage_FORMAT STREQUAL "html-nested")
)
set(GCOVR_OUTPUT_FILE
${PROJECT_BINARY_DIR}/${Coverage_NAME}/index.html
)
if ("--output" IN_LIST GCOVR_ADDITIONAL_ARGS)
message(FATAL_ERROR "Unsupported --output option detected in GCOVR_ADDITIONAL_ARGS! Aborting...")
else ()
if ((Coverage_FORMAT STREQUAL "html-details") OR (Coverage_FORMAT STREQUAL "html-nested"))
set(GCOVR_OUTPUT_FILE ${PROJECT_BINARY_DIR}/${Coverage_NAME}/index.html)
set(GCOVR_CREATE_FOLDER ${PROJECT_BINARY_DIR}/${Coverage_NAME})
elseif(Coverage_FORMAT STREQUAL "html-single")
elseif (Coverage_FORMAT STREQUAL "html-single")
set(GCOVR_OUTPUT_FILE ${Coverage_NAME}.html)
elseif(
(Coverage_FORMAT STREQUAL "json-summary")
OR (Coverage_FORMAT STREQUAL "json-details")
OR (Coverage_FORMAT STREQUAL "coveralls")
)
elseif ((Coverage_FORMAT STREQUAL "json-summary") OR (Coverage_FORMAT STREQUAL "json-details")
OR (Coverage_FORMAT STREQUAL "coveralls"))
set(GCOVR_OUTPUT_FILE ${Coverage_NAME}.json)
elseif(Coverage_FORMAT STREQUAL "txt")
elseif (Coverage_FORMAT STREQUAL "txt")
set(GCOVR_OUTPUT_FILE ${Coverage_NAME}.txt)
elseif(Coverage_FORMAT STREQUAL "csv")
elseif (Coverage_FORMAT STREQUAL "csv")
set(GCOVR_OUTPUT_FILE ${Coverage_NAME}.csv)
elseif(Coverage_FORMAT STREQUAL "lcov")
elseif (Coverage_FORMAT STREQUAL "lcov")
set(GCOVR_OUTPUT_FILE ${Coverage_NAME}.lcov)
else()
else ()
set(GCOVR_OUTPUT_FILE ${Coverage_NAME}.xml)
endif()
endif()
endif ()
endif ()
if(
(Coverage_FORMAT STREQUAL "cobertura")
OR (Coverage_FORMAT STREQUAL "xml")
)
if ((Coverage_FORMAT STREQUAL "cobertura") OR (Coverage_FORMAT STREQUAL "xml"))
list(APPEND GCOVR_ADDITIONAL_ARGS --cobertura "${GCOVR_OUTPUT_FILE}")
list(APPEND GCOVR_ADDITIONAL_ARGS --cobertura-pretty)
set(Coverage_FORMAT cobertura) # overwrite xml
elseif(Coverage_FORMAT STREQUAL "sonarqube")
elseif (Coverage_FORMAT STREQUAL "sonarqube")
list(APPEND GCOVR_ADDITIONAL_ARGS --sonarqube "${GCOVR_OUTPUT_FILE}")
elseif(Coverage_FORMAT STREQUAL "jacoco")
elseif (Coverage_FORMAT STREQUAL "jacoco")
list(APPEND GCOVR_ADDITIONAL_ARGS --jacoco "${GCOVR_OUTPUT_FILE}")
list(APPEND GCOVR_ADDITIONAL_ARGS --jacoco-pretty)
elseif(Coverage_FORMAT STREQUAL "clover")
elseif (Coverage_FORMAT STREQUAL "clover")
list(APPEND GCOVR_ADDITIONAL_ARGS --clover "${GCOVR_OUTPUT_FILE}")
list(APPEND GCOVR_ADDITIONAL_ARGS --clover-pretty)
elseif(Coverage_FORMAT STREQUAL "lcov")
elseif (Coverage_FORMAT STREQUAL "lcov")
list(APPEND GCOVR_ADDITIONAL_ARGS --lcov "${GCOVR_OUTPUT_FILE}")
elseif(Coverage_FORMAT STREQUAL "json-summary")
elseif (Coverage_FORMAT STREQUAL "json-summary")
list(APPEND GCOVR_ADDITIONAL_ARGS --json-summary "${GCOVR_OUTPUT_FILE}")
list(APPEND GCOVR_ADDITIONAL_ARGS --json-summary-pretty)
elseif(Coverage_FORMAT STREQUAL "json-details")
elseif (Coverage_FORMAT STREQUAL "json-details")
list(APPEND GCOVR_ADDITIONAL_ARGS --json "${GCOVR_OUTPUT_FILE}")
list(APPEND GCOVR_ADDITIONAL_ARGS --json-pretty)
elseif(Coverage_FORMAT STREQUAL "coveralls")
elseif (Coverage_FORMAT STREQUAL "coveralls")
list(APPEND GCOVR_ADDITIONAL_ARGS --coveralls "${GCOVR_OUTPUT_FILE}")
list(APPEND GCOVR_ADDITIONAL_ARGS --coveralls-pretty)
elseif(Coverage_FORMAT STREQUAL "csv")
elseif (Coverage_FORMAT STREQUAL "csv")
list(APPEND GCOVR_ADDITIONAL_ARGS --csv "${GCOVR_OUTPUT_FILE}")
elseif(Coverage_FORMAT STREQUAL "txt")
elseif (Coverage_FORMAT STREQUAL "txt")
list(APPEND GCOVR_ADDITIONAL_ARGS --txt "${GCOVR_OUTPUT_FILE}")
elseif(Coverage_FORMAT STREQUAL "html-single")
elseif (Coverage_FORMAT STREQUAL "html-single")
list(APPEND GCOVR_ADDITIONAL_ARGS --html "${GCOVR_OUTPUT_FILE}")
list(APPEND GCOVR_ADDITIONAL_ARGS --html-self-contained)
elseif(Coverage_FORMAT STREQUAL "html-nested")
elseif (Coverage_FORMAT STREQUAL "html-nested")
list(APPEND GCOVR_ADDITIONAL_ARGS --html-nested "${GCOVR_OUTPUT_FILE}")
elseif(Coverage_FORMAT STREQUAL "html-details")
elseif (Coverage_FORMAT STREQUAL "html-details")
list(APPEND GCOVR_ADDITIONAL_ARGS --html-details "${GCOVR_OUTPUT_FILE}")
else()
message(
FATAL_ERROR
"Unsupported output style ${Coverage_FORMAT}! Aborting..."
)
endif()
else ()
message(FATAL_ERROR "Unsupported output style ${Coverage_FORMAT}! Aborting...")
endif ()
# Collect excludes (CMake 3.4+: Also compute absolute paths)
set(GCOVR_EXCLUDES "")
foreach(
EXCLUDE
${Coverage_EXCLUDE}
${COVERAGE_EXCLUDES}
${COVERAGE_GCOVR_EXCLUDES}
)
if(CMAKE_VERSION VERSION_GREATER 3.4)
get_filename_component(
EXCLUDE
${EXCLUDE}
ABSOLUTE
BASE_DIR ${BASEDIR}
)
endif()
foreach (EXCLUDE ${Coverage_EXCLUDE} ${COVERAGE_EXCLUDES} ${COVERAGE_GCOVR_EXCLUDES})
if (CMAKE_VERSION VERSION_GREATER 3.4)
get_filename_component(EXCLUDE ${EXCLUDE} ABSOLUTE BASE_DIR ${BASEDIR})
endif ()
list(APPEND GCOVR_EXCLUDES "${EXCLUDE}")
endforeach()
endforeach ()
list(REMOVE_DUPLICATES GCOVR_EXCLUDES)
# Combine excludes to several -e arguments
set(GCOVR_EXCLUDE_ARGS "")
foreach(EXCLUDE ${GCOVR_EXCLUDES})
foreach (EXCLUDE ${GCOVR_EXCLUDES})
list(APPEND GCOVR_EXCLUDE_ARGS "-e")
list(APPEND GCOVR_EXCLUDE_ARGS "${EXCLUDE}")
endforeach()
endforeach ()
# Set up commands which will be run to generate coverage data
# If EXECUTABLE is not set, the user is expected to run the tests manually
# before running the coverage target NAME
if(DEFINED Coverage_EXECUTABLE)
set(GCOVR_EXEC_TESTS_CMD
${Coverage_EXECUTABLE}
${Coverage_EXECUTABLE_ARGS}
)
endif()
if (DEFINED Coverage_EXECUTABLE)
set(GCOVR_EXEC_TESTS_CMD ${Coverage_EXECUTABLE} ${Coverage_EXECUTABLE_ARGS})
endif ()
# Create folder
if(DEFINED GCOVR_CREATE_FOLDER)
set(GCOVR_FOLDER_CMD
${CMAKE_COMMAND}
-E
make_directory
${GCOVR_CREATE_FOLDER}
)
endif()
if (DEFINED GCOVR_CREATE_FOLDER)
set(GCOVR_FOLDER_CMD ${CMAKE_COMMAND} -E make_directory ${GCOVR_CREATE_FOLDER})
endif ()
# Running gcovr
set(GCOVR_CMD
@@ -498,95 +419,53 @@ function(setup_target_for_coverage_gcovr)
${BASEDIR}
${GCOVR_ADDITIONAL_ARGS}
${GCOVR_EXCLUDE_ARGS}
--object-directory=${PROJECT_BINARY_DIR}
)
--object-directory=${PROJECT_BINARY_DIR})
if(CODE_COVERAGE_VERBOSE)
if (CODE_COVERAGE_VERBOSE)
message(STATUS "Executed command report")
if(NOT "${GCOVR_EXEC_TESTS_CMD}" STREQUAL "")
if (NOT "${GCOVR_EXEC_TESTS_CMD}" STREQUAL "")
message(STATUS "Command to run tests: ")
string(
REPLACE ";"
" "
GCOVR_EXEC_TESTS_CMD_SPACED
"${GCOVR_EXEC_TESTS_CMD}"
)
string(REPLACE ";" " " GCOVR_EXEC_TESTS_CMD_SPACED "${GCOVR_EXEC_TESTS_CMD}")
message(STATUS "${GCOVR_EXEC_TESTS_CMD_SPACED}")
endif()
endif ()
if(NOT "${GCOVR_FOLDER_CMD}" STREQUAL "")
if (NOT "${GCOVR_FOLDER_CMD}" STREQUAL "")
message(STATUS "Command to create a folder: ")
string(
REPLACE ";"
" "
GCOVR_FOLDER_CMD_SPACED
"${GCOVR_FOLDER_CMD}"
)
string(REPLACE ";" " " GCOVR_FOLDER_CMD_SPACED "${GCOVR_FOLDER_CMD}")
message(STATUS "${GCOVR_FOLDER_CMD_SPACED}")
endif()
endif ()
message(STATUS "Command to generate gcovr coverage data: ")
string(REPLACE ";" " " GCOVR_CMD_SPACED "${GCOVR_CMD}")
message(STATUS "${GCOVR_CMD_SPACED}")
endif()
endif ()
add_custom_target(
${Coverage_NAME}
COMMAND ${GCOVR_EXEC_TESTS_CMD}
COMMAND ${GCOVR_FOLDER_CMD}
COMMAND ${GCOVR_CMD}
BYPRODUCTS ${GCOVR_OUTPUT_FILE}
WORKING_DIRECTORY ${PROJECT_BINARY_DIR}
DEPENDS ${Coverage_DEPENDENCIES}
VERBATIM # Protect arguments to commands
COMMENT "Running gcovr to produce code coverage report."
)
add_custom_target(${Coverage_NAME}
COMMAND ${GCOVR_EXEC_TESTS_CMD}
COMMAND ${GCOVR_FOLDER_CMD}
COMMAND ${GCOVR_CMD}
BYPRODUCTS ${GCOVR_OUTPUT_FILE}
WORKING_DIRECTORY ${PROJECT_BINARY_DIR}
DEPENDS ${Coverage_DEPENDENCIES}
VERBATIM # Protect arguments to commands
COMMENT "Running gcovr to produce code coverage report.")
# Show info where to find the report
add_custom_command(
TARGET ${Coverage_NAME}
POST_BUILD
COMMAND echo
COMMENT
"Code coverage report saved in ${GCOVR_OUTPUT_FILE} formatted as ${Coverage_FORMAT}"
)
endfunction() # setup_target_for_coverage_gcovr
add_custom_command(TARGET ${Coverage_NAME} POST_BUILD COMMAND echo
COMMENT "Code coverage report saved in ${GCOVR_OUTPUT_FILE} formatted as ${Coverage_FORMAT}")
endfunction () # setup_target_for_coverage_gcovr
function(add_code_coverage_to_target name scope)
separate_arguments(
COVERAGE_CXX_COMPILER_FLAGS
NATIVE_COMMAND
"${COVERAGE_CXX_COMPILER_FLAGS}"
)
separate_arguments(
COVERAGE_C_COMPILER_FLAGS
NATIVE_COMMAND
"${COVERAGE_C_COMPILER_FLAGS}"
)
separate_arguments(
COVERAGE_CXX_LINKER_FLAGS
NATIVE_COMMAND
"${COVERAGE_CXX_LINKER_FLAGS}"
)
separate_arguments(
COVERAGE_C_LINKER_FLAGS
NATIVE_COMMAND
"${COVERAGE_C_LINKER_FLAGS}"
)
function (add_code_coverage_to_target name scope)
separate_arguments(COVERAGE_CXX_COMPILER_FLAGS NATIVE_COMMAND "${COVERAGE_CXX_COMPILER_FLAGS}")
separate_arguments(COVERAGE_C_COMPILER_FLAGS NATIVE_COMMAND "${COVERAGE_C_COMPILER_FLAGS}")
separate_arguments(COVERAGE_CXX_LINKER_FLAGS NATIVE_COMMAND "${COVERAGE_CXX_LINKER_FLAGS}")
separate_arguments(COVERAGE_C_LINKER_FLAGS NATIVE_COMMAND "${COVERAGE_C_LINKER_FLAGS}")
# Add compiler options to the target
target_compile_options(
${name}
${scope}
$<$<COMPILE_LANGUAGE:CXX>:${COVERAGE_CXX_COMPILER_FLAGS}>
$<$<COMPILE_LANGUAGE:C>:${COVERAGE_C_COMPILER_FLAGS}>
)
target_compile_options(${name} ${scope} $<$<COMPILE_LANGUAGE:CXX>:${COVERAGE_CXX_COMPILER_FLAGS}>
$<$<COMPILE_LANGUAGE:C>:${COVERAGE_C_COMPILER_FLAGS}>)
target_link_libraries(
${name}
${scope}
$<$<LINK_LANGUAGE:CXX>:${COVERAGE_CXX_LINKER_FLAGS}>
$<$<LINK_LANGUAGE:C>:${COVERAGE_C_LINKER_FLAGS}>
)
endfunction() # add_code_coverage_to_target
target_link_libraries(${name} ${scope} $<$<LINK_LANGUAGE:CXX>:${COVERAGE_CXX_LINKER_FLAGS}>
$<$<LINK_LANGUAGE:C>:${COVERAGE_C_LINKER_FLAGS}>)
endfunction () # add_code_coverage_to_target
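
For reference, a hedged sketch of wiring the two helpers above together in a consumer CMakeLists; the target and format names are illustrative, not taken from this repository:

    add_executable(my_tests tests/main.cpp)
    add_code_coverage_to_target(my_tests PRIVATE)
    setup_target_for_coverage_gcovr(
        NAME coverage
        FORMAT html-details
        EXECUTABLE my_tests
        EXCLUDE "tests"
        DEPENDENCIES my_tests)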

View File

@@ -14,20 +14,20 @@ set(is_gcc FALSE)
set(is_msvc FALSE)
set(is_xcode FALSE)
if(CMAKE_CXX_COMPILER_ID MATCHES ".*Clang") # Clang or AppleClang
if (CMAKE_CXX_COMPILER_ID MATCHES ".*Clang") # Clang or AppleClang
set(is_clang TRUE)
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
elseif (CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
set(is_gcc TRUE)
elseif(MSVC)
elseif (MSVC)
set(is_msvc TRUE)
else()
else ()
message(FATAL_ERROR "Unsupported C++ compiler: ${CMAKE_CXX_COMPILER_ID}")
endif()
endif ()
# Xcode generator detection
if(CMAKE_GENERATOR STREQUAL "Xcode")
if (CMAKE_GENERATOR STREQUAL "Xcode")
set(is_xcode TRUE)
endif()
endif ()
# --------------------------------------------------------------------
# Operating system detection
@@ -36,23 +36,23 @@ set(is_linux FALSE)
set(is_windows FALSE)
set(is_macos FALSE)
if(CMAKE_SYSTEM_NAME STREQUAL "Linux")
if (CMAKE_SYSTEM_NAME STREQUAL "Linux")
set(is_linux TRUE)
elseif(CMAKE_SYSTEM_NAME STREQUAL "Windows")
elseif (CMAKE_SYSTEM_NAME STREQUAL "Windows")
set(is_windows TRUE)
elseif(CMAKE_SYSTEM_NAME STREQUAL "Darwin")
elseif (CMAKE_SYSTEM_NAME STREQUAL "Darwin")
set(is_macos TRUE)
endif()
endif ()
# --------------------------------------------------------------------
# Architecture
# --------------------------------------------------------------------
set(is_amd64 FALSE)
set(is_arm64 FALSE)
if(CMAKE_SYSTEM_PROCESSOR MATCHES "x86_64|AMD64")
if (CMAKE_SYSTEM_PROCESSOR MATCHES "x86_64|AMD64")
set(is_amd64 TRUE)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "aarch64|arm64|ARM64")
elseif (CMAKE_SYSTEM_PROCESSOR MATCHES "aarch64|arm64|ARM64")
set(is_arm64 TRUE)
else()
else ()
message(FATAL_ERROR "Unknown architecture: ${CMAKE_SYSTEM_PROCESSOR}")
endif()
endif ()
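
A short sketch of how the detection flags above are typically consumed downstream (the specific flags are illustrative):

    if (is_msvc)
        add_compile_options(/W4)
    elseif (is_clang OR is_gcc)
        add_compile_options(-Wall -Wextra)
    endif ()
    if (is_linux AND is_arm64)
        message(STATUS "Configuring for Linux/arm64")
    endif ()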

View File

@@ -1,28 +0,0 @@
include_guard()
set(GIT_BUILD_BRANCH "")
set(GIT_COMMIT_HASH "")
find_package(Git)
if(NOT Git_FOUND)
message(WARNING "Git not found. Git branch and commit hash will be empty.")
return()
endif()
set(GIT_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/.git)
execute_process(
COMMAND
${GIT_EXECUTABLE} --git-dir=${GIT_DIRECTORY} rev-parse --abbrev-ref HEAD
OUTPUT_STRIP_TRAILING_WHITESPACE
OUTPUT_VARIABLE GIT_BUILD_BRANCH
)
execute_process(
COMMAND ${GIT_EXECUTABLE} --git-dir=${GIT_DIRECTORY} rev-parse HEAD
OUTPUT_STRIP_TRAILING_WHITESPACE
OUTPUT_VARIABLE GIT_COMMIT_HASH
)
message(STATUS "Git branch: ${GIT_BUILD_BRANCH}")
message(STATUS "Git commit hash: ${GIT_COMMIT_HASH}")

View File

@@ -1,22 +1,13 @@
include(isolate_headers)
function(xrpl_add_test name)
function (xrpl_add_test name)
set(target ${PROJECT_NAME}.test.${name})
file(
GLOB_RECURSE sources
CONFIGURE_DEPENDS
"${CMAKE_CURRENT_SOURCE_DIR}/${name}/*.cpp"
"${CMAKE_CURRENT_SOURCE_DIR}/${name}.cpp"
)
file(GLOB_RECURSE sources CONFIGURE_DEPENDS "${CMAKE_CURRENT_SOURCE_DIR}/${name}/*.cpp"
"${CMAKE_CURRENT_SOURCE_DIR}/${name}.cpp")
add_executable(${target} ${ARGN} ${sources})
isolate_headers(
${target}
"${CMAKE_SOURCE_DIR}"
"${CMAKE_SOURCE_DIR}/tests/${name}"
PRIVATE
)
isolate_headers(${target} "${CMAKE_SOURCE_DIR}" "${CMAKE_SOURCE_DIR}/tests/${name}" PRIVATE)
add_test(NAME ${target} COMMAND ${target})
endfunction()
endfunction ()
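
A hedged call-site sketch (the test name is hypothetical): given sources under tests/basics/, the function above produces an executable and CTest entry named ${PROJECT_NAME}.test.basics:

    xrpl_add_test(basics)
    target_link_libraries(${PROJECT_NAME}.test.basics PRIVATE xrpl.libxrpl)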

View File

@@ -14,42 +14,31 @@ include(XrplSanitizers)
# add a single global dependency on this interface lib
link_libraries(Xrpl::common)
# Respect CMAKE_POSITION_INDEPENDENT_CODE setting (may be set by Conan toolchain)
if(NOT DEFINED CMAKE_POSITION_INDEPENDENT_CODE)
if (NOT DEFINED CMAKE_POSITION_INDEPENDENT_CODE)
set(CMAKE_POSITION_INDEPENDENT_CODE ON)
endif()
set_target_properties(
common
PROPERTIES
INTERFACE_POSITION_INDEPENDENT_CODE ${CMAKE_POSITION_INDEPENDENT_CODE}
)
endif ()
set_target_properties(common PROPERTIES INTERFACE_POSITION_INDEPENDENT_CODE ${CMAKE_POSITION_INDEPENDENT_CODE})
set(CMAKE_CXX_EXTENSIONS OFF)
target_compile_definitions(
common
INTERFACE
$<$<CONFIG:Debug>:DEBUG
_DEBUG>
#[===[
INTERFACE $<$<CONFIG:Debug>:DEBUG
_DEBUG>
#[===[
NOTE: CMake release builds already have NDEBUG defined, so there is no need to add it
explicitly except for the special case of (profile ON) and (assert OFF).
Presumably this is because we don't want profile builds asserting unless
asserts were specifically requested.
]===]
$<$<AND:$<BOOL:${profile}>,$<NOT:$<BOOL:${assert}>>>:NDEBUG>
# TODO: Remove once we have migrated functions from OpenSSL 1.x to 3.x.
OPENSSL_SUPPRESS_DEPRECATED
)
$<$<AND:$<BOOL:${profile}>,$<NOT:$<BOOL:${assert}>>>:NDEBUG>
# TODO: Remove once we have migrated functions from OpenSSL 1.x to 3.x.
OPENSSL_SUPPRESS_DEPRECATED)
if(MSVC)
if (MSVC)
# remove existing exception flag since we set it to -EHa
string(REGEX REPLACE "[-/]EH[a-z]+" "" CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}")
foreach(
var_
CMAKE_C_FLAGS_DEBUG
CMAKE_C_FLAGS_RELEASE
CMAKE_CXX_FLAGS_DEBUG
CMAKE_CXX_FLAGS_RELEASE
)
foreach (var_ CMAKE_C_FLAGS_DEBUG CMAKE_C_FLAGS_RELEASE CMAKE_CXX_FLAGS_DEBUG CMAKE_CXX_FLAGS_RELEASE)
# also remove dynamic runtime
string(REGEX REPLACE "[-/]MD[d]*" " " ${var_} "${${var_}}")
@@ -57,143 +46,117 @@ if(MSVC)
string(REPLACE "/ZI" "/Zi" ${var_} "${${var_}}")
# omit debug info completely under CI (not needed)
if(is_ci)
if (is_ci)
string(REPLACE "/Zi" " " ${var_} "${${var_}}")
string(REPLACE "/Z7" " " ${var_} "${${var_}}")
endif()
endforeach()
endif ()
endforeach ()
target_compile_options(
common
INTERFACE # Increase object file max size
-bigobj
# Floating point behavior
-fp:precise
# __cdecl calling convention
-Gd
# Minimal rebuild: disabled
-Gm-
# Function level linking: disabled
-Gy-
# Multiprocessor compilation
-MP
# pragma omp: disabled
-openmp-
# No error reporting to Internet
-errorReport:none
# Suppress login banner
-nologo
# Disable signed/unsigned comparison warnings
-wd4018
# Disable float to int possible loss of data warnings
-wd4244
# Disable size_t to T possible loss of data warnings
-wd4267
# Disable C4800(int to bool performance)
-wd4800
# Decorated name length exceeded, name was truncated
-wd4503
$<$<COMPILE_LANGUAGE:CXX>:
-EHa
-GR
>
$<$<CONFIG:Release>:-Ox>
$<$<AND:$<COMPILE_LANGUAGE:CXX>,$<CONFIG:Debug>>:
-GS
-Zc:forScope
>
# static runtime
$<$<CONFIG:Debug>:-MTd>
$<$<NOT:$<CONFIG:Debug>>:-MT>
$<$<BOOL:${werr}>:-WX>
)
-bigobj
# Floating point behavior
-fp:precise
# __cdecl calling convention
-Gd
# Minimal rebuild: disabled
-Gm-
# Function level linking: disabled
-Gy-
# Multiprocessor compilation
-MP
# pragma omp: disabled
-openmp-
# No error reporting to Internet
-errorReport:none
# Suppress login banner
-nologo
# Disable signed/unsigned comparison warnings
-wd4018
# Disable float to int possible loss of data warnings
-wd4244
# Disable size_t to T possible loss of data warnings
-wd4267
# Disable C4800(int to bool performance)
-wd4800
# Decorated name length exceeded, name was truncated
-wd4503
$<$<COMPILE_LANGUAGE:CXX>:
-EHa
-GR
>
$<$<CONFIG:Release>:-Ox>
$<$<AND:$<COMPILE_LANGUAGE:CXX>,$<CONFIG:Debug>>:
-GS
-Zc:forScope
>
# static runtime
$<$<CONFIG:Debug>:-MTd>
$<$<NOT:$<CONFIG:Debug>>:-MT>
$<$<BOOL:${werr}>:-WX>)
target_compile_definitions(
common
INTERFACE
_WIN32_WINNT=0x6000
_SCL_SECURE_NO_WARNINGS
_CRT_SECURE_NO_WARNINGS
WIN32_CONSOLE
WIN32_LEAN_AND_MEAN
NOMINMAX
# TODO: Resolve these warnings, don't just silence them
_SILENCE_ALL_CXX17_DEPRECATION_WARNINGS
$<$<AND:$<COMPILE_LANGUAGE:CXX>,$<CONFIG:Debug>>:_CRTDBG_MAP_ALLOC>
)
INTERFACE _WIN32_WINNT=0x6000
_SCL_SECURE_NO_WARNINGS
_CRT_SECURE_NO_WARNINGS
WIN32_CONSOLE
WIN32_LEAN_AND_MEAN
NOMINMAX
# TODO: Resolve these warnings, don't just silence them
_SILENCE_ALL_CXX17_DEPRECATION_WARNINGS
$<$<AND:$<COMPILE_LANGUAGE:CXX>,$<CONFIG:Debug>>:_CRTDBG_MAP_ALLOC>)
target_link_libraries(common INTERFACE -errorreport:none -machine:X64)
else()
else ()
target_compile_options(
common
INTERFACE
-Wall
-Wdeprecated
$<$<BOOL:${is_clang}>:-Wno-deprecated-declarations>
$<$<BOOL:${wextra}>:-Wextra
-Wno-unused-parameter>
$<$<BOOL:${werr}>:-Werror>
-fstack-protector
-Wno-sign-compare
-Wno-unused-but-set-variable
$<$<NOT:$<CONFIG:Debug>>:-fno-strict-aliasing>
# tweak gcc optimization for debug
$<$<AND:$<BOOL:${is_gcc}>,$<CONFIG:Debug>>:-O0>
# Add debug symbols to release config
$<$<CONFIG:Release>:-g>
)
INTERFACE -Wall
-Wdeprecated
$<$<BOOL:${is_clang}>:-Wno-deprecated-declarations>
$<$<BOOL:${wextra}>:-Wextra
-Wno-unused-parameter>
$<$<BOOL:${werr}>:-Werror>
-fstack-protector
-Wno-sign-compare
-Wno-unused-but-set-variable
$<$<NOT:$<CONFIG:Debug>>:-fno-strict-aliasing>
# tweak gcc optimization for debug
$<$<AND:$<BOOL:${is_gcc}>,$<CONFIG:Debug>>:-O0>
# Add debug symbols to release config
$<$<CONFIG:Release>:-g>)
target_link_libraries(
common
INTERFACE
-rdynamic
$<$<BOOL:${is_linux}>:-Wl,-z,relro,-z,now,--build-id>
# link to static libc/c++ iff: * static option set and * NOT APPLE (AppleClang does not support static
# libc/c++) and * NOT SANITIZERS (sanitizers typically don't work with static libc/c++)
$<$<AND:$<BOOL:${static}>,$<NOT:$<BOOL:${APPLE}>>,$<NOT:$<BOOL:${SANITIZERS_ENABLED}>>>:
-static-libstdc++
-static-libgcc
>
)
endif()
INTERFACE -rdynamic
$<$<BOOL:${is_linux}>:-Wl,-z,relro,-z,now,--build-id>
# link to static libc/c++ iff: * static option set and * NOT APPLE (AppleClang does not support static
# libc/c++) and * NOT SANITIZERS (sanitizers typically don't work with static libc/c++)
$<$<AND:$<BOOL:${static}>,$<NOT:$<BOOL:${APPLE}>>,$<NOT:$<BOOL:${SANITIZERS_ENABLED}>>>:
-static-libstdc++
-static-libgcc
>)
endif ()
# Antithesis instrumentation will only be built and deployed using machines running Linux.
if(voidstar)
if(NOT CMAKE_BUILD_TYPE STREQUAL "Debug")
message(
FATAL_ERROR
"Antithesis instrumentation requires Debug build type, aborting..."
)
elseif(NOT is_linux)
message(
FATAL_ERROR
"Antithesis instrumentation requires Linux, aborting..."
)
elseif(
NOT (is_clang AND CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL 16.0)
)
message(
FATAL_ERROR
"Antithesis instrumentation requires Clang version 16 or later, aborting..."
)
endif()
endif()
if (voidstar)
if (NOT CMAKE_BUILD_TYPE STREQUAL "Debug")
message(FATAL_ERROR "Antithesis instrumentation requires Debug build type, aborting...")
elseif (NOT is_linux)
message(FATAL_ERROR "Antithesis instrumentation requires Linux, aborting...")
elseif (NOT (is_clang AND CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL 16.0))
message(FATAL_ERROR "Antithesis instrumentation requires Clang version 16 or later, aborting...")
endif ()
endif ()
if(use_mold)
if (use_mold)
# use mold linker if available
execute_process(
COMMAND ${CMAKE_CXX_COMPILER} -fuse-ld=mold -Wl,--version
ERROR_QUIET
OUTPUT_VARIABLE LD_VERSION
)
if("${LD_VERSION}" MATCHES "mold")
execute_process(COMMAND ${CMAKE_CXX_COMPILER} -fuse-ld=mold -Wl,--version ERROR_QUIET OUTPUT_VARIABLE LD_VERSION)
if ("${LD_VERSION}" MATCHES "mold")
target_link_libraries(common INTERFACE -fuse-ld=mold)
endif()
endif ()
unset(LD_VERSION)
elseif(use_gold AND is_gcc)
elseif (use_gold AND is_gcc)
# use gold linker if available
execute_process(
COMMAND ${CMAKE_CXX_COMPILER} -fuse-ld=gold -Wl,--version
ERROR_QUIET
OUTPUT_VARIABLE LD_VERSION
)
execute_process(COMMAND ${CMAKE_CXX_COMPILER} -fuse-ld=gold -Wl,--version ERROR_QUIET OUTPUT_VARIABLE LD_VERSION)
#[=========================================================[
NOTE: the gold linker inserts -rpath as DT_RUNPATH by
default instead of DT_RPATH, so you might have slightly
@@ -207,37 +170,31 @@ elseif(use_gold AND is_gcc)
disabling would be to figure out all the settings
required to make gold play nicely with jemalloc.
#]=========================================================]
if(("${LD_VERSION}" MATCHES "GNU gold") AND (NOT jemalloc))
if (("${LD_VERSION}" MATCHES "GNU gold") AND (NOT jemalloc))
target_link_libraries(
common
INTERFACE
-fuse-ld=gold
-Wl,--no-as-needed
#[=========================================================[
INTERFACE -fuse-ld=gold
-Wl,--no-as-needed
#[=========================================================[
see https://bugs.launchpad.net/ubuntu/+source/eglibc/+bug/1253638/comments/5
DT_RUNPATH does not work great for transitive
dependencies (of which boost has a few) - so just
switch to DT_RPATH if doing dynamic linking with gold
#]=========================================================]
$<$<NOT:$<BOOL:${static}>>:-Wl,--disable-new-dtags>
)
endif()
$<$<NOT:$<BOOL:${static}>>:-Wl,--disable-new-dtags>)
endif ()
unset(LD_VERSION)
elseif(use_lld)
elseif (use_lld)
# use lld linker if available
execute_process(
COMMAND ${CMAKE_CXX_COMPILER} -fuse-ld=lld -Wl,--version
ERROR_QUIET
OUTPUT_VARIABLE LD_VERSION
)
if("${LD_VERSION}" MATCHES "LLD")
execute_process(COMMAND ${CMAKE_CXX_COMPILER} -fuse-ld=lld -Wl,--version ERROR_QUIET OUTPUT_VARIABLE LD_VERSION)
if ("${LD_VERSION}" MATCHES "LLD")
target_link_libraries(common INTERFACE -fuse-ld=lld)
endif()
endif ()
unset(LD_VERSION)
endif()
endif ()
if(assert)
foreach(var_ CMAKE_C_FLAGS_RELEASE CMAKE_CXX_FLAGS_RELEASE)
if (assert)
foreach (var_ CMAKE_C_FLAGS_RELEASE CMAKE_CXX_FLAGS_RELEASE)
string(REGEX REPLACE "[-/]DNDEBUG" "" ${var_} "${${var_}}")
endforeach()
endif()
endforeach ()
endif ()
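
The closing loop strips -DNDEBUG from the release flag sets when assert is ON, so assertions stay active even in optimized builds. A hedged configure sketch:

    # Example (illustrative): a release build that keeps assertions enabled
    #   cmake -B build -DCMAKE_BUILD_TYPE=Release -Dassert=ON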

52
cmake/XrplConfig.cmake Normal file
View File

@@ -0,0 +1,52 @@
include(CMakeFindDependencyMacro)
# need to represent system dependencies of the lib here
#[=========================================================[
Boost
#]=========================================================]
if (static OR APPLE OR MSVC)
set(Boost_USE_STATIC_LIBS ON)
endif ()
set(Boost_USE_MULTITHREADED ON)
if (static OR MSVC)
set(Boost_USE_STATIC_RUNTIME ON)
else ()
set(Boost_USE_STATIC_RUNTIME OFF)
endif ()
find_dependency(Boost
COMPONENTS
chrono
container
context
coroutine2
date_time
filesystem
program_options
regex
system
thread)
#[=========================================================[
OpenSSL
#]=========================================================]
if (NOT DEFINED OPENSSL_ROOT_DIR)
if (DEFINED ENV{OPENSSL_ROOT})
set(OPENSSL_ROOT_DIR $ENV{OPENSSL_ROOT})
elseif (APPLE)
find_program(homebrew brew)
if (homebrew)
execute_process(COMMAND ${homebrew} --prefix openssl OUTPUT_VARIABLE OPENSSL_ROOT_DIR
OUTPUT_STRIP_TRAILING_WHITESPACE)
endif ()
endif ()
file(TO_CMAKE_PATH "${OPENSSL_ROOT_DIR}" OPENSSL_ROOT_DIR)
endif ()
if (static OR APPLE OR MSVC)
set(OPENSSL_USE_STATIC_LIBS ON)
endif ()
set(OPENSSL_MSVC_STATIC_RT ON)
find_dependency(OpenSSL REQUIRED)
find_dependency(ZLIB)
find_dependency(date)
if (TARGET ZLIB::ZLIB)
set_target_properties(OpenSSL::Crypto PROPERTIES INTERFACE_LINK_LIBRARIES ZLIB::ZLIB)
endif ()
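
A hedged sketch of how a downstream project would consume this package config (Xrpl::xrpl.libxrpl is what the EXPORT/NAMESPACE pairing installed later in this changeset would produce; the exact exported name may differ):

    find_package(Xrpl REQUIRED)
    add_executable(consumer main.cpp)
    target_link_libraries(consumer PRIVATE Xrpl::xrpl.libxrpl)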

View File

@@ -10,44 +10,23 @@ include(target_protobuf_sources)
# so we just build them as a separate library.
add_library(xrpl.libpb)
set_target_properties(xrpl.libpb PROPERTIES UNITY_BUILD OFF)
target_protobuf_sources(
xrpl.libpb
xrpl/proto
LANGUAGE cpp
IMPORT_DIRS include/xrpl/proto
PROTOS include/xrpl/proto/xrpl.proto
)
target_protobuf_sources(xrpl.libpb xrpl/proto LANGUAGE cpp IMPORT_DIRS include/xrpl/proto
PROTOS include/xrpl/proto/xrpl.proto)
file(GLOB_RECURSE protos "include/xrpl/proto/org/*.proto")
target_protobuf_sources(xrpl.libpb xrpl/proto LANGUAGE cpp IMPORT_DIRS include/xrpl/proto PROTOS "${protos}")
target_protobuf_sources(
xrpl.libpb
xrpl/proto
LANGUAGE cpp
IMPORT_DIRS include/xrpl/proto
PROTOS "${protos}"
)
target_protobuf_sources(
xrpl.libpb
xrpl/proto
xrpl.libpb xrpl/proto
LANGUAGE grpc
IMPORT_DIRS include/xrpl/proto
PROTOS "${protos}"
PLUGIN protoc-gen-grpc=$<TARGET_FILE:gRPC::grpc_cpp_plugin>
GENERATE_EXTENSIONS .grpc.pb.h .grpc.pb.cc
)
GENERATE_EXTENSIONS .grpc.pb.h .grpc.pb.cc)
target_compile_options(
xrpl.libpb
PUBLIC
$<$<BOOL:${is_msvc}>:-wd4996>
$<$<BOOL:${is_xcode}>:
--system-header-prefix="google/protobuf"
-Wno-deprecated-dynamic-exception-spec
>
PRIVATE
$<$<BOOL:${is_msvc}>:-wd4065>
$<$<NOT:$<BOOL:${is_msvc}>>:-Wno-deprecated-declarations>
)
xrpl.libpb PUBLIC $<$<BOOL:${is_msvc}>:-wd4996> $<$<BOOL:${is_xcode}>: --system-header-prefix="google/protobuf"
-Wno-deprecated-dynamic-exception-spec >
PRIVATE $<$<BOOL:${is_msvc}>:-wd4065> $<$<NOT:$<BOOL:${is_msvc}>>:-Wno-deprecated-declarations>)
target_link_libraries(xrpl.libpb PUBLIC protobuf::libprotobuf gRPC::grpc++)
@@ -56,21 +35,19 @@ add_library(xrpl.imports.main INTERFACE)
target_link_libraries(
xrpl.imports.main
INTERFACE
absl::random_random
date::date
ed25519::ed25519
LibArchive::LibArchive
OpenSSL::Crypto
Xrpl::boost
Xrpl::libs
Xrpl::opts
Xrpl::syslibs
secp256k1::secp256k1
xrpl.libpb
xxHash::xxhash
$<$<BOOL:${voidstar}>:antithesis-sdk-cpp>
)
INTERFACE absl::random_random
date::date
ed25519::ed25519
LibArchive::LibArchive
OpenSSL::Crypto
Xrpl::boost
Xrpl::libs
Xrpl::opts
Xrpl::syslibs
secp256k1::secp256k1
xrpl.libpb
xxHash::xxhash
$<$<BOOL:${voidstar}>:antithesis-sdk-cpp>)
include(add_module)
include(target_link_modules)
@@ -79,16 +56,6 @@ include(target_link_modules)
add_module(xrpl beast)
target_link_libraries(xrpl.libxrpl.beast PUBLIC xrpl.imports.main)
include(GitInfo)
add_module(xrpl git)
target_compile_definitions(
xrpl.libxrpl.git
PRIVATE
GIT_COMMIT_HASH="${GIT_COMMIT_HASH}"
GIT_BUILD_BRANCH="${GIT_BUILD_BRANCH}"
)
target_link_libraries(xrpl.libxrpl.git PUBLIC xrpl.imports.main)
# Level 02
add_module(xrpl basics)
target_link_libraries(xrpl.libxrpl.basics PUBLIC xrpl.libxrpl.beast)
@@ -102,75 +69,34 @@ target_link_libraries(xrpl.libxrpl.crypto PUBLIC xrpl.libxrpl.basics)
# Level 04
add_module(xrpl protocol)
target_link_libraries(
xrpl.libxrpl.protocol
PUBLIC xrpl.libxrpl.crypto xrpl.libxrpl.git xrpl.libxrpl.json
)
target_link_libraries(xrpl.libxrpl.protocol PUBLIC xrpl.libxrpl.crypto xrpl.libxrpl.json)
# Level 05
add_module(xrpl protocol_autogen)
target_link_libraries(
xrpl.libxrpl.protocol_autogen
PUBLIC xrpl.libxrpl.protocol
)
add_module(xrpl core)
target_link_libraries(xrpl.libxrpl.core PUBLIC xrpl.libxrpl.basics xrpl.libxrpl.json xrpl.libxrpl.protocol)
# Level 06
add_module(xrpl core)
target_link_libraries(
xrpl.libxrpl.core
PUBLIC
xrpl.libxrpl.basics
xrpl.libxrpl.json
xrpl.libxrpl.protocol
xrpl.libxrpl.protocol_autogen
)
# Level 07
add_module(xrpl resource)
target_link_libraries(xrpl.libxrpl.resource PUBLIC xrpl.libxrpl.protocol)
# Level 08
# Level 07
add_module(xrpl net)
target_link_libraries(
xrpl.libxrpl.net
PUBLIC
xrpl.libxrpl.basics
xrpl.libxrpl.json
xrpl.libxrpl.protocol
xrpl.libxrpl.resource
)
target_link_libraries(xrpl.libxrpl.net PUBLIC xrpl.libxrpl.basics xrpl.libxrpl.json xrpl.libxrpl.protocol
xrpl.libxrpl.resource)
add_module(xrpl nodestore)
target_link_libraries(
xrpl.libxrpl.nodestore
PUBLIC xrpl.libxrpl.basics xrpl.libxrpl.json xrpl.libxrpl.protocol
)
target_link_libraries(xrpl.libxrpl.nodestore PUBLIC xrpl.libxrpl.basics xrpl.libxrpl.json xrpl.libxrpl.protocol)
add_module(xrpl shamap)
target_link_libraries(
xrpl.libxrpl.shamap
PUBLIC
xrpl.libxrpl.basics
xrpl.libxrpl.crypto
xrpl.libxrpl.protocol
xrpl.libxrpl.nodestore
)
target_link_libraries(xrpl.libxrpl.shamap PUBLIC xrpl.libxrpl.basics xrpl.libxrpl.crypto xrpl.libxrpl.protocol
xrpl.libxrpl.nodestore)
add_module(xrpl rdb)
target_link_libraries(
xrpl.libxrpl.rdb
PUBLIC xrpl.libxrpl.basics xrpl.libxrpl.core
)
target_link_libraries(xrpl.libxrpl.rdb PUBLIC xrpl.libxrpl.basics xrpl.libxrpl.core)
add_module(xrpl server)
target_link_libraries(
xrpl.libxrpl.server
PUBLIC
xrpl.libxrpl.protocol
xrpl.libxrpl.core
xrpl.libxrpl.rdb
xrpl.libxrpl.resource
)
target_link_libraries(xrpl.libxrpl.server PUBLIC xrpl.libxrpl.protocol xrpl.libxrpl.core xrpl.libxrpl.rdb
xrpl.libxrpl.resource)
add_module(xrpl conditions)
target_link_libraries(xrpl.libxrpl.conditions PUBLIC xrpl.libxrpl.server)
@@ -178,16 +104,13 @@ target_link_libraries(xrpl.libxrpl.conditions PUBLIC xrpl.libxrpl.server)
add_module(xrpl ledger)
target_link_libraries(
xrpl.libxrpl.ledger
PUBLIC
xrpl.libxrpl.basics
xrpl.libxrpl.json
xrpl.libxrpl.protocol
xrpl.libxrpl.protocol_autogen
xrpl.libxrpl.rdb
xrpl.libxrpl.server
xrpl.libxrpl.shamap
xrpl.libxrpl.conditions
)
PUBLIC xrpl.libxrpl.basics
xrpl.libxrpl.json
xrpl.libxrpl.protocol
xrpl.libxrpl.rdb
xrpl.libxrpl.server
xrpl.libxrpl.shamap
xrpl.libxrpl.conditions)
add_module(xrpl tx)
target_link_libraries(xrpl.libxrpl.tx PUBLIC xrpl.libxrpl.ledger)
@@ -197,11 +120,7 @@ set_target_properties(xrpl.libxrpl PROPERTIES OUTPUT_NAME xrpl)
add_library(xrpl::libxrpl ALIAS xrpl.libxrpl)
file(
GLOB_RECURSE sources
CONFIGURE_DEPENDS
"${CMAKE_CURRENT_SOURCE_DIR}/src/libxrpl/*.cpp"
)
file(GLOB_RECURSE sources CONFIGURE_DEPENDS "${CMAKE_CURRENT_SOURCE_DIR}/src/libxrpl/*.cpp")
target_sources(xrpl.libxrpl PRIVATE ${sources})
target_link_modules(
@@ -212,19 +131,16 @@ target_link_modules(
conditions
core
crypto
git
json
ledger
net
nodestore
protocol
protocol_autogen
rdb
resource
server
shamap
tx
)
tx)
# All headers in libxrpl are in modules.
# Uncomment this stanza if you have not yet moved new headers into a module.
@@ -235,51 +151,34 @@ target_link_modules(
# $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>
# $<INSTALL_INTERFACE:include>)
if(xrpld)
if (xrpld)
add_executable(xrpld)
if(tests)
if (tests)
target_compile_definitions(xrpld PUBLIC ENABLE_TESTS)
target_compile_definitions(
xrpld
PRIVATE UNIT_TEST_REFERENCE_FEE=${UNIT_TEST_REFERENCE_FEE}
)
endif()
target_include_directories(
xrpld
PRIVATE $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/src>
)
target_compile_definitions(xrpld PRIVATE UNIT_TEST_REFERENCE_FEE=${UNIT_TEST_REFERENCE_FEE})
endif ()
target_include_directories(xrpld PRIVATE $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/src>)
file(
GLOB_RECURSE sources
CONFIGURE_DEPENDS
"${CMAKE_CURRENT_SOURCE_DIR}/src/xrpld/*.cpp"
)
file(GLOB_RECURSE sources CONFIGURE_DEPENDS "${CMAKE_CURRENT_SOURCE_DIR}/src/xrpld/*.cpp")
target_sources(xrpld PRIVATE ${sources})
if(tests)
file(
GLOB_RECURSE sources
CONFIGURE_DEPENDS
"${CMAKE_CURRENT_SOURCE_DIR}/src/test/*.cpp"
)
if (tests)
file(GLOB_RECURSE sources CONFIGURE_DEPENDS "${CMAKE_CURRENT_SOURCE_DIR}/src/test/*.cpp")
target_sources(xrpld PRIVATE ${sources})
endif()
endif ()
target_link_libraries(xrpld Xrpl::boost Xrpl::opts Xrpl::libs xrpl.libxrpl)
exclude_if_included(xrpld)
# define a macro for tests that might need to
# be excluded or run differently in CI environment
if(is_ci)
if (is_ci)
target_compile_definitions(xrpld PRIVATE XRPL_RUNNING_IN_CI)
endif()
endif ()
if(voidstar)
if (voidstar)
target_compile_options(xrpld PRIVATE -fsanitize-coverage=trace-pc-guard)
# xrpld requires access to the antithesis-sdk-cpp implementation file
# antithesis_instrumentation.h, which is not exported as INTERFACE
target_include_directories(
xrpld
PRIVATE ${CMAKE_SOURCE_DIR}/external/antithesis-sdk
)
endif()
endif()
target_include_directories(xrpld PRIVATE ${CMAKE_SOURCE_DIR}/external/antithesis-sdk)
endif ()
endif ()
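
The "Level" comments earlier in this file encode a strict layering: each add_module call may only link against modules from lower levels, which keeps the module graph acyclic. A hedged sketch of adding a new module on top (the module name is hypothetical):

    add_module(xrpl metrics)
    target_link_libraries(xrpl.libxrpl.metrics PUBLIC xrpl.libxrpl.basics xrpl.libxrpl.json)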

View File

@@ -2,17 +2,14 @@
coverage report target
#]===================================================================]
if(NOT coverage)
if (NOT coverage)
message(FATAL_ERROR "Code coverage not enabled! Aborting ...")
endif()
endif ()
if(CMAKE_CXX_COMPILER_ID MATCHES "MSVC")
message(
WARNING
"Code coverage on Windows is not supported, ignoring 'coverage' flag"
)
if (CMAKE_CXX_COMPILER_ID MATCHES "MSVC")
message(WARNING "Code coverage on Windows is not supported, ignoring 'coverage' flag")
return()
endif()
endif ()
include(ProcessorCount)
ProcessorCount(PROCESSOR_COUNT)
@@ -24,30 +21,32 @@ include(CodeCoverage)
# `CodeCoverage.cmake`)
set(GCOVR_ADDITIONAL_ARGS ${coverage_extra_args})
if(NOT GCOVR_ADDITIONAL_ARGS STREQUAL "")
if (NOT GCOVR_ADDITIONAL_ARGS STREQUAL "")
separate_arguments(GCOVR_ADDITIONAL_ARGS)
endif()
endif ()
list(
APPEND GCOVR_ADDITIONAL_ARGS
--exclude-throw-branches
--exclude-noncode-lines
--exclude-unreachable-branches
-s
-j
${PROCESSOR_COUNT}
)
list(APPEND
GCOVR_ADDITIONAL_ARGS
--exclude-throw-branches
--exclude-noncode-lines
--exclude-unreachable-branches
-s
-j
${PROCESSOR_COUNT})
setup_target_for_coverage_gcovr(
NAME coverage
FORMAT ${coverage_format}
NAME
coverage
FORMAT
${coverage_format}
EXCLUDE
"src/test"
"src/tests"
"include/xrpl/beast/test"
"include/xrpl/beast/unit_test"
"${CMAKE_BINARY_DIR}/pb-xrpl.libpb"
DEPENDENCIES xrpld xrpl.tests
)
"src/test"
"src/tests"
"include/xrpl/beast/test"
"include/xrpl/beast/unit_test"
"${CMAKE_BINARY_DIR}/pb-xrpl.libpb"
DEPENDENCIES
xrpld
xrpl.tests)
add_code_coverage_to_target(opts INTERFACE)
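
With the target defined above, coverage runs as an ordinary build target; a hedged invocation sketch using this repository's cache options:

    # Configure with coverage instrumentation, then build the report:
    #   cmake -B build -Dcoverage=ON -Dcoverage_format=html-details -DCMAKE_BUILD_TYPE=Debug
    #   cmake --build build --target coverage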

View File

@@ -2,90 +2,71 @@
docs target (optional)
#]===================================================================]
if(NOT only_docs)
if (NOT only_docs)
return()
endif()
endif ()
find_package(Doxygen)
if(NOT TARGET Doxygen::doxygen)
if (NOT TARGET Doxygen::doxygen)
message(STATUS "doxygen executable not found -- skipping docs target")
return()
endif()
endif ()
set(doxygen_output_directory "${CMAKE_BINARY_DIR}/docs")
set(doxygen_include_path "${CMAKE_CURRENT_SOURCE_DIR}/src")
set(doxygen_index_file "${doxygen_output_directory}/html/index.html")
set(doxyfile "${CMAKE_CURRENT_SOURCE_DIR}/docs/Doxyfile")
file(
GLOB_RECURSE doxygen_input
docs/*.md
include/*.h
include/*.cpp
include/*.md
src/*.h
src/*.cpp
src/*.md
Builds/*.md
*.md
)
file(GLOB_RECURSE
doxygen_input
docs/*.md
include/*.h
include/*.cpp
include/*.md
src/*.h
src/*.cpp
src/*.md
Builds/*.md
*.md)
list(APPEND doxygen_input external/README.md)
set(dependencies "${doxygen_input}" "${doxyfile}")
function(verbose_find_path variable name)
function (verbose_find_path variable name)
# find_path sets a CACHE variable, so don't try using a "local" variable.
find_path(${variable} "${name}" ${ARGN})
if(NOT ${variable})
if (NOT ${variable})
message(NOTICE "could not find ${name}")
else()
else ()
message(STATUS "found ${name}: ${${variable}}/${name}")
endif()
endfunction()
endif ()
endfunction ()
verbose_find_path(
doxygen_plantuml_jar_path
plantuml.jar
PATH_SUFFIXES share/plantuml
)
verbose_find_path(doxygen_plantuml_jar_path plantuml.jar PATH_SUFFIXES share/plantuml)
verbose_find_path(doxygen_dot_path dot)
# https://en.cppreference.com/w/Cppreference:Archives
# https://stackoverflow.com/questions/60822559/how-to-move-a-file-download-from-configure-step-to-build-step
set(download_script "${CMAKE_BINARY_DIR}/docs/download-cppreference.cmake")
file(
WRITE "${download_script}"
"file(DOWNLOAD \
file(WRITE "${download_script}"
"file(DOWNLOAD \
https://github.com/PeterFeicht/cppreference-doc/releases/download/v20250209/html-book-20250209.zip \
${CMAKE_BINARY_DIR}/docs/cppreference.zip \
EXPECTED_HASH MD5=bda585f72fbca4b817b29a3d5746567b \
)\n \
execute_process( \
COMMAND \"${CMAKE_COMMAND}\" -E tar -xf cppreference.zip \
)\n"
)
)\n")
set(tagfile "${CMAKE_BINARY_DIR}/docs/cppreference-doxygen-web.tag.xml")
add_custom_command(
OUTPUT "${tagfile}"
COMMAND "${CMAKE_COMMAND}" -P "${download_script}"
WORKING_DIRECTORY "${CMAKE_BINARY_DIR}/docs"
)
add_custom_command(OUTPUT "${tagfile}" COMMAND "${CMAKE_COMMAND}" -P "${download_script}"
WORKING_DIRECTORY "${CMAKE_BINARY_DIR}/docs")
set(doxygen_tagfiles "${tagfile}=http://en.cppreference.com/w/")
add_custom_command(
OUTPUT "${doxygen_index_file}"
COMMAND
"${CMAKE_COMMAND}" -E env
"DOXYGEN_OUTPUT_DIRECTORY=${doxygen_output_directory}"
"DOXYGEN_INCLUDE_PATH=${doxygen_include_path}"
"DOXYGEN_TAGFILES=${doxygen_tagfiles}"
"DOXYGEN_PLANTUML_JAR_PATH=${doxygen_plantuml_jar_path}"
"DOXYGEN_DOT_PATH=${doxygen_dot_path}" "${DOXYGEN_EXECUTABLE}"
"${doxyfile}"
COMMAND "${CMAKE_COMMAND}" -E env "DOXYGEN_OUTPUT_DIRECTORY=${doxygen_output_directory}"
"DOXYGEN_INCLUDE_PATH=${doxygen_include_path}" "DOXYGEN_TAGFILES=${doxygen_tagfiles}"
"DOXYGEN_PLANTUML_JAR_PATH=${doxygen_plantuml_jar_path}" "DOXYGEN_DOT_PATH=${doxygen_dot_path}"
"${DOXYGEN_EXECUTABLE}" "${doxyfile}"
WORKING_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}"
DEPENDS "${dependencies}" "${tagfile}"
)
add_custom_target(
docs
DEPENDS "${doxygen_index_file}"
SOURCES "${dependencies}"
)
DEPENDS "${dependencies}" "${tagfile}")
add_custom_target(docs DEPENDS "${doxygen_index_file}" SOURCES "${dependencies}")
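
The docs target is not part of the default build; a hedged sketch of producing the documentation (requires the doxygen executable found above):

    # Configure with -Donly_docs=ON, then:
    #   cmake --build build --target docs
    # The generated entry point is <build>/docs/html/index.html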

View File

@@ -2,38 +2,75 @@
install stuff
#]===================================================================]
include(GNUInstallDirs)
include(create_symbolic_link)
if(is_root_project AND TARGET xrpld)
install(
TARGETS xrpld
RUNTIME DESTINATION "${CMAKE_INSTALL_BINDIR}" COMPONENT runtime
)
# If no suffix is defined for executables (e.g. Windows uses .exe but Linux
# and macOS use none), then explicitly set it to the empty string.
if (NOT DEFINED suffix)
set(suffix "")
endif ()
install(
FILES "${CMAKE_CURRENT_SOURCE_DIR}/cfg/xrpld-example.cfg"
DESTINATION "${CMAKE_INSTALL_SYSCONFDIR}/xrpld"
RENAME xrpld.cfg
COMPONENT runtime
)
install(TARGETS common
opts
xrpl_boost
xrpl_libs
xrpl_syslibs
xrpl.imports.main
xrpl.libpb
xrpl.libxrpl
xrpl.libxrpl.basics
xrpl.libxrpl.beast
xrpl.libxrpl.conditions
xrpl.libxrpl.core
xrpl.libxrpl.crypto
xrpl.libxrpl.json
xrpl.libxrpl.rdb
xrpl.libxrpl.ledger
xrpl.libxrpl.net
xrpl.libxrpl.nodestore
xrpl.libxrpl.protocol
xrpl.libxrpl.resource
xrpl.libxrpl.server
xrpl.libxrpl.shamap
xrpl.libxrpl.tx
antithesis-sdk-cpp
EXPORT XrplExports
LIBRARY DESTINATION lib
ARCHIVE DESTINATION lib
RUNTIME DESTINATION bin
INCLUDES
DESTINATION include)
install(
FILES "${CMAKE_CURRENT_SOURCE_DIR}/cfg/validators-example.txt"
DESTINATION "${CMAKE_INSTALL_SYSCONFDIR}/xrpld"
RENAME validators.txt
COMPONENT runtime
)
endif()
install(DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}/include/xrpl" DESTINATION "${CMAKE_INSTALL_INCLUDEDIR}")
install(
TARGETS xrpl.libpb xrpl.libxrpl
LIBRARY DESTINATION "${CMAKE_INSTALL_LIBDIR}" COMPONENT development
ARCHIVE DESTINATION "${CMAKE_INSTALL_LIBDIR}" COMPONENT development
RUNTIME DESTINATION "${CMAKE_INSTALL_BINDIR}" COMPONENT development
)
install(EXPORT XrplExports FILE XrplTargets.cmake NAMESPACE Xrpl:: DESTINATION lib/cmake/xrpl)
include(CMakePackageConfigHelpers)
write_basic_package_version_file(XrplConfigVersion.cmake VERSION ${xrpld_version} COMPATIBILITY SameMajorVersion)
install(
DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}/include/xrpl"
DESTINATION "${CMAKE_INSTALL_INCLUDEDIR}"
COMPONENT development
)
if (is_root_project AND TARGET xrpld)
install(TARGETS xrpld RUNTIME DESTINATION bin)
set_target_properties(xrpld PROPERTIES INSTALL_RPATH_USE_LINK_PATH ON)
# Sample configs should not overwrite existing files. This is the
# install-if-not-exists workaround suggested by
# https://cmake.org/Bug/view.php?id=12646
install(CODE "
macro (copy_if_not_exists SRC DEST NEWNAME)
if (NOT EXISTS \"\$ENV{DESTDIR}\${CMAKE_INSTALL_PREFIX}/\${DEST}/\${NEWNAME}\")
file (INSTALL FILE_PERMISSIONS OWNER_READ OWNER_WRITE DESTINATION \"\${CMAKE_INSTALL_PREFIX}/\${DEST}\" FILES \"\${SRC}\" RENAME \"\${NEWNAME}\")
else ()
message (\"-- Skipping : \$ENV{DESTDIR}\${CMAKE_INSTALL_PREFIX}/\${DEST}/\${NEWNAME}\")
endif ()
endmacro()
copy_if_not_exists(\"${CMAKE_CURRENT_SOURCE_DIR}/cfg/xrpld-example.cfg\" etc xrpld.cfg)
copy_if_not_exists(\"${CMAKE_CURRENT_SOURCE_DIR}/cfg/validators-example.txt\" etc validators.txt)
")
install(CODE "
set(CMAKE_MODULE_PATH \"${CMAKE_MODULE_PATH}\")
include(create_symbolic_link)
create_symbolic_link(xrpld${suffix} \
\$ENV{DESTDIR}\${CMAKE_INSTALL_PREFIX}/${CMAKE_INSTALL_BINDIR}/rippled${suffix})
")
endif ()
install(FILES ${CMAKE_CURRENT_SOURCE_DIR}/cmake/XrplConfig.cmake ${CMAKE_CURRENT_BINARY_DIR}/XrplConfigVersion.cmake
DESTINATION lib/cmake/xrpl)
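
Taken together, the rules above install headers, libraries, and CMake package files, and for the root project the xrpld binary plus a rippled compatibility symlink, while preserving any existing config files under etc. A hedged sketch (the prefix is illustrative):

    # Example install invocation:
    #   cmake --install build --prefix /opt/xrpl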

View File

@@ -5,55 +5,44 @@
include(CompilationEnv)
# Set defaults for optional variables to avoid uninitialized variable warnings
if(NOT DEFINED voidstar)
if (NOT DEFINED voidstar)
set(voidstar OFF)
endif()
endif ()
add_library(opts INTERFACE)
add_library(Xrpl::opts ALIAS opts)
target_compile_definitions(
opts
INTERFACE
BOOST_ASIO_DISABLE_HANDLER_TYPE_REQUIREMENTS
BOOST_ASIO_USE_TS_EXECUTOR_AS_DEFAULT
BOOST_CONTAINER_FWD_BAD_DEQUE
HAS_UNCAUGHT_EXCEPTIONS=1
$<$<BOOL:${boost_show_deprecated}>:
BOOST_ASIO_NO_DEPRECATED
BOOST_FILESYSTEM_NO_DEPRECATED
>
$<$<NOT:$<BOOL:${boost_show_deprecated}>>:
BOOST_COROUTINES2_NO_DEPRECATION_WARNING
BOOST_BEAST_ALLOW_DEPRECATED
BOOST_FILESYSTEM_DEPRECATED
>
$<$<BOOL:${beast_no_unit_test_inline}>:BEAST_NO_UNIT_TEST_INLINE=1>
$<$<BOOL:${beast_disable_autolink}>:BEAST_DONT_AUTOLINK_TO_WIN32_LIBRARIES=1>
$<$<BOOL:${single_io_service_thread}>:XRPL_SINGLE_IO_SERVICE_THREAD=1>
$<$<BOOL:${voidstar}>:ENABLE_VOIDSTAR>
)
INTERFACE BOOST_ASIO_DISABLE_HANDLER_TYPE_REQUIREMENTS
BOOST_ASIO_USE_TS_EXECUTOR_AS_DEFAULT
BOOST_CONTAINER_FWD_BAD_DEQUE
HAS_UNCAUGHT_EXCEPTIONS=1
$<$<BOOL:${boost_show_deprecated}>:
BOOST_ASIO_NO_DEPRECATED
BOOST_FILESYSTEM_NO_DEPRECATED
>
$<$<NOT:$<BOOL:${boost_show_deprecated}>>:
BOOST_COROUTINES_NO_DEPRECATION_WARNING
BOOST_BEAST_ALLOW_DEPRECATED
BOOST_FILESYSTEM_DEPRECATED
>
$<$<BOOL:${beast_no_unit_test_inline}>:BEAST_NO_UNIT_TEST_INLINE=1>
$<$<BOOL:${beast_disable_autolink}>:BEAST_DONT_AUTOLINK_TO_WIN32_LIBRARIES=1>
$<$<BOOL:${single_io_service_thread}>:XRPL_SINGLE_IO_SERVICE_THREAD=1>
$<$<BOOL:${voidstar}>:ENABLE_VOIDSTAR>)
target_compile_options(
opts
INTERFACE
$<$<AND:$<BOOL:${is_gcc}>,$<COMPILE_LANGUAGE:CXX>>:-Wsuggest-override>
$<$<BOOL:${is_gcc}>:-Wno-maybe-uninitialized>
$<$<BOOL:${perf}>:-fno-omit-frame-pointer>
$<$<BOOL:${profile}>:-pg>
$<$<AND:$<BOOL:${is_gcc}>,$<BOOL:${profile}>>:-p>
)
INTERFACE $<$<AND:$<BOOL:${is_gcc}>,$<COMPILE_LANGUAGE:CXX>>:-Wsuggest-override>
$<$<BOOL:${is_gcc}>:-Wno-maybe-uninitialized> $<$<BOOL:${perf}>:-fno-omit-frame-pointer>
$<$<BOOL:${profile}>:-pg> $<$<AND:$<BOOL:${is_gcc}>,$<BOOL:${profile}>>:-p>)
target_link_libraries(
opts
INTERFACE
$<$<BOOL:${profile}>:-pg>
$<$<AND:$<BOOL:${is_gcc}>,$<BOOL:${profile}>>:-p>
)
target_link_libraries(opts INTERFACE $<$<BOOL:${profile}>:-pg> $<$<AND:$<BOOL:${is_gcc}>,$<BOOL:${profile}>>:-p>)
if(jemalloc)
if (jemalloc)
find_package(jemalloc REQUIRED)
target_compile_definitions(opts INTERFACE PROFILE_JEMALLOC)
target_link_libraries(opts INTERFACE jemalloc::jemalloc)
endif()
endif ()
#[===================================================================[
xrpld transitive library deps via an interface library
@@ -63,33 +52,31 @@ add_library(xrpl_syslibs INTERFACE)
add_library(Xrpl::syslibs ALIAS xrpl_syslibs)
target_link_libraries(
xrpl_syslibs
INTERFACE
$<$<BOOL:${is_msvc}>:
legacy_stdio_definitions.lib
Shlwapi
kernel32
user32
gdi32
winspool
comdlg32
advapi32
shell32
ole32
oleaut32
uuid
odbc32
odbccp32
crypt32
>
$<$<NOT:$<BOOL:${is_msvc}>>:dl>
$<$<NOT:$<OR:$<BOOL:${is_msvc}>,$<BOOL:${is_macos}>>>:rt>
)
INTERFACE $<$<BOOL:${is_msvc}>:
legacy_stdio_definitions.lib
Shlwapi
kernel32
user32
gdi32
winspool
comdlg32
advapi32
shell32
ole32
oleaut32
uuid
odbc32
odbccp32
crypt32
>
$<$<NOT:$<BOOL:${is_msvc}>>:dl>
$<$<NOT:$<OR:$<BOOL:${is_msvc}>,$<BOOL:${is_macos}>>>:rt>)
if(NOT is_msvc)
if (NOT is_msvc)
set(THREADS_PREFER_PTHREAD_FLAG ON)
find_package(Threads)
target_link_libraries(xrpl_syslibs INTERFACE Threads::Threads)
endif()
endif ()
add_library(xrpl_libs INTERFACE)
add_library(Xrpl::libs ALIAS xrpl_libs)
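
The interface libraries above carry no sources, only usage requirements; a hedged consumer sketch (the executable name is hypothetical) mirroring how xrpld links them elsewhere in this changeset:

    add_executable(tool main.cpp)
    target_link_libraries(tool PRIVATE Xrpl::opts Xrpl::syslibs Xrpl::libs)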

View File

@@ -1,146 +0,0 @@
#[===================================================================[
Protocol Autogen - Code generation for protocol wrapper classes
#]===================================================================]
set(CODEGEN_VENV_DIR
"${CMAKE_CURRENT_SOURCE_DIR}/.venv"
CACHE PATH
"Path to a Python virtual environment for code generation. A venv will be created here by setup_code_gen and used to run generation scripts."
)
# Directory paths
set(MACRO_DIR "${CMAKE_CURRENT_SOURCE_DIR}/include/xrpl/protocol/detail")
set(AUTOGEN_HEADER_DIR
"${CMAKE_CURRENT_SOURCE_DIR}/include/xrpl/protocol_autogen"
)
set(AUTOGEN_TEST_DIR
"${CMAKE_CURRENT_SOURCE_DIR}/src/tests/libxrpl/protocol_autogen"
)
set(SCRIPTS_DIR "${CMAKE_CURRENT_SOURCE_DIR}/cmake/scripts/codegen")
# Input macro files
set(TRANSACTIONS_MACRO "${MACRO_DIR}/transactions.macro")
set(LEDGER_ENTRIES_MACRO "${MACRO_DIR}/ledger_entries.macro")
set(SFIELDS_MACRO "${MACRO_DIR}/sfields.macro")
# Python scripts and templates
set(GENERATE_TX_SCRIPT "${SCRIPTS_DIR}/generate_tx_classes.py")
set(GENERATE_LEDGER_SCRIPT "${SCRIPTS_DIR}/generate_ledger_classes.py")
set(REQUIREMENTS_FILE "${SCRIPTS_DIR}/requirements.txt")
set(MACRO_PARSER_COMMON "${SCRIPTS_DIR}/macro_parser_common.py")
set(TX_TEMPLATE "${SCRIPTS_DIR}/templates/Transaction.h.mako")
set(TX_TEST_TEMPLATE "${SCRIPTS_DIR}/templates/TransactionTests.cpp.mako")
set(LEDGER_TEMPLATE "${SCRIPTS_DIR}/templates/LedgerEntry.h.mako")
set(LEDGER_TEST_TEMPLATE "${SCRIPTS_DIR}/templates/LedgerEntryTests.cpp.mako")
set(ALL_INPUT_FILES
"${TRANSACTIONS_MACRO}"
"${LEDGER_ENTRIES_MACRO}"
"${SFIELDS_MACRO}"
"${GENERATE_TX_SCRIPT}"
"${GENERATE_LEDGER_SCRIPT}"
"${REQUIREMENTS_FILE}"
"${MACRO_PARSER_COMMON}"
"${TX_TEMPLATE}"
"${TX_TEST_TEMPLATE}"
"${LEDGER_TEMPLATE}"
"${LEDGER_TEST_TEMPLATE}"
)
# Create output directories
file(MAKE_DIRECTORY "${AUTOGEN_HEADER_DIR}/transactions")
file(MAKE_DIRECTORY "${AUTOGEN_HEADER_DIR}/ledger_entries")
file(MAKE_DIRECTORY "${AUTOGEN_TEST_DIR}/ledger_entries")
file(MAKE_DIRECTORY "${AUTOGEN_TEST_DIR}/transactions")
# Find Python3
if(NOT Python3_EXECUTABLE)
find_package(Python3 COMPONENTS Interpreter QUIET)
endif()
if(NOT Python3_EXECUTABLE)
find_program(Python3_EXECUTABLE NAMES python3 python)
endif()
if(NOT Python3_EXECUTABLE)
message(
WARNING
"Python3 not found. The 'code_gen' and 'setup_code_gen' targets will not be available."
)
return()
endif()
# Warn if pip is configured with a non-default index (may need VPN).
execute_process(
COMMAND ${Python3_EXECUTABLE} -m pip config get global.index-url
OUTPUT_VARIABLE PIP_INDEX_URL
OUTPUT_STRIP_TRAILING_WHITESPACE
ERROR_QUIET
RESULT_VARIABLE PIP_CONFIG_RESULT
)
if(PIP_CONFIG_RESULT EQUAL 0 AND PIP_INDEX_URL)
if(
NOT PIP_INDEX_URL STREQUAL "https://pypi.org/simple"
AND NOT PIP_INDEX_URL STREQUAL "https://pypi.python.org/simple"
)
message(
WARNING
"Private pip index URL detected: ${PIP_INDEX_URL}\n"
"You may need to connect to VPN to access this URL."
)
endif()
endif()
# Determine which Python interpreter to use for code generation.
if(CODEGEN_VENV_DIR)
if(WIN32)
set(CODEGEN_PYTHON "${CODEGEN_VENV_DIR}/Scripts/python.exe")
else()
set(CODEGEN_PYTHON "${CODEGEN_VENV_DIR}/bin/python")
endif()
else()
set(CODEGEN_PYTHON "${Python3_EXECUTABLE}")
message(
WARNING
"CODEGEN_VENV_DIR is not set. Dependencies will be installed globally.\n"
"If this is not intended, reconfigure with:\n"
" cmake . -UCODEGEN_VENV_DIR"
)
endif()
# Custom target to create a venv and install Python dependencies.
# Run manually with: cmake --build . --target setup_code_gen
if(CODEGEN_VENV_DIR)
add_custom_target(
setup_code_gen
COMMAND ${Python3_EXECUTABLE} -m venv "${CODEGEN_VENV_DIR}"
COMMAND ${CODEGEN_PYTHON} -m pip install -r "${REQUIREMENTS_FILE}"
WORKING_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}"
COMMENT "Creating venv and installing code generation dependencies..."
)
else()
add_custom_target(
setup_code_gen
COMMAND ${Python3_EXECUTABLE} -m pip install -r "${REQUIREMENTS_FILE}"
WORKING_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}"
COMMENT "Installing code generation dependencies..."
)
endif()
# Custom target for code generation, excluded from ALL.
# Run manually with: cmake --build . --target code_gen
add_custom_target(
code_gen
COMMAND
${CMAKE_COMMAND} -DCODEGEN_PYTHON=${CODEGEN_PYTHON}
-DGENERATE_TX_SCRIPT=${GENERATE_TX_SCRIPT}
-DGENERATE_LEDGER_SCRIPT=${GENERATE_LEDGER_SCRIPT}
-DTRANSACTIONS_MACRO=${TRANSACTIONS_MACRO}
-DLEDGER_ENTRIES_MACRO=${LEDGER_ENTRIES_MACRO}
-DSFIELDS_MACRO=${SFIELDS_MACRO}
-DAUTOGEN_HEADER_DIR=${AUTOGEN_HEADER_DIR}
-DAUTOGEN_TEST_DIR=${AUTOGEN_TEST_DIR} -P
"${CMAKE_CURRENT_SOURCE_DIR}/cmake/XrplProtocolAutogenRun.cmake"
WORKING_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}"
COMMENT "Running protocol code generation..."
SOURCES ${ALL_INPUT_FILES}
)


@@ -1,39 +0,0 @@
#[===================================================================[
Protocol Autogen - Run script invoked by the 'code_gen' target
#]===================================================================]
# Generate transaction classes.
execute_process(
COMMAND
${CODEGEN_PYTHON} "${GENERATE_TX_SCRIPT}" "${TRANSACTIONS_MACRO}"
--header-dir "${AUTOGEN_HEADER_DIR}/transactions" --test-dir
"${AUTOGEN_TEST_DIR}/transactions" --sfields-macro "${SFIELDS_MACRO}"
RESULT_VARIABLE TX_RESULT
OUTPUT_VARIABLE TX_OUTPUT
ERROR_VARIABLE TX_ERROR
)
if(NOT TX_RESULT EQUAL 0)
message(
FATAL_ERROR
"Transaction code generation failed:\n${TX_OUTPUT}\n${TX_ERROR}\n${TX_RESULT}"
)
endif()
# Generate ledger entry classes.
execute_process(
COMMAND
${CODEGEN_PYTHON} "${GENERATE_LEDGER_SCRIPT}" "${LEDGER_ENTRIES_MACRO}"
--header-dir "${AUTOGEN_HEADER_DIR}/ledger_entries" --test-dir
"${AUTOGEN_TEST_DIR}/ledger_entries" --sfields-macro "${SFIELDS_MACRO}"
RESULT_VARIABLE LEDGER_RESULT
OUTPUT_VARIABLE LEDGER_OUTPUT
ERROR_VARIABLE LEDGER_ERROR
)
if(NOT LEDGER_RESULT EQUAL 0)
message(
FATAL_ERROR
"Ledger entry code generation failed:\n${LEDGER_OUTPUT}\n${LEDGER_ERROR}\n${TX_RESULT}"
)
endif()
message(STATUS "Protocol autogen: code generation complete")


@@ -44,26 +44,23 @@ include(CompilationEnv)
# Read environment variable
set(SANITIZERS "")
if(DEFINED ENV{SANITIZERS})
if (DEFINED ENV{SANITIZERS})
set(SANITIZERS "$ENV{SANITIZERS}")
endif()
endif ()
# Set SANITIZERS_ENABLED flag for use in other modules
if(SANITIZERS MATCHES "address|thread|undefinedbehavior")
if (SANITIZERS MATCHES "address|thread|undefinedbehavior")
set(SANITIZERS_ENABLED TRUE)
else()
else ()
set(SANITIZERS_ENABLED FALSE)
return()
endif()
endif ()
# Sanitizers are not supported on Windows/MSVC
if(is_msvc)
message(
FATAL_ERROR
"Sanitizers are not supported on Windows/MSVC. "
"Please unset the SANITIZERS environment variable."
)
endif()
if (is_msvc)
message(FATAL_ERROR "Sanitizers are not supported on Windows/MSVC. "
"Please unset the SANITIZERS environment variable.")
endif ()
message(STATUS "Configuring sanitizers: ${SANITIZERS}")
@@ -77,30 +74,24 @@ set(san_list "${SANITIZERS}")
string(REPLACE "," ";" san_list "${san_list}")
separate_arguments(san_list)
foreach(san IN LISTS san_list)
if(san STREQUAL "address")
foreach (san IN LISTS san_list)
if (san STREQUAL "address")
set(enable_asan TRUE)
elseif(san STREQUAL "thread")
elseif (san STREQUAL "thread")
set(enable_tsan TRUE)
elseif(san STREQUAL "undefinedbehavior")
elseif (san STREQUAL "undefinedbehavior")
set(enable_ubsan TRUE)
else()
message(
FATAL_ERROR
"Unsupported sanitizer type: ${san}"
"Supported: address, thread, undefinedbehavior and their combinations."
)
endif()
endforeach()
else ()
message(FATAL_ERROR "Unsupported sanitizer type: ${san}"
"Supported: address, thread, undefinedbehavior and their combinations.")
endif ()
endforeach ()
# Validate sanitizer compatibility
if(enable_asan AND enable_tsan)
message(
FATAL_ERROR
"AddressSanitizer and ThreadSanitizer are incompatible and cannot be enabled simultaneously. "
"Use 'address' or 'thread', optionally with 'undefinedbehavior'."
)
endif()
if (enable_asan AND enable_tsan)
message(FATAL_ERROR "AddressSanitizer and ThreadSanitizer are incompatible and cannot be enabled simultaneously. "
"Use 'address' or 'thread', optionally with 'undefinedbehavior'.")
endif ()
# Frame pointer is required for meaningful stack traces. Sanitizers recommend a minimum of -O1 for reasonable performance.
set(SANITIZERS_COMPILE_FLAGS "-fno-omit-frame-pointer" "-O1")
@@ -108,79 +99,66 @@ set(SANITIZERS_COMPILE_FLAGS "-fno-omit-frame-pointer" "-O1")
# Build the sanitizer flags list
set(SANITIZER_TYPES)
if(enable_asan)
if (enable_asan)
list(APPEND SANITIZER_TYPES "address")
elseif(enable_tsan)
elseif (enable_tsan)
list(APPEND SANITIZER_TYPES "thread")
endif()
endif ()
if(enable_ubsan)
if (enable_ubsan)
# UB sanitizer flags
list(APPEND SANITIZER_TYPES "undefined" "float-divide-by-zero")
if(is_clang)
if (is_clang)
# Clang supports additional UB checks. More info here
# https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html
list(APPEND SANITIZER_TYPES "unsigned-integer-overflow")
endif()
endif()
endif ()
endif ()
# Configure the code model for GCC on amd64. Use the large code model for ASAN to avoid relocation errors, and the
# medium code model for TSAN (large is not compatible with TSAN).
set(SANITIZERS_RELOCATION_FLAGS)
# Compiler-specific configuration
if(is_gcc)
if (is_gcc)
# Disable the mold, gold, and lld linkers for GCC with sanitizers and use the default linker (bfd/ld), which is more
# lenient with mixed code models. This is needed since the size of the instrumented binary exceeds the limits set by
# the mold, lld, and gold linkers.
set(use_mold OFF CACHE BOOL "Use mold linker" FORCE)
set(use_gold OFF CACHE BOOL "Use gold linker" FORCE)
set(use_lld OFF CACHE BOOL "Use lld linker" FORCE)
message(
STATUS
" Disabled mold, gold, and lld linkers for GCC with sanitizers"
)
message(STATUS " Disabled mold, gold, and lld linkers for GCC with sanitizers")
# Suppress false positive warnings in GCC with stringop-overflow
list(APPEND SANITIZERS_COMPILE_FLAGS "-Wno-stringop-overflow")
if(is_amd64 AND enable_asan)
if (is_amd64 AND enable_asan)
message(STATUS " Using large code model (-mcmodel=large)")
list(APPEND SANITIZERS_COMPILE_FLAGS "-mcmodel=large")
list(APPEND SANITIZERS_RELOCATION_FLAGS "-mcmodel=large")
elseif(enable_tsan)
elseif (enable_tsan)
# GCC doesn't support atomic_thread_fence with tsan. Suppress warnings.
list(APPEND SANITIZERS_COMPILE_FLAGS "-Wno-tsan")
message(STATUS " Using medium code model (-mcmodel=medium)")
list(APPEND SANITIZERS_COMPILE_FLAGS "-mcmodel=medium")
list(APPEND SANITIZERS_RELOCATION_FLAGS "-mcmodel=medium")
endif()
endif ()
# Join sanitizer flags with commas for -fsanitize option
list(JOIN SANITIZER_TYPES "," SANITIZER_TYPES_STR)
# Add sanitizer to compile and link flags
list(APPEND SANITIZERS_COMPILE_FLAGS "-fsanitize=${SANITIZER_TYPES_STR}")
set(SANITIZERS_LINK_FLAGS
"${SANITIZERS_RELOCATION_FLAGS}"
"-fsanitize=${SANITIZER_TYPES_STR}"
)
elseif(is_clang)
# Add an ignorelist for Clang (GCC doesn't support this). Use CMAKE_SOURCE_DIR to get the path to the ignorelist.
set(IGNORELIST_PATH
"${CMAKE_SOURCE_DIR}/sanitizers/suppressions/sanitizer-ignorelist.txt"
)
if(NOT EXISTS "${IGNORELIST_PATH}")
message(
FATAL_ERROR
"Sanitizer ignorelist not found: ${IGNORELIST_PATH}"
)
endif()
set(SANITIZERS_LINK_FLAGS "${SANITIZERS_RELOCATION_FLAGS}" "-fsanitize=${SANITIZER_TYPES_STR}")
list(
APPEND SANITIZERS_COMPILE_FLAGS
"-fsanitize-ignorelist=${IGNORELIST_PATH}"
)
elseif (is_clang)
# Add an ignorelist for Clang (GCC doesn't support this). Use CMAKE_SOURCE_DIR to get the path to the ignorelist.
set(IGNORELIST_PATH "${CMAKE_SOURCE_DIR}/sanitizers/suppressions/sanitizer-ignorelist.txt")
if (NOT EXISTS "${IGNORELIST_PATH}")
message(FATAL_ERROR "Sanitizer ignorelist not found: ${IGNORELIST_PATH}")
endif ()
list(APPEND SANITIZERS_COMPILE_FLAGS "-fsanitize-ignorelist=${IGNORELIST_PATH}")
message(STATUS " Using sanitizer ignorelist: ${IGNORELIST_PATH}")
# Join sanitizer flags with commas for -fsanitize option
@@ -189,35 +167,31 @@ elseif(is_clang)
# Add sanitizer to compile and link flags
list(APPEND SANITIZERS_COMPILE_FLAGS "-fsanitize=${SANITIZER_TYPES_STR}")
set(SANITIZERS_LINK_FLAGS "-fsanitize=${SANITIZER_TYPES_STR}")
endif()
endif ()
message(STATUS " Compile flags: ${SANITIZERS_COMPILE_FLAGS}")
message(STATUS " Link flags: ${SANITIZERS_LINK_FLAGS}")
# Apply the sanitizer flags to the 'common' interface library. This is the same library used by XrplCompiler.cmake.
target_compile_options(
common
INTERFACE
$<$<COMPILE_LANGUAGE:CXX>:${SANITIZERS_COMPILE_FLAGS}>
$<$<COMPILE_LANGUAGE:C>:${SANITIZERS_COMPILE_FLAGS}>
)
target_compile_options(common INTERFACE $<$<COMPILE_LANGUAGE:CXX>:${SANITIZERS_COMPILE_FLAGS}>
$<$<COMPILE_LANGUAGE:C>:${SANITIZERS_COMPILE_FLAGS}>)
# Apply linker flags
target_link_options(common INTERFACE ${SANITIZERS_LINK_FLAGS})
# Define SANITIZERS macro for BuildInfo.cpp
set(sanitizers_list)
if(enable_asan)
if (enable_asan)
list(APPEND sanitizers_list "ASAN")
endif()
if(enable_tsan)
endif ()
if (enable_tsan)
list(APPEND sanitizers_list "TSAN")
endif()
if(enable_ubsan)
endif ()
if (enable_ubsan)
list(APPEND sanitizers_list "UBSAN")
endif()
endif ()
if(sanitizers_list)
if (sanitizers_list)
list(JOIN sanitizers_list "." sanitizers_str)
target_compile_definitions(common INTERFACE SANITIZERS=${sanitizers_str})
endif()
endif ()


@@ -7,56 +7,38 @@ include(CompilationEnv)
get_property(is_multiconfig GLOBAL PROPERTY GENERATOR_IS_MULTI_CONFIG)
set(CMAKE_CONFIGURATION_TYPES "Debug;Release" CACHE STRING "" FORCE)
if(NOT is_multiconfig)
if(NOT CMAKE_BUILD_TYPE)
if (NOT is_multiconfig)
if (NOT CMAKE_BUILD_TYPE)
message(STATUS "Build type not specified - defaulting to Release")
set(CMAKE_BUILD_TYPE Release CACHE STRING "build type" FORCE)
elseif(
NOT (CMAKE_BUILD_TYPE STREQUAL Debug OR CMAKE_BUILD_TYPE STREQUAL Release)
)
elseif (NOT (CMAKE_BUILD_TYPE STREQUAL Debug OR CMAKE_BUILD_TYPE STREQUAL Release))
# for simplicity, these are the only two config types we care about. Limiting the build types especially simplifies
# dealing with external project builds.
message(
FATAL_ERROR
" *** Only Debug or Release build types are currently supported ***"
)
endif()
endif()
message(FATAL_ERROR " *** Only Debug or Release build types are currently supported ***")
endif ()
endif ()
if(is_clang) # both Clang and AppleClang
if(
"${CMAKE_CXX_COMPILER_ID}" STREQUAL "Clang"
AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS 16.0
)
if (is_clang) # both Clang and AppleClang
if ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "Clang" AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS 16.0)
message(FATAL_ERROR "This project requires clang 16 or later")
endif()
elseif(is_gcc)
if(CMAKE_CXX_COMPILER_VERSION VERSION_LESS 12.0)
endif ()
elseif (is_gcc)
if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 12.0)
message(FATAL_ERROR "This project requires GCC 12 or later")
endif()
endif()
endif ()
endif ()
# check for in-source build and fail
if("${CMAKE_CURRENT_SOURCE_DIR}" STREQUAL "${CMAKE_BINARY_DIR}")
message(
FATAL_ERROR
"Builds (in-source) are not allowed in "
"${CMAKE_CURRENT_SOURCE_DIR}. Please remove CMakeCache.txt and the CMakeFiles "
"directory from ${CMAKE_CURRENT_SOURCE_DIR} and try building in a separate directory."
)
endif()
if ("${CMAKE_CURRENT_SOURCE_DIR}" STREQUAL "${CMAKE_BINARY_DIR}")
message(FATAL_ERROR "Builds (in-source) are not allowed in "
"${CMAKE_CURRENT_SOURCE_DIR}. Please remove CMakeCache.txt and the CMakeFiles "
"directory from ${CMAKE_CURRENT_SOURCE_DIR} and try building in a separate directory.")
endif ()
if(MSVC AND CMAKE_GENERATOR_PLATFORM STREQUAL "Win32")
if (MSVC AND CMAKE_GENERATOR_PLATFORM STREQUAL "Win32")
message(FATAL_ERROR "Visual Studio 32-bit build is not supported.")
endif()
endif ()
if(voidstar AND NOT is_amd64)
message(
FATAL_ERROR
"The voidstar library only supported on amd64/x86_64. Detected archictecture was: ${CMAKE_SYSTEM_PROCESSOR}"
)
endif()
if(APPLE AND NOT HOMEBREW)
if (APPLE AND NOT HOMEBREW)
find_program(HOMEBREW brew)
endif()
endif ()


@@ -5,67 +5,59 @@
include(CompilationEnv)
set(is_ci FALSE)
if(DEFINED ENV{CI})
if("$ENV{CI}" STREQUAL "true")
if (DEFINED ENV{CI})
if ("$ENV{CI}" STREQUAL "true")
set(is_ci TRUE)
endif()
endif()
endif ()
endif ()
get_directory_property(has_parent PARENT_DIRECTORY)
if(has_parent)
if (has_parent)
set(is_root_project OFF)
else()
else ()
set(is_root_project ON)
endif()
endif ()
option(assert "Enables asserts, even in release builds" OFF)
option(xrpld "Build xrpld" ON)
option(tests "Build tests" ON)
if(tests)
if (tests)
# This setting allows a separate workflow to test fees other than the default of 10.
if(NOT UNIT_TEST_REFERENCE_FEE)
if (NOT UNIT_TEST_REFERENCE_FEE)
set(UNIT_TEST_REFERENCE_FEE "10" CACHE STRING "")
endif()
endif()
endif ()
endif ()
option(unity "Creates a build using UNITY support in cmake." OFF)
if(unity)
if(NOT is_ci)
if (unity)
if (NOT is_ci)
set(CMAKE_UNITY_BUILD_BATCH_SIZE 15 CACHE STRING "")
endif()
endif ()
set(CMAKE_UNITY_BUILD ON CACHE BOOL "Do a unity build")
endif()
endif ()
if(is_clang AND is_linux)
if (is_clang AND is_linux)
option(voidstar "Enable Antithesis instrumentation." OFF)
endif()
endif ()
if(is_gcc OR is_clang)
if (is_gcc OR is_clang)
include(ProcessorCount)
ProcessorCount(PROCESSOR_COUNT)
option(coverage "Generates coverage info." OFF)
option(profile "Add profiling flags" OFF)
set(coverage_format
"html-details"
CACHE STRING
"Output format of the coverage report."
)
set(coverage_extra_args
""
CACHE STRING
"Additional arguments to pass to gcovr."
)
set(coverage_format "html-details" CACHE STRING "Output format of the coverage report.")
set(coverage_extra_args "" CACHE STRING "Additional arguments to pass to gcovr.")
option(wextra "compile with extra gcc/clang warnings enabled" ON)
else()
else ()
set(profile OFF CACHE BOOL "gcc/clang only" FORCE)
set(coverage OFF CACHE BOOL "gcc/clang only" FORCE)
set(wextra OFF CACHE BOOL "gcc/clang only" FORCE)
endif()
endif ()
if(is_linux AND NOT SANITIZER)
if (is_linux AND NOT SANITIZER)
option(BUILD_SHARED_LIBS "build shared xrpl libraries" OFF)
option(static "link protobuf, openssl, libc++, and boost statically" ON)
option(perf "Enables flags that assist with perf recording" OFF)
@@ -73,83 +65,50 @@ if(is_linux AND NOT SANITIZER)
option(use_mold "enables detection of mold (binutils) linker" ON)
# Set a default value for the log flag based on the build type. This provides a sensible default (on for debug, off
# for release) while still allowing the user to override it for any build.
if(CMAKE_BUILD_TYPE STREQUAL "Debug")
if (CMAKE_BUILD_TYPE STREQUAL "Debug")
set(TRUNCATED_LOGS_DEFAULT ON)
else()
else ()
set(TRUNCATED_LOGS_DEFAULT OFF)
endif()
option(
TRUNCATED_THREAD_NAME_LOGS
"Show warnings about truncated thread names on Linux."
${TRUNCATED_LOGS_DEFAULT}
)
if(TRUNCATED_THREAD_NAME_LOGS)
endif ()
option(TRUNCATED_THREAD_NAME_LOGS "Show warnings about truncated thread names on Linux." ${TRUNCATED_LOGS_DEFAULT})
if (TRUNCATED_THREAD_NAME_LOGS)
add_compile_definitions(TRUNCATED_THREAD_NAME_LOGS)
endif()
else()
endif ()
else ()
# we are not ready to allow shared-libs on windows because it would require export declarations. On macos it's more
# feasible, but static openssl produces odd linker errors, thus we disable shared lib builds for now.
set(BUILD_SHARED_LIBS
OFF
CACHE BOOL
"build shared xrpl libraries - OFF for win/macos"
FORCE
)
set(BUILD_SHARED_LIBS OFF CACHE BOOL "build shared xrpl libraries - OFF for win/macos" FORCE)
set(static ON CACHE BOOL "static link, linux only. ON for WIN/macos" FORCE)
set(perf OFF CACHE BOOL "perf flags, linux only" FORCE)
set(use_gold OFF CACHE BOOL "gold linker, linux only" FORCE)
set(use_mold OFF CACHE BOOL "mold linker, linux only" FORCE)
endif()
endif ()
if(is_clang)
if (is_clang)
option(use_lld "enables detection of lld linker" ON)
else()
else ()
set(use_lld OFF CACHE BOOL "try lld linker, clang only" FORCE)
endif()
endif ()
option(jemalloc "Enables jemalloc for heap profiling" OFF)
option(werr "treat warnings as errors" OFF)
option(
local_protobuf
"Force a local build of protobuf instead of looking for an installed version."
OFF
)
option(
local_grpc
"Force a local build of gRPC instead of looking for an installed version."
OFF
)
option(local_protobuf "Force a local build of protobuf instead of looking for an installed version." OFF)
option(local_grpc "Force a local build of gRPC instead of looking for an installed version." OFF)
# the remaining options are obscure and rarely used
option(
beast_no_unit_test_inline
"Prevents unit test definitions from being inserted into global table"
OFF
)
option(
single_io_service_thread
"Restricts the number of threads calling io_context::run to one. \
This can be useful when debugging."
OFF
)
option(
boost_show_deprecated
"Allow boost to fail on deprecated usage. Only useful if you're trying\
to find deprecated calls."
OFF
)
option(beast_no_unit_test_inline "Prevents unit test definitions from being inserted into global table" OFF)
option(single_io_service_thread "Restricts the number of threads calling io_context::run to one. \
This can be useful when debugging." OFF)
option(boost_show_deprecated "Allow boost to fail on deprecated usage. Only useful if you're trying\
to find deprecated calls." OFF)
if(WIN32)
option(
beast_disable_autolink
"Disables autolinking of system libraries on WIN32"
OFF
)
else()
if (WIN32)
option(beast_disable_autolink "Disables autolinking of system libraries on WIN32" OFF)
else ()
set(beast_disable_autolink OFF CACHE BOOL "WIN32 only" FORCE)
endif()
endif ()
if(coverage)
if (coverage)
message(STATUS "coverage build requested - forcing Debug build")
set(CMAKE_BUILD_TYPE Debug CACHE STRING "build type" FORCE)
endif()
endif ()


@@ -1,26 +1,17 @@
option(
validator_keys
"Enables building of validator-keys tool as a separate target (imported via FetchContent)"
OFF
)
option(validator_keys "Enables building of validator-keys tool as a separate target (imported via FetchContent)" OFF)
if(validator_keys)
if (validator_keys)
git_branch(current_branch)
# default to tracking VK master branch unless we are on release
if(NOT (current_branch STREQUAL "release"))
if (NOT (current_branch STREQUAL "release"))
set(current_branch "master")
endif()
endif ()
message(STATUS "Tracking ValidatorKeys branch: ${current_branch}")
FetchContent_Declare(
validator_keys
GIT_REPOSITORY https://github.com/ripple/validator-keys-tool.git
GIT_TAG "${current_branch}"
)
FetchContent_Declare(validator_keys GIT_REPOSITORY https://github.com/ripple/validator-keys-tool.git
GIT_TAG "${current_branch}")
FetchContent_MakeAvailable(validator_keys)
set_target_properties(
validator-keys
PROPERTIES RUNTIME_OUTPUT_DIRECTORY "${CMAKE_BINARY_DIR}"
)
set_target_properties(validator-keys PROPERTIES RUNTIME_OUTPUT_DIRECTORY "${CMAKE_BINARY_DIR}")
install(TARGETS validator-keys RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR})
endif()
endif ()


@@ -3,13 +3,13 @@
#]===================================================================]
file(STRINGS src/libxrpl/protocol/BuildInfo.cpp BUILD_INFO)
foreach(line_ ${BUILD_INFO})
if(line_ MATCHES "versionString[ ]*=[ ]*\"(.+)\"")
foreach (line_ ${BUILD_INFO})
if (line_ MATCHES "versionString[ ]*=[ ]*\"(.+)\"")
set(xrpld_version ${CMAKE_MATCH_1})
endif()
endforeach()
if(xrpld_version)
endif ()
endforeach ()
if (xrpld_version)
message(STATUS "xrpld version: ${xrpld_version}")
else()
else ()
message(FATAL_ERROR "unable to determine xrpld version")
endif()
endif ()


@@ -12,29 +12,14 @@ include(isolate_headers)
# add_module(parent a)
# add_module(parent b)
# target_link_libraries(project.libparent.b PUBLIC project.libparent.a)
function(add_module parent name)
function (add_module parent name)
set(target ${PROJECT_NAME}.lib${parent}.${name})
add_library(${target} OBJECT)
file(
GLOB_RECURSE sources
CONFIGURE_DEPENDS
"${CMAKE_CURRENT_SOURCE_DIR}/src/lib${parent}/${name}/*.cpp"
)
file(GLOB_RECURSE sources CONFIGURE_DEPENDS "${CMAKE_CURRENT_SOURCE_DIR}/src/lib${parent}/${name}/*.cpp")
target_sources(${target} PRIVATE ${sources})
target_include_directories(
${target}
PUBLIC "$<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}>"
)
isolate_headers(
${target}
"${CMAKE_CURRENT_SOURCE_DIR}/include"
"${CMAKE_CURRENT_SOURCE_DIR}/include/${parent}/${name}"
PUBLIC
)
isolate_headers(
${target}
"${CMAKE_CURRENT_SOURCE_DIR}/src"
"${CMAKE_CURRENT_SOURCE_DIR}/src/lib${parent}/${name}"
PRIVATE
)
endfunction()
target_include_directories(${target} PUBLIC "$<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}>")
isolate_headers(${target} "${CMAKE_CURRENT_SOURCE_DIR}/include"
"${CMAKE_CURRENT_SOURCE_DIR}/include/${parent}/${name}" PUBLIC)
isolate_headers(${target} "${CMAKE_CURRENT_SOURCE_DIR}/src" "${CMAKE_CURRENT_SOURCE_DIR}/src/lib${parent}/${name}"
PRIVATE)
endfunction ()


@@ -1,21 +1,19 @@
# file(CREATE_SYMLINK) only works on Windows with administrator privileges. https://stackoverflow.com/a/61244115/618906
function(create_symbolic_link target link)
if(WIN32)
if(NOT IS_SYMLINK "${link}")
if(NOT IS_ABSOLUTE "${target}")
function (create_symbolic_link target link)
if (WIN32)
if (NOT IS_SYMLINK "${link}")
if (NOT IS_ABSOLUTE "${target}")
# Relative links do not work on Windows.
set(target "${link}/../${target}")
endif()
endif ()
file(TO_NATIVE_PATH "${target}" target)
file(TO_NATIVE_PATH "${link}" link)
execute_process(
COMMAND cmd.exe /c mklink /J "${link}" "${target}"
)
endif()
else()
execute_process(COMMAND cmd.exe /c mklink /J "${link}" "${target}")
endif ()
else ()
file(CREATE_LINK "${target}" "${link}" SYMBOLIC)
endif()
if(NOT IS_SYMLINK "${link}")
endif ()
if (NOT IS_SYMLINK "${link}")
message(ERROR "failed to create symlink: <${link}>")
endif()
endfunction()
endif ()
endfunction ()


@@ -1,63 +1,44 @@
include(CompilationEnv)
include(XrplSanitizers)
find_package(
Boost
REQUIRED
COMPONENTS
chrono
container
context
date_time
filesystem
json
program_options
regex
system
thread
)
find_package(Boost REQUIRED
COMPONENTS chrono
container
context
date_time
filesystem
json
program_options
regex
system
thread)
add_library(xrpl_boost INTERFACE)
add_library(Xrpl::boost ALIAS xrpl_boost)
target_link_libraries(
xrpl_boost
INTERFACE
Boost::headers
Boost::chrono
Boost::container
Boost::context
Boost::date_time
Boost::filesystem
Boost::json
Boost::process
Boost::program_options
Boost::regex
Boost::thread
)
if(Boost_COMPILER)
INTERFACE Boost::headers
Boost::chrono
Boost::container
Boost::context
Boost::date_time
Boost::filesystem
Boost::json
Boost::process
Boost::program_options
Boost::regex
Boost::thread)
if (Boost_COMPILER)
target_link_libraries(xrpl_boost INTERFACE Boost::disable_autolinking)
endif()
# GCC 14+ has a false positive -Wuninitialized warning in Boost.Coroutine2's
# state.hpp when compiled with -O3. This is due to GCC's intentional behavior
# change (Bug #98871, #119388) where warnings from inlined system header code
# are no longer suppressed by -isystem. The warning occurs in operator|= in
# boost/coroutine2/detail/state.hpp when inlined from push_control_block::destroy().
# See: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=119388
if(is_gcc AND CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL 14)
target_compile_options(xrpl_boost INTERFACE -Wno-uninitialized)
endif()
# Boost.Context's ucontext backend has ASAN fiber-switching annotations
# (start/finish_switch_fiber) that are compiled in when BOOST_USE_ASAN is defined.
# This tells ASAN about coroutine stack switches, preventing false positive
# stack-use-after-scope errors. BOOST_USE_UCONTEXT ensures the ucontext backend
# is selected (fcontext does not support ASAN annotations).
# These defines must match what Boost was compiled with (see conan/profiles/sanitizers).
if(enable_asan)
target_compile_definitions(
xrpl_boost
INTERFACE BOOST_USE_ASAN BOOST_USE_UCONTEXT
)
endif()
endif ()
if (SANITIZERS_ENABLED AND is_clang)
# TODO: gcc does not support -fsanitize-blacklist...can we do something else for gcc ?
if (NOT Boost_INCLUDE_DIRS AND TARGET Boost::headers)
get_target_property(Boost_INCLUDE_DIRS Boost::headers INTERFACE_INCLUDE_DIRECTORIES)
endif ()
message(STATUS "Adding [${Boost_INCLUDE_DIRS}] to sanitizer blacklist")
file(WRITE ${CMAKE_CURRENT_BINARY_DIR}/san_bl.txt "src:${Boost_INCLUDE_DIRS}/*")
target_compile_options(opts INTERFACE # ignore boost headers for sanitizing
-fsanitize-blacklist=${CMAKE_CURRENT_BINARY_DIR}/san_bl.txt)
endif ()


@@ -37,7 +37,7 @@ include(create_symbolic_link)
# `${CMAKE_CURRENT_BINARY_DIR}/include/${target}`.
#
# isolate_headers(target A B scope)
function(isolate_headers target A B scope)
function (isolate_headers target A B scope)
file(RELATIVE_PATH C "${A}" "${B}")
set(X "${CMAKE_CURRENT_BINARY_DIR}/modules/${target}")
set(Y "${X}/${C}")
@@ -45,4 +45,4 @@ function(isolate_headers target A B scope)
file(MAKE_DIRECTORY "${parent}")
create_symbolic_link("${B}" "${Y}")
target_include_directories(${target} ${scope} "$<BUILD_INTERFACE:${X}>")
endfunction()
endfunction ()


@@ -1,211 +0,0 @@
#!/usr/bin/env python3
"""
Generate C++ wrapper classes for XRP Ledger entry types from ledger_entries.macro.
This script parses the ledger_entries.macro file and generates type-safe wrapper
classes for each ledger entry type, similar to the transaction wrapper classes.
Uses pcpp to preprocess the macro file and pyparsing to parse the DSL.
"""
import io
import argparse
from pathlib import Path
import pyparsing as pp
# Import common utilities
from macro_parser_common import (
CppCleaner,
parse_sfields_macro,
parse_field_list,
generate_cpp_class,
generate_from_template,
clear_output_directory,
)
def create_ledger_entry_parser():
"""Create a pyparsing parser for LEDGER_ENTRY macros.
This parser extracts the full LEDGER_ENTRY macro call and parses its arguments
using pyparsing's nesting-aware delimited list parsing.
"""
# Match the exact words
ledger_entry = pp.Keyword("LEDGER_ENTRY") | pp.Keyword("LEDGER_ENTRY_DUPLICATE")
# Define nested structures so pyparsing protects them
nested_braces = pp.original_text_for(pp.nested_expr("{", "}"))
nested_parens = pp.original_text_for(pp.nested_expr("(", ")"))
# Define standard text (anything that isn't a comma, parens, or braces)
plain_text = pp.Word(pp.printables + " \t\n", exclude_chars=",{}()")
# A single argument is any combination of the above
single_arg = pp.Combine(pp.OneOrMore(nested_braces | nested_parens | plain_text))
single_arg.set_parse_action(lambda t: t[0].strip())
# The arguments are a delimited list
args_list = pp.DelimitedList(single_arg)
# The full macro: LEDGER_ENTRY(args) or LEDGER_ENTRY_DUPLICATE(args)
macro_parser = (
ledger_entry + pp.Suppress("(") + pp.Group(args_list)("args") + pp.Suppress(")")
)
return macro_parser
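# --- Editor's sketch (not part of the original script): a minimal example of
# driving the parser above. The sample macro text is hypothetical and only
# illustrates the expected shape; assumes pyparsing >= 3.1 (pp.DelimitedList).
def _example_scan_ledger_entry():
    sample = (
        "LEDGER_ENTRY(ltACCOUNT_ROOT, 0x0061, AccountRoot, account, ({"
        " {sfAccount, soeREQUIRED}, {sfBalance, soeOPTIONAL} }))"
    )
    parser = create_ledger_entry_parser()
    for tokens, _, _ in parser.scan_string(sample):
        # tokens.args holds the raw argument strings, e.g.
        # ['ltACCOUNT_ROOT', '0x0061', 'AccountRoot', 'account', '({ ... })']
        print(list(tokens.args))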
def parse_ledger_entry_args(args_list):
"""Parse the arguments of a LEDGER_ENTRY macro call.
Args:
args_list: A list of parsed arguments from pyparsing, e.g.,
['ltACCOUNT_ROOT', '0x0061', 'AccountRoot', 'account', '({...})']
Returns:
A dict with parsed ledger entry information.
"""
if len(args_list) < 5:
raise ValueError(
f"Expected at least 5 parts in LEDGER_ENTRY, got {len(args_list)}: {args_list}"
)
tag = args_list[0]
value = args_list[1]
name = args_list[2]
rpc_name = args_list[3]
fields_str = args_list[-1]
# Parse fields: ({field1, field2, ...})
fields = parse_field_list(fields_str)
return {
"tag": tag,
"value": value,
"name": name,
"rpc_name": rpc_name,
"fields": fields,
}
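# --- Editor's sketch (not part of the original script): the dict produced for
# a hypothetical argument list, matching the docstring above.
def _example_parse_ledger_entry_args():
    args = [
        "ltACCOUNT_ROOT", "0x0061", "AccountRoot", "account",
        "({ {sfAccount, soeREQUIRED}, {sfBalance, soeOPTIONAL} })",
    ]
    entry = parse_ledger_entry_args(args)
    # entry["fields"] -> [{'name': 'sfAccount', 'requirement': 'soeREQUIRED',
    #                      'flags': [], 'supports_mpt': False}, ...]
    print(entry["name"], entry["rpc_name"], len(entry["fields"]))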
def parse_macro_file(file_path):
"""Parse the ledger_entries.macro file and return a list of ledger entry definitions.
Uses pcpp to preprocess the file and pyparsing to parse the LEDGER_ENTRY macros.
"""
with open(file_path, "r") as f:
c_code = f.read()
# Step 1: Clean the C++ code using pcpp
cleaner = CppCleaner("LEDGER_ENTRY_INCLUDE", "LEDGER_ENTRY")
cleaner.parse(c_code)
out = io.StringIO()
cleaner.write(out)
clean_text = out.getvalue()
# Step 2: Parse the clean text using pyparsing
parser = create_ledger_entry_parser()
entries = []
for match, _, _ in parser.scan_string(clean_text):
# Extract the macro name and arguments
raw_args = match.args
# Parse the arguments
entry_data = parse_ledger_entry_args(raw_args)
entries.append(entry_data)
return entries
def main():
parser = argparse.ArgumentParser(
description="Generate C++ ledger entry classes from ledger_entries.macro"
)
parser.add_argument("macro_path", help="Path to ledger_entries.macro")
parser.add_argument(
"--header-dir",
help="Output directory for header files",
default="include/xrpl/protocol_autogen/ledger_entries",
)
parser.add_argument(
"--test-dir",
help="Output directory for test files (optional)",
default=None,
)
parser.add_argument(
"--sfields-macro",
help="Path to sfields.macro (default: auto-detect from macro_path)",
)
parser.add_argument("--venv-dir", help=argparse.SUPPRESS)
args = parser.parse_args()
# Parse the macro file to get ledger entry names
entries = parse_macro_file(args.macro_path)
# Auto-detect sfields.macro path if not provided
if args.sfields_macro:
sfields_path = Path(args.sfields_macro)
else:
# Assume sfields.macro is in the same directory as ledger_entries.macro
macro_path = Path(args.macro_path)
sfields_path = macro_path.parent / "sfields.macro"
# Parse sfields.macro to get field type information
print(f"Parsing {sfields_path}...")
field_types = parse_sfields_macro(sfields_path)
print(
f"Found {len(field_types)} field definitions ({sum(1 for f in field_types.values() if f['typed'])} typed, {sum(1 for f in field_types.values() if not f['typed'])} untyped)\n"
)
print(f"Found {len(entries)} ledger entries\n")
for entry in entries:
print(f"Ledger Entry: {entry['name']}")
print(f" Tag: {entry['tag']}")
print(f" Value: {entry['value']}")
print(f" RPC Name: {entry['rpc_name']}")
print(f" Fields: {len(entry['fields'])}")
for field in entry["fields"]:
mpt_info = f" ({field['mpt_support']})" if "mpt_support" in field else ""
print(f" - {field['name']}: {field['requirement']}{mpt_info}")
print()
# Set up template directory
script_dir = Path(__file__).parent
template_dir = script_dir / "templates"
# Generate C++ classes
header_dir = Path(args.header_dir)
header_dir.mkdir(parents=True, exist_ok=True)
# Clear existing generated files before regenerating
clear_output_directory(header_dir)
for entry in entries:
generate_cpp_class(
entry, header_dir, template_dir, field_types, "LedgerEntry.h.mako"
)
print(f"\nGenerated {len(entries)} ledger entry classes")
# Generate unit tests if --test-dir is provided
if args.test_dir:
test_dir = Path(args.test_dir)
test_dir.mkdir(parents=True, exist_ok=True)
# Clear existing generated test files before regenerating
clear_output_directory(test_dir)
for entry in entries:
# Fields are already enriched from generate_cpp_class above
generate_from_template(
entry, test_dir, template_dir, "LedgerEntryTests.cpp.mako", "Tests.cpp"
)
print(f"\nGenerated {len(entries)} ledger entry test files")
if __name__ == "__main__":
main()


@@ -1,231 +0,0 @@
#!/usr/bin/env python3
"""
Parse transactions.macro file to extract transaction information
and generate C++ classes for each transaction type.
Uses pcpp to preprocess the macro file and pyparsing to parse the DSL.
"""
import io
import argparse
from pathlib import Path
import pyparsing as pp
# Import common utilities
from macro_parser_common import (
CppCleaner,
parse_sfields_macro,
parse_field_list,
generate_cpp_class,
generate_from_template,
clear_output_directory,
)
def create_transaction_parser():
"""Create a pyparsing parser for TRANSACTION macros.
This parser extracts the full TRANSACTION macro call and parses its arguments
using pyparsing's nesting-aware delimited list parsing.
"""
# Define nested structures so pyparsing protects them
nested_braces = pp.original_text_for(pp.nested_expr("{", "}"))
nested_parens = pp.original_text_for(pp.nested_expr("(", ")"))
# Define standard text (anything that isn't a comma, parens, or braces)
plain_text = pp.Word(pp.printables + " \t\n", exclude_chars=",{}()")
# A single argument is any combination of the above
single_arg = pp.Combine(pp.OneOrMore(nested_braces | nested_parens | plain_text))
single_arg.set_parse_action(lambda t: t[0].strip())
# The arguments are a delimited list
args_list = pp.DelimitedList(single_arg)
# The full macro: TRANSACTION(args)
macro_parser = (
pp.Keyword("TRANSACTION")
+ pp.Suppress("(")
+ pp.Group(args_list)("args")
+ pp.Suppress(")")
)
return macro_parser
def parse_transaction_args(args_list):
"""Parse the arguments of a TRANSACTION macro call.
Args:
args_list: A list of parsed arguments from pyparsing, e.g.,
['ttPAYMENT', '0', 'Payment', 'Delegation::delegable',
'uint256{}', 'createAcct', '({...})']
Returns:
A dict with parsed transaction information.
"""
if len(args_list) < 7:
raise ValueError(
f"Expected at least 7 parts in TRANSACTION, got {len(args_list)}: {args_list}"
)
tag = args_list[0]
value = args_list[1]
name = args_list[2]
delegable = args_list[3]
amendments = args_list[4]
privileges = args_list[5]
fields_str = args_list[-1]
# Parse fields: ({field1, field2, ...})
fields = parse_field_list(fields_str)
return {
"tag": tag,
"value": value,
"name": name,
"delegable": delegable,
"amendments": amendments,
"privileges": privileges,
"fields": fields,
}
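# --- Editor's sketch (not part of the original script): the dict produced for
# a hypothetical argument list, matching the docstring above.
def _example_parse_transaction_args():
    args = [
        "ttPAYMENT", "0", "Payment", "Delegation::delegable",
        "uint256{}", "createAcct",
        "({ {sfDestination, soeREQUIRED}, {sfAmount, soeREQUIRED, soeMPTSupported} })",
    ]
    tx = parse_transaction_args(args)
    # tx["fields"][1] -> {'name': 'sfAmount', 'requirement': 'soeREQUIRED',
    #                     'flags': ['soeMPTSupported'], 'supports_mpt': True}
    print(tx["name"], [f["name"] for f in tx["fields"]])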
def parse_macro_file(filepath):
"""Parse the transactions.macro file.
Uses pcpp to preprocess the file and pyparsing to parse the TRANSACTION macros.
"""
with open(filepath, "r") as f:
c_code = f.read()
# Step 1: Clean the C++ code using pcpp
cleaner = CppCleaner("TRANSACTION_INCLUDE", "TRANSACTION")
cleaner.parse(c_code)
out = io.StringIO()
cleaner.write(out)
clean_text = out.getvalue()
# Step 2: Parse the clean text using pyparsing
parser = create_transaction_parser()
transactions = []
for match, _, _ in parser.scan_string(clean_text):
# Extract the macro name and arguments
raw_args = match.args
# Parse the arguments
tx_data = parse_transaction_args(raw_args)
transactions.append(tx_data)
return transactions
# TransactionBase is a static file in the repository at:
# - include/xrpl/protocol/TransactionBase.h
# - src/libxrpl/protocol/TransactionBase.cpp
# It is NOT generated by this script.
def main():
parser = argparse.ArgumentParser(
description="Generate C++ transaction classes from transactions.macro"
)
parser.add_argument("macro_path", help="Path to transactions.macro")
parser.add_argument(
"--header-dir",
help="Output directory for header files",
default="include/xrpl/protocol_autogen/transactions",
)
parser.add_argument(
"--test-dir",
help="Output directory for test files (optional)",
default=None,
)
parser.add_argument(
"--sfields-macro",
help="Path to sfields.macro (default: auto-detect from macro_path)",
)
parser.add_argument("--venv-dir", help=argparse.SUPPRESS)
args = parser.parse_args()
# Parse the macro file to get transaction names
transactions = parse_macro_file(args.macro_path)
# Auto-detect sfields.macro path if not provided
if args.sfields_macro:
sfields_path = Path(args.sfields_macro)
else:
# Assume sfields.macro is in the same directory as transactions.macro
macro_path = Path(args.macro_path)
sfields_path = macro_path.parent / "sfields.macro"
# Parse sfields.macro to get field type information
print(f"Parsing {sfields_path}...")
field_types = parse_sfields_macro(sfields_path)
print(
f"Found {len(field_types)} field definitions ({sum(1 for f in field_types.values() if f['typed'])} typed, {sum(1 for f in field_types.values() if not f['typed'])} untyped)\n"
)
print(f"Found {len(transactions)} transactions\n")
for tx in transactions:
print(f"Transaction: {tx['name']}")
print(f" Tag: {tx['tag']}")
print(f" Value: {tx['value']}")
print(f" Fields: {len(tx['fields'])}")
for field in tx["fields"]:
print(f" - {field['name']}: {field['requirement']}")
print()
# Set up output directory
header_dir = Path(args.header_dir)
header_dir.mkdir(parents=True, exist_ok=True)
# Clear existing generated files before regenerating
clear_output_directory(header_dir)
print(f"\nGenerating header-only template classes...")
print(f" Headers: {header_dir}\n")
# Set up template directory
script_dir = Path(__file__).parent
template_dir = script_dir / "templates"
generated_files = []
for tx_info in transactions:
header_path = generate_cpp_class(
tx_info, header_dir, template_dir, field_types, "Transaction.h.mako"
)
generated_files.append(header_path)
print(f" Generated: {tx_info['name']}.h")
print(
f"\nGenerated {len(transactions)} transaction classes ({len(generated_files)} header files)"
)
print(f" Headers: {header_dir.absolute()}")
# Generate unit tests if --test-dir is provided
if args.test_dir:
test_dir = Path(args.test_dir)
test_dir.mkdir(parents=True, exist_ok=True)
# Clear existing generated test files before regenerating
clear_output_directory(test_dir)
for tx_info in transactions:
# Fields are already enriched from generate_cpp_class above
generate_from_template(
tx_info,
test_dir,
template_dir,
"TransactionTests.cpp.mako",
"Tests.cpp",
)
print(f"\nGenerated {len(transactions)} transaction test files")
if __name__ == "__main__":
main()


@@ -1,297 +0,0 @@
#!/usr/bin/env python3
"""
Common utilities for parsing XRP Ledger macro files.
This module provides shared functionality for parsing transactions.macro
and ledger_entries.macro files using pcpp and pyparsing.
"""
import re
import shutil
from pathlib import Path
import pyparsing as pp
from pcpp import Preprocessor
def clear_output_directory(directory):
"""Clear all generated files from an output directory.
Removes all .h and .cpp files from the directory, but preserves
the directory itself and any subdirectories.
Args:
directory: Path to the directory to clear
"""
dir_path = Path(directory)
if not dir_path.exists():
return
# Remove generated files (headers and source files)
for pattern in ["*.h", "*.cpp"]:
for file_path in dir_path.glob(pattern):
file_path.unlink()
print(f"Cleared output directory: {dir_path}")
class CppCleaner(Preprocessor):
"""C preprocessor that removes C++ noise while preserving macro calls."""
def __init__(self, macro_include_name, macro_name):
"""
Initialize the preprocessor.
Args:
macro_include_name: The name of the include flag to set to 0
(e.g., "TRANSACTION_INCLUDE" or "LEDGER_ENTRY_INCLUDE")
macro_name: The name of the macro to define so #if !defined() checks pass
(e.g., "TRANSACTION" or "LEDGER_ENTRY")
"""
super(CppCleaner, self).__init__()
# Define flags so #if blocks evaluate correctly
# We set the include flag to 0 so includes are skipped
self.define(f"{macro_include_name} 0")
# Define the macro so #if !defined(MACRO) / #error checks pass
# We define it to expand to itself so the macro calls remain in the output
# for pyparsing to find and parse
self.define(f"{macro_name}(...) {macro_name}(__VA_ARGS__)")
# Suppress line directives
self.line_directive = None
def on_error(self, file, line, msg):
# Ignore #error directives
pass
def on_include_not_found(
self, is_malformed, is_system_include, curdir, includepath
):
# Ignore missing headers
pass
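# --- Editor's sketch (not part of the original module): how CppCleaner is
# driven. The two-directive source below is hypothetical; the self-expanding
# TRANSACTION define keeps the macro call in the output for pyparsing.
def _example_clean_macro_source():
    import io

    src = (
        "#if !defined(TRANSACTION)\n"
        "#error TRANSACTION must be defined\n"
        "#endif\n"
        "TRANSACTION(ttPAYMENT, 0, Payment, Delegation::delegable,\n"
        "    uint256{}, createAcct, ({ {sfDestination, soeREQUIRED} }))\n"
    )
    cleaner = CppCleaner("TRANSACTION_INCLUDE", "TRANSACTION")
    cleaner.parse(src)
    out = io.StringIO()
    cleaner.write(out)
    # Directives are resolved and dropped; the TRANSACTION(...) call survives.
    print(out.getvalue())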
def parse_sfields_macro(sfields_path):
"""
Parse sfields.macro to determine which fields are typed vs untyped.
Returns a dict mapping field names to their type information:
{
'sfMemos': {'typed': False, 'stiSuffix': 'ARRAY', 'typeData': {...}},
'sfAmount': {'typed': True, 'stiSuffix': 'AMOUNT', 'typeData': {...}},
...
}
"""
# Mapping from STI suffix to C++ type for untyped fields
UNTYPED_TYPE_MAP = {
"ARRAY": {
"getter_method": "getFieldArray",
"setter_method": "setFieldArray",
"setter_use_brackets": False,
"setter_type": "STArray const&",
"return_type": "STArray const&",
"return_type_optional": "std::optional<std::reference_wrapper<STArray const>>",
},
"OBJECT": {
"getter_method": "getFieldObject",
"setter_method": "setFieldObject",
"setter_use_brackets": False,
"setter_type": "STObject const&",
"return_type": "STObject",
"return_type_optional": "std::optional<STObject>",
},
"PATHSET": {
"getter_method": "getFieldPathSet",
"setter_method": "setFieldPathSet",
"setter_use_brackets": False,
"setter_type": "STPathSet const&",
"return_type": "STPathSet const&",
"return_type_optional": "std::optional<std::reference_wrapper<STPathSet const>>",
},
}
field_info = {}
with open(sfields_path, "r") as f:
content = f.read()
# Parse TYPED_SFIELD entries
# Format: TYPED_SFIELD(sfName, stiSuffix, fieldValue, ...)
typed_pattern = r"TYPED_SFIELD\s*\(\s*(\w+)\s*,\s*(\w+)\s*,"
for match in re.finditer(typed_pattern, content):
field_name = match.group(1)
sti_suffix = match.group(2)
field_info[field_name] = {
"typed": True,
"stiSuffix": sti_suffix,
"typeData": {
"getter_method": "at",
"setter_method": "",
"setter_use_brackets": True,
"setter_type": f"std::decay_t<typename SF_{sti_suffix}::type::value_type> const&",
"return_type": f"SF_{sti_suffix}::type::value_type",
"return_type_optional": f"protocol_autogen::Optional<SF_{sti_suffix}::type::value_type>",
},
}
# Parse UNTYPED_SFIELD entries
# Format: UNTYPED_SFIELD(sfName, stiSuffix, fieldValue, ...)
untyped_pattern = r"UNTYPED_SFIELD\s*\(\s*(\w+)\s*,\s*(\w+)\s*,"
for match in re.finditer(untyped_pattern, content):
field_name = match.group(1)
sti_suffix = match.group(2)
type_data = UNTYPED_TYPE_MAP.get(
sti_suffix, UNTYPED_TYPE_MAP.get("OBJECT")
) # Default to OBJECT
field_info[field_name] = {
"typed": False,
"stiSuffix": sti_suffix,
"typeData": type_data,
}
return field_info
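# --- Editor's sketch (not part of the original module): exercising the two
# regexes on a hypothetical two-line sfields snippet written to a temp file.
def _example_parse_sfields():
    import os
    import tempfile

    snippet = "TYPED_SFIELD(sfAmount, AMOUNT, 1)\nUNTYPED_SFIELD(sfMemos, ARRAY, 9)\n"
    with tempfile.NamedTemporaryFile("w", suffix=".macro", delete=False) as f:
        f.write(snippet)
        path = f.name
    try:
        info = parse_sfields_macro(path)
        # sfAmount is typed; sfMemos resolves to the untyped ARRAY mapping.
        print(info["sfAmount"]["typed"], info["sfMemos"]["stiSuffix"])
    finally:
        os.unlink(path)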
def create_field_list_parser():
"""Create a pyparsing parser for field lists like '({...})'."""
# A field identifier (e.g., sfDestination, soeREQUIRED, soeMPTSupported)
field_identifier = pp.Word(pp.alphas + "_", pp.alphanums + "_")
# A single field definition: {sfName, soeREQUIRED, ...}
# Allow optional trailing comma inside the braces
field_def = (
pp.Suppress("{")
+ pp.Group(pp.DelimitedList(field_identifier) + pp.Optional(pp.Suppress(",")))(
"parts"
)
+ pp.Suppress("}")
)
# The field list: ({field1, field2, ...}) or ({}) for empty lists
# Allow optional trailing comma after the last field definition
field_list = (
pp.Suppress("(")
+ pp.Suppress("{")
+ pp.Group(
pp.Optional(pp.DelimitedList(field_def) + pp.Optional(pp.Suppress(",")))
)("fields")
+ pp.Suppress("}")
+ pp.Suppress(")")
)
return field_list
def parse_field_list(fields_str):
"""Parse a field list string like '({...})' using pyparsing.
Args:
fields_str: A string like '({
{sfDestination, soeREQUIRED},
{sfAmount, soeREQUIRED, soeMPTSupported}
})'
Returns:
A list of field dicts with 'name', 'requirement', 'flags', and 'supports_mpt'.
"""
parser = create_field_list_parser()
try:
result = parser.parse_string(fields_str, parse_all=True)
fields = []
for field_parts in result.fields:
if len(field_parts) < 2:
continue
field_name = field_parts[0]
requirement = field_parts[1]
flags = list(field_parts[2:]) if len(field_parts) > 2 else []
supports_mpt = "soeMPTSupported" in flags
fields.append(
{
"name": field_name,
"requirement": requirement,
"flags": flags,
"supports_mpt": supports_mpt,
}
)
return fields
except pp.ParseException as e:
raise ValueError(f"Failed to parse field list: {e}")
def enrich_fields_with_type_data(entry_info, field_types):
"""Enrich field information with type data from sfields.macro.
Args:
entry_info: Dict containing entry information (name, fields, etc.)
field_types: Dict mapping field names to type information
Modifies entry_info["fields"] in place.
"""
for field in entry_info["fields"]:
field_name = field["name"]
if field_name in field_types:
field["typed"] = field_types[field_name]["typed"]
field["paramName"] = field_name[2].lower() + field_name[3:]
field["stiSuffix"] = field_types[field_name]["stiSuffix"]
field["typeData"] = field_types[field_name]["typeData"]
else:
# Unknown field - assume typed for safety
field["typed"] = True
field["paramName"] = ""
field["stiSuffix"] = None
field["typeData"] = None
def generate_from_template(
entry_info, output_dir, template_dir, template_name, output_suffix
):
"""Generate a file from a Mako template.
Args:
entry_info: Dict containing entry information (name, fields, etc.)
Fields should already be enriched with type data.
output_dir: Output directory for generated files
template_dir: Directory containing Mako templates
template_name: Name of the Mako template file to use
output_suffix: Suffix for the output file (e.g., ".h" or "Tests.cpp")
Returns:
Path to the generated file
"""
from mako.template import Template
template_path = Path(template_dir) / template_name
template = Template(filename=str(template_path))
# Render the template - pass entry_info directly so templates can access any field
content = template.render(**entry_info)
# Write output file in binary mode to avoid any line ending conversion
output_path = Path(output_dir) / f"{entry_info['name']}{output_suffix}"
with open(output_path, "wb") as f:
f.write(content.encode("utf-8"))
print(f"Generated {output_path}")
return output_path
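# --- Editor's sketch (not part of the original module): the same Mako
# rendering pattern inlined with a hypothetical one-line template string.
def _example_render_inline():
    from mako.template import Template

    # entry_info keys become template variables via template.render(**entry_info).
    template = Template("class ${name} { /* ${len(fields)} fields */ };")
    print(template.render(name="Payment", fields=[]))
    # -> class Payment { /* 0 fields */ };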
def generate_cpp_class(
entry_info, header_dir, template_dir, field_types, template_name
):
"""Generate C++ header file from a Mako template.
Args:
entry_info: Dict containing entry information (name, fields, etc.)
header_dir: Output directory for generated header files
template_dir: Directory containing Mako templates
field_types: Dict mapping field names to type information
template_name: Name of the Mako template file to use
"""
# Enrich field information with type data
enrich_fields_with_type_data(entry_info, field_types)
# Generate the header file
generate_from_template(entry_info, header_dir, template_dir, template_name, ".h")

Some files were not shown because too many files have changed in this diff.